
Data Engineer Job Opening in San Francisco, California – Perfict Global, Inc.



Job description

Job Title : Data Engineer

Location: San Francisco, CA

Hybrid – Remote Monday/Friday; Onsite Tuesday–Thursday at 595 Market St. #200, San Francisco, CA

Duration: Long-Term Contract



Roles & Responsibilities


  • You will build systems, core libraries and frameworks that power our batch and streaming Data and ML applications.

    The services you build will integrate directly with LendingClub's products, opening the door to new features.

  • You will work with modern data technologies such as Hadoop, Spark, DBT, Dagster/Airflow, Atlan, Trino, etc.; modern data platforms such as Databricks and Snowflake; and cloud technologies across the AWS stack

  • Build data pipelines that transform raw data into canonical schemas representing business entities and publish them into the Data Lake

  • Implement internal process improvements: automating manual processes, optimizing data delivery, reducing cloud costs, redesigning infrastructure for greater scalability, etc.

  • Work with stakeholders, including the Business, Product, Program, and Engineering teams, to deliver required data on time, with high quality, and at reasonable cost

  • Implement processes and systems to monitor Data Quality, Observability, Governance and Lineage.

  • Support operations to manage the production environment and help resolve production issues with root-cause analysis (RCA)

  • Write unit/integration tests, adopt Test-driven development, contribute to engineering wiki, and document design/implementation etc.


Required Skills & Qualifications


  • 4+ years of experience and a bachelor's degree in computer science or a related field; or equivalent work experience

  • Working experience with distributed systems such as Hadoop, Spark, Hive, Kafka, DBT, and Airflow/Dagster

  • At least 2 years of production coding experience implementing data pipelines in Python

  • Experience working with public cloud platforms, preferably AWS

  • Experience working with Databricks and/or Snowflake

  • Experience in Git, JIRA, Jenkins, shell scripting

  • Familiarity with Agile methodology, test-driven development, source control management, and test automation

  • Experience supporting and working with cross-functional teams in a dynamic environment

  • You have excellent collaborative problem-solving and communication skills and are empathetic to others

  • You believe in simple and elegant solutions and give paramount importance to quality

  • You have a track record of building fast, reliable, and high-quality data pipelines




 


Required Skill Profession

Architecture And Engineering


