Data Engineer - Los Altos, CA (Hybrid) - USC Only


David Miller

Feb 23, 2026, 10:54:38 AM

Role: Sr. Data Engineer

Location: Los Altos, CA (Hybrid)

Duration: 12+ months

Visa: USC Only

Role Summary

A Data Engineer designs, builds, and maintains reliable and scalable data systems and pipelines that support business analytics, reporting, and machine learning initiatives. This role ensures that large volumes of raw data from various sources are efficiently ingested, transformed, stored, and made available for analysis and downstream applications.


Key Responsibilities

Data Pipeline & ETL Development

  • Design, develop, and maintain robust ETL (Extract, Transform, Load) and data ingestion pipelines.
  • Integrate data from multiple internal and external sources into centralized data platforms (e.g., data warehouses or data lakes).
  • Automate repetitive workflows to ensure efficient and accurate data movement.

Data Architecture & Optimization

  • Build, optimize, and scale data infrastructure to support analytics and business intelligence workloads.
  • Implement data modeling, ensure data quality and integrity, and manage schema designs for structured and unstructured datasets.
  • Monitor and optimize database performance, query execution, and data storage strategies.

Collaboration & Support

  • Work closely with data scientists, analysts, software engineers, and business stakeholders to understand and support data requirements.
  • Troubleshoot and resolve data-related issues, ensuring reliable and timely data flow for analytical and operational use.

Data Quality & Governance

  • Implement practices to ensure data security, privacy, and compliance with relevant laws and industry standards.
  • Maintain documentation of data architectures, pipelines, and processes for knowledge sharing and future reference.

Cloud & Tool Integration

  • Leverage cloud platforms such as AWS, Azure, or GCP to build scalable data solutions (storage, compute, orchestration).
  • Utilize big data frameworks and tools like Hadoop, Spark, Kafka, and workflow orchestration tools (e.g., Apache Airflow).

Required Skills & Qualifications

  • Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, IT, or a related field.
  • Programming: Proficiency in Python, SQL, and/or Scala for data manipulation and pipeline development.
  • Database Expertise: Strong knowledge of SQL and NoSQL databases and data warehousing platforms (Amazon Redshift, Google BigQuery, Snowflake).
  • Big Data Tools: Experience with distributed processing frameworks like Hadoop and Spark, and messaging systems such as Kafka.
  • Cloud Platforms: Hands-on experience with cloud data services and platforms (AWS, Azure, GCP).
  • Data Modeling & Architecture: Ability to design efficient data models and manage complex schema designs.
  • Communication & Collaboration: Strong communication skills to work with cross-functional teams and translate business needs into technical solutions.

Preferred (Nice-to-Have)

  • Experience with container technologies and orchestration (Docker, Kubernetes).
  • Knowledge of workflow automation tools (e.g., Apache Airflow, dbt).
  • Prior experience with analytics platforms or machine learning support infrastructure.
  • Relevant certifications in cloud platforms or big data technologies.

Impact & Value

Data Engineers are critical in enabling data-driven decision-making by turning raw information into reliable and structured datasets that power analytics, reporting, and AI/ML solutions. Their work ensures organizations can scale their data capabilities while maintaining quality, security, and performance.
