Role 1: Senior Data Quality Engineer – Data Pipelines (AWS). Must be local to NJ; 15 years of experience required.

Swapna Sankurathri

9:52 AM
to c2cpr...@googlegroups.com

Role 1: Senior Data Quality Engineer – Data Pipelines (AWS). Must be local to NJ; 15 years of experience required.

Role 2: Senior Data Engineer. 15+ years; AWS. Must be local to NJ/NY.

Swa...@everestconsulting.net

Direct: 732 609 8666


Senior Data Quality Engineer – Data Pipelines (AWS)

Location: Hybrid – Manhattan. Must be local to NJ; must have 15 years of experience.

Employment Type: Contract

Role Summary

We are seeking a Senior Data Quality Engineer to own test planning and validation of data pipelines within an AWS-based data lakehouse environment.

This role will work closely with developers, DevOps, and data teams to ensure data integrity and reliability across CI/CD pipelines and cloud workflows.

 

The position involves designing and maintaining test automation, independently analyzing logs to troubleshoot failures, and contributing to stable and reliable data processing using AWS services.

Ideal Candidate Profile

 

14+ years overall experience in software quality, testing, and data pipeline validation

5+ years in testing AWS-based data pipelines and cloud workflows

Experience working in Agile environments (Scrum or Kanban)

Familiarity with cloud-based and DevOps-driven development environments

Working knowledge of SQL and relational databases

Experience with software testing tools (e.g., Jira, qTest, ALM, Selenium)

Basic proficiency with Git version control (clone, commit, push, pull, branching)

Basic command-line proficiency (Linux, PowerShell, CMD)

Hands-on experience with AWS services: S3, Glue, Athena, CodeCommit, CodeBuild, CodePipeline

Experience with Terraform/Terrahub in CI/CD pipelines

Log analysis and troubleshooting in build/deployment pipelines

 

Key Responsibilities

Own end-to-end test planning and execution for AWS-based data lakehouse pipelines and integrations

Design, develop, and maintain automated test scripts to validate data pipelines and support CI/CD workflows

Validate data workflows using AWS services including S3, Glue, and Athena

Independently support and troubleshoot CI/CD pipelines using CodeCommit, CodeBuild, and CodePipeline

Review build and deployment logs to identify failures and perform root cause analysis

Work with Terraform/Terrahub components within build pipelines (CodeBuild)

Perform functional, regression, and data validation testing across releases

Track, document, and drive defects through resolution in collaboration with development teams

Define test strategies, scope, and estimates for releases and test cycles

Collaborate with cross-functional teams to support UAT and production readiness

Communicate testing progress, risks, and results to stakeholders

 

Education:

 

Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent experience.

 

Senior Data Engineer – 15+ years, AWS. Must be local to NJ/NY.

Location: Hybrid – Manhattan

Employment Type: Contract

Customer: The Clearing House

Location/Work Mode: Manhattan, NY; 2 days per week in the office. If you have a matching profile who worked on Trianz opportunities earlier but is now not available in NY, discuss with us. 35 billing hours per week.

Job Type: Contract

Duration: 6 months, subject to extension

Compensation: $73 preferred. C2C exceptions: discuss with us.

Interview Process: recruitment prescreening (20-minute intro video call); Trianz technical (30–45 minute video call); customer call, 2 rounds (45–60 minute video calls).

Project Start: as soon as we have a qualified profile.

 

We are seeking a Senior Data Engineer to design, build, and support scalable data pipelines and analytics datasets that power enterprise reporting across Finance, Technology, and Operations.

 

This role focuses on ingesting data from corporate systems, organizing it in a cloud-based data lake, and enabling reliable reporting through Amazon QuickSight.

The ideal candidate is a hands-on engineer comfortable with modern AWS data services, collaborating with business stakeholders, and supporting production reporting workloads. You will help establish practical standards for data ingestion, transformation, and reporting in a growing analytics environment.

Context: You will build and maintain Python-based ETL pipelines within AWS Glue, own transformation logic, manage orchestration and metadata cataloging, and ensure data is securely processed and accessible for downstream analytics and AI tools.

This role requires strong ownership of pipeline reliability, data quality, and performance tuning. Experience with dbt and Snowflake will become increasingly important as the platform evolves.

 

Key Responsibilities


Design, build, and maintain scalable data ingestion frameworks using AWS-native services (Glue, Lambda, S3, Step Functions) and SnapLogic

Architect and manage the enterprise data lake on S3 using Apache Iceberg, including partitioning strategies, schema evolution, metadata optimization, and lifecycle management

Develop robust ETL/ELT pipelines to standardize, cleanse, and enrich source system data for analytics and operational use cases

Build and maintain reporting-ready datasets and queries using Amazon Athena and AWS Glue metadata

Implement and monitor data quality frameworks, including validation rules, reconciliation checks, and anomaly detection

Collaborate with Finance, Technology, and Operations stakeholders to translate business requirements into scalable data solutions

Establish and enforce data governance best practices: documentation, lineage tracking, access controls, and change management

Monitor pipeline health and performance, troubleshoot data issues, and support recurring reporting cycles

Performance Standards

 

Deliver projects accurately and on schedule

Maintain professionalism and accountability

Demonstrate effective teamwork and cross-functional collaboration

Align with Technology & Operations strategic goals

Communicate clearly with internal and external stakeholders

Ideal Candidate Profile

 

14+ years of overall software development experience, with significant exposure to data-centric systems

7+ years of hands-on experience in data engineering, analytics engineering, or related roles

Strong SQL skills with proven experience designing and building analytical datasets

Hands-on experience with AWS data platforms: S3, Glue, Athena, and AWS Unified Data Catalog

Experience integrating data from SaaS and enterprise systems using ETL/ELT tools such as SnapLogic

Experience supporting BI tools like Amazon QuickSight, Tableau, or Power BI

Ability to work independently and collaborate effectively with both technical and non-technical stakeholders


Respectfully & Best Regards,

Swapna S

Everest Consulting Group Inc

3840 Park Ave Suite #203, Edison, NJ 08820

Swa...@everestconsulting.net

Direct: 732 609 8666

732-548-2700, Ext. 8666

Mobile: 215 806 4493

https://www.linkedin.com/in/swapna-sankurathri-1b17191a
