Hiring: Data Engineer (12+ Years' Experience) with AWS, Python, Spark, ETL Pipelines, SQL, and Pytest; Expertise in Apache Spark (Must); Terraform, Docker, or Kubernetes (Preferred).


nara...@3mkllc.com
Aug 28, 2025, 5:32:38 PM
to direct-client-requirements3, direct-client-requirements4, st...@3mkllc.com

Hello,
Greetings for the day!
Please review the roles below and advise the best time to connect with you. If you are interested, you can reach me on LinkedIn at www.linkedin.com/in/narayanarao2 and send your resume to nara...@3mkllc.com.

Note: WANT TO GET MY DIRECT CLIENT REQUIREMENTS DAILY? Please click on this link and click on “Ask to join Group”: https://groups.google.com/u/1/g/direct-client-requirements3 (Ctrl+Click on this link)

 

Hiring: Data Engineer (12+ Years' Experience) with AWS, Python, Spark, ETL Pipelines, SQL, and Pytest; Expertise in Apache Spark (Must); Terraform, Docker, or Kubernetes (Preferred).

Location: Fort Mill, SC or Austin, TX

Rate: Open (hourly)

Contract: Long Term.

 

 

We are looking for a skilled Data Engineer to join our team and help build robust, scalable, and efficient data pipelines.

The ideal candidate will have strong expertise in AWS, Python, Spark, ETL pipelines, SQL, and Pytest. This role involves designing, implementing, and optimizing data pipelines to support analytics, business intelligence, and machine learning initiatives.

 

Required Skills & Qualifications:

· Experience in Data Engineering or a related field.
· Strong proficiency in AWS (S3, Glue, Lambda, EMR, Redshift, etc.) for cloud-based data processing.
· Hands-on experience with Python for data processing and automation.
· Expertise in Apache Spark for distributed data processing (see the sketch after this list).
· Solid understanding of ETL pipeline design and data warehousing concepts.
· Proficiency in SQL for querying and managing relational databases.
· Experience writing unit and integration tests using Pytest.
· Familiarity with CI/CD pipelines and version control systems (e.g., Git).
· Strong problem-solving skills and the ability to work in a fast-paced environment.
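
Purely for illustration and not part of the client's requirements: a minimal sketch of the kind of PySpark ETL step this role calls for. The S3 paths, column names, and app name are hypothetical placeholders.

    # Minimal PySpark ETL sketch: read raw CSV from S3, clean it, write Parquet.
    # The S3 paths and column names below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    raw = spark.read.csv("s3://example-raw-bucket/orders/", header=True, inferSchema=True)

    cleaned = (
        raw.dropDuplicates(["order_id"])                       # de-duplicate on the key
           .filter(F.col("amount") > 0)                        # drop invalid rows
           .withColumn("order_date", F.to_date("order_date"))  # normalize types
    )

    # Partitioned Parquet output suits downstream analytics and warehouse-style queries.
    cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-curated-bucket/orders/"
    )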

 

Preferred Qualifications:

· Experience with Terraform, Docker, or Kubernetes.
· Knowledge of big data tools such as Apache Kafka or Airflow.
· Exposure to data governance and security best practices.

 

 

Key Responsibilities:

· Design, develop, and maintain ETL pipelines using AWS services, Python, and Spark.
· Optimize data ingestion, transformation, and storage processes for high-performance data processing.
· Work with structured and unstructured data, ensuring data integrity, quality, and governance.
· Develop SQL queries to extract and manipulate data efficiently from relational databases.
· Implement data validation and testing frameworks using Pytest to ensure data accuracy and reliability (see the sketch after this list).
· Collaborate with data scientists, analysts, and software engineers to build scalable data solutions.
· Monitor and troubleshoot data pipelines to ensure smooth operation and minimal downtime.
· Stay up to date with industry trends, tools, and best practices for data engineering and cloud technologies.
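
Again purely illustrative: a short Pytest sketch of the kind of data validation mentioned above. The clean_orders transformation and its toy schema are assumptions made up for this example.

    # Hypothetical Pytest sketch for validating a Spark transformation.
    # clean_orders and its toy schema are assumptions for illustration only.
    import pytest
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    def clean_orders(df):
        """Transformation under test: drop duplicate ids and non-positive amounts."""
        return df.dropDuplicates(["order_id"]).filter(F.col("amount") > 0)

    @pytest.fixture(scope="session")
    def spark():
        # Local session so the test runs without a cluster.
        return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

    def test_clean_orders_removes_bad_rows(spark):
        df = spark.createDataFrame(
            [(1, 10.0), (1, 10.0), (2, -5.0)],  # duplicate id and a negative amount
            ["order_id", "amount"],
        )
        result = clean_orders(df)
        assert result.count() == 1
        assert result.first()["order_id"] == 1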

 

 

 

Thanks & Regards,

 

Narayana Rao

Sr Manager (Recruitments)

3MK Software Solutions LLC

Email: nara...@3mkllc.com

Website: http://3mkllc.com/

Connect with me on LinkedIn for daily updates and requirements: linkedin.com/in/narayanarao2

Note: WANT TO GET MY DIRECT CLIENT REQUIREMENTS DAILY? Please click on the link below and click on “Ask to join Group”:

https://groups.google.com/u/1/g/direct-client-requirements3

 
