FW: Capgemini - CGEMJP00328679 - Python Developer- Data Warehousing - Charlotte, NC - Onsite


Shiva Krishna

Feb 2, 2026, 12:12:46 PM
to Shiva Krishna

 

 

From: Ayush Tomar <vmst...@webmsi.com> On Behalf Of VMS Team
Sent: Monday, February 2, 2026 12:11 PM
Cc: Anu Anand <a...@webmsi.com>; Sanjay K. Joshi <san...@webmsi.com>
Subject: Capgemini - CGEMJP00328679 - Python Developer- Data Warehousing - Charlotte, NC - Onsite

 

*** NEED TO REMEMBER ***

 

Please do not post Capgemini requirements on the internet or job boards.

 

Hi,

 

Please find the details of the new requirement below.

 

Requisition ID: CGEMJP00328679

Period: 03/09/2026 to 09/30/2026

 

Onboarding Process:

• The selected candidate must be willing to travel to the closest Capgemini/client office location, as indicated by the project team, for a meet and greet with a Capgemini team member prior to starting the assignment.

• If the candidate is not local, Capgemini will cover the travel expenses.

Description:


Role name: Python Developer- Data Warehousing
Work site: Charlotte, NC - onsite
Start date: Immediate availability.

Job Description:
We are seeking a highly experienced and skilled Solutions Architect to lead the design, architecture, and optimization of our enterprise-level data processing and application platforms on AWS. The ideal candidate will possess deep expertise in a range of AWS services and big data technologies, ensuring our solutions are scalable, cost-effective, secure, and align with strategic business objectives.
Key Responsibilities:
Design and document comprehensive, secure, and scalable technical solutions on AWS, leveraging services and frameworks such as PySpark, Flink, Fargate, EMR, EC2, DynamoDB, DocumentDB, and Airflow (MWAA).
API Development: Oversee the design and development of performant and secure Python APIs to expose data and application functionalities to internal and external systems.
6-10 years of experience with AWS services.
6-10 years of strong hands-on working experience; the candidate is expected to remain hands-on in development.
Data Pipeline Development: Lead the architecture of complex batch and real-time data pipelines using big data processing frameworks, including PySpark, ensuring efficient data flow and transformation.
Orchestration and Automation: Implement robust workflow orchestration using Apache Airflow (MWAA) for building, scheduling, and monitoring complex data and application workflows.
Database Management: Design and optimize data storage solutions using relational and NoSQL databases, including Amazon DynamoDB and Amazon DocumentDB, to meet performance and scalability requirements.
Infrastructure as Code (IaC): Drive the automation of infrastructure deployment using IaC tools (e.g., Terraform or CloudFormation) and implement CI/CD pipelines following DevOps best practices.
Technical Leadership & Collaboration: Provide technical guidance and mentorship to data engineering and development teams, ensuring alignment with architectural vision and best practices. Collaborate with stakeholders to translate business requirements into technical specifications.
Performance Optimization & Cost Management: Monitor and optimize the performance, cost-efficiency, and resource utilization of all deployed solutions on AWS.

Thanks & Regards,

Ayush Tomar

Millennium Software Inc.

2000 Town Center, Suite 300, Southfield, MI, 48075

Visit us at www.webmsi.com

 
