Role: Lead Data Engineer with AWS
Location: Lewisville, TX (Onsite)
Duration: Contract
Experience: 10+ years
Job Description:
Mandatory Skills: AWS Fargate, AWS Kinesis, and Confluent Kafka
Responsibilities:
- Build and document automated data pipelines from a wide range of data sources, with an emphasis on automation and scale: handle large streaming data sets (volume/size) and perform initial data pattern identification with AWS Kinesis.
- Develop highly available applications and APIs to support near-real-time integrations using an AWS-based technology stack: design and develop prototype REST APIs for near-real-time {vehicle / other domain} data using Swagger, present the designs to business stakeholders, and convert the prototypes to full-scale development using an AWS technology stack such as AWS EventBridge, AWS SQS, AWS SNS, and Confluent Kafka.
- Ensure product and technical features are delivered to spec and on time in a DevOps fashion: author software design specifications based on product feature requirements and user stories, implemented within the program's agile sprint cadence.
- Contribute to the overall architecture, framework, and design patterns for storing and processing high data volumes: collaborate with the architect on improving large datasets by decomposing unstructured data into structured and semi-structured forms, apply design patterns (e.g., Circuit Breaker, MVC), and design the storage framework with AWS S3, AWS DynamoDB, and AWS RDS.
Skills:
- Bachelor's degree in Computer Science, Informatics, or a related field required
- 7+ years of experience in a data engineering role
- 3+ years of experience with AWS and related services (e.g., EC2, S3, SNS, Lambda, IAM, Snowflake)
- Hands-on experience with ETL tools and techniques (desirable)
- Basic proficiency with a dialect of ANSI SQL, APIs, and Python
- Knowledge of and experience with RDBMS and NoSQL platforms, such as MS SQL Server, MySQL, and Postgres