Role: Sr Data Engineer – Kafka
Location: Durham, NC
Must-Have
· Cloud platform experience
· Python
· Kafka
· Public Cloud automation using Azure DevOps, ARM templates, or CFT templates
· Scripting languages to automate processes: Helm charts, YAML, JSON, and Python (a brief illustrative sketch follows this list)
· Network security
· Database technologies: SQL and NoSQL
· Designing and developing CI/CD pipelines and strategies for automated application deployments, using Jenkins, Harness, uDeploy, Artifactory, Bitbucket, or Docker container tools
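As an informal illustration of the scripting item above (not part of the requirements), here is a minimal sketch of using Python to turn a JSON configuration into a Helm values file. The file names and keys (config.json, image, tag, replicas) are hypothetical assumptions, not details taken from this role.

```python
# Minimal sketch: render a Helm values.yaml from a JSON config with Python.
# Assumes PyYAML is installed (pip install pyyaml); config.json and the keys
# used below are hypothetical examples.
import json

import yaml

with open("config.json") as fh:
    cfg = json.load(fh)

# Map the JSON config onto a hypothetical Helm values structure.
values = {
    "replicaCount": cfg.get("replicas", 2),
    "image": {
        "repository": cfg["image"],
        "tag": cfg["tag"],
    },
}

with open("values.yaml", "w") as fh:
    yaml.safe_dump(values, fh, default_flow_style=False)

print("Wrote values.yaml for:", cfg["image"])
```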
- 4-6 years of experience in software development, analytics, and design
- 2 years of building and deploying applications in a large-scale environment on one or more of the following cloud platforms: AWS, Azure, Kubernetes. Certification in any of these is a plus.
- Experience implementing distributed data processing, event-driven data processing, and data integration patterns (ETL, ELT)
- Designing and implementing continuous-delivery database pipelines that provision non-production and production environments
- Experience developing tools and solutions to integrate, automate, and orchestrate cloud operational database needs, demonstrating automation with Azure/AWS managed data services, Azure/AWS DevOps, and ARM templates
- Event streaming/Kafka and/or Confluent experience (nice to have); a brief example sketch follows this list
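For context on the event-streaming item above, the sketch below shows a minimal Kafka producer in Python. It assumes the confluent-kafka client; the broker address and the "orders" topic are hypothetical placeholders, not details taken from this posting.

```python
# Minimal sketch: publish a JSON event to Kafka with the confluent-kafka
# Python client (pip install confluent-kafka). The broker address and the
# "orders" topic are hypothetical placeholders.
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})


def on_delivery(err, msg):
    # Called once per message after the broker acknowledges (or rejects) it.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")


event = {"order_id": 42, "status": "created"}
producer.produce(
    "orders",
    key=str(event["order_id"]),
    value=json.dumps(event),
    callback=on_delivery,
)

producer.poll(0)   # serve delivery callbacks
producer.flush()   # block until all queued messages are delivered
```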