
Big Data Engineer


mike p

Dec 4, 2018, 1:26:39 PM
Hi,
Please review the following job description.

Title: Data Engineer
Duration: 1+ years
Openings: 1
Position Type: Contract
Location: Durham, NC, United States
Visa: USC, GC, or OPT preferred; candidates must work on a W2 basis

Description:
We provide end-to-end architectural strategy and spearhead innovative solutions for our customers. The work spans a broad range of activities, including but not limited to influencing product strategy, developing scalable solutions, and partnering with squads to launch valuable, well-defined solutions for our customers.
As a Data Engineer, you will bring an end-to-end view: how a logical design translates into one or more physical databases, and how data flows through the successive stages involved. You have a passion for delivering solutions in a client-obsessed environment that offers opportunities to grow in multiple dimensions.
The Expertise We're Looking For
• 5+ years' experience in large-scale Big Data development and deployment automation in private/public cloud, preferably on AWS
• Understanding of the principles, best practices, and trade-offs of schema design for both relational and NoSQL database systems
• Good Understanding of Big Data NoSQL databases/technologies (DynamoDB, Hive, Spark, MongoDB)
• Experience with DevOps, Continuous Integration and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker)
• Extensive experience programming in Python, Ruby, Go, JavaScript, and Java, as well as strong Unix shell skills
• Define, structure, integrate, govern, store, describe, model, and maintain enterprise data for accuracy and usage, keeping the current state documented
• Support policies and procedures enforced by the data governance committee to ensure data architecture best practices, including accountability, governance, and requirements
• Document data inventory and data flow diagrams to determine what can be measured, when and how
• Experience building data ingestion pipelines on the cloud (using tools like Glue, Apache Sqoop, or vendor products like Talend or StreamSets); a minimal ingestion sketch follows this list
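As a rough illustration of the ingestion bullet above, here is a minimal PySpark sketch (the engine behind many Glue jobs) that lands raw CSV from S3 as partitioned Parquet. The bucket paths and column names are placeholders assumed for the example, not part of this posting.

```python
# Minimal PySpark ingestion sketch: land raw CSV from S3 as partitioned Parquet.
# Bucket paths and column names are placeholders, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-ingest").getOrCreate()

# Read raw delimited files dropped by an upstream source system.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Light standardization before landing: trim keys, stamp the load date.
cleaned = (
    raw.withColumn("order_id", F.trim(F.col("order_id")))
       .withColumn("ingest_date", F.current_date())
)

# Write query-friendly, partitioned Parquet for downstream Hive/Spark consumers.
(
    cleaned.write
    .mode("append")
    .partitionBy("ingest_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```

Partitioning by load date keeps downstream Hive/Spark scans cheap and makes reprocessing a single day straightforward.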
The Purpose of Your Role
• Determine database structural requirements by analyzing client operations, applications, and programming; reviewing objectives with clients; and evaluating current systems.
• Develop database solutions by designing the proposed system; defining database physical structure and functional capabilities, security, back-up, and recovery specifications (a brief sketch of this follows the list).
• Maintain database performance by identifying and resolving production and application development problems; calculating optimum values for parameters; evaluating, integrating, and installing new releases; completing maintenance; answering user questions.
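As a hedged illustration of defining a physical structure with back-up and recovery specifications, the boto3 sketch below creates a DynamoDB table and enables point-in-time recovery. The table, key, and region names are assumptions for the example only.

```python
# Sketch of defining a physical NoSQL structure with recovery enabled.
# Table, key, and region names are illustrative assumptions.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Physical structure: partition key + sort key chosen from the access patterns.
dynamodb.create_table(
    TableName="customer_events",
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "event_ts", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "event_ts", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)

# Recovery specification: wait for the table, then turn on point-in-time recovery.
waiter = dynamodb.get_waiter("table_exists")
waiter.wait(TableName="customer_events")
dynamodb.update_continuous_backups(
    TableName="customer_events",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```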
The Skills You Bring
• Your expertise in languages such as Python, Ruby, Go, JavaScript, and Java, as well as strong Unix shell skills
• Your Big Data Skills with popular stacks like Hadoop and Spark
• Your knowledge of AWS CloudFormation, OpenStack HEAT templates and Terraform
• Your expertise in all phases of data modeling, from conceptualization to database optimization (a short modeling sketch follows this list)
• Your ability to map the systems and interfaces used to manage data, set standards for data management, analyze the current state, conceive the desired future state, and define the projects needed to close the gap between the two
• Your desire and aptitude for learning new technologies
• Your excellent verbal and written communication skills
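For the data-modeling skill above, here is a brief, hypothetical sketch of carrying a conceptual entity model into a physical relational schema with SQLAlchemy; the entities, columns, and index are invented for illustration, not a prescribed design.

```python
# Sketch of carrying a logical entity model into a physical relational schema.
# Entity and column names are placeholders chosen for illustration.
from sqlalchemy import Column, Integer, String, Numeric, ForeignKey, Index, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    customer_id = Column(Integer, primary_key=True)
    email = Column(String(255), nullable=False, unique=True)
    orders = relationship("Order", back_populates="customer")

class Order(Base):
    __tablename__ = "order_header"
    order_id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customer.customer_id"), nullable=False)
    total_amount = Column(Numeric(12, 2), nullable=False)
    customer = relationship("Customer", back_populates="orders")

# Optimization step: index the foreign key used by the most common join.
Index("ix_order_header_customer_id", Order.customer_id)

if __name__ == "__main__":
    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)  # emits the physical DDL
```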
The Value You Deliver
• Build a strategy to reinvent systems and tools, creating a continuous cycle of innovation
• Create data monitoring models for each product and work with our marketing team to build models ahead of new releases
• Build data models that support complex transformations
• Identify and ingest new data sources and perform feature engineering for integration into models (a small feature-engineering sketch follows this list)
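As a small, assumed example of the feature-engineering bullet, the pandas sketch below rolls raw events up into per-customer model features; the columns, values, and scoring date are invented for illustration.

```python
# Minimal feature-engineering sketch: derive per-customer aggregates from raw events.
# Column names, values, and the scoring date are illustrative, not from the posting.
import pandas as pd

events = pd.DataFrame(
    {
        "customer_id": [1, 1, 2, 2, 2],
        "event_ts": pd.to_datetime(
            ["2018-12-01", "2018-12-02", "2018-12-01", "2018-12-03", "2018-12-04"]
        ),
        "amount": [20.0, 35.0, 10.0, 5.0, 40.0],
    }
)

# Roll raw events up to one row per customer for model consumption.
features = (
    events.groupby("customer_id")
    .agg(
        order_count=("amount", "size"),
        total_spend=("amount", "sum"),
        last_event=("event_ts", "max"),
    )
    .reset_index()
)

# Add a recency feature relative to a fixed scoring date.
scoring_date = pd.Timestamp("2018-12-05")
features["days_since_last_event"] = (scoring_date - features["last_event"]).dt.days
print(features)
```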