Role : Big Data/Spark Engineer
Location: Rockville, MD or Tysons, VA.
Big Data/Spark Engineer for a contract assignment with one of our premier regulatory clients. Must be local
to one of these office locations: Rockville, MD or Tysons, VA.
Hybrid work is required; non-local candidates will not be accepted.
2-step interview process; an onsite interview is mandatory.
Only the top two candidates will be presented for this role, which focuses on building scalable data pipelines.
High-volume AWS environment with extremely large datasets.
Top Skills: Spark, SQL, Python, AI-assisted workflows, automated validation.
Seeking a Big Data Engineer to design, develop, and optimize large-scale data processing systems. The engineer
will work closely with cross-functional teams to architect data pipelines, implement data integration solutions,
and ensure the performance, scalability, and reliability of big data platforms. The ideal candidate will have deep
expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop and Spark.
Responsibilities:
Design, develop, and maintain large-scale data processing pipelines using Big Data technologies (e.g.,
Hadoop, Spark, Python, Scala).
Implement data ingestion, storage, transformation, and analysis solutions that are scalable, efficient, and
reliable.
Stay current with industry trends and emerging Big Data technologies to continuously improve the data
architecture.
Collaborate with cross-functional teams to understand business requirements and translate them into
technical solutions.
Optimize and enhance existing data pipelines for performance, scalability, and reliability.
Develop automated testing frameworks and implement continuous testing for data quality assurance.
Conduct unit, integration, and system testing to ensure the robustness and accuracy of data pipelines.
Work with data scientists and analysts to support data-driven decision-making across the organization.
Write and maintain automated unit, integration, and end-to-end tests.
Monitor and troubleshoot data pipelines in production environments to identify and resolve issues.
Required Skills:
Bachelor's degree in Computer Science, Information Systems, or a related discipline with at least five (5) years
of related experience, or equivalent training and/or work experience; a Master's degree and prior Financial
Services industry experience are preferred.
Demonstrated technical expertise in object-oriented and database technologies/concepts that has resulted in
the deployment of enterprise-quality solutions.
Experience developing enterprise-quality solutions in an iterative or Agile environment.
Extensive knowledge of industry-leading software engineering approaches, including test automation, build
automation, and configuration management frameworks.
Experience with object-oriented programming languages such as Java, Scala, or Python.
Essential Technical Skills:
AI Tool Proficiency: Hands-on experience with AI development tools (GitHub Copilot, Q Developer,
ChatGPT, Claude, etc.).
Technical Background: Strong software development background with ability to contribute to technical
discussions.
Agile Methodology: Extensive experience with Scrum, Kanban, and continuous improvement practices.
Big Data technologies
Experience with Big Data technologies such as Hadoop, Spark, Hive, and Trino.
Understanding of common issues such as:
Data skew and strategies to mitigate it (see the salting sketch after this list).
Working with massive data volumes in the petabyte range.
Troubleshooting job failures due to resource limitations, bad data, and scalability challenges.
Real-world debugging and mitigation stories.
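As a hedged illustration of one common skew-mitigation technique, the PySpark sketch below salts a skewed join key so a single hot key is spread across many partitions. The bucket paths, table names, and the customer_id column are assumptions made up for this example, not part of the client's environment.

```python
# Minimal sketch of key salting (assumed table/column names, not the client's schema).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("skew-salting-sketch").getOrCreate()

NUM_SALTS = 16  # how many buckets each hot key is split into

events = spark.read.parquet("s3://example-bucket/events/")  # large, skewed side
dims = spark.read.parquet("s3://example-bucket/dims/")      # small side

# Add a random salt to the skewed side so one hot key becomes NUM_SALTS keys.
events_salted = events.withColumn("salt", (F.rand() * NUM_SALTS).cast("int"))

# Replicate the small side across all salt values so every (key, salt) pair can match.
salts = spark.range(NUM_SALTS).withColumnRenamed("id", "salt")
dims_salted = dims.crossJoin(salts)

# Join on the original key plus the salt, then drop the helper column.
joined = events_salted.join(dims_salted, on=["customer_id", "salt"]).drop("salt")
```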
AI Skills
Prompt Engineering: Proficiency in crafting effective prompts for AI coding assistants and analysis tools.
AI Workflow Design: Experience redesigning development processes to leverage AI capabilities.
Data Analysis: Ability to interpret AI-generated insights and translate them into actionable team
improvements.
Change Management: Experience leading teams through AI adoption and workflow transformation.
SQL Skills (Window Functions, Joins, Complex Queries)
Comfort with SQL window functions, multi-table joins, and aggregations (a representative query appears after this list).
Ability to provide examples of writing and optimizing SQL queries.
Handling of edge cases such as NULLs, duplicates, and ordering.
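For illustration only, a small PySpark snippet showing the kind of window-function SQL in scope, with explicit handling of duplicates and NULL ordering; the orders data and its columns are invented for this sketch.

```python
# Hypothetical example: latest order per customer, deduplicating rows and
# pushing NULL order dates last in the window ordering.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-window-sketch").getOrCreate()

spark.createDataFrame(
    [
        (1, "2024-01-05", 120.0),
        (1, "2024-01-05", 120.0),  # exact duplicate
        (1, None, 75.0),           # NULL order date
        (2, "2024-02-10", 300.0),
    ],
    ["customer_id", "order_date", "amount"],
).createOrReplaceTempView("orders")

latest = spark.sql("""
    WITH dedup AS (
        SELECT DISTINCT customer_id, order_date, amount FROM orders
    )
    SELECT customer_id, order_date, amount
    FROM (
        SELECT customer_id, order_date, amount,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY order_date DESC NULLS LAST
               ) AS rn
        FROM dedup
    ) ranked
    WHERE rn = 1
""")
latest.show()
```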
Apache Spark (Development, Internals & Tuning)
Understanding of Spark's core architecture — executors, tasks, stages, DAG.
Spark performance tuning techniques: partitioning, caching, broadcast joins, etc. (see the sketch after this list).
Troubleshooting slow-running or stuck jobs and resource issues in Spark.
Experience optimizing Spark jobs for large-scale datasets.
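A brief, hedged sketch of the broadcast-join style of tuning mentioned above; the configuration value, paths, and the product_id key are placeholders rather than recommendations for the client's workload.

```python
# Hypothetical tuning sketch: broadcast the small dimension table so the large
# fact table avoids a shuffle, and size shuffle partitions for the cluster.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("broadcast-join-sketch")
    .config("spark.sql.shuffle.partitions", "400")  # placeholder; tune to cluster size
    .getOrCreate()
)

facts = spark.read.parquet("s3://example-bucket/facts/")  # very large table
dims = spark.read.parquet("s3://example-bucket/dims/")    # small enough to broadcast

# The broadcast hint ships the small table to every executor,
# avoiding a shuffle of the large side.
joined = facts.join(F.broadcast(dims), on="product_id", how="left")

# Cache only when the result feeds multiple downstream actions.
joined.cache()
joined.count()
```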
Cloud Technologies
Exposure to AWS services like S3, EMR, Glue, Lambda, Athena, etc.
Using S3 with Spark (e.g., dealing with file formats and consistency issues); see the sketch after this list.
Knowledge of EKS, serverless architectures, etc.
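As a rough example of S3 file-format handling with Spark, the sketch below converts raw CSV into partitioned Parquet; the bucket names, paths, and schema are assumptions for illustration.

```python
# Hypothetical sketch: convert raw CSV on S3 into partitioned, columnar Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, DateType

spark = SparkSession.builder.appName("s3-formats-sketch").getOrCreate()

# An explicit schema avoids a slow inference pass over large CSV inputs.
schema = StructType([
    StructField("trade_id", StringType(), False),
    StructField("symbol", StringType(), True),
    StructField("price", DoubleType(), True),
    StructField("trade_date", DateType(), True),
])

raw = (
    spark.read
    .option("header", "true")
    .schema(schema)
    .csv("s3://example-bucket/raw/trades/")
)

# Partitioned Parquet is far cheaper to scan later (e.g., from Athena or Trino)
# than raw CSV.
(
    raw.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("s3://example-bucket/curated/trades/")
)
```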
Programming - Python or Scala
Ability to write clean, modular, and performant code.
Functional programming concepts (e.g., immutability, higher-order functions); a small example follows this list.
Real-world use cases involving scalable data processing code.
Understanding of collections, concurrency, and memory management.
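A tiny, hedged illustration of the functional style listed above (immutability and higher-order functions) in plain Python; the Trade record and fee logic are made up for the example.

```python
# Hypothetical sketch: immutable records and higher-order functions composed
# into a small transformation pipeline, with no in-place mutation.
from dataclasses import dataclass, replace
from typing import Callable, Iterable, List

@dataclass(frozen=True)  # immutable record
class Trade:
    symbol: str
    price: float
    quantity: int

Transform = Callable[[Trade], Trade]

def apply_fee(rate: float) -> Transform:
    # Higher-order function: returns a new transform instead of mutating state.
    return lambda trade: replace(trade, price=trade.price * (1 + rate))

def run_pipeline(trades: Iterable[Trade], steps: List[Transform]) -> List[Trade]:
    result = list(trades)
    for step in steps:
        result = [step(trade) for trade in result]  # each step yields new objects
    return result

trades = [Trade("ABC", 100.0, 10), Trade("XYZ", 50.0, 5)]
adjusted = run_pipeline(trades, [apply_fee(0.001)])
```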
Good to have:
Experience managing production data pipelines/ETL systems.
Experience with CI/CD.
Experience writing test cases.
AWS certifications.
Regards,
Sasikumar
Fax: 732-399-9513 | Address: 15 Corporate Pl S, Suite #450, Piscataway Township, NJ 08854
Email: sasi....@intellyk.com | Website: www.intellyk.com
In my absence you may reach my manager - jyothi....@intellyk.com