Hiring for Big Data Engineer (Hadoop / Spark Platform)

Joshua Daniel

Dec 19, 2025, 2:50:49 PM
to jos...@innoverglobalinc.com
Job Title: Big Data Engineer (Hadoop / Spark Platform)

Location: Dallas, TX or Chicago, IL (Hybrid)

Duration: 12+ Months

Visa: USC / Green Card Only

Job Description:

Innover Global is looking for an experienced Big Data Engineer with strong hands-on expertise in administering, architecting, and supporting enterprise Apache Hadoop platforms. This role focuses on Hadoop cluster management, Spark-based data processing, ETL pipelines, and platform-level performance, security, and scalability.

The ideal candidate has real-world experience working with HDFS, YARN, Spark, Hive, and Hadoop security frameworks in production environments, and can collaborate closely with data engineering, analytics, and application teams to support large-scale data platforms.

Key Responsibilities:

Architect, implement, and maintain enterprise-scale Hadoop platforms

Administer and support Hadoop clusters, ensuring high availability, reliability, and performance

Manage, monitor, and tune HDFS, YARN, and MapReduce

Develop and support large-scale data processing using Apache Spark and PySpark

Work with Hive for data warehousing and HBase for NoSQL data storage

Design, build, and support ETL pipelines using Spark, Hive, and SQL (a minimal sketch follows this list)

Implement and manage Hadoop security including Kerberos authentication, authorization, and access controls

Monitor cluster health, troubleshoot production issues, and perform capacity planning and performance tuning

Collaborate with data engineering, analytics, DevOps, and application teams

Ensure compliance with enterprise security, governance, and data standards

Participate in platform upgrades, enhancements, and operational support activities
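
For illustration only, not part of the formal requirements: a minimal PySpark ETL sketch of the kind of Spark/Hive pipeline work described above. It assumes a Spark session with Hive support enabled; the database, table, and column names (raw_db.orders_raw, curated_db.daily_revenue, order_ts, status, amount) are hypothetical placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Build a session that can read and write Hive-managed tables.
    spark = (
        SparkSession.builder
        .appName("orders_etl")          # hypothetical job name
        .enableHiveSupport()
        .getOrCreate()
    )

    # Extract: read raw events from a Hive table (hypothetical name).
    raw = spark.table("raw_db.orders_raw")

    # Transform: keep completed orders and aggregate revenue per day.
    daily = (
        raw.filter(F.col("status") == "COMPLETED")
           .withColumn("order_date", F.to_date("order_ts"))
           .groupBy("order_date")
           .agg(F.sum("amount").alias("daily_revenue"))
    )

    # Load: write a curated Hive table, partitioned by date.
    (daily.write
          .mode("overwrite")
          .partitionBy("order_date")
          .saveAsTable("curated_db.daily_revenue"))

    spark.stop()

In production such a job would typically be launched with spark-submit under the cluster's Kerberos identity; the sketch omits that deployment detail.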

Required Skills & Qualifications:

Strong experience as a Big Data Engineer, Hadoop Administrator, or Hadoop Platform Engineer

Hands-on expertise with HDFS, YARN, MapReduce, Apache Spark, Hive, and HBase

Strong working experience with PySpark and SQL

Solid understanding of ETL concepts and big data pipelines

Experience supporting Hadoop platforms in production environments

Strong troubleshooting, monitoring, and performance tuning skills

Excellent communication and collaboration skills

Nice to Have:

Experience with Cloudera or Hortonworks Hadoop distributions

Exposure to cloud-based big data platforms

Experience with workflow orchestration tools such as Oozie or Apache Airflow (see the DAG sketch after this list)

Background as a Java Developer with Big Data / Hadoop experience

Knowledge of data governance and enterprise security standards
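
Again for illustration only: a minimal Apache Airflow DAG sketch showing how a pipeline like the one above might be scheduled. It assumes Airflow 2.4+ and a hypothetical script path (/opt/jobs/orders_etl.py); an Oozie shop would express the same step as a workflow action instead.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Daily schedule for the hypothetical orders ETL job sketched earlier.
    with DAG(
        dag_id="daily_orders_etl",      # hypothetical DAG name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",              # Airflow 2.4+ keyword
        catchup=False,                  # do not backfill missed runs
    ) as dag:
        # Submit the PySpark job; the script path is an assumption.
        run_etl = BashOperator(
            task_id="run_spark_etl",
            bash_command="spark-submit /opt/jobs/orders_etl.py",
        )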

Contact:

📧 Jos...@innoverglobalinc.com