Job Description:
Job Title: Informatica Lead with Databricks
Location: Secaucus, NJ or Branchville, NJ (Hybrid / Onsite as required)
Employment Type: Contract / C2C / W2
Duration: Long Term
Experience: 10+ Years
---------------------------------------------------------------
Job Summary
We are seeking an experienced Informatica Lead with Databricks expertise for a long-term contract opportunity. The ideal candidate will lead data integration and modernization initiatives, working closely with business, architecture, and cloud teams to design, migrate, and optimize scalable data pipelines using Informatica and Databricks.
---------------------------------------------------------------
Key Responsibilities
Lead development and delivery of Informatica ETL/ELT solutions (PowerCenter / IDMC)
Design and implement Databricks-based pipelines using Apache Spark and Delta Lake
Drive migration of legacy ETL workloads to cloud-native architectures
Perform data ingestion, transformation, and optimization for large datasets
Ensure adherence to best practices for data quality, governance, lineage, and security
Review ETL code, conduct performance tuning, and resolve production issues
Collaborate with cross-functional teams including data architects and cloud engineers
Support CI/CD processes for data pipelines
Provide technical guidance and mentoring to junior developers
---------------------------------------------------------------
Required Skills & Qualifications
10+ years of experience in Data Engineering / ETL Development
Strong hands-on experience with:
  Informatica PowerCenter and/or Informatica IDMC
  Databricks (Spark, Delta Lake, Jobs, Workflows)
Advanced SQL skills (Oracle, SQL Server, Snowflake, etc.)
Experience with cloud platforms (AWS, Azure, or GCP)
Strong understanding of data warehousing and ETL architecture
Proven experience in performance tuning and troubleshooting ETL pipelines
Excellent communication and stakeholder interaction skills
---------------------------------------------------------------
Preferred / Nice-to-Have Skills
Experience with Snowflake or other cloud data warehouses
Proficiency in Python or Scala for Spark development
Experience with CI/CD tools (Git, Jenkins, Azure DevOps, GitHub Actions)
Knowledge of data governance, metadata management, and security standards
Prior experience in leading or mentoring ETL teams