Urgent Hiring for the Position || Data Engineer || Minnesota, MN (Remote)

Mansi

Mar 17, 2025, 9:46:48 AM
to mans...@ktekresourcing.com


Hello,

My name is Mansi Sen, and I am a Technical Recruiter for K-Tek Resourcing.
 
We are searching for professionals for the below business requirement on behalf of one of our clients. Please read through the requirements and connect with us if they suit your profile.

Please review the job description, and if you are interested, send your updated resume to Mans...@ktekresourcing.com.

Please do not share candidates who have already been submitted to HCL America.
We need a candidate with good knowledge of the technical skills listed in the job description below.

Job Title: Data Engineer
Location: Minnesota, MN (Remote)
Duration: Long Term
Implementation Partner: HCL America

Job Description:

Mandatory Skills: Python, PySpark, and SQL

Responsibilities:

· Develop, optimize, and maintain ETL/ELT pipelines using PySpark and SQL.
· Work with structured and unstructured data to build scalable data solutions.
· Write efficient and scalable PySpark scripts for data transformation and processing.
· Optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
· Design and implement data models, schemas, and partitioning strategies for large-scale datasets.
· Collaborate with Data Scientists, Analysts, and other Engineers to integrate data workflows.
· Ensure data quality, validation, and consistency in data pipelines.
· Implement error handling, logging, and monitoring for data pipelines.
· Work with cloud platforms (AWS, Azure, or GCP) for data processing and storage.
· Optimize data pipelines for cost efficiency and performance.

 

Technical Skills Required:

· Strong experience in Python for data engineering tasks.
· Proficiency in PySpark for large-scale data processing.
· Deep understanding of SQL (Joins, Window Functions, CTEs, Query Optimization).
· Experience in ETL/ELT development using Spark and SQL.
· Experience with cloud data services (AWS Glue, Databricks, Azure Synapse, GCP BigQuery).
· Familiarity with orchestration tools (Airflow, Apache Oozie).
· Experience with data warehousing (Snowflake, Redshift, BigQuery).
· Understanding of performance tuning in PySpark and SQL.
· Familiarity with version control (Git) and CI/CD pipelines.
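
For illustration only (this sketch is not part of the client's job description): a minimal PySpark example touching several of the skills above -- reading raw data, deduplicating with a window function, running a basic data-quality check, and writing a partitioned output. All paths, table names, and column names are hypothetical.

# Illustrative PySpark ETL sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw structured data.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: keep the latest record per order_id using a window function,
# and derive a date column to partition the output by.
latest = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
cleaned = (
    orders
    .withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
    .withColumn("order_date", F.to_date("updated_at"))
)

# Basic data-quality check: fail fast if the key column has nulls.
if cleaned.filter(F.col("order_id").isNull()).limit(1).count() > 0:
    raise ValueError("Null order_id found in cleaned data")

# Load: write a partitioned output for downstream SQL/warehouse consumers.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/")
)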



USA | Canada | India

Thanks & Regards,

Mansi Sen

Mans...@ktekresourcing.com
