Job Description

We are looking for an experienced Data Engineer skilled in Snowflake, Python, AWS, Oracle, Airflow, and TWS to join our team! The ideal candidate will meet the requirements below and bring a willingness to learn new and exciting technologies.

Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines for structured and unstructured datasets.
- Implement and optimize data storage, retrieval, and transformation processes in Snowflake and Oracle databases.
- Migrate data pipelines to the cloud using AWS and Snowflake.
- Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance.
- Integrate and manage workload orchestration using Airflow and Tivoli Workload Scheduler (TWS).
- Create robust data processing scripts and tools in Python, with a strong emphasis on modular, testable, and reusable code.
- Write complex queries and stored procedures for high-volume, high-performance database applications.
- Optimize query performance and troubleshoot database issues.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures.

Qualifications:
- Working experience on the Snowflake platform (certification preferred)
- Strong SQL and PL/SQL skills
- Working knowledge of AWS cloud services
- Working knowledge of Snowflake, including data warehouse design
- Strong hands-on experience in Python and ETL concepts
- Good understanding of metadata and data lineage
- Hands-on knowledge of SQL analytical functions, views, and materialized views
- Hands-on experience with Unix shell scripting
- Strong understanding of data warehouse concepts, dimensional modeling, and ETL best practices