· Relevant degree in Computer Science or a related field preferred.
· 5 or more years of relevant work experience required.
· Experience designing and implementing scalable data engineering solutions using Databricks and Apache Spark required.
· Strong expertise in Delta Lake architecture, distributed computing, and large-scale data processing frameworks.
· Experience building and optimizing ETL/ELT pipelines in Databricks within cloud environments such as Azure, AWS, or GCP.
· Experience supporting machine learning model deployment and lifecycle management within Databricks preferred.
· Proficiency in Python or Scala for data engineering and platform development.