Hi Vendors,
Position - Azure Databricks Architect
Location - Remote
Key Responsibilities
Design and implement scalable, reliable, and efficient data pipelines on Databricks.
Architect end-to-end data platforms integrating Databricks with cloud environments (Azure, AWS, or GCP).
Collaborate with data engineers, data scientists, and business stakeholders to translate data requirements into scalable technical solutions.
Optimize and tune Databricks clusters for performance, scalability, and cost efficiency.
Implement and enforce data security, governance, and compliance best practices.
Develop architectural standards, frameworks, and documentation for Databricks usage.
Lead troubleshooting and resolution of complex platform and integration issues.
Establish CI/CD and DevOps practices for data engineering workflows.
Essential Skills
Strong hands-on experience with Databricks architecture and administration
Deep expertise in Apache Spark
Proficiency in Python, Scala, and SQL
Experience with cloud platforms: Microsoft Azure (preferred), AWS, or Google Cloud Platform
Strong knowledge of data warehousing concepts, ETL/ELT processes, and data modeling (dimensional and Lakehouse)
Experience with cluster optimization and performance tuning, cost optimization strategies, security (RBAC, IAM, encryption), and data governance frameworks
Desirable Skills
Experience with Delta Lake & Lakehouse architecture
CI/CD implementation for data pipelines
Infrastructure as Code (Terraform, ARM templates)
Experience with streaming pipelines (Spark Structured Streaming)
Knowledge of Unity Catalog
Exposure to DevOps practices
Experience in regulated environments (GDPR, HIPAA, etc.)
Experience Required
6–8 years of overall IT experience
At least 3 years of hands-on Databricks architecture experience
Proven experience architecting large-scale enterprise data platforms