Hi,
My name is Adarsh, and I am a Technical Recruiter with Empower Professionals Inc. I am reaching out about a Sr. Data Engineer – Data & Intelligence opening with one of our clients based in Frisco, TX (hybrid). Please let me know if you are currently open to the job market and interested in this role; if so, we can connect and speak further.
Role: Sr. Data Engineer – Data & Intelligence
Location: Frisco, TX (Hybrid; local candidates only)
Duration: 12+ Months
In-person interview required
Must Have:
- Advanced SQL (CTEs, window functions, query optimization, execution plans)
- Snowflake (Snowpipe, Streams, Tasks, Dynamic Tables, performance tuning)
- Databricks with PySpark (Delta Live Tables, Unity Catalog, job clusters)
- Python (pandas, PySpark, pytest, logging)
- dbt (models, tests, snapshots, macros, CI integration)
- Airflow or Azure Data Factory (DAG design, scheduling, alerting)
- Azure Cloud (ADLS Gen2, Event Hub, ADF, Azure Functions, Key Vault)
- Streaming technologies (Apache Kafka / Azure Event Hub, Spark Structured Streaming)
- Finance domain: Billing, Revenue, GL, OPEX, revenue recognition, period-end close cycles
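For candidates gauging the SQL bar, here is a minimal sketch of the CTE-plus-window-function pattern the list refers to, runnable with Python's built-in sqlite3 module. The table and column names are invented for illustration only:

```python
import sqlite3

# Tiny in-memory billing table (hypothetical schema) used to illustrate
# a CTE feeding a window function.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE billing (account_id TEXT, period TEXT, amount REAL);
INSERT INTO billing VALUES
  ('A1', '2024-01', 100.0),
  ('A1', '2024-02', 150.0),
  ('A2', '2024-01', 200.0);
""")

# The CTE filters to one account; SUM(...) OVER (ORDER BY period)
# computes a running revenue total across billing periods.
rows = conn.execute("""
WITH acct AS (
  SELECT account_id, period, amount
  FROM billing
  WHERE account_id = 'A1'
)
SELECT period,
       SUM(amount) OVER (ORDER BY period) AS running_revenue
FROM acct
ORDER BY period
""").fetchall()

print(rows)  # [('2024-01', 100.0), ('2024-02', 250.0)]
```

(Window functions require SQLite 3.25+, which ships with modern Python builds.)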
Responsibilities:
- Data Pipeline Development
  - Design and build scalable, reliable ELT/ETL pipelines for finance data (billing, revenue, GL, OPEX).
  - Implement batch and incremental ingestion patterns (full load, CDC, watermark-based).
  - Build idempotent, rerunnable pipelines with robust error handling, retry logic, and dead-letter-queue patterns.
- Platform & Tooling
  - Develop and optimize pipelines using Snowflake (Snowpipe, Streams, Tasks, Dynamic Tables, performance tuning).
  - Build data processing workflows in Databricks (PySpark, Delta Live Tables, Unity Catalog, job clusters).
  - Create and maintain dbt models, tests, snapshots, macros, and packages with CI integration.
  - Orchestrate data workflows using Airflow or Azure Data Factory (DAG design, dependencies, scheduling, alerts).
- Cloud Infrastructure
  - Work within Azure (ADLS Gen2, Event Hub, ADF, Azure Functions, Key Vault) and/or AWS (S3, Glue, Lambda, Secrets Manager).
  - Apply Infrastructure-as-Code fundamentals (Terraform, Bicep) for pipeline and resource provisioning.
  - Apply cloud cost awareness, including compute sizing, partitioning strategies, and storage optimization.
- Languages & Frameworks
  - Write advanced SQL (CTEs, window functions, query tuning, execution-plan analysis).
  - Develop in Python (pandas, PySpark, requests, pytest, logging).
  - Read and modify existing Scala/Spark jobs as needed.
  - Use shell scripting for automation and operational tasks.
- Streaming & Real-Time Processing
  - Build near-real-time pipelines using Apache Kafka / Azure Event Hub.
  - Implement Spark Structured Streaming with stateful aggregations, watermarking, and checkpointing.
  - Support finance use cases such as revenue reconciliation and fraud-signal feeds.
- Data Quality & Testing
  - Implement unit and integration testing for pipelines (pytest, dbt tests).
  - Create data quality checks (row counts, nulls, duplicates, referential integrity).
  - Use Great Expectations or custom frameworks for validation.
  - Monitor SLAs for pipeline latency and data freshness, with alerting.
- Data Modeling Support
  - Implement architected schemas (star, snowflake, data vault).
  - Manage Slowly Changing Dimensions (SCD Types 1 and 2) for finance entities.
  - Define partitioning and clustering strategies for large-scale finance tables.
  - Support semantic-layer definitions (metrics and dimensions).
- DevOps & Engineering Practices
  - Participate in CI/CD for data pipelines using GitHub Actions or Azure DevOps.
  - Follow Git branching strategies (trunk-based, feature branches).
  - Perform code reviews and enforce engineering standards.
  - Support environment promotion patterns (dev → QA → prod).
- Security & Governance
  - Implement RBAC and row/column-level security in Snowflake and Databricks.
  - Ensure PII and CPNI handling per T-Mobile's TISS 310 policy.
  - Manage secrets securely (Key Vault, environment variables, no hardcoded credentials).
  - Implement data lineage and audit instrumentation for compliance.
- Collaboration & Communication
  - Partner with Data Architects to translate design specs into production-ready pipelines.
  - Work closely with Data Analysts to optimize downstream consumption performance.
  - Communicate pipeline incidents and data issues clearly to business stakeholders.
  - Participate in an on-call rotation to support production pipelines.
- Senior-Level Expectations
  - Own delivery of complex, multi-source pipelines with minimal direction.
  - Mentor junior and mid-level data engineers through pairing and code reviews.
  - Identify and drive technical-debt reduction alongside feature delivery.
  - Contribute to and shape team standards, templates, and reusable components.
  - Influence tooling, framework, and platform decisions across the team.
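To make the ingestion bullets above concrete, here is a minimal Python sketch of the watermark-based incremental pattern with an idempotent rerun. All names are hypothetical; a production pipeline would persist the watermark in a control table rather than pass it in memory:

```python
from datetime import datetime, timezone

def incremental_load(records, watermark):
    """Return records newer than the stored watermark, plus the new watermark.

    Rerunning with the same inputs yields the same output (idempotent),
    and the watermark only advances when new rows are found.
    """
    new_rows = [r for r in records if r["updated_at"] > watermark]
    if not new_rows:
        return [], watermark
    return new_rows, max(r["updated_at"] for r in new_rows)

source = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
    {"id": 3, "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]

# First run: a watermark at Jan 1 picks up rows 2 and 3.
rows, wm = incremental_load(source, datetime(2024, 1, 1, tzinfo=timezone.utc))
# Second run with the advanced watermark: nothing new, so reruns are safe.
rerun, wm2 = incremental_load(source, wm)
print([r["id"] for r in rows], len(rerun))  # [2, 3] 0
```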
Requirements:
- 8+ years of experience in data engineering or platform engineering roles.
- Strong experience with Snowflake, Databricks, and dbt in production environments.
- Advanced SQL and Python skills.
- Experience building finance or regulated data pipelines at scale.
Preferred Qualifications:
- Telecom industry experience (ARPU, churn, prepaid/postpaid metrics).
- Experience with both Azure and AWS cloud platforms.
- Prior experience supporting financial reporting and period-end close cycles.
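Since Slowly Changing Dimensions come up in both the responsibilities and the finance domain above, here is a minimal SCD Type 2 sketch under simplified assumptions (in practice this would typically be a MERGE in Snowflake or a dbt snapshot; the keys and attributes below are invented for illustration):

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, today):
    """Apply one incoming record to an SCD Type 2 dimension (minimal sketch).

    When a tracked attribute changes, the current row is expired and a new
    current row is appended; unchanged or historical rows pass through.
    """
    out, matched = [], False
    for row in dim_rows:
        if row["key"] == incoming["key"] and row["is_current"]:
            matched = True
            if row["attrs"] != incoming["attrs"]:
                # Expire the old version and open a new current one.
                out.append({**row, "end_date": today, "is_current": False})
                out.append({"key": incoming["key"], "attrs": incoming["attrs"],
                            "start_date": today, "end_date": None,
                            "is_current": True})
            else:
                out.append(row)  # no change
        else:
            out.append(row)
    if not matched:  # brand-new entity
        out.append({"key": incoming["key"], "attrs": incoming["attrs"],
                    "start_date": today, "end_date": None, "is_current": True})
    return out

dim = [{"key": "cust-1", "attrs": {"plan": "prepaid"},
        "start_date": date(2023, 1, 1), "end_date": None, "is_current": True}]
dim = scd2_upsert(dim, {"key": "cust-1", "attrs": {"plan": "postpaid"}},
                  date(2024, 6, 1))
current = [r for r in dim if r["is_current"]]
print(len(dim), current[0]["attrs"]["plan"])  # 2 postpaid
```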
In compliance with salary transparency laws, the expected pay range for this role is $50–$60/hr. Actual compensation depends on experience and interview evaluation.
I look forward to your response.
P.S. Empower is a top vendor to clients such as Apex Systems LLC, Sogeti, Randstad, CapGemini, UST and more.
Thanks,
Adarsh Sharma
Technical Recruiter | Empower Professionals
Ada...@empowerprofessionals.com
100 Franklin Square Drive, Suite 104 | Somerset, NJ 08873
www.empowerprofessionals.com
Certified NJ and NY Minority Business Enterprise (NMSDC)
Note: We respect your online privacy. This is not an unsolicited mail. Under Bill S.1618, Title III, passed by the 105th U.S. Congress, this mail cannot be considered spam as long as we include contact information and a method to be removed from our mailing list.
If you are not interested in receiving our e-mails, please reply with "REMOVE" in the subject line and list all e-mail addresses to be removed, including any addresses that may be forwarding our e-mails to you. We apologize for the inconvenience.
This e-mail and any files transmitted with it are for the sole use of the intended recipient(s) and may contain confidential and privileged information. If you are not the intended recipient(s), please reply to the sender and destroy all copies of the original
message. Any unauthorized review, use, disclosure, dissemination, forwarding, printing or copying of this email, and/or any action taken in reliance on the contents of this e-mail is strictly prohibited and may be unlawful.