Data Fabric Technical Architect
Houston, TX
5 Days Onsite
Note: This is a Data Fabric Technical Architect position with Power BI & Azure.
Roles and responsibilities:
- Design the target Fabric lakehouse + Purview architecture, including medallion layers, OneLake usage, and key services. This establishes the core technical blueprint that all ingestion, modeling, and analytics work will follow. Outputs include architecture diagrams and key non-functional requirements.
- Define the ingestion and integration approach across EFR, RADS, GADS, EIA, and regional sources, including patterns for batch and near real-time. Document how pipelines will be orchestrated, monitored, and scaled across regions and tenants. This becomes the reference for all future data integration work.
- Design the semantic and data-modeling patterns for Silver and Gold, including star schemas, conformed dimensions, and entity relationships. Specify modeling conventions for performance, compliance, events, and other core domains. These patterns ensure consistent, reusable models across the platform.
- Define the security architecture that enforces CEI/CUI boundaries, Zero Trust principles, and regulatory obligations across Fabric and Purview. Document how RBAC, network boundaries, encryption, and logging will work end-to-end. This architecture is required before onboarding regional and E-ISAC data.
- Design how Purview will be used for catalogs, classifications, lineage, and policy management across the estate. Specify collections, scan patterns, roles, and integration points with Fabric workspaces and security. This provides the backbone for metadata-driven governance.
- Consolidate all architectural decisions into a single, versioned blueprint document covering platform, security, governance, and integration. Ensure traceability back to requirements and validate alignment with client enterprise architecture principles. This document becomes the baseline for build and QA.
- Lead implementation of reusable ingestion patterns and pipelines for the initial systems (EFR, RADS, GADS, regional feeds, etc.). Include logging, error handling, and parameterization so new feeds can be onboarded with minimal custom code. This framework is the backbone for all upstream data movement.
- Partner with the cloud admin to configure Purview collections, register data sources, and schedule initial scans of key systems and lakehouse objects. Validate that metadata, lineage, and classifications are being captured as designed. This step brings the catalog to life and makes data assets discoverable.
- Work with the BA to create first-generation semantic models that expose curated entities, measures, and hierarchies for priority domains. These V1 models will be used for early testing and QA, and as a foundation for domain-level modeling in later phases. Feedback from these models will refine standards and design patterns.
- Define and implement reconciliation rules at the domain level to ensure that cross-source and cross-region differences are consistently resolved. Capture rules in both documentation and technical logic so they can be repeated and audited. This improves trust in domain-level metrics.
Thanks & Regards,
Confidentiality & Disclaimer:
This e-mail message, including any attachments, contains information that may be privileged or confidential and is the property of Cyma Systems Inc. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message. Cyma Systems Inc does not accept any liability for virus-infected emails.
CYMA SYSTEMS INC is an Equal Opportunity Employer (EOE). Qualified applicants are considered for employment without regard to age, race, color, religion, sex, national origin, sexual orientation, disability, or veteran status.