Requirement - Principal AI Architect


vamsi s

Apr 20, 2026, 10:08:35 AM
to vam...@sritechsolutions.com

Role: Principal AI Architect (Technical Architecture · AI Strategy · Hands-On Implementation · Market Intelligence)

Location: Remote

Experience: 13+ years

 

ABOUT THE ROLE

We are looking for a Principal AI Architect — a deeply technical, hands-on leader who sits at the intersection of AI research, system architecture, and pragmatic implementation. This is not a management role. It is a high-impact individual contributor position for someone who loves writing code, designing systems, and staying ahead of the AI market curve. You will set the technical direction for our AI engineering practice, drive architecture decisions from first principles, and act as the team's primary connection to fast-moving developments in LLMs, agentic frameworks, retrieval systems, and applied ML. You will own how we build AI — not just what we build.

 

WHAT YOU'LL DO

AI Architecture & System Design

  • Design end-to-end AI system architectures: multi-agent orchestration layers, RAG pipelines, hybrid retrieval systems (knowledge graphs + vector search), text-to-SQL engines, and real-time inference APIs.
  • Define and own technical blueprints for new AI products — from data ingestion and embedding pipelines through to response generation, evaluation, and production monitoring.
  • Solve hard engineering problems: latency, precision/recall trade-offs, context window management, hallucination mitigation, and cost-efficient LLM usage at scale.
  • Make deliberate, well-documented architecture decisions with clear trade-off analysis (build vs. buy, framework selection, deployment topology).
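
The hybrid-retrieval bullet above is often implemented by fusing ranked result lists from each retriever. A minimal, self-contained sketch using reciprocal rank fusion — the document IDs, list contents, and `k` constant below are illustrative, not taken from any specific system:

```python
# Merge ranked lists of doc IDs (e.g. graph-traversal hits and
# vector-search hits) with reciprocal rank fusion (RRF).
# All names and data here are illustrative placeholders.

def reciprocal_rank_fusion(result_lists, k=60):
    """Fuse ranked lists of doc IDs; higher fused score ranks first."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

graph_hits = ["doc_a", "doc_c", "doc_d"]    # precision-oriented retriever
vector_hits = ["doc_b", "doc_a", "doc_c"]   # recall-oriented retriever
fused = reciprocal_rank_fusion([graph_hits, vector_hits])
print(fused[0])  # doc_a: it ranks high in both lists
```

RRF is one common fusion choice because it needs no score normalisation across retrievers; a reranking layer would then re-score the fused top-k.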

 

Hands-On Implementation

  • Write production-quality code — Python, SQL, API services — across the full AI lifecycle: data qualification, model training, evaluation, containerised deployment, and API serving.
  • Build and own reusable, framework-quality components (chunking pipelines, retrieval layers, agent tool-calling modules) that accelerate team velocity.
  • Prototype rapidly, validate with real data, and iterate; bring proofs-of-concept to production-grade systems with full observability and evaluation harnesses.
  • Own CI/CD pipelines, Docker-based deployment, and production telemetry for AI services.
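
To give a flavour of the reusable components described above, here is a minimal overlapping-window chunker sketch. Sizes and names are illustrative; production chunkers typically split on tokens or sentence boundaries rather than raw characters:

```python
# A minimal chunking component: fixed-size character windows with
# overlap, of the kind used to feed an embedding pipeline.
# chunk_size and overlap values are illustrative defaults.

def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping windows for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # final window already reached the end of the text
    return chunks

doc = "x" * 500
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))  # 3 200
```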

 

AI Market Intelligence & Technology Strategy

  • Continuously track and evaluate the AI landscape — new LLMs, agentic frameworks (LangGraph, Google ADK, CrewAI, AutoGen), retrieval methods, fine-tuning techniques, and emerging tooling.
  • Run structured technology evaluations: benchmark new models and frameworks against production requirements and make clear, evidence-based adoption recommendations.
  • Translate AI market trends into actionable roadmap inputs — surfacing opportunities for step-change capability improvements before competitors do.
  • Represent the team at AI conferences, read papers, engage with the open-source community, and bring back signal that shapes our technical direction.
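
A structured technology evaluation of the kind described above reduces, at its core, to scoring candidates on a fixed task set and comparing the results. A toy sketch in which the "models" are stubbed as plain functions — every name and task here is a placeholder:

```python
# A minimal evaluation harness: score candidate models on a fixed set
# of (input, expected) pairs and pick the best by accuracy.

def evaluate(candidates, tasks):
    """Return {candidate_name: accuracy} over (input, expected) pairs."""
    report = {}
    for name, fn in candidates.items():
        correct = sum(1 for x, expected in tasks if fn(x) == expected)
        report[name] = correct / len(tasks)
    return report

tasks = [("2+2", "4"), ("3+3", "6"), ("5+5", "10")]
candidates = {
    "model_a": lambda q: str(sum(int(t) for t in q.split("+"))),
    "model_b": lambda q: "4",  # right on only one task
}
report = evaluate(candidates, tasks)
best = max(report, key=report.get)
print(best, report[best])  # model_a 1.0
```

In a real evaluation the stubs would be API calls to the frameworks or models under test, and accuracy would be one of several tracked metrics (latency and cost being the usual others).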

 

Technical Standards & Team Enablement

  • Establish and enforce engineering standards: code review practices, PR templates, testing frameworks, and reusable component libraries that reduce development-to-deployment cycle times.
  • Serve as the senior technical reviewer for all AI system designs — providing deep, opinionated feedback that raises the quality bar across the team.
  • Pair with engineers on hard problems, conduct design sessions, and create internal reference architectures and runbooks that embed best practices across the team.
  • Design and run internal AI upskilling sessions (e.g., capstone programmes, architecture workshops) to keep the team sharp on production-grade AI development.

 

Cross-Functional Technical Partnership

  • Partner closely with Product, Data Science, and Platform Engineering to align AI architecture with product direction, data constraints, and infrastructure capabilities.
  • Communicate complex technical trade-offs clearly to non-technical stakeholders — translating architecture decisions into business impact narratives.
  • Work with compliance and legal teams in regulated environments (financial services, cybersecurity) to ensure AI systems meet governance, auditability, and data-handling requirements.

 

WHAT WE'RE LOOKING FOR

Must-Have Experience

  • 12+ years of hands-on experience in AI/ML engineering and data science, with significant depth in production system delivery.
  • Deep, working expertise in LLM application development: LangChain, LangGraph, tool-calling agents, RAG, prompt engineering, embedding pipelines, and hybrid retrieval.
  • Proven track record architecting and shipping multi-agent systems, knowledge graph-powered retrieval (Neo4j or equivalent), and real-time inference APIs.
  • Strong ML fundamentals: XGBoost, deep learning, NLP, time-series forecasting, propensity modelling, experimental design, and causal inference.
  • Experience delivering AI systems in regulated industries (financial services, cybersecurity, healthcare) with SOX, GDPR, or SOC 2 compliance awareness.
  • Expert-level Python and SQL; fluency with GCP, AWS, Docker, FastAPI, BigQuery, FAISS, and CI/CD tooling.

 

Technical Depth

  • Ability to design hybrid retrieval architectures that balance precision (graph traversal) and semantic recall (vector similarity), with reranking layers — not just off-the-shelf RAG.
  • Hands-on experience reducing LLM inference latency in production (e.g., redesigning pipelines from multi-minute to sub-30-second response times).
  • Experience building telemetry and evaluation frameworks that measure AI system quality, coding behaviour, and model performance in production.
  • Comfort reading AI research papers and quickly assessing whether new techniques are production-ready or hype.
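
The reranking layer mentioned above can be sketched in miniature. This stand-in scores passages by query-term overlap; a production system would typically use a cross-encoder in this slot. All data below is illustrative:

```python
# A toy lexical reranker: re-score retrieved candidate passages by the
# fraction of query terms each one contains, then keep the top_k.

def rerank(query, candidates, top_k=2):
    """Order candidate passages by query-term overlap, best first."""
    terms = set(query.lower().split())

    def score(passage):
        words = set(passage.lower().split())
        return len(terms & words) / len(terms)

    return sorted(candidates, key=score, reverse=True)[:top_k]

candidates = [
    "vector similarity finds semantically related passages",
    "graph traversal gives precise entity-level answers",
    "reranking orders retrieved passages by query relevance",
]
top = rerank("rerank retrieved passages by relevance", candidates)
print(top[0])
```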

 

The Right Mindset

  • You are a builder first — you are most energised when you are designing systems and writing code, not in meetings.
  • You actively follow the AI market: you know what dropped on Hugging Face last week, you have opinions on LangGraph vs. AutoGen, and you can explain why.
  • You make decisions with incomplete information, document your reasoning, and course-correct quickly when new data arrives.
  • You raise the bar for the engineers around you without gatekeeping — your instinct is to enable, not to bottleneck.

 

PREFERRED QUALIFICATIONS

  • Master's or PhD in Computer Science, Operations Research, Statistics, or a related quantitative field.
  • AWS Certified Machine Learning Engineer or GCP Professional ML Engineer certification.
  • Completion of an AI Strategy or AI Governance programme.
  • Prior experience at a data science / ML services firm, enterprise SaaS, or fintech — where you shipped AI to external customers, not just internal tools.
  • Hands-on experience with Snowflake Cortex or comparable enterprise LLM deployment platforms.
  • Open-source contributions to AI/ML tooling, published technical writing, or conference presentations.

With regards,

Vamsi Sattaru
US IT Technical Recruiter

