C2C role: AI Engineer with Java || Remote


Yogesh Singh

Apr 16, 2026, 4:25:57 PM
to Yogesh Singh

Hi,

Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020

USC/GC on W2 for the role.

Position: Java + AI Engineer, 12+ years

Location: Remote

Duration: Long term

Rate: $60/hr. C2C max (H1/H4/EAD)

Client: TCS

Note: 5.5% VMS fee included; payment terms are Net 60.

Role: AI Engineer with Java (LLM/RAG)

Description:

• Design, develop, and deploy AI/ML and Generative AI solutions, including LLM-based applications, RAG pipelines, agents, and predictive models
• Translate business use cases into production-ready AI solutions with measurable outcomes
• Deep knowledge of Java Streams
• Implement LLM orchestration, prompt engineering, vector search, embeddings, and model fine-tuning
• Develop scalable APIs and microservices to integrate AI capabilities into enterprise applications
• Collaborate with Data Engineers, Data Scientists, Product Owners, and Cloud teams across onshore/offshore models
• Implement MLOps/LLMOps practices including CI/CD, monitoring, versioning, model governance, and observability
• Ensure Responsible AI, security, compliance, and data privacy by design
• Support production deployments, performance tuning, and continuous improvement of AI systems
"• 4–8+ years of experience in software engineering, ML engineering, or AI solution development
• Strong proficiency in Java and experience
• Hands on experience with Generative AI / LLMs, including RAG, embeddings, prompt engineering, and agents
• Solid understanding of data engineering concepts, SQL/NoSQL, and feature pipelines
• Experience deploying AI solutions on cloud platforms (GCP preferred; AWS/ Azure acceptable)
• Familiarity with Docker, Kubernetes, and CI/CD pipelines
• Strong problem solving, communication, and stakeholder collaboration skills"
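For illustration only (not part of the client's requirements), below is a minimal retrieval-augmented-generation sketch in Python; the role itself is Java-centric, but the retrieve-then-prompt flow is the same. The corpus, the bag-of-words scoring, and the prompt template are hypothetical placeholders, and the actual LLM call is omitted.

# Minimal RAG sketch: retrieve the most relevant documents for a question, then
# build a grounded prompt for an LLM. Everything here (corpus, scoring, template)
# is a toy placeholder; a production pipeline would use a vector store and real embeddings.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use a trained embedding model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


CORPUS = [
    "Invoices are generated on the first business day of each month.",
    "Refunds are processed within five business days of approval.",
    "Java services expose REST endpoints secured with OAuth2.",
]


def build_prompt(question: str, top_k: int = 2) -> str:
    q_vec = embed(question)
    ranked = sorted(CORPUS, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
    context = "\n".join(f"- {doc}" for doc in ranked[:top_k])
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"


if __name__ == "__main__":
    # The resulting prompt would be sent to an LLM; that call is omitted here.
    print(build_prompt("How long do refunds take?"))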

Skills: AI and Automation


Additional Notes:

  • LinkedIn profile must have been created before 2020


Thanks

Yogesh Pratap Singh


The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any computer.

Yogesh Singh

Apr 17, 2026, 3:11:34 PM
to Yogesh Singh

Hi,

Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020

Client: TCS

Note: 5.5% VMS fee included; payment terms are Net 60.


Position: Azure Automation & Observability Engineer (Python, Datadog), 10+ years

Location: Plano, Texas

Duration: Long term

Bill Rate: $65/hr. C2C

GC on W2 only for this role.

Must-have skills: Azure Functions, Datadog, CI/CD


Role Description:
Design, develop, and maintain automation solutions using Python and scripting languages such as Shell and PowerShell to streamline operations and reduce manual effort. Implement and manage monitoring, alerting, and observability using Datadog, including dashboards, log analytics, and performance metrics. Build and support automation workflows on Microsoft Azure, leveraging services like Azure Functions, Logic Apps, and CI/CD pipelines. Integrate automation with cloud, application, and infrastructure components to enable proactive issue detection and self-healing. Collaborate with DevOps, SRE, and application teams to improve reliability, performance, and operational efficiency. Support production environments through automated health checks, incident response, and root cause analysis. Ensure automation solutions follow security, compliance, and enterprise governance standards.
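Purely as an illustration of the automation-plus-observability pattern described above (not a client deliverable), here is a sketch of a timer-triggered Azure Function in Python that runs a health check and reports a custom metric to Datadog. It assumes the v1 Python programming model (with a separate function.json binding) and the datadog client library; the metric name, tags, health-check URL, and environment variable names are hypothetical.

# Illustrative only: a timer-triggered Azure Function that runs a health check
# and pushes a custom gauge metric to Datadog (names and URLs are placeholders).
import logging
import os
import time
import urllib.request

import azure.functions as func
from datadog import initialize, api


def main(mytimer: func.TimerRequest) -> None:
    # Datadog credentials are expected in app settings (variable names are illustrative).
    initialize(api_key=os.environ["DD_API_KEY"], app_key=os.environ["DD_APP_KEY"])

    url = os.environ.get("HEALTHCHECK_URL", "https://example.internal/health")
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            healthy = 1 if resp.status == 200 else 0
    except Exception as exc:  # treat any network failure as unhealthy
        logging.warning("Health check failed: %s", exc)
        healthy = 0

    # Gauge-style custom metric; a Datadog monitor can alert when it drops to 0.
    api.Metric.send(
        metric="automation.healthcheck.status",
        points=[(time.time(), healthy)],
        tags=["service:example-app", "env:prod"],
    )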


Preferred Qualifications:

  • Familiarity with relational databases (SQL Server, Oracle), including schema design, stored procedures, and query optimization in the Odessa data model.
  • Experience with API integrations (REST, SOAP, JSON/XML-based data exchange), including use of tools like Postman for debugging.
  • Preferred exposure to Odessa Studio or Odessa Designer for workflow and business rule configuration.
  • Understanding of captive finance business processes, such as OEM dealer programs, subsidy rates, floorplan, or commercial lease structures.
  • Basic scripting skills (PowerShell, Python, or similar) for automation of routine support tasks.

Education Requirements:

  • Bachelor’s in Computer Science or other related field.
  • 5 years of experience.


Thanks, and Regards
Yogesh Pratap Singh
Account Manager
Precision Technologies Corp

T 732-348-5000 x 837 | M 732-228-4897
1100 Cornwall Road, #134, Monmouth Junction, NJ
www.PrecisionTechCorp.com


Yogesh Singh

Apr 20, 2026, 1:52:53 PM (12 days ago)
to Yogesh Singh

Hi,


Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020


Role: Sr. AWS Cloud Engineer (AWS certification required), 12+ years

Location: San Francisco, CA (onsite); local profiles only

Rate: $65/hr. C2C (H1B/H4/EAD)

USC/GC on W2 only (no exceptions)

Must have: AWS certifications

We are seeking a driven Sr AWS Cloud Engineer with a strong foundation in cloud platform engineering, security, and operations. This role is ideal for a self‑starter and go‑getter who takes pride in designing systems that are secure by design, verifiable by default, and continuously ready for internal and external scrutiny.


What You’ll Do

  • Design and operate AWS cloud environments that are secure, scalable, and operationally mature
  • Own reliability, security, and operational health of cloud environments
  • Lead initiatives independently from concept through delivery
  • Partner with application teams to enable secure self‑service cloud capabilities
  • Proactively identify risks, gaps, and process weaknesses and drive improvements
  • Develop automation and tooling using modern scripting and programming languages


What You Bring (Required)

  • Strong hands-on experience with AWS cloud services
  • Deep understanding of cloud security and compliance fundamentals, including:
    • Organizational‑level identity and access governance
    • Identity and access management (IAM) and least-privilege principles
    • Logging, monitoring, and traceability
    • Encryption and data protection
    • Configuration standardization and drift prevention
  • Experience with preventive and detective guardrails to ensure systems remain within defined security and operational boundaries
  • Proven ability to lead initiatives independently
  • Self‑starter mentality — proactive, decisive, and execution‑focused
  • Hands‑on experience with Infrastructure as Code (CloudFormation or Terraform)
  • Automation & scripting experience with Python, JavaScript, or Shell
  • Strong hands-on experience with CI/CD pipelines that enforce security by default, including policy checks and approval gates (a minimal sketch of such a policy check follows this list)
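To make the policy-check idea concrete, here is a small, purely hypothetical pre-deployment check in Python that a pipeline stage could run against a CloudFormation template; the template path and the specific rule (requiring S3 bucket encryption) are illustrative assumptions, not the client's actual controls.

# Illustrative pre-deployment policy check (hypothetical pipeline step):
# fail the build if a CloudFormation template defines an S3 bucket without encryption.
import json
import sys


def unencrypted_buckets(template: dict) -> list[str]:
    """Return logical IDs of AWS::S3::Bucket resources lacking BucketEncryption."""
    offenders = []
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") == "AWS::S3::Bucket":
            props = resource.get("Properties", {})
            if "BucketEncryption" not in props:
                offenders.append(logical_id)
    return offenders


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "template.json"
    with open(path) as fh:
        offenders = unencrypted_buckets(json.load(fh))
    if offenders:
        print(f"Policy violation: unencrypted S3 buckets: {', '.join(offenders)}")
        sys.exit(1)  # non-zero exit blocks the deployment stage
    print("Policy check passed.")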


Nice to Have

  • Experience with AWS networking (VPCs, subnets, routing, DNS)
  • AWS Multi-Account Control Tower / Account Factory experience
  • Familiarity with compliance frameworks (SOC 2, ISO 27001, PCI, HIPAA)
  • AWS certifications (Security, Solutions Architect, SysOps)
  • Containers (ECS/EKS) or serverless architectures (Lambda)


Thanks, and Regards
Yogesh Pratap Singh
Account Manager
Precision Technologies Corp


Yogesh Singh

Apr 20, 2026, 6:18:17 PM (12 days ago)
to Yogesh Singh

Hi,


Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020


Role: Sr. AWS Cloud Engineer (AWS certification required), 12+ years

Location: San Francisco, CA (onsite)

Rate: $65/hr. C2C (H1B/H4/EAD)

Yogesh Singh

Apr 21, 2026, 1:01:56 PM (11 days ago)
to Yogesh Singh

Hi,


Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020


Job Title: Databricks Data Engineer with DevOps Skills
Location: Los Angeles, CA (Hybrid); must be within 100 miles
Hire Type: C2C
Rate: $60/hr. C2C (H1B/H4/EAD)

USC/GC on W2 only (no exceptions)


Job Summary

We are looking for an experienced Databricks Data Engineer with strong DevOps expertise to join our data engineering team. The ideal candidate will design, build, and optimize large-scale data pipelines on the Databricks Lakehouse Platform on AWS, while driving automated CI/CD and deployment practices. This role requires deep expertise in PySpark, SQL, AWS cloud services, and modern DevOps tooling. You will collaborate closely with cross-functional teams to deliver scalable, secure, and high-performance data solutions.
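As a purely illustrative sketch of the kind of pipeline this summary describes (the bucket, table, and column names are hypothetical, and it assumes a Databricks cluster where a SparkSession and Delta Lake are already available):

# Minimal sketch: ingest raw JSON from S3, persist a partitioned Delta table,
# then run routine OPTIMIZE/ZORDER maintenance. All paths and names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Bronze: ingest raw events from S3 (path is hypothetical).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Silver: light cleanup, then persist as a partitioned Delta table.
clean = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.silver_events"))

# Routine maintenance: compact small files and co-locate rows for faster point lookups.
spark.sql("OPTIMIZE analytics.silver_events ZORDER BY (event_id)")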


Must Demonstrate – Critical Skills & Architectural Competencies

  • Designing and implementing Databricks-based Lakehouse architectures on AWS
  • Clear separation of compute vs. serving layers
  • Ability to design low-latency data/API access strategies (beyond Spark-only patterns)
  • Strong understanding of caching strategies for performance and cost optimization
  • Data partitioning, storage optimization, and file layout strategy
  • Ability to handle multi-terabyte structured or time-series datasets
  • Skill in requirement probing and identifying architectural priorities
  • A player-coach mindset — hands-on engineering combined with technical leadership

Key Responsibilities

1. Data Pipeline Development

  • Design, build, and maintain scalable ETL/ELT pipelines using Databricks on AWS
  • Develop high-performance data processing workflows using PySpark/Spark and SQL
  • Integrate data from Amazon S3, relational databases, and semi/unstructured sources
  • Implement Delta Lake best practices: schema evolution, ACID transactions, OPTIMIZE, ZORDER, partitioning, and file-size tuning
  • Ensure architectures support high-volume, multi-terabyte workloads

2. DevOps & CI/CD

  • Implement CI/CD pipelines for Databricks using Git, GitLab, GitHub Actions, or AWS-native tools
  • Build and manage automated deployments using Databricks Asset Bundles
  • Manage version control for notebooks, workflows, libraries, and environment configurations
  • Automate cluster policies, job creation, environment provisioning, and configuration management
  • Support infrastructure-as-code via Terraform (preferred) or CloudFormation

3. Collaboration & Business Support

  • Work with data analysts and BI teams to prepare curated datasets for reporting and analytics
  • Collaborate with product owners, engineering teams, and business partners to translate requirements into scalable implementations
  • Document data flows, technical architecture, and DevOps/deployment workflows

4. Performance & Optimization

  • Tune Spark clusters, workflows, and queries for cost efficiency and compute performance
  • Monitor pipelines, troubleshoot failures, and maintain high reliability
  • Implement logging, monitoring, and observability across workflows and jobs
  • Apply caching strategies and workload optimization for low-latency consumption patterns

5. Governance & Security

  • Implement and maintain data governance usingĀ Unity Catalog
  • Enforce access controls, security policies, and data compliance requirements
  • Ensure lineage, quality checks, and auditability across all data flows

Technical Skills

Databricks (Hands-On Experience Required):

  • Delta Lake, Unity Catalog, Lakehouse Architecture
  • Delta Live Tables (DLT), Databricks Runtime, Table Triggers, Databricks Workflows

Programming & Query Languages:

  • PySpark, Apache Spark, Advanced SQL

AWS Cloud Services:

  • S3, IAM, Glue/Glue Catalog, Lambda, Secrets Manager
  • Kinesis (optional but beneficial)

DevOps & Automation:

  • Git/GitLab, CI/CD Pipelines, Databricks Asset Bundles
  • Terraform (preferred), CloudFormation

Other:

  • Relational databases and data warehouse concepts

Preferred Experience

  • Knowledge of streaming technologies such as Structured Streaming / Spark Streaming
  • Experience building real-time or near real-time data pipelines
  • Exposure to advanced Databricks runtime configurations and performance tuning

Certifications (Optional but Preferred)

  • Databricks Certified Data Engineer Associate or Professional
  • AWS Certified Data Engineer or AWS Solutions Architect


Thanks, and Regards
Yogesh Pratap Singh
Account Manager
Precision Technologies Corp

T 732-348-5000 x 837 | M 732-228-4897


1100 Cornwall Road, #134, Monmouth Junction, NJ
www.PrecisionTechCorp.com


Yogesh Singh

Apr 22, 2026, 2:59:34 PM (10 days ago)
to Yogesh Singh

Hi,


Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020


Need a driver's license (DL) from CA only.

Job Title: AI/ML Lead/Architect — Databricks & AWS, 10+ years

Location: Los Angeles, CA (Hybrid); must be within 100 miles
Hire Type: C2C
Rate: $70/hr. C2C (H1B/H4/EAD)

USC/GC on W2 only (no exceptions)



Role Overview

Seeking an experienced AI/ML Architect/Lead with deep expertise in Databricks on AWS to design and implement scalable ML platforms, Lakehouse systems, and large-scale data pipelines across multi-terabyte environments.


This role requires working with large, multi-terabyte datasets, advanced analytics, and end‑to‑end ML lifecycle management using Databricks, Python, PySpark, and AWS-native services.
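Purely as an illustration of the lifecycle tooling named above (not a client artifact), a minimal MLflow tracking sketch in Python; the experiment path, model choice, synthetic data, and metric are hypothetical placeholders.

# Illustrative only: logging a small scikit-learn model run with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1_000, n_features=10, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/demo-forecasting")  # hypothetical experiment path

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    mae = mean_absolute_error(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(model, artifact_path="model")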


Required Skills

  • Databricks on AWS (Delta Lake, Unity Catalog, MLflow, Workflows)
  • PySpark / Spark SQL / Databricks Notebooks
  • Python (pandas, numpy, scikit-learn)
  • ETL/ELT pipelines & Bronze/Silver/Gold Lakehouse architecture
  • ML model development, training & production deployment
  • Multi-terabyte time-series data processing & optimization
  • AWS — S3, IAM, Glue Catalog, Networking
  • Spark tuning, caching, partitioning & shuffle minimization
  • MLOps pipeline management

Preferred Skills

  • CI/CD — GitHub Actions / GitLab CI
  • Deep Learning — TensorFlow / PyTorch
  • Production-grade MLOps & deployment pipelines
  • AWS-native compute & API integration

Requirements

  • Bachelor's/Master's in CS, Data Science, or related field
  • 10+ years in Data Engineering / ML Architecture
  • Strong communication & stakeholder management


Thanks, and Regards
Yogesh Pratap Singh
Account Manager
Precision Technologies Corp

T 732-348-5000 x 837 | M 732-228-4897
1100 Cornwall Road, #134, Monmouth Junction, NJ
www.PrecisionTechCorp.com


Yogesh Singh

Apr 23, 2026, 5:21:56 PM (9 days ago)
to Yogesh Singh

Hi,


Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020


Role: AWS Senior Site Reliability Engineer (SRE) - Release & Observability Focus

Location: Scottsdale, AZ (100% onsite); AZ driver's license required

Rate: $60/hr. C2C (H1B/H4/EAD)

USC/GC on W2 only (no exceptions)

Senior Site Reliability Engineer (SRE) - Release & Observability Focus

Score out of 10

Key Responsibilities

  • Solid hands-on experience in SRE or Release Engineering roles
  • Strong experience deploying and operating containerized applications on Kubernetes across on-prem and AWS Cloud
  • Strong Linux and networking fundamentals
  • Own release automation, deployment strategies, rollback mechanisms, and release validation
  • Proven experience supporting REST API services in production environments
  • Drive continuous improvements in release safety, reliability, monitoring, alerting, and operational readiness
  • Experience with monitoring and observability tools such as Splunk and Prometheus/Grafana
  • Lead troubleshooting of complex production incidents and service degradations
  • Participate in on-call rotations and lead incident response and post-incident reviews
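As a purely hypothetical illustration of the release-validation and rollback responsibilities above, here is a small Python sketch using the official Kubernetes client to gate a release on rollout health; the deployment name, namespace, and timeout are placeholder assumptions.

# Illustrative release-validation helper: poll a Kubernetes Deployment until the new
# revision is fully rolled out, so a pipeline can gate promotion or trigger a rollback.
import time

from kubernetes import client, config


def wait_for_rollout(name: str, namespace: str, timeout_s: int = 300) -> bool:
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
    apps = client.AppsV1Api()
    deadline = time.time() + timeout_s

    while time.time() < deadline:
        dep = apps.read_namespaced_deployment(name=name, namespace=namespace)
        desired = dep.spec.replicas or 0
        status = dep.status
        if (
            (status.updated_replicas or 0) == desired
            and (status.available_replicas or 0) == desired
            and (status.unavailable_replicas or 0) == 0
        ):
            return True  # rollout complete, safe to promote the release
        time.sleep(5)
    return False  # caller can trigger an automated rollback (e.g., kubectl rollout undo)


if __name__ == "__main__":
    ok = wait_for_rollout("example-api", "prod")
    print("rollout healthy" if ok else "rollout failed: consider rollback")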


Nice To Have

  • Python scripting for automation and platform tooling
  • Knowledge or experience with Honeycomb for observability


Thanks, and Regards
Yogesh Pratap Singh
Account Manager
Precision Technologies Corp

T 732-348-5000 x 837 | M 732-228-4897
1100 Cornwall Road, #134, Monmouth Junction, NJ
www.PrecisionTechCorp.com


Yogesh Singh

Apr 24, 2026, 10:12:55 AM (8 days ago)
to Yogesh Singh

Hi,


Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020


Role: Azure Databricks Data Engineer with DevOps, 12+ years

Location: Remote

USC/GC on W2 only (no exceptions)

Rate: $60/hr. C2C (H1/H4/EAD only)

Remote in CA/NY

Job Summary

We are looking for an experienced Databricks Data Engineer with strong DevOps expertise to join our data engineering team. The ideal candidate will design, build, and optimize large-scale pipelines on the Databricks Lakehouse Platform on Azure, while driving automated CI/CD and deployment practices. This role requires strong skills in PySpark, SQL, Azure cloud services, and modern DevOps tooling. You will collaborate closely with cross-functional teams to deliver scalable, secure, and high-performance data solutions.
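Purely as an illustration of the Delta Lake practices this summary refers to (ACID upserts with schema control), a minimal PySpark MERGE sketch; the ADLS path, table name, and key column are hypothetical, and it assumes a Databricks cluster where a SparkSession and Delta Lake are already available.

# Minimal upsert (MERGE) sketch; paths, table, and column names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # supplied automatically in Databricks

# Incremental batch of changed records (ADLS path is illustrative).
updates = spark.read.format("parquet").load(
    "abfss://raw@exampleaccount.dfs.core.windows.net/customers/"
)

target = DeltaTable.forName(spark, "silver.customers")

# ACID upsert: update existing keys, insert new ones, in a single transaction.
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())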


Must Demonstrate (Critical Skills & Architectural Competencies)

  • Designing and implementing Databricks-based Lakehouse architectures on Azure
  • Clear separation of compute vs. serving layers
  • Ability to design low-latency data/API access strategies (beyond Spark-only patterns)
  • Strong understanding of caching strategies for performance and cost optimization
  • Data partitioning, storage optimization, and file layout strategy
  • Ability to handle multi-terabyte structured or time-series datasets
  • Skill in requirement probing, identifying what matters architecturally
  • A player-coach mindset: hands-on engineering + technical leadership


Key Responsibilities

1. Data Pipeline Development

  • Design, build, and maintain scalable ETL/ELT pipelines using Databricks on Azure.
  • Develop high-performance data processing workflows using PySpark/Spark and SQL.
  • Integrate data from Amazon S3, relational databases, and semi/non‑structured sources.
  • Implement Delta Lake best practices including schema evolution, ACID, OPTIMIZE, ZORDER, partitioning, and file-size tuning.
  • Ensure architectures support high-volume, multi-terabyte workloads.

2. DevOps & CI/CD

  • Implement CI/CD pipelines for Databricks using Git, GitLab, GitHub Actions, or Azure-native tools.
  • Build and manage automated deployments using Databricks Asset Bundles.
  • Manage version control for notebooks, workflows, libraries, and environment configuration.
  • Automate cluster policies, job creation, environment provisioning, and configuration management.
  • Support infrastructure-as-code via Terraform (preferred) or CloudFormation.

3. Collaboration & Business Support

  • Work with data analysts and BI teams to prepare curated datasets for reporting and analytics.
  • Collaborate closely with product owners, engineering teams, and business partners to translate requirements into scalable implementations.
  • Document data flows, technical architecture, and DevOps/deployment workflows.

4. Performance & Optimization

  • Tune Spark clusters, workflows, and queries for cost efficiency and compute performance.
  • Monitor pipelines, troubleshoot failures, and maintain high reliability.
  • Implement logging, monitoring, and observability across workflows and jobs.
  • Apply caching strategies and workload optimization techniques to support low-latency consumption patterns.

5. Governance & Security

  • Implement and maintain data governance using Unity Catalog.
  • Enforce access controls, security policies, and data compliance requirements.
  • Ensure lineage, quality checks, and auditability across data flows.

Technical Skills

  • Strong hands-on experience with Databricks, including:
    • Delta Lake
    • Unity Catalog
    • Lakehouse Architecture
    • Delta Live Tables (DLT)
    • Databricks Runtime
    • Table Triggers
    • Databricks Workflows
  • Proficiency in PySpark, Spark, and advanced SQL.
  • Expertise with AWS cloud services, including:
    • S3
    • IAM
    • Glue / Glue Catalog
    • Lambda
    • Kinesis (optional but beneficial)
    • Secrets Manager
  • Strong understanding of DevOps tools:
    • Git / GitLab
    • CI/CD pipelines
    • Databricks Asset Bundles
  • Familiarity with Terraform is a plus.
  • Experience with relational databases and data warehouse concepts.

Preferred Experience

  • Knowledge of streaming technologies like Structured Streaming/Spark Streaming.
  • Experience building real-time or near real-time pipelines.
  • Exposure to advanced Databricks runtime configurations and performance tuning.

Certifications (Optional)

  • Databricks Certified Data Engineer Associate / Professional

  • AWS Certified Data Engineer or AWS Solutions Architect certification


Thanks, and Regards
Yogesh Pratap Singh
Account Manager
Precision Technologies Corp

T 732-348-5000 x 837 | M 732-228-4897
1100 Cornwall Road, #134, Monmouth Junction, NJ
www.PrecisionTechCorp.com


Yogesh Singh

Apr 27, 2026, 11:48:37 AM (5 days ago)
to Yogesh Singh

Hi,


Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020

📋 Job Description — AWS Python Developer (AWS Certified), 10+ years

Role: AWS Python Developer
Location: Plano, TX (Onsite); must be local (no relocation)
Employment Type: Contract
Rate: $60/hr. C2C
Experience: 10+ years
Visa: H1/H4/EAD; USC/GC on W2 only


šŸ” Role Summary

We are looking for a skilled AWS certified Python Developer with strong hands-on experience in building and deploying serverless business logic using Python and AWS Lambda for a client based in Plano, TX.


Must Have — Non-Negotiable

✅ Strong Python — production-grade application/backend development (not scripting or data science only)
✅ AWS Lambda — hands-on writing & deploying Lambda functions in production
✅ Business Logic Development — built APIs, workflows, or processing engines in Python
✅ AWS Core Services — hands-on with S3, API Gateway, DynamoDB, SQS/SNS (minimum 3–4 services)
✅ Serverless & Event-Driven Architecture — understands serverless design patterns
✅ REST API Development — designed and consumed APIs in production environments
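To illustrate the Lambda-plus-business-logic pattern in the must-haves above, here is a minimal, hypothetical handler sketch; the DynamoDB table name, environment variable, payload fields, and route are assumptions for illustration, not the client's actual API.

# Illustrative only: a minimal AWS Lambda handler behind API Gateway that persists
# an order record to DynamoDB. Table name, payload fields, and route are hypothetical.
import json
import os
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("ORDERS_TABLE", "orders"))


def lambda_handler(event, context):
    """Handle POST /orders events proxied by API Gateway."""
    body = json.loads(event.get("body") or "{}")

    item = {
        "order_id": str(uuid.uuid4()),
        "customer_id": body.get("customer_id", "unknown"),
        "amount": str(body.get("amount", 0)),  # store numbers as strings/Decimal for DynamoDB
    }
    table.put_item(Item=item)

    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"order_id": item["order_id"]}),
    }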


⭐ Good to Have

🔹 FastAPI, Flask, or Django — Python API frameworks
🔹 Step Functions — workflow orchestration
🔹 Terraform / CloudFormation — IaC experience
🔹 Docker / Kubernetes / ECS — containerization
🔹 CI/CD — CodePipeline, GitHub Actions, or Jenkins
🔹 AWS Certified Developer or Solutions Architect certification


Thanks, and Regards
Yogesh Pratap Singh
Account Manager
Precision Technologies Corp

T 732-348-5000 x 837 | M 732-228-4897
1100 Cornwall Road, #134, Monmouth Junction, NJ
www.PrecisionTechCorp.com


Yogesh Singh

Apr 27, 2026, 3:37:16 PM (5 days ago)
to Yogesh Singh

Hi,


Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020


📋 Senior Business Process Consultant | ServiceNow

Location: Onsite — Bridgewater, NJ / Charlotte, NC / Bellevue, WA / Plano, TX / Santa Clara, CA


Visa: H1/H4/EAD
Rate: $70/hr. C2C
USC/GC on W2 only


šŸ… Mandatory Certifications

šŸ’” Missing one cert is acceptable if candidate is willing to complete post joining.

āœ… CSA āœ… CIS āœ… Industry-Specific Cert (e.g., HR Pro) āœ… ITIL Foundation


šŸ” Role Summary

Senior client-facing specialist leading process transformation initiatives on ServiceNow. Trusted advisor to business & IT leaders, owning end-to-end process design across one or more domains while ensuring ITIL v4 best practices, scalability, and upgrade-safe solutions.


🔑 Must Have Skills

✅ 5–7 years in process consulting, business analysis, or service management transformation
✅ Deep expertise in ServiceNow OOTB process models — ITSM, CSM, HRSD, CMDB, SecOps, or GRC
✅ Strong ITIL v4 knowledge (Managing Professional / Strategist preferred)
✅ Experience leading As-Is / To-Be process design, workshops & stakeholder management
✅ Proficiency in process modeling tools — BPMN, Visio, Miro, Lucidchart
✅ Experience working in Agile/Scrum environments with Product Owners
✅ Strong UAT, governance, KPI, SLA/OLA definition experience
✅ Excellent facilitation & communication skills for executive audiences


⭐ Good to Have

🔹 ServiceNow Micro-Certs — IntegrationHub, Flow Designer, Performance Analytics
🔹 Associate Business Process Owner certification
🔹 Cloud/EA Certs — AWS/Azure, TOGAF, IT4IT


Thanks, and Regards
Yogesh Pratap Singh
Account Manager
Precision Technologies Corp

T 732-348-5000 x 837 | M 732-228-4897
1100 Cornwall Road, #134, Monmouth Junction, NJ
www.PrecisionTechCorp.com


Yogesh Singh

Apr 28, 2026, 2:03:42 PM (4 days ago)
to Yogesh Singh

Hi,


Please share profiles for the role below, including visa status, location, and LinkedIn ID.

LinkedIn ID should be before 2020


Job Title: AI/ML Architect with Databricks, Azure (No Sr. profiles)

Location: Los Angeles, CA or New York, NY (Hybrid or Remote)

Hire type: FTE / C2C

Rate: $70/hr. C2C

Salary: $150K/annum with benefits

USC/GC on W2 only

Role Overview

We are seeking an experienced AI/ML Architect with deep hands-on expertise in Databricks on AWS to lead the design and implementation of scalable, high‑performance data and machine learning platforms. The ideal candidate combines architectural thinking with strong engineering execution, demonstrating the ability to build modern lakehouse systems, optimize large‑scale pipelines, and drive analytical and ML capabilities across the organization.

This role requires working with large, multi-terabyte datasets, advanced analytics, and end‑to‑end ML lifecycle management using Databricks, Python, PySpark, and AWS-native services.


Must Demonstrate (Critical Competencies)

  • Designing Databricks‑based lakehouse architectures on Azure (Delta Lake + S3 + Unity Catalog)
  • Clear separation of compute vs. serving layers in distributed architectures
  • Low-latency API strategy where Spark is insufficient (e.g., leveraging optimized services or caching)
  • Caching strategies to accelerate reads and reduce compute cost
  • Data partitioning, file size tuning, and optimization strategies for large-scale pipelines
  • Experience handling multi-terabyte structured time‑series workloads
  • Ability to distill architectural significance from ambiguous business requirements
  • Strong curiosity, questioning, and requirement‑probing mindset
  • Player‑coach approach: hands-on technical depth + ability to guide design


Key Responsibilities

AI/ML & Advanced Analytics

  • Develop, train, and optimize ML models using Python, PySpark, MLflow, and Databricks Machine Learning
  • Conduct exploratory data analysis (EDA) to identify patterns, trends, and insights in large datasets
  • Deploy ML models into production using MLflow, Databricks Workflows, or other MLOps pipelines
  • Build analytics solutions such as forecasting, anomaly detection, segmentation, or recommendation systems (see the sketch after this list)
  • Design ML architectures aligned with Databricks Lakehouse on Azure
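For illustration only, a tiny anomaly-detection sketch of the kind described just above, using scikit-learn's IsolationForest; the metric columns, injected outlier, and contamination rate are hypothetical.

# Illustrative anomaly detection over daily operational metrics (toy data).
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "daily_volume": rng.normal(1_000, 50, 365),
    "error_rate": rng.normal(0.01, 0.002, 365),
})
df.iloc[100] = [2_500, 0.08]  # inject an obvious outlier

model = IsolationForest(contamination=0.01, random_state=7)
df["anomaly"] = model.fit_predict(df[["daily_volume", "error_rate"]])  # -1 marks anomalies

print(df[df["anomaly"] == -1])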

Data Engineering & Lakehouse Architecture

Architect and build scalable ETL/ELT pipelines using PySpark, SQL, and Databricks Workflows.

Implement Delta Lake best practices, including OPTIMIZE, ZORDER, partitioning, and schema evolution.

Design lakehouse layers (Bronze/Silver/Gold) with strong separation of compute and serving layers.

Optimize cluster performance and jobs using Spark tuning, caching, and shuffle minimization.

Work with multi-terabyte, time-series, high‑velocity data in a distributed environment.

Ensure robust data availability for downstream ML and analytics workloads.

AWS Cloud Integration

  • Architect end-to-end data and ML solutions using Azure services, including:
    • S3 for storage
    • IAM for identity & access
    • Glue Catalog for metadata management
    • Networking for secure, high‑throughput data movement
  • Integrate Databricks with AWS-native compute, API layers, and low-latency endpoints

Business Collaboration & Leadership

Translate business problems into scalable analytical or ML architectures.

Communicate complex statistical and architectural concepts to non‑technical stakeholders.

Collaborate with product, engineering, and business leaders to drive data-informed initiatives.

Provide design leadership while remaining hands-on in execution.


Skills & Qualifications

Required

  • Bachelor’s or Master’s in Computer Science, Data Science, Engineering, Statistics, or a related field
  • 10+ years of experience in data engineering, ML engineering, or AI/ML architecture roles
  • Deep expertise in Databricks on AWS, including:
    • PySpark / Spark SQL
    • Databricks Notebooks
    • Delta Lake
    • Unity Catalog
    • MLflow
    • Databricks Jobs & Workflows
  • Strong programming ability in Python (pandas, numpy, scikit-learn)
  • Demonstrated experience with large-scale, multi-terabyte data processing
  • Strong understanding of ML algorithms, distributed systems, and data optimization

Preferred

Experience with MLOps and production deployment pipelines.

Strong grasp of AWS-native data and compute services.

Understanding of CI/CD using GitHub Actions, GitLab CI, or similar.

Familiarity with deep learning frameworks (TensorFlow, PyTorch).


Key Competencies

Strong analytical and problem-solving skills.

Ability to work in fast-paced, highly collaborative environments.

Excellent communication and presentation abilities.

Self-driven with exceptional attention to architectural detail.


Thanks, and Regards
Yogesh Pratap Singh
Account Manager
Precision Technologies Corp

T 732-348-5000 x 837 | M 732-228-4897
1100 Cornwall Road, #134, Monmouth Junction, NJ
www.PrecisionTechCorp.com


Yogesh Singh

Apr 30, 2026, 4:48:59 PM (2 days ago)
to Yogesh Singh

Hi,


Kindly read below before sharing any resumes.


Please share profiles for the role below, including visa status, location, and LinkedIn ID.


Role: Business Analyst - Guidewire BillingCenter
Location: Hartford, CT (Hybrid)
Rate: $60/hr. max (H1/H4/EAD)

USC/GC on W2 for this role

LinkedIn ID should be before 2020



Job Summary

Senior Guidewire BillingCenter Business Analyst responsible for end-to-end functional analysis, solution design, and implementation. Involves working with stakeholders, optimizing billing processes, and delivering scalable solutions within P&C insurance ecosystems.


Required Skills & Experience (Priority)

  • 7+ years as a Business Analyst with strong P&C insurance domain expertise
  • 4+ years hands-on experience with Guidewire BillingCenter (configuration, workflows, rules, batch, integrations)
  • Strong understanding of billing lifecycle: payments, financial transactions, adjustments, suspense, direct vs agency billing
  • Experience with user stories, business rules, data mapping, and integrations (API, middleware, payment gateways, ERP)
  • Proficient in Agile/Scrum (Jira or Azure DevOps)
  • Strong communication, analytical, and stakeholder management skills


Key Responsibilities

  • Gather and document requirements (BRDs, FRDs, user stories)
  • Define billing workflows, data mapping, and integrations
  • Provide BillingCenter functional expertise and guide configuration vs customization
  • Collaborate with business, architects, dev, and QA teams across SDLC
  • Support SIT/UAT, defect triage, and process improvements
  • Lead stakeholder discussions and mentor junior analysts

Thanks

Yogesh Singh

