Hi,
Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.
USC/GC on W2 for this role.
Position: Java + AI Engineer, 12+ years
Location: Remote
Duration: Long term
Rate: $60/hr C2C max (H1/H4/EAD)
Client: TCS
Note: 5.5% VMS fee included; payment terms Net 60.
Role: AI Engineer with Java LLM/RAG
Description:
• Design, develop, and deploy AI/ML and Generative AI solutions, including LLM-based applications, RAG pipelines, agents, and predictive models
• Translate business use cases into production-ready AI solutions with measurable outcomes
• Deep knowledge of Java Streams
• Implement LLM orchestration, prompt engineering, vector search, embeddings, and model fine-tuning
• Develop scalable APIs and microservices to integrate AI capabilities into enterprise applications
• Collaborate with Data Engineers, Data Scientists, Product Owners, and Cloud teams across onshore/offshore models
• Implement MLOps/LLMOps practices, including CI/CD, monitoring, versioning, model governance, and observability
• Ensure Responsible AI, security, compliance, and data privacy by design
• Support production deployments, performance tuning, and continuous improvement of AI systems
• 4–8+ years of experience in software engineering, ML engineering, or AI solution development
• Strong proficiency in Java
• Hands-on experience with Generative AI / LLMs, including RAG, embeddings, prompt engineering, and agents
• Solid understanding of data engineering concepts, SQL/NoSQL, and feature pipelines
• Experience deploying AI solutions on cloud platforms (GCP preferred; AWS/Azure acceptable)
• Familiarity with Docker, Kubernetes, and CI/CD pipelines
• Strong problem-solving, communication, and stakeholder collaboration skills
Skills: AI and Automation

Additional Notes:

Thanks
Yogesh Pratap Singh
Hi,
Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.
Client: TCS
Note: 5.5% VMS fee included; payment terms Net 60.
Position: Azure Automation & Observability Engineer (Python, Datadog), 10+ years
Location: Plano, Texas
Duration: Long term
Bill Rate: $65/hr C2C
GC on W2 only for this role.

Must-have skills: Azure Functions, Datadog, CI/CD
Role Description:
• Design, develop, and maintain automation solutions using Python and scripting languages such as Shell and PowerShell to streamline operations and reduce manual effort.
• Implement and manage monitoring, alerting, and observability using Datadog, including dashboards, log analytics, and performance metrics.
• Build and support automation workflows on Microsoft Azure, leveraging services like Azure Functions, Logic Apps, and CI/CD pipelines.
• Integrate automation with cloud, application, and infrastructure components to enable proactive issue detection and self-healing.
• Collaborate with DevOps, SRE, and application teams to improve reliability, performance, and operational efficiency.
• Support production environments through automated health checks, incident response, and root cause analysis.
• Ensure automation solutions follow security, compliance, and enterprise governance standards.
Preferred Qualifications:
Education Requirements:

Thanks and Regards
T 732-348-5000 x 837 | M 732-228-4897
Hi,
Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.

Role: Sr. AWS Cloud Engineer (AWS certification required), 12+ years
Location: San Francisco, CA (onsite); local profiles only
Rate: $65/hr C2C (H1B/H4/EAD)
USC/GC on W2 only (no exceptions)
Must have: AWS certifications

We are seeking a driven Sr. AWS Cloud Engineer with a strong foundation in cloud platform engineering, security, and operations. This role is ideal for a self-starter and go-getter who takes pride in designing systems that are secure by design, verifiable by default, and continuously ready for internal and external scrutiny.

What You'll Do

What You Bring (Required)

Nice to Have
Thanks and Regards
Hi,

Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.

Job Title: Databricks Data Engineer with DevOps Skills
Location: Los Angeles, CA (Hybrid); must be within 100 miles
Hire Type: C2C
Rate: $60/hr C2C (H1B/H4/EAD)
USC/GC on W2 only (no exceptions)
Job Summary
We are looking for an experienced Databricks Data Engineer with strong DevOps expertise to join our data engineering team. The ideal candidate will design, build, and optimize large-scale data pipelines on the Databricks Lakehouse Platform on AWS, while driving automated CI/CD and deployment practices. This role requires deep expertise in PySpark, SQL, AWS cloud services, and modern DevOps tooling. You will collaborate closely with cross-functional teams to deliver scalable, secure, and high-performance data solutions.
Must Demonstrate – Critical Skills & Architectural Competencies
Key Responsibilities
1. Data Pipeline Development
2. DevOps & CI/CD
3. Collaboration & Business Support
4. Performance & Optimization
5. Governance & Security
Technical Skills
Databricks (Hands-On Experience Required):
Programming & Query Languages:
AWS Cloud Services:
DevOps & Automation:
Other:
Preferred Experience
Certifications (Optional but Preferred)
Thanks and Regards
T 732-348-5000 x 837 | M 732-228-4897

Hi,

Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.

Candidates must hold a CA driver's license.
Job Title: AI/ML Lead/Architect – Databricks & AWS, 10+ years
Location: Los Angeles, CA (Hybrid); must be within 100 miles
Hire Type: C2C
Rate: $70/hr C2C (H1B/H4/EAD)
USC/GC on W2 only (no exceptions)
Role Overview
Seeking an experienced AI/ML Architect/Lead with deep expertise in Databricks on AWS to design and implement scalable ML platforms, Lakehouse systems, and large-scale data pipelines across multi-terabyte environments.
This role requires working with large, multi-terabyte datasets, advanced analytics, and end-to-end ML lifecycle management using Databricks, Python, PySpark, and AWS-native services.
Required Skills
Preferred Skills
Requirements
Thanks and Regards
T 732-348-5000 x 837 | M 732-228-4897

Hi,

Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.
Role: AWS Senior Site Reliability Engineer (SRE) – Release & Observability Focus
Location: Scottsdale, AZ (100% onsite); AZ driver's license required
Rate: $60/hr C2C (H1B/H4/EAD)
USC/GC on W2 only (no exceptions)
Senior Site Reliability Engineer (SRE) – Release & Observability Focus
Key Responsibilities (score each out of 10):
• Solid hands-on experience in SRE or Release Engineering roles
• Strong experience deploying and operating containerized applications on Kubernetes across on-prem and AWS Cloud
• Strong Linux and networking fundamentals
• Own release automation, deployment strategies, rollback mechanisms, and release validation
• Proven experience supporting REST API services in production environments
• Drive continuous improvements in release safety, reliability, monitoring, alerting, and operational readiness
• Experience with monitoring and observability tools such as Splunk, Prometheus/Grafana
• Lead troubleshooting of complex production incidents and service degradations
• Participate in on-call rotations and lead incident response and post-incident reviews
Nice to Have:
• Python scripting for automation and platform tooling
• Knowledge or experience with Honeycomb for observability
Thanks and Regards
T 732-348-5000 x 837 | M 732-228-4897

Hi,

Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.

Role: Azure Databricks Data Engineer with DevOps, 12+ years
Location: Remote (CA/NY only)
USC/GC on W2 only (no exceptions)
Rate: $60/hr C2C (H1/H4/EAD only)
Job Summary
We are looking for an experienced Databricks Data Engineer with strong DevOps expertise to join our data engineering team. The ideal candidate will design, build, and optimize large-scale pipelines on the Databricks Lakehouse Platform on Azure, while driving automated CI/CD and deployment practices. This role requires strong skills in PySpark, SQL, Azure cloud services, and modern DevOps tooling. You will collaborate closely with cross-functional teams to deliver scalable, secure, and high-performance data solutions.
Must Demonstrate (Critical Skills & Architectural Competencies)
Key Responsibilities
1. Data Pipeline Development
2. DevOps & CI/CD
3. Collaboration & Business Support
4. Performance & Optimization
5. Governance & Security
Technical Skills
Preferred Experience
Certifications (Optional)
AWS Data Engineer or AWS Solutions Architect certification
Thanks and Regards
T 732-348-5000 x 837 | M 732-228-4897

Hi,

Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.

Job Description – AWS Python Developer (AWS Certified), 10+ years

Role: AWS Python Developer
Location: Plano, TX (onsite); must be local (no relocation)
Employment Type: Contract
Rate: $60/hr C2C
Experience: 10+ years
Visa: H1/H4/EAD; USC/GC on W2 only
Role Summary
We are looking for a skilled AWS certified Python Developer with strong hands-on experience in building and deploying serverless business logic using Python and AWS Lambda for a client based in Plano, TX.
Must Have – Non-Negotiable
• Strong Python – production-grade application/backend development (not scripting or data science only)
• AWS Lambda – hands-on writing and deploying Lambda functions in production
• Business Logic Development – has built APIs, workflows, or processing engines in Python
• AWS Core Services – hands-on with S3, API Gateway, DynamoDB, SQS/SNS (minimum 3–4 services)
• Serverless & Event-Driven Architecture – understands serverless design patterns
• REST API Development – has designed and consumed APIs in production environments
Good to Have
• FastAPI, Flask, or Django – Python API frameworks
• Step Functions – workflow orchestration
• Terraform / CloudFormation – IaC experience
• Docker / Kubernetes / ECS – containerization
• CI/CD – CodePipeline, GitHub Actions, or Jenkins
• AWS Certified Developer or Solutions Architect certification
Thanks and Regards
T 732-348-5000 x 837 | M 732-228-4897

Hi,

Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.

Senior Business Process Consultant | ServiceNow
Location: Onsite – Bridgewater, NJ / Charlotte, NC / Bellevue, WA / Plano, TX / Santa Clara, CA
Visa: H1/H4/EAD; USC/GC on W2 only
Rate: $70/hr C2C
Mandatory Certifications
Note: missing one certification is acceptable if the candidate is willing to complete it after joining.
• CSA
• CIS
• Industry-Specific Cert (e.g., HR Pro)
• ITIL Foundation
Role Summary
Senior client-facing specialist leading process transformation initiatives on ServiceNow. Trusted advisor to business & IT leaders, owning end-to-end process design across one or more domains while ensuring ITIL v4 best practices, scalability, and upgrade-safe solutions.
Must Have Skills
• 5–7 years in process consulting, business analysis, or service management transformation
• Deep expertise in ServiceNow OOTB process models – ITSM, CSM, HRSD, CMDB, SecOps, or GRC
• Strong ITIL v4 knowledge (Managing Professional / Strategist preferred)
• Experience leading As-Is / To-Be process design, workshops, and stakeholder management
• Proficiency in process modeling tools – BPMN, Visio, Miro, Lucidchart
• Experience working in Agile/Scrum environments with Product Owners
• Strong UAT, governance, KPI, and SLA/OLA definition experience
• Excellent facilitation and communication skills for executive audiences
Good to Have
• ServiceNow Micro-Certs – IntegrationHub, Flow Designer, Performance Analytics
• Associate Business Process Owner certification
• Cloud/EA Certs – AWS/Azure, TOGAF, IT4IT
Thanks and Regards
T 732-348-5000 x 837 | M 732-228-4897

Hi,

Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.

Job Title: AI/ML Architect with Databricks, Azure (no Sr. profiles)
Location: Los Angeles, CA or New York, NY (Hybrid or Remote)
Hire Type: FTE / C2C
Rate: $70/hr C2C
Salary: $150K/year with benefits
USC/GC on W2 only
Role Overview
We are seeking an experienced AI/ML Architect with deep hands-on expertise in Databricks on AWS to lead the design and implementation of scalable, high-performance data and machine learning platforms. The ideal candidate combines architectural thinking with strong engineering execution, demonstrating the ability to build modern lakehouse systems, optimize large-scale pipelines, and drive analytical and ML capabilities across the organization.
This role requires working with large, multi-terabyte datasets, advanced analytics, and end-to-end ML lifecycle management using Databricks, Python, PySpark, and AWS-native services.
Must Demonstrate (Critical Competencies)
Designing Databricks-based lakehouse architectures on Azure (Delta Lake + S3 + Unity Catalog).
Clear separation of compute vs. serving layers in distributed architectures.
Low-latency API strategy where Spark is insufficient (e.g., leveraging optimized services or caching).
Caching strategies to accelerate reads and reduce compute cost.
Data partitioning, file size tuning, and optimization strategies for large-scale pipelines.
Experience handling multi-terabyte structured time-series workloads.
Ability to distill architectural significance from ambiguous business requirements.
Strong curiosity, questioning, and requirement-probing mindset.
Player-coach approach: hands-on technical depth + ability to guide design.
Key Responsibilities
AI/ML & Advanced Analytics
Develop, train, and optimize ML models using Python, PySpark, MLflow, and Databricks Machine Learning.
Conduct exploratory data analysis (EDA) to identify patterns, trends, and insights in large datasets.
Deploy ML models into production using MLflow, Databricks Workflows, or other MLOps pipelines.
Build analytics solutions such as forecasting, anomaly detection, segmentation, or recommendation systems.
Design ML architectures aligned with Databricks Lakehouse on Azure.
Data Engineering & Lakehouse Architecture
Architect and build scalable ETL/ELT pipelines using PySpark, SQL, and Databricks Workflows.
Implement Delta Lake best practices, including OPTIMIZE, ZORDER, partitioning, and schema evolution.
Design lakehouse layers (Bronze/Silver/Gold) with strong separation of compute and serving layers.
Optimize cluster performance and jobs using Spark tuning, caching, and shuffle minimization.
Work with multi-terabyte, time-series, high-velocity data in a distributed environment.
Ensure robust data availability for downstream ML and analytics workloads.
AWS Cloud Integration
Architect end-to-end data and ML solutions using Azure services, including:
S3 for storage
IAM for identity & access
Glue Catalog for metadata management
Networking for secure, high-throughput data movement
Integrate Databricks with AWS-native compute, API layers, and low-latency endpoints.
Business Collaboration & Leadership
Translate business problems into scalable analytical or ML architectures.
Communicate complex statistical and architectural concepts to non-technical stakeholders.
Collaborate with product, engineering, and business leaders to drive data-informed initiatives.
Provide design leadership while remaining hands-on in execution.
Skills & Qualifications
Required
Bachelor's or Master's in Computer Science, Data Science, Engineering, Statistics, or a related field.
10+ years of experience in data engineering, ML engineering, or AI/ML architecture roles.
Deep expertise in Databricks on AWS, including:
PySpark / Spark SQL
Databricks Notebooks
Delta Lake
Unity Catalog
MLflow
Databricks Jobs & Workflows
Strong programming ability in Python (pandas, numpy, scikit-learn).
Demonstrated experience with large-scale, multi-terabyte data processing.
Strong understanding of ML algorithms, distributed systems, and data optimization.
Preferred
Experience with MLOps and production deployment pipelines.
Strong grasp of AWS-native data and compute services.
Understanding of CI/CD using GitHub Actions, GitLab CI, or similar.
Familiarity with deep learning frameworks (TensorFlow, PyTorch).
Key Competencies
Strong analytical and problem-solving skills.
Ability to work in fast-paced, highly collaborative environments.
Excellent communication and presentation abilities.
Self-driven with exceptional attention to architectural detail.
Thanks and Regards
T 732-348-5000 x 837 | M 732-228-4897

Hi,

Kindly read the details below before sharing any resumes.

Please share profiles for the role below, including visa status, location, and LinkedIn ID.
LinkedIn profile must have been created before 2020.

Role: Business Analyst – Guidewire BillingCenter
Location: Hartford, CT (Hybrid)
Rate: $60/hr max C2C (H1/H4/EAD)
USC/GC on W2 for this role
Job Summary
Senior Guidewire BillingCenter Business Analyst responsible for end-to-end functional analysis, solution design, and implementation. Involves working with stakeholders, optimizing billing processes, and delivering scalable solutions within P&C insurance ecosystems.
Required Skills & Experience (Priority)

Key Responsibilities

Thanks
Yogesh Singh