Hi,
Please share your updated resume if you are a good fit for the JD below. I will be happy to share more information.
Role: Palantir Foundry Use Case Engineer
Location: State of Connecticut, CT (Onsite)
Duration: 6 months
JD
· Hands-on experience configuring Palantir Foundry use cases (datasets, pipelines, ontology, Contour, Workshop, Slate).
· Experience working with ontology-based data modeling for enterprise analytics.
· Exposure to cloud-native environments (AWS) and distributed data systems.
· Experience delivering in Agile/POD-based models with short sprint cycles and business demos.
Good to Have
· Experience configuring business-facing dashboards and analytics for quality, manufacturing, supply chain, or customer analytics domains.
· Understanding of data governance, usage monitoring, and standardization practices in a CoE model.
· Basic exposure to applied analytics or pattern detection within Foundry use cases.
Roles & Responsibilities
Role Summary
The Palantir Foundry Use Case Engineer will be responsible for configuring, building, and operationalizing business use cases on the Palantir Foundry platform. This role focuses on rapid use-case configuration, ontology modeling, pipeline development, and business-facing analytics, working in agile, business-led 2-week sprints as part of a Palantir Center of Excellence (CoE) delivery model.
This is not a traditional data engineering role. The engineer is expected to translate business problems into Palantir-native solutions with speed, structure, and governance.
Key Responsibilities
Use Case Configuration & Delivery
· Configure end-to-end Palantir Foundry use cases including data ingestion, ontology setup, transformations, and analytics dashboards.
· Rapidly deliver business-facing applications using Foundry components such as datasets, pipelines, Ontology Manager, Contour, Workshop, and Slate.
· Work in 2-week agile sprints with frequent demos and business feedback loops.
· Enable quick value realization through iterative configuration rather than heavy custom coding.
Ontology & Data Modelling
· Design and configure ontology-driven data models aligned to business entities and processes (e.g., quality, supply chain, sales, manufacturing).
· Ensure consistency in object definitions, relationships, and metrics through governance standards defined by the CoE.
Data Pipelines & Transformations
· Build and maintain Foundry pipelines using PySpark-based transformations for structured and semi-structured data.
· Integrate data from multiple enterprise sources via Foundry data connections and ingestion frameworks.
· Optimize pipelines for performance, reliability, and cost efficiency in cloud environments (AWS-based deployments).
Business Enablement & Adoption
· Partner closely with Product Owners and Business Analysts to translate business requirements into configured Foundry solutions.
· Support business enablement through self-service analytics, documentation, and walkthroughs of configured use cases.
· Monitor application usage and adoption to guide enhancements and prioritization.
Generic Managerial Skills, If any
· Good Communication Skills
Key Words to search in Resume
Palantir, AWS, Ontology‑based data modeling
Pre-Screening Questionnaire
Please select the applicable Requirements category (1-9) for this IOU/ISU: 1
1. AI.Cloud
2. CBO / IT IS
3. Cyber Security
4. ESU
5. IOTDE
6. TCS Interactive
7. MOT
8. Data & AI
9. SEED
Role Descriptions:
1) Design and develop end-to-end data pipelines using Palantir Foundry (Code Repositories, Pipeline Builder, Transforms, Workshop, Contour, etc., as applicable).
2) Build and maintain curated datasets, data lineage, and data quality controls across ingestion, transformation, and serving layers.
3) Develop data transformations using Python, PySpark, and SQL, and optimize pipeline performance for large-scale datasets.
4) Implement Foundry Ontology (objects, actions, relationships) and enable operational workflows through ontology-driven applications.
5) Develop and support Foundry applications and dashboards for business use cases (where required) using Foundry tools (e.g., Workshop, Contour).
6) Collaborate with stakeholders to understand requirements, translate them into data products, and deliver iteratively using Agile practices.
7) Ensure security, governance, and compliance using Foundry access controls, data policies, and auditing best practices.
8) Integrate Foundry with enterprise systems via APIs and connectors, and support data interoperability patterns.
9) Establish CI/CD practices for Foundry code repositories, perform peer code reviews, and enforce coding standards.
10) Troubleshoot production issues, perform root-cause analysis, and drive continuous improvements in reliability and performance.
Essential Skills:
1) Strong hands-on experience with Palantir Foundry: Transforms and Pipeline Builder, Foundry Code Repositories, data modeling and curated datasets, and Foundry ontology concepts (preferred for senior roles).
2) Experience in implementing data governance, access controls, and platform best practices.
3) Strong coding skills in Python (mandatory); PySpark strongly preferred.
4) Advanced SQL skills (complex joins, window functions, optimization).
5) Experience with batch and near-real-time data processing patterns.
6) Strong understanding of data warehousing concepts, dimensional modeling, and data quality checks.
Desirable Skills:
Keyword:
Skills: Digital : Python~Palantir Core
Experience Required: 10+ years
Thanks and Regards
Ramashish Kumar
Valzo Soft Solutions LLP
A: 12600 Deerfield Parkway, Suite 2123, Alpharetta, GA 30004
LinkedIn : linkedin.com/in/ramashishkumar86