llm-d Joins CNCF for High-Performance Distributed Inference


Eyal Estrin

Mar 27, 2026, 4:30:28 AM
llm-d: a Kubernetes-native high-performance distributed LLM inference framework
https://llm-d.ai/

Welcome llm-d to the CNCF: Evolving Kubernetes into SOTA AI infrastructure
https://www.cncf.io/blog/2026/03/24/welcome-llm-d-to-the-cncf-evolving-kubernetes-into-sota-ai-infrastructure/

Kubernetes as AI Infrastructure: Google Cloud, llm-d, and the CNCF
https://cloud.google.com/blog/products/containers-kubernetes/llm-d-officially-a-cncf-sandbox-project

Donating llm-d to the Cloud Native Computing Foundation
https://research.ibm.com/blog/donating-llm-d-to-the-cloud-native-computing-foundation

Why we’re contributing llm-d to the CNCF: Standardizing the future of AI
https://www.redhat.com/en/blog/why-were-contributing-llm-d-cncf-standardizing-future-ai

Deploying Disaggregated LLM Inference Workloads on Kubernetes
https://developer.nvidia.com/blog/deploying-disaggregated-llm-inference-workloads-on-kubernetes/

Welcome to llm-d: a Kubernetes-native high-performance distributed LLM inference framework
https://github.com/llm-d

Eyal Estrin
Author | Cloud Architect | AWS • Azure • GCP Insights
Social: @eyalestrin
Connect: https://linktr.ee/eyalestrin
Blog: https://security-24-7.com