Open source MLOps for AI that scales
Move from experimentation to production using a trusted, open source MLOps platform. Take the complexity out of deploying and maintaining your models with automated workflows, security patching, and tooling integrations that span the entire machine learning lifecycle.

Why choose Canonical for open source MLOps?
- 10 years of security maintenance
- Automated lifecycle management
- End-to-end tooling integration
- Deploy on any public or private cloud
- Simple per node support subscription
What is MLOps?

Machine learning operations (MLOps) is like DevOps for machine learning. It is a set of practices that automates machine learning workflows, ensuring scalability, portability, and reproducibility.
End-to-end open source MLOps
Canonical's MLOps stack delivers all the open source solutions you need to streamline the complete machine learning lifecycle. These tools are tightly integrated to ensure a smooth MLOps journey, from experimentation to production.
Charmed Kubeflow
Charmed Kubeflow is the foundation of Canonical MLOps. It is an enterprise-ready platform for deploying, scaling, and managing AI workflows on any cloud.
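To give a flavour of what an AI workflow on Kubeflow looks like, the sketch below defines a hypothetical two-step pipeline with the Kubeflow Pipelines (kfp) SDK and compiles it into a definition you can upload to the platform. The component and pipeline names are placeholders, not anything shipped with Charmed Kubeflow.

```python
# Minimal Kubeflow Pipelines (kfp v2) sketch: a two-step pipeline.
# Names and values are illustrative assumptions, not Charmed Kubeflow defaults.
from kfp import dsl, compiler


@dsl.component
def prepare_data(rows: int) -> int:
    """Stand-in for a data preparation step; reports how many rows it produced."""
    return rows


@dsl.component
def train_model(rows: int) -> str:
    """Stand-in for a training step that consumes the prepared data."""
    return f"model trained on {rows} rows"


@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(rows: int = 1000):
    data = prepare_data(rows=rows)
    train_model(rows=data.output)  # wire the first step's output into the second


if __name__ == "__main__":
    # Produces a pipeline definition you can upload through the Kubeflow UI or API.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```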
Charmed MLflow
Charmed MLflow is our solution for managing the model lifecycle. Track your experiments, package code in a reproducible format, and store and deploy models, all from a lightweight platform that runs on any infrastructure.
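As an example of experiment tracking, the sketch below logs parameters, a metric, and a trained model to an MLflow run. The tracking URI, experiment name, and hyperparameters are placeholder assumptions; point them at your own MLflow deployment.

```python
# Minimal MLflow tracking sketch; URI and names are placeholder assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")  # assumption: local MLflow server
mlflow.set_experiment("demo-experiment")

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    params = {"C": 0.5, "max_iter": 200}
    model = LogisticRegression(**params).fit(X, y)

    mlflow.log_params(params)                              # record hyperparameters
    mlflow.log_metric("train_accuracy", model.score(X, y))  # record a metric
    mlflow.sklearn.log_model(model, "model")                 # store the model artifact
```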
Charmed Feast
Charmed Feast is an enterprise-grade feature store that enables you to bridge the gap between data engineering and model deployment. Native integration with Charmed Kubeflow ensures a seamless experience.
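To illustrate how a feature store fits into serving, the sketch below fetches online features with the Feast Python SDK. It assumes an existing feature repository that defines a feature view named driver_hourly_stats keyed by driver_id; those names are purely illustrative.

```python
# Minimal Feast sketch: fetch online features at serving time.
# Assumes a feature repo in the current directory with a "driver_hourly_stats"
# feature view keyed by driver_id (illustrative names).
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # path to your feature repository

features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()

print(features)  # feature values ready to feed into the model
```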
Fully open source, fully supported

Each component of Canonical's MLOps stack is fully open source, and backed by long-term, enterprise-grade security maintenance and support commitments:
- Up to 10 years of security maintenance
- Regular updates and CVE fixes
- Optional 24/7 support with SLAs
The entire MLOps platform is covered by a simple per node, per year subscription.
What customers say
“We wanted one partner for the whole on-premise cloud because we're not just supporting Kubernetes but also our Ceph clusters, managed Postgres, Kafka, and AI tools such as Kubeflow and Spark. These were all the services that were needed and with this we could have one nice, easy joined-up approach.”
Michael Hawkshaw
IT Service Manager
European Space Agency
“Partnering with Canonical lets us concentrate on our core business. Our data scientists can focus on data manipulation and model training rather than managing infrastructure.”
Machine Learning Engineer
Entertainment Technology Provider
MLOps services
Canonical's experts deliver a range of services to help you move faster and smarter with AI projects.
Kick start your AI/ML journey with an MLOps Workshop
Build your tailored MLOps architecture in just 5 days. In a custom workshop, we'll help you design AI infrastructure for any use case and level up your in-house expertise to accelerate your machine learning initiatives.
MLOps Consulting
Move faster with Canonical's MLOps consulting services. Our experts can design and deploy your full-stack AI environment from the ground up on your substrate of choice.
Managed MLOps
Let us run the platform so your team can focus on developing and deploying models. Streamline operational service delivery and offload the design, implementation, and management of your MLOps environment.
Open source MLOps resources

Data & AI solutions
There's no machine learning without data. Explore our solutions that bridge the gap between data and AI.

Guide to MLOps
Learn how to take your models to production using open source MLOps platforms in this whitepaper.

What is MLOps?
Read the blog to dig deeper into the fundamental principles of MLOps.