A simple yet powerful open-source framework that integrates all your ML tools.
Explore the docs »
Features · Roadmap · Report Bug · Vote New Features · Read Blog · Meet the Team
🎉 Version 0.40.1 is out. Check out the release notes here.
🤹 ZenML is an extensible, open-source MLOps framework for creating portable, production-ready machine learning pipelines. By decoupling infrastructure from code, ZenML enables developers across your organization to collaborate more effectively as they take projects from development to production.
- 💼 ZenML gives data scientists the freedom to fully focus on modeling and experimentation while writing code that is production-ready from the get-go.
- 👨‍💻 ZenML empowers ML engineers to take ownership of the entire ML lifecycle end-to-end. Adopting ZenML means fewer handover points and more visibility into what is happening across your organization.
- 🛫 ZenML enables MLOps infrastructure experts to define, deploy, and manage sophisticated production environments that are easy for their colleagues to use.
ZenML provides a user-friendly syntax designed for ML workflows and is compatible with any cloud or tool. It offers centralized pipeline management, letting developers write code once and deploy it to any infrastructure without changes.
Install ZenML via PyPI. Python 3.7 - 3.10 is required:
pip install "zenml[server]"
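Once installed, you can sanity-check the setup by importing the package and printing its version. This is a minimal sketch; the version string will reflect whatever release you installed:
# Verify that ZenML is importable after installation
import zenml
print(zenml.__version__)  # e.g. "0.40.1"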
Take a tour with the guided quickstart by running:
zenml go
ZenML allows you to create and manage your own MLOps platform using best-in-class open-source and cloud-based technologies. Here is an example of how you could set this up for your team:
For full functionality, ZenML should be deployed in the cloud so that it can serve as the central, collaborative MLOps interface for your team.
If your machine is authenticated with one of the three major cloud providers (AWS, GCP, or Azure), the following command will handle the full deployment for you:
zenml deploy --provider aws # aws, gcp and azure are supported providers
You can also deploy with Docker or Helm if you want full control over the configuration. Check out the docs to find out how.
ZenML offers a wide range of integrations with popular MLOps tools. The ZenML Stack concept ensures that these tools work well together, bringing structure and standardization to the MLOps workflow.
Deploying and configuring these tools is straightforward with ZenML. For AWS, it might look like this:
# Deploy and register an orchestrator and an artifact store
zenml orchestrator deploy kubernetes_orchestrator --flavor kubernetes --cloud aws
zenml artifact-store deploy s3_artifact_store --flavor s3
# Register this combination of components as a stack
zenml stack register production_stack --orchestrator kubernetes_orchestrator --artifact-store s3_artifact_store --set # Register your production environment
When you run a pipeline with this stack set as active, it will run on your deployed Kubernetes cluster.
You can also deploy your own tooling manually or register existing tooling.
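If you want to confirm which stack your pipelines will run on, you can also inspect it from Python. This is a minimal sketch assuming the zenml.client.Client API; the stack and component names correspond to whatever you registered above:
# Sketch: inspect the currently active stack and its components
from zenml.client import Client

client = Client()
stack = client.active_stack_model
print(f"Active stack: {stack.name}")
for component_type, components in stack.components.items():
    print(f"  {component_type.value}: {components[0].name}")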
Here's an example of a hello world ZenML pipeline in code:
# run.py
from zenml import pipeline, step
@step
def step_1() -> str:
    """Returns the `world` string."""
    return "world"


@step
def step_2(input_one: str, input_two: str) -> None:
    """Combines the two input strings and prints them."""
    combined_str = input_one + " " + input_two
    print(combined_str)


@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)


if __name__ == "__main__":
    my_pipeline()
python run.py
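Since steps and pipelines are plain Python functions, you can also parameterize them instead of hard-coding values. A small sketch building on the example above; the greeting parameter is purely illustrative:
# Sketch: the same pipeline with a configurable greeting parameter
@pipeline
def my_parameterized_pipeline(greeting: str = "hello"):
    output_step_one = step_1()
    step_2(input_one=greeting, input_two=output_step_one)

my_parameterized_pipeline(greeting="hi")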
Open up the ZenML dashboard using this command.
zenml show
ZenML is being built in public. The roadmap is a regularly updated source of truth for the ZenML community to understand where the product is going in the short, medium, and long term.
ZenML is managed by a core team of developers who are responsible for making key decisions and incorporating feedback from the community. The team gathers feedback via various channels, and you can directly influence the roadmap as follows:
- Vote on your most wanted feature on our Discussion board.
- Start a thread in our Slack channel.
- Create an issue on our GitHub repo.
We would love to develop ZenML together with our community! The best way to get started is to pick any issue from the good-first-issue label. If you would like to contribute, please review our Contributing Guide for all relevant details.
The first port of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond. Or, if you prefer, open an issue on our GitHub repo.
ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.