Features · Roadmap · Report Bug · Vote New Features · Read Blog · Contribute to Open Source · Meet the Team
🎉 Version 0.55.3 is out. Check out the release notes here.
🤹 ZenML is an extensible, open-source MLOps framework for creating portable, production-ready machine learning pipelines. By decoupling infrastructure from code, ZenML enables developers across your organization to collaborate more effectively as they move projects from development to production.
- 💼 ZenML gives data scientists the freedom to fully focus on modeling and experimentation while writing code that is production-ready from the get-go.
- 👨‍💻 ZenML empowers ML engineers to take ownership of the entire ML lifecycle end-to-end. Adopting ZenML means fewer handover points and more visibility into what is happening in your organization.
- 🛫 ZenML enables MLOps infrastructure experts to define, deploy, and manage sophisticated production environments that are easy for colleagues to use.
ZenML provides a user-friendly syntax designed for ML workflows that is compatible with any cloud or tool. It centralizes pipeline management, so developers can write their code once and deploy it to different infrastructures without changes.
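For example, once stacks are registered (stacks are covered further below), the same pipeline script can target different infrastructure simply by switching the active stack. A rough sketch, where the stack names are placeholders for illustration:
# Run the pipeline locally on the default stack
zenml stack set default
python run.py
# Re-run the identical code on a registered cloud stack -- no code changes needed
zenml stack set production_stack
python run.py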
Install ZenML via PyPI. Python 3.8 - 3.11 is required:
pip install "zenml[server]"
Take a tour with the guided quickstart by running:
zenml go
ZenML allows you to create and manage your own MLOps platform using best-in-class open-source and cloud-based technologies. Here is an example of how you could set this up for your team:
For full functionality, ZenML should be deployed in the cloud so it can serve as the central MLOps interface for your team and enable the collaborative features.
Currently, there are two main options to deploy ZenML:
- ZenML Cloud: With ZenML Cloud, you can use a control plane to create ZenML servers, also known as tenants. These tenants are managed and maintained by ZenML's dedicated team, taking the burden of server management off your shoulders.
- Self-hosted deployment: Alternatively, you can deploy ZenML in your own self-hosted environment, using our CLI, Docker, Helm, or HuggingFace Spaces (see the sketch after this list).
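As one illustration of the self-hosted route, the ZenML server is published as a Docker image. A minimal sketch, assuming the standard zenmldocker/zenml-server image and its default port; check the deployment documentation for the options that match your environment:
# Run the ZenML server container and expose the dashboard on port 8080
docker run -it -d -p 8080:8080 --name zenml-server zenmldocker/zenml-server
After the container starts, the dashboard should be reachable at http://localhost:8080.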
ZenML boasts a ton of integrations with popular MLOps tools. The ZenML Stack concept ensures that these tools work nicely together, bringing structure and standardization to the MLOps workflow.
Deploying and configuring this is super easy with ZenML. For AWS, it might look something like this:
# Deploy and register an orchestrator and an artifact store
zenml orchestrator deploy kubernetes_orchestrator --flavor kubernetes --cloud aws
zenml artifact-store deploy s3_artifact_store --flavor s3
# Register this combination of components as a stack
zenml stack register production_stack --orchestrator kubernetes_orchestrator --artifact-store s3_artifact_store --set # Register your production environment
When you run a pipeline with this stack active, it will run on your deployed Kubernetes cluster.
You can also deploy your own tooling manually and register it with ZenML, as sketched below.
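For instance, if you already operate an S3 bucket and a Kubernetes cluster yourself, registering them as stack components might look roughly like this. The component names, bucket path, and Kubernetes context are placeholders, and the exact flags depend on the component flavor and ZenML version:
# Register an existing S3 bucket as the artifact store
zenml artifact-store register my_store --flavor=s3 --path=s3://my-bucket
# Register an existing Kubernetes cluster as the orchestrator
zenml orchestrator register my_orchestrator --flavor=kubernetes --kubernetes_context=my-cluster-context
# Combine them into a stack and make it the active stack
zenml stack register my_manual_stack --artifact-store my_store --orchestrator my_orchestrator --set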
Here's an example of a hello world ZenML pipeline in code:
# run.py
from zenml import pipeline, step


@step
def step_1() -> str:
    """Returns the `world` substring."""
    return "world"


@step
def step_2(input_one: str, input_two: str) -> None:
    """Combines the two strings at its input and prints them."""
    combined_str = input_one + ' ' + input_two
    print(combined_str)


@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)


if __name__ == "__main__":
    my_pipeline()
python run.py
Open up the ZenML dashboard using this command:
zenml show
ZenML is being built in public. The roadmap is a regularly updated source of truth for the ZenML community to understand where the product is going in the short, medium, and long term.
ZenML is managed by a core team of developers who are responsible for making key decisions and incorporating feedback from the community. The team gathers feedback via various channels, and you can directly influence the roadmap as follows:
- Vote on your most wanted feature on our Discussion board.
- Start a thread in our Slack channel.
- Create an issue on our GitHub repo.
We would love to develop ZenML together with our community! The best way to get started is to select any issue with the good-first-issue label and open up a Pull Request! If you would like to contribute, please review our Contributing Guide for all relevant details.
The first port of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond. Or, if you prefer, open an issue on our GitHub repo.
We have identified a critical security vulnerability in ZenML versions prior to 0.46.7. This vulnerability potentially allows unauthorized users to take ownership of ZenML accounts through the user activation feature. Please read our blog post for more information on how we've addressed this.
ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.