Introduction

Dagster is a system for building modern data applications.

Combining an elegant programming model and beautiful tools, Dagster allows infrastructure engineers, data engineers, and data scientists to seamlessly collaborate to process and produce the trusted, reliable data needed in today's world.

Install

To get started:

pip install dagster dagit


This installs two modules:

  • dagster | The core programming model and abstraction stack; stateless, single-node, single-process and multi-process execution engines; and a CLI tool for driving those engines.
  • dagit | A UI and rich development environment for Dagster, including a DAG browser, a type-aware config editor, and a streaming execution interface.
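
For illustration, here is a minimal sketch of the programming model, assuming the solid/pipeline decorator API from the 0.x releases (the file and function names are placeholders, and exact signatures may differ in your installed version):

from dagster import execute_pipeline, pipeline, solid


@solid
def get_name(context):
    # A solid is a functional unit of computation; this one just returns a value.
    return "dagster"


@solid
def hello(context, name):
    # Solids receive a context object that provides logging, config, and resources.
    context.log.info("Hello, {}!".format(name))


@pipeline
def hello_pipeline():
    # A pipeline wires solids together into a DAG by calling them.
    hello(get_name())


if __name__ == "__main__":
    execute_pipeline(hello_pipeline)

Saved as hello_world.py, this should run with python hello_world.py, and dagit -f hello_world.py (assuming the -f flag in your installed version) opens the same pipeline in the UI.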

Learn

Next, jump right into our tutorial, or read our complete documentation. If you're actively using Dagster or have questions on getting started, we'd love to hear from you; come join our Slack!

Contributing

For details on contributing or running the project for development, check out our contributing guide.

Integrations

Dagster works with the tools and systems that you're already using with your data, including:

  • Apache Airflow (dagster-airflow): Allows Dagster pipelines to be scheduled and executed, either containerized or uncontainerized, as Apache Airflow DAGs.
  • Apache Spark (dagster-spark, dagster-pyspark): Libraries for interacting with Apache Spark and PySpark.
  • Dask (dagster-dask): Provides a Dagster integration with Dask / Dask.Distributed.
  • Datadog (dagster-datadog): Provides a Dagster resource for publishing metrics to Datadog.
  • Great Expectations (expectations in Dagster): The Great Expectations framework is designed to promote data quality checks for data warehouses. In Dagster, expectations are a first-class citizen (see the sketch after this list).
  • Jupyter / Papermill (dagstermill): Built on the papermill library, dagstermill integrates productionized Jupyter notebooks into Dagster pipelines.
  • PagerDuty (dagster-pagerduty): A library for creating PagerDuty alerts from Dagster workflows.
  • Snowflake (dagster-snowflake): A library for interacting with the Snowflake Data Warehouse.

Cloud Providers

  • AWS (dagster-aws): A library for interacting with Amazon Web Services. Provides integrations with S3, EMR, and (coming soon!) Redshift.
  • GCP (dagster-gcp): A library for interacting with Google Cloud Platform. Provides integrations with BigQuery and Cloud Dataproc.

This list is growing as we actively build more integrations, and we welcome contributions!
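
Because expectations are a first-class citizen, a solid can report data quality checks alongside its outputs. A minimal sketch, assuming the ExpectationResult and Output events from the 0.x API (clean_rows and the check itself are hypothetical):

from dagster import ExpectationResult, Output, solid


@solid
def clean_rows(context, rows):
    # Drop null rows, then report a data quality expectation before emitting the output.
    non_null = [row for row in rows if row is not None]
    yield ExpectationResult(
        success=len(non_null) == len(rows),
        label="no_null_rows",
        description="All incoming rows are non-null.",
    )
    yield Output(non_null)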

Example Projects

Several example projects demonstrating how to use Dagster are provided under the examples folder, including:

  1. examples/airline-demo: A substantial demo project illustrating how these tools can be used together to manage a realistic data pipeline.
  2. examples/event-pipeline-demo: An example illustrating a typical web event processing pipeline with S3, Scala Spark, and Snowflake.
