Once you have followed the steps in xref:getting_started/installation.adoc[] to install the Operator and its dependencies, you can now deploy an Airflow cluster and its dependencies. Afterwards you can verify that it works by running and tracking an example DAG.
As we have installed the external dependencies required by Airflow (PostgreSQL and Redis), we can now install the Airflow cluster itself.
Supported versions for PostgreSQL and Redis can be found in the Airflow documentation.
A Secret with the necessary credentials must be created: this entails database connection credentials as well as an admin account for Airflow itself. Create a file called `airflow-credentials.yaml`:
link:example$getting_started/code/airflow-credentials.yaml[role=include]
And apply it:
link:example$getting_started/code/getting_started.sh[role=include]
The `connections.secretKey` will be used for securely signing the session cookies and can be used for any other security-related needs by extensions. It should be a long random string of bytes.

`connections.sqlalchemyDatabaseUri` must contain the connection string to the SQL database storing the Airflow metadata.

`connections.celeryResultBackend` must contain the connection string to the SQL database storing the job metadata (in the example above we are using the same PostgreSQL database for both).

`connections.celeryBrokerUrl` must contain the connection string to the Redis instance used for queuing the jobs submitted to the Airflow worker(s).

The `adminUser` fields are used to create an admin user. Please note that the admin user will be disabled if you use a non-default authentication mechanism like LDAP.
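Purely as an illustration of the keys described above (the authoritative contents are in the included file; the Secret name, the exact admin-user key names and all values below are assumptions or placeholders), such a Secret might look roughly like this:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: simple-airflow-credentials   # hypothetical name; must match the reference in the cluster definition
type: Opaque
stringData:
  adminUser.username: airflow                # assumed key names for the admin account
  adminUser.firstname: Airflow
  adminUser.lastname: Admin
  adminUser.email: airflow@example.com
  adminUser.password: airflow                # placeholder; choose a strong password
  connections.secretKey: thisISaSECRET_1234  # placeholder; use a long random string of bytes
  connections.sqlalchemyDatabaseUri: postgresql+psycopg2://airflow:airflow@airflow-postgresql/airflow
  connections.celeryResultBackend: db+postgresql://airflow:airflow@airflow-postgresql/airflow
  connections.celeryBrokerUrl: redis://:redis@airflow-redis-master:6379/0
----

The connection strings assume the PostgreSQL and Redis releases installed earlier are reachable as `airflow-postgresql` and `airflow-redis-master` (as the StatefulSet names later in this guide suggest); adjust them to your environment.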
An Airflow cluster is made up of three components:

- `webserver`: provides the main UI for user interaction
- `workers`: the nodes over which the job workload will be distributed by the scheduler
- `scheduler`: responsible for triggering jobs and persisting their metadata to the backend database
Create a file named `airflow.yaml` with the following contents:
link:example$getting_started/code/airflow.yaml[role=include]
And apply it:
link:example$getting_started/code/getting_started.sh[role=include]
Where:

- `metadata.name` contains the name of the Airflow cluster
- the label of the Docker image provided by Stackable must be set in `spec.version`
- `spec.executor` determines how the cluster will run (for more information see https://airflow.apache.org/docs/apache-airflow/stable/executor/index.html#executor-types). This field takes one of two values: `celery` or `kubernetes`. `celery` will deploy workers as defined by the role and specify `CeleryExecutor` internally; `kubernetes` will set `KubernetesExecutor` internally, and the scheduler will take responsibility for creating (and terminating) pods for the individual DAG tasks.
- the `spec.loadExamples` key is optional and defaults to `false`. It is set to `true` here as the example DAGs will be used when verifying the installation.
- the `spec.exposeConfig` key is optional and defaults to `false`. It is set to `true` only as an aid to verify the configuration and should never be used as such in anything other than test or demo clusters.
- the previously created secret must be referenced in `spec.credentialsSecret`
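Purely as a sketch of the keys discussed above (the authoritative definition is the included `airflow.yaml`; the API group/version, the role layout and the version string below are assumptions), a cluster definition might look roughly like this:

[source,yaml]
----
apiVersion: airflow.stackable.tech/v1alpha1   # assumed group/version
kind: AirflowCluster
metadata:
  name: airflow                               # the name of the Airflow cluster
spec:
  version: 2.2.5-stackable0.1.0               # example only: Airflow version amended with a Stackable version
  executor: celery                            # or "kubernetes"
  loadExamples: true                          # load the example DAGs used to verify the installation
  exposeConfig: true                          # only for test or demo clusters
  credentialsSecret: simple-airflow-credentials  # the Secret created earlier
  webservers:                                 # the role layout below is an assumption; see the included file
    roleGroups:
      default:
        replicas: 1
  workers:
    roleGroups:
      default:
        replicas: 2
  schedulers:
    roleGroups:
      default:
        replicas: 1
----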
[NOTE]
====
Please note that the version you need to specify for `spec.version` is not only the version of Apache Airflow which you want to roll out, but has to be amended with a Stackable version as shown. This Stackable version is the version of the underlying container image which is used to execute the processes. For a list of available versions please check our image registry. It should generally be safe to simply use the latest image version that is available.
====
This will create the actual Airflow cluster.
When creating an Airflow cluster, a database-initialization job is first started to ensure that the database schema is present and correct (i.e. populated with an admin user). A Kubernetes job is created which starts a pod to initialize the database. This can take a while.
You can use kubectl to wait on the resource; note that the cluster itself will not be created until this step is complete:
link:example$getting_started/code/getting_started.sh[role=include]
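If you are not running the bundled script, such a wait could look like the following sketch (the job name `airflow` is taken from the output shown below):

[source,bash]
----
# Wait up to five minutes for the metadata-database initialization job to complete.
kubectl wait --for=condition=complete --timeout=300s job/airflow
----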
The job status can be inspected and verified like this:

[source,bash]
----
kubectl get jobs
----

which will show something like this:

----
NAME      COMPLETIONS   DURATION   AGE
airflow   1/1           85s        11m
----
Then, make sure that all the Pods in the StatefulSets are ready:

[source,bash]
----
kubectl get statefulset
----

The output should show all pods ready, including the external dependencies:

----
NAME                        READY   AGE
airflow-postgresql          1/1     16m
airflow-redis-master        1/1     16m
airflow-redis-replicas      1/1     16m
airflow-scheduler-default   1/1     11m
airflow-webserver-default   1/1     11m
airflow-worker-default      2/2     11m
----
The completed set of pods for the Airflow cluster, including the scheduler, webserver and worker pods, can be listed with `kubectl get pods`.
When the Airflow cluster has been created and the database is initialized, Airflow can be opened in the browser: the webserver UI port defaults to `8080` and can be forwarded to the local host:
link:example$getting_started/code/getting_started.sh[role=include]
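If you are not using the bundled script, a minimal port-forward sketch could look like this (the Service name `airflow-webserver` is an assumption; check `kubectl get svc` for the exact name in your cluster):

[source,bash]
----
# Forward the webserver UI port to localhost:8080 (the Service name is an assumption).
kubectl port-forward svc/airflow-webserver 8080:8080
----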
The webserver UI can now be opened in the browser at http://localhost:8080. Enter the admin credentials from the Kubernetes Secret:
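If you need to look the password up, it can be read back from the Secret; this sketch assumes the Secret is named `simple-airflow-credentials` and stores the password under the key `adminUser.password`:

[source,bash]
----
# Decode the admin password from the credentials Secret (name and key are assumptions).
kubectl get secret simple-airflow-credentials \
  -o jsonpath='{.data.adminUser\.password}' | base64 -d
----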
Since the examples were loaded in the cluster definition, they will appear under the DAGs tab:

Select one of these DAGs by clicking on the name in the left-hand column, e.g. `example_complex`. Click on the arrow in the top right of the screen, select "Trigger DAG", and the DAG nodes will be automatically highlighted as the job works through its phases.
Great! You have set up an Airflow cluster, connected to it and run your first DAG!
Alternative: Using the command line
If you prefer to interact directly with the API instead of using the web interface, you can use the following commands. The DAG used here is one of the example DAGs, named `example_complex`. Triggering a DAG run via the API requires an initial extra step to ensure that the DAG is not in a paused state:
link:example$getting_started/code/getting_started.sh[role=include]
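If you are not using the bundled script, the unpausing step could look like the following sketch, using Airflow's stable REST API via the port-forward from above (the `admin:admin` credentials are placeholders; use the values from your Secret):

[source,bash]
----
# Unpause the example_complex DAG so that it can be triggered (credentials are placeholders).
curl -s -X PATCH 'http://localhost:8080/api/v1/dags/example_complex' \
  --user 'admin:admin' \
  -H 'Content-Type: application/json' \
  -d '{"is_paused": false}'
----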
A DAG can then be triggered by providing the DAG name (in this case, `example_complex`). The response contains the identifier of the DAG run, which we can parse out of the JSON like this:
link:example$getting_started/code/getting_started.sh[role=include]
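As a sketch of what that could look like (assuming the port-forward from above, placeholder credentials and `jq` for JSON parsing):

[source,bash]
----
# Trigger a run of example_complex and capture the generated DAG run identifier.
dag_id=$(curl -s -X POST 'http://localhost:8080/api/v1/dags/example_complex/dagRuns' \
  --user 'admin:admin' \
  -H 'Content-Type: application/json' \
  -d '{}' | jq -r '.dag_run_id')
echo "$dag_id"
----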
If we read this identifier into a variable such as `dag_id` (or replace it manually in the command), we can run this command to access the status of the DAG run:
link:example$getting_started/code/getting_started.sh[role=include]
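A hedged sketch of such a status check (again assuming the port-forward, placeholder credentials and the `dag_id` variable from the previous step):

[source,bash]
----
# Query the state of the DAG run (e.g. "queued", "running", "success").
# Note: the run identifier may need URL-encoding if it contains characters such as '+'.
curl -s "http://localhost:8080/api/v1/dags/example_complex/dagRuns/$dag_id" \
  --user 'admin:admin' | jq -r '.state'
----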
Look at the xref:usage-guide/index.adoc[usage guide] to find out more about configuring your Airflow cluster and loading your own DAG files.