Update vendor-images (minor) #248
🎉 This PR is included in version 1.0.0 🎉 The release is available on GitHub release. Your semantic-release bot 📦🚀
This PR contains the following updates:
0.36.11-alpha -> 0.39.7-alpha
0.36.11-alpha -> 0.39.7-alpha
2.2.5-python3.8 -> 2.3.1-python3.8
v0.10.3 -> v0.11.3
4.45.0-debian-10-r0 -> 4.48.1-debian-10-r0
1.23.7 -> 1.24.1
11.15.0-debian-10-r9 -> 11.16.0-debian-10-r9
6.0.16-debian-10-r0 -> 6.2.7-debian-10-r0
1.12.1-debian-10-r11 -> 1.37.0-debian-10-r11
3.7.0-debian-10-r58 -> 3.8.0-debian-10-r58
v3.22.3 -> v3.23.1
v3.22.3 -> v3.23.1
v3.22.3 -> v3.23.1
v3.22.3 -> v3.23.1
v3.22.3 -> v3.23.1
v2.4.1 -> v2.5.0
v1.3.3 -> v1.8.1
1.28 -> 1.35
v0.11.1 -> v0.13.0
v0.11.1 -> v0.13.0
v0.22.3 -> v0.26.4
v0.22.3 -> v0.26.4
v0.22.3 -> v0.26.4
v0.8.0 -> v0.11.0
1.7.1 -> 1.8.1
1.7.1 -> 1.8.1
1.7.1 -> 1.8.1
1.7.1 -> 1.8.1
1.7.1 -> 1.8.1
1.7.1 -> 1.8.1
1.7.1 -> 1.8.1
1.7.1 -> 1.8.1
1.7.1 -> 1.8.1
1.7.1 -> 1.8.1
v3.1.14-license-compliance -> v3.2.3-license-compliance
1.0.0 -> 1.8.0
0.4.2 -> 0.5.0
v2.28.1 -> v2.31.2
v2.30.3 -> v2.31.2
4.44-alpine -> 4.48-alpine
1.3.2 -> 1.4.1
1.3.2 -> 1.4.1
0.14.2 -> 0.16.1
v2.0.10 -> v2.7.0
v1.1.3 -> v1.2.1
v2.2.4 -> v2.4.2
v1.5.3 -> v1.6.2
v2.6.0 -> v2.7.0
v0.42.4 -> v0.43.2
1.21.6 -> 1.22.0
0.90.11 -> 0.91.1
0.49.0 -> 0.52.0
13.6 -> 13.7
v0.20.3 -> v0.22.5
v2.1.15 -> v2.3.4
v3.2.11 -> v3.3.6
v3.2.11 -> v3.3.6
v7.1.3 -> v7.3.0
v0.33.0 -> v0.41.1
v0.52.1 -> v0.56.3
v0.52.1 -> v0.56.3
v0.23.0 -> v0.24.0
v1.2.2 -> v1.3.1
v2.31.2 -> v2.36.0
0.24.0 -> 0.29.0
v0.23.2 -> v0.26.0
v1.26.2 -> v1.27.3
3.9.19-management -> 3.10.2-management
v1.6.3 -> v1.8.1
v1.6.3 -> v1.8.1
v1.8.9 -> v1.9.4
2.6.3 -> 2.8.0
0.9.2 -> 0.11.0
0.9.2 -> 0.11.0
1.24.0 -> 1.25.0
v0.4.0 -> 0.5.1
Release Notes
apache/airflow
v2.3.1
Significant Changes
^^^^^^^^^^^^^^^^^^^
No significant changes.
Bug Fixes
^^^^^^^^^
- ``CeleryExecutor`` (#23690)
- ``dag-processor`` fetch metadata database config (#23575)
- ``reschedule`` to the serialized fields for the ``BaseSensorOperator`` (#23674)
- ``execution_timeout`` as timedelta (#23655)
- ``send_callback`` method for ``CeleryKubernetesExecutor`` and ``LocalKubernetesExecutor`` (#23617)
- ``PythonVirtualenvOperator`` templated_fields (#23559)
- ``root_dag_id`` too (#23536)
- ``KubernetesJobWatcher`` getting stuck on resource too old (#23521)
- ``moved`` table exists (#23491)
- ``core__sql_alchemy_conn__cmd`` (#23441)
- ``airflow dags show`` for mapped operator (#23339)
- ``ti.mark_success_url`` (#23330)
- ``<Time />`` in Mapped Instance table (#23313)
Doc only changes
^^^^^^^^^^^^^^^^
- ``dag_processing.processor_timeouts`` to counters section (#23393)
- ``expand()`` and ``partial()`` (#23373)
Misc/Internal
^^^^^^^^^^^^^
v2.3.0
For production docker image related changes, see the `Docker Image Changelog <https://airflow.apache.org/docs/docker-stack/changelog.html>`_.
Significant Changes
^^^^^^^^^^^^^^^^^^^
Passing ``execution_date`` to ``XCom.set()``, ``XCom.clear()``, ``XCom.get_one()``, and ``XCom.get_many()`` is deprecated (#19825)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Continuing the effort to bind TaskInstance to a DagRun, XCom entries are now also tied to a DagRun. Use the ``run_id`` argument to specify the DagRun instead.
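For illustration, a minimal sketch of the new-style lookup (the ``run_id`` value and DAG/task names below are hypothetical, and ``XCom.get_one`` is assumed to accept ``run_id`` as a keyword argument in 2.3):
.. code-block:: python

    from airflow.models import XCom

    # 2.3 style: identify the DagRun by run_id rather than the
    # deprecated execution_date argument.
    value = XCom.get_one(
        run_id="scheduled__2022-01-01T00:00:00+00:00",  # hypothetical run_id
        dag_id="example_dag",
        task_id="extract",
        key="return_value",
    )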
Task log templates are now read from the metadata database instead of ``airflow.cfg`` (#20165)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Previously, a task's log is dynamically rendered from the ``[core] log_filename_template`` and ``[elasticsearch] log_id_template`` config values at runtime. This resulted in unfortunate characteristics, e.g. it is impractical to modify the config value after an Airflow instance has been running for a while, since all existing task logs have to be saved under the previous format and cannot be found with the new config value.
A new ``log_template`` table is introduced to solve this problem. This table is synchronized with the aforementioned config values every time Airflow starts, and a new field ``log_template_id`` is added to every DAG run to point to the format used by tasks (``NULL`` indicates the first ever entry for compatibility).
Minimum kubernetes library version bumped from ``3.0.0`` to ``21.7.0`` (#20759)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
.. note::
   This is only about changing the ``kubernetes`` library, not the Kubernetes cluster. Airflow support for Kubernetes versions is described in `Installation prerequisites <https://airflow.apache.org/docs/apache-airflow/stable/installation/prerequisites.html>`_.
No change in behavior is expected. This was necessary in order to take advantage of a `bugfix <https://github.com/kubernetes-client/python-base/commit/70b78cd8488068c014b6d762a0c8d358273865b4>`_ concerning refreshing of Kubernetes API tokens with EKS, which enabled the removal of some `workaround code <https://github.com/apache/airflow/pull/20759>`_.
XCom now defined by ``run_id`` instead of ``execution_date`` (#20975)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
As a continuation to the TaskInstance-DagRun relation change started in Airflow 2.2, the ``execution_date`` column on XCom has been removed from the database and replaced by an `association proxy <https://docs.sqlalchemy.org/en/13/orm/extensions/associationproxy.html>`_ field at the ORM level. If you access Airflow's metadata database directly, you should rewrite the implementation to use the ``run_id`` column instead.
Note that Airflow's metadata database definition on both the database and ORM levels is considered an implementation detail without strict backward compatibility guarantees.
Non-JSON-serializable params deprecated (#21135).
"""""""""""""""""""""""""""""""""""""""""""""""""
It was previously possible to use dag or task param defaults that were not JSON-serializable.
For example, this worked previously:
.. code-block:: python

    @dag.task(params={"a": {1, 2, 3}, "b": pendulum.now()})
    def datetime_param(value):
        print(value)


    datetime_param("{{ params.a }} | {{ params.b }}")

Note the use of ``set`` and ``datetime`` types, which are not JSON-serializable. This behavior is problematic because to override these values in a dag run conf, you must use JSON, which could make these params non-overridable. Another problem is that the support for param validation assumes JSON. Use of non-JSON-serializable params will be removed in Airflow 3.0 and, until then, their use will produce a warning at parse time.
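A JSON-serializable equivalent might look like this (a minimal sketch; the ISO-8601 string stand-in for the datetime is an assumption about how you might encode it):
.. code-block:: python

    # JSON-serializable stand-ins: a list instead of a set, and an
    # ISO-8601 string instead of a datetime object.
    @dag.task(params={"a": [1, 2, 3], "b": "2022-05-01T00:00:00+00:00"})
    def datetime_param(value):
        print(value)


    datetime_param("{{ params.a }} | {{ params.b }}")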
You must use ``postgresql://`` instead of ``postgres://`` in ``sql_alchemy_conn`` for SQLAlchemy 1.4.0+ (#21205)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
When you use SQLAlchemy 1.4.0+, you need to use ``postgresql://`` as the scheme in the ``sql_alchemy_conn``. In previous versions of SQLAlchemy it was possible to use ``postgres://``, but using it in SQLAlchemy 1.4.0+ results in:
.. code-block::

    E sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgres

If you cannot change the scheme of your URL immediately, Airflow continues to work with SQLAlchemy 1.3 and you can downgrade SQLAlchemy, but we recommend updating the scheme.
Details in the `SQLAlchemy Changelog <https://docs.sqlalchemy.org/en/14/changelog/changelog_14.html#change-3687655465c25a39b968b4f5f6e9170b>`_.
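For example, a corrected ``sql_alchemy_conn`` entry (the host and credentials shown are placeholders):
.. code-block::

    # before (rejected by SQLAlchemy 1.4.0+)
    sql_alchemy_conn = postgres://airflow:airflow@localhost:5432/airflow
    # after
    sql_alchemy_conn = postgresql://airflow:airflow@localhost:5432/airflow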
``auth_backends`` replaces ``auth_backend`` configuration setting (#21472)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Previously, only one backend was used to authorize use of the REST API. In 2.3 this was changed to support multiple backends, separated by whitespace. Each will be tried in turn until a successful response is returned.
This setting is also used for the deprecated experimental API, which only uses the first option even if multiple are given.
``airflow.models.base.Operator`` is removed (#21505)
""""""""""""""""""""""""""""""""""""""""""""""""""""
Previously, there was an empty class ``airflow.models.base.Operator`` for "type hinting". This class was never really useful for anything (everything it did could be done better with ``airflow.models.baseoperator.BaseOperator``), and has been removed. If you are relying on the class's existence, use ``BaseOperator`` (for concrete operators), ``airflow.models.abstractoperator.AbstractOperator`` (the base class of both ``BaseOperator`` and the AIP-42 ``MappedOperator``), or ``airflow.models.operator.Operator`` (a union type ``BaseOperator | MappedOperator`` for type annotation).
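As an illustrative sketch of the type-annotation use case (the helper function below is hypothetical):
.. code-block:: python

    from airflow.models.operator import Operator

    # Accepts both classic operators and AIP-42 mapped operators,
    # since Operator is the union BaseOperator | MappedOperator.
    def describe(task: Operator) -> str:
        return f"{task.task_id} ({type(task).__name__})"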
Zip files in the DAGs folder can no longer have a ``.py`` extension (#21538)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
It was previously possible to have any extension for zip files in the DAGs folder. Now ``.py`` files are going to be loaded as modules without checking whether they are zip files, as this leads to less IO. If a ``.py`` file in the DAGs folder is a zip compressed file, parsing it will fail with an exception.
``auth_backends`` includes session (#21640)
"""""""""""""""""""""""""""""""""""""""""""
To allow the Airflow UI to use the API, the previous default authorization backend ``airflow.api.auth.backend.deny_all`` is changed to ``airflow.api.auth.backend.session``, and this is automatically added to the list of API authorization backends if a non-default value is set.
Default templates for log filenames and elasticsearch log_id changed (#21734)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
In order to support Dynamic Task Mapping, the default templates for per-task-instance logging have changed. If your config contains the old default values they will be upgraded in place.
If you are happy with the new config values you should remove the setting in ``airflow.cfg`` and let the default value be used. The old default values were:
- ``[core] log_filename_template``: ``{{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log``
- ``[elasticsearch] log_id_template``: ``{dag_id}-{task_id}-{execution_date}-{try_number}``
``[core] log_filename_template`` now uses "hive partition style" of ``dag_id=<id>/run_id=<id>`` by default, which may cause problems on some older FAT filesystems. If this affects you then you will have to change the log template.
If you have customized the templates you should ensure that they contain ``{{ ti.map_index }}`` if you want to use dynamically mapped tasks.
If after upgrading you find your task logs are no longer accessible, try adding a row in the ``log_template`` table with ``id=0`` containing your previous ``log_id_template`` and ``log_filename_template``. For example, if you used the defaults in 2.2.5:
.. code-block:: sql
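    -- Reconstructed example row; the column names here are an assumption
    -- based on the 2.3 log_template schema described above.
    INSERT INTO log_template (id, filename, elasticsearch_id, created_at)
    VALUES (
        0,
        '{{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log',
        '{dag_id}-{task_id}-{execution_date}-{try_number}',
        NOW()
    );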
BaseOperatorLink's ``get_link`` method changed to take a ``ti_key`` keyword argument (#21798)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
In v2.2 we "deprecated" passing an execution date to XCom.get methods, but there was no other option for operator links, as they were only passed an execution_date.
Now in 2.3, as part of Dynamic Task Mapping (AIP-42), we will need to add map_index to the XCom row to support the "reduce" part of the API.
In order to support that cleanly we have changed the interface for BaseOperatorLink to take a TaskInstanceKey as the ``ti_key`` keyword argument (as execution_date + task is no longer unique for mapped operators).
The existing signature will be detected (by the absence of the ``ti_key`` argument) and continue to work.
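A minimal sketch of the 2.3-style signature (the class name and link URL here are hypothetical):
.. code-block:: python

    from airflow.models.baseoperator import BaseOperatorLink
    from airflow.models.taskinstance import TaskInstanceKey

    class MyLink(BaseOperatorLink):
        name = "My Link"

        # 2.3 style: receive a TaskInstanceKey instead of an execution_date.
        def get_link(self, operator, *, ti_key: TaskInstanceKey) -> str:
            return f"https://example.com/{ti_key.dag_id}/{ti_key.task_id}/{ti_key.run_id}"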
``ReadyToRescheduleDep`` now only runs when ``reschedule`` is True (#21815)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
When a ``ReadyToRescheduleDep`` is run, it now checks the ``reschedule`` attribute on the operator, and always reports itself as passed unless that attribute is set to True. If you use this dep class on your custom operator, you will need to add this attribute to the operator class. Built-in operator classes that use this dep class (including sensors and all subclasses) already have this attribute and are not affected.
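A minimal sketch of a custom operator satisfying this change and the class-level ``deps`` requirement described next (the operator name is hypothetical, and the dep import path is assumed from 2.3):
.. code-block:: python

    from airflow.models.baseoperator import BaseOperator
    from airflow.ti_deps.deps.ready_to_reschedule import ReadyToRescheduleDep

    class MyReschedulingOperator(BaseOperator):
        # Class-level attribute checked by ReadyToRescheduleDep in 2.3;
        # without it, the dep always reports itself as passed.
        reschedule = True

        # deps must now be defined at the class level (see the next change).
        deps = BaseOperator.deps | {ReadyToRescheduleDep()}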
The ``deps`` attribute on an operator class should be a class-level attribute (#21815)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
To support operator-mapping (AIP-42), the ``deps`` attribute on an operator class must be a set at the class level. This means that if a custom operator implements this as an instance-level variable, it will not be able to be used for operator-mapping. This does not affect existing code, but we highly recommend you restructure the operator's dep logic in order to support the new feature.
Deprecation: ``Connection.extra`` must be JSON-encoded dict (#21816)
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
TLDR
Airflow's Connection is used for storing credentials. For storage of information that does not fit into user / password / host / schema / port, we have the ``extra`` string field. Its intention was always to provide for storage of arbitrary key-value pairs, like ``no_host_key_check`` in the SSH hook, or ``keyfile_dict`` in GCP.
But since the field is a string, it's technically been permissible to store any string value. For example, one could have stored the string value ``'my-website.com'`` and used this in the hook. But this is a very bad practice. One reason is intelligibility: when you look at the value for ``extra``, you don't have any idea what its purpose is. Better would be to store ``{"api_host": "my-website.com"}``, which at least tells you something about the value. Another reason is extensibility: if you store the API host as a simple string value, what happens if you need to add more information, such as the API endpoint, or credentials? Then you would need to convert the string to a dict, and this would be a breaking change.
For these reasons, starting in Airflow 3.0 we will require that the ``Connection.extra`` field store a JSON-encoded Python dict.
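A minimal sketch of the JSON-encoded form (the connection id and keys here are hypothetical):
.. code-block:: python

    import json

    from airflow.models import Connection

    # Store structured key-value pairs rather than a bare string.
    conn = Connection(
        conn_id="my_api",
        conn_type="http",
        extra=json.dumps({"api_host": "my-website.com", "api_endpoint": "/v1"}),
    )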
How will I be affected?
dexidp/dex
v2.31.2
Compare Source
This is a maintenance release upgrading Go to apply some security patches.
The official container image for this release can be pulled from
What's Changed
Full Changelog: dexidp/dex@v2.31.1...v2.31.2
v2.31.1
Compare Source
This is a maintenance release upgrading Go to apply some security patches.
What's Changed
Full Changelog: dexidp/dex@v2.31.0...v2.31.1
v2.31.0
Compare Source
The official container image for this release can be pulled from
What's Changed
Dependency updates
Configuration
📅 Schedule: At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR has been generated by Renovate Bot.