================
Change history
================
.. contents::
:local:
.. _version-2.3.1:
2.3.1
=====
:release-date: 2011-08-07 08:00 P.M BST
Fixes
-----
* The :setting:`CELERY_AMQP_TASK_RESULT_EXPIRES` setting did not work,
resulting in an AMQP related error about not being able to serialize
floats while trying to publish task states (Issue #446).
.. _version-2.3.0:
2.3.0
=====
:release-date: 2011-08-05 12:00 P.M BST
:tested: cPython: 2.5, 2.6, 2.7; PyPy: 1.5; Jython: 2.5.2
.. _v230-important:
Important Notes
---------------
* Now requires Kombu 1.2.1
* Results are now disabled by default.
The AMQP backend was not a good default because often the users were
not consuming the results, resulting in thousands of queues.
While the queues can be configured to expire if left unused, it was not
possible to enable this by default because this was only available in
recent RabbitMQ versions (2.1.1+).
With this change enabling a result backend will be a conscious choice,
which will hopefully lead the user to read the documentation and be aware
of any common pitfalls with the particular backend.
The default backend is now a dummy backend
(:class:`celery.backends.base.DisabledBackend`). Saving state is simply a
no-op operation, and AsyncResult.wait(), .result, .state, etc. will raise
a :exc:`NotImplementedError` telling the user to configure the result backend.
For help choosing a backend please see :ref:`task-result-backends`.
If you depend on the previous default which was the AMQP backend, then
you have to set this explicitly before upgrading::
CELERY_RESULT_BACKEND = "amqp"
.. note::
For django-celery users the default backend is still ``database``,
and results are not disabled by default.
* The Debian init scripts have been deprecated in favor of the generic-init.d
init scripts.
In addition, generic init scripts for celerybeat and celeryev have been
added.
.. _v230-news:
News
----
* Automatic connection pool support.
The pool is used by everything that requires a broker connection. For
example applying tasks, sending broadcast commands, retrieving results
with the AMQP result backend, and so on.
The pool is disabled by default, but you can enable it by configuring the
:setting:`BROKER_POOL_LIMIT` setting::
BROKER_POOL_LIMIT = 10
A limit of 10 means a maximum of 10 simultaneous connections can co-exist.
Only a single connection will ever be used in a single-thread
environment, but in a concurrent environment (threads, greenlets, etc., but
not processes) when the limit has been exceeded, any attempt to acquire a
connection will block the thread and wait for a connection to be released.
This is something to take into consideration when choosing a limit.
A limit of :const:`None` or 0 means no limit, and connections will be
established and closed every time.
* Introducing Chords (taskset callbacks).
A chord is a task that only executes after all of the tasks in a taskset
have finished executing. It's a fancy term for "taskset callbacks"
adopted from
`Cω <http://research.microsoft.com/en-us/um/cambridge/projects/comega/>`_.
It works with all result backends, but the best implementation is
currently provided by the Redis result backend.
Here's an example chord::
>>> chord(add.subtask((i, i))
... for i in xrange(100))(tsum.subtask()).get()
9900
Please read the :ref:`Chords section in the user guide <chords>`, if you
want to know more.
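The example above assumes two tasks along these lines (a minimal sketch for
illustration only; the ``add``/``tsum`` names are not part of the release):

.. code-block:: python

    from celery.task import task

    @task
    def add(x, y):
        # Executed once for every subtask in the chord header.
        return x + y

    @task
    def tsum(numbers):
        # The chord callback receives the list of all header results.
        return sum(numbers)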
* Time limits can now be set for individual tasks.
To set the soft and hard time limits for a task use the ``time_limit``
and ``soft_time_limit`` attributes:
.. code-block:: python

    import time

    from celery.task import task

    @task(time_limit=60, soft_time_limit=30)
    def sleeptask(seconds):
        time.sleep(seconds)
If the attributes are not set, then the worker's default time limits
will be used.
New in this version you can also change the time limits for a task
at runtime using the :func:`time_limit` remote control command::
>>> from celery.task import control
>>> control.time_limit("tasks.sleeptask",
... soft=60, hard=120, reply=True)
[{'worker1.example.com': {'ok': 'time limits set successfully'}}]
Only tasks that start executing after the time limit change will be affected.
.. note::
Soft time limits will still not work on Windows or other platforms
that do not have the ``SIGUSR1`` signal.
* Redis backend configuration directive names changed to include the
``CELERY_`` prefix.
===================================== ===================================
**Old setting name** **Replace with**
===================================== ===================================
`REDIS_HOST` `CELERY_REDIS_HOST`
`REDIS_PORT` `CELERY_REDIS_PORT`
`REDIS_DB` `CELERY_REDIS_DB`
`REDIS_PASSWORD` `CELERY_REDIS_PASSWORD`
===================================== ===================================
The old names are still supported but pending deprecation.
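For example, a configuration module that used the old names can be updated
like this (illustrative values only):

.. code-block:: python

    # Old names (still work, but pending deprecation):
    #   REDIS_HOST = "localhost"
    #   REDIS_PORT = 6379

    # New names:
    CELERY_REDIS_HOST = "localhost"
    CELERY_REDIS_PORT = 6379
    CELERY_REDIS_DB = 0
    CELERY_REDIS_PASSWORD = None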
* PyPy: The default pool implementation used is now multiprocessing
if running on PyPy 1.5.
* celeryd-multi: now supports "pass through" options.
Pass through options make it easier to use celery without a
configuration file, or just add last-minute options on the command
line.
Example use:
$ celeryd-multi start 4 -c 2 -- broker.host=amqp.example.com \
broker.vhost=/ \
celery.disable_rate_limits=yes
* celerybeat: Now retries establishing the connection (Issue #419).
* celeryctl: New ``list bindings`` command.
Lists the current or all available bindings, depending on the
broker transport used.
* Heartbeat is now sent every 30 seconds (previously every 2 minutes).
* ``ResultSet.join_native()`` and ``iter_native()`` are now supported by
the Redis and Cache result backends.
This is an optimized version of ``join()`` using the underlying
backend's ability to fetch multiple results at once.
* Can now use SSL when sending error e-mails by enabling the
:setting:`EMAIL_USE_SSL` setting.
* ``events.default_dispatcher()``: Context manager to easily obtain
an event dispatcher instance using the connection pool.
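A rough usage sketch (the app/attribute access shown here is an assumption;
the point is that the dispatcher borrows a connection from the pool and
returns it on exit):

.. code-block:: python

    from celery.app import app_or_default

    app = app_or_default()

    # Hypothetical sketch: borrow a dispatcher from the pool and send a
    # custom event; the connection is returned to the pool on exit.
    with app.events.default_dispatcher() as dispatcher:
        dispatcher.send("worker-custom-event", info="something happened")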
* Import errors in the configuration module will not be silenced anymore.
* ResultSet.iterate: Now supports the ``timeout``, ``propagate`` and
``interval`` arguments.
* ``with_default_connection`` -> ``with default_connection``
* TaskPool.apply_async: Keyword arguments ``callbacks`` and ``errbacks``
have been renamed to ``callback`` and ``errback`` and take a single scalar
value instead of a list.
* No longer propagates errors occurring during process cleanup (Issue #365)
* Added ``TaskSetResult.delete()``, which will delete a previously
saved taskset result.
* Celerybeat now syncs every 3 minutes instead of only at
shutdown (Issue #382).
* Monitors now properly handle unknown events, so user-defined events
are displayed.
* Terminating a task on Windows now also terminates all of the task's child
processes (Issue #384).
* celeryd: ``-I|--include`` option now always searches the current directory
to import the specified modules.
* Cassandra backend: Now expires results by using TTLs.
* Functional test suite in ``funtests`` is now actually working properly, and
passing tests.
.. _v230-fixes:
Fixes
-----
* celeryev was trying to create the pidfile twice.
* celery.contrib.batches: Fixed problem where tasks failed
silently (Issue #393).
* Fixed an issue where logging objects would give "<Unrepresentable",
even though the objects were representable.
* ``CELERY_TASK_ERROR_WHITE_LIST`` is now properly initialized
in all loaders.
* celeryd_detach now passes through command-line configuration.
* Remote control command ``add_consumer`` now does nothing if the
queue is already being consumed from.
.. _version-2.2.7:
2.2.7
=====
:release-date: 2011-06-13 16:00 P.M BST
* New signals: :signal:`after_setup_logger` and
:signal:`after_setup_task_logger`
These signals can be used to augment logging configuration
after Celery has set up logging.
* Redis result backend now works with Redis 2.4.4.
* celeryd_multi: The :option:`--gid` option now works correctly.
* celeryd: Retry wrongfully used the repr of the traceback instead
of the string representation.
* App.config_from_object: Now loads module, not attribute of module.
* Fixed issue where logging of objects would give "<Unrepresentable: ...>"
.. _version-2.2.6:
2.2.6
=====
:release-date: 2011-04-15 16:00 P.M CEST
.. _v226-important:
Important Notes
---------------
* Now depends on Kombu 1.1.2.
* Dependency lists now explicitly specify that we don't want python-dateutil
2.x, as this version only supports py3k.
If you have installed dateutil 2.0 by accident you should downgrade
to the 1.5.0 version::
pip install -U python-dateutil==1.5.0
or by easy_install::
easy_install -U python-dateutil==1.5.0
.. _v226-fixes:
Fixes
-----
* The new ``WatchedFileHandler`` broke Python 2.5 support (Issue #367).
* Task: Don't use ``app.main`` if the task name is set explicitly.
* Sending emails did not work on Python 2.5, due to a bug in
the version detection code (Issue #378).
* Beat: Adds method ``ScheduleEntry._default_now``.
This method can be overridden to change the default value
of ``last_run_at``.
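A minimal sketch of overriding it (hypothetical subclass; the assumption is
that the return value becomes the initial ``last_run_at``):

.. code-block:: python

    from datetime import datetime, timedelta

    from celery.beat import ScheduleEntry

    class MyEntry(ScheduleEntry):

        def _default_now(self):
            # Hypothetical: treat the entry as if it last ran five
            # minutes ago, so it becomes due shortly after beat starts.
            return datetime.now() - timedelta(minutes=5)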
* An error occurring in process cleanup could mask task errors.
We no longer propagate errors happening at process cleanup,
but log them instead. This way they will not interfere with publishing
the task result (Issue #365).
* Defining tasks did not work properly when using the Django
``shell_plus`` utility (Issue #366).
* ``AsyncResult.get`` did not accept the ``interval`` and ``propagate``
arguments.
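For example (sketch, assuming an existing ``add`` task):

.. code-block:: python

    result = add.delay(2, 2)

    # Poll every 0.5 seconds, and return the exception instance instead
    # of re-raising it if the task failed.
    value = result.get(interval=0.5, propagate=False)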
* celeryd: Fixed a bug where celeryd would not shutdown if a
:exc:`socket.error` was raised.
.. _version-2.2.5:
2.2.5
=====
:release-date: 2011-03-28 06:00 P.M CEST
.. _v225-important:
Important Notes
---------------
* Now depends on Kombu 1.0.7
.. _v225-news:
News
----
* Our documentation is now hosted by Read The Docs
(http://docs.celeryproject.org), and all links have been changed to point to
the new URL.
* Logging: Now supports log rotation using external tools like `logrotate.d`_
(Issue #321)
This is accomplished by using the ``WatchedFileHandler``, which re-opens
the file if it is renamed or deleted.
.. _`logrotate.d`:
http://www.ducea.com/2006/06/06/rotating-linux-log-files-part-2-logrotate/
* :ref:`tut-otherqueues` now documents how to configure Redis/Database result
backends.
* gevent: Now supports ETA tasks.
But gevent still needs ``CELERY_DISABLE_RATE_LIMITS=True`` to work.
* TaskSet User Guide: now contains TaskSet callback recipes.
* Eventlet: New signals:
* ``eventlet_pool_started``
* ``eventlet_pool_preshutdown``
* ``eventlet_pool_postshutdown``
* ``eventlet_pool_apply``
See :mod:`celery.signals` for more information.
* New :setting:`BROKER_TRANSPORT_OPTIONS` setting can be used to pass
additional arguments to a particular broker transport.
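The setting takes a dictionary that is forwarded to the transport; which keys
are understood depends entirely on the transport in use, so the key below is
purely illustrative:

.. code-block:: python

    # Illustrative only: consult the Kombu documentation for the options
    # supported by your broker transport.
    BROKER_TRANSPORT_OPTIONS = {"visibility_timeout": 18000}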
* celeryd: ``worker_pid`` is now part of the request info as returned by
broadcast commands.
* TaskSet.apply/TaskSet.apply_async now accept an optional ``taskset_id``
argument.
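A rough sketch of passing a predefined id (assuming an existing ``add`` task;
the id-generating helper is an assumption):

.. code-block:: python

    from celery.task.sets import TaskSet
    from celery.utils import gen_unique_id   # assumed helper

    ts = TaskSet(tasks=[add.subtask((i, i)) for i in range(10)])
    result = ts.apply_async(taskset_id=gen_unique_id())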
* The taskset_id (if any) is now available in the Task request context.
* SQLAlchemy result backend: task_id and taskset_id columns now have a
unique constraint (tables need to be recreated for this to take effect).
* Task Userguide: Added section about choosing a result backend.
* Removed unused attribute ``AsyncResult.uuid``.
.. _v225-fixes:
Fixes
-----
* multiprocessing.Pool: Fixes race condition when marking job with
``WorkerLostError`` (Issue #268).
The process may have published a result before it was terminated,
but we have no reliable way to detect that this is the case.
So we have to wait for 10 seconds before marking the result with
WorkerLostError. This gives the result handler a chance to retrieve the
result.
* multiprocessing.Pool: Shutdown could hang if rate limits disabled.
There was a race condition when the MainThread was waiting for the pool
semaphore to be released. The ResultHandler now terminates after 5
seconds if there are unacked jobs, but no worker processes left to start
them (it needs to timeout because there could still be an ack+result
that we haven't consumed from the result queue. It
is unlikely we will receive any after 5 seconds with no worker processes).
* celerybeat: Now creates pidfile even if the ``--detach`` option is not set.
* eventlet/gevent: The broadcast command consumer is now running in a separate
greenthread.
This ensures broadcast commands will take priority even if there are many
active tasks.
* Internal module ``celery.worker.controllers`` renamed to
``celery.worker.mediator``.
* celeryd: Threads now terminate the program by calling ``os._exit``, as it
is the only way to ensure exit in the case of syntax errors, or other
unrecoverable errors.
* Fixed typo in ``maybe_timedelta`` (Issue #352).
* celeryd: Broadcast commands now log with loglevel debug instead of warning.
* AMQP Result Backend: Now resets cached channel if the connection is lost.
* Polling results with the AMQP result backend was not working properly.
* Rate limits: No longer sleeps if there are no tasks, but rather waits for
the task received condition (Performance improvement).
* ConfigurationView: ``iter(dict)`` should return keys, not items (Issue #362).
* celerybeat: PersistentScheduler now automatically removes a corrupted
schedule file (Issue #346).
* Programs that don't support positional command-line arguments now provide
a user-friendly error message.
* Programs no longer try to load the configuration file when showing
``--version`` (Issue #347).
* Autoscaler: The "all processes busy" log message is now severity debug
instead of error.
* celeryd: If the message body can't be decoded, it is now passed through
``safe_str`` when logging.
This is to ensure we don't get additional decoding errors when trying to log
the failure.
* ``app.config_from_object``/``app.config_from_envvar`` now works for all
loaders.
* Now emits a user-friendly error message if the result backend name is
unknown (Issue #349).
* :mod:`celery.contrib.batches`: Now sets loglevel and logfile in the task
request so ``task.get_logger`` works with batch tasks (Issue #357).
* celeryd: An exception was raised if using the amqp transport and the prefetch
count value exceeded 65535 (Issue #359).
The prefetch count is incremented for every received task with an
ETA/countdown defined. The prefetch count is a short, so it can only support
a maximum value of 65535. If the value exceeds the maximum we now
disable the prefetch count; it is re-enabled as soon as the value is below
the limit again.
* cursesmon: Fixed unbound local error (Issue #303).
* eventlet/gevent is now imported on demand so autodoc can import the modules
without having eventlet/gevent installed.
* celeryd: Ack callback now properly handles ``AttributeError``.
* ``Task.after_return`` is now always called *after* the result has been
written.
* Cassandra Result Backend: Should now work with the latest ``pycassa``
version.
* multiprocessing.Pool: No longer cares if the putlock semaphore is released
too many times (this can happen if one or more worker processes are
killed).
* SQLAlchemy Result Backend: Now returns accidentally removed ``date_done`` again
(Issue #325).
* Task.request context is now always initialized to ensure calling the task
function directly works even if it actively uses the request context.
* Fixed an exception occurring when iterating over the result from
``TaskSet.apply``.
* eventlet: Now properly schedules tasks with an ETA in the past.
.. _version-2.2.4:
2.2.4
=====
:release-date: 2011-02-19 12:00 AM CET
.. _v224-fixes:
Fixes
-----
* celeryd: 2.2.3 broke error logging, resulting in tracebacks not being logged.
* AMQP result backend: Polling task states did not work properly if there
was more than one result message in the queue.
* ``TaskSet.apply_async()`` and ``TaskSet.apply()`` now support an optional
``taskset_id`` keyword argument (Issue #331).
* The current taskset id (if any) is now available in the task context as
``request.taskset`` (Issue #329).
* SQLAlchemy result backend: `date_done` was no longer part of the results as it had
been accidentally removed. It is now available again (Issue #325).
* SQLAlchemy result backend: Added unique constraint on `Task.task_id` and
`TaskSet.taskset_id`. Tables need to be recreated for this to take effect.
* Fixed exception raised when iterating on the result of ``TaskSet.apply()``.
* Tasks Userguide: Added section on choosing a result backend.
.. _version-2.2.3:
2.2.3
=====
:release-date: 2011-02-12 04:00 P.M CET
.. _v223-fixes:
Fixes
-----
* Now depends on Kombu 1.0.3
* Task.retry now supports a ``max_retries`` argument, used to change the
default value.
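A minimal sketch (the ``fetch_page``/``download`` names are hypothetical):

.. code-block:: python

    from celery.task import task

    @task(max_retries=3)
    def fetch_page(url):
        try:
            return download(url)      # hypothetical helper
        except IOError, exc:
            # Override the class default for this particular call.
            fetch_page.retry(exc=exc, max_retries=10)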
* `multiprocessing.cpu_count` may raise :exc:`NotImplementedError` on
platforms where this is not supported (Issue #320).
* Coloring of log messages broke if the logged object was not a string.
* Fixed several typos in the init script documentation.
* A regression caused `Task.exchange` and `Task.routing_key` to no longer
have any effect. This is now fixed.
* Routing Userguide: Fixes typo, routers in :setting:`CELERY_ROUTES` must be
instances, not classes.
* :program:`celeryev` did not create pidfile even though the
:option:`--pidfile` argument was set.
* Task logger format was no longer used (Issue #317).
The id and name of the task are now part of the log message again.
* A safe version of ``repr()`` is now used in strategic places to ensure
objects with a broken ``__repr__`` do not crash the worker, or otherwise
make errors hard to understand (Issue #298).
* Remote control command ``active_queues``: did not account for queues added
at runtime.
In addition the dictionary replied by this command now has a different
structure: the exchange key is now a dictionary containing the
exchange declaration in full.
* The :option:`-Q` option to :program:`celeryd` removed unused queue
declarations, so routing of tasks could fail.
Queues are no longer removed, but rather `app.amqp.queues.consume_from()`
is used as the list of queues to consume from.
This ensures all queues are available for routing purposes.
* celeryctl: Now supports the `inspect active_queues` command.
.. _version-2.2.2:
2.2.2
=====
:release-date: 2011-02-03 04:00 P.M CET
.. _v222-fixes:
Fixes
-----
* Celerybeat could not read the schedule properly, so entries in
:setting:`CELERYBEAT_SCHEDULE` would not be scheduled.
* Task error log message now includes `exc_info` again.
* The `eta` argument can now be used with `task.retry`.
Previously it was overwritten by the countdown argument.
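For example (sketch; the ``process``/``ready`` names are hypothetical):

.. code-block:: python

    from datetime import datetime, timedelta

    from celery.task import task

    @task
    def process(record_id):
        if not ready(record_id):      # hypothetical check
            # Retry at an absolute point in time instead of a countdown.
            process.retry(eta=datetime.now() + timedelta(minutes=10))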
* celeryd-multi/celeryd_detach: Now logs errors occurring when executing
the `celeryd` command.
* daemonizing cookbook: Fixed typo ``--time-limit 300`` ->
``--time-limit=300``
* Colors in logging broke non-string objects in log messages.
* ``setup_task_logger`` no longer makes assumptions about magic task kwargs.
.. _version-2.2.1:
2.2.1
=====
:release-date: 2011-02-02 04:00 P.M CET
.. _v221-fixes:
Fixes
-----
* Eventlet pool was leaking memory (Issue #308).
* Deprecated function ``celery.execute.delay_task`` was accidentally removed,
now available again.
* ``BasePool.on_terminate`` stub did not exist.
* celeryd detach: Adds readable error messages if user/group name does not
exist.
* Smarter handling of unicode decode errors when logging errors.
.. _version-2.2.0:
2.2.0
=====
:release-date: 2011-02-01 10:00 AM CET
.. _v220-important:
Important Notes
---------------
* Carrot has been replaced with `Kombu`_.
Kombu is the next-generation messaging framework for Python,
fixing several flaws present in Carrot that were hard to fix
without breaking backwards compatibility.
Also it adds:
* First-class support for virtual transports; Redis, Django ORM,
SQLAlchemy, Beanstalk, MongoDB, CouchDB and in-memory.
* Consistent error handling with introspection,
* The ability to ensure that an operation is performed by gracefully
handling connection and channel errors,
* Message compression (zlib, bzip2, or custom compression schemes).
This means that `ghettoq` is no longer needed as the
functionality it provided is already available in Celery by default.
The virtual transports are also more feature complete with support
for exchanges (direct and topic). The Redis transport even supports
fanout exchanges so it is able to perform worker remote control
commands.
.. _`Kombu`: http://pypi.python.org/pypi/kombu
* Magic keyword arguments pending deprecation.
The magic keyword arguments were responsible for many problems
and quirks: notably issues with tasks and decorators, and name
collisions in keyword arguments for the unaware.
It wasn't easy to find a way to deprecate the magic keyword arguments,
but we think this is a solution that makes sense and it will not
have any adverse effects for existing code.
The path to a magic keyword argument free world is:
* the `celery.decorators` module is deprecated and the decorators
can now be found in `celery.task`.
* The decorators in `celery.task` disable keyword arguments by
default
* All examples in the documentation have been changed to use
`celery.task`.
This means that the following will have magic keyword arguments
enabled (old style):
.. code-block:: python
from celery.decorators import task
@task
def add(x, y, **kwargs):
print("In task %s" % kwargs["task_id"])
return x + y
And this will not use magic keyword arguments (new style):
.. code-block:: python
from celery.task import task
@task
def add(x, y):
print("In task %s" % add.request.id)
return x + y
In addition, tasks can choose not to accept magic keyword arguments by
setting the `task.accept_magic_kwargs` attribute.
.. admonition:: Deprecation
Using the decorators in :mod:`celery.decorators` emits a
:class:`PendingDeprecationWarning` with a helpful message urging
you to change your code, in version 2.4 this will be replaced with
a :class:`DeprecationWarning`, and in version 3.0 the
:mod:`celery.decorators` module will be removed and no longer exist.
Similarly, the `task.accept_magic_kwargs` attribute will no
longer have any effect starting from version 3.0.
* The magic keyword arguments are now available as `task.request`
This is called *the context*. Using thread-local storage the
context contains state that is related to the current request.
It is mutable and you can add custom attributes that will only be seen
by the current task request.
The following context attributes are always available:
===================================== ===================================
**Magic Keyword Argument** **Replace with**
===================================== ===================================
`kwargs["task_id"]` `self.request.id`
`kwargs["delivery_info"]` `self.request.delivery_info`
`kwargs["task_retries"]` `self.request.retries`
`kwargs["logfile"]` `self.request.logfile`
`kwargs["loglevel"]` `self.request.loglevel`
`kwargs["task_is_eager` `self.request.is_eager`
**NEW** `self.request.args`
**NEW** `self.request.kwargs`
===================================== ===================================
In addition, the following methods now automatically use the current
context, so you don't have to pass `kwargs` manually anymore:
* `task.retry`
* `task.get_logger`
* `task.update_state`
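For example, a retrying task no longer has to forward the magic ``kwargs``
(sketch; the ``deliver``/``send`` names are hypothetical):

.. code-block:: python

    from celery.task import task

    @task
    def deliver(message):
        logger = deliver.get_logger()     # picks up the current context
        try:
            send(message)                 # hypothetical helper
        except Exception, exc:
            logger.warning("Delivery failed, retrying...")
            deliver.retry(exc=exc)        # no kwargs needed anymore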
* `Eventlet`_ support.
This is great news for I/O-bound tasks!
To change pool implementations you use the :option:`-P|--pool` argument
to :program:`celeryd`, or globally using the
:setting:`CELERYD_POOL` setting. This can be the full name of a class,
or one of the following aliases: `processes`, `eventlet`, `gevent`.
For more information please see the :ref:`concurrency-eventlet` section
in the User Guide.
.. admonition:: Why not gevent?
For our first alternative concurrency implementation we have focused
on `Eventlet`_, but there is also an experimental `gevent`_ pool
available. This is missing some features, notably the ability to
schedule ETA tasks.
Hopefully the `gevent`_ support will be feature complete by
version 2.3, but this depends on user demand (and contributions).
.. _`Eventlet`: http://eventlet.net
.. _`gevent`: http://gevent.org
* Python 2.4 support deprecated!
We're happy^H^H^H^H^Hsad to announce that this is the last version
to support Python 2.4.
You are urged to make some noise if you're currently stuck with
Python 2.4. Complain to your package maintainers, sysadmins and bosses:
tell them it's time to move on!
Apart from wanting to take advantage of with-statements, coroutines,
conditional expressions and enhanced try blocks, the code base
now contains so many 2.4 related hacks and workarounds it's no longer
just a compromise, but a sacrifice.
If it really isn't your choice, and you don't have the option to upgrade
to a newer version of Python, you can just continue to use Celery 2.2.
Important fixes can be backported for as long as there is interest.
* `celeryd`: Now supports Autoscaling of child worker processes.
The :option:`--autoscale` option can be used to configure the minimum
and maximum number of child worker processes::
--autoscale=AUTOSCALE
Enable autoscaling by providing
max_concurrency,min_concurrency. Example:
--autoscale=10,3 (always keep 3 processes, but grow to
10 if necessary).
* Remote Debugging of Tasks
``celery.contrib.rdb`` is an extended version of :mod:`pdb` that
enables remote debugging of processes that do not have terminal
access.
Example usage:
.. code-block:: python
from celery.contrib import rdb
from celery.task import task
@task
def add(x, y):
result = x + y
rdb.set_trace() # <- set breakpoint
return result
:func:`~celery.contrib.rdb.set_trace` sets a breakpoint at the current
location and creates a socket you can telnet into to remotely debug
your task.
The debugger may be started by multiple processes at the same time,
so rather than using a fixed port the debugger will search for an
available port, starting from the base port (6900 by default).
The base port can be changed using the environment variable
:envvar:`CELERY_RDB_PORT`.
By default the debugger will only be available from the local host.
To enable access from the outside you have to set the environment
variable :envvar:`CELERY_RDB_HOST`.
When `celeryd` encounters your breakpoint it will log the following
information::
[INFO/MainProcess] Got task from broker:
tasks.add[d7261c71-4962-47e5-b342-2448bedd20e8]
[WARNING/PoolWorker-1] Remote Debugger:6900:
Please telnet 127.0.0.1 6900. Type `exit` in session to continue.
[2011-01-18 14:25:44,119: WARNING/PoolWorker-1] Remote Debugger:6900:
Waiting for client...
If you telnet to the specified port you will be presented
with a ``pdb`` shell::
$ telnet localhost 6900
Connected to localhost.
Escape character is '^]'.
> /opt/devel/demoapp/tasks.py(128)add()
-> return result
(Pdb)
Enter ``help`` to get a list of available commands.
It may be a good idea to read the `Python Debugger Manual`_ if
you have never used `pdb` before.
.. _`Python Debugger Manual`: http://docs.python.org/library/pdb.html
* Events are now transient and use a topic exchange (instead of direct).
The `CELERYD_EVENT_EXCHANGE`, `CELERYD_EVENT_ROUTING_KEY`,
`CELERYD_EVENT_EXCHANGE_TYPE` settings are no longer in use.
This means events will not be stored until there is a consumer, and the
events will be gone as soon as the consumer stops. Also it means there
can be multiple monitors running at the same time.
The routing key of an event is the type of event (e.g. `worker.started`,
`worker.heartbeat`, `task.succeeded`, etc.). This means a consumer can
filter on specific types, to only be alerted of the events it cares about.
Each consumer will create a unique queue, meaning it is in effect a
broadcast exchange.
This opens up a lot of possibilities, for example the workers could listen
for worker events to know what workers are in the neighborhood, and even
restart workers when they go down (or use this information to optimize
tasks/autoscaling).
.. note::
The event exchange has been renamed from "celeryevent" to "celeryev"
so it does not collide with older versions.
If you would like to remove the old exchange you can do so
by executing the following command::
$ camqadm exchange.delete celeryevent
* `celeryd` now starts without configuration, and configuration can be
specified directly on the command line.
Configuration options must appear after the last argument, separated
by two dashes::
$ celeryd -l info -I tasks -- broker.host=localhost broker.vhost=/app
* Configuration is now an alias to the original configuration, so changes
to the original will be reflected in Celery at runtime.
* `celery.conf` has been deprecated, and modifying `celery.conf.ALWAYS_EAGER`
will no longer have any effect.
The default configuration is now available in the
:mod:`celery.app.defaults` module. The available configuration options
and their types can now be introspected.
* Remote control commands are now provided by `kombu.pidbox`, the generic
process mailbox.
* Internal module `celery.worker.listener` has been renamed to
`celery.worker.consumer`, and `.CarrotListener` is now `.Consumer`.
* Previously deprecated modules `celery.models` and
`celery.management.commands` have now been removed as per the deprecation
timeline.
* [Security: Low severity] Removed `celery.task.RemoteExecuteTask` and
accompanying functions: `dmap`, `dmap_async`, and `execute_remote`.
Executing arbitrary code using pickle is a potential security issue if
someone gains unrestricted access to the message broker.
If you really need this functionality, then you would have to add
this to your own project.
* [Security: Low severity] The `stats` command no longer transmits the
broker password.
One would have needed an authenticated broker connection to receive
this password in the first place, but sniffing the password at the
wire level would have been possible if using unencrypted communication.
.. _v220-news:
News
----
* The internal module `celery.task.builtins` has been removed.
* The module `celery.task.schedules` is deprecated, and
`celery.schedules` should be used instead.
For example if you have::
from celery.task.schedules import crontab
You should replace that with::
from celery.schedules import crontab
The module needs to be renamed because it must be possible
to import schedules without importing the `celery.task` module.
* The following functions have been deprecated and are scheduled for
removal in version 2.3:
* `celery.execute.apply_async`
Use `task.apply_async()` instead.
* `celery.execute.apply`
Use `task.apply()` instead.