Summary of problem
AWS Lambda gives OSError: [Errno 38] Function not implemented with ddtrace 2.1.0. See Parallel Processing in Python with AWS Lambda.
Which version of dd-trace-py are you using?
2.1.0 causes the issue
2.0.2 does not cause the issue
Which version of pip are you using?
pip-23.2.1
Which libraries and their versions are you using?
How can we reproduce your problem?
Deploy to AWS Lambda.
What is the result that you get?
Traceback (most recent call last):
File "/var/lang/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/opt/python/lib/python3.9/site-packages/datadog_lambda/handler.py", line 10, in <module>
from datadog_lambda.wrapper import datadog_lambda_wrapper
File "/opt/python/lib/python3.9/site-packages/datadog_lambda/wrapper.py", line 23, in <module>
from datadog_lambda.patch import patch_all
File "/opt/python/lib/python3.9/site-packages/datadog_lambda/patch.py", line 14, in <module>
from ddtrace import patch_all as patch_all_dd
File "/var/task/ddtrace/__init__.py", line 24, in <module>
from ddtrace.internal import telemetry
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "/var/task/ddtrace/internal/module.py", line 220, in _exec_module
self.loader.exec_module(module)
File "/var/task/ddtrace/internal/telemetry/__init__.py", line 12, in <module>
from .writer import TelemetryWriter
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "/var/task/ddtrace/internal/module.py", line 220, in _exec_module
self.loader.exec_module(module)
File "/var/task/ddtrace/internal/telemetry/writer.py", line 19, in <module>
from ...settings import _config as config
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "/var/task/ddtrace/internal/module.py", line 220, in _exec_module
self.loader.exec_module(module)
File "/var/task/ddtrace/settings/__init__.py", line 2, in <module>
from .config import Config
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "/var/task/ddtrace/internal/module.py", line 220, in _exec_module
self.loader.exec_module(module)
File "/var/task/ddtrace/settings/config.py", line 188, in <module>
class Config(object):
File "/var/task/ddtrace/settings/config.py", line 195, in Config
_extra_services_queue = multiprocessing.get_context("fork" if sys.platform != "win32" else "spawn").Queue(
File "/var/lang/lib/python3.9/multiprocessing/context.py", line 103, in Queue
return Queue(maxsize, ctx=self.get_context())
File "/var/lang/lib/python3.9/multiprocessing/queues.py", line 43, in __init__
self._rlock = ctx.Lock()
File "/var/lang/lib/python3.9/multiprocessing/context.py", line 68, in Lock
return Lock(ctx=self.get_context())
File "/var/lang/lib/python3.9/multiprocessing/synchronize.py", line 162, in __init__
SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
File "/var/lang/lib/python3.9/multiprocessing/synchronize.py", line 57, in __init__
sl = self._semlock = _multiprocessing.SemLock(
OSError: [Errno 38] Function not implemented
What is the result that you expected?
No error because DataDog is great :)
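The traceback shows that the failure happens at import time: ddtrace/settings/config.py builds a multiprocessing.Queue in the body of the Config class, and creating a Queue requires a POSIX semaphore (SemLock) backed by /dev/shm, which the Lambda execution environment does not provide. A minimal sketch of that pattern, reduced from the traceback for illustration (this is not the actual ddtrace source):

import multiprocessing
import sys

class Config(object):
    # Creating a Queue allocates a SemLock, which relies on /dev/shm.
    # AWS Lambda has no /dev/shm, so on Lambda this line raises
    # OSError: [Errno 38] Function not implemented as soon as the
    # module is imported.
    _extra_services_queue = multiprocessing.get_context(
        "fork" if sys.platform != "win32" else "spawn"
    ).Queue()

Because this runs in the class body, merely importing ddtrace (which the Datadog Lambda handler does before any user code runs) is enough to trigger the error.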
We just hit this issue after accepting our bot-created automated version updates - oof.
This error also exists in 2.0 versions. 1.20.10 was the highest version I could go to, but I did not test every 2.0 version.
The above-linked AWS documentation reveals what I'd wager is the root cause:
Due to the Lambda execution environment not having /dev/shm (shared memory for processes) support, you can’t use multiprocessing.Queue or multiprocessing.Pool.
They further claim that
... you can use multiprocessing.Pipe instead of multiprocessing.Queue to accomplish what you need ...
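For illustration, here is a small sketch of the Pipe-based pattern the AWS article describes: multiprocessing.Pipe and multiprocessing.Process do not depend on /dev/shm-backed semaphores, so they work inside Lambda where Queue and Pool do not. This is only an example of the constraint AWS describes, not a proposed ddtrace patch, and the handler name and payload are made up:

from multiprocessing import Pipe, Process

def work(conn):
    # Child process sends its result back over the OS pipe.
    conn.send("result from child")
    conn.close()

def handler(event, context):
    parent_conn, child_conn = Pipe()
    proc = Process(target=work, args=(child_conn,))
    proc.start()
    result = parent_conn.recv()
    proc.join()
    return result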