
Logging error in Loguru Handler: TypeError: __init__() missing 1 required positional argument: 'message' #329

Closed
mahadi opened this issue Sep 18, 2020 · 22 comments · Fixed by #963
Labels
enhancement Improvement to an already existing feature

Comments


mahadi commented Sep 18, 2020

Hello,

I am sometimes getting the following error message, which makes me assume that somewhere in my application I am writing None to the logger.

--- Logging error in Loguru Handler #2 ---
Record was: None
Traceback (most recent call last):
  File "/home/.../lib/python3.8/site-packages/loguru/_handler.py", line 265, in _queued_writer
    message = queue.get()
  File "/home/.../lib/python3.8/multiprocessing/queues.py", line 358, in get
    return _ForkingPickler.loads(res)
TypeError: __init__() missing 1 required positional argument: 'message'
--- End of logging error ---

A review of my code did not reveal where this happens.
So I wonder how I could find out which place writes a None to the logger...
The traceback is decoupled from the place where the message gets put, so I am searching for the right spot to add a check for a possible None.


Delgan commented Sep 18, 2020

Hi. Which loguru version are you using please?

The Record was: None error message is tricky. It actually means that loguru was unable to retrieve the record object at the time the error occurred. It happens only in one case: while trying to call queue.get() to retrieve the logged message. See here:

loguru/loguru/_handler.py

Lines 268 to 274 in b77f4fd

try:
    message = queue.get()
except Exception:
    with lock:
        if not self._error_interceptor.should_catch():
            raise
        self._error_interceptor.print(None)

The error happens because when enqueue=True, the record needs to be serialized and deserialized using pickle. If the record contains an object which can't be deserialized, the message will not be logged. Such an exception is most likely related to custom objects added to the record["extra"] dict while calling logger.bind() or, more recently, when using **kwargs as in logger.info("Message", foo=my_object).
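For illustration, here is a minimal, hypothetical sketch (not taken from this project) of the kind of object that triggers it: it pickles without error but cannot be unpickled, so nothing fails at logging time, only later in the background writer's queue.get().

    import pickle

    class Unloadable:
        """Hypothetical object: pickling works, unpickling does not."""

        def __init__(self, required):
            self.required = required

        def __reduce__(self):
            # Deliberately buggy: the required constructor argument is dropped.
            return (Unloadable, ())

    obj = Unloadable("some state")
    data = pickle.dumps(obj)  # succeeds, so the log call itself raises nothing
    pickle.loads(data)        # TypeError: __init__() missing 1 required positional argument: 'required'

Binding such an object with logger.bind(ctx=obj), or passing it as a keyword argument, would place it into record["extra"], where it only blows up later inside the queued writer.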

Can you spot places in your code where non-trivial objects are logged?


mahadi commented Sep 18, 2020

The loguru version is 0.5.0.

The only place where I am using some kind of "extra" feature is:

    def catch_uncaught_sys_exceptions(exc_type, exc_value, exc_traceback):
        logger.opt(exception=(exc_type, exc_value, exc_traceback)).critical('Uncaught exception')
        logger.capture_sentry_exception()

    def catch_uncaught_thread_exceptions(info):
        logger.opt(exception=(info.exc_type, info.exc_value, info.exc_traceback)).critical('Uncaught exception in thread')
        logger.capture_sentry_exception()

    sys.excepthook = catch_uncaught_sys_exceptions
    threading.excepthook = catch_uncaught_thread_exceptions

where logger.capture_sentry_exception() just forwards to Sentry's capture_exception method.

Could logger.info(None) or logger.exception(None) also trigger this behaviour? A quick test in the REPL did not...


Delgan commented Sep 18, 2020

Could logger.info(None) or logger.exception(None) also trigger this behaviour? A quick test in the REPL did not...

No, that's not possible. I am convinced that there is something wrong with deserialization.

Do you call logger.info("Foo: {bar}", bar=non_trivial_object) somewhere? The keyword arguments are automatically added to the extra dictionary since 0.5.0.


mahadi commented Sep 18, 2020

I triple checked it, but there is no call with extra arguments in my project.

Is it possible that the usage of better_exceptions can cause this? I already fixed 2 cases in my application where the __repr__ called during formatting by better_exceptions caused infinite recursion. Perhaps it could also be a candidate for this issue?


Delgan commented Sep 18, 2020

I already fixed 2 cases in my application where the __repr__ called during formatting by better_exceptions caused infinite recursion.

Is this something that could be fixed on loguru side? If so I would be interested to see an example.

Perhaps it could also be a candidate for this issue?

I'm not sure... Do you have access to the loguru installation folder? Maybe I can send you a patch which would display the faulty source line in case of a problem.


mahadi commented Sep 26, 2020

Is this something that could be fixed on loguru side? If so I would be interested to see an example.

My issue, in both cases, was that the __repr__ of my object delegated to its __str__ method. First, somewhere a call to str(object) raised an exception. Then the exception was formatted using better_exceptions, which calls repr(object) to provide more details. But repr called str and raised an exception again, and so on and so on, creating an endless loop.

So one could say that this is a bad design decision on my side, and I would somewhat agree.

But indeed, better_exceptions formatting could break the endless loop by catching exceptions when calling repr itself and not following the path down again. I just don't know whether that would lose useful information in other use cases.
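A minimal, hypothetical sketch of the pattern described above (the class is made up for illustration): __repr__ delegates to __str__, and __str__ raises, so any exception formatter that calls repr() on the object while handling the original error hits the same failure again.

    class Device:
        def __str__(self):
            # The original failure: rendering the object raises.
            raise RuntimeError("cannot render device state")

        def __repr__(self):
            # Delegates to __str__, so repr() raises too, and formatting
            # the resulting exception leads straight back here.
            return str(self)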

Do you have access to the loguru installation folder?

I do. What would you suggest to insert for tracing the issue?


mahadi commented Sep 26, 2020

It seems you discovered this already :)

Qix-/better-exceptions@c170bf6


Delgan commented Sep 26, 2020

Yep, the calls made to repr() are indeed "protected" by a try / except block:

def _format_value(self, v):
    try:
        v = repr(v)
    except Exception:
        v = "<unprintable %s object>" % type(v).__name__

So maybe the endless loop you're seeing does not come from that. 😕

Actually, you don't even need to modify the source code to investigate the enqueue issue further. You can just add a new sink which mimics the faulty behavior, but not using a thread. That way, the error reported by Loguru should contain the record dict and hence help you debug this.

import multiprocessing

from loguru import logger

queue = multiprocessing.SimpleQueue()

def sink(record):
    # Same round trip as enqueue=True: put() pickles, get() unpickles,
    # but synchronously, so the error report still has access to the record.
    queue.put(record)
    queue.get()

logger.add(sink, enqueue=False)


mahadi commented Sep 26, 2020

So maybe the endless loop you're seeing does not come from that.

That's interesting... After removing the repr --> str call I never experienced this issue again... 😕


mahadi commented Sep 26, 2020

You can just add a new sink which mimics the faulty behavior, but not using a thread. That way, the error reported by Loguru should contain the record dict and hence help you debug this.

I don't understand how this would mimic the issue's root cause? Can you explain it?

I have two sinks defined:

    logger.add(sys.stdout,
               level="WARNING",
               format=LOG_FORMAT)
    logger.add(logfile_path,
               level="INFO",
               format=LOG_FORMAT,
               rotation=rotator.should_rotate,
               compression=compress_in_subprocess,
               enqueue=True)

The error message is visible only on stdout but not in the logfile.


Delgan commented Sep 26, 2020

That's interesting... After removing the repr --> str call I never experienced this issue again... 😕

Weird. Maybe the exception raised did not inherit from Exception (see the Exception hierarchy), or maybe an old version of better_exceptions was being used. 😕

I don't understand how this would mimic the issue's root cause? Can you explain it?

You can see in the traceback that the error comes from queue.get() in the _queued_writer function. The error is displayed on stdout as a fallback, because writing the log message to the file failed, but the problem comes from the handler configured with enqueue=True.

When enqueue=True, the record is first put() into a multiprocessing.SimpleQueue() and later get() from within a thread. For some reason, queue.get() is raising an exception, but we don't have access to the record. I'm suggesting basically copying the behavior of enqueue=True by serializing and then deserializing the record dict to be logged. As it fails when enqueue=True, it should fail with this new sink too, but this time we will have access to the record (as there is no thread), and the error displayed on stdout should give more useful information about the problem.
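For context, a minimal sketch (not part of the original discussion) of the round trip involved: multiprocessing.SimpleQueue pickles items on put() and unpickles them on get(), which is exactly the step that fails in the traceback above.

    import multiprocessing

    q = multiprocessing.SimpleQueue()
    q.put({"message": "hello"})  # the item is pickled here
    print(q.get())               # and unpickled here; a value that cannot be unpickled fails at this point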


mahadi commented Oct 2, 2020

Ok, thanks for the explanation! Output:

--- Logging error in Loguru Handler #3 ---
Record was: {
    'elapsed': datetime.timedelta(seconds=132, microseconds=999579), 
    'exception': None, 
    'extra': {}, 
    'file': (name='monitoring.py', path='.../rules/monitoring.py'), 
    'function': 'record_monitoring', 
    'level': (name='INFO', no=20, icon='ℹ️'), 
    'line': 40, 
    'message': 
        Record(
            source='monitoring-services:event', 
            protocol='_device_state',
            type='event', 
            trace_id='Service[Telegram]#2', 
            timestamp=datetime.datetime(2020, 10, 2, 10, 41, 14, 853032, tzinfo=tzfile('/usr/share/zoneinfo/Europe/Berlin')), 
            device='Telegram', 
            identifier='connection_state', 
            value=255, 
            rich_value='UNKNOWN_ERROR', 
            value_changed=True, 
            info={'exception': NetworkError()}
        ), 
    'module': 'monitoring', 
    'name': 'rules.monitoring', 
    'process': (id=107, name='MainProcess'), 
    'thread': (id=140019188610816, name='Engine'), 
    'time': datetime(2020, 10, 2, 10, 41, 15, 651014, tzinfo=datetime.timezone(datetime.timedelta(seconds=7200), 'CEST'))
}
Traceback (most recent call last):
  File "/home/.../lib/python3.8/site-packages/loguru/_handler.py", line 176, in emit
    self._sink.write(str_record)
  File "/home/.../lib/python3.8/site-packages/loguru/_simple_sinks.py", line 119, in write
    self._function(message)
  File ".../logger.py", line 27, in sink
    queue.get()
  File "/home/.../lib/python3.8/multiprocessing/queues.py", line 358, in get
    return _ForkingPickler.loads(res)
TypeError: __init__() missing 1 required positional argument: 'message'
--- End of logging error ---


Delgan commented Oct 2, 2020

Thank you, we finally found an explanation for this issue!

That happens because the NetworkError() in your message can't be deserialized using pickle.loads().

import pickle
from telegram.error import NetworkError

error = NetworkError("The error message (required)")
pickled = pickle.dumps(error)      # OK
unpickled = pickle.loads(pickled)  # NOK
# TypeError: __init__() missing 1 required positional argument: 'message'

It seems to be a known problem in Python with custom exceptions: Cannot unpickle Exception subclass.

If the NetworkError fails to be de-serialized, so does the loguru record dict which contains it.
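To illustrate the known problem mentioned above with a generic, hypothetical exception class (this is not the actual telegram code): pickle rebuilds an exception by calling its class with self.args, so a subclass whose __init__ requires arguments it never forwards to Exception.__init__() cannot be re-created. Defining __reduce__ is one common way to make such a class round-trip again.

    import pickle

    class BadError(Exception):
        """Hypothetical: required args are not forwarded to Exception.__init__()."""

        def __init__(self, message, code):
            super().__init__()  # self.args stays empty
            self.message = message
            self.code = code

    class FixedError(BadError):
        def __reduce__(self):
            # Tell pickle explicitly how to rebuild the instance.
            return (FixedError, (self.message, self.code))

    try:
        pickle.loads(pickle.dumps(BadError("boom", 42)))
    except TypeError as error:
        print(error)  # __init__() missing 2 required positional arguments: 'message' and 'code'

    restored = pickle.loads(pickle.dumps(FixedError("boom", 42)))
    print(restored.message, restored.code)  # boom 42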

Now, you should be able to identify where the message is logged from (line 40 in your "monitoring.py" file). Can you refactor it in order to avoid logging the Record() object (which may contain unpicklable values)?

On the other hand... which loguru version are you using, please? I made a small change very recently converting the message to a str before sending it to the sink. If this change is effective, you should no longer see the error (because deserializing a str is not a problem, contrary to a Record). If you're able to update Loguru to v0.5.3, please do so, and hopefully the error will be gone!
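A trivial sketch of why that change should help (my reading of the comment above, not the actual loguru internals): once the value sent through the queue is a plain str, the unpickling step can no longer fail on it.

    import pickle

    formatted = "2020-10-02 10:41:15 | INFO | Could not send notification"
    assert pickle.loads(pickle.dumps(formatted)) == formatted  # a plain str always round-trips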


mahadi commented Oct 2, 2020

Ok, so I gave it a try with 0.5.3 (I was using 0.5.0 before). I am currently still getting an error of similar shape:

--- Logging error in Loguru Handler #2 ---
Record was: None

--- Logging error in Loguru Handler #3 ---
Record was: {
    'elapsed': datetime.timedelta(seconds=72, microseconds=541279), 
    'exception': (
        type=<class 'telegram.error.NetworkError'>, 
        value=NetworkError(), 
        traceback=<traceback object at 0x7f97e43744c0>), 
    'extra': {}, 
    'file': (
        name='telegram.py', 
        path='.../outputs/telegram.py'
    ), 
    'function': 'send_message', 
    'level': (name='ERROR', no=40, icon='❌'), 
    'line': 108, 
    'message': "Could not send notification for ['OPCUADevice[WAGO PFC 200 0750-8212]#2', 'SenertecDevice[BHKW 1 (links)]#2']", 
    'module': 'telegram', 
    'name': 'outputs.telegram', 
    'process': (id=1326, name='MainProcess'), 
    'thread': (id=140290356000512, name='TelegramHandler'), 
    'time': datetime(2020, 10, 2, 13, 38, 15, 779391, tzinfo=datetime.timezone(datetime.timedelta(seconds=7200), 'CEST'))
}
Traceback (most recent call last):
  File "/home/.../lib/python3.8/site-packages/loguru/_handler.py", line 270, in _queued_writer
    message = queue.get()
  File "/home/.../lib/python3.8/multiprocessing/queues.py", line 358, in get
    return _ForkingPickler.loads(res)
  File "/home/.../lib/python3.8/site-packages/loguru/_recattrs.py", line 77, in _from_pickled_value
    value = pickle.loads(pickled_value)
TypeError: __init__() missing 1 required positional argument: 'message'
--- End of logging error ---

Traceback (most recent call last):
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py", line 614, in urlopen
    httplib_response = self._make_request(conn, method, url,
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py", line 402, in _make_request
    six.raise_from(e, None)
  File "<string>", line 2, in raise_from
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py", line 398, in _make_request
    httplib_response = conn.getresponse()
  File "/home/.../lib/python3.8/http/client.py", line 1347, in getresponse
    response.begin()
  File "/home/.../lib/python3.8/http/client.py", line 307, in begin
    version, status, reason = self._read_status()
  File "/home/.../lib/python3.8/http/client.py", line 268, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/home/.../lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "/home/.../lib/python3.8/ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "/home/.../lib/python3.8/ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/.../lib/python3.8/site-packages/telegram/utils/request.py", line 218, in _request_wrapper
    resp = self._con_pool.request(*args, **kwargs)
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/request.py", line 68, in request
    return self.request_encode_body(method, url, fields=fields,
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/request.py", line 148, in request_encode_body
    return self.urlopen(method, url, **extra_kw)
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/poolmanager.py", line 244, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py", line 665, in urlopen
    retries = retries.increment(method, url, error=e, _pool=self,
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/util/retry.py", line 347, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/packages/six.py", line 685, in reraise
    raise value.with_traceback(tb)
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py", line 614, in urlopen
    httplib_response = self._make_request(conn, method, url,
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py", line 402, in _make_request
    six.raise_from(e, None)
  File "<string>", line 2, in raise_from
  File "/home/.../lib/python3.8/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py", line 398, in _make_request
    httplib_response = conn.getresponse()
  File "/home/.../lib/python3.8/http/client.py", line 1347, in getresponse
    response.begin()
  File "/home/.../lib/python3.8/http/client.py", line 307, in begin
    version, status, reason = self._read_status()
  File "/home/.../lib/python3.8/http/client.py", line 268, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/home/.../lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "/home/.../lib/python3.8/ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "/home/.../lib/python3.8/ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
telegram.vendor.ptb_urllib3.urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".../outputs/telegram.py", line 97, in send_message
    msg = self.bot.sendMessage(
  File "/home/.../lib/python3.8/site-packages/telegram/bot.py", line 66, in decorator
    result = func(self, *args, **kwargs)
  File "/home/.../lib/python3.8/site-packages/telegram/bot.py", line 256, in send_message
    return self._message(url, data, disable_notification=disable_notification,
  File "/home/.../lib/python3.8/site-packages/telegram/bot.py", line 123, in _message
    result = self._request.post(url, data, timeout=timeout)
  File "/home/.../lib/python3.8/site-packages/telegram/utils/request.py", line 324, in post
    result = self._request_wrapper('POST', url,
  File "/home/.../lib/python3.8/site-packages/telegram/utils/request.py", line 224, in _request_wrapper
    raise NetworkError('urllib3 HTTPError {0}'.format(error))
telegram.error.NetworkError: urllib3 HTTPError ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/.../lib/python3.8/site-packages/loguru/_handler.py", line 177, in emit
    self._sink.write(str_record)
  File "/home/.../lib/python3.8/site-packages/loguru/_simple_sinks.py", line 119, in write
    self._function(message)
  File ".../logger.py", line 27, in sink
    queue.get()
  File "/home/.../lib/python3.8/multiprocessing/queues.py", line 358, in get
    return _ForkingPickler.loads(res)
  File "/home/.../lib/python3.8/site-packages/loguru/_recattrs.py", line 77, in _from_pickled_value
    value = pickle.loads(pickled_value)
TypeError: __init__() missing 1 required positional argument: 'message'
--- End of logging error ---

Use case:
What I am doing is sending myself a Telegram notification if the connection to a device in the (local) network fails.
As the Telegram service can be offline at the same time, I want to additionally log the exception (stored in my Record object) to log files. So the empty network exception is raised by this 3rd-party library.
What I could do is serialize these records myself beforehand.
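For instance, a rough sketch of that idea (the sanitize helper is made up; Record refers to the class mentioned above): convert the exception to plain strings before it is stored, so only picklable values ever reach the logger.

    import traceback

    def sanitize(exc):
        # Replace the live exception object with plain, always-picklable strings.
        return {
            "type": type(exc).__name__,
            "message": str(exc),
            "traceback": "".join(
                traceback.format_exception(type(exc), exc, exc.__traceback__)
            ),
        }

    # e.g. info={"exception": sanitize(network_error)} when building the Record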

Basically, it would be great to have this error caught on loguru's side, as otherwise I would need to implement a fix for implementation details of the logging/Telegram libraries.

However, I am really thankful for your great support!


Delgan commented Oct 2, 2020

Ok, this new error is actually the same problem that has been reported in #342!

The NetworkError() is no longer contained in the message but in the exception of the loguru record dict. As before, the same issue happens: the NetworkError can't be unpickled and hence the queue.get() fails. In the next release, Loguru will properly catch such errors while (un)pickling exception objects.

I shared two possible workarounds at the end of this comment: #342 (comment)
Please take a look and tell me if it suffices to fix your problem for now.


mahadi commented Oct 2, 2020

I am using the following workaround for now

try:
    tensorrt.do_something()
except InferenceServerException:
    # Drop the exception value (None) so the unpicklable instance never
    # ends up in the record; only the type and traceback are kept.
    type, _, tb = sys.exc_info()
    logger.opt(exception=(type, None, tb)).error("An error occurred")

But I will wait for the next release to get this resolved cleanly. Thank you!


mahadi commented Jul 7, 2023

Today I experienced the issue again in a similar project and finally solved it with the same approach as above. But I found something interesting which is not really clear to me yet:

I am building a Docker image which contains my Python project.

  • If I build the image using FROM mambaorg/micromamba:jammy, I am experiencing this issue (when running in the Docker container in WSL2 or PS (using Docker Desktop))
  • If I build the image using FROM ubuntu:22.04, the issue is not present
  • If I run the code directly in WSL2, the issue is also not present

Granted, there are additional steps in my ubuntu:22.04-based image, but could this still be a potential hint for #877?

In case it helps, we could review the files.


Delgan commented Jul 8, 2023

Hi @mahadi. Thanks for the update. Can you clarify what makes you think this ticket is related to #877?

The problem discussed here seems to be an error occurring when deserializing an incompletely implemented Exception.
#877 seems to be something completely different.

However, I'm actually surprised that the error may or may not appear depending on the environment in which the code is running. If you have more details and a way of reproducing it, it could indeed be interesting.


mahadi commented Jul 28, 2023

Today I don't understand what I was thinking when I linked this issue; looking at it now, it doesn't make any sense... sorry.


Delgan commented Jul 28, 2023

Haha, no problem. :)

By the way, I think the original error reported in this thread has been fixed on the master branch, and the fix should be available in the next release. 👍


mahadi commented Jul 28, 2023

Great to hear, can you link the PR? I would just be interested in how you solved it :)


Delgan commented Jul 28, 2023

Actually, I realize the problem is not exactly the same, sorry. The PR is #905, and it fixes a problem occurring when an Exception cannot be pickled, whereas your issue was with an Exception that couldn't be un-pickled. However, I plan to fix it in a similar way for the next release.

jmctune referenced this issue in dqx-translation-project/dqxclarity Oct 9, 2023
This PR contains the following updates:

| Package | Change |
|---|---|
| [loguru](https://github.com/Delgan/loguru) ([changelog](https://github.com/Delgan/loguru/blob/master/CHANGELOG.rst)) | `==0.7.0` -> `==0.7.2` |
---

### Release Notes

Delgan/loguru (loguru)

### v0.7.2 (2023-09-11)

[Compare Source](https://github.com/Delgan/loguru/compare/0.7.1...0.7.2)

- Add support for formatting of `ExceptionGroup` errors (#805).
- Fix possible `RuntimeError` when using `multiprocessing.set_start_method()` after importing the `logger` (#974).
- Fix formatting of possible `__notes__` attached to an `Exception` (#980).

### v0.7.1 (2023-09-04)

[Compare Source](https://github.com/Delgan/loguru/compare/0.7.0...0.7.1)

- Add a new `context` optional argument to `logger.add()` specifying the `multiprocessing` context (like `"spawn"` or `"fork"`) to be used internally instead of the default one (#851).
- Add support for true colors on Windows using the ANSI/VT console when available (#934, thanks @tunaflsh).
- Fix possible deadlock when calling `logger.complete()` with concurrent logging of an asynchronous sink (#906).
- Fix file possibly rotating too early or too late when re-starting an application around midnight (#894).
- Fix inverted `"<hide>"` and `"<strike>"` color tags (#943, thanks @tunaflsh).
- Fix possible untraceable errors raised when logging non-unpicklable `Exception` instances while using `enqueue=True` (#329).
- Fix possible errors raised when logging non-picklable `Exception` instances while using `enqueue=True` (#342, thanks @ncoudene).
- Fix missing seconds and microseconds when formatting a timezone offset that requires such accuracy (#961).
- Raise `ValueError` if an attempt to use nanosecond precision for time formatting is detected (#855).
