First, let me express my appreciation for this wonderful logging library. It's my pleasure to be the author of its first issue.
logger_tt helps me a lot, but I have run into a problem.
I used to think that logger_tt.getLogger(name) would return a logger that behaves the same as logger_tt.logger, since both are configured by the one global setup_logging.
However, I find that they behave differently. For example, in the multiprocessing case, logger_tt.logger adds the process name to the log output, while logger_tt.getLogger(name) does not.
This seems a little strange to me, since there is only one global configuration.
Also, when I set a different formatter on the logger obtained from logger_tt.getLogger(name), the formatter does not seem to take effect. Could you please help to look into this problem? The code is roughly as follows:
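This is a simplified reconstruction rather than my exact script; in particular, the ColoredFormatter import, the format string, and the way I reach the console handler are only illustrative:

```python
import logging

from colorlog import ColoredFormatter   # assumption: the colored formatter I tried
from logger_tt import setup_logging, getLogger

setup_logging()                          # the one global configuration
logger = getLogger('machine')            # named logger, expected to behave like logger_tt.logger

# Try to switch the console output to a colored format.
colored = ColoredFormatter('%(log_color)s[%(asctime)s] %(levelname)s: %(message)s')
for handler in logging.getLogger().handlers:   # logger_tt's handlers appear to live on the root logger
    if isinstance(handler, logging.StreamHandler) and not isinstance(handler, logging.FileHandler):
        handler.setFormatter(colored)

logger.info('test message')              # still comes out with the original format for me
```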
You need to add %(processName)s to your format string.
The logger that is imported from logger_tt is just a pre-configured logger, but it can switch its format string based on the name of the process or thread that calls it.
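For illustration only (the actual format string lives in your own configuration, so adjust it there), the change amounts to something like:

```python
import logging

# Without the process name: every process logs with an identical-looking prefix.
plain = logging.Formatter('[%(asctime)s] %(levelname)s: %(message)s')

# With %(processName)s added: each record shows which process emitted it.
with_process = logging.Formatter('[%(asctime)s] %(processName)s %(levelname)s: %(message)s')
```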
For the second problem with ColoredFormatter, could you give a minimal reproducible example that I can run? Please also include a picture or a description of what the result should be and what actually happens.
I'm still encountering problems in the multiprocessing environment.
In a single process, the log output looks like:
[2021-05-01 03:46:54] INFO: The 0-th batch finished training the machine. Historical average loss = 0.7541559338569641.
However, with multiprocessing it outputs:
WARNING:machine:The 0-th batch finished training the machine. Historical average loss = 0.8452694416046143.
WARNING:machine:The 0-th batch finished training the machine. Historical average loss = 0.8452694416046143.
WARNING:machine:The 1000-th batch finished training the machine. Historical average loss = 0.19884416460990906.
WARNING:machine:The 1000-th batch finished training the machine. Historical average loss = 0.19884416460990906.
In both cases I only called setup once. Do I need to call setup in each process?
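For example, would something along the lines of the sketch below be required, with setup repeated inside every worker? This is only a guess at what the fix might look like, not code I have verified:

```python
from multiprocessing import Process

from logger_tt import setup_logging, getLogger

def worker(batch_id):
    setup_logging()                    # guess: repeat the setup in each spawned child?
    log = getLogger('machine')
    log.info('The %d-th batch finished training the machine.', batch_id)

if __name__ == '__main__':
    setup_logging()                    # setup in the main process, as I do now
    procs = [Process(target=worker, args=(i,)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```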