What is the difference between logger_tt.logger and logger_tt.getLogger(name)? #1

rudaoshi opened this issue Mar 6, 2021 · 3 comments


rudaoshi commented Mar 6, 2021

First, let me express my appreciation to you for this wonderful logging library. It's my pleasure to be the author of the first issue.

logger_tt helps me a lot. But I've run into a problem.

I used to think that logger_tt.getLogger(name) would return a logger with the same behavior as logger_tt.logger, since both are configured by the global setup_logging.

However, I find that they behave differently. For example, in the multi-processing case, logger_tt.logger adds the process name to the log output, while logger_tt.getLogger(name) does not.

This behavior seems a little strange to me, since there is only one global configuration.
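
To illustrate what I mean, here is a minimal sketch (the worker function and names are just placeholders for my real code):

    from multiprocessing import Pool
    from logger_tt import setup_logging, logger, getLogger

    setup_logging()                      # the one global configuration
    module_logger = getLogger(__name__)  # obtained per module, as usual

    def worker(i):
        logger.info("batch %s via logger_tt.logger", i)            # process name shows up
        module_logger.info("batch %s via getLogger(__name__)", i)  # process name missing

    if __name__ == "__main__":
        with Pool(2) as pool:
            pool.map(worker, range(2))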

Also, when I try to set a different formatter on the logger obtained from logger_tt.getLogger(name), the formatter does not seem to take effect. The code is as follows:

    logger = logger_tt.getLogger(name)

    # ColoredFormatter and formatter_message are custom color-formatting helpers (not shown here)
    FORMAT = "%(asctime)s [$BOLD%(name)-20s$RESET][%(levelname)-18s] %(message)s ($BOLD%(filename)s$RESET:%(lineno)d)"
    COLOR_FORMAT = formatter_message(FORMAT, True)

    color_formatter = ColoredFormatter(COLOR_FORMAT)

    # replace the formatter on every handler attached to this logger
    for handler in logger.handlers:
        handler.setFormatter(color_formatter)

    return logger

Could you please help look into this problem?

Dragon2fly (Owner) commented Apr 3, 2021

You need to add %(processName)s to your format string.
The logger that is imported from logger_tt is just a pre-configured logger, but it can switch the format string based on the name of the process or thread that calls it.
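
For example, with a plain standard-library formatter (a minimal sketch; adapt the format string to your own configuration):

    import logging

    # %(processName)s is a standard LogRecord attribute,
    # so it can be added to any handler's format string
    fmt = "[%(asctime)s] [%(processName)s] %(name)s %(levelname)s: %(message)s"

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(fmt))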

For the second problem with ColoredFormatter, could you give minimal reproducible code that I can run? Also, a picture or a description of what the result should be and what actually happens.

rudaoshi (Author) commented May 1, 2021

Thank you for your response!

I'm still encountering problems in a multi-processing environment.

The log output in a single process looks like this:

[2021-05-01 03:46:54] INFO: The 0-th batch finished training the machine. Historical average loss = 0.7541559338569641.

However, in a multi-process run it outputs:

WARNING:machine:The 0-th batch finished training the machine. Historical average loss = 0.8452694416046143.
WARNING:machine:The 0-th batch finished training the machine. Historical average loss = 0.8452694416046143.
WARNING:machine:The 1000-th batch finished training the machine. Historical average loss = 0.19884416460990906.
WARNING:machine:The 1000-th batch finished training the machine. Historical average loss = 0.19884416460990906.

I only called setup_logging once in both cases. Do I need to call it in each process?
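
Roughly, my setup looks like the sketch below (simplified; the training loop and names are placeholders):

    from multiprocessing import Process
    from logger_tt import setup_logging, getLogger

    logger = getLogger("machine")

    def train(n_batches):
        for i in range(n_batches):
            if i % 1000 == 0:
                logger.info("The %s-th batch finished training the machine.", i)

    if __name__ == "__main__":
        setup_logging()  # called once, in the main process only
        workers = [Process(target=train, args=(2000,)) for _ in range(2)]
        for p in workers:
            p.start()
        for p in workers:
            p.join()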

Dragon2fly (Owner) commented:

Could you provide a minimal example of your code?
Also, which platform are you using: Linux or Windows?
