I want to use model.predict in a loop. It keeps printing this:

GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

Is there a way to stop it from printing?
First, import the logging module:
import logging
Then, add the following line to suppress the message:
logging.getLogger("lightning.pytorch.utilities.rank_zero").setLevel(logging.WARNING)
This will prevent the following output from being printed:
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Next, add this line to suppress another message:
logging.getLogger("lightning.pytorch.accelerators.cuda").setLevel(logging.WARNING)
This will stop the following output from appearing:
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Good luck!
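Putting both steps together, a minimal sketch looks like this. Note the logger names assume the unified `lightning` package; older installs that use the standalone `pytorch_lightning` package name their loggers `pytorch_lightning.utilities.rank_zero` and `pytorch_lightning.accelerators.cuda` instead, so adjust accordingly.

```python
import logging

# Suppress the device-summary messages
# ("GPU available: ...", "TPU available: ...", "IPU ...", "HPU ...")
logging.getLogger("lightning.pytorch.utilities.rank_zero").setLevel(logging.WARNING)

# Suppress the per-rank CUDA message
# ("LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]")
logging.getLogger("lightning.pytorch.accelerators.cuda").setLevel(logging.WARNING)
```

Run these two lines once, before the first call to model.predict; the messages are emitted at INFO level, so raising the threshold to WARNING silences them for every subsequent call in the loop.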
Related: sktime/sktime#6891 - should we perhaps address this at the source, and add a verbosity option, @XinyuWuu?
Good idea.