Reduce Deadlock Probability #84
Conversation
autoPyTorch/pipeline/components/setup/traditional_ml/classifier_models/classifiers.py (several review threads on this file, now resolved)
""" | ||
preprocessing = [] | ||
estimator = [] | ||
skip_steps = ['data_loader', 'trainer', 'lr_scheduler', 'optimizer', 'network_init', |
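The snippet above is truncated by the diff view. As a hedged sketch of the idea being discussed (the step names, the full `skip_steps` list, and the helper name are assumptions, not the actual Auto-PyTorch code), the partitioning could look like this:

```python
# Hypothetical sketch of how show_models() might split named pipeline steps
# into preprocessing and estimator parts. SKIP_STEPS holds components that
# belong to model construction (training loop, optimizer, etc.) rather than
# to the fitted model itself; the exact names are assumptions.
SKIP_STEPS = ['data_loader', 'trainer', 'lr_scheduler', 'optimizer', 'network_init']


def partition_steps(named_steps):
    """Split (name, component) pairs, dropping construction-only steps."""
    preprocessing = []
    estimator = []
    for name, component in named_steps:
        if name in SKIP_STEPS:
            continue  # part of building the model, not part of the model
        if name == 'network':
            estimator.append((name, component))
        else:
            preprocessing.append((name, component))
    return preprocessing, estimator


steps = [('imputer', 'SimpleImputer'), ('scaler', 'StandardScaler'),
         ('optimizer', 'AdamOptimizer'), ('network', 'MLPNet')]
pre, est = partition_steps(steps)
```

The point of the skip list is exactly what the reviewer discussion below settles on: these components describe *how* the model was trained, not *what* the best estimator is.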
Maybe we could add a verbose option that also includes trainer, lr_scheduler, optimizer and network_init?
I was coding this and checking the output, and it just doesn't add much information about what the best estimator is, because those components are part of the construction of the model, not part of the model itself.
I have a better proposal: I would like to create a command for this.
TPOT is able to print the Python code for training the model you built. So if your goal is to see what happened (and for debugging purposes), it would be great to produce a file containing the PyTorch commands, with not only the scheduler but also its config (it is nice to know more than just the fact that we picked the Adam optimizer). If this sounds better, I would like to disentangle this export_pipeline command from this PR and create an issue for it.
What do you think?
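To make the proposal concrete, here is a minimal sketch of what such an export_pipeline command could emit, in the spirit of TPOT's export(). Every name, the config layout, and the code template are assumptions for illustration, not an existing Auto-PyTorch API:

```python
# Hypothetical sketch of the proposed export_pipeline command: render the
# chosen configuration as a runnable PyTorch-style training snippet, so the
# user sees not just "Adam" but the concrete calls and hyperparameters.
# The config keys and build_network helper are illustrative assumptions.
def export_pipeline(config, path=None):
    lines = [
        "import torch",
        "",
        f"model = build_network(**{config['network']!r})",
        f"optimizer = torch.optim.{config['optimizer']}("
        f"model.parameters(), lr={config['lr']})",
        f"scheduler = torch.optim.lr_scheduler.{config['lr_scheduler']}(optimizer)",
    ]
    code = "\n".join(lines)
    if path is not None:
        with open(path, "w") as f:
            f.write(code)
    return code


snippet = export_pipeline({'network': {'num_layers': 3},
                           'optimizer': 'Adam',
                           'lr': 1e-3,
                           'lr_scheduler': 'CosineAnnealingLR'})
```

Writing the snippet to a file (the `path` argument) would cover the debugging use case discussed above.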
Yes, this would be ideal. Could you add an issue so we don't forget?
""" | ||
preprocessing = [] | ||
estimator = [] | ||
skip_steps = ['data_loader', 'trainer', 'lr_scheduler', 'optimizer', 'network_init', |
Same here
The PR looks great; I have just added minor comments that will give the user a bit more information.
Use the logger port instead of the logger for the TAE execution.
Add show_models() for debug purposes
Remove tensorboard output, because killing a run while it is writing to disk halts the complete search process, and Python does not handle the recovery nicely. This is something we should look into fixing in pynisher.
Minor fixes, such as not emptying the CUDA cache when it is not needed.
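The "logger port instead of the logger" change above follows a common multiprocessing pattern: a logger object does not pickle cleanly across processes, but an integer port does, so each worker builds its own handler pointing at a central log server. A minimal sketch of that pattern using only the standard library (the function name is illustrative, not the actual Auto-PyTorch API):

```python
# Sketch of the "pass a port, not a logger" pattern: the parent process
# runs a log-record server on `port`, and each worker (e.g. the TAE
# execution) reconstructs its own logger locally from the port number.
# SocketHandler connects lazily on first emit, so creation is cheap.
import logging
import logging.handlers


def get_worker_logger(name, port):
    """Build a process-local logger that ships records to localhost:port."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.handlers.SocketHandler('localhost', port))
    return logger
```

Because only `name` and `port` cross the process boundary, this avoids pickling handlers or lock objects, which is exactly the class of deadlock this PR aims to reduce.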