Reduce Deadlock Probability #84

Conversation

@franchuterivera (Contributor) commented Feb 2, 2021

Use the logger port instead of the logger object for the TAE execution.
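For context, a minimal sketch of the pattern, assuming the logger port is the TCP port of a logging server, as in the standard library's socket-based logging (the function name below is illustrative, not the PR's actual code):

```python
import logging
import logging.handlers

def make_client_logger(name: str, port: int) -> logging.Logger:
    # Build a fresh logger inside the worker process and point it at the
    # logging server via its port. Passing only the picklable port number
    # across process boundaries avoids sharing logger/lock state, which
    # can deadlock when a run is killed mid-write.
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.handlers.SocketHandler('localhost', port))
    return logger

# Inside the target-algorithm-execution (TAE) worker:
# logger = make_client_logger('TAE', port=logger_port)
# logger.info('evaluating configuration ...')
```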

Add show_models() for debugging purposes.
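A hedged usage example: the show_models() name comes from this PR, but the surrounding API calls are assumptions about the tabular API.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from autoPyTorch.api.tabular_classification import TabularClassificationTask

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

api = TabularClassificationTask()
api.search(X_train=X_train, y_train=y_train,
           X_test=X_test, y_test=y_test,
           optimize_metric='accuracy', total_walltime_limit=300)
# Print a human-readable summary of the fitted models for debugging.
print(api.show_models())
```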

Remove the TensorBoard output: killing a run while it is writing to disk halts the complete search process, and Python does not handle the recovery nicely. This is something we should look into fixing in pynisher.

Minor fixes, such as emptying the CUDA cache when it is no longer needed.
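For reference, freeing cached GPU memory in PyTorch looks like this (a generic sketch, not the PR's exact change):

```python
import torch

def cleanup_gpu_memory() -> None:
    # Release cached blocks held by PyTorch's CUDA caching allocator so
    # other processes or subsequent runs can use the memory. Tensors that
    # are still referenced are not freed.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```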

@franchuterivera franchuterivera changed the title Logger enhancements Reduce Deadlock Probability Feb 18, 2021
@franchuterivera franchuterivera added the bug Something isn't working label Feb 18, 2021
@franchuterivera franchuterivera marked this pull request as ready for review February 18, 2021 20:07
"""
preprocessing = []
estimator = []
skip_steps = ['data_loader', 'trainer', 'lr_scheduler', 'optimizer', 'network_init',
Contributor:
Maybe we can have a verbose option where we include trainer, lr_scheduler, optimizer, and network_init?

Contributor (Author):

I was coding this and looking at the outcome, and it just doesn't add much information about what the best estimator is, because these steps are part of the construction of the model, not part of the model itself.

I have a better proposal -- I would like to create a command along these lines.

TPOT is able to print the Python code for training the model it found. So if your goal is to see what happened (and for debugging purposes), it would be great to produce a file that contains the PyTorch commands, with not only the scheduler but also the config (it is nice to know more than just the fact that we picked the Adam optimizer). If this sounds better, I would like to disentangle this export_pipeline command from this PR and create an issue for it.
What do you think?
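For illustration, the exported file might look something like this (a purely hypothetical output format; the export_pipeline command does not exist yet and is exactly what the proposed issue would track):

```python
# Hypothetical output of an export_pipeline()-style command: a standalone
# script reproducing the best pipeline with its concrete configuration,
# not just the names of the chosen components.
import torch
import torch.nn as nn

# Dummy data standing in for the real training set.
X = torch.randn(64, 30)
y = torch.randint(0, 2, (64,))

model = nn.Sequential(nn.Linear(30, 128), nn.ReLU(), nn.Linear(128, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

for epoch in range(50):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()
    scheduler.step()
```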

Contributor:

Yes, this would be ideal. Could you add an issue so we don't forget?

"""
preprocessing = []
estimator = []
skip_steps = ['data_loader', 'trainer', 'lr_scheduler', 'optimizer', 'network_init',
Contributor:
Same here

@ravinkohli (Contributor) left a comment:
The PR looks great; I have just added some minor comments that will give a bit more information to the user.

@ravinkohli ravinkohli merged commit 5adc607 into automl:refactor_development Feb 22, 2021
github-actions bot pushed a commit that referenced this pull request Feb 22, 2021
github-actions bot pushed a commit to ravinkohli/Auto-PyTorch that referenced this pull request Feb 23, 2021