Pickling error of config function #441
Could you provide a minimal, complete script of the non-working example, including your …
@JarnoRFB any ideas?
This problem is probably related to the fact that Sacred wraps functions with wrapt, and those wrapped functions cannot be pickled.
@Qwlouse ok... do you have a suggestion for another approach for …
@Qwlouse or could Sacred not use wrapt functions? I want to combine Ray Tune with Sacred, and that requires a picklable function...
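(For context, here is a minimal sketch of the kind of failure being discussed. The experiment name and config value are made up, and the exact exception depends on the Sacred and wrapt versions; this is an illustration, not a reproduction of the reporter's script.)

```python
import pickle

from sacred import Experiment

ex = Experiment('demo')  # hypothetical experiment name

@ex.config
def cfg():
    batch_size = 32  # a config value captured by Sacred

# Handing the experiment (or its wrapped config/captured functions) to
# another process requires pickling them, which is where errors like the
# one reported here tend to surface.
try:
    pickle.dumps(cfg)
    print('pickled fine on this Sacred version')
except Exception as err:
    print(f'{type(err).__name__}: {err}')
```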
Sacred works very well with Ray Tune. You can import the experiment inside the trainable function, apply the config updates provided by Tune, and then run the experiment.
@flukeskywalker can you provide an example / Colab?
Here's an example adapted from code I've used successfully, assuming the experiment ex is defined in train.py (hence the from train import ex below):

import ray
import ray.tune as tune
from sacred.observers import MongoObserver


def train(config, reporter):
    # Stagger trial start-up a little so all trials don't hit the database at once.
    import time, random
    time.sleep(random.uniform(0.0, 10.0))

    # Import the experiment inside the trainable so each Ray worker builds it
    # fresh instead of the driver having to pickle it.
    from train import ex
    ex.observers.append(MongoObserver.create(db_name='my_db'))

    config['verbose'] = False
    ex.run(config_updates=config)
    result = ex.current_run.result
    print(f'Type of result is {type(result)}')
    reporter(result=result, done=True)


if __name__ == '__main__':
    ray.init(num_cpus=64, num_gpus=0)
    tune.register_trainable("train_func", train)
    tune.run_experiments({
        'my_experiment': {
            'run': 'train_func',
            'stop': {'result': 1000},
            'config': {
                'n_layers': tune.grid_search([5, 6]),
                'batch_size': tune.grid_search([256, 1024, 2048]),
            },
            'resources_per_trial': {"cpu": 1, "gpu": 0},
            'num_samples': 10,
        }
    })
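(For completeness, a minimal sketch of what the train.py module imported above could look like; the only things the snippet relies on are the module name and that it exposes an Experiment called ex with a main function. The config values and training logic here are placeholders.)

```python
# train.py -- hypothetical companion module for the snippet above
from sacred import Experiment

ex = Experiment('my_experiment')

@ex.config
def cfg():
    n_layers = 4       # overridden by Tune's grid search
    batch_size = 128   # overridden by Tune's grid search
    verbose = True

@ex.main
def run(n_layers, batch_size, verbose):
    # Stand-in for a real training loop; whatever this returns becomes
    # ex.current_run.result, which the Tune trainable reports.
    result = n_layers * batch_size
    if verbose:
        print(f'result = {result}')
    return result
```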
@flukeskywalker so I made a simple project based on your guidance and it works! Thanks :) However, it only works if everything is in the same top-level module. Here is a more "complete" project setup, and currently Tune isn't running: https://gitlab.com/SumNeuron/extune. Please take a look.
Sorry, I don't have time to look into the code these days. My guess is that this is related to the module being on the Python path for each Ray actor, but that's all I can say at this point. I personally prefer not to organize code the way you are doing in …
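(If the Python-path guess above is right, a common workaround is to make the package importable in every worker, for example by installing the project with pip install -e . or by extending sys.path at the top of the trainable. A hypothetical sketch follows; the path and module names are placeholders, not taken from the linked repository.)

```python
import sys

def train(config, reporter):
    # Ray workers do not necessarily see the driver's working directory,
    # so put the project root on sys.path explicitly before importing
    # the packaged experiment.
    project_root = '/path/to/extune'  # placeholder: project root on each node
    if project_root not in sys.path:
        sys.path.insert(0, project_root)

    from my_package.experiment import ex  # hypothetical package layout
    ex.run(config_updates=config)
    reporter(result=ex.current_run.result, done=True)
```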
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
In a previous issue I requested support for the multiprocessing library to help launch multiple experiments. If I make the slight change to this:

the error I get is:
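(Neither the changed snippet nor the resulting traceback survived in this copy of the issue. Purely as a generic illustration of the pattern being described, and not the reporter's actual code, launching several Sacred runs via multiprocessing looks roughly like the sketch below; the experiment, config values, and pool size are made up.)

```python
import multiprocessing as mp

from sacred import Experiment

ex = Experiment('demo')  # hypothetical experiment

@ex.config
def cfg():
    seed = 0

@ex.main
def run(seed):
    return seed

def launch(updates):
    # Each worker process re-runs the experiment with its own config updates.
    return ex.run(config_updates=updates).result

if __name__ == '__main__':
    # Handing work to a Pool pickles callables and arguments; depending on
    # the start method and on what gets passed around, Sacred's wrapped
    # config/captured functions may need to be pickled, which is where
    # errors like the one reported here show up.
    with mp.Pool(processes=2) as pool:
        print(pool.map(launch, [{'seed': 1}, {'seed': 2}]))
```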