It turned out I cannot have two BOHB runs sharing the same nameserver and nameserver_port (or port, depending on which function is used).
I made a copy of the .py file so that the only difference between the two .py files is the port number (keeping the host at 127.0.0.1 so BOHB runs locally).
This works for me.
I am a newbie to computer networking, so I would appreciate it if you could confirm that this is the sane move.
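One way to avoid maintaining two near-identical copies of the script is to pass the port on the command line, so a single .py file can launch both instances. This is only a sketch: it assumes the objects from the linked example_1 tutorial (NameServer, MyWorker, BOHB) and HpBandSter's `port`/`nameserver_port` parameters; the budgets and iteration count are placeholder values.

```python
import argparse


def parse_args(argv=None):
    """Parse per-instance identity parameters from the command line."""
    p = argparse.ArgumentParser()
    p.add_argument('--port', type=int, required=True,
                   help='unique nameserver port for this instance')
    p.add_argument('--run-id', default='example1')
    return p.parse_args(argv)


def main():
    # Imports deferred so the argument handling above is usable
    # even where HpBandSter is not installed.
    import hpbandster.core.nameserver as hpns
    from hpbandster.optimizers import BOHB
    from hpbandster.examples.commons import MyWorker

    args = parse_args()

    # Each instance gets its own nameserver on its own port.
    NS = hpns.NameServer(run_id=args.run_id, host='127.0.0.1', port=args.port)
    NS.start()

    # Worker and optimizer must be given the same port so they find
    # *this* instance's nameserver, not the other instance's.
    w = MyWorker(sleep_interval=0, nameserver='127.0.0.1',
                 nameserver_port=args.port, run_id=args.run_id)
    w.run(background=True)

    bohb = BOHB(configspace=w.get_configspace(), run_id=args.run_id,
                nameserver='127.0.0.1', nameserver_port=args.port,
                min_budget=1, max_budget=9)  # placeholder budgets
    res = bohb.run(n_iterations=4)  # placeholder iteration count

    bohb.shutdown(shutdown_workers=True)
    NS.shutdown()


if __name__ == '__main__':
    main()
```

Then the two terminals would run, e.g., `python run.py --port 9090` and `python run.py --port 9091`, with no other differences between the instances.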
Hello,
you are doing the right thing. You can't have two master instances on the same host using the same port. Make sure that the workers also know the respective port, so that they connect to the right instance.
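The constraint described here is ordinary TCP port binding, not anything BOHB-specific: only one process can listen on a given host/port pair at a time. A minimal stdlib sketch, independent of HpBandSter (the port numbers are chosen by the OS, not taken from the tutorial):

```python
import socket


def try_listen(host, port):
    """Attempt to open a listening socket; return it, or None if the port is taken."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        s.listen()
        return s
    except OSError:  # e.g. EADDRINUSE: another "master" already owns the port
        s.close()
        return None


# First instance: let the OS pick a free port (port 0), like the first master starting up.
first = try_listen('127.0.0.1', 0)
port = first.getsockname()[1]

# Second instance on the SAME host and port: the bind fails.
clash = try_listen('127.0.0.1', port)

# Second instance on a DIFFERENT port: no conflict.
other = try_listen('127.0.0.1', 0)

print(clash is None)      # True: same host+port cannot be shared
print(other is not None)  # True: a distinct port resolves the conflict
```

This is why giving each BOHB run its own port (and telling its workers that port) keeps the two instances fully separated.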
Hi,
I am wondering whether two (sequential) BOHB instances will interfere with each other when they share the same networking parameters.
By a BOHB instance I mean running the code of example 1 to completion (following the first example in the tutorial: https://automl.github.io/HpBandSter/build/html/auto_examples/example_1_local_sequential.html).
Thus both instances are run using the same .py script.
Also, each instance runs in its own terminal, but simultaneously.
I am afraid that, because the two instances share the same identity parameters (run_id='example1', host='127.0.0.1', nameserver='127.0.0.1'), the optimizer in one instance will catch the result of the worker from the other instance. It becomes as if two workers are working for one optimizer, while I expect one worker per optimizer.
Is this true?