
[BC] Inlining for size configs. #410

Merged
merged 3 commits into google:main on Dec 20, 2024

Conversation

tvmarino
Collaborator

Adding configs for collecting imitation learning trajectories for inlining for size.

Collaborator

@boomanaiden154 boomanaiden154 left a comment


This patch seems to do a lot more than just add some gin configs?

from tf_agents.system import system_multiprocessing as multiprocessing

flags.FLAGS['gin_files'].allow_override = True
flags.FLAGS['gin_bindings'].allow_override = True
Collaborator


Where are these flags defined? They should be declared within the entrypoint rather than overridden.

Collaborator Author


I think this is used in the same way as in, e.g.,

flags.FLAGS['gin_bindings'].allow_override = True

From what I remember I need it because there are gin bindings set by common.gin which I need to overwrite. We had discussed this with Mircea a while back and this was the recommended way to proceed.
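The pattern being discussed can be sketched as follows, assuming absl-py. The first `DEFINE` stands in for a shared module that already registered the flag; the binding string is an illustrative placeholder.

```python
from absl import flags

# Simulates a flag registered elsewhere (e.g. by a shared module).
flags.DEFINE_multi_string('gin_bindings', [], 'Registered by a shared module.')

# Without this, the second DEFINE below raises flags.DuplicateFlagError.
flags.FLAGS['gin_bindings'].allow_override = True
flags.DEFINE_multi_string(
    'gin_bindings', ['ModuleWorker.base_path=""'],
    'Redefined with problem-specific default bindings.')

flags.FLAGS(['program'])
print(flags.FLAGS.gin_bindings)
```

Setting `allow_override` on the existing flag object is what permits the redefinition; the new default then takes effect when argv is parsed.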

Collaborator Author


This is now fixed: the flags have been removed, as it turned out they were no longer needed.

@tvmarino
Collaborator Author

This patch seems to do a lot more than just add some gin configs?

The patch adds a runner file because not everything can be set in gin files. As discussed, there is a problem with passing functions through gin files when they later need to be pickled. The patch therefore adds a runner and a Python config file that define the functions needed for the specific inlining problem.
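The pickling constraint can be illustrated with a stdlib-only sketch (the function names here are illustrative, not from the patch): module-level functions pickle by qualified name and round-trip fine, while locally defined callables, analogous to wrapper objects a config system can produce, cannot be pickled at all.

```python
import pickle

# Module-level function: pickled by qualified name, so it round-trips.
def scale(x):
    return x * 2

def make_local():
    # Locally defined function: pickle cannot reference it by name.
    def local_scale(x):
        return x * 2
    return local_scale

restored = pickle.loads(pickle.dumps(scale))
assert restored(3) == 6

try:
    pickle.dumps(make_local())
    raised = False
except (pickle.PicklingError, AttributeError):
    raised = True
assert raised  # local callables fail to pickle
```

This is why defining the needed functions at module level in a Python config file, and only referencing them from the runner, sidesteps the problem.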

generate_bc_trajectories_lib.ModuleWorker.policy_paths=[]
generate_bc_trajectories_lib.ModuleWorker.exploration_policy_paths=[]
generate_bc_trajectories_lib.ModuleWorker.explore_on_features=None
generate_bc_trajectories_lib.ModuleWorker.base_path=''
Collaborator


(as discussed offline) let's go with the patch as-is right now, and in a follow-up we remove this from gin and pipe it through from its flag.

@tvmarino tvmarino merged commit 7e94d9e into google:main Dec 20, 2024
11 checks passed
@tvmarino tvmarino deleted the imitation_learning_configs branch December 20, 2024 19:19