
polish(davide) add example of Gail entry + config for Mujoco and Cartpole #114

Merged: 30 commits, Nov 19, 2021

Conversation

@davide97l (Collaborator) commented Nov 1, 2021

Description

This PR adds a config and an entry to train an imitation policy that imitates an expert policy using GAIL. The same entry, however, generalizes to any reward model whose interface is similar to GAIL's. It includes the data collection phase and can save both the IL policy and the reward model during training.
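For context, a minimal sketch of how such an entry could be invoked. The function name, signature, and config import below are assumptions in the style of DI-engine's other serial pipelines, not necessarily this PR's exact API:

```python
# Illustrative only: assumes a GAIL serial-pipeline entry exported from ding.entry
# and a Cartpole GAIL config module; both names are assumptions.
from ding.entry import serial_pipeline_gail
from dizoo.classic_control.cartpole.config import cartpole_dqn_gail_config

serial_pipeline_gail(
    cartpole_dqn_gail_config,  # config describing the IL policy and the GAIL reward model
    seed=0,
    max_env_step=int(1e5),     # illustrative stop condition
    collect_data=True,         # run the expert data collection phase first
)
```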

Important: this PR also changes the formulas of the GAIL loss and reward; the changes were approved after a discussion with @Will-Nie and @PaParaZz1. The loss function now follows the formula defined in the original paper.
Check the related comment for more details.
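For reference, the discriminator objective from the original GAIL paper (Ho & Ermon, 2016), whose negation the new loss minimizes:

$$\max_{D}\;\mathbb{E}_{\pi}\big[\log D(s,a)\big]+\mathbb{E}_{\pi_E}\big[\log\big(1-D(s,a)\big)\big]$$

with the imitation reward taken as $r(s,a) = -\log D(s,a)$.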

Related Issue

TODO

Check List

  • merge the latest version of the source branch/repo and resolve all conflicts
  • pass style check
  • pass all the tests

@PaParaZz1 added labels algo (Add new algorithm or improve old one) and serial (Serial training related) on Nov 1, 2021
@davide97l changed the title from "Added example of Gail entry + config for Mujoco and Cartpole" to "(davide) add example of Gail entry + config for Mujoco and Cartpole" on Nov 2, 2021
@davide97l (Collaborator, Author) commented:
As we can see from the plot below, with the new formula GAIL trains Cartpole faster:

[plot: Cartpole training curves, old formula vs. new formula]

The loss of GAIL changes in the following way:

[image: GAIL loss curves after the change]

@davide97l (Collaborator, Author) commented:
[image: GAIL reward curves]

The reward function has also been changed; this further improves GAIL's performance.
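Concretely, per the commit history below, the reward was changed from $r(s,a) = -D(s,a)$ to $r(s,a) = -\log D(s,a)$, matching the policy cost signal of the original paper.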

@PaParaZz1 changed the title from "(davide) add example of Gail entry + config for Mujoco and Cartpole" to "polish(davide) add example of Gail entry + config for Mujoco and Cartpole" on Nov 4, 2021
  reward = torch.chunk(reward, reward.shape[0], dim=0)
  for item, rew in zip(data, reward):
-     item['reward'] = rew
+     item['reward'] = -torch.log(rew)
Collaborator commented:
shall we add a small number in log to avoid potential explosion? e.g. -torch.log(rew + 1e-8)

@davide97l (Collaborator, Author) replied:
It makes sense, I will add it. Thanks for catching this.
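A minimal sketch of the stabilized reward computation (function and argument names are illustrative):

```python
import torch

def gail_reward(d_out: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # d_out is the discriminator output D(s, a) in (0, 1); adding eps inside
    # the log prevents -log(0) = inf when the discriminator saturates at 0.
    return -torch.log(d_out + eps)
```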

learning_rate=1e-3,
update_per_collect=100,
expert_data_path='cartpole_dqn/expert_data_train.pkl',
load_path='cartpole_dqn_gail/reward_model/ckpt/ckpt_last.pth.tar',
Collaborator commented:
Is load_path referring to the well-trained reward model you finally save? Can we make this clearer? (Let other users directly understand that load_path refers to the final reward model the algorithm learned.)

Also, why do we need the load_path key here?

@davide97l (Collaborator, Author) replied:
There are two load_path keys:

  • policy.load_path is the path where the state_dict of the policy is saved
  • reward_model.load_path is the path where the state_dict of the reward model is saved

I will add a comment in the config file to clarify this.
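A sketch of how the two keys could be documented (structure illustrative; the reward_model path is the one from this PR, the policy path is hypothetical):

```python
cartpole_dqn_gail_config = dict(
    policy=dict(
        # load_path: state_dict of the imitation policy
        load_path='cartpole_dqn_gail/ckpt/ckpt_best.pth.tar',  # hypothetical path
    ),
    reward_model=dict(
        # load_path: state_dict of the trained GAIL reward model
        load_path='cartpole_dqn_gail/reward_model/ckpt/ckpt_last.pth.tar',
    ),
)
```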

try:
    os.makedirs(path)
except FileExistsError:
    pass
path = os.path.join(path, 'ckpt_last.pth.tar')
@Will-Nie (Collaborator) commented on Nov 7, 2021:
Maybe 'ckpt_final_reward_model.pth.tar' is clearer? (We had ckpt_best.pth.tar before, referring to the best model for training the policy.)

@davide97l (Collaborator, Author) replied:
Line 143, path = os.path.join(cfg.exp_name, 'reward_model', 'ckpt'), creates a new path (reward_model/ckpt) in which to save the state dict of the reward model. Given the explicit name, it should be clear that the .pth.tar files inside that directory refer to the state dict of the reward model.
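An equivalent, self-contained sketch of that saving logic (the placeholder model and experiment name stand in for the PR's cfg and reward model):

```python
import os
import torch
import torch.nn as nn

exp_name = 'cartpole_dqn_gail'      # stands in for cfg.exp_name
reward_model = nn.Linear(5, 1)      # placeholder for the GAIL discriminator

path = os.path.join(exp_name, 'reward_model', 'ckpt')
os.makedirs(path, exist_ok=True)    # same effect as the try/except FileExistsError above
torch.save(reward_model.state_dict(), os.path.join(path, 'ckpt_last.pth.tar'))
```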

@@ -119,7 +163,7 @@ def _train(self, train_data: torch.Tensor, expert_data: torch.Tensor) -> float:
      loss_1: torch.Tensor = torch.log(out_1 + 1e-8).mean()
      out_2: torch.Tensor = self.reward_model(expert_data)
      loss_2: torch.Tensor = torch.log(1 - out_2 + 1e-8).mean()
-     loss: torch.Tensor = loss_1 + loss_2
+     loss: torch.Tensor = - (loss_1 + loss_2)
Member commented:
add comment for this modification

@davide97l (Collaborator, Author) replied:
I will add this comment: log(x) with 0 < x < 1 is negative, so to maximize the sum of the two log terms with a minimizer we take the opposite sign as the loss.
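Restated as a self-contained sketch (the PR's actual _train method also handles batching and the optimizer step):

```python
import torch

def gail_discriminator_loss(d_agent: torch.Tensor, d_expert: torch.Tensor,
                            eps: float = 1e-8) -> torch.Tensor:
    # Maximize log D on agent samples and log(1 - D) on expert samples.
    # Both log terms are <= 0, so the sum is negated to get a loss to minimize.
    loss_1 = torch.log(d_agent + eps).mean()
    loss_2 = torch.log(1 - d_expert + eps).mean()
    return -(loss_1 + loss_2)
```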

self.fc2 = nn.Linear(64 + 1, 1) # here we add 1 to take consideration of the action concat
self.a = nn.Sigmoid()

def forward(self, x: torch.Tensor, ) -> torch.Tensor:
Member commented:
remove this redundant comma

self.conv3 = nn.Conv2d(16, 16, 3, stride=1)
self.conv4 = nn.Conv2d(16, 16, 3, stride=1)
self.fc1 = nn.Linear(784, 64)
self.fc2 = nn.Linear(64 + 1, 1) # here we add 1 to take consideration of the action concat
Member commented:
You should add an action_size argument.
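A sketch of the suggested change (class name hypothetical; layer sizes taken from the diff):

```python
import torch.nn as nn

class AtariGailDiscriminator(nn.Module):  # hypothetical name
    def __init__(self, action_size: int):
        super().__init__()
        self.conv3 = nn.Conv2d(16, 16, 3, stride=1)
        self.conv4 = nn.Conv2d(16, 16, 3, stride=1)
        self.fc1 = nn.Linear(784, 64)
        # action dimension as an argument instead of the hard-coded + 1
        self.fc2 = nn.Linear(64 + action_size, 1)
        self.a = nn.Sigmoid()
```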


def forward(self, x: torch.Tensor, ) -> torch.Tensor:
# input: x = [B, 4 x 84 x 84 + 1], last element is action
actions = torch.unsqueeze(x[:, -1], -1) # [B, 1]
Member commented:
The action should be transformed into a one-hot vector, because it is a categorical variable rather than a scalar variable.
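A sketch of the suggested one-hot encoding (helper name and action_size are assumptions):

```python
import torch
import torch.nn.functional as F

def concat_obs_and_onehot_action(x: torch.Tensor, action_size: int) -> torch.Tensor:
    # x = [B, 4 x 84 x 84 + 1]; the last element is a categorical action index.
    actions = x[:, -1].long()                                      # [B]
    one_hot = F.one_hot(actions, num_classes=action_size).float()  # [B, action_size]
    obs = x[:, :-1]                                                # [B, 4 x 84 x 84]
    return torch.cat([obs, one_hot], dim=1)                        # [B, 4 x 84 x 84 + action_size]
```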

ding/reward_model/gail_irl_model.py (outdated review thread, resolved)
dizoo/box2d/lunarlander/entry/lunarlander_dqn_gail_main.py (outdated review thread, resolved)
@PaParaZz1 merged commit d1bc138 into opendilab:main on Nov 19, 2021
puyuan1996 pushed a commit to puyuan1996/DI-engine that referenced this pull request Dec 14, 2021
polish(davide) add example of Gail entry + config for Mujoco and Cartpole (opendilab#114)

* added gail entry

* added lunarlander and cartpole config

* added gail mujoco config

* added mujoco exp

* update22-10

* added third exp

* added metric to evaluate policies

* added GAIL entry and config for Cartpole and Walker2d

* checked style and unittest

* restored lunarlander env

* style problems

* bug correction

* Delete expert_data_train.pkl

* changed loss of GAIL

* Update walker2d_ddpg_gail_config.py

* changed gail reward from -D(s, a) to -log(D(s, a))

* added small constant to reward function

* added comment to clarify config

* Update walker2d_ddpg_gail_config.py

* added lunarlander entry + config

* Added Atari discriminator + Pong entry config

* Update gail_irl_model.py

* Update gail_irl_model.py

* added gail serial pipeline and onehot actions for gail atari

* related to previous commit

* removed main files

* removed old comment
puyuan1996 pushed a commit to puyuan1996/DI-engine that referenced this pull request Apr 18, 2022
SolenoidWGT pushed a commit to SolenoidWGT/DI-engine that referenced this pull request Aug 22, 2023
SolenoidWGT added a commit to SolenoidWGT/DI-engine that referenced this pull request Aug 22, 2023