[RLlib; Offline RL] Validate episodes before adding them to the buffer. #48083
Conversation
…t no duplicates or fragments are added to the replay buffer b/c it cannot handle these. Furthermore, refined tests for 'OfflinePreLearner'. Signed-off-by: simonsays1980 <simon.zehnder@gmail.com>
…mplete_episodes' when recording episodes. This ensures that episodes can be read in again for training. Signed-off-by: simonsays1980 <simon.zehnder@gmail.com>
…dium'. Signed-off-by: simonsays1980 <simon.zehnder@gmail.com>
…ent read batch size in case 'EpisodeType' or 'BatchType' data is stored in offline datasets. Added some docstrings. Signed-off-by: simonsays1980 <simon.zehnder@gmail.com>
@@ -179,6 +188,9 @@ def __call__(self, batch: Dict[str, np.ndarray]) -> Dict[str, List[EpisodeType]]
            )
            for state in batch["item"]
        ]
        # Ensure that all episodes are done and no duplicates are in the batch.
dumb question: Why would we expect duplicates at all here? Any algorithm that would be in danger of running into something like this?
Great question: This is tricky. I just ran into it because in my own projects I was using small datasets for testing. It could be that your data is so small that pulling a batch yields multiple duplicates. Say you have a dataset of size 10 and you pull a batch of size 100 (silly maybe, but if you store episodes, one row could cover multiple timesteps); now 90 rows in the batch would be duplicates.
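The point above can be illustrated with a tiny standalone sketch (this is not RLlib code, just the pigeonhole argument): sampling a batch larger than the dataset with replacement must produce duplicate rows.

```python
# Hypothetical illustration of the duplicate problem described above:
# a dataset of 10 rows sampled into a batch of 100 rows.
import random

dataset = [f"episode_{i}" for i in range(10)]  # tiny dataset: 10 rows
batch = random.choices(dataset, k=100)         # pull a batch of 100 rows (with replacement)

unique = set(batch)
duplicates = len(batch) - len(unique)
# There are at most 10 distinct rows, so at least 90 of the 100 rows
# in the batch are duplicates of rows already in the batch.
```

Any buffer that assumes unique episode chunks would have to deduplicate such a batch before concatenating.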
@@ -16,6 +16,9 @@
# and (if needed) use their values to set up `config` below.
args = parser.parse_args()

args.enable_new_api_stack = True
remove these?
Looks good. Thanks @simonsays1980 for the PR!
Just a few nits.
Signed-off-by: simonsays1980 <simon.zehnder@gmail.com>
…r. (ray-project#48083) Signed-off-by: JP-sDEV <jon.pablo80@gmail.com>
…r. (ray-project#48083) Signed-off-by: mohitjain2504 <mohit.jain@dream11.com>
Why are these changes needed?
At the moment the `OfflinePreLearner` samples recorded episodes or `SampleBatch`es from a `ray.data` dataset and then adds them to a buffer which coordinates the time step sampling. It could potentially happen that episodes are neither `terminated` nor `truncated` and therefore could, and certainly will, be fragmented in time order (i.e. we might first sample an episode chunk that contains timesteps 11 to 21 before we sample timesteps 0 to 11). In both cases the buffer would raise an error as soon as `SingleAgentEpisode.concat` is called.

This PR introduces a `_validate_episodes` method on the `OfflinePreLearner` that checks episodes for duplicates and fragments and returns only unique episodes that are not yet in the buffer. It disallows uncompleted episodes and thereby ensures that no fragments are added. Users are responsible for recording only full episodes.

Furthermore, this PR adds a new configuration parameter to the `AlgorithmConfig`: `input_read_batch_size`, which controls the size of the raw data batch pulled from an offline dataset. This is useful for `EpisodeType` or `BatchType` row formats, which usually contain multiple timesteps, so even a single row might contain enough timesteps for `train_batch_size_per_learner`.
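The validation described in the PR can be sketched as a small standalone filter. The names `Episode`, `is_done`, and `validate_episodes` here are illustrative stand-ins, not RLlib's actual classes or the real `_validate_episodes` implementation:

```python
# Hedged sketch of the described validation: keep only episodes that
# are done (terminated or truncated) and not duplicates of episodes
# already seen or already buffered. All names here are hypothetical.
from dataclasses import dataclass
from typing import Iterable, List, Set

@dataclass
class Episode:
    id_: str
    is_done: bool  # True iff terminated or truncated

def validate_episodes(episodes: Iterable[Episode], buffered_ids: Set[str]) -> List[Episode]:
    """Return only done, unique episodes that are not yet in the buffer."""
    seen: Set[str] = set()
    valid: List[Episode] = []
    for ep in episodes:
        if not ep.is_done:
            continue  # drop fragments: neither terminated nor truncated
        if ep.id_ in buffered_ids or ep.id_ in seen:
            continue  # drop duplicates (within the batch or vs. the buffer)
        seen.add(ep.id_)
        valid.append(ep)
    return valid
```

With this filter, a batch containing a duplicate of `"a"`, an unfinished `"b"`, and an already-buffered `"c"` would reduce to just the single finished `"a"` episode.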
Related issue number
Checks
- I've signed off every commit (with `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I've added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.