[docs] batch inference docs #34567
Conversation
Signed-off-by: Max Pumperla <max.pumperla@googlemail.com>
# __hf_quickstart_prediction_start__
scale = ray.data.ActorPoolStrategy(2)
Suggested change:
- scale = ray.data.ActorPoolStrategy(2)
+ scale = ray.data.ActorPoolStrategy(size=2)
Kwargs will be required in strict mode for actor pool strategy.
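For context, a minimal sketch of how the keyword-only form might be used with `map_batches` and an actor pool; the `HFPredictor` class and its toy model are placeholders for this example, not the actual doc code, and the `compute=` argument is assumed from the Ray Data API of this release line:

```python
import numpy as np
import ray


class HFPredictor:
    """Callable class so each actor in the pool loads its model once."""

    def __init__(self):
        # Placeholder "model"; a real example would load a HF pipeline here.
        self.model = lambda texts: [len(t) for t in texts]

    def __call__(self, batch: dict) -> dict:
        # With batch_format="numpy", batches arrive as Dict[str, np.ndarray].
        batch["prediction"] = np.array(self.model(batch["text"]))
        return batch


ds = ray.data.from_items([{"text": "hello"}, {"text": "world"}])

# Keyword argument, as required by strict mode.
scale = ray.data.ActorPoolStrategy(size=2)
predictions = ds.map_batches(HFPredictor, compute=scale, batch_format="numpy")
predictions.show()
```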
ndarrays (``Dict[str, np.ndarray]``), with each key-value pair representing a column
in the table.

* **Tensor datasets** (single-column): Each batch will be a single
This will be disallowed in strict mode.
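For illustration, a minimal sketch of the dict-of-ndarrays batch format that strict mode uses even for single-column datasets; the `data` column name and the `add_one` UDF are assumptions for this example, not taken from the doc:

```python
import numpy as np
import ray

# Single-column ("tensor") dataset; the default column name is assumed
# to be "data" here.
ds = ray.data.from_numpy(np.zeros((4, 2)))


def add_one(batch: dict) -> dict:
    # In strict mode the UDF receives a Dict[str, np.ndarray] rather than
    # a bare ndarray, even when the dataset has only one column.
    batch["data"] = batch["data"] + 1
    return batch


out = ds.map_batches(add_one, batch_format="numpy")
out.show(2)
```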
The strict-mode-by-default PR is ahead of schedule; we can likely merge it by end of today. Given this, how about writing the new doc directly against the new API?
Co-authored-by: angelinalg <122562471+angelinalg@users.noreply.github.com> Signed-off-by: Max Pumperla <max.pumperla@googlemail.com>
@ericl the reason is that we wanted to merge this soon and then follow up with the latest changes. I think it's valuable to have a 2.4 snapshot and then apply the delta of the API changes. That's what we agreed on in the Data standup. If you insist, I can make the changes now. @pcmoritz @zhe-thoughts any thoughts?
@ericl @pcmoritz @maxpumperla @amogkam: Yes, I think let's merge this one first and have a follow-up to reflect the changes for strict mode. Not a big deal, but structurally it's easier to understand.
Sounds pretty good. Can you address the TOC order change though? That is my only blocking comment.
LGTM, let's do the strict mode simplifications as a follow-up. It will get a bunch simpler 🥳
Signed-off-by: Max Pumperla <max.pumperla@googlemail.com>
Preview: https://anyscale-ray--34567.com.readthedocs.build/en/34567/data/batch_inference.html
Signed-off-by: Max Pumperla <max.pumperla@googlemail.com> Co-authored-by: angelinalg <122562471+angelinalg@users.noreply.github.com>