
Doc reviews + enhancements #561

Closed
dhaniram-kshirsagar opened this issue Jul 29, 2020 · 2 comments · Fixed by #584
Assignees
Labels
documentation Improvements or additions to documentation

Comments

dhaniram-kshirsagar commented Jul 29, 2020

📚 Documentation

This issue tracks an overall review of the documentation, to identify any technical inaccuracies or gaps in the doc flow.
Findings -

  1. Add a link to the custom handler doc in the MAR doc
  2. Provide more details around requirements.txt, the torch-model-archiver parameter used when someone wants to install model-specific Python packages

....more to be added and fixed
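As an illustration of item 2, the archiver can bundle a per-model requirements file; a minimal sketch of the invocation follows (the model name and file paths are hypothetical, and TorchServe only installs the listed packages when `install_py_dep_per_model=true` is set in config.properties):

```shell
# Package a model along with its Python dependencies (file names are hypothetical).
torch-model-archiver \
  --model-name my_text_model \
  --version 1.0 \
  --serialized-file model.pt \
  --handler text_classifier \
  --requirements-file requirements.txt \
  --export-path model_store
```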

deepakbabel commented Jul 31, 2020

Some documentation sections appear to be broken in the examples section. Here are the steps I tried:

  1. Navigate to https://github.com/pytorch/serve/tree/master/examples.
    The following URLs are not updated properly:
  • Serving torchvision image classification models in TorchServe
  • Serving custom model with custom service handler
  • Serving text classification model
  • Serving object detection model
  • Serving image segmentation model

Also subsection for Hugging Face Transformers is missing from the above list.

@deepakbabel

In the examples section for batch inference, there are some typos. Details below:
Navigate to https://github.com/pytorch/serve/blob/master/docs/batch_inference_with_ts.md#batch-inference-with-torchserve-using-resnet-152-model

In the "Run inference to test the model" section, the port number is missing from the inference command:
curl localhost/predictions/resnet-152-batch -T kitten.jpg
Because no port is specified, curl defaults to port 80, so the request fails when run against TorchServe.
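For reference, the command in the doc should include TorchServe's inference port; a corrected sketch, assuming the default port 8080 has not been changed in config.properties:

```shell
# Specify the inference port explicitly (8080 is TorchServe's default inference port)
curl http://localhost:8080/predictions/resnet-152-batch -T kitten.jpg
```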
