Commit 6defd01: minor changes
saileshd1402 committed Nov 16, 2023 (1 parent: c03d00a)
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion docs/gpt-in-a-box/vm/inference_requests.md

@@ -8,7 +8,7 @@ The Inference Server can be inferenced through the TorchServe Inference API. Fin
 | inference_server_endpoint | localhost |
 | inference_port | 8080 |
 
-The following are example cURL commands to Inference the Inference Server.
+The following are example cURL commands to send inference requests to the Inference Server.
 
 ## Inference Requests
 The following is the template command for inferencing with a text file:
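The hunk above rewords the introduction to the inference request examples. As a minimal sketch of such a request, assuming TorchServe's standard Inference API, the defaults from the table (endpoint localhost, port 8080), and a hypothetical model name "llm" with a hypothetical input file "sample_text.txt":

```shell
# Hypothetical model name and input file; substitute your own values.
MODEL_NAME="llm"
INPUT_FILE="sample_text.txt"

# TorchServe Inference API: send the input file to the predictions
# endpoint of the registered model.
curl "http://localhost:8080/predictions/${MODEL_NAME}" -T "${INPUT_FILE}"
```

This assumes a TorchServe instance is already running and serving a model under that name; the response body is the model's prediction.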
2 changes: 1 addition & 1 deletion docs/gpt-in-a-box/vm/management_requests.md

@@ -8,7 +8,7 @@ The Inference Server can be managed through the TorchServe Management API. Find
 | inference_server_endpoint | localhost |
 | management_port | 8081 |
 
-The following are example cURL commands to Manage the Inference Server.
+The following are example cURL commands to send management requests to the Inference Server.
 
 ## Describe Registered Model
 Once a model is loaded on the Inference Server, we can use the following request to describe the model and it's configuration.
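The hunk above rewords the introduction to the management request examples. As a sketch of the "Describe Registered Model" request mentioned in the context, assuming TorchServe's standard Management API, the defaults from the table (endpoint localhost, port 8081), and a hypothetical model name "llm":

```shell
# Hypothetical model name; substitute the name the model was registered under.
MODEL_NAME="llm"

# TorchServe Management API: GET /models/{model_name} returns the model's
# status and configuration as JSON.
curl "http://localhost:8081/models/${MODEL_NAME}"
```

As with the inference example, this assumes a running TorchServe instance with the model already registered.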