Merge branch 'staging_0_1_1' into issue_205
maaquib authored Apr 30, 2020
2 parents 6642df0 + 9e06b12 commit 0f93841
Showing 4 changed files with 82 additions and 3 deletions.
50 changes: 50 additions & 0 deletions ISSUE_TEMPLATE/bug_template.md
@@ -0,0 +1,50 @@
---
name: "\U0001F41B Bug report"
about: Create a report to help us improve

---

Your issue may already be reported!
Please search on the [issue tracker](https://github.com/pytorch/serve/issues) before creating one.

## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
* torchserve version:
* torch version:
* torchvision version [if any]:
* torchtext version [if any]:
* torchaudio version [if any]:
* java version:
* Operating System and version:

## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->

## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->

## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug -->

## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1.
2.
...

## Failure Logs [if any]
<!--- Provide any relevant log snippets or files here. -->

## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Installed from source? [yes/no]:
* Are you planning to deploy it using a docker container? [yes/no]:
* Is it a CPU or GPU environment?:
* Using a default/custom handler? [If possible upload/share custom handler/model]:
* What kind of model is it (e.g. vision, text, audio)?:
* Are you planning to use local models from the model-store, or models from a public URL (e.g. an S3 bucket)?
[If a public URL, provide the link.]:
* Provide config.properties, logs [ts.log] and parameters used for model registration/update APIs:
* Link to your project [if any]:
9 changes: 9 additions & 0 deletions ISSUE_TEMPLATE/doc_template.md
@@ -0,0 +1,9 @@
---
name: "\U0001F4DA Documentation"
about: Report an issue related to https://pytorch.org/serve/

---

## 📚 Documentation

<!-- A clear and concise description of what content in https://pytorch.org/serve/ is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->
20 changes: 20 additions & 0 deletions ISSUE_TEMPLATE/feature_template.md
@@ -0,0 +1,20 @@
---
name: "\U0001F680 Feature request"
about: Suggest an idea for this project

---

<!--
Thank you for suggesting an idea to improve the torchserve model serving experience.
Please fill in as much of the template below as you're able.
-->

## Is your feature request related to a problem? Please describe.
<!-- Please describe the problem you are trying to solve. -->

## Describe the solution
<!-- Please describe the desired behavior. -->

## Describe alternative solutions
<!-- Please describe alternative solutions or features you have considered. -->
@@ -46,11 +46,11 @@ def initialize(self, context):
if not os.path.isfile(model_def_path):
raise RuntimeError("Missing the model.py file")

state_dict = torch.load(model_pt_path, map_location=self.device)
state_dict = torch.load(model_pt_path)
from model import ResNet152ImageClassifier
self.model = ResNet152ImageClassifier()
self.model.load_state_dict(state_dict)

self.model.to(self.device)
self.model.eval()
logger.debug('Model file {0} loaded successfully'.format(model_pt_path))
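As context for the initialize() hunk above, here is a minimal sketch of device-aware model loading in a TorchServe custom handler. The `system_properties` keys follow the usual handler context, the class name and checkpoint filename are illustrative assumptions, and `model.py` / `ResNet152ImageClassifier` mirror the import shown in the diff; passing `map_location` to `torch.load` and also calling `model.to(self.device)` keeps the load robust whether or not CUDA is available.

```python
import os

import torch


class ResNet152Handler:
    """Sketch of the initialize() step shown in the hunk above (structure assumed, not the exact file)."""

    def initialize(self, context):
        properties = context.system_properties
        model_dir = properties.get("model_dir")
        gpu_id = properties.get("gpu_id")

        # Use the GPU assigned to this worker when CUDA is available, otherwise stay on CPU.
        self.device = torch.device(
            "cuda:{}".format(gpu_id) if torch.cuda.is_available() and gpu_id is not None else "cpu"
        )

        model_pt_path = os.path.join(model_dir, "resnet152.pth")  # hypothetical checkpoint name
        model_def_path = os.path.join(model_dir, "model.py")
        if not os.path.isfile(model_def_path):
            raise RuntimeError("Missing the model.py file")

        # map_location keeps a GPU-saved checkpoint loadable on a CPU-only host;
        # model.to(self.device) is what actually places the weights for inference.
        state_dict = torch.load(model_pt_path, map_location=self.device)
        from model import ResNet152ImageClassifier  # model.py is packaged inside the model archive
        self.model = ResNet152ImageClassifier()
        self.model.load_state_dict(state_dict)
        self.model.to(self.device)
        self.model.eval()
```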

@@ -87,7 +87,7 @@ def preprocess(self, request):
])
input_image = Image.open(io.BytesIO(image))
input_image = my_preprocess(input_image).unsqueeze(0)

input_image = Variable(input_image).to(self.device)
if input_image.shape is not None:
if image_tensor is None:
image_tensor = input_image
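Similarly, for the preprocess() hunk, a sketch of the batching loop with the device transfer in place. The transform values are the standard torchvision ImageNet preprocessing (only the tail of the Compose is visible above), the request-item layout (image bytes under "data" or "body") is an assumption about the handler contract, and the legacy `Variable` wrapper is dropped here since calling `.to(self.device)` on the tensor is equivalent in current PyTorch.

```python
import io

import torch
from PIL import Image
from torchvision import transforms


class ResNet152Handler:  # continuing the handler sketched above
    def preprocess(self, request):
        # Standard ImageNet preprocessing; the hunk above only shows the tail of this Compose.
        my_preprocess = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        image_tensor = None
        for data in request:
            # Each batched request item carries the image bytes under "data" or "body" (assumed layout).
            image = data.get("data") or data.get("body")
            input_image = Image.open(io.BytesIO(image))
            input_image = my_preprocess(input_image).unsqueeze(0)
            # Move the input onto the same device the model was placed on in initialize().
            input_image = input_image.to(self.device)
            if image_tensor is None:
                image_tensor = input_image
            else:
                image_tensor = torch.cat((image_tensor, input_image), 0)
        return image_tensor
```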
