Add `from_inference` to `KeyPoints` #1147
Conversation
supervision/keypoint/core.py (Outdated)

import cv2
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8s-640")
The problem with this docstring is that "yolov8s-640" is an object detection model.
Good catch. Putting a placeholder "<POSE_MODEL_ID>" until we have a well-trained COCO human-pose model on prod.
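For context, here is a rough sketch of the kind of response-to-arrays conversion a keypoints loader like `from_inference` has to perform on a pose prediction. The response shape and the field names ("predictions", "keypoints", "x", "y", "confidence"), as well as the helper itself, are illustrative assumptions, not the actual supervision implementation:

```python
import numpy as np

# Mocked inference-style pose response; field names here are assumptions
# for illustration, not the authoritative schema.
response = {
    "predictions": [
        {
            "keypoints": [
                {"x": 10.0, "y": 20.0, "confidence": 0.9},
                {"x": 30.0, "y": 40.0, "confidence": 0.8},
            ]
        }
    ]
}

def keypoints_from_response(response):
    """Collect per-prediction keypoint coordinates and confidences."""
    xy, conf = [], []
    for pred in response["predictions"]:
        xy.append([[kp["x"], kp["y"]] for kp in pred["keypoints"]])
        conf.append([kp["confidence"] for kp in pred["keypoints"]])
    return np.array(xy, dtype=np.float32), np.array(conf, dtype=np.float32)

xy, confidence = keypoints_from_response(response)
print(xy.shape)          # (1, 2, 2): (num_predictions, num_keypoints, 2)
print(confidence.shape)  # (1, 2)
```

The `(num_predictions, num_keypoints, 2)` layout is why the docstring's model ID matters: an object detection model like "yolov8s-640" returns no `keypoints` field at all, so the example would fail.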
@LinasKo looks like there are some conflicts ;)
* Example incorrectly suggested yolov8s-640
* Replaced with "<POSE_MODEL_ID>" until we have a COCO human pose model on prod. I don't think "horse-pose/3" is helpful: the use case is uncommon and the training quality is poor.
Solved.
@SkalskiP Once again, ready for review. I've tested that it works with inference_sdk, inference, and roboflow.
I tested this PR using this Colab: https://colab.research.google.com/drive/1udslR-XHRRcfT4CPkdEoR_O6mwiNk3kr?usp=sharing. Everything works.
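As a hedged illustration of the extra unwrapping step that roboflow-python users need before calling `from_inference` (per the PR description), assuming the hosted result nests the payload under a top-level "predictions" list:

```python
def unwrap_hosted_result(raw: dict) -> dict:
    """Return the single payload that from_inference would consume.

    Mirrors the call chain noted in the PR description:
    model.predict(img_url, hosted=True).json()["predictions"][0]
    """
    predictions = raw["predictions"]
    if not predictions:
        raise ValueError("empty 'predictions' list in roboflow response")
    return predictions[0]

# Mocked wrapper; the inner field names are illustrative assumptions.
raw = {
    "predictions": [
        {
            "predictions": [{"x": 1.0, "y": 2.0}],
            "image": {"width": 640, "height": 480},
        }
    ]
}

result = unwrap_hosted_result(raw)
print(sorted(result))  # prints ['image', 'predictions']
```

Since this step is undocumented, a helper like this (or a note in the docstring) would save roboflow-python users a confusing failure.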
Description
Adds `from_inference` to `KeyPoints`.

Tested with:
- inference_sdk.InferenceHTTPClient client
- inference server, same client
- inference.get_model
- roboflow-python

Untested:
- roboflow has some inconsistencies in the response (e.g. saying it's a ClassificationModel), but the response is mostly the same. The user also needs to call result = model.predict(img_url, hosted=True).json()["predictions"][0] before calling from_inference, and that's not documented here.

Type of change
Please delete options that are not relevant.
How has this change been tested, please provide a testcase or example of how you tested the change?
It's expected that your prod Roboflow API key will be set as the API_KEY env var.

Inference:
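For example (assuming a bash-like shell; the placeholder value is not a real key), the setup before running the tests would be:

```shell
# Export your prod Roboflow API key so the test scripts can read it.
export API_KEY="<YOUR_ROBOFLOW_API_KEY>"
```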
Any specific deployment considerations
For example, documentation changes, usability, usage/costs, secrets, etc.
Docs