When using ArgumentParser, "unrecognized arguments: --sock-type unix --sock-name /tmp/.ts.sock.9000" is raised #3299
Comments
Hi @james-joobs
With this argument parser in place, running the CLI command "torchserve --start --ncs ..." produces the error.
After removing these arguments, the error no longer occurs and everything runs fine.
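For context, a minimal sketch of the pattern that appears to trigger this, assuming the handler module builds its own ArgumentParser and calls parse_args() at import time (the parser and flag names below are made up, not taken from the actual handler code):

```python
# Illustrative anti-pattern (assumption, not the actual handler code): a module-level
# ArgumentParser in the handler file. The handler module is imported inside the worker
# process (model_service_worker.py), whose sys.argv contains --sock-type/--sock-name,
# so a strict parse_args() here aborts with "unrecognized arguments".
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--my-option", default="foo")  # hypothetical custom flag
args = parser.parse_args()  # fails when executed inside the TorchServe worker
```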
Thanks @james-joobs for the additional information. Now it's clearer to me where the issue is. The BaseHandler or a derived class is not executed directly from the CLI; it's model_service_worker.py that gets called. If you want to give additional parameters to your handler, you can use the model_config.yaml file as described here. It is included in the model packaging step and contains pre-specified elements (like pt2 + parallelism configs), but you can also add custom parameters in there. The BaseHandler reads the file during initialization, and if you do not call super().initialize() in your handler you can load the file's content from the request context as done here: serve/ts/torch_handler/base_handler.py Lines 151 to 152 in a2ba1c7
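For illustration, a minimal sketch of that approach, assuming a user-defined `myParams` section in model_config.yaml, that the file is passed to torch-model-archiver via --config-file, and that the parsed yaml is exposed on the context as `model_yaml_config` (section and key names are made up):

```python
# Sketch of a custom handler reading user-defined values from model_config.yaml.
# Assumed (illustrative) model_config.yaml contents, packaged with
# `torch-model-archiver ... --config-file model_config.yaml`:
#
#   minWorkers: 1
#   myParams:
#     greeting: hello
#
from ts.torch_handler.base_handler import BaseHandler


class MyHandler(BaseHandler):
    def initialize(self, context):
        super().initialize(context)  # BaseHandler copies context.model_yaml_config onto self
        # If super().initialize() is skipped, the parsed yaml can be read from the
        # request context directly (mirrors the base_handler.py lines referenced above):
        cfg = getattr(context, "model_yaml_config", {}) or {}
        self.my_params = cfg.get("myParams", {})  # "myParams" is a made-up section name
```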
Going through the docs right now and I am afraid they need a bit of polish. I'll self-assign the issue and try to document the model_config.yaml file in the next few days. Let me know if this does not fit your use case or if you have further questions.
🐛 Describe the bug
Error logs
model_service_worker.py: error: unrecognized arguments: --sock-type unix --sock-name /tmp/.ts.sock.9000
Installation instructions
python ./ts_scripts/install_dependencies.py --cuda=cu121
Model Packaging
torchserve --start --ncs
config.properties
Nothing
Versions
torch 2.3.0+cu121
torch-model-archiver 0.11.1
torch-workflow-archiver 0.2.14
torchaudio 2.3.0+cu121
torchmetrics 1.4.1
torchserve 0.11.1
Repro instructions
torchserve --start
Possible Solution
Add exception handling around the argparse usage in model_service_worker.py so that unrecognized arguments do not abort the worker.
Or replace the argparse-based parsing in model_service_worker.py with arguments passed in through variables.
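As an illustration of the second suggestion, a sketch of how lenient parsing with parse_known_args() tolerates the worker's extra socket arguments instead of aborting (generic argparse usage, not code taken from model_service_worker.py):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--my-option", default="foo")  # hypothetical custom flag

# parse_known_args() returns (known, unknown) instead of erroring out on
# unrecognized arguments such as --sock-type unix --sock-name /tmp/.ts.sock.9000
args, unknown = parser.parse_known_args()
```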