Add the presets for Tritonserver #1085
Conversation
I still have three kinds of issues here:
1. This is based on the following C++ code: I followed https://github.com/bytedeco/javacpp/wiki/Mapping-Recipes#ignoring-attributes-and-macros and added the line as described there, but it doesn't work here, though it looks like it works in most cases. What's the reason?
2. This is based on the following C++ code:
3. Lots of the errors are about enums, such as for the following C++ code. How do I deal with these cases?
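For reference, the recipe from that wiki page normally goes in the presets' `InfoMap`: declaring the macros with empty `cppTypes()` and `annotations()` makes the parser skip them. This is only a sketch; the macro names are taken from this thread and should be verified against the actual Triton headers.

```java
// Sketch of the "ignoring attributes and macros" recipe inside the
// presets class (e.g. tritonserver.java). The macro names listed here
// are assumptions based on this thread; check them against the headers.
public void map(InfoMap infoMap) {
    infoMap.put(new Info("TRITONSERVER_DECLSPEC",
                         "TRITONBACKEND_DECLSPEC",
                         "TRITONREPOAGENT_DECLSPEC").cppTypes().annotations());
}
```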
It's "TRITONBACKEND_DECLSPEC", and not "TTRITONBACKEND_DECLSPEC", right?
You'll probably need to add that one to the list above as well.
This error doesn't seem related to the enum, and that should work as is. Could you explain why you believe they are connected?
Yes, my mistake, fixed.
This is gone when the first one about typo is fixed.
This issue is still there. Since it uses "TRITONSERVER_DataType *" as the variable's data type, it looks like JavaCPP cannot translate it correctly. When the compiler compiles the generated jnitritonserver.cpp file, the error is:
error: cannot convert 'TRITONSERVER_DataType' {aka 'TRITONSERVER_datatype_enum'} to 'TRITONSERVER_DataType*' {aka 'TRITONSERVER_datatype_enum*'}
I think if this were not an enum but an int, there wouldn't be a translation issue here.
What is the signature in Java of the native wrapper method for TRITONBACKEND_InputPropertiesForHostPolicy()? Please update the files in src/gen on your fork!
Yes, I updated the files in src/gen. The issue is still there. It still doesn't work; it looks like this only works for C++ structs/classes?
The Java signature of
Looks like I need to modify the generated file to put an int pointer there, right?
No, it works automatically because
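By default, JavaCPP maps C enums to Java `int`, so a `TRITONSERVER_DataType*` out-parameter would typically surface in the generated wrapper as a cast `IntPointer`, without hand-editing the generated file. A hypothetical sketch of the shape (the function name and exact signature are illustrative, not the actual generated code):

```java
// Hypothetical shape only — not the real generated signature.
// A C parameter "TRITONSERVER_DataType* datatype" (an enum used as an
// out-parameter) is commonly wrapped by JavaCPP as an int-sized pointer:
public static native int someTritonFunction(
        @Cast("TRITONSERVER_DataType*") IntPointer datatype);
```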
```java
@Platform(
    value = {"linux-arm64", "linux-ppc64le", "linux-x86_64", "windows-x86_64"},
    compiler = "cpp11",
    include = {"tritonbackend.h", "tritonrepoagent.h", "tritonserver.h"},
```
Here, "tritonserver.h" should probably be moved before "tritonbackend.h".
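With that suggestion applied, the include list would start with the base header, e.g. (a sketch of just the reordered annotation; the remaining attributes are elided):

```java
@Platform(
    value = {"linux-arm64", "linux-ppc64le", "linux-x86_64", "windows-x86_64"},
    compiler = "cpp11",
    include = {"tritonserver.h", "tritonbackend.h", "tritonrepoagent.h"}
    // ... remaining attributes unchanged ...
)
```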
Yes, it works! It gets compiled eventually. Also, I removed the unused generated TensorRT files.
Right, simply remove src/gen and rerun the build to get rid of old files.
@saudet Updated the PR. Again, the places marked with "//jack" are the ones I don't know how to deal with. Also, I added the simple.cc file for reference.
@saudet I need to catch up with the schedule :) I need your help to finish this sample before the end of this week, bro.
Got it, don't worry, I should be able to get this done tomorrow. Are you sure, though, that simple.cc is a standalone program?
Some time ago, I saw someone do something like: Also, I will try it myself tomorrow.
You can find some docs here: https://github.com/triton-inference-server/server/blob/main/docs/inference_protocols.md#c-api and, as you know, the original C++ file is here:
@saudet I just tried; basically, simple can be built with:

```
gcc simple.cc -o simple -I/opt/tritonserver/include -I/workspace/tritonserver_21.07_source/server-2.12.0/ -I/usr/local/cuda-11.4/targets/x86_64-linux/include -D TRITON_ENABLE_GPU -D TRITON_MIN_COMPUTE_CAPABILITY=5.3 -L/opt/tritonserver/lib -L/usr/local/lib -L/usr/local/cuda-11.4/targets/x86_64-linux/lib -ltritonserver -lpthread -lcudart -lstdc++
```

Here's the command line:

```
root@872b34da9e75:/workspace/tritonserver_21.07_source/server-2.12.0/src/servers# ./simple
```
Yes, thanks, it looks like it was intended to be used as "simple" standalone sample code, so that looks OK. However, it is quite complex for a "simple" example. I wasn't able to take enough time today, but I should be able to work through it tomorrow.
Thanks, bro! I will also try to add something to simple.java.
@saudet How about simplifying the simple sample if it's really too long? How about bypassing some of the difficult parts?
It's fine, I should have something that runs by the end of the day.
Then I will try to start some discussion about use cases/scenarios/benchmarks in the place you mentioned last time, adding the guy you mentioned.
Ok, along with a few fixes to the presets, I pushed an initial version of Simple.java that compiles fine and does something when executed with:
BTW, that C API is pretty low-level. It would make sense to write a high-level Java API on top of it...
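To make the "high-level API" idea concrete, here is a toy sketch of what such a layer could look like. All names here are hypothetical and for illustration only; the presets themselves expose the low-level TRITONSERVER_* functions, and a real implementation would call those under the hood.

```java
// Toy sketch of a hypothetical high-level Java API on top of the C API.
// Nothing here is part of the presets; the names are illustrative.
import java.util.Arrays;

interface TritonClient extends AutoCloseable {
    byte[] infer(String modelName, byte[] input) throws Exception;
    @Override void close();
}

public class HighLevelSketch {
    public static void main(String[] args) throws Exception {
        // In-memory stub standing in for a real server-backed implementation.
        try (TritonClient client = new TritonClient() {
            public byte[] infer(String model, byte[] input) {
                return input.clone(); // identity "model" for the demo
            }
            public void close() {}
        }) {
            byte[] result = client.infer("simple", new byte[]{1, 2, 3});
            System.out.println(Arrays.toString(result)); // [1, 2, 3]
        }
    }
}
```

The point of the sketch is the shape: resource management via `AutoCloseable` and a single typed entry point, instead of manually pairing the C API's create/delete calls.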
OK, will do it tomorrow.
@saudet Errors when compiling:
Downloaded from central: https://repo.maven.apache.org/maven2/commons-codec/commons-codec/1.11/commons-codec-1.11.jar (335 kB at 391 kB/s)
What's the probable reason?
@saudet It doesn't work... my steps:
It failed with the above error.
The presets have no "-gpu" extension, so this won't work; just do:
OK, it works:
Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-utils/3.0.20/plexus-utils-3.0.20.jar (243 kB at 124 kB/s)
I will try to add some model_repo for it. :)
```
root@872b34da9e75:/workspace/forSimpleSample/javacpp-presets/tritonserver/samples# mvn exec:java -Djavacpp.platform=linux-x86_64 -Dexec.args="-r /workspace/tritonserver_21.07_source/server-2.12.0/docs/examples/model_repository/models"
```
It ran successfully! That's a milestone for me! :) That's it! @saudet
@saudet Added the README and some other slight modifications, please review. One more thing: how do I generate the apidoc files?
There are still a couple of places that need to be updated, at least these files:
Done. Please review.
@saudet Looks like it passed CI.
Yes, it seems almost ready to be merged. 👍 Do you plan to add support for linux-arm64 and windows-x86_64 as well, or is tritonserver pretty much limited to linux-x86_64 only?
Since I don't have a Tegra-based board on hand, I need to check ARM support later.
For ARM-based platforms other than Tegra, maybe we can give it a try; let me check the Triton repo, I remember there's some ARM support there.
It looks like there are binaries available for download on the release page for both linux-arm64 and windows-x86_64: https://github.com/triton-inference-server/server/releases So, to build against those platforms as well, we could download them automatically, for example, as part of the cppbuild.sh script. Would you like to work on that right away, or would you like to merge what we have now and work on that later?
For me, since I don't have those platforms (boards), I prefer to defer ARM and Windows support until later.
Tritonserver is used to deploy ML/DL inference services.