Added support for Intel GPU in a custom build #7047
base: main
Conversation
Don't we need to modify the OpenVINO backend as well? I believe we were not installing the OpenVINO GPU plugin.
Agreed, it looks like at least this place needs to be updated: https://github.com/triton-inference-server/openvino_backend/blob/main/CMakeLists.txt#L213 (not sure why a custom plugins.xml is required?)
@tanmayv25 The related change in the openvino_backend is included in https://github.com/triton-inference-server/openvino_backend/pull/74/files
We need a better way to enable backends to provide customizations without adding top-level flags.
build.py (Outdated)
@@ -1115,6 +1115,17 @@ def create_dockerfile_linux(
     ENV LD_LIBRARY_PATH=/usr/local/tensorrt/lib/:/opt/tritonserver/backends/tensorrtllm:$LD_LIBRARY_PATH
     """
+    if FLAGS.enable_intel_gpu and ("openvino" in backends):
+        df += """
+RUN apt-get update && apt-get install -y gpg-agent wget
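The hunk header indicates eleven added lines, but the review view truncates the excerpt after the first RUN step. A plausible continuation is sketched below, assuming the drivers come from Intel's public graphics apt repository; the key URL, repository line, and package names are assumptions and are not taken from this PR:

    if FLAGS.enable_intel_gpu and ("openvino" in backends):
        df += """
# Illustrative sketch only: repository URL and package names are assumptions.
RUN apt-get update && apt-get install -y gpg-agent wget
RUN wget -qO - https://repositories.intel.com/graphics/intel-graphics.key | gpg --dearmor --output /usr/share/keyrings/intel-graphics.gpg
RUN echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/graphics/ubuntu jammy arc" > /etc/apt/sources.list.d/intel-graphics.list
RUN apt-get update && apt-get install -y intel-opencl-icd intel-level-zero-gpu level-zero
"""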
Instead of a Dockerfile change, can this be handled in the CMake / build steps for the backend?
Thought: can we pass a CMake variable to customize the build and pull this dependency in?
(See build.py, line 2343 in 0a543b8: parser.add_argument()
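A minimal sketch of that idea, assuming a new --enable-intel-gpu flag in build.py and a hypothetical TRITON_BUILD_OPENVINO_INTEL_GPU CMake option in the backend (the option name and the be/cargs variables are illustrative, loosely following build.py's conventions):

    # Sketch only: flag name mirrors this PR; the CMake option name is hypothetical.
    parser.add_argument(
        "--enable-intel-gpu",
        action="store_true",
        required=False,
        help="Install the Intel GPU runtime dependencies required by the OpenVINO backend.",
    )

    # Later, where the backend's CMake arguments are assembled, the flag could
    # be forwarded so the backend build pulls the dependency in itself:
    if FLAGS.enable_intel_gpu and be == "openvino":
        cargs.append("-DTRITON_BUILD_OPENVINO_INTEL_GPU:BOOL=ON")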
@tanmayv25, @nv-kmcgill53 - would this also require some changes to how we install backends once the build is complete?
How about having a setup script as part of the backend repository? We could decide on a standardized approach common across all the backends. build.py could then invoke this setup script to install all the runtime dependencies and prepare the environment to use the backend; see the sketch below.
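A rough sketch of how build.py might consume such a convention, assuming each backend optionally ships an install_dependencies.sh at its repo root (the script name, helper function, and Dockerfile steps are all assumptions, not an agreed design):

    import os

    def add_backend_setup_steps(df, backends, build_dir):
        # For every backend that ships the (hypothetical) setup script, append
        # a Dockerfile step that runs it in the runtime image.
        for be in backends:
            script = os.path.join(build_dir, be, "install_dependencies.sh")
            if os.path.exists(script):
                df += f"""
COPY {be}/install_dependencies.sh /tmp/{be}_setup.sh
RUN bash /tmp/{be}_setup.sh && rm /tmp/{be}_setup.sh
"""
        return df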
The idea of plug-and-play backends entails building backends in one place and then copying the backend libraries (*.so's) into the runtime environment with tritonserver and core. Hence, having a script to set up the environment would help in all use cases.
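For context, a minimal sketch of that plug-and-play flow, assuming the conventional /opt/tritonserver/backends/<name> layout (source and destination paths are illustrative):

    import shutil

    # Copy a built backend's shared libraries into a runtime install of Triton,
    # alongside tritonserver and core. Paths are illustrative only.
    shutil.copytree(
        "build/install/backends/openvino",
        "/opt/tritonserver/backends/openvino",
        dirs_exist_ok=True,
    )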
This will need some design work, but I like that idea.
Adds a build flag, --enable-intel-gpu, which includes the required Intel GPU runtime drivers in the Docker image.
Related to https://github.com/triton-inference-server/openvino_backend/pull/74/files
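For reference, the new flag would presumably be combined with an OpenVINO build, along the lines of the following invocation (any further accompanying build.py arguments are not shown in this PR and are omitted here):

    python build.py --backend=openvino --enable-intel-gpu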