
feat: jetpack-l4t based llama.cpp container #846

Open
gerred opened this issue Jul 29, 2024 · 1 comment

Labels
enhancement New feature or request

Comments

@gerred
Member

gerred commented Jul 29, 2024

User Story

As a user
I want to run accelerated models on Jetson AGX Orins
So that I can take advantage of the device's NPUs

Additional context

Compile using the NVIDIA l4t-jetpack container: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-jetpack
Example of a containerized build with all build tools included: https://github.com/dusty-nv/jetson-containers/tree/dev
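To make the idea concrete, here is a minimal Dockerfile sketch for building llama.cpp on the l4t-jetpack base image. The base image tag (`r36.2.0`), the `GGML_CUDA` CMake flag, and the compute-capability value are assumptions and should be checked against the NGC catalog and the current llama.cpp build docs for the JetPack release in use:

```dockerfile
# Sketch only: build llama.cpp inside the l4t-jetpack container.
# The tag r36.2.0 is an assumption -- pick the tag matching your JetPack release.
FROM nvcr.io/nvidia/l4t-jetpack:r36.2.0

RUN apt-get update && apt-get install -y --no-install-recommends \
        git build-essential cmake && \
    rm -rf /var/lib/apt/lists/*

RUN git clone https://github.com/ggerganov/llama.cpp /opt/llama.cpp
WORKDIR /opt/llama.cpp

# GGML_CUDA enables the CUDA backend; 87 is the compute capability
# of the AGX Orin's Ampere GPU.
RUN cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=87 && \
    cmake --build build --config Release -j"$(nproc)"
```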

Containers don't seem to need --gpus=all, and a Jetson running JetPack ships with its own NVIDIA Container Toolkit ready to go. It looks like this comes down to combining the CPU UDS bundle with inferencing containers built on the jetpack image.
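A hypothetical run invocation under those assumptions might look like the following; the image name, model path, and binary name are placeholders, and on Jetson selecting the nvidia runtime should be sufficient rather than passing --gpus=all:

```shell
# Sketch only: run the image built above on a Jetson AGX Orin.
# The JetPack-provided NVIDIA Container Toolkit supplies the nvidia runtime.
docker run --rm -it --runtime nvidia \
  -v /models:/models \
  my-l4t-llamacpp \
  ./build/bin/llama-cli -m /models/model.gguf -p "Hello" -ngl 99
```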

@gerred gerred added the enhancement New feature or request label Jul 29, 2024
@gerred
Member Author

gerred commented Jul 29, 2024

I have a Jetson available over Tailscale for testing as needed; I need to upgrade its storage before we can iterate.
