Releases: helmut-hoffer-von-ankershoffen/jetson
TensorFlow Serving meta and public endpoint
- enh: Provide meta information in the webservice response, containing the TF model name, Jetson model name, timestamp and duration in milliseconds
- enh: Provide make target `make tensorflow-serving-predict-public` showing access to the public SSL endpoint on https://tensorflow-serving.polarize.ai, provided via CloudFront -> K8s/Traefik load balancer -> K8s/service -> webservice on the Jetson node (an illustrative call is sketched below)
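A minimal sketch of calling the public endpoint with Python `requests`; the predict route, payload shape and meta field names are assumptions for illustration — only the host and the general meta content (TF model name, Jetson model name, timestamp, duration in milliseconds) come from the notes above.

```python
import requests

# Hypothetical predict route exposed by the FastAPI facade behind CloudFront.
URL = "https://tensorflow-serving.polarize.ai/v1/models/mnist:predict"  # assumed path

response = requests.post(URL, json={"instances": [[0.0] * 784]}, timeout=10)
body = response.json()

# The release notes state the response carries meta information such as the
# TF model name, Jetson model name, timestamp and duration in milliseconds.
meta = body.get("meta", {})  # key name assumed
print(meta.get("tf_model_name"), meta.get("jetson_model_name"))
print(meta.get("timestamp"), meta.get("duration_ms"))
```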
Automation
- refactor: Automatically repack CUDA ML libraries as part of provisioning, reducing the number of commands to enter
- refactor: Automatically create the K8s token as part of provisioning K8s on (new) nodes, reducing the number of commands to enter
- enh: Provide more build metadata in Docker images
Versioned images
- enh: Images pushed to Docker Hub are now tagged with release versions using `make publish-all [tag]`
- enh: All images contain build metadata in the `/meta` directory
Enhancements and fixes
- feat: Provide `make publish-all` to publish all images to Docker Hub
- enh: Set additional CUDA compute capability 6.2 for jetson/xavier/tensorflow-serving-base, allowing it to run on the NVIDIA Jetson TX2
- fix: Set CUDA compute capability to 5.3 for jetson/nano/tensorflow-serving-base
- enh: Allow passing any options from Skaffold to Docker Build via custom builders
- doc: Add hyperlinks to README
Jetson AGX Xavier Developer Kit
Support for the Jetson AGX Xavier Developer Kit, including:
- Automated provisioning of a guest VM (Ubuntu) using Vagrant + VirtualBox + Ansible, as required for running NVIDIA SDK Manager on macOS
- Automated build of a custom kernel for Xavier supporting Kubernetes incl. Weave networking
- Automated build of custom rootfs for Xavier providing SDK components on-flash
- Support for headless OEM setup of Xavier
- Automated provisioning of Xavier after flashing, with the same feature set as for the Jetson Nano
- Automated integration of NVMe SSD during provisioning for adequate storage of Docker images and volumes
- Automated and persistent enabling of performance mode and maximum CPU/GPU/EMC frequencies for improved performance
- Automated build, test, deployment and publishing of Docker images and Kubernetes deployments for Xavier, with the same feature set as for the Nano
TensorFlow Serving
- Base image for TensorFlow Serving incl. the latest TensorFlow core, adapted to the CUDA capabilities of Jetson devices
- Python/FastAPI-based webservice as a facade for TensorFlow Serving, incl. a health check for K8s probes, interactive OAS 3 documentation, request and response validation, and access to TensorFlow Serving via its REST or gRPC endpoint (a minimal sketch follows after this list)
- Integration of Google's container structure tests in all builds
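A minimal sketch of such a facade, assuming TensorFlow Serving's REST endpoint and illustrative route names and request schema; none of these are taken from the repository.

```python
import time

import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="TensorFlow Serving facade")

TF_SERVING_URL = "http://localhost:8501/v1/models/model:predict"  # assumed

class PredictRequest(BaseModel):
    instances: list  # request validation via pydantic

@app.get("/health")
def health() -> dict:
    # Cheap liveness/readiness target for Kubernetes probes.
    return {"status": "ok"}

@app.post("/predict")
def predict(request: PredictRequest) -> dict:
    started = time.time()
    # Forward the validated payload to TensorFlow Serving's REST endpoint.
    upstream = requests.post(TF_SERVING_URL, json=request.dict(), timeout=10)
    upstream.raise_for_status()
    return {
        "predictions": upstream.json().get("predictions"),
        "duration_ms": int((time.time() - started) * 1000),
    }
```

FastAPI generates the interactive OAS 3 documentation automatically, and the pydantic model covers request validation; response validation and the gRPC path would be added analogously.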
NVIDIA JetPack 4.2.1
- Works with NVIDIA JetPack 4.2.1
- Automatically repackage libraries bundled with the JetPack image, such as cuDNN, TensorRT and the Python bindings, for Docker builds
- Semi-automatic setup of a SATA SSD connected via USB 3.0 as the boot device, greatly increasing throughput on disk-heavy workloads
Basics for ML
- Auto-bootstrap of macOS development environment
- Auto-provisioning of basics on Jetson node
- Auto-deploy of a Jupyter server supporting CUDA-accelerated TensorFlow+Keras running in a Kubernetes Pod on the Jetson node (see the sketch below)
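For example, a quick check from a notebook cell that the pod's TensorFlow build actually sees the Jetson GPU; this assumes a TensorFlow 1.x build as shipped in the JetPack 4.2.1 era and is illustrative only.

```python
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
# Returns True when the CUDA-enabled build can see the Jetson's GPU.
print("GPU available:", tf.test.is_gpu_available())

# Keras layers run CUDA-accelerated when a GPU device is visible.
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(10, input_shape=(4,))])
model.summary()
```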