# TensorRT Applications


This set of alwaysAI applications uses TensorRT binaries to perform local inferencing on an NVIDIA Jetson device. These binaries can be found in the alwaysAI model catalog; each model name starts with `TRT` and ends with the name of the Jetson device it should run on, for example `nano`. TensorRT binaries are the most efficient way to run inference on an NVIDIA Jetson device. alwaysAI currently supports TensorRT binaries for the Jetson Nano, Xavier NX, and AGX Xavier.


## Repo Programs

| Folder | Description |
|--------|-------------|
| license-vehicle-detection | Detects vehicles and license plates; currently set up to work with the Jetson Nano |
| distance-between-hands | Detects the distance between hands in meters; requires an Intel RealSense camera and is set up to work with the Jetson Nano |

## Requirements

## Usage

Once the alwaysAI tools are installed on your development machine (or on the edge device, if developing directly on it), you can run the following CLI commands:

To perform initial configuration of the app:

```
aai app configure
```

If you're running on a Jetson device other than a Nano, add the model from the catalog and update your app. For example, for an Xavier NX:

```
aai app models add alwaysai/TRT_ssd_mobilenet_v1_coco_vehicle_license_xavier
```

Then update `app.py` to reference the new model ID.
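In `app.py`, the model ID is typically the string passed when constructing the object detector. A minimal sketch of the change, assuming the app follows the common edgeIQ `ObjectDetection` pattern with the TensorRT engine (check your app's actual code for the exact call):

```python
import edgeiq

# Replace the default Nano model ID with the Xavier NX binary added above
obj_detect = edgeiq.ObjectDetection(
    "alwaysai/TRT_ssd_mobilenet_v1_coco_vehicle_license_xavier")

# Load the model with the TensorRT engine so the binary runs natively on the Jetson
obj_detect.load(engine=edgeiq.Engine.TENSOR_RT)
```

Only the model ID string (and, if needed, the engine argument) should change; the rest of the detection loop can stay as-is.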

To prepare the runtime environment and install app dependencies:

```
aai app install
```

To start the app:

```
aai app start
```

## Support