Fix various typos
naushir committed Sep 25, 2024
1 parent 40d579c commit 292cf48
Showing 2 changed files with 20 additions and 18 deletions.
documentation/asciidoc/accessories/ai-camera/getting-started.adoc (26 changes: 14 additions & 12 deletions)
@@ -24,10 +24,12 @@
$ sudo apt install imx500-all

This command:

-* installs the `/lib/firmware/imx500_loader.fpk` and `/lib/firmware/imx500_main.fpk` firmware files required to operate the IMX500 sensor
+* installs the `/lib/firmware/imx500_loader.fpk` and `/lib/firmware/imx500_firmware.fpk` firmware files required to operate the IMX500 sensor
* places a number of neural network model firmware files in `/usr/share/imx500-models/`
* installs the IMX500 post-processing software stages in `rpicam-apps`
* installs the Sony network model packaging tools

-NOTE: The IMX500 kernel device driver loads all the firmware files (loader, main, and network) when the camera starts. This may take several minutes if the neural network model firmware has not been previously cached. The demos below display a progress bar on the console to indicate firmware loading progress.
+NOTE: The IMX500 kernel device driver loads all the firmware files when the camera starts. This may take several minutes if the neural network model firmware has not been previously cached. The demos below display a progress bar on the console to indicate firmware loading progress.

=== Reboot

@@ -44,13 +46,13 @@
Once all the system packages are updated and firmware files installed, we can start running some examples.

=== `rpicam-apps`

-The xref:../computers/camera_software.adoc#rpicam-apps[`rpicam-apps` camera applications] include IMX500 object inference and pose estimation stages that can be run in the post-processing pipeline. For more information about the post-processing pipeline, see xref:../computers/camera_software.adoc#post-process-file[the post-processing documentation].
+The xref:../computers/camera_software.adoc#rpicam-apps[`rpicam-apps` camera applications] include IMX500 object detection and pose estimation stages that can be run in the post-processing pipeline. For more information about the post-processing pipeline, see xref:../computers/camera_software.adoc#post-process-file[the post-processing documentation].

The examples on this page use post-processing JSON files located in `/usr/share/rpicam-assets/`.

-==== Object inference
+==== Object detection

-The MobileNet SSD neural network performs basic object detection, providing bounding boxes and confidence values for each object found. `imx500_mobilenet_ssd.json` contains the configuration parameters for the IMX500 object inferencing post-processing stage using the MobileNet SSD neural network.
+The MobileNet SSD neural network performs basic object detection, providing bounding boxes and confidence values for each object found. `imx500_mobilenet_ssd.json` contains the configuration parameters for the IMX500 object detection post-processing stage using the MobileNet SSD neural network.

`imx500_mobilenet_ssd.json` declares a post-processing pipeline that contains two stages:

@@ -77,7 +79,7 @@
To record video with object detection overlays, use `rpicam-vid` instead. The following command records a 10-second clip:

[source,console]
----
$ rpicam-vid -t 10s -o output.264 --post-process-file /usr/share/rpicam-assets/imx500_mobilenet_ssd.json --width 1920 --height 1080 --framerate 30
----

-You can configure the `imx500_object_inference` stage in many ways.
+You can configure the `imx500_object_detection` stage in many ways.

For example, `max_detections` defines the maximum number of objects that the pipeline will detect at any given time. `threshold` defines the minimum confidence value required for the pipeline to consider any input as an object.
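For illustration only, a stripped-down stage configuration overriding these two parameters might look like the sketch below. The shipped `imx500_mobilenet_ssd.json` contains more stages and parameters than shown here, and the `network_file` value is a placeholder rather than the real path:

[source,json]
----
{
    "imx500_object_detection":
    {
        "max_detections": 5,
        "threshold": 0.55,
        "network_file": "<path to the MobileNet SSD .rpk file>"
    }
}
----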

@@ -105,22 +107,22 @@
image::images/imx500-posenet.jpg[IMX500 PoseNet]

You can configure the `imx500_posenet` stage in many ways.

-For example, `max_detections` defines the maximum number of body points that the pipeline will detect at any given time. `threshold` defines the minimum confidence value required for the pipeline to consider input as a body point.
+For example, `max_detections` defines the maximum number of bodies that the pipeline will detect at any given time. `threshold` defines the minimum confidence value required for the pipeline to consider input as a body.
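Analogously, an illustrative fragment for the PoseNet stage (the values below are examples only, not the shipped defaults) might read:

[source,json]
----
{
    "imx500_posenet":
    {
        "max_detections": 5,
        "threshold": 0.3
    }
}
----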

=== Picamera2

-For examples of image classification, object inference, object segmentation, and pose estimation using Picamera2, see https://github.com/raspberrypi/picamera2-imx500/blob/main/examples/imx500/[the `picamera2-imx500` GitHub repository].
+For examples of image classification, object detection, object segmentation, and pose estimation using Picamera2, see https://github.com/raspberrypi/picamera2/blob/main/examples/imx500/[the `picamera2` GitHub repository].
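As a rough sketch of how those examples drive the sensor, the snippet below assumes the `IMX500` helper class from `picamera2.devices` and one of the models shipped in the `imx500-models` package; treat the method names as assumptions and consult the repository examples for the exact, current API:

[source,python]
----
from picamera2 import Picamera2
from picamera2.devices import IMX500

# Assumed helper: constructing IMX500 with an .rpk path uploads the
# network firmware to the sensor before the camera starts.
imx500 = IMX500("/usr/share/imx500-models/imx500_network_ssd_mobilenetv2_fpnlite_320x320_pp.rpk")
picam2 = Picamera2(imx500.camera_num)  # open the camera the IMX500 is attached to
picam2.start(show_preview=True)

while True:
    # Inference results arrive alongside each frame's metadata.
    metadata = picam2.capture_metadata()
    outputs = imx500.get_outputs(metadata)  # assumed accessor for raw output tensors
    if outputs is not None:
        print([o.shape for o in outputs])
----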

-Most of the examples use OpenCV for some additional processing, so if you haven't done so previsouly, please run:
+Most of the examples use OpenCV for some additional processing, so if you haven't done so previously, please run:

[source,console]
----
$ sudo apt install python3-opencv
----

-Now download the repository to your Raspberry Pi to run the examples. You'll find example files in the root directory, with additional information in the `README.md` file.
+Now download the https://github.com/raspberrypi/picamera2[`picamera2` repository] to your Raspberry Pi to run the examples. You'll find example files in the root directory, with additional information in the `README.md` file.

-Run the following script from the repository to run YOLOv8 object inference:
+Run the following script from the repository to run YOLOv8 object detection:

[source,console]
----
@@ -131,5 +133,5 @@
To try pose estimation in Picamera2, run the following script from the repository:

[source,console]
----
-$ python imx500_pose_estimation_yolov8n_demo.py --model /usr/share/imx500-models/imx500_network_yolov8n_pose.rpk
+$ python imx500_pose_estimation_higherhrnet_demo.py
----
@@ -1,4 +1,4 @@
-== Model Conversion
+== Model Deployment

The process of deploying a new neural network model to the Raspberry Pi AI Camera will normally consist of the following steps:

@@ -13,7 +13,7 @@
The first three steps will normally be performed on a more powerful computer such as a desktop or laptop.

The creation of neural network models is beyond the scope of this guide. Existing models can be re-used, or new ones created using popular frameworks like TensorFlow or PyTorch.

-For more information, readers are referred to the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera[Sony IMX500 developer website].
+For more information, readers are referred to the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera[AITRIOS Developer] website.

=== Quantisation and Compression

@@ -26,7 +26,7 @@
pip install model_compression_toolkit

and information and tutorials can be found at the project's https://github.com/sony/model_optimization[GitHub page].

-At the end of this process, you should export the converted model in either Keras (for TensorFlow) or ONNX (for PyTorch) format.
+The _Model Compression Toolkit_ will generate a quantised model in either Keras (for TensorFlow) or ONNX (for PyTorch) format.
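By way of illustration, a post-training quantisation pass over a Keras model might look roughly like the sketch below. The MCT API evolves between releases, so treat the function names and arguments as assumptions and consult the MCT tutorials for the current signatures:

[source,python]
----
import numpy as np
import model_compression_toolkit as mct

def representative_data_gen():
    # Calibration batches shaped like the model input; use real samples in practice.
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# Assumed API: quantisation targets describing the IMX500 hardware.
tpc = mct.get_target_platform_capabilities("tensorflow", "imx500")

quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
    float_model,                 # your trained Keras model
    representative_data_gen,
    target_platform_capabilities=tpc)

quantized_model.save("quantized_model.keras")  # input for the converter step below
----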

=== Conversion

@@ -64,11 +64,11 @@
imxconv-pt -i <compressed ONNX model> -o <output folder>

In both cases, the output folder will be created containing, among other things, a memory usage report and a `packerOut.zip` file, which we will need to copy to the Pi for the final step.
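For instance, assuming a quantised ONNX model named `quantized_model.onnx` from the PyTorch flow (a hypothetical file name), the conversion might be invoked as:

[source,console]
----
imxconv-pt -i quantized_model.onnx -o converted_model
----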

-Again, for more information on the model conversion process, please refer to the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera/documentation/imx500-converter[Sony IMX500 converter documentation].
+Again, for more information on the model conversion process, please refer to the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera/documentation/imx500-converter[IMX500 Converter] documentation.

=== Packaging

-The final step, which we run an a Raspberry Pi, is packaging the model into into a firmware file. Before proceeding, we must install the necessary tools:
+The final step, which we run on a Raspberry Pi, is packaging the model into an _RPK_ file. This _RPK_ file is then uploaded to the IMX500 camera when running the neural network model. Before proceeding, we must install the necessary tools:

[source,console]
----
@@ -84,4 +84,4 @@
imx500-package.sh -i <path to packerOut.zip> -o <output folder>

The output folder should finally contain a file `network.rpk`, whose name we pass to our IMX500 camera applications.
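As an illustrative final check (file name assumed), you could copy one of the shipped post-processing JSON files, point its `network_file` parameter at your new `network.rpk`, and run:

[source,console]
----
rpicam-hello -t 0s --post-process-file my_network.json
----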

-More specific instructions on all these tools, and their constraints is out of scope for this tutorial. For a more comprehensive set of instructions and further specifics on the tools used, please see the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera/documentation/imx500-packager[Sony IMX500 packager documentation].
+More specific instructions on all these tools, and their constraints, are out of scope for this tutorial. For a more comprehensive set of instructions and further specifics on the tools used, please see the official https://developer.aitrios.sony-semicon.com/en/raspberrypi-ai-camera/documentation/imx500-packager[IMX500 Packager] documentation.
