Devinwong/Vision-AI-Dev-Kit-Hol
Exercise 1 – Set up device and deploy the get-started vision module

In this exercise, you will set up your camera and deploy the get-started vision module with the guidance of the OOBE (Out-Of-Box Experience) webpages.

Connecting the AI Vision Developer Kit to the internet and Azure IoT Hub/Edge

  1. Connect a USB Type-C cable from your workstation to the AI Vision Developer Kit. This single cable provides both power and data.
  2. Make sure the device LEDs are blinking RED. From your workstation, scan for Wi-Fi networks; you will see an access point named MSIOT_xxxxxx, where xxxxxx is the last six characters of the device's Wi-Fi MAC address (the full name is printed on the label on the bottom of the device). For example, it looks like MSIOT_BD097D. Double-check the name so that you don't connect to another attendee's camera. Connect to that Soft Access Point broadcast by the device (code-named Peabody).

The default passphrase is printed on a label on the bottom of the camera, for example: password: 88888888. If this line is not on the label, use 12345678 as the passphrase.

  3. Once you are connected to the above Wi-Fi network, a default OOBE setup page should pop up automatically. If it doesn't, launch a browser and visit http://setupaicamera.ms.
  4. Follow the setup page and fill in the necessary information; the default configuration works. For the Wi-Fi SSID on the third page, select an access point that can connect to the internet (not MSIOT_xxxxxx). The Wi-Fi profile is stored on the device (Peabody) so that it automatically reconnects to the same Wi-Fi network after a reboot.

Tip: The Wi-Fi connection in the lab venue may be slow and congested because many devices share limited bandwidth. Feel free to use your own cell phone hotspot if you encounter issues.

  5. Follow the steps in the webpages to create the corresponding Azure resources and deploy the get-started vision module from Azure Marketplace.
  6. On completion, you will be able to see the inferencing output over HDMI.

In this exercise, you set up the Vision AI Dev Kit so that it connects to the internet and IoT Hub. When you went through the OOBE webpages to create the IoT Edge device, it was also associated with your camera in the background. Finally, the get-started vision module was deployed dynamically from Azure Marketplace, and you were able to see the inferencing output over HDMI.

This is the system architecture:

[System architecture diagram]

Exercise 2 – Build your own AI model

We'll build our own AI model (image classification) to detect when someone is wearing a hard hat. You will share a hard hat with other attendees to validate the model you build with Custom Vision.

Set up a new Custom Vision project

  1. Login to the Azure Custom Vision Service (Preview) at https://www.customvision.ai/.
  2. Create a new project with these recommended settings:
     - Name: something like Simulated HardHat Detector
     - Project Type: [Classification]
     - Resource group: create a new one
     - Classification Type: [Multiclass (Single tag per image)]
     - Domain: [General (compact)]
     - Export Capabilities: [Vision AI Dev Kit]

Upload and tag your training data

Some training images have already been collected for you for the hard hat use case. Download them and upload them to your Custom Vision project:

  1. Download the .zip file at this location: https://1drv.ms/u/s!AkzLzaBpSgoMo9hXX4NPjd8QrfhQLA?e=M3ehCL
  2. Decompress it
  3. Upload the images to Custom Vision one tag at a time (HardHat/NoHardHat) and tag them appropriately during upload. Upload all similarly named pictures (e.g. HardHat) at the same time.

Train and export your custom model

  1. To train the model with your new training data, go to your Custom Vision project and click Train.
  2. To export it, select the Performance tab and click the Export button. Choose the Vision AI Dev Kit option, then right-click the download button and copy the link (see the screenshot).
  3. To confirm the link really points to a zip file, visit it in a browser and verify that it prompts you to download a zip. If so, keep this link for the next step.
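If you'd rather sanity-check the link programmatically than in a browser, you can look for the ZIP signature at the start of the downloaded bytes: every zip file begins with the magic bytes PK\x03\x04. A minimal sketch (the helper name and the in-memory demo zip are illustrative; for the real link you would fetch the first few bytes with your HTTP client of choice):

```python
import io
import zipfile

def looks_like_zip(data: bytes) -> bool:
    """Return True if the bytes begin with the ZIP local-file signature."""
    return data[:4] == b"PK\x03\x04"

# Demonstrate with a small zip built in memory (standing in for the export).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model.dlc", b"placeholder")

print(looks_like_zip(buf.getvalue()))       # True
print(looks_like_zip(b"<html>404</html>"))  # False
```

An HTML error page (for example, an expired share link) fails this check immediately, which is faster than waiting for a download to complete.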

Updating the configuration of the Get Started module to use your new custom model

  1. Log in to the Azure portal at https://portal.azure.com and go to your IoT Hub resource
  2. Click on the IoT Edge tab and then on your camera device name.
  3. Click on the AIVisionDevKitGetStartedModule module name and click Module Identity Twin
  4. Update ModelZipUrl to your new URL and hit Save
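Editing the module identity twin in the portal amounts to patching the module twin's desired properties. Assuming the get-started module reads ModelZipUrl from its desired properties (as the portal steps above suggest), the change you are applying looks like the following fragment, where the URL placeholder stands for the export link you copied:

```json
{
  "properties": {
    "desired": {
      "ModelZipUrl": "<your-copied-export-link>"
    }
  }
}
```

The IoT Edge runtime pushes this desired-property change down to the device, which is why the module picks up the new model without a manual redeploy.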

Within a few minutes, your device should be running your custom model! You can check the progress locally on the device; see the next exercise for details.

Test your new model

  1. Put your hard hat on and smile at the camera!
  2. Verify that the camera picks it up and correctly classifies you as wearing the hard hat.

In this exercise, you trained your own image classification model on the Microsoft Custom Vision Service and retrieved the DLC files it exported. Then, using the module identity twin, you replaced the model on the local device directly from cloud storage within a few minutes.

Exercise 3 (Optional) – Build your own Object detection AI model

As a next step, you could reuse the same training images but build an object detection model instead of the image classification model from exercise 2. Again, use the Custom Vision Service (https://www.customvision.ai/) and its labeling tool.

Exercise 4 (Optional) – Access the device locally

The device provides a shell for direct access to local resources. Here are a few commands you may find useful. Use a command prompt to get started.

Please install the USB driver (QUD) first. After installing it, launch a command prompt, navigate to desktop > Lab Room > Platform tools, and then start using the following commands.

| Command | Description |
| --- | --- |
| `adb devices` | Check whether your workstation detects the camera correctly |
| `adb shell` | Enter the local shell |
| `ifconfig` | Check the network configuration |
| `docker ps` | List the containers running on Docker |
| `docker images` | List the Docker images downloaded to the local device |
| `docker logs -f edgeAgent` | Follow the log output from the edgeAgent container (Ctrl+C to exit) |
| `docker logs -f AIVisionDevKitGetStartedModule` | Follow the log output from the vision module |

To learn more about this Vision AI Dev Kit, visit https://azure.github.io/Vision-AI-DevKit-Pages/docs/Get_Started/
