Tello Gesture Controller

  1. Setup & Installation
  2. Repo Structure
  3. Usage Steps
  4. Implementation Steps
  5. Resources & References

Steps Still Todo

1. [ ] Make a script to handle the data generation and training processes
2. [ ] Collect more training data from the drone webcam
3. [ ] Scale the dataset by randomly adding noise to and varying the training examples
4. [ ] Swap the object detector for a hand landmark detector?
5. [ ] ???

Setup and Installation

🟢 With Weights 🟢

  1. Clone repository using git clone https://github.com/briancsavage/Tello-Hand-Signal-Controller.git
  2. Navigate to repository using cd Tello-Hand-Signal-Controller
  3. Activate the virtual environment via . venv/Scripts/activate in the root of the repo directory.
  4. Install the required dependencies via pip install -r requirements.txt in the activated environment.
  5. To run inference from the webcam, run python src/router.py; this pulls image data from webcam(0) on the system.
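Before frames from webcam(0) reach the detector, they have to match the model's expected input shape. A minimal preprocessing sketch is below; the 320x320 input size and the nearest-neighbour resize are assumptions about this checkpoint, not details confirmed by the repo.

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 320) -> np.ndarray:
    """Nearest-neighbour resize an HxWx3 frame to size x size and add a
    batch dimension, as an SSD-style detector typically expects."""
    h, w = frame.shape[:2]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source col for each output col
    resized = frame[rows][:, cols]
    return resized[np.newaxis, ...]  # shape (1, size, size, 3)

batch = preprocess(np.zeros((480, 640, 3), dtype=np.uint8))
```

The batched array can then be handed to the loaded detection model inside the inference loop.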

🟠 With Data 🟠

  1. Our goal is to train the ssd_mobilenet_v2 model using the TensorFlow training script and our hand signal training and testing data.
  2. Assuming you already have the repository locally, perform steps 2-4 of the With Weights section above to activate the virtual environment and install the necessary dependencies.
  3. The only dependency not completely handled by pip is the TensorFlow Object Detection API, so we need to follow the setup steps here.
    • The TensorFlow directory should be placed in the same parent directory as the Tello-Hand-Signal-Controller directory (i.e. adjacent directories).
  4. Use the provided recognizer.py to call the training script from within the TensorFlow Object Detection API.
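Step 4 ultimately shells out to the Object Detection API's training entry point. A sketch of assembling that command is below; the model_main_tf2.py location and the pipeline.config path are assumptions based on the adjacent-directory layout described above, not the repo's actual invocation.

```python
from pathlib import Path

# Assumed layout: the TensorFlow directory sits next to this repo.
REPO = Path("Tello-Hand-Signal-Controller")
TF_API = REPO.parent / "TensorFlow" / "models" / "research" / "object_detection"

def training_command(model_dir: Path, pipeline_config: Path) -> list[str]:
    """Build the argv for the TF2 OD API training script (hypothetical paths)."""
    return [
        "python", str(TF_API / "model_main_tf2.py"),
        f"--model_dir={model_dir}",
        f"--pipeline_config_path={pipeline_config}",
    ]

cmd = training_command(Path("models/ssd-mobilenet-v2"),
                       Path("models/ssd-mobilenet-v2/pipeline.config"))
```

A wrapper like recognizer.py would pass this argv to subprocess.run so training logs stream to the console.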

🔴 Without Either 🔴

  1. Our goal is to collect training data via the computer's webcam; once we have labeled data, we can complete the previous section, With Data, and train the sign detection model.
  2. Use the provided recognizer.py to pull image data from the computer webcam; it displays the current label being collected to the console, with instructions on how to form the hand signal, and saves each image into the images subdirectory of the data directory (i.e. Tello-Hand-Signal-Controller/data/images).
  3. Use LabelImg to annotate the collected images with each of the hand signal operations the drone should recognize (i.e. up, down, ...).
  4. Go to the With Data section above and complete its steps to train the ssd-mobilenet-v2 with the labelled object detection image data.
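The collected images follow a label-prefixed naming pattern (e.g. up-abc123.jpg in the repo tree below). Collection step 2 could generate such names as sketched here; the six-character id length and the label set are assumptions inferred from the examples, not the repo's actual scheme.

```python
import uuid

LABELS = ["up", "down", "left", "right"]  # assumed label set

def image_name(label: str) -> str:
    """Hypothetical helper: build '<label>-<shortid>.jpg' as in data/images."""
    if label not in LABELS:
        raise ValueError(f"unknown label: {label}")
    return f"{label}-{uuid.uuid4().hex[:6]}.jpg"
```

Keeping the label in the filename makes it easy to spot-check the dataset and to group examples per hand signal later.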

Repo Structure

data/
│
├── images/
│   ├── ...
│   ├── up-abc123.jpg
│   ├── up-def456.jpg
│   └── up-ghi789.jpg
│
├── annotations/
│   ├── ...
│   ├── up/
│   └── down/
│
├── records/
│   ├── train.record
│   └── test.record
│
├── train/
│   ├── ...
│   ├── up-abc123.jpg
│   └── up-abc123.xml
│
└── test/
    ├── ...
    ├── up-def456.jpg
    └── up-def456.xml
    
src/
│
├── utils/
│   └── generate_tfrecord.py
│
├── main.py
├── controller.py
├── detector.py
├── recognizer.py
└── router.py

models/
│
├── face-detection/
│   └── landmark_weights.pt
│
└── ssd-mobilenet-v2/
    ├── ...
    ├── ckpt-7.index
    ├── ckpt-7.data-000-of-001
    ├── ckpt-8.index
    └── ckpt-8.data-000-of-001

venv/
│
└── ...

setup.py
requirements.txt
README.txt
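The train/ and test/ folders above pair each .jpg with its LabelImg .xml annotation. A sketch of how annotated stems might be split so each pair lands wholly in one folder is below; the 80/20 ratio and the seeded shuffle are assumptions, not the repo's procedure.

```python
import random

def split_pairs(stems: list[str], test_fraction: float = 0.2,
                seed: int = 0) -> tuple[list[str], list[str]]:
    """Shuffle annotated image stems (e.g. 'up-abc123') and split them into
    (train, test) so each .jpg/.xml pair stays together."""
    stems = sorted(stems)               # deterministic base order
    random.Random(seed).shuffle(stems)  # seeded shuffle for reproducibility
    n_test = max(1, int(len(stems) * test_fraction))
    return stems[n_test:], stems[:n_test]

train, test = split_pairs(["up-abc123", "up-def456", "up-ghi789", "down-xyz000"])
```

Splitting by stem rather than by file avoids the common bug of an image landing in train/ while its annotation lands in test/.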

Usage Steps

After the necessary dependencies are installed and the model is trained, run:

python src/router.py
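At runtime, router.py has to map each detected hand-signal label to a Tello movement. A minimal dispatch sketch is below; the label set, the 30 cm distance, and the djitellopy-style method names are assumptions for illustration, not the repo's actual mapping.

```python
# Hypothetical label -> (method name, argument) table; distances in cm.
COMMANDS = {
    "up":    ("move_up", 30),
    "down":  ("move_down", 30),
    "left":  ("move_left", 30),
    "right": ("move_right", 30),
}

def dispatch(drone, label: str) -> bool:
    """Invoke the drone method for a detected label; ignore unknown labels."""
    action = COMMANDS.get(label)
    if action is None:
        return False
    method, distance = action
    getattr(drone, method)(distance)
    return True
```

A table like this keeps the detector's output decoupled from the drone API, so adding a new hand signal only means adding one row.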

Implementation Steps

  1. TODO in final report

Resources and References

  1. TensorFlow Object Detection Tutorial
  2. TensorFlow XML-to-TFRecord Converter
  3. TensorFlow Record Format Specs

About

Control Tello drone movement using hand signals
