## Steps Still Todo

1. [ ] Make script to handle data generation and training processes
2. [ ] Collect more training data from drone webcam
3. [ ] Scale the dataset via augmentation, randomly adding noise to and varying the existing training examples (see the sketch after this list)
4. [ ] Swap object detector for hand landmark detector?
5. [ ] ???
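
Todo item 3 above amounts to simple data augmentation. As a minimal sketch, and not the repository's implementation, the `add_gaussian_noise` helper and the output naming below are hypothetical:

```python
import cv2
import numpy as np

def add_gaussian_noise(image: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Return a noisy copy of the image - one cheap way to multiply examples."""
    noise = np.random.normal(0.0, sigma, image.shape)
    return np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# Turn one collected image into several noisy variants.
image = cv2.imread("data/images/up-abc123.jpg")
for i in range(3):
    cv2.imwrite(f"data/images/up-abc123-noisy{i}.jpg", add_gaussian_noise(image))
```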
## 🟢 With Weights 🟢

1. Clone the repository using `git clone https://github.com/briancsavage/Tello-Hand-Signal-Controller.git`
2. Navigate into the repository using `cd Tello-Hand-Signal-Controller`
3. Activate the virtual environment via `. venv/Scripts/activate` in the root of the repo directory.
4. Install the required dependencies via `pip install -r requirements.txt` in the activated environment.
5. To run inference from the webcam, run `python src/router.py`; this will pull image data from `webcam(0)` on the system (a capture sketch follows this list).
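
For reference, a minimal sketch of pulling frames from `webcam(0)` with OpenCV; this is an illustration only, not `router.py` itself, which also runs detection and drone control:

```python
import cv2

# Minimal frame-pulling loop from webcam(0), the default capture device.
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()  # frame is a BGR numpy array
    if not ok:
        break
    cv2.imshow("Tello Hand Signals", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```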
## 🟠 With Data 🟠

- Our goal is to train the `ssd_mobilenet_v2` model using the TensorFlow training script and our hand signal training and testing data.
- Assuming you already have the repository locally, perform steps 2-4 of the With Weights section above to activate the virtual environment and install the necessary dependencies.
- The only dependency that isn't completely handled by `pip` is the TensorFlow Object Detection API, and thus, we need to follow the setup steps here.
- The `TensorFlow` directory should be placed in the same parent directory as the `Tello-Hand-Signal-Controller` directory (i.e. adjacent directories).
- Use the provided `recognizer.py` to call the training script from within the TensorFlow Object Detection API (a sketch of such a call follows this list).
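
As a sketch of what that call can look like, assuming the API's standard `model_main_tf2.py` entry point and a `pipeline.config` under `models/ssd-mobilenet-v2/` (both paths are assumptions; `recognizer.py`'s actual invocation may differ):

```python
import subprocess

# Launch the TF Object Detection API training script; the relative path
# assumes the adjacent-directory layout described above.
subprocess.run([
    "python",
    "../TensorFlow/models/research/object_detection/model_main_tf2.py",
    "--pipeline_config_path=models/ssd-mobilenet-v2/pipeline.config",
    "--model_dir=models/ssd-mobilenet-v2",
], check=True)
```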
## 🔴 Without Either 🔴

- Our goal is to collect training data via the webcam of the computer; once we have our labeled data, we can complete the previous section, With Data, and train up the sign detection model.
- Use the provided `recognizer.py` to pull image data from the computer webcam. It displays the current label being collected to the console, with instructions on how to form the hand signal, and saves each image into the `data` directory within the subdirectory titled `images` (i.e. `Tello-Hand-Signal-Controller/data/images`). A rough sketch of this collection loop follows this list.
- Use LabelImg to annotate the collected images with each of the hand signal operations the drone should recognize (i.e. `up`, `down`, ...).
- Go to the With Data section above and complete the steps to train the `ssd-mobilenet-v2` with the labelled object detection image data.
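
A rough sketch of that collection loop; the labels, prompts, and key bindings here are assumptions, not `recognizer.py`'s actual ones:

```python
import uuid
import cv2

LABELS = ["up", "down"]  # assumed subset of the hand signal classes

cap = cv2.VideoCapture(0)
for label in LABELS:
    print(f"Collecting '{label}': make the {label} signal, SPACE to capture, q for next label")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("recognizer", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord(" "):
            # Filenames follow the <label>-<id>.jpg pattern seen in data/images
            cv2.imwrite(f"data/images/{label}-{uuid.uuid4().hex[:6]}.jpg", frame)
        elif key == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```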
```
data/
│
├── images/
│   ├── ...
│   ├── up-abc123.jpg
│   ├── up-def456.jpg
│   └── up-ghi789.jpg
│
├── annotations/
│   ├── ...
│   ├── up/
│   └── down/
│
├── records/
│   ├── train.record
│   └── test.record
│
├── train/
│   ├── ...
│   ├── up-abc123.jpg
│   └── up-abc123.xml
│
└── test/
    ├── ...
    ├── up-def456.jpg
    └── up-def456.xml
```
```
src/
│
├── utils/
│   └── generate_tfrecord.py
│
├── main.py
├── controller.py
├── detector.py
├── recognizer.py
└── router.py
```
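
`src/utils/generate_tfrecord.py` presumably converts the annotated `train/` and `test/` splits into the TFRecords under `data/records/`. A hedged sketch of invoking it follows; the flags and the `label_map.pbtxt` path are assumptions, since variants of this script differ:

```python
import subprocess

# Build train.record and test.record from the paired .jpg/.xml splits.
for split in ("train", "test"):
    subprocess.run([
        "python", "src/utils/generate_tfrecord.py",
        "-x", f"data/{split}",                     # dir of images + .xml annotations
        "-l", "data/annotations/label_map.pbtxt",  # assumed label map location
        "-o", f"data/records/{split}.record",      # output TFRecord
    ], check=True)
```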
```
models/
│
├── face-detection/
│   └── landmark_weights.pt
│
└── ssd-mobilenet-v2/
    ├── ...
    ├── ckpt-7.index
    ├── ckpt-7.data-00000-of-00001
    ├── ckpt-8.index
    └── ckpt-8.data-00000-of-00001
```
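
To load the latest of these checkpoints for inference, the usual TF Object Detection API pattern looks like the sketch below (the `pipeline.config` location is an assumption):

```python
import os
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.builders import model_builder

# Rebuild the detection model from its pipeline config, then restore weights.
configs = config_util.get_configs_from_pipeline_file(
    "models/ssd-mobilenet-v2/pipeline.config")
model = model_builder.build(model_config=configs["model"], is_training=False)
ckpt = tf.compat.v2.train.Checkpoint(model=model)
ckpt.restore(os.path.join("models/ssd-mobilenet-v2", "ckpt-8")).expect_partial()
```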
```
venv/
│
└── ...
setup.py
requirements.txt
README.txt
```
After necessary dependency installation and training, run `python src/router.py`.
TODO in final report