sudo pip3 install --upgrade pip
pip3 install --upgrade setuptools
The TensorFlow installation is straightforward. Use pip and the following command to install it:
pip3 install tensorflow
Verify the installation was successful by checking the software package information:
pip3 show tensorflow
The system should display the version and other data about TensorFlow.
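Optionally, you can also confirm that TensorFlow imports correctly from Python (a quick sanity check, not part of the repository's scripts):

python3 -c "import tensorflow as tf; print(tf.__version__)"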
Finally, install Keras with the following command:
pip3 install keras
Verify the installation by displaying the package information:
pip3 show keras
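As with TensorFlow, a quick import check (again, just an illustrative one-liner) confirms Keras is usable:

python3 -c "import keras; print(keras.__version__)"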
- Check out the MediaPipe repository.
$ git clone https://github.com/kurshakuz/krsl-recogniton.git

# Change directory into the MediaPipe root directory
$ cd mediapipe
- Install Bazel.
Follow the official Bazel documentation to install Bazel 3.4 or higher.
For Nvidia Jetson and Raspberry Pi devices with ARM Ubuntu, Bazel needs to be built from source.
# For Bazel 3.4.0
wget https://github.com/bazelbuild/bazel/releases/download/3.4.0/bazel-3.4.0-dist.zip
sudo apt-get install build-essential openjdk-8-jdk python zip unzip
unzip bazel-3.4.0-dist.zip
env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
sudo cp output/bazel /usr/local/bin/
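Once the compile script finishes and the binary has been copied to /usr/local/bin, you can confirm Bazel is available on your PATH:

bazel version
# Should report the Bazel build label, e.g. 3.4.0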
- Install OpenCV and FFmpeg.
Option 1. Use your package manager to install the pre-compiled OpenCV libraries. FFmpeg will be installed via libopencv-video-dev.
Note: Debian 9 and Ubuntu 16.04 provide OpenCV 2.4.9. You may want to take option 2 or 3 to install OpenCV 3 or above.
$ sudo apt-get install libopencv-core-dev libopencv-highgui-dev \
    libopencv-calib3d-dev libopencv-features2d-dev \
    libopencv-imgproc-dev libopencv-video-dev
Debian 9 and Ubuntu 18.04 install the packages in /usr/lib/x86_64-linux-gnu. MediaPipe's opencv_linux.BUILD and ffmpeg_linux.BUILD are configured for this library path. Ubuntu 20.04 may install the OpenCV and FFmpeg packages in /usr/local, so please follow option 3 below and modify the WORKSPACE, opencv_linux.BUILD and ffmpeg_linux.BUILD files accordingly.

Option 2. Run setup_opencv.sh to automatically build OpenCV from source and modify MediaPipe's OpenCV config.

Option 3. Follow OpenCV's documentation to manually build OpenCV from source code.
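For reference, when OpenCV and FFmpeg end up under /usr/local (e.g. on Ubuntu 20.04 or after a source build), the WORKSPACE change usually amounts to pointing MediaPipe's local repository rules at that prefix. This is a sketch based on MediaPipe's default WORKSPACE layout; verify the rule names and paths against your checkout:

# WORKSPACE (sketch; adjust to match your checkout)
new_local_repository(
    name = "linux_opencv",
    build_file = "@//third_party:opencv_linux.BUILD",
    path = "/usr/local",
)

new_local_repository(
    name = "linux_ffmpeg",
    build_file = "@//third_party:ffmpeg_linux.BUILD",
    path = "/usr/local",
)

The include and library paths inside opencv_linux.BUILD and ffmpeg_linux.BUILD may also need to be updated to match where the headers and shared libraries were actually installed.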
- For running desktop examples on Linux only (not on OS X) with GPU acceleration.
# Requires a GPU with EGL driver support.
# Can use mesa GPU libraries for desktop (or Nvidia/AMD equivalent).
sudo apt-get install mesa-common-dev libegl1-mesa-dev libgles2-mesa-dev

# To compile with GPU support, replace --define MEDIAPIPE_DISABLE_GPU=1
# with --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11
# when building GPU examples.
- Run the Hello World desktop example.
$ export GLOG_logtostderr=1

# If you are running on Linux desktop with CPU only
$ bazel run --define MEDIAPIPE_DISABLE_GPU=1 \
    mediapipe/examples/desktop/hello_world:hello_world

# If you are running on Linux desktop with GPU support enabled (via mesa drivers)
$ bazel run --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11 \
    mediapipe/examples/desktop/hello_world:hello_world

# Should print:
# Hello World!
# Hello World!
# Hello World!
# Hello World!
# Hello World!
# Hello World!
# Hello World!
# Hello World!
# Hello World!
# Hello World!
- To build, for example, MediaPipe Hands (CPU), run:
bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 mediapipe/examples/desktop/hand_tracking:hand_tracking_cpu
- To run the application:
GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/hand_tracking/hand_tracking_cpu \
    --calculator_graph_config_file=mediapipe/graphs/hand_tracking/hand_tracking_desktop_live.pbtxt
This will open your webcam, as long as it is connected and turned on. Any error is likely due to the webcam not being accessible.
Note: This currently works only on Linux; please first complete the OpenGL ES setup on Linux desktop described above.
- To build, for example, MediaPipe Hands (GPU), run:
bazel build -c opt --copt -DMESA_EGL_NO_X11_HEADERS --copt -DEGL_NO_X11 \
    mediapipe/examples/desktop/hand_tracking:hand_tracking_gpu
- To run the application:
GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/hand_tracking/hand_tracking_gpu \
    --calculator_graph_config_file=mediapipe/graphs/hand_tracking/hand_tracking_mobile.pbtxt
This will open your webcam, as long as it is connected and turned on. Any error is likely due to the webcam not being accessible or the GPU drivers not being set up properly.
cd sign_prediction/
Place your data in any folder and pass its path via the --processed_data_path argument. The result will appear in the same folder and will be printed in the terminal.
python3 predict.py --processed_data_path='./test_video_output/' --files_nested=0
Otherwise, to see the results of the trained model on the gathered data, run:
python3 predict.py --processed_data_path='./V2-videos-5signers-isolated-signs-out/Relative/' --files_nested=1
If you want to run the whole prediction pipeline directly from video input, place your videos in the video-folder directory and run the following script:
cd ..
python3 recognition.py --input_data_path='./sign-prediction/recognition/video-folder/' --output_data_path='./sign-prediction/recognition/video-folder-out/'
Some example videos from the SpreadTheSign dataset are already placed there; if you run it, the terminal will show the actual class, the predicted class, and the confidence percentage.
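For orientation, the core of the prediction step is an ordinary Keras classification call on the extracted keypoint data. The sketch below is illustrative only: the model file, class labels, and input shape are hypothetical placeholders, not the repository's actual code.

# Illustrative sketch only -- the model path, labels, and shapes are hypothetical.
import numpy as np
from tensorflow import keras

model = keras.models.load_model("sign_classifier.h5")   # hypothetical trained classifier
labels = ["Relative", "ClassB", "ClassC"]                # hypothetical class names

keypoints = np.load("sample_keypoints.npy")              # preprocessed keypoint sequence
probs = model.predict(keypoints[np.newaxis, ...])[0]     # add a batch dimension

best = int(np.argmax(probs))
print(f"predicted class: {labels[best]}, confidence: {probs[best] * 100:.1f}%")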