
🦿 Using Deep Learning to Predict Robotic Ankle Dynamics

This repository contains the software used in the deep learning pipeline of the associated research.

Description

This software was developed to build models that estimate and predict powered ankle-foot prosthesis (PAFP) torques. Typically, ankle torques are computed offline using inverse dynamics from motion capture; however, this method is time-intensive, limited to a gait laboratory setting, and requires a large array of reflective markers to be attached to the body. A practical alternative is needed to provide biomechanical information to high-bandwidth prosthesis control systems and enable predictive controllers. This software applies deep learning to build dynamical system models that accurately estimate and predict the prosthetic ankle torque obtained from inverse dynamics, using only six signals that can be accessed in real time with wearable sensors. This application of deep learning provides an avenue toward the development of predictive control systems for powered limbs aimed at optimizing prosthetic ankle torque (i.e., impedance control).


Table of Contents

  • Installation
  • Usage
  • Data Structure
  • Variables to Change
  • Folders
  • Files
  • Features
  • Results
  • Tests
  • Future Work
  • Credits
  • License

Installation

  • Download this repository and move it to your desired working directory

  • Open the Anaconda prompt

  • Navigate to your working directory using the cd command

  • Run the following command in the Anaconda prompt:

     conda env create --name NAME --file environment.yml
    

    where NAME is replaced with your desired name for the project's conda virtual environment. This environment contains all the package installations and dependencies for this project.

  • Run the following command in the Anaconda prompt:

     conda activate NAME
    

    This activates the conda environment containing all the required packages and their versions.

  • Run the following command in the Anaconda prompt:

     pip install -e .
    

    This installs the project package in editable mode and tells Python to look for the library code within the src/ folder.

Usage

The primary neural network training and evaluation pipeline script is scripts/main_dnn_training_pipeline.py. It calls functions from packages in the src/ folder, covering data processing, training protocols, and other computations. Intermediate neural networks produced during training are saved as .pt files to the results/ folder, and fully trained neural networks from Optuna trials are saved there as pickle files. The final test results, including test performance metrics and the best-performing hyperparameters from the Optuna hyperparameter optimization, are also saved as a pickle file to this folder, and the test results are additionally written to a multi-page PDF in the results/ folder. This repository was set up using practices described in the Good Research Code Handbook.
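With the conda environment activated, a typical invocation from the repository root might look like the following (this assumes the script takes no required command-line arguments; check the script itself for any configuration it expects):

     conda activate NAME
     python scripts/main_dnn_training_pipeline.py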

Data Structure

Your data structure should be a dictionary with the following key-value pairs (a minimal sketch follows this list):

  • data (dict): each key of this dictionary represents a collected time series (e.g., a walking trial), and each value is a Pandas DataFrame in which rows represent time steps (i.e., samples) and columns represent variables (i.e., measurements or sensor signals). Each column should be labeled, and one column must represent time.

  • file names (list): each element is a string describing the corresponding collected time series (e.g., walking trial).
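Below is a minimal sketch of this structure. The trial names and signal values are fabricated, and the exact key spellings ('data', 'file names') are illustrative assumptions; the column labels w_m, th_a, and Tau follow the example in the next section:

     import numpy as np
     import pandas as pd

     # Two hypothetical walking trials, sampled at 100 Hz for illustration.
     t = np.arange(0, 5, 0.01)

     def make_trial_df(seed):
         """Build one trial as a DataFrame: rows are time steps, columns are signals."""
         rng = np.random.default_rng(seed)
         return pd.DataFrame({
             'time': t,                           # one column must represent time
             'w_m': rng.standard_normal(t.size),  # motor velocity (placeholder values)
             'th_a': rng.standard_normal(t.size), # ankle position (placeholder values)
             'Tau': rng.standard_normal(t.size),  # target ankle torque (placeholder values)
         })

     dataset = {
         'data': {                                # dict: trial name -> DataFrame
             'trial_01': make_trial_df(0),
             'trial_02': make_trial_df(1),
         },
         'file names': ['trial_01', 'trial_02'],  # list: one description per trial
     }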

Variables to Change

A few Python variables specify the input feature and target output variable names. These should be changed to match the model inputs and outputs for your particular system.

  • /src/data_processing.py: within the "get_lookback_window" function, change the "columns" parameters for variables data_x and data_y.     

         tmp_x = np.array([omega_motor, hip_position, ankle_position, left_force, right_force, U])
         data_x = pd.DataFrame(np.transpose(tmp_x),
                               columns=['w_m', 'th_h', 'th_a', 'F_l', 'F_r', 'i_m'])
         tmp_y = np.array([ankle_torque])
         data_y = pd.DataFrame(np.transpose(tmp_y), columns=['Tau'])
    

  • /src/testing_and_evaluation.py: within the "main_test" function, change the "target_signal_names" variable to correspond to the model output(s) for your particular system.
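For example, given the single torque output labeled 'Tau' in the snippet above, a plausible assignment (hypothetical, since the surrounding code is not shown here) would be:

     target_signal_names = ['Tau']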

Folders

  • data: Where you put raw data for your project. Unfortunately, we are not able to share our raw data for administrative/privacy reasons, but I still thought it would be cool to share the code.

  • results: Where you put results, including checkpoints, pickle files, as well as figures and tables.

  • scripts: Where you put the main executable scripts - Python and bash alike - as well as any .ipynb notebooks.

  • src: Python modules for the project. This is the kind of Python code that you import.

  • tests: Where you put tests for your code.

Files

  • .gitignore contains a list of files that git should ignore.

  • README.md contains a description of the project.

  • environment.yml allows you to recreate, in a virtual environment, the exact Python environment used to run this analysis.

  • setup.py allows you to pip-install our custom packages in src/ and import them into the main pipeline script even though they are in a different folder.
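A minimal setup.py for this kind of src/ layout might look like the sketch below; the project name and version are placeholders, not the repository's actual metadata:

     from setuptools import setup, find_packages

     setup(
         name='ankle-torque-dnn',          # hypothetical project name
         version='0.1.0',
         packages=find_packages(where='src'),
         package_dir={'': 'src'},          # tells pip/Python that library code lives in src/
     )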

Features

  • Three neural network model architectures:

      • Feedforward network (FFN)

      • Gated recurrent unit (GRU)

      • Dual-stage attention-based gated recurrent unit (DA-GRU)

  • Implementation of a hyperparameter optimization protocol via the Optuna Python library.

  • Options for long-term time series forecasting.

  • Data preprocessing modules (e.g., splitting, resampling, normalization).

  • Neural network training progress bar via the tqdm Python library.

  • Early stopping regularization, using the validation dataset to avoid overfitting the training dataset (a minimal sketch combining this with an Optuna objective appears after this list).

  • Flexible hyperparameter values (e.g., number of hidden units, number of layers, learning rate) and constant values (e.g., number of inputs, number of outputs, number of Optuna trials).

  • Modules for model prediction analysis and visualizations using the test dataset.
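The repository's actual training code lives in src/; the following is only a minimal sketch of how early stopping can sit inside an Optuna objective. The toy data, network, hyperparameter names, and search ranges are illustrative assumptions, not the project's real configuration:

     import optuna
     import torch
     import torch.nn as nn

     # Toy data standing in for the real processed dataset (6 inputs, 1 torque output).
     x_train, y_train = torch.randn(256, 6), torch.randn(256, 1)
     x_val, y_val = torch.randn(64, 6), torch.randn(64, 1)

     def objective(trial):
         # Sample hyperparameters; names and ranges here are illustrative only.
         hidden = trial.suggest_int('hidden_units', 16, 128)
         lr = trial.suggest_float('learning_rate', 1e-4, 1e-2, log=True)

         model = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(), nn.Linear(hidden, 1))
         optimizer = torch.optim.Adam(model.parameters(), lr=lr)
         loss_fn = nn.MSELoss()

         best_val, patience, bad_epochs = float('inf'), 10, 0
         for epoch in range(200):
             model.train()
             optimizer.zero_grad()
             loss = loss_fn(model(x_train), y_train)
             loss.backward()
             optimizer.step()

             model.eval()
             with torch.no_grad():
                 val_loss = loss_fn(model(x_val), y_val).item()

             # Early stopping: halt when validation loss stops improving.
             if val_loss < best_val:
                 best_val, bad_epochs = val_loss, 0
             else:
                 bad_epochs += 1
                 if bad_epochs >= patience:
                     break
         return best_val

     study = optuna.create_study(direction='minimize')
     study.optimize(objective, n_trials=20)
     print(study.best_params)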

Results

Sample results from a complete DNN training, validation, and testing run.

Training and Validation Loss

One-Sample-Ahead Predictions

Twenty-Samples-Ahead Predictions

Model Comparisons


One-sample-ahead model predictions of powered ankle-foot prosthesis (PAFP) torques across gait cycles, compared to motion capture (MoCap) measurements. The periodic time series are time-normalized across the gait cycle for better visualization. The solid lines represent the mean, and the width of the traces represents ±1 standard deviation.

Tests

In progress.

Future Work

This deep learning pipeline is currently being expanded to train deep neural network models that characterize multiple-input, multiple-output (MIMO) robotic prosthesis systems, specifically the COBRA system. This is in contrast to the previously developed models, which have multiple inputs from wearable sensors and only a single output (i.e., prosthetic ankle torque). Training accurate MIMO system models using deep learning enables us to run forward simulations (i.e., rollouts): we can simulate the response of the system to arbitrary inputs at various initial states. This tool would allow us to test and tune various prosthesis control methods and configurations prior to implementation. It also provides a means for experimentation and exploration of control actions without risking the safety of the prosthesis user.
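As an illustration of what a rollout involves, below is a minimal sketch of an autoregressive forward simulation. The model interface, the stand-in linear system, and all dimensions are hypothetical and not taken from this repository:

     import torch

     def rollout(model, x0, inputs):
         """Simulate a trained one-step model forward in time.

         model  : maps (state, input) -> next state   (hypothetical interface)
         x0     : initial state tensor
         inputs : sequence of control inputs to apply
         """
         states = [x0]
         x = x0
         with torch.no_grad():
             for u in inputs:
                 x = model(x, u)  # feed each prediction back in as the next state
                 states.append(x)
         return torch.stack(states)

     # Demo with a stand-in "model": a linear discrete-time system x_{k+1} = A x + B u.
     A = 0.95 * torch.eye(2)
     B = torch.tensor([[0.0], [0.1]])
     dummy_model = lambda x, u: A @ x + B @ u

     x0 = torch.zeros(2, 1)
     inputs = [torch.ones(1, 1) for _ in range(20)]
     trajectory = rollout(dummy_model, x0, inputs)  # shape: (21, 2, 1)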

Credits

License

MIT