- Attach the Arduino Nano 33 BLE to the marker:
  - Use tape to securely attach the Arduino Nano 33 BLE to the marker, fastening it toward the nib rather than at the end.
- Open the Arduino IDE and run the model:
  - Launch the Arduino IDE and upload our sketch, which runs the model on the Arduino Nano 33 BLE.
- Start writing uppercase letters:
  - Begin writing (uppercase letters only) with the marker. The model captures accelerometer and gyroscope values to predict the letter you are writing.
- Initiate data capture:
  - Our model is programmed to start capturing values when the marker is brought down sharply. Move the marker swiftly downward (along the z-axis) onto the paper.
- Write the letter:
  - Once data capture starts, write the desired letter with the marker.
- Stop data capture:
  - After you finish writing, reverse the gesture by swiftly lifting the marker upward. This stops the program from recording values.
- If you followed the steps in order, the predicted letter appears in the Serial Monitor of the Arduino IDE.
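The start/stop trigger described above can be sketched in Python as a pass over a stream of z-axis acceleration readings. The threshold value and function name here are hypothetical illustrations; on the device this logic lives in the Arduino sketch itself:

```python
def find_gesture_bounds(az, threshold=1.5):
    """Return (start, stop) indices of the capture window: a sharp
    downward spike starts recording, a sharp upward spike stops it.
    `threshold` (in g) is a placeholder value, not the project's actual setting."""
    start = stop = None
    for i, a in enumerate(az):
        if start is None and a < -threshold:
            start = i   # swift downward motion: begin capture
        elif start is not None and a > threshold:
            stop = i    # swift upward motion: end capture
            break
    return start, stop
```

If no spike is found, both bounds come back as `None`, which a caller can treat as "no letter written".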
This project converts handwritten uppercase letters into digital text using accelerometer and gyroscope data. It is divided into three main parts: Data Generator, Preprocessing, and Model.
We used the Serial library to capture the output from the Serial Monitor of the Arduino IDE, with our Nano 33 BLE attached to a marker. The captured strings were converted into float values and saved to a CSV file named "letter_data.csv" for training our model.
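A minimal sketch of that capture script, assuming the board prints one comma-separated line of six float values (ax, ay, az, gx, gy, gz) per timestamp over the serial connection. The port name, baud rate, and function names are placeholders, not the project's actual values:

```python
import csv

def parse_line(raw: bytes):
    """Convert one serial line such as b'0.12,-0.98,...' into a list of floats.
    Returns None for blank or non-numeric lines (e.g. debug output)."""
    text = raw.decode("utf-8", errors="ignore").strip()
    if not text:
        return None
    try:
        return [float(v) for v in text.split(",")]
    except ValueError:
        return None

def capture(port="COM3", baud=9600, out_path="letter_data.csv", max_lines=1000):
    """Append parsed IMU rows from the serial port to a CSV file."""
    import serial  # pyserial; imported here so parse_line stays usable without it
    with serial.Serial(port, baud, timeout=1) as ser, \
         open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(max_lines):
            values = parse_line(ser.readline())
            if values is not None:
                writer.writerow(values)
```

Keeping the parsing separate from the I/O makes it easy to filter out the Arduino's prediction messages from the numeric sensor rows.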
We converted the accelerometer and gyroscope readings, initially strings, into arrays of floats. To standardize the lengths of recordings, we used linear interpolation so that every letter has a uniform number of timestamps: 60 per letter. For letters with more than 60 timestamps, we down-sampled by uniformly selecting 60 of them.
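The resampling step can be sketched with NumPy's linear interpolation, which handles both cases at once: short recordings are stretched and long ones are uniformly thinned onto exactly 60 timestamps. The function name is illustrative:

```python
import numpy as np

TARGET_LEN = 60  # uniform number of timestamps per letter

def resample_channel(values, target_len=TARGET_LEN):
    """Linearly interpolate one sensor channel onto `target_len`
    uniformly spaced timestamps, regardless of the original length."""
    values = np.asarray(values, dtype=float)
    old_t = np.arange(len(values))
    new_t = np.linspace(0, len(values) - 1, target_len)
    return np.interp(new_t, old_t, values)
```

Applying this per channel (three accelerometer axes, three gyroscope axes) gives every recording an identical shape.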
Accelerometer and gyroscope values were combined into a 3-D array, which was then reshaped into a 2-D array of shape (number of samples, 360), where 360 = 60 timestamps × 6 sensor channels.
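The combine-and-flatten step can be sketched as follows; the exact channel ordering (accelerometer axes before gyroscope axes) is an assumption:

```python
import numpy as np

n = 5  # placeholder sample count
accel = np.zeros((n, 60, 3))  # 3 accelerometer axes per timestamp
gyro = np.zeros((n, 60, 3))   # 3 gyroscope axes per timestamp

# Stack into one 3-D array of shape (n, 60, 6) ...
combined = np.concatenate([accel, gyro], axis=2)
# ... then flatten each sample to a 360-value feature vector.
flat = combined.reshape(n, -1)  # (n, 360)
```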
We standardized the data using the Z-score method so that all features are on a comparable scale.
The categorical letter labels were converted into numerical values using a LabelEncoder, enabling model training.
The dataset was split into training and test sets in an 80:20 ratio; the training data was used to fit the model.
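These three preprocessing steps can be sketched with plain NumPy stand-ins; the project may instead use scikit-learn's StandardScaler, LabelEncoder, and train_test_split (note that the slicing split below, unlike train_test_split, does not shuffle). The random data here is a placeholder for the real feature matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 360))                      # placeholder feature matrix
letters = rng.choice(list("ABCDEFGHIJK"), size=100)  # placeholder letter labels

# Z-score standardization, per feature column
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Label encoding: map each letter to an integer class index
classes, y = np.unique(letters, return_inverse=True)

# 80:20 train/test split (no shuffling in this simplified version)
split = int(0.8 * len(X_std))
X_train, X_test = X_std[:split], X_std[split:]
y_train, y_test = y[:split], y[split:]
```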
We constructed a TensorFlow Sequential model comprising Convolutional and Dense layers. The model architecture is as follows:
```python
model = Sequential([
    Conv1D(filters=64, kernel_size=12, activation='relu', input_shape=input_shape),
    MaxPooling1D(pool_size=6),
    Conv1D(filters=128, kernel_size=12, activation='relu'),
    MaxPooling1D(pool_size=6),
    Flatten(),
    Dense(100, activation='relu'),
    Dropout(0.5),
    Dense(11, activation='softmax')
])
```
We compiled the model using the Adam optimizer and sparse_categorical_crossentropy loss function. The model was trained for 100 epochs with a validation split of 0.2.
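Putting the pieces together, the compile-and-train step can be sketched as below, repeating the architecture so the snippet is self-contained. The input shape of (360, 1) (the flattened feature vector with one input channel) and the random stand-in data are assumptions; the real code fits on the standardized arrays from "letter_data.csv":

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout

# Placeholder data: 20 samples of 360 features (60 timestamps x 6 channels),
# with integer labels for the 11 output classes.
X_train = np.random.randn(20, 360, 1).astype("float32")
y_train = np.random.randint(0, 11, size=20)

input_shape = (360, 1)  # assumed from the (n, 360) feature matrix
model = Sequential([
    Conv1D(filters=64, kernel_size=12, activation='relu', input_shape=input_shape),
    MaxPooling1D(pool_size=6),
    Conv1D(filters=128, kernel_size=12, activation='relu'),
    MaxPooling1D(pool_size=6),
    Flatten(),
    Dense(100, activation='relu'),
    Dropout(0.5),
    Dense(11, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# 100 epochs with a 0.2 validation split, as described above
history = model.fit(X_train, y_train, epochs=100,
                    validation_split=0.2, verbose=0)
```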