PyTorch implementation of the CRNN model. In this repository I explain how to train a license-plate recognition model with pytorch-lightning.
pip install -r requirements.txt
Before training the model, it is good practice to calculate the mean and std of the input dataset and normalize the inputs with those values instead of the magic number 0.5.
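As an illustration, with torchvision the dataset-specific statistics would be plugged into the preprocessing pipeline like the sketch below (the numbers are the example stats reported later in this README, and the grayscale conversion is an assumption; substitute your own values):

# Normalization with dataset-specific statistics instead of a blanket 0.5.
# The mean/std values are the example output shown further down; use your own.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Grayscale(),                                # single-channel plates (assumption)
    transforms.ToTensor(),                                 # scales pixels to [0, 1]
    transforms.Normalize(mean=[0.4845], std=[0.1884]),     # instead of mean=[0.5], std=[0.5]
])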
python creating_dataset.py
You can also use the images in datasets.zip (Amirkabir University Plates Dataset) and put them in the NR directory, instead of preparing the dataset with the creating_dataset.py code and the List.csv labels. Make sure that the dataset has the following structure for training:
├── data-dir
│ ├── train
│ │ ├──<plate>.jpg
│ │ ├──<plate>.jpg
│ │ ├──...
│ ├── val
│ │ ├──<plate>.jpg
│ │ ├──<plate>.jpg
...
NOTE: Only .jpg, .png, and .jpeg extensions are supported!
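Since each image file is named after its plate, the labels can be read straight from the file names. Below is a minimal sketch of collecting (image, label) pairs under that assumption; the sample file ۱۴ق۹۱۸۱۱_7073.jpg later in this README suggests the plate string is the part before the underscore.

# Sketch: collect (image path, label) pairs, assuming the plate string
# is encoded in the file name before an optional "_<id>" suffix.
from pathlib import Path

def collect_samples(split_dir):
    exts = {".jpg", ".jpeg", ".png"}            # only these extensions are supported
    samples = []
    for p in sorted(Path(split_dir).iterdir()):
        if p.suffix.lower() in exts:
            label = p.stem.split("_")[0]        # e.g. "۱۴ق۹۱۸۱۱_7073.jpg" -> "۱۴ق۹۱۸۱۱"
            samples.append((p, label))
    return samples

train_samples = collect_samples("data-dir/train")
val_samples = collect_samples("data-dir/val")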
Then check out the alphabets.py module. It contains the alphabet characters that are required for training. If the existing alphabets do not meet your requirements, create a new entry containing your required characters and add it to the ALPHABETS variable under a specific name (a sketch follows the example output below). You can get your character set using the following command:
python get_character_sets.py --data_directory <path-to-dataset>
The output will be like the following:
[INFO] characters: +ابتثجدزسشصطعقلمنهوپگی۰۱۲۳۴۵۶۷۸۹
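If you need a new character set, adding it to alphabets.py could look roughly like the sketch below; the exact layout of the ALPHABETS variable in this repository may differ, so treat the structure as an assumption:

# alphabets.py (sketch; the real structure of ALPHABETS may differ)
FA_LPR = "+ابتثجدزسشصطعقلمنهوپگی۰۱۲۳۴۵۶۷۸۹"           # existing Persian plate character set
MY_ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # hypothetical custom character set

ALPHABETS = {
    "FA_LPR": FA_LPR,
    "MY_ALPHABET": MY_ALPHABET,   # usable later via --alphabet_name MY_ALPHABET
}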
Run the following command to get the mean, std, and n_classes of your input dataset:
python dataset.py --train_directory <your-train-dir> --alphabet_name FA_LPR --batch_size 128
or
python dataset.py --train_directory <your-train-dir> --alphabets +ابتثجدزسشصطعقلمنهوپگی۰۱۲۳۴۵۶۷۸۹ --batch_size 128
The output should be like below:
[INFO] MEAN: [0.4845], STD: [0.1884]
[INFO] N_CLASSES: 35 ---> ابپتشثجدزسصطعفقکگلمنوهی+۰۱۲۳۴۵۶۷۸۹
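If you want to verify these numbers independently of dataset.py, the sketch below estimates a single-channel mean and std over the training images (the grayscale conversion is an assumption that matches the one-element MEAN/STD above):

# Sketch: estimate channel-wise mean/std over grayscale training images.
from pathlib import Path
import numpy as np
from PIL import Image

def estimate_mean_std(train_dir):
    pixel_sum, pixel_sq_sum, n_pixels = 0.0, 0.0, 0
    for p in Path(train_dir).iterdir():
        if p.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        img = np.asarray(Image.open(p).convert("L"), dtype=np.float64) / 255.0
        pixel_sum += img.sum()
        pixel_sq_sum += (img ** 2).sum()
        n_pixels += img.size
    mean = pixel_sum / n_pixels
    std = (pixel_sq_sum / n_pixels - mean ** 2) ** 0.5
    return mean, std

print(estimate_mean_std("<your-train-dir>"))   # e.g. (0.4845, 0.1884)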
Run the following command to get the optimal img_w. For longer labels the img_w should be larger than usual; otherwise the ctc-loss returns nan.
python get_optimum_img_w.py --alphabets ابپتشثجدزسصطعفقکگلمنوهی+۰۱۲۳۴۵۶۷۸۹ --data_directory <your-train-dir>
The output should be like below:
[INFO] max_length of this dataset is 8, optimal img_w is: 100
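The reason img_w depends on the label length is the CTC constraint: the number of time steps coming out of the CNN (roughly the downsampled image width) must exceed the label length, with extra room for the blanks CTC inserts between repeated characters; otherwise the loss becomes inf/nan. A rough sanity check, assuming a typical CRNN width downsampling factor of 4 (adjust to the actual model; get_optimum_img_w.py is the reference):

# Rough sanity check for img_w (assumption: the CNN shrinks width ~4x,
# as in the standard CRNN architecture).
def min_img_w(max_label_len, width_downsample=4):
    min_time_steps = 2 * max_label_len + 1   # room for blanks around every character
    return min_time_steps * width_downsample

print(min_img_w(8))   # 68 -> an img_w of 100 leaves a comfortable margin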
Or get all stats using the following command:
python get_all_stats.py --data_directory <your-dataset-dir>
Get the stats and use them to replace the values of img_w, MEAN, STD, and N_CLASSES in the settings.py module under the BasicConfig class, or simply pass them as input arguments.
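For orientation, the relevant part of BasicConfig might look roughly like the sketch below; the actual attribute names and the other fields in settings.py may differ, the four values above are the point:

# settings.py (sketch; only the four stats discussed above, other fields omitted)
class BasicConfig:
    img_w = 100            # from get_optimum_img_w.py
    MEAN = [0.4845]        # from dataset.py / get_all_stats.py
    STD = [0.1884]
    N_CLASSES = 35         # size of the character set reported above
    # ... remaining training settings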
After modifying the aforementioned configs, run the following command to train the model:
python train.py
or
python train.py --img_w 100 --n_workers 8 --train_directory <your-train-dir> --val_directory <your-val-dir> --mean 0.4845 --std 0.1884 --alphabets ابپتشثجدزسصطعفقکگلمنوهی+۰۱۲۳۴۵۶۷۸۹
- Note: For training in a Colab workspace you should update your pytorch-lightning PyPI package and then reinstall version 1.9.0 (pip install pytorch-lightning==1.9.0).
To see all the configs:
python train.py -h
Output
optional arguments:
-h, --help show this help message and exit
--train_directory TRAIN_DIRECTORY
path to the dataset, default: ./dataset
--val_directory VAL_DIRECTORY
path to the dataset, default: ./dataset
--output_dir OUTPUT_DIR
path to the output directory, default: ./output
--epochs EPOCHS number of training epochs
--device DEVICE what should be the device for training, default is cuda
--mean MEAN [MEAN ...]
dataset channel-wise mean
--std STD [STD ...] dataset channel-wise std
--img_w IMG_W dataset img width size
--n_workers N_WORKERS
number of workers used for dataset collection
--batch_size BATCH_SIZE
batch size number
--alphabets ALPHABETS
alphabets used in the process
For inference, run the following command:
python crnn_inference.py --model_path {path-to-your-output-dir}/best.ckpt --img_path sample_images/۱۴ق۹۱۸۱۱_7073.jpg
The output should be like the following:
۱۴ق۹۱۸۱۱
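To run inference from Python instead of the CLI, a rough sketch could look like the following; the module class name, the import path, the input size, and the blank index are all assumptions (crnn_inference.py is the reference):

# Programmatic inference sketch. "CRNNModule" and the preprocessing details
# are hypothetical; adapt them to the classes actually defined in this repo.
import torch
from PIL import Image
from torchvision import transforms

from train import CRNNModule                       # hypothetical import path / class name

model = CRNNModule.load_from_checkpoint("output/best.ckpt")
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((32, 100)),                  # (img_h, img_w) used during training (assumed)
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.4845], std=[0.1884]),
])

img = preprocess(Image.open("sample_images/<plate>.jpg")).unsqueeze(0)
with torch.no_grad():
    log_probs = model(img)                         # assumed output shape: (T, N, n_classes)

# Greedy CTC decoding: collapse repeats, drop the blank index (assumed to be 0),
# then map the remaining indices back to characters with your alphabet.
pred = log_probs.argmax(dim=2).squeeze(1).tolist()
indices = [c for i, c in enumerate(pred) if c != 0 and (i == 0 or c != pred[i - 1])]
print(indices)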
Image examples:
https://ceit.aut.ac.ir/~keyvanrad/download/ML971/project/
Password: ML971Data