The need for semi-supervised learning stems from the cost of annotating datasets: data rarely comes fully labeled. Instead, we can train on labeled and unlabeled data together. One semi-supervised method we can employ is pseudo-labeling. First, we train the model on the labeled data and use it to generate labels, called pseudo-labels, for the unlabeled data. Next, we continue training this pre-trained model on both the labeled and unlabeled data, using the pseudo-labels as targets for the latter. In this project, DenseNet-121 is trained on CIFAR-10 with only 1,000 labels.
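The two-phase procedure above can be sketched in a few lines. This is a minimal, self-contained illustration of pseudo-labeling only: it substitutes a nearest-centroid classifier on synthetic 2-D data for the project's actual DenseNet-121 on CIFAR-10, so every name below is illustrative rather than taken from the project's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for CIFAR-10 features.
X_lab = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y_lab = np.array([0] * 20 + [1] * 20)
X_unlab = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])

def fit_centroids(X, y):
    # "Training": compute one centroid per class
    # (stand-in for fitting DenseNet-121).
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    # Assign each point to its nearest class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Phase 1: train on labeled data only, then pseudo-label the unlabeled pool.
centroids = fit_centroids(X_lab, y_lab)
pseudo = predict(centroids, X_unlab)

# Phase 2: retrain on labeled + pseudo-labeled data combined.
X_all = np.vstack([X_lab, X_unlab])
y_all = np.concatenate([y_lab, pseudo])
centroids = fit_centroids(X_all, y_all)

# Evaluate on held-out test data.
X_test = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y_test = np.array([0] * 50 + [1] * 50)
acc = (predict(centroids, X_test) == y_test).mean()
print(f"test accuracy: {acc:.2f}")
```

In practice, pseudo-labels are often kept only when the model's confidence exceeds a threshold, and the unsupervised loss term is ramped up gradually; those refinements are omitted here for brevity.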
The full implementation is available here.
The table below summarizes the quantitative results.
| Test Metric | Score |
|---|---|
| Accuracy | 62.77% |
| Loss | 1.274 |
Loss curve on the labeled training set and the validation set.
Accuracy curve on the labeled training set and the validation set.
Loss curve on the combined labeled-and-unlabeled training set and the validation set.
Accuracy curve on the combined labeled-and-unlabeled training set and the validation set.
The 3 × 3 image grid below shows qualitative results on the test set.