
imomayiz/Dictionnary-Learning-for-Sparse-Representations


Dictionnary-Learning-for-Sparse-Representations

This repository contains the work done for the project Recent Advances in Machine Learning.

Here we provide an overview of the topic, implementing three papers in the Julia language:

K-SVD: https://sites.fas.harvard.edu/~cs278/papers/ksvd.pdf (based on the implementation by Ishita Takeshi).

GDDL (Greedy Deep Dictionary Learning): https://arxiv.org/pdf/1602.00203.pdf

Online Dictionary Learning for Sparse Coding: https://www.di.ens.fr/sierra/pdfs/icml09.pdf

Each paper is implemented in a separate Jupyter notebook, so you can test each one independently.
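All three papers share the same sparse-coding subproblem: given a dictionary D, approximate each signal x using only a few atoms. As a minimal illustration (written in Python/NumPy for readability; the notebooks themselves are in Julia, and this simplification is ours, not the papers' code), a greedy sparse coder in the spirit of Orthogonal Matching Pursuit might look like:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D to approximate x."""
    residual = x.copy()
    support = []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected atoms, then recompute the residual
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code
```

K-SVD typically uses OMP (or a similar greedy pursuit) for this step, while the online method of Mairal et al. solves an l1-penalized (Lasso/LARS) problem instead.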

For our experiments we used the MNIST and CIFAR datasets.

Results

K-SVD (Task: Compression, MNIST):

Original:


Reconstructed (95% sparsity):

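The distinctive step of K-SVD is its dictionary update: each atom, together with the coefficients that use it, is refit via a rank-1 SVD of the residual restricted to the signals using that atom. A hedged NumPy sketch of this single-atom update (our own simplification, not the notebook's Julia code):

```python
import numpy as np

def ksvd_atom_update(D, X, codes, j):
    """K-SVD update of atom j: rank-1 SVD of the residual over the
    signals that actually use atom j; updates D[:, j] and codes[j, :]."""
    users = np.nonzero(codes[j, :])[0]
    if users.size == 0:
        return D, codes  # atom unused: leave it as-is (or re-randomize it)
    # residual of the selected signals with atom j's own contribution removed
    E = X[:, users] - D @ codes[:, users] + np.outer(D[:, j], codes[j, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]                # best rank-1 left factor -> new unit-norm atom
    codes[j, users] = s[0] * Vt[0]   # matching coefficients
    return D, codes
```

Because the SVD gives the best rank-1 approximation of the residual, this update can never increase the reconstruction error on the signals it touches.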

GDDL (Task: Interpretability, MNIST):

Layers of the encoder:

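The greedy idea in GDDL is to train one dictionary layer at a time, feeding each layer's codes to the next layer as its input. A rough NumPy sketch of this scheme, using a ridge-regularized code update as a stand-in for the paper's sparsity-penalized solver (the function names, regularizer, and iteration counts are our assumptions, not the paper's):

```python
import numpy as np

def learn_layer(X, n_atoms, n_iter=20, lam=0.1):
    """One dictionary-learning layer: alternate ridge-regularized code
    updates and least-squares dictionary updates (a simplification of
    the paper's sparsity-penalized objective)."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # codes: ridge solution (stand-in for a sparse solver)
        Z = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ X)
        # dictionary: least squares, then renormalize the atoms
        D = X @ Z.T @ np.linalg.pinv(Z @ Z.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, Z

def greedy_deep_dl(X, layer_sizes):
    """Greedy deep dictionary learning: train layers one at a time,
    each layer's codes becoming the next layer's input."""
    dicts, inp = [], X
    for m in layer_sizes:
        D, Z = learn_layer(inp, m)
        dicts.append(D)
        inp = Z  # codes feed the next layer
    return dicts, inp
```

Visualizing the columns of each learned dictionary (as in the notebook) is what produces the layer-by-layer encoder figures above.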

Online Dictionary Learning for Sparse Coding (Task: Compression, CIFAR):

Original:


Reconstructed (95% sparsity):

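The online algorithm of Mairal et al. never revisits past samples: it accumulates the sufficient statistics A = Σ z zᵀ and B = Σ x zᵀ as data streams in, and updates the dictionary one atom at a time by block coordinate descent, projecting each atom onto the unit ball. A minimal NumPy sketch under those assumptions (the sparse coder is left abstract; in the paper it is an l1/LARS solver):

```python
import numpy as np

def odl_dictionary_step(D, A, B):
    """One pass of the block-coordinate dictionary update: each atom is
    refit from the running statistics A = sum(z z^T), B = sum(x z^T)."""
    for j in range(D.shape[1]):
        if A[j, j] < 1e-12:
            continue  # atom never used yet
        u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
        D[:, j] = u / max(np.linalg.norm(u), 1.0)  # project onto the unit ball
    return D

def online_dl(stream, D, sparse_code):
    """Streaming loop: code each new sample, fold it into A and B,
    then refresh the dictionary (sketch of Mairal et al.'s Algorithm 1)."""
    A = np.zeros((D.shape[1], D.shape[1]))
    B = np.zeros((D.shape[0], D.shape[1]))
    for x in stream:
        z = sparse_code(D, x)   # sparse code of the new sample
        A += np.outer(z, z)     # accumulate sufficient statistics
        B += np.outer(x, z)
        D = odl_dictionary_step(D, A, B)
    return D
```

Because only A and B are stored, memory usage is independent of the number of samples seen, which is what makes the method practical on large image collections such as CIFAR.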
