Clustering

Efficient implementation of clustering using the Expectation-Maximization (EM) and K-means algorithms.

Requirements

Usage

To apply the EM algorithm on a dataset, one needs to call:

$ L = gmm.EM(max_iter=max_iter, tol=tol)
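
As a rough illustration of what the EM call computes, the sketch below shows the standard EM updates for a Gaussian mixture in NumPy. The function name em_gmm, its arguments, the random initialisation, and the returned values are assumptions made for this example only; they do not mirror the repository's internals.

```python
# Minimal, illustrative EM for a Gaussian mixture (not the repository's implementation).
import numpy as np

def em_gmm(X, K, max_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Assumed initialisation: uniform weights, random data points as means, shared covariance.
    pi = np.full(K, 1.0 / K)
    mu = X[rng.choice(n, K, replace=False)]
    sigma = np.stack([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    log_likelihoods = []
    for _ in range(max_iter):
        # E-step: responsibilities r[i, k] = p(z_i = k | x_i), computed in log space.
        log_r = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            inv = np.linalg.inv(sigma[k])
            _, logdet = np.linalg.slogdet(sigma[k])
            quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
            log_r[:, k] = np.log(pi[k]) - 0.5 * (d * np.log(2 * np.pi) + logdet + quad)
        log_norm = np.logaddexp.reduce(log_r, axis=1)
        r = np.exp(log_r - log_norm[:, None])
        log_likelihoods.append(log_norm.sum())
        # M-step: re-estimate weights, means, and covariances from the responsibilities.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            sigma[k] = (r[:, k, None] * diff).T @ diff / nk[k] + 1e-6 * np.eye(d)
        # Stop once the log-likelihood improvement falls below tol.
        if len(log_likelihoods) > 1 and abs(log_likelihoods[-1] - log_likelihoods[-2]) < tol:
            break
    return log_likelihoods, pi, mu, sigma
```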

To apply the K-means algorithm on a dataset one needs to call:

$ D = gmm.k_means(max_iter=max_iter, tol=tol)
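
For comparison, here is a minimal sketch of Lloyd's K-means iterations with the same kind of stopping rule. Again, the function name, arguments, and return values are assumptions for illustration rather than the repository's API.

```python
# Minimal, illustrative K-means (Lloyd's algorithm), not the repository's implementation.
import numpy as np

def k_means(X, K, max_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # Assumed initialisation: K random data points as the starting centroids.
    centroids = X[rng.choice(len(X), K, replace=False)]
    costs = []
    for _ in range(max_iter):
        # Assignment step: each point goes to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Cost: sum of squared distances to the assigned centroids.
        costs.append((dists[np.arange(len(X)), labels] ** 2).sum())
        # Update step: move each centroid to the mean of its assigned points.
        for k in range(K):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
        # Stop once the cost improvement falls below tol.
        if len(costs) > 1 and abs(costs[-2] - costs[-1]) < tol:
            break
    return costs, labels, centroids
```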

To sample from an already trained GMM, one needs to call:

$ Y = gmm.sample(N=N)
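
Sampling from a fitted mixture amounts to ancestral sampling: draw a component index according to the mixture weights, then draw from that component's Gaussian. The sketch below illustrates this with assumed parameter names (pi, mu, sigma); it is not the repository's sample method.

```python
# Illustrative ancestral sampling from a fitted Gaussian mixture (assumed parameter layout).
import numpy as np

def sample_gmm(pi, mu, sigma, N, seed=0):
    rng = np.random.default_rng(seed)
    # Draw a component index for every sample, then draw from that component's Gaussian.
    components = rng.choice(len(pi), size=N, p=pi)
    return np.stack([rng.multivariate_normal(mu[k], sigma[k]) for k in components])
```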

Several examples can be found within the main.py file.

Sample results

[1] Results of the EM algorithm over iterations

(figure: em)

Sample data

[1] Unlabelled data

(figure: unlabelled_data)

[2] Labelled data

(figure: labelled_data)

Results of the EM algorithm

[1] Final result of the EM algorithm

(figure: em)

[2] Final result of the EM algorithm (covariance matrices)

(figure: em_cov)

[3] EM cost function over iterations

(figure: em_cost)

Results of the K-means algorithm

[1] Final result of the K-means algorithm

(figure: kmeans)

[2] K-means cost function over iterations

(figure: kmeans_cost)

Sampling results

[1] Sampling results from an already trained GMM

(figure: sampling)
