cv-hmm

Discrete Hidden Markov Models based on OpenCV

keywords

hidden markov models, hmm, opencv, cvhmm, opencv-hmm

What you get by running main.cpp:

First we define the transition, emission, and initial probabilities of the model (a cv::Mat sketch of this setup follows the matrices below):

TRANS:
0.5 0.5 0
0 0.7 0.3
0 0 1

EMIS:
0.5 0.5 0 0
0 0.5 0.5 0
0 0 0.5 0.5

INIT:
1 0 0
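
For reference, the same model can be written down with OpenCV matrices.
The snippet below is only a sketch of how the three probability tables
might be declared; it does not use the repository's CvHMM helpers.

#include <opencv2/core/core.hpp>

int main()
{
    // 3 hidden states, 4 observation symbols (as in the tables above)
    double trans[] = { 0.5, 0.5, 0.0,
                       0.0, 0.7, 0.3,
                       0.0, 0.0, 1.0 };
    double emis[]  = { 0.5, 0.5, 0.0, 0.0,
                       0.0, 0.5, 0.5, 0.0,
                       0.0, 0.0, 0.5, 0.5 };
    double init[]  = { 1.0, 0.0, 0.0 };
    cv::Mat TRANS(3, 3, CV_64F, trans); // row i: P(next state j | current state i)
    cv::Mat EMIS (3, 4, CV_64F, emis);  // row i: P(symbol k | state i)
    cv::Mat INIT (1, 3, CV_64F, init);  // P(initial state i)
    return 0;
}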


As an example, we generate 25 sequences, each with 20 observations,
from the model defined above (a sampling sketch follows the listings below).

Generated Sequences:
0: 12223222232322222333
1: 10111122322223233322
2: 11212112123333333222
3: 01121223322332332233
4: 01223332223332222222
5: 11222221212333222223
6: 22112322323233323233
7: 21122221121322332233
8: 11332333223322222323
9: 01222222232233222322
10: 02232333322232222332
11: 11223323323233232332
12: 21123333332332222222
13: 22112333332233323222
14: 22232333333233323233
15: 12233332232332223232
16: 12123233322233232333
17: 13322323333323333222
18: 01123323223333223332
19: 22223223333332222232
20: 12223232232223323223
21: 11323332222323332223
22: 10123322323233333223
23: 11222111323323323223
24: 02122233332222333233

Generated States:
0: 12222222222222222222
1: 00001122222222222222
2: 01111111112222222222
3: 01111222222222222222
4: 01112222222222222222
5: 00111111112222222222
6: 11112222222222222222
7: 11111111111222222222
8: 11222222222222222222
9: 01222222222222222222
10: 01122222222222222222
11: 01222222222222222222
12: 11112222222222222222
13: 11112222222222222222
14: 11122222222222222222
15: 11122222222222222222
16: 01122222222222222222
17: 12222222222222222222
18: 01112222222222222222
19: 11222222222222222222
20: 11222222222222222222
21: 01222222222222222222
22: 00122222222222222222
23: 01111111222222222222
24: 01111122222222222222
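
Sampling such sequences amounts to walking the Markov chain and drawing
one symbol per step. The sketch below uses standard C++ random utilities;
the function name sampleSequence is illustrative and is not part of the
repository's interface.

#include <opencv2/core/core.hpp>
#include <random>
#include <vector>

// Draw one observation sequence of the given length from (TRANS, EMIS, INIT).
std::vector<int> sampleSequence(const cv::Mat &TRANS, const cv::Mat &EMIS,
                                const cv::Mat &INIT, int length, std::mt19937 &rng)
{
    auto draw = [&rng](const double *p, int n) {
        std::discrete_distribution<int> d(p, p + n);
        return d(rng);
    };
    std::vector<int> obs(length);
    int state = draw(INIT.ptr<double>(0), INIT.cols);        // initial state from INIT
    for (int t = 0; t < length; ++t) {
        obs[t] = draw(EMIS.ptr<double>(state), EMIS.cols);   // emit a symbol from the current state
        state  = draw(TRANS.ptr<double>(state), TRANS.cols); // move to the next state
    }
    return obs;
}

Calling this 25 times with a length of 20 produces the kind of listing shown above.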


Problem 1: Given an observation sequence and a model, how do we
efficiently compute P(O|Y), the probability of the observation
sequence O given the model Y?
Example: to demonstrate this, we compute the log-probability of each
generated sequence under the model defined above (a forward-algorithm
sketch follows the listing).

logpseq0 -146.34
logpseq1 -488.064
logpseq2 -488.262
logpseq3 -351.493
logpseq4 -214.724
logpseq5 -351.296
logpseq6 -214.724
logpseq7 -419.68
logpseq8 -214.724
logpseq9 -214.724
logpseq10 -146.34
logpseq11 -214.724
logpseq12 -214.724
logpseq13 -214.724
logpseq14 -77.9556
logpseq15 -146.34
logpseq16 -214.724
logpseq17 -146.34
logpseq18 -283.109
logpseq19 -77.9556
logpseq20 -146.34
logpseq21 -214.724
logpseq22 -283.109
logpseq23 -419.878
logpseq24 -214.724
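
This quantity is usually obtained with the forward algorithm. The sketch
below computes the log-probability with per-step scaling to avoid
underflow; it is illustrative and is not the repository's own decode
routine. A sequence that is impossible under the model yields -inf here.

#include <opencv2/core/core.hpp>
#include <cmath>
#include <vector>

// Scaled forward algorithm: returns log P(O | model).
double logLikelihood(const std::vector<int> &obs, const cv::Mat &TRANS,
                     const cv::Mat &EMIS, const cv::Mat &INIT)
{
    const int N = TRANS.rows, T = (int)obs.size();
    std::vector<double> alpha(N), next(N);
    double logp = 0.0;

    // Initialization: alpha_0(i) = INIT(i) * EMIS(i, o_0)
    for (int i = 0; i < N; ++i)
        alpha[i] = INIT.at<double>(0, i) * EMIS.at<double>(i, obs[0]);

    for (int t = 0; t < T; ++t) {
        // Normalize each step; the log-likelihood is the sum of the log scale factors.
        double scale = 0.0;
        for (int i = 0; i < N; ++i) scale += alpha[i];
        logp += std::log(scale);
        for (int i = 0; i < N; ++i) alpha[i] /= scale;
        if (t == T - 1) break;

        // Induction: alpha_{t+1}(j) = [sum_i alpha_t(i) * TRANS(i,j)] * EMIS(j, o_{t+1})
        for (int j = 0; j < N; ++j) {
            double s = 0.0;
            for (int i = 0; i < N; ++i) s += alpha[i] * TRANS.at<double>(i, j);
            next[j] = s * EMIS.at<double>(j, obs[t + 1]);
        }
        alpha.swap(next);
    }
    return logp;
}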


Problem 2: Given the model and an observation sequence,
how do we find an optimal state sequence for the underlying
Markov process? One answer is the Viterbi algorithm.
As an example, we estimate the optimal state sequence for each of the
generated sequences using the Viterbi algorithm and the defined model
(a log-space Viterbi sketch follows the listing).

0: 01222222222222222222
1: 00111122222222222222
2: 01111111122222222222
3: 01111222222222222222
4: 01222222222222222222
5: 01111111112222222222
6: 01112222222222222222
7: 01111111111222222222
8: 01222222222222222222
9: 01222222222222222222
10: 01222222222222222222
11: 01222222222222222222
12: 01122222222222222222
13: 01112222222222222222
14: 22222222222222222222
15: 01222222222222222222
16: 01122222222222222222
17: 02222222222222222222
18: 01122222222222222222
19: 22222222222222222222
20: 01222222222222222222
21: 01222222222222222222
22: 00122222222222222222
23: 01111111222222222222
24: 01122222222222222222
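
A log-space Viterbi sketch for this step is shown below; the function
viterbiPath is illustrative and not the repository's own routine.

#include <opencv2/core/core.hpp>
#include <cmath>
#include <limits>
#include <vector>

// Viterbi decoding in log space: returns the most likely state sequence.
std::vector<int> viterbiPath(const std::vector<int> &obs, const cv::Mat &TRANS,
                             const cv::Mat &EMIS, const cv::Mat &INIT)
{
    const int N = TRANS.rows, T = (int)obs.size();
    const double NEG_INF = -std::numeric_limits<double>::infinity();
    auto safeLog = [NEG_INF](double p) { return p > 0.0 ? std::log(p) : NEG_INF; };

    std::vector<std::vector<double>> delta(T, std::vector<double>(N)); // best log-prob so far
    std::vector<std::vector<int>>    psi(T, std::vector<int>(N, 0));   // backpointers

    for (int i = 0; i < N; ++i)
        delta[0][i] = safeLog(INIT.at<double>(0, i)) + safeLog(EMIS.at<double>(i, obs[0]));

    for (int t = 1; t < T; ++t)
        for (int j = 0; j < N; ++j) {
            double best = NEG_INF; int arg = 0;
            for (int i = 0; i < N; ++i) {
                double v = delta[t - 1][i] + safeLog(TRANS.at<double>(i, j));
                if (v > best) { best = v; arg = i; }
            }
            delta[t][j] = best + safeLog(EMIS.at<double>(j, obs[t]));
            psi[t][j] = arg;
        }

    // Backtrack from the best final state.
    std::vector<int> path(T);
    int last = 0;
    for (int j = 1; j < N; ++j) if (delta[T - 1][j] > delta[T - 1][last]) last = j;
    path[T - 1] = last;
    for (int t = T - 1; t > 0; --t) path[t - 1] = psi[t][path[t]];
    return path;
}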


Problem 3: Given an observation sequence O (or a set of sequences),
how do we find a model that maximizes the probability of O?
The answer is to train a model with the Baum-Welch algorithm.
To demonstrate this, we start from an initial guess for the model
and re-estimate its parameters from all of the sequences generated
above (a single re-estimation pass is sketched after the result).

TRANS:
0.66294 0.33706 1.1053e-030
8.07039e-031 0.630023 0.369977
1.01521e-030 9.60315e-031 1

EMIS:
0.113895 0.342418 0.407146 0.13654
0.0834312 0.119509 0.490896 0.306164
0.0833345 0.0877286 0.454319 0.374617

INIT:
0.732816 0.133851 0.133333
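
Internally, one Baum-Welch re-estimation pass runs the forward and
backward algorithms and then updates the parameters from the expected
counts. The sketch below handles a single sequence, is left unscaled for
clarity, and assumes every state is visited with non-zero probability;
it is not the repository's training routine, which works over all
sequences and iterates until convergence.

#include <opencv2/core/core.hpp>
#include <vector>

// One Baum-Welch re-estimation pass over a single sequence; updates the model in place.
void baumWelchStep(const std::vector<int> &obs, cv::Mat &TRANS, cv::Mat &EMIS, cv::Mat &INIT)
{
    const int N = TRANS.rows, M = EMIS.cols, T = (int)obs.size();
    std::vector<std::vector<double>> a(T, std::vector<double>(N));      // forward variables
    std::vector<std::vector<double>> b(T, std::vector<double>(N, 1.0)); // backward variables

    for (int i = 0; i < N; ++i)
        a[0][i] = INIT.at<double>(0, i) * EMIS.at<double>(i, obs[0]);
    for (int t = 1; t < T; ++t)
        for (int j = 0; j < N; ++j) {
            double s = 0.0;
            for (int i = 0; i < N; ++i) s += a[t - 1][i] * TRANS.at<double>(i, j);
            a[t][j] = s * EMIS.at<double>(j, obs[t]);
        }
    for (int t = T - 2; t >= 0; --t)
        for (int i = 0; i < N; ++i) {
            double s = 0.0;
            for (int j = 0; j < N; ++j)
                s += TRANS.at<double>(i, j) * EMIS.at<double>(j, obs[t + 1]) * b[t + 1][j];
            b[t][i] = s;
        }
    double pO = 0.0;                                  // P(O | model), assumed non-zero
    for (int i = 0; i < N; ++i) pO += a[T - 1][i];

    // Expected counts (gamma, xi), then the usual re-estimation formulas.
    cv::Mat newT = cv::Mat::zeros(N, N, CV_64F), newE = cv::Mat::zeros(N, M, CV_64F);
    std::vector<double> gammaSum(N, 0.0);             // sum of gamma_t(i) for t < T-1
    for (int t = 0; t < T; ++t)
        for (int i = 0; i < N; ++i) {
            double gamma = a[t][i] * b[t][i] / pO;
            newE.at<double>(i, obs[t]) += gamma;
            if (t == 0) INIT.at<double>(0, i) = gamma;
            if (t < T - 1) {
                gammaSum[i] += gamma;
                for (int j = 0; j < N; ++j)
                    newT.at<double>(i, j) += a[t][i] * TRANS.at<double>(i, j) *
                        EMIS.at<double>(j, obs[t + 1]) * b[t + 1][j] / pO;
            }
        }
    for (int i = 0; i < N; ++i) {
        double eSum = gammaSum[i] + a[T - 1][i] * b[T - 1][i] / pO; // include t = T-1 for emissions
        for (int j = 0; j < N; ++j) newT.at<double>(i, j) /= gammaSum[i];
        for (int k = 0; k < M; ++k) newE.at<double>(i, k) /= eSum;
    }
    newT.copyTo(TRANS);
    newE.copyTo(EMIS);
}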


done.
