HMM visualizer — Viterbi, Forward, Backward, Posterior decoding

Step through four classic HMM inference algorithms on a 2-state CpG-island toy (B = background, I = island). Viterbi is max-product over paths and returns the most-likely state path. Forward and Backward are sum-product over all paths and return the data likelihood P(x). Posterior decoding combines them as γ[s][i] = P(state_i = s | x) and picks each position independently. Switch with the dropdown at the top-left. Companion to the algorithm code at cpg-island-hmm.
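The Viterbi pass the visualizer animates can be sketched in a few lines (a minimal log-space illustration, not the visualizer's actual code; the parameter values match the tables below):

```python
import math

# Parameters copied from the tables below (B = background, I = island).
STATES = ["B", "I"]
PI = {"B": 0.5, "I": 0.5}
A = {"B": {"B": 0.9, "I": 0.1}, "I": {"B": 0.2, "I": 0.8}}
E = {"B": {"A": 0.4, "C": 0.1, "G": 0.1, "T": 0.4},
     "I": {"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1}}

def viterbi(x):
    """Max-product in log space; returns the most-likely state path for x."""
    V = [{s: math.log(PI[s]) + math.log(E[s][x[0]]) for s in STATES}]
    back = []                                  # back[i][s] = argmax predecessor of s
    for c in x[1:]:
        prev, col, ptr = V[-1], {}, {}
        for s in STATES:
            best = max(STATES, key=lambda t: prev[t] + math.log(A[t][s]))
            col[s] = prev[best] + math.log(A[best][s]) + math.log(E[s][c])
            ptr[s] = best
        V.append(col)
        back.append(ptr)
    s = max(STATES, key=lambda t: V[-1][t])    # best final state
    path = [s]
    for ptr in reversed(back):                 # follow traceback pointers
        s = ptr[s]
        path.append(s)
    return "".join(reversed(path))

print(viterbi("CGCG"))   # -> IIII (CG-rich read decodes as all-island)
```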


Viterbi trellis   V[state][position] = max log-prob ending in that state

Legend: state B (background) · state I (island) · current cell · traceback path
Step through to see the per-cell computation.

Highlighted in the parameter tables on the right: which π / a / e values feed the current cell. Viterbi uses one transition (the argmax winner); Forward and Backward use all transitions out of / into the row. The active line in the recurrence below the parameter tables is highlighted too.
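The one-winner-versus-all-transitions distinction at a single cell looks like this (the previous-column values are made up for illustration, not taken from a specific run):

```python
# A single trellis cell at (state I, position i): compare how Viterbi and
# Forward combine the incoming transitions. Numbers below are illustrative.
a_in = {"B": 0.1, "I": 0.8}        # incoming transition probs t -> I
e_I = 0.4                          # emission prob of x[i] in state I
prev = {"B": 0.02, "I": 0.05}      # previous trellis column (probabilities)

# Viterbi highlights one transition: the argmax winner.
winner = max(prev, key=lambda t: prev[t] * a_in[t])
v_cell = prev[winner] * a_in[winner] * e_I

# Forward highlights all transitions into the row: a sum, not a max.
f_cell = e_I * sum(prev[t] * a_in[t] for t in prev)
```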

Decoded path   most-likely state sequence

HMM parameters

Initial probabilities π

       B     I
π     0.50  0.50

Transition matrix A

from \ to    B     I
B           0.90  0.10
I           0.20  0.80

Emission matrix E

state    A     C     G     T
B       0.40  0.10  0.10  0.40
I       0.10  0.40  0.40  0.10

Background prefers A/T; island prefers C/G. (Real CpG-island HMMs use 8 emitting states for dinucleotide context — this 2-state version captures the core idea.)
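One thing the transition matrix pins down: HMM state durations are geometric, so the self-transition probabilities above fix the expected segment lengths (a standard identity, not something the visualizer computes):

```python
# Expected run length in a state with self-transition probability p is
# 1 / (1 - p), the mean of a geometric distribution.
a_BB, a_II = 0.9, 0.8                      # self-transitions from table A above
expected_background_run = 1 / (1 - a_BB)   # ~10 positions of background
expected_island_run = 1 / (1 - a_II)       # ~5 positions per island
```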

Recurrence

Viterbi    V[s][0] = log π_s + log e_s(x_0)
           V[s][i] = log e_s(x_i) + max over s' of ( V[s'][i-1] + log a(s', s) )
Forward    F[s][0] = π_s · e_s(x_0)
           F[s][i] = e_s(x_i) · Σ_s' F[s'][i-1] · a(s', s)
Backward   B[s][n-1] = 1
           B[s][i] = Σ_s' a(s, s') · e_s'(x_{i+1}) · B[s'][i+1]
Posterior  γ[s][i] = F[s][i] · B[s][i] / P(x),   where P(x) = Σ_s F[s][n-1]
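A runnable sketch of Forward, Backward, and posterior decoding on the same 2-state toy (plain probabilities for readability; real implementations scale each column or work in log space to avoid underflow):

```python
# Parameters match the visualizer's tables (B = background, I = island).
STATES = ["B", "I"]
PI = {"B": 0.5, "I": 0.5}
A = {"B": {"B": 0.9, "I": 0.1}, "I": {"B": 0.2, "I": 0.8}}
E = {"B": {"A": 0.4, "C": 0.1, "G": 0.1, "T": 0.4},
     "I": {"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1}}

def forward(x):
    # F[i][s] = P(x[0..i], state_i = s)
    F = [{s: PI[s] * E[s][x[0]] for s in STATES}]
    for c in x[1:]:
        F.append({s: E[s][c] * sum(F[-1][t] * A[t][s] for t in STATES)
                  for s in STATES})
    return F

def backward(x):
    # B[i][s] = P(x[i+1..n-1] | state_i = s); base case is all ones
    B = [{s: 1.0 for s in STATES}]
    for c in reversed(x[1:]):
        B.insert(0, {s: sum(A[s][t] * E[t][c] * B[0][t] for t in STATES)
                     for s in STATES})
    return B

def posterior_decode(x):
    # gamma[s][i] = F[i][s] * B[i][s] / P(x); pick the argmax per position
    F, B = forward(x), backward(x)
    px = sum(F[-1][s] for s in STATES)     # data likelihood P(x)
    return "".join(max(STATES, key=lambda s: F[i][s] * B[i][s] / px)
                   for i in range(len(x)))
```

A quick sanity check on any input: summing F·B at any single position also gives P(x), which is one way the visualizer's Forward and Backward panels can be cross-checked against each other.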

Used in the wild