API reference

Sequence formatting

Most algorithms below ingest the data with two positional arguments obs_seq (mandatory) and control_seq (optional), and a keyword argument seq_ends (optional).

  • If the data consists of a single sequence, obs_seq and control_seq are the corresponding vectors of observations and controls, and you don't need to provide seq_ends.
  • If the data consists of multiple sequences, obs_seq and control_seq are concatenations of several vectors, whose end indices are given by seq_ends. Starting from separate sequences obs_seqs and control_seqs, you can run the following snippet:
obs_seq = reduce(vcat, obs_seqs)
control_seq = reduce(vcat, control_seqs)
seq_ends = cumsum(length.(obs_seqs))
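
For instance, with two hypothetical observation sequences of lengths 3 and 2, this yields:

obs_seqs = [[0.1, 0.2, 0.3], [0.4, 0.5]]  # two separate sequences
obs_seq = reduce(vcat, obs_seqs)          # [0.1, 0.2, 0.3, 0.4, 0.5]
seq_ends = cumsum(length.(obs_seqs))      # [3, 5]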

Types

HiddenMarkovModels.HMM (Type)
struct HMM{V<:AbstractVector, M<:AbstractMatrix, VD<:AbstractVector, Vl<:AbstractVector, Ml<:AbstractMatrix} <: AbstractHMM

Basic implementation of an HMM.

Fields

  • init::AbstractVector: initial state probabilities

  • trans::AbstractMatrix: state transition probabilities

  • dists::AbstractVector: observation distributions

  • loginit::AbstractVector: logarithms of initial state probabilities

  • logtrans::AbstractMatrix: logarithms of state transition probabilities

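As an illustration, here is a minimal sketch of how such an HMM might be constructed, assuming Distributions.jl supplies the observation distributions:

using HiddenMarkovModels, Distributions

init = [0.6, 0.4]                    # initial state probabilities
trans = [0.7 0.3; 0.2 0.8]           # state transition probabilities
dists = [Normal(-1.0), Normal(1.0)]  # one observation distribution per state
hmm = HMM(init, trans, dists)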

Interface

HiddenMarkovModels.transition_matrix (Function)
transition_matrix(hmm)
transition_matrix(hmm, control)

Return the matrix of state transition probabilities for hmm (possibly when control is applied).

Note

When processing sequences, the control at time t influences the transition from time t to t+1 (and not from time t-1 to t).

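As a hypothetical sketch of how a control-dependent model fits this interface: the type ControlledHMM below and its two-regime control scheme are invented for illustration, and initialization (which supplies the initial state probabilities) is the remaining method of the same interface:

using HiddenMarkovModels, Distributions

struct ControlledHMM{T} <: AbstractHMM
    init::Vector{T}
    trans_slow::Matrix{T}  # transitions applied when control == 1
    trans_fast::Matrix{T}  # transitions applied otherwise
    dists::Vector{Normal{T}}
end

HiddenMarkovModels.initialization(hmm::ControlledHMM) = hmm.init

function HiddenMarkovModels.transition_matrix(hmm::ControlledHMM, control)
    # the control at time t selects the transition from time t to t+1
    return control == 1 ? hmm.trans_slow : hmm.trans_fast
end

HiddenMarkovModels.obs_distributions(hmm::ControlledHMM, control) = hmm.dists
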
HiddenMarkovModels.obs_distributions (Function)
obs_distributions(hmm)
obs_distributions(hmm, control)

Return a vector of observation distributions, one for each state of hmm (possibly when control is applied).

These distribution objects should implement

  • Random.rand(rng, dist) for sampling
  • DensityInterface.logdensityof(dist, obs) for inference
  • StatsAPI.fit!(dist, obs_seq, weight_seq) for learning
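
As a hypothetical sketch, a hand-rolled Bernoulli-like distribution satisfying these three requirements could look as follows (the type MyBernoulli is made up for illustration):

using Random, DensityInterface, StatsAPI

mutable struct MyBernoulli
    p::Float64
end

# sampling
Random.rand(rng::Random.AbstractRNG, d::MyBernoulli) = rand(rng) < d.p

# inference
DensityInterface.logdensityof(d::MyBernoulli, obs) = obs ? log(d.p) : log1p(-d.p)

# learning: weighted maximum likelihood update
function StatsAPI.fit!(d::MyBernoulli, obs_seq, weight_seq)
    d.p = sum(weight_seq .* obs_seq) / sum(weight_seq)
    return d
end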

Utils

Base.rand (Function)
rand([rng,] hmm, T)
rand([rng,] hmm, control_seq)

Simulate hmm for T time steps, or when the sequence control_seq is applied.

Return a named tuple (; state_seq, obs_seq).

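For example, to simulate 100 time steps of the toy hmm sketched earlier:

sim = rand(hmm, 100)  # named tuple
sim.state_seq         # vector of state indices
sim.obs_seq           # vector of sampled observations
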
Base.eltype (Function)
eltype(hmm, obs, control)

Return a type that can accommodate forward-backward computations for hmm on observations similar to obs.

It is typically a promotion between the element type of the initialization, the element type of the transition matrix, and the type of an observation logdensity evaluated at obs.

HiddenMarkovModels.seq_limits (Function)
seq_limits(seq_ends, k)

Return a tuple (t1, t2) giving the begin and end indices of subsequence k within a set of sequences ending at seq_ends.

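For instance, with the seq_ends = [3, 5] computed earlier:

seq_limits([3, 5], 2)  # (4, 5): the second sequence spans indices 4 to 5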

Inference

DensityInterface.logdensityof (Function)
logdensityof(hmm)

Return the prior loglikelihood associated with the parameters of hmm.

logdensityof(hmm, obs_seq; ...)
logdensityof(hmm, obs_seq, control_seq; seq_ends)

Run the forward algorithm to compute the loglikelihood of obs_seq for hmm, integrating over all possible state sequences.

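For a single sequence, the first form suffices; with multiple concatenated sequences formatted as above, a hypothetical call looks like:

logL = logdensityof(hmm, obs_seq, control_seq; seq_ends)
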
HiddenMarkovModels.joint_logdensityof (Function)
joint_logdensityof(hmm, obs_seq, state_seq; ...)
joint_logdensityof(
    hmm,
    obs_seq,
    state_seq,
    control_seq;
    seq_ends
)

Compute the joint loglikelihood of obs_seq and state_seq for hmm; since the state sequence is given, no marginalization over hidden states is required.

HiddenMarkovModels.forward (Function)
forward(hmm, obs_seq; ...)
forward(hmm, obs_seq, control_seq; seq_ends)

Apply the forward algorithm to infer the current state after sequence obs_seq for hmm.

Return a tuple (storage.α, storage.logL) where storage is of type ForwardStorage.

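For example, on a single observation sequence:

α, logL = forward(hmm, obs_seq)  # α: state marginals at the last time step, logL: loglikelihood
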
HiddenMarkovModels.viterbi (Function)
viterbi(hmm, obs_seq; ...)
viterbi(hmm, obs_seq, control_seq; seq_ends)

Apply the Viterbi algorithm to infer the most likely state sequence corresponding to obs_seq for hmm.

Return a tuple (storage.q, storage.logL) where storage is of type ViterbiStorage.

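A typical single-sequence call:

q, logL = viterbi(hmm, obs_seq)  # q: most likely state sequence
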
HiddenMarkovModels.forward_backward (Function)
forward_backward(hmm, obs_seq; ...)
forward_backward(hmm, obs_seq, control_seq; seq_ends)

Apply the forward-backward algorithm to infer the posterior state and transition marginals during sequence obs_seq for hmm.

Return a tuple (storage.γ, storage.logL) where storage is of type ForwardBackwardStorage.

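A typical single-sequence call:

γ, logL = forward_backward(hmm, obs_seq)  # γ[i, t] = ℙ(X[t]=i | Y[1:T])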

Learning

HiddenMarkovModels.baum_welch (Function)
baum_welch(hmm_guess, obs_seq; ...)
baum_welch(
    hmm_guess,
    obs_seq,
    control_seq;
    seq_ends,
    atol,
    max_iterations,
    loglikelihood_increasing
)

Apply the Baum-Welch algorithm to estimate the parameters of an HMM on obs_seq, starting from hmm_guess.

Return a tuple (hmm_est, loglikelihood_evolution) where hmm_est is the estimated HMM and loglikelihood_evolution is a vector of loglikelihood values, one per iteration of the algorithm.

Keyword arguments

  • atol: minimum loglikelihood increase at an iteration of the algorithm (otherwise the algorithm is deemed to have converged)
  • max_iterations: maximum number of iterations of the algorithm
  • loglikelihood_increasing: whether to throw an error if the loglikelihood decreases
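
A hypothetical call, with arbitrary keyword values chosen for illustration (not necessarily the package defaults):

hmm_est, loglikelihood_evolution = baum_welch(
    hmm_guess, obs_seq;
    atol=1e-5, max_iterations=100, loglikelihood_increasing=true,
)
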
StatsAPI.fit! (Function)
StatsAPI.fit!(
    hmm, fb_storage::ForwardBackwardStorage,
    obs_seq, [control_seq]; seq_ends,
)

Update hmm in-place based on information generated during forward-backward.

This function is allowed to reuse fb_storage as a scratch space, so its contents should not be trusted afterwards.


In-place versions

Forward

HiddenMarkovModels.ForwardStorage (Type)
struct ForwardStorage{R}

Fields

Only the fields with a description are part of the public API.

  • α::Matrix: posterior last state marginals α[i] = ℙ(X[T]=i | Y[1:T])

  • logL::Vector: one loglikelihood per observation sequence

  • B::Matrix

  • c::Vector


Viterbi

HiddenMarkovModels.ViterbiStorage (Type)
struct ViterbiStorage{R}

Fields

Only the fields with a description are part of the public API.

  • q::Vector{Int64}: most likely state sequence q[t] = argmaxᵢ ℙ(X[t]=i | Y[1:T])

  • logL::Vector: one joint loglikelihood per pair of observation sequence and most likely state sequence

  • logB::Matrix

  • ϕ::Matrix

  • ψ::Matrix{Int64}


Forward-backward

HiddenMarkovModels.ForwardBackwardStorage (Type)
struct ForwardBackwardStorage{R, M<:AbstractArray{R, 2}}

Fields

Only the fields with a description are part of the public API.

  • γ::Matrix: posterior state marginals γ[i,t] = ℙ(X[t]=i | Y[1:T])

  • ξ::Vector{M}: posterior transition marginals ξ[t][i,j] = ℙ(X[t]=i, X[t+1]=j | Y[1:T])

  • logL::Vector: one loglikelihood per observation sequence

  • B::Matrix

  • α::Matrix

  • c::Vector

  • β::Matrix

  • Bβ::Matrix


Miscellaneous

HiddenMarkovModels.fit_in_sequence! (Function)
fit_in_sequence!(dists, i, x, w)

Modify the i-th element of dists by fitting it to an observation sequence x with associated weight sequence w.

Default behavior:

fit!(dists[i], x, w)

Override for Distributions.jl (in the package extension):

dists[i] = fit(eltype(dists), turn_into_vector(x), w)

Internals

HiddenMarkovModels.LightDiagNormal (Type)
struct LightDiagNormal{T1, T2, T3, V1<:AbstractVector{T1}, V2<:AbstractVector{T2}, V3<:AbstractVector{T3}}

An HMMs-compatible implementation of a multivariate normal distribution with diagonal covariance, enabling allocation-free in-place estimation.

This is not part of the public API and is expected to change.

Fields

  • μ::AbstractVector: means

  • σ::AbstractVector: standard deviations

  • logσ::AbstractVector: log standard deviations

HiddenMarkovModels.LightCategorical (Type)
struct LightCategorical{T1, T2, V1<:AbstractVector{T1}, V2<:AbstractVector{T2}}

An HMMs-compatible implementation of a discrete categorical distribution, enabling allocation-free in-place estimation.

This is not part of the public API and is expected to change.

Fields

  • p::AbstractVector: class probabilities

  • logp::AbstractVector: log class probabilities

HiddenMarkovModels.log_transition_matrix (Function)
log_transition_matrix(hmm)
log_transition_matrix(hmm, control)

Return the matrix of state transition log-probabilities for hmm (possibly when control is applied).

Falls back on transition_matrix.

Note

When processing sequences, the control at time t influences the transition from time t to t+1 (and not from time t-1 to t).

HiddenMarkovModels.argmaxplus_transmul! (Function)
argmaxplus_transmul!(y, ind, A, x)

Perform the in-place multiplication transpose(A) * x in the sense of max-plus algebra, store the result in y, and store the index of the maximum for each component of y in ind.

