Control dependency
Here, we give an example of a controlled HMM (also called an input-output HMM), in the special case of Markov switching regression.
using Distributions
using HiddenMarkovModels
import HiddenMarkovModels as HMMs
using LinearAlgebra
using Random
using StableRNGs
using StatsAPI
rng = StableRNG(63);
Model
A Markov switching regression is like a classical regression, except that the weights depend on the unobserved state of an HMM. We can represent it with the following subtype of AbstractHMM (see Custom HMM structures), which has one vector of coefficients $\beta_i$ per state.
struct ControlledGaussianHMM{T} <: AbstractHMM
    init::Vector{T}                 # initial state distribution
    trans::Matrix{T}                # transition matrix (here independent of the controls)
    dist_coeffs::Vector{Vector{T}}  # one vector of regression coefficients per state
end
In state $i$ with a vector of controls $u$, our observation is given by the linear model $y \sim \mathcal{N}(\beta_i^\top u, 1)$. Controls must be provided to both transition_matrix and obs_distributions, even if they are only used by one.
function HMMs.initialization(hmm::ControlledGaussianHMM)
    return hmm.init
end

function HMMs.transition_matrix(hmm::ControlledGaussianHMM, control::AbstractVector)
    return hmm.trans
end

function HMMs.obs_distributions(hmm::ControlledGaussianHMM, control::AbstractVector)
    return [Normal(dot(hmm.dist_coeffs[i], control), 1.0) for i in 1:length(hmm)]
end
In this case, the transition matrix does not depend on the control.
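For contrast, here is a minimal sketch of what a control-dependent transition matrix could look like, using a row-wise softmax link. The struct ControlledTransitionHMM and its field trans_coeffs are hypothetical illustrations, not used in the rest of this tutorial.

struct ControlledTransitionHMM{T} <: AbstractHMM
    init::Vector{T}
    trans_coeffs::Matrix{Vector{T}}  # one coefficient vector per transition (i, j)
    dist_coeffs::Vector{Vector{T}}
end

function HMMs.transition_matrix(hmm::ControlledTransitionHMM, control::AbstractVector)
    logits = [dot(c, control) for c in hmm.trans_coeffs]  # N×N matrix of scores
    probs = exp.(logits .- maximum(logits; dims=2))       # stabilized row-wise softmax
    return probs ./ sum(probs; dims=2)
end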
Simulation
d = 3  # dimension of the controls
init = [0.6, 0.4]
trans = [0.7 0.3; 0.2 0.8]
dist_coeffs = [-ones(d), ones(d)]  # one coefficient vector per state
hmm = ControlledGaussianHMM(init, trans, dist_coeffs);
Simulation requires a vector of controls, each of which is itself a vector of the right dimension. Let us build several sequences of variable lengths.
control_seqs = [[randn(rng, d) for t in 1:rand(rng, 100:200)] for k in 1:1000];
obs_seqs = [rand(rng, hmm, control_seq).obs_seq for control_seq in control_seqs];
obs_seq = reduce(vcat, obs_seqs)
control_seq = reduce(vcat, control_seqs)
seq_ends = cumsum(length.(obs_seqs));  # last index of each sequence in the concatenation
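The seq_ends vector is how the package recovers the boundaries of the individual sequences inside the concatenated vectors. For instance, HMMs.seq_limits (which we will reuse in fit! below) returns the index range of the $k$-th sequence:

t1, t2 = HMMs.seq_limits(seq_ends, 2)  # first and last index of the second sequence
obs_seq[t1:t2];  # the observations of that sequence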
Inference
Not much changes from the case with simple time dependency.
best_state_seq, _ = viterbi(hmm, obs_seq, control_seq; seq_ends)
([2, 2, 2, 2, 2, 2, 2, 2, 2, 2 … 1, 2, 2, 2, 2, 2, 2, 2, 2, 2], [-240.79840873206743, -361.9753154985462, -359.1457540543804, -244.82470121843878, -169.70329775178112, -263.78654158066894, -326.1025115188699, -372.81300733342596, -280.74702714238936, -296.0075678714037 … -235.85437738255308, -298.27773786183536, -247.58343527189314, -241.19083334795545, -266.3779982019227, -288.2929314867078, -276.16269881836257, -225.75571356635973, -324.18950241743545, -264.38821968810123])
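The other inference functions accept controls in the same way. For instance, assuming the same calling convention as viterbi:

forward_probs, seq_loglikelihoods = forward(hmm, obs_seq, control_seq; seq_ends)
total_loglikelihood = logdensityof(hmm, obs_seq, control_seq; seq_ends)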
Learning
Once more, we override the fit! function. The state-related parameters are estimated in the standard way, while the observation coefficients are given by the formula for weighted least squares.
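Concretely, if $U$ is the matrix whose row $t$ is the control $u_t^\top$ and $W_i = \mathrm{Diag}(\gamma_{i,1}, \dots, \gamma_{i,T})$ is the diagonal matrix of state marginals, the update for state $i$ is $\hat{\beta}_i = (U^\top W_i U)^{-1} U^\top W_i y$, which the code below solves in square-root form as $(W_i^{1/2} U) \backslash (W_i^{1/2} y)$.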
function StatsAPI.fit!(
    hmm::ControlledGaussianHMM{T},
    fb_storage::HMMs.ForwardBackwardStorage,
    obs_seq::AbstractVector,
    control_seq::AbstractVector;
    seq_ends,
) where {T}
    (; γ, ξ) = fb_storage
    N = length(hmm)
    # Accumulate sufficient statistics over all sequences
    hmm.init .= 0
    hmm.trans .= 0
    for k in eachindex(seq_ends)
        t1, t2 = HMMs.seq_limits(seq_ends, k)
        hmm.init .+= γ[:, t1]
        hmm.trans .+= sum(ξ[t1:t2])
    end
    # Normalize into valid probability distributions
    hmm.init ./= sum(hmm.init)
    for row in eachrow(hmm.trans)
        row ./= sum(row)
    end
    # Weighted least squares for the observation coefficients of each state,
    # with weights given by the state marginals γ
    U = reduce(hcat, control_seq)'
    y = obs_seq
    for i in 1:N
        W = sqrt.(Diagonal(γ[i, :]))
        hmm.dist_coeffs[i] = (W * U) \ (W * y)
    end
end
Now we put it to the test.
init_guess = [0.5, 0.5]
trans_guess = [0.6 0.4; 0.3 0.7]
dist_coeffs_guess = [-1.1 * ones(d), 1.1 * ones(d)]
hmm_guess = ControlledGaussianHMM(init_guess, trans_guess, dist_coeffs_guess);
hmm_est, loglikelihood_evolution = baum_welch(hmm_guess, obs_seq, control_seq; seq_ends)
first(loglikelihood_evolution), last(loglikelihood_evolution)
(-262443.29180374224, -258966.73330792715)
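Baum-Welch should never decrease the loglikelihood (up to numerical tolerance), which we can verify as a quick sanity check:

@assert all(diff(loglikelihood_evolution) .>= -1e-8)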
How did we perform?
cat(hmm_est.trans, hmm.trans; dims=3)
2×2×2 Array{Float64, 3}:
[:, :, 1] =
0.697807 0.302193
0.20208 0.79792
[:, :, 2] =
0.7 0.3
0.2 0.8
hcat(hmm_est.dist_coeffs[1], hmm.dist_coeffs[1])
3×2 Matrix{Float64}:
-1.00261 -1.0
-0.998463 -1.0
-1.00094 -1.0
hcat(hmm_est.dist_coeffs[2], hmm.dist_coeffs[2])
3×2 Matrix{Float64}:
0.997503 1.0
1.00089 1.0
0.997715 1.0
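For a scalar summary of the estimation error, we can also compare the parameters directly (output not shown):

maximum(abs, hmm_est.trans - hmm.trans)
maximum(abs, reduce(vcat, hmm_est.dist_coeffs) - reduce(vcat, hmm.dist_coeffs))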
This page was generated using Literate.jl.