The following notes summarize what was presented in sub-section 3.3 - Implémentation Matlab (Matlab implementation) of the French report. Through the code snippets, I present the four steps of implementing the SIS particle filter for our problem, and give some interpretations and explanations along the way.
First, we generate a sequence of random variables (xt) that represents the real degradation level, using the transition kernel. Then, at each instant (t), a Gaussian noise is added to (xt) to give the corresponding measurement value (yt). In practice, the particle filter will re-estimate the hidden state (xt) based on these measurement values.
%% Create the monitoring data
T = 500; % Time length
x = 0; % Initial degradation level
y = zeros(1,T); % Measurement values
% At each instant (t), do:
for t = 1:T
    % Update the cumulated degradation level
    x = x + gamrnd(a,b);
    % Create the corresponding measurement value
    y(t) = x + dev_noise*randn;
end
Note that:
- gamrnd(a,b) generates a random variable from the Gamma distribution Γ(k, θ), with shape k = a and scale θ = b.
- dev_noise represents the standard deviation (σ) of the measurement noise.
- dev_noise*randn generates a random variable from the distribution Ν(0, σ²).
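For readers outside Matlab, the same data-generation step can be sketched in Python with NumPy; the parameter values a, b and dev_noise below are hypothetical placeholders, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 500          # time length
a, b = 2.0, 0.1  # Gamma shape and scale (hypothetical values)
dev_noise = 0.5  # measurement noise std sigma (hypothetical value)

x = np.zeros(T)  # hidden degradation levels
y = np.zeros(T)  # noisy measurements
level = 0.0
for t in range(T):
    level += rng.gamma(a, b)                 # Gamma(a, b) increment: degradation only grows
    x[t] = level
    y[t] = level + dev_noise * rng.normal()  # y = x + N(0, sigma^2)
```

Because the Gamma increments are non-negative, the hidden path (x) is non-decreasing, which matches the monotone-degradation assumption of the model.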
Suppose that at the beginning the system is new (x₀ = 0) and that we do not perform any observation, so (y₀) does not exist. Also, suppose that the initial distribution is a Normal distribution: p(x₀) = Ν(0, σ₀²).
%% Initialize the particle filter
% Initial state
x = 0;
% Set of Ns samples representing the degradation state
x_P = zeros(1,Ns);
% Importance weights associated with the Ns samples
P_w = zeros(1,Ns);
% Simulate according to the initial distribution
for i = 1:Ns
    x_P(i) = x + dev_init_sample*randn;
    P_w(i) = 1/Ns;
end
Note that:
- dev_init_sample represents the standard deviation (σ₀) of the initial distribution.
- dev_init_sample*randn generates a random variable from the distribution Ν(0, σ₀²).
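The initialization can be sketched equivalently in Python with NumPy; the values of Ns and dev_init_sample below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

Ns = 1000              # number of particles (hypothetical value)
dev_init_sample = 0.1  # sigma_0 of the initial distribution (hypothetical value)
x0 = 0.0               # the system starts new

x_P = x0 + dev_init_sample * rng.normal(size=Ns)  # samples from N(0, sigma_0^2)
P_w = np.full(Ns, 1.0 / Ns)                       # uniform importance weights
```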
We now have a set of (Ns) particles that represents the initial degradation state of the system. These particles characterize the a priori (prior) distribution from the Bayesian estimation viewpoint. When the first measurement value (y₁) is received, the particle filter gets going. The essence of the filter consists of two stages: generate the samples according to the importance distribution, then weight these samples (i.e., update their associated weights) to form a new set of (samples, weights) that approximates the a posteriori (posterior) distribution p(x₁|y₁).
Since the transition kernel is chosen as the importance distribution, the first stage, prediction (sampling), consists of mutating the samples using this kernel. After this stage, we obtain a new set of independent samples.
%% Initialize the particle filter
% ...
T = 500; % Time length
% At each instant (t), do:
for t = 1:T
    %% The body of the particle filter
    % Each of the Ns particles evolves independently
    for i = 1:Ns
        % Prediction stage
        % Mutate the sample according to the transition kernel
        x_P_update(i) = x_P(i) + gamrnd(a,b);
        % Correction and Estimation stage
        % ...
    end
    % ...
end
When the first measurement value (y₁) is available, we evaluate the likelihood of each sample and compute its corresponding weight. The weight quantifies the adequacy of each sample with respect to the current observation: the sample whose predicted value (obtained through the observation equation of the model) is closer to (y₁) is assigned a larger weight. Notice that in order to compute the weights, we must first determine the likelihood function p(y₁|x₁). Finally, we normalize the weights to obtain a set of particles (samples, weights), then estimate the degradation level as the weighted sum of all the sample values.
%% Initialize the particle filter
% ...
T = 500; % Time length
% At each instant (t), do:
for t = 1:T
    %% The body of the particle filter
    % Each of the Ns particles evolves independently
    for i = 1:Ns
        % Prediction stage
        % ...
        % Correction and Estimation stage
        % Predicted measurement via the observation equation
        y_update(i) = x_P_update(i);
        % Update the weight with the Gaussian likelihood
        P_w(i) = P_w(i)*(1/(dev_noise*sqrt(2*pi)))*exp(-(y_update(i)-y(t))^2/(2*dev_noise^2));
    end
    % Normalize the importance weights
    P_w = P_w./sum(P_w);
    % Estimate the degradation level
    x_estimate = 0;
    for i = 1:Ns
        x_estimate = x_estimate + P_w(i)*x_P_update(i);
    end
    % Propagate the samples to the next instant
    x_P = x_P_update;
end
The command line y_update(i) = x_P_update(i)
refers to the observation equation y = x + ε
of the model. For our problem, fortunately, this equation is simple, but in other cases, for example with other measurement techniques, it may be non-linear. Therefore, instead of using x_P_update(i)
directly in the computation of the importance weights, we add the above-mentioned command so that the code matches the general case of the SIS particle filter algorithm. At the end of the cycle, the samples are propagated to the next instant (t = 2) to characterize the a priori (prior) distribution p(x₂|y₁).
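Putting the four steps together, the whole procedure can be sketched as a self-contained Python function (a minimal sketch, not the report's implementation: the transition kernel serves as the importance distribution, the likelihood is Gaussian, there is no resampling, and all parameter values in the usage example are hypothetical).

```python
import numpy as np

def sis_filter(y, a, b, dev_noise, dev_init_sample, Ns=1000, rng=None):
    """SIS particle filter sketch: Gamma(a, b) transition kernel as the
    importance distribution, Gaussian likelihood p(y_t | x_t) = N(x_t, dev_noise^2)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x_P = dev_init_sample * rng.normal(size=Ns)  # initial particles ~ N(0, sigma_0^2)
    P_w = np.full(Ns, 1.0 / Ns)                  # uniform initial weights
    x_estimate = np.zeros(len(y))
    for t, y_t in enumerate(y):
        # Prediction: mutate every particle with the transition kernel
        x_P = x_P + rng.gamma(a, b, size=Ns)
        # Correction: multiply each weight by the Gaussian likelihood
        lik = np.exp(-(x_P - y_t) ** 2 / (2 * dev_noise ** 2)) / (dev_noise * np.sqrt(2 * np.pi))
        P_w = P_w * lik
        P_w = P_w / P_w.sum()                    # normalize the importance weights
        # Estimation: weighted sum of the particle values
        x_estimate[t] = np.sum(P_w * x_P)
    return x_estimate

# Usage: filter synthetic data generated from the same model
rng = np.random.default_rng(1)
a, b, dev_noise = 2.0, 0.1, 0.5               # hypothetical parameters
x_true = np.cumsum(rng.gamma(a, b, size=50))  # hidden degradation path
y = x_true + dev_noise * rng.normal(size=50)  # noisy measurements
est = sis_filter(y, a, b, dev_noise, dev_init_sample=0.1)
```

Note that plain SIS suffers from weight degeneracy over long horizons; in practice a resampling step (SIR) is added when the effective sample size drops, but that refinement is outside the scope of these notes.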