SampEn.m
function saen = SampEn( dim, r, data, tau )
% SAMPEN Sample Entropy
%   Calculates the sample entropy of a given time series.
%   SampEn is conceptually similar to approximate entropy (ApEn), but has
%   the following differences:
%   1) SampEn does not count self-matches. The potential problem of
%      taking log(0) is avoided by deferring the logarithm to the last step.
%   2) SampEn depends far less on the data size than ApEn does; the
%      comparison is shown in the uploaded graph.
%   dim  : embedding dimension
%   r    : tolerance (typically 0.2 * std(data))
%   data : time-series data (vector)
%   tau  : delay time for downsampling (optional; default 1)
%
%---------------------------------------------------------------------
% coded by Kijoon Lee, kjlee@ntu.edu.sg
% Mar 21, 2012
%---------------------------------------------------------------------
if nargin < 4, tau = 1; end
% downsample requires the Signal Processing Toolbox; data(1:tau:end) is an
% equivalent plain-MATLAB alternative
if tau > 1, data = downsample(data, tau); end

N = length(data);
correl = zeros(1,2);

% Template matrix: row i holds the series delayed by i-1 samples, so each
% column is an embedding vector of dimension dim+1
dataMat = zeros(dim+1, N-dim);
for i = 1:dim+1
    dataMat(i,:) = data(i:N-dim+i-1);
end

for m = dim:dim+1
    count = zeros(1, N-dim);
    tempMat = dataMat(1:m,:);
    for i = 1:N-m
        % Chebyshev distance to all later templates, excluding self-matches;
        % max(...,[],1) keeps the column-wise maximum even when m == 1
        dist = max(abs(tempMat(:,i+1:N-dim) - repmat(tempMat(:,i),1,N-dim-i)), [], 1);
        % Heaviside function of the distance; users can substitute any
        % other kernel here for modified sample entropy (mSampEn)
        D = (dist < r);
        count(i) = sum(D)/(N-dim);
    end
    correl(m-dim+1) = sum(count)/(N-dim);
end

saen = log(correl(1)/correl(2));
end
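For readers without MATLAB, the same computation can be sketched in Python/NumPy (a minimal translation, not part of the original file; the name `samp_en` is illustrative). Since the normalization constants cancel in the final ratio, raw match counts are summed directly here.

```python
import numpy as np

def samp_en(dim, r, data, tau=1):
    """Sample entropy, mirroring SampEn.m.

    dim : embedding dimension
    r   : tolerance (typically 0.2 * std of the data)
    data: 1-D time series
    tau : downsampling factor (default 1)
    """
    data = np.asarray(data, dtype=float)
    if tau > 1:
        data = data[::tau]  # plain downsampling by tau
    N = len(data)
    # Template matrix: row i is the series delayed by i samples, so each
    # column is an embedding vector of dimension dim+1.
    data_mat = np.array([data[i:N - dim + i] for i in range(dim + 1)])
    correl = np.zeros(2)
    for k, m in enumerate((dim, dim + 1)):
        temp = data_mat[:m, :]
        count = 0
        for i in range(N - dim - 1):
            # Chebyshev distance from template i to all later templates
            # (self-matches excluded, as in the MATLAB version)
            dist = np.max(np.abs(temp[:, i + 1:] - temp[:, [i]]), axis=0)
            count += np.count_nonzero(dist < r)
        correl[k] = count
    return float(np.log(correl[0] / correl[1]))
```

A quick sanity property: every (m+1)-point match is also an m-point match, so `correl[0] >= correl[1]` and the returned value is non-negative whenever both counts are positive.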