TIM can compute many of the measures using temporally adaptive estimators. This means that, at a given time instant, a time-window is used to concentrate on a smaller piece of the time series: no samples beyond the time-window are used for the estimation.
TIM allows the user to specify a weighting function, also called a filter, for the samples inside a time-window. The weighting function makes it possible, for example, to give more weight to samples near in time than to those farther away. There is only one requirement for a weighting function: the weights inside the time-window must sum to a non-zero value. This sum need not be 1, because TIM divides the weights by it. The weighting function is centered on the current time instant.
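For example, a symmetric triangular weighting function covering a time-window of 5 samples could be given as

>> filter = [1, 2, 3, 2, 1];

Note that the weights here sum to 9 rather than 1; this is fine, since TIM does the normalization itself. See the documentation of the individual temporal estimators for how the filter is passed to them.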
The default weighting function in TIM is the Kronecker delta, which is 1 for the samples at the current time instant, and (implicitly) 0 for other time instants. This results in an estimator which is very efficient to evaluate, but has high variance. To lower the variance, one can average (low-pass filter) the output as a post-process. Note that this post-process widens the actual time-window of samples on which each estimate depends.
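As an example of such a post-process, the temporal estimates could be smoothed with a 5-sample moving average. This is only a sketch, assuming that estimates holds the output of one of the temporal estimators:

>> estimates = conv(estimates, ones(1, 5) / 5, 'same');

Since the averaging also touches the NaN padding at the ends of the output (see below), the NaN regions grow by two samples on each side; this directly reflects the widening of the time-window on which each estimate depends.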
The Kronecker delta weighting function combined with averaging has been studied in the literature and is known to give good results. In contrast, there is no research available on the wider weighting functions yet. We therefore recommend using the default weighting function; the wider weighting functions are included in TIM on an experimental basis. The run-times of the temporal estimators depend linearly on the size of the support of the weighting function.
The temporal entropy combinations output an array of data, where each element corresponds to the estimate at a given time instant. It is important to know that the size of the output array is independent of the lags that are used: the size is given by the minimum number of samples among the used signals. To keep the time correspondence between the output and the original signals, the output array is padded with NaNs before and after the temporal estimates. The less the signals have in common in time, the fewer the temporal estimates and the more the NaNs. The first element of the output array always corresponds to time instant 0.
To illustrate, consider the following 1D signal:
>> A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
We will now compute the temporal mutual information between A and a version of itself delayed by 2 positions:
>> mutual_information_t(A, A, 2, 0, 2)
This results in:
NaN NaN 1.5000 1.8333 2.0833 2.0833 2.0833 2.0833 1.8333 1.5000
To see where the NaNs come from, consider the involved signals written out aligned in time (the second signal is delayed by 2 positions):
0 1 2 3 4 5 6 7 8 9
    0 1 2 3 4 5 6 7 8 9
The entropy combination is defined only on the time interval where all signals are present, which in this case is [2, 9]. Thus the first two elements of the output are set to NaN.
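If one wants to work only with the actual estimates, the time instant at which they begin can be recovered from the NaN padding. A sketch, storing the output of the call above in I:

>> I = mutual_information_t(A, A, 2, 0, 2);
>> tBegin = find(~isnan(I), 1) - 1

Since the first element of the output corresponds to time instant 0, this gives tBegin = 2, in agreement with the interval [2, 9] above.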