GSoC week 6: minimum complexity echo state network
Up until now we used reservoirs generated mainly through a random process, an approach that requires a lot of fine parameter tuning. Even when the optimal parameters are found, the prediction is run-dependent and can show different results for different generations of the reservoir. Is this the only possible way to construct an Echo State Network (ESN)? Is there a deterministic way to build an ESN? These are the questions posed in [1], and the following post is an illustration of the implementation in ReservoirComputing.jl of their construction of a deterministic input layer and three reservoirs. As always we will quickly lay out the theory, then give an example.
Minimum complexity reservoir and input layer
The usual construction of a reservoir involves creating a random sparse matrix with given sparsity and dimension, then rescaling its values so that the spectral radius is below a determined value, usually one, in order to ensure the Echo State Property (ESP) [2]. As already stated in the work done in the 4th week, this construction, although efficient, has some downsides. The particular problem we want to solve with the current implementation is the randomness of the process: both the reservoir and the input layer are initially generated at random and later rescaled. The paper we are following for a possible solution [1] introduces three different constructions for a deterministic reservoir:
- Delay Line Reservoir (DLR): composed of units organized in a line. The elements of the lower subdiagonal of the reservoir matrix have nonzero values, all equal to each other.
- DLR with backward connections (DLRB): based on the DLR, but each reservoir unit is also connected to the preceding neuron. This is obtained by setting the elements of both the upper and lower subdiagonals to nonzero values, with two different weights.
- Simple Cycle Reservoir (SCR): composed of units organized in a cycle. The nonzero elements of the reservoir are the lower subdiagonal and the upper right corner, all set to the same weight.
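The three matrix patterns are simple enough to sketch in a few lines. Here is an illustrative NumPy version (not the ReservoirComputing.jl code; function names and weight values are just for illustration):

```python
import numpy as np

def dlr(res_size, weight):
    """Delay Line Reservoir: one constant weight on the lower subdiagonal."""
    W = np.zeros((res_size, res_size))
    W[np.arange(1, res_size), np.arange(res_size - 1)] = weight
    return W

def dlrb(res_size, weight, fb_weight):
    """DLR with backward connections: lower and upper subdiagonals."""
    W = dlr(res_size, weight)
    W[np.arange(res_size - 1), np.arange(1, res_size)] = fb_weight
    return W

def scr(res_size, weight):
    """Simple Cycle Reservoir: lower subdiagonal plus upper-right corner."""
    W = dlr(res_size, weight)
    W[0, -1] = weight
    return W
```

Note that the DLR and DLRB matrices are nilpotent (spectral radius zero), while the SCR cycle has spectral radius equal to the absolute weight, so the ESP constraint is easy to reason about.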
In addition to these reservoirs, a construction for the input layer is also given: all input connections have the same absolute weight, and the sign of each value is determined by a random draw from a Bernoulli distribution of mean 1/2. The paper states that any other way of imposing signs on the input weights deteriorates the results, so a little randomness is maintained even in this construction, though it is of course still far less than in the original implementation.
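The input-layer construction can be sketched the same way (again an illustrative NumPy snippet, with a hypothetical function name and seed handling):

```python
import numpy as np

def min_complex_input(res_size, in_size, weight, seed=None):
    """Minimum-complexity input layer: constant absolute weight,
    sign drawn from a Bernoulli distribution of mean 1/2."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(res_size, in_size))
    return weight * signs
```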
Implementation in ReservoirComputing
The implementation of the reservoir and input-layer constructions described in the paper is straightforward: following the instructions, we created three different functions for the reservoir, named `DLR()`, `DLRB()` and `SCR()`, that take as input:
- `res_size`: the size of the reservoir
- `weight`: the value for the weights
- `fb_weight`: the value for the feedback weights, only needed for the `DLRB()` function
The result of each function is a reservoir matrix with the requested construction. In addition we also added a `min_complex_input` function, taking as input:
- `res_size`: the size of the reservoir
- `in_size`: the size of the input array
- `weight`: the value of the weights
and giving as output the minimum complexity input layer.
Example
For this example we are going to use the Henon map, defined as $$x_{n+1} = 1 - ax_n^2 + y_n$$ $$y_{n+1} = bx_n$$
The attractor depends on the two values \( a, b \) and shows chaotic behaviour for the classical values of \( a=1.4 \) and \( b=0.3 \).
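As a quick sanity check of the definition, the map can be iterated directly (a minimal Python sketch, independent of the DynamicalSystems package used below):

```python
def henon(x, y, a=1.4, b=0.3):
    """One step of the Henon map with the classical chaotic parameters."""
    return 1 - a * x**2 + y, b * x

# Iterate from the origin, which lies in the basin of the attractor
x, y = 0.0, 0.0
trajectory = []
for _ in range(1000):
    x, y = henon(x, y)
    trajectory.append((x, y))
```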
To obtain a dataset for the Henon map this time we will use the DynamicalSystems package. Before starting the work we will need to download all the necessary utilities and import them:




Now we can generate the Henon map data, which will be shifted by 0.5 and scaled by 2 in order to be consistent with the paper. At the same time we are going to wash out any initial transient and construct the training (`train`) and testing (`test`) datasets, following the values given in the paper:


One step ahead prediction
Now we can set the parameters for the construction of the ESN; we followed closely the ones given in the paper, apart from the ridge regression value. Note that since some values correspond to our defaults (activation function, alpha and non-linear algorithm) we will omit them for clarity.


We can now build both the standard ESN and three other ESNs based on the novel reservoir implementations. We are going to need all four of them for a comparison of the results:




In order to test the accuracy of the predictions given by the different architectures we are going to use the Normalized Mean Square Error (NMSE), defined as
$$\text{NMSE} = \frac{\langle \|\hat{y}(t)-y(t)\|^2 \rangle}{\langle \|y(t)-\langle y(t) \rangle\|^2 \rangle}$$
where \( \hat{y}(t) \) is the readout output, \( y(t) \) is the target output, \( \langle \cdot \rangle \) indicates the empirical mean and \( \|\cdot\| \) is the Euclidean norm. A simple `NMSE` function is created:
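For reference, the same formula in NumPy looks like this (an illustrative sketch, operating on arrays with one time step per row):

```python
import numpy as np

def nmse(y_hat, y):
    """Normalized mean square error, per the formula above."""
    num = np.mean(np.sum((y_hat - y)**2, axis=1))            # <||ŷ(t) - y(t)||²>
    den = np.mean(np.sum((y - y.mean(axis=0))**2, axis=1))   # <||y(t) - <y(t)>||²>
    return num / den
```

A useful property: predicting the empirical mean at every step gives an NMSE of exactly 1, so anything below 1 beats the trivial predictor.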


Now we can iterate over all the different implementations and test their output in a one-step-ahead prediction task:
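To make the task concrete, here is a compact end-to-end NumPy sketch of one-step-ahead prediction for a single DLR-based ESN with a minimum-complexity input layer and a ridge readout. All sizes and weight values here are hypothetical, not the paper's or the package's:

```python
import numpy as np

rng = np.random.default_rng(42)

# Henon map data, with the initial transient washed out
def henon_series(n, washout=200, a=1.4, b=0.3):
    x, y, out = 0.0, 0.0, []
    for _ in range(n + washout):
        x, y = 1 - a * x**2 + y, b * x
        out.append([x, y])
    return np.array(out[washout:])

data = henon_series(3000)
train, test = data[:2000], data[2000:]

# Minimum-complexity layers: DLR reservoir + Bernoulli-sign input layer
res_size, in_size = 100, 2
W = np.zeros((res_size, res_size))
W[np.arange(1, res_size), np.arange(res_size - 1)] = 0.9        # hypothetical weight
W_in = 0.5 * rng.choice([-1.0, 1.0], size=(res_size, in_size))  # hypothetical scaling

def run_reservoir(u):
    """Collect reservoir states for an input sequence u (one row per step)."""
    states, s = np.zeros((len(u), res_size)), np.zeros(res_size)
    for t in range(len(u)):
        s = np.tanh(W @ s + W_in @ u[t])
        states[t] = s
    return states

# Ridge-regression readout trained for one-step-ahead prediction
X, Y = run_reservoir(train[:-1]), train[1:]
beta = 1e-6
W_out = np.linalg.solve(X.T @ X + beta * np.eye(res_size), X.T @ Y).T

# One-step-ahead prediction on the test set and its NMSE
X_test = run_reservoir(test[:-1])
pred = X_test @ W_out.T
target = test[1:]
err = np.mean(np.sum((pred - target)**2, axis=1)) / \
      np.mean(np.sum((target - target.mean(axis=0))**2, axis=1))
```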




The standard ESN shows the best results, but the NMSEs given by the minimum complexity ESNs are actually not bad. Our results are better than those presented in the paper for all the architectures, so they are not directly comparable, but the best performing among the minimum complexity ESNs seems to be the DLRB-based one, something that also holds in the paper.
Attractor reconstruction
Now we want to venture into something that is not done in the paper: we want to see if these deterministic implementations of reservoirs and input layers are capable of reconstructing the Henon attractor. We will use the ESNs already built and predict the system for `predict_len` steps to see if the behaviour is maintained. We will do so only through an eye test, but it should suffice to give a general idea of the capabilities of these reservoirs.
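The idea behind this generative mode is to run the network in closed loop, feeding each prediction back as the next input. A hypothetical standalone NumPy sketch (in practice `W_out` would come from a trained readout, and the matrices from the construction functions):

```python
import numpy as np

def generate(W, W_in, W_out, s, u, predict_len):
    """Run the ESN in closed loop: its own output becomes the next input."""
    outputs = []
    for _ in range(predict_len):
        s = np.tanh(W @ s + W_in @ u)
        u = W_out @ s              # feed the prediction back as input
        outputs.append(u)
    return np.array(outputs)

# Standalone illustration with small, hypothetical matrices (a DLR reservoir)
rng = np.random.default_rng(0)
n = 20
W = np.zeros((n, n))
W[np.arange(1, n), np.arange(n - 1)] = 0.9
W_in = 0.1 * rng.choice([-1.0, 1.0], size=(n, 2))
W_out = rng.normal(scale=0.1, size=(2, n))
traj = generate(W, W_in, W_out, np.zeros(n), np.array([0.5, 0.5]), 50)
```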
To start we will plot the actual data, in order to have something to compare the results to:


Now let's see if the standard ESN is able to correctly predict this attractor:


Not bad, but we already know the capabilities of the ESN. We are here to test the minimum complexity constructions, so let us start with `DLR`:


The predictions are not as clear cut as we would like, but the behaviour is maintained nevertheless. This is actually impressive considering the simple construction of the reservoir. Trying the two other constructions gives the following:




The results are somewhat similar to each other, and a deeper quantitative analysis would be needed to determine the best performing construction, but that was not the aim of this post. We wanted to see if these basic implementations of reservoirs and input layers were capable not only of maintaining short term prediction capability, but also of mimicking the behaviour of a chaotic attractor in the long term, and both of these statements seem to hold. This seminal paper not only sheds light on the still unexplored possibilities of ESN reservoir construction, but also shows that very little complexity is needed for this model to obtain very good results in a short amount of time.
As always, if you have any questions regarding the model or the package, or if you have found errors in my post, please don't hesitate to contact me!
Documentation
[1] Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144.
[2] Yildiz, Izzet B., Herbert Jaeger, and Stefan J. Kiebel. "Revisiting the echo state property." Neural Networks 35 (2012): 1-9.