Welcome to ESN’s documentation!
class echo_state_network.echo_state_network.BaseEchoStateNetwork(k_in: int = 2, input_scaling: float = 1.0, spectral_radius: float = 0.0, bias: float = 0.0, ext_bias: bool = False, leakage: float = 1.0, reservoir_size: int = 500, k_res: int = 10, reservoir_activation: str = 'tanh', bi_directional: bool = False, teacher_scaling: float = 1.0, teacher_shift: float = 0.0, solver: str = 'ridge', beta: float = 1e-06, random_state: int = None)
Base class for ESN classification and regression.
Warning: This class should not be used directly. Use derived classes instead.
New in version 0.00.
drop_out(drop_out_rate=0.0)
Experimental dropout strategy for the ESN. After passing data through the network and collecting the reservoir activations, the given percentile of nodes with very low activity is removed. Note that this currently works best without bidirectional mode; see the sketch after the parameter list.
Warning: This feature is experimental and still needs to be validated. Publications will follow.
- Parameters
drop_out_rate (float, default 0.0) – Determines the percentile of nodes to be removed.
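A minimal sketch of how drop_out could be combined with incremental training. It assumes that a derived estimator such as ESNRegressor inherits drop_out from this base class, and that pruning happens after the reservoir activations have been collected but before finalize solves the output regression; both the ordering and the toy data are assumptions, not statements of this reference.

>>> import numpy as np
>>> from echo_state_network.echo_state_network import ESNRegressor
>>> rng = np.random.RandomState(42)
>>> # Hypothetical toy task: predict the next value of a noisy sine wave.
>>> X = np.sin(np.linspace(0, 20 * np.pi, 1000)).reshape(-1, 1) + 0.01 * rng.randn(1000, 1)
>>> y = np.roll(X.ravel(), -1)
>>> esn = ESNRegressor(reservoir_size=500, bi_directional=False, random_state=42)
>>> _ = esn.partial_fit(X, y, update_output_weights=False)  # collect reservoir activations
>>> esn.drop_out(drop_out_rate=0.1)  # prune the least active reservoir nodes
>>> esn.finalize()  # solve the output regression on the reduced reservoir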
finalize(n_jobs=0)
Finalize the training by solving the linear regression problem and deleting the xTx and xTy attributes. See the sketch after the parameter list.
- Parameters
n_jobs (int, default: 0) – If n_jobs is larger than 1, then the linear regression for each output dimension is computed separately using joblib.
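A sketch of the intended incremental workflow, under the assumption that partial_fit with update_output_weights=False only accumulates the required statistics and finalize solves the regression once at the end; the batch sizes and data are placeholders.

>>> import numpy as np
>>> from echo_state_network.echo_state_network import ESNRegressor
>>> rng = np.random.RandomState(0)
>>> esn = ESNRegressor(reservoir_size=500, random_state=0)
>>> for _ in range(10):  # stream the training data in ten batches
...     X_batch, y_batch = rng.randn(200, 3), rng.randn(200, 2)
...     _ = esn.partial_fit(X_batch, y_batch, update_output_weights=False)
>>> esn.finalize(n_jobs=2)  # one joblib job per output dimension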
fit(X, y)
Fit the model to the data matrix X and target(s) y.
- Parameters
X (ndarray of shape (n_samples, n_features)) – The input data
y (ndarray of shape (n_samples, ) or (n_samples, n_outputs)) – The target values (class labels in classification, real numbers in regression).
- Returns
self
- Return type
returns a trained ESN model.
partial_fit(X, y, update_output_weights=True)
Fit the model to the data matrix X and target(s) y without finalizing it. This can be used to add more training data later.
- Parameters
X (ndarray of shape (n_samples, n_features)) – The input data
y (ndarray of shape (n_samples, ) or (n_samples, n_outputs)) – The target values (class labels in classification, real numbers in regression).
update_output_weights (bool, default True) – If False, no output weights are computed after passing the current data through the network. This is computationally more efficient when there are many outputs and a large dataset is fitted incrementally.
- Returns
self
- Return type
returns a trained ESN model.
predict(X, keep_reservoir_state=False)
Predict using the trained ESN model. See the sketch after the parameter list.
- Parameters
X (array-like, shape (n_samples, n_features)) – The input data.
keep_reservoir_state (bool, default False) – If True, the reservoir state is kept and can be accessed from outside. This is useful for visualization.
- Returns
y_pred – The predicted values
- Return type
array-like, shape (n_samples,) or (n_samples, n_outputs)
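A brief sketch of prediction with keep_reservoir_state=True, using ESNRegressor since the base class should not be instantiated directly. The attribute under which the kept reservoir state is exposed is not named in this reference, so only the call itself is shown.

>>> import numpy as np
>>> from echo_state_network.echo_state_network import ESNRegressor
>>> rng = np.random.RandomState(1)
>>> esn = ESNRegressor(reservoir_size=200, random_state=1)
>>> _ = esn.fit(rng.randn(500, 2), rng.randn(500))
>>> y_pred = esn.predict(rng.randn(100, 2), keep_reservoir_state=True)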
class echo_state_network.echo_state_network.ESNClassifier(k_in: int = 2, input_scaling: float = 1.0, spectral_radius: float = 0.0, bias: float = 0.0, ext_bias: bool = False, leakage: float = 1.0, reservoir_size: int = 500, k_res: int = 10, reservoir_activation: str = 'tanh', bi_directional: bool = False, teacher_scaling: float = 1.0, teacher_shift: float = 0.0, solver: str = 'ridge', beta: float = 1e-06, random_state: int = None)
Echo State Network classifier.
This model optimizes the mean squared error loss function using linear regression. A usage sketch follows the parameter list.
New in version 0.00.
- Parameters
k_in (int, default 2) – This element represents the sparsity of the connections between the input and recurrent nodes. It determines the number of features that every node inside the reservoir receives.
input_scaling (float, default 1.0) – This element represents the input scaling factor from the input to the reservoir. It is a global scaling factor for the input weight matrix.
spectral_radius (float, default 0.0) – This element represents the spectral radius of the reservoir weights. It is a global scaling factor for the reservoir weight matrix.
bias (float, default 0.0) – This element represents the bias scaling of the bias weights. It is a global scaling factor for the bias weight matrix.
leakage (float, default 1.0) – This element represents the leakage of the reservoir. Depending on the value, it acts as a short- or long-term memory coefficient.
reservoir_size (int, default 500) – This element represents the number of neurons in the reservoir.
k_res (int, default 10) – This element represents the sparsity of the connections inside the reservoir. It determines the number of nodes that every node inside the reservoir is connected with.
reservoir_activation ({'tanh', 'identity', 'logistic', 'relu'}) – This element represents the activation function in the reservoir.
'identity': no-op activation, useful to implement a linear bottleneck, returns f(x) = x
'logistic': the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x))
'tanh': the hyperbolic tangent function, returns f(x) = tanh(x)
'relu': the rectified linear unit function, returns f(x) = max(0, x)
teacher_scaling (float, default 1.0) – This element represents the teacher scaling factor. In some cases, for example for imbalanced datasets, it might make sense to increase this hyperparameter.
teacher_shift (float, default 0.0) – This element represents the teacher shift factor. In some cases, for example for imbalanced datasets, it might make sense to increase this hyperparameter.
bi_directional (bool, default False) – If True, the input sequences are passed through the network two times, forward and backward.
solver ({'ridge', 'pinv'}) – The solver for weight optimization.
'pinv' uses the pseudoinverse solution of the linear regression.
'ridge' uses an L2 penalty while computing the linear regression.
beta (float, optional, default 1e-06) – L2 penalty (regularization term) parameter.
random_state (int, RandomState instance or None, optional, default None) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
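A minimal usage sketch for the classifier, assuming the package is importable as echo_state_network and using random toy data; the chosen hyperparameter values are illustrative assumptions, not recommendations from this reference.

>>> import numpy as np
>>> from echo_state_network.echo_state_network import ESNClassifier
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(1000, 8)            # 1000 time steps, 8 input features
>>> y = rng.randint(0, 3, size=1000)  # three class labels
>>> clf = ESNClassifier(k_in=2, spectral_radius=0.9, leakage=0.5,
...                     reservoir_size=500, solver='ridge', beta=1e-6,
...                     random_state=0)
>>> clf = clf.fit(X, y)
>>> y_pred = clf.predict(X[:5])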
Notes
TODO
References
TODO
fit(X, y)
Fit the model to the data matrix X and target(s) y.
- Parameters
X (ndarray of shape (n_samples, n_features)) – The input data
y (ndarray of shape (n_samples, ) or (n_samples, n_outputs)) – The target values (class labels in classification, real numbers in regression).
- Returns
self
- Return type
returns a trained ESN classifier.
partial_fit(X, y, classes=None, update_output_weights=True)
Fit the model to the data matrix X and target(s) y without finalizing it. This can be used to add more training data later; a usage sketch follows the parameter list.
- Parameters
X (ndarray of shape (n_samples, n_features)) – The input data
y (ndarray of shape (n_samples, ) or (n_samples, n_outputs)) – The target values (class labels in classification, real numbers in regression).
classes (array of shape (n_classes,), default None) – The classes that can appear in y. Following the usual scikit-learn convention for partial_fit, all classes are typically declared in the first call if later batches may not contain every class.
update_output_weights (bool, default True) – If False, no output weights are computed after passing the current data through the network. This is computationally more efficient when there are many outputs and a large dataset is fitted incrementally.
- Returns
self
- Return type
returns a trained ESN classifier.
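A sketch of incremental classification over several batches. It assumes the classes argument follows the usual scikit-learn partial_fit convention of declaring all labels up front, which is not spelled out in this reference; the data are placeholders.

>>> import numpy as np
>>> from echo_state_network.echo_state_network import ESNClassifier
>>> rng = np.random.RandomState(0)
>>> clf = ESNClassifier(reservoir_size=300, random_state=0)
>>> all_classes = np.array([0, 1, 2])
>>> for _ in range(3):  # three batches of hypothetical data
...     X_batch = rng.randn(200, 4)
...     y_batch = rng.randint(0, 3, size=200)
...     _ = clf.partial_fit(X_batch, y_batch, classes=all_classes,
...                         update_output_weights=False)
>>> clf.finalize()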
predict(X, keep_reservoir_state=False)
Predict the classes using the trained ESN classifier.
- Parameters
X (array-like, shape (n_samples, n_features)) – The input data.
keep_reservoir_state (bool, default False) – If True, the reservoir state is kept and can be accessed from outside. This is useful for visualization.
- Returns
y_pred – The predicted classes
- Return type
array-like, shape (n_samples,) or (n_samples, n_outputs)
predict_proba(X, keep_reservoir_state=False)
Predict the probability estimates using the trained ESN classifier; a usage sketch follows the parameter list.
- Parameters
X (array-like, shape (n_samples, n_features)) – The input data.
keep_reservoir_state (bool, default False) – If True, the reservoir state is kept and can be accessed from outside. This is useful for visualization.
- Returns
y_pred – The predicted probability estimates
- Return type
array-like, shape (n_samples,) or (n_samples, n_outputs)
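A short sketch of obtaining probability estimates from a trained classifier; the data are placeholders and the exact shape of the returned estimates depends on the implementation.

>>> import numpy as np
>>> from echo_state_network.echo_state_network import ESNClassifier
>>> rng = np.random.RandomState(0)
>>> clf = ESNClassifier(reservoir_size=300, random_state=0)
>>> _ = clf.fit(rng.randn(500, 4), rng.randint(0, 2, size=500))
>>> proba = clf.predict_proba(rng.randn(20, 4))  # probability estimates
>>> labels = clf.predict(rng.randn(20, 4))       # hard class predictions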
class echo_state_network.echo_state_network.ESNRegressor(k_in: int = 2, input_scaling: float = 1.0, spectral_radius: float = 0.0, bias: float = 0.0, ext_bias: bool = False, leakage: float = 1.0, reservoir_size: int = 500, k_res: int = 10, reservoir_activation: str = 'tanh', bi_directional: bool = False, teacher_scaling: float = 1.0, teacher_shift: float = 0.0, solver: str = 'ridge', beta: float = 1e-06, random_state: int = None)
Echo State Network regressor.
This model optimizes the mean squared error loss function using linear regression. A usage sketch follows the parameter list.
New in version 0.00.
- Parameters
k_in (int, default 2) – This element represents the sparsity of the connections between the input and recurrent nodes. It determines the number of features that every node inside the reservoir receives.
input_scaling (float, default 1.0) – This element represents the input scaling factor from the input to the reservoir. It is a global scaling factor for the input weight matrix.
spectral_radius (float, default 0.0) – This element represents the spectral radius of the reservoir weights. It is a global scaling factor for the reservoir weight matrix.
bias (float, default 0.0) – This element represents the bias scaling of the bias weights. It is a global scaling factor for the bias weight matrix.
leakage (float, default 1.0) – This element represents the leakage of the reservoir. Depending on the value, it acts as a short- or long-term memory coefficient.
reservoir_size (int, default 500) – This element represents the number of neurons in the reservoir.
k_res (int, default 10) – This element represents the sparsity of the connections inside the reservoir. It determines the number of nodes that every node inside the reservoir is connected with.
reservoir_activation ({'tanh', 'identity', 'logistic', 'relu'}) – This element represents the activation function in the reservoir.
'identity': no-op activation, useful to implement a linear bottleneck, returns f(x) = x
'logistic': the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x))
'tanh': the hyperbolic tangent function, returns f(x) = tanh(x)
'relu': the rectified linear unit function, returns f(x) = max(0, x)
teacher_scaling (float, default 1.0) – This element represents the teacher scaling factor. In some cases, for example for imbalanced datasets, it might make sense to increase this hyperparameter.
teacher_shift (float, default 0.0) – This element represents the teacher shift factor. In some cases, for example for imbalanced datasets, it might make sense to increase this hyperparameter.
bi_directional (bool, default False) – If True, the input sequences are passed through the network two times, forward and backward.
solver ({'ridge', 'pinv'}) – The solver for weight optimization.
'pinv' uses the pseudoinverse solution of the linear regression.
'ridge' uses an L2 penalty while computing the linear regression.
beta (float, optional, default 1e-06) – L2 penalty (regularization term) parameter.
random_state (int, RandomState instance or None, optional, default None) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
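A minimal usage sketch for the regressor on a one-step-ahead sine-wave task; both the task and the hyperparameter values are illustrative assumptions, not recommendations from this reference.

>>> import numpy as np
>>> from echo_state_network.echo_state_network import ESNRegressor
>>> t = np.linspace(0, 8 * np.pi, 2000)
>>> X = np.sin(t).reshape(-1, 1)   # input signal
>>> y = np.roll(X.ravel(), -1)     # one-step-ahead target
>>> reg = ESNRegressor(k_in=1, spectral_radius=0.8, leakage=0.3,
...                    reservoir_size=500, solver='ridge', beta=1e-6,
...                    random_state=0)
>>> reg = reg.fit(X[:1500], y[:1500])
>>> y_pred = reg.predict(X[1500:])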
Notes
TODO
References
TODO
fit(X, y)
Fit the model to the data matrix X and target(s) y.
- Parameters
X (ndarray of shape (n_samples, n_features)) – The input data
y (ndarray of shape (n_samples, ) or (n_samples, n_outputs)) – The target values (class labels in classification, real numbers in regression).
- Returns
self
- Return type
returns a trained ESN regressor.
partial_fit(X, y, update_output_weights=True)
Fit the model to the data matrix X and target(s) y without finalizing it. This can be used to add more training data later.
- Parameters
X (ndarray of shape (n_samples, n_features)) – The input data
y (ndarray of shape (n_samples, ) or (n_samples, n_outputs)) – The target values (class labels in classification, real numbers in regression).
update_output_weights (bool, default True) – If False, no output weights are computed after passing the current data through the network. This is computationally more efficient when there are many outputs and a large dataset is fitted incrementally.
- Returns
self
- Return type
returns a trained ESN regressor.
predict(X, keep_reservoir_state=False)
Predict using the trained ESN regressor.
- Parameters
X (array-like, shape (n_samples, n_features)) – The input data.
keep_reservoir_state (bool, default False) – If True, the reservoir state is kept and can be accessed from outside. This is useful for visualization.
- Returns
y_pred – The predicted values
- Return type
array-like, shape (n_samples,) or (n_samples, n_outputs)