Main class of Feedforward Closed Loop Learning.
More...
#include <fcl.h>
Main class of Feedforward Closed Loop Learning.
Create an instance of this class to do the learning. It will create the whole network with an input layer, hidden layers and an output layer. Learning is done iteratively by first setting the input values and the error signals and then calling doStep().
(C) 2017,2018-2022, Bernd Porr <bernd@glasgowneuro.tech> (C) 2017,2018, Paul Miller <paul@glasgowneuro.tech>
GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007
◆ FeedforwardClosedloopLearning()
FeedforwardClosedloopLearning::FeedforwardClosedloopLearning(int num_of_inputs, int *num_of_neurons_per_layer_array, int _num_layers)
Constructor: FCL without any filters.
- Parameters
-
num_of_inputs | Number of inputs in the input layer |
num_of_neurons_per_layer_array | Number of neurons in each layer |
_num_layers | Number of layers (must match the length of the array above) |
◆ ~FeedforwardClosedloopLearning()
FeedforwardClosedloopLearning::~FeedforwardClosedloopLearning()
Destructor. De-allocates all memory.
◆ doStep() [1/2]
void FeedforwardClosedloopLearning::doStep(double *input, double *error)
Performs the simulation step.
- Parameters
-
input | Array with the input values |
error | Array of the error signals |
◆ doStep() [2/2]
void FeedforwardClosedloopLearning::doStep(double *input, int n1, double *error, int n2)
Python wrapper function.
Not public.
◆ getLayer()
Layer* FeedforwardClosedloopLearning::getLayer(int i)
inline
Gets a pointer to a layer.
- Parameters
-
i | Index of the layer. |
- Returns
- A pointer to a layer class.
◆ getLayers()
Layer** FeedforwardClosedloopLearning::getLayers()
inline
Returns all Layers.
- Returns
- An array of pointers to all layers.
◆ getNumInputs()
int FeedforwardClosedloopLearning::getNumInputs()
inline
Gets the number of inputs.
- Returns
- The number of inputs
◆ getNumLayers()
int FeedforwardClosedloopLearning::getNumLayers()
inline
Gets the total number of layers.
- Returns
- The total number of all layers.
◆ getOutput()
double FeedforwardClosedloopLearning::getOutput(int index)
inline
Gets the output from one of the output neurons.
- Parameters
-
index | The index number of the output neuron. |
- Returns
- The output value of the output neuron.
◆ getOutputLayer()
Layer* FeedforwardClosedloopLearning::getOutputLayer()
inline
Gets the output layer.
- Returns
- A pointer to the output layer which is also a Layer class.
◆ initWeights()
void FeedforwardClosedloopLearning::initWeights(double max = 0.001, int initBias = 1, Neuron::WeightInitMethod weightInitMethod = Neuron::MAX_OUTPUT_RANDOM)
Inits the weights in all layers.
- Parameters
-
max | Maximum value of the weights. |
initBias | Whether the bias should also be initialised. |
weightInitMethod | See Neuron::WeightInitMethod for the options. |
◆ loadModel()
bool FeedforwardClosedloopLearning::loadModel(const char *name)
Loads the whole network.
- Parameters
-
name | Filename to load the network from. |
◆ saveModel()
bool FeedforwardClosedloopLearning::saveModel(const char *name)
Saves the whole network.
- Parameters
-
name | Filename to save the network to. |
◆ seedRandom()
void FeedforwardClosedloopLearning::seedRandom(int s)
inline
Seeds the random number generator.
- Parameters
-
s | The seed. |
◆ setActivationFunction()
Sets the activation function of the Neuron.
- Parameters
-
◆ setBias()
void FeedforwardClosedloopLearning::setBias(double _bias)
Sets globally the bias.
- Parameters
-
_bias | Sets globally the bias input to all neurons. |
◆ setDecay()
void FeedforwardClosedloopLearning::setDecay(double decay)
Sets a typical weight decay scaled with the learning rate.
- Parameters
-
decay | The larger the value, the faster the weights decay. |
◆ setLearningRate()
void FeedforwardClosedloopLearning::setLearningRate(double learningRate)
Sets globally the learning rate.
- Parameters
-
learningRate | Sets the learning rate for all layers and neurons. |
◆ setLearningRateDiscountFactor()
void FeedforwardClosedloopLearning::setLearningRateDiscountFactor(double _learningRateDiscountFactor)
inline
Sets how the learning rate increases or decreases from layer to layer.
- Parameters
-
_learningRateDiscountFactor | A factor of >1 means higher learning rate in deeper layers. |
◆ setMomentum()
void FeedforwardClosedloopLearning::setMomentum(double momentum)
Sets the global momentum for all layers.
- Parameters
-
momentum | Defines the inertia of the weight change over time. |
The documentation for this class was generated from the following file: fcl.h