MiniDNN

#include <Convolutional.h>

Public Member Functions
Convolutional (const int in_width, const int in_height, const int in_channels, const int out_channels, const int window_width, const int window_height)
void init (const Scalar &mu, const Scalar &sigma, RNG &rng)
void forward (const Matrix &prev_layer_data)
const Matrix & output () const
void backprop (const Matrix &prev_layer_data, const Matrix &next_layer_data)
const Matrix & backprop_data () const
void update (Optimizer &opt)
std::vector< Scalar > get_parameters () const
void set_parameters (const std::vector< Scalar > &param)
std::vector< Scalar > get_derivatives () const
Public Member Functions inherited from MiniDNN::Layer
Layer (const int in_size, const int out_size)
virtual ~Layer ()
int in_size () const
int out_size () const
Convolutional hidden layer.
Currently only supports the "valid" rule of convolution.
Definition at line 23 of file Convolutional.h.
inline

Constructor
  in_width       Width of the input image in each channel.
  in_height      Height of the input image in each channel.
  in_channels    Number of input channels.
  out_channels   Number of output channels.
  window_width   Width of the filter.
  window_height  Height of the filter.
Definition at line 59 of file Convolutional.h.
inline

Constructor
  in_width       Width of the input image in each channel.
  in_height      Height of the input image in each channel.
  in_channels    Number of input channels.
  out_channels   Number of output channels.
  window_width   Width of the filter.
  window_height  Height of the filter.
Definition at line 64 of file Convolutional_DHT.h.
inline virtual

Initialize layer parameters using \(N(\mu, \sigma^2)\) distribution
  mu     Mean of the normal distribution.
  sigma  Standard deviation of the normal distribution.
  rng    The random number generator of type RNG.
Implements MiniDNN::Layer.
Definition at line 67 of file Convolutional.h.
inline virtual

Compute the output of this layer
The purpose of this function is to let the hidden layer compute information that will be passed to the next layer as the input. The concrete behavior of this function is subject to the implementation, with the only requirement that after calling this function, the Layer::output() member function will return a reference to the output values.
  prev_layer_data  The output of previous layer, which is also the input of this layer. prev_layer_data should have in_size rows as in the constructor, and each column of prev_layer_data is an observation.
Implements MiniDNN::Layer.
Definition at line 84 of file Convolutional.h.
inline virtual

Obtain the output values of this layer
This function is assumed to be called after Layer::forward() in each iteration. The output are the values of output hidden units after applying activation function. The main usage of this function is to provide the prev_layer_data parameter in Layer::forward() of the next layer.
The returned matrix should have out_size rows as in the constructor, and have number of columns equal to that of prev_layer_data in the Layer::forward() function. Each column represents an observation.
Implements MiniDNN::Layer.
Definition at line 110 of file Convolutional.h.
inline virtual

Compute the gradients of parameters and input units using back-propagation
The purpose of this function is to compute the gradient of input units, which can be retrieved by Layer::backprop_data(), and the gradient of layer parameters, which could later be used by the Layer::update() function.
  prev_layer_data  The output of previous layer, which is also the input of this layer. prev_layer_data should have in_size rows as in the constructor, and each column of prev_layer_data is an observation.
  next_layer_data  The gradients of the input units of the next layer, which is also the gradients of the output units of this layer. next_layer_data should have out_size rows as in the constructor, and the same number of columns as prev_layer_data.
Implements MiniDNN::Layer.
Definition at line 118 of file Convolutional.h.
inline virtual

Obtain the gradient of input units of this layer
This function provides the next_layer_data parameter in Layer::backprop() of the previous layer, since the derivative of the input of this layer is also the derivative of the output of previous layer.
Implements MiniDNN::Layer.
Definition at line 166 of file Convolutional.h.
inline virtual

Update parameters after back-propagation
  opt  The optimization algorithm to be used. See the Optimizer class.
Implements MiniDNN::Layer.
Definition at line 171 of file Convolutional.h.
inline virtual

Get serialized values of parameters
Implements MiniDNN::Layer.
Definition at line 182 of file Convolutional.h.
inline virtual

Set the values of layer parameters from serialized data
Reimplemented from MiniDNN::Layer.
Definition at line 192 of file Convolutional.h.
inline virtual

Get serialized values of the gradient of parameters
Implements MiniDNN::Layer.
Definition at line 201 of file Convolutional.h.
inline virtual

Initialize layer parameters using \(N(\mu, \sigma^2)\) distribution
  mu     Mean of the normal distribution.
  sigma  Standard deviation of the normal distribution.
  rng    The random number generator of type RNG.
Implements MiniDNN::Layer.
Definition at line 73 of file Convolutional_DHT.h.
inline virtual

Compute the output of this layer
The purpose of this function is to let the hidden layer compute information that will be passed to the next layer as the input. The concrete behavior of this function is subject to the implementation, with the only requirement that after calling this function, the Layer::output() member function will return a reference to the output values.
  prev_layer_data  The output of previous layer, which is also the input of this layer. prev_layer_data should have in_size rows as in the constructor, and each column of prev_layer_data is an observation.
Implements MiniDNN::Layer.
Definition at line 90 of file Convolutional_DHT.h.
inline virtual

Obtain the output values of this layer
This function is assumed to be called after Layer::forward() in each iteration. The output are the values of output hidden units after applying activation function. The main usage of this function is to provide the prev_layer_data parameter in Layer::forward() of the next layer.
The returned matrix should have out_size rows as in the constructor, and have number of columns equal to that of prev_layer_data in the Layer::forward() function. Each column represents an observation.
Implements MiniDNN::Layer.
Definition at line 127 of file Convolutional_DHT.h.
inline virtual

Compute the gradients of parameters and input units using back-propagation
The purpose of this function is to compute the gradient of input units, which can be retrieved by Layer::backprop_data(), and the gradient of layer parameters, which could later be used by the Layer::update() function.
  prev_layer_data  The output of previous layer, which is also the input of this layer. prev_layer_data should have in_size rows as in the constructor, and each column of prev_layer_data is an observation.
  next_layer_data  The gradients of the input units of the next layer, which is also the gradients of the output units of this layer. next_layer_data should have out_size rows as in the constructor, and the same number of columns as prev_layer_data.
Implements MiniDNN::Layer.
Definition at line 135 of file Convolutional_DHT.h.
inline virtual

Obtain the gradient of input units of this layer
This function provides the next_layer_data parameter in Layer::backprop() of the previous layer, since the derivative of the input of this layer is also the derivative of the output of previous layer.
Implements MiniDNN::Layer.
Definition at line 193 of file Convolutional_DHT.h.
inline virtual

Update parameters after back-propagation
  opt  The optimization algorithm to be used. See the Optimizer class.
Implements MiniDNN::Layer.
Definition at line 198 of file Convolutional_DHT.h.
inline virtual

Get serialized values of parameters
Implements MiniDNN::Layer.
Definition at line 209 of file Convolutional_DHT.h.
inline virtual

Set the values of layer parameters from serialized data
Reimplemented from MiniDNN::Layer.
Definition at line 219 of file Convolutional_DHT.h.
inline virtual

Get serialized values of the gradient of parameters
Implements MiniDNN::Layer.
Definition at line 228 of file Convolutional_DHT.h.