Sample    Inputs                     Expert estimation
X_1       x_11, x_12, x_13           y_1
X_2       x_21, x_22, x_23           y_2
X_3       x_31, x_32, x_33           y_3
…         …                          …
X_n       x_n1, x_n2, x_n3           y_n
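The training set above can be represented in code as a list of (inputs, expert estimation) pairs; the numeric values below are purely illustrative, not taken from the paper:

```python
# Training set in the form of the table above: each sample X_i is a
# triple of inputs (x_i1, x_i2, x_i3) paired with the expert
# estimation y_i. All numeric values here are illustrative only.
training_data = [
    ([0.2, 0.7, 0.1], 0.4),   # X_1 -> y_1
    ([0.9, 0.3, 0.5], 0.8),   # X_2 -> y_2
    ([0.4, 0.6, 0.2], 0.5),   # X_3 -> y_3
]
```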
The architecture of the neural network. The problem can be reduced to one solved by a multilayer neural network with forward signal propagation and backward error propagation [7,8]. Building a neural network is carried out in two stages:
* Selection of the type (architecture) of the neural
network;
* Selection of weights (training) of the neural network.
In the first stage, the following is determined: which neurons are used (number of inputs, transfer functions); how the neurons are connected to each other; and what the inputs and outputs of the neural network are.
Since the number of independent variables is three and there is one dependent variable, we create a perceptron with three inputs and one output to train on these data. The network has one hidden layer with two neurons (Figure 1). The selected network has no feedback connections, and the neurons of adjacent layers are fully connected; that is, the output of each neuron of the q-th layer is connected to the input of each neuron of the (q+1)-th layer [4].
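The 3-2-1 perceptron described above can be sketched as a forward pass. The sigmoid activation and the small random weight scale are assumptions for illustration; the text does not fix either choice:

```python
import numpy as np

def sigmoid(net):
    # An assumed activation; the text only requires some function phi(net).
    return 1.0 / (1.0 + np.exp(-net))

def forward(x, W_hidden, w_out):
    """Forward pass of the 3-2-1 network of Figure 1.

    x        : vector of 3 input signals
    W_hidden : 2x3 matrix holding weights w1..w6 (inputs -> hidden)
    w_out    : vector of weights w7, w8 (hidden -> output)
    """
    h = sigmoid(W_hidden @ x)   # outputs of the two hidden neurons
    return sigmoid(w_out @ h)   # single network output

rng = np.random.default_rng(0)
x = np.array([0.5, 0.1, 0.9])
W_hidden = rng.normal(scale=0.1, size=(2, 3))
w_out = rng.normal(scale=0.1, size=2)
y = forward(x, W_hidden, w_out)   # a value in (0, 1)
```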
Fig. 1. Neural network with three inputs.
Figure 1 shows the architecture of the neural network. In the figure:
• X = {x_1, x_2, x_3} – vector of network input signals;
• w_1–w_6 – synaptic weights of the connections between the input and hidden-layer neurons;
• w_7–w_8 – weights of the connections between the hidden-layer and output-layer neurons;
• ȳ – the network output. The weighted sum (net) of a neuron is the sum of the input signals (or hidden-neuron values) multiplied by the corresponding weights, calculated by the formula:

net = ∑_{i=0}^{n} x_i w_i                (3)
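Formula (3) can be sketched in a few lines; the concrete numbers are illustrative:

```python
def weighted_sum(x, w):
    # net = sum over i of x_i * w_i, as in formula (3);
    # a fixed input x[0] = 1 can serve as a bias term.
    return sum(xi * wi for xi, wi in zip(x, w))

net = weighted_sum([1.0, 0.5, 0.2], [0.3, -0.1, 0.4])  # 0.3 - 0.05 + 0.08
```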
Activation functions are used in the neurons of the hidden layer. Sending the weighted sum net(h_i) of a hidden neuron directly to the output would be incorrect: the neuron must process it and compute an output signal. For this purpose an activation function is used, which converts the weighted sum into a number that becomes the output of the neuron. The activation function is denoted ϕ(net) [9].
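As an illustration of ϕ(net), here is a minimal sketch using the logistic sigmoid; this particular activation is an assumption, since the text does not specify which function is used:

```python
import math

def phi(net):
    # Logistic sigmoid: maps the weighted sum to a number in (0, 1),
    # which becomes the output signal of the neuron.
    return 1.0 / (1.0 + math.exp(-net))

phi(0.0)   # -> 0.5
```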
Neural network training algorithm. Training a neural network is the search for a set of weight coefficients for which the input signal, after passing through the network, produces the required result. The generalized training process of a neural network is shown schematically in Figure 2.
Fig. 2. Generalized neural network training algorithm.
The most common method of training a multilayer perceptron is the error backpropagation method, which belongs to the supervised learning methods [8].
The training algorithm is as follows:
Step 0. The training sample is loaded into the vector {x}. The weights of the connections between all neurons are initialized with small random values.
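Step 0 can be sketched as follows; the weight range and seed are illustrative choices, not prescribed by the text:

```python
import random

def init_weights(n_weights, scale=0.1, seed=42):
    # Step 0: initialize every connection weight with a small
    # random value in [-scale, scale].
    rng = random.Random(seed)
    return [rng.uniform(-scale, scale) for _ in range(n_weights)]

weights = init_weights(8)   # w1..w8 of the network in Figure 1
```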