Neural Networks
Practical Work Book

Contents

LAB TASK NO 1, 2 and 3
Objective: Introduction to Neural Network Toolbox of MATLAB 5.3

LAB TASK NO 4
Objective: To generate and plot different activation functions (transfer functions) used in simulation of neural networks

LAB TASK NO 5
Objective: To
• Generate and draw most commonly used activation functions
• Simulate a single neuron and find its output
• Create and simulate an ADALINE

LAB TASK NO 6
Objective: To
• Create and train a Perceptron
• Create and train a Multilayer Perceptron Network

LAB TASK NO 7
Objective: To
• Prepare data for training and testing of an MLP network
• Develop an MLP network to approximate a non-linear function
• Develop an MLP network to solve some real world problem

LAB TASK NO 8
Objective: To
• Create a radial basis function network with Gaussian functions in its hidden layer
• Train a radial basis function network with given input-output data sets

LAB TASK NO 9
Objective: To
• Develop an MLP and RBF network for a practical application
• Design an intelligent controller using MLP and RBF networks

LAB TASK NO 10
Objective: To
• Generate a linear associator
• Compute the weights and output of the associator
• Compute the weights and output by using the pseudo-inverse rule

LAB TASK NO 11
Objective: To
• Create several map topologies
• Calculate distances of neurons from a particular neuron
• Create and train a self-organizing map

LAB TASK NO 12
Objective: To
• Create and train a Hopfield Network
• Create and train an Elman Network

LAB TASK NO 13
Objective: To
• Draw block diagrams of Neural networks using SIMULINK
• Effectively use SIMULINK to train neural networks
LAB TASK NO 1, 2 and 3
Objective: Introduction to Neural Network Toolbox of MATLAB 5.3
Neural Network Toolbox 4.0.6
Description
Introduction and Key Features
Working with Neural Networks
Neural Network Toolbox GUI
Network Architectures and Training and Learning Functions
Simulink Support and Control System Applications
Pre- and Post-Processing Functions and Improving Generalization
1. Introduction
The Neural Network Toolbox extends MATLAB® with tools for designing, implementing, visualizing,
and simulating neural networks. Neural networks are invaluable for applications where formal
analysis would be difficult or impossible, such as pattern recognition and nonlinear system
identification and control. The Neural Network Toolbox provides comprehensive support for many
proven network paradigms, as well as a graphical user interface (GUI) that enables you to design
and manage your networks. The modular, open, and extensible design of the toolbox simplifies the
creation of customized functions and networks.
Key Features
GUI for creating, training, and simulating neural networks
Support for the most commonly used supervised and unsupervised network architectures
Comprehensive set of training and learning functions
Simulink® blocks for building neural networks and advanced blocks for control systems
applications
Support for automatically generating Simulink blocks from neural network objects
Modular network representation, allowing an unlimited number of input sets, layers, and network interconnections
Pre- and post-processing functions for improving network training and assessing network
performance
Routines for improving generalization
Visualization functions for viewing network performance
2. Working with the Neural Network Toolbox
Like its counterpart in the biological nervous system, a neural network can learn, and therefore can
be trained to find solutions, recognize patterns, classify data, and forecast future events. The
behavior of a neural network is defined by the way its individual computing elements are connected
and by the strength of those connections, or weights. The weights are automatically adjusted by
training the network according to a specified learning rule until it performs the desired task
correctly.
The Neural Network Toolbox GUI makes it easy to work with neural networks. It enables you to
import large and complex data sets and quickly create, initialize, train, simulate, and manage your
networks. Simple graphical representations enable you to visualize and understand network
architecture.
A single-layer feed-forward network. The unique notation simplifies understanding of network
architectures.
Because neural networks require intensive matrix computations, MATLAB provides a natural
framework for rapidly implementing neural networks and for studying their behavior and
application.
Documentation and Examples
The Neural Network Toolbox User's Guide was written by Professor Emeritus Howard Demuth and
Mark Beale, developers of the Neural Network Toolbox and authors, with Professor Martin Hagan, of
Neural Network Design. The User's Guide is of textbook quality and provides a thorough treatment
of neural network architectures, paradigms, and neural network applications. It also includes a
tutorial and application examples. Additional demonstrations and application examples are included
with the product.
3. Neural Network Toolbox GUI
This tool lets you import potentially large and complex data sets. The GUI also enables you to
create, initialize, train, simulate, and manage your networks. Simple graphical representations allow
you to visualize and understand network architecture.
The Neural Network Toolbox GUI. Dialogs and panes let you visualize your network (top), evaluate
training results (bottom), and manage your networks (center).
4. Network Architectures
The Neural Network Toolbox supports both supervised and unsupervised networks.
Supervised Networks
Supervised neural networks are trained to produce desired outputs in response to sample inputs,
making them particularly well suited to modeling and controlling dynamic systems, classifying noisy
data, and predicting future events. The Neural Network Toolbox supports four supervised networks:
Feed-forward networks have one-way connections from input to output layers. They are most
commonly used for prediction, pattern recognition, and nonlinear function fitting. Supported
feed-forward networks include feed-forward backpropagation, cascade-forward backpropagation,
feed-forward input-delay backpropagation, linear, and perceptron networks.
Radial basis networks provide an alternative fast method for designing nonlinear feed-forward
networks. Supported variations include generalized regression and probabilistic neural networks.
Recurrent networks use feedback to recognize spatial and temporal patterns. Supported
recurrent networks include Elman and Hopfield.
Learning vector quantization (LVQ) is a powerful method for classifying patterns that are not
linearly separable. LVQ lets you specify class boundaries and the granularity of classification.
Unsupervised Networks
Unsupervised neural networks are trained by letting the network continually adjust itself to new
inputs. They find relationships within data and can automatically define classification schemes. The
Neural Network Toolbox supports two types of self-organizing, unsupervised networks:
Competitive layers recognize and group similar input vectors. By using these groups, the
network automatically sorts the inputs into categories.
Self-organizing maps learn to classify input vectors according to similarity. Unlike
competitive layers, they also preserve the topology of the input vectors, assigning nearby inputs
to nearby categories.
5. Training and Learning Functions
Training and learning functions are mathematical procedures used to automatically adjust the
network’s weights and biases. The training function dictates a global algorithm that affects all the
weights and biases of a given network. The learning function can be applied to individual weights
and biases within a network.
The Neural Network Toolbox supports a variety of training algorithms, including several gradient
descent methods, conjugate gradient methods, the Levenberg-Marquardt algorithm (LM), and the
resilient backpropagation algorithm (Rprop).
A suite of learning functions, including gradient descent, Hebbian learning, LVQ, Widrow-Hoff, and
Kohonen, is also provided.
Supported Training Functions
trainb – Batch training with weight and bias learning rules
trainbfg – BFGS quasi-Newton backpropagation
trainbr – Bayesian regularization
trainc – Cyclical order incremental update
traincgb – Powell-Beale conjugate gradient backpropagation
traincgf – Fletcher-Powell conjugate gradient backpropagation
traincgp – Polak-Ribiere conjugate gradient backpropagation
traingd – Gradient descent backpropagation
traingda – Gradient descent with adaptive learning rate backpropagation
traingdm – Gradient descent with momentum backpropagation
traingdx – Gradient descent with momentum & adaptive learning rate backpropagation
trainlm – Levenberg-Marquardt backpropagation
trainoss – One step secant backpropagation
trainr – Random order incremental update
trainrp – Resilient backpropagation (Rprop)
trains – Sequential order incremental update
trainscg – Scaled conjugate gradient backpropagation
MATLAB graphics enhance insight into neural network behavior. This plot compares training rates for backpropagation (108 steps) and the Levenberg-Marquardt method (5 steps).
Supported Learning Functions
learncon – Conscience bias learning function
learngd – Gradient descent weight/bias learning function
learngdm – Gradient descent with momentum weight/bias learning function
learnh – Hebb weight learning function
learnhd – Hebb with decay weight learning rule
learnis – Instar weight learning function
learnk – Kohonen weight learning function
learnlv1 – LVQ1 weight learning function
learnlv2 – LVQ2 weight learning function
learnos – Outstar weight learning function
learnp – Perceptron weight and bias learning function
learnpn – Normalized perceptron weight and bias learning function
learnsom – Self-organizing map weight learning function
learnwh – Widrow-Hoff weight and bias learning rule
6. Simulink Support
The Neural Network Toolbox provides a set of blocks for building neural networks in Simulink. These
blocks are divided into three libraries:
Transfer function blocks take a net input vector and generate a corresponding output vector.
Net input function blocks take any number of weighted input vectors, weighted layer output vectors, and bias vectors, and return a net-input vector.
Weight function blocks take a neuron’s weight vector and apply it to an input vector (or a layer
output vector) to get a weighted input value for a neuron.
Alternatively, you can create and train your networks in the MATLAB environment and
automatically generate network simulation blocks for use with Simulink. This approach also enables
you to view your networks graphically.
A three-layer neural network converted into Simulink blocks. Neural network simulation blocks for
use in Simulink can be automatically generated with the gensim command.
Control System Applications
The Neural Network Toolbox lets you apply neural networks to the identification and control of
nonlinear systems. The toolbox includes descriptions, demonstrations, and Simulink blocks for three
popular control applications: model predictive control, feedback linearization, and model reference
adaptive control.
You can incorporate neural network predictive control blocks included in the toolbox into your
Simulink models. By changing the parameters of these blocks, you can tailor the network’s
performance to your application.
Model Predictive Control Example
This example shows the model predictive control of a continuous stirred tank reactor (CSTR). This
controller creates a neural network model of a nonlinear plant to predict future plant response to
potential control signals. An optimization algorithm then computes the control signals that optimize
future plant performance.
You can incorporate neural network predictive control blocks included in the toolbox into your
existing Simulink models. By changing the parameters of these blocks you can tailor the network's
performance for your application.
A Simulink® model that includes the neural network predictive control block and CSTR plant model
(top left). Dialogs and panes let you visualize validation data (lower left) and manage the neural
network control block (lower right) and your plant identification (upper right).
7. Pre- and Post-Processing Functions
Pre-processing the network inputs and targets improves the efficiency of neural network training.
Post-processing enables detailed analysis of network performance.
The Neural Network Toolbox provides pre- and post-processing functions that enable you to:
Reduce the dimensions of the input vectors using principal component analysis
Perform regression analysis between the network response and the corresponding target
Scale inputs and targets so that they fall in the range [-1,1]
Normalize the mean and standard deviation of the training set
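As a brief illustration of the scaling functions mentioned above, the short sketch below scales a small data set; the function names premnmx and prestd are the ones provided in Neural Network Toolbox 4.x, and the data values are arbitrary.
% Sketch: scaling inputs and targets before training (arbitrary example data)
p = rand(3,50);  t = rand(1,50);                   % raw inputs and targets
[pn,minp,maxp,tn,mint,maxt] = premnmx(p,t);        % scale inputs and targets to [-1, 1]
[ps,meanp,stdp,ts,meant,stdt] = prestd(p,t);       % normalize to zero mean and unit standard deviation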
Improving Generalization
Improving the network's ability to generalize helps prevent overfitting, a common problem in neural network design. Overfitting occurs when a network has memorized the training set but has not learned to generalize to new inputs. Overfitting produces a relatively small error on the training set but a much larger error when new data is presented to the network.
The Neural Network Toolbox provides two solutions to improve generalization:
Regularization modifies the network's performance function, the measure of error that the
training process minimizes. By changing it to include the size of the weights and biases, training
produces a network that not only performs well with the training data, but produces smoother
behavior when presented with new data.
Early stopping is a technique that uses two different data sets: the training set, which is used to update the weights and biases, and the validation set, which is used to stop training when the network begins to overfit the data.
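A minimal sketch of early stopping is given below; it assumes that train accepts a validation data structure VV with fields P and T, as in Neural Network Toolbox 4.x, and the training and validation values shown are arbitrary.
p = [-1 -1 2 2; 0 5 0 5];  t = [-1 -1 1 1];        % training set (arbitrary)
VV.P = [-0.5 1; 2 3];      VV.T = [-1 1];          % validation set (arbitrary)
net = newff([-1 2; 0 5],[3 1],{'tansig','purelin'},'traingdx');
[net,tr] = train(net,p,t,[],[],VV);                % training stops when the validation error starts to rise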
LAB TASK NO 4
Objective: To generate and plot different activation functions (transfer
functions) used in simulation of neural networks
Software/Hardware Requirements:
• Pentium PC with Windows 98/2000/XP
• MATLAB 5.3 or latest with Signal Processing and Neural Network
Toolbox
1. Introduction:
Neural Networks are composed of simple elements (neurons) operating in parallel. These
elements are inspired by the biological nervous system. A neuron with p inputs is shown in
the following figure.
Figure 1: Model of a single neuron (inputs x1, ..., xp with weights w1, ..., wp, bias b, summing junction Σ, activation function f and output y)
In the above figure, x1, x2, …., xp are inputs, w1, w2, …., wp are the corresponding
weights, b is the bias and f is the activation function. In this lab we shall generate and plot
different choices of f available in Neural Network Toolbox of MATLAB.
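As a quick preview of how such a neuron can be evaluated in MATLAB, the short sketch below computes y for arbitrary weights, inputs and bias, using the log-sigmoid as the activation function f.
x = [2; 1; 3];              % inputs x1, x2, x3 (arbitrary values)
w = [0.5 1 2];              % corresponding weights w1, w2, w3
b = -1;                     % bias
u = w*x + b;                % weighted sum of the inputs plus the bias
y = logsig(u)               % neuron output with a log-sigmoid activation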
2. Activation Function:
An activation function is a linear or non-linear function which takes the argument u and
produces the output y, as shown in Figure 1. You will learn more about these functions in
your class. Many activation functions have been proposed. You have to generate and plot
some of the functions as listed below.
(a) Hard Limiter: This activation function can be generated as follows:
u = -3:0.01:3;
y = hardlim(u);
plot(u,y)
(b) Symmetrical hardlimiter: This function may be generated as
u = -3:0.01:3;
y = hardlims(u);
plot(u,y)
(c) Linear Activation: Use the following code to generate this function:
u = -3:0.01:3;
y = purelin(u);
plot(u,y)
grid
(d) Log-sigmoid transfer function: This is a commonly used activation function in
feedforward neural networks and can be generated as follows:
u = -3:0.01:3;
y = logsig(u);
plot(u,y)
(e) Tangent hyperbolic function: A widely used activation function in multilayer perceptron networks. This function can be generated as:
u = -3:0.01:3;
y = tanh(u);
plot(u,y)
(f) Gaussian Function: This is a popular choice in Radial Basis Function Networks.
You can generate this function as follows:
u = -3:0.01:3;
y = radbas(u);
plot(u,y)
(g) Positive linear function: You can use the following code to generate this function
u = -5:0.1:5;
y = poslin(u)
plot(u,y)
(h) Saturating linear transfer function: Neural Network toolbox has the function satlin
to generate this activation function.
u=-3:0.01:3;
y=satlin(u);
plot(u,y)
axis([-4 4 -1 2])
(i) Symmetric saturating linear transfer function: You may use the following code to
generate this function
u=-3:0.01:3;
y=satlins(u);
plot(u,y)
(j) Soft Max Transfer Function: This is usually used with committee machines and can
be generated as follows:
u = [-0.5; 0.5];
y=softmax(u);
bar(y)
You will learn more about this when we study committee machines.
(k) Triangular basis function: Generate this function by using the following code:
u = -3:0.01:3;
y = tribas(u)
plot(u,y)
(l) Competitive function: This function is used with unsupervised neural networks. The
following code explains how to generate this function:
u=[0;1; -0.5; 0.5];
y=compet(u);
bar(y)
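Once the individual functions above have been generated, it is often helpful to see several of them on one figure. The following sketch plots four of them side by side (the selection of functions is arbitrary):
u = -3:0.01:3;
subplot(2,2,1); plot(u,hardlim(u));  title('hardlim')
subplot(2,2,2); plot(u,purelin(u));  title('purelin')
subplot(2,2,3); plot(u,logsig(u));   title('logsig')
subplot(2,2,4); plot(u,tansig(u));   title('tansig')   % tansig is the toolbox form of tanh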
3. Exercises:
3.1 Write true or false for the following:
(i) The output of a hardlimiter is always digital.
(ii) linear and positive linear functions are just two names of the same function.
(iii) Symmetrical hardlimiter is used with both analogue as well as digital
networks.
(iv) The input argument of the competitive function should be a vector.
(v) The log-sigmoid transfer function is only used with analogue networks.
3.2 Write the basic difference between the following:
(i) Linear and saturating linear function.
(ii) Log-sigmoid and tangent hyperbolic function.
(iii) Saturating linear and symmetrical saturating linear function.
3.3 Consider Figure 2 with w1 = 0.5, w2 = -0.1 and w3 = 1.0. Find the output of the network when the activation function is:
(a) hard limiter (b) linear (purelin) (c) tangent hyperbolic (d) log-sigmoid
3.4 Write mathematical expressions for the following:
(i) linear (ii) Gaussian (iii) hard limiter (iv) symmetrical hardlimiter
LAB TASK NO 5
Objective: To
• Generate and draw most commonly used activation functions
• Simulate a single neuron and find its output.
• Create and simulate an ADALINE
Software/Hardware Requirements:
• Pentium PC with Windows 98/2000/XP
• MATLAB 5.3 or latest with signal processing and Neural Network
Toolbox
Model of an artificial neuron:
A single artificial neuron is shown in Fig. 1. In this figure, w1, w2, ..., wp are the weights, b is the bias and x1, x2, ..., xp are the inputs.
Fig. 1: Model of an artificial neuron (inputs x1, ..., xp with weights w1, ..., wp, bias b, a summing junction, an activation function and output y)
From Fig. 1, the net input of the neuron is
u = w1x1 + w2x2 + ... + wpxp + b
The output of the neuron is
y = f(u) = f(w1x1 + w2x2 + ... + wpxp + b)
where f is the activation function.
You may either use MATLAB code to simulate this model or you may use the built-in
neural network block set. In this lab, we shall be particularly interested in the neural
network block set.
1. Neural Network block set:
Neural network toolbox provides a set of blocks you can use to build neural networks in
Simulink.
Bring up the neural network toolbox block set with this command:
neural
The result will be the following window, which contains four blocks. Each of these
blocks contains additional blocks. Note, in Neural Network Toolbox version 3, control
systems block is not available.
Transfer function blocks: Double click on the Transfer Functions block in the above
window to bring up a window containing several transfer function blocks. Each of these
blocks takes a net input vector and generates a corresponding output vector whose dimensions are the same as those of the input vector. You
are already familiar with these transfer function models.
Net input blocks: Double click on the net input functions block in the Neural window
to bring up a window containing two net input function blocks
Each of these blocks takes any number of weighted input vectors, and bias vectors,
and returns a net input vector.
Weight Blocks: Double click on the weight functions block in the neural window to
bring up a window containing three weight function blocks.
Each of these blocks takes a neuron’s weight vector and applies it to an input vector
(or a layer output vector) to get a weighted input value for a neuron.
It is important to note that the blocks above expect the neuron’s weight vector to be
defined as a column vector. This is because Simulink signals can be column vectors,
but can not be matrices or row vectors.
Example 1: Consider a linear neuron with three inputs p1 = 2, p2 = 1, p3 = 3. The
corresponding weights are w1 = 0.5, w2 = 1 and w3 = 2. Draw a Simulink diagram to
implement this neuron and hence find its output.
Solution: One possible set-up is shown below:
The output of this neuron should be 8.
An alternative simulation set-up is as under
Again, you can verify that the output is 8, as expected.
A very efficient alternative is given below (you should preferably use this model in
your simulations)
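If you wish, you can also check the result of Example 1 with a few lines of MATLAB code (the code route mentioned earlier in this lab); this sketch should reproduce the expected output of 8.
p = [2; 1; 3];              % inputs of Example 1
w = [0.5 1 2];              % corresponding weights
y = purelin(w*p)            % linear neuron output; should return 8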
Exercise 1: Construct a Simulink model of a neuron with input x = [2 2 1 3], weight vector [0.1 0.3 0.2 1] and bias = 2. Compute the output when the activation function of the neuron is (i) linear (ii) hardlimiter (iii) symmetrical hardlimiter (iv) logsigmoidal. Verify the results manually.
Exercise 2: Repeat exercise 1 for x = [-0.5 -0.2 1 1.5 3] and bias b = 4.
Answer the following very briefly:
Q1: Define (i) weight (ii) bias (iii) activation function
Q2: A neuron has three inputs x1 = 2, x2 = 1 and x3 = 0.5. The corresponding weights
are w1 = 0.5, w2 = 1 and w3 = 2. Calculate the output of the neuron when the activation
function is (a) hard limiter (b) linear (c) logsigmoid
This question must be solved manually (without using MATLAB). You may verify the
result from MATLAB if you have time.
LAB TASK NO 6
Objective: To
• Create and train a Perceptron
• Create and train a Multilayer Perceptron Network
Hardware/Software Requirements: Pentium PC with MATLAB, Signal Processing and
Neural Network Toolbox.
1. Perceptrons: A perceptron with p inputs is shown below:
Figure: A perceptron (the p inputs are weighted by w1, ..., wp, summed to form the net input, and passed through a hard limiter to produce the output y)
The perceptron neuron produces a 1 if the net input into the activation function is equal to or greater than zero; otherwise it produces a 0. The hard limiter gives a perceptron the ability to classify input vectors by dividing the input space into two regions. The two classification regions are formed by the decision boundary line Wp + b = 0. The line is perpendicular to the weight matrix W and shifted according to the bias b. Input vectors above and to the left of this line result in a net input greater than zero and therefore cause the hard-limit neuron to output a 1. Input vectors below and to the right of the line cause the neuron to output 0.
At this stage, you should run the demonstration program nn4db. With it you can move a
decision boundary around, pick new inputs to classify, and see how the repeated
application of the learning rule yields a network that does classify the input vectors
properly.
1.1 Creating a Perceptron: A Perceptron can be created with the function newp.
net = newp(P, S)
where P is a p×2 matrix of minimum and maximum values for the p inputs and S is the
number of neurons.
The code below creates a perceptron network with a single one element input vector and
one neuron. The range for the single element of the single input vector is [0 2].
net = newp([0 2],1)
Example 1: Find the output of the perceptron shown below (weights w1 = -1, w2 = 1, bias b = 1, inputs p1 = 1, p2 = 1, hard-limit activation function).
Solution: Use the following code to find the output y:
net = newp([-1 1;-1 1],1);
net.IW{1,1} = [-1 1];
net.b{1} = [1];
p = [1;1];
y = sim(net,p)
Exercise: Repeat the above example when the input vector is p = [-1;1].
The following example demonstrates how a perceptron network can be trained.
 2
 1 
Example 2: Find weights and biases of a perceptron with inputs p 1 =   , p 2 =   ,
 2
 − 2
 − 2
p3 =  
 2
 − 1
and p 4 =   . The corresponding target vector is [0 1 0 1];
1
Solution:
p1 = [2; 2]; p2 = [1; -2]; p3 = [-2; 2]; p4 = [-1; 1];
p = [p1 p2 p3 p4];
t = [0 1 0 1];
net = newp([-2 2;-2 2],1);
net.adaptParam.passes = 10;
net = adapt(net,p,t);
y = sim(net,p)
% The final weights and bias are:
weights = net.IW{1,1}
bias = net.b{1}
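As an optional check, the toolbox plotting functions plotpv and plotpc can display the input vectors and the decision boundary found by the trained perceptron; run this sketch after the code above so that p, t and net are defined.
plotpv(p,t)                          % plot the input vectors, marked by target class
plotpc(net.IW{1,1},net.b{1})         % overlay the decision boundary Wp + b = 0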
Exercise: Repeat the above example for the following input vectors:
p1 = [2; 2; 1], p2 = [1; -2; -1], p3 = [-2; 2; 2], p4 = [-1; 1; 2]
2. Multilayer Perceptron Networks (MLPs):
Multilayer Perceptron architecture is probably the most widely used architecture of feedforward neural networks. These networks use the well known error back-propagation
algorithm for training. Many variants of this algorithm are available in the literature. A
list of these variants and their corresponding MATLAB functions are given below:
Table 1: MATLAB functions for the back-propagation algorithm
MATLAB Function – Algorithm
trainbfg – Quasi-Newton back-propagation
trainbr – Bayesian regularization back-propagation
traincgb – Conjugate gradient back-propagation with Powell-Beale restarts
traincgf – Conjugate gradient back-propagation with Fletcher-Reeves updates
traincgp – Conjugate gradient back-propagation with Polak-Ribiere updates
traingd – Gradient descent back-propagation
traingda – Gradient descent back-propagation with adaptive learning rate
traingdm – Gradient descent back-propagation with momentum
traingdx – Gradient descent back-propagation with adaptive learning rate & momentum
trainlm – Levenberg-Marquardt back-propagation
trainoss – One step secant back-propagation
trainrp – RPROP back-propagation
trainscg – Scaled conjugate gradient back-propagation
We shall only make use of traingd, traingda, traingdx and trainlm.
2.1 Creating an MLP network
The first step in training a feedforward MLP network is to create the network object. The function newff creates a trainable feedforward MLP network. It requires six inputs and returns the network object:
net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
where
PR - Rx2 matrix of min and max values for R input elements.
Si - Size of ith layer, for Nl layers.
TFi - Transfer function of ith layer, default = 'tansig'.
BTF - Backprop network training function, default = 'trainlm'.
BLF - Backprop weight/bias learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
The following example demonstrates the use of newff.
Example: Create one hidden layer MLP network with input vector P and target vector T.
Use tan-sigmoid function in hidden layer neurons and a linear transfer function in the
output neuron. Use three neurons in the hidden layer and one neuron in the output layer.
The input and target vectors are given below:
p = [-1 -1 2 2; 0 5 0 5];
t = [-1 -1 1 1];
The traingd training function is to be used. In how many epochs is the error goal achieved during training?
Solution: Use the following MATLAB code
p = [-1 -1 2 2;0 5 0 5];
t = [-1 -1 1 1];
net=newff([-1 2;0 5],[3 1],{'tansig','purelin'},'traingd','learngd','mse');
net.trainparam.epochs=1000;
net.trainparam.goal = 0.0001;
[net,tr]=train(net,p,t);
y=sim(net,p);
plot(p,t,'xr',p,y,'ob')
display('The final weights and biases are:')
pause
input_weights = net.IW{1,1}
layer_weights = net.LW{2,1}
biases_of_hidden_neurons = net.b{1}
bias_of_output_neuron = net.b{2}
pause
display('The output of the network is')
y
You may increase number of epochs or number of neurons in the hidden layer if training
is not successful.
In the above example, we used traingd to train the network. This function implements
the conventional gradient descent back-propagation algorithm with a fixed learning rate
and no momentum. The performance of the algorithm is very sensitive to the proper
setting of the learning rate. If the learning rate is set too high, the algorithm may oscillate
and become unstable. If the learning rate is too small, the algorithm will take too long to
converge. It is not practical to determine the optimal setting for the learning rate before
training, and, in fact, the optimal learning rate changes during the training process, as the
algorithm moves across the performance surface.
The performance of the conventional back-propagation algorithm can be improved if we
allow the learning rate to change during the training process. An adaptive learning rate
will attempt to keep the learning step size as large as possible while keeping learning
stable. The learning rate is made responsive to the complexity of the local error surface.
An adaptive learning rate requires some changes in the training procedure used by
traingd. First, the initial network output and error are calculated. At each epoch new
weights and biases are calculated using the current learning rate. New outputs and errors
are then calculated.
As with momentum, if the new error exceeds the old error by more than a pre-defined
ratio max_perf_inc (typically 1.04), the new weights and biases are discarded. In addition
the learning rate is decreased (typically by multiplying by lr_dec = 0.7). Otherwise, the
new weights etc. are kept. If the new error is less than the old error, the learning rate is
increased (typically by multiplying by lr_inc = 1.05).
This procedure increases the learning rate, but only to the extent that the network can
learn without large error increases. Thus a near optimal learning rate is obtained for the
local terrain. When a larger error rate could result in stable learning, the learning rate is
increased. When the learning rate is too high to guarantee a decrease in error, it gets
decreased until stable learning resumes.
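The rule described above can be summarised by a small sketch; this is illustrative code, not the toolbox implementation, and the error values are arbitrary.
lr = 0.05;  lr_inc = 1.05;  lr_dec = 0.7;  max_perf_inc = 1.04;
old_err = 0.20;  new_err = 0.25;          % performance on two successive epochs (arbitrary)
if new_err > max_perf_inc*old_err
    lr = lr*lr_dec;                       % error grew too much: discard the step and reduce the learning rate
elseif new_err < old_err
    lr = lr*lr_inc;                       % error decreased: increase the learning rate slightly
end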
The following program uses the function traingdx, which implements the back-propagation rule with momentum and an adaptive learning rate.
p = [-1 -1 2 2;0 5 0 5];
t = [-1 -1 1 1];
net=newff([-1 2;0 5],[3 1],{'tansig','purelin'},'traingdx','learngd','mse');
net.trainparam.show = 25
net.trainparam.epochs=1000
net.trainparam.goal = 0.0001
net.trainparam.lr = 0.05
net.trainparam.lr_inc = 1.05
net.trainparam.lr_dec = 0.7
net.trainparam.mc = 0.95
net.trainparam.max_perf_inc = 1.04
[net,tr]=train(net,p,t);
y=sim(net,p);
plot(p,t,'xr',p,y,'ob')
 0
Exercise: Develop an MLP network (with one hidden layer) with inputs p 1 =   ,
 0
1 
1
 0
p 2 =   , p 3 =   , p 4 =   . The target vector is [0 1 1 0]. Train the network when
 0
1
1 
the number of hidden layer neurons is (a) 1 (b) 3 (c) 5. Use both traingd and
traingdx functions. Comment on the performance of the networks.
Solve this problem with other functions given in Table 1.
LAB TASK NO 7
Objective: To
• Prepare data for training and testing of an MLP network
• Develop an MLP network to approximate a non-linear function
• Develop an MLP network to solve some real world problem
Hardware/software Requirement:
• Pentium PC
• MATLAB with Neural Network Toolbox
Task 1: Investigate the use of back-propagation learning to achieve one-to-one mappings,
as described here:
1. f(x) = 1/x,      1 ≤ x ≤ 100
2. f(x) = exp(-x),  1 ≤ x ≤ 10
For each mapping, do the following:
(a) Set up two sets of data, one for network training, and the other for testing.
(b) Use the training data set to compute the synaptic weights of the network, assumed to have a single hidden layer.
(c) Evaluate the computing accuracy of the network by using the test data.
Use a single hidden layer but with a variable number of hidden neurons.
Investigate how the network performance is affected by varying the size of the
hidden layer.
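One possible way to prepare and use the data for mapping 1 is sketched below; the training/test split, the network size and the number of epochs are arbitrary choices that you should vary in your own investigation.
x_train = 1:2:99;    t_train = 1./x_train;      % odd values of x for training
x_test  = 2:2:100;   t_test  = 1./x_test;       % even values of x for testing
net = newff([1 100],[10 1],{'tansig','purelin'},'trainlm');
net.trainParam.epochs = 500;
net = train(net,x_train,t_train);
y_test = sim(net,x_test);
test_error = mse(y_test - t_test)               % computing accuracy on the test data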
Task2: The data presented in the Table given below shows the weights of eye lenses of
wild Australian rabbits as a function of age. No simple analytical function can exactly
interpolate these data, because we do not have a single-valued function. Instead, we fit a non-linear least-squares model to this data set, using a negative exponential, as described by
y = 233.846(1-exp(-0.006042x)) + ε
where ε is an error term.
Using the back-propagation algorithm, design a multilayer perceptron that
provides a non-linear approximation to this data set. Compare your result against the
least squares model described.
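For the comparison asked for above, the least-squares curve itself can be evaluated directly; the range of ages used here is arbitrary.
x = 0:5:900;                                    % age in days
y_ls = 233.846*(1 - exp(-0.006042*x));          % least-squares model (ignoring the error term)
plot(x,y_ls)
xlabel('Age (days)'), ylabel('Lens weight (mg)')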
Ages
(days)
15
15
15
18
28
29
37
37
44
50
50
60
61
64
65
65
72
75
Table: Weights of eye lenses of wild Australian rabbits (Age in days and Weight in mg, arranged in four column groups)
174.18
338
75
94.6
218
21.66
347
92.5
218
173.03
22.75
82
219
173.54
354
22.3
85
105
178.86
357
101.7
224
31.25
91
375
225
177.68
44.79
91
102.9
173.73
394
97
110
227
40.55
513
104.3
232
159.98
50.25
98
232
161.29
535
125
134.9
46.88
187.07
554
130.68
237
52.03
142
591
246
176.13
63.47
142
140.58
183.4
448
147
155.3
258
61.13
152.2
276
186.26
660
81
147
189.66
705
285
150
144.5
73.09
723
186.09
300
142.15
79.09
159
756
301
186.7
139.81
165
79.51
186.8
768
305
183
153.22
65.31
860
195.1
145.72
312
192
71.9
216.41
317
151.1
86.1
195
Weights
(mg)
203.23
188.38
189.7
195.31
202.63
224.82
203.3
209.7
233.9
234.7
244.3
231
242.4
230.77
242.57
232.12
246.7
LAB TASK NO 8
Objective: To
• Create a radial basis function network with Gaussian functions in its hidden layer.
• Train a radial basis function network with given input-output
data sets.
Hardware/software Requirements:
• Pentium PC
• MATLAB with Neural Network Toolbox
As discussed in class, an RBF network is a powerful alternative to the multilayer perceptron (MLP) network. An RBF network uses a radial basis function as its activation function. A number of choices of radial basis functions are available. In this lab we shall only use the Gaussian function, which is the most widely used activation function in RBF networks.
A Gaussian function may be generated as follows:
x = -3:0.01:3;
u = radbas(x);
plot(x,u)
Radial basis function networks can be designed with the functions newrbe and newrb. The function newrbe can produce a network with zero error on the training vectors. It is called in the following way:
net = newrbe(P,T, SPREAD)
The function newrbe takes matrices of input vectors P and target vectors T, and a spread constant SPREAD (i.e. the width of the Gaussian function) for the radial basis layer, and
returns a network with weights and biases such that the outputs are exactly T when the
inputs are P. This function creates as many hidden layer neurons as there are input
vectors in P.
 0
Example: Develop an RBF network (with one hidden layer) with inputs p 1 =   ,
 0
1 
1
 0
p 2 =   , p 3 =   , p 4 =   . The target vector is [0 1 1 0].
 0
1
1 
Solution:
p1 = [0; 0]; p2 = [1; 0]; p3 = [1; 1]; p4 = [0; 1];
p = [p1 p2 p3 p4];
t = [0 1 1 0];
net = newrbe(p,t,1);
y = sim(net,p)
Find weights and biases of this network and compare the performance of this network
with the MLP network trained in the previous lab.
The drawback of newrbe is that it produces a network with as many hidden neurons as
there are input vectors. For this reason, newrbe does not return an acceptable solution
when many input vectors are needed to properly define a network. The function newrb
removes this disadvantage. It creates a radial basis function network iteratively, one
neuron at a time. Neurons are added to the network until the sum squared error falls
beneath an error goal or a maximum number of neurons has been reached. The call for
this function is
net = newrb(P,T, GOAL, SPREAD)
The function newrb takes matrices of input and target vectors, P and T, and design
parameters GOAL and SPREAD, and returns the desired network.
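A small sketch of a newrb call is given below; the data, error goal and spread are arbitrary values used only to show the calling sequence.
P = -1:0.1:1;                    % input vector (arbitrary)
T = sin(pi*P);                   % target vector (arbitrary)
net = newrb(P,T,0.01,1);         % add neurons until the sum squared error falls below 0.01
y = sim(net,P);
plot(P,T,'o',P,y,'-')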
Exercise:
(a) Repeat the above example with the function newrb. Let the error goal be 0.
(b) Design RBF networks for all the problems of lab5 and hence compare the
performance of MLP and RBF networks.
LAB TASK NO 9
Objective: To
• Develop an MLP and RBF network for a practical application
• Design an intelligent controller using MLP and RBF networks
Hardware/software requirements:
• Pentium PC
• MATLAB with Neural Network toolbox
Task 1: The input and output data sets of the main steam controller (stage 1) of the
thermal power station Jamshoro are given below:
Input    Output
518      515
518      515
615      516
515      515
525      515
518      520
510      515
510      510
510      510
510      510
510      515
500      510
525      500
510      520
512      510
515      510
518      510
510      510
510      510
(a) Develop a single hidden layer network with a tanh activation function in the hidden layer and a linear transfer function in the output layer. Record the number of neurons in each layer, and the weights and biases of the network. Comment on the performance of the network.
(b) Repeat (a) for a radial basis function network.
Which network performed better?
Task 2: Construct the following Simulink model:
In the above model, S is the saturator having saturation limits from –35 to +35, rate
limiter has limits from –7 to 7, and PID is the proportional plus integral plus derivative
controller with P = 15, I = 0.5 and D = 30. Our purpose is to develop a neural network
controller which replaces the PID controller. For this, do the following:
Step1: Simulate/run this model for a step input of 10 degrees. Check the output response
and save it with appropriate file name.
Step2: Generate and record data from the input and output of the controller. Save this
data in a separate file. (for this use “to workspace” block at the input and output of the
PID).
Step 3: Train an MLP or RBF or Elman network (RBF preferable) with the input/output
data of step 2.
Step4: Use gensim function to obtain a simulink block of the network.
Step 5: Replace PID with your trained network (simulink block of step 4).
Step 6: Run your simulation and see the plot of the output for a step input of 10 degrees.
What are your comments about your trained controller?
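Steps 3 and 4 might look as sketched below; the variable names u_in and u_out are hypothetical names for the logged controller input and output saved in Step 2, and the small placeholder data is there only so the lines run on their own.
u_in  = randn(100,1);                  % placeholder for the logged controller input (hypothetical)
u_out = 15*u_in;                       % placeholder for the logged controller output (hypothetical)
net = newrb(u_in', u_out', 0.01, 1);   % RBF network mapping controller input to controller output
gensim(net, -1)                        % generate a Simulink block with continuous sampling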
LAB TASK NO 10
Objective: To
• Generate a linear associator
• Compute the weights and output of the associator
• Compute the weights and output by using the pseudo-inverse rule
Hardware/software requirements: same as previous labs
1. Linear Associator:
A linear associator is shown in the following diagram
Figure: Linear associator (inputs x1, x2, ..., xq with weights w1, w2, ..., wq, a summing junction and output y)
The weight matrix of the associator is computed as
W = dX
where d is the matrix of desired (target) vectors and X is the matrix whose rows are the input vectors (in the code below X is formed as [x1 x2]'). The following example demonstrates the use of MATLAB to compute the weight matrix and the output of the associator.
Example 1: Suppose that the prototype input/output vectors are
x1 = [0.5; -0.5; 0.5; -0.5], d1 = [1; -1];    x2 = [0.5; 0.5; -0.5; -0.5], d2 = [1; 1]
(a) State whether the patterns are orthonormal.
(b) Design a linear associator for these patterns. Use the Hebb rule.
(c) Find the output of the associator when the input is (i) x1 (ii) x2.
Solution: (a) x1 and x2 will be orthonormal if x1'x2 = 0 and x1'x1 = x2'x2 = 1. Use the following program to check the orthonormality:
x1=[0.5;-0.5;0.5;-0.5];
x2=[0.5;0.5;-0.5;-0.5];
xx=x1'*x2
xy=x1'*x1
yy=x2'*x2
(b) The weight vector can be computed as W = dX. Use the following code to compute
W.
d1=[1;-1];
d2=[1;1];
d=[d1 d2];
X = [x1 x2]';
W = d*X
(c) The outputs are
y1=W*x1
y2=W*x2
Was the associator successful?
Exercise 1: Repeat Example 1 for the following input/output pairs:
x1 = [0.5774; -0.5773; -0.5774], d1 = [-1];    x2 = [0.5774; 0.5774; -0.5774], d2 = [1]
Why was the associator not successful in this case?
2. Pseudoinverse Rule:
This rule is used in place of the Hebb rule when the patterns are not orthogonal. It computes the weights as follows:
X⁺ = (XᵀX)⁻¹Xᵀ
W = dX⁺
where X is the input matrix, X⁺ is its pseudo-inverse and d is the target vector.
Example 2: Consider the following input/output patterns:
x1 = [1; -1; -1], d1 = [-1];    x2 = [1; 1; -1], d2 = [1]
(a) Show that the patterns are not mutually orthogonal
(b) Find the weight matrix
(c) Check the success of the rule.
Solution:
(a) x1=[1;-1;-1];
x2=[1;1;-1];
xx=x1'*x2
xy=x1'*x1
yy=x2'*x2
(b)
x=[x1 x2];
xplus=inv(x'*x)*x';
d1=[-1];
d2=[1];
d = [d1 d2];
W=d*xplus
(c) disp('the output due to x1 is')
y1 = W*x1
disp('second output is')
y2 = W*x2
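Note that MATLAB's built-in pinv function computes the same pseudo-inverse, so the weight matrix of part (b) can also be obtained as follows:
W2 = d*pinv(x)                   % should match the W computed above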
Exercise 2: Consider a linear associator with the following input/output vectors:
x1 = [1; -1; 1; -1], d1 = [1; -1];    x2 = [1; 1; -1; -1], d2 = [1; 1]
(a) Use the Hebb rule to find the appropriate weight matrix for this linear associator.
(b) Repeat (a) using the pseudoinverse rule.
(c) Apply the input x1 to the linear associator using the weight matrix of part (a), then using the weight matrix of part (b).
(d) Repeat (c) for input x2.
Exercise 3: Consider the prototype patterns given below.
(a) Are these patterns orthogonal?
(b) Design an autoassociator for these patterns. Use the Hebb rule.
(c) What response does the network give to the test input pattern xt given below?
x1 = [1 1 -1 1 -1 -1]ᵀ, x2 = [-1 1 1 1 1 -1]ᵀ, xt = [1 1 1 1 1 -1]ᵀ
Exercise 4: Consider an autoassociation problem in which there are three prototype patterns (x1, x2 and x3). Design autoassociative networks to recognize these patterns, using both the Hebb rule and the pseudoinverse rule. Check their performance on the test pattern xt shown below.
x1 = [1; 1; -1; -1; 1; 1; 1], x2 = [1; 1; 1; -1; 1; -1; 1], x3 = [-1; 1; -1; 1; 1; -1; 1], xt = [-1; 1; -1; -1; 1; -1; 1]
LAB TASK NO 11
Objective: To
• Create several map topologies
• Calculate distances of neurons from a particular neuron
• Create and train a self-organizing map
Hardware/software requirements:
• Pentium PC
• MATLAB with Neural Network Toolbox
1. Self-Organizing Map topologies:
You can specify different topologies for the original neuron locations with the functions
gridtop, hextop or randtop.
The gridtop topology starts with neurons in a rectangular grid. For example, suppose you
want a 2 by 3 array of six neurons. You can get this with
pos = gridtop(2,3)
Here neuron 1 has the position (0,0), neuron 2 has the position (1,0), neuron 3 has the
position (0,1) etc.
An 8 by 10 set of neurons in a gridtop topology can be created and plotted with the
code shown below:
pos = gridtop(8,10);
plotsom(pos)
The hextop function creates a similar set of neurons, but they are in a hexagonal pattern.
A 2 by 3 pattern of hextop neurons is generated as follows:
pos = hextop(2,3)
An 8 by 10 set of neurons in a hextop topology can be created and plotted with the code
shown below:
pos = hextop(8,10);
plotsom(pos)
Finally the randtop function creates neurons in an n-dimensional random pattern. The
following code generates a random pattern of neurons.
pos = randtop(2,3)
An 8 by 10 set of neurons in a randtop topology can be created and plotted with the code
shown below:
pos = randtop(8,10);
plotsom(pos)
2. Distance Functions:
In the Neural Network toolbox of MATLAB there are four distinct ways to calculate distances from a particular neuron to its neighbors. Each calculation method is implemented with a special function (dist, linkdist, mandist and boxdist). In this lab we shall only use the dist and boxdist functions.
The dist function calculates the Euclidean distance from a home neuron to any other
neuron. Suppose we have three neurons:
pos = [0 1 2; 0 1 2];
we will find distance from each neuron to the other with
D = dist(pos)
Thus, the distance from neuron 1 to itself is 0, the distance from neuron 1 to neuron 2 is
1.4142 etc. These are indeed the Euclidean distances.
The box distances can be calculated as follows:
Suppose we have six neurons in a gridtop configuration.
pos = gridtop(2,3);
Then the box distances are
D = boxdist(pos)
The distance from neuron 1 to neurons 2, 3 and 4 is just 1, for they are in the immediate neighborhood. The distance from neuron 1 to both 5 and 6 is 2. The distance from both 3 and 4 to all other neurons is just 1.
3. Creating a self-organizing map Neural Network:
You can create a new self-organizing map with the function newsom. Consider the
following example:
Example1:
Suppose we wish to create a network having input vectors with two elements that fall in
the range 0 to 2 and 0 to 1 respectively. Further suppose that we want to have six neurons
in a hexagonal 2 by 3 network. The code to obtain this network is
net = newsom([0 2; 0 1],[2 3]);
Suppose the vectors to train on are
P = [0.1 0.3 1.2 1.1 1.8 1.7 0.1 0.3 1.2 1.1 1.8 1.7; ...
0.2 0.1 0.3 0.1 0.3 0.2 1.8 1.8 1.9 1.9 1.7 1.8];
we can plot all of this with
plot(P(1,:), P(2,:),'.g','markersize',20)
hold on
plotsom(net.iw{1,1},net.layers{1}.distances)
hold off
we can train the network for 1000 epochs as
net.trainparam.epochs = 1000
net = train(net,P)
plotsom(net.iw{1,1}, net.layers{1}.distances)
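After training, you can also check which neuron of the map responds to a particular input; the test vector below is arbitrary, and vec2ind is the toolbox function that converts the competitive output into a neuron index.
a = sim(net,[0.5; 0.3]);         % present an arbitrary input vector to the trained map
winner = vec2ind(a)              % index of the winning neuron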
Example 2: Consider 100 two-element unit input vectors spread evenly between 0 and 90 degrees.
angles = linspace(0,0.5*pi, 100);
p = [sin(angles); cos(angles)];
plot(p(1,:), p(2,:),'+r')
% plot of input data
Define a self-organizing map as a one dimensional layer of 10 neurons. This can be done
as follows:
net = newsom([0 1;0 1], [10]);
Train the network for 1000 epochs:
net.trainParam.epochs = 1000;
net = train(net,p);
Plot the trained network as:
plotsom(net.iw{1,1}, net.layers{1}.distances)
Example 3: Two-dimensional self organizing map:
This example shows how a two dimensional self-organizing map can be trained.
First some random input data is created with the following code:
P = rands(2,1000);
plot(P(1,:), P(2,:), '+r')
Create a new network with a 5 by 6 layer of neurons to classify the above vectors. This can be done as
net = newsom([0 1;0 1], [5 6]);
Visualize the network as
plotsom(net.iw{1,1}, net.layers{1}.distances);
Train the network for 1000 epochs:
net.trainParam.epochs = 1000
net = train(net, P);
plotsom(net.iw{1,1}, net.layers{1}.distances)
Repeat the above example with 5000 epochs and 1000 epochs. Do you see any
improvement?
LAB TASK NO 12
Objective: To
• Create and train Hopfield Network
• Create and train an Elman Network
Hardware/software requirements:
• Pentium PC
• MATLAB with Neural Network Toolbox
1. Hopfield Networks:
We can create a new Hopfield network by using the function newhop. The following examples illustrate the use of this function.
Example 1: Suppose we wish to design a network which produces the following target
output:
T = [-1 -1 1;1 -1 1]';
We can execute the design with
net = newhop(T);
The output of the network can be found by typing
y = sim(net,2,[ ],T)
where [ ] emphasizes that there is no input to the network.
To see if the network can correct a corrupted vector, run
the following code which simulates the Hopfield network for
five timesteps. (Since Hopfield networks have no inputs,
the second argument to SIM is {Q TS} = [1 5] when using cell
array notation.)
Ai = {[-0.9; -0.8; 0.7]};
y = sim(net,{1 5},{},Ai);
y{1}
Example 2: Consider a Hopfield network with just two neurons. Each neuron has a bias and weights to accommodate two-element input vectors. We define the target equilibrium points as follows:
T = [1 -1; -1 1]’;
These target stable points are given to newhop to obtain weights and biases of a Hopfield
network:
net = newhop(T);
The design returns a set of weights and a bias for each neuron. The results are obtained
from
w = net.LW{1,1}
and
b = net.b{1,1}
Next the design is tested with the target vector T as follows:
y = sim(net,2,[ ], T);
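As with Example 1, you can also start this two-neuron network from an arbitrary state and let it run for several timesteps to see it settle on one of the stored equilibrium points; the initial state below is arbitrary.
Ai = {[0.3; -0.1]};              % arbitrary initial state
y = sim(net,{1 10},{},Ai);       % run for 10 timesteps with no external input
y{10}                            % final state; should be close to one of the target points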
2. Elman Networks: The Elman network is commonly a two-layer network with
feedback from the first layer output to the first layer input. An Elman network with
two or more layers can be created with the function newelm.
Example: Suppose we have a sequence of single-element input vectors in the range from 0 to 1. Suppose that we want to have five hidden layer tansig neurons and a single logsig output layer. The following code creates the desired network and a random input sequence:
net = newelm([0 1],[5 1],{'tansig','logsig'});   % same call as used in Example 3 below
P = round(rand(1,8));
A sequence to be presented to a network should be in cell array form. We can
convert P to this form with
Pseq = con2seq(P)
Now we can find the output of the network with the function sim.
y = sim(net, Pseq)
we will convert this back to concurrent form with
z = seq2con(y);
and can finally display the output in concurrent form with
z{1,1}
Example 3: This example is concerned with the training of Elman networks.
Suppose that we wish to train a network with an input P and target T given by
P = round(rand(1,8));
T = [0 0 0 1 1 0 0 1];
Let the network have five hidden neurons in the first layer:
net = newelm([0 1],[5 1],{'tansig','logsig'});
Train the network as follows:
Pseq = con2seq(P);
Tseq = con2seq(T);
net = train(net,Pseq, Tseq);
Find the output:
y = sim(net, Pseq);
z = seq2con(y);
z{1,1}
The difference between the target output and the simulated network output is
diff1 = T - z{1,1}
Example 4: Amplitude Detection of a waveform
Amplitude detection requires that a waveform be presented to a network through
time, and that the network output the amplitude of the waveform. The following code
defines two sine waveforms, one with an amplitude of 1.0 and the other with an
amplitude of 2.0:
P1 = sin(1:20);
P2 = sin(1:20)*2;
The target outputs for these waveforms will be their amplitudes:
T1 = ones(1,20);
T2 = ones(1,20)*2;
These waveforms can be combined into a sequence where each waveform occurs twice.
These longer waveforms will be used to train the Elman network.
P = [P1 P2 P1 P2];
T =[ T1 T2 T1 T2];
Pseq = con2seq(P);
Tseq = con2seq(T);
Let 10 hidden neurons be used in the network:
net = newelm([-2 2], [10 1],{'tansig', 'purelin'}, 'traingdx');
net = train(net, Pseq, Tseq);
Increase the number of epochs if the training is not successful.
To test the network, the original inputs are presented, and its outputs are calculated with:
y = sim(net,Pseq);
time = 1:length(P);
plot(time,T,'--',time,cat(2,y{:}))
title('Testing amplitude detection')
xlabel('Time')
ylabel('Target --  Output ---')
LAB TASK NO 13
Objective: To
• Draw block diagrams of Neural networks using SIMULINK
• Effectively use SIMULINK to train neural networks
Hardware/Software Requirements:
• Pentium PC
• MATLAB and SIMULINK with Neural Network Toolbox
Neural network toolbox provides a set of blocks you can use to build neural networks in Simulink or which
can be used by the function gensim to generate the simulink version of any network you have created in
MATLAB.
Bring up the neural network toolbox block set with this command:
neural
The result will be the following window, which contains four blocks. Each of these
blocks contains additional blocks. Note, in Neural Network Toolbox version 3, control
systems block is not available.
Neural Window
Transfer function blocks: Double click on the Transfer Functions block in the above
window to bring up a window containing several transfer function blocks. Each of these
blocks takes a net input vector and generates a corresponding output vector whose dimensions are the same as those of the input vector. You
are already familiar with these transfer function models.
Net input blocks: Double click on the net input functions block in the Neural window to
bring up a window containing two net input function blocks.
Each of these blocks takes any number of weighted input vectors, and bias vectors, and
returns a net input vector.
Weight Blocks: Double click on the weight functions block in the neural window to
bring up a window containing three weight function blocks.
Each of these blocks takes a neuron's weight vector and applies it to an input vector (or a
layer output vector) to get a weighted input value for a neuron.
It is important to note that the blocks above expect the neuron’s weight vector to be
defined as a column vector. This is because Simulink signals can be column vectors, but
can not be matrices or row vectors.
Example 1: Consider a linear neuron with three inputs p1 = 2, p2 = 1, p3 = 3. The
corresponding weights are w1 = 0.5, w2 = 1 and w3 = 2. Draw a Simulink diagram to
implement this neuron and hence find its output.
Solution: one possible set-up is shown below:
The output of this neuron should be 8.
An alternative simulation set-up is as under
Again, you can verify that the output is 8, as expected.
A very efficient alternative is given below (you should preferably use this model in your
simulations)
Exercise 1: Construct a Simulink model of a neuron with input p = [2 2 1 3], weight vector [0.1 0.3 0.2 1] and bias = [2 2 2 2]. Compute the output when the activation function of the neuron is (i) linear (ii) hardlimiter (output either +1 or -1) (iii) log-sigmoidal. Verify the results manually.
Block Generation: The function gensim generates block description of networks so you
can simulate them in Simulink.
gensim(net, st)
The second argument to gensim determines the sample time, which is normally chosen to
be some positive real value. If a network has no delays associated with its input weights
or layer weights, this value can be set to -1. A value of -1 tells gensim to generate a
network with continuous sampling.
Example 2: Consider a simple problem defining a set of inputs P and the corresponding
targets T:
P = [1 2 3 4 5];
T = [1 3 5 7 9];
The code below designs a linear layer to solve this problem
net = newlind(P,T);
We can test the network on our original inputs as follows:
y = sim(net,P)
The results returned should show that the network has solved the problem.
Call gensim as follows to generate a Simulink version of the network:
gensim(net, -1)
The call to gensim results in the following screen appearing.
To test the network, double click on the input 1 block at the left. The input block is actually a
standard constant block. Change the constant value from the initial randomly generated
value to 2 (say), then select close.
Select start from the simulation menu. Simulink will momentarily pause as it simulates
the system. When the simulation is over, double click the scope to see the display of the network's response.
Exercise 2: Construct the following
In the above model, S is the saturator having saturation limits from –35 to +35, rate
limiter has limits from –7 to 7, and PID is the proportional plus integral plus derivative
controller with P = 15, I = 0.5 and D = 30. Our purpose is to develop a neural network
controller which replaces the PID controller. For this, do the following:
Step1: Simulate/run this model for a step input of 10 degrees. Check the output response
and save it with appropriate file name.
Step2: Generate and record data from the input and output of the controller. Save this
data in a separate file.
Step 3: Train an MLP or RBF or Elman network (RBF preferable) with the input/output
data of step 2.
Step4: Use gensim function to obtain a simulink block of the network.
Step 5: Replace PID with your trained network (simulink block of step 4).
Step 6: Run your simulation and see the plot of the output for a step input of 10 degrees.
What are your comments about your trained controller?