NEURAL NETWORK
1.0 INTRODUCTION
Our organization, the Road Development Authority, designs structures for roads. Box culverts are the main type of drainage structure we design. The organization maintains a table of typical culvert sections for different sizes and fill heights, which can be used to train a neural network to predict unknown parameters.

2.0 DATA SET
Input data
Box width (m) = w
Box height (m) = h
Fill height (m) = H
Reinforcement provided (mm2/m) = A
Output data
Top slab thickness (mm) = t

Total number of data = 37
2/3 of the data = 25
Number of training examples = 25
Number of testing examples = 12
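The 2/3 split above can be reproduced in a couple of lines (a minimal sketch; the report does not state how individual rows were assigned to each set):

```python
# 2/3 of the 37 available examples go to training, the rest to testing
total = 37
n_train = round(total * 2 / 3)  # 2/3 of 37 is about 24.7, rounded to 25
n_test = total - n_train

print(n_train, n_test)  # 25 12
```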
Data set

Training examples (25)
#    w (m)  h (m)  Fill height H (m)  RF area A (mm2/m)  Top slab thickness t (mm)
1    1      1      0                  1005               200
2    2      1      0                  1795               250
3    1      1.5    0                  1005               225
4    1.5    1.5    0                  1340               200
5    2.5    1.5    0                  2094               225
6    1      2      0                  1005               200
7    2      2      0                  1795               225
8    2.5    2      0                  2094               225
9    1.5    2.5    0                  1340               200
10   2      2.5    0                  1795               225
11   3      1      0                  1608               275
12   3      1.5    0                  1608               275
13   3      2      0                  1340               275
14   3.5    1      0                  1608               300
15   3.5    1.5    0                  1608               300
16   3.5    2.5    0                  2094               350
17   4      1      0                  2094               350
18   4      1.5    0                  2094               350
19   4      2.5    0                  2094               350
20   2      2      2                  1340               250
21   2      2      4                  1795               300
22   3      2      2                  1340               300
23   3      2      6                  2094               400
24   3      3      2                  1340               300
25   3      3      6                  2094               400

Testing examples (12)
#    w (m)  h (m)  Fill height H (m)  RF area A (mm2/m)  Top slab thickness t (mm)
1    1.5    1      0                  1340               200
2    2.5    1      0                  2094               225
3    2      1.5    0                  1795               225
4    1.5    2      0                  1340               200
5    1      2.5    0                  1005               200
6    2.5    2.5    0                  2094               225
7    3      2.5    0                  1340               275
8    3.5    2      0                  2094               300
9    4      2      0                  2094               350
10   2      2      6                  2094               350
11   3      2      4                  1795               350
12   3      3      4                  1795               350
3.0 NEURAL NETWORK
The WinNN32 software was used to train and test the data set.

3.1 TRAIN 4:3:1 NEURAL NETWORK
[Figure: network diagram - 4 input layer nodes, 3 mid layer nodes, 1 output layer node]
The raw data was normalized before training to improve performance. Eta (learning rate) and Alpha (momentum) were both set to 0.5. A sigmoid transfer function was used, and the target error was adjusted so that 90% or more of the patterns were classed as good. Networks with three and with two middle-layer nodes were each trained to two different target errors, giving four distinct sets of predictions.
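The normalization step can be sketched as simple min-max scaling (an assumption; the report does not state exactly which scaling WinNN32 applies):

```python
# Minimal sketch of min-max normalization: each column is scaled into [0, 1].
def normalize(column):
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

# First few box-width values from the training set
widths = [1, 2, 1, 1.5, 2.5, 1]
print(normalize(widths))  # all values scaled into [0, 1]
```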
The same data set was also used in a multiple regression analysis, and the test data set was evaluated accordingly. All output data are displayed together with the mean absolute error and |1-RAVG|, the absolute difference between 1 and the average of the prediction-to-target ratios.
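The two error measures can be sketched as follows (taking |1-RAVG| to mean the absolute difference between 1 and the average prediction-to-target ratio, which matches the tabulated values; the prediction and target values below are from the multiple-regression table in section 4.0):

```python
# Mean absolute error between predicted and target slab thicknesses
def mean_absolute_error(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

# |1 - RAVG|: absolute difference between 1 and the average pred/target ratio
def one_minus_ravg(pred, target):
    ratios = [p / t for p, t in zip(pred, target)]
    return abs(1 - sum(ratios) / len(ratios))

pred   = [211, 260, 234, 208, 182, 256, 281, 307, 332, 341, 355, 352]
target = [200, 225, 225, 200, 200, 225, 275, 300, 350, 350, 350, 350]
print(round(mean_absolute_error(pred, target)))  # 13
print(round(one_minus_ravg(pred, target), 2))    # 0.03
```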
Number of parameters = 4 x 3 + 3 + 3 x 1 + 1 = 19
Number of parameters x 1.5 = 19 x 1.5 = 28.5
Number of training examples = 25
Although the number of training examples (25) falls short of 1.5 times the number of parameters (28.5), it is still greater than the number of parameters (19), so it is possible to train this neural network.
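The hand calculation above generalizes to any fully connected feed-forward network; a short sketch:

```python
# Count weights and biases of a fully connected feed-forward network:
# for each pair of adjacent layers, n_in * n_out weights plus n_out biases.
def mlp_parameter_count(layer_sizes):
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(mlp_parameter_count([4, 3, 1]))  # 19
print(mlp_parameter_count([4, 2, 1]))  # 13
```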
Artificial Neural Network (3 middle nodes and target error of 0.01) prediction of slab thickness (mm)

Network  Target  Absolute error  |1-RAVG|
216      200     16              1.08
255      225     30              1.13
219      225     6               0.97
212      200     12              1.06
213      200     13              1.06
241      225     16              1.07
283      275     8               1.03
325      300     25              1.08
351      350     1               1.00
329      350     21              0.94
376      350     26              1.07
364      350     14              1.04
Mean absolute error = 16, |1-RAVG| = 0.05
Artificial Neural Network (3 middle nodes and target error of 0.005) prediction of slab thickness (mm)

Network  Target  Absolute error  |1-RAVG|
229      200     29              1.14
229      225     4               1.02
220      225     5               0.98
205      200     5               1.03
202      200     2               1.01
228      225     3               1.01
283      275     8               1.03
337      300     37              1.12
352      350     2               1.00
365      350     15              1.04
392      350     42              1.12
391      350     41              1.12
Mean absolute error = 16, |1-RAVG| = 0.05
3.2 TRAIN 4:2:1 NEURAL NETWORK
[Figure: network diagram - 4 input layer nodes, 2 mid layer nodes, 1 output layer node]
Number of parameters = 4 x 2 + 2 + 2 x 1 + 1 = 13
Number of parameters x 1.5 = 13 x 1.5 = 19.5
Number of training examples = 25
Artificial Neural Network (2 middle nodes and target error of 0.01) prediction of slab thickness (mm)

Network  Target  Absolute error  |1-RAVG|
222      200     22              1.11
248      225     23              1.10
224      225     1               0.99
217      200     17              1.08
215      200     15              1.08
238      225     13              1.06
277      275     2               1.01
331      300     31              1.10
343      350     7               0.98
314      350     36              0.90
379      350     29              1.08
379      350     29              1.08
Mean absolute error = 19, |1-RAVG| = 0.05
Artificial Neural Network (2 middle nodes and target error of 0.005) prediction of slab thickness (mm)

Network  Target  Absolute error  |1-RAVG|
211      200     11              1.05
233      225     8               1.03
219      225     6               0.97
212      200     12              1.06
209      200     9               1.05
247      225     22              1.10
282      275     7               1.02
325      300     25              1.08
354      350     4               1.01
399      350     49              1.14
332      350     18              0.95
318      350     32              0.91
Mean absolute error = 17, |1-RAVG| = 0.03
4.0 MULTIPLE REGRESSION ANALYSIS
From the regression analysis below, the slab thickness can be written as
slab thickness t = 138.7 + 49.774w - 2.719h + 18.032H - 0.00000796A
SUMMARY OUTPUT
Regression Statistics
Multiple R 0.949
R Square 0.901
Adjusted R Square 0.881
Standard Error 21.471
Observations 25
ANOVA
df SS MS F Significance F
Regression 4 83680 20920 45 9.27818E-10
Residual 20 9220 461
Total 24 92900
Coefficients Standard Error t Stat P-value Lower 95% Upper 95%
Intercept 138.732 23.581 5.883 0.000 89.542 187.922
w 49.774 6.197 8.032 0.000 36.848 62.700
h -2.719 8.001 -0.340 0.738 -19.409 13.971
H 18.032 2.712 6.650 0.000 12.376 23.688
A -0.00000796 0.016 -0.001 1.000 -0.033 0.033
Multiple Regression prediction of slab thickness (mm)

Predicted  Target  Absolute error  |1-RAVG|
211        200     11              1.05
260        225     35              1.16
234        225     9               1.04
208        200     8               1.04
182        200     18              0.91
256        225     31              1.14
281        275     6               1.02
307        300     7               1.02
332        350     18              0.95
341        350     9               0.97
355        350     5               1.01
352        350     2               1.01
Mean absolute error = 13, |1-RAVG| = 0.03
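The fitted equation can be evaluated directly for any test row; a short sketch using two rows from the testing set:

```python
# Regression equation from section 4.0 for top slab thickness (mm)
def slab_thickness(w, h, H, A):
    return 138.7 + 49.774 * w - 2.719 * h + 18.032 * H - 0.00000796 * A

print(round(slab_thickness(1.5, 1, 0, 1340)))  # 211 (test row 1)
print(round(slab_thickness(3, 3, 4, 1795)))    # 352 (test row 12)
```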
5.0 RESULTS COMPARISON FOR DIFFERENT MODELS

                      3 middle layer nodes         2 middle layer nodes         Multiple Regression
                      Error = 0.01  Error = 0.005  Error = 0.01  Error = 0.005
Mean absolute error   16            16             19            17            13
|1-RAVG|              0.05          0.05           0.05          0.03          0.03

6.0 DISCUSSION
The results of the four Artificial Neural Networks and the Multiple Regression are summarized in the table above. For both the mean absolute error and the 1-ratio average, values close to zero are desired.
Among the Artificial Neural Networks, the minimum mean absolute error of 16 mm was observed with three middle-layer nodes. This may be because the larger number of links tends to fit the training examples more closely. A larger target error also helps avoid shallow local minima.
When comparing the Artificial Neural Networks with Multiple Regression, the lowest mean absolute error was given by the Multiple Regression method. This may be due to the lack of training examples for the Artificial Neural Networks; in general, an ANN should give better results than MR.
ANNEXURES
WinNN32 software interface for the 4:2:1 model with target error 0.01
WinNN32 software interface for the 4:2:1 model with target error 0.005
WinNN32 software interface for the 4:3:1 model with target error 0.01
WinNN32 software interface for the 4:3:1 model with target error 0.005