

www.elsevier.com/locate/foodcont

Food Control 18 (2007) 928–933

A neural network for predicting moisture content of grain drying process using genetic algorithm

Xueqiang Liu, Xiaoguang Chen, Wenfu Wu *, Guilan Peng

Biological and Agricultural Engineering School, Jilin University, Changchun 130025, China

Received 13 January 2006; received in revised form 18 May 2006; accepted 22 May 2006

Abstract

This paper is concerned with optimizing the neural network topology for predicting the moisture content of the grain drying process using a genetic algorithm. A structural modular neural network (SMNN), combining BP neurons and RBF neurons in the hidden layer, was proposed to predict the moisture content of the grain drying process. Inlet air temperature, grain temperature and initial moisture content were considered as the input variables to the neural network. The genetic algorithm is used to select the appropriate network architecture by determining the optimal number of nodes in the hidden layer of the neural network. The number of neurons in the hidden layer was optimized to 6 BP neurons and 10 RBF neurons using the genetic algorithm. A simulation test on moisture content prediction of the grain drying process showed that the SMNN optimized using the genetic algorithm performed well and that the accuracy of the predicted values is excellent.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Grain drying; Predicting; Neural network; Genetic algorithm; Moisture content

1. Introduction

Grain drying is a non-linear process with a long delay. Its main objective is to achieve the desired final moisture content. Over-drying requires excessive energy and can even damage the quality of the dried material, especially in the case of seed. On the other hand, the grain will be vulnerable to mildew if the moisture content remains high. One option is to determine the moisture content in the drying process by measurement, but the accuracy of this approach is not satisfactory due to the technical limitations of the available moisture sensors used for on-line observation. In the case of farm dryers, weather conditions and dust have a great effect on the accuracy as well. Another way to predict the moisture content is to calculate it from the drying air parameters using physically based models, but an accurate model is difficult to establish because the drying process is non-linear with a long delay.

0956-7135/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.

doi:10.1016/j.foodcont.2006.05.010

* Corresponding author. Tel.: +86 431 5691908. E-mail address: [email protected] (W. Wu).

Recently, artificial intelligence methods such as neural networks have been proposed for process control of grain drying.

The artificial neural network (NN) is a well-known tool for solving complex, non-linear biological systems (De Baerdemaeker & Hashimoto, 1994), and it can give reasonable solutions even in extreme cases or in the event of technological faults (Lin & Lee, 1995). Huang and Mujumdar (1993) created a NN to predict the performance of an industrial paper dryer. The NN model by Jay and Oliver (1996) was used for predictive control of a drying process. Trelea, Courtois, and Trystram (1997) used explicit-time and recurrent NNs for modelling the moisture content of thin-layer (5 cm) corn during the drying process, and for wet-milling quality at constant air flow rate and absolute humidity and variable temperature. Thyagarajan, Panda, Shanmugam, Rao, and Ponnavaikko (1997) modelled an air heater plant for a dryer using a NN. Sreekanth, Ramaswamy, and Sablani (1998) predicted psychrometric parameters using various NN models. Kaminski, Strumillo, and Tomczak (1998) used a NN for data smoothing and for modeling material moisture content and temperature. Farkas, Remenyi, and Biro (2000a, 2000b) set up a NN to model moisture distribution in agricultural fixed-bed dryers. It is clear from past literature that NNs are good for modelling drying processes.

Fig. 1. Structural modular neural network architecture (input layer; hidden layer of BP and RBF nodes; output layer).

The selection of an appropriate NN topology to predict the drying process is important in terms of model accuracy and model simplicity. The architecture of a NN greatly influences its performance. Many algorithms for finding an optimized NN structure are derived from specific data in a specific area of application (Blanco, Delgado, & Pegalajar, 2000; Boozarjomehry & Svrcek, 2001), but predicting the optimal NN topology is a difficult task, since choosing the neural architecture requires some a priori knowledge of grain drying and/or many trial-and-error runs.

In this paper, we present a genetic algorithm capable of obtaining not only the trained optimal topology of a neural network but also the least number of connections necessary for solving the problem. In the following sections, the techniques used in this paper are briefly reviewed, and the design of the NN system for predicting the grain drying process is discussed in detail. A grain drying process is used to demonstrate the effectiveness of the neural network. The final section draws conclusions regarding this study.

2. Materials and methods

2.1. Neural network system

The back-propagation neural network (BPNN) is a multilayer feed-forward network with a back-propagation learning algorithm. The BPNN is characterized by hidden neurons that have a global response. The commonly used transfer function in the BPNN is the sigmoid function

f(s_j) = 1 / (1 + exp(−s_j))        (1)

where s_j is the weighted sum of inputs coming to the jth node.
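For concreteness, Eq. (1) can be written as a few lines of code (a sketch; the function and variable names are ours, not from the paper):

```python
import math

def bp_activation(s_j: float) -> float:
    """Sigmoid transfer function of Eq. (1): f(s_j) = 1 / (1 + exp(-s_j)),
    where s_j is the weighted sum of inputs reaching the jth BP hidden node."""
    return 1.0 / (1.0 + math.exp(-s_j))
```

The response is global: it is bounded in (0, 1) but nonzero for every input, which is what gives the BP part its generalization over the whole input space.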

Usually, there is only one hidden layer in a BPNN, as the availability of such a layer is sufficient to produce the set of desired output patterns for all of the training vector pairs.

The radial basis function neural network (RBFNN) belongs to the group of kernel function nets that utilize simple kernel functions as the hidden neurons, distributed in different neighborhoods of the input space, and whose responses are essentially local in nature. The RBF produces a significant nonzero response only when the input falls within a small localized region of the input space. The most common transfer function in an RBFNN is the Gaussian activation function

φ_k = exp( − Σ_{i=1}^{n} (x_i − C_{ki})² / b_k² ),   k = 1, 2, …, q        (2)

where x_i is the ith input variable; C_{ki} is the center of the kth RBF unit for input variable i; and b_k² is the width of the kth RBF unit.
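Eq. (2) can likewise be sketched directly (our own names; the center and squared width are passed in explicitly):

```python
import math

def rbf_activation(x, center, width_sq):
    """Gaussian transfer function of Eq. (2):
    phi_k = exp(-sum_i (x_i - C_ki)^2 / b_k^2).
    x and center are equal-length sequences; width_sq is the squared width b_k^2."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / width_sq)
```

Unlike the sigmoid node, this response peaks at 1 when the input coincides with the center and decays quickly with distance, which is the local behavior described above.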

Because of the different response characteristics of the hidden neurons in these two kinds of neural networks, interpolation problems can be solved more efficiently with a BPNN, while extrapolation problems are better dealt with by an RBFNN.

Since the different properties of the BPNN and the RBFNN are complementary, Jiang, Zhao, and Ren (2002) designed a structural modular neural network (SMNN) with a genetic algorithm and showed that the SMNN constructed a better input–output mapping both locally and globally. The SMNN combines the generalization capability of the BPNN and the computational efficiency of the RBFNN in one network structure. Its architecture, shown in Fig. 1, has three layers: the input layer, which takes in the input data; the hidden layer, which comprises both the sigmoid neurons and the Gaussian neurons; and the output layer, where a linear function is used to combine the BP part and the RBF part.
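A forward pass through such a three-layer SMNN can be sketched as follows. This is our own minimal reading of the architecture, not the authors' code; for simplicity it assumes one shared squared width for all RBF nodes, and the parameter layout (per-node weight vectors, concatenated hidden responses) is ours:

```python
import math

def smnn_forward(x, bp_weights, bp_biases, rbf_centers, rbf_width_sq,
                 out_weights, out_bias):
    """Forward pass of a three-layer SMNN (a sketch).

    x            : input vector
    bp_weights   : one weight vector per BP (sigmoid) hidden node
    bp_biases    : one bias per BP hidden node
    rbf_centers  : one center vector per RBF (Gaussian) hidden node
    rbf_width_sq : squared width b^2, here assumed shared by all RBF nodes
    out_weights  : linear output weights over all BP + RBF hidden responses
    """
    hidden = []
    for w, b in zip(bp_weights, bp_biases):            # sigmoid (BP) part
        s = sum(wi * xi for wi, xi in zip(w, x)) + b
        hidden.append(1.0 / (1.0 + math.exp(-s)))
    for c in rbf_centers:                              # Gaussian (RBF) part
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        hidden.append(math.exp(-d2 / rbf_width_sq))
    # linear output layer combines the BP part and the RBF part
    return sum(wh * h for wh, h in zip(out_weights, hidden)) + out_bias
```

The hidden layer is simply the concatenation of the two kinds of responses, so the linear output layer can weight the global (BP) and local (RBF) contributions independently.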

In this research, we adapt their SMNN for predicting the moisture content of the grain drying process. The numbers of neurons in the input and output layers are given by the numbers of input and output variables in the process. The inputs of the structure can be variables such as inlet moisture content, grain temperatures, and air temperatures, which are easily measurable. The output of the system is the moisture content of the grain.

2.2. Design of the structural modular neural network using GA

The network configuration of the SMNN can be transformed into two subset selection problems: one is the number of BP hidden neurons; the other is the nc distinct terms which are selected from the N data samples as the centers of the RBF hidden neurons.

There are a few types of representation schemes available for encoding the neural network architecture, such as binary coding and Gray coding. In the present work, the chromosome in the GA's population is divided into two parts. One part is a fixed-length chromosome that contains the number of BP hidden neurons in binary form. The other part is a variable-length chromosome (i.e. real coding) that represents the number and positions of the RBF hidden neurons. The centers of the RBF part are randomly selected data points from the training data set, and the center locations proposed here are also restricted to be data samples. Each data sample x_i is labeled with index i (i = 1, 2, …, N), so the RBF neurons can be coded as a chromosome with integer values ranging from 1 to N. A range is given for the string length, and the string should only contain distinct terms. So the chromosome is C_i = B_i ∪ R_i. For example

B1 = 0 1 1 0 1
B2 = 0 1 0 1 0
R1 = 13 15 10 16 17 11
R2 = 17 11 12 15 19 14 13 20
C1 = 0 1 1 0 1 13 15 10 16 17 11
C2 = 0 1 0 1 0 17 11 12 15 19 14 13 20

where B_i represents the BP part binary-coded chromosome; R_i represents the RBF part real-coded chromosome; and C_i is the full-length chromosome.

According to the validation method, the objective function used in this paper is the root mean squared error (RMSE) of the test data, which have not been used in the training of the neural network. The genetic operators used in this paper are as follows.
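The two-part encoding just described can be sketched in code before turning to the operators. This is a hypothetical helper (names are ours); in the paper each decoded chromosome is additionally trained and scored by its test-set RMSE:

```python
import random

def decode_bp(bits):
    """Binary BP part -> number of BP hidden neurons, e.g. [0,1,1,0,1] -> 13."""
    return int("".join(map(str, bits)), 2)

def random_chromosome(n_samples, bp_bits=5, r_min=2, r_max=20):
    """Ci = Bi U Ri: a fixed-length binary BP part plus a variable-length
    RBF part of distinct sample indices (RBF centers are training data points)."""
    b = [random.randint(0, 1) for _ in range(bp_bits)]
    r = random.sample(range(1, n_samples + 1), random.randint(r_min, r_max))
    return b, r
```

Using `random.sample` keeps the RBF indices distinct by construction, matching the requirement that the string contain only distinct terms.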

2.2.1. Crossover

For two chromosomes (parents) selected from the population, the crossover is done in two steps: (1) the binary part string representing the BP neurons (B_i) and the real-number-encoded part string representing the RBF neurons (R_i) undergo crossover separately; (2) the whole chromosome undergoes crossover, in which the BP string and the whole RBF string can be switched according to a probability distribution. The B_i part uses the traditional single-point crossover: a point is selected between 1 and L − 1, where L is the string length of the BP part. Both B_i strings are severed at this point and the segments to the right of this point are switched. The crossover point is chosen with a uniform probability distribution. For example, if the crossover site is 3, after carrying out the first-step crossover, the above B1 and B2 become

B1 = 0 1 1 1 0;  B2 = 0 1 0 0 1

In the first step, the R_i part uses a variable-length crossover similar to uniform crossover. First, the common terms in both RBF part parents are found, and two binary template strings are created to mark the common terms in both parents: if the corresponding term is a common term, the bit in the template string is set to 1, otherwise 0. Secondly, a random number of distinct terms are selected from the RBF parents and exchanged with each other. For example, the above R1 and R2 undergo the first-step crossover as follows.

First create two template strings to mark the parents:

T1 = 1 1 0 0 1 1
T2 = 1 1 0 1 0 0 1 0

Then exchange two distinct terms from parent 1 with two distinct terms from the end of parent 2, keeping the common terms unchanged:

R1 = 13 15 14 20 17 11
R2 = 17 11 12 15 19 10 13 16

After the first-step crossover, the two parent chromosomes are changed to

C1 = 0 1 1 1 0 13 15 14 20 17 11
C2 = 0 1 0 0 1 17 11 12 15 19 10 13 16

To increase diversity, the second-step crossover is done finally. The above two chromosomes then become

C1 = 0 1 1 1 0 17 11 12 15 19 10 13 16
C2 = 0 1 0 0 1 13 15 14 20 17 11
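The two separate crossovers of step (1) can be sketched as follows. The RBF-part routine is our own reading of the template-string scheme (function names, and the fixed number of swapped terms, are ours), not the authors' implementation:

```python
import random

def crossover_bp(b1, b2):
    """Single-point crossover on the binary BP parts."""
    point = random.randint(1, len(b1) - 1)
    return b1[:point] + b2[point:], b2[:point] + b1[point:]

def crossover_rbf(r1, r2, n_swap=2):
    """Variable-length crossover on the RBF parts: common terms stay in
    place (the template-string 1s), and a few distinct terms are exchanged
    between the parents (the template-string 0s)."""
    common = set(r1) & set(r2)
    d1 = [t for t in r1 if t not in common]   # distinct terms of parent 1
    d2 = [t for t in r2 if t not in common]   # distinct terms of parent 2
    k = min(n_swap, len(d1), len(d2))
    for _ in range(k):
        i, j = random.randrange(len(d1)), random.randrange(len(d2))
        d1[i], d2[j] = d2[j], d1[i]
    # rebuild each child: keep common terms where they were,
    # fill the remaining positions from the (partly swapped) distinct lists
    it1, it2 = iter(d1), iter(d2)
    c1 = [t if t in common else next(it1) for t in r1]
    c2 = [t if t in common else next(it2) for t in r2]
    return c1, c2
```

Because only distinct terms are exchanged and common terms are left untouched, each child keeps its parent's length and never acquires duplicate center indices.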

2.2.2. Mutation, deletion and addition

As with the crossover, for each chromosome selected from the population, the string of BP neurons and the string of RBF neurons mutate separately. A simple point mutation is used in the B_i part, and the operator exchanges, with a given probability, each term in the R_i part with a randomly selected term from the corresponding complementary subset of the string. For example, the above C1 becomes the following after mutation:

C1 = 0 1 0 1 0 17 11 12 15 19 48 13 16

Deletion and addition are applied only to the RBF part, to alleviate the premature loss of allele diversity caused by the variable-length crossover in the RBF string. The deletion and addition operators are applied to each selected string with equal probability. For deletion, a random number of terms are removed from the RBF string, beginning at a randomly selected position. For addition, a random number of terms are appended to the end of the RBF string; the newly added terms are randomly chosen from the complementary subset of the selected string.
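The three RBF-part operators can be sketched as below (a hedged sketch under our own names and defaults; the string-length bounds 2 and 20 are taken from the experimental setup later in the paper):

```python
import random

def mutate_rbf(r, n_samples, p=0.02):
    """Point mutation: with probability p, replace a term with a randomly
    chosen index from the complementary subset (indices not in the string)."""
    pool = [i for i in range(1, n_samples + 1) if i not in r]
    out = list(r)
    for i in range(len(out)):
        if pool and random.random() < p:
            j = random.randrange(len(pool))
            out[i], pool[j] = pool[j], out[i]   # swap keeps terms distinct
    return out

def delete_rbf(r, r_min=2):
    """Deletion: remove a random number of terms starting at a random position."""
    if len(r) <= r_min:
        return list(r)
    n = random.randint(1, len(r) - r_min)
    start = random.randrange(len(r) - n + 1)
    return r[:start] + r[start + n:]

def add_rbf(r, n_samples, r_max=20):
    """Addition: append randomly chosen terms from the complementary subset."""
    pool = [i for i in range(1, n_samples + 1) if i not in r]
    n = random.randint(0, min(r_max - len(r), len(pool)))
    return list(r) + random.sample(pool, n)
```

All three operators preserve the distinct-term invariant of the RBF string, since replacements and additions are drawn only from indices not already present.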

The GA used to evolve the SMNN structure is as follows:

(1) Randomly choose an initial population of p individual chromosomes C_i (i = 1, 2, …, p). Each chromosome defines a network with b BP hidden neurons and r RBF hidden neurons with their associated RBF center locations.

(2) Decode each chromosome; each chromosome represents one network architecture. Use the Levenberg–Marquardt algorithm to train the network and compute the RMSE values of the training data and the testing data for each chromosome C_i. Set the number of generations Ng for the evolution. Set counter g = 0.

(3) Take the RMSE of the testing data as the fitness value f(C_i) (i = 1, 2, …, p) of each individual chromosome. Rank the individuals in the population by fitness.

Fig. 3. The grain temperatures (T1–T8) and drying-air temperatures (TU, TM, TL).

(4) Set counter g = 1 and apply the genetic operators to create offspring:
(a) Use roulette wheel selection to produce the reproduction pool.
(b) Apply the two-step crossover with a given probability to two parent chromosomes in the reproduction pool, creating two offspring.
(c) Apply mutation with a given probability to every bit of the offspring.
(d) Apply deletion and addition with a given probability to the RBF part strings of the offspring, producing the new generation.
(e) Decode each chromosome in the new generation. Train each network and compute the new RMSE values of the training data and the testing data for each new chromosome.
(f) Set g = g + 1; if g > Ng, stop. Otherwise, go to step (a).
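Steps (1)–(4) amount to a standard generational GA loop, sketched below in Python. This is our own skeleton: `fitness`, `select`, `crossover` and `mutate` are placeholders for the paper's test-set RMSE of each decoded, trained SMNN and for the operators described above:

```python
def evolve(population, fitness, select, crossover, mutate, n_generations):
    """Generic generational GA loop following steps (1)-(4): rank the
    population by fitness (RMSE: lower is better), fill a reproduction pool
    via the supplied `select` (roulette wheel in the paper), apply the
    operators pairwise, and repeat for a fixed number of generations."""
    for _ in range(n_generations):
        ranked = sorted(population, key=fitness)          # step (3): rank by fitness
        pool = [select(ranked) for _ in population]       # step (4a): reproduction pool
        nxt = []
        for a, b in zip(pool[::2], pool[1::2]):           # steps (4b)-(4d)
            c1, c2 = crossover(a, b)
            nxt += [mutate(c1), mutate(c2)]
        population = nxt[:len(population)] or ranked      # step (4e): new generation
    return min(population, key=fitness)                   # best network found
```

With the real operators plugged in, each chromosome would be decoded, trained with Levenberg–Marquardt, and scored inside `fitness`, exactly as step (4e) prescribes.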

Fig. 4. The inlet and outlet moisture contents for training and testing the neural network.

2.3. Database preparation

The experiment was carried out on a tower-type mixed-flow grain dryer with a height of 26 m, a cross-sectional area of 16 m², and a solid flow rate from 2.4 to 4.0 m/h (see Fig. 2). The dryer is quadrate in shape, with the air in the drying section flowing through the grain column from the air plenum to the ambient, and in the reverse direction in the cooling section. A grain turn-flow is located midway in the drying column.

The controller of the dryer consists of the temperature sensors, the data acquisition system, and a personal computer. The PC communicates with the sensors and the grain-discharge motor through a data acquisition card. The speed (rpm) of the grain-discharge motor is proportional to a 0–5 V input to the motor driver.

Fig. 2. Schematic of the tower-type mixed-flow grain dryer (T1–T8 are the grain temperatures).

In order to study the dynamics of grain drying, about 60 h of data were collected while the dryer operated under manual control, with an air flow rate from 0.27 to 0.42 m/s, a surrounding temperature from −27 to −10 °C, and a drying-air temperature from 80 to 125 °C. One data set per hour was chosen, so 60 data sets were available for training and testing the neural network. Figs. 3 and 4 show all the input data used for training the NN.

3. Results and discussion

The experiments were carried out with the SMNN algorithm proposed in this paper. For comparison purposes, the results of using the evolved BPNN alone and the evolved RBFNN alone were also calculated.

In this paper, the inlet moisture content (Min), the grain temperatures (T1–T8) and the drying-air temperatures (TU, TM, TL) are taken as the input parameters, and the outlet moisture content (Mout) as the output parameter. Thus the SMNN aims to find a mapping f such that Mout = f(T1, T2, T3, T4, T5, T6, T7, T8, TU, TM, TL, Min). The SMNN used here has 12 neurons in the input layer and one neuron in the output layer. The number of neurons in the hidden layer is determined by the GA proposed in this paper.

The numbers of data sets used for neural network training and testing are 40 and 20, respectively. The initial chromosome length of the BP part is 5. For the RBF part, the minimum string length is defined as 2 and the maximum string length is 20. The population size is chosen as 20. The probabilities of crossover and mutation are 0.5 and 0.02, respectively, and the probability of deletion and addition is taken as 0.04. These GA parameters were selected after a series of trial-and-error runs. Since the training set contains 40 distinct terms, the search space contains 2^5 · Σ_{i=2}^{20} C(40, i) ≈ 1.98 × 10^13 different networks. The generation number is set to 50.

Table 1. MSE of grain drying process prediction

         Number of hidden neurons   MSE of training data   MSE of testing data
SMNN     6 BP, 10 RBF               0.0298                 0.0312
BPNN     22                         0.0304                 0.0368
RBFNN    42                         0.0309                 0.0336

Fig. 5. The average and minimum MSE for the testing data in each generation.

Fig. 6. The MSE for the training and testing data.

Fig. 7. The predicted outlet moisture contents by the SMNN. Solid line: predicted data; dashed line: measured data.

The evolution of the average and minimum MSE of the testing data is shown in Fig. 5, where the average MSE is the average over all chromosomes in each generation and the minimum MSE is the smallest in the whole population. The best result is the record of the minimum value for one particular GA run. Fig. 5 indicates that the average MSE decreases through the evolution and that the best-generalizing network emerged at the 32nd generation. For the best-performing SMNN, the MSE values on the training data and the testing data are 0.0298 and 0.0312, as shown in Fig. 6. The algorithm automatically searches for the appropriate network size according to the given objective. The best SMNN, found at the 32nd generation, has the fewest neurons (6 BP neurons and 10 RBF neurons). The comparison between the evolved SMNN and the evolved BPNN and RBFNN is given in Table 1. Even though the generalization errors of the evolved BPNN and RBFNN are similar to that of the evolved SMNN, the evolved SMNN performs slightly better on the testing data, and its complexity is significantly reduced compared with the other two networks.
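The search-space size quoted above, 2^5 · Σ_{i=2}^{20} C(40, i) ≈ 1.98 × 10^13, can be checked in a couple of lines (32 binary BP strings times the number of ways to pick between 2 and 20 distinct RBF centers from 40 training samples):

```python
from math import comb

# 2^5 BP strings times sum_{i=2}^{20} C(40, i) RBF center subsets
space = 2 ** 5 * sum(comb(40, i) for i in range(2, 21))
assert round(space / 1e13, 2) == 1.98   # matches the figure quoted in the text
```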

The predicted results from the simulation test on moisture content prediction of the grain drying process based on the SMNN are shown in Fig. 7. The figure shows that the accuracy of the predicted values is excellent.

4. Conclusions

As would be expected, there was a fairly strong influence of the NN topology on the accuracy of the estimation. Therefore, the selection of the most appropriate NN topology was the main issue. In this paper, an SMNN has been proposed which comprises sigmoid and Gaussian neurons in the hidden layer of a feed-forward neural network. A GA is used to select the appropriate network architecture by determining the optimal number of nodes in the hidden layer of the SMNN. Since the GA is a global search method, it is less likely to be trapped at local minima. It has been demonstrated that the proposed SMNN algorithm can automatically determine the appropriate network structure, and the experimental results show good performance of the SMNN over the BPNN and the RBFNN.

This study also shows that neural network modeling can be used to obtain good accuracy of moisture content prediction during the grain drying process over a wide experimental range. The technological interest of this kind of modeling lies in the fact that it is elaborated without any preliminary assumptions about the underlying mechanisms. Neural networks and genetic algorithms can thus be applied to the on-line prediction and control of the drying process.

Acknowledgements

This work was elaborated within the project of "Precise Drying System of Maize", No. 05EFN217100439, funded by the Ministry of Science and Technology of the People's Republic of China.

References

Blanco, A., Delgado, M., & Pegalajar, M. C. (2000). A genetic algorithm to obtain the optimal recurrent neural network. International Journal of Approximate Reasoning, 23, 67–83.

Boozarjomehry, R. B., & Svrcek, W. Y. (2001). Automatic design of neural network structures. Computers and Chemical Engineering, 25, 1075–1088.

De Baerdemaeker, J., & Hashimoto, Y. (1994). Speaking fruit approach to the intelligent control of the storage system. In Proceedings of the 12th CIGR World Congress on Agricultural Engineering, Vol. 2, Milan, Italy, 29 August–1 September 1994, pp. 1493–1500.

Farkas, I., Remenyi, P., & Biro, A. (2000a). A neural network topology for modelling grain drying. Computers and Electronics in Agriculture, 26, 147–158.

Farkas, I., Remenyi, P., & Biro, A. (2000b). Modelling aspects of grain drying with a neural network. Computers and Electronics in Agriculture, 29, 99–113.

Huang, B., & Mujumdar, A. S. (1993). Use of neural network to predict industrial dryer performance. Drying Technology, 11(3), 525–541.

Jay, S., & Oliver, T. N. (1996). Modelling and control of drying processes using neural networks. In Proceedings of the tenth international drying symposium (IDS'96), Kraków, Poland, 30 July–2 August, Vol. B, pp. 1393–1400.

Jiang, N., Zhao, Z., & Ren, L. (2002). Design of structural modular neural networks with genetic algorithm. Advances in Engineering Software, 34, 17–24.

Kaminski, W., Strumillo, P., & Tomczak, E. (1998). Neurocomputing approaches to modelling of drying process dynamics. Drying Technology, 16(6), 967–992.

Lin, C. T., & Lee, C. S. G. (1995). Neural Fuzzy Systems. Englewood Cliffs, NJ: Prentice Hall.

Sreekanth, S., Ramaswamy, H. S., & Sablani, S. (1998). Prediction of psychrometric parameters using neural networks. Drying Technology, 16(3–5), 825–837.

Thyagarajan, T., Panda, R. C., Shanmugam, J., Rao, P. G., & Ponnavaikko, M. (1997). Development of ANN model for non-linear drying process. Drying Technology, 15(10), 2527–2540.

Trelea, I. C., Courtois, F., & Trystram, G. (1997). Dynamic models for drying and wet-milling quality degradation of corn using neural networks. Drying Technology, 15(3–4), 1095–1102.