

REFERENCES

1. T.J. Abatzoglou and J.M. Mendel. Constrained total least squares. Proc. ICASSP, pp. 1485–1488, 1987.
2. T.J. Abatzoglou, J.M. Mendel, and G.A. Harada. The constrained total least squares technique and its application to harmonic superresolution. IEEE Trans. Signal Process., 39(5):1070–1087, May 1991.
3. R.J. Adcock. A problem in least squares. Analyst, 5:53–54, 1878.
4. P. Baldi and K. Hornik. Neural networks and principal component analysis: learning from examples without local minima. Neural Netw., 2:52–58, 1989.
5. S. Barbarossa, E. Daddio, and G. Galati. Comparison of optimum and linear prediction technique for clutter cancellation. IEE Proc. F, 134:277–282, 1987.
6. E. Barnard. Optimization for training neural nets. IEEE Trans. Neural Netw., 3(2):232–240, 1992.
7. R. Battiti. Accelerated backpropagation learning: two optimization methods. Complex Syst., 3:331–342, 1989.
8. S. Beghelli, R.P. Guidorzi, and U. Soverini. Structural identification of multivariable systems by the eigenvector method. Int. J. Control, 46:671–678, 1987.
9. D.S. Bernstein. Matrix Mathematics. Princeton University Press, Princeton, NJ, 2005.
10. C.M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, UK, 1995.
11. N.K. Bose, H.C. Kim, and H.M. Valenzuela. Recursive total least squares algorithm for image reconstruction. Multidimens. Syst. Signal Process., 4:253–268, 1993.
12. R. Branham. Multivariate orthogonal regression in astronomy. Celestial Mech. Dyn. Astron., 61(3):239–251, Sept. 1995.
13. R.H. Byrd and J. Nocedal. A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal., 26(3):727–739, June 1989.
14. F. Chatelin. Eigenvalues of Matrices. Wiley, New York, 1993.
15. Y. Chauvin. Principal component analysis by gradient descent on a constrained linear Hebbian cell. Proceedings of the IEEE International Conference on Neural Networks, Washington, DC, Vol. I, pp. 373–380. IEEE Press, New York, 1989.



16. H. Chen and R.W. Liu. Adaptive distributed orthogonalization processing for principal component analysis. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 1992.
17. H. Chen and R.W. Liu. An on-line unsupervised learning machine for adaptive extraction. IEEE Trans. Circuits Syst. II, 41:87–98, 1994.
18. T. Chen, S. Amari, and Q. Lin. A unified algorithm for principal and minor components extraction. Neural Netw., 11:385–390, 1998.
19. W. Chen, J. Zhang, J. Ma, and B. Lu. Application of the total least squares method to the determination of preliminary orbit. Chin. Astron. Astrophys., 30(4):431–436, Oct.–Dec. 2006.
20. C.L. Cheng and J.W. Van Ness. Robust Errors-in-Variables Regression. Technical Report 179. Programs in Mathematical Sciences, University of Texas, Dallas, TX, 1987.
21. A. Cichocki and R. Unbehauen. Neural Networks for Optimization and Signal Processing. Wiley, New York, 1993.
22. A. Cichocki and R. Unbehauen. Simplified neural networks for solving linear least squares and total least squares problems in real time. IEEE Trans. Neural Netw., 5(6):910–923, Nov. 1994.
23. J.M. Cioffi. Limited-precision effects in adaptive filtering. IEEE Trans. Circuits Syst., 34(7):821–833, July 1987.
24. G. Cirrincione. A Neural Approach to the Structure from Motion Problem. Ph.D. dissertation, LIS INPG Grenoble, Dec. 1998.
25. G. Cirrincione. TLS and constrained TLS neural networks in computer vision. 3rd International Workshop on TLS and Errors-in-Variables Modeling, Leuven/Heverlee, Belgium, Aug. 27–29, 2001.
26. G. Cirrincione. Constrained total least squares for the essential/fundamental matrix estimation, a neural approach. Int. J. Comput. Vision (submitted in 2010).
27. G. Cirrincione and M. Cirrincione. Linear system identification using the TLS EXIN neuron. NEURAP 98, Marseille, France, Mar. 1998.
28. G. Cirrincione and M. Cirrincione. Robust total least squares by the nonlinear MCA EXIN neuron. Proceedings of the IEEE International Joint Symposia on Intelligence and Systems, Rockville, MD, pp. 295–300, May 1998.
29. G. Cirrincione and M. Cirrincione. A robust neural approach for the estimation of the essential parameters in computer vision. Int. J. Artif. Intell. Tools, 8(3), Sept. 1999.
30. G. Cirrincione, M. Cirrincione, J. Herault, and S. Van Huffel. The MCA EXIN neuron for the minor component analysis. IEEE Trans. Neural Netw., 13(1):160–187, Jan. 2002.
31. G. Cirrincione, M. Cirrincione, and S. Van Huffel. The GeTLS EXIN neuron for linear regression. IJCNN, July 2000.
32. G. Cirrincione, M. Cirrincione, and S. Van Huffel. The essential/fundamental matrix estimation as a constrained total least squares problem: Theory. CVPRIP 2000, Atlantic City, NJ, Jan.–Feb. 2000.
33. G. Cirrincione, M. Cirrincione, and S. Van Huffel. The GeTLS EXIN neuron for linear regression. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, July 24–27, 2000, Vol. 6, pp. 285–289.


34. G. Cirrincione, M. Cirrincione, and S. Van Huffel. A robust neural approach against outliers for the structure from motion problem in computer vision. Conference on Statistics with Deficient Data, Munich, Germany, 2000.
35. G. Cirrincione, G. Ganesan, K.V.S. Hari, and S. Van Huffel. Direct and neural techniques for the data least squares problem. MTNS 2000, June 2000.
36. G. Cirrincione and M. Cirrincione. Linear system identification by using the TLS EXIN neuron. Neurocomputing, 28(1–3):53–74, Oct. 1999.
37. M. Cirrincione, M. Pucci, and G. Cirrincione. Estimation of the electrical parameters of an induction motor with the TLS EXIN neuron. Proceedings of the Fourth IEEE International Caracas Conference on Devices, Circuits and Systems, Apr. 17–19, 2002, pp. I034-1 to I034-8.
38. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. A new TLS-based MRAS speed estimation with adaptive integration for high performance induction machine drives. Conference Record of the Industry Applications Conference, 38th IAS Annual Meeting, Vol. 1, pp. 140–151, Oct. 12–16, 2003.
39. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. An adaptive speed observer based on a new total least-squares neuron for induction machine drives. Conference Record of the 2004 IEEE Industry Applications Conference, 39th IAS Annual Meeting, Vol. 2, pp. 1350–1361, Oct. 3–7, 2004.
40. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. An enhanced neural MRAS sensorless technique based on minor-component-analysis for induction motor drives. Proceedings of the IEEE International Symposium on Industrial Electronics, 2005, Vol. 4, pp. 1659–1666, June 20–23, 2005.
41. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. Sensorless control of induction motor drives by new linear neural techniques. 12th International Power Electronics and Motion Control Conference, pp. 1820–1829, Aug. 2006.
42. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. A new experimental application of least-squares techniques for the estimation of the induction motor parameters. In Conference Record of the Industry Applications Conference, 2002, 37th IAS Annual Meeting, Vol. 2, pp. 1171–1180, Oct. 13–18, 2002.
43. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. A new experimental application of least-squares techniques for the estimation of the induction motor parameters. IEEE Trans. Ind. Appl., 39(5):1247–1256, Sept.–Oct. 2003.
44. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. A new TLS-based MRAS speed estimation with adaptive integration for high performance induction motor drives. IEEE Trans. Ind. Appl., 40(4):1116–1137, July–Aug. 2004.
45. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. An adaptive speed observer based on a new total least-squares neuron for induction machine drives. IEEE Trans. Ind. Appl., 42(1):89–104, Jan.–Feb. 2006.
46. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. Sensorless control of induction machines by a new neural algorithm: the TLS EXIN neuron. IEEE Trans. Ind. Electron., 54(1):127–149, Feb. 2007.
47. M. Cirrincione, M. Pucci, G. Cirrincione, and G.-A. Capolino. Sensorless control of induction motors by reduced order observer with MCA EXIN+ based adaptive speed estimation. IEEE Trans. Ind. Electron., 54(1):150–166, Feb. 2007.
48. P. Comon and G.H. Golub. Tracking a few extreme singular values and vectors in signal processing. Proc. IEEE, 78:1327–1343, 1990.


49. Y. Le Cun, P.Y. Simard, and B. Pearlmutter. Automatic learning rate maximization by on-line estimation of the Hessian's eigenvectors. In S.J. Hanson, J.D. Cowan, and C.L. Giles, eds., Advances in Neural Information Processing Systems, Vol. 5, pp. 156–163. Morgan Kaufmann, San Mateo, CA, 1993.
50. C.E. Davila. An efficient recursive total least squares algorithm for FIR adaptive filtering. IEEE Trans. Signal Process., 42:268–280, 1994.
51. R.D. Degroat and E. Dowling. The data least squares problem and channel equalization. IEEE Trans. Signal Process., 41(1):407–411, Jan. 1993.
52. A.J. Van der Veen and A. Paulraj. An analytical constant modulus algorithm. IEEE Trans. Signal Process., pp. 1136–1155, May 1996.
53. K.I. Diamantaras and S.Y. Kung. An unsupervised neural model for oriented principal component. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1049–1052, 1991.
54. C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1:211–218, 1936.
55. S.E. Fahlman. Faster-learning variations on back propagation: an empirical study. In D. Touretzky, G.E. Hinton, and T.J. Sejnowski, eds., Proceedings of the 1988 Connectionist Models Summer School, pp. 38–51. Morgan Kaufmann, San Mateo, CA, 1988.
56. D.Z. Feng, Z. Bao, and L.C. Jiao. Total least mean squares algorithm. IEEE Trans. Signal Process., 46(8):2122–2130, Aug. 1998.
57. K.V. Fernando and H. Nicholson. Identification of linear systems with input and output noise: the Koopmans–Levin method. IEE Proc. D, 132:30–36, 1985.
58. G.W. Fisher. Matrix analysis of metamorphic mineral assemblages and reactions. Contrib. Mineral. Petrol., 102:69–77, 1989.
59. R. Fletcher. Practical Methods of Optimization, 2nd ed. Wiley, New York, 1987.
60. W.A. Fuller. Measurement Error Models. Wiley, New York, 1987.
61. G. Cirrincione, S. Van Huffel, A. Premoli, and M.L. Rastello. An iteratively re-weighted total least-squares algorithm for different variances in observations and data. In V.P. Ciarlini, M.G. Cox, E. Filipe, F. Pavese, and D. Richter, eds., Advanced Mathematical and Computational Tools in Metrology, pp. 78–86. World Scientific, Hackensack, NJ, 2001.
62. P.P. Gallo. Consistency of regression estimates when some variables are subject to error. Commun. Stat. Theory Methods, 11:973–983, 1982.
63. K. Gao, M.O. Ahmad, and M.N. Swamy. Learning algorithm for total least-squares adaptive signal processing. Electron. Lett., 28(4):430–432, Feb. 1992.
64. K. Gao, M.O. Ahmad, and M.N. Swamy. A constrained anti-Hebbian learning algorithm for total least-squares estimation with applications to adaptive FIR and IIR filtering. IEEE Trans. Circuits Syst. II, 41(11):718–729, Nov. 1994.
65. C.L. Giles and T. Maxwell. Learning, invariance, and generalization in high order neural networks. Appl. Opt., 26:4972–4978, 1987.
66. P.E. Gill, W. Murray, and M.H. Wright. Practical Optimization. Academic Press, New York, 1980.
67. R.D. Gitlin, J.E. Mazo, and M.G. Taylor. On the design of gradient algorithms for digitally implemented adaptive filters. IEEE Trans. Circuits Syst., 20, Mar. 1973.


68. R.D. Gitlin, H.C. Meadors, and S.B. Weinstein. The tap-leakage algorithm: an algorithm for the stable operation of a digitally implemented, fractionally spaced adaptive equalizer. Bell Syst. Tech. J., 61(8):1817–1839, Oct. 1982.
69. L.J. Gleser. Calculation and simulation in errors-in-variables regression problems. Mimeograph Ser. 78-5. Statistics Department, Purdue University, West Lafayette, IN, 1978.
70. L.J. Gleser. Estimation in a multivariate "errors in variables" regression model: large sample results. Ann. Stat., 9:24–44, 1981.
71. G.H. Golub. Some modified eigenvalue problems. SIAM Rev., 15:318–344, 1973.
72. G.H. Golub, A. Hoffman, and G.W. Stewart. A generalization of the Eckart–Young–Mirsky matrix approximation theorem. Linear Algebra Appl., 88–89:322–327, 1987.
73. G.H. Golub and R.J. Leveque. Extensions and uses of the variable projection algorithm for solving nonlinear least squares problems. Proceedings of the 1979 Army Numerical Analysis and Computers Conference, White Sands Missile Range, White Sands, NM, pp. 1–12, 1979.
74. G.H. Golub and C.F. Van Loan. An analysis of the total least squares problem. SIAM J. Numer. Anal., pp. 883–893, 1980.
75. G.H. Golub and C.F. Van Loan. Matrix Computations, 2nd ed. Johns Hopkins University Press, Baltimore, 1989.
76. G.H. Golub and C. Reinsch. Singular value decomposition and least squares solutions. Numer. Math., 14:403–420, 1970.
77. J.W. Griffiths. Adaptive array processing, a tutorial. IEE Proc. F, 130:3–10, 1983.
78. R. Haimi-Cohen and A. Cohen. Gradient-type algorithms for partial singular value decomposition. IEEE Trans. Pattern Anal. Machine Intell., 9(1):137–142, Jan. 1987.
79. H.R. Hall and R.J. Bernhard. Total least squares solutions to acoustic boundary element models. In R.J. Bernhard and R.F. Keltie, eds., Proceedings of the International Symposium on Numerical Techniques in Acoustic Radiation, ASME Winter Annual Meeting, San Francisco, Vol. 6, pp. 145–152. American Society of Mechanical Engineers, New York, 1989.
80. F.R. Hampel, E.M. Ronchetti, P.J. Rousseeuw, and W.A. Stahel. Robust Statistics: The Approach Based on Influence Functions. Wiley, New York, 1987.
81. M. Hassoun. Fundamentals of Artificial Neural Networks. MIT Press, Cambridge, MA, 1995.
82. S. Haykin. Adaptive Filter Theory, 2nd ed. Prentice Hall, Englewood Cliffs, NJ, 1991.
83. S. Haykin. Neural Networks: A Comprehensive Foundation. IEEE Press, Piscataway, NJ, 1994.
84. U. Helmke and J.B. Moore. Optimization and Dynamical Systems. Communications and Control Engineering. Springer-Verlag, London, 1994.
85. M. Hestenes. Conjugate Direction Methods in Optimization. Springer-Verlag, New York, 1980.
86. M.R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand., 49(6):409–436, 1952.
87. K. Hirakawa and T.W. Parks. Image denoising using total least squares. IEEE Trans. Image Process., 15:2730–2742, 2006.


88. R.R. Hocking. The analysis and selection of variables in linear regression. Biometrics, 32:1–49, 1976.
89. R.R. Hocking. Developments in linear regression methodology 1959–1982. Technometrics, 25:219–230, 1983.
90. S.D. Hodges and P.G. Moore. Data uncertainties and least squares regression. Appl. Stat., 21:185–195, 1972.
91. P. Huber. Robust Statistics. Wiley, New York, 1981.
92. S. Van Huffel. Analysis of the Total Least Squares Problem and Its Use in Parameter Estimation. Ph.D. dissertation, Department of Electrical Engineering, Katholieke Universiteit Leuven, Leuven, Belgium, 1987.
93. S. Van Huffel. On the significance of nongeneric total least squares problems. SIAM J. Matrix Anal. Appl., 13(1):20–35, 1992.
94. S. Van Huffel. TLS applications in biomedical signal processing. In S. Van Huffel, ed., Recent Advances in Total Least Squares Techniques and Errors-in-Variables Modeling. SIAM Proceedings Series. SIAM, Philadelphia, 1997.
95. S. Van Huffel and J. Vandewalle. Subset selection using the total least squares approach in collinearity problems with errors in the variables. Linear Algebra Appl., 88–89:695–714, 1987.
96. S. Van Huffel and J. Vandewalle. The partial total least squares algorithm. J. Comput. Appl. Math., 21:333–341, 1988.
97. S. Van Huffel and J. Vandewalle. Analysis and properties of the generalized total least squares problem AX ≈ B when some or all columns of A are subject to errors. SIAM J. Matrix Anal. Appl., 10:294–315, 1989.
98. S. Van Huffel and J. Vandewalle. The Total Least Squares Problem: Computational Aspects and Analysis. Frontiers in Applied Mathematics. SIAM, Philadelphia, 1991.
99. S. Van Huffel, J. Vandewalle, and A. Haegemans. An efficient and reliable algorithm for computing the singular subspace of a matrix, associated with its smallest singular values. J. Comput. Appl. Math., 19:313–330, 1987.
100. S. Van Huffel, J. Vandewalle, M.C. De Roo, and J.L. Willems. Reliable and efficient deconvolution technique based on total linear least squares for calculating the renal retention function. Med. Biol. Eng. Comput., 25:26–33, 1987.
101. R.A. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Netw., 1(4):295–307, 1988.
102. C.J. Tsai, N.P. Galatsanos, and A.K. Katsaggelos. Total least squares estimation of stereo optical flow. Proceedings of the IEEE International Conference on Image Processing, Vol. II, pp. 622–626, 1998.
103. E.M. Johannson, F.U. Dowla, and D.M. Goodman. Backpropagation learning for multilayer feedforward neural networks using the conjugate gradient method. Int. J. Neural Syst., 2(4):291–301, 1992.
104. J.H. Justice and A.A. Vassiliou. Diffraction tomography for geophysical monitoring of hydrocarbon reservoirs. Proc. IEEE, 78:711–722, 1990.
105. G. Kelly. The influence function in the errors in variables problem. Ann. Stat., 12:87–100, 1984.
106. R.H. Ketellapper. On estimating parameters in a simple linear errors-in-variables model. Technometrics, 25:43–47, 1983.


107. R. Klemm. Adaptive airborne MTI: an auxiliary channel approach. IEE Proc. F, 134:269–276, 1987.
108. T.C. Koopmans. Linear Regression Analysis of Economic Time Series. De Erven F. Bohn, Haarlem, The Netherlands, 1937.
109. A.H. Kramer and A. Sangiovanni-Vincentelli. Efficient parallel learning algorithms for neural networks. In D.S. Touretzky, ed., Advances in Neural Information Processing Systems, Vol. 1, pp. 40–48. Morgan Kaufmann, San Mateo, CA, 1989.
110. T.P. Krasulina. Method of stochastic approximation in the determination of the largest eigenvalue of the mathematical expectation of random matrices. Automat. Remote Control, 2:215–221, 1970.
111. S.Y. Kung and K.I. Diamantaras. A neural network learning algorithm for adaptive principal component extraction. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 861–864, 1990.
112. S.Y. Kung, K.I. Diamantaras, and J.S. Taur. Adaptive principal component extraction and application. IEEE Trans. Signal Process., 42(5):1202–1217, 1994.
113. H. Kushner and D. Clark. Stochastic Approximation Methods for Constrained and Unconstrained Systems. Springer-Verlag, New York, 1978.
114. C. Lawson and R. Hanson. Solving Least Squares Problems. Prentice Hall, Englewood Cliffs, NJ, 1974.
115. R. Linsker. From basic network principles to neural architecture: emergence of spatial opponent cells. Proc. Natl. Acad. Sci. USA, 83:7508–7512, 1986.
116. R. Linsker. From basic network principles to neural architecture: emergence of orientation selective cells. Proc. Natl. Acad. Sci. USA, 83:8390–8394, 1986.
117. R. Linsker. From basic network principles to neural architecture: emergence of orientation columns. Proc. Natl. Acad. Sci. USA, 83:8779–8783, 1986.
118. L. Ljung. Analysis of recursive stochastic algorithms. IEEE Trans. Autom. Control, 22:551–575, 1977.
119. D.G. Luenberger. Linear and Nonlinear Programming, 2nd ed. Addison-Wesley, Reading, MA, 1984.
120. F. Luo, Y. Li, and C. He. Neural network approach to the TLS linear prediction frequency estimation problem. Neurocomputing, 11:31–42, 1996.
121. F. Luo and R. Unbehauen. Applied Neural Networks for Signal Processing. Cambridge University Press, New York, 1997.
122. F. Luo and R. Unbehauen. A minor subspace analysis algorithm. IEEE Trans. Neural Netw., 8(5):1149–1155, Sept. 1997.
123. F. Luo and R. Unbehauen. Comments on: A unified algorithm for principal and minor components extraction. Letter to the Editor. Neural Netw., 12:393, 1999.
124. F. Luo, R. Unbehauen, and A. Cichocki. A minor component analysis algorithm. Neural Netw., 10(2):291–297, 1997.
125. F.L. Luo, R. Unbehauen, and Y.D. Li. A principal component analysis algorithm with invariant norm. Neurocomputing, 8(2):213–221, 1995.
126. A. Madansky. The fitting of straight lines when both variables are subject to error. J. Am. Stat. Assoc., 54:173–205, 1959.
127. S. Makram-Ebeid, J.A. Sirat, and J.R. Viala. A rationalized backpropagation learning algorithm. Proceedings of the International Joint Conference on Neural Networks, Vol. 2, pp. 373–380. IEEE Press, Piscataway, NJ, 1989.


128. G. Mathew and V. Reddy. Development and analysis of a neural network approach to Pisarenko's harmonic retrieval method. IEEE Trans. Signal Process., 42:663–667, 1994.
129. G. Mathew and V. Reddy. Orthogonal eigensubspace estimation using neural networks. IEEE Trans. Signal Process., 42:1803–1811, 1994.
130. V.Z. Mesarovic, N.P. Galatsanos, and A.K. Katsaggelos. Regularized constrained total least squares image restoration. IEEE Trans. Image Process., 4(8):1096–1108, Aug. 1995.
131. L. Mirsky. Symmetric gauge functions and unitarily invariant norms. Q. J. Math. Oxford, 11:50–59, 1960.
132. R.R. Mohler. Nonlinear Systems, Vol. I, Dynamics and Control. Prentice Hall, Englewood Cliffs, NJ, 1991.
133. M.F. Moller. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw., 6:525–533, 1993.
134. M. Moonen, B. De Moor, L. Vandenberghe, and J. Vandewalle. On- and off-line identification of linear state-space models. Int. J. Control, 49:219–232, 1989.
135. B. De Moor. Mathematical Concepts and Techniques for Modelling Static and Dynamic Systems. Ph.D. dissertation, Department of Electrical Engineering, Katholieke Universiteit Leuven, Leuven, Belgium, 1988.
136. B. De Moor and J. Vandewalle. An adaptive singular value decomposition algorithm based on generalized Chebyshev recursion. In T.S. Durrani, J.B. Abbiss, J.E. Hudson, R.W. Madan, J.G. McWhirter, and T.A. Moore, eds., Proceedings of the Conference on Mathematics in Signal Processing, pp. 607–635. Clarendon Press, Oxford, UK, 1987.
137. M. Muhlich and R. Mester. The role of total least squares in motion analysis. In H. Burkhardt, ed., Proceedings of the European Conference on Computer Vision, pp. 305–321. Lecture Notes in Computer Science. Springer-Verlag, New York, June 1998.
138. E. Oja. A simplified neuron model as a principal component analyzer. J. Math. Biol., 16:267–273, 1982.
139. E. Oja. Subspace Methods of Pattern Recognition. Research Studies Press/Wiley, Letchworth, UK, 1983.
140. E. Oja. Neural networks, principal components and subspace. Int. J. Neural Syst., 1:61–68, 1989.
141. E. Oja. Principal components, minor components and linear neural networks. Neural Netw., 5:927–935, 1992.
142. E. Oja and J. Karhunen. On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. J. Math. Anal. Appl., 106:69–84, 1985.
143. E. Oja, H. Ogawa, and J. Wangviwattana. Learning in nonlinear constrained Hebbian networks. Proceedings of the International Conference on ANNs, Espoo, Finland, pp. 737–745, 1991.
144. E. Oja and L. Wang. Neural fitting: robustness by anti-Hebbian learning. Neurocomputing, 12:155–170, 1996.
145. E. Oja and L. Wang. Robust fitting by nonlinear neural units. Neural Netw., 9(3):435–444, 1996.


146. C.C. Paige and Z. Strakos. Unifying least squares, total least squares and data least squares problems. In S. Van Huffel and P. Lemmerling, eds., Proceedings of the Third International Workshop on TLS and Errors-in-Variables Modelling, pp. 35–44. Kluwer Academic, Dordrecht, The Netherlands, 2001.
147. C.C. Paige and Z. Strakos. Scaled total least squares fundamentals. Numer. Math., 91:117–146, 2002.
148. Y. Pao. Adaptive Pattern Recognition and Neural Networks. Addison-Wesley, Reading, MA, 1989.
149. B.N. Parlett. The Rayleigh quotient iteration and some generalizations for nonnormal matrices. Math. Comput., 28:679–693, 1974.
150. B.N. Parlett. The Symmetric Eigenvalue Problem. Prentice Hall, Englewood Cliffs, NJ, 1980.
151. K. Pearson. On lines and planes of closest fit to points in space. Philos. Mag., 2:559–572, 1901.
152. V.F. Pisarenko. The retrieval of harmonics from a covariance function. Geophys. J. R. Astron. Soc., 33:347–366, 1973.
153. D. Plaut, S. Nowlan, and G.E. Hinton. Experiments on Learning by Backpropagation. Technical Report CMU-CS-86-126. Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 1986.
154. E. Polak. Computational Methods in Optimization: A Unified Approach. Academic Press, New York, 1971.
155. M.J.D. Powell. Restart procedures for the conjugate gradient method. Math. Programming, 12:241–254, 1977.
156. A. Premoli, M.L. Rastello, and G. Cirrincione. A new approach to total least-squares techniques for metrological applications. In P. Ciarlini, M.G. Cox, F. Pavese, and D. Richter, eds., Advanced Mathematical Tools in Metrology II. Series on Advances in Mathematics for Applied Sciences, Vol. 40, pp. 206–215. World Scientific, Hackensack, NJ, 1996.
157. W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. Cambridge University Press, New York, 1992.
158. M.A. Rahman and K.B. Yu. Total least squares approach for frequency estimation using linear prediction. IEEE Trans. Acoust. Speech Signal Process., 35:1440–1454, 1987.
159. J.A. Ramos. Applications of TLS and related methods in the environmental sciences. Comput. Stat. Data Anal., 52(2):1234–1267, 2007.
160. B.D. Rao. Unified treatment of LS, TLS and truncated SVD methods using a weighted TLS framework. In S. Van Huffel, ed., Recent Advances in Total Least Squares Techniques and Errors-in-Variables Modeling, pp. 11–20. SIAM, Philadelphia, 1997.
161. H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Stat., 22:400–407, 1951.
162. R. Rost and J. Leuridan. A comparison of least squares and total least squares for multiple input estimation of frequency response functions. Paper 85-DET-105. ASME Design Engineering Division Conference and Exhibit on Mechanical Vibration and Noise, Cincinnati, OH, 1985.


163. R. Roy and T. Kailath. Total least-squares ESPRIT. Proceedings of the 21st Annual Asilomar Conference on Signals, Systems and Computers, pp. 297–301, 1987.
164. Y. Saad. Chebyshev acceleration techniques for solving nonsymmetric eigenvalue problems. Math. Comput., 42:567–588, 1984.
165. S. Sagara and K. Wada. On-line modified least-squares parameter estimation of linear discrete dynamic systems. Int. J. Control, 25(3):329–343, 1977.
166. T.D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Netw., 2:459–473, 1989.
167. R. Schmidt. Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag., 34:276–280, 1986.
168. H. Schneeweiss. Consistent estimation of a regression with errors in the variables. Metrika, 23:101–115, 1976.
169. D. Shafer. Gradient vectorfields near degenerate singularities. In Z. Nitecki and C. Robinson, eds., Global Theory of Dynamical Systems, pp. 410–417. Lecture Notes in Mathematics 819. Springer-Verlag, Berlin, 1980.
170. D.F. Shanno. Conjugate gradient methods with inexact searches. Math. Oper. Res., 3(3):244–256, 1978.
171. M.T. Silvia and E.C. Tacker. Regularization of Marchenko's integral equation by total least squares. J. Acoust. Soc. Am., 72:1202–1207, 1982.
172. J.J. Shynk. Adaptive IIR filtering. IEEE ASSP Mag., 2:4–21, Apr. 1989.
173. H. Spath. On discrete linear orthogonal LP-approximation. Z. Angew. Math. Mech., 62:354–355, 1982.
174. H. Spath. Orthogonal least squares fitting with linear manifolds. Numer. Math., 48:441–445, 1986.
175. P. Sprent. Models in Regression and Related Topics. Methuen, London, 1969.
176. J. Staar. Concepts for Reliable Modelling of Linear Systems with Application to On-Line Identification of Multivariable State Space Descriptions. Ph.D. dissertation, Department of Electrical Engineering, Katholieke Universiteit Leuven, Leuven, Belgium, 1982.
177. G.W. Stewart. Sensitivity Coefficients for the Effects of Errors in the Independent Variables in a Linear Regression. Technical Report TR-571. Department of Computer Science, University of Maryland, College Park, MD, 1977.
178. G.W. Stewart. Matrix Algorithms, Vol. II, Eigensystems. SIAM, Philadelphia, 2001.
179. A. Stoian, P. Stoica, and S. Van Huffel. Comparative Performance Study of Least-Squares and Total Least-Squares Yule–Walker Estimates of Autoregressive Parameters. Internal Report. Department of Automatic Control, Polytechnic Institute of Bucharest, Bucharest, Romania, 1990.
180. W.J. Szajowski. The generation of correlated Weibull clutter for signal detection problem. IEEE Trans. Aerosp. Electron. Syst., 13(5):536–540, 1977.
181. A. Taleb and G. Cirrincione. Against the convergence of the minor component analysis neurons. IEEE Trans. Neural Netw., 10(1):207–210, Jan. 1999.
182. R.C. Thompson. Principal submatrices: IX. Interlacing inequalities for singular values of submatrices. Linear Algebra Appl., 5:1–12, 1972.
183. J. Treichler. The Spectral Line Enhancer. Ph.D. dissertation, Stanford University, Stanford, CA, May 1977.


184. G. Ungerboeck. Fractional tap-spacing equalizer and consequences for clock recovery in data modems. IEEE Trans. Commun., 24:856–864, Aug. 1976.
185. T.P. Vogl, J.K. Mangis, A.K. Rigler, W.T. Zink, and D.L. Alkon. Accelerating the convergence of the back-propagation method. Biol. Cybernet., 59:257–263, 1988.
186. L. Wang and J. Karhunen. A unified neural bigradient algorithm for robust PCA and MCA. Int. J. Neural Syst., 7:53–67, 1996.
187. R.L. Watrous. Learning algorithms for connectionist networks: applied gradient methods of nonlinear optimization. Proceedings of the IEEE First International Conference on Neural Networks, Vol. 2, pp. 619–627. IEEE, San Diego, CA, 1987.
188. G.A. Watson. Numerical methods for linear orthogonal LP approximation. IMA J. Numer. Anal., 2:275–287, 1982.
189. A.R. Webb, D. Lowe, and M.D. Bedworth. A Comparison of Non-linear Optimisation Strategies for Feed-Forward Adaptive Layered Networks. RSRE Memorandum 4157. Royal Signals and Radar Establishment, Malvern, UK, 1988.
190. J.T. Webster, R.F. Gunst, and R.L. Mason. Latent root regression analysis. Technometrics, 16:513–522, 1974.
191. B. Widrow and M.A. Lehr. 30 years of adaptive neural networks: perceptron, madaline, and backpropagation. Proc. IEEE, 78(9):1415–1442, Sept. 1990.
192. B. Widrow and S.D. Stearns. Adaptive Signal Processing. Prentice Hall, Englewood Cliffs, NJ, 1985.
193. J.H. Wilkinson. The Algebraic Eigenvalue Problem. Clarendon Press, Oxford, UK, 1965.
194. L. Xu, A. Krzyzak, and E. Oja. Neural nets for dual subspace pattern recognition method. Int. J. Neural Syst., 2(3):169–184, 1991.
195. L. Xu, E. Oja, and C. Suen. Modified Hebbian learning for curve and surface fitting. Neural Netw., 5:441–457, 1992.
196. X. Yang, T.K. Sarkar, and E. Arvas. A survey of conjugate gradient algorithms for solution of extreme eigen-problems of a symmetric matrix. IEEE Trans. Acoust. Speech Signal Process., 37(10):1550–1556, Oct. 1989.
197. D. York. Least squares fitting of a straight line. Can. J. Phys., 44:1079–1086, 1966.
198. C.L. Zahm. Applications of adaptive arrays to suppress strong jammers in the presence of weak signals. IEEE Trans. Aerosp. Electron. Syst., 9:260–271, Mar. 1973.
199. F. Zhang. Matrix Theory. Springer-Verlag, New York, 1999.
200. W. Zhu, Y. Wang, Y. Yao, J. Chang, L. Graber, and R. Barbour. Iterative total least-squares image reconstruction algorithm for optical tomography by the conjugate gradient method. J. Opt. Soc. Am. A, 14(4):799–807, Apr. 1997.
201. M.D. Zoltowski. Signal processing applications of the method of total least squares. Proceedings of the 21st Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, pp. 290–296, 1987.
202. M.D. Zoltowski and D. Stavrinides. Sensor array signal processing via a Procrustes rotations based eigenanalysis of the ESPRIT data pencil. IEEE Trans. Acoust. Speech Signal Process., 37:832–861, 1989.