International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS)
ISSN (Online): 2279-0055, ISSN (Print): 2279-0047
Issue 9, Volume 1, 2 & 3, June-August 2014
International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research)
STEM International Scientific Online Media and Publishing House
Head Office: 148, Summit Drive, Byron, Georgia-31008, United States. Offices Overseas: India, Australia, Germany, Netherlands, Canada.
Website: www.iasir.net, E-mail(s): [email protected], [email protected], [email protected]



PREFACE

We are delighted to welcome you to the ninth issue of the International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS). In recent years, advances in science, technology, engineering, and mathematics have radically expanded the data available to researchers and professionals in a wide variety of domains. This unique combination of theory with data has the potential to have a broad impact on educational research and practice. IJETCAS publishes high-quality, peer-reviewed papers covering topics such as computer science, artificial intelligence, pattern recognition, knowledge engineering, process control theory and applications, distributed systems, computer networks and software engineering, electrical engineering, electric machines modeling and design, control of electric drive systems, non-conventional energy conversion, sensors, electronics, communications, data transmission, energy converters, transducers modeling and design, electro-physics, nanotechnology, and quantum mechanics.

The editorial board of IJETCAS is composed of members of the Teachers & Researchers community who have expertise in a variety of disciplines, including computer science, cognitive science, learning sciences, artificial intelligence, electronics, soft computing, genetic algorithms, technology management, manufacturing technology, electrical technology, applied mathematics, automatic control, nuclear engineering, computational physics, computational chemistry, and other related disciplines of computational and applied sciences. In order to best serve our community, this Journal is available online as well as in hard-copy form. Because of the rapid advances in underlying technologies and the interdisciplinary nature of the field, we believe it is important to provide quality research articles promptly and to the widest possible audience.

We are happy that this Journal has continued to grow and develop. We have made every effort to evaluate and process submissions for review, and to address queries from authors and the general public promptly. The Journal has strived to reflect the most recent and finest research in the field of emerging technologies, especially related to computational and applied sciences. This Journal is completely refereed and indexed with major databases like: IndexCopernicus, Computer Science Directory, GetCITED, DOAJ, SSRN, TGDScholar, WorldWideScience, CiteSeerX, CRCnetBASE, Google Scholar, Microsoft Academic Search, INSPEC, ProQuest, ArnetMiner, Base, ChemXSeer, citebase, OpenJ-Gate, eLibrary, SafetyLit, VADLO, OpenGrey, EBSCO, UlrichWeb, ISSUU, SPIE Digital Library, arXiv, ERIC, EasyBib, Infotopia, WorldCat, .docstoc, JURN, Mendeley, ResearchGate, cogprints, OCLC, iSEEK, Scribd, LOCKSS, CASSI, E-PrintNetwork, intute, and some other databases.

We are grateful to all of the individuals and agencies whose work and support made the Journal's success possible. We want to thank the executive board and core committee members of the IJETCAS for entrusting us with this important job. We are thankful to the members of the IJETCAS editorial board who have contributed energy and time to the Journal with their steadfast support, constructive advice, and reviews of submissions. We are deeply indebted to the numerous anonymous reviewers who have contributed expert evaluations of the submissions to help maintain the quality of the Journal. For this ninth issue, we received 154 research papers, of which only 55 were published in three volumes as per the reviewers' recommendations. We have the highest respect for all the authors who have submitted articles to the Journal for their intellectual energy and creativity, and for their dedication to the field of computational and applied sciences.

This issue of the IJETCAS has attracted a large number of authors and researchers from across the world and provides an effective platform for intellectuals of different streams to put forth their suggestions and ideas, which might prove beneficial for the accelerated pace of development of emerging technologies in computational and applied sciences and may open new areas for research and development. We hope you will enjoy this ninth issue of the International Journal of Emerging Technologies in Computational and Applied Sciences, and we look forward to hearing your feedback and receiving your contributions.

(Administrative Chief) (Managing Director) (Editorial Head)



BOARD MEMBERS

EDITOR IN CHIEF

Prof. (Dr.) Waressara Weerawat, Director of Logistics Innovation Center, Department of

Industrial Engineering, Faculty of Engineering, Mahidol University, Thailand.

Prof. (Dr.) Yen-Chun Lin, Professor and Chair, Dept. of Computer Science and Information Engineering, Chang Jung Christian University, Kway Jen, Tainan, Taiwan.

Divya Sethi, GM Conferencing & VSAT Solutions, Enterprise Services, Bharti Airtel, Gurgaon, India.

CHIEF EDITOR (TECHNICAL)

Prof. (Dr.) Atul K. Raturi, Head School of Engineering and Physics, Faculty of Science, Technology

and Environment, The University of the South Pacific, Laucala campus, Suva, Fiji Islands.

Prof. (Dr.) Hadi Suwastio, College of Applied Science, Department of Information Technology, The Sultanate of Oman and Director of IETI-Research Institute-Bandung, Indonesia.

Dr. Nitin Jindal, Vice President, Max Coreth, North America Gas & Power Trading, New York, United States.

CHIEF EDITOR (GENERAL)

Prof. (Dr.) Thanakorn Naenna, Department of Industrial Engineering, Faculty of Engineering,

Mahidol University, Thailand.

Prof. (Dr.) Jose Francisco Vicent Frances, Department of Science of the Computation and Artificial

Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Huiyun Liu, Department of Electronic & Electrical Engineering, University College London, Torrington Place, London.

ADVISORY BOARD

Prof. (Dr.) Kimberly A. Freeman, Professor & Director of Undergraduate Programs, Stetson School of Business and Economics, Mercer University, Macon, Georgia, United States.

Prof. (Dr.) Klaus G. Troitzsch, Professor, Institute for IS Research, University of Koblenz-Landau,

Germany.

Prof. (Dr.) T. Anthony Choi, Professor, Department of Electrical & Computer Engineering, Mercer

University, Macon, Georgia, United States.

Prof. (Dr.) Fabrizio Gerli, Department of Management, Ca' Foscari University of Venice, Italy.

Prof. (Dr.) Jen-Wei Hsieh, Department of Computer Science and Information Engineering,

National Taiwan University of Science and Technology, Taiwan.

Prof. (Dr.) Jose C. Martinez, Dept. Physical Chemistry, Faculty of Sciences, University of

Granada, Spain.

Prof. (Dr.) Panayiotis Vafeas, Department of Engineering Sciences, University of Patras, Greece.

Prof. (Dr.) Soib Taib, School of Electrical & Electronics Engineering, University Science Malaysia,

Malaysia.

Prof. (Dr.) Vit Vozenilek, Department of Geoinformatics, Palacky University, Olomouc, Czech Republic.

Prof. (Dr.) Sim Kwan Hua, School of Engineering, Computing and Science, Swinburne University of Technology, Sarawak, Malaysia.

Prof. (Dr.) Jose Francisco Vicent Frances, Department of Science of the Computation and Artificial

Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Rafael Ignacio Alvarez Sanchez, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Praneel Chand, Ph.D., M.IEEE, C/O School of Engineering & Physics, Faculty of Science & Technology, The University of the South Pacific (USP), Laucala Campus, Private Mail Bag, Suva, Fiji.

Prof. (Dr.) Francisco Miguel Martinez, Department of Science of the Computation and Artificial

Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Antonio Zamora Gomez, Department of Science of the Computation and Artificial

Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Leandro Tortosa, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Samir Ananou, Department of Microbiology, Universidad de Granada, Granada, Spain.

Dr. Miguel Angel Bautista, Department de Matematica Aplicada y Analisis, Facultad de Matematicas, Universidad de Barcelona, Spain.


Prof. (Dr.) Adam Baharum, School of Mathematical Sciences, Universiti Sains Malaysia, Malaysia.

Dr. Cathryn J. Peoples, Faculty of Computing and Engineering, School of Computing and Information Engineering, University of Ulster, Coleraine, Northern Ireland, United Kingdom.

Prof. (Dr.) Pavel Lafata, Department of Telecommunication Engineering, Faculty of Electrical

Engineering, Czech Technical University in Prague, Prague, 166 27, Czech Republic.

Prof. (Dr.) P. Bhanu Prasad, Vision Specialist, Matrix Vision GmbH, Germany; Consultant, TIFAC-CORE for Machine Vision; Advisor, Kelenn Technology, France; Advisor, Shubham Automation & Services, Ahmedabad; and Professor of C.S.E., Rajalakshmi Engineering College, India.

Prof. (Dr.) Anis Zarrad, Department of Computer Science and Information System, Prince Sultan University, Riyadh, Saudi Arabia.

Prof. (Dr.) Mohammed Ali Hussain, Professor, Dept. of Electronics and Computer Engineering, KL University, Green Fields, Vaddeswaram, Andhra Pradesh, India.

Dr. Cristiano De Magalhaes Barros, Governo do Estado de Minas Gerais, Brazil.

Prof. (Dr.) Md. Rizwan Beg, Professor & Head, Dean, Faculty of Computer Applications, Deptt. of Computer Sc. & Engg. & Information Technology, Integral University Kursi Road, Dasauli,

Lucknow, India.

Prof. (Dr.) Vishnu Narayan Mishra, Assistant Professor of Mathematics, Sardar Vallabhbhai National Institute of Technology, Ichchhanath Mahadev Road, Surat, Surat-395007, Gujarat, India.

Dr. Jia Hu, Member Research Staff, Philips Research North America, New York Area, NY.

Prof. Shashikant Shantilal Patil SVKM , MPSTME Shirpur Campus, NMIMS University Vile Parle Mumbai, India.

Prof. (Dr.) Bindhya Chal Yadav, Assistant Professor in Botany, Govt. Post Graduate College, Fatehabad, Agra, Uttar Pradesh, India.

REVIEW BOARD

Prof. (Dr.) Kimberly A. Freeman, Professor & Director of Undergraduate Programs, Stetson School of Business and Economics, Mercer University, Macon, Georgia, United States.

Prof. (Dr.) Klaus G. Troitzsch, Professor, Institute for IS Research, University of Koblenz-Landau,

Germany.

Prof. (Dr.) T. Anthony Choi, Professor, Department of Electrical & Computer Engineering, Mercer

University, Macon, Georgia, United States.

Prof. (Dr.) Yen-Chun Lin, Professor and Chair, Dept. of Computer Science and Information Engineering, Chang Jung Christian University, Kway Jen, Tainan, Taiwan.

Prof. (Dr.) Jen-Wei Hsieh, Department of Computer Science and Information Engineering,

National Taiwan University of Science and Technology, Taiwan.

Prof. (Dr.) Jose C. Martinez, Dept. Physical Chemistry, Faculty of Sciences, University of

Granada, Spain.

Prof. (Dr.) Joel Saltz, Emory University, Atlanta, Georgia, United States.

Prof. (Dr.) Panayiotis Vafeas, Department of Engineering Sciences, University of Patras, Greece.

Prof. (Dr.) Soib Taib, School of Electrical & Electronics Engineering, University Science Malaysia, Malaysia.

Prof. (Dr.) Sim Kwan Hua, School of Engineering, Computing and Science, Swinburne University

of Technology, Sarawak, Malaysia.

Prof. (Dr.) Jose Francisco Vicent Frances, Department of Science of the Computation and Artificial

Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Rafael Ignacio Alvarez Sanchez, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Francisco Miguel Martinez, Department of Science of the Computation and Artificial

Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Antonio Zamora Gomez, Department of Science of the Computation and Artificial

Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Leandro Tortosa, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain.

Prof. (Dr.) Samir Ananou, Department of Microbiology, Universidad de Granada, Granada, Spain.

Dr. Miguel Angel Bautista, Department de Matematica Aplicada y Analisis, Facultad de

Matematicas, Universidad de Barcelona, Spain.

Prof. (Dr.) Adam Baharum, School of Mathematical Sciences, Universiti Sains Malaysia, Malaysia.

Prof. (Dr.) Huiyun Liu, Department of Electronic & Electrical Engineering, University College

London, Torrington Place, London.


Dr. Cristiano De Magalhaes Barros, Governo do Estado de Minas Gerais, Brazil.

Prof. (Dr.) Pravin G. Ingole, Senior Researcher, Greenhouse Gas Research Center, Korea Institute of Energy Research (KIER), 152 Gajeong-ro, Yuseong-gu, Daejeon 305-343, KOREA.

Prof. (Dr.) Dilum Bandara, Dept. Computer Science & Engineering, University of Moratuwa, Sri

Lanka.

Prof. (Dr.) Faudziah Ahmad, School of Computing, UUM College of Arts and Sciences, University Utara Malaysia, 06010 UUM Sintok, Kedah Darulaman.

Prof. (Dr.) G. Manoj Someswar, Principal, Dept. of CSE at Anwar-ul-uloom College of Engineering & Technology, Yennepally, Vikarabad, RR District., A.P., India.

Prof. (Dr.) Abdelghni Lakehal, Applied Mathematics, Rue 10 no 6 cite des fonctionnaires dokkarat

30010 Fes Marocco.

Dr. Kamal Kulshreshtha, Associate Professor & Head, Deptt. of Computer Sc. & Applications, Modi Institute of Management & Technology, Kota-324 009, Rajasthan, India.

Prof. (Dr.) Anukrati Sharma, Associate Professor, Faculty of Commerce and Management, University of Kota, Kota, Rajasthan, India.

Prof. (Dr.) S. Natarajan, Department of Electronics and Communication Engineering, SSM College

of Engineering, NH 47, Salem Main Road, Komarapalayam, Namakkal District, Tamilnadu 638183, India.

Prof. (Dr.) J. Sadhik Basha, Department of Mechanical Engineering, King Khalid University, Abha,

Kingdom of Saudi Arabia.

Prof. (Dr.) G. SAVITHRI, Department of Sericulture, S.P. Mahila Visvavidyalayam, Tirupati-517502, Andhra Pradesh, India.

Prof. (Dr.) Shweta jain, Tolani College of Commerce, Andheri, Mumbai. 400001, India.

Prof. (Dr.) Abdullah M. Abdul-Jabbar, Department of Mathematics, College of Science, University

of Salahaddin-Erbil, Kurdistan Region, Iraq.

Prof. (Dr.) (Mrs.) P. Sujathamma, Department of Sericulture, S.P. Mahila Visvavidyalayam, Tirupati-517502, India.

Prof. (Dr.) Bimla Dhanda, Professor & Head, Department of Human Development and Family

Studies, College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001 (Haryana) India.

Prof. (Dr.) Manjulatha, Dept. of Biochemistry, School of Life Sciences, University of Hyderabad, Gachibowli, Hyderabad, India.

Prof. (Dr.) Upasani Dhananjay Eknath Advisor & Chief Coordinator, ALUMNI Association, Sinhgad Institute of Technology & Science, Narhe, Pune -411 041, India.

Prof. (Dr.) Sudhindra Bhat, Professor & Finance Area Chair, School of Business, Alliance University Bangalore-562106, India.

Prof. Prasenjit Chatterjee , Dept. of Mechanical Engineering, MCKV Institute of Engineering West

Bengal, India.

Prof. Rajesh Murukesan, Deptt. of Automobile Engineering, Rajalakshmi Engineering college, Chennai, India.

Prof. (Dr.) Parmil Kumar, Department of Statistics, University of Jammu, Jammu, India

Prof. (Dr.) M.N. Shesha Prakash, Vice Principal, Professor & Head of Civil Engineering, Vidya

Vikas Institute of Engineering and Technology, Alanahally, Mysore-570 028

Prof. (Dr.) Piyush Singhal, Mechanical Engineering Deptt., GLA University, India.

Prof. M. Mahbubur Rahman, School of Engineering & Information Technology, Murdoch

University, Perth Western Australia 6150, Australia.

Prof. Nawaraj Chaulagain, Department of Religion, Illinois Wesleyan University, Bloomington, IL.

Prof. Hassan Jafari, Faculty of Maritime Economics & Management, Khoramshahr University of

Marine Science and Technology, khoramshahr, Khuzestan province, Iran

Prof. (Dr.) Kantipudi MVV Prasad, Dept. of EC, School of Engg., R.K. University, Kasturbhadham, Tramba, Rajkot-360020, India.

Prof. (Mrs.) P.Sujathamma, Department of Sericulture, S.P.Mahila Visvavidyalayam, ( Women's University), Tirupati-517502, India.

Prof. (Dr.) M A Rizvi, Dept. of Computer Engineering and Applications, National Institute of Technical Teachers' Training and Research, Bhopal M.P. India.

Prof. (Dr.) Mohsen Shafiei Nikabadi, Faculty of Economics and Management, Industrial

Management Department, Semnan University, Semnan, Iran.

Prof. P.R.SivaSankar, Head, Dept. of Commerce, Vikrama Simhapuri University Post Graduate Centre, KAVALI - 524201, A.P. India.

Prof. (Dr.) Bhawna Dubey, Institute of Environmental Science( AIES), Amity University, Noida, India.

Prof. Manoj Chouhan, Deptt. of Information Technology, SVITS Indore, India.


Prof. Yupal S Shukla, V M Patel College of Management Studies, Ganpat University, Kherva-

Mehsana. India.

Prof. (Dr.) Amit Kohli, Head of the Department, Department of Mechanical Engineering, D.A.V.Institute of Engg. and Technology, Kabir Nagar, Jalandhar,Punjab (India).

Prof. (Dr.) Kumar Irayya Maddani, and Head of the Department of Physics in SDM College of

Engineering and Technology, Dhavalagiri, Dharwad, State: Karnataka (INDIA).

Prof. (Dr.) Shafi Phaniband, SDM College of Engineering and Technology, Dharwad, INDIA.

Prof. M H Annaiah, Head, Department of Automobile Engineering, Acharya Institute of Technology, Soladevana Halli, Bangalore -560107, India.

Prof. (Dr.) R. R. Patil, Director, School of Earth Science, Solapur University, Solapur, India.

Prof. (Dr.) Manoj Khandelwal, Dept. of Mining Engg, College of Technology & Engineering, Maharana Pratap University of Agriculture & Technology, Udaipur, 313 001 (Rajasthan), India

Prof. (Dr.) Kishor Chandra Satpathy, Librarian, National Institute of Technology, Silchar-788010,

Assam, India

Prof. (Dr.) Juhana Jaafar, Gas Engineering Department, Faculty of Petroleum and Renewable

Energy Engineering (FPREE), Universiti Teknologi Malaysia-81310 UTM Johor Bahru, Johor.

Prof. (Dr.) Rita Khare, Assistant Professor in chemistry, Govt. Women’s College, Gardanibagh, Patna, Bihar.

Prof. (Dr.) Raviraj Kusanur, Dept of Chemistry, R V College of Engineering, Bangalore-59, India.

Prof. (Dr.) Hameem Shanavas .I, M.V.J College of Engineering, Bangalore

Prof. (Dr.) Sanjay Kumar, JKL University, Ajmer Road, Jaipur

Prof. (Dr.) Pushp Lata, Faculty of English and Communication, Department of Humanities and Languages, Nucleus Member, Publications and Media Relations Unit; Editor, BITScan, BITS Pilani, India.

Prof. Arun Agarwal, Faculty of ECE Dept., ITER College, Siksha 'O' Anusandhan University Bhubaneswar, Odisha, India

Prof. (Dr.) Pratima Tripathi, Department of Biosciences, SSSIHL, Anantapur Campus Anantapur- 515001 (A.P.) India.

Prof. (Dr.) Sudip Das, Department of Biotechnology, Haldia Institute of Technology, I.C.A.R.E.

Complex, H.I.T. Campus, P.O. Hit, Haldia; Dist: Puba Medinipur, West Bengal, India.

Prof. (Dr.) Bimla Dhanda, Professor & Head, Department of Human Development and Family Studies College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001 (Haryana)

India.

Prof. (Dr.) R.K.Tiwari, Professor, S.O.S. in Physics, Jiwaji University, Gwalior, M.P.-474011.

Prof. (Dr.) Deepak Paliwal, Faculty of Sociology, Uttarakhand Open University, Haldwani-Nainital

Prof. (Dr.) Anil K. Dwivedi, Faculty of Pollution & Environmental Assay Research Laboratory (PEARL), Department of Botany, DDU Gorakhpur University, Gorakhpur-273009, India.

Prof. R. Ravikumar, Department of Agricultural and Rural Management, TamilNadu Agricultural

University,Coimbatore-641003,TamilNadu,India.

Prof. (Dr.) R.Raman, Professor of Agronomy, Faculty of Agriculture, Annamalai university,

Annamalai Nagar 608 002Tamil Nadu, India.

Prof. (Dr.) Ahmed Khalafallah, Coordinator of the CM Degree Program, Department of Architectural and Manufacturing Sciences, Ogden College of Sciences and Engineering Western

Kentucky University 1906 College Heights Blvd Bowling Green, KY 42103-1066.

Prof. (Dr.) Asmita Das , Delhi Technological University (Formerly Delhi College of Engineering), Shahbad, Daulatpur, Delhi 110042, India.

Prof. (Dr.) Aniruddha Bhattacharjya, Assistant Professor (Senior Grade), CSE Department, Amrita School of Engineering, Amrita Vishwa Vidyapeetham (University), Kasavanahalli, Carmelaram P.O., Bangalore-560035, Karnataka, India.

Prof. (Dr.) S. Rama Krishna Pisipaty, Prof & Geoarchaeologist, Head of the Department of

Sanskrit & Indian Culture, SCSVMV University, Enathur, Kanchipuram 631561, India

Prof. (Dr.) Shubhasheesh Bhattacharya, Professor & HOD(HR), Symbiosis Institute of International Business (SIIB), Hinjewadi, Phase-I, Pune- 411 057, India.

Prof. (Dr.) Vijay Kothari, Institute of Science, Nirma University, S-G Highway, Ahmedabad 382481, India.

Prof. (Dr.) Raja Sekhar Mamillapalli, Department of Civil Engineering at Sir Padampat Singhania

University, Udaipur, India.

Prof. (Dr.) B. M. Kunar, Department of Mining Engineering, Indian School of Mines, Dhanbad 826004, Jharkhand, India.

Prof. (Dr.) Prabir Sarkar, Assistant Professor, School of Mechanical, Materials and Energy Engineering, Room 307, Academic Block, Indian Institute of Technology, Ropar, Nangal Road, Rupnagar 140001, Punjab, India.


Prof. (Dr.) K.Srinivasmoorthy, Associate Professor, Department of Earth Sciences, School of

Physical,Chemical and Applied Sciences, Pondicherry university, R.Venkataraman Nagar, Kalapet, Puducherry 605014, India.

Prof. (Dr.) Bhawna Dubey, Institute of Environmental Science (AIES), Amity University, Noida, India.

Prof. (Dr.) P. Bhanu Prasad, Vision Specialist, Matrix Vision GmbH, Germany; Consultant, TIFAC-CORE for Machine Vision; Advisor, Kelenn Technology, France; Advisor, Shubham Automation & Services, Ahmedabad; and Professor of C.S.E., Rajalakshmi Engineering College, India.

Prof. (Dr.) P. Raviraj, Professor & Head, Dept. of CSE, Kalaignar Karunanidhi Institute of Technology, Coimbatore-641402, Tamilnadu, India.

Prof. (Dr.) Damodar Reddy Edla, Department of Computer Science & Engineering, Indian School

of Mines, Dhanbad, Jharkhand 826004, India.

Prof. (Dr.) T.C. Manjunath, Principal in HKBK College of Engg., Bangalore, Karnataka, India.

Prof. (Dr.) Pankaj Bhambri, I.T. Deptt., Guru Nanak Dev Engineering College, Ludhiana 141006,

Punjab, India.

Prof. Shashikant Shantilal Patil SVKM , MPSTME Shirpur Campus, NMIMS University Vile Parle

Mumbai, India.

Prof. (Dr.) Shambhu Nath Choudhary, Department of Physics, T.M. Bhagalpur University, Bhagalpur 81200, Bihar, India.

Prof. (Dr.) Venkateshwarlu Sonnati, Professor & Head of EEED, Department of EEE, Sreenidhi

Institute of Science & Technology, Ghatkesar, Hyderabad, Andhra Pradesh, India.

Prof. (Dr.) Saurabh Dalela, Department of Pure & Applied Physics, University of Kota, KOTA 324010, Rajasthan, India.

Prof. S. Arman Hashemi Monfared, Department of Civil Eng, University of Sistan & Baluchestan, Daneshgah St.,Zahedan, IRAN, P.C. 98155-987

Prof. (Dr.) R.S.Chanda, Dept. of Jute & Fibre Tech., University of Calcutta, Kolkata 700019, West

Bengal, India.

Prof. V.S. Vakula, Department of Electrical and Electronics Engineering, JNTUK, University College of Engineering, Vizianagaram-535003, Andhra Pradesh, India.

Prof. (Dr.) Nehal Gitesh Chitaliya, Sardar Vallabhbhai Patel Institute of Technology, Vasad 388 306, Gujarat, India.

Prof. (Dr.) D.R. Prajapati, Department of Mechanical Engineering, PEC University of

Technology,Chandigarh 160012, India.

Dr. A. SENTHIL KUMAR, Postdoctoral Researcher, Centre for Energy and Electrical Power,

Electrical Engineering Department, Faculty of Engineering and the Built Environment, Tshwane University of Technology, Pretoria 0001, South Africa.

Prof. (Dr.) Vijay Harishchandra Mankar, Department of Electronics & Telecommunication Engineering, Govt. Polytechnic, Mangalwari Bazar, Besa Road, Nagpur-440027, India.

Prof. Varun.G.Menon, Department Of C.S.E, S.C.M.S School of Engineering, Karukutty,Ernakulam, Kerala 683544, India.

Prof. (Dr.) U C Srivastava, Department of Physics, Amity Institute of Applied Sciences, Amity

University, Noida, U.P-203301.India.

Prof. (Dr.) Surendra Yadav, Professor and Head (Computer Science & Engineering Department), Maharashi Arvind College of Engineering and Research Centre (MACERC), Jaipur, Rajasthan,

India.

Prof. (Dr.) Sunil Kumar, H.O.D., Applied Sciences & Humanities, Dehradun Institute of Technology (D.I.T. School of Engineering), 48 A, K.P.-3, Gr. Noida (U.P.)-201308, India.

Prof. Naveen Jain, Dept. of Electrical Engineering, College of Technology and Engineering,

Udaipur-313 001, India.

Prof. Veera Jyothi.B, CBIT, Hyderabad, Andhra Pradesh, India.

Prof. Aritra Ghosh, Global Institute of Management and Technology, Krishnagar, Nadia, W.B. India

Prof. Anuj K. Gupta, Head, Dept. of Computer Science & Engineering, RIMT Group of Institutions,

Sirhind Mandi Gobindgarh, Punajb, India.

Prof. (Dr.) Varala Ravi, Head, Department of Chemistry, IIIT Basar Campus, Rajiv Gandhi University of Knowledge Technologies, Mudhole, Adilabad, Andhra Pradesh- 504 107, India

Prof. (Dr.) Ravikumar C Baratakke, faculty of Biology,Govt. College, Saundatti - 591 126, India.

Prof. (Dr.) NALIN BHARTI, School of Humanities and Social Science, Indian Institute of

Technology Patna, India.

Prof. (Dr.) Shivanand S.Gornale , Head, Department of Studies in Computer Science, Government College (Autonomous), Mandya, Mandya-571 401-Karanataka, India.


Prof. (Dr.) Naveen.P.Badiger, Dept.Of Chemistry, S.D.M.College of Engg. & Technology,

Dharwad-580002, Karnataka State, India.

Prof. (Dr.) Bimla Dhanda, Professor & Head, Department of Human Development and Family Studies, College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001 (Haryana) India.

Prof. (Dr.) Tauqeer Ahmad Usmani, Faculty of IT, Salalah College of Technology, Salalah, Sultanate of Oman.

Prof. (Dr.) Naresh Kr. Vats, Chairman, Department of Law, BGC Trust University Bangladesh

Prof. (Dr.) Papita Das (Saha), Department of Environmental Science, University of Calcutta, Kolkata, India.

Prof. (Dr.) Rekha Govindan , Dept of Biotechnology, Aarupadai Veedu Institute of technology ,

Vinayaka Missions University , Paiyanoor , Kanchipuram Dt, Tamilnadu , India.

Prof. (Dr.) Lawrence Abraham Gojeh, Department of Information Science, Jimma University,

P.o.Box 378, Jimma, Ethiopia.

Prof. (Dr.) M.N. Kalasad, Department of Physics, SDM College of Engineering & Technology, Dharwad, Karnataka, India.

Prof. Rab Nawaz Lodhi, Department of Management Sciences, COMSATS Institute of Information

Technology Sahiwal.

Prof. (Dr.) Masoud Hajarian, Department of Mathematics, Faculty of Mathematical Sciences,

Shahid Beheshti University, General Campus, Evin, Tehran 19839,Iran

Prof. (Dr.) Chandra Kala Singh, Associate professor, Department of Human Development and Family Studies, College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001 (Haryana) India

Prof. (Dr.) J.Babu, Professor & Dean of research, St.Joseph's College of Engineering & Technology, Choondacherry, Palai,Kerala.

Prof. (Dr.) Pradip Kumar Roy, Department of Applied Mechanics, Birla Institute of Technology

(BIT) Mesra, Ranchi- 835215, Jharkhand, India.

Prof. (Dr.) P. Sanjeevi kumar, School of Electrical Engineering (SELECT), Vandalur Kelambakkam Road, VIT University, Chennai, India.

Prof. (Dr.) Debasis Patnaik, BITS-Pilani, Goa Campus, India.

Prof. (Dr.) SANDEEP BANSAL, Associate Professor, Department of Commerce, I.G.N. College,

Haryana, India.

Dr. Radhakrishnan S V S, Department of Pharmacognosy, Faser Hall, The University of Mississippi Oxford, MS- 38655, USA.

Prof. (Dr.) Megha Mittal, Faculty of Chemistry, Manav Rachna College of Engineering, Faridabad (HR), 121001, India.

Prof. (Dr.) Mihaela Simionescu (BRATU), BUCHAREST, District no. 6, Romania, member of the

Romanian Society of Econometrics, Romanian Regional Science Association and General Association of Economists from Romania

Prof. (Dr.) Atmani Hassan, Director Regional of Organization Entraide Nationale

Prof. (Dr.) Deepshikha Gupta, Dept. of Chemistry, Amity Institute of Applied Sciences,Amity University, Sec.125, Noida, India.

Prof. (Dr.) Muhammad Kamruzzaman, Department of Infectious Diseases, The University of

Sydney, Westmead Hospital, Westmead, NSW-2145.

Prof. (Dr.) Meghshyam K. Patil , Assistant Professor & Head, Department of Chemistry,Dr. Babasaheb Ambedkar Marathwada University,Sub-Campus, Osmanabad- 413 501, Maharashtra,

India.

Prof. (Dr.) Ashok Kr. Dargar, Department of Mechanical Engineering, School of Engineering, Sir Padampat Singhania University, Udaipur (Raj.)

Prof. (Dr.) Sudarson Jena, Dept. of Information Technology, GITAM University, Hyderabad, India

Prof. (Dr.) Jai Prakash Jaiswal, Department of Mathematics, Maulana Azad National Institute of Technology Bhopal, India.

Prof. (Dr.) S.Amutha, Dept. of Educational Technology, Bharathidasan University, Tiruchirappalli-620 023, Tamil Nadu, India.

Prof. (Dr.) R. HEMA KRISHNA, Environmental chemistry, University of Toronto, Canada.

Prof. (Dr.) B.Swaminathan, Dept. of Agrl.Economics, Tamil Nadu Agricultural University, India.

Prof. (Dr.) K. Ramesh, Department of Chemistry, C.B.I.T, Gandipet, Hyderabad-500075. India.

Prof. (Dr.) Sunil Kumar, H.O.D. Applied Sciences &Humanities, JIMS Technical campus,(I.P. University,New Delhi), 48/4 ,K.P.-3,Gr.Noida (U.P.)

Prof. (Dr.) G.V.S.R.Anjaneyulu, CHAIRMAN - P.G. BOS in Statistics & Deputy Coordinator UGC DRS-I Project, Executive Member ISPS-2013, Department of Statistics, Acharya Nagarjuna University, Nagarjuna Nagar-522510, Guntur, Andhra Pradesh, India.

Prof. (Dr.) Sribas Goswami, Department of Sociology, Serampore College, Serampore 712201,

West Bengal, India.

Prof. (Dr.) Sunanda Sharma, Department of Veterinary Obstetrics & Gynecology, College of Veterinary & Animal Science, Rajasthan University of Veterinary & Animal Sciences, Bikaner-334001, India.

Prof. (Dr.) S.K. Tiwari, Department of Zoology, D.D.U. Gorakhpur University, Gorakhpur-273009 U.P., India.

Prof. (Dr.) Praveena Kuruva, Materials Research Centre, Indian Institute of Science, Bangalore-

560012, INDIA

Prof. (Dr.) Rajesh Kumar, Department Of Applied Physics, Bhilai Institute Of Technology, Durg (C.G.) 491001, India.

Dr. K.C.Sivabalan, Field Enumerator and Data Analyst, Asian Vegetable Research Centre, The World Vegetable Centre, Taiwan.

Prof. (Dr.) Amit Kumar Mishra, Department of Environmental Science and Energy Research,

Weizmann Institute of Science, Rehovot, Israel.

Prof. (Dr.) Manisha N. Paliwal, Sinhgad Institute of Management, Vadgaon (Bk), Pune, India.

Prof. (Dr.) M. S. Hiremath, Principal, K.L.E. Society's School, Athani

Prof. Manoj Dhawan, Department of Information Technology, Shri Vaishnav Institute of Technology & Science, Indore, (M. P.), India.

Prof. (Dr.) V.R.Naik, Professor & Head of Department, Mechanical Engineering, Textile & Engineering Institute, Ichalkaranji (Dist. Kolhapur), Maharashtra, India.

Prof. (Dr.) Jyotindra C. Prajapati,Head, Department of Mathematical Sciences, Faculty of Applied

Sciences, Charotar University of Science and Technology, Changa Anand -388421, Gujarat, India

Prof. (Dr.) Sarbjit Singh, Head, Department of Industrial & Production Engineering, Dr BR

Ambedkar National Institute of Technology,Jalandhar,Punjab, India.

Prof. (Dr.) Professor Braja Gopal Bag, Department of Chemistry and Chemical Technology , Vidyasagar University, West Midnapore

Prof. (Dr.) Ashok Kumar Chandra, Department of Management, Bhilai Institute of Technology,

Bhilai House, Durg (C.G.)

Prof. (Dr.) Amit Kumar, Assistant Professor, School of Chemistry, Shoolini University, Solan,

Himachal Pradesh, India

Prof. (Dr.) L. Suresh Kumar, Mechanical Department, Chaitanya Bharathi Institute of Technology,

Hyderabad, India.

Scientist Sheeraz Saleem Bhat, Lac Production Division, Indian Institute of Natural Resins and

Gums, Namkum, Ranchi, Jharkhand, India.

Prof. C.Divya , Centre for Information Technology and Engineering, Manonmaniam Sundaranar

University, Tirunelveli - 627012, Tamilnadu , India.

Prof. T.D.Subash, Infant Jesus College Of Engineering and Technology, Thoothukudi Tamilnadu, India.

Prof. (Dr.) Vinay Nassa, Prof. E.C.E Deptt., Dronacharya.Engg. College, Gurgaon India.

Prof. Sunny Narayan, university of Roma Tre, Italy.

Prof. (Dr.) Sanjoy Deb, Dept. of ECE, BIT Sathy, Sathyamangalam, Tamilnadu-638401, India.

Prof. (Dr.) Reena Gupta, Institute of Pharmaceutical Research, GLA University, Mathura, India.

Prof. (Dr.) P.R.SivaSankar, Head Dept. of Commerce, Vikrama Simhapuri University Post Graduate Centre, KAVALI - 524201, A.P., India.

Prof. (Dr.) Mohsen Shafiei Nikabadi, Faculty of Economics and Management, Industrial

Management Department, Semnan University, Semnan, Iran.

Prof. (Dr.) Praveen Kumar Rai, Department of Geography, Faculty of Science, Banaras Hindu University, Varanasi-221005, U.P. India.

Prof. (Dr.) Christine Jeyaseelan, Dept of Chemistry, Amity Institute of Applied Sciences, Amity University, Noida, India.

Prof. (Dr.) M A Rizvi, Dept. of Computer Engineering and Applications , National Institute of

Technical Teachers' Training and Research, Bhopal M.P. India.

Prof. (Dr.) K.V.N.R.Sai Krishna, H O D in Computer Science, S.V.R.M.College,(Autonomous), Nagaram, Guntur(DT), Andhra Pradesh, India.

Prof. (Dr.) Asim Kumar Sen, Principal , ST.Francis Institute of Technology (Engineering College)

under University of Mumbai , MT. Poinsur, S.V.P Road, Borivali (W), Mumbai-400103, India.

Prof. (Dr.) Rahmathulla Noufal.E, Civil Engineering Department, Govt.Engg.College-Kozhikode

Prof. (Dr.) N.Rajesh, Department of Agronomy, TamilNadu Agricultural University -Coimbatore,

Tamil Nadu, India.

Prof. (Dr.) Har Mohan Rai , Professor, Electronics and Communication Engineering, N.I.T. Kurukshetra 136131,India

Prof. (Dr.) Eng. Sutasn Thipprakmas, King Mongkut's University of Technology Thonburi, Thailand.

Prof. (Dr.) Kantipudi MVV Prasad, EC Department, RK University, Rajkot.

Prof. (Dr.) Jitendra Gupta,Faculty of Pharmaceutics, Institute of Pharmaceutical Research, GLA University, Mathura.

Prof. (Dr.) Swapnali Borah, HOD, Dept of Family Resource Management, College of Home

Science, Central Agricultural University, Tura, Meghalaya, India.

Prof. (Dr.) N.Nazar Khan, Professor in Chemistry, BTK Institute of Technology, Dwarahat-263653 (Almora), Uttarakhand-India.

Prof. (Dr.) Rajiv Sharma, Department of Ocean Engineering, Indian Institute of Technology Madras, Chennai (TN) - 600 036,India.

Prof. (Dr.) Aparna Sarkar,PH.D. Physiology, AIPT,Amity University , F 1 Block, LGF, Sector-

125,Noida-201303, UP ,India.

Prof. (Dr.) Manpreet Singh, Professor and Head, Department of Computer Engineering, Maharishi Markandeshwar University, Mullana, Haryana, India.

Prof. (Dr.) Sukumar Senthilkumar, Senior Researcher, Advanced Education Center of Jeonbuk for Electronics and Information Technology, Chon Buk National University, Chon Buk, 561-756, South Korea.

Prof. (Dr.) Hari Singh Dhillon, Assistant Professor, Department of Electronics and Communication Engineering, DAV Institute of Engineering and Technology, Jalandhar (Punjab), India.

Prof. (Dr.) Poonkuzhali, G., Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, India.

Prof. (Dr.) Bharath K N, Assistant Professor, Dept. of Mechanical Engineering, GM Institute of Technology, PB Road, Davangere 577006, Karnataka, India.

Prof. (Dr.) F.Alipanahi, Assistant Professor, Islamic Azad University, Zanjan Branch, Atemadeyeh, Moalem Street, Zanjan, Iran.

Prof. Yogesh Rathore, Assistant Professor, Dept. of Computer Science & Engineering, RITEE,

Raipur, India

Prof. (Dr.) Ratneshwer, Department of Computer Science (MMV), Banaras Hindu University Varanasi-221005, India.

Prof. Pramod Kumar Pandey, Assistant Professor, Department Electronics & Instrumentation Engineering, ITM University, Gwalior, M.P., India

Prof. (Dr.)Sudarson Jena, Associate Professor, Dept.of IT, GITAM University, Hyderabad, India

Prof. (Dr.) Binod Kumar,PhD(CS), M.Phil(CS),MIEEE,MIAENG, Dean & Professor( MCA), Jayawant Technical Campus(JSPM's), Pune, India

Prof. (Dr.) Mohan Singh Mehata, (JSPS fellow), Assistant Professor, Department of Applied

Physics, Delhi Technological University, Delhi

Prof. Ajay Kumar Agarwal, Asstt. Prof., Deptt. of Mech. Engg., Royal Institute of Management &

Technology, Sonipat (Haryana)

Prof. (Dr.) Siddharth Sharma, University School of Management, Kurukshetra University, Kurukshetra, India.

Prof. (Dr.) Satish Chandra Dixit, Department of Chemistry, D.B.S.College ,Govind Nagar,Kanpur-

208006, India

Prof. (Dr.) Ajay Solkhe, Department of Management, Kurukshetra University, Kurukshetra, India.

Prof. (Dr.) Neeraj Sharma, Asst. Prof. Dept. of Chemistry, GLA University, Mathura

Prof. (Dr.) Basant Lal, Department of Chemistry, G.L.A. University, Mathura

Prof. (Dr.) T Venkat Narayana Rao, C.S.E,Guru Nanak Engineering College, Hyderabad, Andhra Pradesh, India

Prof. (Dr.) Rajanarender Reddy Pingili, S.R. International Institute of Technology, Hyderabad,

Andhra Pradesh, India

Prof. (Dr.) V.S.Vairale, Department of Computer Engineering, All India Shri Shivaji Memorial

Society College of Engineering, Kennedy Road, Pune-411 001, Maharashtra, India

Prof. (Dr.) Vasavi Bande, Department of Computer Science & Engineering, Netaji Institute of Engineering and Technology, Hyderabad, Andhra Pradesh, India

Prof. (Dr.) Hardeep Anand, Department of Chemistry, Kurukshetra University Kurukshetra,

Haryana, India.

Prof. Aasheesh shukla, Asst Professor, Dept. of EC, GLA University, Mathura, India.

Prof. S.P.Anandaraj., CSE Dept, SREC, Warangal, India.

Satya Rishi Takyar, Senior ISO Consultant, New Delhi, India.

Prof. Anuj K. Gupta, Head, Dept. of Computer Science & Engineering, RIMT Group of Institutions,

Mandi Gobindgarh, Punjab, India.

Prof. (Dr.) Harish Kumar, Department of Sports Science, Punjabi University, Patiala, Punjab, India.

Prof. (Dr.) Mohammed Ali Hussain, Professor, Dept. of Electronics and Computer Engineering, KL

University, Green Fields, Vaddeswaram, Andhra Pradesh, India.

Prof. (Dr.) Manish Gupta, Department of Mechanical Engineering, GJU, Haryana, India.

Prof. Mridul Chawla, Department of Elect. and Comm. Engineering, Deenbandhu Chhotu Ram University of Science & Technology, Murthal, Haryana, India.

Prof. Seema Chawla, Department of Bio-medical Engineering, Deenbandhu Chhotu Ram

University of Science & Technology, Murthal, Haryana, India.

Prof. (Dr.) Atul M. Gosai, Department of Computer Science, Saurashtra University, Rajkot,

Gujarat, India.

Prof. (Dr.) Ajit Kr. Bansal, Department of Management, Shoolini University, H.P., India.

Prof. (Dr.) Sunil Vasistha, Mody Institute of Tecnology and Science, Sikar, Rajasthan, India.

Prof. Vivekta Singh, GNIT Girls Institute of Technology, Greater Noida, India.

Prof. Ajay Loura, Assistant Professor at Thapar University, Patiala, India.

Prof. Sushil Sharma, Department of Computer Science and Applications, Govt. P. G. College, Ambala Cantt., Haryana, India.

Prof. Sube Singh, Assistant Professor, Department of Computer Engineering, Govt. Polytechnic,

Narnaul, Haryana, India.

Prof. Himanshu Arora, Delhi Institute of Technology and Management, New Delhi, India.

Dr. Sabina Amporful, Bibb Family Practice Association, Macon, Georgia, USA.

Dr. Pawan K. Monga, Jindal Institute of Medical Sciences, Hisar, Haryana, India.

Dr. Sam Ampoful, Bibb Family Practice Association, Macon, Georgia, USA.

Dr. Nagender Sangra, Director of Sangra Technologies, Chandigarh, India.

Vipin Gujral, CPA, New Jersey, USA.

Sarfo Baffour, University of Ghana, Ghana.

Monique Vincon, Hype Softwaretechnik GmbH, Bonn, Germany.

Natasha Sigmund, Atlanta, USA.

Marta Trochimowicz, Rhein-Zeitung, Koblenz, Germany.

Kamalesh Desai, Atlanta, USA.

Vijay Attri, Software Developer Google, San Jose, California, USA.

Neeraj Khillan, Wipro Technologies, Boston, USA.

Ruchir Sachdeva, Software Engineer at Infosys, Pune, Maharashtra, India.

Anadi Charan, Senior Software Consultant at Capgemini, Mumbai, Maharashtra.

Pawan Monga, Senior Product Manager, LG Electronics India Pvt. Ltd., New Delhi, India.

Sunil Kumar, Senior Information Developer, Honeywell Technology Solutions, Inc., Bangalore, India.

Bharat Gambhir, Technical Architect, Tata Consultancy Services (TCS), Noida, India.

Vinay Chopra, Team Leader, Access Infotech Pvt Ltd. Chandigarh, India.

Sumit Sharma, Team Lead, American Express, New Delhi, India.

Vivek Gautam, Senior Software Engineer, Wipro, Noida, India.

Anirudh Trehan, Nagarro Software Gurgaon, Haryana, India.

Manjot Singh, Senior Software Engineer, HCL Technologies Delhi, India.

Rajat Adlakha, Senior Software Engineer, Tech Mahindra Ltd, Mumbai, Maharashtra, India.

Mohit Bhayana, Senior Software Engineer, Nagarro Software Pvt. Gurgaon, Haryana, India.

Dheeraj Sardana, Tech. Head, Nagarro Software, Gurgaon, Haryana, India.

Naresh Setia, Senior Software Engineer, Infogain, Noida, India.

Raj Agarwal Megh, Idhasoft Limited, Pune, Maharashtra, India.

Shrikant Bhardwaj, Senior Software Engineer, Mphasis an HP Company, Pune, Maharashtra,

India.

Vikas Chawla, Technical Lead, Xavient Software Solutions, Noida, India.

Kapoor Singh, Sr. Executive at IBM, Gurgaon, Haryana, India.

Ashwani Rohilla, Senior SAP Consultant at TCS, Mumbai, India.

Anuj Chhabra, Sr. Software Engineer, McKinsey & Company, Faridabad, Haryana, India.

Jaspreet Singh, Business Analyst at HCL Technologies, Gurgaon, Haryana, India.


TOPICS OF INTEREST

Topics of interest include, but are not limited to, the following:

Social networks and intelligence

Social science simulation

Information retrieval systems

Technology management

Digital libraries for e-learning

Web-based learning, wikis and blogs

Operational research

Ontologies and meta-data standards

Engineering problems and emerging applications

Agent based modeling and systems

Ubiquitous computing

Wired and wireless data communication networks

Mobile Ad Hoc, sensor and mesh networks

Natural language processing and expert systems

Monte Carlo methods and applications

Fuzzy logic and soft computing

Data mining and warehousing

Software and web engineering

Distributed AI systems and architectures

Neural networks and applications

Search and meta-heuristics

Bioinformatics and scientific computing

Genetic network modeling and inference

Knowledge and information management techniques

Aspect-oriented programming

Formal and visual specification languages

Informatics and statistics research

Quantum computing

Automata and formal languages

Computer graphics and image processing

Web 3D and applications

Grid computing and cloud computing

Algorithms design

Genetic algorithms

Compilers and interpreters

Computer architecture & VLSI

Advanced database systems

Digital signal and image processing

Distributed and parallel processing

Automation and mobile robots

Manufacturing technology

Electrical technology

Applied mathematics

Automatic control

Nuclear engineering

Computational physics

Computational chemistry


TABLE OF CONTENTS (June-August, 2014, Issue 9, Volume 1, 2 & 3)

Issue 9, Volume 1

Paper Code

Paper Title Page No.

IJETCAS 14-504

Image Compression using Hybrid Slant Wavelet where Slant is Base Transform and Sinusoidal Transforms are Local Transforms H. B. Kekre, Tanuja Sarode, Prachi Natu

01-10

IJETCAS 14-507

Error Propagation of Quantitative Analysis Based on Ratio Spectra Prof. J. Dubrovkin

11-20

IJETCAS 14-508

Thermal and Moisture Behavior of Premise Exposed to Real Climate Condition Nour LAJIMI, Noureddine BOUKADIDA

21-28

IJETCAS 14-509

Influence of notch parameters on fracture behavior of notched component M. Moussaoui, S. Meziani

29-37

IJETCAS 14-510

Modeling Lipase Production From Co-cultures of Lactic Acid Bacteria Using Neural Networks and Support Vector Machine with Genetic Algorithm Optimization Sita Ramyasree Uppada, Aditya Balu, Amit Kumar Gupta, Jayati Ray Dutta

38-43

IJETCAS 14-515

Numerical investigation of absorption dose distribution of onion powder in electron irradiation system by MCNPX code T. Taherkhani, Gh. Alahyarizadeh

44-49

IJETCAS 14-516

Predicting Crack Width in Circular Ground Supported Reservoir Subject to Seismic Loading Using Radial Basis Neural Networks: RC & FRC Wall Tulesh.N.Patel, S.A. Vasanwala, C.D. Modhera

50-55

IJETCAS 14-518

Impact of Various Channel Coding Schemes on Performance Analysis of Subcarrier Intensity-Modulated Free Space Optical Communication System Joarder Jafor Sadique, Shaikh Enayet Ullah and Md. Mahbubar Rahman

56-60

IJETCAS 14-523

Glaucomatous Image Classification Based On Wavelet Features Shafan Salam, Jobins George

61-65

IJETCAS 14-524

Comparative Analysis of EDFA based 32 channels WDM system for bidirectional and counter pumping techniques Mishal Singla, Preeti, Sanjiv Kumar

66-70

IJETCAS 14-525

Appraising Water Quality Aspects for an Expanse of River Cauvery alongside Srirangapatna Ramya, R. and Ananthu, K. M.

71-75

IJETCAS 14-527

An Improved Image Steganography Technique Using Discrete Wavelet Transform Richika Mahajan, B.V. Kranthi

76-82

IJETCAS 14-528

Robust Watermarking in Mid-Frequency Band in Transform Domain using Different Transforms with Full, Row and Column Version and Varying Embedding Energy Dr. H. B. Kekre, Dr. Tanuja Sarode, Shachi Natu

83-93

IJETCAS 14-533

Optimizing Pair Programming Practice through PPPA Smitha Madhukar

94-98

IJETCAS 14-536

Estimation of Inputs for a Desired Output of a Cooperative and Supportive Neural Network P. Raja Sekhara Rao, K. Venkata Ratnam and P.Lalitha

99-105

IJETCAS 14-537

Effect of High Temperature Pre-Annealing on Thermal Donors in N-Doped CZ-Silicon Vikash Dubey and Mahipal Singh

106-109

Issue 9, Volume 2

Paper Code

Paper Title Page No.

IJETCAS 14-538

Sybil Attack Detection through TDOA-Based Localization Method in Wireless Sensor Network Sweety Saxena, Vikas Sejwar

110-114

IJETCAS 14-539

Study of the Maxwell-Boltzmann Distribution Asymmetry J. Dubrovkin

115-118

IJETCAS 14-542

Design and Analysis of High Speed SRAM Cell at 45nm Technology PN Vamsi Kiran, Anurag Mondal

119-123

IJETCAS 14-545

An Empirical Study of Impact of Software Development Model on Software Delivery Dr. Rajinder Singh

124-127

IJETCAS 14-546

On finding nth root of m leading to Newton Raphson’s improved method Nitin Jain, Kushal D Murthy and Hamsapriye

128-136

IJETCAS 14-548

Analysis of compression and elasticity of the nanocrystalline cubic silicon nitride (γ- Si3N4) under high pressure Monika Goyal & B.R.K.Gupta

137-140

IJETCAS 14-549

Efficacy of GGBS Stabilized Soil Cushions With and Without Lime in Pavements Sridevi G and Sreerama Rao A

141-147

IJETCAS 14-550

Evaluation of UV/H2O2 advanced oxidation process (AOP) for the degradation of acid orange7 and basic violet 14 dye in aqueous solution P. Manikandan, P. N. Palanisamy, R.Ramya, and D. Nalini

148-151

IJETCAS 14-555

Modified Error Data Normalized Step Size algorithm Applied to Adaptive Noise Canceller Shelly Garg and Ranjit Kaur

152-158

IJETCAS 14-559

Design and Analysis of Solar Power Switched Inductor and Switched Capacitor for DC Distribution System Mr. D.Saravanakumar, Mrs. G.Gaayathri

159-165

IJETCAS 14-562

Performance Analysis of Low Power Dissipation and High Speed Voltage Sense Amplifier Mrs. Jasbir Kaur, Nitin Goyal

166-169

IJETCAS 14-567

Improved Complexity of Area Sequence Moments for Mouse Drawn Shapes Vinay Saxena

170-175

IJETCAS 14-571

Designing a Conceptual Framework for Library 2.0 Services Dr. (Mrs.) Shalini R. Lihitkar, Vaibhav P. Manohar

176-184

IJETCAS 14-572

Reliable Data Communication over Mobile Adhoc Network Using WEAC Protocol with ARQ Technique A. Kamatchi, Dr. Annasaro Vijendran

185-190

IJETCAS 14-575

A Generic Transliteration tool for CLIA & MT Applications Nishant Sinha, Atul Kumar and Vishal Mandpe

191-196

IJETCAS 14-577

Denoising of the ECG Signal using Kohonen Neural Network Gautam Chouhan, Dr. Ranjit Kaur

197-203

Issue 9, Volume 3

Paper Code

Paper Title Page No.

IJETCAS 14-578

Mass media Interventions and Technology transfer among Banana Growers: Experiences from Tamil Nadu, India P. Ravichamy, S. Nandakumar, K.C.Siva balan

204-209

IJETCAS 14-580

On Classifying Sentiments and Mining Opinions Jasleen Kaur, Dr. Jatinderkumar R. Saini

210-214

IJETCAS 14-583

Security Design Issues in Distributed Databases Pakanati.Raja sekhar Reddy, Dr Syed umar, Narra.Sriram

215-217

IJETCAS 14-584

Groundwater Chemistry of South Karaikal and Nagapattinam Districts,Tamilnadu, India. M.Chandramouli and T J Renuka Prasad

218-223

IJETCAS 14-585

Bit Error Rate vs Signal to Noise Ratio Analysis of M-ary QAM for Implementation of OFDM Mrs. Jasbir Kaur, Anant Shekhar Vashistha

224-228

IJETCAS 14-589

Physico-chemical analysis of groundwater covering the parts of Padmanabhanagar, Bangalore Urban District S Shruthi and T J Renuka Prasad

229-236

IJETCAS 14-591

Advanced Energy Efficient Routing Protocol for Clustered Wireless Sensor Network: Survey Prof. N R Wankhade, Dr. D N Choudhari

237-242

IJETCAS 14-593

Design of a digital pll with divide by 4/5 prescaler Jayalekshmi, Vipin Thomas

243-247

IJETCAS 14-594

Evaluation of the Peak Location Uncertainty in Second-Order Derivative Spectra. Case Study: Symmetrical Lines J. Dubrovkin

248-255

IJETCAS 14-598

Comparison of Various Biometric Methods Dr. Rajinder Singh, Shakti Kumar

256-261

IJETCAS 14-604

Determination of Diffusion Constants in Boronation Powder Metallurgy Samples of the System Fe-C-Cu I. Mitev, K.Popov

262-265

IJETCAS 14-605

Almost NORLUND Summability of Conjugate Series of a Fourier Series V. S. Chaubey

266-268

IJETCAS 14-608

On a New Weighted Average Interpolation Vignesh Pai B H and Hamsapriye

269-275

IJETCAS 14-615

A Review on Bandwidth Enhancement Methods of Microstrip Patch Antenna Tanvir Singh Buttar, Narinder Sharma

276-279

IJETCAS 14-619

A Script Recognizer Independent Bi-lingual Character Recognition System for Printed English and Kannada Documents N. Shobha Rani, Deepika B.D., Pavan Kumar S.

280-285

IJETCAS 14-624

A Survey on String Similarity Matching Search Techniques S.Balan, Dr. P.Ponmuthuramalingam

286-288

IJETCAS 14-632

Independent GATE FINFET SRAM Cell Using Leakage Reduction Techniques Anshul Jain, Dr.Minal Saxena and Virendra Singh

289-294

IJETCAS 14-639

Towards a new ontology matching system through a multi-agent architecture Jihad Chaker, Mohamed Khaldi and Souhaib Aammou

295-299

IJETCAS 14-641

Bridgeless SEPIC for AC to DC Kakkeri,Roopa, Bagban,Jasmine, Patil,Vanita

300-304

IJETCAS 14-643

The Effect of Temperatures on the Silicon Solar Cell Asif Javed

305-308

IJETCAS 14-647

Performance Enhancement and Characterization of Junctionless VeSFET Tarun Chaudhary, Gargi Khanna

309-314

IJETCAS 14-648

Review Paper on Comparative study of various PAPR Reduction Techniques Gagandeep Singh and Ranjit Kaur

315-318

IJETCAS 14-650

Link Prediction-Based Topology Control and Adaptive Routing for Cognitive Radio Mobile Ad-Hoc Networks Kanchan Hadawale, Sunita Barve, Parag Kulkarni

319-325


International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research)

International Journal of Emerging Technologies in Computational

and Applied Sciences (IJETCAS)

www.iasir.net

IJETCAS 14- 504; © 2014, IJETCAS All Rights Reserved Page 1

ISSN (Print): 2279-0047

ISSN (Online): 2279-0055

Image Compression using Hybrid Slant Wavelet where Slant is Base Transform and Sinusoidal Transforms are Local Transforms

H. B. Kekre¹, Tanuja Sarode², Prachi Natu³
Computer Engineering Dept.; ¹Sr. Professor, ²Associate Professor, ³Asst. Professor and Ph.D. Research Scholar
¹,³NMIMS University, ²Mumbai University, India

Abstract: Many transform-based image compression methods have been explored to date. This paper proposes a novel image compression method using a hybrid slant transform. The slant transform is used as the base transform to capture the global features of an image. Sinusoidal orthogonal transforms such as DCT, DST, Hartley and Real-DFT are paired with the slant transform to generate the hybrid slant wavelet transform. The performance of the hybrid slant wavelet can be compared by varying the sizes of its component transforms. Along with RMSE, which is a commonly used parameter, Mean Absolute Error, AFCPV and SSIM are used to observe the perceptibility of the compressed image. It has been observed that the hybrid slant wavelet generated using an 8x8 Slant and a 32x32 DCT gives the lowest error at compression ratio 32, compared to the other sinusoidal transforms paired with the slant transform. The performance of the hybrid slant wavelet is compared with its multi-resolution analysis, which includes the semi-global features of an image, and with the hybrid transform, which includes the global features of the image. The comparison shows that the hybrid wavelet gives better image quality than the hybrid transform and its multi-resolution analysis.

Keywords: Slant transform, Image compression, Compression ratio, RMSE, SSIM

I. Introduction

In today’s internet world, the use of multimedia data is increasing tremendously, and digital technology makes it necessary to transmit and store data in ever more compact form to achieve efficient bandwidth utilization. Digital images are an integral part of this data, so image compression plays a vital role in making better use of the available bandwidth and storage space. Image compression schemes are generally classified as lossless or lossy. Lossless compression is error-free: after decompression the original image is reconstructed exactly. It is therefore used for text data compression and medical image compression, where loss of data is not tolerable. Lossy image compression, on the other hand, introduces some error between the original image and the reconstructed image. The performance of lossy compression methods is measured using the compression ratio, defined as the ratio of the number of bits in the original image to the number of bits in its compressed representation. The goal of any lossy compression technique is to maintain the trade-off between compression ratio and the quality of the reconstructed image [1].
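To make this trade-off concrete, the sketch below implements a toy transform coder (our own illustration, not the method of this paper): an image block is transformed with an orthonormal matrix, only the 1/CR fraction of largest-magnitude coefficients is retained, and the RMSE of the reconstruction is measured. The function name and the gradient test block are our assumptions.

```python
import numpy as np

def transform_code(img, T, cr):
    """Keep the 1/cr fraction of largest-magnitude coefficients of
    T @ img @ T.T, zero the rest, inverse-transform, and report RMSE."""
    coeffs = T @ img @ T.T
    k = max(1, coeffs.size // cr)                  # coefficients retained
    thresh = np.sort(np.abs(coeffs), axis=None)[-k]
    kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    recon = T.T @ kept @ T                         # inverse of orthonormal T
    return recon, np.sqrt(np.mean((img - recon) ** 2))

# Orthonormal 8x8 DCT-II matrix, built directly from its definition.
N = 8
n = np.arange(N)
T = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
T[0, :] /= np.sqrt(2.0)

img = np.outer(np.linspace(0, 255, N), np.ones(N))  # smooth gradient block
recon, rmse = transform_code(img, T, cr=4)
print(rmse < 5.0)  # smooth content survives 4:1 coefficient truncation well
```

Because the energy of a smooth block compacts into a few low-frequency coefficients, discarding three quarters of them costs almost nothing here; textured blocks would show a larger RMSE at the same ratio.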

Many lossy image compression methods have been studied in the literature. Predictive coding, transform-based coding, wavelet-based coding and vector quantization are a few of them. Besides the DCT and the wavelet transform, fractal transform coding techniques were also developed, but they have not shown satisfactory results in low bit rate applications [2]. In transform-based image compression the Discrete Cosine Transform [3] is widely used; it is the standard for JPEG image compression. Normally the DCT is applied to individual NxN blocks of an image, which introduces a blocky effect in the compressed image. JPEG 2000 uses wavelet transform coding, which analyzes the signal in both the time and frequency domains. The wavelet transform has a higher energy compaction property than the DCT, so it provides a better compression ratio [4] and reduces the blocky effect considerably. Multi-resolution representation of an image is another important feature of wavelet transforms: the wavelets can be scaled and shifted to analyze the spatial frequency contents of an image at different resolutions and positions [5]. Slant transform coding has been shown to give substantial bandwidth reduction compared to pulse code modulation [6], and it yields lower MSE for moderately sized image blocks. This paper focuses on the hybrid wavelet transform and its multi-resolution analysis property. The hybrid wavelet transform is generated from orthogonal component transforms, which can be varied to produce different hybrid wavelet transforms.

II. Review of Literature

In the last two to three decades the wavelet transform has been emphasized in various image processing applications, and image compression is one of them. The Haar wavelet transform has been studied widely as it is simple and fast. A modified fast Haar wavelet transform (MFHWT) has been discussed by Chang P. et al. [7]. A multilevel 2-D Haar wavelet transform is used for image compression by Ch. Samson and V.U.K. Sastry [8]. Image compression with multi-resolution singular value decomposition is proposed by Ryuichi Ashino et al. [9]: in their paper the wavelet transform is combined with singular value decomposition, and a two-level 9/7 biorthogonal wavelet is used to transform the image.

H. B. Kekre et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014, pp.

01-10

The transformed image is decomposed using Singular Value Decomposition, and the decomposed image is then compressed using SPIHT. In this method more levels of wavelets need to be applied to reach a lower value of bits per pixel. A multi-resolution segmentation based algorithm is proposed by Hamid R. Rabiee, R. L. Kashyap and H. Radha [2], in which high quality low bit rate image compression is achieved by recursively coding the Binary Space Partitioning (BSP) tree representation of images with Multi-level Block Truncation Coding (BTC). Jin Li Kuo et al. [10] proposed a hybrid wavelet-fractal coder (WFC) for image compression. The WFC uses the fractal contractive mapping to predict the wavelet coefficients of the higher resolution from those of the lower resolution and then encodes the prediction residue with a bit-plane wavelet coder. A multiwavelet transform based on zero-tree coefficient shuffling has been proposed by M. Ashok and T. Bhaskara Reddy [11]. A non-linear transform called the peak transform is proposed in [12]; it minimizes the high frequency components in the image to a great extent, thus allowing the image to be compressed more. A hybrid wavelet transform combining the Kekre transform with other sinusoidal transforms is presented in [13]; it shows that the full hybrid wavelet transform gives one third the RMSE of the respective column and row hybrid wavelet transforms. Alani et al. [14] propose an algorithm well suited to low bit rate image coding called Geometric Wavelets, a recent development in the field of multivariate piecewise polynomial approximation, in which the binary space partition scheme, a segmentation based technique of image coding, is combined with the wavelet technique [15]. The Kekre-Hartley hybrid wavelet transform is compared with its hybrid transform in [16], showing that including the global features of an image increases the compression error compared with including only its local features.

III. Proposed Technique

The proposed method compares the performance of the hybrid Slant wavelet transform with the hybrid transform and its multi-resolution analysis [17]. In the hybrid slant wavelet transform, the Slant transform acts as the base transform and the other sinusoidal transforms act as local transforms. The hybrid wavelet is generated using the Kronecker product of two different transform matrices, as given in eq. (1). Here A is a pxp slant transform matrix and B is any sinusoidal matrix of size qxq. Bq(1) denotes the first row of matrix B; in general, the nth row of B is denoted Bq(n). The Kronecker product of the slant matrix with the first row of B represents the global features of the image, while an identity matrix of size pxp is used to translate the remaining rows of B, capturing the local properties of the image.

T_{AB} = \begin{bmatrix} A_p \otimes B_q(1) \\ I_p \otimes B_q(2) \\ \vdots \\ I_p \otimes B_q(q) \end{bmatrix}    (1)
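The construction of eq. (1) can be sketched in a few lines of NumPy (an illustrative sketch following the description above; the function name and the toy 2x2 Haar-like components are our assumptions, not from the paper):

```python
import numpy as np

def hybrid_wavelet_matrix(A, B):
    """Build the (p*q) x (p*q) hybrid wavelet matrix T_AB of eq. (1):
    the first p rows are kron(A, first row of B) -- the global part --
    and each remaining row of B is translated across the image via
    kron(I_p, B row) -- the local part."""
    p = A.shape[0]
    blocks = [np.kron(A, B[0:1, :])]                    # p rows: global
    for n in range(1, B.shape[0]):                      # (q-1)*p rows: local
        blocks.append(np.kron(np.eye(p), B[n:n+1, :]))
    return np.vstack(blocks)

# Toy 2x2 orthonormal components, just to check the structure:
A = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
B = A.copy()
T = hybrid_wavelet_matrix(A, B)
print(T.shape)                           # (4, 4)
print(np.allclose(T @ T.T, np.eye(4)))   # rows stay orthonormal: True
```

With orthonormal components the resulting hybrid wavelet matrix is itself orthonormal, so its transpose serves as the inverse transform.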

Semi-global features of the image can be included by changing the transformation matrix T_{AB} in eq. (1) as

(2)

In the above matrix we have the flexibility to select the number of rows that contribute to the local, global and semi-global features of the image. Scaling is done by reducing the size of matrix A to half in successive rows of the matrix, and shifting is done by using the identity matrix. In the transformation matrix, global properties alone are included by taking the simple Kronecker product of the two component transforms, given as

T_{AB} = A_p \otimes B_q = [a_{ij} B_q]    (3)

This matrix has no local properties. Since it is a Kronecker product of two different transform matrices, we call it a Hybrid Transform. To measure the performance of any compression method, the compression ratio and traditional error criteria such as MSE, RMSE and PSNR are used. Here, RMSE is used; but since it does not reflect perceived error, this criterion alone is not sufficient. Hence Mean Absolute Error (MAE), Average Fractional Change in Pixel Value (AFCPV) and


Structural Similarity Index (SSIM) are also used to observe the perceptibility of the compressed image to the human eye. SSIM and AFCPV give the change in perceived error. The mathematical formulae for these parameters are given below.

MAE = \frac{\sum_{i=1}^{p} \sum_{j=1}^{q} |x_{ij} - y_{ij}|}{p \cdot q}    (4)

AFCPV = \frac{\sum_{i=1}^{p} \sum_{j=1}^{q} |x_{ij} - y_{ij}| / x_{ij}}{p \cdot q}    (5)

where x_{ij} is the original image, y_{ij} is the reconstructed image, p is the number of rows and q is the number of columns.
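For concreteness, Eqs. (4) and (5) can be computed directly; a short sketch (the array values are illustrative only):

```python
import numpy as np

def mae(x, y):
    # Eq. (4): mean absolute difference between original x and reconstruction y.
    return np.mean(np.abs(x - y))

def afcpv(x, y):
    # Eq. (5): average fractional change in pixel value (x must be non-zero).
    return np.mean(np.abs(x - y) / x)

original = np.array([[100.0, 200.0], [50.0, 25.0]])
reconstructed = np.array([[90.0, 210.0], [55.0, 20.0]])
print(mae(original, reconstructed))    # 7.5
print(afcpv(original, reconstructed))  # ≈ 0.1125
```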

SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}    (6)

Here c_1 and c_2 are constants given by c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2, where k_1 = 0.01 and k_2 = 0.03 by default and L = 2^8 - 1 = 255. \mu_x is the average of image x, \mu_y is the average of image y, \sigma_{xy} is the covariance of x and y, and \sigma_x^2 and \sigma_y^2 are the variances of images x and y, respectively. SSIM considers image degradation as perceived change in structural information.
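Eq. (6) translates directly to code. This is a sketch that computes SSIM over a single window with the default constants stated above; the paper applies it per 16x16 block, and the function name is mine:

```python
import numpy as np

def ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM of Eq. (6) with c1 = (k1*L)^2, c2 = (k2*L)^2."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16)).astype(float)
print(ssim(img, img))  # 1.0 for identical images
```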

IV. Results and Discussions

The proposed method is applied to 256x256 color images of different classes. Fig. 1 shows the color images selected for the experimental work.

Mandrill Peppers Grapes Cartoon

Dolphin Waterlili Bud Bear

Lena Apple Ball Balloon

Bird Colormap Fruits Hibiscus


Puppy Rose Tiger

Fig. 1 Color Images of Different Classes used for Experimental Work

The proposed hybrid wavelet transform is applied to the above images. For this, Slant is selected as the base transform, and sinusoidal transforms such as DCT, DST, Hartley and Real-DFT are used as local transforms. Comparative results are given below in Fig. 2.

Fig. 2 Average RMSE vs. Compression Ratio in Hybrid Slant Wavelet Transform with variation in component transforms, with Slant 8x8 and local transform 32x32 (8-32)

Fig. 2 shows average RMSE against compression ratio for different hybrid Slant wavelet transforms. An 8x8 Slant transform matrix and a 32x32 local component transform are used to generate the 256x256 transform matrix, which is then used to transform the image. As shown in the graph, when DCT is used as the local transform, lower RMSE is obtained. For the various compression ratios up to 32, Slant-DCT proves to be better; at compression ratio 32, an RMSE of 10 is obtained using this pair. As Slant-DCT gives the lowest RMSE, different sizes of these component transforms are further tried to find a better size combination. Results are shown in Fig. 3.
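The paper does not spell out how a given compression ratio is reached; a common convention, and the assumption made here, is to retain only the largest-magnitude transform coefficients, so that compression ratio 32 keeps 1/32 of them. A minimal sketch with a plain orthonormal DCT standing in for the hybrid matrix (both function names are mine):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix, used here only as a stand-in transform.
    k = np.arange(n)
    T = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    T[0] /= np.sqrt(n)
    T[1:] *= np.sqrt(2.0 / n)
    return T

def compress_rmse(img, T, cr):
    """2-D transform, keep the 1/cr largest-magnitude coefficients,
    inverse transform (T is orthonormal), and return the RMSE."""
    coeffs = T @ img @ T.T
    keep = max(1, int(img.size / cr))
    threshold = np.sort(np.abs(coeffs).ravel())[::-1][keep - 1]
    coeffs[np.abs(coeffs) < threshold] = 0.0
    reconstructed = T.T @ coeffs @ T
    return np.sqrt(np.mean((img - reconstructed) ** 2))

rng = np.random.default_rng(1)
image = rng.uniform(0, 255, (32, 32))
T = dct_matrix(32)
print(compress_rmse(image, T, 2) <= compress_rmse(image, T, 32))  # True
```

Zeroing a larger set of coefficients can only increase the reconstruction error, which is why RMSE grows monotonically with the compression ratio in the figures.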

As observed from Fig. 3, for lower compression ratios up to 4, the 16-16 and 32-8 pairs of Slant-DCT give lower error. Here 32-8 means that 32x32 is the size of the base transform, i.e. the Slant transform, and 8x8 is the size of the local component. For higher compression ratios up to 16, lower RMSE is given by the 16-16 pair. At the highest compression ratio of 32, the 8-32 and 16-16 pairs give almost equal error.

[Chart: Average RMSE vs. compression ratio (2 to 32) in Slant-DCT hybrid wavelet with variation in component size; series 8-32, 16-16, 32-8 and 64-4.]

Fig. 3 Average RMSE vs. Compression ratio in Slant-DCT Hybrid Wavelet with different sizes of component transforms


Fig. 4 shows RMSE in the multi-resolution analysis of the hybrid Slant wavelet transform.

Fig. 4 Avg. RMSE vs. Compression Ratio in Multi-Resolution Analysis of Hybrid Slant Wavelet Transform with different local component transforms

In multi-resolution analysis also, Slant-DCT produces the lowest error for all compression ratios. Further, different sizes of Slant and DCT are combined to find the size giving the lowest RMSE. The respective graph is plotted in Fig. 5.

Fig. 5 Average RMSE against Compression Ratio in Slant-DCT Multi-Resolution Analysis with variation in component size

As shown in Fig. 5, for lower compression ratios up to 8, the Slant-DCT multi-resolution hybrid wavelet with 8-32 and 16-16 component sizes shows almost equal error. From compression ratio 8 onwards, the 8-32 pair clearly shows lower error than the other size combinations. The 64-4 pair shows the maximum RMSE at all compression ratios. After analyzing the performance of the hybrid wavelet and the multi-resolution hybrid wavelet, the full Kronecker product of the Slant transform with the other sinusoidal transforms is taken, which is the hybrid transform of the two components. The variation of RMSE for these different hybrid transforms at different compression ratios is shown in Fig. 6.

Fig. 6 RMSE vs. Compression Ratio in Hybrid Transform with Slant as Base Transform

In the hybrid transform, as in the hybrid wavelet and its multi-resolution analysis, Slant-DCT performs better than Slant-DST, Slant-Hartley and Slant-RealDFT. Slant-DST shows high error in all three types of transforms.


Fig. 7 RMSE at different compression ratios in Slant-DCT Hybrid Transform with variation in component size

Fig. 7 plots average RMSE against compression ratio in the Slant-DCT hybrid transform. The sizes of the base transform and the local component are varied and the error is observed at different compression ratios. For all compression ratios, the 8-32 pair gives the lowest RMSE. From all the above figures, it is observed that 8-32 Slant-DCT gives the minimum RMSE in the hybrid wavelet, its multi-resolution analysis and the hybrid transform, compared with the other component sizes in the respective transform types. Fig. 8 compares the RMSE of the three types of transforms, i.e. hybrid wavelet, its multi-resolution analysis and hybrid transform, using the specific component size 8-32.

Fig. 8 Comparison of RMSE at various compression ratios using 8-32 component size in hybrid wavelet, multi-resolution analysis and hybrid transform

From Fig. 8 it is observed that the Slant-DCT hybrid wavelet gives lower RMSE than the hybrid transform and the multi-resolution hybrid wavelet, keeping the component size the same in all three transforms. As Slant-DCT gives lower RMSE than the other hybrid Slant wavelet transforms, its performance is also measured in terms of Mean Absolute Error (MAE). MAE gives the absolute difference in pixel values and hence reflects the perceptibility of the compressed image better. Using different component sizes, MAE is plotted against compression ratio for the hybrid wavelet, the multi-resolution hybrid wavelet and the hybrid transform in Figs. 9, 10 and 11, respectively.

Fig. 9 Average MAE vs. Compression Ratio in Slant-DCT Hybrid Wavelet with variation in component sizes


As shown in Fig. 9, at lower compression ratios up to 4.57, the 16-16 and 32-8 pairs give almost equal MAE, as with RMSE. For higher compression ratios the best size changes to 16-16. At compression ratio 32, the 8-32 pair gives slightly less MAE than the 16-16 pair.

Fig. 10 Average MAE vs. Compression Ratio in Slant-DCT Multi-Resolution Hybrid Wavelet with variation in component sizes

As shown in Fig. 10, in the multi-resolution analysis the 8-32 size of Slant-DCT gives the lowest MAE. For lower compression ratios this size is 16-16.

Fig. 11 Average MAE vs. Compression Ratio in Slant-DCT Hybrid Transform with variation in component sizes

As shown in Fig. 11, the 8-32 size of Slant-DCT gives the lowest MAE at all compression ratios. Further, the performance of the Slant-DCT hybrid wavelet is measured in terms of Average Fractional Change in Pixel Value (AFCPV). The component size is varied, as in the RMSE and MAE comparisons, to find the best size combination. Fig. 12 shows AFCPV against compression ratio for the Slant-DCT hybrid wavelet with variation in component size. Similar to RMSE and MAE, the 32-8 pair gives lower AFCPV at lower compression ratios up to 4. For compression ratios 4 to 16, the 16-16 pair works better. At the highest compression ratio of 32, equal AFCPV is obtained by the 8-32 and 16-16 pairs.

Fig. 12 AFCPV vs. Compression Ratio in Slant-DCT Hybrid Wavelet with variation in component size


Figs. 13 and 14 show AFCPV versus compression ratio in the multi-resolution analysis and the hybrid transform of Slant-DCT, respectively. The sizes of the component transforms are varied to choose the size giving the lowest AFCPV.

Fig. 13 AFCPV vs. Compression Ratio in Slant-DCT Multi-Resolution Hybrid Wavelet with variation in component size

Fig. 14 AFCPV vs. Compression Ratio in Slant-DCT Hybrid Transform with variation in component size

In the multi-resolution analysis as well as in the hybrid transform, the 8-32 pair of Slant-DCT gives the lowest AFCPV, as in the hybrid wavelet. The 64-4 pair gives high AFCPV and hence should not be considered.

So far, performance has been compared using various parameters. The Structural Similarity Index is an error metric that correlates with perceived quality better than the above-mentioned metrics. The difference with respect to the other techniques mentioned previously, such as MSE or PSNR, is that those approaches estimate absolute errors; SSIM, on the other hand, considers image degradation as perceived change in structural information. Structural information is the idea that pixels have strong inter-dependencies, especially when they are spatially close. These dependencies carry important information about the structure of the objects in the visual scene. Fig. 15 shows blocked SSIM plotted against compression ratio.

Fig. 15 Average blocked SSIM against compression ratio in Slant-DCT hybrid wavelet with component size 8-32


The image is divided into 16x16 blocks and SSIM is computed for each block. The average SSIM over all blocks is calculated for each compression ratio and is plotted in Fig. 15. SSIM varies from -1 to 1; for two identical images it is one. As the image is compressed more, the error increases and SSIM decreases. At lower compression ratios it is almost equal to one. In the hybrid transform, SSIM decreases to 0.991 at compression ratio 32; in the hybrid wavelet and the multi-resolution analysis it is 0.993 at the same compression ratio. This indicates that when the image is compressed using the hybrid wavelet transform or its multi-resolution analysis, better image quality is obtained than with the hybrid transform. As shown in the graph, SSIM in the hybrid wavelet and the multi-resolution analysis is almost equal, which is indicated by the overlapping of the graphs in these two cases.
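The blocked computation described above can be sketched as follows, applying Eq. (6) per non-overlapping 16x16 tile and averaging (function names are mine):

```python
import numpy as np

def ssim_window(x, y, L=255.0, k1=0.01, k2=0.03):
    # Eq. (6) over one window.
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

def blocked_ssim(x, y, block=16):
    """Average SSIM over non-overlapping block x block tiles, as used
    for Fig. 15 (image sides are assumed divisible by the block size)."""
    scores = [ssim_window(x[i:i + block, j:j + block],
                          y[i:i + block, j:j + block])
              for i in range(0, x.shape[0], block)
              for j in range(0, x.shape[1], block)]
    return float(np.mean(scores))

rng = np.random.default_rng(2)
img = rng.uniform(0, 255, (64, 64))
print(blocked_ssim(img, img))  # 1.0
```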

Fig. 16 shows the 'Lena' image reconstructed using the hybrid Slant wavelet transform at compression ratio 32. The local component transforms are varied as DCT, Hartley, Real-DFT and DST. In each case SSIM is observed at the highest compression ratio of 32. The Slant-DCT pair shows SSIM 0.993 using the hybrid wavelet and its multi-resolution analysis. In the Slant-DCT hybrid transform it decreases to 0.991, degrading the image quality. The lowest SSIM, 0.98, is observed in the Slant-DST hybrid wavelet transform, showing grids in the reconstructed image.

SSIM of the reconstructed 'Lena' image at compression ratio 32:

                                   Slant-DCT   Slant-Hartley   Slant-Real DFT   Slant-DST
Hybrid Wavelet                     0.993       0.9924          0.992            0.98
Multi-resolution Hybrid Wavelet    0.993       0.9922          0.992            0.984
Hybrid Transform                   0.991       0.99            0.991            0.988

Fig. 16 Reconstructed 'Lena' image at Compression Ratio 32 using Slant (16x16) as Base Transform in Hybrid Wavelet, its Multi-Resolution Analysis and Hybrid Transform with different Local Component Transforms of Size 16x16

V. Conclusion

In this paper, three different cases of the hybrid Slant wavelet have been investigated and compared for color image compression: the hybrid wavelet, i.e. bi-resolution analysis, its multi-resolution analysis, and the hybrid transform, compared using different error parameters. Various sinusoidal orthogonal transforms are used as local components and combined with the Slant transform. Different sizes of component transforms, namely 8-32, 16-16, 32-8 and 64-4, are used to generate the 256x256 hybrid wavelet transform matrix, which is then applied to color images of the same size. Several fidelity criteria are used, since RMSE alone measures only absolute error. At lower compression ratios, the 16-16 Slant-DCT hybrid wavelet transform gives the lowest error, closely followed by the 8-32 size at higher compression ratios. In the multi-resolution analysis and in the hybrid transform, the 8-32 component size gives the lowest error. Slant-RealDFT ranks second in performance, followed by the Slant-Hartley pair, whereas Slant-DST gives the maximum error and hence is not recommended. Apart from RMSE, MAE and AFCPV are also used to assess the reconstructed image quality. The Structural Similarity Index gives a clearer idea of subjective image quality in the three different types of transforms than a traditional error metric like RMSE. The SSIM obtained with the hybrid wavelet is 0.993, which is closest to one, indicating better reconstructed image quality; with the hybrid transform the SSIM obtained is 0.991, indicating slight degradation in image quality.


References

[1] Rehna V. J., Jeya Kumar M. K., "Hybrid Approaches to Image Coding: A Review", International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 2, No. 7, 2011.
[2] Hamid R. Rabiee, R. L. Kashyap, H. Radha, "Multi-resolution Image Compression with BSP Trees and Multilevel BTC".
[3] Ahmed N., Natarajan T., Rao K. R., "Discrete Cosine Transform", IEEE Transactions on Computers, Vol. 23, pp. 90-93, 1974.
[4] Amara Graps, "An Introduction to Wavelets", IEEE Computational Science and Engineering, Vol. 2, No. 2, Summer 1995.
[5] S. Mallat, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 11, pp. 674-693, 1989.
[6] William Pratt, Wen-Hsiung Chen, Lloyd Welch, "Slant Transform Image Coding", IEEE Transactions on Communications, Vol. COM-22, No. 8, August 1974, pp. 1075-1093.
[7] P. Chang, P. Piau, "Modified Fast and Exact Algorithm for Fast Haar Transform", Proc. of World Academy of Science, Engineering and Technology, 2007, pp. 509-512.
[8] Ch. Samson, V. U. K. Sastry, "A Novel Image Encryption Supported by Compression Using Multilevel Wavelet Transform", International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 3, No. 9, 2012, pp. 178-183.
[9] R. Ashino, A. Morimoto, M. Nagase, R. Vaillancourt, "Image Compression with Multiresolution Singular Value Decomposition and Other Methods", Mathematical and Computer Modelling, Vol. 41, 2005, pp. 773-790.
[10] Jin Li, C.-C. J. Kuo, "Image Compression with a Hybrid Wavelet-Fractal Coder", IEEE Trans. Image Processing, Vol. 8, No. 6, pp. 868-874, June 1999.
[11] M. Ashok, T. Bhaskara Reddy, "Image Compression Techniques Using Modified High Quality Multiwavelets", International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 2, No. 7, 2011.
[12] S. Anila, N. Devarajan, "The Usage of Peak Transform for Image Compression", International Journal of Engineering Science and Technology, Vol. 2, No. 11, pp. 6308-6316, 2010.
[13] H. B. Kekre, Tanuja Sarode, Prachi Natu, "Performance Comparison of Column Hybrid, Row Hybrid and Full Hybrid Wavelet Transform on Image Compression Using Kekre Transform as Base Transform", International Journal of Computer Science and Information Security (IJCSIS), Vol. 12, No. 2, 2014, pp. 5-17.
[14] D. Alani, A. Averbuch, S. Dekel, "Image Coding with Geometric Wavelets", IEEE Trans. Image Processing, Vol. 16, No. 1, pp. 69-77, Jan. 2007.
[15] G. Chopra, A. K. Pal, "An Improved Image Compression Algorithm Using Binary Space Partition Scheme and Geometric Wavelets", IEEE Trans. Image Processing, Vol. 20, No. 1, pp. 270-275, Jan. 2011.
[16] H. B. Kekre, Tanuja Sarode, Prachi Natu, "Performance Analysis of Hybrid Transform, Hybrid Wavelet and Multi-Resolution Hybrid Wavelet for Image Data Compression", International Journal of Modern Engineering Research, Vol. 4, Issue 5, May 2014, pp. 37-48.
[17] H. B. Kekre, Tanuja Sarode, Rekha Vig, "Multi-resolution Analysis of Multispectral Palmprints Using Hybrid Wavelets for Identification", International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 4, No. 3, 2013, pp. 192-198.


Error Propagation of Quantitative Analysis Based on Ratio Spectra

J. Dubrovkin

Computer Department, Western Galilee College

2421 Acre, Israel

Abstract: Error propagation of the quantitative analysis of binary and ternary mixtures based on the ratio spectra and the mean-centred ratio spectra has been studied. Gaussian doublets and triplets were used as models of the mixture pure-component spectra. The mixture spectra were disturbed by random constant and proportional noise and an unknown background. The perturbations of the calibration matrix were modelled by systematic errors caused by wavelength shifts. The least-squares estimation of the concentration vector and the estimation errors were obtained theoretically and numerically. The condition number of the matrix of the pure-component ratio spectra was theoretically evaluated for binary mixtures. The advantages and disadvantages of the ratio spectra methods are discussed.

Keywords: quantitative spectrochemical analysis, ratio spectra, mean-centred ratio spectra, errors, condition number, random constant and proportional noise, unknown background.

I. Introduction

One of the main problems of the spectrochemical analysis of white multicomponent mixtures is the overlapping of pure-component spectra which are known a priori. For more than half a century, analysts have attempted to solve this problem by developing numerous smart mathematical algorithms for processing the mixture spectrum in conjunction with physical-chemical treatment of the mixture to be analyzed [1]. These algorithms may be divided into two main groups:

1. Allocation of the analytical points set (or its linear transforms) in the mixture spectrum free of overlapping with respect to a given analyte.

2. Direct and inverse calibration methods based on solving linear equation systems for data sets of calibration mixtures with known spectra and concentrations.

The most popular methods of the first group include derivative spectroscopy [2], the method of orthogonal transforms ("the net analyte signal") [3, 4], and different modifications of the optical density ratio method [1, 5-15].

Major progress in developing the methods of the second group was achieved in the 1980s by applying statistical methods of chemometrics (regularization, principal component analysis, and partial least squares regression) to combined analytical data obtained by spectroscopic and non-spectroscopic measurements [16]. The success of this approach is attributed to using increased information for analytical purposes. Unfortunately, in some real-life cases, mixtures which contain different concentrations of pure components are not available and/or the preparation of artificial mixtures is too complicated. The analysis of medicines with claimed compositions also requires a special approach [17].

In view of the above, many researchers have attempted to improve the "old" analytical methods developed in the "pre-computer era" by using modern instrumentation and computational tools. An interesting practical application in this field is the mean-centering modification of the ratio spectra (RS) method [5] (the MCRS method [6-15]). However, its effectiveness was proved only by experimental studies. The goal of our work was to give a rigorous mathematical foundation for studying the noise-filtering properties (error propagation) of the method. Standard notations of linear algebra are used throughout the paper. Bold upper-case and lower-case letters denote matrices and vectors, respectively. Upper-case and lower-case italicized letters denote scalars. All calculations were performed and the plots were built using the MATLAB program.

II. Theory

Consider the spectrum of an additive binary mixture that obeys the Beer-Bouguer-Lambert law (Eq. 1), where the mixture spectrum is a column vector, two column vectors represent the spectra of the first and the second mixture components, respectively, and the concentration vector holds the two component concentrations (T denotes transpose). The path length is assumed to be equal to unity. Suppose that the error-free pure-component spectra are a priori known. Multiplying Eq. 1 by the diagonal matrix

J. Dubrovkin, International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014, pp. 11-20. IJETCAS 14-507; © 2014, IJETCAS All Rights Reserved.

whose diagonal elements are the reciprocals of the elements of the second component spectrum, we obtain the ratio spectrum (Eq. 3), in which the second-component concentration multiplies the unit vector. This constant term can be eliminated by mean centering Eq. 3 (subtracting its average), giving Eq. 4, where the chevron is the mean-value symbol. Matrix equation (4) consists of one linear equation for each wavelength. Earlier it was suggested to evaluate the unknown concentration at one optimal wavelength (e.g., at the point of the maximum) by a linear calibration procedure using standard mixtures [6]. Such calibration can significantly reduce the systematic errors [18]. On the other hand, single-point analysis results in the loss of the information that is contained in the rest of the analytical points. Therefore, we prefer to solve Eq. 4 by the least squares (LS) method using the "best" combination of the analytical wavelengths (e.g., [19]).

The LS solution of Eq. 4 is given by Eq. 5 [20]. The corresponding expressions for the second component are obtained by interchanging the indexes in Eqs. 4 and 5. The above algebraic operations represent, in fact, a linear transformation of Eq. 1. According to statistical concepts [20], in the presence of uncorrelated normal noise (perturbation) with zero mean, the LS estimate is the best linear unbiased estimate, with minimal dispersion that cannot be decreased by any linear transformation. However, for other kinds of noise (e.g., proportional), this conclusion is not valid.
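As a numerical sanity check of Eqs. 3-5, the MCRS estimate of the first-component concentration can be sketched as below. The synthetic Gaussian spectra, the function name and the parameter values are mine, and for simplicity every wavelength is used as an analytical point:

```python
import numpy as np

def mcrs_estimate(s, s1, s2):
    """Estimate c1 in s = c1*s1 + c2*s2 by the MCRS method: divide by
    s2 (ratio spectrum, Eq. 3), mean-centre to remove the constant c2
    term (Eq. 4), and solve by least squares (Eq. 5)."""
    r = s / s2                        # ratio spectrum of the mixture
    b = s1 / s2                       # pure-component ratio spectrum
    rc, bc = r - r.mean(), b - b.mean()
    return float(bc @ rc / (bc @ bc))

lam = np.linspace(0, 10, 200)
gauss = lambda m: np.exp(-(lam - m) ** 2) + 0.01   # small background keeps s2 > 0
s1, s2 = gauss(4.0), gauss(6.0)
mixture = 2.0 * s1 + 3.0 * s2
print(mcrs_estimate(mixture, s1, s2))  # ~2.0 in the noise-free case
```

Mean centering removes the c2 term exactly here, which is why the noise-free estimate recovers c1; the paper's analysis concerns how this estimate degrades once noise and background enter.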

The LS solution of Eq. 1 gives two calibration vectors, which differ from the corresponding vectors obtained by the MCRS method (Eqs. 6 and 7). If a mixture spectrum contains uncorrelated normal noise with zero mean and constant dispersion, the mean squared error of the LS estimation of the component concentration vector depends on the sum of the diagonal elements of the inverse matrix (8) [20]. The case where the standard deviation of the noise is proportional to the response of the spectral instrument, with a given noise coefficient, is treated similarly, and corresponding expressions hold when the MCRS method is used. To compare the LS and the MCRS methods, we used the standard deviation (std) ratio of Eq. 13. The mathematical analysis of the mean centering of ternary mixture ratio spectra is similar to the corresponding analysis for binary mixtures (see APPENDIX A). In the case of ternary mixtures, the error term of the third component will appear in Eqs. 9-12. The error propagation in multicomponent analysis can be studied using the condition number of the calibration matrix. It is known that the relative prediction uncertainty in quantitative analysis [21] is


bounded by the product of the condition number and the sum of the relative uncertainties of the calibration matrix (calibration errors) and of the mixture spectra (measurement errors). Since the theoretical calculation of the condition number is possible only in some simple cases (see APPENDIX B), this number is generally evaluated numerically by computer modeling.

III. Computer modeling

A. Binary mixtures

To perform the computer modeling, the components of a symmetrical Gaussian doublet (Fig. 1) were chosen as the elements of the calibration matrix (Eq. 15), parameterized by the abscissa of the spectrum plot (e.g., wavelength), the full width at half maximum of the Gaussian lines, and the positions of the component maxima. The condition number of the ratio spectra matrix was evaluated (APPENDIX B) in terms of the sampling interval along the abscissa. Since the resulting value could not be calculated analytically, the ratio was calculated numerically (Fig. 2).

From the curves shown in Fig. 2, it can be concluded that the RS method is slightly less error-sensitive than the LS method only for a large number of analytical points in the case of a strongly overlapping Gaussian doublet. For a resolved doublet, the RS method is not effective.
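The dependence of the condition number on doublet overlap can be reproduced numerically; a sketch assuming the Gaussian doublet model of Eq. 15 (the grid, the separations and the FWHM value are mine):

```python
import numpy as np

lam = np.linspace(0.0, 100.0, 201)

def doublet_matrix(separation, fwhm=20.0):
    """Two Gaussian columns per Eq. 15; a smaller separation-to-FWHM
    ratio means stronger overlap of the pure-component spectra."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = lambda m: np.exp(-((lam - m) ** 2) / (2.0 * sigma ** 2))
    return np.column_stack([gauss(50.0 - separation / 2.0),
                            gauss(50.0 + separation / 2.0)])

strong = np.linalg.cond(doublet_matrix(separation=4.0))     # strongly overlapping
resolved = np.linalg.cond(doublet_matrix(separation=60.0))  # well resolved
print(strong > resolved)  # True: overlap inflates the condition number
```

Strong overlap makes the two columns nearly collinear, which inflates the condition number and hence the prediction uncertainty bound discussed above.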

The standard deviation ratios (Eq. 13) were evaluated for different sets of analytical points. The chosen sets were located in the neighborhood of the doublet middle point (Fig. 3) and symmetrically around the point 598 of the doublet MCRS (Fig. 1). Since the wings of Gaussian lines quickly decay to zero, the intensity of the MCRS approaches infinity. To partly compensate for this drawback, a very small constant background (0.001) was added to the doublet components (Eq. 15).

Based on the results presented in Figs. 4 and 5, it can be concluded that it is more advantageous to apply the LS method to the MCRS than to the original spectra only in the case of proportional noise (which is typical of UV-VIS instruments). The sets of analytical points were identical for both the LS and MCRS methods. The systematic errors caused by an uncompensated second-order polynomial background in the mixture spectrum were evaluated numerically for both methods; the obtained values were close to 1.

Another type of error is caused by systematic errors in the matrix of the pure-component spectra. While the random errors of the calibration matrix can be significantly decreased by averaging, the systematic errors cannot be eliminated. The main source of the latter type of error is the shift of the spectral points from their original position

Figure 1. Symmetrical Gaussian doublet with constant base line (∙∙∙, base line = 0.001) and its MCRS (—).

Figure 2. Dependences of the relative condition number for different integration limits; (c) 0.2.


Figure 3. Analytical points for binary mixtures. Mixture and pure-component spectra; five-point (circles) and nine-point (rectangles) analytical sets 1a, 1b, 2a and 2b (a - black, b - red).

Figure 4. Dependences of std ratio for different analytical sets. Constant and proportional noise (above and below the line, respectively); the designations of the analytical sets are the same as in Fig. 3.

Figure 5. Dependences of std ratio for analytical sets symmetrically located around point 598. Constant and proportional noise (above and below the line, respectively); the set size is given next to each plot.

along the wavelength axis. This shift depends on the slope of the spectral curve [1]. A theoretical study of the impact of the wavelength shift in the MCRS on the concentration errors is presented in APPENDIX C (Eq. C7). It was shown that the errors of the calculated concentrations strongly depend on the derivatives (slopes) of the mixture spectrum and of the pure-component spectra. Thus the metrological characteristics of the RS method can be improved by precise setting of the mixture spectrum and of the pure-component spectra at the same wavelengths. This result is in agreement with the well-known study of the influence of calibration-spectra wavelength shift on the validity of multivariate calibration models [22, 23]. The impact of wavelength shift on the uncertainty of the LS-based analysis of spectra and of the corresponding ratio spectra (including MCRS) was also evaluated numerically. The analytical points were chosen in the range of steep slope of the spectra (Fig. 6; the sampling interval was halved). The obtained results show that, in some cases, the mean centering procedure significantly decreases the RS analysis uncertainty (Fig. 6). However, the MCRS method has no advantages over the common LS method. Moreover, the selection of analytical points is a critical factor of the analysis. For example, for a symmetrical distribution of the analytical points around the point 598 (Fig. 1), the LS method gives better results than the MCRS one, especially for large data sets (Fig. 7).


Figure 6. Evaluation of error propagation of binary mixture analysis using different sets of analytical points


Left-hand panels: sets of the analytical points of the calibration matrix (∙∙∙) and of the first doublet component RS (—). Ranges of analytical points: (a) 1090-1190, (b) 950-1050, (c, d) 850-1150. Right-hand panels: top and bottom plots.

Figure 7. Dependences of std ratio for calibration matrix errors. LS (—) and MCRS (∙∙∙) methods. The analytical sets are located symmetrically around point 598 (Fig. 1); the set sizes are 3, 5, 9, and 21 from bottom to top, respectively.

B. Ternary mixtures

The components of two Gaussian triplets were taken as the elements of two calibration matrices (Fig. 8, a1 and a2, top plots). The condition number of the first triplet is very large due to the strong overlapping of the pure-component spectra; overlapping in the second triplet is small, and its condition number is correspondingly moderate. From the full-region MCRSs (Fig. 8, a1 and a2, bottom plots), "the best" combination of analytical points was selected empirically (Fig. 8, b1-d1, b2-d2). The points were selected in the regions of maximum intensity of the transformed spectra, according to the minimum-error criteria for the evaluated concentrations. The total prediction error (Eq. C9) was estimated by averaging over 100 statistically independent numerical experiments using a 95%-confidence interval.

The results presented in Fig. 8 show that the advantage of the MCRS method is a significant compression of the spectral data. In other words, this method allows replacing full-range spectra by a relatively small number of analytical points. However, the intensities of the transformed spectra decrease notably, which results in an increased impact of small errors in the MCRS on the quantitative analysis uncertainty. It was found that the data compression is achieved at the expense of a biased estimation of the second component and a large increase of the total prediction error of the component concentrations for ternary mixtures.

The prediction errors, calculated for 16 ternary mixtures (Table 1), are listed in Table 2. According to these

results, the errors of the LS analysis based on full-range spectra are significantly less than those of the MCRS

method. However, in the same spectral region, the MCRS method is preferable to the LS one in the

case of strongly overlapping pure-component spectra.

The most critical factor for the MCRS method is systematic error of the pure-component spectra matrix. For such relative errors, the total prediction error can exceed 100%.


In conclusion, it should be pointed out that the MCRS method employs a very limited number of analytical points, whose locations are chosen a priori. In contrast, in the multivariate regression method, all

possibly relevant spectral and non-spectral data are pooled together. Therefore, generally speaking, the latter

method is preferred to the MCRS analysis. However, consuming large data volumes from analytical instruments

creates a new data-management level. In this connection, traditional spectrochemical professionals often prefer

"single-point" analytical methods to complex multi-point procedures.

Figure 8. Gaussian triplet component spectra and their full-region MCRS.

(a1, a2) Component spectra (top plots) and MCRS (bottom plots). Analytical point sets for (b1-d1) the a1 triplet and (b2-d2) the a2 triplet, respectively.

Table 1. Mixture concentrations (each column gives the concentrations of components 1-3 in one of the 16 mixtures)

c1:  0.05  0.05  0.05  0.1  0.1  0.8  0.1  0.2  0.7  0.1  0.3  0.6  0.2  0.2  0.6  1/3
c2:  0.05  0.9   0.05  0.1  0.8  0.1  0.2  0.1  0.7  0.3  0.1  0.6  0.2  0.6  0.2  1/3
c3:  0.9   0.05  0.9   0.8  0.1  0.1  0.7  0.2  0.1  0.6  0.3  0.1  0.6  0.2  0.2  1/3

Table 2. Total prediction errors, %

Disturbance                    LS (full)       LS (part)       MCRS
Constant noise                 0.35±0.13       83±30           5.0±1.6
                               0.027±0.091     0.20±0.071      0.41±0.12
Proportional noise             0.25±0.088      67±27           4.3±1.3
                               0.017±0.0059    0.15±0.054      0.31±0.096
Constant noise + background    0.41±0.14       79±29           5.5±1.7
                               0.16±0.059      0.25±0.081      0.46±0.14
Wavelength shift               0.019±0.0075    0.057±0.025     0.30±0.12
                               0.037±0.0144    0.12±0.056      0.61±0.25
                               0.016±0.0065    0.016±0.0071    0.070±0.018
                               0.098±0.039     0.28±0.13       64±128
                               0.040±0.016     0.041±0.017     0.70±1.3

Data for two calibration sets (Fig. 8, a1 and a2) are given in the upper and lower rows, respectively.

References
[1] I. Ya. Bernstein and Yu. L. Kaminsky, Spectrophotometric Analysis in Organic Chemistry. Leningrad: Science, 1986.
[2] J. M. Dubrovkin and V. G. Belikov, Derivative Spectroscopy: Theory, Technics, Application. Rostov: Rostov University, 1988.
[3] J. M. Dubrovkin, "The possibility of the discrete Fourier transform-based quantitative analysis for overlapping absorption bands", Izvestia Severo-Kavkazskogo Nauchnogo Centra Vysšey Školy, Yestestvennye Nauki, no. 1, 1981, pp. 57-60.
[4] A. Lorber, "Error propagation and figures of merit for quantification by solving matrix equations", Anal. Chem., vol. 58, 1986, pp. 1167-1172.
[5] M. J. S. Dewar and D. S. Urch, "Electrophilic substitution. Part VIII. The nitration of dibenzofuran and a new method of ultraviolet spectrophotometric analysis of mixtures", J. Chem. Soc., no. 1, 1957, pp. 345-347.
[6] A. Afkhami and M. Bahram, "Mean centering of ratio kinetic profiles as a novel spectrophotometric method for the simultaneous kinetic analysis of binary mixtures", Anal. Chim. Acta, vol. 526, 2004, pp. 211-218.
[7] A. Afkhami and M. Bahram, "Mean centering of ratio spectra as a new spectrophotometric method for the analysis of binary and ternary mixtures", Talanta, vol. 66, 2005, pp. 712-720.
[8] H. M. Lotfy and M. A. Hegazy, "Comparative study of novel spectrophotometric methods manipulating ratio spectra: An application on pharmaceutical ternary mixture of omeprazole, tinidazole and clarithromycin", Spectrochim. Acta A: Molecular and Biomolecular Spectrosc., vol. 96, 2012, pp. 259-270.
[9] E. A. Abdelaleem and N. S. Abdelwahab, "Simultaneous determination of some antiprotozoal drugs in different combined dosage forms by mean centering of ratio spectra and multivariate calibration with model updating methods", Chem. Central J., vol. 6:27, 2012, pp. 1-8.
[10] M. M. Issa, R. M. Nejem, A. M. Abu Shanab and N. T. Shaat, "Resolution of five-component mixture using mean centering ratio and inverse least squares", Chem. Central J., vol. 7:152, 2013, pp. 1-11.
[11] H. W. Darwish, S. A. Hassan, M. Y. Salem and B. A. El-Zeiny, "Three different spectrophotometric methods manipulating ratio spectra for determination of binary mixture of Amlodipine and Atorvastatin", Spectrochim. Acta A: Molecular and Biomolecular Spectrosc., vol. 83, 2011, pp. 140-148.
[12] N. M. Bhatt, V. D. Chavada, M. Sanyal and P. S. Shrivastav, "Manipulating ratio spectra for the spectrophotometric analysis of diclofenac sodium and pantoprazole sodium in laboratory mixtures and tablet formulation", The Scientific World Journal, vol. 2014, 2014, Article ID 495739, 10 pp.
[13] H. M. Lotfy, N. Y. Hassan, S. M. Elgizawy and S. S. Saleh, "Comparative study of new spectrophotometric methods: an application on pharmaceutical binary mixture of ciprofloxacin hydrochloride and hydrocortisone", J. Chil. Chem. Soc., vol. 58, 2013, pp. 1651-1657.
[14] H. M. Lotfy, M. A. M. Hegazy and S. A. N. Abdel-Gawad, "Simultaneous determination of Simvastatin and Sitagliptin in tablets by new univariate spectrophotometric and multivariate factor based methods", European J. Chem., vol. 4, 2013, pp. 414-421.
[15] N. W. Ali, M. A. Hegazy, M. Abdelkawy and E. A. Abdelaleem, "Simultaneous determination of methocarbamol and ibuprofen or diclofenac potassium using mean centering of the ratio spectra method", Acta Pharm., vol. 62, 2012, pp. 191-200.
[16] H. Martens and T. Næs, Multivariate Calibration. New York: Wiley, 1992.
[17] "US Pharmacopeia <197> Spectrometric identification tests", http://www.pharmacopeia.cn/v29240/usp29nf24s0_c197.html
[18] J. Dubrovkin, "Evaluation of the maximum achievable information content in quantitative spectrochemical analysis", International Journal of Emerging Technologies in Computational and Applied Sciences, vol. 1-6, 2013, pp. 1-7.
[19] J. M. Dubrovkin, "Quantitative analytical spectrometry of multicomponent systems with known quantitative composition using the orthogonal projection method", Zhurnal Prikladnoi Spectroscopii, vol. 50, 1989, pp. 861-864.
[20] G. A. F. Seber and A. J. Lee, Linear Regression Analysis, 2nd ed. New Jersey: John Wiley and Sons, 2003.
[21] K. Danzer, M. Otto and L. A. Currie, "Guidelines for calibration in analytical chemistry. Part 2. Multispecies calibration (IUPAC Technical Report)", Pure and Applied Chemistry, vol. 76, 2004, pp. 1215-1225.
[22] H. Swierenga, A. P. de Weijer, R. J. van Wijk and L. M. C. Buydens, "Strategy for constructing robust multivariate calibration models", Chemom. Intell. Lab. Syst., vol. 49, 1999, pp. 1-17.
[23] F. Vogt and K. Booksh, "Influence of wavelength-shifted calibration spectra on multivariate calibration models", Appl. Spectrosc., vol. 58, 2004, pp. 624-635.

Appendix

A. Mean centering of ratio spectra for ternary mixtures [6].

Similar to the case of binary mixtures (Eq. 1), consider the spectrum of an additive ternary mixture which obeys the Beer-Bouguer-Lambert law (Eq. A1).

1st step: multiplication of Eq. A1 by matrix (2) gives Eq. A2; mean centering of (A2) gives Eq. A3.

2nd step: multiplication of Eq. A3 by a diagonal matrix with non-zero elements gives Eq. A4; mean centering of (A4) gives Eq. A5. The LS solution of Eq. A5 then yields the slope and the intercept of the linear equation for the third component.

The slopes and the intercepts of the linear equations for the first and the second components are readily obtained in the same way.
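For the binary case (Eq. 1), the two-step scheme can be sketched numerically; the Gaussian pure-component spectra and concentrations below are illustrative assumptions, not the paper's data. Dividing the mixture spectrum by the second pure-component spectrum turns the second component into a constant, which mean centering removes, so the LS slope of the mean-centered ratio recovers the first concentration.

```python
import numpy as np

# Synthetic Gaussian doublet: assumed pure-component spectra s1, s2.
x = np.linspace(0, 10, 500)
s1 = np.exp(-((x - 4.5) ** 2) / 2.0)
s2 = np.exp(-((x - 5.5) ** 2) / 2.0)
c1, c2 = 0.3, 0.7                      # assumed true concentrations
m = c1 * s1 + c2 * s2                  # additive mixture spectrum (Beer's law)

def mean_center(v):
    return v - v.mean()

# Step 1: divide by the second pure-component spectrum (the divisor).
ratio_mix = m / s2                     # = c1*(s1/s2) + c2
ratio_ref = s1 / s2

# Step 2: mean centering removes the constant term c2, leaving
# MC(ratio_mix) = c1 * MC(ratio_ref); the LS slope estimates c1.
u = mean_center(ratio_mix)
v = mean_center(ratio_ref)
c1_hat = (u @ v) / (v @ v)
```

The second concentration follows analogously by using s1 as the divisor.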

B. Condition number of the RS matrix of the pure components of a Gaussian doublet.

The transformed matrix (Eq. 15) has the form


where the matrix is defined by Eq. 2. Then

For sufficiently large values, the sums in Eq. B2 can be substituted by integrals:

The condition number of the matrix is

where λ denotes an eigenvalue of the matrix. The eigenvalues are the solutions of the following equation:

where det is the determinant symbol and I is the identity matrix. From Eqs. B2-B6, we obtain:

Using a Taylor series, it is easy to show that

Substituting Eq. B8 into Eq. B7, we obtain:

Approximation B9 is very close to the precise value obtained numerically. Due to the symmetry of the Gaussian doublet, the same result is obtained for the second component.
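The link between spectral overlap and conditioning discussed in this appendix can be illustrated numerically. The Gaussian parameters below are assumptions; note that NumPy's `cond` returns the ratio of the extreme singular values of the matrix, i.e. the square root of the condition number of SᵀS.

```python
import numpy as np

def gaussian(x, center, width):
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

x = np.linspace(0, 1200, 1200)

# Strongly overlapping doublet -> ill-conditioned pure-spectra matrix.
S_close = np.column_stack([gaussian(x, 590, 60), gaussian(x, 606, 60)])
# Well-separated doublet -> condition number close to 1.
S_far = np.column_stack([gaussian(x, 450, 60), gaussian(x, 750, 60)])

cond_close = np.linalg.cond(S_close)   # ratio of extreme singular values
cond_far = np.linalg.cond(S_far)
```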

C. Gaussian doublet. Case study: impact of systematic errors of matrix S on the errors of binary mixture analysis

Let

where the second term is the disturbance of matrix S, whose elements can be regarded as known constants. Substituting Eq. C1 into Eq. 1, we obtain:

where is an unknown concentration vector. The LS solution of Eq. C2 is

Suppose that the shift of a point along the ordinate axis from its original position in the mixture spectrum is the main source of systematic error. This shift, being dependent on the slope of the spectral curve [1], is measured by the fraction of the sampling interval along the abscissa axis:

where k is a constant and the remaining factor is the derivative of the spectrum at the given point.

Substituting Eqs. C1 and C4 into Eq. C3 and using the matrix expansion for small disturbances, we obtain:

Next, neglecting the small terms of higher order, we have

where .

Eq. C6 is identical to the equation obtained in the case of the round–off errors of the explanatory variables (the

elements of the regression matrix) [20].

Setting Eq. B1 into this expression, we obtain for the ratio method (Eq. C7):


The first term in the brackets in Eq. C7 appears due to the shifts of the mixture pure-component spectra. The second term is connected with the errors of the mixture ratio spectra that are due to the shift of the second pure-component spectrum.

To compare the errors, we used the ratio of the vector norms, which was evaluated numerically. In addition, the total prediction error for the mixtures was calculated (Eq. C9).
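The wavelength-shift error propagation described in this appendix can be sketched as follows. The spectra, the shift fraction k and the sampling step are assumed values: the calibration matrix is disturbed by a first-order shift term proportional to the spectral derivative (in the spirit of Eq. C4), and the resulting relative concentration error is evaluated as a norm ratio.

```python
import numpy as np

x = np.linspace(0, 10, 1000)
dx = x[1] - x[0]                       # sampling interval (assumed)
s1 = np.exp(-((x - 4.5) ** 2) / 2.0)   # assumed pure-component spectra
s2 = np.exp(-((x - 5.5) ** 2) / 2.0)
S = np.column_stack([s1, s2])
c_true = np.array([0.4, 0.6])
y = S @ c_true                         # error-free mixture spectrum

k = 0.1                                # shift as a fraction of dx (assumed)
dS = k * dx * np.gradient(S, dx, axis=0)   # first-order shift disturbance
c_hat, *_ = np.linalg.lstsq(S + dS, y, rcond=None)

rel_err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
```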


International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research)

International Journal of Emerging Technologies in Computational

and Applied Sciences (IJETCAS)

www.iasir.net

IJETCAS 14-508; © 2014, IJETCAS All Rights Reserved Page 21

ISSN (Print): 2279-0047

ISSN (Online): 2279-0055

THERMAL AND MOISTURE BEHAVIOR OF PREMISE EXPOSED TO REAL CLIMATE CONDITION

Nour LAJIMI 1, Noureddine BOUKADIDA 1

1 Laboratory of Energy and Materials, LabEM-LR11ES34, Rue Lamine Abbassi, 4011 Hammam Sousse, Tunisia

_____________________________________________________________________________________

Abstract: This paper presents a numerical study of the thermal and moisture behavior of a premise. Vertical walls are equipped with an alveolar structure on the East, South and West faces. The temperature and the relative humidity are assumed to be variable with time. The study shows that the climatic conditions and the orientation of the vertical walls have a relative influence on the inside behavior of the premise. The study also shows the effect of the alveolar structure on the relative humidity and temperature inside the premise.

Keywords: Solar energy; heat and moisture transfer; relative humidity; alveolar structure.

__________________________________________________________________________________________

I. State of the art in terms of heat and mass transfer in buildings

Very high or very low relative humidity and condensation phenomena can compromise building occupants'

health and comfort. Controlling humidity and maintaining a comfortable humidity range for occupants are necessary. Generally, most people will be comfortable in a humidity range of 30–80% if the air temperature is in

a range of 18–24ºC. There are many ways that avoid condensation and maintain relative humidity in optimal

range in buildings. Over the last decade, much theoretical and, to a lesser extent, experimental work on the thermal and moisture behavior of buildings has been done. Many researchers were interested in thermal behavior, others only in moisture behavior. Among the former are those who focused on buildings equipped with inclined alveolar structures; among them we mention Seki S. [1] and Bairi A. [2], who experimentally studied heat transfer by

structure; among them we mention Seki S.[1] and Bairi A. [2] who experimentally studied heat transfer by

natural convection in a cavity. They brought out correlations of Nusselt number type according to the Grashof

number: nGrFNu ).( , where (n) depends on the nature of the flow for different configurations by varying the

angle of inclination in the cavities, the report of shape and the temperature difference (ΔT) between both warm

and cold vertical walls. Bairi.A [2] showed the influence of the thermal boundary conditions at the level of the

passive walls (lamelleas) on the convective heat transfer.

Zugari M. R. et al. [3] specified that a simple glazing equipped with an inclined-lamella structure has, during one day (the incidence of radiation varying constantly), an overall efficiency higher than that of simple or double glazing.

Vullierme J. J. and Boukadida N. [4] experimentally determined the global density of heat transfer flux, including convection and radiation (Fa), in the crossing and insulating directions through the realized alveolar structure. These measurements brought out laws for different distributions of low- or high-emissivity coatings on the inside faces of the alveoli. These laws are defined by the following correlation:

Fa = α·ΔT^1.25    (1)

where α is a constant which depends on the emissivity, the transfer direction and the angle of inclination.

In order to show the effect of the anti-convective structure, they [5] studied heat transfer in a room by using this alveolar structure. The aim of the work was to study the effects of the external temperature, the solar flux and the wall nature on the building thermal behavior using a structure with a thermal diode effect. The structure is designed to be used for cooling or heating applications. Numerical simulations allowed comparing the thermal behavior of a building equipped with this structure on its East, South and West faces to that of a standing or conventional building with large or low inertia. Simulations were made for a cooling application in a desert zone where the thermal amplitude between day and night is significant. Results showed the effect of the conducting and insulating wall layer thicknesses and of the external solar flux on the premise thermal behavior. They also showed that the average inside temperature of a premise equipped with this structure is slightly higher than that of one having high or low thermal inertia.

Lajimi N. and Boukadida N. [6] studied numerically the thermal behavior of the premises. Vertical walls are

equipped with alveolar structure and/or simple glazing in East, South and West faces. The temperature of the

premises is assumed to be variable with time or imposed at set point temperature. Results principally show that

the number of simple glazings has a sensitive effect on convection heat transfer and on inside air temperature. They also show that the diode effect is more sensitive in winter. The effect of the alveolar structure and simple glazing on the heating power in the case with a set-point temperature was also brought out. In order to optimize building energy efficiency, M. Doya et al. [7] experimentally studied the effects of a dense urban model and the impact of cool facades on outdoor and indoor thermal conditions. The aim of this work is to look for alternative solutions to


Nour LAJIMI et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014, pp. 21-28


improve thermal comfort and to reduce cooling energy demand, such as building morphology (for example, the orientation of the walls, in this case studying the temperature profile on the East and West sides) and surface albedo (modifying the albedo can reduce the absorbed solar radiation; this reduction lowers the surface temperatures and hence the longwave radiation exchanges).

Among the researchers interested in moisture behavior in buildings, we mention Woloszyn M. and Rode C. [8], who studied the performance of whole-building heat, air and moisture simulation tools. They specified that inside humidity depends on several factors, such as moisture sources, air change, sorption in materials and possible condensation. Since all these phenomena are strongly dependent on each other, numerical predictions of inside humidity need to be integrated into combined heat and airflow simulation tools. The purpose of a

recent international collaborative project, IEA–ECBCS (International Energy Agency- Energy Conservation in

Buildings and Community Systems), has been to advance development in modeling the integral heat, air and

moisture transfer processes that take place in “whole buildings” by considering all relevant parts of their

constituents. It is believed that understanding of these processes for the whole building is absolutely crucial for

future energy optimization of buildings, as this cannot take place without a coherent and complete description of

all hygrothermal processes. They also illustrated some of the modeling work that has taken place within the project and presented some of the simulation tools used. They focused on six different works carried out in the project to compare models and to stimulate the participants to extend the capabilities of their models. In some works, it was attempted to model the results from experimental investigations, such as climate chamber

tests (for example, Lenz K. [9]); in other works, it was attempted to model the so-called BESTEST building of “IEA SHC Task 12 & ECBCS Annex 21” (Judkoff R. and Neymark [10]). The original BESTEST building was extended with moisture sources and material properties for moisture transport, and is described in more detail in [11]. Constructions were altered so they were monolithic, the material data were given as constant values or

Constructions were altered so they were monolithic, the material data were given as constant values or

functions, and the solar gain through windows was modeled in a simplified way. From 9:00 to 17:00 every day.

The air change rate was always 0.5 ach (air-exchange per hour). The heating and cooling control for all the non-

isothermal cases specified that the inside temperature should be between 20 and 27°C and that the heating and

cooling systems had unlimited power to ensure this. The system was a 100% convective air system and the

thermostat was regulated on the air temperature. The first cases were very simple, so analytical solutions could

be found. These results gave an increased belief that it was possible to predict the inside relative humidity with whole-building hygrothermal calculations. In the second and more realistic part of the exercise, the building

was exposed to a real outside climate as represented by the test reference weather of Copenhagen, and a

simplified modeling of radiation was adopted. The result shows the relative humidity inside the roof structure.

For most of the tools, the results agreed with one another, which indicates that the simulations perform correctly

when it comes to the calculation of moisture transport in the building enclosure. Woloszyn M. and Rode C. [8]

clarified that models which represent heat and simple vapor diffusion in envelope parts, without considering liquid migration or hysteresis in the sorption isotherm, can give a correct estimation of the hygrothermal building response in many practical applications. Indeed, their results were similar to those of more complex tools in the

works performed. The importance of interactions in whole building HAM (heat, air and moisture) response was

also shown. The inside-air relative humidity levels are strongly dependent not only on the transfer of moisture between the air and the construction and on sources of moisture, but also on air flow, temperature levels and energy balances.

Moisture balance:

The simplicity of the model presented here is obtained by the use of Kirchhoff's potential [12], which allows describing the moisture transport. It was originally introduced for heat transfer by Kirchhoff and further developed to describe moisture transport during the past two decades (Arfvidsson J. [13]). An important result is that the average value of the Kirchhoff potential in the material over a time period is equal to the average value of the Kirchhoff potential at the surface. This is valid in a semi-infinite material (Eq. 2):

∂X/∂t = ∂/∂x (D ∂X/∂x)    (2)

The potential can be chosen to fit a special application or measurement. Relative humidity or moisture content are often natural choices since these potentials are directly measurable.

Künzel H. M. [14] studied the inside relative humidity in residential buildings. To assess the moisture performance of building envelope systems using the moisture balance (2), a boundary condition is necessary:

V·dc/dt = G·A + W − Q    (3)

where:
c: absolute moisture ratio of the interior air [kg.m-3]
G: mass flow of moisture from the inside surface into the room [kg.m-2.h-1]
A: enclosure surface area [m2]
W: inside mass flow of moisture generated by internal moisture sources [kg.h-1]
Q: wet mass rate ventilated by air-conditioning systems [kg.h-1].

Fitsu T. [15] studied whole-building heat and moisture analysis. The simulation results are based on integrating the analysis of three components used to compare models. These components cover three aspects of the whole-building performance assessment, which are:

- Inside environment: prediction of inside temperature and relative humidity;
- Building envelope hygrothermal condition: temperature and relative humidity conditions of the outside surface of the roof;
- Energy consumption: estimation of the heating and cooling loads that are required to maintain the inside temperature in the desired range.

For the second aspect, results showed that the highest moisture accumulation, corresponding to 76% relative humidity, is observed at the time when the surface temperature is the lowest, whereas when the solar radiation is the highest (13:00 h) the outside surface of the roof reaches as low as 15% relative humidity.

Several projects based on experimental analysis determined correlations between the moisture and temperature of the air inside buildings. Whys U. and Sauret H. [16] experimentally studied the heat and mass comfort in two different test buildings (one with a Nubian Vault and one with sheet metal) by determining the temperature and relative humidity of the inside air. The analysis of the surface temperature and humidity measurements shows that the temperature of the building corresponding to the “Nubian Vault” is lower compared to that of the building with “sheet metal”, which may cause an increase in the outside temperature. The variation of the surface relative humidity is less important in the building tested under “sheet metal” than in that under “Nubian Vault”.

Milos J. and Robert C. [17] determined the water vapor diffusion properties of building materials. They used an experimental method to determine the transport properties of water vapor, based on steady-state measurements under isothermal conditions of the vapor flow by diffusion through samples. Using the measured water vapor diffusion coefficient, the water vapor diffusion resistance factor, which is the parameter most frequently used in building practice, was determined as:

Rd = Dv0 / D

where Dv0 is the diffusion coefficient of water vapor in air, Rd is the vapor diffusion resistance factor and D is the diffusion coefficient of water vapor in the building material.

Patrick R. et al. [18] studied the modeling of uncontrolled inside humidity for (HAM) simulation of residential buildings; their paper examines the current approaches to modeling the inside humidity for (HAM) computer simulation use. Moisture balance methods have been developed to estimate the inside humidity in residential buildings without mechanical humidity control. The paper makes the case for establishing different parameters for hot and cold seasons. Calculations of inside humidity are presented for a representative mild marine climate, and it is demonstrated that the controlling parameters must be carefully selected to produce realistic inside humidity levels. They compared the relative humidity calculated using two models, the first one being the BRE (Building Research Establishment) admittance model and the second one ASHRAE 160P. The authors have shown the impact of inside temperatures using those models. Results illustrate the measured field data of multi-unit residential buildings in Vancouver. They have also shown a general trend in the inside-outside vapor pressure difference in measured data from Vancouver over several years of monitoring. The inside vapor pressure will nearly always be greater than the outside vapor pressure for uncontrolled inside humidity during the hot season. The difference of vapor pressure decreases over the cold season until, at some point, the inside vapor pressure will be close to the outside vapor pressure.

Based on the formal analogy between the equation of diffusion (Fick's law) and the equation of conduction (Fourier's law):

φt = −λ·grad T    (4)

φm = −Dm·grad C    (5)

there is a correspondence between the groupings (φt, λ, T) and (φm, Dm, C). Then, the transposition of the thermal conduction problem into a diffusion problem is called the thermo-mass diffusion analogy. Knowing the correlations quantifying heat transfer, those quantifying mass transfer can be deduced by analogy. Driss S. [19] and Rode C. [20] determined the convective moisture transfer coefficient and the surface resistance by using the Lewis relation expressed as:

hm = ht / (ρ·cp·Le^(3/4))    (6)

The exponent ¾ is recommended for inside surfaces in buildings by Sandberg P. [21].
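Relation (6) can be sketched with typical room-air properties; the density, specific heat and Lewis number below are common textbook values, assumed here rather than taken from the paper.

```python
# Convective moisture-transfer coefficient from the Lewis relation (Eq. 6):
#   h_t / h_m = rho * c_p * Le**(3/4)
# Property values for room air are typical assumptions.

rho = 1.2        # air density [kg m^-3]
c_p = 1005.0     # specific heat of air [J kg^-1 K^-1]
Le = 0.87        # Lewis number for water vapour in air (approximate)

def moisture_coefficient(h_t):
    """Mass-transfer coefficient h_m for a given heat-transfer coefficient h_t."""
    return h_t / (rho * c_p * Le ** 0.75)

h_m_inside = moisture_coefficient(3.0)   # e.g. h_t = 3 W m^-2 K^-1 (assumed)
```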

There are a number of validated models for thermal building simulations, as well as hygrothermal envelope calculations, used in building practice today. However, working combinations of these models are not yet available for the practitioner. In principle, this combination is made by coupling existing models of both types.


Figure 1 shows the notion of such a combination where balance equations for the inside space and the different

envelope parts have to be solved simultaneously.

Figure 1: Hygrothermal effects of inside heat and moisture, outside climate and transient behavior of

envelope components.

II. Position of the problem

To our knowledge, no experimental or numerical work has been done to study the transfer of moisture in a room whose walls are equipped with lamellas inclined to the horizontal plane. Based on the above and on previous work, we are interested in studying the thermal and moisture behavior of such premises. Each wall is exposed to a variable solar flux and submitted to meteorological conditions. The area and the volume of the premise are S = 30 m² and V = 300 m³. The description of the premise walls is given in paper [6].

III. The working assumptions

- The heat and mass transfer is unidirectional.
- The air is considered a perfectly transparent gas.
- The thermo-physical properties of the materials are constant.
- The air temperature inside the room is uniform.
- The energy contribution of the occupants is negligible.

IV. Formulation of the problem

The equation of the thermal balance of element i is expressed as:

(mc)i dTi/dt = Σj=1,n Ki,j (Tj − Ti) + Σj=1,n Ci,j (Tj^4 − Ti^4) + Pi    (7)

where Ti, (mc)i, Ki,j, Ci,j and Pi are, respectively, the real-time temperature (K), the heat capacity (J.K-1), the conductive and/or convective coupling coefficient between nodes i and j (W.K-1), the radiative coupling coefficient between nodes i and j (W.K-4) and the solar flux (W) absorbed at time t by node i. The equation of the moisture balance of element i [11, 13] is:

dωi/dt = Σj=1,n wi,j (ωj − ωi)    (8)

where ωi is the real-time humidity (%) of element i and wi,j is the diffusion and/or mass convective coefficient between nodes i and j (s-1). Inside humidity generation by internal moisture sources and moisture supply or removal by ventilation and air-conditioning systems are neglected.

V. Boundary conditions

A. Meteorological conditions

As far as meteorological data are concerned, real data can be used, or general equations fitted to experimental data of temperature (10) and relative humidity (11). The mean values of temperature and humidity can be expressed as cosine functions. These functions, which incorporate parameters such as the minimum and maximum values, are respectively expressed as:

T = TA + TB·cos(2πt/P)    (10)


HR = HA + HB·cos(2πt/P)    (11)

where P is the period and:

TA = (Tmin + Tmax)/2 and TB = (Tmin − Tmax)/2

HA = (HRmin + HRmax)/2 and HB = (HRmin − HRmax)/2
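Equations (10) and (11) can be sketched as follows; the daily extremes and the 24 h period are illustrative assumptions. With TB = (Tmin − Tmax)/2, the fit starts at the daily minimum at t = 0.

```python
import math

T_min, T_max = 10.0, 20.0      # assumed daily temperature extremes [degC]
HR_min, HR_max = 40.0, 90.0    # assumed daily relative-humidity extremes [%]
P = 24.0                       # period [h]

TA = (T_min + T_max) / 2       # mean value
TB = (T_min - T_max) / 2       # signed half-amplitude
HA = (HR_min + HR_max) / 2
HB = (HR_min - HR_max) / 2

def outside_T(t):
    """Outside air temperature at time t [h], Eq. (10)."""
    return TA + TB * math.cos(2 * math.pi * t / P)

def outside_HR(t):
    """Outside relative humidity at time t [h], Eq. (11)."""
    return HA + HB * math.cos(2 * math.pi * t / P)
```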

B. Thermal boundary conditions

The convective heat transfer coefficient reflecting the exchange between the outer walls and the ambient air is assumed to be uniform; we have taken the values:

- 12 W.m-2.K-1 for the vertical faces,
- 14 W.m-2.K-1 for the horizontal face.

Global heat transfer coefficient inside the alveoli:
We have opted for the correlation including convection and radiation, determined experimentally by Boukadida N. and Vullierme J. J. [4]:

ht = β·ΔT^0.25    (12)

where β is a coefficient which depends on the heat transfer direction, the angle of inclination and the emissivity of the lamella faces (low or high emissivity). It is obtained for an angle of 60° and takes the value 2.950 in the spending direction and 1.388 in the insulating direction.

Diode effect coefficient (Ed)
It is defined as the ratio between the time averages of the convective heat transfer coefficient during the day time (spending direction) and the nocturnal period (insulating direction):

Ed = hts / hti    (13)

Coefficient of heat transfer between the inside faces and the air of the premises:

In order to characterize the convective heat transfer between the inside faces and the air, we used the classic average correlation:

Nu = A·(Gr·Pr)^B (14)

With: A = 0.11, B = 0.33 for the vertical walls; A = 0.27, B = 0.25 for the roof; A = 0.14, B = 0.33 for the floor.

The Grashof and Nusselt numbers are respectively defined by:

Gr = g·β·(ΔT/2)·L³/γm²  and  Nu = h·L/λm (15)

where L is the width of the roof and floor, and L = H for the vertical walls; the air properties are evaluated at the mean temperature Tm.
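A sketch of correlation (14)-(15) follows; the air properties (β, γ, λ, Pr) are generic room-temperature values, not taken from the paper:

```python
def h_inside(face, delta_T, L, g=9.81, beta=3.4e-3, gamma=1.6e-5,
             lam=0.026, Pr=0.71):
    """Convective coefficient from Nu = A*(Gr*Pr)**B (eq. 14), with
    Gr = g*beta*(delta_T/2)*L**3/gamma**2 and Nu = h*L/lam (eq. 15)."""
    A, B = {"wall": (0.11, 0.33),
            "roof": (0.27, 0.25),
            "floor": (0.14, 0.33)}[face]
    Gr = g * beta * (delta_T / 2.0) * L**3 / gamma**2
    Nu = A * (Gr * Pr) ** B
    return Nu * lam / L

h_wall = h_inside("wall", 4.0, 2.5)   # a few W m-2 K-1, as expected indoors
```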

C. Moisture boundary conditions

The outside and inside mass convection coefficients me

h and mi

h are assumed to be related by the Lewis’

relation (5).

VI. Numerical methods

The numerical method used is the nodal method, the system is divided into several elements, each one is

represented by a node placed at its center and affected by the average temperature, relative humidity and

specific heat capacity. To limit the number of nodes, we used the method of fictitious node to transcribe the

exchange surface. The model is divided into 44 nodes. Each wall contains 7 nodes (4 nodes for the outer

wall and 3 nodes for the inner wall).Outside and inside air are respectively represented by one node.
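The nodal discretization described above can be sketched as a lumped-capacitance update; the chain of nodes, conductances G and coefficients below are hypothetical, not the paper's 44-node model:

```python
def step(T, C, G, dt, T_out, T_in, h_out, h_in, A):
    """One explicit time step for a chain of wall nodes: T[0] exchanges
    with outside air and T[-1] with inside air through convective
    coefficients h_out, h_in over an area A; neighbours exchange through
    conductances G[i] (all values hypothetical)."""
    n = len(T)
    Q = [0.0] * n
    Q[0] += h_out * A * (T_out - T[0])
    Q[-1] += h_in * A * (T_in - T[-1])
    for i in range(n - 1):
        q = G[i] * (T[i + 1] - T[i])   # flux from node i+1 towards node i
        Q[i] += q
        Q[i + 1] -= q
    return [T[i] + dt * Q[i] / C[i] for i in range(n)]

# A 3-node wall initially at 10 degC warming between 20 degC air volumes
T1 = step([10.0, 10.0, 10.0], [1e3] * 3, [1.0, 1.0], 1.0,
          20.0, 20.0, 5.0, 5.0, 1.0)
```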

VII. Results and interpretations

A. Time evolution of outside and inside of air temperature and relative humidity during the summer and

winter seasons

In view of the different figures (2-6), comparing the different profiles shows that, during the nocturnal period, the temporal variations of relative humidity and temperature are in the same direction, whereas they are in opposite directions during the daytime.


Figs. 2a and 2b show the temperature and relative humidity of outside and inside air during the winter season (December, January and February) and the summer season (June, July, August and September). The simulation results show that the inside temperature in winter can reach 14°C during the nocturnal period and as high as 20°C in the daytime; this increase is mainly due to the solar flux variation. Whatever the season, the minimum humidity is reached in the daytime and the maximum in the nocturnal period. The inside relative humidity is highest in the winter season (59% to 84%) (fig. 2a) and, conversely, minimum (fig. 2b) in the summer season (30% to 65%), where the temperature is in the range of 32°C to 37°C. This is mainly due to the fact that vapor flows from the inside (high vapor pressure) to the outside surface (low vapor pressure).

Figure 2a: Winter season Figure 2b: Summer season

Fig.2 Time evolution of relative humidity and temperature of air during the winter and summer

seasons.

B. Time evolution of temperature and relative humidity of inside faces during the summer and winter

seasons

B.1 Case of North, South and Roof faces

Figs. 3a and 3b illustrate that in the winter season the surface temperature on the South face (16°C to 25°C) (fig. 3a) is higher compared to those of the North and Roof faces. On the contrary, the relative humidity on the South face (52% to 80%) is minimum, while it is maximum on the North face (62% to 85%) and the Roof (60% to 84%). In the summer season (fig. 3b), the temperature of the Roof is higher (32.5°C to 41°C) compared to those of the North and South faces, while its relative humidity is minimum. Whatever the face, we observe that the relative humidity is lowest when the temperature is highest. Around midday and during the winter season, the difference in amplitude of temperature and relative humidity between the faces (North, South and Roof) is large compared to the summer season.

Figure 3a: Winter season Figure 3b: Summer season

Figure 3 Time evolution of temperature and relative humidity of inside faces during the winter and summer seasons (North, South and Roof).

B.2 Case of East and West faces

For the East and West faces, figs. 4a and 4b display the time evolution of temperature and relative humidity of the inside faces during the summer and winter seasons. These results are almost similar and consistent with those of M. Doya et al. [7]; the temperature profile is related to that of the solar flux on each face. For the winter season (fig. 4a), the results show that near midday the relative humidity can reach a minimum value of 53% on the East side and 56% on the West side, which respectively correspond to temperatures of 19.5°C and 17.8°C; we also notice that the temperature gradually increases on the East and West sides to 20.7°C and 23.7°C respectively, corresponding to relative humidities of 62% and 58%. During the nocturnal period, the temperature reaches a minimum of


(16°C) and the relative humidity reaches a maximum (80%). The increase in temperature depends on the solar flux density on the East and West sides [6], hence the decrease of the relative humidity. For the summer season, the simulation results show that at midday, on the East side (fig. 4b), the temperature and relative humidity can reach 39.76°C and 32.4% respectively, while on the West side they are estimated to reach 37.9°C and 34% respectively. At 16:00 the difference between the East and West sides is estimated to reach 3.6°C in temperature and 2% in relative humidity, which is lower compared to the winter season; this difference is due to the diode effect.

Figure 4a: Winter season Figure 4b: Summer season

Figure 4 Time evolution of temperature and relative humidity of inside faces during the winter and summer seasons (East and West).

C. Annual evolution of relative humidity and temperature of air inside the premises

In the case with the alveolar structure, fig. 5 depicts the annual evolution of the average relative humidity and temperature of the inside air. The average relative humidity varies between 43% and 77%, while the average temperature varies between 15.6°C and 35°C. The difference in relative humidity and temperature between the outside and inside is estimated at 5% (fig. 5). Fig. 6 shows the influence of the diode effect on the annual evolution of relative humidity and temperature of the inside air. We notice that during the spring season the average relative humidity in the case with the alveolar structure is about 59%, which corresponds to an average temperature of 23.7°C, whereas the average relative humidity in the case without the alveolar structure (fig. 6) is estimated at 64%, corresponding to a temperature of 19°C. During the cold season the average temperature and relative humidity can reach 17°C and 77% respectively in the case with the alveolar structure, whereas without the alveolar structure they can reach 14°C and 81% respectively (fig. 6). We can conclude that the alveolar structure allows not only maximizing the temperature of the inside air during the cold and spring seasons but also limiting the penetration of moisture into the building (as shown in fig. 6).

Figure 5: Annual evolution of relative humidity and temperature of air inside and outside of the premise. Figure 6: Effect of the alveolar structure on the relative humidity and temperature of air inside the premise.

VIII. Conclusion

The economic crisis has raised the problem of saving energy in buildings; for that reason, taking the climatic aspect into consideration is needed to assess the environmental conditions inside a building. The results of this work show that:

- The influence of climatic conditions on the building's internal behavior shows that the maximum moisture accumulation is observed in the winter season and, conversely, the minimum in the summer season. This is mainly due to the fact that vapor flows from the inside (high vapor pressure) to the outside surface (low vapor pressure),


- The impact of the orientation of the vertical facades on the temperature and relative humidity of the inside air proves that the increase in temperature depends on the solar flux density on the faces [6], and therefore a decrease of the relative humidity occurs,

- The inclined alveolar structure can limit the level of relative humidity, especially during the spring and winter seasons.

Through all these results, we can infer that the orientation and the alveolar structure make it possible to save energy.

References

[1] Seki N., Fukusako S., Yamaguchi A. (1983). An experimental study of free convective heat transfer in a parallelogrammic enclosure, ASME Journal of Heat Transfer 105, pp. 433-439.
[2] Bairi A. (1984). Contribution to the experimental study of natural convection in closed cavities of parallelogrammic section, Thesis N° 199, University of Poitiers, France.
[3] Zugari M.R., Vullierme J.J. (1993). Amelioration of the thermal performances of a solar cell by the use of an alveolar structure, Entropy, n° 176, pp. 25-30.
[4] Boukadida N., Vullierme J.J. (1988). Experimental study of the performances of a structure with thermal diode effect. General review of thermal Science, 324, pp. 645-651.
[5] Boukadida N., Ben Amor S., Fathallah R., Guedri L. (2008). Contribution to the study of heat transfer in a room with a variable insulation structure. General review of Renewable Energy CISM'08, Oum El Bouaghi, pp. 79-88.
[6] Lajimi N., Boukadida N. (2013). Thermal behavior of premises equipped with different alveolar structures, Thermal Science, pp. 160-173, doi: 10.2298/TSCI130204160L.
[7] Doya M. et al. (2012). Experimental measurement of cool facades performance in a dense urban environment, Energy and Buildings 55, pp. 42-50.
[8] Woloszyn M., Rode C. (2008). Tools for performance simulation of heat, air and moisture conditions of whole buildings. Building and Simulation journal, pp. 5-24.
[9] Lenz K. (2006). CE3 - Two real exposure rooms at FhG. Results of the complete Common Exercise 3. Publication A41-T1-D-06-1. Presentation for IEA Annex 41 meeting, Kyoto, Japan.
[10] Judkoff R., Neymark J. (1995). Building energy simulation test (BESTEST) and diagnostic method. NREL/TP-472-6231. Golden, Colo.: National Renewable Energy Laboratory, USA.
[11] Rode C. et al. (2006). Moisture buffering of building materials, project n° 04023, ISSN 1601-2917. Technical University of Denmark.
[12] Rode C., Peuhkuri R., Woloszyn M. (2006). Simulation tests in whole building heat and moisture transfer. Paper presented at the International Building Physics Conference, Montreal, Canada.
[13] Arfvidsson J. (1999). Moisture penetration for periodically varying relative humidity at the boundary. Acta Physica Aedificiorum, Vol. 2.
[14] Künzel H.M., Holm A., Zirkelbach D., Karagiozis A.N. (2005). Simulation of inside temperature and humidity conditions including hygrothermal interactions with the building envelope. Solar Energy 78, pp. 554-561.
[15] Fitsu T. (2008). Whole building heat and moisture analysis. A thesis in the Department of Building, Civil and Environmental Engineering.
[16] Wyss U., Sauret H. (2007). Indicateurs de confort dans la technique de la voûte nubienne.
[17] Milos J., Robert C. (2012). Effect of moisture content on heat and moisture transport and storage properties of thermal insulation materials, Energy and Buildings 53, pp. 39-46.
[18] Patrick R. et al. (2007). Modeling of uncontrolled inside humidity for HAM simulations of residential buildings. Proceedings of the IX International Conference on the Performance of Whole Buildings, ASHRAE.
[19] Driss S. (2008). Analysis and physical characterization of hygrothermal building materials. Experimental approach and numerical modeling, Thesis ISAL-0067.
[20] Rode C., Grau K., Mitamura T. (2001). Models and experiments for hygrothermal conditions of the envelope and inside air of buildings. In: Proceedings-CD Buildings VIII, ASHRAE, Atlanta.
[21] Sandberg P.I. (1973). Building component moisture balance in the natural climate. Department of Building Technology, Report 43.

Nomenclature

Tm  Average temperature, Tm = (Tc + Tf)/2 (°C)
Tf  Cold wall temperature (°C)
Tc  Hot wall temperature (°C)
h   Heat transfer coefficient (W m-2 K-1)
H   Height of the cavity vertical walls (m)
L   Length for the floor and roof (m)
HR  Relative humidity (%)
T   Temperature (°C)

Subscripts
in  Inside
out Outside
r   Roof
n   North
s   South
e   East
w   West

Greek symbols
α   Angle of inclination
γ   Kinematic viscosity of air (m² s-1)
β   Dilatation coefficient (K-1)
g   Acceleration of gravity (m s-2)
λ   Thermal conductivity of air (W m-1 K-1)
μ   Dynamic viscosity of air (kg m-1 s-1)


IJETCAS 14-509; © 2014, IJETCAS All Rights Reserved Page 29


Influence of notch parameters on fracture behavior of notched component

M. Moussaoui 1, S. Meziani 2

1 Mechanical Engineering Department, University Ziane Achour, Djelfa 17000 - ALGERIA
2 Laboratory of Mechanics, University Constantine 1, Campus Chaab Erssas, Constantine 25000 - ALGERIA

Abstract: In the present study, the influence of the variation of notch parameters on the notch stress intensity factor KI is studied using a CT specimen made of construction steel. A semi-elliptical notch has been modeled and investigated and is applied to a finite element model. The specimen is subjected to a uniform uniaxial tensile loading at its two ends under perfect elastic-plastic behavior. The volumetric method and the Irwin models are compared using the finite element method to determine the effective distance, effective stress and relative stress gradient, which are the fundamental elements of the volumetric method. The changes made to the notch parameters affect the results of the stress intensity factor, and the outcomes obtained show that an increase in the size of the minor axis reduces the amplitude of the elastic-plastic stresses and effective stresses. In lengthy notches, the Irwin model remains constant with very little disturbance of outcomes.

Keywords: Notch; effective distance; notch stress intensity factor; effective stress; relative gradient; Irwin

I. Introduction

The role of stress concentration was first highlighted by Inglis (1913) [9], who gave a stress concentration factor for an elliptical defect, and later by Neuber (1958) [13]. The fracture phenomenon is created by these defects if the fracture parameter reaches its critical value, and it is observed at any geometric discontinuity. These kinds of failures take place in areas called notches. The notch geometry and other notch characteristics have a strong influence on fracture behavior. The notch effect results in the modification of the stress distribution owed to the presence of a notch which changes the force flux. Near the notch tip the lines of force are relatively close together, and this leads to a concentration of local stress which is at a maximum at the notch tip.

The fracture of cracked structures is dominated by the near-tip stress field; it is characterized by the stress intensity factors, which describe the singular stress field ahead of a crack tip and govern the fracture of a specimen when a critical stress intensity factor is reached.

Nevertheless, the stress distribution at a notch tip is governed by the notch stress intensity factor (NSIF), which

is the basis of Notch Fracture Mechanics [15] for which a crack is a simple case of a notch with a notch radius

and notch angle equal to zero.

Notch Fracture Mechanics is associated with the volumetric method [21, 23], which postulates that fracture requires a physical volume; in this volume acts an average fracture parameter in terms of stress, strain or strain energy density.

Several studies are mainly based on the volumetric method and focus on notch effects; Allouti et al. have addressed an analysis of these effects on the stress distribution [3]. Effective stresses are determined by two methods, the method of hot surface stresses (HS) and the volumetric method (VM). The model used for the test is a thin pipe of ductile material, where plastic relaxation induces a maximum redistribution of stresses. The HS method is obtained by a linear extrapolation of the stress distribution for longitudinal or transversal surface defects in pipes under pressure; this interpolation uses discrete points where the stress concentration effect is dominant. The results led to similar effective stress values. Pluvinage et al. analyzed the stress distribution at a notch root; they show a pseudo-singular stress distribution governed by the notch stress intensity factor (NSIF), Kρ. The results of this work and other studies indicate that this approach gives a good description of the notch effect [24]. Under cyclic stress, a fatigue phenomenon is created and damage appears near the notch tips. The application of the volumetric approach has been extended to fatigue problems [2] and has been used to analyze fatigue parameters of notched specimens. Damage due to fatigue depends not only on the peak stress at the notch root but also on the areas where the damage caused to the material accumulates [17]. The volumetric approach is classified as a macro-mechanical model [17, 20]; its application depends on a number of considerations such as the elastoplastic stress distribution near the notch root, the notch geometry, loading, boundary conditions and the effect of plasticity and stress relaxation near the notch. According to this study, a new concept contributes to fatigue life assessment, based on the volumetric approach and YAO's concept (stress field intensity, SFI) [21, 22].

The objective of this work is to investigate the effect of short and lengthy notches on the fracture behavior of plain specimens. For this purpose, an elliptical notch is applied to a CT specimen, under plane stress and perfect elastic-plastic behavior, in construction steel under mode-I loading conditions, using two methods, namely the volumetric method and the Irwin models [10,11], applied to a notched specimen. The elliptical notch geometry is characterized by two

Page 50: IJETCAS June-August Issue 9

Moussaoui et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014, pp. 29-

37

IJETCAS 14-509; © 2014, IJETCAS All Rights Reserved Page 30

dimensional parameters, the minor axis '2b' and the major axis '2a' (Fig. 1). The results have been extracted from the elastic-plastic finite element code of the Castem software to calculate the notch stress intensity factor, compared with the classical Irwin formula, and to analyze the stress field that reigns at the notch root.

Fig. 1 Parameters of Semi-Elliptical notch

II. Finite element models of elliptical and circular notches

Fig. 2 shows the finite element models for the elliptical and circular notches used in the elastic and elastic-plastic analyses. The models have dimensions 20x20 mm with two given values of the major axis: 0.5 mm for the short notch and 6 mm for the long notch; the specimen is subjected to a uniform uniaxial tension loading of 125 MPa at its two ends.

The material has the following mechanical properties: Young's modulus E = 230x10³ MPa, yield strength 670 MPa, Poisson's ratio 0.293.

An appropriate refinement of the discretization was applied around the notch tip using six-node triangular elements.

Figure 2 Finite element models: Fig. 2a elliptical notch, Fig. 2b circular notch

III. Analysis of Elastic-Plastic Stress Field in Notched Bodies

The volumetric method has been classified as a critical distance method (TCD) [1, 4]. The main objective of the volumetric method is to calculate the ''effective distance'' and ''effective stress'' via the stress distribution extracted at the notch root. The volumetric method takes the damage accumulation in the local damaged zone into consideration [2]. Investigations into fatigue failure mechanisms have shown that the accumulation of fatigue damage depends not only on the peak stress at the notch root but also on the average stress in the damage zone and the relative stress gradient [17].

In the volumetric method, the aforementioned average stress is named the ''effective stress'' and it is calculated using an effective distance. Traditionally, the effective distance has been obtained using the volumetric bi-logarithmic diagram (Fig. 3). In fact, the stress distribution near the notch root versus the ligament behaves linearly in a certain zone of the bi-logarithmic diagram, like cracks [5], and the starting point of this zone is considered as the effective distance; this distance is considered the boundary of the stress relaxation. It can be found by means of the minimum point of the relative stress gradient. The relative stress gradient for the volumetric method can be written as:

χ(x) = (1/σyy(x))·(∂σyy(x)/∂x) (1)


where χ(x) and σyy(x) are the relative stress gradient and the maximum principal stress, or crack opening stress, respectively.

Fig. 3. A typical illustration of the elastic-plastic stress along the notch ligament and the notch stress intensity virtual crack concept, including the relative stress gradient, which indicates the effective distance position

The effective stress for fracture is then considered as the average value of the stress distribution over the effective distance. The bi-logarithmic elastic-plastic stress distribution (Fig. 3) along the ligament exhibits three distinct zones which can be easily distinguished [18]. The elastic-plastic stress primarily increases and attains a peak value (zone I), then it gradually drops in the elastic-plastic regime (zone II). Zone III represents linear behaviour in the bi-logarithmic diagram. It has been proved by examination of fracture initiation sites that the effective distance corresponds to the beginning of zone III, which is in fact an inflexion point on this bi-logarithmic stress distribution [7]. A graphical method based on the relative stress gradient χ(x) associates the effective distance with the minimum of χ(x).

A. Weight function

Weight function deals with stress contribution in elaborated damage accumulation zone; the weight function

explicitly depends on stress, stress gradient and distance from notch root in the elaborated zone and implicitly

depends on notch geometry, loading type, boundary conditions and material properties [1]. Weight function is

essential to distinguish between Stress Field Intensity (SFI) weight function and other weight function concepts

which are utilized in Fracture Mechanics based on Green’s function for boundary problem [8, 12]. The weight

function should satisfy the following conditions:

a) 0 ≤ Φ(r, χ) ≤ 1
b) Φ(0, χ(0)) = 1
c) Φ(rmax, χ(rmax)) = 1

Three weighting functions have been proposed and are available in the volumetric method [1]:

- Unit weight function: Φ(x, χ) = 1
- Delta weight function: Φ(x, χ) = δ(x - Xeff)
- Gradient weight function: Φ(x, χ) = 1 - χ·x

An analytical expression allowing the modelling of the discrete points of the stress distribution is given by the polynomial interpolation [1]:

σyy(x) = Σ (i = 0 to n) ai·x^i (2)

The relative stress gradient can be derived from Eq. (2) as:

χ(x) = (1/σyy(x))·(dσyy(x)/dx) = [Σ (i = 1 to n) i·ai·x^(i-1)] / [Σ (i = 0 to n) ai·x^i] (3)

According to the polynomial formulation, the effective distance, which corresponds to the minimum point of the relative stress gradient, can be obtained from:

dχ/dx = (1/σyy(x))·(d²σyy(x)/dx²) - [(1/σyy(x))·(dσyy(x)/dx)]² = 0 (4)
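A sketch of eqs. (2)-(4): given polynomial coefficients a_i fitted to the opening stress along the ligament, the effective distance is located at the minimum of the relative stress gradient. The coefficients below are illustrative, not the paper's FE results:

```python
def sigma_yy(x, a):
    # eq. (2): polynomial interpolation of the opening stress
    return sum(c * x**i for i, c in enumerate(a))

def chi(x, a):
    # eq. (3): relative stress gradient (1/sigma) * d(sigma)/dx
    ds = sum(i * c * x**(i - 1) for i, c in enumerate(a) if i > 0)
    return ds / sigma_yy(x, a)

def x_eff(a, xs):
    # eq. (4), solved discretely: X_eff minimizes chi(x) over the grid
    return min(xs, key=lambda x: chi(x, a))

a = [500.0, -300.0, 80.0]             # hypothetical decreasing profile (MPa, mm)
xs = [0.01 * k for k in range(1, 200)]
Xeff = x_eff(a, xs)
```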

B. Effective stress

The average stress value within the fracture process zone is then obtained by a line method [8], which consists in averaging the opening stress distribution over the effective distance. One obtains the second fracture criterion


parameter, called the effective stress σef. However, it is necessary to take into account the stress gradient due to the loading mode and specimen geometry. This is done by multiplying the stress distribution by a weight function Φ(x, χ), where x is the distance from the notch tip and χ the relative stress gradient. The effective stress is finally defined as the average of the weighted stress inside the fracture process zone [25,26]:

σeff = (1/Xeff)·∫(0 to Xeff) σyy(x)·Φ(x, χ) dx (5)

The volumetric method effective distance can be numerically solved using the presented characteristic equation. With substitution of Eq. (2) into (5), the calculated effective stress can be rewritten, including the unknown weight function, as follows:

σeff = (1/Xeff)·∫(0 to Xeff) [Σ (i = 0 to n) ai·x^i]·Φ(x, χ) dx (6)

The mentioned polynomial stress distribution can be utilized to calculate the effective stress for all proposed weight functions. Eq. (6) can be rewritten using the unit weight function as follows [1]:

σeff = (1/Xeff)·Σ (i = 0 to n) [ai/(i+1)]·Xeff^(i+1) (7)

By changing the weight function and replacing the unit weight function by the delta function δ(x - Xeff), the new relationship of the effective stress becomes:

σeff = σyy(Xeff) (8)

Including the weight function which uses the relative stress gradient, taken as Φ(x, χ) = 1 - χ·x, the effective stress will be [1]:

σeff = (1/Xeff)·Σ (i = 0 to n) [ai/(i+1)]·Xeff^(i+1) - (χ/Xeff)·Σ (i = 0 to n) [ai/(i+2)]·Xeff^(i+2) (9)

In the bi-logarithmic diagram, at the limit of zone II and x = Xeff, the notch stress intensity factor is expressed as a function of Xeff and σeff [6,25]:

Kρ = σeff·√(2π·Xeff) (10)

where Kρ, σeff and Xeff are the notch stress intensity factor, the effective stress and the effective distance, respectively.
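As a sketch of eqs. (7) and (10), the unit-weight effective stress and the resulting notch stress intensity factor can be computed from the polynomial coefficients; a_i and X_eff below are illustrative placeholders:

```python
import math

def sigma_eff_unit(a, x_eff):
    # eq. (7): sigma_eff = (1/X_eff) * sum_i a_i * X_eff**(i+1) / (i+1)
    return sum(c * x_eff**(i + 1) / (i + 1) for i, c in enumerate(a)) / x_eff

def nsif(sig_eff, x_eff):
    # eq. (10): K_rho = sigma_eff * sqrt(2*pi*X_eff)
    return sig_eff * math.sqrt(2.0 * math.pi * x_eff)

a = [500.0, -300.0, 80.0]   # hypothetical stress polynomial (MPa, mm)
Xeff = 0.4                  # hypothetical effective distance (mm)
s_eff = sigma_eff_unit(a, Xeff)
K_rho = nsif(s_eff, Xeff)   # in MPa*mm**0.5
```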

C. Irwin's models

The elastic stress distribution in an elliptical defect was determined by Inglis [9]. The stress intensity factor (SIF) of a crack can be calculated without analyzing the singular stress field near the notch tip if the crack is replaced by an elliptical notch of the same size. The relationship between KI, σmax and ρ can be written as [11,14]:

KI = lim (ρ→0) (σmax/2)·√(πρ) (11)

where KI, σmax and ρ = b²/a are the stress intensity factor in opening mode, the maximum elastic stress and the curvature radius of the elliptical notch, respectively.

In the corrected approach, Irwin argued that the presence of a crack-tip plastic zone makes the crack behave as if it were longer than its physical size, and the stress distribution is equivalent to that of an elastic crack of length (a + rE), that is [19]:

Keff = σ·√(π·(a + rE)) (12)

where σ, a and rE are the applied tension, the major axis of the ellipse and the plastic zone size, respectively.
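Both Irwin-type estimates can be sketched directly; σ, σmax, a, b and r_E below are illustrative values:

```python
import math

def k_notch(sigma_max, a, b):
    # eq. (11): K ~ (sigma_max/2) * sqrt(pi*rho), with rho = b**2/a
    rho = b**2 / a
    return 0.5 * sigma_max * math.sqrt(math.pi * rho)

def k_irwin_corrected(sigma, a, r_E):
    # eq. (12): K_eff = sigma * sqrt(pi*(a + r_E))
    return sigma * math.sqrt(math.pi * (a + r_E))

K1 = k_notch(300.0, 2.0, 1.0)            # elliptical-notch estimate
K2 = k_irwin_corrected(125.0, 6.0, 0.3)  # plastic-zone corrected crack
```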

IV. Analysis of Elastic Stress Field in Notched Bodies

Fig. 4 shows an elastic stress distribution analysis of the short and lengthy notches, having a semi-elliptical shape with b equal to 0.1, 1 and 5 mm; the major axis takes the value 0.5 mm for the short notch and 6 mm for the lengthy notch. It shows a comparison of analytically obtained results with those obtained by the finite element method. Various formalisms representing the stress distribution can be found in the literature and are presented as follows [16]:

Usami (1985):

σyy(x) = (σmax/3)·[1 + (1/2)·(1 + 2x/ρ)^(-2) + (3/2)·(1 + 2x/ρ)^(-4)]

Chen and Pen (1978):


σyy(x) = σmax·√(ρ/(ρ + 8x))

Neuber and Weiss (1962):

σyy(x) = σmax·√(ρ/(ρ + 4x))

Kujawski (1991):

σyy(x) = (σmax/2)·f·[(1 + 2x/ρ)^(-1/2) + (1 + 2x/ρ)^(-3/2)]

with f = 1 for x/ρ ≤ 0.2 and, for x/ρ > 0.2, f a tangent-type correction depending on the elastic stress concentration factor kt (with the constants 0.2 and 2.8 of the original formulation).
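As a sketch, the Neuber-Weiss and Chen-Pen distributions (in the square-root forms commonly quoted in the notch literature; the σmax and ρ values are illustrative) can be compared numerically:

```python
import math

def neuber_weiss(x, sigma_max, rho):
    # sigma_yy = sigma_max * sqrt(rho / (rho + 4x))
    return sigma_max * math.sqrt(rho / (rho + 4.0 * x))

def chen_pen(x, sigma_max, rho):
    # sigma_yy = sigma_max * sqrt(rho / (rho + 8x)): faster decay
    return sigma_max * math.sqrt(rho / (rho + 8.0 * x))

s_max, rho = 375.0, 0.02   # illustrative values: MPa, mm
profile = [(x, neuber_weiss(x, s_max, rho), chen_pen(x, s_max, rho))
           for x in (0.0, 0.05, 0.2)]
```

Both expressions equal σmax at the notch tip (x = 0) and decay along the ligament, the Chen-Pen form faster, mirroring the trends discussed for Fig. 4.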

The results are similar near the end of the notch, where the stress concentration is high (Fig. 4a, Fig. 4b and Fig. 4c). However, at remote distances from the notch bottom, i.e. where the stress gradient is lower, the results obtained by the finite element method decrease rapidly towards lower values. It is also known that flattened notches behave in the same way as cracks and generate a higher stress concentration; consequently, significant elastic stresses reach maximum values. In the lengthy notches (a = 6 mm, b = 1 mm and 5 mm), away from the notch root, stresses are reduced to lower values; the results obtained by Chen and Pen, Neuber and Kujawski then decrease regularly and approach each other (Fig. 4b and Fig. 4c). If the semi-elliptical notch tends towards a semi-circular notch, the maximum elastic-plastic stresses tend to have lower values. For short notch configurations, the Chen and Pen and Neuber results converge with those obtained by finite elements, while for the lengthy notch with b = 1 mm, Neuber and Kujawski obtain better results and the extent of the high-stress region becomes smaller (about 1 mm) compared to the lengthy notch with b = 5 mm, where it is less than 2 mm.

Fig. 4 Elastic stress distributions: a) short notch, a = 0.5 mm and b = 0.1 mm; b) lengthy notch, a = 6 mm and b = 1 mm; c) lengthy notch, a = 6 mm and b = 5 mm

On the other hand, in the case of deep notches having an elastic behaviour, increasing the size of the minor axis (b) reduces the effect of stress concentration. The elastic stress distribution gives its maximum value at the notch bottom, unlike the maximum of the elastic-plastic stress, which is located away from the notch tip.


V. Analyzing notch stress intensity factor

The phenomenon of rupture must indeed be considered in its physical dimension: it requires a certain volume in which the failure process develops [15]. In this volume, a zone is governed by the notch stress intensity factor, which depends simultaneously on the effective distance, the effective stress, the relative stress gradient and the weight function. The results obtained are compared with those calculated according to the two Irwin models [10,14].

Figures 5a and 5b show the notch stress intensity factor evolution versus the ratio b/a calculated by using the

two Irwin models and the three weight functions: unit, delta and gradient functions.
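In the volumetric-method literature these quantities are usually combined as follows (a standard form, e.g. in Pluvinage's work [15], quoted here for orientation since this paper's printed equations were not recoverable):

$$K_{\rho} = \sigma_{\mathrm{eff}}\sqrt{2\pi X_{\mathrm{eff}}}, \qquad \sigma_{\mathrm{eff}} = \frac{1}{X_{\mathrm{eff}}}\int_{0}^{X_{\mathrm{eff}}} \sigma_{yy}(x)\,\Phi(x)\,dx$$

where $K_{\rho}$ is the notch stress intensity factor, $X_{\mathrm{eff}}$ the effective distance, $\sigma_{yy}(x)$ the opening stress ahead of the notch root and $\Phi(x)$ the weight function (unit, delta or gradient).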

Fig. 5 Evolution of the notch stress intensity factor for: a) short notch, a = 0.5 mm; b) lengthy notch, a = 6 mm

The changes made to the notch parameters (a, b, …) affect the evolution of the stress intensity factor versus the ratio b/a. The figures show its variation for elastic-plastic behaviour (note that the volumetric method was initially applied to brittle materials; the extension of its use to ductile materials has given good approximations).

Figure 5 shows that, at low values of b/a, i.e. for an open notch, the results calculated by the volumetric method converge mutually and approach the results of the corrected Irwin model. In particular, the weight gradient function gives approximately the same results as the corrected Irwin model.

For a short notch (a = 0.5 mm) and beyond a higher ratio b/a, the results of the volumetric method diverge from the corrected Irwin model but converge towards the uncorrected Irwin model.

Owing to the significant plastic deformation (Fig. 9) that starts at the notch root in lengthy notches, the results of the volumetric method (VM) re-converge towards the uncorrected Irwin model and diverge markedly from the corrected Irwin model. The latter remains nearly constant, with very little disturbance of its outcomes; we can conclude that the volumetric method is very sensitive to variation of the notch parameters.

Fig. 6 Evolution of the notch stress intensity factor for: a) opened semi-elliptical notch; b) thinned semi-elliptical notch

If a plastic zone widens near the notch tip, the corrected Irwin model cannot detect the change that has occurred in the notch stress intensity factor (NSIF).


The increase in size of the major axis (a) and the reduction of the minor axis (b) correspond to a deeper and more flattened semi-elliptical notch, creating a region where the stress concentration is high and generating an increased stress gradient, and consequently higher stress intensity factor values. Thinned and deepened semi-elliptical notches are therefore much more dangerous than short semi-elliptical and semi-circular notches.

If the stress gradient is low (b/a high), Figures 6a and 6b show that the SIF (KI) results obtained by the VM and by the corrected Irwin model are closer, and the curves have an identical appearance in Figure 6a. The corrected Irwin model gives results converging with those of the unit weight function (Fig. 6a and Fig. 6b). For a short notch at a high stress gradient (b/a low), i.e. decreasing the major axis (a), the results obtained by the three weight functions move away from those of the corrected Irwin model and approach those of the uncorrected Irwin model (Fig. 6b).

For values of the minor axis b equal to 4 mm, the elliptical notch is classified as an open notch (Fig. 6a); this gives a rearrangement of the stress concentration distribution. The opening of the elliptical notch (increased b/a) reduces the stress concentration in the area near the notch root.

Fig. 7 Evolution of the notch stress intensity factor for a semi-circular notch

At a high stress gradient, the values obtained using the two Irwin models diverge excessively from the volumetric approach (Fig. 6a). For the semi-circular notch (Fig. 7), the SIF results calculated according to the corrected Irwin model lie between those obtained by the delta and unit weight functions for radii lower than 2 mm.

VI. Effective Stress and Maximum Elastic-Plastic Stress

The effective stress is the average value of the stress distribution over the effective distance, weighted by the relative stress gradient inside the fracture process volume. Figures 8a and 8b show the evolution of the effective stress calculated using the three weight functions (unit, delta and gradient) together with the maximum elastic-plastic stress, as a function of the ratio b/a, for the short and lengthy semi-elliptical notches, in order to determine the function which best approximates the maximum elastic-plastic stress.
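As an illustration of how an effective stress of this kind can be evaluated numerically, the sketch below averages a stress profile over an effective distance, weighted by the relative stress gradient. The stress profile, effective distance and the gradient weight function Phi(x) = 1 - x|chi(x)| are assumptions chosen for illustration, not values taken from the paper's finite element results.

```python
import numpy as np

# Hypothetical stress profile ahead of the notch root (stands in for the
# FE elastic-plastic distribution; values are invented for illustration).
x = np.linspace(1e-3, 2.0, 2000)           # distance from notch root [mm]
sigma = 400.0 * (1.0 + x / 0.2) ** -0.5    # assumed sigma_yy(x) [MPa]

x_eff = 0.25                               # assumed effective distance [mm]
mask = x <= x_eff

# Relative stress gradient chi(x) = (1/sigma) * dsigma/dx.
chi = np.gradient(sigma, x) / sigma

# Gradient weight function Phi(x) = 1 - x * |chi(x)| (one common choice).
phi = 1.0 - x * np.abs(chi)

# Effective stress: weighted average of sigma over the effective distance.
dx = x[1] - x[0]
sigma_eff = np.sum(sigma[mask] * phi[mask]) * dx / x_eff
print(f"effective stress ~ {sigma_eff:.1f} MPa")
```

With the unit weight function, phi would simply be an array of ones; the delta function instead picks out the stress at the effective distance itself.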

At low values of b/a, where the notch tends to flatten, the effective stress calculated by the VM and the maximum elastic-plastic stress increase to a peak, and then decrease regularly to low values of stress where the stress gradient is lower.

Fig. 8 Evolution of effective stress & maximum elastic-plastic stress for short notch and lengthy notch


The deep notch, which develops a high stress and induces a high stress gradient, displays an extent of low values (b/a < 0.70) (Fig. 8b) wider than that of the short notch (b/a < 1.2) (Fig. 8a), and the effective stresses calculated by the weight gradient function get closer to the maximum elastic-plastic stress. The maximum elastic-plastic stress is greater than the effective stress calculated by the volumetric approach. The opening of the elliptical notch (high b) reduces the amplitude of both the effective stresses and the maximum elastic-plastic stresses, and brings the effective stresses calculated by the different weight functions close to one another (Fig. 8b). If the minor axis b of the semi-elliptical notch increases, the approximations obtained by the three weight functions converge for both notch configurations (short and lengthy).

Fig. 9 Plastic zone at the tip of the lengthy notch according to the von Mises criterion; notch a = 6 mm

VII. Conclusion

In the present study, the volumetric method is investigated and compared with the Irwin models, using the finite element method, to explain the influence of notch parameter variation on the weight functions, the effective distance, the effective stress and the stress gradient (and consequently on the fracture behaviour). The extension of the volumetric method to an elastic-plastic stress distribution takes into account the effect of these changes to the notch parameters.

The main outcomes can be summarized as follows:
- Changing the notch parameters (a, b, …) creates stress field disturbances near the notch root and therefore affects the evaluation of the stress intensity factor.
- Lengthy notches show significant plastic deformation near the notch root. The corrected Irwin model remains nearly constant there, with very little disturbance of its outcomes, whereas the volumetric method is very sensitive to variation of the notch parameters: the volumetric method (VM) results re-converge towards the uncorrected Irwin model and diverge markedly from the corrected Irwin model.
- If the notch tends to open, the results of the volumetric method and the uncorrected Irwin model mutually converge.
- A deeper and more flattened notch creates a region where the stress concentration is high and generates an increased stress gradient. Thinned and lengthy semi-elliptical notches are much more dangerous than short semi-elliptical and semi-circular notches.
- Increasing the size of the minor axis of the elliptical notch reduces the amplitude of the elastic-plastic stresses and of the effective stresses.
- The elastic stress distribution is characterized by a maximum stress at the notch root, whereas the elastic-plastic distribution is characterized by stress relaxation there.

VIII. References
[1] Adib-Rammezani H., Jeong J. Advanced volumetric method for fatigue life prediction using stress gradient effects at notch roots. Computational Materials Science 39: 649-663, 2007.
[2] Adib H., Pluvinage G. Theoretical and numerical aspects of the volumetric approach for fatigue life prediction in notched components. International Journal of Fatigue 25: 67-76, 2003.
[3] Allouti M., Jallouf S., Schmitt C., Pluvinage G. Comparison between hot surface stress and effective stress acting at notch-like defect tip in a pressure vessel. Engineering Failure Analysis 18: 846-854, 2011.
[4] Taylor D. The Theory of Critical Distances: A New Perspective in Fracture Mechanics. Elsevier, London, UK, first edition, 2007.
[5] Boukharouba T., Tamine T., Niu L., Chehimi C., Pluvinage G. The use of notch stress intensity factor as a fatigue crack initiation parameter. Engineering Fracture Mechanics 52(3): 503-512, 1995.
[6] El Minor H., Kifaini A., Louah M., Azari Z., Pluvinage G. Fracture toughness of high strength steel using the notch stress intensity factor and volumetric approach. Structural Safety 25: 35-45, 2003.
[7] Hadj Meliani M., Matvienko Y. G., Pluvinage G. Corrosion defect assessment on pipes using limit analysis and notch fracture mechanics. Engineering Failure Analysis 18: 271-283, 2011.
[8] Green A. E., Sneddon I. N. Proceedings of the Cambridge Philosophical Society 46: 159-163, 1950.
[9] Inglis. Stress in a plate due to the presence of cracks and sharp corners. Trans. Inst. Naval Architects 55: 219-230, 1913.
[10] Lemaitre J., Chaboche J.-L. Mécanique des matériaux solides. Bordas, Paris, 1988.
[11] Livieri P., Segala F. Evaluation of stress intensity factors from elliptical notches under mixed mode loadings. Engineering Fracture Mechanics 81: 110-119, 2012.
[12] Rice J. International Journal of Solids and Structures 8: 751-758, 1972.
[13] Neuber H. Theory of Notch Stresses. Springer, Berlin, 1958.
[14] Nui L. S., Chehimi C., Pluvinage G. Stress field near a large blunted tip V-notch and application of the concept of the critical notch stress intensity factor (NSIF) to the fracture toughness of very brittle materials. Engineering Fracture Mechanics 49(3): 325-335, 1994.
[15] Pluvinage G. Fracture and Fatigue Emanating from Stress Concentrators. Université de Metz, France; Kluwer Academic Publishers, 2003.
[16] Pluvinage G. Fatigue and fracture emanating from notch; the use of the notch stress intensity factor. Nuclear Engineering and Design 185: 173-184, 1998.
[17] Qylafiku G., Azari Z., Kadi N., Gjonaj M., Pluvinage G. Application of a new model proposal for fatigue life prediction on notches and key-seats. International Journal of Fatigue 21: 753-760, 1999.
[18] Shi S. Q., Puls M. P. A simple method of estimating the maximum normal stress and plastic zone size at a shallow notch. International Journal of Pressure Vessels and Piping 64(1): 67-71, 1995.
[19] Uguz A., Martin J. W. Materials Characterization 37: 105-118, 1996.
[20] Adib H., Gilgert J., Pluvinage G. Fatigue life duration prediction for welded spots by volumetric method. International Journal of Fatigue 26: 81-94, 2004.
[21] Yao W. The prediction of fatigue behaviours by stress field intensity approach. Acta Mechanica Solida Sinica 9(4): 337-349, 1996.
[22] Yao W., Ye B., Zheng L. A verification of the assumption of anti-fatigue design. International Journal of Fatigue 23(3): 271-277, 2001.
[23] Yao W. Stress field intensity approach for predicting fatigue life. International Journal of Fatigue 15(3): 243-246, 1993.
[24] Pluvinage G., Azari Z., Kadi N., Dlouhy I., Kozak V. Effect of ferritic microstructure on local damage zone distance associated with fracture near notch. Theoretical and Applied Fracture Mechanics 31: 149-156, 1999.
[25] Vratnica M., Pluvinage G., Jodin P., Cvijovic Z., Rakin M., Burzic Z. Influence of notch radius and microstructure on the fracture behaviour of Al-Zn-Mg-Cu alloys of different purity. Materials and Design 31: 1790-1798, 2010.
[26] Zedira H., Gilgert J., Boumaza A., Jodin P., Azari Z., Pluvinage G. Fatigue life prediction of welded box structures. Strength of Materials 36(6), 2004.

International Association of Scientific Innovation and Research (IASIR)
International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS), www.iasir.net
ISSN (Print): 2279-0047, ISSN (Online): 2279-0055. IJETCAS 14-510; © 2014, IJETCAS All Rights Reserved.

Modeling Lipase Production From Co-cultures of Lactic Acid Bacteria Using Neural Networks and Support Vector Machine with Genetic Algorithm Optimization

Sita Ramyasree Uppada(1), Aditya Balu(2), Amit Kumar Gupta(2), Jayati Ray Dutta(1*)

(1) Biological Sciences Department, Birla Institute of Technology and Science, Pilani-Hyderabad Campus, Hyderabad, Andhra Pradesh, 500078, India.
(2) Department of Mechanical Engineering, Birla Institute of Technology and Science, Pilani-Hyderabad Campus, Hyderabad, Andhra Pradesh, 500078, India.

Abstract: Optimization of lipase production from co-cultures of Lactococcus lactis and Lactobacillus plantarum, and of Lactococcus lactis and Lactobacillus brevis, was carried out. The lipase production was first modeled with an Artificial Neural Network (ANN) and a Support Vector Machine (SVM), taking various physico-chemical parameters (pH, temperature, incubation period, inoculum and substrate volume) into account. The yields obtained from SVM and ANN are compared on the basis of correlation coefficient, % deviation from experimental results, computational time and generalization capability. The lipase production was then optimized using a Genetic Algorithm (GA). Experiments were performed at the process parameters obtained from the GA, and the results were validated to have a % deviation of less than 5%.

Keywords: Optimization; Microbial lipase; Co-culture; Artificial neural network; Genetic algorithm; Support vector machine.

I. Introduction
Lipases are versatile enzymes used in the food, dairy, detergent, pharmaceutical, leather, cosmetics, biosensor, and pulp and paper industries [1]. They are important biocatalysts that perform various chemical transformations, synthesizing a variety of stereo-specific esters, sugar esters, thioesters and amides [2]. Organic chemical reactions are usually catalyzed by acid/base catalysts, but because of disadvantages such as difficulty in recovering the byproduct, the intensive energy required for the reactions and difficulty in removing the catalyst, these reactions are increasingly accomplished with enzyme catalysts such as lipases. Interest in microbial lipase production has increased because of its huge industrial applications, but the availability of genetically distinct lipases with specific characteristics is still limited; there is therefore an immense need to develop stable lipases which can replace chemical catalysts. Of primary importance in any fermentation process is the optimization of the medium components [3]. Even a small increase in performance can have a significant impact on production, so process optimization is one of the most frequently used operations in biotechnology. The classical optimization process was carried out by the one-factor-at-a-time (OFAT) method, varying only a single factor while keeping the remaining factors constant [4]. This approach is not only time consuming, but also ignores the combined interactions between physico-chemical parameters [5]. These classical techniques of optimization thus form a basis for developing advanced techniques better suited to today's practical problems. Although microbial consortia of various microorganisms have been applied in many fields of biotechnology [6], their application to lipase production is yet to be explored in depth. It is well known that extracellular lipase production in microorganisms is greatly influenced by physical factors such as pH, temperature, incubation period, substrate volume and inoculum volume [7]. Therefore, considering the many industrial applications of lipase, we report here for the first time the optimization of extracellular lipase production using co-cultures of Lactococcus lactis and Lactobacillus plantarum, and of Lactococcus lactis and Lactobacillus brevis.

II. Materials and Experimentation
A. Microorganism and lipolytic activity
For the present investigation, co-cultures of Lactococcus lactis and Lactobacillus plantarum, and of Lactococcus lactis and Lactobacillus brevis, were used to produce extracellular lipase. The strains were maintained at 4 °C on LB agar slants with a composition of tryptone (10 g), yeast extract (5 g), NaCl (10 g), agar (15 g) and distilled water (1 L). For preliminary screening of lipase-producing bacteria, tributyrin agar was used. All the cultures were inoculated onto tributyrin agar plates containing peptone (5 g), beef extract (3 g), tributyrin (10 ml), agar-agar (20 g) and distilled water (1 L), incubated at 37 °C for 24 hours and observed for zone formation. A clear zone around the colonies indicated the production of lipase.

B. Enzyme assay
The lipase assay was performed spectrophotometrically using p-nitrophenyl palmitate as substrate. The assay mixture contained 2.5 ml of 420 µM p-nitrophenyl palmitate, 2.5 ml of 0.1 M Tris-HCl (pH 8.2) and 1 ml of enzyme solution, and was incubated in a water bath at 37 °C for 10 min. p-Nitrophenol was liberated from p-nitrophenyl palmitate by lipase-mediated hydrolysis, imparting a yellow color to the reaction mixture. After incubation, the absorbance was measured at 410 nm [8]. One unit (U) of lipase was defined as the amount of enzyme that liberates one micromole of p-nitrophenol per minute under the assay conditions.

C. Experimental design and lipase production
The optimum levels for extracellular lipase production by the L. lactis and L. brevis, and L. lactis and L. plantarum strains with respect to incubation period, temperature, pH, inoculum volume and substrate volume were obtained by single-factor optimization. Experiments were conducted in 250 ml Erlenmeyer flasks containing 50 ml of medium comprising peptone (0.5%), yeast extract (0.3%), NaCl (0.25%) and MgSO4 (0.05%), with olive oil as substrate, inoculated with freshly prepared bacterial suspension and incubated at 35 °C. After incubation, the cell-free supernatant was obtained by centrifugation at 7197×g for 20 minutes and the extracellular lipase activity of the fermented broth was determined. Experiments were conducted in triplicate and the results are the average of these three independent trials. Table 1 shows the chosen process parameters and their levels.

Table 1. Process parameters and levels of the experiment

Level | pH  | Temperature (°C) | Incubation period (hrs) | Inoculum volume (ml) | Substrate volume (ml)
1     | 5   | 25               | 24                      | 0.5                  | 0.5
2     | 5.5 | 30               | 36                      | 1                    | 1
3     | 6   | 35               | 48                      | 1.5                  | 1.5
4     | 6.5 | 40               | 60                      | 2                    | 2
5     | -   | -                | 72                      | -                    | -
6     | -   | -                | 84                      | -                    | -

In the next stage, ANN and SVM models are built to study the interactive effects of the five variables, i.e. pH, temperature, inoculum volume, incubation period and substrate volume.

III. Artificial Neural Network Model
Machine learning provides tools that enable a computer to recognize complex patterns and make intelligent decisions based on data. One very useful tool in the field of machine learning is the Artificial Neural Network [9]. Predictive modeling of highly complex physical laws is often done using ANNs [10]. The most basic unit of an ANN is a neuron. A neuron functions similarly to a biological neuron: it combines all the inputs given to it and transfers the result onward through an activation function. Tan sigmoid, linear and log sigmoid are the popularly used activation functions, also known as transfer functions. The tan sigmoid function is defined as follows:

$$f(t) = \frac{2}{1 + e^{-2t}} - 1 \qquad (1)$$

where t represents the input to the tan sigmoid function. The log sigmoid and linear functions are defined similarly. A group of neurons connected together in weighted form, giving rise to an output, is called a layer of neurons. In general, there are many layers in a neural network: the layer taking the actual input is called the input layer, the layer which ultimately gives the output is called the output layer, and the intermediate layers are called hidden layers. The collection of layers is known as a neural network. The numbers of neurons in the input and output layers are fixed by the model being built; the number of neurons in the hidden layer, however, is a variable. Evaluation of the weights of every layer in the neural network is known as training the network. The back-propagation training algorithm is the most popular for training neural networks and estimating the values of the weights; hence, in this paper, the back-propagation algorithm is used for training the feed-forward neural networks. The Differential Evolution (DE) algorithm was used for tuning the weights so as to determine the optimal architecture of the neural networks. The neural network toolbox of the MATLAB software package is used for training, testing and validation of the given data.
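To make the transfer-function idea concrete, the sketch below implements a single tan sigmoid neuron in NumPy. This is a minimal illustration, not the MATLAB toolbox code used in the paper; the weights, bias and input values are invented.

```python
import numpy as np

def tansig(t):
    """Tan sigmoid transfer function, Eq. (1); mathematically equal to tanh(t)."""
    return 2.0 / (1.0 + np.exp(-2.0 * t)) - 1.0

def neuron(x, w, b):
    """A single neuron: weighted sum of inputs plus bias, passed through tansig."""
    return tansig(np.dot(w, x) + b)

# Five inputs, mirroring the five process parameters (values are arbitrary).
x = np.array([5.5, 30.0, 48.0, 1.0, 1.0])    # pH, temp, hours, inoculum, substrate
w = np.array([0.1, -0.02, 0.01, 0.3, -0.2])  # invented weights
print(neuron(x, w, b=0.05))                  # a value in (-1, 1)
```

Stacking layers of such neurons, and adjusting the weights w and biases b by back-propagation, gives the feed-forward network described above.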

IV. Support Vector Machine Model
The Support Vector Machine (SVM) is an important supervised learning method which analyses and recognizes data patterns, and is useful for classification and regression [11]. The advantage of this type of algorithm is the easy attainment of the global minimum, avoiding the local minima encountered in other methods such as neural networks [12]. Algorithm performance depends significantly on the choice of kernel function, which maps the input space to the transformed feature space. Several non-linear mapping functions, known as kernel functions, exist for this conversion; one of the most popular is the Radial Basis Function (RBF).
N training data points, with x_i as the input vector and y_i as the actual output value, are used for the SVM model. The SVM model is expressed as follows:

$$f(\mathbf{x}) = \mathbf{w}^{T}\boldsymbol{\varphi}(\mathbf{x}) + b \qquad (2)$$

where $\boldsymbol{\varphi}(\mathbf{x}) = (\varphi_1(\mathbf{x}), \ldots, \varphi_N(\mathbf{x}))^{T}$ is the vector of non-linearly mapped features from the input space x and $\mathbf{w} = (w_1, \ldots, w_N)^{T}$ is the weight vector. The result of the model is a non-linear hyper-surface. However, it is converted into a linear regression model by mapping the


input vectors x to vectors φ(x) of a high-dimensional kernel-induced feature space. The parameters w and b are the support vector weights and a bias, which are calculated from the data. The learning task is defined as the minimization of the regularized risk function L:

$$L = \frac{1}{2}\|\mathbf{w}\|^{2} + \lambda \sum_{i=1}^{N} |y_i - f(\mathbf{x}_i)|_{\varepsilon} \qquad (3)$$

$$|y - f(\mathbf{x})|_{\varepsilon} = \max\bigl(0, |y - f(\mathbf{x})| - \varepsilon\bigr) \qquad (4)$$

The two variables ε and λ are the parameters that control the dimension of the approximating function, and both must be selected by the user. Increasing the insensitivity zone, governed by ε, reduces the accuracy of the approximation and decreases the number of support vectors, leading to data compression; in addition, when modelling highly noisy and polluted data, increasing ε has a smoothing effect. The regularization parameter (λ) determines the trade-off between the weight vector norm and the approximation error. Hence the parameters need to be fine-tuned using an optimization technique; one such technique is the DE algorithm. The LS-SVM (least squares support vector machine) toolbox built in the MATLAB environment is applied to the data obtained from the experiments. The parameters λ and ε are determined using the DE algorithm: DE starts with initial values of λ and ε, and the evolutionary algorithm then uses cross-validation to fine-tune them. The kernel function used is the radial basis function, which is used extensively in the literature.
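As a sketch of the two ingredients just described, the code below computes an RBF kernel value and the ε-insensitive loss of Eq. (4) for a toy prediction. The kernel width, ε and the data values are assumptions chosen for illustration, not the paper's tuned parameters.

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=0.5):
    """Radial basis function kernel: exp(-gamma * ||x1 - x2||^2)."""
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Eq. (4): loss is zero inside the insensitivity zone of half-width eps."""
    return max(0.0, abs(y_true - y_pred) - eps)

x1 = np.array([5.5, 30.0])
x2 = np.array([6.0, 35.0])
print(rbf_kernel(x1, x2))                  # similarity in (0, 1]
print(eps_insensitive_loss(0.414, 0.428))  # deviation smaller than eps, so zero loss
```

Widening eps makes more predictions fall inside the zero-loss zone, which is the data-compression and smoothing effect described in the text.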

V. Optimization of Lipase Production Using Genetic Algorithm
A genetic algorithm (GA) is a stochastic optimization technique that searches for an optimal value of a complex objective function; GAs solve complicated optimization problems by simulating or mimicking the process of natural evolution [13]. GAs have been successfully used as tools in computer programming, artificial intelligence, optimization [14], neural network training and information technology. In a GA, a population of candidate solutions (called chromosomes) to an optimization problem is evolved towards fitter solutions in an iterative process. Each candidate solution can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The selection of chromosomes for the next generation is called reproduction and is determined by the fitness of an individual. Different selection procedures are used in GAs depending on the fitness values, of which proportional selection, tournament selection and ranking are the most popular [15]. In this study, the settings used for the GA in MATLAB are given in Table 2.

Table 2. The parameters used for optimization using GA

  Population size            100
  Generations                100
  Crossover probability      0.8
  Mutation function          Constraint dependent
  Elite count                2
  Max. function evaluations  100000

VI. Results and Discussion
The ANN and SVM models for lipase production were trained as explained above. The correlation coefficient is one of the statistical measures used to judge the goodness of fit of a model. The correlation between two variables X and Y is measured using the Pearson product-moment coefficient, which takes values between -1 and +1 inclusive. It is defined by the formula:

$$r = \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_i (X_i - \bar{X})^2}\,\sqrt{\sum_i (Y_i - \bar{Y})^2}}$$

where X_i represents the original values obtained in the experiments, \bar{X} is the mean of the original values and, similarly, Y_i represents the predicted values. An ideal prediction gives a value of r equal to one; consequently, the ideal prediction leads to a straight line with slope 1 when the X-axis and Y-axis represent the experimental and predicted values for each of the methods employed. The complete data are divided into training, testing and validation sets (80%, 10% and 10% respectively). The training data are used for training the ANN and SVM models. The validation data are used to determine the optimal ANN architecture and the SVM parameters that best fit the experimental results. The testing data are used to finally compare the ANN and SVM models on the basis of their predictive capability at unknown process parameters. The correlation plots of the training, testing and validation data of ANN and SVM for the co-culture of Lactococcus lactis and Lactobacillus plantarum are shown in Figs. 1(a-c) and 2(a-c) respectively.
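The correlation coefficient above is straightforward to compute directly; the sketch below checks a hand-rolled implementation against numpy.corrcoef on invented experimental/predicted pairs (the values are illustrative, not the paper's data).

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    xd, yd = x - x.mean(), y - y.mean()
    return np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2))

# Invented experimental vs. predicted yields, for illustration only.
experimental = np.array([0.30, 0.35, 0.40, 0.41, 0.38])
predicted    = np.array([0.31, 0.34, 0.41, 0.40, 0.39])

r = pearson_r(experimental, predicted)
print(round(r, 4))  # close to 1 for a good fit
```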


Fig. 1 Predicted vs. experimental values for the (a) training, (b) testing and (c) validation data of the ANN model for lipase production from the co-culture of Lactococcus lactis and Lactobacillus plantarum

Fig. 2 Predicted vs. experimental values for the (a) training, (b) testing and (c) validation data of the SVM model for lipase production from the co-culture of Lactococcus lactis and Lactobacillus plantarum

Similarly, the correlation plots of the training, testing and validation data of ANN and SVM for the co-culture of Lactococcus lactis and Lactobacillus brevis are shown in Figs. 3(a-c) and 4(a-c) respectively.

Fig. 3 Predicted vs. experimental values for the (a) training, (b) testing and (c) validation data of the ANN model for lipase production from the co-culture of Lactococcus lactis and Lactobacillus brevis

Fig. 4 Predicted vs. experimental values for the (a) training, (b) testing and (c) validation data of the SVM model for lipase production from the co-culture of Lactococcus lactis and Lactobacillus brevis

It is seen from these figures that, in general, the SVM has good correlation with the experimental results for the training data set, while the ANN shows better predictability of the lipase yield owing to its good correlation with the validation and testing data. The lipase production was then optimized using the GA toolbox as described in the previous sections. The optimal yields obtained from the GA applied to the ANN and SVM models of lipase production are shown in Tables 3 and 4.

Table 3. Settings of parameters and predicted yield obtained for the co-culture of Lactococcus lactis and Lactobacillus plantarum

Model | pH    | Temp (°C) | Incubation period (hrs) | Inoculum volume (ml) | Substrate volume (ml) | Experimental yield | Predicted yield | % deviation
SVM   | 5.492 | 34.828    | 76.230                  | 1.768                | 1.760                 | 0.414              | 0.428           | 3.365
ANN   | 5.450 | 34.944    | 77.213                  | 0.518                | 2.000                 | 0.400              | 0.412           | 2.887

Table 4. Settings of parameters and predicted yield obtained for the co-culture of Lactococcus lactis and Lactobacillus brevis

Model | pH    | Temp (°C) | Incubation period (hrs) | Inoculum volume (ml) | Substrate volume (ml) | Experimental yield | Predicted yield | % deviation
SVM   | 5.658 | 25.019    | 72.980                  | 1.486                | 1.997                 | 0.350              | 0.361           | 3.288
ANN   | 5.574 | 29.934    | 60.295                  | 1.695                | 1.799                 | 0.362              | 0.369           | 1.869

The input parameters represent the process parameters at which the optimal yield was obtained. The predicted yield is the yield at the optimum settings, obtained from the GA toolbox for the ANN and SVM models, and the experimental yield represents the corresponding experimental result. The final column gives the deviation of the predicted yield from the experimental value. The % deviation is less than 5% in all the models obtained.
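A common way to compute such a percentage deviation is shown below as a sketch; the exact rounding convention used for the table values is not recoverable from the paper, so this definition is an assumption.

```python
def pct_deviation(experimental, predicted):
    """Absolute deviation of the prediction, as a percentage of the experimental value."""
    return abs(experimental - predicted) / experimental * 100.0

# Values from Table 3 (ANN row): the deviation stays under the 5% validation threshold.
print(pct_deviation(0.400, 0.412))
```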

Further, it is observed that the % deviation of the ANN model is smaller than that of the SVM model. This suggests that ANN is the better predictive model, given its smaller % deviation and better correlation with the experimental results. However, there are a few more considerations identified in the literature. The computational time taken by ANN is 122.93 seconds, whereas that of SVM is 0.45 seconds; thus SVM takes far less computational time than ANN. Similar results were observed with the lipase production data, where the mean computational times were 24.1 seconds for ANN and 1.63 seconds for SVM. Further, the prediction of lipase production is seen to be quite accurate: the % deviation from the predicted results is under 5%. Hence, it can be concluded that ANN and SVM give comparable results. The higher yield obtained from the SVM model is explained by its generalization capability. When a model is forced to reproduce the present experimental results exactly, it tends to incur more error at unknown data points; if instead the model is allowed a small error on the present experimental data points, the tendency to overfit the data is reduced (this trade-off is known as generalization, as explained by Christopher Bishop). Hence, SVM is better than ANN in terms of computational time and generalization capability, whereas ANN is better than SVM in terms of correlation coefficient and % deviation.
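The generalization argument can be illustrated with a toy experiment unrelated to the authors' data: a model that reproduces its noisy training points exactly (zero training error) usually predicts unseen points worse than one that tolerates a small training error. A sketch using k-nearest-neighbour regression as a stand-in for the two model families (all names and data below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 40))
y_train = np.sin(2 * np.pi * x_train) + 0.3 * rng.standard_normal(40)  # noisy samples
x_test = np.linspace(0.05, 0.95, 300)
y_test = np.sin(2 * np.pi * x_test)                                    # underlying truth

def knn_predict(xq, k):
    """k-nearest-neighbour regression; k=1 reproduces the training data exactly."""
    d = np.abs(x_train[None, :] - xq[:, None])
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

memorise = rmse(knn_predict(x_test, k=1), y_test)  # zero training error, memorizes noise
smooth = rmse(knn_predict(x_test, k=7), y_test)    # tolerates some training error
print(memorise, smooth)
```

The smoother model has the lower test error: accepting a small error on the known points is exactly the sense in which tolerance of training error aids generalization.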

VII. Conclusion

In this research study, ANN and SVM models were built using fermentation performance parameters (pH, temperature, incubation period, inoculum volume and substrate volume) to predict the extracellular lipase production from co-cultures of Lactococcus lactis and Lactobacillus plantarum, and of Lactococcus lactis and Lactobacillus brevis. The following two conclusions can be drawn from the results.
1. Considering the computational time and the generalization capability of both models, SVM was found to be better than the ANN model.
2. Considering the % deviation and the correlation coefficient with the experimental data, ANN is better than SVM.
Further, on application of the GA the lipase production yield is optimized, and the results are validated experimentally at the input parameter settings obtained from the GA for the optimal yield.
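The GA-over-surrogate step, searching a trained model for the parameter settings that maximize predicted yield, can be sketched as follows. The study used a GA toolbox with the trained ANN/SVM models, so the toy fitness function, bounds and GA settings below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative bounds for the five process parameters:
# pH, temperature (°C), incubation (h), inoculum volume (ml), substrate volume (ml)
lo = np.array([4.0, 20.0, 24.0, 0.5, 0.5])
hi = np.array([7.0, 40.0, 96.0, 2.0, 2.0])

def predicted_yield(p):
    """Toy stand-in for the trained ANN/SVM surrogate (peaked at a known optimum)."""
    opt = np.array([5.5, 34.0, 76.0, 1.7, 1.8])
    return np.exp(-np.sum(((p - opt) / (hi - lo)) ** 2, axis=-1))

def ga_maximize(fitness, pop_size=60, generations=120, mut_scale=0.1):
    pop = lo + rng.random((pop_size, lo.size)) * (hi - lo)
    for _ in range(generations):
        fit = fitness(pop)
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]               # truncation selection (elitist)
        mates = parents[rng.permutation(len(parents))]
        alpha = rng.random((len(parents), 1))
        children = alpha * parents + (1 - alpha) * mates    # arithmetic crossover
        children += rng.normal(0, mut_scale, children.shape) * (hi - lo)  # mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    best = pop[np.argmax(fitness(pop))]
    return best, float(fitness(best))

best, yield_hat = ga_maximize(predicted_yield)
print(best.round(3), round(yield_hat, 4))
```

Because the parents survive unmutated, the best candidate never worsens between generations, and the search converges toward the surrogate's optimum inside the bounds.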

References
[1] M.P. Licia, M.R. Mario and R.C. Guillermo, "Catalytic properties of lipase extracts from Aspergillus niger," Food Technol. Biotech, vol. 44, March 2006, pp. 247-252.


[2] B. S. Sooch and B. S. Kauldhar, "Influence of multiple bioprocess parameters on production of lipase from Pseudomonas sp. BWS-5," Braz. Arch. Biol. Technol, vol. 56, Sept/Oct 2013, pp. 711-721.
[3] Y. R. Abdel-Fattah, N. A. Soliman, S. M. Yousef and E. R. El-Helow, "Application of experimental designs to optimize medium composition for production of thermostable lipase/esterase by Geobacillus thermodenitrificans AZ1," J. Genet. Eng. Biotechnol, vol. 10, Dec. 2012, pp. 193-200.
[4] S. Ghosh, S. Murthy, S. Govindasamy and M. Chandrasekaran, "Optimization of L-asparaginase production by Serratia marcescens (NCIM 2919) under solid state fermentation using coconut oil cake," Sustain. Chem. Process, vol. 1, March 2013, pp. 1-8.
[5] P. Kanmani, S. Karthik, J. Aravind and K. Kumaresan, "The use of response surface methodology as a statistical tool for media optimization in lipase production from the dairy effluent isolate Fusarium solani," ISRN Biotechnology, vol. 2013, Sep 2012, 8 pp.
[6] R. Kaushal, N. Sharma and D. Tandon, "Cellulase and xylanase production by co-culture of Aspergillus niger and Fusarium oxysporum utilizing forest waste," Turk. J. Biochem, vol. 37, March 2012, pp. 35-41.
[7] A. Saha and S. C. Santra, "Isolation and characterization of bacteria isolated from municipal solid waste for production of industrial enzymes and waste degradation," J. Microbiol. Exp, vol. 1, May 2014, pp. 1-8.
[8] N. Verma, S. Thakur and A. K. Bhatt, "Microbial lipases: industrial applications and properties (a review)," Int. Res. J. Biological Sci, vol. 1, Dec 2012, pp. 88-92.
[9] C. H. Kuo, T. A. Liu, J. H. Chen, C. M. Chang and C. J. Chieh, "Response surface methodology and artificial neural network optimized synthesis of enzymatic 2-phenylethyl acetate in a solvent-free system," Biocatal. Agric. Biotechnol, vol. 3, July 2014, pp. 1-6.
[10] B. Fathiha, B. Sameh, S. Yousef, D. Zeineddine and R. Nacer, "Comparison of artificial neural network (ANN) and response surface methodology (RSM) in optimization of the immobilization conditions for lipase from Candida rugosa on Amberjet® 4200-Cl," Prep. Biochem. Biotechnol, vol. 43, Dec 2012, pp. 33-47.
[11] G. Zhang and H. Ge, "Prediction of xylanase optimal temperature by support vector regression," Electron. J. Biotechn, vol. 15, January 2012, 8 pp.
[12] L. Morgado, C. Pereira, P. Verissimo and A. Dourado, "Modelling proteolytic enzymes with support vector machines," J. Integrative Bioinformatics, vol. 8, Dec 2011, p. 170.
[13] M. Chauhan, R. S. Chauhan and V. K. Garlapati, "Modelling and optimization studies on a novel lipase production by Staphylococcus arlettae through submerged fermentation," Enzyme Research, vol. 2013, Nov 2013, 8 pp.
[14] A. Sheta, R. Hiary, H. Faris and N. Ghatasheh, "Optimizing thermostable enzymes production using multigene symbolic regression genetic programming," World Applied Sciences Journal, vol. 22, April 2013, pp. 485-493.
[15] S. R. Uppada, A. K. Gupta and J. R. Duta, "Statistical optimization of culture parameters for lipase production from Lactococcus lactis and its application in detergent industry," Int. J. ChemTech Research, vol. 4, Oct-Dec 2012, pp. 1509-1517.

IJETCAS 14-515; © 2014, IJETCAS All Rights Reserved

Numerical investigation of absorption dose distribution of onion powder in electron irradiation system by MCNPX code

T. Taherkhani a, Gh. Alahyarizadeh b
a Department of Physics, Faculty of Science, Takestan Branch, Islamic Azad University, Takestan, Iran.
b Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran, Iran.

________________________________________________________________________________________

Abstract: The absorption dose distribution of onion powder in an electron irradiation system has been numerically investigated using the MCNPX (Monte Carlo N-Particle eXtended) software package and the real parameters of the Rhodotron accelerator at the Yazd Radiation Processing Center (YRPC) of the Atomic Energy Organization of Iran (AEOI). The simulations were carried out for cases in which the onion powder was irradiated by one-sided and double-sided electron irradiation, with homogeneous and inhomogeneous sample structures. The line dose profiles, surface dose profiles, iso-dose profiles at different depths, and 3D profiles of the dose distribution in each case were also studied. The simulation results indicated that the dose distribution for double-sided irradiation is uniform, with its maximum in the center. The results also showed that the depth-dose curves for one-sided and double-sided irradiation reach maximum values of 33 kGy and 44 kGy, respectively.

Keywords: Electron beam irradiation; Dose distribution; Rhodotron accelerator; MCNPX code

________________________________________________________________________________________

I. Introduction

Electron beam processing, or electron irradiation, is an important method of treating objects for industrial purposes such as in the food industry. High-intensity, high-energy electron beams (approximately 5-10 MeV) are widely used for the radiation processing of a variety of products, and the Rhodotron accelerator is an important source used for electron irradiation.
When a product undergoes electron irradiation in a Rhodotron accelerator, not all parts of the sample receive an identical dose, so the dose distribution in the product is non-uniform. Determining the dose value and the locations of the minimum and maximum doses in the product boxes is therefore important for irradiation processing. The dose depends on several parameters of the product and the radiation beam, such as the size, density, and homogeneity or heterogeneity of the product, the irradiation conditions, the frequency of beam passes over the body, the width of the scanning beam, the beam energy, the beam current intensity and the conveyor speed [1]. Another important parameter in electron beam irradiation is the product thickness: the dose distribution is not optimal if the product thickness is greater than the penetration depth of the electron beam, and in this case the best approach is double-sided irradiation. Because of the short penetration depth of the electron beam, the interaction between the beam and the product occurs near the surface [2, 3]. Hence, double-sided irradiation is used for products thicker than the electron penetration depth, to enhance electron penetration into the product.
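The thickness criterion can be made quantitative with the classical Katz-Penfold rule of thumb (not taken from this paper): for electron energies above about 2.5 MeV the practical range is roughly R_p ≈ 0.530 E − 0.106 g/cm², with E in MeV, so a package needs double-sided irradiation once its areal density exceeds this range. A hedged sketch:

```python
def practical_range_g_cm2(energy_mev: float) -> float:
    """Katz-Penfold empirical practical range for electrons above ~2.5 MeV (g/cm^2)."""
    return 0.530 * energy_mev - 0.106

def needs_double_sided(thickness_cm: float, density_g_cm3: float, energy_mev: float) -> bool:
    """True when the package areal density exceeds the one-sided practical range."""
    return thickness_cm * density_g_cm3 > practical_range_g_cm2(energy_mev)

# The package studied here: 11 cm of onion powder at ~0.7 g/cm^3 under a 10 MeV beam.
print(needs_double_sided(11.0, 0.7, 10.0))
```

The areal density of 7.7 g/cm² exceeds the roughly 5.2 g/cm² practical range of 10 MeV electrons, consistent with the double-sided irradiation studied in this paper.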

In this research, the absorption dose distribution of onion powder in the electron irradiation system was investigated using the MCNPX code software package and the real parameters of the Rhodotron-type electron accelerator available at the Yazd Radiation Processing Center (YRPC) of the Atomic Energy Organization of Iran (AEOI). To determine the dose distribution of onion powder, one-sided and double-sided irradiations were simulated in two cases, i.e. homogeneous and heterogeneous product states. In each case, the depth dose distributions, surface dose distributions at different depths, iso-dose curves at different depths, and three-dimensional dose distributions at different depths were also investigated.

II. Simulation Parameters and Procedure

In this research, the simulation was performed to determine the absorbed dose distribution of onion powder using the real geometry of the irradiation system and the real parameters of the 10 MeV electron accelerator at the Yazd Radiation Processing Center (YRPC), with the MCNPX software package. The simulation was done for products whose whole package volume is filled (homogeneous case) and for products whose volume is not completely filled, with parts of the package empty (inhomogeneous case). The dose distribution was studied in both directions, along the conveyor motion and along the scanning electron beam, at different depths. A schematic of the arrangement of the product for electron beam irradiation on the conveyor is shown in Fig. 1.

T. Taherkhani et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014, pp. 44-49

Fig. 1. Schematic of electron beam irradiation of products in Radiation Processing Center

The simulation parameters used in this research are those of the electron accelerator established at the Yazd Radiation Processing Center (YRPC) in 1997, a high-energy Rhodotron TT200 electron accelerator. The system provides 5 and 10 MeV electron energies with a maximum available beam current of 8 mA. The accelerator, with a power of 100 kW, has already proved stable at 250 kW for many hours. The machine is equipped with a scanning horn with a scan width of 100 cm at a scan frequency of 100 Hz, and with a variable-speed conveyor to pass the materials under the scanned beam. The characteristics of the electron source are listed in Table 1.

Table 1. Characteristics of the electron source used in the simulation, according to YRPC

Rhodotron electron accelerator parameters
Conveyor speed | Current | Beam energy
1.8 cm/s | 4 mA | 10 MeV

The electron beam comes from the source with a spatial distribution that acts as a surface source 50 cm long and 48 cm wide. As the products pass through the irradiation position, the beam also scans them in the vertical direction. Since MCNPX can only simulate a fixed geometry and cannot simulate motion, it is assumed that the entire product surface is irradiated by the electron beam.

The electron beam is treated as a beam parallel to the product. In one-sided irradiation, a surface source is used on one side of the product; in double-sided irradiation, two surface sources of the same size and at the same distance from the product are used. After the tally calculation, parameters and coefficients related to the conveyor speed, current intensity and product density are applied in the MCNPX code to determine the dose distribution exactly. The MCNPX code then determines the dose distribution by evaluating it in the different cells defined within the product. Since the assumed product geometries for the homogeneous and inhomogeneous irradiations differ, the volumes of the material cells, and therefore the simulation coefficients, differ between the two cases. As mentioned before, the product under study is onion powder, which is commonly used in irradiation studies. The characteristic parameters of onion powder used in the simulation are listed in Table 2.

Table.2.

Table 2. Characteristic parameters of onion powder which is used in the simulation based corresponding

to Rhodotron accelerator parameters at YRPC

Homogenous product

Coefficient of dose calculation Density Matter

4.42 × 104 0.7 Onion powder

Inhomogeneous product

Coefficient of dose calculation Density Matter

1.11 × 105 0.7 Onion powder

The homogeneous product used in the simulation is modeled as a rectangular cuboid with dimensions 11 cm × 33 cm × 48 cm. This configuration was selected based on the packages irradiated at YRPC.
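The "coefficient of dose calculation" in Table 2 folds together the per-particle tally normalization, beam current, conveyor speed and density. One plausible decomposition, assuming an F6-style tally in MeV/g per source electron and that every beam electron enters the source plane while the package traverses it (the scan geometry is ignored here), is:

```python
E_CHARGE = 1.602176634e-19         # C per electron
MEV_PER_G_TO_GY = 1.602176634e-10  # 1 MeV/g = 1.602e-10 J/kg = 1.602e-10 Gy

def absorbed_dose_gy(tally_mev_per_g: float, current_a: float,
                     source_length_cm: float, conveyor_speed_cm_s: float) -> float:
    """Scale a per-source-electron MCNP tally (MeV/g) to absorbed dose in Gy.

    Assumes every beam electron enters the source plane while the package
    traverses it; real setups need the scan geometry folded in as well.
    """
    exposure_s = source_length_cm / conveyor_speed_cm_s
    n_electrons = current_a * exposure_s / E_CHARGE
    return tally_mev_per_g * MEV_PER_G_TO_GY * n_electrons

# YRPC-like settings: 4 mA beam, 50 cm source, 1.8 cm/s conveyor,
# with an illustrative tally of 1e-4 MeV/g per source electron.
print(f"{absorbed_dose_gy(1e-4, 4e-3, 50.0, 1.8):.1f} Gy")
```

With these illustrative inputs the sketch yields about 11 kGy, the same order of magnitude as the simulated doses; the actual coefficients in Table 2 also absorb the cell-volume differences between the homogeneous and inhomogeneous geometries.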


The inhomogeneous geometry changes the depth-dose distribution in the inhomogeneous regions and their neighborhood, which affects the uniformity of the absorbed dose at different depths. To reduce the errors and the run time of the code, a variance reduction technique and an energy cutoff at 0.01 MeV were used. In this case, the error value was less than 2%.

III. Results and Discussion
Figure 2 (a, b) shows the dose distribution along the conveyor motion at different depths for the homogeneous onion powder product under one-sided and double-sided irradiation, respectively. As shown in Fig. 2 (a), in one-sided irradiation the dose first increases and then decreases with depth, and the results indicate that the dose distribution has an asymmetric form. The curve slopes at the two ends of the graph show the dose variation rate in two adjacent cells inside the product. The results also show that the parts of the product at the edges absorb a lower dose. The results obtained are in good agreement with references [4, 7].

(a)
(b)
Fig. 2. Dose distribution along the conveyor motion at different depths for homogeneous onion powder and a 10 MeV electron beam: a) one-sided irradiation, and b) double-sided irradiation

The differences in absorbed dose at different depths are considerable in one-sided irradiation. As shown in Fig. 2, the absorbed dose at a depth of 2.75 cm is 35 kGy, while at a depth of 4.25 cm it is 15 kGy. These differences indicate that the irradiation method should be modified. To improve the irradiation method and obtain a uniform absorbed dose distribution throughout the product package, packages of smaller depth can be used, or irradiation from different angles can be performed. Double-sided irradiation is one of the important methods that can be used to improve the absorbed dose distribution; its results are shown in Fig. 2 (b).

Figure 3 (a, b) shows the depth dose distributions in homogeneous onion powder under one-sided and double-sided irradiation, respectively. The depth dose distribution under one-sided irradiation is shown in Fig. 3 (a), in which the difference between the absorbed doses at different depths is clearly observed. As shown in this figure, up to a depth of 4 cm, owing to the increasing creation of secondary electrons from interactions between the beam and the atoms, the


absorbed dose increases; then, as the beam intensity decreases, the absorbed dose decreases [5, 6]. In double-sided irradiation, on the other hand, the inner parts of the product receive doses from both sides, and the depth-dose maxima of the two sides overlap. As shown in Fig. 3 (b), the two peaks overlap and the maximum dose, 45 kGy, is at the center.

(a)
(b)
Fig. 3. Dose distributions at different depths for homogeneous onion powder and a 10 MeV electron beam: a) one-sided irradiation, and b) double-sided irradiation
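The overlap argument can be checked with a toy superposition: the double-sided profile is the one-sided depth-dose curve plus its mirror image through the package. The build-up/fall-off curve below is an idealised stand-in for the simulated profile, not the authors' data:

```python
import numpy as np

thickness = 11.0                      # package depth, cm
z = np.linspace(0.0, thickness, 221)  # depth grid

def one_sided(z, peak_depth=4.0, fall_off=2.5, d_max=33.0):
    """Idealised electron depth-dose: linear build-up to a peak, then Gaussian fall-off."""
    rise = z / peak_depth
    fall = np.exp(-((z - peak_depth) / fall_off) ** 2)
    return d_max * np.where(z < peak_depth, 0.6 + 0.4 * rise, fall)

# Double-sided irradiation: superpose the profile with its mirror image.
double_sided = one_sided(z) + one_sided(thickness - z)

i_max = int(np.argmax(double_sided))
print(z[i_max])
```

Summing the profile with its mirror moves the maximum to the package centre, as in Fig. 3 (b), and raises it above the one-sided maximum.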

Fig. 4 (a, b) shows the iso-dose curves at a depth of 5.75 cm under one-sided and double-sided irradiation, respectively, which exhibit the dose distribution inside the product. These curves make it possible to assess the dose uniformity over all irradiated surfaces. Most of the surface at this depth is covered by the 32 kGy contour; in other words, at this depth more than 90% of the product receives a uniform dose of 32 kGy. In one-sided irradiation the iso-dose values also decrease with increasing depth, which results in asymmetric curves, while in double-sided irradiation the iso-dose values increase, so that the first and last layers have the same dose.

(a)


(b)

Fig. 4. Iso-dose distributions at a depth of 5.75 cm for homogeneous onion powder and a 10 MeV electron beam: a) one-sided irradiation, and b) double-sided irradiation.

Figure 5 shows the three-dimensional (3D) dose distribution at a depth of 5.75 cm. The 3D dose distribution can be used to observe the variation of the absorbed dose over the cells at the defined depth; it also shows how the absorbed dose changes in the regions under irradiation relative to the other regions. As shown in this figure, the dose uniformity is preserved in double-sided irradiation, although the absorbed dose decreases to 25 kGy at the boundaries and 15 kGy at the edges. In one-sided irradiation, the 3D dose distribution decreases at the greater depths, causing non-uniformity of the dose distribution there, whereas double-sided irradiation increases the dose distribution owing to the uniformity of dose deposition and the higher electron scattering.

(a)

(b)

Fig. 5. Three-dimensional dose distributions at a depth of 5.75 cm for homogeneous onion powder and double-sided irradiation


The dose distribution along the conveyor motion at different depths of the inhomogeneous product under one-sided irradiation is shown in Fig. 6, and Fig. 7 displays the depth dose distribution in the inhomogeneous parts of the inhomogeneous onion powder product under one-sided irradiation. As shown in this figure, the absorbed dose is very low in the part that contains air; in one-sided irradiation, the product receives essentially no dose in the air-filled parts. In the layers just below the air layer, the dose value increases owing to the higher electron scattering, and the dose distribution then increases with increasing depth.

Fig. 6. Dose distribution along the conveyor motion at different depths of the inhomogeneous product under one-sided irradiation with a 10 MeV electron beam

Fig. 7. Depth dose distribution in the inhomogeneous parts of the product for one-sided irradiation

IV. Conclusion
The dose distribution of onion powder under electron irradiation has been numerically studied using the MCNPX software package and the real parameters of the Rhodotron accelerator at YRPC of AEOI. The simulations were performed for one-sided and double-sided electron irradiation with homogeneous and inhomogeneous products. The line dose, surface dose and iso-dose profiles at different depths, and the 3D profiles of the dose distributions in each case, were also investigated. The simulation results showed that the dose distribution for double-sided irradiation is uniform, with its maximum in the center. The results also showed that the depth-dose curves for one-sided and double-sided irradiation reach maximum values of 33 kGy and 44 kGy, respectively.

References
[1] ASTM, Standard Practice E 1649, 1995.
[2] H. Cember and T. E. Johnson, "Introduction to Health Physics," Pergamon Press, 1983.
[3] N. Tsoulfanidis, "Measurement and Detection of Radiation," Hemisphere Publishing Corporation, New York, 1983.
[4] F. Ziaei, "Design of a conversion target of high energy electrons to bremsstrahlung X-ray," PhD dissertation, Amirkabir University, Tehran, Iran, 2002.
[5] Manual of Food Irradiation Dosimetry, IAEA Technical Reports Series No. 178, 1977.
[6] ASTM, Standard Practice E 1431, 1991.
[7] F. Ziaie, H. Farideh, S.M. Hadji-Saeid and S.A. Durrani, "Investigation of beam uniformity in industrial electron accelerator," Radiation Measurements, vol. 34, pp. 609-613, 2001.

IJETCAS 14-516; © 2014, IJETCAS All Rights Reserved

Predicting Crack Width in Circular Ground Supported Reservoir Subject to Seismic Loading Using Radial Basis Neural Networks: RC & FRC Wall

Tulesh. N. Patel 1, S. A. Vasanwala 2, C. D. Modhera 3
1 Student (Ph.D.), Department of Applied Mechanics, S.V.N.I.T, Surat-395 007, Gujarat, India.
2 Professor, Department of Applied Mechanics, S.V.N.I.T, Surat-395 007, Gujarat, India.
3 Professor & Dean, Department of Applied Mechanics, S.V.N.I.T, Surat-395 007, Gujarat, India.

________________________________________________________________________________________
Abstract: The design and calculation of crack width in circular GSR walls is a time-consuming task that requires a great deal of expertise. It is often necessary to know the crack width of a tank of known capacity and geometry before its detailed design. Circular GSR reinforced concrete walls built using high-strength deformed bars and designed by the limit state design method were found to have larger crack widths, and these crack widths must be controlled in reinforced and fibre-reinforced concrete walls to enhance durability. The latest revision of the Indian code stresses the importance of durability and has included formulae to calculate the crack widths. It is therefore important to select reinforced concrete structures that minimize the crack width along with the other parameters, i.e. the thickness of the section, the area of steel required, the spacing of the steel, the cover, etc. However, in cases such as severe exposure conditions and in water tanks, the crack width may be checked by theoretical calculations. Cracks can also be produced by shrinkage and temperature variations, and extra reinforcement to reduce such cracks is always necessary in reinforced concrete. The methods for calculating the widths of cracks due to loads, as well as those due to shrinkage and temperature changes, are included in this tool. The main reasons for limiting the crack width in concrete walls are corrosion and water tightness. A back-propagation neural network, built with the NeuroSolutions for Excel toolbox, has been applied to the present analysis problem, and spreadsheet software has been developed to design and calculate the crack width of circular on-ground water tanks with RC and FRC walls. The main benefit of using a neural network approach is that the network is built directly from experimental or theoretical data using the self-organizing capabilities of the neural network. The input parameters for the software are the dimensions of the circular tank. The study highlights the need to introduce a graph that incorporates all the design parameters, i.e. the bar diameter, area of steel required, spacing of the steel, thickness of the concrete wall and service bending moment, for controlling crack widths in the Indian codes along the lines of the BS code. The crack width in a water tank is calculated to satisfy a limit state of serviceability.
Keywords: Software; Ground supported reservoir; Reinforced concrete; Fibre reinforced concrete; Crack width.

_________________________________________________________________________________
I. Introduction
Water-retaining reinforced concrete structures built using high-strength deformed bars and designed by the limit state design method were found to have larger crack widths. To control these crack widths and to enhance durability, different codes prescribe limiting crack widths based on the environment in which the structure exists. The latest revision of the Indian code stresses the importance of durability and has included formulae to calculate the crack widths. It is therefore important to select reinforced concrete structures that minimize the crack width along with the other parameters, i.e. the thickness of the section, the area of steel required, the spacing of the steel, the cover, etc. However, in cases such as severe exposure conditions and in water-retaining structures, the crack width may be checked by theoretical calculations. Cracks can also be produced by shrinkage and temperature variations, and extra reinforcement to reduce such cracks is always necessary in reinforced concrete. The methods for calculating the widths of cracks due to loads, as well as those due to shrinkage and temperature changes, are presented in this paper.

There are three perceived reasons for limiting the crack width in concrete: appearance, corrosion and water tightness. It should be noted that the three are not all applicable simultaneously in a particular structure. Appearance is important in the case of exposed concrete for aesthetic reasons; corrosion is important for concrete exposed to aggressive environments; and water tightness is required in the case of water-retaining structures. Appearance requires limiting the crack widths at the surface, which can be ensured by locating the reinforcement as close as possible to the surface (by using small covers), preventing cracks from widening. Corrosion control, on the contrary, requires an increased thickness of concrete cover and better quality concrete. Water tightness requires control of crack widths but applies only to special structures. Hence, a single provision in the code is not sufficient to address the control of cracking due to all three reasons. Recent research has found that there is no correlation between corrosion and crack widths, and there was a large scatter in the measured crack widths even in controlled laboratory experiments. Hence, a simple

Tulesh.N.Patel et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014, pp. 50-55

formula involving the clear cover and the calculated stress in the reinforcement at service load has been included in the latest revision of the IS code. This highlights the need to introduce a simple formula for controlling crack widths in the Indian codes along the lines of the BS code. The crack width in a water-retaining structure is calculated to satisfy a limit state of serviceability.

Neural computing is defined as the study of networks of adaptable nodes which, through a process of learning from examples, store experimental knowledge and make it available for use. In other words, neural networks are highly simplified models of the human neural system. The most important property of neural networks in engineering problems is their capability to learn directly from examples; another important property is their correct, or nearly correct, response to incomplete tasks.

II. Application of Fibre Reinforced Concrete

Fibre-reinforced concrete has specialized properties and can enhance resistance to impact, abrasion, shattering and vibration, as well as durability. Initially, fibre-reinforced concrete was used for pavements and industrial slabs, but recently it has found a wide variety of applications in structures such as heavy-duty pavements, airplane runways, industrial slabs, water tanks, canals, dam structures, parking structure decks, water and wastewater treatment plants, pipes, channels, precast panels, and structures resistant to earthquakes and explosives, as well as in concrete application techniques.

III. Polypropylene Fibres (micro-synthetic fibres)

Polypropylene fibres are gaining in significance due to the low price of the raw polymer material and their high alkaline resistance (Keer, 1984; Maidl, 1995). They are available in two forms, monofilament or fibrillated, manufactured in a continuous process by extrusion of a polypropylene homopolymer resin (Keer, 1984; Knapton, 2003). Micro-synthetic fibres based on 100% polypropylene are used extensively in vertical walls and ground-supported slabs for the purpose of reducing plastic shrinkage cracking and plastic settlement cracking. These fibres are typically 12 mm long by 18 μm in diameter (Perry, 2003). Polypropylene fibres are added at a recommended dosage of approximately 0.90 kg/m³ (0.1% by volume) (Knapton, 2003); the fibre volume is so low that mixing techniques require little or no modification from normal practice (Newman et al., 2003). The fibres may be added either at a conventional batching/mixing plant or by hand to the ready-mix truck on site (Knapton, 2003).
Concrete mixes containing polypropylene fibres can be transported by normal methods and flow easily from the hopper outlet, and no special precautions are necessary. Conventional means of tamping or vibration can be used to provide the necessary compaction. Curing procedures similar to those specified for conventional concrete should be strictly followed. Once placed, fibre-dosed mixes may be floated and trowelled using all normal hand and power tools (Knapton, 2003).

IV. Crack width Analysis in FRC

Crack control is only possible if at least one of the conditions mentioned below is satisfied:

presence of conventional steel bars;

presence of normal compressive forces (compression – prestressing);

crack control maintained by the structural system itself (redistribution of internal moments and forces limited by the rotation capacity).

The calculation of the design crack width in steel-fibre-reinforced concrete is similar to that in normal reinforced concrete. However, it has to be taken into account that the tensile stress in fibre-reinforced concrete after cracking is not equal to zero but equal to 0.45 fRm,1 (constant over the cracked part of the cross-section).
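This post-cracking term is the key difference from plain reinforced concrete; a minimal illustrative sketch (ours, not the paper's, with an assumed example value of fRm,1) makes the relation explicit:

```python
def residual_tensile_stress(f_Rm1):
    """Residual tensile stress carried by steel-fibre-reinforced concrete after
    cracking, taken as 0.45 * f_Rm,1, constant over the cracked part of the
    cross-section. f_Rm1: mean residual flexural strength (MPa)."""
    return 0.45 * f_Rm1

# In plain RC this stress is zero; in FRC it offsets part of the tensile force
# otherwise carried by the rebar, which is what reduces the design crack width.
print(residual_tensile_stress(4.0))  # 1.8 MPa for an assumed f_Rm,1 of 4 MPa
```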

V. Crack width control: Spreadsheet tool

The design problem is first represented in the spreadsheet. The main inputs required from the designer are represented under various sections, namely:

Size of tanks

Grade of concrete

Grade of steel

Diameter of steel

Spacing of steel

Cover

VI. Advantages of the Program

The user can perform what-if analysis, i.e., experiment with the effect of geometry and dimensions on cost.

The user can watch intermediate calculations, hence has better control over the redesign process.

The user can easily modify the logic at a later date, if necessary, as per revision of the design code.

As the output is generated in Excel-sheet format, graphs are easily constructed

Page 72: IJETCAS June-August Issue 9

Tulesh.N.Patel et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014,pp.

50-55

IJETCAS 14-516; © 2014, IJETCAS All Rights Reserved Page 52

for data interpretation.

There is complete transparency of calculation for the user.

Table 1: Input and output data for circular tank

VII. Radial Basis Function (RBF)

We now move on to discuss a class of feed forward neural networks called Radial Basis Function Networks

(RBFNs) that compute activation at the hidden neurons in a way that is different from what we have seen in the

case of feed forward neural networks. Rather than employing an inner product between the input vector and the

weight vector, hidden neuron activations in RBFNs are computed using an exponential of a distance measure

(usually the Euclidean distance or a weighted norm) between the input vector and a prototype vector that

characterizes the signal function at that hidden neuron.

The Radial Basis Function (RBF) model is a special type of neural network consisting of three layers: input,

pattern (hidden), and output. It represents two sequential mappings. The first nonlinearly maps the input data via

basis functions in the hidden layer. The second, a weighted mapping of the basis function outputs, generates the

model output. The two mappings are usually treated separately, which makes RBF a very versatile modeling

technique. The RBF networks have been successfully employed in areas such as data mining, medical diagnosis,

face and speech recognition, robotics, forecasting stock prices, cataloging objects in the sky, and bioinformatics.

RBF networks have their theoretical roots in regularization theory and were originally developed by Russian

mathematicians in the 1960s.

Function approximation, with RBF networks, in a limited area of the input space requires

• The placement of the localized Gaussians to cover the space

• The control of the width of each Gaussian

• The setting of the amplitude of each Gaussian

Fig.1: Clarifier of 46mt diameter and 7mt vertical wall height under construction.

If we can accomplish these three tasks, we can approximate arbitrary continuous functions with an RBF network.
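The two sequential mappings described above can be sketched in a few lines of code. The following is an illustrative Python toy (an assumption of this article, not the authors' Neuro Solution tool): Gaussian hidden activations computed from Euclidean distances to prototype centres, followed by linear output weights fitted by least squares via the normal equations.

```python
import math

def rbf_activations(x, centers, width):
    """Gaussian hidden-layer activations: exp(-||x - c||^2 / (2*width^2))."""
    acts = []
    for c in centers:
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        acts.append(math.exp(-d2 / (2.0 * width ** 2)))
    return acts

def fit_output_weights(X, y, centers, width):
    """Second mapping: fit linear output weights by solving the normal
    equations (H^T H) w = H^T y with Gaussian elimination (no libraries)."""
    H = [rbf_activations(x, centers, width) for x in X]
    m = len(centers)
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(m)] for i in range(m)]
    b = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(m)]
    for col in range(m):                      # forward elimination with pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * m
    for i in range(m - 1, -1, -1):            # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, m))) / A[i][i]
    return w

def predict(x, centers, width, w):
    return sum(wi * hi for wi, hi in zip(w, rbf_activations(x, centers, width)))

# Centres placed to cover [0, 1], widths wide enough for neighbours to overlap:
centers = [[0.0], [0.5], [1.0]]
weights = fit_output_weights(centers, [0.0, 0.25, 1.0], centers, 0.5)
print(round(predict([0.5], centers, 0.5, weights), 6))  # interpolates the training point
```

With the centres covering the input region and an appropriate width, this construction interpolates smooth functions, which is exactly the three-task recipe (placement, width, amplitude) listed above.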


Broomhead and Lowe used the RBF model for approximation. The RBF network has been proven to be a universal function approximator. It can perform similar function mappings to an MLP, but its architecture and functionality are very different. We will first examine the RBF architecture and then examine the differences between it and the MLP that arise from this architecture.

Fig.2: Neuro Solution software for Circular Tank (Radial Basis Function)

Table 2: Training data for Crack width in Circular Tank with obstruction

Capacity of tank (m3) | Wall thick. (mm) | Ht. of wall (mt.) | Bar dia (mm) | RC CW Desired (mm) | RC CW Predicted (mm) | RC % Error | FRC CW Desired (mm) | FRC CW Predicted (mm) | FRC % Error

(CW = crack width; RC = reinforced concrete; FRC = fibre-reinforced concrete)

7000 700 10 12 0.161 0.165 2.427 0.042 0.041 -2.569

7000 700 6 12 0.190 0.176 -7.691 0.068 0.065 -4.170

7000 700 10 16 0.199 0.208 4.276 0.054 0.052 -4.496

7000 700 5 16 0.467 0.473 1.116 0.166 0.164 -1.279

7000 700 5 20 0.585 0.596 1.852 0.206 0.206 0.180

7000 700 8 20 0.084 0.079 -7.100 0.024 0.023 -6.497

8000 750 8 12 0.012 0.012 -1.423 0.004 0.004 6.707

8000 750 8 16 0.016 0.017 5.019 0.005 0.005 6.606

8000 750 7 25 0.205 0.221 7.204 0.063 0.067 6.085

8000 750 9 12 0.091 0.083 -9.504 0.027 0.026 -0.454

8000 750 6 32 0.582 0.576 -1.077 0.197 0.188 -4.746

8000 750 8 32 0.032 0.031 -1.569 0.009 0.009 -2.486

8000 750 8 20 0.020 0.020 -0.538 0.006 0.006 4.733

8000 750 10 25 0.270 0.287 5.936 0.068 0.071 5.112

8000 750 9 32 0.239 0.240 0.378 0.065 0.063 -3.829

8000 750 6 16 0.308 0.309 0.396 0.102 0.102 0.175
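The % Error columns in Tables 2 and 3 are consistent with a signed relative error between the ANN-predicted and desired crack widths. The check below is our reconstruction (the paper does not state the formula); with the tabulated 3-decimal values it reproduces the first row of Table 2 only to within rounding.

```python
def percent_error(desired, predicted):
    """Signed relative error of the ANN prediction, in percent.
    Reconstructed from the tabulated values; the exact base used by the
    authors (and their unrounded predictions) is not given in the paper."""
    return (predicted - desired) / predicted * 100.0

# First row of Table 2: desired 0.161 mm, predicted 0.165 mm, tabulated error 2.427%
print(round(percent_error(0.161, 0.165), 3))  # close to 2.427, within rounding
```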

Table 3: Training data for Crack width in Circular Tank without obstruction

Capacity of tank (m3) | Wall thick. (mm) | Ht. of wall (mt.) | Bar dia (mm) | RC CW Desired (mm) | RC CW Predicted (mm) | RC % Error | FRC CW Desired (mm) | FRC CW Predicted (mm) | FRC % Error

(CW = crack width; RC = reinforced concrete; FRC = fibre-reinforced concrete)

7000 700 7 12 0.146 0.148 1.139 0.039 0.040 0.838

7000 700 5 12 0.117 0.127 7.910 0.042 0.043 2.094

7000 700 6 12 0.061 0.057 -6.464 0.019 0.018 -3.825

8000 750 10 32 0.484 0.472 -2.459 0.079 0.079 -0.309

8000 750 10 16 0.253 0.250 -0.951 0.054 0.053 -1.770

8000 750 7 25 0.233 0.217 -7.395 0.059 0.057 -4.430

8000 750 9 12 0.203 0.197 -3.071 0.043 0.041 -4.510

8000 750 7 12 0.113 0.105 -8.109 0.031 0.030 -3.707

8000 750 8 20 0.251 0.257 2.324 0.062 0.064 3.206

8000 750 6 25 0.035 0.033 -6.116 0.010 0.011 5.724

8000 750 7 16 0.144 0.139 -3.116 0.040 0.039 -2.795

8000 750 5 16 0.226 0.207 -8.946 0.074 0.073 -1.580

8000 750 8 25 0.321 0.329 2.426 0.076 0.074 -1.691

8000 750 6 16 0.022 0.023 6.250 0.007 0.006 -7.967

8000 750 6 12 0.016 0.017 4.836 0.005 0.005 -6.887

8000 750 10 12 0.229 0.216 -6.088 0.043 0.042 -2.106

8000 750 5 20 0.287 0.299 3.722 0.092 0.098 6.710

8000 750 10 25 0.370 0.362 -2.050 0.079 0.074 -6.773


Fig. 3: Net training performance of crack width in Circular Tank RC wall without Obstruction

Fig. 4: Net training performance of crack width in Circular Tank RC wall without Obstruction

VIII. Conclusion

An attempt is made in this work to develop a crack-width control tool using a spreadsheet (MS Office). This tool will be very useful for structural consultants checking the serviceability design criteria of reinforced concrete elements, particularly in the design of water tanks.

A neural network tool has been developed considering a supervised learning methodology using the Levenberg-Marquardt algorithm for back propagation and the radial basis function. It has been observed that for circular tanks with and without obstruction the best neural architecture has 5 neurons in the hidden layer, giving the optimum solution; the coefficient of correlation is 0.9969 and the absolute maximum error is 7.714 during training and 8.087 during testing.

The absolute maximum error for both classes of tank in the testing phase is below 10%. The developed neural network tool can be readily used in the field for preliminary design with reference to the crack width of circular tanks under seismic loading.
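The coefficient of correlation quoted here is presumably the Pearson correlation between the desired and ANN-predicted crack widths; the helper below (our sketch, not the authors' tool) shows the computation on the first few rows of Table 2.

```python
import math

def pearson_r(desired, predicted):
    """Pearson correlation coefficient between desired and predicted values."""
    n = len(desired)
    md = sum(desired) / n
    mp = sum(predicted) / n
    cov = sum((d - md) * (p - mp) for d, p in zip(desired, predicted))
    sd = math.sqrt(sum((d - md) ** 2 for d in desired))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (sd * sp)

# First rows of Table 2 (desired vs ANN-predicted RC crack widths, mm);
# the paper's 0.9969 figure is computed over the full data set.
desired = [0.161, 0.190, 0.199, 0.467, 0.585, 0.084]
predicted = [0.165, 0.176, 0.208, 0.473, 0.596, 0.079]
print(round(pearson_r(desired, predicted), 4))
```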

Addition of polypropylene fibres improves the cracking behaviour of concrete walls reinforced with tension bars. The inclusion of polypropylene fibres decreases both the crack spacing and the crack width. A greater reduction of the crack width and the crack spacing, respectively, can be achieved if polypropylene fibres with an appropriate aspect ratio are used.

Fibre reinforcement could be an attractive alternative to crack-controlling conventional reinforcement. The price of the concrete is increased; however, casting FRC is a much less labour-intensive operation than placing and tying conventional crack-controlling reinforcement. Conclusively, one can say that fibre reinforcement could be a new, cheap crack-controlling reinforcement.

References

[1] Anette Jansson, "Design methods for fibre-reinforced concrete: a state-of-the-art review", Thomas Concrete Group AB, Sweden.
[2] British Standards Institution (1985). BS 8110: Structural use of concrete, British Standards Institution, London.

[Figures 3 and 4: crack width of the RC wall (mm) versus exemplar number; each plot compares the desired value and the ANN value of the crack width.]


[3] British Standards Institution (1987). BS 8007: Design of concrete structures for retaining aqueous liquids, British Standards Institution, London.
[4] B. Chiaia, A. P. Fantilli and P. Vallini, "Crack patterns in reinforced and fibre reinforced concrete structures", The Open Construction and Building Technology Journal, 2008, 2, 146-155.
[5] C. Q. Li and W. Lawanwisut, "Crack width due to corroded bar in reinforced concrete structures", International Journal of Materials & Structural Reliability, Vol. 3, September 2005, 87-94.
[6] D. Pettersson and S. Thelandersson, "Crack development in concrete structures due to imposed strains - Part I: Modelling", Materials and Structures, Vol. 34, January-February 2001, pp. 7-13.
[7] D. Pettersson and S. Thelandersson, "Crack development in concrete structures due to imposed strains - Part II: Parametric study of a wall fully restrained at the base", Materials and Structures/Materiaux et Constructions, Vol. 34, January-February 2001, pp. 14-20.
[8] J. Zhang and H. Stang, "Application of stress crack width relationship in predicting the flexural behavior of fibre-reinforced concrete", Cement and Concrete Research, Vol. 28, No. 3, pp. 439-452, 1998.
[9] Henrik Stang and Tine Aarre, "Evaluation of crack width in FRC with conventional reinforcement", Cement & Concrete Composites, 14 (1992), 143-154.
[10] Hong Sung Nam and Han Kyoung Bong, "Estimate of flexural crack width in reinforced concrete members", The 3rd International Conference (2008), B 19, 752-758.
[11] H. Wang and A. Belarbi, "Flexural behavior of fiber-reinforced-concrete beams reinforced with FRP rebars", SP 230-51.
[12] N. Subramanian (2005), "Controlling the crack width of flexural RC members", The Indian Concrete Journal, 1-6.
[13] Vandewalle, L., "Cracking behavior of concrete beams reinforced with a combination of ordinary reinforcement and steel fibers", Materials and Structures, Vol. 33, April 2000, pp. 164-170.


International Association of Scientific Innovation and Research (IASIR)

(An Association Unifying the Sciences, Engineering, and Applied Research)


International Journal of Emerging Technologies in Computational

and Applied Sciences (IJETCAS)

www.iasir.net

IJETCAS 14- 518; © 2014, IJETCAS All Rights Reserved Page 56

ISSN (Print): 2279-0047

ISSN (Online): 2279-0055

Impact of Various Channel Coding Schemes on Performance Analysis of Subcarrier Intensity-Modulated Free Space Optical Communication System

Joarder Jafor Sadique¹, Shaikh Enayet Ullah² and Md. Mahbubar Rahman³

¹Department of Electronics and Telecommunication Engineering, Begum Rokeya University, Rangpur-5404, Bangladesh
²Department of Applied Physics and Electronic Engineering, Rajshahi University, Rajshahi-6205, Bangladesh
³Department of Applied Physics, Electronics and Communication Engineering, Islamic University, Kushtia-7003, Bangladesh

Abstract: In this paper, we present a comprehensive simulation study on the performance assessment of a subcarrier intensity-modulated Free Space Optical (FSO) communication system. The proposed system under investigation considers a communication link between a base station and a mobile unit using light-wave transmission through free space, taking the atmospheric turbulence effect into account. The FSO system implements various types of channel coding schemes such as LDPC, Turbo, Cyclic, BCH and Reed-Solomon. From a MATLAB-based simulation study on synthetic data transmission, a quite noticeable impact of the different channel coding schemes on the performance of the considered FSO system is found. The system is also capable of showing its robustness in retrieving transmitted data in spite of the atmospheric turbulence effect under QAM digital modulation and the BCH channel coding scheme.

Keywords: FSO, channel coding, Bit Error Rate (BER), atmospheric turbulence effect.

I. Introduction

Free-space optical (FSO) communication is an emerging technology which offers license-free spectrum and highly secure links. FSO communication systems are capable of providing high data transmission rates and have received considerable attention during the past few years in many applications, including satellite communication, fiber backup, RF-wireless backhaul and last-mile connectivity, unmanned aerial vehicles (UAVs), high-altitude platforms (HAPs), aircraft, and other nomadic communication partners. Over the last two decades, free-space optical communication has become more and more interesting as an adjunct or alternative to radio-frequency communication. In high-speed FSO signal detection, avalanche photodiodes (APDs) are normally used, where the noise follows a signal-dependent Gaussian noise (SDGN) distribution rather than a signal-independent Gaussian noise (SIGN) distribution. It is known that FSO communication link availability becomes limited during foggy weather and heavy snowfall. The FSO signal intensity undergoes random fluctuation due to atmospheric turbulence, known as scintillation. Scintillation causes performance degradation and possible loss of connectivity. These drawbacks pose the main challenge for FSO communication system deployment. To mitigate such drawbacks and improve FSO system performance, emphasis is given to various channel coding schemes [1-3].

The FSO communication system utilizes subcarrier intensity modulation, a technique borrowed from the very successful multiple-carrier RF communications already deployed in applications such as digital television, LANs, asymmetric digital subscriber line (ADSL), 4G communication systems and optical fiber communications. In optical fiber communication networks, subcarrier modulation techniques have been commercially adopted for transmitting cable-television signals and have also been used in conjunction with wavelength division multiplexing [4]. The present study makes a comprehensive assessment of the performance of a subcarrier intensity-modulated FSO communication system under the implementation of various channel coding schemes.

II. Channel Coding

In this paper, the synthetically generated binary data are encrypted. The input binary bit stream is encrypted using a symmetric stream cipher [5]. The encrypted binary data are channel encoded using various channel coding schemes, namely Cyclic, Reed-Solomon, Bose-Chaudhuri-Hocquenghem (BCH), LDPC and Turbo. In cyclic coding, the encrypted binary data streams are rearranged into blocks, with each block containing two consecutive bits. For each bit, an additional identical redundant bit is prepended to produce the cyclically encoded data. In Reed-Solomon (RS) nonbinary block coding, 512 information symbols are encoded in a block of 572 symbols. Each information symbol consists of 16 bits, and 16 redundant symbols are added at the end of the 512 information

Page 77: IJETCAS June-August Issue 9

Joarder Jafor Sadique et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August,

2014, pp. 56-60

IJETCAS 14- 518; © 2014, IJETCAS All Rights Reserved Page 57

symbols to produce an RS block-encoded data of 572 symbols. In Bose-Chaudhuri-Hocquenghem (BCH) channel coding, the encrypted data are arranged into 64 rows × 64 columns. Each 64-element row represents a message word, and an additional 63 parity bits are appended at the end of each message word. The BCH channel-encoded data are thus 64 rows × 127 columns [6],[7]. The LDPC and Turbo channel coding schemes have been discussed in detail in [8],[9].
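The bit-level repetition ("cyclic") encoder described above can be sketched as follows; this is our illustrative Python, not the authors' MATLAB code. Each input bit is preceded by an identical redundant copy, giving a rate-1/2 code, and a simple decoder keeps every second bit:

```python
def repetition_encode(bits):
    """Prepend an identical redundant copy before each bit (rate 1/2)."""
    out = []
    for b in bits:
        out.extend([b, b])  # redundant copy, then the original bit
    return out

def repetition_decode(coded):
    """Both bits of each pair are identical on a clean channel;
    here we simply keep every second bit."""
    return coded[1::2]

msg = [1, 0, 1, 1]
coded = repetition_encode(msg)
print(coded)  # [1, 1, 0, 0, 1, 1, 1, 1]
assert repetition_decode(coded) == msg
```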

III. System Description

The block diagram of the simulated subcarrier intensity-modulated Free Space Optical communication system is depicted in Figure 1. The synthetically generated binary data are encrypted using a symmetric stream cipher cryptographic scheme. The encrypted binary data are channel encoded prior to conversion into complex digitally modulated symbols [10]. The real and imaginary parts of each complex symbol are multiplied by a carrier, represented by a cosine wave, and its 90-degree phase-shifted version, respectively. Eventually, all components are summed up and fed into an electrical-to-optical converter. The optically generated signal is passed through the atmospheric channel and detected at the receiver. In the receiving section, optical-to-electrical conversion takes place.

Fig. 1 Block diagram of Subcarrier intensity-modulated Free Space Optical Communication system

The signal is filtered over a selected frequency band and subsequently contaminated with additive white Gaussian noise. Its real and imaginary parts are multiplied by carriers with double the amplitude of the carriers used in the transmitting section, low-pass filtered, and sampled to make decisions for complex signal generation [4]. The generated complex symbols are digitally demodulated, channel decoded and decrypted to recover the transmitted signal.
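The subcarrier up/down-conversion chain just described can be sketched numerically. This is an illustrative Python toy under our assumptions (noiseless channel, ideal low-pass filtering by averaging), not the authors' MATLAB model; the 1-to-50 carrier-to-sampling ratio mirrors the 1 GHz / 50 GHz values of Table 1.

```python
import math

FC = 1.0    # subcarrier frequency (normalised)
FS = 50.0   # sampling frequency (50 samples per carrier period)
N = 100     # two full subcarrier periods

def upconvert(sym):
    """Real and imaginary parts ride on a cosine and its 90-degree-shifted version."""
    return [sym.real * math.cos(2 * math.pi * FC * k / FS)
            + sym.imag * math.sin(2 * math.pi * FC * k / FS) for k in range(N)]

def downconvert(s):
    """Multiply by carriers of double amplitude and low-pass filter (here:
    average over an integer number of periods), recovering the complex symbol."""
    i = sum(v * 2 * math.cos(2 * math.pi * FC * k / FS) for k, v in enumerate(s)) / N
    q = sum(v * 2 * math.sin(2 * math.pi * FC * k / FS) for k, v in enumerate(s)) / N
    return complex(i, q)

sym = complex(0.7, -0.7)          # e.g. one QPSK-like symbol
rx = downconvert(upconvert(sym))  # noiseless channel for illustration
assert abs(rx - sym) < 1e-9
```

The doubled carrier amplitude at the receiver cancels the factor 1/2 produced by the product of two sinusoids, which is why the recovered symbol has the original scale.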

IV. Results and Discussion

We have conducted computer simulations to evaluate the BER performance of a Subcarrier intensity-modulated

Free Space Optical Communication system based on the parameters given in Table 1.

Table 1: Summary of the simulated model parameters

No. of bits used: 4096
Bit rate: 200 Mbps
Subcarrier frequency: 1 GHz
Sampling frequency: 50 GHz
Data encryption technique: Symmetric stream cipher
Channel coding: LDPC, Turbo, CRC, BCH and Reed-Solomon
Photodetector responsivity: 1
Optical modulation index: 1
Modulation: DPSK, QPSK and QAM
Channel: AWGN and atmospheric turbulence
Signal-to-noise ratio (SNR): -10 to 5 dB

It is noticeable that the BER curves depicted in Figure 2 through Figure 6 clearly indicate distinct system performance under the various channel coding and digital modulation schemes. In all cases, the simulated system shows satisfactory performance with QAM and the worst performance with DQPSK digital modulation. In Figure 2, it is observable that the BER values approach zero in QAM and QPSK under the scenario of LDPC channel coding and noise power 1 dB greater than signal power (SNR = -1 dB). At -5 dB SNR, a low system performance gain of 0.43 dB is obtained in QAM relative to DQPSK. At SNR values greater than 0 dB, the system shows identical performance in all digital modulations. In Figure 3, the BER performance differences of the Turbo channel-coded FSO system at signal power greater than noise power are not well distinguishable across the digital modulations. In the highly noisy situation, the performance degradation is comparatively higher than with LDPC channel coding. In Figure 4, the system performance with CRC channel coding is well defined in the different digital modulations. At -10 dB SNR, the BER values are 0.0369, 0.2339 and 0.2959 for QAM, QPSK and DQPSK respectively, which indicates a reasonable system performance improvement of 8.02 dB and 9.04 dB in QAM as compared to QPSK and DQPSK. At 3.5% BER, an SNR gain of 0.6 dB is achieved in QAM as compared to QPSK. In Figure 5, it is quite obvious that the system performance is quite satisfactory with BCH channel coding. The BER value approaches zero at -8 dB SNR with QAM digital modulation. Over a significant SNR region, the BER value approaches zero with all digital modulations. At -10 dB SNR, the BER values are 0.0276, 0.1335 and 0.2295 for QAM, QPSK and DQPSK, which implies a reasonable system performance improvement of 6.85 dB and 9.20 dB in QAM as compared to QPSK and DQPSK. In Figure 6, the Reed-Solomon channel-encoded FSO system shows distinct BER performance. At -1 dB SNR, the BER value approaches zero in all digital modulations. At approximately 5% BER, SNR gains of 3 dB and 4 dB are achieved in QAM as compared to QPSK and DQPSK respectively. At -10 dB, system performance enhancements of 3.46 dB and 5.61 dB are achieved in QAM as compared to QPSK and DQPSK (BER values: 0.062, 0.1375 and 0.2256).
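The quoted dB improvements are consistent with the ratio of BER values expressed on a decibel scale, i.e. 10·log10(BER_ref / BER_QAM); the quick Python check below (our reconstruction of the arithmetic, not stated explicitly in the paper) reproduces the Figure 4 numbers.

```python
import math

def gain_db(ber_ref, ber_qam):
    """Performance improvement of QAM over a reference modulation,
    expressed as the BER ratio in decibels."""
    return 10 * math.log10(ber_ref / ber_qam)

# Figure 4 (CRC coding) at -10 dB SNR: BERs 0.0369 (QAM), 0.2339 (QPSK), 0.2959 (DQPSK)
print(round(gain_db(0.2339, 0.0369), 2))  # 8.02 dB vs QPSK
print(round(gain_db(0.2959, 0.0369), 2))  # 9.04 dB vs DQPSK
```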

Fig. 2 BER performance comparison of subcarrier intensity-modulated Free Space Optical

Communication system under various digital modulations, LDPC channel coding

and atmospheric turbulence effect.

Fig. 3 BER performance comparison of subcarrier intensity-modulated Free Space Optical

Communication system under various digital modulations, Turbo channel coding

and atmospheric turbulence effect.

[Figures 2 and 3: BER versus signal-to-noise ratio (dB) over -10 to 5 dB; curves for FSO with QAM, QPSK and DQPSK under LDPC (Figure 2) and Turbo (Figure 3) channel coding.]


Fig. 4 BER performance comparison of subcarrier intensity-modulated Free Space Optical

Communication system under various digital modulations, CRC channel coding

and atmospheric turbulence effect.

Fig. 5 BER performance comparison of subcarrier intensity-modulated Free Space Optical

Communication system under various digital modulations, BCH channel coding

and atmospheric turbulence effect.

Fig. 6 BER performance comparison of subcarrier intensity-modulated Free Space Optical

Communication system under various digital modulations, Reed-Solomon channel coding

and atmospheric turbulence effect.

[Figures 4, 5 and 6: BER versus signal-to-noise ratio (dB) over -10 to 5 dB; curves for FSO with QAM, QPSK and DQPSK under CRC (Figure 4), BCH (Figure 5) and Reed-Solomon (Figure 6) channel coding.]


V. Conclusion

With growing demand for bandwidth in mobile communication and significant increasing in number of users,

the next-generation wireless communication systems may be linked with implementation of Optical wireless

communications (OWC) technology that entails the transmission of information-laden optical radiation through

the free-space channel. In this paper we have tried to show that a simplified FSO wireless communication

system is capable of showing its robustness in retrieving data in atmospheric turbulent effect. From the

simulation based study, it can be concluded that a Subcarrier intensity-modulated Free Space Optical (FSO)

Communication system is very much effective to produce its satisfactory system performance under low order

QAM digital modulation and BCH channel coding scheme.

References

[1] Muhammad N. Khan, 2014: Importance of noise models in FSO communications, EURASIP Journal on Wireless Communications and Networking, vol. 102, pp. 1-10.
[2] Zabidi, S.A., Khateeb, W.A., Islam, M.R. and Naji, A.W., 2010: The effect of weather on free space optics communication (FSO) under tropical weather conditions and a proposed setup for measurement, Proceedings of the International IEEE Conference on Computer and Communication Engineering (ICCCE), pp. 1-5.
[3] Hennes Henniger and Otakar Wilfert, 2010: An introduction to free-space optical communications, Radio Engineering, vol. 19, no. 2, pp. 203-212.
[4] Z. Ghassemlooy, W. Popoola and S. Rajbhandari, 2013: Optical Wireless Communications: System and Channel Modelling with MATLAB®, CRC Press, Taylor & Francis Group, USA.
[5] William Stallings: Cryptography and Network Security: Principles and Practices, Fourth Edition, Prentice Hall, 2005.
[6] Wicker, Stephen B., Error Control Systems for Digital Communication and Storage, Upper Saddle River, NJ, Prentice Hall, 1995.
[7] Clark, G. C., and Cain, J. B., Error-Correction Coding for Digital Communications, New York, Plenum Press, 1981.
[8] Md. Mainul Islam Mamun, Joarder Jafor Sadique and Shaikh Enayet Ullah, 2014: Performance assessment of a downlink two-layer spreading encoded COMP MIMO OFDM system, International Journal of Wireless Communication and Mobile Computing (WCMC), Science Publishing Group, NY, USA, vol. 2, no. 1, pp. 11-17.
[9] Yuan Jiang, 2010: A Practical Guide to Error-Control Coding Using MATLAB, Artech House, Boston, USA.
[10] Goldsmith, Andrea, 2005: Wireless Communications, First Edition, Cambridge University Press, United Kingdom.


IJETCAS 14-523; © 2014, IJETCAS All Rights Reserved Page 61


Glaucomatous Image Classification Based On Wavelet Features

Shafan Salam¹, Jobins George²

¹PG Scholar, Dept. of Electronics and Communication Engineering
²Faculty, Dept. of Electronics and Communication Engineering
M.G University, Kottayam
ICET, Muvattupuzha, Kerala, India

Abstract: Glaucoma is one of the leading diseases of the human eye and may result in partial loss of sight or even blindness. Texture features within images are effectively used for accurate and efficient glaucoma classification, and energy distributions over wavelet subbands are applied to find these important texture features. In this paper, several wavelet filters are used in order to obtain energy signatures: the Haar wavelet (also called Daubechies, db1), symlets (sym3), and biorthogonal (bio3.1, bio3.4, and bio3.5) wavelet filters. Feature ranking and feature selection are then carried out before the wavelet features are introduced to the classifier network. We have gauged the effectiveness of the resultant ranked and selected subsets of features using support vector machine, sequential minimal optimization, random forest, and naive Bayes classification strategies. Based on the classifier response to these features, the selected candidates produce effective glaucoma classification. The proposed system achieves an accuracy above 95%, higher than the existing system.

Keywords: Glaucoma, wavelet filters, SMO, SVM, random forest, naive Bayes

I. Introduction

Glaucoma is the second leading cause of blindness worldwide. Glaucoma is a condition that causes damage to the eye's optic nerve and gets worse over time. It may be due to the buildup of pressure inside the eye. The effect of glaucoma increases with age and may not show up until later in life. The increased pressure, called intraocular pressure, can damage the optic nerve, which transmits images to the brain. If damage to the optic nerve from high eye pressure continues, glaucoma will cause permanent loss of vision. Without treatment, glaucoma can cause total permanent blindness within a few years. In earlier years this disease was seen only in older people, but for biological reasons the symptoms now appear in younger people as well. Glaucoma can occur when eye fluid is not circulating normally in the front part of the eye. Normally, this fluid, called aqueous humor, flows out of the eye through a mesh-like channel. If this channel becomes blocked, fluid builds up, causing glaucoma. The direct cause of this blockage is unknown, but doctors do know that it can be passed from parents to children. Treatment should therefore be given at the earliest opportunity.

A number of techniques have been developed to detect glaucoma at the earliest stage. The system proposed here also deals with the detection of glaucoma and has several advantages over previous techniques; one of the main advantages is better accuracy. Nowadays, several new diagnostic methods have arisen for the detection and management of glaucoma. Several imaging modalities and their enhancements, including optical coherence tomography and the multifocal electroretinograph, are prominent techniques employed to quantitatively analyze structural and functional abnormalities in the eye, both to observe variability and to quantify the progression of the disease objectively. Glaucoma diagnosis usually follows an investigation of the retina using the Heidelberg Retina Tomograph (HRT), a confocal laser scanning system developed by Heidelberg Engineering. It allows 3-dimensional images of the retina to be obtained and analyzed. This way, the topography of the optic nerve head, called the papilla, can be followed over time and any changes quantitatively characterized. In this paper we intend to improve on this by proposing a systematic and automatic investigation of 2-dimensional images. Pre-processing is the first step in automatic diagnosis of retinal images. The quality of the image is usually not good; hence, Z-score normalization is used, which improves the quality of the retinal image. The two issues for automatic glaucoma recognition are: 1) feature extraction from the retinal images and 2) classification based on the chosen extracted features. Features extracted from the images are categorized as either structural features or texture features. Commonly used structural features include disk area, disk diameter, rim area, cup area, cup diameter, cup-to-disk ratio, and topological features extracted from the image. Here, in this paper, the texture features within the images are used for efficient glaucoma

ratio, and topological features extracted from the image. In this paper, the texture features within the images are used for efficient glaucoma classification. We use wavelet-based features of the retinal image; after the energy features are computed from the wavelet coefficients, they are fed into classifiers for accurate glaucoma classification. Accuracy is an important result of this work, exceeding 95%. Previous techniques are


Shafan Salam et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014,pp.

61-65

IJETCAS 14-523; © 2014, IJETCAS All Rights Reserved Page 62

not as accurate as the proposed technique. We propose to use five well-known wavelet filters: the Haar or Daubechies (db1) filter, the symlets (sym3) filter, and the biorthogonal (bio3.1, bio3.5, and bio3.7) filters. We calculate the averages of the detailed horizontal and vertical coefficients and the wavelet energy signature from the detailed vertical coefficients. We subject the extracted features to four different classifiers, gauging the effectiveness of the resultant ranked and selected feature subsets using support vector machine, sequential minimal optimization, random forest, and naive Bayes classification strategies. Done manually, classification at this scale of data would consume considerable time and energy.

II. Related works

A number of works have been developed to detect glaucoma within the human eye. Efforts have been made for several years to detect and diagnose glaucoma so that the suffering caused by the disease can be reduced or even eliminated. Optical coherence tomography and the multifocal electroretinograph (mfERG) are prominent methods employed to find and analyze functional abnormalities of the eye, especially glaucoma. Electroretinography measures the electrical responses

of various cell types in the retina, including the photoreceptors (rods and cones), inner retinal cells

(bipolar and amacrine cells), and the ganglion cells. The mfERG gives a detailed picture of the topography of each zone of the retina and can therefore detect small local lesions in the retina, even in its central region (the fovea); several other abnormalities can also be detected with the multifocal electroretinograph. Optical coherence tomography (OCT) is an optical signal acquisition and processing method that typically employs near-infrared light. Diseases affecting internal tissues and muscles can be detected with OCT; such diseases may affect the internal parts of the eye and lead to loss of vision. The discrete wavelet transform (DWT) can be used to analyze mfERG signals and detect glaucoma. In ophthalmology, clinical decision support systems (CDSS) are used efficiently to identify disease pathology in human eyes. In a CDSS, both structural and texture features of images are extracted; the structural features mainly include disk area, rim area, cup-to-disc ratio, and topographical features. Automatic glaucoma diagnosis can be performed by calculating the cup-to-disc ratio.

Glaucoma progression can also be identified from textural features using proper orthogonal decomposition (POD). Glaucoma often damages the optic nerve head (ONH), and ONH changes occur prior to visual field loss; digital image analysis is therefore an excellent choice for detecting the onset and progression of glaucoma using POD. A baseline topography subspace was

constructed for each eye to describe the structure of the ONH of the eye at a reference/baseline condition using

POD. Any glaucomatous changes in the ONH of the eye present during a follow-up exam were estimated by

comparing the follow-up ONH topography with its baseline topography subspace representation. The texture

features and higher order spectra can also be used for glaucomatous image classification. Wavelet decomposition is used for feature extraction with three well-known wavelet filter families: the Daubechies (db1) filter, also called the Haar filter; the symlets (sym3) filter; and the biorthogonal (bio3.1, bio3.5, and bio3.7) filters. Classification is then done using support vector machine, sequential minimal optimization, naive Bayes, and random forest classifiers.

III. Data set

The retinal images used for this study were collected from the Kasturba Medical College, Manipal, India

(http://www.manipal.edu). The doctors in the ophthalmology department of the hospital manually curated the

images based on the quality and usability of samples. The ethics committee, consisting of senior doctors,

approved the use of the images for this research. All the images were taken with a resolution of 560 × 720

pixels and stored in lossless JPEG format. The dataset contains 60 fundus images: 30 normal and 30 open angle

glaucomatous images from 20 to 70 year-old subjects. The fundus camera, a microscope, and a light source

were used to acquire the retinal images for diagnosis. Fig. 1(a) and (b) present typical normal and

glaucoma fundus images, respectively.

Figure 1: Data set. (a) Normal image. (b) Glaucoma image.

IV. Methodology

The images in the dataset were subjected to standard histogram equalization, which supports efficient classification of glaucoma. The objective of applying histogram equalization was twofold. The first


was to reassign the intensity values of pixels in the input image so that the output image contained a uniform distribution of intensities, and the second was to increase the dynamic range of the image histogram. The following detailed procedure was then employed for feature extraction on all the images before proceeding to the feature ranking and feature selection schemes.
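As a concrete illustration of this pre-processing step, the following is a minimal sketch of standard histogram equalization in Python (the paper's experiments were done in MATLAB; the function name and the list-of-lists image representation are illustrative assumptions):

```python
def equalize_histogram(img, levels=256):
    """Standard histogram equalization for a grayscale image.

    img is a 2-D list of integer gray levels in [0, levels-1]; the output
    intensities are spread so their histogram is approximately uniform,
    which also widens the dynamic range. (Assumes a non-constant image.)
    """
    flat = [p for row in img for p in row]
    n = len(flat)
    # Histogram of the input intensities.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function (CDF) of the histogram.
    cdf, running = [0] * levels, 0
    for g in range(levels):
        running += hist[g]
        cdf[g] = running
    cdf_min = next(c for c in cdf if c > 0)
    # Classic equalization mapping, stretched to the full intensity range.
    remap = lambda p: round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in img]
```

For example, the 2 × 2 image [[52, 55], [61, 59]] is stretched to [[0, 85], [255, 170]].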

A. Image Decomposition

For the classification of glaucoma from the data set, decomposition of the images is necessary. Here the decomposition is done with the discrete wavelet transform (DWT). A discrete wavelet transform is any wavelet transform for which the wavelets are discretely sampled; it captures both the spatial (location) and frequency information of a signal. The DWT analyzes an image by decomposing it into a coarse approximation through low-pass filtering, while the remaining image information is obtained through high-pass filtering. The decomposition is performed recursively on the low-pass approximation coefficients obtained at each level, until the necessary number of iterations is reached. When taking the DWT, each image in the data set is converted into four parts based on frequency content and direction: 0 degrees (horizontal, cH), 90 degrees (vertical, cV), and 45/135 degrees (diagonal, cD). Since the image itself is a matrix of dimension m × n, the first level of decomposition converts it into four coefficient matrices, namely A1, Dh1, Dv1, and Dd1.
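One level of this decomposition can be sketched in Python with the Haar (db1) filter; the paper's experiments used MATLAB wavelet routines, so the function below and its subband labeling convention (rows filtered first, then columns) are illustrative assumptions rather than the authors' code:

```python
import math

def haar_dwt2(img):
    """Single-level 2-D Haar DWT of an even-sized 2-D list.

    Returns (A1, Dh1, Dv1, Dd1): approximation plus horizontal, vertical,
    and diagonal detail coefficient matrices, each half the input size.
    """
    s = 1 / math.sqrt(2)  # orthonormal Haar scaling factor

    def analyze_rows(mat):
        # Low-pass (scaled sums) and high-pass (scaled differences) per row.
        lo = [[s * (r[i] + r[i + 1]) for i in range(0, len(r), 2)] for r in mat]
        hi = [[s * (r[i] - r[i + 1]) for i in range(0, len(r), 2)] for r in mat]
        return lo, hi

    def transpose(mat):
        return [list(col) for col in zip(*mat)]

    lo, hi = analyze_rows(img)               # filter along rows
    ll, lh = analyze_rows(transpose(lo))     # then along columns of the low band
    hl, hh = analyze_rows(transpose(hi))     # and of the high band
    return transpose(ll), transpose(lh), transpose(hl), transpose(hh)
```

On the constant image [[1, 1], [1, 1]], the approximation A1 is approximately [[2.0]] and all three detail subbands are [[0.0]], as expected for a flat region.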

B. Feature Extraction

After image decomposition, features are extracted from the decomposed image using the 2D-DWT. The DWT is applied with three different filter families, namely Daubechies (db1), also called the Haar filter, symlets (sym3), and biorthogonal (bio3.1, bio3.5, bio3.7). The wavelet coefficients produced by these filters are used to carry out feature extraction via the three equations below. Equations (1) and (2) represent the averages of the corresponding coefficient magnitudes, and equation (3) represents the average energy of the coefficients.

Average Dh1 = (1 / (p × q)) Σx Σy | Dh1(x, y) |        (1)

Average Dv1 = (1 / (p × q)) Σx Σy | Dv1(x, y) |        (2)

Energy = (1 / (p² × q²)) Σx Σy ( Dv1(x, y) )²        (3)

where the sums run over x = 1, …, p and y = 1, …, q, and p × q is the size of the detail coefficient matrix.
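Equations (1)-(3) translate directly into code, assuming Dh1 and Dv1 are p × q detail matrices stored as 2-D lists (the helper name is hypothetical):

```python
def wavelet_features(dh1, dv1):
    """Averages of |Dh1| and |Dv1| (eqs. 1-2) and the energy of Dv1 (eq. 3)."""
    p, q = len(dh1), len(dh1[0])
    avg_dh1 = sum(abs(c) for row in dh1 for c in row) / (p * q)
    avg_dv1 = sum(abs(c) for row in dv1 for c in row) / (p * q)
    energy = sum(c * c for row in dv1 for c in row) / (p ** 2 * q ** 2)
    return avg_dh1, avg_dv1, energy
```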

C. Normalisation of Features

The next step after feature extraction is normalization of these features. From each image, 14 features are extracted. These features are then z-score normalized with the help of the equation given below; for the normalization, the mean and the standard deviation of each of these 14 features should be determined.

Ynew = (Yold – mean)/std

where Yold is the original value, Ynew is the normalized value, and mean and std are the mean and standard deviation of the original data, respectively.
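The z-score step can be sketched as follows; the paper does not say whether the population or sample (n-1) standard deviation is used, so the population form is assumed here:

```python
import math

def z_score_normalize(values):
    """Map feature values to zero mean and unit variance: y = (x - mean) / std."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)  # population std
    return [(v - mean) / std for v in values]
```

For example, z_score_normalize([2, 4, 4, 4, 5, 5, 7, 9]) gives [-1.5, -0.5, -0.5, -0.5, 0.0, 0.0, 1.0, 2.0] (mean 5, std 2).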

V. Data set classification

We performed the validation of the ranked features and feature subsets using the standard C-SVC

implementation of SVM, SMO, random forest, and naive Bayes. Support vector machines are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model represents the

given images as points in space, mapped so that the images of the separate categories are divided by a clear gap

that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a

category based on which side of the gap they fall on. Sequential minimal optimization (SMO) is an algorithm

for solving the quadratic programming (QP) problem that arises during the training of support vector machines, and it is widely used for that purpose; here the input data set is classified accordingly. Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes'

theorem with strong independence assumptions between the features; here the feature statistics are estimated from the data set. Naive Bayes is a popular method for text categorization, the problem of judging documents as

belonging to one category or the other, with word frequencies as the features. Random forests are an ensemble learning method for classification and regression that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes output by the individual trees. The method combines Breiman's bagging idea with random selection of features in order to construct a collection of decision trees with controlled variance.
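To make the classification stage concrete, here is a minimal Gaussian naive Bayes sketch in plain Python. It is not the implementation used in the paper (which relied on standard SVM, SMO, random forest, and naive Bayes toolbox classifiers); it only illustrates the naive Bayes decision rule on feature vectors such as the 14 wavelet features:

```python
import math

class GaussianNaiveBayes:
    """Naive Bayes with one independent Gaussian per feature and class."""

    def fit(self, X, y):
        # Per class: prior, per-feature means, per-feature variances.
        self.stats, n = {}, len(X)
        for c in set(y):
            rows = [x for x, label in zip(X, y) if label == c]
            means = [sum(col) / len(rows) for col in zip(*rows)]
            varis = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                     for col, m in zip(zip(*rows), means)]  # floor avoids var = 0
            self.stats[c] = (len(rows) / n, means, varis)
        return self

    def predict(self, x):
        # Pick the class maximizing log prior + sum of Gaussian log likelihoods.
        def log_posterior(c):
            prior, means, varis = self.stats[c]
            lp = math.log(prior)
            for v, m, var in zip(x, means, varis):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return lp
        return max(self.stats, key=log_posterior)
```

For instance, after fitting on two well-separated classes of 2-D feature vectors, predict() returns the label of the nearer class.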


Figure 2: Data set classification flow chart.

VI. Experimental Result

The program code is generated using Matlab and the result is analyzed. The output is such that it classifies the

dataset into normal and glaucomatous images. We performed the validation of the ranked features and feature

subsets using the standard C-SVC implementation of SVM, SMO, random forest, and naive Bayes. The graphical representation of feature extraction is shown below: the first graph shows tested images that contain glaucoma, and the second shows images that do not. The different colours in the graphs represent the 14 features extracted with the different discrete wavelet transform filters.

Figure 3: Graphical representation. (a) Normal image. (b) Glaucoma image.

[Figure 2 flow chart: Input Image → Image Decomposition Using DWT (1. Haar or db1, 2. Symlet, 3. Reverse Biorthogonal (rbio3.1, rbio3.5, rbio3.7) filters) → Feature Extraction → Feature Selection → Classification → Normal Image / Glaucoma Image]


VII. Conclusion

This paper demonstrates the feature extraction process using three families of wavelet filters: the Daubechies (db1) filter, also known as the Haar filter; the symlets (sym3) filter; and three biorthogonal wavelet filters, rbio3.1, rbio3.5, and rbio3.7. From these five filters, 14 wavelet features are extracted. The wavelet coefficients obtained are subjected to average and energy calculations, resulting in the extracted features. The sequential feature selection algorithm is then used to select the most appropriate features for classification. Classification is done using four different classifiers that provide high accuracy: SMO, SVM, naive Bayes, and random forest. We can conclude that the energy obtained from the detailed coefficients can be used to distinguish between normal and glaucomatous images with very high accuracy.



International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research)

International Journal of Emerging Technologies in Computational

and Applied Sciences (IJETCAS)

www.iasir.net

IJETCAS 14-524; © 2014, IJETCAS All Rights Reserved Page 66

ISSN (Print): 2279-0047

ISSN (Online): 2279-0055

Comparative Analysis of EDFA based 32 channels WDM system for bidirectional and counter pumping techniques

Mishal Singla, Preeti, Sanjiv Kumar

Electronics and Communication

UIET, Panjab University,

Chandigarh, India

__________________________________________________________________________________________

Abstract: With the increasing demand for capacity, Wavelength Division Multiplexing (WDM) is used in optical fiber networks. In WDM, an equalized EDFA gain spectrum is required so that the output power is uniform. This paper presents the influence of bidirectional pumping on a 32-channel EDFA-based WDM system. The performance of the system is analyzed on the basis of received power, Bit Error Rate, and Q-factor at different pump powers, in the wavelength range 1530 nm to 1555 nm, at an input power of -26 dBm with 0.8 nm channel spacing. The performance of the WDM system with bidirectional pumping is compared to that with counter pumping.

Keywords: EDFA, Pump power, Fiber length, BER, WDM.

_________________________________________________________________________________________

I. INTRODUCTION

The history of optical fiber communication is long [1]. Initially, carrier waves were used, but for long spans there was no practical method to amplify all the channels of a fiber-optic link. An evolutionary change came with the introduction of amplifiers [2]. Amplifiers are of various types, such as the Raman amplifier, the Erbium-doped fiber amplifier, and the semiconductor amplifier [3]. The EDFA is the best-known optical amplifier and is widely used in the low-loss optical window of silica fiber [4]. The EDFA is preferred because its gain bandwidth is large, in the range of tens of nanometers, so data channels can be amplified without gain narrowing at higher data rates [5]. For amplification of an optical signal, the doped fiber is used as the gain medium: when an optical signal at 1550 nm wavelength enters the EDFA, the signal is combined with a 980 nm or 1480 nm pump laser through a wavelength division multiplexer [6]. Further, as optical communication technology advances, the demand for system capacity increases; as the number of channels grows, the system becomes dense and is called Dense WDM [7].

For communication there are practically two wavelength windows, 1530 nm to 1560 nm (C-band) and 1560 nm to 1600 nm (L-band) [8]. The EDFA can amplify a wide wavelength range (1500 nm - 1600 nm) simultaneously and is hence very useful for amplification in wavelength division multiplexing; erbium ions amplify the signal through their interaction with light around 1550 nm [9]. An EDFA can be pumped in three ways: co-pumping, counter pumping, and bidirectional pumping [10]. It has been observed that the results of co-pumping are not better than those of bidirectional pumping, so in this paper only two techniques are discussed: counter and bidirectional.

The performance of EDFA based WDM System depends on the length of Erbium Doped Fibre and the pump

power [5]. The performance of WDM system for long haul transmission is analysed in terms of Bit Error Rate,

Noise Figure and power received [5].

II. SYSTEM DESIGN AND ANALYSIS

A. System Consideration:

Basically there are three sections of a WDM system: the transmitter, the communication channel, and the receiver. The transmitter section includes a 32-channel WDM transmitter and an ideal multiplexer; the communication channel includes an ideal isolator, Erbium-doped fiber, and optical fiber; a photodiode and a low-pass filter are at the receiver end [11]. A BER analyzer and an optical power meter are used to visualize the simulation results.

B. WDM System Design:

The WDM system is designed in OptiSystem v11.0. Here, 32 WDM signals are given at the input in the range 1530 nm - 1555 nm with a channel spacing of 0.8 nm at a 10 Gbps data rate. An input power of -26 dBm is given to the channels. The doped ions are excited to the higher energy level when pumped at 980 nm [12]. The length of the optical fiber is 50 km. To avoid the effects of Amplified Spontaneous Emission (ASE) produced in the WDM system during amplification, an isolator is used at the input end; it also prevents propagation of the signal in the backward direction, which would otherwise reduce the population inversion due to reflected ASE [13]. In this paper, the performance of the two pumping techniques, counter pumping and bidirectional pumping, is compared.
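The 32-channel grid described above can be reproduced with a small helper (the function name is illustrative; OptiSystem generates the grid internally):

```python
def wdm_channel_grid(start_nm=1530.0, spacing_nm=0.8, channels=32):
    """Center wavelengths (nm) of an evenly spaced WDM channel plan."""
    return [round(start_nm + i * spacing_nm, 1) for i in range(channels)]
```

The first channel sits at 1530.0 nm and the 32nd at 1554.8 nm, inside the stated 1530 nm - 1555 nm range.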


B.1 Counter-Pumping (Backward Pumping):

In counter pumping, the two signals, i.e. input and pump, travel in opposite directions in the fiber. For amplification the relative direction of the input and pump signals is not essential; they can travel in either direction.

B.2 Bidirectional-Pumping:

In bidirectional pumping, two pump signals travel in the fiber: one pump signal propagates in the same direction as the input signal, and the other propagates in the direction opposite to the input signal. Figure 1 shows the WDM systems designed in OptiSystem using counter pumping and bidirectional pumping.

Figure 1: Block Diagram of two pumping techniques. (a) Counter-Pumping. (b) Bidirectional Pumping.

Pumping is done using pump couplers at both ends. In counter pumping, zero power is given to the co-propagating pump coupler and pump power to the counter-propagating coupler, whereas in bidirectional pumping power is given from both the co-propagating and the counter-propagating pump couplers.

III. RESULTS AND DISCUSSION

On the basis of the literature reviewed, optimized values of certain parameters are considered. The parameters considered are the pump power, the length of the EDFA, and the length of the optical fiber. The input power is taken to be -26 dBm, the EDFA length is 8 m [5], and the optical fiber length is 50 km; the pump power is varied from 20 mW to 100 mW at a 980 nm wavelength. The pump power of the Erbium-doped fiber is increased and the corresponding received power is recorded for both pumping techniques, i.e. counter pumping and bidirectional pumping, as shown in Table 1 and Table 2 respectively.

Table 1: Tx and Rx Power with the variation of Pump power of EDFA with counter pumping

Pump Power given to EDFA (mW) | Input Power (E-6 W) | Input Power (dBm) | Output Power (E-3 W) | Output Power (dBm)
20 | 43.874 | -13.578 | 1 | -1
40 | 43.874 | -13.578 | 1 | -1
60 | 43.874 | -13.578 | 1 | -1
80 | 43.874 | -13.578 | 1 | -1
100 | 43.874 | -13.578 | 51.313 | 17.102


Table 2: Tx and Rx Power with the variation of Pump power of EDFA with bidirectional pumping

Pump Power given to EDFA (mW) | Input Power (E-6 W) | Input Power (dBm) | Output Power (E-3 W) | Output Power (dBm)
20 | 43.874 | -13.578 | 3.670 | 5.646
40 | 43.874 | -13.578 | 14.551 | 11.628
60 | 43.874 | -13.578 | 25.574 | 14.108
80 | 43.874 | -13.578 | 37.045 | 15.687
100 | 43.874 | -13.578 | 48.382 | 16.847

From Table 1 and Table 2, it can be observed that with counter pumping the received power is negligible up to 80 mW and reaches 17.1 dBm at 100 mW, whereas with bidirectional pumping power is received even at low pump values, i.e. even at 20 mW. Also, in bidirectional pumping at 20 mW total pump power, the pump power at the co-propagating pump coupler is 10 mW and at the counter-propagating pump coupler is 10 mW. Hence, it can be concluded that only the bidirectionally pumped system works at low power; counter pumping starts working only from 100 mW, which is very high compared to bidirectional pumping.
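The dBm columns in Tables 1 and 2 follow the standard conversion dBm = 10·log10(P / 1 mW), which can be checked with a few lines of Python:

```python
import math

def watts_to_dbm(p_watts):
    """Convert absolute power in watts to dBm (decibels relative to 1 mW)."""
    return 10 * math.log10(p_watts / 1e-3)
```

For example, watts_to_dbm(43.874e-6) gives -13.578 dBm (the input-power column) and watts_to_dbm(51.313e-3) gives 17.102 dBm (Table 1 at 100 mW).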

The performance analysis of counter pumping and bidirectional pumping in terms of BER and Q-factor is

shown in table 3.

Table 3: BER and Q-factor for both pumping techniques for pump power of 20 mW to 100 mW

Pump Power given to EDFA (mW) | Counter Pumping, Channel 1 (BER / Q) | Counter Pumping, Channel 2 (BER / Q) | Bidirectional Pumping, Channel 1 (BER / Q) | Bidirectional Pumping, Channel 2 (BER / Q)
20 | 1 / 0 | 1 / 0 | 1 / 0 | 1 / 0
40 | 1 / 0 | 1 / 0 | 1.112e-5 / 4.237 | 8.712e-6 / 4.295
60 | 1 / 0 | 1 / 0 | 1.033e-11 / 6.700 | 2.906e-12 / 6.884
80 | 1 / 0 | 1 / 0 | 5.400e-19 / 8.825 | 1.026e-19 / 9.010
100 | 1.945e-17 / 8.415 | 2.146e-18 / 8.670 | 5.10e-26 / 10.483 | 2.957e-27 / 10.750

From Table 3, it can be observed that the minimum BER with counter pumping is 1 up to 80 mW and of the order of 1e-17 to 1e-18 at 100 mW, whereas with bidirectional pumping the BER improves from about 1e-5 to 1e-27 as the pump power increases. At 80 mW, counter pumping has a minimum BER of 1, while for bidirectional pumping it is of the order of 1e-19. At 100 mW, counter pumping provides BER values of 1e-17 to 1e-18, and bidirectional pumping provides BER values of 1e-26 to 1e-27. From this it can be concluded that for 32 channels the performance of bidirectional pumping is better than that of counter pumping, with the best BER results found at 80 mW and 100 mW. Also, the Q-factor for counter pumping is zero at low power levels, whereas for bidirectional pumping it increases with pump power. The acceptable Q-factor value under the optimized condition, i.e. at 80 mW, is 9.01 for bidirectional pumping.
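The BER and Q-factor pairs in Table 3 are consistent with the standard Gaussian-noise relation BER = ½·erfc(Q/√2). The paper does not state the formula its BER analyzer uses, so the following is offered only as a cross-check:

```python
import math

def q_to_ber(q):
    """BER predicted from the Q-factor under the Gaussian-noise approximation."""
    return 0.5 * math.erfc(q / math.sqrt(2))
```

For example, q_to_ber(9.010) evaluates to about 1.0e-19, close to the 1.026e-19 reported for bidirectional pumping at 80 mW.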

Eye diagrams for both the cases 80mW and 100mW are shown from which Q-factor can be analyzed.

1. At pump power of 80mW for both counter and bidirectional pumping.

Figure 2: Eye diagram for 80mW (a) Counter pumping (channel1) (b) Counter pumping (channel2)

(c) Bidirectional pumping (channel1) (d) Bidirectional pumping (channel2)


2. At pump power of 100mW for both counter and bidirectional pumping.

Figure 3: Eye diagram for 100mW (a) Counter pumping (channel1) (b) Counter pumping (channel2)

(c) Bidirectional pumping (channel1) (d) Bidirectional pumping (channel2)

From Figure 2 it can be clearly observed that with counter pumping the eye is almost closed, or rather no eye pattern is formed, since the minimum BER is 1, whereas with bidirectional pumping the eye is quite open, showing better performance in terms of Q-factor at 80 mW. Figure 3 shows the eye patterns for both counter and bidirectional pumping at 100 mW; here too the eye is wide open in the bidirectional case, with a correspondingly higher Q-factor. From these results it can be observed that bidirectional pumping provides better results than counter pumping at every power level. At 80 mW only bidirectional pumping works, whereas at 100 mW both give results, but the better results are obtained with bidirectional pumping. Further, if cost is considered, the bidirectional system can be optimized to operate at 80 mW, because system cost increases with pump power. The optimized values of BER and Q-factor are obtained at 80 mW pump power.

IV. CONCLUSION

A comparison of two pumping techniques, counter pumping and bidirectional pumping, is given in this paper in terms of BER (bit error rate) and Q-factor for different values of EDFA pump power (20 mW to 100 mW) in the C-band, at an input power of -26 dBm, an EDFA length of 8 m, and a 980 nm pump wavelength. For counter pumping the received power is negligible up to 80 mW and is 17.1 dBm at 100 mW, whereas with bidirectional pumping power is received even at low pump values, i.e. 20 mW. When BER is considered along with the received power, acceptable performance is recorded at 80 mW and 100 mW, and at both pump powers bidirectional pumping provides the better results. It can therefore be concluded that the bidirectionally pumped system operating at 80 mW is the optimized, cost-effective system.

REFERENCES

[1] Hari Bhagwan Sharma, Tarun Gulati, Bharat Rawat, "Evaluation of Optical Amplifiers", International Journal of Engineering Research and Applications, ISSN: 2248-9622, Vol. 2, pp. 663-667, 2012.
[2] Simaranjit Singh, Amanpreet Singh, R. S. Kaler, "Performance evaluation of EDFA, Raman and SOA optical amplifier for WDM systems", Elsevier, Optik 124 (2013) 95-101.
[3] Simaranjit Singh, R. S. Kaler, "Hybrid optical amplifiers for 64x10 Gbps dense wavelength division multiplexed system", Elsevier, Optik 124 (2013) 1311-1313.
[4] Mrinmay Pal, M. C. Paul, A. Dhar, A. Pal, "Investigation of the optical gain and noise figure for multi-channel amplification in EDFA under optimized pump condition", Elsevier, Optics Communications 273, 407-412, 2007.
[5] M. M. Ismail, M. A. Othman, Z. Zakaria, M. H. Misran, M. A. Meor Said, H. A. Sulaiman, M. N. Shah Zainudin, M. A. Mutalib, "EDFA-WDM Optical Network Design System", Elsevier, Procedia Engineering 53, 294-302, 2013.
[6] Prachi Shukla, Kanwar Preet Kaur, "Performance Analysis of EDFA for different Pumping Configurations at High Data Rate", IJEAT, ISSN: 2249-8958, Volume-2, Issue-5, June 2013.
[7] Gao Yan, Cui Xiarong, Zhang Ruixia, "The simulation of the dense wavelength division multiplexing based on hybrid amplifier", IEEE, ISECS.135.2009.


[8] P. Nagasivakumar, A. Sangeetha, "Gain Flatness of EDFA in WDM System", International Conference on Communication and Signal Processing, IEEE, 2013.
[9] Farah Diana Binti Mahad and Abu Sahmah Bin Mohd Supa, "EDFA Gain Optimization for WDM System", Elektrika, Vol. 11, No. 1, 34-37, 2009.
[10] R. Deepa, R. Vijaya, "Influence of bidirectional pumping in high power EDFA on single channel, multichannel and pulsed signal amplification", Elsevier, Optical Fiber Technology 14, 20-26, 2008.
[11] Ramandeep Kaur, Rajneesh Randhawa, R. S. Kaler, "Performance evaluation of optical amplifier for 16x10, 32x10 and 64x10 Gbps WDM system", Elsevier, Optik 124 (2013) 693-700.
[12] Yugnanada Malhotra, R. S. Kaler, "Optimization of Super Dense WDM Systems for capacity enhancement", Elsevier, Optik 123 (2012) 1497-1500.
[13] Jing Huang, "Impact of ASE noise in WDM systems", Elsevier, Optik 122, 1376-1380, 2011.


International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research)

International Journal of Emerging Technologies in Computational

and Applied Sciences(IJETCAS)

www.iasir.net

IJETCAS 14-525; © 2014, IJETCAS All Rights Reserved Page 71

ISSN (Print): 2279-0047

ISSN (Online): 2279-0055

Appraising Water Quality Aspects for an Expanse of River Cauvery alongside Srirangapatna

Ramya, R.1 and Ananthu, K. M.2

1 Assistant Professor, Department of Civil Engineering, Acharya Institute of Technology, India
2 Professor, Department of Environmental Engineering, P.E.S College of Engineering, India

Abstract: River Cauvery, which turns Srirangapatna into an island, is among the most significant holy places in Southern India. In the present study, an expanse of river Cauvery was selected based on prevailing human activities, and its water quality was monitored across 12 strategic points over two seasons. The river presently also serves as a principal carrier of municipal wastewater. The water samples were analyzed using various analytical techniques during wet and dry flow conditions, to ascertain the probable water quality under average flow conditions. The analysis highlighted that the Dissolved Oxygen of the river was significantly reduced, with increases in Biochemical Oxygen Demand and Total Coliform population, clearly indicating impairment of the natural watercourse beyond its self-purification capacity.

Keywords: Cauvery; water; municipal; quality.

I. Introduction

Today, with ever increasing demands being made on streams and rivers, the need to understand them as ecological systems and to manage them effectively has become increasingly important [1]. Consequently, water quality management is (or should be) one of the most important activities of mankind, in order to protect human life and the life of other living organisms. Water pollution originates from point and non-point (diffuse) sources, most of it due to human action [2]. To restrict pollution below a given threshold, the assimilative capacity of the river should remain sufficient to cope with the current pollution load all along the river [3]. The study area, Srirangapatna, lies in Mandya district at latitude 12°23′22″ and longitude 76°39′13″. The town is famous for its ancient temples, and its population ranges from 20,000 to 60,000. As River Cauvery is considered sacred, numerous religious rituals involving several human activities are performed along it, prompting its selection as the study area.

II. Materials and Methodology

The present study assesses the impact of municipal wastewater discharge on an expanse of River Cauvery through periodic water quality monitoring and analysis. The river course selected extends from upstream of the municipal sewer discharge point near the Fort of Srirangapatna up to Sangama. The entire expanse accounts for about 2.5 km and is divided into two reaches. REACH-1: from upstream of the municipal sewer discharge point near the Fort of Srirangapatna to the downstream river course at Karighatta bridge; REACH-2: from downstream of Karighatta bridge up to Sangama. Sampling was carried out for two seasons, under high and low flow conditions, at 12 strategic points selected based on human activities which can impair the water quality. The collected samples were subjected to water quality analysis for diverse parameters such as pH, temperature, Total Dissolved Solids (TDS), turbidity, conductivity, phosphate, sulphate, chloride, nitrate, Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), Dissolved Oxygen (DO), total coliform organisms, calcium hardness and total hardness. An attempt has hence been made to estimate the probable water quality during normal/average flow conditions.

III. Results and Discussion

The water quality of River Cauvery was analyzed during the months of April and May, 2011. April is considered the summer season as there was no rainfall, whereas May is considered pre-monsoon as two showers had fallen before sampling was done. The results of the comparative study are discussed under two different conditions of river flow:

1. Water Quality at Low Stream Flow (Summer)
2. Water Quality at High Stream Flow (Pre-Monsoon)


Ramya et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014, pp. 71-75


During April, the stream flow at the point just below the municipal wastewater discharge was found to be 15.8 m³/s, whereas in May it was 34.2 m³/s. The width of the river varied between 110 m and 150 m during low flow conditions and between 110 m and 160 m during high flow conditions; hence it was estimated that during average flow conditions the width may vary between 110 m and 160 m. Results of the pH of the river water measured at each strategic sampling point are graphically represented in Chart 1. From this chart it is clear that the pH of the watercourse varied between 6.8 and 7.25 during summer, and between 6.97 and 7.55 during pre-monsoon. The water temperature did not vary much during sampling: it was about 24.5 to 26.56 °C during summer and 24 to 25.4 °C during pre-monsoon, as shown in Chart 2.

Chart 1: Seasonal variation of pH along the river. Chart 2: Seasonal variation of Water Temperature along the river.

The depth profile and the flow velocity of the river are shown in Chart 3 and Chart 4 respectively. During low flow conditions, the depth and velocity of the river varied between 0.6 and 1.6 m and between 0.1 and 0.22 m/s respectively. During high flow conditions, the depth and velocity varied between 1.16 and 2.2 m and between 0.14 and 0.247 m/s respectively, whereas during average flow conditions they were estimated to vary between 0.8 and 1.6 m and between 0.12 and 0.22 m/s respectively.

Chart 3: Seasonal variation of depth along the river. Chart 4: Seasonal variation of velocity along the river.

The amount of solids dissolved in the water was found by measuring the TDS content of the water samples. The analysis showed that the TDS varied from 82.62 to 233.83 mg/l during low flow and from 84.81 to 282 mg/l during high flow conditions. The concentration of TDS at each sampling point is represented in Chart 5. The conductivity of the water was measured and is shown in Chart 6; it was about 135.78 to 369.83 µS in the first month of sampling and 151.42 to 416 µS in the second month.



Chart 5: Seasonal variation of TDS along the river. Chart 6: Seasonal variation of conductivity along the river.

The turbidity of the river at each sampling port is represented in Chart 7; it was higher during pre-monsoon, ranging from 0 to 14 NTU during April and from 0 to 18 NTU during May. A detailed representation of the dissolved oxygen concentration along the river is given in Chart 8: the DO varied from 2.61 to 6.57 mg/l during low stream flow and from 3.016 to 8 mg/l during high stream flow conditions. The COD of the watercourse was analyzed and its variation along the river stretch is represented in Chart 9.

Chart 7: Seasonal variation of Turbidity along the river. Chart 8: Seasonal variation of DO along the river.

The COD of the water was found to vary from 22.2 to 170.77 mg/l during summer and from 21.04 to 80.77 mg/l during pre-monsoon. The analysis of BOD confirmed a non-uniform distribution along the river. Chart 10 shows the BOD profile of the river course: the BOD varied from 0.88 to 10.65 mg/l during the first month of sampling and from 1.026 to 20.66 mg/l during the second.

Chart 9: Seasonal variation of COD along the river. Chart 10: Seasonal variation of BOD along the river.

The laboratory analysis of the water samples for sulphate, phosphate and nitrate showed that their concentrations along the river varied from 12.37 to 33.28 mg/l, 0.74 to 1.69 mg/l and 0.24 to 0.32 mg/l respectively during low stream flow, whereas during high flow conditions they varied between 8 and 43.66 mg/l, 0.23 and 2.63 mg/l and 1.46 and 1.79 mg/l respectively. The concentrations of sulphate, phosphate and nitrate along the river stretch are represented in Chart 11, Chart 12 and Chart 13 respectively. When the chloride content was


analyzed, its concentration varied from 6.53 to 62.2 mg/l during April and 4.6 to 67.55 mg/l during May. The result of the same is shown in Chart 14.

Chart 11: Seasonal variation of Sulphate along the river. Chart 12: Seasonal variation of Phosphate along the river.

Chart 13: Seasonal variation of Nitrate along the river. Chart 14: Seasonal variation of Chloride along the river.

During summer, the concentrations of calcium hardness and total hardness were found to vary from 103.63 to 208 mg/l and from 159.4 to 263 mg/l respectively. During pre-monsoon, they varied from 104.9 to 237 mg/l and from 76.7 to 204 mg/l respectively. Chart 15 and Chart 16 show the concentrations of calcium hardness and total hardness at each sampling transect.

Chart 15: Seasonal variation of Calcium along the river. Chart 16: Seasonal variation of Total Hardness along the river.

When the total coliform analysis was carried out, the number of total coliform organisms was observed to vary from 400 to 2800 MPN/100 ml during April and from 600 to 2000 MPN/100 ml during May. The number observed at each transect is shown in Chart 17. During both low and high flow conditions, the water samples at transect 3 and at transect 12 (Sangama) showed total coliform organisms beyond the prescribed limits; at both stations the total coliforms exceeded 1600 MPN/100 ml.

Chart 17: Seasonal variation of Total Coliform along the river.


The monitoring results of River Cauvery during low and high flow conditions show that all the parameters are within the prescribed standards except DO, BOD and total coliform. Hence it can be estimated that during average flow conditions, DO, BOD and total coliform would also fall outside the prescribed standards, in the ranges of 2.81 to 6.99 mg/l, 0.95 to 12.59 mg/l and 500 to 2400 MPN/100 ml respectively.
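As a rough sketch, the average-flow ranges quoted above can be approximated as midpoints of the two seasonal ranges. This simple interpolation is our assumption, not a method stated by the authors: it reproduces the reported coliform range and the lower bounds of DO and BOD, while some reported upper bounds differ slightly.

```python
# Hedged sketch: estimate an average-flow range as the midpoint of the
# low-flow (summer) and high-flow (pre-monsoon) seasonal ranges.
# This interpolation is an assumption on our part.
def average_range(low_season, high_season):
    lo1, hi1 = low_season
    lo2, hi2 = high_season
    return ((lo1 + lo2) / 2, (hi1 + hi2) / 2)

# Example with the paper's DO and total coliform data:
do_avg = average_range((2.61, 6.57), (3.016, 8))        # lower bound ~2.81 mg/l
coliform_avg = average_range((400, 2800), (600, 2000))  # (500, 2400) MPN/100 ml
```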

IV. Summary

The low dissolved oxygen and the increased BOD concentration and total coliform organisms may be because the entire river expanse supports many human activities, such as fishing, washing of animals, utensils and clothes, and bathing, along with unauthorized wastewater discharges from nearby residences through the thick bushes grown on either side of the bank. If proper treatment is given prior to the disposal of municipal wastewater into the river, not only will the water quality of the river improve, but the self-purification capacity of the river can also certainly be regained.

V. Recommendations

1. The wastewater from the sewer drains has to be properly treated before it enters the river channel.
2. The dumping of solid waste near the river banks should be avoided in order to reduce river contamination.
3. Unauthorized wastewater discharges into the river need to be banned.
4. All houses are required to be connected to sewer lines, and those sewer lines connected to the main sewer outlet, whose flow should be given proper treatment before being let into the river course.
5. More replicate samples can be collected to obtain appropriate field data, especially at the confluence points.

VI. Limitations of the Study

Sampling was done for three days in a month; hence the samples may not represent the exact water quality for the entire month or season.

VII. References
[1] Centre for Research in Water Resources (CRWR), "River Channels - ArcGIS Hydro Data Model," 2000, p. 2.
[2] G. Jolankai, "Description of the CAL Programme on Water Quality Modelling, version 1.1 - Basic River Water Quality Models," IHP-V Project 8.1, United Nations Educational, Scientific and Cultural Organization, Budapest, 1997, p. 4.
[3] M. Campolo, P. Andreussi, and A. Soldati, "Water quality control in the river Arno, technical note," Water Research, vol. 36, p. 2673, 2002.


IJETCAS 14-527; © 2014, IJETCAS All Rights Reserved Page 76


An Improved Image Steganography Technique Using Discrete Wavelet Transform

Richika Mahajan, B.V. Kranthi
Electronics and Communication Engineering, Lovely Professional University, Jalandhar, Punjab, India

_________________________________________________________________________________________

Abstract: This paper proposes a new method for hiding data in the frequency domain. A spatial domain technique, adaptive pixel pair matching, is applied in the frequency domain with some modifications in calculating the B-ary notational system, in selecting the coordinate pair used to form the new frequency values, and in preparing the secret data. The discrete wavelet transform is preferred for embedding the secret data. Data is embedded in the middle frequencies because they are more resistant to attacks than the high frequencies. Coefficients in the low-frequency sub-band are preserved unaltered to improve the image quality. The experimental results show better performance for the discrete wavelet transform as compared with the spatial domain.

Index Terms: Discrete Wavelet Transform, Image Steganography, Adaptive Pixel Pair Matching (APPM)

________________________________________________________________________________________

I. INTRODUCTION

In recent years, due to the wide growth of digital communication and information technologies, the challenge of ensuring privacy has increased. Internet users frequently need to store, send, or receive private information. The most common way to do this is to transform the data into a different form, so that the resulting data can be understood only by those who know how to return it to its original form. This method of protecting information is known as encryption, and the discipline of encrypting information so that it is difficult to read until it reaches the receiver is known as cryptography. A major drawback of cryptography is that the existence of the data is not hidden: data that has been encrypted, although unreadable, still exists as data, and given enough time someone could eventually decrypt it [1]. A solution to this problem is Steganography. Digital image Steganography plays a very crucial role in secure data hiding. In digital image Steganography, the secret data is embedded within a digital image called the cover-image; a cover-image carrying embedded secret data is referred to as a Stego-image. Steganography can be used in both legal and illegal ways: for example, civilians may use it for privacy protection, while terrorists may use it to spread terroristic data. Fig. 1 shows the basic block diagram of image Steganography. The secret image is embedded into the cover image with an embedding algorithm, and a key is used for security so that an eavesdropper cannot extract the secret data. There are three types of Steganography. (1) Pure Steganography: no prior information is required before sending the data, therefore no key is used. (2) Secret key Steganography: one key is used by both the sender and the receiver. (3) Public key Steganography: this does not depend on the exchange of a secret key. It requires two keys, one private and one public: the public key is stored in a public database and is used in the embedding process, while the private key is used to reconstruct the secret message [2]. This process results in the Stego-image. In the extraction algorithm, the reverse of the embedding algorithm is applied to extract the secret data.

Fig. 1 Basic block diagram of image Steganography

Image Steganography is classified into various domains: spatial domain, frequency domain and spread spectrum [2], [3]. In the spatial domain the data is embedded directly in the intensity values of the pixels. It is known as a basic


Richika Mahajan et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014, pp. 76-82


substitution system. The advantages of this type of image Steganography are easy computation and low complexity. However, a disadvantage is that the embedded data can be easily detected by signal processing attacks such as addition of noise, rotation, compression, etc. For the frequency domain, the image is first transformed into its different frequency components. Frequency domain methods hide the message in significant areas of the cover image, which makes them more robust to attacks such as noise addition, compression, cropping and other image processing operations. The most important techniques in the frequency domain are the discrete cosine transform and the discrete wavelet transform. In today's scenario, the DWT is preferred because it is more robust to attacks such as blocking artifacts and offers better perceptual transparency, i.e. better image quality, than the DCT [4]. The spread spectrum technique spreads a narrow-band frequency over a wide band and then embeds the data in noise, but due to its complexity it is less used. Therefore, the frequency domain is selected for embedding the data. The following are the required features for image Steganography [3]. (1) Embedding capacity: the amount of data that can be inserted into the cover-media without changing its integrity. (2) Perceptual transparency: this concept is based on the properties of the human visual system; the embedded information is imperceptible if an average human is unable to distinguish between carriers that contain hidden information and those that do not. (3) Robustness: the ability of the embedded data to remain intact if the Stego-system undergoes transformations such as linear and non-linear filtering; addition of random noise; and scaling, rotation, and lossy compression. (4) Computational complexity: the computational complexity of the Steganography technique employed for encoding and decoding is another consideration and should be given importance.

II. LITERATURE REVIEW

The most common and widely used spatial domain technique is the least significant bit (LSB) method, in which the LSBs of the cover image are directly replaced by the message bits. The major drawback of this method is its vulnerability to various statistical attacks [3]. In 2003, Chan et al. [5] improved on the limitations of the LSB method with the optimal pixel adjustment method, which reduces the image distortion of LSB: certain conditions are applied, and a pixel is modified only if the result produces less distortion; otherwise it is kept unmodified. Then the pixel pair matching (PPM) method came, which uses pixel pairs for embedding [6]. The basic idea is replacing the pixel pair (x, y), treated as a coordinate, with a newly searched coordinate (x', y') within a predefined neighborhood set ϕ(x, y). In 2006, Zhang et al. [7] proposed the exploiting modification direction (EMD) method, which introduces a (2n+1)-ary notational system according to which only one pixel is increased or decreased by 1; the drawback of this method is its low payload. In 2009, Chao et al. [8] proposed diamond encoding (DE), which increases the payload of the EMD method. The drawback of DE is that it ties the B-ary notational system to the embedding parameter k: when k is 1, 2 and 3, B is the 5-, 13- and 25-ary notational system respectively. In 2012, Hong et al. [6] proposed a new method known as adaptive pixel pair matching, which allows selecting digits in any B-ary notational system and thus overcomes the limitation of DE. This method fully satisfies the basic requirements of PPM: (1) there must be exactly B coordinates in the neighborhood sets; (2) the characteristic values must be mutually exclusive; (3) the best B must be selected to achieve low embedding distortion, and the design of the neighborhood sets and characteristic values should be capable of embedding digits in any B-ary notational system.

For the frequency domain, in 2012 P. Rajkumar et al. [9] presented a comparative analysis of spatial and frequency domain techniques. Their results show that spatial domain techniques are easy to implement and encode with high payload, whereas the frequency domain is more robust to statistical attack and has lower payload capacity. That paper also compares DCT and DWT: the discrete wavelet transform is more robust to blocking artifacts and has better perceptual transparency than the discrete cosine transform [4]. There are different algorithms for DWT. In 2006, Chen et al. [10] proposed a DWT-based approach in which data embedding is done in the LSBs of the frequency components; it also satisfies the requirements of different users. It defines two modes of embedding data, and a mapping technique is applied on the secret bits to increase security. The two embedding modes are a varying mode and a fixed mode: in the varying mode the embedding capacity varies, whereas in the fixed mode it does not, and the peak signal to noise ratio varies accordingly. The paper also gives a detailed view of how data is embedded in the various sub-bands. Its drawback is that the key matrix used for secret data manipulation is not good, because extra data is also embedded; a new method is therefore required to enhance authentication and improve embedding capacity by reducing the amount of extra data embedded in the original image. In 2010, Song et al. [11] proposed an authentication-based method in which a chaotic logistic map is used to randomize the secret data: first a chaotic sequence is generated, then a binary sequence is formed by applying a threshold, and the data is embedded in the DWT coefficients. In 2011, Ghasemi et al. [12] proposed a method that improves the embedding capacity by using a genetic algorithm and improves imperceptibility by using the OPAP method; its drawback is high computational complexity.

This paper proposes a method in which the spatial domain technique APPM, with some modifications, is applied on the discrete wavelet transform of the frequency domain. In this technique, a pixel pair is selected for embedding with all the conditions of PPM satisfied; here, data is embedded in pairs of frequency coefficients. The proposed


method improves robustness and imperceptibility over APPM. The rest of this paper is organized as follows: Section III reviews some related areas and describes the proposed work; Section IV presents the simulation results and analysis; Section V gives the concluding remarks.

III. PROPOSED WORK

In this section, the Haar discrete wavelet transform and the proposed method are discussed in detail.

A. Haar- Discrete wavelet transform

The Haar wavelet is the simplest and most commonly used wavelet. It is applied in two passes: one horizontal and one vertical. First, scan the pixels from left to right in the horizontal direction. Then perform addition and subtraction operations on neighboring pixels, multiplying by the Haar scaling factor 1/√2, and store the addition results on the left half and the subtraction results on the right half. Considering a starting pixel A and its neighboring pixel B:

Sum on left side = (A + B)/√2 (1)

Difference on right side = (A − B)/√2 (2)

Repeat the process until all rows are covered. The pixel sums represent the low frequencies and the differences represent the high frequencies. Secondly, scan the pixels from top to bottom in the vertical direction, perform the same addition and subtraction operations multiplied by 1/√2, and store the addition results on top and the subtraction results on the bottom. A filter bank is used for these operations, serving as analysis and synthesis stages [13].

Fig. 2 Input image with different sub-bands after applying DWT
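The two Haar passes described above can be sketched as a minimal one-level 2-D transform. The sub-band names LL, LH, HL and HH follow common convention, and the function name is ours, not the paper's:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT sketch. Returns the LL, LH, HL, HH sub-bands."""
    x = img.astype(float)
    s = 1.0 / np.sqrt(2.0)
    # Horizontal pass: sums on the left half, differences on the right half.
    lo = (x[:, 0::2] + x[:, 1::2]) * s
    hi = (x[:, 0::2] - x[:, 1::2]) * s
    # Vertical pass on each half: sums on top, differences on the bottom.
    ll = (lo[0::2, :] + lo[1::2, :]) * s
    lh = (lo[0::2, :] - lo[1::2, :]) * s
    hl = (hi[0::2, :] + hi[1::2, :]) * s
    hh = (hi[0::2, :] - hi[1::2, :]) * s
    return ll, lh, hl, hh
```

Because of the 1/√2 normalization the transform is orthonormal, so the total energy of the four sub-bands equals that of the input image.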

B. Embedding procedure

STEP 1: The secret data is the data to be concealed; here it is an image. First convert the grayscale pixel values of the secret image into a binary bit stream. Then, according to the embedding requirement, divide these bits into groups of 2, 3, 4, etc.; their embedding into the frequency sub-bands of the cover image depends on a key k, which is used for security and determines the embedding sequence.

STEP 2: Let the cover image be of size M × M. Apply the Haar DWT to separate the low, middle and high frequencies, each of size M/2 × M/2 = M1 × M1. Data is embedded in the middle-frequency components.

STEP 3: Adaptive pixel pair matching is used for embedding, with some modifications. Find the B-ary notational system using

B = 2 ^ (n / (M1 × M1)) (3)

where n is the number of bits to be embedded, calculated by

n = (size of secret image × 8) bits (4)

Now calculate the bits per pixel:

Bits per pixel = n / (M1 × M1) = log2(B) (5)

The bits per pixel determine the matrix size required to cover the values of the B-ary notational system. Find the characteristic values f(x, y), where x and y are the coordinates of the B-ary notation formed:

f(x, y) = (x + CB × y) mod (B/10) (6)

CB is a constant and can be obtained by solving for the given pair (x, y) and the given integer value B. Then find the neighborhood sets ϕ(x, y), where x and y are the coordinates of the center value and (xi, yi) are the other coordinates around (x, y):

Minimize: Σi (xi² + yi²)
Subject to: f(xi, yi) ∈ {0, 0.1, …, (B−1)/10},
f(xi, yi) ≠ f(xj, yj) if i ≠ j, for 0 ≤ i, j ≤ (B−1)/10 (7)

From the above, the neighborhood values having minimum distance are selected and the other values are neglected. To embed a secret digit SB, take a pair of frequency coefficients (a, b) from the cover image according to the embedding sequence, find f(a, b), and determine the modulus distance d between SB and f(a, b):

d = (SB − f(a, b)) mod (B/10) (8)

Repeat the process until all the data is embedded, then apply the inverse discrete wavelet transform to obtain the stego-image.
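Equations (3)-(5) can be checked numerically with a small sketch (the function name is ours; the values follow the worked example given in the text):

```python
# Sketch of eqs. (3)-(5): choosing the B-ary notational system from the
# payload n (in bits) and the sub-band size M1 x M1.
def choose_b(n_bits, m1):
    bits_per_pixel = n_bits / (m1 * m1)   # eq. (5)
    return 2 ** bits_per_pixel            # eq. (3)

# Secret image 256 x 128 in a 512 x 512 cover (M1 = 256) gives B = 16.
b = choose_b(256 * 128 * 8, 256)
```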


Let us explain with the help of an example. Consider a cover image of size 512 × 512 (M × M); after applying the DWT, each sub-band becomes 256 × 256 (M1 × M1). Let the secret image be of size 256 × 128, so the number of bits is 256 × 128 × 8 = 262144 bits and the calculated B-ary is 16; the secret data is prepared accordingly. Here CB is a constant whose value for B = 16 is 6. If (x, y) is (-0.1, -0.2), then f(x, y) = (-0.1 + 6 × (-0.2)) mod 1.6 = 0.3.

Table I: Constant Values CB for 2 ≤ CB ≤ 64

Since B is 16 here, a matrix of 5 × 5 is required; x and y in equation (6) are the coordinates of this matrix. After calculating f(x, y) for all x and y, the matrix shown below is obtained.

Fig. 3 Characteristic values f(x, y)

The coordinates having the smaller neighborhood-set values are retained, giving the matrix shown below.

Fig. 4 Neighborhood sets ϕ(x, y)

Table II: Neighborhood Coordinates (x', y') for Each Characteristic Value f(x', y')

f = 0.1: (0.1, 0)      f = 0.5: (-0.1, 0.1)   f = 0.9: (-0.1, -0.1)  f = 1.3: (0.1, 0.2)
f = 0.2: (0.2, 0)      f = 0.6: (0, 0.1)      f = 1.0: (0, -0.1)     f = 1.4: (-0.2, 0)
f = 0.3: (-0.1, -0.2)  f = 0.7: (0.1, 0.1)    f = 1.1: (0.1, -0.1)   f = 1.5: (-0.1, 0)
f = 0.4: (0, -0.2)     f = 0.8: (0.2, 0.1)    f = 1.2: (0, 0.2)

Now the secret data is made to lie in the range 0 to (B−1)/10. As discussed above, the cover image is divided into different sub-bands, and the decomposition level can be increased according to the payload capacity. Here 262144 bits are to be embedded; after grouping them into groups of four bits, 65536 secret digits remain to be embedded. There are two middle-frequency sub-bands, HL and LH, in which to hide the data; each can embed (M1 × M1)/2 digits, so half the digits go to one sub-band and the rest to the other. Suppose the secret digit 0.7 is to be concealed in the frequency coefficient pair (-2.4, 5.2). First find the characteristic value f(a, b), where a and b are the frequency components in which the data is to be concealed: f(-2.4, 5.2) = 0. Now determine the modulus distance: d = (0.7 − 0) mod 1.6 = 0.7. The coordinates for 0.7 from Fig. 4 are (0.1, 0.1), so the new frequency components are (-2.4 + 0.1, 5.2 + 0.1) = (-2.3, 5.3). Scan all the frequency components according to the key k and repeat until all the data is embedded.
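The embedding step for B = 16 can be sketched as follows. This is a hedged illustration of the modified APPM step, not the authors' code: helper names are ours, arithmetic is done in integer tenths to avoid floating-point modulus artifacts, and ties in the minimum-distance selection may resolve differently from Table II.

```python
# Hedged sketch of the modified APPM embed step for B = 16, CB = 6,
# with the paper's 0.1-scaled values.
CB, B = 6, 16

def f(x, y):
    """Characteristic value f(x, y) = (x + CB*y) mod B/10, eq. (6).
    Computed in integer tenths so the modulus is exact."""
    xi, yi = round(x * 10), round(y * 10)
    return ((xi + CB * yi) % B) / 10.0

# Minimum-distance offset (dx, dy) whose characteristic value equals each
# digit: this rebuilds the neighborhood set of Fig. 4 (ties may differ).
OFFSETS = {}
grid = (-0.2, -0.1, 0.0, 0.1, 0.2)
for dx in grid:
    for dy in grid:
        v, r2 = f(dx, dy), dx * dx + dy * dy
        if v not in OFFSETS or r2 < OFFSETS[v][0]:
            OFFSETS[v] = (r2, dx, dy)

def embed_digit(a, b, digit):
    """Shift the coefficient pair (a, b) so that f(a', b') == digit."""
    d = ((round(digit * 10) - round(f(a, b) * 10)) % B) / 10.0  # eq. (8)
    _, dx, dy = OFFSETS[d]
    return round(a + dx, 10), round(b + dy, 10)
```

Running the text's example, `embed_digit(-2.4, 5.2, 0.7)` yields the pair (-2.3, 5.3), and evaluating `f` on that pair recovers the digit 0.7.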

C. Extraction procedure

STEP 1: The stego-image contains the secret data. Since this Steganography is based on the transform domain, the data is contained in the frequency coefficients; the DWT is applied to the stego-image, and then the data extraction procedure is followed.

STEP 2: In extraction, scan the frequency components of the middle-frequency sub-bands in which the data is embedded, according to the key k. Then calculate f(a', b'); the results obtained are the embedded values. Bring the values into the range 0 to (B−1), form a binary bit stream, group the bits into groups of 2, 3, 4, etc. according to the input bit sequence, and convert them to decimal. Hence the secret image is extracted. Continuing the above example with the new frequency components (-2.3, 5.3): f(a', b') = (-2.3 + 6 × 5.3) mod 1.6 = 0.7. This process continues until all the data is extracted.

IV. SIMULATION RESULT AND ANALYSIS

This section presents the simulation results and some data analysis. The results are obtained on four cover images, each a gray-level image of size 512 × 512, shown below. Two evaluation parameters, PSNR and MSE, are used. A comparison of PSNR and MSE between APPM and the proposed method is shown, along with results on different images with a payload of 1 bit per pixel for a constant value of CB, and the data embedding capacity for different B-ary systems.

A. Peak signal to noise ratio (PSNR):

The PSNR is commonly used to determine the quality of the stego-image. It is only an approximation to human perception of reconstruction quality; in some cases one reconstruction may appear closer to the original than another even though it has a lower PSNR. In Steganography it should be above 30 dB.

PSNR = 10 log10 (255² / MSE) (9)

B. Mean square error (MSE):

It stands for mean square distance between the cover image and stego-image.

MSE

(10)

where aij is the pixel value at position (i, j) in the cover image and bij the corresponding value in the stego-image. The smaller the mean squared error, the larger the PSNR.
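Equations (9) and (10) can be implemented directly; a minimal NumPy sketch, assuming 8-bit images so that the peak signal value is 255:

```python
import numpy as np

def mse(cover, stego):
    # Mean squared error between cover and stego images (equation 10).
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(cover, stego, peak=255.0):
    # Peak signal-to-noise ratio in dB (equation 9).
    return 10.0 * np.log10(peak ** 2 / mse(cover, stego))

cover = np.zeros((4, 4), dtype=np.uint8)
stego = np.ones((4, 4), dtype=np.uint8)   # every pixel off by 1 -> MSE = 1
print(psnr(cover, stego))  # 20*log10(255), about 48.13 dB
```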


Richika Mahajan et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August,

2014,pp. 76-82

IJETCAS 14-527; © 2014, IJETCAS All Rights Reserved Page 81

Fig. 5 Four gray scale images used as cover medium

Table IV: MSE and PSNR Comparison Between APPM and Proposed Method

Payload | B-ary | APPM MSE | APPM PSNR | Proposed MSE | Proposed PSNR
1 | 4 | 0.3754 | 52.3859 | 0.0019 | 75.3945
2 | 16 | 1.344 | 46.847 | 0.0067 | 69.8518
3 | 64 | 5.1923 | 40.9772 | 0.0259 | 63.994
4 | 256 | 20.44 | 35.495 | 0.104 | 57.9585
1.171 | 5 | 0.4 | 52.1102 | 0.002 | 75.079
1.831 | 13 | 1.077 | 47.8086 | 0.005 | 71.1175

Fig. 6 Four Stego Images

Table V: MSE and PSNR of Images (C4, 1 bpp)

Image | APPM MSE | APPM PSNR | Proposed MSE | Proposed PSNR
Lena | 0.3762 | 52.377 | 0.0019 | 75.3945
Living room | 0.3754 | 52.3754 | 0.0019 | 75.4172
Baboon | 0.3747 | 52.3944 | 0.0019 | 75.4175
Private | 0.3747 | 52.3935 | 0.0019 | 75.3949

Table VI: Data Embedding Capacity

B-ary | Constant | Embedding capacity (bits)
4 | 2 | 131072
16 | 6 | 262144
64 | 14 | 393216
256 | 60 | 524288

Table IV depicts the comparison of MSE and PSNR between the proposed method in DWT and adaptive pixel pair matching. The results show that the proposed method performs better than APPM for the different B-ary notational systems. For high payload and 256-ary embedding, the proposed method shows a very small MSE, which improves the stego-image quality over that of APPM. Table V shows the results on the various images at a payload of 1 bpp. Table VI gives the embedding capacity for different B-ary.

V. CONCLUSION

A new steganography scheme is proposed in this research: adaptive pixel pair matching, a spatial domain technique, is implemented in the frequency domain. Because the data is embedded in the frequency coefficients, the scheme is more secure and more perceptually transparent to the human visual system; the data is therefore hidden in the middle frequencies, and a key is used for additional security. The proposed method shows a good peak signal-to-noise ratio in comparison with adaptive pixel pair matching. In future work, a better security key can be used to decrease the processing time, more data can be embedded by using compression techniques and by increasing the decomposition level, and other wavelets can be explored for better results.



International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research)

International Journal of Emerging Technologies in Computational

and Applied Sciences (IJETCAS)

www.iasir.net

IJETCAS 14-528; © 2014, IJETCAS All Rights Reserved Page 83

ISSN (Print): 2279-0047

ISSN (Online): 2279-0055

Robust Watermarking in Mid-Frequency Band in Transform Domain

using Different Transforms with Full, Row and Column Version and

Varying Embedding Energy

Dr. H. B. Kekre1, Dr. Tanuja Sarode2, Shachi Natu3

1Senior Professor, Computer Engineering Dept., MPSTME, NMIMS University, Mumbai, India

2Associate Professor, Department of Computer Engineering, TSEC, Mumbai, India

3Ph. D. Research Scholar, MPSTME, Assistant Professor, Department of Information Technology, TSEC,

Mumbai, India,

Abstract: This paper proposes a watermarking technique using the sinusoidal orthogonal transforms DCT, DST, Real Fourier transform and Sine-Cosine transform, and the non-sinusoidal orthogonal transforms Walsh and Haar. These transforms are used in full, column and row versions to embed the watermark, and their performance is compared. In addition, using the energy conservation property of transforms, different percentages of the host image energy (60%, 100% and 140%) are maintained after embedding the watermark, to observe the effect on the robustness of the proposed watermarking technique. Though different transforms prove robust against different types of attacks, the Haar column transform, followed by the Haar row transform, is observed to be best in terms of robustness, followed by the Walsh column/row transform and the DCT column and row transform.

Keywords: Watermarking; DCT; DST; Real Fourier Transform; Sine-Cosine transform; Walsh; Haar

I. Introduction

Networked communication systems have become very popular for data exchange, especially for multimedia

data. Though distribution of multimedia data has become easier due to advanced technology, many times

creators of data or owners of data are not willing to distribute their data to avoid copyright violation.

Distribution of multimedia contents without violating its copyright is possible through digital watermarking.

Contents like audio, video and images can be secured from unauthorized copying or distribution by hiding certain information in them such that it remains imperceptible to the Human Visual System (HVS). Other applications of digital watermarking include fingerprinting, broadcast monitoring, owner identification, indexing, etc. [1].

Various watermarking techniques which have been proposed till now can be classified into two groups: those

which hide the information in spatial domain i.e. in pixel values of an image and those which hide the

information in frequency domain [2]. Depending upon need of various applications, watermarking can also be

classified as robust, fragile, semi-fragile, visible and invisible watermarking [1].

Selection of the appropriate domain (spatial or frequency) depends on the required robustness of the watermarking scheme with respect to the specific alterations introduced in the data. Spatial domain watermarking is robust against geometric attacks; however, its poor data-embedding capacity may violate the imperceptibility characteristic. Transform based (frequency domain) watermarking techniques are more robust and also provide better imperceptibility. This is due to the fact that when the watermark is embedded in the transform domain of the host image, its effect is scattered all over the image in the spatial domain rather than being concentrated in a specific pixel area.

Combinations of one or more transforms are also found to be more robust than using one single transform.

Wavelet transforms are popular transforms used in frequency domain watermarking.

In this paper, various orthogonal transforms are explored for watermarking under various attacks. In addition, these transforms are applied in different forms (full transform, column transform and row transform) to study their effect when various attacks are performed on watermarked images. Using the energy conservation property of transforms, the watermark is embedded such that it contributes 60%, 100% or 140% of the energy of the host image region chosen for embedding. Appropriate scaling factors are used to bring the watermark energy to the desired proportion of the energy of the host image region.

The remainder of the paper is organized as follows. Section 2 reviews existing watermarking techniques. Section 3 gives a brief idea of the Real Fourier transform and Sine-Cosine transform. Section 4 describes the proposed method. Section 5 focuses on results and discussion related to the proposed method. Section 6 concludes the work presented in the paper.

II. Review of Literature

Many transform based watermarking techniques have been proposed in the literature. Among them, DCT and its combination with the Discrete Wavelet Transform (DWT) are very popular. Barni et al. proposed a DCT based watermarking scheme for copyright protection of multimedia data, in which a pseudo-random sequence of real numbers

is embedded in a selected set of DCT coefficients [2]. Jiansheng, Sukang and Xiaomei proposed a DCT-DWT based invisible and robust watermarking scheme in which the Discrete Cosine transformed watermark is inserted into a three-level wavelet transformed host image [3]. Surya Pratap Singh, Paresh Rawat and Sudhir Agrawal

also proposed a DCT-DWT based watermarking technique in which scrambled watermark using Arnold

transform is subjected to DCT and inserted into HH3 band of host image[4]. Yet another joint DCT-DWT based

watermarking scheme [5] is proposed by Saeed K. Amirgholipour and Ahmad R. Naghsh-Nilchi. Another

combined DWT-DCT based watermarking with low frequency watermarking and weighted correction is

proposed by Kaushik Deb, Md. Sajib Al-Seraj, Md. Moshiul Hoque and Md. Iqbal Hasan Sarkar in [6]. In their

proposed method, watermark bits are embedded in the low frequency band of each DCT block of selected DWT

sub-band. The weighted correction is also used to improve the imperceptibility. In [7], Zhen Li, Kim-Hui Yap

and Bai-Ying Lei proposed a DCT and SVD based watermarking scheme in which SVD is applied to cover

image. By selecting first singular values macro block is formed on which DCT is applied. Watermark is

embedded in high frequency band of SVD-DCT block by imposing particular relationship between some pseudo

randomly selected pairs of the DCT coefficients. H. B. Kekre, Tanuja Sarode, Shachi Natu presented a DWT-

DCT-SVD based hybrid watermarking method for color images in [8]. In their method, robustness is achieved

by applying DCT to specific wavelet sub-bands and then factorizing each quadrant of frequency sub-band using

singular value decomposition. Watermark is embedded in host image by modifying singular values of host

image. Performance of this technique is then compared by replacing DCT by Walsh in above combination.

Walsh results in computationally faster method and acceptable performance. Imperceptibility of method is

tested by embedding watermark in HL2, HH2 and HH1 frequency sub-bands. Embedding watermark in HH1

proves to be more robust and imperceptible than using HL2 and HH2 sub-bands. In [9] and [10] Kekre, Sarode,

and Natu presented DCT wavelet and Walsh wavelet based watermarking techniques. In [9], DCT wavelet

transform of size 256*256 is generated using existing well known orthogonal transform DCT of dimension

128*128 and 2*2. This DCT Wavelet transform is used in combination with the orthogonal transform DCT and

SVD to increase the robustness of watermarking. HL2 sub-band is selected for watermark embedding.

Performance of this watermarking scheme is evaluated against various image processing attacks like contrast

stretching, image cropping, resizing, histogram equalization and Gaussian noise. DCT wavelet transform

performs better than their previous DWT-DCT-SVD based watermarking scheme in [8] where Haar functions

are used as basis functions for wavelet transform. In [10] Walsh wavelet transform is used that is derived from

orthogonal Walsh transform matrices of different sizes. A 256*256 Walsh wavelet is generated using 128*128 and 2*2 Walsh transform matrices, and then using 64*64 and 4*4 Walsh matrices, reflecting the resolution of the host image taken into consideration. It is supported by DCT and SVD to increase the robustness. The Walsh wavelet

based technique is then compared with DCT wavelet based method given in [9]. Performance of three

techniques is compared against various attacks and they are found to be almost equivalent. However,

computationally Walsh wavelet was found preferable over DCT wavelet. Also Walsh wavelet obtained by

64*64 and 4*4 is preferable over DCT wavelet and Walsh wavelet obtained from corresponding orthogonal

transform matrix of size 128*128 and 2*2. In [11], other wavelet transforms like Hartley wavelet, Slant wavelet,

Real Fourier wavelet and Kekre wavelet were explored by Kekre, Sarode and Shachi Natu. Performance of

Slant wavelet and Real Fourier wavelet were proved better for histogram Equalization and Resizing attack than

DCT wavelet based watermarking in [9] and Walsh wavelet based watermarking presented in [10].

Kekre et.al presented a DCT wavelet transform based watermarking technique [12]. Here DCT wavelet is

generated from orthogonal DCT using algorithm of wavelet generation from orthogonal transforms given by Dr.

Kekre in [13]. Watermark is compressed before embedding in host image. Various compression ratios are tried

for compression of watermark so that watermark image quality is maintained with acceptable loss of

information from image. Embedding compressed image also reduces the payload of information embedded in

host image and thus causes good imperceptibility of watermarked image. Performance of the technique is

evaluated under attacks like binary run length noise, Gaussian distributed run length noise, cropping for various

compression ratios used in watermark compression. The work by Kekre et al. in [12] was extended to other attacks like resizing and compression in [13]. Also, the compressed watermark is obtained using a compression ratio of 2.67, and the strength of the compressed, normalized watermark is further increased using a suitable scaling factor, which was not done in [12]. The performance of the full, column and row transforms using DCT wavelet and DKT_DCT

hybrid wavelet against various attacks is explored by Kekre et al. in [14] and [15] respectively. The column transform proved better, both performance-wise and in computational efficiency, in both cases. Further, the DKT_DCT column wavelet was observed to be better than the DCT column wavelet. The effect of embedding the watermark while maintaining its energy at some proportion of the host energy using wavelet transforms is studied in [16] by Kekre et al.

III. Real Fourier transform [17] and Sine-Cosine transform [18]

The Discrete Fourier Transform (DFT) contains complex exponentials, that is, both cosine and sine functions, and therefore gives complex values in its output. To avoid these complex values, the complex terms in the Fourier Transform must be eliminated. This can be done by combining elements of the Discrete Cosine


Transform (DCT) and Discrete Sine Transform (DST) matrices. If we select odd sequences from the DCT matrix and even sequences from the DST matrix, we obtain the Sine-Cosine transform, whereas choosing even sequences from the DCT matrix and odd sequences from the DST matrix generates the Real Fourier Transform. The Real Fourier Transform is simply the set of discrete sinusoidal functions of Fourier analysis. Both versions are real and orthogonal, as they are obtained from the real and orthogonal DCT and DST matrices.
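The construction described above can be sketched numerically; the even/odd sequence-selection convention below is only illustrative (the exact convention is specified in [17], [18]), and the helper names are ours:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix: rows are cosine basis sequences.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def dst_matrix(n):
    # Orthonormal DST-I matrix: rows are sine basis sequences.
    k = np.arange(1, n + 1).reshape(-1, 1)
    i = np.arange(1, n + 1)
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * k * i / (n + 1))

def sine_cosine_matrix(n):
    # Interleave odd DCT sequences with even DST sequences, following the
    # selection idea in the text (swap the roles of the two matrices to
    # obtain the Real Fourier Transform matrix instead).
    c, s = dct_matrix(n), dst_matrix(n)
    t = np.empty((n, n))
    t[1::2, :] = c[1::2, :]   # odd-numbered DCT sequences
    t[0::2, :] = s[0::2, :]   # even-numbered DST sequences
    return t
```

Both constituent matrices are real and orthogonal; the combined matrix inherits realness, while its orthogonality depends on the precise selection convention given in [17], [18].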

IV. Proposed Method

In the proposed method, orthogonal transforms (DCT/DST/Walsh/Haar/Real Fourier transform/Sine-Cosine transform) are applied to both the host image and the watermark image. This is done in three different ways: by taking the full transform, the column transform and the row transform. The transformed image obtained after applying the full transform is given by

F = T*f*T' (1)

where T is a unitary, orthogonal transform matrix, T' is its transpose, f is the image to be transformed and F is the transformed image. The original image can be obtained from the transformed image as

f = T'*F*T (2)

For the column transform, the transformed image is obtained by pre-multiplying the image with the transform matrix, as shown in equation (3), and the original image is obtained by pre-multiplying the transformed image with the transpose of the transform matrix, as shown in equation (4).

F = T*f (3)
f = T'*F (4)

The row transform of an image is given by operating the transposed transform matrix on the rows of the image, and the spatial-domain image is obtained by operating the transform matrix on the rows of the transformed image, as shown in equations (5) and (6).

F = f*T' (5)
f = F*T (6)

One noticeable advantage of applying the column or row transform instead of the full transform, evident from equations (1) to (6), is that it reduces the number of multiplications by 50%, making the operation faster.
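Equations (1)-(6) can be checked with a small NumPy sketch; the 2x2 orthonormal Haar matrix below is only an illustrative choice of T.

```python
import numpy as np

# Orthonormal 2x2 Haar matrix (illustrative; any orthogonal T works).
T = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
f = np.array([[10.0, 20.0], [30.0, 40.0]])  # toy "image"

F_full = T @ f @ T.T     # equation (1): full transform
F_col = T @ f            # equation (3): column transform
F_row = f @ T.T          # equation (5): row transform

# The inverses recover the original image exactly (T is orthogonal).
assert np.allclose(T.T @ F_full @ T, f)   # equation (2)
assert np.allclose(T.T @ F_col, f)        # equation (4)
assert np.allclose(F_row @ T, f)          # equation (6)
```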

In a transformed image, low frequency elements correspond to the smoothness of the image, while high frequency elements are responsible for texture and edges. Damage to the smoothness of an image is easily noticeable to the Human Visual System, and alterations to the high frequency elements distort the image texture and boundaries. Hence hiding information, in the form of a watermark, in the low or high frequency elements of a transformed image is not feasible, as it strongly affects imperceptibility; the suitable candidate for manipulation by information hiding is the middle frequency band. In the proposed method, the middle frequency elements are likewise chosen for embedding the watermark. Applying a

full transform to image leads to generation of HL and LH frequency bands corresponding to middle frequency

elements. When column transform is applied to an image, energy concentration is observed to be at the upper

side of an image. Hence middle rows of column transformed image correspond to middle frequency elements.

When row transform is applied to an image, energy of image gets concentrated towards the left side of image

hence middle columns of row transformed image correspond to middle frequency elements.

By knowing these characteristics of full, column and row transform, HL and LH band of full transformed image

and middle rows and middle columns of column and row transformed image respectively are selected for

embedding the watermark.

Another important property of transforms considered while embedding the watermark is the energy conservation property. When the middle frequency elements of the transformed image are replaced by the transform coefficients of the watermark, the energy of the watermark is made equal (100%) to the energy of the middle frequency band of the host by using a suitable scaling parameter. A study of embedding less energy (60%) and more energy (140%) in the middle frequency band is also done.
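The scaling step can be sketched as follows; the helper name and toy data are ours, while the 60%/100%/140% ratios come from the text.

```python
import numpy as np

def scale_to_energy(wm_coeffs, band_coeffs, ratio=1.0):
    """Scale watermark coefficients so their energy equals `ratio` times
    the energy of the host's middle-frequency band (ratio = 0.6, 1.0 or
    1.4 in the experiments described in the text)."""
    e_band = np.sum(band_coeffs ** 2)
    e_wm = np.sum(wm_coeffs ** 2)
    alpha = np.sqrt(ratio * e_band / e_wm)  # suitable scaling parameter
    return alpha * wm_coeffs

band = np.array([3.0, -1.0, 2.0])   # toy host middle-band coefficients
wm = np.array([1.0, 2.0, -2.0])     # toy watermark coefficients
scaled = scale_to_energy(wm, band, 0.6)
# scaled watermark energy now equals 60% of the band energy
assert np.isclose(np.sum(scaled ** 2), 0.6 * np.sum(band ** 2))
```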

The following five images, (a)-(e) in Fig. 1, are used as host images, and Fig. 1(f) is used as the watermark.

(a) Lena (b) Mandrill (c) Peppers (d) Face (e) Puppy (f) NMIMS

Figure 1: Host images and the watermark image used for the experimental work

V. Results of proposed method under various attacks

For the various orthogonal transforms used, results for the Lena host image after embedding the NMIMS watermark, with the watermark energy matched to the middle frequency band of the host selected for embedding, are shown. Figures 2(a) and (b) below show the watermarked Lena image and the watermark extracted from it when the watermark is embedded into the HL and LH frequency bands of the full Haar transformed image. The Mean Absolute Error (MAE) between the host


image and watermarked image as well as between embedded and extracted watermark are shown below each

corresponding image.
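MAE here is taken to be the mean of absolute pixel differences; this is our reading of the metric, which the text does not spell out:

```python
import numpy as np

def mae(img_a, img_b):
    # Mean Absolute Error between two equally sized images.
    return np.mean(np.abs(img_a.astype(float) - img_b.astype(float)))

a = np.array([[0, 10], [20, 30]])
b = np.array([[1, 12], [20, 26]])
print(mae(a, b))  # (1 + 2 + 0 + 4) / 4 = 1.75
```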

MAE=3.69 MAE=0 MAE=5.171 MAE=0

(a) (b)

Fig. 2 (a) Watermarked image and extracted watermark using full Haar-HL band (b) Watermarked image and extracted watermark using full Haar-LH band

Fig. 3 (a) and (b) show the watermarked image and the extracted watermark for the Haar column transformed image and the Haar row transformed image.

MAE=3.209 MAE=0 MAE=4.962 MAE=0

(a) (b)

Fig. 3 (a) Watermarked image and extracted watermark using Haar column transform (b) Watermarked image and extracted watermark using Haar row transform

From Fig. 2(a)-(b) and Fig. 3(a)-(b), it is observed that the host image distortion caused by embedding the watermark is noticeable for the column and row Haar transforms, even though the MAE between host and watermarked image is lower for them than for the full Haar transform with HL or LH frequency band selection.

A. Performance against various attacks:

Cropping attack: In this type of attack, the watermarked image is cropped in three different ways: equally at the four corners (a 16x16 square portion and a 32x32 square portion) and at the center (32x32). The cropped watermarked images and the watermarks extracted from them when the Haar transform is used (full, column and row transform) are shown in Fig. 4(a)-(d).

2.151 17.356 2.156 19.657

(a)Full Haar -HL band (b)Full Haar-LH band

2.145 1.651 2.145 1.128

(c)Column Haar (d) Row Haar

Fig. 4 (a) Cropped watermarked image and watermark extracted from it using full Haar transform and HL band (b) cropped watermarked image and watermark extracted from it using full Haar transform and LH band (c) cropped watermarked image and watermark extracted from it using column Haar transform (d) cropped watermarked image and watermark extracted from it using row Haar transform.

From Fig. 4(a)-(d), it can be seen that for the row transform the extracted watermark shows high correlation with the embedded one, closely followed by the column transform. Comparatively, the quality of the extracted watermark is not good for the full transform (both HL and LH bands), as it clearly shows black strips at the borders of the extracted watermark.

Compression attack:

Compression is the most common attack that can take place on an image when sent over a network. In the

proposed work, compression attack using various orthogonal transforms, JPEG compression and compression


using vector quantization is performed on the watermarked image. The performance of the proposed method against this attack is illustrated in terms of the MAE between the watermarked image before and after compression and the MAE between the embedded and extracted watermark. Fig. 5(a)-(d) below shows the compressed watermarked image and the watermark extracted from it when DCT is used for compression with a compression ratio of 1.14.

1.515 47.539 1.708 39.536

(a)Full Haar -HL band (b)Full Haar-LH band

0.897 24.684 1.391 31.028

(c)Column Haar (d) Row Haar

Fig. 5 (a) DCT Compressed watermarked image and watermark extracted from it using full Haar transform and HL band (b) DCT

Compressed watermarked image and watermark extracted from it using full Haar transform and LH band (c) DCT Compressed

watermarked image and watermark extracted from it using column Haar transform (d) DCT Compressed watermarked image and

watermark extracted from it using row Haar transform.

From Fig. 5(a)-(d), it can be seen that the column Haar transform shows better robustness than the row and full Haar transforms, as well as better imperceptibility. For all compression attacks except compression using vector quantization, column Haar shows better robustness; row Haar shows better robustness against the VQ based compression attack.

Noise addition attack: In the noise addition attack, binary distributed run length noise with different runs and Gaussian distributed run length noise are added to the watermarked image. For binary run length noise the discrete magnitude levels are 0 and 1, whereas the Gaussian distributed noise has discrete magnitudes between -2 and 2. Fig. 6(a)-(d) shows the results for binary run length noise with run lengths 10 to 100, and Fig. 7(a)-(d) shows the results for Gaussian distributed run length noise, using the full, column and row Haar transforms.

1 20.234 1 0

(a)Full Haar -HL band (b)Full Haar-LH band

1 13.283 1 0.472

(c)Column Haar (d) Row Haar

Fig. 6 (a) Binary distributed run length noise added watermarked image and watermark extracted from it using full Haar

transform and HL band (b) Binary distributed run length noise added watermarked image and watermark extracted from it using

full Haar transform and LH band (c) Binary distributed run length noise added watermarked image and watermark extracted from

it using column Haar transform (d) Binary distributed run length noise added watermarked image and watermark extracted from

it using row Haar transform.

From Fig. 6(a)-(d), it is observed that embedding the watermark in the LH band of the full transformed host image is strongly robust against the specified binary distributed run length noise attack, closely followed by the row transform.


0.746 0 0.746 15.144

(a)Full Haar -HL band (b)Full Haar-LH band

0.746 0.357 0.746 9.088

(c)Column Haar (d) Row Haar

Fig. 7 (a) Gaussian distributed run length noise added watermarked image and watermark extracted from it using full Haar

transform and HL band (b) Gaussian distributed run length noise added watermarked image and watermark extracted from it

using full Haar transform and LH band (c) Gaussian distributed run length noise added watermarked image and watermark

extracted from it using column Haar transform (d) Gaussian distributed run length noise added watermarked image and

watermark extracted from it using row Haar transform.

As can be seen from Fig. 7, the full Haar transform with the HL band used for embedding the watermark gives the highest robustness against the Gaussian distributed run length noise attack, closely followed by the column Haar transform.

Resizing attack: In resizing attack, image is zoomed in using different techniques and watermark is extracted

from it after getting back the zoomed image to its original size. Similarly, watermark is also extracted from

zoomed image without getting back the zoomed image to its original size. For example, watermarked image of

size 256x256 is zoomed to size 512x512 using bicubic interpolation and brought back to size 256x256 to extract

the watermark. Another approach followed is transform based image zooming in which Hartley, DFT, DCT,

DST and Real Fourier transforms are used to zoom the image as proposed in [19] and zooming using grid based

interpolation[20]. This zoomed image is brought back to size 256x256 and then watermark is extracted from it.

Similarly, image is zoomed to size 384x384 (1.5 times of the original image) and from this zoomed image

watermark is extracted. Fig. 8 shows the results of some representative image resizing attacks using bicubic

interpolation, DCT based resizing and Grid interpolation when full Haar, column Haar and row Haar transform

is used for watermark embedding.

1.795 62.318 2.061 58.311 1.38 25.29 1.65 28.90

Full Haar -HL band Full Haar-LH band Column Haar Row Haar

Resizing and reducing the image using bicubic interpolation

0 0 0 0 0 0 0 0

Full Haar -HL band Full Haar-LH band Column Haar Row Haar

Resizing and reducing the image using DCT

0.265 10.899 0.298 16.11 0.189 12.50 0.234 17.90

Full Haar -HL band Full Haar-LH band Column Haar Row Haar

Resized using Grid based resizing technique

Fig. 8 Watermarked images after resizing-reducing using bicubic interpolation, DCT based resizing and grid based interpolation, and the watermarks extracted from them using the full Haar transform (HL and LH bands), the column Haar transform and the row Haar transform.


From Fig. 8 the following observations can be made. For bicubic interpolation based resizing, the column Haar transform gives good robustness. For DCT based resizing and all other transform based resizing (i.e. DST, Hartley, Real Fourier Transform and DFT), the MAE between embedded and extracted watermark is zero. For resizing using the grid based interpolation technique, the full Haar transform with the HL band used for embedding the watermark shows better robustness than column Haar, row Haar and full Haar with the LH band.

B. Comparison of various transforms when applied as full (HL and LH), column and row transform

The comparison of the various transforms used for embedding the watermark has been done under four categories: full transform with the HL band used for embedding, full transform with the LH band used for embedding, column transform, and row transform. The case of maintaining 100% embedding energy is considered.

Table 1 below shows the comparison of various transforms used for embedding watermark in the HL band.

Table I: Comparison of MAE between embedded and extracted watermark using various orthogonal transforms when the watermark is embedded in the HL band (each transform applied as full transform, HL band)

Attack | DCT | DST | Real Fourier | Sine-cosine | Walsh | Haar
DCT wavelet compression | 20.443 | 26.948 | 21.308 | 27.120 | 61.952 | 116.385
DCT compression | 3.479 | 4.286 | 3.567 | 4.268 | 32.565 | 47.540
DST compression | 6.318 | 3.582 | 6.220 | 3.653 | 32.711 | 51.454
Walsh compression | 51.704 | 41.742 | 51.309 | 42.104 | 6.298 | 70.741
Haar compression | 84.726 | 64.234 | 87.042 | 65.706 | 59.434 | 53.868
JPEG compression | 37.261 | 40.643 | 38.872 | 39.890 | 43.595 | 45.475
VQ compression | 64.754 | 50.502 | 64.438 | 50.665 | 44.098 | 43.263
16x16 crop | 14.322 | 14.896 | 14.101 | 14.945 | 2.249 | 17.356
32x32 crop | 31.799 | 35.040 | 31.818 | 35.122 | 9.216 | 36.044
32x32 crop at centre | 12.342 | 8.355 | 11.776 | 8.912 | 2.978 | 0.759
Binary run length noise (1 to 10) | 0.000 | 0.457 | 0.000 | 0.457 | 0.000 | 0.000
Binary run length noise (5 to 50) | 31.430 | 23.096 | 30.668 | 23.044 | 17.732 | 19.607
Binary run length noise (10 to 100) | 30.516 | 22.822 | 31.792 | 22.876 | 17.043 | 20.234
Gaussian distributed run length noise | 1.122 | 0.857 | 1.138 | 0.888 | 0.000 | 0.000
Bicubic interpolation resize (4 times)-reduce | 20.877 | 20.948 | 21.177 | 20.996 | 37.473 | 60.837
Bicubic interpolation resize (2 times)-reduce | 21.586 | 21.637 | 21.893 | 21.687 | 38.529 | 62.318
DFT resize-reduce | 1.998 | 0.863 | 1.111 | 1.454 | 0.489 | 9.588
Real FT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Hartley resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
DCT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
DST resize2 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Grid resize2 | 6.639 | 4.919 | 6.579 | 4.921 | 8.925 | 10.900
DFT resize 1.5 times | 89.696 | 114.346 | 79.496 | 115.385 | NA | NA
Bicubic interpolation resize 1.5 times | 62.550 | 102.951 | 65.627 | 103.119 | NA | NA
Grid based resize 1.5 times | 232.301 | 184.267 | 215.842 | 188.541 | NA | NA
Histogram equalization | 165.551 | 151.025 | 166.644 | 149.988 | 157.834 | 139.106

From Table 1 it can be seen that different transforms prove better against different types of attacks. However, the highlighted cells of the table, which correspond to the lowest MAE between embedded and extracted watermark, show that Haar, Walsh and DCT perform better than the other transforms. For transform based resizing attacks, all orthogonal transforms used for embedding the watermark show outstanding robustness, with zero MAE between embedded and extracted watermark.
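All table entries are mean absolute error (MAE) values between the embedded and extracted watermark; a minimal sketch of this metric, assuming both watermarks are equal-sized numeric arrays:

```python
import numpy as np

def mae(embedded: np.ndarray, extracted: np.ndarray) -> float:
    """Mean absolute error between two equal-sized watermark images."""
    e = embedded.astype(np.float64)
    x = extracted.astype(np.float64)
    return float(np.mean(np.abs(e - x)))

# A perfectly recovered watermark gives MAE 0.
w = np.array([[10, 20], [30, 40]])
print(mae(w, w))          # 0.0
print(mae(w, w + 2))      # 2.0
```

Zero entries in the tables therefore mean the watermark was recovered exactly after the attack.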

Table 2 shows the performance comparison of the various orthogonal transforms used for embedding the watermark in the LH band.


Table II: Comparison of MAE between embedded and extracted watermark using various orthogonal transforms when watermark is embedded in LH band

Attack | DCT Full LH | DST Full LH | Real Fourier Full LH | Sine-cosine Full LH | Walsh Full LH | Haar Full LH
DCT wavelet compression | 20.721 | 25.389 | 21.637 | 25.277 | 66.185 | 102.888
DCT compression | 3.264 | 3.821 | 3.308 | 3.807 | 32.666 | 39.537
DST compression | 4.442 | 3.262 | 4.341 | 3.412 | 33.190 | 42.127
Walsh compression | 46.591 | 44.015 | 45.818 | 44.431 | 5.768 | 50.598
Haar compression | 86.737 | 74.013 | 89.504 | 75.035 | 62.330 | 54.236
JPEG compression | 34.739 | 38.278 | 35.614 | 37.756 | 43.558 | 50.079
VQ compression | 46.566 | 42.901 | 46.266 | 43.007 | 34.469 | 35.625
16x16 crop | 8.456 | 11.175 | 8.376 | 11.216 | 2.249 | 19.628
32x32 crop | 20.894 | 27.803 | 20.679 | 28.038 | 9.216 | 38.421
32x32 crop at centre | 9.142 | 7.304 | 8.677 | 7.950 | 2.978 | 0.616
Binary run length noise (1 to 10) | 4.811 | 4.212 | 5.672 | 5.213 | 1.356 | 2.983
Binary run length noise (5 to 50) | 1.681 | 1.524 | 1.658 | 1.533 | 0.430 | 0.939
Binary run length noise (10 to 100) | 1.257 | 1.143 | 1.341 | 1.123 | 0.000 | 0.000
Gaussian distributed run length noise | 24.375 | 21.118 | 24.000 | 21.401 | 16.004 | 15.144
Bicubic interpolation resize (4 times)-reduce | 20.939 | 21.022 | 21.417 | 20.952 | 39.501 | 56.909
Bicubic interpolation resize (2 times)-reduce | 21.648 | 21.716 | 22.137 | 21.644 | 40.612 | 58.311
DFT resize-reduce | 2.186 | 1.006 | 1.121 | 1.915 | 0.472 | 6.725
Real FT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Hartley resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
DCT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
DST resize2 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Grid resize2 | 4.723 | 4.193 | 4.708 | 4.174 | 6.122 | 16.118
DFT resize 1.5 times | 90.679 | 96.039 | 77.384 | 102.028 | NA | NA
Bicubic interpolation resize 1.5 times | 62.572 | 91.057 | 64.972 | 91.200 | NA | NA
Grid based resize 1.5 times | 217.187 | 181.693 | 199.233 | 188.681 | NA | NA
Histogram equalization | 162.206 | 150.612 | 162.204 | 150.193 | 149.343 | 80.774

Once again, from Table 2 it can be observed that different transforms show different degrees of robustness against different attacks. Walsh, Haar and DCT show better performance than the other transforms. Walsh performs well for the majority of attacks, followed by Haar and then DCT.
Table 3 below shows the performance comparison of orthogonal column transforms against various attacks.

Table III: Comparison of MAE between embedded and extracted watermark using various orthogonal column transforms

Attack | DCT Column | DST Column | Real Fourier Column | Sine-cosine Column | Walsh Column | Haar Column
DCT wavelet compression | 39.901 | 47.715 | 44.457 | 47.422 | 71.895 | 63.734
DCT compression | 0.000 | 0.064 | 0.049 | 0.054 | 7.182 | 24.685
DST compression | 0.223 | 0.000 | 0.209 | 0.057 | 7.598 | 26.212
Walsh compression | 15.788 | 12.396 | 15.634 | 12.691 | 0.000 | 11.720
Haar compression | 33.202 | 30.727 | 37.628 | 40.407 | 11.901 | 0.000
JPEG compression | 28.969 | 31.591 | 29.270 | 31.426 | 170.211 | 30.450
VQ compression | 47.692 | 40.766 | 47.639 | 41.388 | 39.719 | 37.361
16x16 crop | 15.491 | 21.870 | 16.259 | 22.664 | 4.733 | 1.651
32x32 crop | 32.745 | 42.467 | 33.024 | 44.040 | 17.227 | 5.728
32x32 crop at centre | 13.312 | 10.437 | 13.065 | 10.535 | 6.900 | 0.000
Binary run length noise (1 to 10) | 0.000 | 0.315 | 0.000 | 0.342 | 0.000 | 0.000
Binary run length noise (5 to 50) | 15.398 | 11.687 | 15.524 | 13.049 | 12.651 | 13.358
Binary run length noise (10 to 100) | 17.317 | 13.152 | 16.066 | 11.250 | 12.779 | 13.284
Gaussian distributed run length noise | 0.770 | 0.787 | 0.974 | 0.724 | 1.099 | 0.358
Bicubic interpolation resize (4 times)-reduce | 12.059 | 13.548 | 12.321 | 13.737 | 21.579 | 24.578
Bicubic interpolation resize (2 times)-reduce | 12.498 | 14.018 | 12.768 | 14.212 | 22.267 | 25.298
DFT resize-reduce | 0.364 | 0.349 | 0.347 | 0.397 | 0.241 | 1.337
Real FT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Hartley resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
DCT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
DST resize2 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Grid resize2 | 4.978 | 4.278 | 4.986 | 4.393 | 5.929 | 12.502
DFT resize 1.5 times | 129.611 | 139.405 | 136.072 | 127.135 | NA | NA
Bicubic interpolation resize 1.5 times | 124.112 | 130.534 | 129.561 | 121.534 | NA | NA
Grid based resize 1.5 times | 179.530 | 162.725 | 191.747 | 156.927 | NA | NA
Histogram equalization | 169.434 | 170.085 | 169.407 | 171.503 | 161.122 | 104.832

Table 3 shows that the column Haar transform provides noticeable robustness against the various attacks performed on watermarked images. It is followed by the column DCT transform, which shows better robustness against bicubic interpolation based resizing, DCT wavelet compression and the JPEG compression attack.
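The column, row and full variants compared in these tables differ only in how the transform matrix is applied; a small sketch, using a random orthonormal matrix as a stand-in for Haar/Walsh/DCT:

```python
import numpy as np

def column_transform(T, A):
    """Apply the transform to each column of A."""
    return T @ A

def row_transform(T, A):
    """Apply the transform to each row of A."""
    return A @ T.T

def full_transform(T, A):
    """Apply the transform to rows and columns: the usual 2D separable transform."""
    return T @ A @ T.T

# Any orthonormal matrix stands in here for Haar/Walsh/DCT.
rng = np.random.default_rng(0)
T = np.linalg.qr(rng.random((4, 4)))[0]
A = rng.random((4, 4))

# With orthonormal T, every variant is exactly invertible:
assert np.allclose(T.T @ column_transform(T, A), A)
assert np.allclose(row_transform(T, A) @ T, A)
assert np.allclose(T.T @ full_transform(T, A) @ T, A)
```

A column (or row) transform needs only one matrix multiplication per image instead of two, which is why the column/row variants are computationally cheaper than the full transform.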

Table IV: Comparison of MAE between embedded and extracted watermark using various orthogonal row transforms

Attack | DCT Row | DST Row | Real Fourier Row | Sine-cosine Row | Walsh Row | Haar Row
DCT wavelet compression | 42.617 | 46.821 | 45.118 | 47.016 | 72.467 | 70.673
DCT compression | 0.000 | 0.058 | 0.043 | 0.044 | 8.089 | 31.029
DST compression | 0.163 | 0.000 | 0.146 | 0.058 | 8.312 | 31.314
Walsh compression | 17.776 | 15.514 | 17.712 | 15.666 | 0.000 | 11.889
Haar compression | 30.536 | 29.565 | 34.666 | 36.577 | 12.185 | 0.000
JPEG compression | 27.238 | 28.908 | 27.834 | 28.850 | 25.419 | 33.579
VQ compression | 40.216 | 37.850 | 40.090 | 37.739 | 33.740 | 28.905
16x16 crop | 10.954 | 19.150 | 11.259 | 19.312 | 6.571 | 1.129
32x32 crop | 27.507 | 41.881 | 27.676 | 41.523 | 21.900 | 6.535
32x32 crop at centre | 11.784 | 10.104 | 11.492 | 10.322 | 5.635 | 0.000
Binary run length noise (1 to 10) | 3.816 | 3.215 | 3.751 | 4.384 | 2.919 | 4.456
Binary run length noise (5 to 50) | 1.875 | 1.619 | 1.872 | 1.696 | 2.185 | 1.022
Binary run length noise (10 to 100) | 1.087 | 1.077 | 1.215 | 1.034 | 1.317 | 0.472
Gaussian distributed run length noise | 13.894 | 11.765 | 13.519 | 12.048 | 10.139 | 9.088
Bicubic interpolation resize (4 times)-reduce | 12.019 | 13.232 | 12.451 | 13.164 | 21.628 | 28.114
Bicubic interpolation resize (2 times)-reduce | 12.454 | 13.695 | 12.899 | 13.623 | 22.313 | 28.904
FFT resize-reduce | 0.370 | 0.350 | 0.343 | 0.384 | 0.246 | 1.370
Real FT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Hartley resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
DCT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
DST resize2 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Grid resize2 | 4.367 | 4.292 | 4.438 | 4.126 | 5.333 | 17.907
DFT resize 1.5 times | 144.271 | 150.256 | 145.672 | 142.358 | NA | NA
Bicubic interpolation resize 1.5 times | 134.597 | 138.563 | 133.834 | 135.818 | NA | NA
Grid based resize 1.5 times | 179.770 | 166.270 | 186.285 | 165.164 | NA | NA
Histogram equalization | 166.751 | 163.012 | 168.509 | 163.228 | 164.713 | 71.624

Table 4 shows that the row Haar transform gives the highest robustness against the largest number of attacks.
Since in Tables 1 to 4 Haar, Walsh and DCT prominently show better robustness than the other transforms in their full, column and row versions, a representative comparison of the full, column and row Haar transforms is shown in Table 5.


Table V: Comparison of MAE between embedded and extracted watermark using Column, Row and Full Haar transform

Attack | Haar Column Transform | Haar Row Transform | Haar Full Transform HL | Haar Full Transform LH
DCT wavelet compression | 74.723 | 84.251 | 132.565 | 119.884
DCT compression | 30.121 | 38.501 | 55.895 | 48.312
DST compression | 32.101 | 38.849 | 59.616 | 50.927
Walsh compression | 12.111 | 12.369 | 70.741 | 50.598
Haar compression | 0.000 | 0.000 | 53.868 | 54.236
JPEG compression | 31.235 | 33.412 | 49.196 | 50.561
VQ compression | 42.831 | 33.849 | 52.419 | 42.511
16x16 crop | 1.651 | 1.129 | 17.356 | 19.628
32x32 crop | 5.728 | 6.535 | 36.044 | 38.421
32x32 crop at centre | 0.000 | 0.000 | 0.759 | 0.616
Binary run length noise (1 to 10) | 0.000 | 6.313 | 0.000 | 6.446
Binary run length noise (5 to 50) | 16.609 | 1.472 | 26.385 | 1.334
Binary distributed run length noise (10 to 100) | 16.580 | 0.487 | 26.650 | 0.000
Gaussian distributed run length noise | 0.460 | 11.679 | 0.000 | 19.519
Bicubic interpolation resize (4 times)-reduce | 27.853 | 33.102 | 68.123 | 64.747
Bicubic interpolation resize (2 times)-reduce | 28.656 | 34.018 | 69.783 | 66.349
FFT resize-reduce | 1.630 | 1.621 | 9.588 | 6.725
Real FT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000
Hartley resize-reduce | 0.000 | 0.000 | 0.000 | 0.000
DCT resize-reduce | 0.000 | 0.000 | 0.000 | 0.000
DST resize-reduce | 0.000 | 0.000 | 0.000 | 0.000
Grid resize2 | 11.215 | 15.207 | 11.475 | 12.324
Histogram equalization | 116.753 | 88.768 | 164.577 | 97.913

In Table 5, a cell highlighted in yellow indicates that for the attack in the corresponding row, the transform in the corresponding column gives the best performance among the similar types of transforms (full/row/column), and green indicates the second best performer. From Table 5 it can be prominently seen that the column Haar transform proves to be the best performer against various attacks, closely followed by the row Haar transform. When the other row and column transforms are compared, the column/row Walsh transform is the next best performer, followed by the column/row DCT transform.
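The best-performer reading of Table 5 can be reproduced mechanically by taking the minimum MAE in each row; a sketch over a few representative rows (values copied from Table 5):

```python
# MAE per attack for (Haar column, Haar row, Haar full HL, Haar full LH), from Table 5.
table5 = {
    "Walsh compression":      (12.111, 12.369, 70.741, 50.598),
    "16x16 crop":             (1.651, 1.129, 17.356, 19.628),
    "Histogram equalization": (116.753, 88.768, 164.577, 97.913),
}
variants = ("Haar column", "Haar row", "Haar full HL", "Haar full LH")

wins = {v: 0 for v in variants}
for attack, maes in table5.items():
    best = variants[min(range(len(maes)), key=lambda i: maes[i])]
    wins[best] += 1
    print(f"{attack}: best = {best}")
print(wins)
```

Tallying such wins over all rows is exactly how the column-Haar-first, row-Haar-second ranking above is obtained.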

VI. Conclusion
For the majority of attacks the Haar transform gives the best performance, particularly when applied column wise or row wise, followed by the Walsh transform applied column wise or row wise. Between the column and row versions of Haar/Walsh, the column transform performs better, closely followed by the row transform (except for VQ based compression and binary run length noise with run lengths 5 to 50 and 10 to 100). The overall performance of the full, column and row transforms can be rated in the following sequence:
1. Haar row transform
2. Haar column transform
3. Walsh column/row transforms
4. DCT column/row transforms
For bicubic interpolation based resizing (4 times and 2 times), Haar is the worst performer. The DCT row/column transform is the best in this case, followed by the Real Fourier row/column transform. For compression attacks using DCT wavelet and DCT, the DCT column/row/full transforms give better performance; among them the row transform gives the best results. Robustness against attacks improves as the embedding energy is increased. We have studied three cases of varying energy of the embedded watermark, namely 60%, 100% and 140%. Although the robustness is better for 140% energy, the error in the watermarked image is noticeable. Hence the results for 100% are given in this paper.

References
[1] Pravin Pithiya, H. L. Desai, "DCT based digital image watermarking, dewatermarking and authentication", International Journal of Latest Trends in Engineering and Technology, Vol. 2, Issue 3, pp. 213-219, May 2013.
[2] Mauro Barni, Franco Bartolini, Vito Cappellini, Alessandro Piva, "A DCT-domain system for robust image watermarking", Signal Processing, Vol. 66, pp. 357-372, 1998.
[3] Mei Jiansheng, Li Sukang, Tan Xiaomei, "A digital watermarking algorithm based on DCT and DWT", Proc. of International Symposium on Web Information Systems and Applications, pp. 104-107, May 2009.


[4] Surya Pratap Singh, Paresh Rawat, Sudhir Agrawal, "A robust watermarking approach using DCT-DWT", International Journal of Emerging Technology and Advanced Engineering, Vol. 2, Issue 8, pp. 300-305, August 2012.
[5] Saeed Amirgholipour, Ahmed Naghsh-Nilchi, "Robust digital image watermarking based on joint DWT-DCT", International Journal of Digital Content Technology and its Applications, Vol. 3, No. 2, pp. 42-54, June 2009.
[6] Kaushik Deb, Md. Sajib Al-Seraj, Md. Moshin Hoque, Md. Iqbal Hasan Sarkar, "Combined DWT-DCT based digital image watermarking technique for copyright protection", Proc. of International Conference on Electrical and Computer Engineering, pp. 458-461, Dec 2012.
[7] Zhen Li, Kim-Hui Yap, Bai-Ying Li, "A new blind robust image watermarking scheme in SVD-DCT composite domain", Proc. of 18th IEEE International Conference on Image Processing, pp. 2757-2760, 2011.
[8] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Performance comparison of DCT and Walsh transforms for watermarking using DWT-SVD", International Journal of Advanced Computer Science and Applications, Vol. 4, No. 2, pp. 131-141, 2013.
[9] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Hybrid watermarking of colour images using DCT-Wavelet, DCT and SVD", International Journal of Advances in Engineering and Technology, Vol. 6, Issue 2, May 2013.
[10] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Robust watermarking using Walsh wavelets and SVD", International Journal of Advances in Science and Technology, Vol. 6, No. 4, May 2013.
[11] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Performance comparison of wavelets generated from four different orthogonal transforms for watermarking with various attacks", International Journal of Computer and Technology, Vol. 9, No. 3, pp. 1139-1152, July 2013.
[12] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Performance of watermarking system using wavelet column transform under various attacks", International Journal of Computer Science and Information Security, Vol. 12, No. 2, pp. 30-35, Feb 2014.
[13] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Robust watermarking scheme using column DCT wavelet transform under various attacks", International Journal on Computer Science and Engineering, Vol. 6, No. 1, pp. 31-41, Jan 2014.
[14] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Performance evaluation of watermarking technique using full, column and row DCT wavelet transform", International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, Vol. 3, Issue 1, Jan 2014.
[15] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Robust watermarking technique using hybrid wavelet transform generated from Kekre transform and DCT", International Journal of Scientific Research Publication, Vol. 4, Issue 2, pp. 1-13, Feb 2014.
[16] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Effect of weight factor on the performance of hybrid column wavelet transform used for watermarking under various attacks", International Journal of Computer and Technology, Vol. 12, No. 10, pp. 3997-4013, March 2014.
[17] H. B. Kekre, Tanuja Sarode, Prachi Natu, "Image compression using Real Fourier Transform, its wavelet transform and hybrid wavelet with DCT", International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 4, Issue 5, pp. 41-47, 2013.
[18] H. B. Kekre, J. K. Solanki, "Comparative performance of various trigonometric unitary transforms for transform image coding", International Journal of Electronics, Vol. 44, pp. 305-315, 1978.
[19] H. B. Kekre, Tanuja Sarode, Shachi Natu, "Image zooming using sinusoidal transforms like Hartley, DFT, DCT, DST and Real Fourier Transform", International Journal of Computer Science and Information Security, Vol. 12, No. 7, July 2014.
[20] H. B. Kekre, Tanuja Sarode, Sudeep Thepade, "Grid based image scaling technique", International Journal of Computer Science and Applications, Vol. 1, No. 2, pp. 95-98, August 2008.


International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research)

International Journal of Emerging Technologies in Computational

and Applied Sciences (IJETCAS)

www.iasir.net

IJETCAS 14-533; © 2014, IJETCAS All Rights Reserved Page 94

ISSN (Print): 2279-0047

ISSN (Online): 2279-0055

Optimizing Pair Programming Practice through PPPA
Smitha Madhukar
Assistant Professor, Department of Computer Science,
Amrita Vishwa Vidyapeetham, Mysore Campus, Karnataka, INDIA.

Abstract: Pair programming is a technique in which two individuals collaborate to accomplish a single programming task. It is a good practice towards an agile software development process. Pair programming increases the quality of the software product and achieves higher competitive benefits compared to solo programming. Pair programming occurs when two programmers engage in the development of software simultaneously at the same workstation. This results in lower risk, higher productivity, enhanced communication skills and improved technical expertise. This paper explores the benefits of pair programming and highlights the findings of a programming activity performed by students in a Java lab. Furthermore, it presents an analysis and compiles the outcomes of the pair programming methodology adopted in the lab with students as participants.

Keywords: Pair programming, cohesion, coupling, Driver, Navigator, Cyclomatic complexity, PPPA.

I. Introduction

Pair programming is a software development strategy in which two programmers strive to complete a task working at the same machine. The two programmers take on the roles of driver and navigator respectively. The driver writes the code and the navigator tests it as it is written. This practice scores over other methods because the navigator suggests many ideas for improving the quality of the code and helps resolve issues that may pose potential threats to the software. With pair programming the driver gets the freedom to concentrate only on the coding aspects, while the navigator primarily focuses on the safety, reliability and performance of the end product. They swap their roles quite often.

Pair programming is quite beneficial in enhancing communication between driver and navigator. Both help each other eliminate coding deficiencies by getting their ideas implemented and constantly exchanging technical views. This often results in high-performance software. There is sufficient evidence that when programmers work in pairs they tend to produce substantially better results than when working solo. This form of programming has more advantages when solving big problems. With pair programming, two individuals effectively operate on the same algorithm and the same code, and share equal credit when the logic succeeds.

The concept of pair programming is indispensable in a software industry where professionals rely upon productive teamwork coupled with proficient technical synergy. An almost analogous scenario prevails in an academic environment, in which the same protocol can be applied to evaluate students on parameters such as performance, conduct, perseverance and persistence. It can be noted that students performed better executing Java programs when they worked in pairs. They comprehend, reason and analyze well in a pair programming framework.

II. Methods in pair programming

Pair programming can be adopted in the following forms:
1. Master-Master combination: This happens to be the best and proven approach for achieving good results.
2. Master-Amateur combination: This pairing leads to monitoring of amateurs by experts. It provides many opportunities for amateurs to learn and practise novel approaches.
3. Amateur-Amateur combination: This combination is considered the least beneficial in terms of quality. It is strongly criticized in an industrial setup but followed in an academic context.
Remote pair programming is a variation of pair programming in which driver and navigator are in different places. They work by means of desktop sharing or collaborative editors.

III. Feasibility of pair programming
From an industry perspective, pair programming is found to be extremely helpful in saving the time and effort of a software professional. This programming practice produces fewer bugs, more readable code and ultimately a more efficient architecture. Working collaboratively is a step towards developing reusable code.

Smitha Madhukar et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August 2014, pp. 94-98. IJETCAS 14-533; © 2014, IJETCAS All Rights Reserved.

Additional benefits are:
1. Pair programming helps establish better communication between the professionals who take up the roles of driver and navigator.
2. It greatly reduces the time and effort spent in coding and testing.
3. Since pair programming offers scope for switching roles between driver and navigator, it helps both to learn and implement new things.
4. In a software industry, when two experts with versatile experience are involved in solving a programming task, it becomes simpler to find a solution.
5. Pair programming introduces a higher degree of dynamic challenges.

Figure 1: Two professionals involved in pair programming
Figure 2: Pair programming process (driver and navigator; coding and testing result in stable software)

IV. Experimenting with pair programming in an academic framework
Computer Science happens to be a challenging discipline. Even more challenging is the manner in which it can be taught to students so that they enjoy the experience and learn more. Computer science tutors are therefore always exploring innovative techniques to make teaching more effective and enriching. In this regard, an experiment on pair programming was conducted in a Java lab. The details are as follows: 12 lab sessions were handled by the author to teach Java programming to postgraduate students. Students were asked to complete 25 programs related to advanced Java concepts. The pair programming method was applied and the results were recorded by the author. It was compared to the solo programming method previously adopted by the author to teach C++ to the same batch of postgraduate students in their previous semester. The purpose of this study was to examine the behaviour and performance of students during the lab sessions. Assessment was made on five key parameters. A brief description of the outcome is presented below:

Criteria | Solo Programming (C++) | Pair Programming (Java)
Participation | 30% | 80-90%
Inquisitive nature | 15% | 50-60%
Behaviour | 50% | 75%
Debugging skills | 50% | 90-95%
Perseverance | 10% | 60-70%

Table 1: Comparison between solo and pair programming
In addition to enjoying their lab sessions, students also exhibited confidence and showed more interest towards learning. There was a significant level of interaction between students, which resulted in better quality programs. From the above table it can be inferred that students performed more productively when paired than when programming solo. Students were asked to answer a set of questions on the last day of the lab sessions; the resulting graph is shown below.

V. Analysis of the Pair Programming Performance Algorithm (PPPA)
In this paper an algorithm called the Pair Programming Performance Algorithm is presented to assess the results of pair programming efforts. By analysing this algorithm it is easy to understand how pair programming performed in the academic setup described above. The factors taken into consideration for judging performance are:
1. Effort estimation
2. Time estimation
3. Cohesion
4. Coupling
5. Cyclomatic complexity
6. Bugs per line of code
The above metrics are measurable and can be quantified. Let us examine the definitions of these terms with respect to the pair programming methodology. Effort is the combined endeavour of driver and navigator in accomplishing the given programming activity. Time is the total time taken by both driver and navigator in accomplishing the given programming activity. Cohesion is the extent to which the unit coded by the driver can be seamlessly integrated by the navigator when they switch roles. Coupling is the extent of dependency between components developed by the driver and navigator. Cyclomatic complexity is the complexity of the developed code. Bugs refer to behaviour the software exhibits that it is not supposed to.
Let effort be denoted by e: e = ½(driver) + ½(navigator). Let t denote the time: t = ½(driver) + ½(navigator). Pair programming shows that when developers work in pairs they develop code faster than a single developer, which is evident from the above equations. The coupling and cohesion estimates can be made as follows:


The percentage of seamless integration performed is found to be higher in PPPA because driver and navigator switch roles quite often. This introduces higher cohesion, because the driver is constantly required to convince the navigator before proceeding further. Quite obviously, when the navigator takes on the role of driver, the navigator's developed module integrates seamlessly into the already existing modules of the driver. Let S1 be the set of modules developed by the driver, S1 = {M1, M2, M3, ..., Mn}, and S2 the set of modules developed by the navigator, S2 = {M1, M2, M3, ..., Mn}. The cohesion factor is CF = S1 ∪ S2.

The software as a whole can work properly only when there is a moderate level of interdependence between the programs of driver and navigator, because that yields reusability. This is illustrated by the coupling activity. To achieve a higher rate of success, the driver writes code only when the navigator consents to it; the navigator certifies that the code is error free only after applying various levels of tests and checks. In PPPA the coupling is weak because both driver and navigator strive to achieve the desired level of efficiency. The coupling measure (CM) can be interpreted as follows. Since one module may invoke another, the inputs supplied to a module may be the output of another module. Let p and q be two interdependent modules, m the number of interconnections between them, and n the number of inputs supplied to the modules. Then

CM(p, q) = (p(n) + q(n)) / (Σn + Σm)

Since CM is relatively low and CF is high, the result is comparatively reliable software. Cyclomatic complexity (CC) is a quantitative measure of a function's complexity. The independent paths in software developed using pair programming are relatively few. Proven results indicate that a higher CC leads to more errors, so the number of bugs depends on the cyclomatic complexity. Let b denote the bug density: the higher the CC, the higher the bug density, i.e. CC ∝ b.
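The PPPA quantities above can be written down directly; a sketch, noting that the reading of the coupling measure as (p(n) + q(n)) / (Σn + Σm) and the standard McCabe form E − N + 2P for cyclomatic complexity are assumptions where the source is terse:

```python
def effort(driver: float, navigator: float) -> float:
    """e = 1/2 (driver) + 1/2 (navigator); the time estimate t has the same form."""
    return 0.5 * driver + 0.5 * navigator

def cohesion_factor(s1: set, s2: set) -> set:
    """CF = S1 U S2: the driver's and navigator's modules taken together."""
    return s1 | s2

def coupling_measure(p_inputs: int, q_inputs: int, interconnections: int) -> float:
    """CM(p, q) = (p(n) + q(n)) / (sum n + sum m); a low CM means weak coupling."""
    total_inputs = p_inputs + q_inputs
    return total_inputs / (total_inputs + interconnections)

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe's CC = E - N + 2P over the control-flow graph."""
    return edges - nodes + 2 * components

print(effort(6.0, 4.0))                  # 5.0
print(coupling_measure(3, 2, 5))         # 0.5
print(cyclomatic_complexity(9, 8))       # 3
```

Under these readings, a low CM and a high CF together correspond to the "weak coupling, high cohesion" outcome the algorithm attributes to frequent role switching.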

VI. Empirical evidence for pair programming conducted in an academic environment
From the above theoretical representations and mathematical equations, empirical evidence can be obtained for the experiment conducted in the academic environment. The students engaged in rigorous Java programming and made a sincere attempt to extend the functionality of the existing programs by introducing a greater amount of flexibility into the code. Five parameters were presented to the students in the form of a questionnaire:
1. What is the degree of satisfaction obtained after your pair programming experience?
2. Did you experience resilient flow during pair programming?
3. Do you support the concept of collective code ownership during pairing?
4. Did you proceed with doing the right thing?
5. Were there fewer interruptions during pairing?
The graph below depicts the findings of the author based upon the answers given by the students. The author interviewed each student to get deeper insights into the adopted strategy. The degree of satisfaction was observed to be high. The flow during pair programming happens quite smoothly and is hardly susceptible to interruptions. Students were of the opinion that they gained a good amount of expertise when they were subjected to pair programming. Even though they worked in pairs, they gained a better understanding of all the programs by exchanging ideas with their partners. Students requested fewer breaks during lab sessions since they enjoyed the whole process, and opined that they encountered fewer interruptions compared to solo programming. Thus the graph illustrated below justifies the pair programming performance algorithm.


Figure 3: Evaluation of Pair programming performance algorithm

VII. Conclusion
This paper has provided a broader view of pair programming by illustrating an example from the academic perspective. There is tremendous scope for future work to consider a live scenario from the software industry perspective. The experimental results provided form a basis for enhancing this work, and the same experiment can be extended to include more parameters. The primary advantage of PPPA is that it supports the methodology by presenting ample evidence. In the current context, PPPA works well with the existing factors mentioned above. It can be observed that PPPA shows a significant improvement in analysing pair programming techniques.

REFERENCES
[1] Laurie Williams, Strengthening the case for pair programming, 2000.
[2] Salleh, A systematic review of pair programming, 2008.
[3] Williams, Guidelines for implementing pair programming, 2006.
[4] Gallis, H., An initial framework for pair programming, 2003.
[5] Roland Doepke, Markus Soworke, Pair programming interfaces and research, 2009.
[6] Tammy Vandergrift, Coupling pair programming and writing, 2004.
[8] Sven Heiberg, Unno Puus, Priit Salumma, Asko Seeba, Pair programming effect on programmers' productivity, 2003.
[9] Mawarny Rejab, Mazni Omar, Mazda Mohd, Kairul Bahiya Ahmed, Pair programming in inducing knowledge sharing, Proceedings of the 3rd International Conference on Computing and Informatics, 2011.
[10] Mark Antony Poff, Pair programming to facilitate training of newly hired professionals, thesis submitted to Florida Institute of Technology, 2003.

[Figure 3 (bar chart of percentages): Resilient flow 70%, Fewer interruptions 75%, Collective code ownership 90%, Increased discipline 80%, Satisfaction 85%]


International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research)

International Journal of Emerging Technologies in Computational

and Applied Sciences (IJETCAS)

www.iasir.net

IJETCAS 14-536; © 2014, IJETCAS All Rights Reserved Page 99

ISSN (Print): 2279-0047

ISSN (Online): 2279-0055

Estimation of Inputs for a Desired Output of a Cooperative and Supportive Neural Network

1P. Raja Sekhara Rao, 2K. Venkata Ratnam, 3P. Lalitha

1Department of Mathematics, Government Polytechnic, Addanki - 523 201, Prakasam, A.P., INDIA.
2Department of Mathematics, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, Jawahar Nagar, Hyderabad-500078, INDIA.
3Department of Mathematics, St. Francis College for Women, Begumpet, Hyderabad, INDIA.

__________________________________________________________________________________________

Abstract: In this paper a cooperative and supportive neural network involving time delays is considered. External inputs to the network are allowed to vary with respect to time. The asymptotic behavior of solutions of the network system with variable inputs is studied with respect to its counterpart with constant inputs. With suitable restrictions on the inputs, it is noticed that solutions of the network may be made to approach a pre-specified output.

Keywords: Co-operative and Supportive Neural Network, Variable Inputs, Desired Output, Convergence.

__________________________________________________________________________________________

I. Introduction

This paper studies the influence of time-varying exogenous inputs on a cooperative and supportive neural network. A model of a cooperative and supportive neural network (CSNN, for short) was introduced by Sree Hari Rao and Raja Sekhara Rao [9]. It takes into account the collective capabilities of neurons involved with tasks divided and distributed to sub-networks of neurons. Such networks have many applications, for example in industrial information management (hierarchical systems), which involves the distribution and monitoring of various tasks. They are also useful in classification and clustering problems, data mining and financial engineering [6,7,8], and they are utilized for parameter estimation of auto-regressive signals and to decompose complex classification tasks into simpler subtasks and solve them.

In a recent paper [11], the authors considered time delays in the transmission of information from the sub-networks to the main one, as well as in the processing of information within a sub-network itself (before transmission of information to the main network). Qualitative properties of solutions of the system were studied, and sufficient conditions for global asymptotic stability of the equilibrium pattern of the system were established even in the presence of time delays. In the present paper, we consider the CSNN model of [11] with time delays to study the influence of time-varying inputs on the system. The motivation for this study stems from the observations of [10] that the applicability of a neural network may be increased by the choice of inputs, and that inputs play a key role in attaining desired outputs. A proper choice of inputs could thus be an alternative to modifying the neural network for each application, and an existing neural network may be utilized for different tasks. Besides this, the presence of time-varying inputs makes the system non-autonomous, and the study enriches the literature. Mathematical studies of neural networks have concentrated on the stability of equilibrium patterns. Equilibria are stationary solutions of the system and correspond to memory states of the network. Stability of an equilibrium implies a recall of a memory state. Thus, such stability analysis of neural networks is confined to the recall of memories only, and we may not reach the desired output for which the network is intended. In the present study, we deviate from this recall of memories and instead look for ways of reaching a desired solution.

An attempt was made in [9] to explain briefly the influence of variable inputs on the asymptotic nature of solutions of the CSNN model. The present study extends this work. We concentrate on the interplay between the inputs and outputs of the network. To this end, several results are established that estimate or restrict the inputs so as to obtain a desired or pre-specified output, and that explain the behavior of solutions in the presence of variable inputs. The work also extends the study of [10] carried out for BAM networks. As remarked in [10], the convergence to a desired output explained here should not be confused with convergence of the output function of the network. Results are available in the literature that consider time-varying inputs in various directions [1-3,5,12], but our emphasis here is on the utilization of these inputs to make solutions of the system approach an a priori value of the output. We reiterate that this is not yet another usual study of the qualitative behavior of solutions of the system under the influence of variable inputs.

The paper is organized as follows. In Section 2, the model under consideration is explained, and the asymptotic behavior of solutions and their relation with the solutions of the corresponding system with constant inputs are discussed. Section 3 deals with the input-output trade-off; estimates on the inputs are provided for approaching a desired, preset output for the network. A discussion follows in Section 4.


P. Raja Sekhara Rao et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August,

2014,pp. 99-105


II. The Model and Asymptotic Behavior

The following model, system (2.1), is considered in [11]. In (2.1), x_i, i = 1, 2, ..., n, denote a typical neuron in neural field X, and y_ij denote a subgroup of neurons in another neuronal field Y attached to x_i. The x_i may be considered to form the main group of neurons which are required to perform the task, while the y_ij constitute a subgroup of neurons attached to each x_i, to which x_i assigns some of its task; the y_ij support, coordinate and cooperate with x_i in completing the task. The passive decay rates of the neurons x_i and y_ij are positive constants, the synaptic connection weights are assumed to be real or complex constants, and a further coefficient denotes the rate of distribution of information between x_i and y_ij. The weight connections connect the i-th neuron in one neuronal field to the j-th neuron in another neuronal field. The neuronal output response functions are more commonly known as the signal functions. One delay parameter signifies the time delay in transmission of information from a sub-network neuron y_ij to a main network neuron x_i, while the delay in the second equation represents the processing delays in the subsystems. The exogenous inputs are assumed to be constants in [11]. For more details of the terms and design of the CSNN, readers are referred to [9].

Introducing time-varying inputs in place of the constant inputs in the system (2.1), we get the system (2.2).

The following initial functions (2.3) are assumed for the system (2.2), where the initial functions are continuous and bounded on the initial interval. We assume that the response functions satisfy the conditions (2.4), (2.5) and (2.6), in which the bounding constants are positive.

Under the conditions (2.4)-(2.6) on the response functions, and with the inputs bounded, continuous functions, it is not difficult to see that the system (2.2) possesses unique solutions that are continuable on their maximal intervals of existence ([11]).

Since (2.2) is non-autonomous, it may not possess equilibrium patterns (constant solutions). A solution of (2.1) or (2.2) is denoted by (x, y) throughout. We therefore study the asymptotic behavior of its solutions. We recall from [10] that two solutions of the system (2.2) are asymptotically near if the norm of their difference tends to zero as t → ∞.
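Asymptotic nearness can be checked numerically. The sketch below is not the CSNN system itself but a generic one-neuron delayed model x'(t) = -a·x(t) + b·tanh(x(t-τ)) + I(t), chosen only to illustrate how two solutions started from different initial functions become asymptotically near when the decay rate dominates the connection weight; all names and parameter values here are our own assumptions, not those of the paper.

```python
import math

def simulate(x0, a=2.0, b=0.5, tau=1.0, T=40.0, dt=0.01):
    """Euler scheme for x'(t) = -a*x(t) + b*tanh(x(t - tau)) + I(t),
    with constant initial function x(s) = x0 on [-tau, 0]."""
    n_delay = int(tau / dt)
    hist = [x0] * (n_delay + 1)        # buffer holding x(t - tau) ... x(t)
    for k in range(int(T / dt)):
        t = k * dt
        I = 1.0 + 0.5 * math.exp(-t)   # time-varying input converging to 1
        x, x_del = hist[-1], hist[0]
        hist.append(x + dt * (-a * x + b * math.tanh(x_del) + I))
        hist.pop(0)
    return hist[-1]

# Two solutions from very different initial functions end up together:
xa = simulate(3.0)
xb = simulate(-2.0)
print(abs(xa - xb))   # tiny: the two solutions are asymptotically near
```

Because a = 2 exceeds b times the Lipschitz bound of tanh (which is 1), the difference between any two solutions contracts, mirroring the role of the parametric condition (2.7).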

In the following, we present results on the asymptotic nearness of solutions of (2.2). Our first result is

Theorem 2.1: Any pair of solutions of (2.2) are asymptotically near provided the parametric condition (2.7) holds.



Proof: Consider a suitable Lyapunov functional V. Along the solutions of (2.2), the upper right derivative of V is estimated using (2.4)-(2.6). Integrating both sides with respect to t, it follows that V(t) is bounded and that the integrals of the difference terms are finite. But the solutions of (2.2) are also bounded, and hence their derivatives are also bounded.


Therefore, the difference terms are uniformly continuous, and we may conclude that they tend to zero as t → ∞ (e.g., [4]). This concludes the proof.

The following result provides conditions under which all solutions of (2.2) are asymptotic to the solutions of (2.1); it shows that, for a proper choice of input functions, the stability of the system (2.1) is not altered by the presence of time-dependent inputs.

Theorem 2.2: Assume that the parametric conditions (2.7) hold, and further let the time-varying inputs of (2.2) be suitably near the constant inputs of (2.1). Then any solution (x, y) of (2.2) is asymptotic to the corresponding solution of (2.1).

Proof: To establish this, we employ the same functional as in Theorem 2.1. Proceeding as in Theorem 2.1, we obtain, after a simplification and rearrangement, an estimate on the upper right derivative of V. The rest of the argument is the same as that of Theorem 2.1, and hence omitted. Thus, the conclusion follows.

We now recall from [11] that the system (2.1) has a unique equilibrium pattern for any set of input vectors provided the parameters satisfy the stated condition. Then we have

Corollary 2.3: Assume that all the hypotheses of Theorem 2.2 are satisfied. Further, if (2.1) possesses an equilibrium pattern, then all solutions (x, y) of (2.2) approach it.

Proof: The result obviously follows from the observation that the equilibrium pattern is also a solution of (2.1), together with the choice made in Theorem 2.2.

The following example illustrates the above results.

Example 2.4: Consider a system having two neurons in X, each supported by two neurons in Y, involving time delays, with the parameters and response functions chosen as indicated. Clearly, the conditions for both the existence of a unique equilibrium and its stability are satisfied for any pair of constant inputs.


Now choose the variable inputs as indicated. It is easy to see that the required condition holds, and we have:
(i) The conditions of Theorem 2.1 are satisfied, and all solutions of the system are asymptotic to each other.
(ii) The conditions of Corollary 2.3 are satisfied, and all solutions of the system approach the equilibrium pattern of the corresponding system with constant inputs.

III. Estimations on Inputs for a Pre-specified Output

In this section, we estimate the inputs, depending on the given output, that help the solutions approach that output. For an easy understanding of the concept, we avoid complicated notation and rearrange our system (2.2) suitably, using the notation introduced below. In the absence of time delays, (2.2) may be represented as (3.1). We assume that the desired output of the network is fixed with respect to t and arbitrarily chosen, and we rearrange (3.1) accordingly as (3.2). The conditions (2.4)-(2.6) on the response functions may be modified correspondingly, with |·| denoting an appropriate norm. We have

Theorem 3.1: Assume that the parameters of the system and the response functions satisfy the stated condition. Then, for an arbitrarily chosen output, the solutions of system (3.1) converge to it provided the inputs satisfy either of the conditions (i) or (ii).

Proof: Computing the upper right derivative of V along the solutions of (3.1) and using (3.2), we obtain an estimate on its decay. The rest of the argument is similar to that of Theorem 2.1, invoking condition (i) on the inputs. Again, it is easy to see from the last inequality above that the estimate is negative for large t using condition (ii) on the inputs. Hence, in either of the cases, the convergence follows. The proof is complete.
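The idea behind estimating an input for a pre-specified output can be illustrated on a scalar toy neuron rather than the full CSNN: for x'(t) = -a·x(t) + b·tanh(x(t)) + I, fixing a desired output x* and solving the equilibrium relation for the input gives I* = a·x* - b·tanh(x*); when a exceeds the slope bound of the signal function, the solution then converges to x* from any initial state. The model, names and values below are our own assumptions, not the system of the paper.

```python
import math

def input_for_output(x_star, a=2.0, b=0.5):
    """Input making x* the equilibrium of x' = -a*x + b*tanh(x) + I."""
    return a * x_star - b * math.tanh(x_star)

def settle(x0, I, a=2.0, b=0.5, T=30.0, dt=0.01):
    """Euler integration of x' = -a*x + b*tanh(x) + I."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-a * x + b * math.tanh(x) + I)
    return x

x_star = 1.5                      # pre-specified (desired) output
I = input_for_output(x_star)
x_final = settle(x0=-4.0, I=I)    # start far from the desired output
print(round(x_final, 6))          # 1.5
```

Since a = 2 dominates the unit Lipschitz bound of tanh times b = 0.5, the equilibrium x* is unique and globally attracting, which is the scalar analogue of the parametric condition in Theorem 3.1.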

The following example illustrates the effectiveness of this result.


Example 3.2: Consider the delay-free system with the parameters, response functions and inputs chosen as indicated. Clearly, for the given output, all the conditions of Theorem 3.1 are satisfied, and hence the solutions approach the desired output for sufficiently large t.

We shall now consider the time delay system (3.3) corresponding to (2.2). As has been done earlier, for a given output we write (3.3) as (3.4). We have

Theorem 3.3: Assume that the parameters of the system and the response functions satisfy the stated condition. Then, for an arbitrarily chosen output, the solutions of system (3.3) converge to it provided the inputs satisfy the stated condition.

Proof: Employing the same functional, using (3.4), and proceeding as in Theorem 2.1 and Theorem 3.1, the conclusion follows.

Since the conditions on the parameters and input functions in Theorems 3.1 and 3.3 are the same, it may be implied that delays have no effect on convergence here.

Example 3.4: Consider the corresponding delay system with the same choices as in Example 3.2. Clearly, for the given output, all the conditions of Theorem 3.3 are satisfied, and hence the solutions approach the desired output for sufficiently large t.

Remark 3.5: Now consider the system (3.5) with constant inputs determined by the given output. It is easy to observe that the given output is an equilibrium pattern of (3.5). Then, from Corollary 2.3, solutions of (3.3) approach it whenever the variable inputs of (3.3) are well near those of (3.5), as specified in Corollary 2.3. Thus, by varying the external inputs of the system in the parameter space defined by the output, as specified by Theorems 3.1 and 3.3, the solutions of the network approach the pre-specified output.

IV. Discussion

In this article, we have extended the concept of approaching a desired output of a given network by suitable selection of inputs based on the given output, studied earlier for a BAM network ([10]), to a cooperative and supportive neural network. With the help of suitable Lyapunov functionals, results are also established for asymptotic nearness and boundedness of solutions of the system. It is noticed that the inputs define a new space of equilibria for the network while they run through a space defined by the output parameters. In this way, memory states of the brain that are usually ignored under constant inputs may be recalled by varying the inputs to the brain appropriately. Since the input-output relation is not direct but includes the system parameters and functional responses, the dynamics of the entire system are involved in this process. It is hoped that this concept helps in utilizing the same network for different applications without altering its architecture, showing how designed structures may be made emergent structures that are adaptive and flexible. Since the results hold good for all time delays (delay-independent criteria), they are applicable to the delay-free case as well, i.e., the models of [9].


V. References

[1] H. Bereketoglu and I. Gyori, Global asymptotic stability in a nonautonomous Lotka-Volterra type system with infinite delay, Journal of Mathematical Analysis and Applications, 210 (1997), 279-291.
[2] Q.X. Dong, K. Matsui and X.K. Huang, Existence and stability of periodic solutions for Hopfield neural network equations with periodic input, Nonlinear Analysis, 49 (2002), 471-479.
[3] M. Forti, P. Nistri and D. Papini, Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain, IEEE TNN, 16(6) (2005), 1449-1463.
[4] K. Gopalsamy and Xue-Zhong He, Delay-independent stability in bidirectional associative memory networks, IEEE TNN, 5 (1994), 998-1002.
[5] S. Hu and D. Liu, On global output convergence of a class of recurrent neural networks with time varying inputs, 18 (2005), 171-178.
[6] B. Kosko, "Neural Networks and Fuzzy Systems - A Dynamical Systems Approach to Machine Intelligence", Prentice-Hall of India, New Delhi, 1994.
[7] F.-L. Luo and R. Unbehauen, Applied Neural Networks for Signal Processing, Cambridge Univ. Press, Cambridge, UK, 1997.
[8] B.B. Misra and S. Dehuri, Functional link artificial neural network for classification task in data mining, J. Computer Science, 3(12) (2007), 948-955.
[9] V. Sree Hari Rao and P. Raja Sekhara Rao, Cooperative and supportive neural networks, Physics Letters A, 371 (2007), 101-110.
[10] V. Sree Hari Rao and P. Raja Sekhara Rao, Time varying stimulations to simple neural networks and convergence to desired outputs, communicated.
[11] P. Raja Sekhara Rao, K. Venkata Ratnam and P. Lalitha, Delay independent stability of co-operative and supportive neural networks, communicated.
[12] Zhang Yi, J.C. Lv and L. Zhang, Output convergence analysis for a class of delayed recurrent neural networks with time varying inputs, IEEE TSMC, 36(1) (2006), 87-95.


International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)

International Journal of Emerging Technologies in Computational

and Applied Sciences (IJETCAS)

www.iasir.net

IJETCAS 14- 537; © 2014, IJETCAS All Rights Reserved Page 106

ISSN (Print): 2279-0047

ISSN (Online): 2279-0055

Effect of High Temperature Pre-Annealing on Thermal Donors in N-Doped CZ-Silicon

Vikash Dubey1 and Mahipal Singh2

1Department of Physics, Government P.G. College, Ramnagar, Nainital (Uttarakhand), INDIA.
2Department of Physics, R.H. Govt. Post Graduate College, Kashipur, U S Nagar (Uttarakhand), INDIA.

______________________________________________________________________________________

Abstract: The role of high temperature pre-annealing at 1000 °C, for a shorter duration of 10 h and a longer duration of 40 h, followed by annealing at 650 °C for up to 90 h, has been studied in N-undoped/doped CZ-Silicon. Four Probe and FTIR Spectroscopy tools have been employed in this study. The increase in carrier concentration in N-doped CZ-Silicon occurs at and above 20 h of annealing, and the rate of increase is quantitatively greater in the sample without pre-annealing than in the one pre-annealed at 1000 °C. It is also observed that neither the shorter nor the longer high temperature pre-annealing time has any effect on carrier concentration in N-doped CZ-Silicon.

Keywords: Semiconductors, CZ-Silicon, Thermal Donors, Nitrogen

PACS: 71.55.-I; 72.20.-I; 72.80.-r

______________________________________________________________________________________

I. INTRODUCTION

In view of the potential applications of silicon, this material has been thoroughly investigated, perhaps from all possible angles, in order to optimize it for device fabrication. The presence of oxygen in silicon during device processing helps to promote the internal gettering process, and a certain level of oxygen concentration is quite essential to provide mechanical strength to the wafer [1], [2], [3]. However, an excessive amount of oxygen leads to degradation of device yield. In the recent past, studies of the optical and electrical properties of nitrogen doped/implanted silicon have been carried out, and this material has shown great promise for future device applications. A lot of self-contradicting experimental data are available in the literature, and there is still a wide gap of doubt yet to be bridged by a more methodical and comprehensive approach harmoniously blended with sound logic. The inherent presence of oxygen and nitrogen plays a crucial role in the formation mechanism of different donor species, which differ from one another in their composition and electronic structure depending upon the temperature range within which they can be generated. Annealing treatment of silicon crystals with high oxygen content in the temperature range 400-1200 °C produces various kinds of defects [4], [5], [6], [7]. The question of the formation and diffusion of molecule-like oxygen in silicon has also been a point of debate for years together. Therefore, the present investigation is aimed at examining the role and behaviour of oxygen and nitrogen in donor formation in CZ-Silicon annealed at 650 °C, preceded by high temperature pre-annealing at 1000 °C.

II. MATERIALS AND METHODS

The silicon wafers are n-type with orientation <111> and thickness 500 µm; some other specifications are given in Table 1. The wafers were cut into small pieces of size 1 × 2 cm² and then subjected to heat treatment in ambient air. The samples were not annealed continuously at constant temperature; instead, step annealing schedules of 10 hours were fixed for Group A and B samples at a constant temperature of 650 °C, up to 90 h of annealing. Both groups of samples were annealed at 650 °C for 90 h, preceded by high temperature pre-annealing at 1000 °C for 10 h and 40 h, respectively.

Table 1: Specifications of the CZ-Silicon samples

Sample    Resistivity (Ohm-cm)    Initial concentration (atoms/cm³)
                                  Oxygen        Carbon        Nitrogen
Group A   8                       4.8 × 10¹⁷    3.0 × 10¹⁵    –
Group B   8                       7.3 × 10¹⁷    2.5 × 10¹⁵    5.9 × 10¹⁵

The resistivity of silicon was measured by the Four Probe method and then converted into carrier concentration using Irvin's curve [8]. The results are supplemented by Hall measurements in order to ascertain the carrier concentration. FTIR studies have been used to identify the presence of N, O and N-O complexes. Interstitial oxygen in silicon causes absorption at wave number 1107 cm⁻¹ at room temperature due to the asymmetric vibration of the SiO2 complex [9]. Nitrogen in silicon causes absorption at wave number 967 cm⁻¹, while N-O complexes have absorption peaks at wave numbers 240, 242 and 249 cm⁻¹ [10]. These absorption peaks are superimposed on phonon excitations of the silicon.
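The resistivity-to-carrier-concentration step can be sketched as follows: the collinear four-probe bulk formula ρ = 2πs·(V/I) and, in place of Irvin's curve, the simple single-carrier relation n ≈ 1/(q·ρ·μn) with an assumed electron mobility. The probe spacing, measured values and mobility below are illustrative assumptions, not the paper's data.

```python
import math

Q = 1.602e-19          # elementary charge (C)
MU_N = 1350.0          # assumed electron mobility in Si (cm^2/V.s)

def resistivity_four_probe(voltage_v, current_a, spacing_cm):
    """Collinear four-probe resistivity of a thick (bulk) sample:
    rho = 2*pi*s * V/I, in Ohm-cm."""
    return 2.0 * math.pi * spacing_cm * voltage_v / current_a

def carrier_concentration(rho_ohm_cm, mobility=MU_N):
    """Single-carrier approximation n = 1/(q*rho*mu), in cm^-3.
    (The paper converts via Irvin's curve instead.)"""
    return 1.0 / (Q * rho_ohm_cm * mobility)

# Illustrative numbers: spacing 0.1 cm, 1 mA drive, measured 12.7 mV
rho = resistivity_four_probe(12.7e-3, 1.0e-3, 0.1)
print(round(rho, 2))                 # ~8 Ohm-cm, of the order of Table 1
n = carrier_concentration(rho)
print(f"{n:.2e}")                    # carrier concentration in cm^-3
```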


Vikash Dubey et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 9(1), June-August, 2014, pp.

106-109


III. RESULTS AND DISCUSSION

The collinear four probe method used for resistivity measurements on the un-annealed and annealed CZ-silicon samples in air ambient showed that the resistance differs from point to point in the as-grown silicon sample, while in the sample annealed at 650 °C for 1 h the resistance is almost constant throughout, which leads us to conclude that the inhomogeneities present in as-grown silicon samples are reduced to a great extent. The results that follow relate to the donors generated in N-undoped (Group A) and N-doped (Group B) CZ-silicon annealed at 650 °C, preceded by high temperature pre-annealing at 1000 °C.

From the comparative plots of donor concentration in Group A and B samples annealed at 650 °C as a function of annealing time for 90 h, shown in Fig. 1, it can be inferred that there is a gradual increase in the concentration of donors after the first 10 h of step annealing in Group A samples, while the donor concentration remains almost unchanged in Group B samples. This, in turn, leads us to infer that the presence of nitrogen suppressed the donors formed in the Group A samples. The results are in good agreement with Prakash and Singh [11], Alt et al. [12] and Newman [13]. During the course of crystallization of silicon in the presence of nitrogen, it is quite natural to expect that the nitrogen atoms occupy substitutional sites in silicon and may exist in N-N pairs. The possibility of the formation of N-O complexes and electrically inactive N-O clusters, having more than one oxygen atom, cannot be ruled out [14]. Further heat treatment of the samples changes the agglomeration process of the constituent atoms of the clusters and hence may suppress the formation of new donors in the Group B samples due to the formation of electrically inactive embryos.

Fig. 1: Donor concentration of Group A and B samples as a function of annealing time at 650 °C

The behaviour of donors generated in Group A samples pre-annealed at 1000 °C for 0 h, 10 h and 40 h, followed by annealing at 650 °C, as a function of annealing duration up to 90 h, is shown in Fig. 2. An increase in pre-annealing duration helps in maintaining the carrier concentration almost constant, nearly equal to its initial value, throughout the entire cycle of annealing treatment. The increase in carrier concentration occurs at and above 20 h of annealing, and the rate of increase is quantitatively greater in the samples without pre-annealing than in the pre-annealed ones. Even on high temperature pre-annealing at 1000 °C, complete annihilation of parent nuclei having radii less than or more than the critical radius does not take place. Nuclei having radii greater than the critical radius are in a position to attract a greater number of oxygen atoms, resulting in the elimination of donors and the formation of oxygen precipitates. A longer high temperature pre-annealing duration is fatal to the existence of as-grown nuclei of donors, which is expected on logical grounds as well.

Fig. 2: Plot of donor concentration of Group A samples pre-annealed at 1000 °C for 0 h, 10 h and 40 h as a function of annealing time at 650 °C


The carrier concentration in Group B samples pre-annealed at 1000 °C for 10 h and 40 h, followed by annealing at 650 °C, as a function of annealing time up to 90 h, is shown in Fig. 3. It is observed that neither the shorter nor the longer pre-annealing time has any effect on carrier concentration. In the samples without pre-annealing, as shown in Fig. 4, the satellite peaks are associated with nitrogen and the major peak is due to the presence of oxygen. Pre-annealing at 1000 °C for 10 h results in the disappearance of the nitrogen-related peaks and a subsequent reduction in the magnitude of the oxygen-related peak at 1107 cm⁻¹. This is clearly indicative of the oxygen out-diffusion process leading to the formation of more and more electrically inactive clusters. Because no nuclei of donors exist in the high temperature pre-annealed samples, donors are not generated during the annealing at 650 °C, as also suggested by Yang et al. [15], Fujita et al. [16] and Dubey and Singh [17].

Fig. 3: Variation of donor concentration of Group B samples pre-annealed at 1000 °C for 0 h, 10 h and 40 h as a function of annealing time at 650 °C

[FTIR spectra: absorbance (a.u.) versus wave number, 700-1200 cm⁻¹]
Fig. 4: FTIR spectra of Group B samples (a) without pre-annealing, (b) with high temperature pre-annealing for 10 h

The FTIR spectra of Group A and B samples annealed at 650 °C for 1 h are shown in Fig. 5. Hara et al. [18] and Wagner et al. [19] suggested that the optical absorbance lines in the range 350-500 cm⁻¹ are related to the thermal donors in silicon. As can be seen from the figure, in the range 350-500 cm⁻¹ there is no absorbance peak in the Group B sample, while the Group A sample exhibits several absorbance peaks. This means that the presence of nitrogen assists in the suppression of thermal donors. The appearance of optical lines at 240, 242 and 249 cm⁻¹ in the IR spectrum of the Group B sample is due to nitrogen, because these lines do not appear in the Group A sample.
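The peak assignments used above (interstitial oxygen at 1107 cm⁻¹, nitrogen at 967 cm⁻¹, N-O complexes at 240, 242 and 249 cm⁻¹, thermal-donor lines in the 350-500 cm⁻¹ band) can be encoded as a small lookup. This is a minimal sketch with an illustrative matching tolerance, not an analysis tool used in the paper:

```python
# Line positions taken from the text; the tolerance (cm^-1) is an assumption.
LINES = {
    1107: "interstitial oxygen (SiO2 asymmetric vibration)",
    967: "nitrogen",
    240: "N-O complex", 242: "N-O complex", 249: "N-O complex",
}
THERMAL_DONOR_BAND = (350, 500)   # cm^-1, after Hara et al. / Wagner et al.

def assign(wavenumber, tol=3):
    """Label an absorbance peak position (cm^-1)."""
    for ref, species in LINES.items():
        if abs(wavenumber - ref) <= tol:
            return species
    lo, hi = THERMAL_DONOR_BAND
    if lo <= wavenumber <= hi:
        return "thermal-donor-related line"
    return "unassigned"

print(assign(1106))   # interstitial oxygen (SiO2 asymmetric vibration)
print(assign(420))    # thermal-donor-related line
```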


[FTIR spectra: absorbance (a.u.) versus wave number, 230-500 cm⁻¹; curves for Group A and Group B]
Fig. 5: FTIR spectra of Group A and B samples annealed at 650 °C for 1 h in the range 230-500 cm⁻¹

Suezawa et al. [20] and Dubey [21] studied the properties of oxygen-nitrogen complexes in nitrogen doped silicon and considered that the oxygen-nitrogen complexes related to the 240, 242 and 249 cm⁻¹ optical lines have the properties of shallow thermal donors. Our results agree with their conclusion. The above discussion leads us to conclude that two types of oxygen-related donors exist: the first is a shallow thermal donor associated with the N-O complexes, and the second is a thermal donor related to the oxygen impurity only, assisted by silicon self-interstitials.

IV. CONCLUSION

In order to ascertain the effect of nitrogen on the formation of oxygen related donors in CZ-Silicon step-annealed at 650 °C, preceded by high temperature pre-annealing treatments of 10 h and 40 h, resistivity measurement and FTIR have been used as two basic tools. From the donors generated in Group A and B samples annealed at 650 °C as a function of annealing time for 90 h, it can be inferred that there is a gradual increase in the concentration of donors after the first 10 h of step annealing in Group A samples, while the donor concentration remains almost unchanged in Group B samples. This, in turn, leads us to infer that the presence of nitrogen suppressed the donors formed in the Group A samples. The increase in carrier concentration in N-doped CZ-Silicon samples occurs at and above 20 h of annealing, and the rate of increase is quantitatively greater in the sample without pre-annealing than in the one pre-annealed at 1000 °C. It is also observed that neither the shorter nor the longer high temperature pre-annealing time has any effect on carrier concentration.

REFERENCES
[1] K. Sumino, I. Yonenaga, M. Imai, T. Abe, J. Appl. Phys. 54 (1983) 5016.
[2] V. Dubey, S. Singh, Bull. Mater. Sci. 25 (2002) 589.
[3] Om Prakash, N.K. Upreti, S. Singh, Mater. Sci. & Engg. B 52 (1997) 180.
[4] A. Ourmazd, W. Schroter, A. Bourret, J. Appl. Phys. 56 (1984) 1670.
[5] D. Mathiot, Appl. Phys. Lett. 51 (1987) 904.
[6] S.A. McQuaid, M.J. Binns, C.A. Londos, J.H. Tucker, A.R. Brown, R.C. Newman, J. Appl. Phys. 77 (1995) 1427.
[7] H. Ono, Appl. Phys. Exp. 1 (2008) 25001.
[8] S.M. Sze, J.C. Irvin, Solid State Electronics 11 (1968) 599.
[9] T. Iizuka, S. Takasu, M. Tajima, T. Aria, T. Najaki, N. Inoue, M. Watanake, J. Electrochem. Soc. 132 (1985) 1707.
[10] P. Wagner, R. Oeder, W. Zulehner, Appl. Phys. A 46 (1986) 73.
[11] Om Prakash, S. Singh, J. Phys. Chem. Solids 60 (1999) 353.
[12] H.Ch. Alt, Y.V. Gomeniuk, F. Bittersberger, A. Kempf, D. Zemke, Appl. Phys. Lett. 87 (2005) 151909.
[13] R.C. Newman, J. Phys.: Condens. Matter 12 (2000) 335.
[14] C.S. Chen, C.F. Li, H.J. Ye, S.C. Shen, D.R. Yang, J. Appl. Phys. 76 (1994) 3347.
[15] D. Yang, M. Klevermann, L.I. Murin, Physica B 302-303 (2001) 193.
[16] N. Fujita, R. Jones, S. Oberg, T.R. Briddon, Appl. Phys. Lett. 91 (2007) 51914.
[17] V. Dubey, S. Singh, J. Phys. Chem. Solids 65 (2004) 1265.
[18] A. Hara, T. Fukuda, T. Miyabo, I. Hirai, Appl. Phys. Lett. 54 (1989) 626.
[19] H.E. Wagner, H.Ch. Alt, W. Ammon, F. Bittersberger, A. Huber, L. Koester, Appl. Phys. Lett. 91 (2007) 152102.
[20] M. Suezawa, K. Sumino, H. Harada, T. Abe, J. Appl. Phys. 25 (1988) L829.
[21] V. Dubey, Recent Research in Science and Technology 3(7) (2011) 112.