
Volume 10, Numbers 1-2 January-April 2015 ISSN:1559-1948 (PRINT), 1559-1956 (ONLINE) EUDOXUS PRESS,LLC

JOURNAL OF APPLIED FUNCTIONAL ANALYSIS


SCOPE AND PRICES OF

JOURNAL OF APPLIED FUNCTIONAL ANALYSIS

A quarterly international publication of EUDOXUS PRESS, LLC
ISSN: 1559-1948 (PRINT), 1559-1956 (ONLINE)

Editor in Chief: George Anastassiou
Department of Mathematical Sciences
The University of Memphis
Memphis, TN 38152, USA
E-mail: [email protected]

Assistant to the Editor: Dr. Razvan Mezei, Lenoir-Rhyne University, Hickory, NC 28601, USA.

--------------------------------------------------------------------------------

The purpose of the "Journal of Applied Functional Analysis" (JAFA) is to publish high-quality original research articles, survey articles and book reviews from all subareas of Applied Functional Analysis in the broadest form, plus from its applications and its connections to other topics of the Mathematical Sciences. A sample list of mathematical areas connected with this publication includes, but is not restricted to: Approximation Theory, Inequalities, Probability in Analysis, Wavelet Theory, Neural Networks, Fractional Analysis, Applied Functional Analysis and Applications, Signal Theory, Computational Real and Complex Analysis and Measure Theory, Sampling Theory, Semigroups of Operators, Positive Operators, ODEs, PDEs, Difference Equations, Rearrangements, Numerical Functional Analysis, Integral Equations, Optimization Theory of all kinds, Operator Theory, Control Theory, Banach Spaces, Evolution Equations, Information Theory, Numerical Analysis, Stochastics, Applied Fourier Analysis, Matrix Theory, Mathematical Physics, Mathematical Geophysics, Fluid Dynamics, Quantum Theory, Interpolation in all forms, Computer Aided Geometric Design, Algorithms, Fuzziness, Learning Theory, Splines, Mathematical Biology, Nonlinear Functional Analysis, Variational Inequalities, Nonlinear Ergodic Theory, Functional Equations, Function Spaces, Harmonic Analysis, Extrapolation Theory, Fourier Analysis, Inverse Problems, Operator Equations, Image Processing, Nonlinear Operators, Stochastic Processes, Mathematical Finance and Economics, Special Functions, Quadrature, Orthogonal Polynomials, Asymptotics, Symbolic and Umbral Calculus, Integral and Discrete Transforms, Chaos and Bifurcation, Nonlinear Dynamics, Solid Mechanics, Functional Calculus, Chebyshev Systems. Combinations of the above topics are also included.

Working with Applied Functional Analysis methods has become a main trend in recent years, helping us understand more deeply and solve important problems of the real and scientific world. JAFA is a peer-reviewed international quarterly journal published by Eudoxus Press, LLC. We are calling for high-quality papers for possible publication. The contributor should submit the contribution to the EDITOR in CHIEF in TeX or LaTeX, double spaced and in ten point type size, also in PDF format. Articles should be sent ONLY by E-MAIL [See: Instructions to Contributors].

Journal of Applied Functional Analysis (JAFA) is published in January, April, July and October of each year by


EUDOXUS PRESS, LLC, 1424 Beaver Trail Drive, Cordova, TN 38016, USA, Tel. 001-901-751-3553, [email protected], http://www.EudoxusPress.com; visit also http://www.msci.memphis.edu/~ganastss/jafa.

Annual Subscription Current Prices: For USA and Canada, Institutional: Print $400, Electronic OPEN ACCESS. Individual: Print $150. For any other part of the world add $50 more to the above prices for handling and postage. Single article for individual $20. Single issue for individual $80. No credit card payments. Only certified check, money order or international check in US dollars are acceptable. Combination orders of any two from JoCAAA, JCAAM, JAFA receive a 25% discount; all three receive a 30% discount.

Copyright © 2015 by Eudoxus Press, LLC. All rights reserved. JAFA is printed in the USA. JAFA is reviewed and abstracted by AMS Mathematical Reviews, MATHSCI, and Zentralblatt MATH. Reproduction or transmission of any part of JAFA, in any form and by any means, without the written permission of the publisher is strictly prohibited. Educators are permitted to photocopy articles for educational purposes only. The publisher assumes no responsibility for the content of published papers.

JAFA IS A JOURNAL OF RAPID PUBLICATION


Journal of Applied Functional Analysis Editorial Board

Associate Editors

Editor-in-Chief: George A. Anastassiou Department of Mathematical Sciences The University of Memphis Memphis, TN 38152, USA 901-678-3144 office 901-678-2482 secretary 901-751-3553 home 901-678-2480 Fax [email protected] Approximation Theory, Inequalities, Probability, Wavelet, Neural Networks, Fractional Calculus Associate Editors: 1) Francesco Altomare Dipartimento di Matematica Universita' di Bari Via E. Orabona, 4 70125 Bari, ITALY Tel +39-080-5442690 office +39-080-3944046 home +39-080-5963612 Fax [email protected] Approximation Theory, Functional Analysis, Semigroups and Partial Differential Equations, Positive Operators. 2) Angelo Alvino Dipartimento di Matematica e Applicazioni "R. Caccioppoli" Complesso Universitario Monte S. Angelo Via Cintia 80126 Napoli, ITALY +39(0)81 675680 [email protected], [email protected] Rearrangements, Partial Differential Equations. 3) Catalin Badea UFR Mathematiques, Bat. M2, Universite de Lille 1 Cite Scientifique F-59655 Villeneuve d'Ascq, France

24) Nikolaos B. Karayiannis Department of Electrical and Computer Engineering N308 Engineering Building 1 University of Houston Houston, Texas 77204-4005 USA Tel (713) 743-4436 Fax (713) 743-4444 [email protected] [email protected] Neural Network Models, Learning Neuro-Fuzzy Systems. 25) Theodore Kilgore Department of Mathematics Auburn University 221 Parker Hall, Auburn University Alabama 36849, USA Tel (334) 844-4620 Fax (334) 844-6555 [email protected] Real Analysis, Approximation Theory, Computational Algorithms. 26) Jong Kyu Kim Department of Mathematics Kyungnam University Masan Kyungnam, 631-701, Korea Tel 82-(55)-249-2211 Fax 82-(55)-243-8609 [email protected] Nonlinear Functional Analysis, Variational Inequalities, Nonlinear Ergodic Theory, ODE, PDE, Functional Equations. 27) Robert Kozma Department of Mathematical Sciences The University of Memphis Memphis, TN 38152, USA [email protected] Neural Networks, Reproducing Kernel Hilbert Spaces, Neural Percolation Theory


Tel.(+33)(0)3.20.43.42.18 Fax (+33)(0)3.20.43.43.02 [email protected] Approximation Theory, Functional Analysis, Operator Theory. 4) Erik J.Balder Mathematical Institute Universiteit Utrecht P.O.Box 80 010 3508 TA UTRECHT The Netherlands Tel.+31 30 2531458 Fax+31 30 2518394 [email protected] Control Theory, Optimization, Convex Analysis, Measure Theory, Applications to Mathematical Economics and Decision Theory. 5) Carlo Bardaro Dipartimento di Matematica e Informatica Universita di Perugia Via Vanvitelli 1 06123 Perugia, ITALY TEL+390755853822 +390755855034 FAX+390755855024 E-mail [email protected] Web site: http://www.unipg.it/~bardaro/ Functional Analysis and Approximation Theory, Signal Analysis, Measure Theory, Real Analysis. 6) Heinrich Begehr Freie Universitaet Berlin I. Mathematisches Institut, FU Berlin, Arnimallee 3,D 14195 Berlin Germany, Tel. +49-30-83875436, office +49-30-83875374, Secretary Fax +49-30-83875403 [email protected] Complex and Functional Analytic Methods in PDEs, Complex Analysis, History of Mathematics. 7) Fernando Bombal Departamento de Analisis Matematico Universidad Complutense Plaza de Ciencias,3 28040 Madrid, SPAIN Tel. +34 91 394 5020 Fax +34 91 394 4726 [email protected]

28) Miroslav Krbec Mathematical Institute Academy of Sciences of Czech Republic Zitna 25 CZ-115 67 Praha 1 Czech Republic Tel +420 222 090 743 Fax +420 222 211 638 [email protected] Function spaces,Real Analysis,Harmonic Analysis,Interpolation and Extrapolation Theory,Fourier Analysis. 29) Peter M.Maass Center for Industrial Mathematics Universitaet Bremen Bibliotheksstr.1, MZH 2250, 28359 Bremen Germany Tel +49 421 218 9497 Fax +49 421 218 9562 [email protected] Inverse problems,Wavelet Analysis and Operator Equations,Signal and Image Processing. 30) Julian Musielak Faculty of Mathematics and Computer Science Adam Mickiewicz University Ul.Umultowska 87 61-614 Poznan Poland Tel (48-61) 829 54 71 Fax (48-61) 829 53 15 [email protected] Functional Analysis, Function Spaces, Approximation Theory,Nonlinear Operators. 31) Gaston M. N'Guerekata Department of Mathematics Morgan State University Baltimore, MD 21251, USA tel:: 1-443-885-4373 Fax 1-443-885-8216 Gaston.N'[email protected] Nonlinear Evolution Equations, Abstract Harmonic Analysis, Fractional Differential Equations, Almost Periodicity & Almost Automorphy. 32) Vassilis Papanicolaou Department of Mathematics National Technical University of Athens


Operators on Banach spaces, Tensor products of Banach spaces, Polymeasures, Function spaces. 8) Michele Campiti Department of Mathematics "E.De Giorgi" University of Lecce P.O. Box 193 Lecce,ITALY Tel. +39 0832 297 432 Fax +39 0832 297 594 [email protected] Approximation Theory, Semigroup Theory, Evolution problems, Differential Operators. 9)Domenico Candeloro Dipartimento di Matematica e Informatica Universita degli Studi di Perugia Via Vanvitelli 1 06123 Perugia ITALY Tel +39(0)75 5855038 +39(0)75 5853822, +39(0)744 492936 Fax +39(0)75 5855024 [email protected] Functional Analysis, Function spaces, Measure and Integration Theory in Riesz spaces. 10) Pietro Cerone School of Computer Science and Mathematics, Faculty of Science, Engineering and Technology, Victoria University P.O.14428,MCMC Melbourne,VIC 8001,AUSTRALIA Tel +613 9688 4689 Fax +613 9688 4050 [email protected] Approximations, Inequalities, Measure/Information Theory, Numerical Analysis, Special Functions. 11)Michael Maurice Dodson Department of Mathematics University of York, York YO10 5DD, UK Tel +44 1904 433098 Fax +44 1904 433071 [email protected] Harmonic Analysis and Applications to Signal Theory,Number Theory and Dynamical Systems.

Zografou campus, 157 80 Athens, Greece tel:: +30(210) 772 1722 Fax +30(210) 772 1775 [email protected] Partial Differential Equations, Probability. 33) Pier Luigi Papini Dipartimento di Matematica Piazza di Porta S.Donato 5 40126 Bologna ITALY Fax +39(0)51 582528 [email protected] Functional Analysis, Banach spaces, Approximation Theory. 34) Svetlozar (Zari) Rachev, Professor of Finance, College of Business,and Director of Quantitative Finance Program, Department of Applied Mathematics & Statistics Stonybrook University 312 Harriman Hall, Stony Brook, NY 11794-3775 Phone: +1-631-632-1998, Email : [email protected] 35) Paolo Emilio Ricci Department of Mathematics Rome University "La Sapienza" P.le A.Moro,2-00185 Rome,ITALY Tel ++3906-49913201 office ++3906-87136448 home Fax ++3906-44701007 [email protected] [email protected] Special Functions, Integral and Discrete Transforms, Symbolic and Umbral Calculus, ODE, PDE,Asymptotics, Quadrature, Matrix Analysis. 36) Silvia Romanelli Dipartimento di Matematica Universita' di Bari Via E.Orabona 4 70125 Bari, ITALY. Tel (INT 0039)-080-544-2668 office 080-524-4476 home 340-6644186 mobile Fax -080-596-3612 Dept. [email protected] PDEs and Applications to Biology and Finance, Semigroups of Operators.


12) Sever S.Dragomir School of Computer Science and Mathematics, Victoria University, PO Box 14428, Melbourne City, MC 8001,AUSTRALIA Tel. +61 3 9688 4437 Fax +61 3 9688 4050 [email protected] Inequalities,Functional Analysis, Numerical Analysis, Approximations, Information Theory, Stochastics. 13) Oktay Duman TOBB University of Economics and Technology, Department of Mathematics, TR-06530, Ankara, Turkey, [email protected] Classical Approximation Theory, Summability Theory, Statistical Convergence and its Applications

14) Paulo J.S.G.Ferreira Department of Electronica e Telecomunicacoes/IEETA Universidade de Aveiro 3810-193 Aveiro PORTUGAL Tel +351-234-370-503 Fax +351-234-370-545 [email protected] Sampling and Signal Theory, Approximations, Applied Fourier Analysis, Wavelet, Matrix Theory. 15) Gisele Ruiz Goldstein Department of Mathematical Sciences The University of Memphis Memphis,TN 38152,USA. Tel 901-678-2513 Fax 901-678-2480 [email protected] PDEs, Mathematical Physics, Mathematical Geophysics. 16) Jerome A.Goldstein Department of Mathematical Sciences The University of Memphis Memphis,TN 38152,USA Tel 901-678-2484 Fax 901-678-2480 [email protected] PDEs,Semigroups of Operators, Fluid Dynamics,Quantum Theory.

37) Boris Shekhtman Department of Mathematics University of South Florida Tampa, FL 33620,USA Tel 813-974-9710 [email protected] Approximation Theory, Banach spaces, Classical Analysis. 38) Rudolf Stens Lehrstuhl A fur Mathematik RWTH Aachen 52056 Aachen Germany Tel ++49 241 8094532 Fax ++49 241 8092212 [email protected] Approximation Theory, Fourier Analysis, Harmonic Analysis, Sampling Theory. 39) Juan J.Trujillo University of La Laguna Departamento de Analisis Matematico C/Astr.Fco.Sanchez s/n 38271.LaLaguna.Tenerife. SPAIN Tel/Fax 34-922-318209 [email protected] Fractional: Differential Equations-Operators- Fourier Transforms, Special functions, Approximations,and Applications. 40) Tamaz Vashakmadze I.Vekua Institute of Applied Mathematics Tbilisi State University, 2 University St. , 380043,Tbilisi, 43, GEORGIA. Tel (+99532) 30 30 40 office (+99532) 30 47 84 office (+99532) 23 09 18 home [email protected] [email protected] Applied Functional Analysis, Numerical Analysis, Splines, Solid Mechanics. 41) Ram Verma International Publications 5066 Jamieson Drive, Suite B-9, Toledo, Ohio 43613,USA. [email protected] [email protected] Applied Nonlinear Analysis, Numerical Analysis, Variational Inequalities, Optimization Theory, Computational Mathematics, Operator Theory.


17) Heiner Gonska Institute of Mathematics University of Duisburg-Essen Lotharstrasse 65 D-47048 Duisburg Germany Tel +49 203 379 3542 Fax +49 203 379 1845 [email protected] Approximation and Interpolation Theory, Computer Aided Geometric Design, Algorithms. 18) Karlheinz Groechenig Institute of Biomathematics and Biometry, GSF-National Research Center for Environment and Health Ingolstaedter Landstrasse 1 D-85764 Neuherberg,Germany. Tel 49-(0)-89-3187-2333 Fax 49-(0)-89-3187-3369 [email protected] Time-Frequency Analysis, Sampling Theory, Banach spaces and Applications, Frame Theory. 19) Vijay Gupta School of Applied Sciences Netaji Subhas Institute of Technology Sector 3 Dwarka New Delhi 110075, India e-mail: [email protected]; [email protected] Approximation Theory 20) Weimin Han Department of Mathematics University of Iowa Iowa City, IA 52242-1419 319-335-0770 e-mail: [email protected] Numerical analysis, Finite element method, Numerical PDE, Variational inequalities, Computational mechanics 21) Tian-Xiao He Department of Mathematics and Computer Science P.O.Box 2900,Illinois Wesleyan University Bloomington,IL 61702-2900,USA Tel (309)556-3089 Fax (309)556-3864 [email protected] Approximations,Wavelet, Integration Theory, Numerical Analysis, Analytic Combinatorics.

42) Gianluca Vinti Dipartimento di Matematica e Informatica Universita di Perugia Via Vanvitelli 1 06123 Perugia ITALY Tel +39(0) 75 585 3822, +39(0) 75 585 5032 Fax +39 (0) 75 585 3822 [email protected] Integral Operators, Function Spaces, Approximation Theory, Signal Analysis. 43) Ursula Westphal Institut Fuer Mathematik B Universitaet Hannover Welfengarten 1 30167 Hannover,GERMANY Tel (+49) 511 762 3225 Fax (+49) 511 762 3518 [email protected] Semigroups and Groups of Operators, Functional Calculus, Fractional Calculus, Abstract and Classical Approximation Theory, Interpolation of Normed spaces. 44) Ronald R.Yager Machine Intelligence Institute Iona College New Rochelle,NY 10801,USA Tel (212) 249-2047 Fax(212) 249-1689 [email protected] [email protected] Fuzzy Mathematics, Neural Networks, Reasoning, Artificial Intelligence, Computer Science. 45) Richard A. Zalik Department of Mathematics Auburn University Auburn University,AL 36849-5310 USA. Tel 334-844-6557 office 678-642-8703 home Fax 334-844-6555 [email protected] Approximation Theory,Chebychev Systems, Wavelet Theory.


22) Don Hong Department of Mathematical Sciences Middle Tennessee State University 1301 East Main St. Room 0269, Blgd KOM Murfreesboro, TN 37132-0001 Tel (615) 904-8339 [email protected] Approximation Theory, Splines, Wavelet, Stochastics, Mathematical Biology Theory. 23) Hubertus Th. Jongen Department of Mathematics RWTH Aachen Templergraben 55 52056 Aachen Germany Tel +49 241 8094540 Fax +49 241 8092390 [email protected] Parametric Optimization, Nonconvex Optimization, Global Optimization.

NEW MEMBERS

46) Jianguo Huang Department of Mathematics Shanghai Jiao Tong University Shanghai 200240 P.R. China [email protected] Numerical PDEs


Instructions to Contributors

Journal of Applied Functional Analysis A quarterly international publication of Eudoxus Press, LLC, of TN.

Editor in Chief: George Anastassiou

Department of Mathematical Sciences University of Memphis

Memphis, TN 38152-3240, U.S.A.

1. Manuscript files, in LaTeX and PDF and in English, should be submitted via email to the Editor-in-Chief: Prof. George A. Anastassiou, Department of Mathematical Sciences, The University of Memphis, Memphis, TN 38152, USA. Tel. 901.678.3144, e-mail: [email protected]. Authors may want to recommend the associate editor most related to the submission to possibly handle it. Also, authors may want to submit a list of six possible referees, to be used in case we cannot find related referees ourselves. 2. Manuscripts should be typed using any of TeX, LaTeX, AMS-TeX, or AMS-LaTeX and according to the EUDOXUS PRESS, LLC LATEX STYLE FILE. (Click HERE to save a copy of the style file.) They should be carefully prepared in all respects. Submitted articles should be brightly typed (not dot-matrix), double spaced, in ten point type size and in an 8(1/2)x11 inch area per page. Manuscripts should have generous margins on all sides and should not exceed 24 pages. 3. Submission is a representation that the manuscript has not been published previously in this or any other similar form and is not currently under consideration for publication elsewhere. A statement transferring copyright from the authors (or their employers, if they hold the copyright) to Eudoxus Press, LLC, will be required before the manuscript can be accepted for publication. The Editor-in-Chief will supply the necessary forms for this transfer. Such a written transfer of copyright, which previously was assumed to be implicit in the act of submitting a manuscript, is necessary under the U.S. Copyright Law in order for the publisher to carry through the dissemination of research results and reviews as widely and effectively as possible.


4. The paper starts with the title of the article, author's name(s) (no titles or degrees), author's affiliation(s) and e-mail addresses. The affiliation should comprise the department, institution (usually university or company), city, state (and/or nation) and mail code. The following items, 5 and 6, should be on page no. 1 of the paper. 5. An abstract is to be provided, preferably no longer than 150 words. 6. A list of 5 key words is to be provided directly below the abstract. Key words should express the precise content of the manuscript, as they are used for indexing purposes. The main body of the paper should begin on page no. 1, if possible. 7. All sections should be numbered with Arabic numerals (such as: 1. INTRODUCTION). Subsections should be identified with section and subsection numbers (such as 6.1. Second-Value Subheading). If applicable, an independent single-number system (one for each category) should be used to label all theorems, lemmas, propositions, corollaries, definitions, remarks, examples, etc. The label (such as Lemma 7) should be typed with paragraph indentation, followed by a period and the lemma itself. 8. Mathematical notation must be typeset. Equations should be numbered consecutively with Arabic numerals in parentheses placed flush right, and should be referred to accordingly in the text [such as Eqs. (2) and (5)]. The running title must be placed at the top of even-numbered pages and the first author's name, et al., must be placed at the top of the odd-numbered pages. 9. Illustrations (photographs, drawings, diagrams, and charts) are to be numbered in one consecutive series of Arabic numerals. The captions for illustrations should be typed double spaced. All illustrations, charts, tables, etc., must be embedded in the body of the manuscript in proper, final, print position. In particular, the manuscript, source, and PDF file versions must be at camera-ready stage for publication or they cannot be considered. Tables are to be numbered (with Roman numerals) and referred to by number in the text. Center the title above the table, and type explanatory footnotes (indicated by superscript lowercase letters) below the table. 10. List references alphabetically at the end of the paper and number them consecutively. Each must be cited in the text by the appropriate Arabic numeral in square brackets on the baseline. References should include (in the following order): initials of first and middle name, last name of author(s), title of article,


name of publication, volume number, inclusive pages, and year of publication. Authors should follow these examples: Journal Article 1. H.H. Gonska, Degree of simultaneous approximation of bivariate functions by Gordon operators, (journal name in italics) J. Approx. Theory, 62, 170-191 (1990). Book 2. G.G. Lorentz, (title of book in italics) Bernstein Polynomials (2nd ed.), Chelsea, New York, 1986. Contribution to a Book 3. M.K. Khan, Approximation properties of beta operators, in (title of book in italics) Progress in Approximation Theory (P. Nevai and A. Pinkus, eds.), Academic Press, New York, 1991, pp. 483-495. 11. All acknowledgements (including those for a grant and financial support) should occur in one paragraph that directly precedes the References section. 12. Footnotes should be avoided. When their use is absolutely necessary, footnotes should be numbered consecutively using Arabic numerals and should be typed at the bottom of the page to which they refer. Place a line above the footnote, so that it is set off from the text. Use the appropriate superscript numeral for citation in the text. 13. After each revision is made, please again submit via email LaTeX and PDF files of the revised manuscript, including the final one. 14. Effective 1 Nov. 2009: for current journal page charges, contact the Editor in Chief. Upon acceptance of the paper an invoice will be sent to the contact author. The fee payment will be due one month from the invoice date. The article will proceed to publication only after the fee is paid. The charges are to be sent, by money order or certified check, in US dollars, payable to Eudoxus Press, LLC, to the address shown on the Eudoxus homepage. No galleys will be sent and the contact author will receive one (1) electronic copy of the journal issue in which the article appears. 15. This journal will consider for publication only papers that contain proofs for their listed results.


SOME RESULTS CONCERNING SYSTEM OF MODULATES FOR L2(R)

A. Zothansanga1 and Suman Panwar2

1,2Department of Mathematics, University of Delhi, Delhi 110 007, INDIA

1e-mail: [email protected]; 2e-mail: [email protected]

Abstract: Frames by integer translates were studied by Kim and Kwon [6]. Christensen et al. [3] studied Riesz sequences of translates and their generalized duals. In this paper we consider a system of modulates for L2(R). A necessary and sufficient condition for a system of modulates for L2(R) to be a frame sequence (Riesz sequence, orthonormal sequence) is given. We also prove a result related to the frame operator of a frame sequence {Ekϕ}k∈Z. Finally, it is proved that for n ≥ 1, the system of modulates {EkBn}k∈Z is a Riesz–Fischer sequence.

AMS Subject Classification 2010: 42C15, 42C30

Key Words: Frame, system of modulates, Bessel sequence, Riesz-Fischer sequence.

1 Introduction

A basic approach to the decomposition of a signal in terms of elementary signals was given by Dennis Gabor [7] in 1946. While answering some deep problems in nonharmonic Fourier series, Duffin and Schaeffer [5] in 1952 abstracted Gabor's method and defined frames for Hilbert spaces. Frames are widely used in mathematics, science and engineering. In particular, they are used in filter banks, wireless sensor networks, multiple-antenna code design, signal processing, image processing and many more areas. For a nice and comprehensive survey on various types of frames, one may refer to [1,2,4,8,10].

Coherent frames {fk}k∈I are the frames for which the elements fk arise from the action of operators (belonging to a special class) on a single element f in a Hilbert space. This feature is necessary for applications because it simplifies calculations and makes it easier to collect information about the frame.

In this paper, we consider the case where the operators act by modulation. A necessary and sufficient condition for a system of modulates for L2(R) to be a frame sequence (Riesz sequence, orthonormal sequence) is given. We also prove a result related to the frame operator of a frame sequence {Ekϕ}k∈Z. Finally, it is proved that for n ≥ 1, the system of modulates {EkBn}k∈Z is a Riesz–Fischer sequence.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 13-18, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


2 Preliminaries

Definition 2.1. [4] Let H denote a Hilbert space, let I denote a countable index set and let [fi]i∈I denote the closed linear span of the sequence {fi}i∈I. A family of vectors {fi}i∈I is called a frame for H if there exist constants A and B with 0 < A ≤ B < ∞ such that

A∥f∥² ≤ ∑i∈I |⟨f, fi⟩|² ≤ B∥f∥², for all f ∈ H.    (1)

The positive constants A and B are called the lower frame bound and the upper frame bound for the family {fi}i∈I, respectively. Inequality (1) is called the frame inequality. If only the upper inequality in (1) holds, then {fi}i∈I is called a Bessel sequence.

If the family of vectors {fi}i∈I is a frame for [fi]i∈I, then it is called a frame sequence.
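In a finite-dimensional Hilbert space the frame inequality (1) can be checked directly, since the optimal bounds A and B are the smallest and largest eigenvalues of the frame operator. A minimal numerical sketch (not from the paper; the three-vector "Mercedes-Benz" frame in R² is a standard textbook example, and all names below are our own):

```python
import numpy as np

# "Mercedes-Benz" frame: three unit vectors in R^2 at angles 90, 210, 330 degrees.
# The frame inequality A*||f||^2 <= sum_i |<f, f_i>|^2 <= B*||f||^2 holds with
# optimal bounds A, B equal to the extreme eigenvalues of the frame operator
# S = F^T F, where the rows of F are the frame vectors.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
F = np.column_stack([np.cos(angles), np.sin(angles)])  # shape (3, 2), rows f_i

S = F.T @ F                      # 2x2 frame operator
eigs = np.linalg.eigvalsh(S)     # eigenvalues in ascending order
A, B = eigs[0], eigs[-1]         # optimal lower/upper frame bounds

print(A, B)                      # both 3/2: this frame is tight (A = B)
```

Since A = B here, the frame is tight and (1) collapses to a Parseval-type equality up to the factor 3/2.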

Definition 2.2. [4] A family of vectors {fi}i∈I is called a Riesz basis for H if it is complete in H and if there exist constants A and B with 0 < A ≤ B < ∞ such that

A ∑i∈I |ci|² ≤ ∥∑i∈I ci fi∥² ≤ B ∑i∈I |ci|².    (2)

If the family of vectors {fi}i∈I satisfies (2), then it is called a Riesz sequence, and if it satisfies the left-hand inequality of (2), then it is called a Riesz–Fischer sequence.

For a ∈ R and ϕ ∈ L2(R), the modulation operator Ea on L2(R) is defined as Eaϕ(x) = e^{2πiax}ϕ(x), x ∈ R, and for a ≠ 0 the dilation operator Da on L2(R) is defined as Daϕ(x) = (1/√|a|)ϕ(x/a), x ∈ R.

Definition 2.3. Let ϕ ∈ L2(R). The sequence {Ekϕ}k∈Z is called a system of modulates for L2(R). Further,

(I) If {Ekϕ}k∈Z is a frame for L2(R), then it is called a frame of modulates.

(II) If {Ekϕ}k∈Z is a Bessel sequence for L2(R), then it is called a Bessel sequence of modulates.

For a system of modulates {Ekϕ}k∈Z, the associated pre-frame operator is T : l²(Z) → L2(R), given by T{αk}k∈Z = ∑k∈Z αk Ekϕ. The adjoint operator of T, or the analysis operator, T* : L2(R) → l²(Z), is given by T*f = {⟨f, Ekϕ⟩}k∈Z, f ∈ L2(R).

If the system of modulates {Ekϕ}k∈Z is a Bessel sequence, then by composing T and T* we obtain the frame operator S : L2(R) → L2(R), defined as Sf = ∑k∈Z ⟨f, Ekϕ⟩Ekϕ, f ∈ L2(R).

If the system of modulates {Ekϕ}k∈Z is a frame, then the frame operator S is invertible.
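The pre-frame operator T, the analysis operator T* and the frame operator S can be mimicked on a discrete grid. The sketch below assumes ϕ = χ[0,1] (an illustrative choice of ours, for which the modulates e^{2πikx} restricted to [0, 1] are orthonormal) and approximates inner products by Riemann sums; none of the names or parameters come from the paper:

```python
import numpy as np

# Discrete sketch of T, T* and S for phi = chi_[0,1]; then E_k phi(x) = e^{2 pi i k x}
# on [0, 1]. Inner products are approximated by Riemann sums on an M-point grid.
M = 2048
x = np.arange(M) / M
ks = np.arange(-5, 6)                            # a finite block of modulations
E = np.exp(2j * np.pi * np.outer(ks, x))         # row k: samples of E_k phi

# Gram matrix <E_k phi, E_j phi>: the identity, i.e. the modulates are orthonormal.
G = (E @ E.conj().T) / M
print(np.allclose(G, np.eye(len(ks))))           # True

# Frame operator S f = sum_k <f, E_k phi> E_k phi reproduces f when f lies in
# the span of the chosen modulates (analysis T* followed by synthesis T):
f = 2.0 * E[3] - 1j * E[7]
coeffs = (E.conj() @ f) / M                      # T* f (Riemann-sum inner products)
Sf = E.T @ coeffs                                # T applied to the coefficients
print(np.allclose(Sf, f))                        # True
```

Because the Gram matrix is the identity, analysis followed by synthesis reproduces any f in the span of the chosen modulates, a discrete shadow of Sf = ∑k ⟨f, Ekϕ⟩Ekϕ.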


Definition 2.4. For n ∈ N, the B-splines Bn are defined inductively as B1 = χ[−1/2, 1/2] and Bn+1 = Bn ∗ B1, that is,

Bn+1(x) = ∫_{−1/2}^{1/2} Bn(x − t) dt.
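The inductive definition above can be tested numerically by discretizing the convolution Bn+1 = Bn ∗ B1. The sketch below (grid spacing, domain and tolerances are arbitrary choices of ours) recovers B2 as the hat function 1 − |x| on [−1, 1] and confirms that each Bn integrates to 1:

```python
import numpy as np

# Check B_{n+1} = B_n * B_1 numerically. B_1 = chi_[-1/2, 1/2]; the convolution
# integral over [-1/2, 1/2] is discretized as a Riemann sum on a uniform grid.
h = 1e-3
x = np.linspace(-3.0, 3.0, 6001)                 # symmetric grid with spacing h
B1 = np.where(np.abs(x) <= 0.5, 1.0, 0.0)

def next_spline(Bn):
    # B_{n+1}(x) = integral_{-1/2}^{1/2} B_n(x - t) dt, as a discrete convolution
    return np.convolve(Bn, B1, mode="same") * h

B2 = next_spline(B1)
B3 = next_spline(B2)

hat = np.maximum(1.0 - np.abs(x), 0.0)           # B_2 in closed form
print(np.max(np.abs(B2 - hat)))                  # O(h): discretization error
print(np.sum(B3) * h)                            # ~ 1.0: each B_n has integral 1
```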

3 Main Results

We give a necessary and sufficient condition for the system of modulates {Ekϕ}k∈Z, ϕ ∈ L2(R), to be a frame sequence, a Riesz sequence or an orthonormal sequence, in terms of a function Φ associated with ϕ.

Theorem 3.1. For ϕ ∈ L2(R), consider the function Φ on R defined as Φ(x) = ∑k∈Z |ϕ(x + k)|², and consider the set N = {x ∈ [0, 1] : Φ(x) = 0}. Then for A, B > 0, the following characterizations hold:

(1) {Ekϕ}k∈Z is a frame sequence with bounds A and B if and only if A ≤ Φ(x) ≤ B, a.e. x ∈ [0, 1] − N.

(2) {Ekϕ}k∈Z is a Riesz sequence with bounds A and B if and only if A ≤ Φ(x) ≤ B, a.e. x ∈ [0, 1].

(3) {Ekϕ}k∈Z is an orthonormal sequence if and only if Φ(x) = 1, a.e. x ∈ [0, 1].

Proof. (1) Let {ck}k∈Z ∈ l²(Z) and let {Ekϕ}k∈Z be a Bessel sequence with pre-frame operator T. Using Theorem 3.2 in [9], we have

∥T{ck}k∈Z∥² = ∫₀¹ |∑k∈Z ck e^{2πikx}|² Φ(x) dx.    (3)

Let the kernel of T be denoted by Ker(T). Then, using Equation (3),

Ker(T) = { {ck}k∈Z ∈ l²(Z) : ∑k∈Z ck e^{2πikx} = 0 for all x ∈ [0, 1] − N }.

Let {ck}k∈Z ∈ Ker(T)⊥. Then ⟨{ck}k∈Z, {dk}k∈Z⟩ = 0 for all {dk}k∈Z ∈ Ker(T). As {e^{2πikx}}k∈Z is an orthonormal basis for L²(0, 1), {ck}k∈Z ∈ Ker(T)⊥ if and only if

⟨∑k∈Z ck e^{2πikx}, ∑k∈Z dk e^{2πikx}⟩_{L²(0,1)} = 0 for all {dk}k∈Z ∈ Ker(T).

Hence,

Ker(T)⊥ = { {ck}k∈Z ∈ l²(Z) : ∑k∈Z ck e^{2πikx} = 0 for all x ∈ N }.

Using this description of Ker(T)⊥, for all {ck}k∈Z ∈ Ker(T)⊥ we have

∑k∈Z |ck|² = ∫_{[0,1]−N} |∑k∈Z ck e^{2πikx}|² dx.    (4)


A. ZOTHANSANGA AND SUMAN PANWAR

Now using Equations (3) and (4) and Lemma 5.4.5 in [4], {E_kϕ}_{k∈Z} is a frame sequence with lower bound A if and only if

A ∫_{[0,1]−N} |∑_{k∈Z} c_k e^{2πikx}|² dx ≤ ∫_{[0,1]−N} |∑_{k∈Z} c_k e^{2πikx}|² Φ(x) dx,

which is equivalent to A ≤ Φ(x) a.e. x ∈ [0, 1] − N. Also, {E_kϕ}_{k∈Z} is a frame sequence with upper bound B if and only if

B ∫_{[0,1]−N} |∑_{k∈Z} c_k e^{2πikx}|² dx ≥ ∫_{[0,1]−N} |∑_{k∈Z} c_k e^{2πikx}|² Φ(x) dx,

which is equivalent to B ≥ Φ(x) a.e. x ∈ [0, 1] − N. Hence, we obtain

A ≤ Φ(x) ≤ B a.e. x ∈ [0, 1] − N.

(2) Let {c_k}_{k∈Z} ∈ l²(Z). Then, using Theorem 3.2 in [9], we have

‖T{c_k}_{k∈Z}‖² = ∫₀¹ |∑_{k∈Z} c_k e^{2πikx}|² Φ(x) dx and   (5)

∑_{k∈Z} |c_k|² = ∫₀¹ |∑_{k∈Z} c_k e^{2πikx}|² dx.   (6)

Also, {E_kϕ}_{k∈Z} is a Riesz sequence with bounds A and B if and only if

A ∑_{k∈Z} |c_k|² ≤ ‖T{c_k}_{k∈Z}‖² ≤ B ∑_{k∈Z} |c_k|².   (7)

Using (5) and (6) in (7), we get that {E_kϕ}_{k∈Z} is a Riesz sequence with bounds A and B if and only if

A ∫₀¹ |∑_{k∈Z} c_k e^{2πikx}|² dx ≤ ∫₀¹ |∑_{k∈Z} c_k e^{2πikx}|² Φ(x) dx ≤ B ∫₀¹ |∑_{k∈Z} c_k e^{2πikx}|² dx,

which implies that A ≤ Φ(x) ≤ B, a.e. x ∈ [0, 1].

(3) Let {c_k}_{k∈Z} ∈ l²(Z). We know that {E_kϕ}_{k∈Z} is an orthonormal sequence if and only if

∑_{k∈Z} |c_k|² ≤ ‖T{c_k}_{k∈Z}‖² ≤ ∑_{k∈Z} |c_k|².   (8)

Thus, {E_kϕ}_{k∈Z} is an orthonormal sequence if and only if

∫₀¹ |∑_{k∈Z} c_k e^{2πikx}|² dx ≤ ∫₀¹ |∑_{k∈Z} c_k e^{2πikx}|² Φ(x) dx ≤ ∫₀¹ |∑_{k∈Z} c_k e^{2πikx}|² dx.

This proves (3).
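Part (3) can be checked numerically for a concrete generator. Taking ϕ = χ_{[−1/2,1/2)} (the half-open convention below is ours, and changes ϕ only on a null set), Φ is identically 1 on [0, 1]:

```python
import numpy as np

# Phi(x) = sum_k |phi(x + k)|^2 for phi = chi_{[-1/2, 1/2)}: exactly one
# integer translate covers each point, so Phi = 1 a.e., and by
# Theorem 3.1 (3) the modulates E_k phi form an orthonormal sequence.
phi = lambda x: np.where((x >= -0.5) & (x < 0.5), 1.0, 0.0)
x = np.linspace(0.0, 1.0, 1001)
Phi = sum(np.abs(phi(x + k)) ** 2 for k in range(-3, 4))
print(Phi.min(), Phi.max())             # both 1.0
```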


From general frame theory we know that if, for ϕ ∈ L²(R), {E_kϕ}_{k∈Z} is a frame sequence, then for all f ∈ [E_kϕ]_{k∈Z} we have

f = ∑_{k∈Z} ⟨f, S⁻¹E_kϕ⟩ E_kϕ.

In order to apply this frame decomposition, we should be able to calculate S⁻¹E_kϕ for all k ∈ Z. The following theorem simplifies this process by showing that instead of calculating S⁻¹E_kϕ for all k ∈ Z, it is sufficient to calculate S⁻¹ϕ.

Theorem 3.2. For ϕ ∈ L²(R) assume that {E_kϕ}_{k∈Z} is a frame sequence. Then, for all f ∈ [E_kϕ]_{k∈Z},

(1) SE_kf = E_kSf, for all k ∈ Z;
(2) S⁻¹E_kf = E_kS⁻¹f, for all k ∈ Z;

where S denotes the frame operator for [E_kϕ]_{k∈Z}.

Proof. (1) Given f ∈ [E_kϕ]_{k∈Z} and k ∈ Z, we have

SE_kf = ∑_{k′∈Z} ⟨E_kf, E_{k′}ϕ⟩ E_{k′}ϕ = ∑_{k′∈Z} ⟨f, E_{k′−k}ϕ⟩ E_{k′}ϕ.

Replacing k′ by k′ + k, we finally obtain

SE_kf = E_k ∑_{k′∈Z} ⟨f, E_{k′}ϕ⟩ E_{k′}ϕ = E_kSf.

(2) Since {E_kϕ}_{k∈Z} is a frame sequence, it is a frame for [E_kϕ]_{k∈Z}. Therefore the operator S is invertible, and the proof of (2) follows from (1).

4 Applications

We now prove that the system of modulates of the B-spline B_1 is a Riesz sequence with Riesz bound 1.

Theorem 4.1. For B_1 = χ_[−1/2, 1/2], {E_kB_1}_{k∈Z} is a Riesz sequence with Riesz bound 1.

Proof. Note that for k ∈ Z, ‖E_kB_1‖² = ∫_{−1/2}^{1/2} |e^{2πikx}|² dx = 1.

For k ≠ j, ⟨E_kB_1, E_jB_1⟩ = ∫_{−1/2}^{1/2} e^{2πi(k−j)x} dx = 0.

Thus, {E_kB_1}_{k∈Z} is an orthonormal sequence, and therefore, in view of Theorem 3.1, the result follows.
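The orthonormality used in this proof is easy to confirm numerically (the midpoint quadrature and grid size are our choices):

```python
import numpy as np

# <E_k B_1, E_j B_1> = int_{-1/2}^{1/2} e^{2 pi i (k-j) x} dx = delta_{kj},
# evaluated by the midpoint rule, which is exact here for k != j.
N = 4096
x = (np.arange(N) + 0.5) / N - 0.5        # midpoints covering [-1/2, 1/2]

def inner(k, j):
    return np.sum(np.exp(2j * np.pi * (k - j) * x)) / N

print(abs(inner(3, 3)))                   # 1.0
print(abs(inner(3, 5)))                   # ~ 0
```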

As an important outcome of Theorem 3.1, we now prove that the integer modulates of any B-spline form a Riesz-Fischer sequence.


Theorem 4.2. For n ≥ 1, {E_kB_n}_{k∈Z} is a Riesz-Fischer sequence with lower bound 1.

Proof. By Theorem 3.1, for n ≥ 1, {E_kB_n}_{k∈Z} is a Riesz-Fischer sequence with lower bound 1 if and only if 1 ≤ Φ(x), x ∈ [0, 1], where Φ(x) = ∑_{k∈Z} |B_n(x + k)|².

By Corollary 6.2.1 in [4], we have ∑_{k∈Z} B_n(x + k) = 1 for all x ∈ R. Now,

1 = |∑_{k∈Z} B_n(x + k)|² ≤ ∑_{k∈Z} |B_n(x + k)|², for all x ∈ R.   (9)

Since {E_kB_1}_{k∈Z} is a Riesz sequence with Riesz bound 1, by Theorem 3.1 we have

∑_{k∈Z} |B_1(x + k)|² = 1, a.e. x ∈ [0, 1].   (10)

Using (9) and (10), we have

1 ≤ ∑_{k∈Z} |B_n(x + k)|², a.e. x ∈ [0, 1].

Thus, for n ≥ 1, {E_kB_n}_{k∈Z} is a Riesz-Fischer sequence with lower bound 1.

Acknowledgement

The research of the second author is supported by CSIR vide letter no. 09/045(1140)/2011-EMR-I dated 16/11/2011.

References

[1] P.G. Casazza, D. Han and D.R. Larson, Frames for Banach spaces, Contemp. Math., 247, 149-181, 1999.
[2] P.G. Casazza, The art of frame theory, Taiwanese J. Math., 4(2), 129-201, 2000.
[3] O. Christensen, O.H. Kim, R.Y. Kim and J.K. Lim, Riesz sequences of translates and their generalized duals, J. Geometric Analysis, 16(4), 585-596, 2006.
[4] O. Christensen, Frames and Bases: An Introductory Course, Birkhäuser, Boston, 2008.
[5] R.J. Duffin and A.C. Schaeffer, A class of non-harmonic Fourier series, Trans. Amer. Math. Soc., 72(2), 341-366, 1952.
[6] J.M. Kim and K.H. Kwon, Frames by integer translates and their generalized duals, J. Korean Society for Industrial and Applied Mathematics, 11(3), 1-5, 2007.
[7] D. Gabor, Theory of communications, J. IEE (London), 93(3), 429-457, 1946.
[8] C. Heil, A Basis Theory Primer, Birkhäuser, Boston, 2011.
[9] Suman Panwar, On system of modulates for L²(R), Journal of Wavelet Theory and Applications, 7(1), 69-75, 2013.
[10] R.M. Young, An Introduction to Non-harmonic Fourier Series, Academic Press, New York, 1980 (Revised First Edition, 2001).


SOME DEFINITION OF A NEW INTEGRAL TRANSFORM BETWEEN ANALOGUE AND DISCRETE SYSTEMS

S.K.Q. AL-OMARI

Department of Applied Sciences, Faculty of Engineering Technology
Al-Balqa Applied University, Amman 11134, Jordan
[email protected]

ABSTRACT

The D transform was introduced as a new integral transform which states an equivalence between analogue and discrete systems [16]. In this article, we establish a convolution theorem for the D transform and give its definition in the space of generalized functions. The D transform of a distribution is a one-to-one and linear mapping. Moreover, we establish that the D transform of a Boehmian is a distribution and retains its classical properties.

Keywords: D Transformation; Integral Transform; Analogue System; Discrete System; Distribution; Boehmian.

1 INTRODUCTION

Sampling techniques are based on the transformation x → x̃, where x is an analogue function and x̃ is the sequence

x̃(n) = x(nT),   (1)

T being a sampling period. Let x be an analogue signal which is causal, i.e. t < 0 ⇒ x(t) = 0. Then the D transform of x is defined by [16]

D(x)(m) := ∫₀^∞ x(t) e^{−t/T} (t/T)^m / m! dt,  m ∈ N.   (2)

Given an analogue system characterized by an impulse response h, a discrete system is described by the impulse response D(h), whose input signal is D(x), where x is an input signal of the analogue system with output D(y), and

D(x ⋆ y) = D(x) ⊛ D(y),   (3)

⊛ denoting discrete convolution, where

(x ⋆ y)(t) = ∫₀^t x(τ) y(t − τ) dτ.   (4)

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 19-30, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


It is also clear that the cited transform extends possibilities concerning linear difference and differential equations as well. We recall the following example from the given citation.

Example [16]. Given a system described by the linear differential equation

(L/R) dy/dt + y = x,

where L and R are given as in Figure 2 of [16], an equivalent discrete system under the D transform can be written as

(L/R) Δ₁ỹ + ỹ = x̃,

where Δ₁y(n) = (y(n) − y(n − 1))/T. Hence, calculations then lead to the solution x(t) = δ(t), where δ represents the Dirac delta distribution.
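The backward difference Δ₁ in this example approximates d/dt on the samples; a minimal sketch (T and the test signal y are illustrative choices, not taken from [16]):

```python
import numpy as np

# Delta_1 y(n) = (y(n) - y(n-1)) / T approximates dy/dt at t = nT,
# with an O(T) discretisation error.
T = 0.01
n = np.arange(1, 200)
y = lambda t: np.sin(t)

d1 = (y(n * T) - y((n - 1) * T)) / T      # Delta_1 y
err = np.max(np.abs(d1 - np.cos(n * T)))  # compare with the true derivative
print(err)                                 # ~ T/2, i.e. about 0.005
```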

We enumerate certain properties of the D transform, summarized as follows:

(1) Linearity: D(αx + βy) = αD(x) + βD(y).

(2) Derivation: D(dⁿx/dtⁿ) = Δₙ D(x), where Δₙx = Δ₁(Δ_{n−1}x).

(3) Integration: For f = dF/dt we have D(F)(m) = T ∑_{k=0}^m D(f)(k) + T F(0).

(4) Sum and integral correspondence: ∫₀^∞ x(t) dt = ∑_{m=0}^∞ D(x)(m).
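Property (4) can be verified numerically against definition (2), assuming the normalization reconstructed there and T = 1 (both are our reading of the garbled source). For x(t) = e^{−t} the transform is exact: D(x)(m) = ∫₀^∞ e^{−2t} t^m/m! dt = 2^{−(m+1)}, whose sum is 1 = ∫₀^∞ e^{−t} dt:

```python
import math
import numpy as np

t = np.linspace(0.0, 60.0, 600001)
dt = t[1] - t[0]

def trap(y):                            # composite trapezoidal rule
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dt

def D(x, m):                            # D(x)(m), definition (2) with T = 1
    return trap(x(t) * np.exp(-t) * t ** m / math.factorial(m))

x = lambda s: np.exp(-s)
vals = [D(x, m) for m in range(40)]
print(vals[0])                          # 0.5 = 2^{-1}
print(sum(vals))                        # ~ 1, matching property (4)
```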

2 CONVOLUTION THEOREM OF THE D TRANSFORM

Denote by ⋆ the usual convolution product, but extended in its upper limit; that is,

(f ⋆ g)(t) = ∫₀^∞ f(τ) g(t − τ) dτ.   (5)

Then we establish the following theorem.


Theorem 1 (Convolution Theorem). Let x and y be causal analogue signals. Then we have

D(x ⋆ y)(m) = ∑_{k=0}^m D(x)(k) D(y)(m − k).   (6)

Proof. Let x and y be arbitrary signals; then, by (5) and (2), we get that

D(x ⋆ y)(m) = ∫₀^∞ ( ∫₀^∞ x(τ) y(t − τ) dτ ) e^{−t/T} (t/T)^m / m! dt.

Fubini's theorem also gives

D(x ⋆ y)(m) = ∫₀^∞ x(τ) ∫₀^∞ y(t − τ) e^{−t/T} (t/T)^m / m! dt dτ.

The change of variables σ = t − τ implies that

D(x ⋆ y)(m) = ∫₀^∞ x(τ) ∫₀^∞ y(σ) e^{−(τ+σ)/T} ((τ + σ)/T)^m / m! dσ dτ.

Hence, the binomial theorem, (τ + σ)^m = ∑_{k=0}^m \binom{m}{k} τ^k σ^{m−k}, suggests

D(x ⋆ y)(m) = ∑_{k=0}^m \binom{m}{k} (k!(m − k)!/m!) ( ∫₀^∞ x(τ) e^{−τ/T} (τ/T)^k / k! dτ ) ( ∫₀^∞ y(σ) e^{−σ/T} (σ/T)^{m−k} / (m − k)! dσ ).   (7)

Using the formula \binom{m}{k} = m!/((m − k)! k!), Equation (7) simplifies to

D(x ⋆ y)(m) = ∑_{k=0}^m D(x)(k) D(y)(m − k).

This completes the proof of the theorem.
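Theorem 1 admits a concrete numerical check (again assuming the normalization of (2) and T = 1, our reading of the source). For x = y = e^{−t} one has (x ⋆ y)(t) = t e^{−t}, and both sides of (6) equal (m + 1)/2^{m+2}:

```python
import math
import numpy as np

t = np.linspace(0.0, 60.0, 600001)
dt = t[1] - t[0]

def trap(y):
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dt

def D(f, m):                           # definition (2) with T = 1
    return trap(f(t) * np.exp(-t) * t ** m / math.factorial(m))

x = lambda s: np.exp(-s)
conv = lambda s: s * np.exp(-s)        # x * y evaluated in closed form

m = 5
lhs = D(conv, m)                       # left side of (6)
rhs = sum(D(x, k) * D(x, m - k) for k in range(m + 1))
print(lhs, rhs, (m + 1) / 2 ** (m + 2))
```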

3 GENERALIZED D TRANSFORM

In this section we discuss the D transform on a distribution space [3,10,13] of compact support and on a quotient space of Boehmians [7]. To this end, the reader is supposed to be acquainted with the concept of distribution spaces [17]. The minimal structure of Boehmians is given in Section 3.2 of this note.


3.1 DISTRIBUTIONAL D TRANSFORM

Denote by E(R₊) the space of smooth functions over R₊ equipped with its usual topology [13,10]; then for real values t > 0 we certainly get that

e^{−t/T} (t/T)^m / m! ∈ E(R₊),   (8)

T being the sampling period. By (8), we define the distributional D transform Ď of a distribution x in E′(R₊), the space of distributions of compact support, as

Ď(x)(m) = ⟨ x(t), e^{−t/T} (t/T)^m / m! ⟩.   (9)
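Definition (9) can be illustrated on the point mass δ_a, a compactly supported distribution of our choosing: the pairing gives Ď(δ_a)(m) = e^{−a/T}(a/T)^m/m!, and approximating δ_a by narrow normalised pulses recovers that value:

```python
import math
import numpy as np

T, a, m = 1.0, 2.0, 3
exact = math.exp(-a / T) * (a / T) ** m / math.factorial(m)

for eps in (0.1, 0.01, 0.001):
    t = np.linspace(a - eps, a + eps, 10001)
    pulse = np.full_like(t, 1.0 / (2 * eps))       # area-one pulse ~ delta_a
    kernel = np.exp(-t / T) * (t / T) ** m / math.factorial(m)
    pairing = np.sum(pulse * kernel) * (t[1] - t[0])
    print(pairing, exact)                           # pairing -> exact
```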

The reader can simply establish some properties of the Ď transform, which are listed in the following remark.

Remark 1. Let x ∈ E′(R₊) be given; then the following are satisfied in the distributional sense:

(1) Ď is linear.
(2) Ď is one-to-one.

The proof is straightforward. Moreover, the generalized convolution of (5) can also be described by the inner product

⟨(x ⋆ y)(t), φ(t)⟩ = ⟨x(t), ⟨y(τ), φ(t + τ)⟩⟩,

where φ ∈ E(R₊) is arbitrary. Hence, the distributional Ď transform acts on the convolution by the equation

Ď(x ⋆ y)(m) = ∑_{k=0}^m Ď(x)(k) Ď(y)(m − k).   (10)

3.2 GENERAL BOEHMIAN

The space of Boehmians, the youngest space of generalized functions, is built from the following elements: a linear space Y and a subspace X of Y. To all pairs (f, φ), (g, ψ), f, g ∈ Y, φ, ψ ∈ X, are assigned products f ⋆ φ, g ⋆ ψ such that the following holds:

(1) φ ⋆ ψ ∈ X and φ ⋆ ψ = ψ ⋆ φ.
(2) (f ⋆ φ) ⋆ ψ = f ⋆ (φ ⋆ ψ).
(3) (f + g) ⋆ φ = f ⋆ φ + g ⋆ φ.
(4) k(f ⋆ φ) = (kf) ⋆ φ = f ⋆ (kφ), k ∈ R.

Let Δ be the collection of sequences from X satisfying:

(5) If (φ_n) ∈ Δ and f ⋆ φ_n = g ⋆ φ_n, n = 1, 2, ..., then f = g.


(6) (φ_n), (ψ_n) ∈ Δ ⇒ (φ_n ⋆ ψ_n) ∈ Δ.

Elements of Δ are called delta sequences or approximating identities. Let A be the class of pairs of sequences defined by

A = { ((f_n), (φ_n)) : (f_n) ⊆ Y^N, (φ_n) ∈ Δ },

for all n ∈ N. The pair ((f_n), (φ_n)) ∈ A is said to be a quotient of sequences, written f_n/φ_n, when

f_n ⋆ φ_m = f_m ⋆ φ_n, for all n, m ∈ N.

Quotients of sequences f_n/φ_n and g_n/ψ_n are equivalent, f_n/φ_n ~ g_n/ψ_n, if

f_n ⋆ ψ_m = g_m ⋆ φ_n, for all n, m ∈ N.

The operation ~ is an equivalence relation on A and hence splits A into equivalence classes. The equivalence class containing f_n/φ_n is denoted by [f_n/φ_n]. These equivalence classes are called Boehmians, and the space of all Boehmians is denoted by B(Y, X, ⋆, Δ).

The sum and multiplication by a scalar of two Boehmians are naturally defined in the respective ways:

[f_n/φ_n] + [g_n/ψ_n] = [(f_n ⋆ ψ_n + g_n ⋆ φ_n)/(φ_n ⋆ ψ_n)]

and

α[f_n/φ_n] = [αf_n/φ_n], α being a complex number.

The operations ⋆ and D^k, the k-th derivative, are defined by

[f_n/φ_n] ⋆ [g_n/ψ_n] = [(f_n ⋆ g_n)/(φ_n ⋆ ψ_n)] and D^k[f_n/φ_n] = [D^k f_n/φ_n].

Many a time, Y is equipped with a notion of convergence. The intrinsic relationship between the notion of convergence and the product ⋆ is given by:

(1) If f_n → f as n → ∞ in Y and φ ∈ X is any fixed element, then f_n ⋆ φ → f ⋆ φ in Y as n → ∞.

(2) If f_n → f as n → ∞ in Y and (φ_n) ∈ Δ, then f_n ⋆ φ_n → f in Y as n → ∞.

The operation ⋆ extends to B(Y, X, ⋆, Δ) × X in the natural way, [f_n/φ_n] ⋆ φ = [(f_n ⋆ φ)/φ_n]. More details of the construction of Boehmians are given in [1, 2, 4, 5, 6, 7, 8, 9, 11, 12, 14, 15].


3.3 D TRANSFORM OF A BOEHMIAN

Denote by U the space of test functions of bounded support and by U′ its conjugate space of distributions defined on R₊. By Δ we mean the subset of U of all sequences of analogue signals (δ_i)₁^∞ in U such that:

(i) ∫₀^∞ δ_i(t) dt = 1;
(ii) δ_i(t) > 0, for all i ∈ N;
(iii) supp δ_i(t) → 0 as i → ∞.

Each such sequence of signals is called a delta sequence; delta sequences correspond to the Dirac delta function.

In what follows, we are concerned with the Boehmian space B(E, U, ⋆, Δ), where E is the group, U is the subgroup of E and ⋆ is the operation. A typical element β of B(E, U, ⋆, Δ) is written in the form

β := [x_i/δ_i],

where (x_i) is an analogue sequence from E(R₊) and (δ_i) ∈ Δ.

Sums, derivations, multiplication by scalars and the operation ⋆ between Boehmians in B(E, U, ⋆, Δ) are respectively defined in the natural way as:

[x_i/δ_i] + [y_i/σ_i] = [(x_i ⋆ σ_i + y_i ⋆ δ_i)/(δ_i ⋆ σ_i)],

(d^k/dx^k)[x_i/δ_i] = [(d^k x_i/dx^k)/δ_i],

α[x_i/δ_i] = [αx_i/δ_i], α being a complex number,

and

[x_i/δ_i] ⋆ [y_i/σ_i] = [(x_i ⋆ y_i)/(δ_i ⋆ σ_i)].

Theorem 2. Let (x_i) ∈ E and (δ_i) ∈ Δ be a pair of analogue sequences such that β = [x_i/δ_i] ∈ B(E, U, ⋆, Δ). Then Dx_i converges uniformly in U′(R₊).

Proof. Let φ ∈ U(R₊) be arbitrary. Choose (δ_{k₀}) ∈ Δ such that Dδ_{k₀}(m − k) > 0 on the support of φ, 1 ≤ k ≤ m. Then the fact that [x_i/δ_i] represents a quotient means

x_i ⋆ δ_j = x_j ⋆ δ_i, i, j ∈ N.   (11)

Applying the D transform to (11) yields

∑_{k=0}^m Dx_i(k) Dδ_j(m − k) = ∑_{k=0}^m Dx_j(k) Dδ_i(m − k),   (12)

from which we get

Dx_i(k) Dδ_j(m − k) = Dx_j(k) Dδ_i(m − k).   (13)

Therefore,

Dx_i(k)(φ) = Dx_i(k) Dδ_{k₀}(m − k) ( φ / Dδ_{k₀}(m − k) ).

By (13) we write

Dx_i(k)(φ) = Dx_{k₀}(k) Dδ_i(m − k) ( φ / Dδ_{k₀}(m − k) ).   (14)

But the fact that

Dδ_i(m − k) → e^{−t/T} (t/T)^k / k!,   (15)

as i → ∞, when inserted in (14), yields

Dx_i(k)(φ) → Dx_{k₀}(k)(ψ) as i → ∞,

where ψ := e^{−t/T} (t/T)^k / k! · φ / Dδ_{k₀}(m − k). Once again, the fact that ψ is of course a smooth function with the property that

supp ψ ⊆ supp φ

implies that Dx_i(k) defines a distribution in U′(R₊). This completes the proof of the theorem.

Theorem 3. Let [x_i/δ_i] = [y_i/σ_i] ∈ B(E, U, ⋆, Δ); then Dx_i(k) and Dy_i(k) converge to the same limit in U′(R₊).

Proof. Let [x_i/δ_i] = [y_i/σ_i] ∈ B(E, U, ⋆, Δ); then defining sequences A_i and B_i by

A_i = x_{(i+1)/2} ⋆ δ_{(i+1)/2} if i is odd, A_i = y_{i/2} ⋆ σ_{i/2} if i is even,

and

B_i = δ_{(i+1)/2} ⋆ δ_{(i+1)/2} if i is odd, B_i = σ_{i/2} ⋆ σ_{i/2} if i is even,

implies

[A_i/B_i] = [x_i/δ_i] = [y_i/σ_i].

Therefore DA_i converges in U′(R₊) and

lim_{i→∞} DA_{2i−1}(φ) = lim_{i→∞} Dx_i(φ).

Thus, DA_i and Dx_i converge to the same limit. Similarly we proceed for DA_i and Dy_i. Hence the proof is completed.

Now, by virtue of the above, we introduce the D transform of the equivalence class [x_i/δ_i] ∈ B(E, U, ⋆, Δ) as

D[x_i/δ_i] = lim_{i→∞} Dx_i   (16)

on compact subsets of R₊.

Theorem 4. The transform D : B(E, U, ⋆, Δ) → U′(R₊) is well defined.

Proof. Considering the equation [x_i/δ_i] = [y_i/σ_i], we get, by the concept of quotients, that

x_i ⋆ σ_j = y_j ⋆ δ_i.   (17)

Applying the D transform to (17), then using the idea of (12) and (13), we get

Dx_i(k) Dσ_j(m − k) = Dy_j(k) Dδ_i(m − k).

In particular, for i = j, we get

Dx_i(k) Dσ_i(m − k) = Dy_i(k) Dδ_i(m − k).   (18)

Considering limits as i → ∞ in (18), we find

lim_{i→∞} Dx_i(k) = lim_{i→∞} Dy_i(k).

Hence the theorem.

Theorem 5. The transform D : B(E, U, ⋆, Δ) → U′(R₊) is infinitely smooth.

Proof. Let [x_i/δ_i] ∈ B(E, U, ⋆, Δ) and let K be an open bounded subset of R₊; then there is j ∈ N such that

Dδ_j(m − k) > 0, 1 ≤ k ≤ m, on K.

Hence,

D[x_i/δ_i] = lim_{i→∞} Dx_i(k) = lim_{i→∞} Dx_i(k) Dδ_j(m − k) / Dδ_j(m − k).

By (13) we get

D[x_i/δ_i] = lim_{i→∞} Dx_j(k) Dδ_i(m − k) / Dδ_j(m − k) → (e^{−t/T} (t/T)^k / k!) Dx_j(k) / Dδ_j(m − k) on K.

The last step follows from (15). Since Dx_j and Dδ_j are in E(R₊), our result follows. This completes the proof of the theorem.

Theorem 6. Let (x_i), (y_i) ∈ E and (δ_i), (σ_i) ∈ Δ be given sequences of signals such that [x_i/δ_i], [y_i/σ_i] ∈ B(E, U, ⋆, Δ). Then

D([x_i/δ_i] ⋆ [y_i/σ_i]) = (e^{−t/T} (t/T)^k / k!) D[x_i/δ_i] D[y_i/σ_i].

The proof of this theorem is a consequence of (15) and (16).

Theorem 7. Let [x_i/δ_i] define a Boehmian in B(E, U, ⋆, Δ); then

D[x_i/δ_i] (Dδ_j)(m − k) = e^{−t/T} (t/T)^k / k! · Dx_j(k), 1 ≤ k ≤ m.

Proof. Let φ ∈ U(R₊); then (15) and (16) give

D[x_i/δ_i] (Dδ_j)(m − k)(φ) = lim_{i→∞} Dx_i(k) (Dδ_i)(m − k)(φ).

Equation (13) then yields

D[x_i/δ_i] (Dδ_j)(m − k)(φ) = lim_{i→∞} Dx_j(k) (Dδ_i)(m − k)(φ) = e^{−t/T} (t/T)^k / k! (Dx_j(k))(φ).

Hence the theorem.

Theorem 8. The transform D : B(E, U, ⋆, Δ) → U′(R₊) is continuous with respect to Δ-convergence.

Proof. Let (β_i) and β in B(E, U, ⋆, Δ) be such that β_i → β. There exist (x_{i,k}), (x_k) ∈ E and (δ_i) ∈ Δ such that

β_i = [x_{i,k}/δ_i], β = [x_k/δ_i],

and x_{i,k} → x_k for every k ∈ N as i → ∞ in E. Continuity of D implies D(x_{i,k}) → D(x_k) as i → ∞ in U′(R₊). Thus,

Dβ_i → Dβ as i → ∞.

This completes the proof of the theorem.

Theorem 9. The transform D : B(E, U, ⋆, Δ) → U′(R₊) is linear.

Proof. Let k₁, k₂ ∈ R; then we get

D(k₁[x_i/δ_i] + k₂[y_i/σ_i]) = D([k₁x_i/δ_i] + [k₂y_i/σ_i]).

That is,

D(k₁[x_i/δ_i] + k₂[y_i/σ_i]) = k₁ lim_{i→∞} Dx_i + k₂ lim_{i→∞} Dy_i,

that is,

D(k₁[x_i/δ_i] + k₂[y_i/σ_i]) = k₁ D[x_i/δ_i] + k₂ D[y_i/σ_i].

This completes the proof of the theorem.

Theorem 10. The transform D : B(E, U, ⋆, Δ) → U′(R₊) is one-to-one.

Proof. Let [x_i/δ_i] = [y_i/σ_i]; then x_i ⋆ σ_j = y_j ⋆ δ_i. Applying the D transform, then using (13) and considering limits for i = j, implies lim x_i = lim y_i as i → ∞. Thus,

D[x_i/δ_i] = D[y_i/σ_i].

Hence the proof.

We state the following theorems without proof; due to the simplicity of the proofs we prefer that they be omitted.

Theorem 11. D([x_i/δ_i] ⋆ δ_i)(m − k) = Dδ_i D[x_i/δ_i], (δ_i) ∈ Δ.

Theorem 12. If D[x_i/δ_i] = 0, then [x_i/δ_i] = 0 in the sense of Boehmians of B(E, U, ⋆, Δ).

Theorem 13. If (β_n) is a sequence of Boehmians in B(E, U, ⋆, Δ) such that β_n → β as n → ∞, then Dβ_n → Dβ as n → ∞ in U′(R₊) on compact subsets.

Conflict of Interests. The author declares that there is no conflict of interests regarding the publication of this paper.


References

[1] Mikusinski, P. (1987). Fourier transform for integrable Boehmians, Rocky Mountain Journal of Mathematics, 17(3), 577-582.
[2] Al-Omari, S.K.Q., Loonker, D., Banerji, P.K. and Kalla, S.L. (2008). Fourier sine (cosine) transform for ultradistributions and their extensions to tempered and ultraBoehmian spaces, Integ. Trans. Spl. Funct., 19(6), 453-462.
[3] Banerji, P.K., Al-Omari, S.K.Q. and Debnath, L. (2006). Tempered distributional Fourier sine (cosine) transform, Integ. Trans. Spl. Funct., 17(11), 759-768.
[4] Al-Omari, S.K.Q. and Kilicman, A. (2012). Note on Boehmians for class of optical Fresnel wavelet transforms, Journal of Function Spaces and Applications, Volume 2012, Article ID 405368, doi:10.1155/2012/405368.
[5] Boehme, T.K. (1973). The support of Mikusinski operators, Trans. Amer. Math. Soc., 176, 319-334.
[6] Mikusinski, P. (1983). Convergence of Boehmians, Japan. J. Math. (N.S.), 9, 159-179.
[7] Al-Omari, S.K.Q. and Kilicman, A. (2012). On generalized Hartley-Hilbert and Fourier-Hilbert transforms, Advances in Difference Equations, 2012:232, doi:10.1186/1687-1847-2012-232.
[8] Mikusinski, P. (1995). Tempered Boehmians and ultradistributions, Proc. Amer. Math. Soc., 123(3), 813-817.
[9] Roopkumar, R. (2009). Mellin transform for Boehmians, Bull. Inst. Math., Academia Sinica, 4(1), 75-96.
[10] Zemanian, A.H. (1987). Generalized Integral Transformation, Dover Publications, Inc., New York. First published by Interscience Publishers, New York.
[11] Al-Omari, S.K.Q. (2011). Generalized functions for double Sumudu transformation, Intern. J. Algeb., 6(3), 139-146.
[12] Al-Omari, S.K.Q. and Kilicman, A. (2012). On diffraction Fresnel transforms for Boehmians, Abstract and Applied Analysis, Volume 2011, Article ID 712746.
[13] Pathak, R.S. (1997). Integral Transforms of Generalized Functions and Their Applications, Gordon and Breach Science Publishers, Australia, Canada, India, Japan.
[14] Roopkumar, R. (2007). Stieltjes transform for Boehmians, Integral Transf. Spl. Funct., 18(11), 845-853.
[15] Al-Omari, S.K.Q. (2011). Notes for Hartley transforms of generalized functions, Italian J. Pure Appl. Math., N.28, 21-30.


[16] Hekrdla, J. (1988). New integral transform between analogue and discrete systems, Electronics Letters, 24(10), 620-623.
[17] Zemanian, A.H. (1987). Distribution Theory and Transform Analysis, Dover Publications, Inc., New York. First published by McGraw-Hill, Inc., New York (1965).


SOME FIXED POINT THEOREMS FOR ITERATED CONTRACTION MAPS

Santosh Kumar

Department of Mathematics
College of Natural and Applied Sciences
University of Dar es Salaam
P.O. Box 35062
Tanzania
E-mail: [email protected]

ABSTRACT

The study of iterated contraction was initiated by Rheinboldt in 1969. The concept of iterated contraction proves to be very useful in the study of certain iterative processes and has wide applicability. In this paper a brief introduction to iterated contraction maps is given and some fixed point results are discussed.

1. INTRODUCTION

In 1969, Rheinboldt [4] gave a unified convergence theory for a class of iterative processes. He initiated the process of finding a solution of the nonlinear operator equation Fx = 0 using metric fixed point theory. Rheinboldt developed a general convergence theory for iterative processes of the form x_{k+1} = Gx_k, k = 0, 1, ..., founded on certain nonlinear estimates for the iteration function G as well as on the so-called concept of majorizing sequences. His approach reduces the study of the iterative process to that of a second order nonlinear difference equation.

2000 Mathematics Subject Classification: 47H10, 54H25.
Key words and phrases: Nonexpansive mappings, iteration process, fixed points of nonexpansive map, compact map, iterated contraction map.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 31-39, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


The purpose of this paper is to establish yet another fixed point theorem for iterated contraction maps in different settings. Our results extend several results contained in [1], [2], [6], etc.

2. PRELIMINARIES

If f is a contractive mapping, then under certain conditions the sequence x_{n+1} = fx_n tends to the unique fixed point of f. Ortega, J.M. and Rheinboldt, W.C. [3] proved results of this type. The following definitions are useful to our discussion.

Definition 2.1. If f : X → X is a map such that d(fx, ffx) ≤ k d(x, fx) for all x ∈ X, 0 ≤ k < 1, then f is said to be an iterated contraction map. In case d(fx, ffx) < d(x, fx) for x ≠ fx, then f is an iterated contractive map. See [3,4] for further results.

Definition 2.2. A map f : X → X satisfying d(fx, fy) ≤ k d(x, y), 0 ≤ k < 1, for all x, y ∈ X, is called a contraction map.

A contraction map is continuous and is an iterated contraction: taking y = fx shows that d(fx, ffx) ≤ k d(x, fx) is satisfied. However, the converse is not true.

If f : [−1/2, 1/2] → [−1/2, 1/2] is given by fx = x², then f is an iterated contraction but not a contraction map.

If f : [0, 1] → [0, 1] is defined by fx = 0 for x ∈ [0, 1/2) and fx = 1 for x ∈ [1/2, 1], then f is not continuous at x = 1/2, yet f is an iterated contraction.
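The first example above can be checked on a grid; the constant k = 3/4 below is the supremum of |x(1 + x)| on [−1/2, 1/2], a short computation not spelled out in the text:

```python
import numpy as np

# For f(x) = x^2 on [-1/2, 1/2] the iterated-contraction ratio is
# d(fx, ffx) / d(x, fx) = |x^2 - x^4| / |x - x^2| = |x(1 + x)| <= 3/4,
# even though the Lipschitz constant of f on this interval is 1.
f = lambda x: x ** 2
x = np.linspace(-0.5, 0.5, 100001)
mask = x != f(x)                               # exclude the fixed point 0
ratio = np.abs(f(x) - f(f(x)))[mask] / np.abs(x - f(x))[mask]
print(ratio.max())                             # 0.75, attained at x = 1/2
```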

Remark 2.1. A contraction map has a unique fixed point. However, an iterated contraction map may have more than one fixed point. For example, the iterated contraction function fx = x² on [0, 1] has two fixed points, f0 = 0 and f1 = 1.

Remark 2.2. If f : R → R is given by fx = 2x + 1, then f is a continuous map but f is not an iterated contraction: here d(fx, ffx) = 2 d(x, fx), so d(fx, ffx) ≤ k d(x, fx) is not satisfied for any 0 ≤ k < 1.

Remark 2.3. A discontinuous function need not always be an iterated contraction. Let X = [0, 1]. Define f : [0, 1] → [0, 1] by fx = 1/2 for x ∈ [0, 1/2) and fx = 0 for x ∈ [1/2, 1]. Then f is discontinuous.

If we take x = 1/4, then d(fx, ffx) ≤ k d(x, fx), 0 ≤ k < 1, is not satisfied, and hence f is not an iterated contraction.

If f : [0, 1] → [0, 1] is defined by fx = 1/4 for x ∈ [0, 1/2) and fx = 3/4 for x ∈ [1/2, 1], then f is discontinuous at x = 1/2, and f has fixed points at x = 1/4 and x = 3/4.

If f : R → R is defined by fx = x/3 + 1/3 for x ≤ 0 and fx = x/3 for x > 0, then f is an iterated contraction and f has no fixed point.

If f : [0, 1] → [0, 1] is defined by fx = 0 for x ∈ [0, 1/2) and fx = 1/2 for x ∈ [1/2, 1], then f is discontinuous at x = 1/2, f is an iterated contraction map, and f has a fixed point, f(1/2) = 1/2.

The following is a fixed point theorem for iterated contraction maps.

Theorem 2.1. If f : X → X is a continuous iterated contractive map and the sequence of iterates (x_n) defined by x_{n+1} = fx_n, n = 1, 2, ..., for x ∈ X, has a subsequence converging to y ∈ X, then y = fy, that is, f has a fixed point.

Proof. Let x_{n+1} = fx_n, n = 1, 2, .... Then the sequence d(x_{n+1}, x_n) is a non-increasing sequence of reals. It is bounded below by 0, and therefore has a limit. Since the subsequence (x_{n_i}) converges to y and f is continuous on X, f(x_{n_i}) converges to fy and ff(x_{n_i}) converges to ffy. Thus

d(y, fy) = lim d(x_{n_i}, x_{n_i+1}) = lim d(x_{n_i+1}, x_{n_i+2}) = d(fy, ffy).

If y ≠ fy, then d(ffy, fy) < d(fy, y), since f is an iterated contractive map. Consequently, d(fy, y) = d(ffy, fy) < d(fy, y), a contradiction; hence fy = y.

We give the following example to show that if f is an iterated contraction that is not continuous, then f may not have a fixed point. Let f : R → R be given by fx = x/5 + 1/5 for x ≤ 0 and fx = x/5 for x > 0. Then f does not have a fixed point; here f is discontinuous at x = 0.

If f is continuous but X is not complete, then f need not have a fixed point. For example, if f : (0, 1] → (0, 1] is given by fx = x/2, then f is an iterated contraction but has no fixed point.


Page 34: JOURNAL OF APPLIED FUNCTIONAL ANALYSIS - … OF APPLIED FUNCTIONAL ANALYSIS ... Karayiannis@mail.gr Neural Network Models, ... Athens, Greece tel:: +30(210)

A continuous function f that is not an iterated contraction may not have

a fixed point.

The following example illustrates the fact:

If f : R→ R is a translation map fx = x+ 2, then f has no fixed point but

f is continuous.

Note 2.1 If f is not contraction but some powers of f is contraction,

then f has a unique fixed point on a complete metric space.

For example, if f : R→ R is defined by fx = 1 for x rational and fx = 0

for x irrational, then fofx = f 2x = 1 a constant map, is a contraction map,

and f has a unique fixed point.

Note 2.2. Continuity of an iterated contraction is sufficient but not necessary. Let f : [0, 1] → [0, 1] be defined by fx = 0 on [0, 1/2) and fx = 2/3 for x ∈ [1/2, 1]. Then f is an iterated contraction with f(0) = 0 and f(2/3) = 2/3, and f is not continuous.

As stated in Note 2.1, even if f is not a contraction, f may still have a unique fixed point when some power of f is a contraction map. The same is true for iterated contraction maps.

Theorem 2.2. Let f : X → X be an iterated contraction map on a complete metric space X. Suppose that some power of f, say f^r, is an iterated contraction, that is, d(f^r x, f^r f^r x) ≤ k d(x, f^r x), and that f^r is continuous at y, where y = lim (f^r)^n x for an arbitrary x ∈ X. Then f has a fixed point.

Proof. Since X is a complete metric space and f^r is an iterated contraction that is continuous at y, we have y = f^r y, where y = lim (f^r)^n x. It is easy to show that d(y, fy) ≤ k^r d(y, fy). Since k^r < 1, it follows that d(y, fy) = 0, and hence f has a fixed point.

We give the following example to illustrate the theorem.

Example 2.1. If f : [0, 1] → [0, 1] is given by fx = 1/4 for x ∈ [0, 1/4], fx = 0 for x ∈ (1/4, 1/2), and fx = 1/2 for x ∈ [1/2, 1], then f is not continuous at x = 1/4. However, taking iterates, f^r(x) = 1/4 on [0, 1/2) and f^r(x) = 1/2 on [1/2, 1] for r = 2, 3, .... It is clear that f(1/4) = 1/4.

Note 2.3. If f is not an iterated contraction as in Theorem 2.2, but f^r is an iterated contraction with f^r y = y, then f has a fixed point. The following example is worth mentioning.

Example 2.2. Let f : [0, 1] → [0, 1] be defined by fx = 1/4 for x ∈ [0, 1/4], fx = 0 for x ∈ (1/4, 1/2), fx = 1/2 for x ∈ [1/2, 3/4] and fx = 3/4 for x ∈ (3/4, 1]. Here f is not an iterated contraction, but f^r is an iterated contraction for r = 2, 3, ...: f²(x) = 1/4 for x ∈ [0, 1/2) and f²(x) = 1/2 for x ∈ [1/2, 1]. In this example f has a fixed point at x = 1/4.
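Example 2.2 is easy to probe in code (the witness point 0.2500001 is our choice): the ratio d(fx, ffx)/d(x, fx) gets arbitrarily close to 1 as x → 1/4⁺, so no k < 1 works for f itself, while f² satisfies the definition with k = 0:

```python
# The piecewise map of Example 2.2.
def f(x):
    if x <= 0.25:
        return 0.25
    if x < 0.5:
        return 0.0
    if x <= 0.75:
        return 0.5
    return 0.75

f2 = lambda x: f(f(x))

# Just to the right of 1/4: d(fx, ffx) = 1/4 while d(x, fx) = x ~ 1/4.
r = abs(f(0.2500001) - f(f(0.2500001))) / abs(0.2500001 - f(0.2500001))
print(r)                                   # just below 1: f is not an iterated contraction

# f^2 only takes the fixed values 1/4 and 1/2, so d(f^2 x, f^2 f^2 x) = 0.
print(all(f2(f2(v)) == f2(v) for v in [i / 1000 for i in range(1001)]))
print(f(0.25))                             # the fixed point 1/4
```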

3. MAIN RESULTS

In this section, we prove some fixed point theorems for iterated contraction maps under different settings. Many existing results follow as corollaries. The first theorem is as follows:

Theorem 3.1. If f : X → X is an iterated contraction map and X is a complete metric space, then the sequence of iterates (x_n) converges to some y ∈ X. If, moreover, f is continuous at y, then y = fy, that is, f has a fixed point.

Proof. Let x_{n+1} = fx_n, n = 1, 2, ..., x₁ ∈ X. It is easy to show that (x_n) is a Cauchy sequence, since f is an iterated contraction. The Cauchy sequence (x_n) converges to some y ∈ X, since X is a complete metric space. Moreover, if f is continuous at y, then x_{n+1} = fx_n converges to fy. It follows that y = fy.
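A minimal sketch of the iteration in Theorem 3.1, using the earlier example f(x) = x² on the complete space [−1/2, 1/2] (the starting point is an arbitrary choice):

```python
# Iterate x_{n+1} = f x_n for the iterated contraction f(x) = x^2 on
# [-1/2, 1/2]; the consecutive gaps d(x_{n+1}, x_n) are non-increasing
# and the iterates converge to the fixed point y = 0 = f(0).
f = lambda x: x ** 2
x = 0.45
steps = [x]
for _ in range(30):
    x = f(x)
    steps.append(x)

gaps = [abs(b - a) for a, b in zip(steps, steps[1:])]
print(steps[-1])                                          # ~ 0, the fixed point
print(all(g2 <= g1 for g1, g2 in zip(gaps, gaps[1:])))    # gaps never grow
```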

Note 3.1 A continuous iterated contraction map on a complete metric

space has a unique fixed point. If an iterated contraction map is not contin-

uous, even then it may have a fixed point.

For example, let f : [0, 1] → [0, 1] be given by fx = 0 for x ∈ [0, 1/2) and fx = 2/3 for x ∈ [1/2, 1]. In this example, X = [0, 1] is complete but f is not continuous at x = 1/2. However, f is continuous at x = 0 with f0 = 0, and f is continuous at x = 2/3 with f(2/3) = 2/3.

Theorem 3.2 If f : C → C is an iterated contraction and is continuous, where C is a closed subset of a metric space X, then f has a fixed point provided that f(C) is compact.


Proof: Since f(C) is compact, the sequence {xn} has a convergent subsequence. Using the iterated contraction condition and the continuity of f, the limit is a fixed point.

Definition 3.1 Let X be a metric space and f : X → X. Then f is said

to be an iterated nonexpansive map if

d(fx, ffx) ≤ d(x, fx)

for all x ∈ X.

The following is a fixed point theorem for the iterated nonexpansive map.

Theorem 3.3 Let X be a metric space and f : X → X an iterated nonexpansive map satisfying the following:

(i) if x ≠ fx, then d(ffx, fx) < d(fx, x),

(ii) for some x ∈ X, the sequence of iterates xn+1 = fxn has a convergent subsequence converging to y (say), and f is continuous at y.

Then f has a fixed point.

Proof: It is easy to show that the sequence d(xn+1, xn) is a nonincreasing sequence of nonnegative reals bounded below by 0, so it has a limit. Suppose y ≠ fy. Then d(fy, y) = lim d(xni, xni+1) = lim d(xni+1, xni+2) = d(ffy, fy). This contradicts (i). Therefore f has a fixed point, that is, fy = y.

If C is a compact subset of a metric space X and f : C → C an iterated

nonexpansive map, satisfying condition (i) of the above theorem, then f has

a fixed point.

Note 3.2 If C is compact, then condition (ii) of Theorem 3.3 is satisfied,

and hence the result.

Let C be a closed subset of a metric space X and f : C → C an iterated contraction. If the sequence {xn} converges to y, and f is continuous at y, then fy = y, that is, f has a fixed point.

The following theorem deals with two metrics on X.

Theorem 3.4 Let f : X → X satisfy the following:

(i) X is complete with metric d and d(x, fx) ≤ δ(x, fx) for all x ∈ X,

(ii) f is an iterated contraction with respect to δ.

Then for x ∈ X, the sequence of iterates xn = f^n x converges to some y ∈ X. If f is continuous at y, then f has a fixed point, namely fy = y.


Proof: It is easy to show that {xn} is a Cauchy sequence with respect to δ. Since d(x, fx) ≤ δ(x, fx), {xn} is also a Cauchy sequence with respect to d. The sequence {xn} converges to some y in (X, d), since (X, d) is complete. If f is continuous at y, then fy = f lim xn = lim fxn = lim xn+1 = y. Hence the result.

Kannan [2] considered the following map.

Let f : X → X satisfy d(fx, fy) ≤ k[d(x, fx) + d(y, fy)] for all x, y ∈ X

and 0 ≤ k < 1/2. Then f is said to be a Kannan map.

If y = fx, then we get d(fx, ffx) ≤ k[d(x, fx) + d(fx, ffx)]. This gives d(fx, ffx) ≤ [k/(1 − k)] d(x, fx), where 0 ≤ k/(1 − k) < 1; that is, f is an iterated contraction. Hence a Kannan map is an iterated contraction map.

The following result due to Kannan [2] is valid for iterated contraction

map.

Theorem 3.5 Let f : X → X satisfy d(fx, fy) ≤ k[d(x, fx) + d(y, fy)] for all x, y ∈ X, 0 ≤ k < 1/2, let f be continuous on X, and let the sequence of iterates {xn} have a subsequence {xni} converging to y. Then f has a fixed point.

The following result deals with two mappings.

Theorem 3.6 Let f : X → X and g : X → X, where X is a complete metric space, satisfy d(fx, gfx) ≤ k d(x, fx) and d(gx, fgx) ≤ k d(x, gx) for some k, 0 ≤ k < 1, and for each x ∈ X. If f and g are continuous on X, then f and g have a common fixed point.

Proof: We consider the sequence of iterates defined by x2 = fx1, x3 = gx2, x4 = fx3, and so on. In this case it is easy to show that the sequence {xn} is a Cauchy sequence, and it converges in X since X is a complete metric space.

Let lim xn = y. Then lim x2n = y and lim x2n−1 = y. Since f and g are

continuous on X, therefore,

fy = f lim x2n−1 = lim fx2n−1 = lim x2n = y.

gy = g lim x2n = lim gx2n = lim x2n+1 = y.


Hence y = fy = gy and y is a common fixed point.
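A toy instance of Theorem 3.6 (our sketch, with maps chosen for illustration): on X = R, f(x) = x/2 and g(x) = x/3 satisfy d(fx, gfx) ≤ (2/3) d(x, fx) and d(gx, fgx) ≤ (1/4) d(x, gx), and the alternating iterates converge to the common fixed point 0:

```python
# Alternating iteration x2 = f(x1), x3 = g(x2), x4 = f(x3), ... for two
# maps with a common fixed point at 0.

f = lambda x: x / 2
g = lambda x: x / 3

x = 1.0
for n in range(40):
    x = f(x) if n % 2 == 0 else g(x)

assert abs(x) < 1e-12                   # iterates converge to 0
assert f(0.0) == 0.0 and g(0.0) == 0.0  # 0 is a common fixed point
```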

Results of this type for topological vector spaces are given in [5]. We give the following:

Theorem 3.7 Let f : X → X be a continuous iterated contraction map such that:

(i) if x ≠ fx, then d(fx, ffx) < d(x, fx), and

(ii) the sequence xn+1 = f(xn) has a convergent subsequence converging to y (say).

Then the sequence {xn} converges to a fixed point of f.

Proof: It is easy to see that the sequence d(xn, xn+1) is nonincreasing and bounded below by 0. Let {xni} be a subsequence of {xn} converging to y, and suppose y ≠ fy. Then

d(y, fy) = lim d(xni, xni+1) = lim d(xni+1, xni+2) = d(fy, ffy) < d(y, fy).

This is a contradiction, so y = fy. Since d(xn+1, y) < d(xn, y) for all n, the sequence {xn} converges to y = fy.

The result due to Cheney and Goldstein [1] follows as a corollary.

Corollary 3.1 Let f be a map of a metric space X into itself such that:

(i) f is a nonexpansive map on X, that is, d(fx, fy) ≤ d(x, y) for all x, y ∈ X,

(ii) if x ≠ fx, then d(fx, ffx) < d(x, fx),

(iii) the sequence xn+1 = f(xn) has a convergent subsequence converging to y (say).

Then the sequence {xn} converges to a fixed point of f.

The following is well known for contractive maps.

Note 3.3 If f : X → X is a contractive map and f(X) compact, then f

has a unique fixed point.

It is easy to see that the sequence of iterates {xn} converges to the unique fixed point of f. However, for a nonexpansive map, a sequence of iterates need not converge to a fixed point of f. For example, if fx = −x, then the sequence of iterates {xn} does not converge to the fixed point of f (here f0 = 0).

Note 3.4 If gx = afx + (1 − a)x, 0 < a < 1, then the fixed points of f are the same as those of g.

Let fy = y. Then gy = afy + (1 − a)y, that is, gy = y since fy = y.


Conversely, let gy = y. We show that y = fy. Here gy = afy + (1 − a)y = y gives a(fy − y) = 0, and since a ≠ 0, fy = y; that is, f has a fixed point y [6]. In case the sequence xn+1 = gxn converges to y, a fixed point of g, then y = fy.
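Note 3.4 can be seen in action on the earlier example fx = −x (our sketch): the f-iterates oscillate and never settle, but the averaged map g(x) = a f(x) + (1 − a)x = (1 − 2a)x (here with the illustrative choice a = 1/4) shares the fixed point 0 and its iterates do converge:

```python
# f(x) = -x: Picard iterates oscillate between x0 and -x0.
# g = a*f + (1-a)*id with a = 1/4 gives g(x) = x/2, whose iterates
# converge to the common fixed point 0.

a = 0.25
f = lambda x: -x
g = lambda x: a * f(x) + (1 - a) * x

x = 1.0
for _ in range(4):
    x = f(x)
assert x == 1.0        # f-iterates cycle: x4 = x0

y = 1.0
for _ in range(100):
    y = g(y)
assert abs(y) < 1e-12  # g-iterates converge to the fixed point 0
assert g(0.0) == 0.0 and f(0.0) == 0.0
```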

Acknowledgement: The author is extremely thankful to the late Prof. S. P. Singh of Memorial University, St. John's, NL, Canada for his valuable suggestions in the preparation of this manuscript.

REFERENCES

1. Cheney, E. W. and Goldstein, A. A., Proximity maps for convex sets, Proc. Amer. Math. Soc. 10 (1959), 448-450.

2. Kannan, R., Some results on fixed points - II, Amer. Math. Monthly, 76 (1969), 405-408.

3. Ortega, J. and Rheinboldt, W. C., On a class of approximate iterative processes, Arch. Rat. Mech. Anal. 23 (1967), 352-365.

4. Rheinboldt, W. C., A unified convergence theory for a class of iterative processes, SIAM J. Numer. Anal., 5 (1968), 42-63.

5. Singh, K. L. and Whitfield, J. H. M., A fixed point theorem in nonconvex domain, Bull. Polish Acad. Sci., 32 (1984), 95-99.

6. Singh, S. P., Watson, B. and Srivastava, P., Fixed Point Theory and Best Approximation: The KKM-map Principle, Kluwer Academic Publishers, The Netherlands, 1997.

On Leave from:

Department of Applied Mathematics

Inderprastha Engineering College,

63, Site-IV, Surya Nagar Flyover Road, Sahibabad

Ghaziabad, U.P-201010.

India.


Nonlinear Regularized Nonconvex Random Variational Inequalities with Fuzzy Event in q-uniformly Smooth Banach Spaces

Salahuddin

Department of Mathematics, Jazan University, Jazan, Kingdom of Saudi Arabia

[email protected]

Abstract

The main purpose of this paper is to introduce and study a new class of nonlinear regularized nonconvex random variational inequalities with fuzzy mappings in q-uniformly smooth Banach spaces. An existence theorem of random solutions for this kind of nonlinear regularized nonconvex random variational inequalities with fuzzy mappings is established, and a new iterative random algorithm with random errors is suggested and discussed. The results presented in this paper generalize, improve and unify some recent works in this field.

Keywords: Nonlinear regularized nonconvex random variational inequalities, fuzzy mappings, uniformly r-prox regular sets, random iterative sequences, randomly relaxed (κt, ζt)-cocoercive mappings, randomly accretive mappings, convergence analysis, random mixed errors, q-uniformly smooth Banach spaces.

AMS Mathematics Subject Classification: 49J40, 47H06.

1 Historical Backbone

Random variational inequalities are an important generalization of classical variational inequalities, which have wide applications in many fields, for example mechanics, physics, optimization and control theory, nonlinear programming, economics and engineering sciences. It is known that accretivity of the underlying operator plays an indispensable role in the theory of variational inequalities and its generalizations. In 2001, Huang and Fang [16] were the first to introduce generalized m-accretive mappings and gave the definition of the resolvent operator for generalized m-accretive mappings in Banach spaces.

The fuzzy set theory introduced by Zadeh [29] at the University of California in 1965 has emerged as an interesting and fascinating branch of pure and applied sciences. Applications of fuzzy set theory can be found in many branches of regional, physical, mathematical and engineering sciences. In 1989, Chang and Zhu [8] first introduced and studied a class of variational inequalities for fuzzy mappings. Since then, several classes of variational inequalities, quasi variational inequalities and complementarity problems with fuzzy mappings have been considered by Agarwal et al. [1], Anastassiou et al. [2], Chang and Huang [7], Ding et al. [11], Huang [15], Lee et al. [18, 19], Salahuddin [22, 23], and Zhang and Bi [30]. Note that most results in this direction for variational inequalities have been obtained in the setting of Hilbert spaces.


J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NOS. 1-2, 40-52, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


On the other hand, random variational inequality problems and random quasi-variational inequality problems have been studied by Bharucha and Reid [4], Chang [5, 6], Chang and Huang [7], Huang [15], Khan and Salahuddin [17], Salahuddin [22], Tan [24] and Yuan [28], among others.

The projection method is an important tool for finding approximate solutions of various types of variational inequalities and quasi variational inequalities. The idea of this technique is to establish the equivalence between variational inequalities and fixed point problems using the concept of projection. Most results regarding the existence and the iterative approximation of solutions to variational inequality problems have so far been investigated for the case where the underlying set is convex. Recently, the concept of a convex set has been generalized in many directions, with important applications in various fields. It is well known that uniformly prox regular sets are nonconvex and include convex sets as special cases. This class of uniformly prox regular sets has played an important part in many nonconvex applications such as optimization, dynamic systems and differential inclusions; see [9, 10, 26].

Inspired by recent work in this field, see [3, 13, 15, 16, 17, 18, 20, 21, 25], in this paper we introduce and consider nonlinear regularized nonconvex random variational inequalities with fuzzy mappings involving randomly relaxed cocoercive and randomly accretive mappings in q-uniformly smooth Banach spaces. We suggest a random perturbed projection iterative algorithm with random mixed errors for finding a random solution of the aforementioned problem for fuzzy mappings. We also prove the convergence of the defined random iterative sequences under some suitable assumptions. The results presented in this paper generalize, improve and unify some recent results in this field.

2 Conceptual Backbone

Throughout this paper, we suppose that (Ω, R, µ) is a complete σ-finite measure space and X is a separable real Banach space with dual space X∗, norm ‖ · ‖ and dual pairing 〈·, ·〉 between X and X∗. We denote by B(X) the class of Borel σ-fields in X; 2^X and CB(X) denote the family of all nonempty subsets of X and the family of all nonempty closed bounded subsets of X, respectively. The generalized duality mapping Jq : X → 2^{X∗} is defined by

Jq(x) = {f∗ ∈ X∗ : 〈x, f∗〉 = ‖x‖^q, ‖f∗‖ = ‖x‖^{q−1}}, ∀x ∈ X,

where q > 1 is a constant. In particular, J2 is the usual normalized duality mapping. It is known that in general Jq(x) = ‖x‖^{q−2}J2(x) for all x ≠ 0, and Jq is single valued if X∗ is strictly convex. In the sequel, we always assume that X is a real Banach space such that Jq is single valued. If X is a Hilbert space, then J2 becomes the identity mapping on X. The modulus of smoothness of X is the function πX : [0, ∞) → [0, ∞) defined by

πX(t) = sup{(1/2)(‖x + y‖ + ‖x − y‖) − 1 : ‖x‖ ≤ 1, ‖y‖ ≤ t}.

Nonlinear Regularized Nonconvex Random Variational Inequalities with Fuzzy Event in q-uniformly Smooth Banach Spaces

41

Page 42: JOURNAL OF APPLIED FUNCTIONAL ANALYSIS - … OF APPLIED FUNCTIONAL ANALYSIS ... Karayiannis@mail.gr Neural Network Models, ... Athens, Greece tel:: +30(210)

Salahuddin 3

A Banach space X is called uniformly smooth if

lim_{t→0} πX(t)/t = 0.

X is called q-uniformly smooth if there exists a constant c > 0 such that

πX(t) ≤ c t^q, q > 1.

Note that Jq is single valued if X is uniformly smooth. Concerning the characteristic inequalities in q-uniformly smooth Banach spaces, Xu [27] proved the following result.

Lemma 2.1 The real Banach space X is q-uniformly smooth if and only if there exists a constant cq > 0 such that for all x, y ∈ X,

‖x + y‖^q ≤ ‖x‖^q + q〈y, Jq(x)〉 + cq‖y‖^q.
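As a sanity check of Lemma 2.1 in the simplest case (our illustration): a Hilbert space is 2-uniformly smooth with c2 = 1, J2 is the identity there, and Xu's inequality holds with equality, since ‖x + y‖^2 = ‖x‖^2 + 2〈y, x〉 + ‖y‖^2:

```python
# Verify Xu's inequality (q = 2, c_2 = 1) in the Hilbert space R^3,
# where it reduces to the polarization identity and holds with equality.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm_sq(u):
    return dot(u, u)

x = [1.0, -2.0, 0.5]
y = [0.3, 0.7, -1.1]
xy = [a + b for a, b in zip(x, y)]

lhs = norm_sq(xy)
rhs = norm_sq(x) + 2 * dot(y, x) + norm_sq(y)
assert abs(lhs - rhs) < 1e-12
```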

Let D(·, ·) represent the Hausdorff metric on CB(X) defined by

D(A, B) = max{ sup_{a∈A} d(a, B), sup_{b∈B} d(A, b) } for all A, B ∈ CB(X),

where

d(a, B) = d(B, a) = inf_{b∈B} ‖a − b‖ for a ∈ A.
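The Hausdorff metric can be sketched concretely for finite subsets of the real line (our illustration, with d(a, b) = |a − b|):

```python
# Hausdorff distance between two finite subsets of R.

def d_point_set(a, B):
    return min(abs(a - b) for b in B)

def hausdorff(A, B):
    return max(max(d_point_set(a, B) for a in A),
               max(d_point_set(b, A) for b in B))

A = {0.0, 1.0}
B = {0.0, 3.0}
assert hausdorff(A, B) == 2.0  # sup_{b in B} d(b, A) = |3 - 1| = 2 dominates
assert hausdorff(A, A) == 0.0
```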

Definition 2.2 A mapping x : Ω → X is said to be measurable if for any B ∈ B(X), {t ∈ Ω : x(t) ∈ B} ∈ R.

Definition 2.3 A mapping T : Ω × X → X is said to be a random mapping if for each fixed x ∈ X, T(t, x) = y(t) is measurable. A random mapping T is said to be continuous if for each fixed t ∈ Ω, T(t, ·) : X → X is continuous.

Definition 2.4 A multivalued mapping T : Ω → 2^X is said to be measurable if for any B ∈ B(X), T^{−1}(B) = {t ∈ Ω : T(t) ∩ B ≠ ∅} ∈ R.

Definition 2.5 A mapping u : Ω → X is said to be a measurable selection of a measurable multivalued mapping T : Ω → 2^X if u is measurable and for any t ∈ Ω, u(t) ∈ T(t).

Definition 2.6 A mapping T : Ω × X → 2^X is said to be a random multivalued mapping if for each fixed x ∈ X, T(·, x) : Ω → 2^X is a measurable multivalued mapping. A random multivalued mapping T : Ω × X → CB(X) is said to be D-continuous if for each fixed t ∈ Ω, T(t, ·) : X → CB(X) is continuous with respect to the Hausdorff metric D.

Definition 2.7 A multivalued mapping T : Ω×X → 2X is said to be a random multivaluedmapping if for any x ∈ X , T (·, x) is measurable (denoted by Tt,x or Tt).

Nonlinear Regularized Nonconvex Random Variational Inequalities with Fuzzy Event in q-uniformly Smooth Banach Spaces

42

Page 43: JOURNAL OF APPLIED FUNCTIONAL ANALYSIS - … OF APPLIED FUNCTIONAL ANALYSIS ... Karayiannis@mail.gr Neural Network Models, ... Athens, Greece tel:: +30(210)

Salahuddin 4

Let Ω be a set and F(X) be the collection of fuzzy sets over X. A mapping F : Ω × X → F(X) is called a fuzzy mapping. For each x ∈ X, F(x) (denoted by Fx in the sequel) is a fuzzy set on X, and Fx(y) is the membership grade of y in Fx.

Let A ∈ F(X) and α ∈ (0, 1]; then the set

Aα = {x ∈ X : A(x) ≥ α}

is called the α-cut of A.
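The α-cut can be sketched for a fuzzy set on a finite universe (our illustration, with hypothetical membership grades):

```python
# alpha-cut of a fuzzy set A on a finite universe:
# A_alpha = {x : A(x) >= alpha}.

A = {'a': 0.9, 'b': 0.5, 'c': 0.2}

def alpha_cut(A, alpha):
    return {x for x, grade in A.items() if grade >= alpha}

assert alpha_cut(A, 0.5) == {'a', 'b'}
assert alpha_cut(A, 0.95) == set()
assert alpha_cut(A, 0.1) == {'a', 'b', 'c'}
```

Note that α-cuts are nested: raising α can only shrink the cut, which is why measurability of every (T(·))α is the natural requirement in Definition 2.8.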

Definition 2.8 A fuzzy mapping T : Ω × X → F(X) is called measurable if for any α ∈ (0, 1], (T(·))α : Ω → 2^X is a measurable multivalued mapping.

Definition 2.9 A fuzzy mapping T : Ω × X → F(X) is a random fuzzy mapping if for any x ∈ X, T(·, x) : Ω → F(X) is a measurable fuzzy mapping (denoted by Tt,x, or Tt for short).

Definition 2.10 Let K be a nonempty closed subset of a q-uniformly smooth Banach space X. The proximal normal cone of K at a point u ∈ X is given by

N^P_K(u) = {ζ ∈ X : u ∈ PK(u + αζ) for some α > 0},

where

PK(u) = {v ∈ K : dK(u) = ‖u − v‖}.

Here dK(u) is the usual distance function to the subset K, that is,

dK(u) = inf_{v∈K} ‖u − v‖.

Lemma 2.11 Let K be a nonempty closed subset of a q-uniformly smooth Banach space X. Then ζ ∈ N^P_K(u) if and only if there exists a constant α = α(ζ, u) > 0 such that

〈ζ, jq(v − u)〉 ≤ α‖v − u‖^2 ∀v ∈ K.

Lemma 2.12 Let K be a nonempty closed and convex subset of a q-uniformly smooth Banach space X. Then ζ ∈ N^P_K(u) if and only if

〈ζ, jq(v − u)〉 ≤ 0 ∀v ∈ K.

Lemma 2.13 [5] Let G : Ω × X → CB(X) be a D-continuous random multivalued mapping. Then for a measurable mapping u : Ω → X, the multivalued mapping G(·, u(·)) : Ω → CB(X) is measurable.

Lemma 2.14 [5] Let A, T : Ω → CB(X) be measurable multivalued mappings and u : Ω → X be a measurable selection of A. Then there exists a measurable selection v : Ω → X of T such that for all t ∈ Ω and ε > 0,

‖u(t) − v(t)‖ ≤ (1 + ε)D(A(t), T(t)).

Nonlinear Regularized Nonconvex Random Variational Inequalities with Fuzzy Event in q-uniformly Smooth Banach Spaces

43

Page 44: JOURNAL OF APPLIED FUNCTIONAL ANALYSIS - … OF APPLIED FUNCTIONAL ANALYSIS ... Karayiannis@mail.gr Neural Network Models, ... Athens, Greece tel:: +30(210)

Salahuddin 5

Definition 2.15 The tangent cone TK(x) to K at a point x ∈ K is defined as follows:

TK(x) = {v ∈ X : d°K(x; v) = 0}.

The normal cone of K at x is defined by polarity with TK(x) as follows:

NK(x) = {ζ : 〈ζ, jq(v)〉 ≤ 0 ∀v ∈ TK(x)}.

The Clarke normal cone N^C_K(x) is given by

N^C_K(x) = co[N^P_K(x)],

where co(S) means the closure of the convex hull of S. Clearly N^P_K(x) ⊆ N^C_K(x), but the converse is not true: N^C_K(x) is always a closed convex cone, whereas N^P_K(x) is convex but may not be closed; see [9, 10, 21].

Definition 2.16 For any r ∈ (0, +∞], a subset Kr of X is called normalized uniformly prox-regular (or uniformly r-prox-regular) if every nonzero proximal normal to Kr can be realized by an r-ball; that is, for all u ∈ Kr and 0 ≠ ζ ∈ N^P_{Kr}(u) with ‖ζ‖ = 1, one has

〈ζ, jq(v − u)〉 ≤ (1/(2r))‖v − u‖^2 ∀v ∈ Kr.

Proposition 2.17 Let r > 0 and let Kr be a nonempty closed and uniformly r-prox regular subset of a q-uniformly smooth Banach space X. Set

U(r) = {u ∈ X : 0 ≤ d_{Kr}(u) < r}.

Then the following statements hold:

(a) for all x ∈ U(r), P_{Kr}(x) ≠ ∅;

(b) for all r′ ∈ (0, r), P_{Kr} is a Lipschitz continuous mapping with constant r/(r − r′) on U(r′) = {u ∈ X : 0 ≤ d_{Kr}(u) < r′};

(c) the proximal normal cone is closed as a set valued mapping.
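Proposition 2.17(b) can be probed on a concrete nonconvex set (our illustration): the unit circle K = {u ∈ R^2 : ‖u‖ = 1} is uniformly r-prox-regular with r = 1, and P_K(u) = u/‖u‖ for u ≠ 0. Taking r′ = 1/2, the projection should be Lipschitz with constant r/(r − r′) = 2 on U(r′) = {u : d_K(u) < 1/2}, i.e. on the annulus 1/2 < ‖u‖ < 3/2:

```python
# Empirical check of the Lipschitz bound ||P_K x - P_K y|| <= 2 ||x - y||
# for random points in the annulus 0.51 <= ||u|| <= 1.49 around the
# unit circle (a uniformly 1-prox-regular set).
import math
import random

def proj_circle(u):
    n = math.hypot(u[0], u[1])
    return (u[0] / n, u[1] / n)

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

random.seed(0)

def sample_annulus():
    ang = random.uniform(0.0, 2 * math.pi)
    rad = random.uniform(0.51, 1.49)  # d_K(u) < 1/2
    return (rad * math.cos(ang), rad * math.sin(ang))

for _ in range(1000):
    x, y = sample_annulus(), sample_annulus()
    assert dist(proj_circle(x), proj_circle(y)) <= 2 * dist(x, y) + 1e-9
```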

Let T : Ω × X → F(X) be a random fuzzy mapping satisfying the following condition (∗): there exists a function a : X → (0, 1] such that for all (t, x) ∈ Ω × X we have (Tt,x)_{a(x)} ∈ CB(X), where CB(X) denotes the family of all nonempty bounded closed subsets of X. By using the random fuzzy mapping Tt, we can define a random multivalued mapping Tt : Ω × X → CB(X) by Tt(x) = (Tt,x)_{a(x)} for x ∈ X. In the sequel, Tt is called the multivalued mapping induced by the random fuzzy mapping Tt. Let a : X → (0, 1] be a function and g, h : Ω × X → X be nonlinear random single valued mappings such that Kr ⊂ gt(X). For a given measurable mapping η : Ω → (0, 1), we seek measurable mappings x, u : Ω → X such that

〈ηt u(t) + ht(x(t)) − gt(x(t)), gt(y(t)) − ht(x(t))〉 + (1/(2r))‖gt(y(t)) − ht(x(t))‖^2 ≥ 0 ∀ gt(y(t)) ∈ Kr, (2.1)

which is called a nonlinear regularized nonconvex random variational inequality with fuzzy mappings.

Nonlinear Regularized Nonconvex Random Variational Inequalities with Fuzzy Event in q-uniformly Smooth Banach Spaces

44

Page 45: JOURNAL OF APPLIED FUNCTIONAL ANALYSIS - … OF APPLIED FUNCTIONAL ANALYSIS ... Karayiannis@mail.gr Neural Network Models, ... Athens, Greece tel:: +30(210)

Salahuddin 6

Lemma 2.18 Let Kr be a uniformly r-prox regular set. Then problem (2.1) is equivalent to finding x(t) ∈ X, u(t) ∈ Tt(x(t)) such that ht(x(t)) ∈ Kr and

0 ∈ ηt u(t) + ht(x(t)) − gt(x(t)) + N^P_{Kr}(ht(x(t))), (2.2)

where N^P_{Kr}(s) denotes the P-normal cone of Kr at s in the sense of nonconvex analysis.

Proof. Let (x(t), u(t)) with x(t) ∈ X, ht(x(t)) ∈ Kr and u(t) ∈ Tt(x(t)) be a random solution of problem (2.1). If

ηt u(t) + ht(x(t)) − gt(x(t)) = 0,

then, because the zero vector always belongs to any normal cone,

0 ∈ ηt u(t) + ht(x(t)) − gt(x(t)) + N^P_{Kr}(ht(x(t))).

If

ηt u(t) + ht(x(t)) − gt(x(t)) ≠ 0,

then for all y(t) ∈ X with gt(y(t)) ∈ Kr one has

〈−(ηt u(t) + ht(x(t)) − gt(x(t))), gt(y(t)) − ht(x(t))〉 ≤ (1/(2r))‖gt(y(t)) − ht(x(t))‖^2. (2.3)

From Lemma 2.11, we have

−(ηt u(t) + ht(x(t)) − gt(x(t))) ∈ N^P_{Kr}(ht(x(t)))

and

0 ∈ ηt u(t) + ht(x(t)) − gt(x(t)) + N^P_{Kr}(ht(x(t))). (2.4)

Conversely, if (x(t), u(t)) with x(t) ∈ X, ht(x(t)) ∈ Kr and u(t) ∈ Tt(x(t)) is a random solution of problem (2.2), then from Definition 2.16, x(t) ∈ X and u(t) ∈ Tt(x(t)) with ht(x(t)) ∈ Kr constitute a random solution of problem (2.1).

Lemma 2.19 Let T : Ω × X → CB(X) be the multivalued mapping induced by the random fuzzy mapping Tt : Ω × X → F(X), let g, h : Ω × X → X be random single valued mappings, and let η : Ω → (0, 1) be a measurable mapping. Then (x(t), u(t)) with x(t) ∈ X, ht(x(t)) ∈ Kr, u(t) ∈ Tt(x(t)) is a random solution of problem (2.1) if and only if

ht(x(t)) = P_{Kr}[gt(x(t)) − ηt u(t)], (2.5)

where P_{Kr} is the projection of X onto the uniformly r-prox regular set Kr.

Proof. Let (x(t), u(t)) with x(t) ∈ X, ht(x(t)) ∈ Kr, u(t) ∈ Tt(x(t)) be a random solution of problem (2.1). Then from Lemma 2.18, we have

0 ∈ ηt u(t) + ht(x(t)) − gt(x(t)) + N^P_{Kr}(ht(x(t))) (2.6)

⇔ gt(x(t)) − ηt u(t) ∈ (I + N^P_{Kr})(ht(x(t))) (2.7)

⇔ ht(x(t)) = P_{Kr}[gt(x(t)) − ηt u(t)], (2.8)

where I is the identity mapping and P_{Kr} = (I + N^P_{Kr})^{−1}.

Remark 2.20 Equation (2.5) can be written as follows:

x(t) = x(t) − ht(x(t)) + P_{Kr}[gt(x(t)) − ηt u(t)], (2.9)

where η : Ω → (0, 1) is a measurable mapping.

Algorithm 2.21 Let Tt : Ω × X → F(X) be a random fuzzy mapping satisfying the condition (∗) and let T : Ω × X → CB(X) be the multivalued random mapping induced by the random fuzzy mapping Tt. Let g, h : Ω × X → X be random single valued mappings. For any given random single valued mapping x0 : Ω → X, the multivalued mapping Tt(x0(t)) : Ω → CB(X) is measurable by Lemma 2.13, and there exists a measurable selection u0(t) ∈ Tt(x0(t)); see [13, 14]. Set

x1(t) = (1 − λt)x0(t) + λt[x0(t) − ht(x0(t)) + P_{Kr}[gt(x0(t)) − ηt u0(t)]] + λt e0(t) + r0(t),

where η, λ : Ω → (0, 1) are measurable mappings, 0 < λt < 1 is a random constant, and e0(t), r0(t) : Ω → X are random measurable mappings representing random errors that take into account a possible inexact computation of the proximal point. Then it is easy to see that x1 : Ω → X is a random measurable mapping. Since u0(t) ∈ Tt(x0(t)), by Lemma 2.14 there exists a measurable selection u1(t) ∈ Tt(x1(t)) such that for all t ∈ Ω,

‖u0(t) − u1(t)‖ ≤ (1 + 1)D(Tt(x0(t)), Tt(x1(t))).

By induction, we can define measurable random sequences xn(t), un(t) : Ω → X satisfying

xn+1(t) = (1 − λt)xn(t) + λt[xn(t) − ht(xn(t)) + P_{Kr}(gt(xn(t)) − ηt un(t))] + λt en(t) + rn(t), (2.10)

un(t) ∈ Tt(xn(t)); ‖un(t) − un+1(t)‖ ≤ (1 + (1 + n)^{−1})D(Tt(xn(t)), Tt(xn+1(t))), (2.11)

limn→∞

en(t) = limn→∞

rn(t) = 0,

∞∑n=1

‖en(t)− en−1(t)‖ <∞ and

∞∑n=1

‖rn(t)− rn−1(t)‖ <∞. (2.12)
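As a rough illustration only, the following drastically simplified, deterministic sketch replaces the random fuzzy data of Algorithm 2.21 with hypothetical single-valued choices: X = R, Kr = [0, 1] (convex, hence uniformly prox-regular), g = h = identity, T(x) = x − 0.3, and zero error terms en = rn = 0. Scheme (2.10) then reads x_{n+1} = (1 − λ)x_n + λ P_K(x_n − η T(x_n)), whose fixed point x∗ = P_K(x∗ − η T(x∗)) corresponds to (2.5); here x∗ = 0.3:

```python
# Simplified deterministic instance of the perturbed projection scheme.

def P_K(x):                      # projection onto K = [0, 1]
    return min(max(x, 0.0), 1.0)

T = lambda x: x - 0.3            # illustrative single-valued operator
eta, lam = 0.5, 0.8              # illustrative step and relaxation

x = 1.0
for _ in range(200):
    x = (1 - lam) * x + lam * P_K(x - eta * T(x))

assert abs(x - 0.3) < 1e-9                 # iterates converge to x*
assert abs(x - P_K(x - eta * T(x))) < 1e-9  # x* satisfies (2.5)
```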

Nonlinear Regularized Nonconvex Random Variational Inequalities with Fuzzy Event in q-uniformly Smooth Banach Spaces

46

Page 47: JOURNAL OF APPLIED FUNCTIONAL ANALYSIS - … OF APPLIED FUNCTIONAL ANALYSIS ... Karayiannis@mail.gr Neural Network Models, ... Athens, Greece tel:: +30(210)

Salahuddin 8

Definition 2.22 Let X be a q-uniformly smooth Banach space. A random single valued mapping g : Ω × X → X is called

(i) randomly accretive if

〈gt(x(t))− gt(y(t)), jq(x(t)− y(t))〉 ≥ 0 ∀x(t), y(t) ∈ X , t ∈ Ω;

(ii) randomly µt-strongly accretive if there exists a measurable mapping µ : Ω → (0, 1)such that

〈gt(x(t))− gt(y(t)), jq(x(t)− y(t))〉 ≥ µt‖x(t)− y(t)‖q ∀x(t), y(t) ∈ X , t ∈ Ω;

(iii) randomly relaxed µt-accretive mapping if there exists a measurable mapping µ : Ω→(0, 1) such that

〈gt(x(t))− gt(y(t)), jq(x(t)− y(t))〉 ≥ −µt‖x(t)− y(t)‖q ∀x(t), y(t) ∈ X , t ∈ Ω;

(iv) randomly αt-Lipschitz continuous mapping if there exists a measurable mapping α :Ω→ (0, 1) such that

‖gt(x(t))− gt(y(t))‖ ≤ αt‖x(t)− y(t)‖ ∀x(t), y(t) ∈ X , t ∈ Ω.

Definition 2.23 Let g : Ω×X → X be a random single valued mapping and T : Ω×X →CB(X ) a random multivalued mapping. Then Tt is said to be

(i) randomly accretive if

〈u(t) − v(t), jq(x(t) − y(t))〉 ≥ 0 ∀x(t), y(t) ∈ X, u(t) ∈ Tt(x(t)), v(t) ∈ Tt(y(t));

(ii) randomly (κt, ζt)-relaxed cocoercive mapping with respect to gt if there exist the mea-surable mappings κ, ζ : Ω→ (0, 1) such that for all x(t), y(t) ∈ X , t ∈ Ω

〈u(t)− v(t), jq(gt(x(t))− gt(y(t)))〉 ≥ −κt‖u(t)− v(t)‖q + ζt‖gt(x(t))− gt(y(t))‖q

∀u(t) ∈ Tt(x(t)), v(t) ∈ Tt(y(t)),

(iii) randomly α−D-Lipschitz continuous mapping if there exists a measurable mappingα : Ω→ (0, 1) such that for all x(t), y(t) ∈ X , t ∈ Ω

D(Tt(x(t)), Tt(y(t))) ≤ αt‖x(t)− y(t)‖

where D is the Hausdorff pseudo metric defined in X .


3 Main Backbone

In this section, we prove the existence of a random solution of problem (2.1) and the convergence of the random iterative sequences generated by Algorithm 2.21.

Theorem 3.1 Let X be a q-uniformly smooth real Banach space. Let Tt : Ω × X → F(X) be a random fuzzy mapping satisfying the condition (∗) and Tt : Ω × X → CB(X) be the random multivalued mapping induced by the random fuzzy mapping. Suppose that Tt is randomly D-Lipschitz continuous with measurable mapping α : Ω → (0, 1). Let Tt be randomly (κt, ζt)-relaxed cocoercive with respect to the random mapping gt, and let g : Ω × X → X be randomly Lipschitz continuous with measurable mapping β : Ω → (0, 1). Let h : Ω × X → X be a randomly relaxed µt-accretive mapping with measurable mapping µ : Ω → (0, 1) and randomly Lipschitz continuous with measurable mapping γ : Ω → (0, 1). Suppose that for any t ∈ Ω and y(t), z(t) ∈ X, we have

‖P_{Kr}(y(t)) − P_{Kr}(z(t))‖ ≤ (r/(r − r′))‖y(t) − z(t)‖. (3.1)

Assume that the measurable mapping η : Ω → (0, 1) satisfies the following conditions:

0 < θt = 1 − λt(1 − pt) < 1,

pt = Qt + (r/(r − r′))[βt^q − qηt(−κt αt^q + ζt βt^q) + cq ηt^q αt^q]^{1/q} < 1,

Qt = [1 + qµt + cq γt^q]^{1/q} < 1, 0 < λt < 1,

and

lim_{n→∞} ‖en(t)‖ = lim_{n→∞} ‖rn(t)‖ = 0,

∑_{n=0}^∞ ‖en(t) − en−1(t)‖ < ∞, ∑_{n=0}^∞ ‖rn(t) − rn−1(t)‖ < ∞, (3.2)

where r′ ∈ (0, r). Then the random sequences xn(t), un(t) generated by Algorithm 2.21 converge strongly to random mappings x∗(t), u∗(t) : Ω → X, and (x∗(t), u∗(t)) with ht(x∗(t)) ∈ Kr is a random solution of problem (2.1).

Proof. From Algorithm 2.21 and (3.1), for t ∈ Ω and 0 < λt < 1, we have

‖xn+1(t) − xn(t)‖ ≤ (1 − λt)‖xn(t) − xn−1(t)‖ + λt(‖xn(t) − xn−1(t) − (ht(xn(t)) − ht(xn−1(t)))‖ + (r/(r − r′))‖gt(xn(t)) − gt(xn−1(t)) − ηt(un(t) − un−1(t))‖) + λt‖en(t) − en−1(t)‖ + ‖rn(t) − rn−1(t)‖. (3.3)

Since ht is a randomly relaxed µt-accretive and randomly Lipschitz continuous mapping, we have

‖xn(t) − xn−1(t) − (ht(xn(t)) − ht(xn−1(t)))‖^q = ‖xn(t) − xn−1(t)‖^q − q〈ht(xn(t)) − ht(xn−1(t)), jq(xn(t) − xn−1(t))〉 + cq‖ht(xn(t)) − ht(xn−1(t))‖^q

≤ ‖xn(t) − xn−1(t)‖^q + qµt‖xn(t) − xn−1(t)‖^q + cq γt^q‖xn(t) − xn−1(t)‖^q

≤ (1 + qµt + cq γt^q)‖xn(t) − xn−1(t)‖^q,

so that

‖xn(t) − xn−1(t) − (ht(xn(t)) − ht(xn−1(t)))‖ ≤ [1 + qµt + cq γt^q]^{1/q}‖xn(t) − xn−1(t)‖. (3.4)

Since Tt is a randomly αt-D-Lipschitz continuous mapping, we have

‖un(t) − un−1(t)‖ ≤ (1 + 1/n)D(Tt(xn(t)), Tt(xn−1(t))) ≤ αt(1 + 1/n)‖xn(t) − xn−1(t)‖. (3.5)

Again, since Tt is randomly (κt, ζt)-relaxed cocoercive with respect to gt and gt is randomly Lipschitz continuous with measurable mapping β : Ω → (0, 1), we have

‖gt(xn(t)) − gt(xn−1(t)) − ηt(un(t) − un−1(t))‖^q ≤ ‖gt(xn(t)) − gt(xn−1(t))‖^q − qηt〈un(t) − un−1(t), jq(gt(xn(t)) − gt(xn−1(t)))〉 + cq ηt^q‖un(t) − un−1(t)‖^q

≤ ‖gt(xn(t)) − gt(xn−1(t))‖^q − qηt(−κt‖un(t) − un−1(t)‖^q + ζt‖gt(xn(t)) − gt(xn−1(t))‖^q) + cq ηt^q‖un(t) − un−1(t)‖^q

≤ βt^q‖xn(t) − xn−1(t)‖^q + qηt κt αt^q(1 + 1/n)^q‖xn(t) − xn−1(t)‖^q − qηt ζt βt^q‖xn(t) − xn−1(t)‖^q + cq ηt^q αt^q(1 + 1/n)^q‖xn(t) − xn−1(t)‖^q

≤ (βt^q − qηt(−κt αt^q(1 + 1/n)^q + ζt βt^q) + cq ηt^q αt^q(1 + 1/n)^q)‖xn(t) − xn−1(t)‖^q,

so that

‖gt(xn(t)) − gt(xn−1(t)) − ηt(un(t) − un−1(t))‖ ≤ [βt^q − qηt(−κt αt^q(1 + 1/n)^q + ζt βt^q) + cq ηt^q αt^q(1 + 1/n)^q]^{1/q}‖xn(t) − xn−1(t)‖. (3.6)

From (3.3), (3.4) and (3.6), we get

‖xn+1(t) − xn(t)‖ ≤ (1 − λt)‖xn(t) − xn−1(t)‖ + λt([1 + qµt + cq γt^q]^{1/q} + (r/(r − r′))[βt^q − qηt(−κt αt^q(1 + 1/n)^q + ζt βt^q) + cq ηt^q αt^q(1 + 1/n)^q]^{1/q})‖xn(t) − xn−1(t)‖ + λt‖en(t) − en−1(t)‖ + ‖rn(t) − rn−1(t)‖

≤ (1 − λt + λt pn,t)‖xn(t) − xn−1(t)‖ + λt‖en(t) − en−1(t)‖ + ‖rn(t) − rn−1(t)‖

≤ θn,t‖xn(t) − xn−1(t)‖ + λt‖en(t) − en−1(t)‖ + ‖rn(t) − rn−1(t)‖, (3.7)


where θn,t = 1 − λt + λt pn,t,

pn,t = Qt + (r/(r − r′))[βt^q − qηt(−κt αt^q(1 + 1/n)^q + ζt βt^q) + cq ηt^q αt^q(1 + 1/n)^q]^{1/q}

and Qt = [1 + qµt + cq γt^q]^{1/q}.

Let θt = 1 − λt + λt pt, where

pt = Qt + (r/(r − r′))[βt^q − qηt(−κt αt^q + ζt βt^q) + cq ηt^q αt^q]^{1/q}.

Then pn,t → pt and θn,t → θt as n → ∞. Since θt ∈ (0, 1) by the assumptions of the theorem, there exist θ∗t ∈ (θt, 1) and N0 such that θn,t ≤ θ∗t for all n ≥ N0. Therefore, from (3.7), without loss of generality we may assume that

‖xn+1(t) − xn(t)‖ ≤ θ∗t‖xn(t) − xn−1(t)‖ + λt‖en(t) − en−1(t)‖ + ‖rn(t) − rn−1(t)‖, ∀n ≥ 1.

Hence, for any m > n > 0, we have

‖xm(t) − xn(t)‖ ≤ Σ_{i=n}^{m−1} ‖xi+1(t) − xi(t)‖ ≤ Σ_{i=n}^{m−1} (θ∗t)^i ‖x1(t) − x0(t)‖ + λt Σ_{i=n}^{m−1} Σ_{j=1}^{i} (θ∗t)^{i−j} ‖ej(t) − ej−1(t)‖ + Σ_{i=n}^{m−1} Σ_{j=1}^{i} (θ∗t)^{i−j} ‖rj(t) − rj−1(t)‖, ∀n ≥ 1. (3.8)

It follows from condition (3.2) that ‖xm(t) − xn(t)‖ → 0 as n → ∞, and so {xn(t)} is a Cauchy sequence in X. Let xn(t) → x∗(t) as n → ∞. By the random D-Lipschitz continuity of Tt(·), we have

‖un+1(t) − un(t)‖ ≤ (1 + 1/(1 + n)) D(Tt(xn+1(t)), Tt(xn(t))) ≤ (1 + 1/(1 + n)) αt ‖xn+1(t) − xn(t)‖ → 0, as n → ∞. (3.9)

It follows that {un(t)} is a Cauchy sequence in X. We can assume that un(t) → u∗(t). Noting that un(t) ∈ Tt(xn(t)), we have

d(u∗(t), Tt(x∗(t))) ≤ ‖u∗(t) − un(t)‖ + d(un(t), Tt(x∗(t)))
≤ ‖u∗(t) − un(t)‖ + (1 + 1/n) D(Tt(xn(t)), Tt(x∗(t)))
≤ ‖u∗(t) − un(t)‖ + (1 + 1/n) αt ‖xn(t) − x∗(t)‖ → 0 as n → ∞. (3.10)

Hence d(u∗(t), Tt(x∗(t))) = 0 and therefore u∗(t) ∈ Tt(x∗(t)) ⊂ X.

By the random D-Lipschitz continuity of Tt(·), Lemma 2.19, condition (3.2) and

lim_{n→∞} ‖en(t)‖ = lim_{n→∞} ‖rn(t)‖ = 0,

we have

x∗(t) = (1 − λt)x∗(t) + λt[x∗(t) − ht(x∗(t)) + PKr(gt(x∗(t)) − ηt u∗(t))]. (3.11)

By Lemma 2.19, we know that (x∗(t), u∗(t)) is a random solution of problem (2.1). This completes the proof.
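The contraction estimate (3.7) together with the vanishing error terms is what forces the iterates to be Cauchy in (3.8). The following is a toy numerical sketch of that mechanism only, not the paper's random algorithm: we replace the resolvent step by a hypothetical one-dimensional contraction G with made-up constants, so that the effective factor θ = 1 − λ + λ·0.8 < 1 plays the role of θ∗t, and the perturbations e_n vanish.

```python
# Toy sketch of a Mann-type iteration with errors:
#   x_{n+1} = (1 - lam) x_n + lam * G(x_n) + e_n,
# where G is a 0.8-contraction with fixed point 0 and e_n -> 0 is summable.
# All concrete numbers are illustrative assumptions, not from the paper.

def iterate(x0, lam=0.5, theta=0.8, steps=200):
    x = x0
    for n in range(1, steps + 1):
        g = theta * x        # G(x) = theta * x: a contraction, fixed point 0
        e = 1.0 / n**2       # vanishing, summable perturbation ||e_n||
        x = (1 - lam) * x + lam * g + e
    return x

print(abs(iterate(10.0)))    # small: the iterates approach the fixed point
```

The per-step factor here is (1 − λ) + λθ = 0.9 < 1, so the geometric series in (3.8) converges and the accumulated errors stay bounded.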


New White Noise Functional Solutions For Wick-type Stochastic Coupled KdV Equations Using F-expansion Method

Hossam A. Ghany1,2 and M. Zakarya3
1Department of Mathematics, Faculty of Science, Taif University, Taif, Saudi Arabia.
2Department of Mathematics, Helwan University, Cairo, Egypt.
[email protected]
3Department of Mathematics, Faculty of Science, Al-Azhar University, Assiut 71524, Egypt.
mohammed [email protected]

Abstract

Wick-type stochastic coupled KdV equations are researched. By means of the Hermite transformation, white noise theory and the F-expansion method, three types of exact solutions for Wick-type stochastic coupled KdV equations are explicitly given. These solutions include the white noise functional solutions of Jacobi elliptic function (JEF) type, trigonometric type and hyperbolic type.

Keywords: Coupled KdV equations; F-expansion method; Hermite transform; Wick-type product; White noise theory.
PACS No.: 05.40.-a, 02.30.Jr.

1 Introduction

In this paper, we shall explore exact solutions for the following variable coefficient coupled KdV equations:

ut + h1(t)uux + h2(t)vvx + h3(t)uxxx = 0, (t, x) ∈ R+ × R,
vt + h4(t)uvx + h3(t)vxxx = 0, (1.1)

where h1(t), h2(t), h3(t) and h4(t) are bounded measurable or integrable functions on R+. Random waves are an important subject of stochastic partial differential equations

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 53-69, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC

(PDEs). Many authors have studied this subject. Wadati first introduced and studied the stochastic KdV equations and gave the diffusion of soliton of the KdV equation under Gaussian noise in [28, 30], and others [3–6, 23] also researched stochastic KdV-type equations. Xie first introduced Wick-type stochastic KdV equations on white noise space and showed the auto-Bäcklund transformation and the exact white noise functional solutions in [35]. Furthermore, Xie [36–39] and Ghany et al. [12–14, 16–19] researched some Wick-type stochastic wave equations using white noise analysis.

In this paper we use the F-expansion method for finding new periodic wave solutions of nonlinear evolution equations in mathematical physics, and we obtain some new periodic wave solutions for coupled KdV equations. This method is powerful and will be used in further works to establish more entirely new solutions for other kinds of nonlinear PDEs arising in mathematical physics. The effort in finding exact solutions to nonlinear equations is important for the understanding of most nonlinear physical phenomena: for instance, the nonlinear wave phenomena observed in fluid dynamics, plasmas and optical fibers. Many effective methods have been presented, such as the variational iteration method [7, 8], tanh-function method [9, 32, 40], homotopy perturbation method [11, 27, 33], homotopy analysis method [1], tanh-coth method [29, 31, 32], exp-function method [21, 22, 34, 41, 42], Jacobi elliptic function expansion method [10, 24–26] and the F-expansion method [43–46]. The main objective of this paper is to use the F-expansion method to construct white noise functional solutions for Wick-type stochastic coupled KdV equations via the Hermite transform, Wick-type product and white noise theory. If equation (1.1) is considered in a random environment, we get stochastic coupled KdV equations. In order to give the exact solutions of stochastic coupled KdV equations, we only consider this problem in a white noise environment. We shall study the following Wick-type stochastic coupled KdV equations:

Ut + H1(t)⋄U⋄Ux + H2(t)⋄V⋄Vx + H3(t)⋄Uxxx = 0,
Vt + H4(t)⋄U⋄Vx + H3(t)⋄Vxxx = 0, (1.2)

where “⋄” is the Wick product on the Kondratiev distribution space (S)−1, which was defined in [20], and H1(t), H2(t), H3(t) and H4(t) are (S)−1-valued functions.

2 Description of the F-expansion Method

In order to simultaneously obtain more periodic wave solutions expressed by various Jacobi elliptic functions to nonlinear wave equations, we introduce an F-expansion method which can be thought of as a succinct over-all generalization of the Jacobi elliptic function expansion. We briefly show what the F-expansion method is and how to use it to obtain various periodic wave solutions to nonlinear wave equations. Suppose a nonlinear wave equation for u(t, x) is given by

p(u, ut, ux, uxx, uxxx, ...) = 0, (2.1)


where u = u(t, x) is an unknown function and p is a polynomial in u and its various partial derivatives, in which the highest order derivatives and nonlinear terms are involved. In the following we give the main steps of a deformation F-expansion method.

Step 1. Look for a traveling wave solution of Eq. (2.1) by taking

u(t, x) = u(ξ), ξ(t, x) = kx + s∫₀ᵗ δ(τ)dτ + c. (2.2)

Hence, under the transformation (2.2), Eq. (2.1) can be transformed into the ordinary differential equation (ODE)

O(u, sδu′, ku′, k²u″, k³u‴, ...) = 0. (2.3)

Step 2. Suppose that u(ξ) can be expressed by a finite power series of F(ξ) of the form

u(t, x) = u(ξ) = Σ_{i=0}^{N} ai F^i(ξ), (2.4)

where a0, a1, ..., aN are constants to be determined later, while F(ξ) in (2.4) satisfies

[F′(ξ)]² = PF⁴(ξ) + QF²(ξ) + R, (2.5)

and hence for F(ξ) there hold

F′F″ = 2PF³F′ + QFF′,
F″ = 2PF³ + QF,
F‴ = 6PF²F′ + QF′,
... (2.6)

where P, Q and R are constants.

Step 3. The positive integer N can be determined by considering the homogeneous balance between the highest derivative term and the nonlinear terms appearing in (2.3). Therefore, we can get the value of N in (2.4).

Step 4. Substituting (2.4) into (2.3) with the condition (2.5), we obtain a polynomial in F^i(ξ)[F′(ξ)]^j, (i = 0, ±1, ±2, ...; j = 0, 1). Setting each coefficient of this polynomial to zero yields a set of algebraic equations for a0, a1, ..., aN, s and δ.

Step 5. Solving the algebraic equations with the aid of Maple, a0, a1, ..., aN, s and δ can be expressed by P, Q and R. Substituting these results into the F-expansion (2.4), a general form of traveling wave solution of Eq. (2.1) can be obtained.

Step 6. Since the general solutions of (2.5) are well known, choose P, Q and R properly in ODE (2.5) such that the corresponding solution F(ξ) is one of the Jacobi elliptic functions (see Appendices A, B and C).
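The auxiliary identities (2.6) are just consequences of differentiating the first-order ODE (2.5). A small symbolic check of this (our own illustration, not part of the paper) can be done with SymPy:

```python
import sympy as sp

# Verify that (2.5), [F'(xi)]^2 = P F^4 + Q F^2 + R, implies the identities
# (2.6): F'' = 2 P F^3 + Q F  and  F''' = 6 P F^2 F' + Q F'.
xi, P, Q, R = sp.symbols('xi P Q R')
F = sp.Function('F')(xi)

ode = sp.Eq(sp.diff(F, xi)**2, P*F**4 + Q*F**2 + R)        # (2.5)

# Differentiate both sides: 2 F' F'' = 4 P F^3 F' + 2 Q F F', then cancel F'.
Fpp = sp.solve(sp.Eq(sp.diff(ode.lhs, xi), sp.diff(ode.rhs, xi)),
               sp.diff(F, xi, 2))[0]
print(sp.simplify(Fpp - (2*P*F**3 + Q*F)))                 # expect 0

# Differentiating F'' = 2 P F^3 + Q F once more gives F'''.
Fppp = sp.diff(2*P*F**3 + Q*F, xi)
print(sp.simplify(Fppp - (6*P*F**2*sp.diff(F, xi) + Q*sp.diff(F, xi))))  # expect 0
```

Both differences simplify to zero, confirming that the replacement rules used in Step 4 are consistent with (2.5).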


3 New Exact Traveling Wave Solutions of Eq. (1.2)

We take the Hermite transform, white noise theory and the F-expansion method to explore new exact wave solutions for Eq. (1.2). Applying the Hermite transform to Eq. (1.2), we get the deterministic equations

Ut(t, x, z) + H1(t, z)U(t, x, z)Ux(t, x, z) + H2(t, z)V(t, x, z)Vx(t, x, z) + H3(t, z)Uxxx(t, x, z) = 0,
Vt(t, x, z) + H4(t, z)U(t, x, z)Vx(t, x, z) + H3(t, z)Vxxx(t, x, z) = 0, (3.1)

where z = (z1, z2, ...) ∈ (C^N) is a vector parameter. To look for the travelling wave solution of Eq. (3.1), we make the transformations H1(t, z) := h1(t, z), H2(t, z) := h2(t, z), H3(t, z) := h3(t, z), H4(t, z) := h4(t, z), U(t, x, z) =: u(t, x, z) = u(ξ(t, x, z)) and V(t, x, z) =: v(t, x, z) = v(ξ(t, x, z)) with

ξ(t, x, z) = kx + s∫₀ᵗ δ(τ, z)dτ + c,

where k, s and c are arbitrary constants which satisfy sk ≠ 0, and δ(τ, z) is a nonzero function of the indicated variables to be determined later. So, Eq. (3.1) can be transformed into the following ODE:

sδu′ + kh1uu′ + kh2vv′ + k³h3u‴ = 0,
sδv′ + kh4uv′ + k³h3v‴ = 0, (3.2)

where the prime denotes differentiation with respect to ξ. In view of the F-expansion method, the solution of Eq. (3.1) can be expressed in the form

u(t, x, z) = u(ξ) = Σ_{i=0}^{N} ai F^i(ξ),
v(t, x, z) = v(ξ) = Σ_{i=0}^{M} bi F^i(ξ), (3.3)

where ai and bi are constants to be determined later. Considering the homogeneous balance between u‴ and uu′, vv′ and the order of v‴ and uv′ in (3.2), we obtain N = M = 2, so (3.3) can be rewritten as

u(t, x, z) = a0 + a1F(ξ) + a2F²(ξ),
v(t, x, z) = b0 + b1F(ξ) + b2F²(ξ), (3.4)

where a0, a1, a2, b0, b1 and b2 are constants to be determined later. Substituting (3.4) with the conditions (2.5), (2.6) into (3.2) and collecting all terms with the same power


of F^i(ξ)[F′(ξ)]^j, (i = 0, ±1, ±2, ...; j = 0, 1), we get

2k[12k²a2Ph3 + b2²h2 + a2²h1]F³F′
+ 3k[2k²a1Ph3 + a1a2h1 + b1b2h2]F²F′
+ [2a2sδ + kb1²h2 + 8k³a2Qh3 + 2ka0a2h1 + ka1²h1 + 2kb0b2h2]FF′
+ [sa1δ + ka0a1h1 + kb0b1h2 + k³a1Qh3]F′ = 0,

2kb2[12k²Ph3 + a2h4]F³F′
+ k[6k²b1Ph3 + 2a1b2h4 + a2b1h4]F²F′
+ [2sb2δ + 2ka0b2h4 + ka1b1h4 + 8k³b2Qh3]FF′
+ b1[sδ + ka0h4 + k³Qh3]F′ = 0. (3.5)

Setting each coefficient of F^i(ξ)[F′(ξ)]^j to zero, we get a system of algebraic equations:

2k[12k²a2Ph3 + b2²h2 + a2²h1] = 0,
3k[2k²a1Ph3 + a1a2h1 + b1b2h2] = 0,
2a2sδ + kb1²h2 + 8k³a2Qh3 + 2ka0a2h1 + ka1²h1 + 2kb0b2h2 = 0,
sa1δ + ka0a1h1 + kb0b1h2 + k³a1Qh3 = 0,
2kb2[12k²Ph3 + a2h4] = 0,
k[6k²b1Ph3 + 2a1b2h4 + a2b1h4] = 0,
2sb2δ + 2ka0b2h4 + ka1b1h4 + 8k³b2Qh3 = 0,
b1[sδ + ka0h4 + k³Qh3] = 0. (3.6)

Solving these algebraic equations with the aid of Maple, we get the following coefficients:

a1 = b1 = 0, a0 = arbitrary constant,
a2 = −12k²P h3(t, z)/h4(t, z),
b2 = ±12ik²P (h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z)) = ∓i a2√((h1(t, z) − h4(t, z))/h2(t, z)),
b0 = ±[3k²Q h3(t, z) − a0(h1(t, z) − h4(t, z))]/√(h2(t, z)[h1(t, z) − h4(t, z)]),
δ = −k[a0h4(t, z) + k²Q h3(t, z)]/s. (3.7)

Substituting the coefficients (3.7) into (3.4) yields general form solutions of Eq. (1.2):

u(t, x, z) = a0 − [12k²P h3(t, z)/h4(t, z)]F²(ξ(t, x, z)), (3.8)

v(t, x, z) = ±[3k²Q h3(t, z) + a0(h1(t, z) − h4(t, z))]/√(h2(t, z)[h1(t, z) − h4(t, z)]) ± 12ik²P (h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z)) F²(ξ(t, x, z)), (3.9)

with

ξ(t, x, z) = k[x − ∫₀ᵗ (a0h4(τ, z) + k²Q h3(τ, z)) dτ].

From Appendix C, we give the special cases as follows:

Case 1. If we take P = 1, Q = 2 − m² and R = 1 − m², we have F(ξ) → cs(ξ), and

u1(t, x, z) = a0 − [12k²h3(t, z)/h4(t, z)] cs²(ξ1(t, x, z)), (3.10)

v1(t, x, z) = ±[3k²h3(t, z)(2 − m²) + a0(h1(t, z) − h4(t, z))]/√(h2(t, z)(h1(t, z) − h4(t, z))) ± 12ik²(h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z)) cs²(ξ1(t, x, z)), (3.11)

with

ξ1(t, x, z) = k[x − ∫₀ᵗ (a0h4(τ, z) + k²(2 − m²)h3(τ, z)) dτ].

In the limit case when m → 0 we have cs(ξ) → cot(ξ); thus (3.10), (3.11) become

u2(t, x, z) = a0 − [12k²h3(t, z)/h4(t, z)] cot²(ξ2(t, x, z)), (3.12)

v2(t, x, z) = ±[6k²h3(t, z) + a0(h1(t, z) − h4(t, z))]/√(h2(t, z)[h1(t, z) − h4(t, z)]) ± 12ik²(h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z)) cot²(ξ2(t, x, z)), (3.13)

with

ξ2(t, x, z) = k[x − ∫₀ᵗ (a0h4(τ, z) + 2k²h3(τ, z)) dτ].

In the limit case when m → 1 we have cs(ξ) → csch(ξ); thus (3.10), (3.11) become

u3(t, x, z) = a0 − [12k²h3(t, z)/h4(t, z)] csch²(ξ3(t, x, z)), (3.14)

v3(t, x, z) = ±[3k²h3(t, z) + a0(h1(t, z) − h4(t, z))]/√(h2(t, z)[h1(t, z) − h4(t, z)]) ± 12ik²(h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z)) csch²(ξ3(t, x, z)), (3.15)

with

ξ3(t, x, z) = k[x − ∫₀ᵗ (a0h4(τ, z) + k²h3(τ, z)) dτ].
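As a sanity check of Case 1 (our own illustration, not from the paper), the degenerate limits of cs used above really do satisfy the auxiliary ODE (2.5) with the corresponding limiting coefficients:

```python
import sympy as sp

# Case 1 has P = 1, Q = 2 - m^2, R = 1 - m^2 with F = cs.
xi = sp.symbols('xi')

# m -> 0: Q -> 2, R -> 1, and cs -> cot. Check [F']^2 = F^4 + 2 F^2 + 1.
F = sp.cot(xi)
check_cot = sp.expand(sp.diff(F, xi)**2 - (F**4 + 2*F**2 + 1))
print(check_cot)     # expect 0

# m -> 1: Q -> 1, R -> 0, and cs -> csch. Check [F']^2 = F^4 + F^2.
F = sp.csch(xi)
check_csch = sp.simplify((sp.diff(F, xi)**2 - (F**4 + F**2)).rewrite(sp.exp))
print(check_csch)    # expect 0
```

Both residuals vanish identically, so cot and csch are legitimate degenerate solutions of (2.5) for the Case 1 parameters.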

Case 2. If we take P = 1, Q = 2m² − 1 and R = −m²(1 − m²), then F(ξ) → ds(ξ), and

u4(t, x, z) = a0 − [12k²h3(t, z)/h4(t, z)] ds²(ξ4(t, x, z)), (3.16)

v4(t, x, z) = ±[3k²h3(t, z)(2m² − 1) + a0(h1(t, z) − h4(t, z))]/√(h2(t, z)[h1(t, z) − h4(t, z)]) ± 12ik²(h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z)) ds²(ξ4(t, x, z)), (3.17)

with

ξ4(t, x, z) = k[x − ∫₀ᵗ (a0h4(τ, z) + k²(2m² − 1)h3(τ, z)) dτ].

In the limit case when m → 0 we have ds(ξ) → csc(ξ); thus (3.16), (3.17) become

u5(t, x, z) = a0 − [12k²h3(t, z)/h4(t, z)] csc²(ξ5(t, x, z)), (3.18)

v5(t, x, z) = ±[−3k²h3(t, z) + a0(h1(t, z) − h4(t, z))]/√(h2(t, z)[h1(t, z) − h4(t, z)]) ± 12ik²(h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z)) csc²(ξ5(t, x, z)), (3.19)

with

ξ5(t, x, z) = k[x − ∫₀ᵗ (a0h4(τ, z) − k²h3(τ, z)) dτ].

Remark. In the limit case when m → 1 we have ds(ξ) = cs(ξ) → csch(ξ); thus (3.16), (3.17) become the same solutions as in Case 1.

Case 3. If we take P = 1/4, Q = (1 − 2m²)/2 and R = 1/4, then F(ξ) → ns(ξ) ± cs(ξ), and

u6(t, x, z) = a0 − (3k²h3(t, z)/h4(t, z))[ns(ξ6(t, x, z)) ± cs(ξ6(t, x, z))]², (3.20)

v6(t, x, z) = ±[3k²h3(t, z)(1 − 2m²) + 2a0(h1(t, z) − h4(t, z))]/(2√(h2(t, z)[h1(t, z) − h4(t, z)])) ± 3ik²(h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z))[ns(ξ6(t, x, z)) ± cs(ξ6(t, x, z))]², (3.21)

with

ξ6(t, x, z) = k[x − (1/2)∫₀ᵗ (2a0h4(τ, z) + k²(1 − 2m²)h3(τ, z)) dτ].

In the limit case when m → 0 we have (ns(ξ) ± cs(ξ)) → (csc(ξ) ± cot(ξ)); thus (3.20), (3.21) become

u7(t, x, z) = a0 − (3k²h3(t, z)/h4(t, z))[csc(ξ7(t, x, z)) ± cot(ξ7(t, x, z))]², (3.22)

v7(t, x, z) = ±[3k²h3(t, z) + 2a0(h1(t, z) − h4(t, z))]/(2√(h2(t, z)[h1(t, z) − h4(t, z)])) ± 3ik²(h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z))[csc(ξ7(t, x, z)) ± cot(ξ7(t, x, z))]², (3.23)

with

ξ7(t, x, z) = k[x − (1/2)∫₀ᵗ (2a0h4(τ, z) + k²h3(τ, z)) dτ].

In the limit case when m → 1 we have (ns(ξ) ± cs(ξ)) → (coth(ξ) ± csch(ξ)); thus (3.20), (3.21) become

u8(t, x, z) = a0 − (3k²h3(t, z)/h4(t, z))[coth(ξ8(t, x, z)) ± csch(ξ8(t, x, z))]², (3.24)

v8(t, x, z) = ±[−3k²h3(t, z) + 2a0(h1(t, z) − h4(t, z))]/(2√(h2(t, z)[h1(t, z) − h4(t, z)])) ± 3ik²(h3(t, z)/h4(t, z))√((h1(t, z) − h4(t, z))/h2(t, z))[coth(ξ8(t, x, z)) ± csch(ξ8(t, x, z))]², (3.25)

with

ξ8(t, x, z) = k[x − (1/2)∫₀ᵗ (2a0h4(τ, z) − k²h3(τ, z)) dτ].

Remark: there are other solutions for Eq. (1.2). These solutions come from setting different values for the coefficients P, Q and R (see Appendix C). The above mentioned cases are given just to clarify how far our technique is applicable.


4 White Noise Functional Solutions of Eq. (1.2)

In this section, we employ the results of Section 3, using the inverse Hermite transform, to obtain exact white noise functional solutions for the Wick-type stochastic coupled KdV equations (1.2). The properties of exponential and trigonometric functions yield that there exist a bounded open set D ⊂ R+ × R, ρ < ∞ and λ > 0 such that the solution u(t, x, z) of Eq. (3.1) and all its partial derivatives which are involved in Eq. (3.1) are uniformly bounded for (t, x, z) ∈ D × Kρ(λ), continuous with respect to (t, x) ∈ D for all z ∈ Kρ(λ), and analytic with respect to z ∈ Kρ(λ) for all (t, x) ∈ D. From Theorem 4.1.1 in [20], there exists U(t, x) ∈ (S)−1 such that u(t, x, z) = U(t, x)(z) for all (t, x, z) ∈ D × Kρ(λ), and U(t, x) solves Eq. (1.2) in (S)−1. Hence, applying the inverse Hermite transform to the results of Section 3, we get new exact white noise functional solutions of Eq. (1.2) as follows:

• New Wick-type stochastic solutions of (JEF):

U1(t, x) = a0 − [12k²H3(t)/H4(t)] cs²(Ξ1(t, x)), (4.1)

V1(t, x) = ±[3k²H3(t)(2 − m²) + a0(H1(t) − H4(t))]/√(H2(t)[H1(t) − H4(t)]) ± 12ik²(H3(t)/H4(t))√((H1(t) − H4(t))/H2(t)) cs²(Ξ1(t, x)), (4.2)

U2(t, x) = a0 − [12k²H3(t)/H4(t)] ds²(Ξ2(t, x)), (4.3)

V2(t, x) = ±[3k²H3(t)(2m² − 1) + a0(H1(t) − H4(t))]/√(H2(t)[H1(t) − H4(t)]) ± 12ik²(H3(t)/H4(t))√((H1(t) − H4(t))/H2(t)) ds²(Ξ2(t, x)), (4.4)

U3(t, x) = a0 − (3k²H3(t)/H4(t))[ns(Ξ3(t, x)) ± cs(Ξ3(t, x))]², (4.5)

V3(t, x) = ±[3k²H3(t)(1 − 2m²) + 2a0(H1(t) − H4(t))]/(2√(H2(t)[H1(t) − H4(t)])) ± 3ik²(H3(t)/H4(t))√((H1(t) − H4(t))/H2(t))[ns(Ξ3(t, x)) ± cs(Ξ3(t, x))]², (4.6)

with

Ξ1(t, x) = k[x − ∫₀ᵗ (a0H4(τ) + k²(2 − m²)H3(τ)) dτ],
Ξ2(t, x) = k[x − ∫₀ᵗ (a0H4(τ) + k²(2m² − 1)H3(τ)) dτ],
Ξ3(t, x) = k[x − (1/2)∫₀ᵗ (2a0H4(τ) + k²(1 − 2m²)H3(τ)) dτ].

• New Wick-type stochastic solutions of trigonometric functions:

U4(t, x) = a0 − [12k²H3(t)/H4(t)] cot²(Ξ4(t, x)), (4.7)

V4(t, x) = ±[6k²H3(t) + a0(H1(t) − H4(t))]/√(H2(t)(H1(t) − H4(t))) ± 12ik²(H3(t)/H4(t))√((H1(t) − H4(t))/H2(t)) cot²(Ξ4(t, x)), (4.8)

U5(t, x) = a0 − [12k²H3(t)/H4(t)] csc²(Ξ5(t, x)), (4.9)

V5(t, x) = ±[−3k²H3(t) + a0(H1(t) − H4(t))]/√(H2(t)[H1(t) − H4(t)]) ± 12ik²(H3(t)/H4(t))√((H1(t) − H4(t))/H2(t)) csc²(Ξ5(t, x)), (4.10)

U6(t, x) = a0 − (3k²H3(t)/H4(t))[csc(Ξ6(t, x)) ± cot(Ξ6(t, x))]², (4.11)

V6(t, x) = ±[3k²H3(t) + 2a0(H1(t) − H4(t))]/(2√(H2(t)[H1(t) − H4(t)])) ± 3ik²(H3(t)/H4(t))√((H1(t) − H4(t))/H2(t))[csc(Ξ6(t, x)) ± cot(Ξ6(t, x))]², (4.12)

with

Ξ4(t, x) = k[x − ∫₀ᵗ (a0H4(τ) + 2k²H3(τ)) dτ],
Ξ5(t, x) = k[x − ∫₀ᵗ (a0H4(τ) − k²H3(τ)) dτ],
Ξ6(t, x) = k[x − (1/2)∫₀ᵗ (2a0H4(τ) + k²H3(τ)) dτ].

• New Wick-type stochastic solutions of hyperbolic functions:

U7(t, x) = a0 − [12k²H3(t)/H4(t)] csch²(Ξ7(t, x)), (4.13)

V7(t, x) = ±[3k²H3(t) + a0(H1(t) − H4(t))]/√(H2(t)[H1(t) − H4(t)]) ± 12ik²(H3(t)/H4(t))√((H1(t) − H4(t))/H2(t)) csch²(Ξ7(t, x)), (4.14)

U8(t, x) = a0 − (3k²H3(t)/H4(t))[coth(Ξ8(t, x)) ± csch(Ξ8(t, x))]², (4.15)

V8(t, x) = ±[−3k²H3(t) + 2a0(H1(t) − H4(t))]/(2√(H2(t)[H1(t) − H4(t)])) ± 3ik²(H3(t)/H4(t))√((H1(t) − H4(t))/H2(t))[coth(Ξ8(t, x)) ± csch(Ξ8(t, x))]², (4.16)

with

Ξ7(t, x) = k[x − ∫₀ᵗ (a0H4(τ) + k²H3(τ)) dτ],
Ξ8(t, x) = k[x − (1/2)∫₀ᵗ (2a0H4(τ) − k²H3(τ)) dτ].

We observe that, for different forms of H1, H2, H3 and H4, we can get different exact white noise functional solutions of Eq. (1.2) from Eqs. (4.1)-(4.16).


5 Example

It is well known that the Wick version of a function is usually difficult to evaluate. So, in this section, we give non-Wick versions of solutions of Eq. (1.2). Let Wt = Ḃt be the Gaussian white noise, where Bt is the Brownian motion. We have the Hermite transform Wt(z) = Σ_{i=1}^{∞} zi ∫₀ᵗ ηi(s)ds [20]. Since exp⋄(Bt) = exp(Bt − t²/2), we have sin⋄(Bt) = sin(Bt − t²/2), cos⋄(Bt) = cos(Bt − t²/2), cot⋄(Bt) = cot(Bt − t²/2), csc⋄(Bt) = csc(Bt − t²/2), coth⋄(Bt) = coth(Bt − t²/2) and csch⋄(Bt) = csch(Bt − t²/2). Suppose that H1(t) = H2(t) = λ1H3(t), H3(t) = λ2H4(t) and H4(t) = Γ(t) + λ3Wt, where λ1, λ2 and λ3 are arbitrary constants and Γ(t) is an integrable or bounded measurable function on R+. Therefore, for H1(t)H2(t)H3(t)H4(t) ≠ 0, the exact white noise functional solutions of Eq. (1.2) are as follows:

U9(t, x) = 3k²[a0/(3k²) − 4λ2 cot²(Φ1(t, x))], (5.1)

V9(t, x) = ±6k²λ2([1 + a0/(6k²λ2(λ1λ2 − 1))]/√(λ1λ2(λ1λ2 − 1)) + 2i√((λ1λ2 − 1)/(λ1λ2)) cot²(Φ1(t, x))), (5.2)

U10(t, x) = 3k²[a0/(3k²) − 4λ2 csc²(Φ2(t, x))], (5.3)

V10(t, x) = ±3k²λ2([a0/(3k²λ2(λ1λ2 − 1)) − 1]/√(λ1λ2(λ1λ2 − 1)) + 4i√((λ1λ2 − 1)/(λ1λ2)) csc²(Φ2(t, x))), (5.4)

U11(t, x) = a0 − 3k²λ2[csc(Φ3(t, x)) ± cot(Φ3(t, x))]², (5.5)

V11(t, x) = ±3k²λ2([1 + 2a0/(3k²λ2(λ1λ2 − 1))]/(2√(λ1λ2(λ1λ2 − 1))) + i√((λ1λ2 − 1)/(λ1λ2))[csc(Φ3(t, x)) ± cot(Φ3(t, x))]²), (5.6)


U12(t, x) = a0 − 12k²λ2 csch²(Φ4(t, x)), (5.7)

V12(t, x) = ±3k²λ2([1 + a0/(3k²λ2(λ1λ2 − 1))]/√(λ1λ2(λ1λ2 − 1)) + 4i√((λ1λ2 − 1)/(λ1λ2)) csch²(Φ4(t, x))), (5.8)

U13(t, x) = a0 − 3k²λ2[coth(Φ5(t, x)) ± csch(Φ5(t, x))]², (5.9)

V13(t, x) = ±3k²λ2([2a0/(3k²λ2(λ1λ2 − 1)) − 1]/(2√(λ1λ2(λ1λ2 − 1))) + 3i√((λ1λ2 − 1)/(λ1λ2))[coth(Φ5(t, x)) ± csch(Φ5(t, x))]²), (5.10)

with

Φ1(t, x) = k[x − (a0 + 2k²λ2)(∫₀ᵗ Γ(τ)dτ + λ3[Bt − t²/2])],
Φ2(t, x) = k[x − (a0 − k²λ2)(∫₀ᵗ Γ(τ)dτ + λ3[Bt − t²/2])],
Φ3(t, x) = k[x − ((2a0 + k²λ2)/2)(∫₀ᵗ Γ(τ)dτ + λ3[Bt − t²/2])],
Φ4(t, x) = k[x − (a0 + k²λ2)(∫₀ᵗ Γ(τ)dτ + λ3[Bt − t²/2])] and
Φ5(t, x) = k[x − ((2a0 − k²λ2)/2)(∫₀ᵗ Γ(τ)dτ + λ3[Bt − t²/2])].
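To give a feel for these formulas, the following sketch (our own, with arbitrary made-up constants, taking Γ(t) = 1 and reading Φ4 as grouped above) evaluates the non-Wick soliton solution U12 of (5.7) along one simulated Brownian path:

```python
import numpy as np

# Evaluate U12(t, x) = a0 - 12 k^2 lam2 csch^2(Phi4(t, x)) along a simulated
# Brownian path B_t. All constants below are illustrative assumptions.
rng = np.random.default_rng(0)
a0, k, lam2, lam3 = 1.0, 0.5, 0.3, 0.2

T, nt = 1.0, 1000
dt = T / nt
t = np.linspace(dt, T, nt)
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), nt))    # Brownian path B_t

x = 5.0   # fixed spatial point, chosen so Phi4 stays away from the csch pole
# Phi4 = k [ x - (a0 + k^2 lam2) ( int_0^t Gamma(tau) dtau + lam3 (B_t - t^2/2) ) ]
phi4 = k * (x - (a0 + k**2 * lam2) * (t + lam3 * (B - t**2 / 2)))
U12 = a0 - 12 * k**2 * lam2 / np.sinh(phi4)**2     # csch^2 = 1 / sinh^2

print(U12.shape, bool(np.isfinite(U12).all()))
```

Each realization of the noise gives a different sample path of the stochastic soliton; rerunning with a different seed changes the profile but not its csch²-shaped form.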


6 Conclusion

We have discussed the solutions of stochastic PDEs driven by Gaussian white noise. There is a unitary mapping between the Gaussian white noise space and the Poisson white noise space; this connection was given by Benth and Gjerde [2] and can be seen clearly in Section 4.9 of [20]. Hence, by the aid of this connection, we can derive some stochastic exact soliton solutions if the coefficients H1(t), H2(t), H3(t) and H4(t) in Eq. (1.2) are Poisson white noise functions. In this paper, using the Hermite transformation, white noise theory and the F-expansion method, we study the white noise solutions of the Wick-type stochastic coupled KdV equations. This paper shows that the F-expansion method is sufficient to solve stochastic nonlinear equations in mathematical physics. The method which we have proposed in this paper is a powerful, direct and computerized method, which allows us to perform complicated and tedious algebraic calculations. It is shown that the algorithm can also be applied to other nonlinear PDEs in mathematical physics such as the modified Hirota-Satsuma coupled KdV, (2+1)-dimensional coupled KdV, KdV-Burgers, Schamel KdV, modified KdV-Burgers, Sawada-Kotera, Zhiber-Shabat and Benjamin-Bona-Mahony equations. Since equation (1.2) has other solutions if we select other values of P, Q and R (see Appendices A, B and C), there are many other exact solutions of the Wick-type stochastic coupled KdV equations.

Appendix A. The Jacobi elliptic functions degenerate into trigonometric functions when m → 0:

snξ → sin ξ, cnξ → cos ξ, dnξ → 1, scξ → tan ξ, sdξ → sin ξ, cdξ → cos ξ, nsξ → csc ξ, ncξ → sec ξ, ndξ → 1, csξ → cot ξ, dsξ → csc ξ, dcξ → sec ξ.

Appendix B. The Jacobi elliptic functions degenerate into hyperbolic functions when m → 1:

snξ → tanh ξ, cnξ → sech ξ, dnξ → sech ξ, scξ → sinh ξ, sdξ → sinh ξ, cdξ → 1, nsξ → coth ξ, ncξ → cosh ξ, ndξ → cosh ξ, csξ → csch ξ, dsξ → csch ξ, dcξ → 1.

Appendix C. The ODE and Jacobi elliptic functions. Relation between values of (P, Q, R) and the corresponding F(ξ) in the ODE

(F′)²(ξ) = PF⁴(ξ) + QF²(ξ) + R:


P | Q | R | F(ξ)
m² | −(1 + m²) | 1 | snξ, cdξ = cnξ/dnξ
−m² | 2m² − 1 | 1 − m² | cnξ
−1 | 2 − m² | m² − 1 | dnξ
1 | −(1 + m²) | m² | nsξ = 1/snξ, dcξ = dnξ/cnξ
1 − m² | 2m² − 1 | −m² | ncξ = 1/cnξ
m² − 1 | 2 − m² | −1 | ndξ = 1/dnξ
1 − m² | 2 − m² | 1 | scξ = snξ/cnξ
−m²(1 − m²) | 2m² − 1 | 1 | sdξ = snξ/dnξ
1 | 2 − m² | 1 − m² | csξ = cnξ/snξ
1 | 2m² − 1 | −m²(1 − m²) | dsξ = dnξ/snξ
m⁴/4 | (m² − 2)/2 | 1/4 | snξ/(1 ± dnξ), cnξ/(√(1 − m²) ± dnξ)
m²/4 | (m² − 2)/2 | m²/4 | snξ ± i cnξ, dnξ/(i√(1 − m²) snξ ± cnξ), m snξ/(1 ± dnξ)
1/4 | (1 − 2m²)/2 | 1/4 | nsξ ± csξ, cnξ/(√(1 − m²) snξ ± dnξ), snξ/(1 ± cnξ)
(m² − 1)/4 | (m² + 1)/2 | (m² − 1)/4 | dnξ/(1 ± m snξ)
(1 − m²)/4 | (m² + 1)/2 | (1 − m²)/4 | ncξ ± i scξ, cnξ/(1 ± snξ)
−1/4 | (m² + 1)/2 | −(1 − m²)²/4 | m cnξ ± dnξ
1/4 | (m² − 2)/2 | m²/4 | nsξ ± dsξ
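The degeneracies listed in Appendices A and B can be checked numerically with SciPy's Jacobi elliptic routine (our own spot check at one argument; `ellipj` takes the parameter m = k² and returns sn, cn, dn and the amplitude):

```python
import numpy as np
from scipy.special import ellipj

u = 0.7

# m -> 0: sn -> sin, cn -> cos, hence cs = cn/sn -> cot (Appendix A).
sn0, cn0, dn0, _ = ellipj(u, 0.0)
print(abs(sn0 - np.sin(u)) < 1e-9, abs(cn0 - np.cos(u)) < 1e-9)
print(abs(cn0 / sn0 - 1 / np.tan(u)) < 1e-9)        # cs -> cot

# m -> 1: sn -> tanh, cn -> sech, hence cs = cn/sn -> csch (Appendix B).
sn1, cn1, dn1, _ = ellipj(u, 1.0)
print(abs(sn1 - np.tanh(u)) < 1e-9, abs(cn1 - 1 / np.cosh(u)) < 1e-9)
print(abs(cn1 / sn1 - 1 / np.sinh(u)) < 1e-9)       # cs -> csch
```

The cs limits checked here are exactly the ones used in Case 1 of Section 3 and in the solutions (4.1), (4.7) and (4.13).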

References

[1] S. Abbasbandy. Chem. Eng. J , 136 (2008): 144-150.

[2] E. Benth and J. Gjerde. Potential. Anal., 8 (1998): 179-193.

[3] A. de Bouard and A. Debussche. J. Funct. Anal., 154 (1998): 215-251.

[4] A. de Bouard and A. Debussche. J.Funct. Anal. , 169 (1999): 532-558.

[5] A. Debussche and J. Printems. Physica D, 134 (1999): 200-226.

[6] A. Debussche and J. Printems. J. Comput. Anal. Appl., 3 (2001): 183-206.

[7] M. Dehghan, J. Manafian and A. Saadatmandi. Math. Meth. Appl. Sci, 33 (2010):1384-1398.

15

Wick-type Stochastic Coupled KdV Equations Using F-expansion Method,

Hossam A. Ghany and M. Zakarya67

Page 68: JOURNAL OF APPLIED FUNCTIONAL ANALYSIS - … OF APPLIED FUNCTIONAL ANALYSIS ... Karayiannis@mail.gr Neural Network Models, ... Athens, Greece tel:: +30(210)

[8] M. Dehghan and M. Tatari. Chaos Solitons and Fractals, 36 (2008): 157-166.

[9] E. Fan. Phys. Lett. A, 277 (2000): 212-218.

[10] Z. T. Fu, S. K. Liu, S. D. Liu, Q. Zhao. Phys. Lett. A, 290 (2001): 72-76.

[11] D. D. Ganji and A. Sadighi. Int J Non Sci Numer Simul, 7 (2006): 411-418.

[12] H. A. Ghany. Chin. J. Phys., 49 (2011): 926-940.

[13] H. A. Ghany. International Journal of Pure and Applied Mathematics, 78 (2012): 17-27.

[14] H. A. Ghany and A. Hyder. International Review of Physics, 6 (2012): 153-157.

[15] H. A. Ghany and A. Hyder. J. Comput. Anal. Appl., 15 (2013): 1332-1343.

[16] H. A. Ghany and A. Hyder. Kuwait Journal of Science, (2013), Accepted.

[17] H. A. Ghany and A. Hyder. IAENG International Journal of Applied Mathematics, (2013), Accepted.

[18] H. A. Ghany and M. S. Mohammed. Chin. J. Phys., 50 (2012): 619-627.

[19] H. A. Ghany, A. S. Okb El Bab, A. M. Zabal and A. Hyder. Chin. Phys. B, (2013), Accepted.

[20] H. Holden, B. Øksendal, J. Ubøe and T. Zhang. Stochastic Partial Differential Equations, Birkhäuser: Basel, (1996).

[21] J. H. He and X. H. Wu. Chaos Solitons and Fractals, 30 (2006): 700-708.

[22] J. H. He and M. A. Abdou. Chaos Solitons and Fractals, 34 (2007): 1421-1429.

[23] V. V. Konotop, L. Vazquez. Nonlinear Random Waves, World Scientific, Singapore, (1994).

[24] S. K. Liu, Z. T. Fu, S. D. Liu, Q. Zhao. Phys. Lett. A, 289 (2001): 69-74.

[25] J. Liu, L. Yang, K. Yang. Chaos, Solitons and Fractals, 20 (2004): 1157-1164.

[26] E. J. Parkes, B. R. Duffy, P. C. Abbott. Phys. Lett. A, 295 (2002): 280-286.

[27] F. Shakeri and M. Dehghan. Math. Comput. Model, 48 (2008): 486-498.

[28] M. Wadati. J. Phys. Soc. Jpn., 52 (1983): 2642-2648.

[29] A. M. Wazwaz. Chaos Solitons Fractals, 188 (2007): 1930-1940.

[30] M. Wadati and Y. Akutsu. J. Phys. Soc. Jpn., 53 (1984): 3342-3350.


[31] A. M. Wazwaz. Appl. Math. Comput, 177 (2006): 755-760.

[32] A. M. Wazwaz. Appl. Math. Comput, 169 (2005): 321-338.

[33] X. Ma, L. Wei, and Z. Guo. J. Sound Vibration, 314 (2008): 217-227.

[34] X. H. Wu and J.H. He. Chaos Solitons and Fractals, 38 (2008): 903-910.

[35] Y. C. Xie . Phys. Lett. A, 310 (2003): 161-167.

[36] Y. C. Xie. Chaos, Solitons and Fractals, 21 (2004): 473-480.

[37] Y. C. Xie. Chaos, Solitons and Fractals, 20 (2004): 337-342.

[38] Y. C. Xie. J. Phys. A: Math. Gen, 37 (2004): 5229-5236.

[39] Y. C. Xie. Phys. Lett. A, 310 (2003): 161-167.

[40] S. Zhang, T. C. Xia. Communications in Theoretical Physics, 46 (2006): 985-990.

[41] S. D. Zhu. Int J Non Sci Numer Simul, 8 (2007): 461-464.

[42] S. D. Zhu. Int J Non Sci Numer Simul, 8 (2007): 465-468.

[43] Yubin Zhou, Mingliang Wang, Yueming Wang. Physics Letters A, 308 (2003): 31-36.

[44] Sheng Zhang, Tiecheng Xia. Applied Mathematics and Computation, 189 (2007): 836-843.

[45] Sheng Zhang, Tiecheng Xia. Applied Mathematics and Computation, 183 (2006): 1190-1200.

[46] Sheng Zhang, Tiecheng Xia. Communications in Nonlinear Science and Numerical Simulation, 13 (2008): 1294-1301.


A NOTE ON n-BANACH LATTICES

BİRSEN SAĞIR¹ AND NİHAN GÜNGÖR²

Abstract. In this paper we describe n-Banach lattices as Riesz spaces equipped with an n-norm. We consider the spaces of regular operators and order bounded operators between n-Banach lattices and Banach lattices, and we generate norms on these spaces with the aid of the n-norm. We then investigate properties of operators on these spaces with the generated norms.

1. Introduction

Riesz spaces, also called vector lattices or K-lineals, were first considered by F. Riesz, L. Kantorovich, and H. Freudenthal in the mid nineteen-thirties. L. Kantorovich and his school first recognized the importance of studying vector lattices in connection with Banach's theory of normed vector spaces; they investigated normed vector lattices as well as order-related linear operators between such vector lattices. In the mid seventies the research on this subject was essentially influenced by H. H. Schaefer, who presented the theory of Banach lattices and positive operators as an inseparable part of general Banach space and operator theory. In particular, deep results of the general theory and of classical analysis were used to prove related properties for general Banach lattices. More recently, other important studies concerning this subject appeared, by C. D. Aliprantis and O. Burkinshaw.

Let a real vector space X be an ordered vector space equipped with an order relation ≤. A vector x in an ordered vector space X is called positive whenever 0 ≤ x holds. The set of all positive vectors is denoted by X⁺. An operator is a map between two vector spaces. An operator T : X → Y between two ordered vector spaces is called positive if 0 ≤ T(x) for all 0 ≤ x ∈ X. A Riesz space is an ordered vector space X with the property that for each pair of vectors x, y ∈ X the supremum and the infimum of the set {x, y} both exist in X. For any vector x in a Riesz space X, define x⁺ := x ∨ 0 = sup{x, 0}, x⁻ := (−x) ∨ 0 = sup{−x, 0}, |x| := sup{x, −x}. If T : X → Y is a linear operator between two Riesz spaces such that sup{|Ty| : |y| ≤ x} exists in Y for each x ∈ X⁺, then the modulus of T exists and |T|x = sup{|Ty| : |y| ≤ x} for each x ∈ X⁺. Also |T(x)| ≤ |T|(|x|) holds for all x ∈ X. A subset A of a Riesz space is said to be bounded above (bounded below) whenever there exists some x satisfying y ≤ x (x ≤ y) for all y ∈ A. A subset of a Riesz space is called order bounded if it is bounded both above and below. A Riesz space is called Dedekind complete whenever every nonempty subset bounded above has a supremum (equivalently, every nonempty subset bounded below has an infimum). A linear operator T : X → Y between two Riesz spaces is

2000 Mathematics Subject Classification. 46B42.
Key words and phrases. Riesz spaces, n-norm, n-Banach lattices, regular operators, order bounded operators.


J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 70-77, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC



said to be regular if it can be written as a difference of two positive operators. The vector space of all regular operators from X to Y is denoted by L_r(X, Y). Of course, T is regular if and only if there exists a positive operator S : X → Y satisfying T ≤ S. A linear operator T : X → Y between two Riesz spaces is said to be order bounded if it maps order bounded subsets of X to order bounded subsets of Y. The vector space of all order bounded operators from X to Y is denoted by L_b(X, Y). Every positive operator is order bounded; therefore every regular operator is also order bounded, and hence L_r(X, Y) ⊆ L_b(X, Y). A norm ‖·‖ on a Riesz space is said to be a lattice norm whenever |x| ≤ |y| implies ‖x‖ ≤ ‖y‖. For more details, see [1, 2, 6, 7].

2. Operators on n-Banach Lattices

Firstly, we will give some required definitions. The notion of an n-normed space, as well as its properties, was introduced by S. Gähler in [4] as a generalization of the normed space.

Let X be a real linear space. A function ‖·, …, ·‖ : Xⁿ → ℝ is called an n-norm on Xⁿ if it satisfies the following conditions for all α ∈ ℝ and x₁, x₂, …, xₙ, y ∈ X:
i) ‖x₁, x₂, …, xₙ‖ = 0 if and only if x₁, x₂, …, xₙ are linearly dependent;
ii) ‖x₁, x₂, …, xₙ‖ is invariant under permutations of x₁, x₂, …, xₙ;
iii) ‖αx₁, x₂, …, xₙ‖ = |α| ‖x₁, x₂, …, xₙ‖;
iv) ‖x₁ + y, x₂, …, xₙ‖ ≤ ‖x₁, x₂, …, xₙ‖ + ‖y, x₂, …, xₙ‖.
Then the pair (X, ‖·, …, ·‖) is called an n-normed space.

A sequence {x_k} of an n-normed space X is said to be convergent if there exists an element x ∈ X such that

lim_{k→∞} ‖x_k − x, z₁, …, z_{n−1}‖ = 0

for all z₁, …, z_{n−1} ∈ X [5].

Let (X, ‖·, …, ·‖) and (Y, ‖·, …, ·‖) be n-normed spaces. For an operator T : (X, ‖·, …, ·‖) → (Y, ‖·, …, ·‖), define [·]ₙ by

[T]ₙ := sup { ‖T(x₁), T(x₂), …, T(xₙ)‖ / ‖x₁, x₂, …, xₙ‖ : ‖x₁, x₂, …, xₙ‖ ≠ 0 }.

But this natural definition of [·]ₙ is not a norm [8]. For this reason, we need to take Y to be a normed space.

Let X be an n-normed space and Y be a normed space. An operator T : Xⁿ → Y is an n-linear operator on X if T is linear in each variable. An n-linear operator is bounded if there exists a constant k > 0 such that

‖T(x₁, x₂, …, xₙ)‖ ≤ k ‖x₁, x₂, …, xₙ‖

for all (x₁, x₂, …, xₙ) ∈ Xⁿ. If T is a bounded n-linear operator, then the n-operator norm is defined by

|||T||| := sup { ‖T(x₁, x₂, …, xₙ)‖ : ‖x₁, x₂, …, xₙ‖ ≤ 1 },

and the space of all bounded n-linear operators from Xⁿ to Y is denoted by L(Xⁿ, Y). If Y is a Banach space, then L(Xⁿ, Y) is a Banach space with the n-operator norm. It is also known that, when T : Xⁿ → Y is an n-linear operator, T is bounded if and only if T is continuous [8].


Let X be a Riesz space. The cartesian product Xⁿ is a Riesz space under the ordering

x = (x₁, x₂, …, xₙ) ≤ z = (z₁, z₂, …, zₙ) ⇔ xᵢ ≤ zᵢ holds in X for each i = 1, 2, …, n.

If x = (x₁, x₂, …, xₙ) and z = (z₁, z₂, …, zₙ) are vectors of Xⁿ, then

x ∨ z = (x₁ ∨ z₁, x₂ ∨ z₂, …, xₙ ∨ zₙ) and x ∧ z = (x₁ ∧ z₁, x₂ ∧ z₂, …, xₙ ∧ zₙ)

[2]. Hence we can see that

|x| = |(x₁, x₂, …, xₙ)| = (|x₁|, |x₂|, …, |xₙ|)

for all x = (x₁, x₂, …, xₙ) ∈ Xⁿ. Also, it is clear that

x⁺ = (x₁, x₂, …, xₙ)⁺ = (x₁⁺, x₂⁺, …, xₙ⁺) and x⁻ = (x₁, x₂, …, xₙ)⁻ = (x₁⁻, x₂⁻, …, xₙ⁻).

We denote the set of positive vectors of Xⁿ by Xⁿ₊. Throughout this paper Xⁿ will be equipped with this ordering.

In [2, 10], the spaces of regular operators and order bounded operators between Banach lattices are defined and equipped with the regular norm and the bound norm, respectively, and some properties of these operators are shown. In this study, we define the n-Banach lattice with the aid of a Riesz space equipped with an n-norm, and then we generalize some of these properties by using the n-norm.

Definition 1. Let X be a Riesz space. An n-norm ‖·, …, ·‖ on X is said to be an n-lattice norm whenever |x| ≤ |y| implies ‖x, z₁, …, z_{n−1}‖ ≤ ‖y, z₁, …, z_{n−1}‖ for all z₁, …, z_{n−1} ∈ X. When X is equipped with an n-lattice norm, it is called an n-normed Riesz space. If the n-normed Riesz space X is complete, then it is called an n-Banach lattice.

Since |x| ≤ | |x| | and | |x| | ≤ |x| for every element x of an n-normed Riesz space X, we can easily see that the equality

‖x, z₁, …, z_{n−1}‖ = ‖ |x|, z₁, …, z_{n−1} ‖

holds for all z₁, …, z_{n−1} ∈ X. Also ‖x⁺ − y⁺, z₁, …, z_{n−1}‖ ≤ ‖x − y, z₁, …, z_{n−1}‖ and ‖ |x| − |y|, z₁, …, z_{n−1} ‖ ≤ ‖x − y, z₁, …, z_{n−1}‖ for all z₁, …, z_{n−1} ∈ X.

It is known that the space l_p with 1 ≤ p < ∞ is an n-normed space with

‖x₁, x₂, …, xₙ‖_p := [ (1/n!) Σ_{j₁} ⋯ Σ_{jₙ} |det(x_{i jₖ})|^p ]^{1/p}, i = 1, …, n

[5]. If l_p is equipped with the order relation |x| ≤ |y| ⇔ |x_k| ≤ |y_k| for all k ∈ ℕ, whenever x = (x_k), y = (y_k), then l_p is a Riesz space according to this relation and ‖·, …, ·‖_p is an n-lattice norm. So (l_p, ‖·, …, ·‖_p) is an example of an n-normed Riesz space.

The following lemma will be used in the proof of the next theorem. We prove it by the well-known method of [3, 9].
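For n = p = 2 the determinant sum in the l_p example reduces, by the Lagrange identity, to the Gram determinant ‖x‖²‖y‖² − ⟨x, y⟩². A small sketch (assuming NumPy; finite sequences stand in for elements of l_p) computes the norm directly from the definition and checks this:

```python
# Sketch (assumes NumPy): the n-norm on l_p computed from the definition,
# [(1/n!) * sum over all column choices of |det|^p]^(1/p).
import itertools
import math
import numpy as np

def n_norm_p(vectors, p):
    X = np.vstack(vectors)                      # rows are the sequences
    n, N = X.shape
    total = sum(abs(np.linalg.det(X[:, list(idx)])) ** p
                for idx in itertools.product(range(N), repeat=n))
    return (total / math.factorial(n)) ** (1.0 / p)

x = np.array([1.0, 2.0, 0.0])
y = np.array([0.0, 1.0, 3.0])
val = n_norm_p([x, y], 2)
# Lagrange identity: val^2 equals the Gram determinant |x|^2 |y|^2 - <x,y>^2
gram = np.dot(x, x) * np.dot(y, y) - np.dot(x, y) ** 2
print(val ** 2, gram)
```

Here both printed values coincide (46 for this choice of x and y), illustrating that the 2-norm measures the area spanned by the pair.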

Lemma 1. An n-normed space X is an n-Banach space if and only if every absolutely convergent series is convergent.


Proof. Firstly, let X be an n-Banach space. Assume that Σ_{k=1}^∞ ‖x_k, z₁, …, z_{n−1}‖ is convergent for all z₁, …, z_{n−1} ∈ X. By the convergence principle for real series, Σ_{k=m+1}^∞ ‖x_k, z₁, …, z_{n−1}‖ → 0 (m → ∞). If we take S_m = Σ_{k=1}^m x_k, then

‖S_l − S_m, z₁, …, z_{n−1}‖ = ‖x_{m+1} + x_{m+2} + ⋯ + x_l, z₁, …, z_{n−1}‖
≤ ‖x_{m+1}, z₁, …, z_{n−1}‖ + ‖x_{m+2}, z₁, …, z_{n−1}‖ + ⋯ + ‖x_l, z₁, …, z_{n−1}‖
= Σ_{k=m+1}^l ‖x_k, z₁, …, z_{n−1}‖.

So ‖S_l − S_m, z₁, …, z_{n−1}‖ → 0 (m, l → ∞), and then we see that (S_m) is a Cauchy sequence. Since X is an n-Banach space, Σ_{k=1}^∞ x_k is convergent.

For the converse, suppose every absolutely convergent series is convergent. Let {x_k} be a Cauchy sequence of X, so ‖x_k − x_m, z₁, …, z_{n−1}‖ → 0 (k, m → ∞). Then there exists a strictly increasing sequence {k_m} of natural numbers such that

‖x_{k_{m+1}} − x_{k_m}, z₁, …, z_{n−1}‖ < 1/2^m

for all m ∈ ℕ. Hence we obtain Σ_{m=1}^∞ ‖x_{k_{m+1}} − x_{k_m}, z₁, …, z_{n−1}‖ < ∞, so Σ_{m=1}^∞ (x_{k_{m+1}} − x_{k_m}) is convergent by the hypothesis. If we take T_v = Σ_{m=1}^v (x_{k_{m+1}} − x_{k_m}), then lim_{v→∞} T_v = lim_{v→∞} (x_{k_{v+1}} − x_{k_1}) exists, and so the subsequence {x_{k_{v+1}}} is convergent. Therefore {x_k} is convergent, and the proof is completed.

Theorem 1. Let X be an n-Banach lattice and Y be a normed Riesz space. If T : Xⁿ → Y is a positive surjective n-linear operator, then it is continuous.

Proof. It is known that if T : Xⁿ → Y is an n-linear operator, then boundedness and continuity are equivalent [8]. Assume that T is not bounded. Then there exists a sequence {x_k} of X such that ‖x_k, z₁, …, z_{n−1}‖ = 1 and ‖T(x_k, z₁, …, z_{n−1})‖ ≥ k³ for all k ∈ ℕ and z₁, …, z_{n−1} ∈ X. Since T is a positive operator, we can write

|T(x_k, z₁, …, z_{n−1})| ≤ T|(x_k, z₁, …, z_{n−1})| = T(|x_k|, |z₁|, …, |z_{n−1}|).

Hence we find

‖T(|x_k|, |z₁|, …, |z_{n−1}|)‖ ≥ ‖ |T(x_k, z₁, …, z_{n−1})| ‖ = ‖T(x_k, z₁, …, z_{n−1})‖ ≥ k³.

If we consider that ‖x_k, z₁, …, z_{n−1}‖ = ‖ |x_k|, z₁, …, z_{n−1}‖ and set y_k = |x_k|, then there is a sequence {y_k} such that

(2.1)  ‖T(y_k, |z₁|, …, |z_{n−1}|)‖ ≥ k³

with ‖y_k, z₁, …, z_{n−1}‖ = 1 and 0 ≤ y_k for every k ∈ ℕ.

Since Σ_{k=1}^∞ ‖y_k, |z₁|, …, |z_{n−1}|‖/k² < ∞ and X is an n-Banach space, the series Σ_{k=1}^∞ y_k/k² is n-norm convergent in X. If we set y = Σ_{k=1}^∞ y_k/k², then it is obvious that 0 ≤ y_k/k² ≤ y holds for every k ∈ ℕ. Hence, from inequality (2.1) we obtain

k ≤ ‖T(y_k/k², |z₁|, …, |z_{n−1}|)‖ ≤ ‖T(y, |z₁|, …, |z_{n−1}|)‖ < ∞

for all k ∈ ℕ, which is a contradiction. Therefore T must be bounded, so T is continuous.

When X and Y are normed Riesz spaces, the definition of the regular norm ‖·‖_r is given in [1, 2]. Now we will generate an n-regular norm by using the n-norm.

Definition 2. Let X be an n-Banach lattice and Y be a Banach lattice. If T : Xⁿ → Y is an n-linear operator with modulus, then the n-regular norm |||T|||_r is defined by

|||T|||_r = ||| |T| ||| := sup { ‖ |T(x₁, x₂, …, xₙ)| ‖ : ‖x₁, x₂, …, xₙ‖ ≤ 1 }

for all x₁, x₂, …, xₙ ∈ X. We can also give an equivalent definition of the n-regular norm as

|||T|||_r := inf { |||S||| : ±T ≤ S }.

Theorem 2. If X is an n-Banach lattice and Y is a Dedekind complete Banach lattice, then L_r(Xⁿ, Y) is a Banach lattice under the n-regular norm. Also |||T||| ≤ |||T|||_r holds for all T ∈ L_r(Xⁿ, Y).

Proof. From the definition of the n-regular norm, if T is a positive operator then |||T|||_r = ||| |T| ||| = |||T||| holds. Let T, S ∈ L_r(Xⁿ, Y) satisfy 0 ≤ S ≤ T. Then we can write

|S(x₁, x₂, …, xₙ)| ≤ S|(x₁, x₂, …, xₙ)| ≤ T|(x₁, x₂, …, xₙ)| = T(|x₁|, |x₂|, …, |xₙ|)

for all x₁, x₂, …, xₙ ∈ X with ‖x₁, x₂, …, xₙ‖ = ‖|x₁|, |x₂|, …, |xₙ|‖ ≤ 1. Hence we find

‖S(x₁, x₂, …, xₙ)‖ = ‖ |S(x₁, x₂, …, xₙ)| ‖ ≤ ‖S|(x₁, x₂, …, xₙ)| ‖ ≤ ‖T|(x₁, x₂, …, xₙ)| ‖ = ‖T(|x₁|, |x₂|, …, |xₙ|)‖ ≤ |||T||| ‖|x₁|, |x₂|, …, |xₙ|‖ ≤ |||T|||.

Therefore, we obtain

|||S||| = sup { ‖S(x₁, x₂, …, xₙ)‖ : ‖x₁, x₂, …, xₙ‖ ≤ 1 } ≤ |||T|||.

Let T, S ∈ L_r(Xⁿ, Y) satisfy |S| ≤ |T|. Since |S| and |T| are positive operators, from the preceding result we write

|||S|||_r = ||| |S| ||| ≤ ||| |T| ||| = |||T|||_r.

Hence the n-regular norm is a lattice norm. Also, we get

|||T||| = sup { ‖T(x₁, x₂, …, xₙ)‖ : ‖x₁, x₂, …, xₙ‖ ≤ 1 }
= sup { ‖ |T(x₁, x₂, …, xₙ)| ‖ : ‖x₁, x₂, …, xₙ‖ ≤ 1 }
≤ sup { ‖ |T|(|x₁|, |x₂|, …, |xₙ|) ‖ : ‖ |x₁|, |x₂|, …, |xₙ| ‖ ≤ 1 }
≤ sup { ‖ |T|(z₁, z₂, …, zₙ) ‖ : ‖z₁, z₂, …, zₙ‖ ≤ 1 }
= |||T|||_r.

Theorem 3. If X is an n-Banach lattice and Y is a Dedekind complete Banach lattice, then L_b(Xⁿ, Y) is a Dedekind complete Banach lattice under the n-regular norm.


Proof. It is known that L_b(Xⁿ, Y) is Dedekind complete whenever Y is Dedekind complete [1, 2]. Let {T_m} be a Cauchy sequence of L_b(Xⁿ, Y) with respect to the n-regular norm. Then there is a subsequence such that

|||T_{m+1} − T_m|||_r = ||| |T_{m+1} − T_m| ||| < 2^{−m}

for all m ∈ ℕ. Since |||T_{m+1} − T_m||| ≤ |||T_{m+1} − T_m|||_r holds for all m ∈ ℕ by Theorem 2, {T_m} is a Cauchy sequence of L(Xⁿ, Y). So there exists T ∈ L(Xⁿ, Y) with |||T_m − T||| → 0. Now let any x = (x₁, x₂, …, xₙ) ∈ Xⁿ₊. Since

(T − T_m)(z) = Σ_{i=m}^∞ (T_{i+1} − T_i)(z) ≤ Σ_{i=m}^∞ |T_{i+1} − T_i|(x)

for all z = (z₁, z₂, …, zₙ) ∈ Xⁿ with |z| ≤ x, the modulus of T − T_m exists and

(2.2)  |T − T_m|(x) = sup { (T − T_m)(z) : |z| ≤ x } ≤ Σ_{i=m}^∞ |T_{i+1} − T_i|(x)

holds for each x ∈ Xⁿ₊. T is a regular operator, i.e. T ∈ L_b(Xⁿ, Y), because we can write T = (T − T₁) + T₁. From (2.2), |||T_m − T|||_r ≤ Σ_{i=m}^∞ |||T_{i+1} − T_i|||_r ≤ 2^{1−m}, and hence |||T_m − T|||_r → 0.

The order bound norm of order bounded linear operators between Banach lattices is defined in [10]. We will introduce a new norm, using the n-norm, on the space L_b(Xⁿ, Y) whenever X is an n-Banach lattice and Y is a Banach lattice. For this, we need the following proposition.

Proposition 1. If X is an n-Banach lattice, Y is a Banach lattice and T : Xⁿ → Y is an order bounded operator, then there exists a real constant λ such that for all x = (x₁, x₂, …, xₙ) ∈ Xⁿ₊ there is y ∈ Y⁺ such that |(z₁, z₂, …, zₙ)| ≤ (x₁, x₂, …, xₙ) implies |T(z₁, z₂, …, zₙ)| ≤ y, with ‖y‖ ≤ λ‖x₁, x₂, …, xₙ‖.

Proof. Since T is order bounded, it is obvious that for any (x₁, x₂, …, xₙ) ∈ Xⁿ₊ there is y ∈ Y⁺ such that |(z₁, z₂, …, zₙ)| ≤ (x₁, x₂, …, xₙ) implies |T(z₁, z₂, …, zₙ)| ≤ y. We will show the n-norm condition; for this, assume to the contrary that it fails. In this case, we can find a sequence {x_k} of X⁺ with ‖x_k, e₁, …, e_{n−1}‖ = 1 for all k ∈ ℕ such that, for any y_k ∈ Y⁺ for which |(z₁, z₂, …, zₙ)| ≤ (x_k, e₁, …, e_{n−1}) implies |T(z₁, z₂, …, zₙ)| ≤ y_k, it must be that

‖y_k‖ ≥ k2^k.

Since X is an n-Banach space, the series Σ_{k=1}^∞ 2^{−k}x_k converges, so we may write x = Σ_{k=1}^∞ 2^{−k}x_k. Now, let y ∈ Y⁺ be such that

|(z₁, z₂, …, zₙ)| ≤ (x, e₁, …, e_{n−1}) implies |T(z₁, z₂, …, zₙ)| ≤ y.

If |(z₁, z₂, …, zₙ)| ≤ (x_k, e₁, …, e_{n−1}), then

|(2^{−k}z₁, z₂, …, zₙ)| ≤ (2^{−k}x_k, e₁, …, e_{n−1}) ≤ (x, e₁, …, e_{n−1}),

and hence |T(z₁, z₂, …, zₙ)| ≤ 2^k y. Therefore we obtain ‖2^k y‖ ≥ k2^k, and so

‖y‖ ≥ k

for all k ∈ ℕ. This is a contradiction, so the proof is completed.


Definition 3. If X is an n-Banach lattice, Y is a Banach lattice and T : Xⁿ → Y is an order bounded operator, then we define the n-order bound norm of T by

|||T|||_b = inf { λ ∈ ℝ : there exists y ∈ Y⁺ such that |(z₁, z₂, …, zₙ)| ≤ (x₁, x₂, …, xₙ) implies |T(z₁, z₂, …, zₙ)| ≤ y, with ‖y‖ ≤ λ‖x₁, x₂, …, xₙ‖, for all (x₁, x₂, …, xₙ) ∈ Xⁿ₊ }.

Theorem 4. If T ∈ L_r(Xⁿ, Y), then |||T||| ≤ |||T|||_b ≤ |||T|||_r. Moreover, if Y is Dedekind complete, then |||T|||_r = |||T|||_b holds for all T ∈ L_r(Xⁿ, Y).

Proof. Take any (x₁, x₂, …, xₙ) ∈ Xⁿ₊ with ‖x₁, x₂, …, xₙ‖ ≤ 1. We can write ‖z₁, z₂, …, zₙ‖ ≤ ‖x₁, x₂, …, xₙ‖ ≤ 1 for all (z₁, z₂, …, zₙ) ∈ Xⁿ satisfying the condition |(z₁, z₂, …, zₙ)| ≤ (x₁, x₂, …, xₙ) in the definition of the norm |||·|||_b. If we consider that |T(z₁, z₂, …, zₙ)| ≤ y and ‖y‖ ≤ λ‖x₁, x₂, …, xₙ‖, then we obtain

‖T(z₁, z₂, …, zₙ)‖ ≤ ‖y‖ ≤ λ.

Therefore we find |||T||| ≤ λ, and hence

|||T||| ≤ |||T|||_b

holds for all T ∈ L_r(Xⁿ, Y).

Now, let S ∈ L_r(Xⁿ, Y) with ±T ≤ S. If we take y = S(x₁, x₂, …, xₙ), then |(z₁, z₂, …, zₙ)| ≤ (x₁, x₂, …, xₙ) implies |T(z₁, z₂, …, zₙ)| ≤ |T| |(z₁, z₂, …, zₙ)| ≤ S(x₁, x₂, …, xₙ) = y. Hence we obtain

‖y‖ = ‖S(x₁, x₂, …, xₙ)‖ ≤ |||S||| ‖x₁, x₂, …, xₙ‖.

So |||S||| ∈ { λ ∈ ℝ : there exists y ∈ Y⁺ such that |(z₁, z₂, …, zₙ)| ≤ (x₁, x₂, …, xₙ) implies |T(z₁, z₂, …, zₙ)| ≤ y with ‖y‖ ≤ λ‖x₁, x₂, …, xₙ‖ for any (x₁, x₂, …, xₙ) ∈ Xⁿ₊ }. Then we get

|||T|||_b ≤ |||T|||_r.

It is known that if T : X → Y is an order bounded disjointness preserving operator, then |T| exists and |T|(x) = |T(x)| for all x ∈ X⁺ [2]. With the aid of this fact, we obtain the following proposition.

Proposition 2. If X is an n-Banach lattice, Y is a Banach lattice and T, S : Xⁿ → Y are order bounded disjointness preserving operators, then

||| |T| − |S| ||| ≤ 2 |||T − S|||.

Proof. Let (x₁, x₂, …, xₙ) ∈ Xⁿ₊. Then

| |T|(x₁, x₂, …, xₙ) − |S|(x₁, x₂, …, xₙ) | = | |T(x₁, x₂, …, xₙ)| − |S(x₁, x₂, …, xₙ)| | ≤ |(T − S)(x₁, x₂, …, xₙ)|,

so we find

‖ |T|(x₁, x₂, …, xₙ) − |S|(x₁, x₂, …, xₙ) ‖ ≤ ‖(T − S)(x₁, x₂, …, xₙ)‖ ≤ |||T − S||| ‖x₁, x₂, …, xₙ‖.

For (x₁, x₂, …, xₙ) ∈ Xⁿ, we write

| |T|(x₁, x₂, …, xₙ) − |S|(x₁, x₂, …, xₙ) | ≤ | |T|(x₁⁺, x₂⁺, …, xₙ⁺) − |S|(x₁⁺, x₂⁺, …, xₙ⁺) | + | |T|(x₁⁻, x₂⁻, …, xₙ⁻) − |S|(x₁⁻, x₂⁻, …, xₙ⁻) |,


hence we obtain

‖(|T| − |S|)(x₁, x₂, …, xₙ)‖ ≤ |||T − S||| ( ‖x₁⁺, x₂⁺, …, xₙ⁺‖ + ‖x₁⁻, x₂⁻, …, xₙ⁻‖ ) ≤ 2 |||T − S||| ‖x₁, x₂, …, xₙ‖.

The proof is completed.

We will obtain the following corollary by using the preceding proposition, as in the method of [10].

Corollary 1. Let X be an n-Banach lattice and Y be a Banach lattice. If, for each k ∈ ℕ, T_k : Xⁿ → Y is an order bounded disjointness preserving operator and T : Xⁿ → Y is a bounded operator with T_k → T in the n-operator norm, then T is an order bounded disjointness preserving operator and |T_k| → |T| in the n-operator norm.

References

[1] Y. A. Abramovich and C. D. Aliprantis, An Invitation to Operator Theory, A.M.S. Graduate Studies in Mathematics 50, Rhode Island, U.S.A., 2002.
[2] C. D. Aliprantis and O. Burkinshaw, Positive Operators, Academic Press, New York and London, 1985.
[3] J. B. Conway, A Course in Functional Analysis, Springer-Verlag, New York, 1985.
[4] S. Gähler, Lineare 2-normierte Räume, Math. Nachr., 28: 1-43, 1964.
[5] H. Gunawan, The space of p-summable sequences and its natural n-norm, Bull. Austral. Math. Soc., 64: 137-147, 2001.
[6] P. Meyer-Nieberg, Banach Lattices, Springer-Verlag, Berlin-Heidelberg, 1991.
[7] H. H. Schaefer, Banach Lattices and Positive Operators, Springer-Verlag, Berlin-Heidelberg, 1974.
[8] A. L. Soenjaya, On n-bounded and n-continuous operators in n-normed spaces, J. Indones. Math. Soc., 18: 45-56, 2012.
[9] T. Terzioğlu, Fonksiyonel Analizin Yöntemleri, Tübitak Yayınları, Ankara, 1984.
[10] A. W. Wickstead and A. K. Kitover, Operator norm limits of order continuous operators, Positivity, 9: 341-345, 2005.

¹ONDOKUZ MAYIS UNIVERSITY, FACULTY OF SCIENCES AND ARTS, DEPARTMENT OF MATHEMATICS, 55139 KURUPELİT, SAMSUN / TURKEY
E-mail address: [email protected]

Current address: ²GÜMÜŞHANE UNIVERSITY, FACULTY OF ENGINEERING, DEPARTMENT OF MATHEMATICAL ENGINEERING, 29100 GÜMÜŞHANE / TURKEY
E-mail address: [email protected]


Differential subordinations using Ruscheweyh derivative and a multiplier transformation

Alb Lupaş Alina
Department of Mathematics and Computer Science
University of Oradea
str. Universitatii nr. 1, 410087 Oradea, Romania
[email protected]

Abstract

In this paper the author derives several interesting differential subordination results. These subordinations are established by means of a differential operator obtained using the Ruscheweyh derivative Rᵐf(z) and the multiplier transformation I(m, λ, l)f(z), namely the operator RI^α_{m,λ,l} : Aₙ → Aₙ given by RI^α_{m,λ,l}f(z) = (1 − α)Rᵐf(z) + αI(m, λ, l)f(z), for z ∈ U, m ∈ ℕ, λ, l ≥ 0 and Aₙ = {f ∈ H(U) : f(z) = z + a_{n+1}z^{n+1} + …, z ∈ U}.

A number of interesting consequences of some of these subordination results are discussed. Relevant connections of some of the new results obtained in this paper with those in earlier works are also provided.

Keywords: differential subordination, convex function, best dominant, differential operator.
2000 Mathematics Subject Classification: 30C45, 30A20, 34A40.

1 Introduction

Denote by U the unit disc of the complex plane, U = {z ∈ ℂ : |z| < 1}, and by H(U) the space of holomorphic functions in U.

Let Aₙ = {f ∈ H(U) : f(z) = z + a_{n+1}z^{n+1} + …, z ∈ U} and H[a, n] = {f ∈ H(U) : f(z) = a + aₙzⁿ + a_{n+1}z^{n+1} + …, z ∈ U} for a ∈ ℂ and n ∈ ℕ.

Denote by K = {f ∈ Aₙ : Re(zf″(z)/f′(z) + 1) > 0, z ∈ U} the class of normalized convex functions in U.

If f and g are analytic functions in U, we say that f is subordinate to g, written f ≺ g, if there is a function w analytic in U, with w(0) = 0, |w(z)| < 1 for all z ∈ U, such that f(z) = g(w(z)) for all z ∈ U. If g is univalent, then f ≺ g if and only if f(0) = g(0) and f(U) ⊆ g(U).

Let ψ : ℂ³ × U → ℂ and let h be a univalent function in U. If p is analytic in U and satisfies the (second-order) differential subordination

ψ(p(z), zp′(z), z²p″(z); z) ≺ h(z), z ∈ U, (1.1)

then p is called a solution of the differential subordination. The univalent function q is called a dominant of the solutions of the differential subordination, or more simply a dominant, if p ≺ q for all p satisfying (1.1). A dominant q̃ that satisfies q̃ ≺ q for all dominants q of (1.1) is said to be the best dominant of (1.1). The best dominant is unique up to a rotation of U.

Definition 1.1 (Ruscheweyh [13]) For f ∈ Aₙ, m, n ∈ ℕ, the operator Rᵐ is defined by Rᵐ : Aₙ → Aₙ,

R⁰f(z) = f(z),
R¹f(z) = zf′(z), …,
(m + 1)R^{m+1}f(z) = z(Rᵐf(z))′ + mRᵐf(z), z ∈ U.

Remark 1.1 If f ∈ Aₙ, f(z) = z + Σ_{j=n+1}^∞ aⱼzʲ, then Rᵐf(z) = z + Σ_{j=n+1}^∞ ((m+j−1)!/(m!(j−1)!)) aⱼzʲ, z ∈ U.


J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 78-88, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


Definition 1.2 ([3]) For f ∈ Aₙ, m, n ∈ ℕ, λ, l ≥ 0, the operator I(m, λ, l)f(z) is defined by the following infinite series:

I(m, λ, l)f(z) = z + Σ_{j=n+1}^∞ ((λ(j − 1) + l + 1)/(l + 1))ᵐ aⱼzʲ.

Remark 1.2 It follows from the above definition that

I(0, λ, l)f(z) = f(z),
(l + 1)I(m + 1, λ, l)f(z) = (l + 1 − λ)I(m, λ, l)f(z) + λz(I(m, λ, l)f(z))′, z ∈ U.

Remark 1.3 For l = 0, λ ≥ 0, the operator D^m_λ = I(m, λ, 0) was introduced and studied by Al-Oboudi [11]; it reduces to the Sălăgean differential operator [14] for λ = 1.
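The recursion in Remark 1.2 can be verified symbolically against the series of Definition 1.2. A sketch (assuming SymPy; the coefficients a_j are arbitrary illustrative values):

```python
# Sketch (assumes SymPy): verify
# (l+1) I(m+1, lam, l) f = (l+1-lam) I(m, lam, l) f + lam z (I(m, lam, l) f)'
import sympy as sp

z, lam, l = sp.symbols('z lambda l', positive=True)
a = {2: 3, 3: 5}                          # illustrative a_j

def I(m, coeffs):
    # I(m, lambda, l) acting on z + sum a_j z^j, per Definition 1.2
    return z + sum(((lam * (j - 1) + l + 1) / (l + 1)) ** m * aj * z**j
                   for j, aj in coeffs.items())

m = 2
g = I(m, a)
lhs = sp.expand((l + 1) * I(m + 1, a))
rhs = sp.expand((l + 1 - lam) * g + lam * z * sp.diff(g, z))
print(sp.simplify(lhs - rhs))   # 0
```

On each monomial zʲ the right-hand side contributes the factor (l + 1 − λ) + λj = l + 1 + λ(j − 1), which is exactly one more power of the multiplier.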

Definition 1.3 ([6]) Let α, λ, l ≥ 0, m, n ∈ ℕ. Denote by RI^α_{m,λ,l} the operator given by RI^α_{m,λ,l} : Aₙ → Aₙ,

RI^α_{m,λ,l}f(z) = (1 − α)Rᵐf(z) + αI(m, λ, l)f(z), z ∈ U.

Remark 1.4 If f ∈ Aₙ, f(z) = z + Σ_{j=n+1}^∞ aⱼzʲ, then

RI^α_{m,λ,l}f(z) = z + Σ_{j=n+1}^∞ { α((1 + λ(j−1) + l)/(l + 1))ᵐ + (1 − α)(m+j−1)!/(m!(j−1)!) } aⱼzʲ, for z ∈ U.

Remark 1.5 For α = 0, RI⁰_{m,λ,l}f(z) = Rᵐf(z), where z ∈ U, and for α = 1, RI¹_{m,λ,l}f(z) = I(m, λ, l)f(z), where z ∈ U; the latter was studied in [2], [8]. For l = 0, we obtain RI^α_{m,λ,0}f(z) = RD^m_{λ,α}f(z), which was studied in [4], [5], [10], and for l = 0 and λ = 1, we obtain RI^α_{m,1,0}f(z) = L^m_α f(z), which was studied in [1], [7]. For m = 0, RI^α_{0,λ,l}f(z) = (1 − α)R⁰f(z) + αI(0, λ, l)f(z) = f(z) = R⁰f(z) = I(0, λ, l)f(z), where z ∈ U.

Lemma 1.1 (Miller and Mocanu [12]) Let g be a convex function in U and let h(z) = g(z) + nαzg′(z) for z ∈ U, where α > 0 and n is a positive integer. If p(z) = g(0) + pₙzⁿ + p_{n+1}z^{n+1} + …, z ∈ U, is holomorphic in U and

p(z) + αzp′(z) ≺ h(z), z ∈ U,

then

p(z) ≺ g(z), z ∈ U,

and this result is sharp.

Lemma 1.2 (Hallenbeck and Ruscheweyh [12, Th. 3.1.6, p. 71]) Let h be a convex function with h(0) = a, and let γ ∈ ℂ\{0} be a complex number with Re γ ≥ 0. If p ∈ H[a, n] and

p(z) + (1/γ)zp′(z) ≺ h(z), z ∈ U,

then

p(z) ≺ g(z) ≺ h(z), z ∈ U,

where g(z) = (γ/(n z^{γ/n})) ∫₀^z h(t) t^{γ/n − 1} dt, z ∈ U.

2 Main results

Theorem 2.1 Let g be a convex function, g(0) = 1, and let h be the function h(z) = g(z) + (nz/δ)g′(z), z ∈ U. If α, λ, l, δ ≥ 0, m, n ∈ ℕ, f ∈ Aₙ and f satisfies the differential subordination

(RI^α_{m,λ,l}f(z)/z)^{δ−1} (RI^α_{m,λ,l}f(z))′ ≺ h(z), z ∈ U, (2.1)

then

(RI^α_{m,λ,l}f(z)/z)^δ ≺ g(z), z ∈ U,

and this result is sharp.


Ruscheweyh derivative and multiplier transformation

Alb Lupaş Alina


Proof. Consider

p(z) = (RI^α_{m,λ,l}f(z)/z)^δ = ((z + Σ_{j=n+1}^∞ { α((1+λ(j−1)+l)/(l+1))ᵐ + (1−α)(m+j−1)!/(m!(j−1)!) } aⱼzʲ)/z)^δ = 1 + p_{δn}z^{δn} + p_{δn+1}z^{δn+1} + …, z ∈ U.

We deduce that p ∈ H[1, δn]. Differentiating, we obtain

(RI^α_{m,λ,l}f(z)/z)^{δ−1} (RI^α_{m,λ,l}f(z))′ = p(z) + (1/δ)zp′(z), z ∈ U.

Then (2.1) becomes

p(z) + (1/δ)zp′(z) ≺ h(z) = g(z) + (nz/δ)g′(z), z ∈ U.

By using Lemma 1.1, we have

p(z) ≺ g(z), z ∈ U, i.e. (RI^α_{m,λ,l}f(z)/z)^δ ≺ g(z), z ∈ U.

Theorem 2.2 Let h be a holomorphic function which satisfies the inequality Re(1 + zh″(z)/h′(z)) > −1/2, z ∈ U, and h(0) = 1. If α, λ, l, δ ≥ 0, m, n ∈ ℕ, f ∈ Aₙ and f satisfies the differential subordination

(RI^α_{m,λ,l}f(z)/z)^{δ−1} (RI^α_{m,λ,l}f(z))′ ≺ h(z), z ∈ U, (2.2)

then

(RI^α_{m,λ,l}f(z)/z)^δ ≺ q(z), z ∈ U,

where q(z) = (δ/(n z^{δ/n})) ∫₀^z h(t) t^{δ/n − 1} dt.

Proof. Let

p(z) = (RI^α_{m,λ,l}f(z)/z)^δ = ((z + Σ_{j=n+1}^∞ { α((1+λ(j−1)+l)/(l+1))ᵐ + (1−α)(m+j−1)!/(m!(j−1)!) } aⱼzʲ)/z)^δ
= (1 + Σ_{j=n+1}^∞ { α((1+λ(j−1)+l)/(l+1))ᵐ + (1−α)(m+j−1)!/(m!(j−1)!) } aⱼz^{j−1})^δ
= 1 + Σ_{j=nδ+1}^∞ pⱼz^{j−1},

for z ∈ U, so p ∈ H[1, nδ]. Differentiating, we obtain

(RI^α_{m,λ,l}f(z)/z)^{δ−1} (RI^α_{m,λ,l}f(z))′ = p(z) + (1/δ)zp′(z), z ∈ U,

and (2.2) becomes

p(z) + (1/δ)zp′(z) ≺ h(z), z ∈ U.

Using Lemma 1.2, we have

p(z) ≺ q(z), z ∈ U, i.e. (RI^α_{m,λ,l}f(z)/z)^δ ≺ q(z) = (δ/(n z^{δ/n})) ∫₀^z h(t) t^{δ/n − 1} dt, z ∈ U,

and q is the best dominant.

Corollary 2.3 Let h(z) = (1 + (2β − 1)z)/(1 + z) be a convex function in U, where 0 ≤ β < 1. If α, δ, l, λ ≥ 0, m, n ∈ ℕ, f ∈ Aₙ and f satisfies the differential subordination

(RI^α_{m,λ,l}f(z)/z)^{δ−1} (RI^α_{m,λ,l}f(z))′ ≺ h(z), z ∈ U, (2.3)


then

(RI^α_{m,λ,l}f(z)/z)^δ ≺ q(z), z ∈ U,

where q is given by q(z) = (2β − 1) + (2(1 − β)δ/(n z^{δ/n})) ∫₀^z t^{δ/n − 1}/(1 + t) dt, z ∈ U. The function q is convex and it is the best dominant.

Proof. Following the same steps as in the proof of Theorem 2.2 and considering p(z) = (RI^α_{m,λ,l}f(z)/z)^δ, the differential subordination (2.3) becomes

p(z) + (z/δ)p′(z) ≺ h(z) = (1 + (2β − 1)z)/(1 + z), z ∈ U.

By using Lemma 1.2 for γ = δ, we have p(z) ≺ q(z), i.e.

(RI^α_{m,λ,l}f(z)/z)^δ ≺ q(z) = (δ/(n z^{δ/n})) ∫₀^z h(t) t^{δ/n − 1} dt = (δ/(n z^{δ/n})) ∫₀^z t^{δ/n − 1}(1 + (2β − 1)t)/(1 + t) dt
= (δ/(n z^{δ/n})) ∫₀^z [ (2β − 1)t^{δ/n − 1} + 2(1 − β)t^{δ/n − 1}/(1 + t) ] dt
= (2β − 1) + (2(1 − β)δ/(n z^{δ/n})) ∫₀^z t^{δ/n − 1}/(1 + t) dt, z ∈ U.

Remark 2.1 For m = 1, n = 1, l = 2, λ = 1, α = 1/2, δ = 1 we obtain the same example as in [9, Example 5.2.1, p. 179].
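For δ = n = 1 the dominant in Corollary 2.3 can be evaluated in closed form. A sketch (assuming SymPy) computes the integral and recovers the classical dominant (2β − 1) + 2(1 − β) log(1 + z)/z:

```python
# Sketch (assumes SymPy): the dominant of Corollary 2.3 for delta = n = 1,
# q(z) = (2b - 1) + 2(1 - b) (1/z) * Integral(1/(1 + t), (t, 0, z)).
import sympy as sp

z, t, beta = sp.symbols('z t beta', positive=True)
q = (2*beta - 1) + 2*(1 - beta) / z * sp.integrate(1 / (1 + t), (t, 0, z))
classical = (2*beta - 1) + 2*(1 - beta) * sp.log(1 + z) / z
print(sp.simplify(q - classical))   # 0
```

The symbolic difference simplifies to zero, so in this special case the integral form of q coincides with the familiar logarithmic dominant.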

Theorem 2.4 Let g be a convex function such that g(0) = 1, and let h be the function h(z) = g(z) + (nz/δ)g′(z), z ∈ U. If α, λ, l, δ ≥ 0, m, n ∈ ℕ, f ∈ Aₙ and the differential subordination

z((δ+1)/δ)·RI^α_{m,λ,l}f(z)/(RI^α_{m+1,λ,l}f(z))² + (z²/δ)·RI^α_{m,λ,l}f(z)/(RI^α_{m+1,λ,l}f(z))²·[ (RI^α_{m,λ,l}f(z))′/RI^α_{m,λ,l}f(z) − 2(RI^α_{m+1,λ,l}f(z))′/RI^α_{m+1,λ,l}f(z) ] ≺ h(z), z ∈ U (2.4)

holds, then

z·RI^α_{m,λ,l}f(z)/(RI^α_{m+1,λ,l}f(z))² ≺ g(z), z ∈ U,

and this result is sharp.

Proof. Consider $p(z)=\dfrac{z\,RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}$. We obtain
$$p(z)+\frac{z}{\delta}\,p'(z)=\frac{z(\delta+1)}{\delta}\,\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}+\frac{z^{2}}{\delta}\,\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}\left[\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}-2\,\frac{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m+1,\lambda,l}f(z)}\right].$$
Relation (2.4) becomes
$$p(z)+\frac{z}{\delta}\,p'(z)\prec h(z)=g(z)+\frac{nz}{\delta}\,g'(z),\quad z\in U.$$
By using Lemma 1.1, we have $p(z)\prec g(z)$, $z\in U$, i.e. $\dfrac{z\,RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}\prec g(z)$, $z\in U$.


Theorem 2.5 Let $h$ be a holomorphic function which satisfies the inequality $\mathrm{Re}\left(1+\frac{zh''(z)}{h'(z)}\right)>-\frac{1}{2}$, $z\in U$, and $h(0)=1$. If $\alpha,\lambda,l,\delta\ge 0$, $m,n\in\mathbb{N}$, $f\in A_n$ and $f$ satisfies the differential subordination
$$\frac{z(\delta+1)}{\delta}\,\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}+\frac{z^{2}}{\delta}\,\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}\left[\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}-2\,\frac{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m+1,\lambda,l}f(z)}\right]\prec h(z),\quad z\in U,\qquad(2.5)$$
then
$$\frac{z\,RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}\prec q(z),\quad z\in U,$$
where $q(z)=\dfrac{\delta}{nz^{\delta/n}}\displaystyle\int_0^z h(t)\,t^{\delta/n-1}\,dt.$

Proof. Let $p(z)=\dfrac{z\,RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}$, $z\in U$, $p\in\mathcal{H}[1,n]$. Differentiating, we obtain
$$p(z)+\frac{z}{\delta}\,p'(z)=\frac{z(\delta+1)}{\delta}\,\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}+\frac{z^{2}}{\delta}\,\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}\left[\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}-2\,\frac{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m+1,\lambda,l}f(z)}\right],\quad z\in U,$$
and (2.5) becomes
$$p(z)+\frac{z}{\delta}\,p'(z)\prec h(z),\quad z\in U.$$
Using Lemma 1.2, we have $p(z)\prec q(z)$, $z\in U$, i.e.
$$\frac{z\,RI^{\alpha}_{m,\lambda,l}f(z)}{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)^{2}}\prec q(z)=\frac{\delta}{nz^{\delta/n}}\int_0^z h(t)\,t^{\delta/n-1}\,dt,\quad z\in U,$$
and $q$ is the best dominant.

Theorem 2.6 Let $g$ be a convex function such that $g(0)=1$ and let $h$ be the function $h(z)=g(z)+\frac{nz}{\delta}g'(z)$, $z\in U$. If $\alpha,\lambda,l,\delta\ge 0$, $m,n\in\mathbb{N}$, $f\in A_n$ and the differential subordination
$$\frac{z^{2}(\delta+2)}{\delta}\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}+\frac{z^{3}}{\delta}\left[\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''}{RI^{\alpha}_{m,\lambda,l}f(z)}-\left(\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{2}\right]\prec h(z),\quad z\in U\qquad(2.6)$$
holds, then
$$z^{2}\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\prec g(z),\quad z\in U.$$
This result is sharp.

Proof. Let $p(z)=z^{2}\,\dfrac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}$. We deduce that $p\in\mathcal{H}[0,n]$. Differentiating, we obtain
$$p(z)+\frac{z}{\delta}\,p'(z)=\frac{z^{2}(\delta+2)}{\delta}\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}+\frac{z^{3}}{\delta}\left[\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''}{RI^{\alpha}_{m,\lambda,l}f(z)}-\left(\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{2}\right],\quad z\in U.$$
Using the notation in (2.6), the differential subordination becomes
$$p(z)+\frac{1}{\delta}\,zp'(z)\prec h(z)=g(z)+\frac{nz}{\delta}\,g'(z).$$
By using Lemma 1.1, we have $p(z)\prec g(z)$, $z\in U$, i.e. $z^{2}\,\dfrac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\prec g(z)$, $z\in U$, and this result is sharp.


Theorem 2.7 Let $h$ be a holomorphic function which satisfies the inequality $\mathrm{Re}\left(1+\frac{zh''(z)}{h'(z)}\right)>-\frac{1}{2}$, $z\in U$, and $h(0)=1$. If $\alpha,\lambda,l,\delta\ge 0$, $m,n\in\mathbb{N}$, $f\in A_n$ and $f$ satisfies the differential subordination
$$\frac{z^{2}(\delta+2)}{\delta}\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}+\frac{z^{3}}{\delta}\left[\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''}{RI^{\alpha}_{m,\lambda,l}f(z)}-\left(\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{2}\right]\prec h(z),\quad z\in U,\qquad(2.7)$$
then
$$z^{2}\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\prec q(z),\quad z\in U,$$
where $q(z)=\dfrac{\delta}{nz^{\delta/n}}\displaystyle\int_0^z h(t)\,t^{\delta/n-1}\,dt.$

Proof. Let $p(z)=z^{2}\,\dfrac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}$, $z\in U$, $p\in\mathcal{H}[0,n]$. Differentiating, we obtain
$$p(z)+\frac{z}{\delta}\,p'(z)=\frac{z^{2}(\delta+2)}{\delta}\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}+\frac{z^{3}}{\delta}\left[\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''}{RI^{\alpha}_{m,\lambda,l}f(z)}-\left(\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{2}\right],\quad z\in U,$$
and (2.7) becomes
$$p(z)+\frac{1}{\delta}\,zp'(z)\prec h(z),\quad z\in U.$$
Using Lemma 1.2, we have $p(z)\prec q(z)$, $z\in U$, i.e.
$$z^{2}\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\prec q(z)=\frac{\delta}{nz^{\delta/n}}\int_0^z h(t)\,t^{\delta/n-1}\,dt,\quad z\in U,$$
and $q$ is the best dominant.

Theorem 2.8 Let $g$ be a convex function such that $g(0)=1$ and let $h$ be the function $h(z)=g(z)+nzg'(z)$, $z\in U$. If $\alpha,\lambda,l\ge 0$, $m,n\in\mathbb{N}$, $f\in A_n$ and the differential subordination
$$1-\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''}{\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}}\prec h(z),\quad z\in U\qquad(2.8)$$
holds, then
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{z\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}\prec g(z),\quad z\in U.$$
This result is sharp.

Proof. Let $p(z)=\dfrac{RI^{\alpha}_{m,\lambda,l}f(z)}{z\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}$. We deduce that $p\in\mathcal{H}[1,n]$. Differentiating, we obtain
$$1-\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''}{\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}}=p(z)+zp'(z),\quad z\in U.$$
Using the notation in (2.8), the differential subordination becomes
$$p(z)+zp'(z)\prec h(z)=g(z)+nzg'(z).$$
By using Lemma 1.1, we have $p(z)\prec g(z)$, $z\in U$, i.e. $\dfrac{RI^{\alpha}_{m,\lambda,l}f(z)}{z\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}\prec g(z)$, $z\in U$, and this result is sharp.


Theorem 2.9 Let $h$ be a holomorphic function which satisfies the inequality $\mathrm{Re}\left(1+\frac{zh''(z)}{h'(z)}\right)>-\frac{1}{2}$, $z\in U$, and $h(0)=1$. If $\alpha,\lambda,l\ge 0$, $m,n\in\mathbb{N}$, $f\in A_n$ and $f$ satisfies the differential subordination
$$1-\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''}{\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}}\prec h(z),\quad z\in U,\qquad(2.9)$$
then
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{z\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}\prec q(z),\quad z\in U,$$
where $q(z)=\dfrac{1}{nz^{1/n}}\displaystyle\int_0^z h(t)\,t^{1/n-1}\,dt.$

Proof. Let $p(z)=\dfrac{RI^{\alpha}_{m,\lambda,l}f(z)}{z\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}$, $z\in U$, $p\in\mathcal{H}[1,n]$. Differentiating, we obtain
$$1-\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''}{\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}}=p(z)+zp'(z),\quad z\in U,$$
and (2.9) becomes
$$p(z)+zp'(z)\prec h(z),\quad z\in U.$$
Using Lemma 1.2, we have $p(z)\prec q(z)$, $z\in U$, i.e.
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{z\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}\prec q(z)=\frac{1}{nz^{1/n}}\int_0^z h(t)\,t^{1/n-1}\,dt,\quad z\in U,$$
and $q$ is the best dominant.

Corollary 2.10 Let $h(z)=\frac{1+(2\beta-1)z}{1+z}$ be a convex function in $U$, where $0\le\beta<1$. If $\alpha,\lambda,l\ge 0$, $m,n\in\mathbb{N}$, $f\in A_n$ and $f$ satisfies the differential subordination
$$1-\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''}{\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}}\prec h(z),\quad z\in U,\qquad(2.10)$$
then
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{z\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}\prec q(z),\quad z\in U,$$
where $q$ is given by $q(z)=(2\beta-1)+\dfrac{2(1-\beta)}{nz^{1/n}}\displaystyle\int_0^z\frac{t^{1/n-1}}{1+t}\,dt$, $z\in U$. The function $q$ is convex and it is the best dominant.

Proof. Following the same steps as in the proof of Theorem 2.9 and considering $p(z)=\dfrac{RI^{\alpha}_{m,\lambda,l}f(z)}{z\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}$, the differential subordination (2.10) becomes
$$p(z)+zp'(z)\prec h(z)=\frac{1+(2\beta-1)z}{1+z},\quad z\in U.$$
By using Lemma 1.2 for $\gamma=1$, we have $p(z)\prec q(z)$, i.e.
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)}{z\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}\prec q(z)=\frac{1}{nz^{1/n}}\int_0^z h(t)\,t^{1/n-1}\,dt=\frac{1}{nz^{1/n}}\int_0^z\frac{1+(2\beta-1)t}{1+t}\,t^{1/n-1}\,dt$$
$$=\frac{1}{nz^{1/n}}\int_0^z\left[(2\beta-1)\,t^{1/n-1}+2(1-\beta)\,\frac{t^{1/n-1}}{1+t}\right]dt=(2\beta-1)+\frac{2(1-\beta)}{nz^{1/n}}\int_0^z\frac{t^{1/n-1}}{1+t}\,dt,\quad z\in U.$$


Example 2.1 Let $h(z)=\frac{1-z}{1+z}$, a convex function in $U$ with $h(0)=1$ and $\mathrm{Re}\left(\frac{zh''(z)}{h'(z)}+1\right)>-\frac{1}{2}$.
Let $f(z)=z+z^{2}$, $z\in U$. For $n=1$, $m=1$, $l=2$, $\lambda=1$, $\alpha=\frac{1}{2}$, we obtain
$$RI^{\frac{1}{2}}_{1,1,2}f(z)=\frac{1}{2}R^{1}f(z)+\frac{1}{2}I(1,1,2)f(z)=\frac{1}{3}f(z)+\frac{2}{3}zf'(z)=z+\frac{5}{3}z^{2},\quad z\in U.$$
Then $\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)'=1+\frac{10}{3}z$ and $\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)''=\frac{10}{3}$,
$$\frac{RI^{\frac{1}{2}}_{1,1,2}f(z)}{z\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)'}=\frac{z+\frac{5}{3}z^{2}}{z\left(1+\frac{10}{3}z\right)}=\frac{3+5z}{3+10z},$$
$$1-\frac{RI^{\frac{1}{2}}_{1,1,2}f(z)\cdot\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)''}{\left[\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)'\right]^{2}}=1-\frac{\left(z+\frac{5}{3}z^{2}\right)\cdot\frac{10}{3}}{\left(1+\frac{10}{3}z\right)^{2}}=\frac{50z^{2}+30z+9}{(3+10z)^{2}}.$$
We have $q(z)=\frac{1}{z}\int_0^z\frac{1-t}{1+t}\,dt=-1+\frac{2\ln(1+z)}{z}$.
Using Theorem 2.9 we obtain that
$$\frac{50z^{2}+30z+9}{(3+10z)^{2}}\prec\frac{1-z}{1+z},\quad z\in U,$$
induces
$$\frac{3+5z}{3+10z}\prec-1+\frac{2\ln(1+z)}{z},\quad z\in U.$$
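A quick numeric cross-check of the algebra in Example 2.1 (plain Python, not part of the paper; the coefficient $4/3$ for $I(1,1,2)f$ comes from the multiplier-transformation factor $\left(\frac{1+\lambda(j-1)+l}{l+1}\right)^{m}$ with $m=\lambda=1$, $l=2$, $j=2$):

```python
f   = lambda z: z + z**2
fp  = lambda z: 1 + 2*z                    # f'
R1  = lambda z: z * fp(z)                  # Ruscheweyh derivative R^1 f = z f'
I112 = lambda z: z + (4/3)*z**2            # multiplier transformation I(1,1,2)f

F   = lambda z: 0.5*R1(z) + 0.5*I112(z)    # RI^{1/2}_{1,1,2} f
Fp  = lambda z: 1 + (10/3)*z               # F'
Fpp = 10/3                                 # F''

z = 0.2 + 0.1j
assert abs(F(z) - (z + (5/3)*z**2)) < 1e-12               # F(z) = z + (5/3) z^2
assert abs(F(z)/(z*Fp(z)) - (3 + 5*z)/(3 + 10*z)) < 1e-12
lhs = 1 - F(z)*Fpp/Fp(z)**2
rhs = (50*z**2 + 30*z + 9)/(3 + 10*z)**2
assert abs(lhs - rhs) < 1e-12
print("Example 2.1 algebra checks out")
```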

Theorem 2.11 Let $g$ be a convex function such that $g(0)=0$ and let $h$ be the function $h(z)=g(z)+nzg'(z)$, $z\in U$. If $\alpha,\lambda,l\ge 0$, $m,n\in\mathbb{N}$, $f\in A_n$ and the differential subordination
$$\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}+RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''\prec h(z),\quad z\in U\qquad(2.11)$$
holds, then
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{z}\prec g(z),\quad z\in U.$$
This result is sharp.

Proof. Let $p(z)=\dfrac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{z}$. We deduce that $p\in\mathcal{H}[0,n]$. Differentiating, we obtain
$$\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}+RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''=p(z)+zp'(z),\quad z\in U.$$
Using the notation in (2.11), the differential subordination becomes
$$p(z)+zp'(z)\prec h(z)=g(z)+nzg'(z).$$
By using Lemma 1.1, we have $p(z)\prec g(z)$, $z\in U$, i.e. $\dfrac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{z}\prec g(z)$, $z\in U$, and this result is sharp.

Theorem 2.12 Let $h$ be a holomorphic function which satisfies the inequality $\mathrm{Re}\left(1+\frac{zh''(z)}{h'(z)}\right)>-\frac{1}{2}$, $z\in U$, and $h(0)=0$. If $\alpha,\lambda,l\ge 0$, $m,n\in\mathbb{N}$, $f\in A_n$ and $f$ satisfies the differential subordination
$$\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}+RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''\prec h(z),\quad z\in U,\qquad(2.12)$$
then
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{z}\prec q(z),\quad z\in U,$$
where $q(z)=\dfrac{1}{nz^{1/n}}\displaystyle\int_0^z h(t)\,t^{1/n-1}\,dt.$


Proof. Let $p(z)=\dfrac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{z}$, $z\in U$, $p\in\mathcal{H}[0,n]$. Differentiating, we obtain
$$\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}+RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''=p(z)+zp'(z),\quad z\in U,$$
and (2.12) becomes
$$p(z)+zp'(z)\prec h(z),\quad z\in U.$$
Using Lemma 1.2, we have $p(z)\prec q(z)$, $z\in U$, i.e.
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{z}\prec q(z)=\frac{1}{nz^{1/n}}\int_0^z h(t)\,t^{1/n-1}\,dt,\quad z\in U,$$
and $q$ is the best dominant.

Corollary 2.13 Let $h(z)=\frac{1+(2\beta-1)z}{1+z}$ be a convex function in $U$, where $0\le\beta<1$. If $\alpha,\lambda,l\ge 0$, $m,n\in\mathbb{N}$, $f\in A_n$ and $f$ satisfies the differential subordination
$$\left[\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'\right]^{2}+RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)''\prec h(z),\quad z\in U,\qquad(2.13)$$
then
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{z}\prec q(z),\quad z\in U,$$
where $q$ is given by $q(z)=(2\beta-1)+\dfrac{2(1-\beta)}{nz^{1/n}}\displaystyle\int_0^z\frac{t^{1/n-1}}{1+t}\,dt$, $z\in U$. The function $q$ is convex and it is the best dominant.

Proof. Following the same steps as in the proof of Theorem 2.12 and considering $p(z)=\dfrac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{z}$, the differential subordination (2.13) becomes
$$p(z)+zp'(z)\prec h(z)=\frac{1+(2\beta-1)z}{1+z},\quad z\in U.$$
By using Lemma 1.2 for $\gamma=1$, we have $p(z)\prec q(z)$, i.e.
$$\frac{RI^{\alpha}_{m,\lambda,l}f(z)\cdot\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{z}\prec q(z)=\frac{1}{nz^{1/n}}\int_0^z h(t)\,t^{1/n-1}\,dt=\frac{1}{nz^{1/n}}\int_0^z\frac{1+(2\beta-1)t}{1+t}\,t^{1/n-1}\,dt$$
$$=\frac{1}{nz^{1/n}}\int_0^z\left[(2\beta-1)\,t^{1/n-1}+2(1-\beta)\,\frac{t^{1/n-1}}{1+t}\right]dt=(2\beta-1)+\frac{2(1-\beta)}{nz^{1/n}}\int_0^z\frac{t^{1/n-1}}{1+t}\,dt,\quad z\in U.$$

Example 2.2 Let $h(z)=\frac{1-z}{1+z}$, a convex function in $U$ with $h(0)=1$ and $\mathrm{Re}\left(\frac{zh''(z)}{h'(z)}+1\right)>-\frac{1}{2}$.
Let $f(z)=z+z^{2}$, $z\in U$. For $n=1$, $m=1$, $l=2$, $\lambda=1$, $\alpha=\frac{1}{2}$, we obtain
$$RI^{\frac{1}{2}}_{1,1,2}f(z)=\frac{1}{2}R^{1}f(z)+\frac{1}{2}I(1,1,2)f(z)=\frac{1}{3}f(z)+\frac{2}{3}zf'(z)=z+\frac{5}{3}z^{2},\quad z\in U.$$
Then $\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)'=1+\frac{10}{3}z$, $\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)''=\frac{10}{3}$,
$$\frac{RI^{\frac{1}{2}}_{1,1,2}f(z)\cdot\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)'}{z}=\frac{\left(z+\frac{5}{3}z^{2}\right)\left(1+\frac{10}{3}z\right)}{z}=\frac{50}{9}z^{2}+5z+1,$$
$$\left[\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)'\right]^{2}+RI^{\frac{1}{2}}_{1,1,2}f(z)\cdot\left(RI^{\frac{1}{2}}_{1,1,2}f(z)\right)''=\left(1+\frac{10}{3}z\right)^{2}+\left(z+\frac{5}{3}z^{2}\right)\cdot\frac{10}{3}=\frac{50}{3}z^{2}+10z+1.$$
We have $q(z)=\frac{1}{z}\int_0^z\frac{1-t}{1+t}\,dt=-1+\frac{2\ln(1+z)}{z}$.
Using Theorem 2.12 we obtain that
$$\frac{50}{3}z^{2}+10z+1\prec\frac{1-z}{1+z},\quad z\in U,$$
induces
$$\frac{50}{9}z^{2}+5z+1\prec-1+\frac{2\ln(1+z)}{z},\quad z\in U.$$
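The polynomial identities in Example 2.2 can also be verified numerically at a sample point (plain Python, not part of the paper):

```python
F   = lambda z: z + (5/3)*z**2             # RI^{1/2}_{1,1,2} f for f(z) = z + z^2
Fp  = lambda z: 1 + (10/3)*z               # F'
Fpp = 10/3                                 # F''

z = 0.15 - 0.25j
prod      = F(z)*Fp(z)/z                   # RI f * (RI f)' / z
prod_poly = (50/9)*z**2 + 5*z + 1
lhs       = Fp(z)**2 + F(z)*Fpp            # [(RI f)']^2 + RI f * (RI f)''
lhs_poly  = (50/3)*z**2 + 10*z + 1
assert abs(prod - prod_poly) < 1e-12
assert abs(lhs - lhs_poly) < 1e-12
print("Example 2.2 algebra checks out")
```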

Theorem 2.14 Let $g$ be a convex function such that $g(0)=1$ and let $h$ be the function $h(z)=g(z)+\frac{nz}{1-\delta}g'(z)$, $z\in U$. If $\alpha,\lambda,l\ge 0$, $\delta\in(0,1)$, $m,n\in\mathbb{N}$, $f\in A_n$ and the differential subordination
$$\left(\frac{z}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{\delta}\frac{RI^{\alpha}_{m+1,\lambda,l}f(z)}{1-\delta}\left(\frac{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m+1,\lambda,l}f(z)}-\delta\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)\prec h(z),\quad z\in U\qquad(2.14)$$
holds, then
$$\frac{RI^{\alpha}_{m+1,\lambda,l}f(z)}{z}\cdot\left(\frac{z}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{\delta}\prec g(z),\quad z\in U.$$
This result is sharp.

This result is sharp.

Proof. Let p(z) = RIαm+1,λ,lf(z)

z ·³

zRIαm,λ,lf(z)

´δ. We deduce that p ∈ H[1, n].

Differentiating, we obtain³

zRIαm,λ,lf(z)

´δ RIαm+1,λ,lf(z)

1−δ

µ(RIαm+1,λ,lf(z))

0

RIαm+1,λ,lf(z)− δ

(RIαm,λ,lf(z))0

RIαm,λ,lf(z)

¶= p (z)+ 1

1−δ zp0 (z) ,

z ∈ U.Using the notation in (2.14), the differential subordination becomes

p(z) +1

1− δzp0(z) ≺ h(z) = g(z) + nz

1− δg0(z).

By using Lemma 1.1, we have

p(z) ≺ g(z), z ∈ U, i.e.RIαm+1,λ,lf (z)

z·Ã

z

RIαm,λ,lf (z)

≺ g(z), z ∈ U,

and this result is sharp.

Theorem 2.15 Let $h$ be a holomorphic function which satisfies the inequality $\mathrm{Re}\left(1+\frac{zh''(z)}{h'(z)}\right)>-\frac{1}{2}$, $z\in U$, and $h(0)=1$. If $\alpha,\lambda,l\ge 0$, $\delta\in(0,1)$, $m,n\in\mathbb{N}$, $f\in A_n$ and $f$ satisfies the differential subordination
$$\left(\frac{z}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{\delta}\frac{RI^{\alpha}_{m+1,\lambda,l}f(z)}{1-\delta}\left(\frac{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m+1,\lambda,l}f(z)}-\delta\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)\prec h(z),\quad z\in U,\qquad(2.15)$$
then
$$\frac{RI^{\alpha}_{m+1,\lambda,l}f(z)}{z}\cdot\left(\frac{z}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{\delta}\prec q(z),\quad z\in U,$$
where $q(z)=\dfrac{1-\delta}{nz^{(1-\delta)/n}}\displaystyle\int_0^z h(t)\,t^{(1-\delta)/n-1}\,dt.$

Proof. Let $p(z)=\dfrac{RI^{\alpha}_{m+1,\lambda,l}f(z)}{z}\cdot\left(\dfrac{z}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{\delta}$, $z\in U$, $p\in\mathcal{H}[1,n]$. Differentiating, we obtain
$$\left(\frac{z}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{\delta}\frac{RI^{\alpha}_{m+1,\lambda,l}f(z)}{1-\delta}\left(\frac{\left(RI^{\alpha}_{m+1,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m+1,\lambda,l}f(z)}-\delta\,\frac{\left(RI^{\alpha}_{m,\lambda,l}f(z)\right)'}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)=p(z)+\frac{1}{1-\delta}\,zp'(z),\quad z\in U,$$
and (2.15) becomes
$$p(z)+\frac{1}{1-\delta}\,zp'(z)\prec h(z),\quad z\in U.$$
Using Lemma 1.2, we have $p(z)\prec q(z)$, $z\in U$, i.e.
$$\frac{RI^{\alpha}_{m+1,\lambda,l}f(z)}{z}\cdot\left(\frac{z}{RI^{\alpha}_{m,\lambda,l}f(z)}\right)^{\delta}\prec q(z)=\frac{1-\delta}{nz^{(1-\delta)/n}}\int_0^z h(t)\,t^{(1-\delta)/n-1}\,dt,\quad z\in U,$$
and $q$ is the best dominant.

References

[1] A. Alb Lupas, On special differential subordinations using Salagean and Ruscheweyh operators, Mathematical Inequalities and Applications, Volume 12, Issue 4, 2009, 781-790.
[2] A. Alb Lupas, A special comprehensive class of analytic functions defined by multiplier transformation, Journal of Computational Analysis and Applications, Vol. 12, No. 2, 2010, 387-395.
[3] A. Alb Lupas, A new comprehensive class of analytic functions defined by multiplier transformation, Mathematical and Computer Modelling 54 (2011), 2355-2362, doi:10.1016/j.mcm.2011.05.044.
[4] A. Alb Lupas, On special differential subordinations using a generalized Salagean operator and Ruscheweyh derivative, Journal of Computational Analysis and Applications, Vol. 13, No. 1, 2011, 98-107.
[5] A. Alb Lupas, On a certain subclass of analytic functions defined by a generalized Salagean operator and Ruscheweyh derivative, Carpathian Journal of Mathematics, 28 (2012), No. 2, 183-190.
[6] A. Alb Lupas, On special differential subordinations using multiplier transformation and Ruscheweyh derivative, Romai Journal, Vol. 6, Nr. 2, 2010, 1-14.
[7] A. Alb Lupas, Some differential subordinations using Ruscheweyh derivative and Salagean operator, Advances in Difference Equations, 2013, 2013:150, DOI: 10.1186/1687-1847-2013-150.
[8] A. Alb Lupas, Some differential subordinations using multiplier transformation, Analele Universitatii din Oradea, Fascicola Matematica, Tom XXI (2014), Issue No. 2 (to appear).
[9] D.A. Alb Lupas, Subordinations and Superordinations, Lap Lambert Academic Publishing, 2011.
[10] L. Andrei, Differential subordinations using Ruscheweyh derivative and generalized Salagean operator, Advances in Difference Equations, 2013, 2013:252, DOI: 10.1186/1687-1847-2013-252.
[11] F.M. Al-Oboudi, On univalent functions defined by a generalized Salagean operator, Int. J. Math. Math. Sci., 27 (2004), 1429-1436.
[12] S.S. Miller, P.T. Mocanu, Differential Subordinations. Theory and Applications, Marcel Dekker Inc., New York, Basel, 2000.
[13] St. Ruscheweyh, New criteria for univalent functions, Proc. Amer. Math. Soc., 49 (1975), 109-115.
[14] G. St. Salagean, Subclasses of univalent functions, Lecture Notes in Math., Springer Verlag, Berlin, 1013 (1983), 362-372.


On a certain subclass of analytic functions involving generalized Salagean operator and Ruscheweyh derivative

Alina Alb Lupas and Loriana Andrei

Department of Mathematics and Computer Science, University of Oradea, str. Universitatii nr. 1, 410087 Oradea, Romania
[email protected], [email protected]

Abstract

The main object of this paper is to study some properties of a certain subclass of analytic functions in the open unit disc which is defined by the linear operator $RD^{n}_{\lambda,\alpha}:A\to A$, $RD^{n}_{\lambda,\alpha}f(z)=(1-\alpha)R^{n}f(z)+\alpha D^{n}_{\lambda}f(z)$, $z\in U$, where $R^{n}f(z)$ is the Ruscheweyh derivative, $D^{n}_{\lambda}f(z)$ the generalized Salagean operator, and $A_n=\{f\in\mathcal{H}(U):f(z)=z+a_{n+1}z^{n+1}+\dots,\ z\in U\}$ is the class of normalized analytic functions with $A_1=A$. These properties include a coefficient inequality, a distortion theorem and the extreme points of the differential operator. We also discuss boundedness properties associated with partial sums of functions in the class $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$.

Keywords: analytic functions, coefficient inequalities, partial sums, starlike functions.
2000 Mathematical Subject Classification: 30C45, 30A20, 34A40.

1 Introduction

Denote by $U$ the unit disc of the complex plane, $U=\{z\in\mathbb{C}:|z|<1\}$, and by $\mathcal{H}(U)$ the space of holomorphic functions in $U$.
Let $A_n=\{f\in\mathcal{H}(U):f(z)=z+a_{n+1}z^{n+1}+\dots,\ z\in U\}$ with $A_1=A$. Denote by $T$ the subclass of $A$ consisting of the functions $f$ of the form $f(z)=z-\sum_{j=2}^{\infty}a_j z^j$, $a_j\ge 0$, $z\in U$.

Definition 1.1 (Al-Oboudi [9]) For $f\in A$, $\lambda\ge 0$ and $n\in\mathbb{N}$, the operator $D^{n}_{\lambda}$ is defined by $D^{n}_{\lambda}:A\to A$,
$$D^{0}_{\lambda}f(z)=f(z),\qquad D^{1}_{\lambda}f(z)=(1-\lambda)f(z)+\lambda zf'(z)=D_{\lambda}f(z),\ \dots,$$
$$D^{n+1}_{\lambda}f(z)=(1-\lambda)D^{n}_{\lambda}f(z)+\lambda z\left(D^{n}_{\lambda}f(z)\right)'=D_{\lambda}\left(D^{n}_{\lambda}f(z)\right),\quad z\in U.$$

Remark 1.1 If $f\in A$ and $f(z)=z+\sum_{j=2}^{\infty}a_j z^j$, then $D^{n}_{\lambda}f(z)=z+\sum_{j=2}^{\infty}[1+(j-1)\lambda]^{n}a_j z^j$, $z\in U$.
If $f\in T$ and $f(z)=z-\sum_{j=2}^{\infty}a_j z^j$, then $D^{n}_{\lambda}f(z)=z-\sum_{j=2}^{\infty}[1+(j-1)\lambda]^{n}a_j z^j$, $z\in U$.

Remark 1.2 For $\lambda=1$ in the above definition we obtain the Salagean differential operator [14].

Definition 1.2 (Ruscheweyh [13]) For $f\in A$, $n\in\mathbb{N}$, the operator $R^{n}$ is defined by $R^{n}:A\to A$,
$$R^{0}f(z)=f(z),\qquad R^{1}f(z)=zf'(z),\ \dots,$$
$$(n+1)R^{n+1}f(z)=z\left(R^{n}f(z)\right)'+nR^{n}f(z),\quad z\in U.$$

Remark 1.3 If $f\in A$, $f(z)=z+\sum_{j=2}^{\infty}a_j z^j$, then $R^{n}f(z)=z+\sum_{j=2}^{\infty}\frac{(n+j-1)!}{n!(j-1)!}a_j z^j$, $z\in U$.
If $f\in T$, $f(z)=z-\sum_{j=2}^{\infty}a_j z^j$, then $R^{n}f(z)=z-\sum_{j=2}^{\infty}\frac{(n+j-1)!}{n!(j-1)!}a_j z^j$, $z\in U$.

Definition 1.3 [4] Let $\lambda,\alpha\ge 0$, $n\in\mathbb{N}$. Denote by $RD^{n}_{\lambda,\alpha}$ the operator given by $RD^{n}_{\lambda,\alpha}:A\to A$,
$$RD^{n}_{\lambda,\alpha}f(z)=(1-\alpha)R^{n}f(z)+\alpha D^{n}_{\lambda}f(z),\quad z\in U.$$

J. Applied Functional Analysis, Vol. 10, No.'s 1-2, 89-94, 2015, Copyright 2015 Eudoxus Press, LLC


Remark 1.4 If $f\in A$, $f(z)=z+\sum_{j=2}^{\infty}a_j z^j$, then
$$RD^{n}_{\lambda,\alpha}f(z)=z+\sum_{j=2}^{\infty}\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j z^j,\quad z\in U.$$
This operator was also studied in [5], [7], [8], [10], [11]. If $f\in T$, $f(z)=z-\sum_{j=2}^{\infty}a_j z^j$, then
$$RD^{n}_{\lambda,\alpha}f(z)=z-\sum_{j=2}^{\infty}\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j z^j,\quad z\in U.$$

Remark 1.5 For $\alpha=0$, $RD^{n}_{\lambda,0}f(z)=R^{n}f(z)$, $z\in U$, and for $\alpha=1$, $RD^{n}_{\lambda,1}f(z)=D^{n}_{\lambda}f(z)$, $z\in U$. For $\lambda=1$, we obtain $RD^{n}_{1,\alpha}f(z)=L^{n}_{\alpha}f(z)$, which was studied in [2], [3] and [6].

If $f$ and $g$ are analytic functions in $U$, we say that $f$ is subordinate to $g$, written $f\prec g$, if there is a function $w$ analytic in $U$, with $w(0)=0$, $|w(z)|<1$ for all $z\in U$, such that $f(z)=g(w(z))$ for all $z\in U$. If $g$ is univalent, then $f\prec g$ if and only if $f(0)=g(0)$ and $f(U)\subseteq g(U)$.
We follow the works of A. Abubaker and M. Darus [1].

Definition 1.4 Let $0<\beta\le 1$, $\alpha,\lambda\ge 0$, $n\in\mathbb{N}$, $\gamma\in\mathbb{C}\setminus\{0\}$. Then the function $f\in A$ is said to be in the class $S^{n}_{\lambda,\alpha}(\beta,\gamma)$ if
$$\left|\frac{1}{\gamma}\left(\frac{z\left(RD^{n}_{\lambda,\alpha}f(z)\right)'}{RD^{n}_{\lambda,\alpha}f(z)}-1\right)\right|<\beta,\quad z\in U.$$
We now define the class $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ by
$$\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)=S^{n}_{\lambda,\alpha}(\beta,\gamma)\cap T.$$

2 Coefficient Inequality

Theorem 2.1 Let the function $f\in T$. Then $f$ is in the class $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ if and only if
$$\sum_{j=2}^{\infty}(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j\le\beta|\gamma|,\qquad(2.1)$$
where $\gamma\in\mathbb{C}\setminus\{0\}$, $\lambda,\alpha\ge 0$, $n\in\mathbb{N}$, $z\in U$.

Proof. Let $f\in\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$. Then we have
$$\mathrm{Re}\left\{\frac{z\left(RD^{n}_{\lambda,\alpha}f(z)\right)'}{RD^{n}_{\lambda,\alpha}f(z)}-1\right\}>-\beta|\gamma|,$$
where
$$\frac{z\left(RD^{n}_{\lambda,\alpha}f(z)\right)'}{RD^{n}_{\lambda,\alpha}f(z)}=\frac{z-\sum_{j=2}^{\infty}j\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j z^j}{z-\sum_{j=2}^{\infty}\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j z^j},$$
so that
$$\mathrm{Re}\left\{\frac{-\sum_{j=2}^{\infty}(j-1)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j z^j}{z-\sum_{j=2}^{\infty}\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j z^j}\right\}>-\beta|\gamma|.$$
By choosing the values of $z$ on the real axis and letting $z\to 1^{-}$ through real values, the above inequality immediately yields the required condition (2.1).
Conversely, by applying the hypothesis (2.1) and letting $|z|=1$, we obtain
$$\left|\frac{z\left(RD^{n}_{\lambda,\alpha}f(z)\right)'}{RD^{n}_{\lambda,\alpha}f(z)}-1\right|=\left|\frac{\sum_{j=2}^{\infty}(j-1)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j z^j}{z-\sum_{j=2}^{\infty}\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j z^j}\right|$$
$$\le\frac{\beta|\gamma|\left(1-\sum_{j=2}^{\infty}\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j\right)}{1-\sum_{j=2}^{\infty}\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}a_j}\le\beta|\gamma|.$$


Hence, by the maximum modulus theorem, we have $f\in\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$, which completes the proof of the theorem. Finally, the result is sharp, with the extremal functions $f_j$ in the class $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ given by
$$f_j(z)=z-\frac{\beta|\gamma|}{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}\,z^j,\quad j\ge 2.\qquad(2.2)$$

Corollary 2.2 Let the function $f\in T$ be in the class $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$. Then we have
$$a_j\le\frac{\beta|\gamma|}{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}},\quad j\ge 2.\qquad(2.3)$$
The equality is attained for the functions $f_j$ given by (2.2).
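The membership criterion of Theorem 2.1 can be illustrated numerically on the extremal function $f_j$ of (2.2). The sketch below (plain Python, not from the paper; parameter values are illustrative) evaluates the defining inequality of Definition 1.4 at sample points of $U$:

```python
import math

# Check that f_j(z) = z - a_j z^j, with a_j the extremal coefficient of (2.2),
# satisfies |(1/gamma)(z (RD f)'/(RD f) - 1)| < beta on sample points of U.
alpha, lam, n, beta, gamma = 0.5, 1.0, 2, 0.75, 1.0 + 0.5j
j = 3
phi = alpha*(1 + (j - 1)*lam)**n \
    + (1 - alpha)*math.factorial(n + j - 1)/(math.factorial(n)*math.factorial(j - 1))
aj = beta*abs(gamma)/((j - 1 + beta*abs(gamma))*phi)   # coefficient of f_j

RD  = lambda z: z - phi*aj*z**j            # RD^n_{lam,alpha} f_j
RDp = lambda z: 1 - j*phi*aj*z**(j - 1)    # its derivative

for r in (0.3, 0.9, 0.999):
    z = r*complex(math.cos(1.0), math.sin(1.0))
    val = abs((z*RDp(z)/RD(z) - 1)/gamma)
    assert val < beta
print("f_j satisfies the defining inequality of the class")
```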

3 Growth and distortion theorems

Theorem 3.1 Let the function $f\in T$ be in the class $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$. Then for $|z|=r$ we have
$$r-\frac{\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,r^{2}\le|f(z)|\le r+\frac{\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,r^{2},$$
with equality for
$$f(z)=z-\frac{\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,z^{2}.$$

Proof. In view of Theorem 2.1, we have
$$(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]\sum_{j=2}^{\infty}a_j\le\sum_{j=2}^{\infty}(j-1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]a_j\le\beta|\gamma|.$$
Hence
$$|f(z)|\le r+\sum_{j=2}^{\infty}a_j r^j\le r+r^{2}\sum_{j=2}^{\infty}a_j\le r+\frac{\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,r^{2}$$
and
$$|f(z)|\ge r-\sum_{j=2}^{\infty}a_j r^j\ge r-r^{2}\sum_{j=2}^{\infty}a_j\ge r-\frac{\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,r^{2}.$$
This completes the proof.

Theorem 3.2 Let the function $f\in T$ be in the class $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$. Then for $|z|=r$ we have
$$1-\frac{2\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,r\le|f'(z)|\le 1+\frac{2\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,r,$$
with equality for
$$f(z)=z-\frac{\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,z^{2}.$$

Proof. We have
$$|f'(z)|\le 1+\sum_{j=2}^{\infty}ja_j r^{j-1}\le 1+2r\sum_{j=2}^{\infty}a_j\le 1+\frac{2\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,r$$
and
$$|f'(z)|\ge 1-\sum_{j=2}^{\infty}ja_j r^{j-1}\ge 1-2r\sum_{j=2}^{\infty}a_j\ge 1-\frac{2\beta|\gamma|}{(1+\beta|\gamma|)\left[\alpha(1+\lambda)^{n}+(1-\alpha)(n+1)\right]}\,r,$$
which completes the proof.
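The growth bounds of Theorem 3.1 can be evaluated directly on the extremal function $f(z)=z-Cz^{2}$; both bounds follow from the triangle inequality, as the sketch below confirms on a grid of sample points (plain Python, illustrative parameter values, not from the paper):

```python
import cmath

# Evaluate the two-sided growth bound of Theorem 3.1 on the extremal function
# f(z) = z - C z^2, where C is the coefficient appearing in the theorem.
alpha, lam, n, beta, gamma_abs = 0.25, 2.0, 1, 0.5, 2.0
C = beta*gamma_abs/((1 + beta*gamma_abs)*(alpha*(1 + lam)**n + (1 - alpha)*(n + 1)))

for r in (0.1, 0.5, 0.9):
    for k in range(12):
        z = r*cmath.exp(2j*cmath.pi*k/12)
        fz = z - C*z**2
        assert r - C*r**2 - 1e-12 <= abs(fz) <= r + C*r**2 + 1e-12
print("growth bounds verified")
```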


4 Extreme points

The extreme points of the class $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ will now be determined.

Theorem 4.1 Let $f_1(z)=z$ and
$$f_j(z)=z-\frac{\beta|\gamma|}{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}\,z^j,\quad j\ge 2.$$
Assume that $f$ is analytic in $U$. Then $f\in\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ if and only if it can be expressed in the form
$$f(z)=\sum_{j=1}^{\infty}\mu_j f_j(z),$$
where $\mu_j\ge 0$ and $\sum_{j=1}^{\infty}\mu_j=1$.

Proof. Suppose that $f(z)=\sum_{j=1}^{\infty}\mu_j f_j(z)$ with $\mu_j\ge 0$ and $\sum_{j=1}^{\infty}\mu_j=1$. Then
$$f(z)=\sum_{j=1}^{\infty}\mu_j f_j(z)=\mu_1 f_1(z)+\sum_{j=2}^{\infty}\mu_j f_j(z)=\mu_1 z+\sum_{j=2}^{\infty}\mu_j\left(z-\frac{\beta|\gamma|}{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}\,z^j\right)$$
$$=z-\sum_{j=2}^{\infty}\mu_j\,\frac{\beta|\gamma|}{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}\,z^j.$$
Then
$$\sum_{j=2}^{\infty}\mu_j\,\frac{\beta|\gamma|}{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}\cdot\frac{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}{\beta|\gamma|}=\sum_{j=2}^{\infty}\mu_j=\sum_{j=1}^{\infty}\mu_j-\mu_1=1-\mu_1\le 1.$$
Thus $f\in\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ by Theorem 2.1.
Conversely, suppose that $f\in\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$. By using (2.3) we may set
$$\mu_j=\frac{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}{\beta|\gamma|}\,a_j$$
for $j\ge 2$ and $\mu_1=1-\sum_{j=2}^{\infty}\mu_j$. Then
$$f(z)=z-\sum_{j=2}^{\infty}a_j z^j=z-\sum_{j=2}^{\infty}\mu_j\,\frac{\beta|\gamma|}{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}\,z^j=\mu_1 f_1(z)+\sum_{j=2}^{\infty}\mu_j f_j(z)=\sum_{j=1}^{\infty}\mu_j f_j(z),$$
with $\mu_j\ge 0$ and $\sum_{j=1}^{\infty}\mu_j=1$, which completes the proof.

Corollary 4.2 The extreme points of $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ are the functions $f_1(z)=z$ and
$$f_j(z)=z-\frac{\beta|\gamma|}{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}\,z^j,\quad j\ge 2.$$


5 Partial sums

In this section we investigate the partial sums of functions in the class $\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ and obtain sharp lower bounds for the real parts of their ratios. We follow similar work done by Silverman [15] and Khairnar and More [12] on the partial sums of analytic functions. In what follows, we use the well-known result that, for an analytic function $\omega$ in $U$,
$$\mathrm{Re}\left(\frac{1+\omega(z)}{1-\omega(z)}\right)>0,\quad z\in U,$$
if and only if the inequality $|\omega(z)|<1$ is satisfied.

Theorem 5.1 Let $f(z)=z-\sum_{j=2}^{\infty}a_j z^j\in\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ and define its partial sums by $f_1(z)=z$ and $f_m(z)=z-\sum_{j=2}^{m}a_j z^j$, $m\ge 2$. Then
$$\mathrm{Re}\left(\frac{f(z)}{f_m(z)}\right)\ge 1-\frac{1}{c_{m+1}},\quad z\in U,\ m\in\mathbb{N},\qquad(5.1)$$
and
$$\mathrm{Re}\left(\frac{f_m(z)}{f(z)}\right)\ge\frac{c_{m+1}}{1+c_{m+1}},\quad z\in U,\ m\in\mathbb{N},\qquad(5.2)$$
where
$$c_j=\frac{(j-1+\beta|\gamma|)\left\{\alpha[1+(j-1)\lambda]^{n}+(1-\alpha)\frac{(n+j-1)!}{n!(j-1)!}\right\}}{\beta|\gamma|}.\qquad(5.3)$$
The estimates in (5.1) and (5.2) are sharp.

Proof. To prove (5.1), it suffices to show that
$$c_{m+1}\left(\frac{f(z)}{f_m(z)}-\left(1-\frac{1}{c_{m+1}}\right)\right)\prec\frac{1+z}{1-z},\quad z\in U.$$
By the subordination property, we can write
$$\frac{1-\sum_{j=2}^{m}a_j z^{j-1}-c_{m+1}\sum_{j=m+1}^{\infty}a_j z^{j-1}}{1-\sum_{j=2}^{m}a_j z^{j-1}}=\frac{1+\omega(z)}{1-\omega(z)}$$
for a certain $\omega$ analytic in $U$ with $|\omega(z)|\le 1$. Notice that $\omega(0)=0$ and
$$|\omega(z)|\le\frac{c_{m+1}\sum_{j=m+1}^{\infty}a_j}{2-2\sum_{j=2}^{m}a_j-c_{m+1}\sum_{j=m+1}^{\infty}a_j}.$$
Now $|\omega(z)|\le 1$ if and only if
$$\sum_{j=2}^{m}a_j+c_{m+1}\sum_{j=m+1}^{\infty}a_j\le\sum_{j=2}^{\infty}c_j a_j\le 1.$$
The above inequality holds because $(c_j)$ is a non-decreasing sequence. This completes the proof of (5.1). Finally, it is observed that equality in (5.1) is attained for the function given by (2.3) when $z=re^{2\pi i/m}$ as $r\to 1^{-}$. Similarly, we take
$$(1+c_{m+1})\left(\frac{f_m(z)}{f(z)}-\frac{c_{m+1}}{1+c_{m+1}}\right)=\frac{1-\sum_{j=2}^{m}a_j z^{j-1}+c_{m+1}\sum_{j=m+1}^{\infty}a_j z^{j-1}}{1-\sum_{j=2}^{m}a_j z^{j-1}}=\frac{1+\omega(z)}{1-\omega(z)},$$
where
$$|\omega(z)|\le\frac{(1+c_{m+1})\sum_{j=m+1}^{\infty}a_j}{2-2\sum_{j=2}^{m}a_j+(1-c_{m+1})\sum_{j=m+1}^{\infty}a_j}.$$
Now $|\omega(z)|\le 1$ if and only if
$$\sum_{j=2}^{m}a_j+c_{m+1}\sum_{j=m+1}^{\infty}a_j\le\sum_{j=2}^{\infty}c_j a_j\le 1.$$
This immediately leads to assertion (5.2) of Theorem 5.1, which completes the proof. Using a similar method, we can prove the following theorem.
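The lower bound (5.1) can be probed numerically on the natural extremal candidate $f(z)=z-z^{m+1}/c_{m+1}$ (whose $m$-th partial sum is just $z$). The sketch below (plain Python, illustrative parameter values, not from the paper) computes $c_{m+1}$ with the coefficient normalization of Theorem 2.1 and checks the bound near the boundary of $U$:

```python
import cmath, math

# For f(z) = z - z^{m+1}/c_{m+1} we have f_m(z) = z, so the claim reduces to
# Re(1 - z^m / c_{m+1}) >= 1 - 1/c_{m+1} on U, checked here near |z| = 1.
alpha, lam, n, beta, gamma_abs, m = 0.5, 1.0, 1, 0.5, 1.0, 2
j = m + 1
phi = alpha*(1 + (j - 1)*lam)**n \
    + (1 - alpha)*math.factorial(n + j - 1)/(math.factorial(n)*math.factorial(j - 1))
c = (j - 1 + beta*gamma_abs)*phi/(beta*gamma_abs)    # c_{m+1}

f  = lambda z: z - z**(m + 1)/c
fm = lambda z: z
worst = min((f(z)/fm(z)).real
            for z in (0.999*cmath.exp(2j*math.pi*k/64) for k in range(64)))
assert worst >= 1 - 1/c
print("partial-sum bound holds on the sampled circle")
```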


Theorem 5.2 If $f\in\mathcal{TS}^{n}_{\lambda,\alpha}(\beta,\gamma)$ and the partial sums are defined by $f_1(z)=z$ and $f_m(z)=z-\sum_{j=2}^{m}a_j z^j$, then
$$\mathrm{Re}\left(\frac{f'(z)}{f_m'(z)}\right)\ge 1-\frac{m+1}{c_{m+1}},\quad z\in U,\ m\in\mathbb{N},$$
and
$$\mathrm{Re}\left(\frac{f_m'(z)}{f'(z)}\right)\ge\frac{c_{m+1}}{m+1+c_{m+1}},$$
where $c_j$ is given by (5.3). The result is sharp for every $m$, with extremal functions given by (2.2).

References

[1] A. Abubaker, M. Darus, On a certain subclass of analytic functions involving differential operators, TJMM 3 (2011), No. 1, 1-8.
[2] A. Alb Lupas, On special differential subordinations using Salagean and Ruscheweyh operators, Mathematical Inequalities and Applications, Volume 12, Issue 4, 2009, 781-790.
[3] A. Alb Lupas, On a certain subclass of analytic functions defined by Salagean and Ruscheweyh operators, Journal of Mathematics and Applications, No. 31, 2009, 67-76.
[4] A. Alb Lupas, On special differential subordinations using a generalized Salagean operator and Ruscheweyh derivative, Journal of Computational Analysis and Applications, Vol. 13, No. 1, 2011, 98-107.
[5] A. Alb Lupas, On a certain subclass of analytic functions defined by a generalized Salagean operator and Ruscheweyh derivative, Carpathian Journal of Mathematics, 28 (2012), No. 2, 183-190.
[6] A. Alb Lupas, D. Breaz, On special differential superordinations using Salagean and Ruscheweyh operators, Geometric Function Theory and Applications' 2010 (Proc. of International Symposium, Sofia, 27-31 August 2010), 98-103.
[7] A. Alb Lupas, On special differential superordinations using a generalized Salagean operator and Ruscheweyh derivative, Computers and Mathematics with Applications 61 (2011), 1048-1058, doi:10.1016/j.camwa.2010.12.055.
[8] A. Alb Lupas, Certain special differential superordinations using a generalized Salagean operator and Ruscheweyh derivative, Analele Universitatii Oradea, Fasc. Matematica, Tom XVIII, 2011, 167-178.
[9] F.M. Al-Oboudi, On univalent functions defined by a generalized Salagean operator, Int. J. Math. Math. Sci., 27 (2004), 1429-1436.
[10] L. Andrei, Differential subordinations using Ruscheweyh derivative and generalized Salagean operator, Advances in Difference Equations, 2013, 2013:252, DOI: 10.1186/1687-1847-2013-252.
[11] L. Andrei, V. Ionescu, Some differential superordinations using Ruscheweyh derivative and generalized Salagean operator, Journal of Computational Analysis and Applications, Vol. 17, No. 3, 2014, 437-444.
[12] S.M. Khairnar, M. More, On a certain subclass of analytic functions involving the Al-Oboudi differential operator, J. Ineq. Pure Appl. Math., 10 (2009), 1-22.
[13] St. Ruscheweyh, New criteria for univalent functions, Proc. Amer. Math. Soc., 49 (1975), 109-115.
[14] G. St. Salagean, Subclasses of univalent functions, Lecture Notes in Math., Springer Verlag, Berlin, 1013 (1983), 362-372.
[15] H. Silverman, Partial sums of starlike and convex functions, J. Math. Anal. Appl., 209 (1997), 221-227.


ON THE (r,s)-CONVEXITY AND SOME HADAMARD-TYPE INEQUALITIES

M. EMIN ÖZDEMIR, ERHAN SET, AND AHMET OCAK AKDEMIR

Abstract. In this paper, we define a new class of convex functions which is called (r,s)-convex functions. We also prove some Hadamard-type inequalities related to this new class of functions.

1. INTRODUCTION

Let $f:I\subseteq\mathbb{R}\to\mathbb{R}$ be a convex function defined on the interval $I$ of real numbers and $a,b\in I$ with $a<b$. Then the following double inequality, well known in the literature as the Hermite-Hadamard inequality, holds for convex functions:
$$f\left(\frac{a+b}{2}\right)\le\frac{1}{b-a}\int_a^b f(x)\,dx\le\frac{f(a)+f(b)}{2}.$$

In [1], Pearce et al. generalized this inequality to $r$-convex positive functions $f$ defined on an interval $[a,b]$: for all $x,y\in[a,b]$ and $\lambda\in[0,1]$,
$$f(\lambda x+(1-\lambda)y)\le\begin{cases}\left(\lambda[f(x)]^{r}+(1-\lambda)[f(y)]^{r}\right)^{1/r},&\text{if }r\ne 0,\\ [f(x)]^{\lambda}[f(y)]^{1-\lambda},&\text{if }r=0.\end{cases}$$

Obviously, one can see that $0$-convex functions are simply log-convex functions and $1$-convex functions are ordinary convex functions. Another inequality well known in the literature, the Minkowski inequality, is given as follows:

Let p 1; 0 <bZa

f(x)pdx <1; and 0 <bZa

g(x)pdx <1: Then

(1.1)

0@ bZa

(f(x) + g(x))pdx

1A1p

0@ bZa

f(x)pdx

1A1p

+

0@ bZa

g(x)pdx

1A1p

:
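Inequality (1.1) can be spot-checked numerically. The sketch below uses the illustrative choices f(x) = x, g(x) = sin(x) + 1 on [0, 2] with p = 3 (none of these come from the paper).

```python
import math

def lp_norm(h, a, b, p, n=20000):
    # midpoint-rule approximation of (integral of h^p)^(1/p)
    step = (b - a) / n
    s = sum(h(a + (i + 0.5) * step) ** p for i in range(n)) * step
    return s ** (1.0 / p)

f = lambda x: x
g = lambda x: math.sin(x) + 1.0
a, b, p = 0.0, 2.0, 3
lhs = lp_norm(lambda x: f(x) + g(x), a, b, p)
rhs = lp_norm(f, a, b, p) + lp_norm(g, a, b, p)
assert lhs <= rhs + 1e-9   # Minkowski: triangle inequality for the L^p norm
print(lhs, rhs)
```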

Definition 1 ([4]). A function f : I → [0,∞) is said to be log-convex or multiplicatively convex if log f is convex, or, equivalently, if for all x, y ∈ I and t ∈ [0,1] one has the inequality

(1.2)
\[ f(tx+(1-t)y) \le [f(x)]^t [f(y)]^{1-t}. \]

We note that a log-convex function is convex, but the converse may not necessarily be true.

2000 Mathematics Subject Classification. Primary 26D15, 26A07.
Key words and phrases. Hadamard's inequality, r-convex, s-convex.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 95-100, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


In [6], Gill et al. proved the following inequality for r-convex functions.

Theorem 1. Suppose f is a positive r-convex function on [a,b]. Then

(1.3)
\[ \frac{1}{b-a}\int_a^b f(t)\,dt \le L_r(f(a), f(b)). \]

If f is a positive r-concave function, then the inequality is reversed.

For recent results on r-convexity see the papers [2], [5] and [6].

Definition 2 ([3]). Let s ∈ (0,1]. A function f : [0,∞) → [0,∞) is said to be s-convex in the second sense if

(1.4)
\[ f(tx+(1-t)y) \le t^s f(x) + (1-t)^s f(y) \]

for all x, y ∈ [0,∞) and t ∈ [0,1].

In [10], s-convexity was introduced by Breckner as a generalization of convex functions. Breckner also proved in [11] that a set-valued map is s-convex only if the associated support function is an s-convex function. Several properties of s-convexity in the first sense are discussed in the paper [3]. Obviously, s-convexity means just convexity when s = 1.

Theorem 2 ([8]). Suppose that f : [0,∞) → [0,∞) is an s-convex function in the second sense, where s ∈ (0,1], and let a, b ∈ [0,∞), a < b. If f ∈ L^1[0,1], then the following inequalities hold:

(1.5)
\[ 2^{s-1} f\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{f(a)+f(b)}{s+1}. \]

The constant k = 1/(s+1) is the best possible in the second inequality in (1.5). The above inequalities are sharp.

Some new Hermite-Hadamard type inequalities based on concave functions and s-convexity were established by Kırmacı et al. in [9]. For related results see the papers [7], [8] and [9]. The main purpose of this paper is to give the definition of (r,s)-convexity and to prove some Hadamard-type inequalities for (r,s)-convex functions.
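The chain (1.5) can be tried out numerically. The function f(x) = x^s is s-convex in the second sense for s ∈ (0,1] (since (u+v)^s ≤ u^s + v^s), so the sketch below checks (1.5) for this function with the illustrative choices s = 1/2 and [a, b] = [1, 4].

```python
s = 0.5
f = lambda x: x ** s          # s-convex in the second sense for s in (0,1]
a, b = 1.0, 4.0
n = 100000
h = (b - a) / n
mean = sum(f(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)  # = 14/9
left = 2 ** (s - 1) * f((a + b) / 2)
right = (f(a) + f(b)) / (s + 1)
assert left <= mean <= right  # the chain (1.5)
print(left, mean, right)
```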

2. MAIN RESULTS

Definition 3. A positive function f is (r,s)-convex on [a,b] ⊆ [0,∞) if for all x, y ∈ [a,b], s ∈ (0,1] and λ ∈ [0,1],
\[ f(\lambda x + (1-\lambda)y) \le \begin{cases} \left(\lambda^s f^r(x) + (1-\lambda)^s f^r(y)\right)^{1/r}, & \text{if } r \neq 0, \\ f^{\lambda}(x) f^{1-\lambda}(y), & \text{if } r = 0. \end{cases} \]

This definition of (r,s)-convexity naturally complements the concept of (r,s)-concave functions, in which the inequality is reversed.

Remark 1. We have that (0,s)-convex functions are simply log-convex functions and (1,1)-convex functions are ordinary convex functions.

Remark 2. We have that (r,1)-convex functions are r-convex functions.

Remark 3. We have that (1,s)-convex functions are s-convex functions.

Now we will prove inequalities based on the above definition.


Theorem 3. Let f : [a,b] ⊆ [0,∞) → (0,∞) be an (r,s)-convex function on [a,b] with a < b. Then the following inequality holds:

(2.1)
\[ \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{r}{r+s}\left[f^r(a) + f^r(b)\right]^{1/r} \]

for r, s ∈ (0,1].

Proof. Since f is an (r,s)-convex function and r > 0, we can write
\[ f(ta+(1-t)b) \le \left(t^s[f(a)]^r + (1-t)^s[f(b)]^r\right)^{1/r} \]
for all t ∈ [0,1] and s ∈ (0,1]. It is easy to observe that
\[ \frac{1}{b-a}\int_a^b f(x)\,dx = \int_0^1 f(ta+(1-t)b)\,dt \le \int_0^1 \left(t^s[f(a)]^r + (1-t)^s[f(b)]^r\right)^{1/r}\,dt. \]
Using the inequality (1.1), we get
\[ \int_0^1 \left(t^s[f(a)]^r + (1-t)^s[f(b)]^r\right)^{1/r}\,dt \le \left[\left(\int_0^1 t^{s/r} f(a)\,dt\right)^r + \left(\int_0^1 (1-t)^{s/r} f(b)\,dt\right)^r\right]^{1/r} \]
\[ = \left[\left(\frac{r}{r+s}\right)^r\left(f^r(a) + f^r(b)\right)\right]^{1/r} = \frac{r}{r+s}\left[f^r(a) + f^r(b)\right]^{1/r}. \]
Thus
\[ \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{r}{r+s}\left[f^r(a) + f^r(b)\right]^{1/r}, \]
which completes the proof.

Corollary 1. In Theorem 3, if f is a (1,s)-convex function on [a,b] with a < b, then we have the following inequality:
\[ \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{1}{s+1}\left[f(a) + f(b)\right]. \]

Corollary 2. In Theorem 3, if f is an (r,1)-convex function on [a,b] with a < b, then we have the following inequality:
\[ \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{r}{r+1}\left[f^r(a) + f^r(b)\right]^{1/r}. \]

Remark 4. In Theorem 3, if f is a (1,1)-convex function on [a,b] ⊆ [0,∞) with a < b, then we recover the right-hand side of Hadamard's inequality.
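For a concrete check of (2.1): if f^r is s-convex in the second sense then f is (r,s)-convex, so f(x) = x^{1/r} is a convenient test function (here f^r(x) = x, which is s-convex for s ∈ (0,1]). The parameters below are our illustrative choices, not from the paper.

```python
r, s = 0.5, 0.5
f = lambda x: x ** (1.0 / r)   # f(x) = x**2; f**r = x is s-convex
a, b = 1.0, 2.0
n = 100000
h = (b - a) / n
mean = sum(f(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)  # = 7/3
bound = (r / (r + s)) * (f(a) ** r + f(b) ** r) ** (1.0 / r)      # = 4.5
assert mean <= bound   # inequality (2.1)
print(mean, bound)
```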


Theorem 4. Let f, g : [a,b] ⊆ [0,∞) → (0,∞) be (r₁,s)-convex and (r₂,s)-convex functions on [a,b] with a < b. Then the following inequality holds:

(2.2)
\[ \frac{1}{b-a}\int_a^b f(x)g(x)\,dx \le \frac{1}{s+1}\left([f(a)]^{r_1} + [f(b)]^{r_1}\right)^{1/r_1}\left([g(a)]^{r_2} + [g(b)]^{r_2}\right)^{1/r_2} \]

for r₁ > 1 and 1/r₁ + 1/r₂ = 1.

Proof. Since f is an (r₁,s)-convex function and g is an (r₂,s)-convex function, we have
\[ f(ta+(1-t)b) \le \left(t^s[f(a)]^{r_1} + (1-t)^s[f(b)]^{r_1}\right)^{1/r_1} \]
and
\[ g(ta+(1-t)b) \le \left(t^s[g(a)]^{r_2} + (1-t)^s[g(b)]^{r_2}\right)^{1/r_2} \]
for all t ∈ [0,1] and s ∈ (0,1]. Since f and g are non-negative functions,
\[ f(ta+(1-t)b)\,g(ta+(1-t)b) \le \left(t^s[f(a)]^{r_1} + (1-t)^s[f(b)]^{r_1}\right)^{1/r_1}\left(t^s[g(a)]^{r_2} + (1-t)^s[g(b)]^{r_2}\right)^{1/r_2}. \]
Integrating both sides of the above inequality over [0,1] with respect to t, we obtain
\[ \int_0^1 f(ta+(1-t)b)\,g(ta+(1-t)b)\,dt \le \int_0^1 \left(t^s[f(a)]^{r_1} + (1-t)^s[f(b)]^{r_1}\right)^{1/r_1}\left(t^s[g(a)]^{r_2} + (1-t)^s[g(b)]^{r_2}\right)^{1/r_2}\,dt. \]
By applying Hölder's inequality, we have
\[ \int_0^1 \left(t^s[f(a)]^{r_1} + (1-t)^s[f(b)]^{r_1}\right)^{1/r_1}\left(t^s[g(a)]^{r_2} + (1-t)^s[g(b)]^{r_2}\right)^{1/r_2}\,dt \]
\[ \le \left[\int_0^1 \left(t^s[f(a)]^{r_1} + (1-t)^s[f(b)]^{r_1}\right)dt\right]^{1/r_1}\left[\int_0^1 \left(t^s[g(a)]^{r_2} + (1-t)^s[g(b)]^{r_2}\right)dt\right]^{1/r_2} \]
\[ = \frac{1}{s+1}\left([f(a)]^{r_1} + [f(b)]^{r_1}\right)^{1/r_1}\left([g(a)]^{r_2} + [g(b)]^{r_2}\right)^{1/r_2}. \]
Using the fact that
\[ \frac{1}{b-a}\int_a^b f(x)g(x)\,dx = \int_0^1 f(ta+(1-t)b)\,g(ta+(1-t)b)\,dt, \]
we obtain the desired result.

Corollary 3. In Theorem 4, if we choose s = 1, r₁ = r₂ = 2 and f(x) = g(x), we have the following inequality:
\[ \frac{1}{b-a}\int_a^b f(x)^2\,dx \le \frac{[f(a)]^2 + [f(b)]^2}{2}. \]


Corollary 4. In Theorem 4, if we choose s = 1 and r₁ = r₂ = 2, we have the following inequality:
\[ \frac{1}{b-a}\int_a^b f(x)g(x)\,dx \le \sqrt{\frac{[f(a)]^2 + [f(b)]^2}{2}}\,\sqrt{\frac{[g(a)]^2 + [g(b)]^2}{2}}. \]

Theorem 5. Let f : [a,b] ⊆ [0,∞) → (0,∞) be an (r,s)-convex function on [a,b] with r, s ∈ (0,1]. If f ∈ L^1[a,b], then one has the following inequalities:

(2.3)
\[ 2^{s-1} f^r\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f^r(x)\,dx \le \frac{f^r(a) + f^r(b)}{s+1}. \]

Proof. By the (r,s)-convexity of f, we have
\[ f\left(\frac{x+y}{2}\right) \le \frac{1}{2^{s/r}}\left[f^r(x) + f^r(y)\right]^{1/r}, \quad \text{equivalently} \quad f^r\left(\frac{x+y}{2}\right) \le \frac{1}{2^s}\left[f^r(x) + f^r(y)\right] \]
for all x, y ∈ [a,b]. If we take x = ta+(1-t)b and y = (1-t)a+tb, we obtain
\[ f^r\left(\frac{a+b}{2}\right) \le \frac{1}{2^s}\left[f^r(ta+(1-t)b) + f^r((1-t)a+tb)\right] \]
for all t ∈ [0,1]. Integrating the above inequality over [0,1] with respect to t, we get

(2.4)
\[ f^r\left(\frac{a+b}{2}\right) \le \frac{1}{2^s}\left[\int_0^1 f^r(ta+(1-t)b)\,dt + \int_0^1 f^r((1-t)a+tb)\,dt\right]. \]

From the facts that
\[ \int_0^1 f^r(ta+(1-t)b)\,dt = \frac{1}{b-a}\int_a^b f^r(x)\,dx \quad\text{and}\quad \int_0^1 f^r((1-t)a+tb)\,dt = \frac{1}{b-a}\int_a^b f^r(x)\,dx, \]
we obtain the left-hand side of (2.3). By the (r,s)-convexity of f, we also have

(2.5)
\[ f^r(ta+(1-t)b) + f^r(tb+(1-t)a) \le t^s f^r(a) + (1-t)^s f^r(b) + t^s f^r(b) + (1-t)^s f^r(a) \]

for all t ∈ [0,1]. Integrating the inequality (2.5) over [0,1] with respect to t, we get
\[ \frac{1}{b-a}\int_a^b f^r(x)\,dx \le \frac{f^r(a) + f^r(b)}{s+1}, \]
which completes the proof.

Remark 5. In Theorem 5, if we choose r = 1, we obtain the inequality (1.5).

Remark 6. In Theorem 5, if we choose s = r = 1, we obtain Hadamard's inequality.
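The full chain (2.3) can also be verified numerically with the same test family as before, f(x) = x^{1/r} (so f^r(x) = x); r = s = 1/2 on [1, 2] are our illustrative choices.

```python
r, s = 0.5, 0.5
fr = lambda x: (x ** (1.0 / r)) ** r   # f**r, equal to x here
a, b = 1.0, 2.0
n = 100000
h = (b - a) / n
mean = sum(fr(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)  # = 3/2
left = 2 ** (s - 1) * fr((a + b) / 2)
right = (fr(a) + fr(b)) / (s + 1)
assert left <= mean <= right   # the chain (2.3)
print(left, mean, right)
```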


References

[1] C.E.M. Pearce, J. Pečarić, V. Šimić, Stolarsky means and Hadamard's inequality, J. Math. Anal. Appl., 220 (1998), 99-109.

[2] G.S. Yang, D.Y. Hwang, Refinements of Hadamard's inequality for r-convex functions, Indian J. Pure Appl. Math., 32 (10) (2001), 1571-1579.

[3] H. Hudzik, L. Maligranda, Some remarks on s-convex functions, Aequationes Math., 48 (1994), 100-111.

[4] J. Pečarić, F. Proschan and Y.L. Tong, Convex Functions, Partial Orderings and Statistical Applications, Academic Press, Inc., 1992.

[5] N.P.G. Ngoc, N.V. Vinh and P.T.T. Hien, Integral inequalities of Hadamard-type for r-convex functions, International Mathematical Forum, 4 (2009), 1723-1728.

[6] P.M. Gill, C.E.M. Pearce and J. Pečarić, Hadamard's inequality for r-convex functions, J. Math. Anal. Appl., 215 (1997), 461-470.

[7] S. Hussain, M.I. Bhatti and M. Iqbal, Hadamard-type inequalities for s-convex functions, Punjab University Journal of Mathematics, 41 (2009), 51-60.

[8] S.S. Dragomir and S. Fitzpatrick, The Hadamard's inequality for s-convex functions in the second sense, Demonstratio Math., 32 (4) (1999), 687-696.

[9] U.S. Kırmacı, M.K. Bakula, M.E. Özdemir and J. Pečarić, Hadamard-type inequalities for s-convex functions, Applied Mathematics and Computation, 193 (2007), 26-35.

[10] W.W. Breckner, Stetigkeitsaussagen für eine Klasse verallgemeinerter konvexer Funktionen in topologischen linearen Räumen, Publ. Inst. Math., 23 (1978), 13-20.

[11] W.W. Breckner, Continuity of generalized convex and generalized concave set-valued functions, Rev. Anal. Numér. Théor. Approx., 22 (1993), 39-51.

Atatürk University, K.K. Education Faculty, Department of Mathematics, 25640, Kampus, Erzurum, Turkey
E-mail address: [email protected]

Department of Mathematics, Faculty of Science and Arts, Ordu University, Ordu, Turkey
E-mail address: [email protected]

Ağrı İbrahim Çeçen University, Faculty of Science and Arts, Department of Mathematics, 04100, Ağrı, Turkey
E-mail address: [email protected]


COMPACT COMPOSITION OPERATORS ON WEIGHTED HILBERT SPACES

WALEED AL-RAWASHDEH

Abstract. Let ϕ be an analytic self-map of the open unit disk D. A composition operator is defined by (Cϕf)(z) = f(ϕ(z)), for z ∈ D and f analytic on D. Given an admissible weight ω, the weighted Hilbert space Hω consists of all analytic functions f such that ‖f‖²_{Hω} = |f(0)|² + ∫_D |f′(z)|² ω(z) dA(z) is finite. In this paper, we study composition operators acting between the weighted Bergman space A²_α and the weighted Hilbert space Hω. Using generalized Nevanlinna counting functions associated with ω, we characterize the boundedness and compactness of these composition operators.

1. Introduction

Let D be the unit disk {z ∈ C : |z| < 1} in the complex plane. Suppose ϕ is an analytic function that maps D into itself and ψ is an analytic function on D. The weighted composition operator W_{ψ,ϕ} is defined on the space H(D) of all analytic functions on D by
\[ (W_{ψ,ϕ}f)(z) = ψ(z)\,C_ϕ f(z) = ψ(z)\,f(ϕ(z)) \]
for all f ∈ H(D) and z ∈ D. The composition operator Cϕ is a weighted composition operator with the weight function ψ identically equal to 1. It is well known that Cϕf = f ∘ ϕ defines a linear operator Cϕ which acts boundedly on various spaces of analytic or harmonic functions on D. These operators have been studied on many spaces of analytic functions. During the past few decades much effort has been devoted to the study of these operators, with the goal of explaining the operator-theoretic properties of W_{ψ,ϕ} in terms of the function-theoretic properties of the induced maps ϕ and ψ. We refer to the monographs by Cowen and MacCluer [1], Duren and Schuster [2], Hedenmalm, Korenblum, and Zhu [3], Shapiro [6], and Zhu ([9], [10]) for an overview of the field as of the early 1990s.

For α > −1, we define the measure dA_α on D by
\[ dA_α(z) = (-\log|z|)^α\, dA(z), \]
where dA is the normalized Lebesgue measure on the unit disk D. The weighted Bergman space A²_α consists of all functions f analytic on D that

2010 Mathematics Subject Classification. Primary 47B33, 47B38, 30H10, 30H20; Secondary 47B37, 30H05, 32C15.
Key words and phrases. Composition operators, weighted Bergman spaces, Hardy spaces, weighted Hilbert spaces, compact operators, generalized Nevanlinna counting function.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 101-108, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


satisfy
\[ \|f\|_α^2 = \int_D |f(z)|^2\, dA_α(z) < \infty. \]
In this definition, the measure dA_α can be replaced by the measure (1−|z|)^α dA(z), and this results in an equivalent norm; see [5] for example. Shapiro [8] showed that f ∈ A²_α if and only if f′ ∈ A²_{α+2}. In particular, he proved that
\[ \|f\|_α^2 = |f(0)|^2 + \|f'\|_{α+2}^2; \]
this formula is just the Bergman space version of the Littlewood-Paley identity.

The Hardy space H²(D) consists of functions f analytic on D that satisfy
\[ \|f\|^2 = \sup_{0<r<1} \int_{∂D} |f(rζ)|^2\, dσ(ζ) < \infty, \]
where σ is the normalized Lebesgue measure on the boundary of the unit disk. In view of the Littlewood-Paley identity, it is natural to define the Hardy space as A²_{−1}.

Given a positive integrable function ω ∈ C²[0,1), we extend ω to D by setting ω(z) = ω(|z|) for each z ∈ D. Let ω be a weight function such that ω(z)dA(z) defines a finite measure on D; that is, ω ∈ L¹(D, dA). For such a weight ω, the weighted Hilbert space Hω consists of all analytic functions f on D such that
\[ \|f\|_{H_ω}^2 = |f(0)|^2 + \|f'\|_ω^2, \quad\text{where}\quad \|f'\|_ω^2 = \int_D |f'(z)|^2\, ω(z)\, dA(z) < \infty. \]
For example, consider the weighted Hilbert space H_α associated with the weight ω_α(r) = (1−r²)^α, where α > −1. The weighted Hilbert space H₁ is the Hardy space H². The Dirichlet space D_α is H_α for 0 ≤ α < 1. The weighted Bergman space A²_α, where α > −1, is the weighted Hilbert space H_{α+2}.

Recently, Kellay and Lefèvre [4] characterized the compactness of composition operators acting on the weighted Hilbert space Hω with a weight function ω that satisfies the following conditions:

(1) ω is non-increasing.
(2) ω(r)(1−r)^{−(1+δ)} is non-decreasing for some δ > 0.
(3) lim_{r→1⁻} ω(r) = 0.
(4) One of the two properties of convexity is fulfilled: either ω is convex and lim_{r→1} ω′(r) = 0, or ω is concave.

In this paper, we consider admissible weights ω that satisfy only two conditions; namely, we say ω is an admissible weight if it is convex and satisfies


lim_{r→1⁻} ω(r) = lim_{r→1} ω′(r) = 0. We characterize the boundedness and compactness of composition operators acting between the weighted Bergman space A²_α and the weighted Hilbert space Hω. The results we obtain about composition operators will be given in terms of generalized Nevanlinna counting functions, which we define next.

The Nevanlinna counting functions play an essential role in the study of composition operators on weighted Hilbert spaces. The classical Nevanlinna counting function of the inducing map ϕ is defined by
\[ N_ϕ(z) = \sum_{a \in ϕ^{-1}\{z\}} \log\left(\frac{1}{|a|}\right), \quad z \in D \setminus \{ϕ(0)\}, \]
where, as usual, each a ∈ ϕ^{-1}{z} is repeated according to the multiplicity of the zero of ϕ − z at a. The Nevanlinna counting function N_ϕ was first used by Shapiro in [8] to study composition operators on the Hardy space H². In the same paper Shapiro also defined the generalized counting functions N_{ϕ,γ} for γ > 0 by
\[ N_{ϕ,γ}(z) = \sum_{a \in ϕ^{-1}\{z\}} \left(\log\left(\frac{1}{|a|}\right)\right)^γ, \quad z \in D \setminus \{ϕ(0)\}, \]
and used them to study composition operators from a weighted Bergman space to itself. For more information about the Nevanlinna counting functions see [8], [6], and [1], for example. Kellay and Lefèvre [4] defined the generalized Nevanlinna counting function associated to an admissible weight ω by
\[ N_{ϕ,ω}(z) = \sum_{a \in ϕ^{-1}\{z\}} ω(a), \quad z \in D \setminus \{ϕ(0)\}, \]
and used them to study composition operators from a weighted Hilbert space to itself. Note that if ω(r) = (log(1/r))^γ, then N_{ϕ,ω} is the generalized Nevanlinna counting function N_{ϕ,γ}.
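As a concrete illustration (our example, not from the paper): for ϕ(z) = z^k, every z ≠ 0 has exactly k preimages, all of modulus |z|^{1/k}, so the generalized counting function collapses to N_{ϕ,γ}(z) = k^{1−γ}(log(1/|z|))^γ. The snippet below checks this closed form against a direct sum over the k-th roots.

```python
import cmath, math

def counting_function(z, k, gamma):
    # N_{phi,gamma}(z) for phi(w) = w**k: sum over the k roots of w**k = z
    r, theta = abs(z), cmath.phase(z)
    roots = [cmath.rect(r ** (1.0 / k), (theta + 2 * math.pi * j) / k)
             for j in range(k)]
    return sum(math.log(1.0 / abs(a)) ** gamma for a in roots)

z, k, gamma = 0.3 + 0.4j, 5, 2.0
direct = counting_function(z, k, gamma)
closed = k ** (1 - gamma) * math.log(1.0 / abs(z)) ** gamma
assert abs(direct - closed) < 1e-9
print(direct, closed)
```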

The next lemma explains how the generalized Nevanlinna counting function N_{ϕ,ω} arises naturally in the study of composition operators on weighted Hilbert spaces; its proof is a modification of that of (Theorem 2.32, [1]). This lemma is a generalization of the change of variables formula of [8]. Shapiro's formula is a special case of Stanton's formula for integral means of analytic functions [7], which extends the Littlewood-Paley identity. The utility of the change of variables formula in the study of composition operators on Hardy and Bergman spaces comes from Stanton's formula.

Lemma 1.1. Let g and ω be positive measurable functions, and let ϕ be a nonconstant analytic self-map of D. Then
\[ \int_D g(ϕ(z))\,|ϕ'(z)|^2\, ω(z)\, dA(z) = \int_{ϕ(D)} g(λ)\, N_{ϕ,ω}(λ)\, dA(λ). \]


Proof. Suppose ϕ is a nonconstant analytic self-map of D. Since ϕ′ is not identically zero, ϕ is locally univalent on D except at the points at which the derivative equals zero. Since ϕ′ has at most countably many zeros, we can find a countable collection of disjoint open sets R_n such that the area of D \ ∪R_n is zero and ϕ is univalent on each R_n.

Let ψ_n be the inverse of ϕ : R_n → ϕ(R_n). Then the usual change of variables applied on R_n, with z = ψ_n(λ) and dA(λ) = |ϕ′(z)|² dA(z), gives
\[ \int_{R_n} g(ϕ(z))\,|ϕ'(z)|^2\,ω(z)\,dA(z) = \int_{ϕ(R_n)} g(λ)\,ω(ψ_n(λ))\,dA(λ). \]
Let χ_n denote the characteristic function of the set ϕ(R_n). Since the set of zeros of ϕ′ is countable, we have
\[ \int_D g(ϕ(z))\,|ϕ'(z)|^2\,ω(z)\,dA(z) = \sum_{n=1}^{\infty} \int_{ϕ(R_n)} g(λ)\,ω(ψ_n(λ))\,dA(λ) = \int_{ϕ(D)} g(λ)\left(\sum_{n=1}^{\infty} χ_n(λ)\,ω(ψ_n(λ))\right)dA(λ) = \int_{ϕ(D)} g(λ)\left(\sum_{n} ω(z_n(λ))\right)dA(λ), \]
where z_n(λ) denotes the sequence of zeros of ϕ(z) − λ, repeated according to multiplicity. This completes the proof, since the sum in the last equation is N_{ϕ,ω}(λ).

Recall that an operator is said to be compact if it takes bounded sets to sets with compact closure. The following lemma characterizes the compactness of a composition operator; its proof is a modification of that of (Proposition 3.11, [1]). The proof is a standard application of Montel's theorem and the definition of the composition operator Cϕ, so we omit the details.

Lemma 1.2. Let ϕ be an analytic self-map of D. Then Cϕ : A²_α → Hω is compact if and only if whenever {f_n} is a bounded sequence in A²_α that converges to 0 uniformly on compact subsets of D, then lim_{n→∞} ‖Cϕf_n‖_{Hω} = 0.

The next lemma shows that the generalized Nevanlinna functions N_{ϕ,ω}, while not subharmonic themselves, satisfy a sub-mean value property. This lemma is a modification of (Corollary 6.7, [8]) and its proof is similar to that of (Corollary 4.6, [8]), so we omit the details.

Lemma 1.3. Let ω be an admissible weight, and let ϕ be an analytic self-map of D with ϕ(0) ≠ 0. Then for every 0 < r < |ϕ(0)| we have
\[ N_{ϕ,ω}(0) \le \frac{1}{r^2} \int_{rD} N_{ϕ,ω}(ζ)\, dA(ζ). \]


2. Compactness of a composition operator

In this section we characterize the boundedness and compactness of a composition operator Cϕ mapping the weighted Bergman space A²_α into the weighted Hilbert space Hω associated with an admissible weight ω. The notation A ≈ B means that there are two positive constants c₁ and c₂, independent of A and B, such that c₁A ≤ B ≤ c₂A.

Theorem 2.1. Let ϕ be an analytic self-map of D. Then Cϕ : A²_α → Hω is bounded if and only if
\[ N_{ϕ,ω}(λ) = O\left((-\log|λ|)^{2+α}\right), \quad \text{as } |λ| \to 1. \]

Proof. We first show that N_{ϕ,ω} satisfies the given growth condition. For λ ∈ D, consider the test function
\[ f_λ(z) = \frac{(1-|λ|^2)^{(2+α)/2}}{(1-\bar{λ}z)^{2+α}}. \]
Let ψ_λ be the Möbius self-map of D that interchanges 0 and λ, that is,
\[ ψ_λ(z) = \frac{λ-z}{1-\bar{λ}z}. \]
It is well known that ‖f_λ‖²_{A²_α} ≈ 1. Then we have
\[ \|C_ϕ(f_λ)\|_{H_ω}^2 = |f_λ(ϕ(0))|^2 + \int_D |f_λ'(ϕ(z))|^2 |ϕ'(z)|^2 ω(z)\,dA(z) = |f_λ(ϕ(0))|^2 + \int_{ϕ(D)} |f_λ'(z)|^2 N_{ϕ,ω}(z)\,dA(z) \]
\[ = |f_λ(ϕ(0))|^2 + (2+α)^2 |λ|^2 (1-|λ|^2)^{2+α} \int_D \frac{N_{ϕ,ω}(z)}{|1-\bar{λ}z|^{6+2α}}\,dA(z) \]
\[ = |f_λ(ϕ(0))|^2 + (2+α)^2 |λ|^2 (1-|λ|^2)^{α} \int_D \frac{N_{ϕ,ω}(z)}{|1-\bar{λ}z|^{2+2α}}\, |ψ_λ'(z)|^2\,dA(z) \]
\[ = |f_λ(ϕ(0))|^2 + (2+α)^2 |λ|^2 (1-|λ|^2)^{α} \int_D \frac{N_{ϕ,ω}(ψ_λ(z))}{|1-\bar{λ}ψ_λ(z)|^{2+2α}}\,dA(z). \]
Now for any |z| ≤ 1/2, we get
\[ \|C_ϕ(f_λ)\|_{H_ω}^2 \ge \frac{(2+α)^2 |λ|^2}{4^{1+α}(1-|λ|^2)^{2+α}} \int_{\frac{1}{2}D} N_{ϕ,ω}(ψ_λ(z))\,dA(z) \ge \frac{(2+α)^2 |λ|^2}{4^{2+α}(1-|λ|^2)^{2+α}}\, N_{ϕ,ω}(λ), \]
where the last inequality can be seen by using the sub-mean value property of Lemma 1.3. If |λ| is sufficiently close to 1, we get, for some positive constant C,
\[ N_{ϕ,ω}(λ) \le C (1-|λ|^2)^{2+α} \|C_ϕ(f_λ)\|_{H_ω}^2. \]
Since ‖f_λ‖²_{A²_α} ≈ 1 and (1−|λ|²) is comparable to log(1/|λ|), we get the desired growth condition.

For the converse, let f ∈ A²_α. Then, by the change of variables formula (Lemma 1.1), we get
\[ \|C_ϕ(f)\|_{H_ω}^2 = |f(ϕ(0))|^2 + \int_D |f'(ϕ(z))|^2 |ϕ'(z)|^2 ω(z)\,dA(z) = |f(ϕ(0))|^2 + \int_D |f'(z)|^2 N_{ϕ,ω}(z)\,dA(z). \]
By our hypothesis, there are a finite positive constant C and 0 < r < 1 such that
\[ N_{ϕ,ω}(z) \le C \left(\log\left(\frac{1}{|z|}\right)\right)^{2+α}, \quad \text{for } z \in D \setminus rD. \]
Therefore, we get
\[ \|C_ϕ(f)\|_{H_ω}^2 \le C \int_{D \setminus rD} |f'(z)|^2\, dA_{α+2}(z) \le C \|f\|_{A²_α}^2. \]
By using the Closed Graph Theorem, we get the boundedness of Cϕ.

In the next theorem we follow a familiar piece of operator-theoretic wisdom: if a "big-oh" condition determines when an operator is bounded, then the corresponding "little-oh" condition determines when it is compact.

Theorem 2.2. Let ϕ be an analytic self-map of D. Then Cϕ : A²_α → Hω is compact if and only if
\[ N_{ϕ,ω}(λ) = o\left((-\log|λ|)^{2+α}\right), \quad \text{as } |λ| \to 1. \]

Proof. Suppose that Cϕ is compact and the given growth condition does not hold. Then there exist a sequence {λ_n} in D with |λ_n| → 1 as n → ∞ and some β > 0 such that
\[ N_{ϕ,ω}(λ_n) \ge β\, (-\log|λ_n|)^{2+α}. \]
Now consider the functions
\[ f_n(z) = \frac{(1-|λ_n|^2)^{(2+α)/2}}{(1-\bar{λ}_n z)^{2+α}}. \]
It is clear that ‖f_n‖_{A²_α} ≈ 1 and that {f_n} converges uniformly to 0 on compact subsets of D. So, by the compactness of Cϕ and Lemma 1.2, we have lim_{n→∞} ‖Cϕ(f_n)‖²_{Hω} = 0.

On the other hand, using a similar argument as in the proof of Theorem 2.1, there are positive constants C and C₁ such that
\[ \|C_ϕ(f_n)\|_{H_ω}^2 \ge \int_D |f_n'(ϕ(z))|^2 |ϕ'(z)|^2 ω(z)\,dA(z) \ge C\, \frac{N_{ϕ,ω}(λ_n)}{(1-|λ_n|^2)^{2+α}}\cdot(1-|λ_n|^2)^{2+α}\cdot\frac{1}{(1-|λ_n|^2)^{2+α}} \ge C C_1 \frac{N_{ϕ,ω}(λ_n)}{(-\log|λ_n|)^{2+α}} \ge C C_1 β, \]
which contradicts the compactness of Cϕ, because C and C₁ are constants independent of n.

For the converse, let {f_n} be a sequence in the unit ball of A²_α converging to 0 uniformly on compact subsets of D. Then Cauchy's estimate implies that {f′_n} converges uniformly to 0 on compact subsets of D. Let ε > 0; by our hypothesis there exists δ ∈ (0,1) such that
\[ N_{ϕ,ω}(z) \le ε\, (-\log|z|)^{2+α}, \quad \text{for } δ < |z| < 1. \]
To end the proof, by Lemma 1.2, it is enough to show that lim_{n→∞} ‖Cϕ(f_n)‖²_{Hω} = 0. We have
\[ \|C_ϕ(f_n)\|_{H_ω}^2 = |f_n(ϕ(0))|^2 + \int_D |f_n'(ϕ(z))|^2 |ϕ'(z)|^2 ω(z)\,dA(z) = |f_n(ϕ(0))|^2 + \int_D |f_n'(z)|^2 N_{ϕ,ω}(z)\,dA(z) \]
\[ = |f_n(ϕ(0))|^2 + \int_{|z|\le δ} |f_n'(z)|^2 N_{ϕ,ω}(z)\,dA(z) + \int_{|z|>δ} |f_n'(z)|^2 N_{ϕ,ω}(z)\,dA(z) \]
\[ \le |f_n(ϕ(0))|^2 + \int_{|z|\le δ} |f_n'(z)|^2 N_{ϕ,ω}(z)\,dA(z) + ε \int_{|z|>δ} |f_n'(z)|^2 (-\log|z|)^{2+α}\,dA(z) \]
\[ \le |f_n(ϕ(0))|^2 + \int_{|z|\le δ} |f_n'(z)|^2 N_{ϕ,ω}(z)\,dA(z) + ε\, \|f_n\|_α^2. \]
Since ε > 0 is arbitrary and f_n and f′_n converge to 0 uniformly on compact subsets of D, we can make the right-hand side of the last inequality as small as we wish by choosing n sufficiently large. This completes the proof.

References

[1] C.C. Cowen and B.D. MacCluer, Composition Operators on Spaces of Analytic Functions, CRC Press, Boca Raton, 1995.

[2] P. Duren and A. Schuster, Bergman Spaces, American Mathematical Society, 2004.

[3] H. Hedenmalm, B. Korenblum, and K. Zhu, Theory of Bergman Spaces, Springer-Verlag, New York, 2000.

[4] K. Kellay and P. Lefèvre, Compact composition operators on weighted Hilbert spaces of analytic functions, J. Math. Anal. Appl. 386 (2012), 718-727.

[5] B.D. MacCluer and J.H. Shapiro, Angular derivatives and compact composition operators on the Hardy and Bergman spaces, Canad. J. Math. 38 (1986), 878-906.

[6] J.H. Shapiro, Composition Operators and Classical Function Theory, Universitext: Tracts in Mathematics, Springer-Verlag, New York, 1993.

[7] C.S. Stanton, Counting functions and majorization for Jensen measures, Pacific J. Math. 125 (1986), 459-468.

[8] J.H. Shapiro, The essential norm of a composition operator, Annals of Mathematics 125 (1987), 375-404.

[9] K. Zhu, Spaces of Holomorphic Functions in the Unit Ball, Springer-Verlag, New York, 2005.

[10] K. Zhu, Operator Theory in Function Spaces, Monographs and Textbooks in Pure and Applied Mathematics, 139, Marcel Dekker, Inc., New York, 1990.

(Waleed Al-Rawashdeh) Department of Mathematical Sciences, Montana Tech of The University of Montana, Butte, Montana 59701, USA
E-mail address: [email protected]


A NEW DOUBLE CESÀRO SEQUENCE SPACE DEFINED BY MODULUS FUNCTIONS

OĞUZ OĞUR

Abstract. The object of this paper is to introduce a new double Cesàro sequence space Ces^(2)(F,p) defined by a double sequence of modulus functions. We study some topological properties of this space and obtain some inclusion relations.

1. Introduction

As usual, N, R and R⁺ denote the sets of positive integers, real numbers and nonnegative real numbers, respectively. A double sequence on a normed linear space X is a function x from N × N into X, briefly denoted by x = (x(i,j)). Throughout this work, w and w² denote the spaces of all single real sequences and double real sequences, respectively. First of all, let us recall some preliminary definitions and notations.

If for every ε > 0 there exists n_ε ∈ N such that ‖x_{k,l} − a‖_X < ε whenever k, l > n_ε, then the double sequence {x_{k,l}} is said to converge (in the sense of Pringsheim) to a ∈ X [10].

A double sequence {x_{k,l}} is called a Cauchy sequence if and only if for every ε > 0 there exists n₀ = n₀(ε) such that |x_{k,l} − x_{p,q}| < ε for all k, l, p, q ≥ n₀.

A double series is an infinite sum Σ_{k,l=1}^∞ x_{k,l}, and its convergence means the convergence in ‖·‖_X of the sequence of partial sums {S_{n,m}}, where S_{n,m} = Σ_{k=1}^n Σ_{l=1}^m x_{k,l} (see [2], [3]).

If each double Cauchy sequence in X converges to an element of X with respect to the norm of X, then X is said to be a double complete space. A normed double complete space is said to be a double Banach space [2].

Let X be a linear space. A function p : X → R is called a paranorm if
i) p(0) = 0;
ii) p(x) ≥ 0 for all x ∈ X;
iii) p(−x) = p(x) for all x ∈ X;
iv) p(x+y) ≤ p(x) + p(y) for all x, y ∈ X;
v) if (λ_n) is a sequence of scalars with λ_n → λ as n → ∞ and (x_n) is a sequence of vectors with p(x_n − x) → 0 as n → ∞, then p(λ_n x_n − λx) → 0 as n → ∞ (continuity of multiplication by scalars).
A paranorm p for which p(x) = 0 implies x = 0 is called total [7].

A double sequence space E is said to be solid (or normal) if (y_{kl}) ∈ E whenever |y_{kl}| ≤ |x_{kl}| for all k, l ∈ N and (x_{kl}) ∈ E.

2000 Mathematics Subject Classification. 40A05, 40A25, 40A30.
Key words and phrases. Double sequence, modulus function, paranorm, Cesàro sequence space, solidity.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 109-116, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


A function f : [0,∞) → [0,∞) is said to be a modulus function if it satisfies the following:

1) f(x) = 0 if and only if x = 0;
2) f(x+y) ≤ f(x) + f(y) for all x, y ∈ [0,∞);
3) f is increasing;
4) f is continuous from the right at 0.

It follows that f is continuous on [0,∞). A modulus function may be bounded or unbounded. For example, if we take f(x) = x/(x+1), then f is bounded; but for 0 < p < 1, f(x) = x^p is unbounded [9]. By condition 2), we have f(nx) ≤ n f(x) for all n ∈ N, and so f(x) = f(nx·(1/n)) ≤ n f(x/n); hence
\[ \frac{1}{n} f(x) \le f\left(\frac{x}{n}\right) \quad \text{for all } n \in \mathbb{N}. \]

The FK-space L(f), introduced by Ruckle in [11], is of the form
\[ L(f) = \left\{ x \in w : \sum_{k=1}^{\infty} f(|x_k|) < \infty \right\}, \]
where f is a modulus function. This space is closely related to the space ℓ₁, which is an L(f) space with f(x) = x for all real x ≥ 0. Later on, this space was investigated by many authors (see [1, 4, 5, 8, 13]).

For 1 ≤ p < ∞, the Cesàro sequence space is defined by
\[ Ces_p = \left\{ x \in w : \sum_{j=1}^{\infty} \left( \frac{1}{j} \sum_{i=1}^{j} |x(i)| \right)^p < \infty \right\}, \]
equipped with the norm
\[ \|x\| = \left( \sum_{j=1}^{\infty} \left( \frac{1}{j} \sum_{i=1}^{j} |x(i)| \right)^p \right)^{1/p}. \]

This space was first introduced by Shiue [14]. It is very useful in the theory of matrix operators and elsewhere [6]. Sanhan and Suantai introduced and studied a generalized Cesàro sequence space Ces(p), where p = (p_j) is a bounded sequence of positive real numbers (see [12]). Finally, Bala [1] introduced the space Ces(f,p), using a modulus function f, as follows:
\[ Ces(f,p) = \left\{ x \in w : \sum_{j=1}^{\infty} \left[ f\left( \frac{1}{j} \sum_{i=1}^{j} |x(i)| \right) \right]^{p_j} < \infty \right\}, \]
where p = (p_j) is a bounded sequence of positive real numbers.

In this work, we introduce the double sequence space Ces^(2)(F,p) as follows. Let p = (p_{n,m}) be a bounded double sequence of positive real numbers and F = (f_{n,m}) be a double sequence of modulus functions. The space Ces^(2)(F,p) is defined by
\[ Ces^{(2)}(F,p) = \left\{ x \in w^2 : \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} \left[ f_{n,m}\left( \frac{1}{nm} \sum_{i,j=1}^{n,m} |x_{i,j}| \right) \right]^{p_{n,m}} < \infty \right\}. \]


If f_{n,m}(x) = x and p_{n,m} = p (1 ≤ p < ∞) for all n, m ∈ N, then the space Ces^(2)(F,p) reduces to the double Cesàro sequence space.

The following inequality will be used throughout this paper. Let (p_{n,m}) be a bounded double sequence of strictly positive real numbers and denote H = sup_{n,m} p_{n,m}. For any complex a_{n,m} and b_{n,m} we have
\[ |a_{n,m} + b_{n,m}|^{p_{n,m}} \le D\left( |a_{n,m}|^{p_{n,m}} + |b_{n,m}|^{p_{n,m}} \right), \]
where D = max(1, 2^{H-1}). Also, for any complex λ,
\[ |λ|^{p_{n,m}} \le \max\left(1, |λ|^{H}\right). \]
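Both elementary inequalities above can be checked on random samples. The sketch below uses the illustrative value H = 2.5 and random complex inputs.

```python
import random

random.seed(0)
H = 2.5
D = max(1.0, 2 ** (H - 1))
for _ in range(10000):
    p = random.uniform(0.1, H)   # an exponent p_{n,m} in (0, H]
    a = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    b = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(a + b) ** p <= D * (abs(a) ** p + abs(b) ** p) + 1e-9
    lam = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    assert abs(lam) ** p <= max(1.0, abs(lam) ** H) + 1e-9
print("both inequalities verified on 10000 random samples")
```

The constant D = max(1, 2^{H-1}) works for all exponents at once: for p ≥ 1 one uses convexity of t^p, and for p < 1 the map t^p is already subadditive, so D = 1 suffices there.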

2. MAIN RESULTS

Theorem 1. The double sequence space Ces^(2)(F,p) is a linear space over the complex field C.

Proof. Let x; y 2 Ces(2)(F; p) and ; 2 C: Then there exist integers M; N suchthat jj M and jj N . Hence we have

1Xn=1

1Xm=1

24fn;m0@ 1

n:m

n;mXi;j=1

jxi;j + yi;j j

1A35pn;m

1Xn=1

1Xm=1

24fn;m0@ 1

n:m

n;mXi;j=1

jxi;j j

1A+ fn;m0@ 1

n:m

n;mXi;j=1

jyi;j j

1A35pn;m

1Xn=1

1Xm=1

24Mfn;m

0@ 1

n:m

n;mXi;j=1

jxi;j j

1A+Nfn;m0@ 1

n:m

n;mXi;j=1

jyi;j j

1A35pn;m

D:1Xn=1

1Xm=1

24Mfn;m

0@ 1

n:m

n;mXi;j=1

jxi;j j

1A35pn;m

+D:

1Xn=1

1Xm=1

24Nfn;m0@ 1

n:m

n;mXi;j=1

jyi;j j

1A35pn;m

D:max1;MH1

1Xn=1

1Xm=1

24Mfn;m

0@ 1

n:m

n;mXi;j=1

jxi;j j

1A35pn;m

+D:max1; NH1

1Xn=1

1Xm=1

24Nfn;m0@ 1

n:m

n;mXi;j=1

jyi;j j

1A35pn;m

whereD = max1; 2H1

: This shows that x+y 2 Ces(2)(F; p) and so Ces(2)(F; p)

is a linear space. Theorem 2. The double sequence space Ces(2)(F; p) is paranormed space with theparanorm

g(x) =

0@ 1Xn=1

1Xm=1

24fn;m0@ 1

n:m

n;mXi;j=1

jxi;j j

1A35pn;m1A1M



where $H=\sup_{n,m}p_{n,m}<\infty$ and $M=\max(1,H)$.

Proof. It is clear that $g(-x)=g(x)$ and $g(0)=0$. For any $x,y\in \mathrm{Ces}^{(2)}(F,p)$,

$$g(x+y)=\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}+y_{i,j}|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}$$

$$\le\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)+f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|y_{i,j}|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}$$

$$\le g(x)+g(y).$$

For the continuity of scalar multiplication, suppose that $\left(\lambda^{(s)}\right)$ is a sequence of scalars such that $\lambda^{(s)}\to\lambda$ and $g\left(x^{(s)}-x\right)\to0$ as $s\to\infty$. We shall show that $g\left(\lambda^{(s)}x^{(s)}-\lambda x\right)\to0$ as $s\to\infty$:

$$g\left(\lambda^{(s)}x^{(s)}-\lambda x\right)=\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|\lambda^{(s)}x^{(s)}_{i,j}-\lambda x_{i,j}\right|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}$$

$$=\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|\lambda^{(s)}x^{(s)}_{i,j}-\lambda^{(s)}x_{i,j}+\lambda^{(s)}x_{i,j}-\lambda x_{i,j}\right|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}$$

$$\le\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|\lambda^{(s)}\right|\left|x^{(s)}_{i,j}-x_{i,j}\right|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}+\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{\left|\lambda^{(s)}-\lambda\right|}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}$$

$$\le K^{\frac{H}{M}}\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|x^{(s)}_{i,j}-x_{i,j}\right|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}+\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{\left|\lambda^{(s)}-\lambda\right|}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}},$$

where $K=1+\max\left(1,\sup_{s}\left|\lambda^{(s)}\right|\right)$. We must show that

$$\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{\left|\lambda^{(s)}-\lambda\right|}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}\to0$$

as $s\to\infty$. Since $\left(\lambda^{(s)}\right)$ converges, there exists $T>0$ such that $\left|\lambda^{(s)}-\lambda\right|\le T$ for all $s\in\mathbb{N}$. Take $\varepsilon>0$. Since $x\in \mathrm{Ces}^{(2)}(F,p)$, there exist $n_0,m_0\in\mathbb{N}$ such that

$$\sum_{n=n_0}^{\infty}\sum_{m=m_0}^{\infty}\left[f_{n,m}\left(\frac{\left|\lambda^{(s)}-\lambda\right|}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}+\sum_{n=1}^{n_0}\sum_{m=m_0+1}^{\infty}\left[f_{n,m}\left(\frac{\left|\lambda^{(s)}-\lambda\right|}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}+\sum_{n=n_0+1}^{\infty}\sum_{m=1}^{m_0}\left[f_{n,m}\left(\frac{\left|\lambda^{(s)}-\lambda\right|}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}$$

$$\le\sum_{n=n_0}^{\infty}\sum_{m=m_0}^{\infty}\left[f_{n,m}\left(\frac{T}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}+\sum_{n=1}^{n_0}\sum_{m=m_0+1}^{\infty}\left[f_{n,m}\left(\frac{T}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}+\sum_{n=n_0+1}^{\infty}\sum_{m=1}^{m_0}\left[f_{n,m}\left(\frac{T}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}<\frac{\varepsilon}{2}$$

for all $s\in\mathbb{N}$. Also, since $\lambda^{(s)}\to\lambda$, we have

$$\sum_{n=1}^{n_0}\sum_{m=1}^{m_0}\left[f_{n,m}\left(\frac{\left|\lambda^{(s)}-\lambda\right|}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}<\frac{\varepsilon}{2}$$

as $s\to\infty$. Consequently,

$$\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{\left|\lambda^{(s)}-\lambda\right|}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}\to0$$

as $s\to\infty$. Since

$$g\left(\lambda^{(s)}x^{(s)}-\lambda x\right)\le K^{\frac{H}{M}}\,g\left(x^{(s)}-x\right)+\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{\left|\lambda^{(s)}-\lambda\right|}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}},$$

we get $g\left(\lambda^{(s)}x^{(s)}-\lambda x\right)\to0$ as $s\to\infty$. $\square$

Theorem 3. The double sequence space $\mathrm{Ces}^{(2)}(F,p)$ is complete with respect to its paranorm.

Proof. Let $\left(x^{(s)}\right)$ be a double Cauchy sequence in $\mathrm{Ces}^{(2)}(F,p)$, where $x^{(s)}=\left\{x^{(s)}_{k,l}\right\}_{k,l=1}^{\infty}$ for each $s\in\mathbb{N}$. Then we have

$$\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|x^{(s)}_{i,j}-x^{(t)}_{i,j}\right|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}\to0\qquad(1)$$

as $s,t\to\infty$. Hence, for each fixed $i,j\in\mathbb{N}$, $\left|x^{(s)}_{i,j}-x^{(t)}_{i,j}\right|\to0$ as $s,t\to\infty$, and so $\left(x^{(s)}_{i,j}\right)_{s}$ is a Cauchy sequence in $\mathbb{C}$ for each fixed $i,j\in\mathbb{N}$.

Then there exists $x_{i,j}\in\mathbb{C}$ such that $x^{(s)}_{i,j}\to x_{i,j}$ as $s\to\infty$; define $x=(x_{i,j})$. From (1), for $\varepsilon>0$ there exists $N\in\mathbb{N}$ such that

$$\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|x^{(s)}_{i,j}-x^{(t)}_{i,j}\right|\right)\right]^{p_{n,m}}<\varepsilon^{M}$$

for $s,t>N$, and so, letting $t\to\infty$,

$$\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|x^{(s)}_{i,j}-x_{i,j}\right|\right)\right]^{p_{n,m}}<\varepsilon^{M}$$

for all $s\ge N$. Thus we get $g\left(x^{(s)}-x\right)\to0$ as $s\to\infty$. Finally, we must show that $x\in \mathrm{Ces}^{(2)}(F,p)$. By the subadditivity of the modulus functions, for all $s\ge N$ we get

$$\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}=\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|x_{i,j}-x^{(s)}_{i,j}+x^{(s)}_{i,j}\right|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}$$

$$\le\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|x_{i,j}-x^{(s)}_{i,j}\right|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}+\left(\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}\left|x^{(s)}_{i,j}\right|\right)\right]^{p_{n,m}}\right)^{\frac{1}{M}}$$

$$\le\varepsilon+g\left(x^{(s)}\right)<\infty,$$

which implies that $x\in \mathrm{Ces}^{(2)}(F,p)$. Hence $\mathrm{Ces}^{(2)}(F,p)$ is complete. $\square$

Theorem 4. Let $F=(f_{n,m})$ and $H=(h_{n,m})$ be two double sequences of modulus functions. Then

(i) $\limsup \frac{f_{n,m}(t)}{h_{n,m}(t)}<\infty$ for all $n,m\in\mathbb{N}$ implies $\mathrm{Ces}^{(2)}(H,p)\subseteq \mathrm{Ces}^{(2)}(F,p)$;
(ii) $\mathrm{Ces}^{(2)}(H,p)\cap \mathrm{Ces}^{(2)}(F,p)\subseteq \mathrm{Ces}^{(2)}(F+H,p)$.

Proof. (i) By the hypothesis there exists $K>0$ such that $f_{n,m}(t)\le K\,h_{n,m}(t)$ for all $n,m\in\mathbb{N}$. Let $x\in \mathrm{Ces}^{(2)}(H,p)$; then we have

$$\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}\le\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[K\,h_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}$$

$$\le C^{H}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[h_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}<\infty,$$

where $C=\max(1,K)$. Hence we get $x\in \mathrm{Ces}^{(2)}(F,p)$.

(ii) Let $x\in \mathrm{Ces}^{(2)}(H,p)\cap \mathrm{Ces}^{(2)}(F,p)$. Hence we have

$$\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[(f_{n,m}+h_{n,m})\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)+h_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}$$

$$\le D\left\{\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}+\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[h_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}\right\}<\infty,$$

where $D=\max\left(1,2^{H-1}\right)$. Therefore we get $x\in \mathrm{Ces}^{(2)}(F+H,p)$, and this completes the proof. $\square$

Corollary 1. The space $\mathrm{Ces}^{(2)}(F,p)$ is solid.

Proof. Suppose that $x\in \mathrm{Ces}^{(2)}(F,p)$ and $|y_{i,j}|\le|x_{i,j}|$ for all $i,j\in\mathbb{N}$. Then, by the monotonicity of the modulus functions, we have

$$\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|y_{i,j}|\right)\right]^{p_{n,m}}\le\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}<\infty.$$

This implies that $y\in \mathrm{Ces}^{(2)}(F,p)$, and so $\mathrm{Ces}^{(2)}(F,p)$ is solid. $\square$

Theorem 5. Let $(p_{n,m})$ and $(q_{n,m})$ be bounded double sequences of positive real numbers such that $0<p_{n,m}\le q_{n,m}<\infty$ for all $n,m\in\mathbb{N}$. Then, for any modulus sequence $F=(f_{n,m})$, $\mathrm{Ces}^{(2)}(F,p)\subseteq \mathrm{Ces}^{(2)}(F,q)$.

Proof. Let $x\in \mathrm{Ces}^{(2)}(F,p)$. Then we have

$$\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}<\infty.$$

Hence there exists $N\in\mathbb{N}$ such that $f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\le1$ for all $n,m\ge N$. Since $0<p_{n,m}\le q_{n,m}$, we get

$$\sum_{n=N}^{\infty}\sum_{m=N}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{q_{n,m}}\le\sum_{n=N}^{\infty}\sum_{m=N}^{\infty}\left[f_{n,m}\left(\frac{1}{n\cdot m}\sum_{i,j=1}^{n,m}|x_{i,j}|\right)\right]^{p_{n,m}}<\infty,$$

and so $x\in \mathrm{Ces}^{(2)}(F,q)$. $\square$

References

[1] Bala I., On Cesàro Sequence Space Defined By a Modulus Function, Demonstratio Math., Vol. XLVI, No. 1, 2013.
[2] Basar F., Sever Y., The Space Lq of Double Sequences, Math. J. Okayama Univ. 51 (2009), 149-157.
[3] Basar F., Summability Theory and Its Applications, Bentham Science Publishers, e-books, Monographs, Istanbul, 2012.
[4] Candan M., Some New Sequence Spaces Defined By a Modulus Function and an Infinite Matrix in a Seminormed Space, J. Math. Anal., Vol. 3, Issue 2 (2012), 1-9.
[5] Khan V. A., Khan N., On Some I-Convergent Double Sequence Spaces Defined By a Modulus Function, Engineering, 5 (2013), 35-40.
[6] Lee P. Y., Cesàro Sequence Space, Math. Chronicle, 13 (1984), 29-45.
[7] Maddox I. J., Elements of Functional Analysis, Cambridge Univ. Press, 1970.
[8] Maddox I. J., Sequence Spaces Defined By a Modulus, Math. Proc. Camb. Philos. Soc. 100 (1986), 161-166.
[9] Nakano H., Concave Modulars, J. Math. Soc. Japan 5 (1953), 29-49.
[10] Pringsheim A., Zur Theorie der zweifach unendlichen Zahlenfolgen, Math. Ann., 53 (1900), 289-321.
[11] Ruckle W. H., FK-Spaces in which the Sequence of Coordinate Vectors is Bounded, Canad. J. Math., 25 (1973), 973-978.
[12] Sanhan W., Suantai S., On k-Nearly Uniform Convex Property in Generalized Cesàro Sequence Spaces, Internat. J. Math. Math. Sci., 57 (2003), 3599-3607.
[13] Savas E., On Some Generalized Sequence Spaces Defined By a Modulus, Indian J. Pure Appl. Math., 30 (5) (1999), 459-464.
[14] Shiue J. S., On the Cesàro Sequence Spaces, Tamkang J. Math., 1 (1970), 19-25.

(O. Oğur) Ondokuz Mayıs University, Art and Science Faculty, Department of Mathematics, Kurupelit Campus, Samsun, Turkey
E-mail address: [email protected]


Right Fractional Monotone Approximation

George A. Anastassiou
Department of Mathematical Sciences
University of Memphis
Memphis, TN 38152, U.S.A.
[email protected]

Abstract

Let $f\in C^{p}([-1,1])$, $p\ge0$, and let $L$ be a linear right fractional differential operator such that $L(f)\ge0$ throughout $[-1,0]$. We can find a sequence of polynomials $Q_n$ of degree $\le n$ such that $L(Q_n)\ge0$ over $[-1,0]$, and furthermore $f$ is approximated uniformly by $Q_n$. The degree of this restricted approximation is given by an inequality using the modulus of continuity of $f^{(p)}$.

2010 AMS Mathematics Subject Classification: 26A33, 41A10, 41A17, 41A25, 41A29.
Keywords and Phrases: monotone approximation, right Caputo fractional derivative, right fractional linear differential operator, modulus of continuity.

1 Introduction

The topic of monotone approximation, started in [5], has become a major trend in approximation theory. A typical problem in this subject is: given a positive integer $k$, approximate a given function whose $k$th derivative is $\ge0$ by polynomials having this property.

In [2] the authors replaced the $k$th derivative with a linear differential operator of order $k$. We mention this motivating result.

Theorem 1 Let $h,k,p$ be integers, $0\le h\le k\le p$, and let $f$ be a real function with $f^{(p)}$ continuous in $[-1,1]$ with modulus of continuity $\omega_1\!\left(f^{(p)},x\right)$ there. Let $a_j(x)$, $j=h,h+1,\dots,k$, be real functions, defined and bounded on $[-1,1]$, and assume $a_h(x)$ is either $\ge$ some number $\alpha>0$ or $\le$ some number $\beta<0$ throughout $[-1,1]$. Consider the operator

$$L=\sum_{j=h}^{k}a_j(x)\,\frac{d^{j}}{dx^{j}}\qquad(1)$$

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 117-124, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC

and suppose, throughout $[-1,1]$,

$$L(f)\ge0.\qquad(2)$$

Then, for every integer $n\ge1$, there is a real polynomial $Q_n(x)$ of degree $\le n$ such that

$$L(Q_n)\ge0\ \text{throughout}\ [-1,1]\qquad(3)$$

and

$$\max_{-1\le x\le1}|f(x)-Q_n(x)|\le C\,n^{k-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right),\qquad(4)$$

where $C$ is independent of $n$ or $f$.

In this article we extend Theorem 1 to the right fractional level. Now $L$ is a linear right Caputo fractional differential operator. Here the monotonicity property holds only on the critical interval $[-1,0]$; quantitative uniform approximation remains true on all of $[-1,1]$. To the best of our knowledge, this is the first time right fractional monotone approximation is introduced. We need and make

Definition 2 ([3]) Let $\alpha>0$ and $\lceil\alpha\rceil=m$ ($\lceil\cdot\rceil$ is the ceiling of the number). Consider $f\in C^{m}([-1,1])$. We define the right Caputo fractional derivative of $f$ of order $\alpha$ as follows:

$$\left(D_{1-}^{\alpha}f\right)(x)=\frac{(-1)^{m}}{\Gamma(m-\alpha)}\int_{x}^{1}(t-x)^{m-\alpha-1}f^{(m)}(t)\,dt,\qquad(5)$$

for any $x\in[-1,1]$, where $\Gamma$ is the gamma function. We set

$$D_{1-}^{0}f(x)=f(x),\qquad D_{1-}^{m}f(x)=(-1)^{m}f^{(m)}(x),\quad\forall\,x\in[-1,1].\qquad(6)$$
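As a quick illustration of the definition (ours, not part of the original paper), take $f(x)=1-x$ and an order $\alpha\in(0,1)$, so $m=1$ and $f'(t)=-1$; then

```latex
\left(D_{1-}^{\alpha}f\right)(x)
  =\frac{(-1)^{1}}{\Gamma(1-\alpha)}\int_{x}^{1}(t-x)^{-\alpha}\,f'(t)\,dt
  =\frac{1}{\Gamma(1-\alpha)}\int_{x}^{1}(t-x)^{-\alpha}\,dt
  =\frac{(1-x)^{1-\alpha}}{(1-\alpha)\,\Gamma(1-\alpha)}
  =\frac{(1-x)^{1-\alpha}}{\Gamma(2-\alpha)},
```

which vanishes at $x=1$ and, as $\alpha\to1^{-}$, recovers $D_{1-}^{1}f(x)=(-1)f'(x)=1$, consistent with (6).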

2 Main Result

We present

Theorem 3 Let $h,k,p$ be integers, $h$ even, $0\le h\le k\le p$, and let $f$ be a real function with $f^{(p)}$ continuous in $[-1,1]$ with modulus of continuity $\omega_1\!\left(f^{(p)},\delta\right)$, $\delta>0$, there. Let $\alpha_j(x)$, $j=h,h+1,\dots,k$, be real functions, defined and bounded on $[-1,1]$, and assume that, for $x\in[-1,0]$, $\alpha_h(x)$ is either $\ge$ some number $c>0$ or $\le$ some number $d<0$. Let the real numbers $\gamma_0=0<\gamma_1<1<\gamma_2<2<\dots<\gamma_p<p$. Here $D_{1-}^{\gamma_j}f$ stands for the right Caputo fractional derivative of $f$ of order $\gamma_j$ anchored at $1$. Consider the linear right fractional differential operator

$$L:=\sum_{j=h}^{k}\alpha_j(x)\,D_{1-}^{\gamma_j}\qquad(7)$$

and suppose, throughout $[-1,0]$,

$$L(f)\ge0.\qquad(8)$$

Then, for any $n\in\mathbb{N}$, there exists a real polynomial $Q_n(x)$ of degree $\le n$ such that

$$L(Q_n)\ge0\ \text{throughout}\ [-1,0],\qquad(9)$$

and

$$\max_{-1\le x\le1}|f(x)-Q_n(x)|\le C\,n^{k-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right),\qquad(10)$$

where $C$ is independent of $n$ or $f$.

Proof. Let $n\in\mathbb{N}$. By a theorem of Trigub [6, 7], given a real function $g$ with $g^{(p)}$ continuous in $[-1,1]$, there exists a real polynomial $q_n(x)$ of degree $\le n$ such that

$$\max_{-1\le x\le1}\left|g^{(j)}(x)-q_n^{(j)}(x)\right|\le R_p\,n^{j-p}\,\omega_1\!\left(g^{(p)},\frac{1}{n}\right),\qquad(11)$$

$j=0,1,\dots,p$, where $R_p$ is independent of $n$ or $g$. Here $h,k,p\in\mathbb{Z}_+$, $0\le h\le k\le p$.

Let $\gamma_j>0$, $j=1,\dots,p$, be such that $0<\gamma_1<1<\gamma_2<2<\gamma_3<3<\dots<\gamma_p<p$; that is, $\lceil\gamma_j\rceil=j$, $j=1,\dots,p$. We consider the right Caputo fractional derivatives

$$\left(D_{1-}^{\gamma_j}g\right)(x)=\frac{(-1)^{j}}{\Gamma(j-\gamma_j)}\int_{x}^{1}(t-x)^{j-\gamma_j-1}g^{(j)}(t)\,dt,\qquad\left(D_{1-}^{j}g\right)(x)=(-1)^{j}g^{(j)}(x),\qquad(12)$$

and

$$\left(D_{1-}^{\gamma_j}q_n\right)(x)=\frac{(-1)^{j}}{\Gamma(j-\gamma_j)}\int_{x}^{1}(t-x)^{j-\gamma_j-1}q_n^{(j)}(t)\,dt,\qquad\left(D_{1-}^{j}q_n\right)(x)=(-1)^{j}q_n^{(j)}(x),\quad j=1,\dots,p,\qquad(13)$$

where $\Gamma$ is the gamma function

$$\Gamma(v)=\int_{0}^{\infty}e^{-t}t^{v-1}\,dt,\quad v>0.\qquad(14)$$


We notice that

$$\left|\left(D_{1-}^{\gamma_j}g\right)(x)-\left(D_{1-}^{\gamma_j}q_n\right)(x)\right|=\frac{1}{\Gamma(j-\gamma_j)}\left|\int_{x}^{1}(t-x)^{j-\gamma_j-1}g^{(j)}(t)\,dt-\int_{x}^{1}(t-x)^{j-\gamma_j-1}q_n^{(j)}(t)\,dt\right|\qquad(15)$$

$$\le\frac{1}{\Gamma(j-\gamma_j)}\int_{x}^{1}(t-x)^{j-\gamma_j-1}\left|g^{(j)}(t)-q_n^{(j)}(t)\right|dt\overset{(11)}{\le}\qquad(16)$$

$$\frac{1}{\Gamma(j-\gamma_j)}\left(\int_{x}^{1}(t-x)^{j-\gamma_j-1}\,dt\right)R_p\,n^{j-p}\,\omega_1\!\left(g^{(p)},\frac{1}{n}\right)=\frac{1}{\Gamma(j-\gamma_j)}\cdot\frac{(1-x)^{j-\gamma_j}}{j-\gamma_j}\,R_p\,n^{j-p}\,\omega_1\!\left(g^{(p)},\frac{1}{n}\right)\qquad(17)$$

$$\le\frac{2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\,R_p\,n^{j-p}\,\omega_1\!\left(g^{(p)},\frac{1}{n}\right).$$

We proved that for any $x\in[-1,1]$ we have

$$\left|\left(D_{1-}^{\gamma_j}g\right)(x)-\left(D_{1-}^{\gamma_j}q_n\right)(x)\right|\le\frac{2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\,R_p\,n^{j-p}\,\omega_1\!\left(g^{(p)},\frac{1}{n}\right).\qquad(18)$$

Hence it holds

$$\max_{-1\le x\le1}\left|\left(D_{1-}^{\gamma_j}g\right)(x)-\left(D_{1-}^{\gamma_j}q_n\right)(x)\right|\le\frac{2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\,R_p\,n^{j-p}\,\omega_1\!\left(g^{(p)},\frac{1}{n}\right),\qquad(19)$$

$j=0,1,\dots,p$.
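The constant in this chain comes from two elementary facts (our recap; $\gamma_j$ denotes the fractional order with $\lceil\gamma_j\rceil=j$, so $0<j-\gamma_j<1$): the beta-type integral evaluates exactly, and $(1-x)\le2$ on $[-1,1]$ together with $(j-\gamma_j)\,\Gamma(j-\gamma_j)=\Gamma(j-\gamma_j+1)$ gives

```latex
\frac{1}{\Gamma(j-\gamma_j)}\cdot\frac{(1-x)^{j-\gamma_j}}{j-\gamma_j}
  =\frac{(1-x)^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}
  \le\frac{2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}.
```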

Above we set $D_{1-}^{0}g(x)=g(x)$, $D_{1-}^{0}q_n(x)=q_n(x)$, $\forall\,x\in[-1,1]$, and $\gamma_0=0$, i.e. $\lceil\gamma_0\rceil=0$. Put

$$s_j:=\sup_{-1\le x\le1}\left|\alpha_h^{-1}(x)\,\alpha_j(x)\right|,\quad j=h,\dots,k,\qquad(20)$$

and

$$\eta_n:=R_p\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right)\left(\sum_{j=h}^{k}\frac{s_j\,2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\,n^{j-p}\right).\qquad(21)$$

I. Suppose, throughout $[-1,0]$, $\alpha_h(x)>0$. Let $Q_n(x)$, $x\in[-1,1]$, be a real polynomial of degree $\le n$ so that

$$\max_{-1\le x\le1}\left|D_{1-}^{\gamma_j}\!\left(f(x)+\eta_n\,(h!)^{-1}x^{h}\right)-\left(D_{1-}^{\gamma_j}Q_n\right)(x)\right|\overset{(19)}{\le}\frac{2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\,R_p\,n^{j-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right),\qquad(22)$$

$j=0,1,\dots,p$.

In particular ($j=0$) it holds

$$\max_{-1\le x\le1}\left|f(x)+\eta_n\,(h!)^{-1}x^{h}-Q_n(x)\right|\le R_p\,n^{-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right),\qquad(23)$$

and

$$\max_{-1\le x\le1}|f(x)-Q_n(x)|\le\eta_n\,(h!)^{-1}+R_p\,n^{-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right)$$

$$=(h!)^{-1}R_p\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right)\left(\sum_{j=h}^{k}\frac{s_j\,2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\,n^{j-p}\right)+R_p\,n^{-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right)\qquad(24)$$

$$\le R_p\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right)n^{k-p}\left(1+(h!)^{-1}\sum_{j=h}^{k}\frac{s_j\,2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\right).\qquad(25)$$

That is,

$$\max_{-1\le x\le1}|f(x)-Q_n(x)|\le R_p\left(1+(h!)^{-1}\sum_{j=h}^{k}\frac{s_j\,2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\right)n^{k-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right),\qquad(26)$$

proving (10). Here

$$L=\sum_{j=h}^{k}\alpha_j(x)\,D_{1-}^{\gamma_j},$$

and suppose, throughout $[-1,0]$, $Lf\ge0$. So over $-1\le x\le0$, we get

$$\alpha_h^{-1}(x)\,L(Q_n(x))=\alpha_h^{-1}(x)\,L(f(x))+\eta_n\,\frac{(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}\qquad(27)$$

$$+\sum_{j=h}^{k}\alpha_h^{-1}(x)\,\alpha_j(x)\left[D_{1-}^{\gamma_j}Q_n(x)-D_{1-}^{\gamma_j}f(x)-\frac{\eta_n}{h!}\,D_{1-}^{\gamma_j}x^{h}\right]\overset{(22)}{\ge}$$

$$\eta_n\,\frac{(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}-\left(\sum_{j=h}^{k}\frac{s_j\,2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\,n^{j-p}\right)R_p\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right)$$

$$=\eta_n\,\frac{(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}-\eta_n=\eta_n\left[\frac{(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}-1\right]\qquad(28)$$

$$=\eta_n\left[\frac{(1-x)^{h-\gamma_h}-\Gamma(h-\gamma_h+1)}{\Gamma(h-\gamma_h+1)}\right]\ge\eta_n\left[\frac{1-\Gamma(h-\gamma_h+1)}{\Gamma(h-\gamma_h+1)}\right]\ge0.\qquad(29)$$

Explanation: we know $\Gamma(1)=1$, $\Gamma(2)=1$, and $\Gamma$ is convex and positive on $(0,\infty)$. Here $0\le h-\gamma_h<1$ and $1\le h-\gamma_h+1<2$. Thus $\Gamma(h-\gamma_h+1)\le1$ and

$$1-\Gamma(h-\gamma_h+1)\ge0.\qquad(30)$$

Hence

$$L(Q_n(x))\ge0,\quad x\in[-1,0].\qquad(31)$$

II. Suppose, throughout $[-1,0]$, $\alpha_h(x)<0$. In this case let $Q_n(x)$, $x\in[-1,1]$, be a real polynomial of degree $\le n$ such that

$$\max_{-1\le x\le1}\left|D_{1-}^{\gamma_j}\!\left(f(x)-\eta_n\,(h!)^{-1}x^{h}\right)-\left(D_{1-}^{\gamma_j}Q_n\right)(x)\right|\le\frac{2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\,R_p\,n^{j-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right),\qquad(32)$$

$j=0,1,\dots,p$.

In particular ($j=0$) it holds

$$\max_{-1\le x\le1}\left|f(x)-\eta_n\,(h!)^{-1}x^{h}-Q_n(x)\right|\le R_p\,n^{-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right),\qquad(33)$$

and

$$\max_{-1\le x\le1}|f(x)-Q_n(x)|\le\eta_n\,(h!)^{-1}+R_p\,n^{-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right)\ \text{(as before)}$$

$$\le R_p\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right)n^{k-p}\left(1+(h!)^{-1}\sum_{j=h}^{k}\frac{s_j\,2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\right).\qquad(34)$$

That is,

$$\max_{-1\le x\le1}|f(x)-Q_n(x)|\le R_p\left(1+(h!)^{-1}\sum_{j=h}^{k}\frac{s_j\,2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\right)n^{k-p}\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right),\qquad(35)$$

reproving (10).

Again suppose, throughout $[-1,0]$, $Lf\ge0$. Also, if $-1\le x\le0$, then

$$\alpha_h^{-1}(x)\,L(Q_n(x))=\alpha_h^{-1}(x)\,L(f(x))-\eta_n\,\frac{(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}\qquad(36)$$

$$+\sum_{j=h}^{k}\alpha_h^{-1}(x)\,\alpha_j(x)\left[D_{1-}^{\gamma_j}Q_n(x)-D_{1-}^{\gamma_j}f(x)+\frac{\eta_n}{h!}\,D_{1-}^{\gamma_j}x^{h}\right]\overset{(32)}{\le}$$

$$-\eta_n\,\frac{(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}+\left(\sum_{j=h}^{k}\frac{s_j\,2^{j-\gamma_j}}{\Gamma(j-\gamma_j+1)}\,n^{j-p}\right)R_p\,\omega_1\!\left(f^{(p)},\frac{1}{n}\right)$$

$$=\eta_n-\eta_n\,\frac{(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}=\eta_n\left(1-\frac{(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}\right)=$$

$$\eta_n\left(\frac{\Gamma(h-\gamma_h+1)-(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}\right)\le\eta_n\left(\frac{1-(1-x)^{h-\gamma_h}}{\Gamma(h-\gamma_h+1)}\right)\le0,\qquad(37)$$

and hence again

$$L(Q_n(x))\ge0,\quad\forall\,x\in[-1,0].\qquad(38)$$

Remark 4 Based on [1], here we have that the $D_{1-}^{\gamma_j}f$ are continuous functions, $j=0,1,\dots,p$. Suppose that $\alpha_h(x),\dots,\alpha_k(x)$ are continuous functions on $[-1,1]$, and that $L(f)\ge0$ on $[-1,0]$ is replaced by $L(f)>0$ on $[-1,0]$. Disregard the assumption made in Theorem 3 on $\alpha_h(x)$. For $n\in\mathbb{N}$, let $Q_n(x)$ be the $q_n(x)$ of (19) for $g=f$. Then $Q_n(x)$ converges to $f$ at the Jackson rate [4, p. 18, Theorem VIII] and, at the same time, since $L(Q_n)$ converges uniformly to $L(f)$ on $[-1,1]$, $L(Q_n)>0$ on $[-1,0]$ for all $n$ sufficiently large.

References

[1] G.A. Anastassiou, Fractional Korovkin theory, Chaos, Solitons and Fractals, 42 (2009), 2080-2094.
[2] G.A. Anastassiou, O. Shisha, Monotone approximation with linear differential operators, J. Approx. Theory 44 (1985), 391-393.
[3] A.M.A. El-Sayed, M. Gaber, On the finite Caputo and finite Riesz derivatives, Electronic Journal of Theoretical Physics, Vol. 3, No. 12 (2006), 81-95.
[4] D. Jackson, The Theory of Approximation, Amer. Math. Soc. Colloq., Vol. XI, New York, 1930.
[5] O. Shisha, Monotone approximation, Pacific J. Math. 15 (1965), 667-671.
[6] S.A. Teljakovskii, Two theorems on the approximation of functions by algebraic polynomials, Mat. Sb. 70 (112) (1966), 252-265 [Russian]; Amer. Math. Soc. Transl. 77 (2) (1968), 163-178.
[7] R.M. Trigub, Approximation of functions by polynomials with integer coefficients, Izv. Akad. Nauk SSSR Ser. Mat. 26 (1962), 261-280 [Russian].


A differential sandwich-type result using a generalized Salagean operator and Ruscheweyh operator

Andrei Loriana
Department of Mathematics and Computer Science
University of Oradea
1 Universitatii street, 410087 Oradea, Romania

[email protected]

Abstract

The purpose of this paper is to introduce sufficient conditions for subordination and superordination involving the derivative operator $DR_{\lambda}^{m,n}$ and also to obtain a sandwich-type result.

Keywords: analytic functions, differential operator, differential subordination, differential superordination.
2010 Mathematical Subject Classification: 30C45.

1 Introduction

Let $H(U)$ be the class of analytic functions in the open unit disc of the complex plane $U=\{z\in\mathbb{C}:|z|<1\}$. Let $H(a,n)$ be the subclass of $H(U)$ consisting of functions of the form $f(z)=a+a_nz^{n}+a_{n+1}z^{n+1}+\dots$.

Let $A_n=\left\{f\in H(U):f(z)=z+a_{n+1}z^{n+1}+\dots,\ z\in U\right\}$ and $A=A_1$. Denote by

$$K=\left\{f\in A:\operatorname{Re}\left(\frac{zf''(z)}{f'(z)}+1\right)>0,\ z\in U\right\}$$

the class of normalized convex functions in $U$.

Let the functions $f$ and $g$ be analytic in $U$. We say that the function $f$ is subordinate to $g$, written $f\prec g$, if there exists a Schwarz function $w$, analytic in $U$, with $w(0)=0$ and $|w(z)|<1$ for all $z\in U$, such that $f(z)=g(w(z))$ for all $z\in U$. In particular, if the function $g$ is univalent in $U$, the above subordination is equivalent to $f(0)=g(0)$ and $f(U)\subset g(U)$.

Let $\psi:\mathbb{C}^{3}\times U\to\mathbb{C}$ and let $h$ be a univalent function in $U$. If $p$ is analytic in $U$ and satisfies the second-order differential subordination

$$\psi\left(p(z),zp'(z),z^{2}p''(z);z\right)\prec h(z),\quad z\in U,\qquad(1.1)$$

then $p$ is called a solution of the differential subordination. The univalent function $q$ is called a dominant of the solutions of the differential subordination, or more simply a dominant, if $p\prec q$ for all $p$ satisfying (1.1). A dominant $\widetilde{q}$ that satisfies $\widetilde{q}\prec q$ for all dominants $q$ of (1.1) is said to be the best dominant of (1.1). The best dominant is unique up to a rotation of $U$.

Let $\psi:\mathbb{C}^{3}\times U\to\mathbb{C}$ and let $h$ be analytic in $U$. If $p$ and $\psi\left(p(z),zp'(z),z^{2}p''(z);z\right)$ are univalent and $p$ satisfies the second-order differential superordination

$$h(z)\prec\psi\left(p(z),zp'(z),z^{2}p''(z);z\right),\quad z\in U,\qquad(1.2)$$

then $p$ is a solution of the differential superordination (1.2) (if $f$ is subordinate to $F$, then $F$ is said to be superordinate to $f$). An analytic function $q$ is called a subordinant if $q\prec p$ for all $p$ satisfying (1.2). A univalent subordinant $\widetilde{q}$ that satisfies $q\prec\widetilde{q}$ for all subordinants $q$ of (1.2) is said to be the best subordinant.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 125-132, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC

Miller and Mocanu [17] obtained conditions on $h$, $q$ and $\psi$ for which the following implication holds:

$$h(z)\prec\psi\left(p(z),zp'(z),z^{2}p''(z);z\right)\ \Rightarrow\ q(z)\prec p(z).$$

For two functions $f(z)=z+\sum_{j=2}^{\infty}a_jz^{j}$ and $g(z)=z+\sum_{j=2}^{\infty}b_jz^{j}$ analytic in the open unit disc $U$, the Hadamard product (or convolution product) of $f(z)$ and $g(z)$, written as $(f*g)(z)$, is defined by

$$f(z)*g(z)=(f*g)(z)=z+\sum_{j=2}^{\infty}a_jb_jz^{j}.$$

Definition 1.1 (Al-Oboudi [7]) For $f\in A$, $\lambda\ge0$ and $m\in\mathbb{N}$, the operator $D_{\lambda}^{m}$ is defined by $D_{\lambda}^{m}:A\to A$,

$$D_{\lambda}^{0}f(z)=f(z),$$
$$D_{\lambda}^{1}f(z)=(1-\lambda)f(z)+\lambda zf'(z)=D_{\lambda}f(z),$$
$$\dots$$
$$D_{\lambda}^{m}f(z)=(1-\lambda)D_{\lambda}^{m-1}f(z)+\lambda z\left(D_{\lambda}^{m-1}f(z)\right)'=D_{\lambda}\!\left(D_{\lambda}^{m-1}f(z)\right),\quad z\in U.$$

Remark 1.1 If $f\in A$ and $f(z)=z+\sum_{j=2}^{\infty}a_jz^{j}$, then $D_{\lambda}^{m}f(z)=z+\sum_{j=2}^{\infty}\left[1+(j-1)\lambda\right]^{m}a_jz^{j}$, for $z\in U$.

Remark 1.2 For $\lambda=1$ in the above definition we obtain the Salagean differential operator [20].

Definition 1.2 (Ruscheweyh [19]) For $f\in A$ and $n\in\mathbb{N}$, the operator $R^{n}$ is defined by $R^{n}:A\to A$,

$$R^{0}f(z)=f(z),$$
$$R^{1}f(z)=zf'(z),$$
$$\dots$$
$$(n+1)R^{n+1}f(z)=z\left(R^{n}f(z)\right)'+nR^{n}f(z),\quad z\in U.$$

Remark 1.3 If $f\in A$, $f(z)=z+\sum_{j=2}^{\infty}a_jz^{j}$, then $R^{n}f(z)=z+\sum_{j=2}^{\infty}\frac{(n+j-1)!}{n!\,(j-1)!}\,a_jz^{j}$ for $z\in U$.

The purpose of this paper is to derive several subordination and superordination results involving a differential operator. Furthermore, we study the results of M. Darus and K. Al-Shaqsi [15], Shanmugam, Ramachandran, Darus and Sivasubramanian [21], and R.W. Ibrahim and M. Darus [16].

In order to prove our subordination and superordination results, we make use of the following known results.

Definition 1.3 [18] Denote by $Q$ the set of all functions $f$ that are analytic and injective on $\overline{U}\setminus E(f)$, where $E(f)=\left\{\zeta\in\partial U:\lim_{z\to\zeta}f(z)=\infty\right\}$, and are such that $f'(\zeta)\ne0$ for $\zeta\in\partial U\setminus E(f)$.

Lemma 1.1 [18] Let the function $q$ be univalent in the unit disc $U$ and let $\theta$ and $\phi$ be analytic in a domain $D$ containing $q(U)$, with $\phi(w)\ne0$ when $w\in q(U)$. Set $Q(z)=zq'(z)\,\phi(q(z))$ and $h(z)=\theta(q(z))+Q(z)$. Suppose that

1. $Q$ is starlike univalent in $U$, and
2. $\operatorname{Re}\left(\frac{zh'(z)}{Q(z)}\right)>0$ for $z\in U$.

If $p$ is analytic with $p(0)=q(0)$, $p(U)\subseteq D$ and

$$\theta(p(z))+zp'(z)\,\phi(p(z))\prec\theta(q(z))+zq'(z)\,\phi(q(z)),$$

then $p(z)\prec q(z)$ and $q$ is the best dominant.


Lemma 1.2 [14] Let the function $q$ be convex univalent in the open unit disc $U$ and let $\nu$ and $\phi$ be analytic in a domain $D$ containing $q(U)$. Suppose that

1. $\operatorname{Re}\left(\frac{\nu'(q(z))}{\phi(q(z))}\right)>0$ for $z\in U$, and
2. $\psi(z)=zq'(z)\,\phi(q(z))$ is starlike univalent in $U$.

If $p(z)\in H[q(0),1]\cap Q$, with $p(U)\subseteq D$, $\nu(p(z))+zp'(z)\,\phi(p(z))$ is univalent in $U$, and

$$\nu(q(z))+zq'(z)\,\phi(q(z))\prec\nu(p(z))+zp'(z)\,\phi(p(z)),$$

then $q(z)\prec p(z)$ and $q$ is the best subordinant.

2 Main results

Definition 2.1 Let $\lambda\ge0$ and $m,n\in\mathbb{N}$. Denote by $DR_{\lambda}^{m,n}:A\to A$ the operator given by the Hadamard product of the generalized Salagean operator $D_{\lambda}^{m}$ and the Ruscheweyh operator $R^{n}$,

$$DR_{\lambda}^{m,n}f(z)=\left(D_{\lambda}^{m}*R^{n}\right)f(z),$$

for any $z\in U$ and each pair of nonnegative integers $m,n$.

Remark 2.1 If $f\in A$ and $f(z)=z+\sum_{j=2}^{\infty}a_jz^{j}$, then

$$DR_{\lambda}^{m,n}f(z)=z+\sum_{j=2}^{\infty}\left[1+(j-1)\lambda\right]^{m}\frac{(n+j-1)!}{n!\,(j-1)!}\,a_j^{2}z^{j},\quad z\in U.$$

This operator was studied in [12].
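A hedged worked example (ours, not from the paper): for $f(z)=z+a_2z^{2}$, only the $j=2$ coefficient survives, and the two factor operators give

```latex
D_{\lambda}^{m}f(z)=z+(1+\lambda)^{m}a_{2}z^{2},\qquad
R^{n}f(z)=z+(n+1)\,a_{2}z^{2},
% the Hadamard product multiplies coefficients of like powers:
DR_{\lambda}^{m,n}f(z)=\left(D_{\lambda}^{m}*R^{n}\right)f(z)
  =z+(1+\lambda)^{m}(n+1)\,a_{2}^{2}\,z^{2},
```

matching Remark 2.1 at $j=2$, where $\left[1+(j-1)\lambda\right]^{m}=(1+\lambda)^{m}$ and $\frac{(n+j-1)!}{n!\,(j-1)!}=n+1$.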

Remark 2.2 For $\lambda=1$, $m=n$, we obtain the Hadamard product $SR^{n}$ [1] of the Salagean operator $S^{n}$ and the Ruscheweyh derivative $R^{n}$, which was studied in [2], [3].

Remark 2.3 For $m=n$ we obtain the Hadamard product $DR_{\lambda}^{n}$ [4] of the generalized Salagean operator $D_{\lambda}^{n}$ and the Ruscheweyh derivative $R^{n}$, which was studied in [5], [6], [8], [9], [10], [11].

Using a simple computation one obtains the next result.

Proposition 2.1 [12] For $m,n\in\mathbb{N}$ and $\lambda\ge0$ we have

$$z\left(DR_{\lambda}^{m,n}f(z)\right)'=(n+1)\,DR_{\lambda}^{m,n+1}f(z)-n\,DR_{\lambda}^{m,n}f(z).\qquad(2.1)$$
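A sketch of why (2.1) holds (our coefficientwise verification, using the expansion of Remark 2.1): write $B_j^{(n)}=\left[1+(j-1)\lambda\right]^{m}\frac{(n+j-1)!}{n!\,(j-1)!}$, so that $DR_{\lambda}^{m,n}f(z)=z+\sum_{j\ge2}B_j^{(n)}a_j^{2}z^{j}$. Comparing the coefficient of $z^{j}$ on both sides of (2.1) reduces it to a factorial identity:

```latex
(n+1)\,B_j^{(n+1)}-n\,B_j^{(n)}
  =\left[1+(j-1)\lambda\right]^{m}\frac{(n+j)!-n\,(n+j-1)!}{n!\,(j-1)!}
  =\left[1+(j-1)\lambda\right]^{m}\frac{j\,(n+j-1)!}{n!\,(j-1)!}
  =j\,B_j^{(n)},
```

which is exactly the coefficient of $z^{j}$ in $z\left(DR_{\lambda}^{m,n}f(z)\right)'$; the linear term checks too, since $(n+1)z-nz=z$.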

We begin with the following.

Theorem 2.2 Let $\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\in H(U)$, $z\in U$, $f\in A$, $m,n\in\mathbb{N}$, $\lambda\ge0$, and let the function $q(z)$ be convex and univalent in $U$ with $q(0)=1$. Assume that

$$\operatorname{Re}\left(1+\frac{\alpha}{\beta}\,q(z)-\frac{zq'(z)}{q(z)}+\frac{zq''(z)}{q'(z)}\right)>0,\quad z\in U,\qquad(2.2)$$

for $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$, $z\in U$, and let

$$\psi_{\lambda}^{m,n}(\alpha,\beta;z):=\beta(n+1)\,\frac{\left(DR_{\lambda}^{m,n+1}f(z)\right)'}{\left(DR_{\lambda}^{m,n}f(z)\right)'}+(\alpha-\beta)\,\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}-\beta n.\qquad(2.3)$$

If $q$ satisfies the following subordination

$$\psi_{\lambda}^{m,n}(\alpha,\beta;z)\prec\alpha q(z)+\frac{\beta zq'(z)}{q(z)},\qquad(2.4)$$

for $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$, then

$$\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\prec q(z),\quad z\in U,\qquad(2.5)$$

and $q$ is the best dominant.

Proof. Our aim is to apply Lemma 1.1. Let the function $p$ be defined by $p(z):=\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}$, $z\in U$, $z\ne0$, $f\in A$. The function $p$ is analytic in $U$ and $p(0)=1$. Differentiating this function with respect to $z$, we get

$$zp'(z)=\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}+\frac{z^{2}\left(DR_{\lambda}^{m,n}f(z)\right)''}{DR_{\lambda}^{m,n}f(z)}-\left[\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\right]^{2}.$$

By using the identity (2.1), we obtain

$$\frac{zp'(z)}{p(z)}=(n+1)\,\frac{\left(DR_{\lambda}^{m,n+1}f(z)\right)'}{\left(DR_{\lambda}^{m,n}f(z)\right)'}-\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}-n.\qquad(2.6)$$

By setting $\theta(w):=\alpha w$ and $\phi(w):=\frac{\beta}{w}$, $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$, it can be easily verified that $\theta$ is analytic in $\mathbb{C}$, $\phi$ is analytic in $\mathbb{C}\setminus\{0\}$, and that $\phi(w)\ne0$ for $w\in\mathbb{C}\setminus\{0\}$. Also, by letting $Q(z)=zq'(z)\,\phi(q(z))=\frac{\beta zq'(z)}{q(z)}$, we find that $Q(z)$ is starlike univalent in $U$. Let

$$h(z)=\theta(q(z))+Q(z)=\alpha q(z)+\frac{\beta zq'(z)}{q(z)},\quad z\in U.$$

If we differentiate the function $Q$ with respect to $z$ and perform the calculations, we have

$$\operatorname{Re}\left(\frac{zh'(z)}{Q(z)}\right)=\operatorname{Re}\left(1+\frac{\alpha}{\beta}\,q(z)-\frac{zq'(z)}{q(z)}+\frac{zq''(z)}{q'(z)}\right)>0.$$

By using (2.6), we obtain

$$\alpha p(z)+\frac{\beta zp'(z)}{p(z)}=\alpha\,\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}+\beta\left[(n+1)\,\frac{\left(DR_{\lambda}^{m,n+1}f(z)\right)'}{\left(DR_{\lambda}^{m,n}f(z)\right)'}-\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}-n\right]$$

$$=\beta(n+1)\,\frac{\left(DR_{\lambda}^{m,n+1}f(z)\right)'}{\left(DR_{\lambda}^{m,n}f(z)\right)'}+(\alpha-\beta)\,\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}-\beta n=\psi_{\lambda}^{m,n}(\alpha,\beta;z).$$

By using (2.4), we have

$$\alpha p(z)+\frac{\beta zp'(z)}{p(z)}\prec\alpha q(z)+\frac{\beta zq'(z)}{q(z)}.$$

Therefore, the conditions of Lemma 1.1 are met, so we have $p(z)\prec q(z)$, $z\in U$, i.e.

$$\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\prec q(z),\quad z\in U,$$

and $q$ is the best dominant. $\square$

Corollary 2.3 Let $q(z)=\frac{1+Az}{1+Bz}$, $-1\le B<A\le1$, $m,n\in\mathbb{N}$, $\lambda\ge0$, $z\in U$. Assume that (2.2) holds. If $f\in A$ and

$$\psi_{\lambda}^{m,n}(\alpha,\beta;z)\prec\alpha\,\frac{1+Az}{1+Bz}+\beta\,\frac{(A-B)z}{(1+Az)(1+Bz)},$$

for $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$, $-1\le B<A\le1$, where $\psi_{\lambda}^{m,n}$ is defined in (2.3), then

$$\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\prec\frac{1+Az}{1+Bz}$$

and $\frac{1+Az}{1+Bz}$ is the best dominant.

Proof. For $q(z)=\frac{1+Az}{1+Bz}$, $-1\le B<A\le1$, in Theorem 2.2 we get the corollary.
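The right-hand side of the subordination above is just $\alpha q(z)+\frac{\beta zq'(z)}{q(z)}$ evaluated for this $q$ (our computation):

```latex
q(z)=\frac{1+Az}{1+Bz},\qquad
q'(z)=\frac{A-B}{(1+Bz)^{2}},\qquad
\frac{zq'(z)}{q(z)}
  =\frac{(A-B)z}{(1+Bz)^{2}}\cdot\frac{1+Bz}{1+Az}
  =\frac{(A-B)z}{(1+Az)(1+Bz)}.
```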


Corollary 2.4 Let $q(z)=\frac{1+z}{1-z}$, $m,n\in\mathbb{N}$, $\lambda\ge0$, $z\in U$. Assume that (2.2) holds. If $f\in A$ and

$$\psi_{\lambda}^{m,n}(\alpha,\beta;z)\prec\alpha\,\frac{1+z}{1-z}+\beta\,\frac{2z}{1-z^{2}},$$

for $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$, where $\psi_{\lambda}^{m,n}$ is defined in (2.3), then

$$\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\prec\frac{1+z}{1-z},$$

and $\frac{1+z}{1-z}$ is the best dominant.

Proof. The corollary follows by using Theorem 2.2 for $q(z)=\frac{1+z}{1-z}$.

Theorem 2.5 Let $q$ be convex and univalent in $U$, with $q(0)=1$, $m,n\in\mathbb{N}$, $\lambda\ge0$. Assume that

$$\operatorname{Re}\left(\frac{\alpha}{\beta}\,q(z)\right)>0,\quad\text{for }\alpha,\beta\in\mathbb{C},\ \beta\ne0,\ z\in U.\qquad(2.7)$$

If $f\in A$, $\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\in H[q(0),1]\cap Q$ and $\psi_{\lambda}^{m,n}(\alpha,\beta;z)$ is univalent in $U$, where $\psi_{\lambda}^{m,n}(\alpha,\beta;z)$ is as defined in (2.3), then

$$\alpha q(z)+\frac{\beta zq'(z)}{q(z)}\prec\psi_{\lambda}^{m,n}(\alpha,\beta;z),\quad z\in U,\qquad(2.8)$$

implies

$$q(z)\prec\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)},\quad z\in U,\qquad(2.9)$$

and $q$ is the best subordinant.

Proof. Let the function $p$ be defined by $p(z):=\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}$, $z\in U$, $z\ne0$, $f\in A$.

By setting $\nu(w):=\alpha w$ and $\phi(w):=\frac{\beta}{w}$, it can be easily verified that $\nu$ is analytic in $\mathbb{C}$, $\phi$ is analytic in $\mathbb{C}\setminus\{0\}$, and that $\phi(w)\ne0$ for $w\in\mathbb{C}\setminus\{0\}$.

Since $q$ is a convex and univalent function, it follows that $\operatorname{Re}\left(\frac{\nu'(q(z))}{\phi(q(z))}\right)=\operatorname{Re}\left(\frac{\alpha}{\beta}\,q(z)\right)>0$, for $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$.

By using (2.8) we obtain

$$\alpha q(z)+\frac{\beta zq'(z)}{q(z)}\prec\alpha p(z)+\frac{\beta zp'(z)}{p(z)}.$$

Using Lemma 1.2, we have

$$q(z)\prec p(z)=\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)},\quad z\in U,$$

and $q$ is the best subordinant. $\square$


Corollary 2.6 Let $q(z)=\frac{1+Az}{1+Bz}$, $-1\le B<A\le1$, $m,n\in\mathbb{N}$, $\lambda\ge0$. Assume that (2.7) holds. If $f\in A$, $\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\in H[q(0),1]\cap Q$ and

$$\alpha\,\frac{1+Az}{1+Bz}+\beta\,\frac{(A-B)z}{(1+Az)(1+Bz)}\prec\psi_{\lambda}^{m,n}(\alpha,\beta;z),$$

for $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$, $-1\le B<A\le1$, where $\psi_{\lambda}^{m,n}$ is defined in (2.3), then

$$\frac{1+Az}{1+Bz}\prec\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}$$

and $\frac{1+Az}{1+Bz}$ is the best subordinant.

Proof. For $q(z)=\frac{1+Az}{1+Bz}$, $-1\le B<A\le1$, in Theorem 2.5 we get the corollary.

Corollary 2.7 Let $q(z)=\frac{1+z}{1-z}$, $m,n\in\mathbb{N}$, $\lambda\ge0$. Assume that (2.7) holds. If $f\in A$, $\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\in H[q(0),1]\cap Q$ and

$$\alpha\,\frac{1+z}{1-z}+\beta\,\frac{2z}{1-z^{2}}\prec\psi_{\lambda}^{m,n}(\alpha,\beta;z),$$

for $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$, where $\psi_{\lambda}^{m,n}$ is defined in (2.3), then

$$\frac{1+z}{1-z}\prec\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}$$

and $\frac{1+z}{1-z}$ is the best subordinant.

Proof. The corollary follows by using Theorem 2.5 for $q(z)=\frac{1+z}{1-z}$.

Combining Theorem 2.2 and Theorem 2.5, we state the following sandwich theorem.

Theorem 2.8 Let $q_1$ and $q_2$ be analytic and univalent in $U$ such that $q_1(z)\ne0$ and $q_2(z)\ne0$ for all $z\in U$, with $zq_1'(z)$ and $zq_2'(z)$ being starlike univalent. Suppose that $q_1$ satisfies (2.2) and $q_2$ satisfies (2.7). If $f\in A$, $\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\in H[q(0),1]\cap Q$, and $\psi_{\lambda}^{m,n}(\alpha,\beta;z)$ as defined in (2.3) is univalent in $U$, then

$$\alpha q_1(z)+\frac{\beta zq_1'(z)}{q_1(z)}\prec\psi_{\lambda}^{m,n}(\alpha,\beta;z)\prec\alpha q_2(z)+\frac{\beta zq_2'(z)}{q_2(z)},$$

for $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$, implies

$$q_1(z)\prec\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\prec q_2(z),$$

and $q_1$ and $q_2$ are, respectively, the best subordinant and the best dominant.

For $q_1(z)=\frac{1+A_1z}{1+B_1z}$, $q_2(z)=\frac{1+A_2z}{1+B_2z}$, where $-1\le B_2<B_1<A_1<A_2\le1$, we have the following corollary.


Corollary 2.9 Let $m,n\in\mathbb{N}$, $\lambda\ge0$. Assume that (2.2) and (2.7) hold for $q_1(z)=\frac{1+A_1z}{1+B_1z}$ and $q_2(z)=\frac{1+A_2z}{1+B_2z}$, respectively. If $f\in A$, $\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\in H[q(0),1]\cap Q$ and

$$\alpha\,\frac{1+A_1z}{1+B_1z}+\beta\,\frac{(A_1-B_1)z}{(1+A_1z)(1+B_1z)}\prec\psi_{\lambda}^{m,n}(\alpha,\beta;z)\prec\alpha\,\frac{1+A_2z}{1+B_2z}+\beta\,\frac{(A_2-B_2)z}{(1+A_2z)(1+B_2z)},$$

for $\alpha,\beta\in\mathbb{C}$, $\beta\ne0$, $-1\le B_2\le B_1<A_1\le A_2\le1$, where $\psi_{\lambda}^{m,n}$ is defined in (2.3), then

$$\frac{1+A_1z}{1+B_1z}\prec\frac{z\left(DR_{\lambda}^{m,n}f(z)\right)'}{DR_{\lambda}^{m,n}f(z)}\prec\frac{1+A_2z}{1+B_2z},$$

hence $\frac{1+A_1z}{1+B_1z}$ and $\frac{1+A_2z}{1+B_2z}$ are the best subordinant and the best dominant, respectively.


Predictor-corrector type method for solving certain nonconvex equilibrium problems

Wasim Ul-Haq^{1,2}, Manzoor Hussain^{2}

1 Department of Mathematics, College of Science in Al-Zulfi, Majmaah University, Kingdom of Saudi Arabia

2 Department of Mathematics, Abdul Wali Khan University Mardan, Pakistan

Email: [email protected], [email protected]

Abstract: In this paper, we propose and analyze a new predictor-corrector type method for solving uniformly regular invex equilibrium problems by using the auxiliary principle technique. The convergence criteria of this new method under some mild restrictions are also studied. New and known methods for solving variational-like inequalities and equilibrium problems appear as special consequences of our work.

Key Words: Equilibrium problem, Invex function, Prox regular set.

MSC: 49J40, 91B50

1 Introduction

A variety of problems arising in optimization, economics, finance, networking, transportation, structural analysis and elasticity can be studied in the framework of equilibrium problems. The theory of equilibrium problems provides productive and innovative techniques to investigate a wide range of such problems. Blum and Oettli [1] and Noor and Oettli [2] have shown that variational inequalities and mathematical programming problems can be seen as special cases of the abstract equilibrium problem.

There are many numerical methods for solving variational inequalities, including the projection technique and its variant forms, Wiener-Hopf equations, the auxiliary principle technique and the resolvent equations method. However, it is known that the projection, Wiener-Hopf equations, and proximal and resolvent equations techniques cannot be extended and generalized to suggest and analyze similar iterative methods for solving uniformly regular invex equilibrium problems and variational-like inequalities, due to the presence of the function η(·, ·). To overcome this shortcoming, one may use the auxiliary principle technique. Several interesting generalizations and extensions of classical convexity have been studied and investigated in recent years. Hanson [3] introduced invex functions as a generalization of convex functions. Later, subsequent works inspired by Hanson's result have greatly expanded the role

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 133-142, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


and applications of invexity in nonlinear optimization and other branches of the pure and applied sciences. The basic properties and role of preinvex functions in optimization, equilibrium problems and variational inequalities were studied by Noor [2,4,5] and Weir and Mond [8]. Motivated by the above and by the recent work of Noor et al. [47], we use the auxiliary principle technique, which is mainly due to Glowinski et al. [12], to propose and analyze a new predictor-corrector type method for solving uniformly regular invex equilibrium problems. We also study the convergence of this new method under some suitable conditions.

2 Preliminaries

Let ⟨·, ·⟩ and ‖·‖ denote the inner product and norm, respectively, in a real Hilbert space H. In the sequel, let f : K → H and η(·, ·) : K × K → H be continuous functions, where K is a nonempty closed convex set in H. We now recall some basic definitions and concepts of non-smooth analysis; for details see [48,49].

Definition 1. The proximal normal cone N^P_K(u) of the set K at u is defined by

N^P_K(u) = {ξ ∈ H : u ∈ P_K[u + αξ]},

where α > 0 is a constant and P_K[·] is the projection of H onto the set K, defined as

P_K[u] = {u* ∈ K : d_K(u) = ‖u − u*‖}.

Here d_K(·) is the usual distance function to the subset K, defined as

d_K(u) = inf_{v∈K} ‖v − u‖.
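To make P_K and d_K concrete (an illustration of ours, not from the paper): for a closed Euclidean ball both maps have closed forms, so the definitions above can be checked directly.

```python
import numpy as np

def project_ball(u, center, radius):
    """P_K[u] for the closed ball K = {v : ||v - center|| <= radius}."""
    d = u - center
    n = np.linalg.norm(d)
    return u.copy() if n <= radius else center + radius * d / n

def dist_ball(u, center, radius):
    """d_K(u) = inf_{v in K} ||v - u||; for a ball: max(||u - center|| - radius, 0)."""
    return max(np.linalg.norm(u - center) - radius, 0.0)

c, r = np.zeros(2), 1.0
u = np.array([3.0, 4.0])              # ||u|| = 5, so u lies outside K
p = project_ball(u, c, r)             # -> [0.6, 0.8]
# u - p is a proximal normal direction at p: p = P_K[p + (u - p)],
# matching Definition 1 with alpha = 1.
assert np.allclose(project_ball(p + (u - p), c, r), p)
```
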

The proximal normal cone has the following characterization.

Lemma 1. Let K be a closed subset of H. Then ξ ∈ N^P_K(u) if and only if there exists a constant α > 0 such that

⟨ξ, v − u⟩ ≤ α‖v − u‖², ∀v ∈ K.

Definition 2. The Clarke normal cone, denoted by N^C_K(u), is defined as

N^C_K(u) = cl co[N^P_K(u)], ∀u ∈ K,

where cl co means the closure of the convex hull. Clearly N^P_K(u) ⊆ N^C_K(u), but the converse is not true. Note that N^C_K(u) is always closed and convex, whereas N^P_K(u) is convex but may not be closed; see [49].

Poliquin et al. [49] and Clarke et al. [48] have introduced and studied a new class of nonconvex sets, which are also called uniformly prox-regular sets. This class of uniformly prox-regular sets has played an important role in many nonconvex

Predictor Corrector type method, Wasim Ul-Haq, Manzoor Hussain


applications such as optimization, dynamical systems and differential inclusions. In particular, we have

Definition 3. For a given r ∈ (0, ∞], a subset K_r is said to be uniformly r-proximal regular if and only if, for each u ∈ K_r and all proximal normal vectors 0 ≠ ξ ∈ N^P_K(u),

⟨ξ/‖ξ‖, η(v, u)⟩ ≤ (1/2r)‖η(v, u)‖², ∀v ∈ K_r.

The class of uniformly prox-regular sets contains the class of convex sets, p-convex sets, C^{1,1} submanifolds (possibly with boundary) of H, the images under a C^{1,1} diffeomorphism of convex sets and many other nonconvex sets; see [48,49]. It is simple to note that if r = ∞, then the uniformly r-proximal regular set K reduces to a convex set. This observation plays a key role in this work. It is known that if K is a uniformly r-prox-regular set, then the proximal normal cone N^P_K(u) is closed as a set-valued mapping. Thus we have N^C_K(u) = N^P_K(u). From now onward, the set K_r is considered to be a uniformly proximal regular invex set. For a given continuous bifunction F(·, ·) : K_r × K_r → R, consider the problem of finding u ∈ K_r such that

F(u, v) + (k/2r)‖η(v, u)‖² ≥ 0, ∀v ∈ K_r,

where k is a positive constant. Letting γ = k/2r, so that γ = 0 for r = ∞, the above problem becomes

F(u, v) + γ‖η(v, u)‖² ≥ 0, ∀v ∈ K_r. (2.1)

This type of problem is called the uniformly regular invex equilibrium problem. Note that for η(v, u) = v − u, the uniformly regular invex equilibrium problem reduces to the uniformly regular equilibrium problem introduced and studied by Noor [26; 3 5] and Noor and Noor [30], and for r = ∞ the uniformly regular invex equilibrium problem reduces to the classical equilibrium problem introduced and studied by Blum and Oettli [1] and Noor and Oettli [2]. If F(u, v) = ⟨Tu, η(v, u)⟩, where T : H → H is a nonlinear continuous operator, then problem (2.1) is equivalent to finding u ∈ K_r such that

⟨Tu + γ‖η(v, u)‖², η(v, u)⟩ ≥ 0, ∀v ∈ K_r, (2.2)

which is a variant form of the variational-like inequality problem. Note that for r = ∞, problem (2.2) reduces to the variational-like inequality problem studied by various authors in recent years; see Noor [19] and Yang and Chen [43]. If η(v, u) = v − u and r = ∞, the uniformly r-proximal regular set reduces to the convex set K, and problem (2.2) is equivalent to finding u ∈ K such that

⟨Tu, v − u⟩ ≥ 0, ∀v ∈ K, (2.3)

which is known as the classical variational inequality, introduced and studied by Stampacchia [40]. For recent applications, numerical methods and formulations of variational inequalities, variational-like inequalities and equilibrium problems, see [1-51] and the references therein.

Definition 4. The function F(·, ·) : K × K → R is said to be:



(i) pseudomonotone if

F(u, v) ≥ 0 ⟹ F(v, u) ≤ 0, ∀u, v ∈ K;

(ii) partially relaxed strongly monotone if there exists a constant α > 0 such that

F(u, v) + F(v, z) ≤ α‖η(z, u)‖², ∀u, v, z ∈ K.

Note that for z = u we have η(u, u) = 0 for all u ∈ K; thus partially relaxed strong monotonicity reduces to

F(u, v) + F(v, u) ≤ 0, ∀u, v ∈ K,

which is known as the monotonicity of F(·, ·). It is obvious that monotonicity implies pseudomonotonicity, but the converse is not true.

3 Iterative methods and convergence analysis

In this section, we introduce some iterative schemes for solving the uniformly regular invex equilibrium problem (2.1) by using the auxiliary principle technique, which is due to Glowinski et al. [12].

For a given u ∈ K_r, consider the auxiliary uniformly regular invex equilibrium problem of finding w ∈ K_r such that

ρ[F(u, v) + γ‖η(v, u)‖²] + ⟨E′(w) − E′(u), η(v, w)⟩ ≥ 0, ∀v ∈ K_r, (3.1)

where ρ is a positive constant and E′(u) is the differential of a strongly preinvex function E at u ∈ K_r. From the strong preinvexity of the differentiable function E(u), it follows that problem (3.1) has a unique solution. Note that if w = u, then w is a solution of the uniformly regular invex equilibrium problem (2.1). This observation leads us to propose and analyze the following iterative scheme for solving (2.1).

Algorithm 1. Given u0 ∈ H, calculate un+1 from the following iterative scheme:

ρ[F(wn, v) + γ‖η(v, wn)‖²] + ⟨E′(un+1) − E′(wn), η(v, un+1)⟩ ≥ 0, ∀v ∈ K_r, (3.2)

μ[F(un, v) + γ‖η(v, un)‖²] + ⟨E′(wn) − E′(un), η(v, wn)⟩ ≥ 0, ∀v ∈ K_r, (3.3)

where ρ and μ are both positive constants.

If F(u, v) = ⟨Tu, η(v, u)⟩, then we have

Algorithm 2. Given u0 ∈ H, calculate un+1 through the iterative scheme

⟨ρ[Twn + γ‖η(v, wn)‖²] + E′(un+1) − E′(wn), η(v, un+1)⟩ ≥ 0, ∀v ∈ K_r,



⟨μ[Tun + γ‖η(v, un)‖²] + E′(wn) − E′(un), η(v, wn)⟩ ≥ 0, ∀v ∈ K_r,

where ρ and μ are both positive constants.

When r = +∞, then γ = 0, so the uniformly regular invex equilibrium problem reduces to the equilibrium problem, i.e.,

F(u, v) ≥ 0, ∀v ∈ K,

and the uniformly prox-regular set K_r becomes the convex set K. Thus, we have

Algorithm 3 [47]. For u0 ∈ H, we calculate un+1 through the following iterative scheme:

ρF(wn, v) + ⟨E′(un+1) − E′(wn), v − un+1⟩ ≥ 0, ∀v ∈ K,

μF(un, v) + ⟨E′(wn) − E′(un), v − wn⟩ ≥ 0, ∀v ∈ K,

where ρ and μ are both positive constants.

Now, if F(u, v) = ⟨Tu, v − u⟩, then we have

Algorithm 4 [47]. For u0 ∈ H, we calculate un+1 through the following iterative scheme:

⟨ρTwn + E′(un+1) − E′(wn), v − un+1⟩ ≥ 0, ∀v ∈ K,

⟨μTun + E′(wn) − E′(un), v − wn⟩ ≥ 0, ∀v ∈ K,

where ρ and μ are both positive constants.
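To see the predictor-corrector structure numerically, note that in the convex case with the common choice E(u) = ½‖u‖² (so E′(u) = u) and η(v, u) = v − u, each inequality in Algorithm 4 characterizes a projection: wn = P_K[un − μTun] (predictor) and un+1 = P_K[wn − ρTwn] (corrector). The following sketch runs this two-step scheme; the box K, the affine operator T and all names are illustrative assumptions of ours, not taken from the paper.

```python
import numpy as np

def proj_box(u, lo=0.0, hi=1.0):
    """Projection P_K onto the box K = [lo, hi]^d (componentwise clipping)."""
    return np.clip(u, lo, hi)

def predictor_corrector(T, u0, rho=0.5, mu=0.5, iters=200):
    """Two-step scheme of Algorithm 4 with E(u) = ||u||^2 / 2, eta(v, u) = v - u."""
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        w = proj_box(u - mu * T(u))     # predictor step
        u = proj_box(w - rho * T(w))    # corrector step
    return u

a = np.array([2.0, -1.0])
T = lambda u: u - a                     # monotone, Lipschitz test operator
u_star = predictor_corrector(T, np.zeros(2))   # -> approximately [1.0, 0.0]
```

For this operator the iterates reach u* = P_K[a] = (1, 0), which indeed solves the variational inequality (2.3) on K = [0, 1]².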

We now study the convergence criteria of Algorithm 1. For this purpose, we need the following condition.

Assumption 1 [47]. For all u, v, z ∈ K, the function η(·, ·) satisfies the condition

η(u, v) = η(u, z) + η(z, v). (3.4)

This assumption has been used to study the existence of a solution of variational-like inequalities. Note that η(u, v) = 0 if and only if u = v, for all u, v ∈ K.
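For the prototypical choice η(v, u) = v − u (one admissible example, not the only one), condition (3.4) and the property η(u, v) = 0 ⟺ u = v hold identically; a quick numerical sanity check:

```python
import numpy as np

eta = lambda u, v: u - v                 # prototype bifunction

rng = np.random.default_rng(0)
u, v, z = rng.normal(size=(3, 4))        # three random vectors in R^4

# condition (3.4): eta(u, v) = eta(u, z) + eta(z, v)
assert np.allclose(eta(u, v), eta(u, z) + eta(z, v))
# eta(u, u) = 0
assert np.allclose(eta(u, u), 0.0)
```
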

Theorem 1. Suppose the function F(·, ·) is partially relaxed strongly monotone with constant α > 0, and let E(u) be a strongly preinvex function with constant β > 0. If 0 < ρ < β/(α + γ), 0 < μ < β/(α + γ), and Assumption 1 holds, then the approximate solution obtained from Algorithm 1 converges to a solution u ∈ K_r of the uniformly regular invex equilibrium problem (2.1).

Proof. Let u ∈ K_r be a solution of (2.1); see [1,19]. Then

ρ[F(u, v) + γ‖η(v, u)‖²] ≥ 0, ∀v ∈ K_r, (3.5)

μ[F(u, v) + γ‖η(v, u)‖²] ≥ 0, ∀v ∈ K_r, (3.6)



where ρ and μ are both positive constants. Taking v = un+1 in (3.5) and v = u in (3.2), we have

ρ[F(u, un+1) + γ‖η(un+1, u)‖²] ≥ 0, (3.7)

ρ[F(wn, u) + γ‖η(u, wn)‖²] + ⟨E′(un+1) − E′(wn), η(u, un+1)⟩ ≥ 0. (3.8)

Now we consider the generalized Bregman function

B(u, z) = E(u) − E(z) − ⟨E′(z), η(u, z)⟩ ≥ β‖η(u, z)‖², (3.9)

where we have used the fact that the function E(u) is strongly preinvex.

Combining (3.7)-(3.9) and (3.4), we have

B(u, wn) − B(u, un+1)
= E(un+1) − E(wn) − ⟨E′(wn), η(u, wn)⟩ + ⟨E′(un+1), η(u, un+1)⟩
= E(un+1) − E(wn) − ⟨E′(wn), η(un+1, wn)⟩ + ⟨E′(un+1) − E′(wn), η(u, un+1)⟩
≥ β‖η(un+1, wn)‖² + ⟨E′(un+1) − E′(wn), η(u, un+1)⟩
≥ β‖η(un+1, wn)‖² − ρ{F(u, un+1) + F(wn, u)} − ργ{‖η(un+1, u)‖² + ‖η(u, wn)‖²}
≥ {β − ρ(α + γ)}‖η(un+1, wn)‖²,

since F(·, ·) is partially relaxed strongly monotone with constant α > 0.

In a similar way, we have

B(u, un) − B(u, wn) ≥ β‖η(un, wn)‖² − μ{F(u, wn) + F(un, u)} − μγ{‖η(u, un)‖² + ‖η(wn, u)‖²}
≥ {β − μ(α + γ)}‖η(un, wn)‖²,

since F(·, ·) is partially relaxed strongly monotone with constant α > 0.

If un+1 = wn = un, then clearly un is a solution of (2.1). Otherwise, for 0 < ρ < β/(α + γ) and 0 < μ < β/(α + γ), the sequences B(u, wn) − B(u, un+1) and B(u, un) − B(u, wn) are nonnegative, and we must have

lim_{n→∞} ‖η(un+1, wn)‖ = 0 and lim_{n→∞} ‖η(wn, un)‖ = 0.

Thus, using (3.4),

lim_{n→∞} ‖η(un+1, un)‖ ≤ lim_{n→∞} ‖η(un+1, wn)‖ + lim_{n→∞} ‖η(wn, un)‖ = 0.



It follows that the sequence {un} is bounded. Let û be a cluster point of the sequence {un}, and let {u_{n_j}} be a subsequence converging to û. Now, using the technique of Zhu and Marcotte [46], it can be shown that the entire sequence {un} converges to the cluster point û satisfying the uniformly regular invex equilibrium problem (2.1).

References

[1] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student 63 (1994) 123-145.

[2] M. Aslam Noor, W. Oettli, On general nonlinear complementarity problems and quasi equilibria, Le Matematiche 49 (1994) 313-331.

[3] M.A. Hanson, On sufficiency of the Kuhn-Tucker conditions, J. Math. Anal. Appl. 80 (1981) 545-550.

[4] M. Aslam Noor, Invex equilibrium problems, J. Math. Anal. Appl. 302 (2005) 463-475.

[5] M. Aslam Noor, Fundamentals of equilibrium problems, Math. Inequal. Appl. 9 (2006) 529-566.

[6] M. Aslam Noor, Mixed quasi invex equilibrium problems, Int. J. Math. Math. Sci. 2004 (2004) 3057-3067.

[7] M. Aslam Noor, Differentiable nonconvex functions and general variational inequalities, Appl. Math. Comput. 199 (2008) 623-630.

[8] T. Weir, B. Mond, Preinvex functions in multiobjective optimization, J. Math. Anal. Appl. 136 (1988) 29-38.

[9] L.C. Ceng, J.C. Yao, A hybrid iterative scheme for mixed equilibrium problems and fixed point problems, J. Comput. Appl. Math. 214 (2008) 186-201.

[10] F. Giannessi, A. Maugeri, Variational inequalities and network equilibrium problems, Plenum Press, New York, 1995.

[11] F. Giannessi, A. Maugeri, P.M. Pardalos, Equilibrium problems: Nonsmooth optimization and variational inequality models, Kluwer Academic Publishers, Dordrecht, Holland, 2001.

[12] R. Glowinski, J.L. Lions, R. Tremolieres, Numerical analysis of variational inequalities, North-Holland, Amsterdam, 1981.

[13] K.R. Kazmi, F.A. Khan, Iterative approximation of a solution of multi-valued variational-like inclusion in Banach spaces: A P-η-proximal-point mapping approach, J. Math. Anal. Appl. 325 (2007) 665-674.



[14] K.R. Kazmi, F.A. Khan, Sensitivity analysis for parametric generalized implicit quasi-variational-like inclusions involving accretive mappings, J. Math. Anal. Appl. (2008).

[15] D. Kinderlehrer, G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, London, 1980.

[16] G. Mastroeni, Gap functions for equilibrium problems, J. Global Optim. 27 (2000) 411-426.

[17] S.R. Mohan, S.K. Neogy, On invex sets and preinvex functions, J. Math. Anal. Appl. 189 (1995) 901-908.

[18] M. Aslam Noor, General variational inequalities, Appl. Math. Lett. 1 (1988) 119-121.

[19] M. Aslam Noor, Variational-like inequalities, Optimization 30 (1994) 323-330.

[20] M. Aslam Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl. 251 (2000) 217-229.

[21] M. Aslam Noor, Mixed quasi variational inequalities, Appl. Math. Comput. 146 (2003) 553-578.

[22] M. Aslam Noor, Fundamentals of mixed quasi variational inequalities, Internat. J. Pure Appl. Math. 15 (2004) 137-258.

[23] M. Aslam Noor, Auxiliary principle technique for equilibrium problems, J. Optim. Theory Appl. 122 (2004) 371-386.

[24] M. Aslam Noor, K. Inayat Noor, Some characterizations of strongly preinvex functions, J. Math. Anal. Appl. 316 (2006) 697-706.

[25] M. Aslam Noor, Generalized mixed quasi-equilibrium problems with trifunction, Appl. Math. Lett. 18 (2005) 695-700.

[26] M. Aslam Noor, On a class of nonconvex equilibrium problems, Appl. Math. Comput. 157 (2004) 653-666.

[27] M. Aslam Noor, Mixed quasi invex equilibrium problems, Internat. J. Math. Math. Sci. 57 (2004) 3057-3067.

[28] M. Aslam Noor, On a class of general variational inequalities, J. Advanced Math. Studies 1 (2008) 31-42.

[29] M. Aslam Noor, Extended general variational inequalities, Appl. Math. Lett. 22 (2009) 182-186.

[30] M. Aslam Noor, K. Inayat Noor, On equilibrium problems, Appl. Math. E-Notes 4 (2004) 125-132.

[31] M. Aslam Noor, K. Inayat Noor, Hemiequilibrium-like problems, Nonlinear Anal. 64 (2006) 2631-2642.



[32] M. Aslam Noor, K. Inayat Noor, On strongly generalized preinvex functions, J. Inequal. Pure Appl. Math. 6 (2005) 1-8, Article 102.

[33] M. Aslam Noor, K. Inayat Noor, Some characterizations of strongly preinvex functions, J. Math. Anal. Appl. 316 (2006) 697-706.

[34] M. Aslam Noor, K. Inayat Noor, H. Yaqoob, On general mixed variational inequalities, Acta Appl. Math. 110 (2010) 227-246.

[35] M. Aslam Noor, K. Inayat Noor, Th.M. Rassias, Some aspects of variational inequalities, J. Comput. Appl. Math. 47 (1993) 285-312.

[36] M. Aslam Noor, Themistocles M. Rassias, On general hemiequilibrium problems, J. Math. Anal. Appl. 324 (2006) 1417-1428.

[37] M. Aslam Noor, K. Inayat Noor, V. Gupta, On equilibrium-like problems, Appl. Anal. 86 (2007) 807-818.

[38] J.W. Peng, D.L. Zhu, Three-step iterative algorithms for a system of set-valued variational inclusions with (H, η)-monotone operators, Nonlinear Anal. 68 (2008) 139-153.

[39] M. Patriksson, Nonlinear Programming and Variational Inequality Problems: A Unified Approach, Kluwer Academic Publishers, Dordrecht, 1999.

[40] G. Stampacchia, Formes bilineaires coercitives sur les ensembles convexes, C. R. Acad. Sci. Paris 258 (1964) 4413-4416.

[41] R.U. Verma, Sensitivity analysis for generalized strongly monotone variational inclusions on the (A, η)-resolvent operator technique, Appl. Math. Lett. 19 (2006) 1409-1413.

[42] R.U. Verma, Approximation solvability of a class of nonlinear set-valued variational inclusions involving (A, η)-monotone mappings, J. Math. Anal. Appl. 337 (2008) 969-975.

[43] X.Q. Yang, G.Y. Chen, A class of nonconvex functions and pre-variational inequalities, J. Math. Anal. Appl. 169 (1992) 359-373.

[44] X.M. Yang, X.Q. Yang, K.L. Teo, Generalized invexity and generalized invariant monotonicity, J. Optim. Theory Appl. 117 (2003) 607-625.

[45] Y. Yao, M. Aslam Noor, S. Zainab, Y.C. Liou, Mixed equilibrium problems and optimization problems, J. Math. Anal. Appl. 354 (2009) 319-329.

[46] D.L. Zhu, P. Marcotte, Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities, SIAM J. Optim. 6 (1996) 714-726.

[47] M. Aslam Noor, K. Inayat Noor, S. Zainab, On a predictor-corrector method for solving invex equilibrium problems, Nonlinear Anal. 71 (2009) 333-338.

[48] F.H. Clarke, Y.S. Ledyaev, R.J. Stern, P.R. Wolenski, Nonsmooth Analysis and Control Theory, Springer-Verlag, New York, NY, 1998.



[49] R.A. Poliquin, R.T. Rockafellar, L. Thibault, Local differentiability of distance functions, Trans. Amer. Math. Soc. 352 (2000) 5231-5249.



ON SOME ZWEIER I-CONVERGENT DIFFERENCE SEQUENCE SPACES

KULDIP RAJ AND SURUCHI PANDOH

Abstract. In the present paper we introduce some Zweier ideal convergent difference sequence spaces defined by a sequence of Orlicz functions. We study some topological properties and inclusion relations between these spaces. We also make an effort to study these sequence spaces over n-normed spaces.

1. Introduction and Preliminaries

Ideal convergence is a generalization of statistical convergence. Ideal convergence plays a vital role not only in pure mathematics but also in other branches of science, especially in computer science, information theory, biological science, dynamical systems, geographic information systems, and motion planning in robotics. Kostyrko et al. [14] introduced the notion of I-convergence based on the structure of an admissible ideal I of subsets of the set of natural numbers N. For more details about ideal convergence see ([10], [15], [24], [27]).
Let X be a non-empty set. A family of sets I ⊆ 2^X (the power set of X) is said to be an ideal if I is additive, that is, A, B ∈ I ⇒ A ∪ B ∈ I, and hereditary, that is, A ∈ I, B ⊆ A ⇒ B ∈ I. A non-empty family of sets £(I) ⊆ 2^X is said to be a filter on X if and only if Φ ∉ £(I), for A, B ∈ £(I) we have A ∩ B ∈ £(I), and for each A ∈ £(I), A ⊆ B implies B ∈ £(I). An ideal I ⊆ 2^X is called non-trivial if I ≠ 2^X. A non-trivial ideal I ⊆ 2^X is called admissible if {{x} : x ∈ X} ⊆ I. A non-trivial ideal I is maximal if there does not exist a non-trivial ideal J ≠ I containing I as a subset.

A subset A of N is said to have asymptotic density δ(A) if

δ(A) = lim_{n→∞} (1/n) Σ_{k=1}^{n} χ_A(k)

exists, where χ_A is the characteristic function of A.
If we take I = I_f = {A ⊆ N : A is a finite subset}, then I_f is a non-trivial admissible ideal of N and the corresponding convergence coincides with the usual convergence.
If we take I = I_δ = {A ⊆ N : δ(A) = 0}, where δ(A) denotes the asymptotic density of the set A, then I_δ is a non-trivial admissible ideal of N and the corresponding convergence coincides with statistical convergence.
Recently, some new sequence spaces defined by means of the matrix domain have been discussed by Malkowsky [17] and many others. Sengonul [25] defined the sequence y = (y_i), frequently used as the Z^p-transform of the sequence x = (x_i), that is,

y_i = p x_i + (1 − p) x_{i−1},

where x_{−1} = 0, 1 < p < ∞, and Z^p denotes the matrix Z^p = (z_{ik}) defined by

z_{ik} = p if i = k; z_{ik} = 1 − p if i − 1 = k; z_{ik} = 0 otherwise (i, k ∈ N).

2010 Mathematics Subject Classification. 40A05, 40D25, 40G15.
Key words and phrases. Ideal, I-convergence, Zweier sequence, Orlicz function, n-normed space.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 143-163, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC



Sengonul [25] introduced the Zweier sequence spaces Z and Z_0 as follows:

Z = {x = (x_k) ∈ w : Z^p x ∈ c}

and

Z_0 = {x = (x_k) ∈ w : Z^p x ∈ c_0},

where w denotes the space of all real or complex sequences x = (x_k). By l_∞, c and c_0 we denote the classes of all bounded, convergent and null sequences, respectively.
The notion of difference sequence spaces was introduced by Kızmaz [12], who studied the difference sequence spaces l_∞(∆), c(∆) and c_0(∆). The notion was further generalized by Et and Colak [3] by introducing the spaces l_∞(∆^n), c(∆^n) and c_0(∆^n). Let n be a nonnegative integer; then for z = c, c_0 and l_∞, we have the sequence spaces

z(∆^n) = {x = (x_k) ∈ w : (∆^n x_k) ∈ z},

where ∆^n x = (∆^n x_k) = (∆^{n−1} x_k − ∆^{n−1} x_{k+1}) and ∆^0 x_k = x_k for all k ∈ N, which is equivalent to the following binomial representation:

∆^n x_k = Σ_{v=0}^{n} (−1)^v C(n, v) x_{k+v}.
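The recursive and binomial forms of ∆^n can be checked against each other numerically (the helper names below are ours, for illustration only):

```python
from math import comb

def delta(x, n):
    """Apply the difference operator n times: (Δx)_k = x_k - x_{k+1}."""
    for _ in range(n):
        x = [x[k] - x[k + 1] for k in range(len(x) - 1)]
    return x

def delta_binomial(x, n):
    """Binomial form: (Δ^n x)_k = sum_{v=0}^{n} (-1)^v C(n, v) x_{k+v}."""
    return [sum((-1) ** v * comb(n, v) * x[k + v] for v in range(n + 1))
            for k in range(len(x) - n)]

x = [k ** 2 for k in range(8)]
delta(x, 2) == delta_binomial(x, 2)   # -> True (both give [2, 2, 2, 2, 2, 2])
```
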

Taking n = 1, we get the spaces studied by Kızmaz [12]. For more details about sequence spaces see ([19], [21], [22]) and references therein.
Let X be a linear metric space. A function p : X → R is called a paranorm if

(1) p(x) ≥ 0 for all x ∈ X,
(2) p(−x) = p(x) for all x ∈ X,
(3) p(x + y) ≤ p(x) + p(y) for all x, y ∈ X,
(4) if (λ_n) is a sequence of scalars with λ_n → λ as n → ∞ and (x_n) is a sequence of vectors with p(x_n − x) → 0 as n → ∞, then p(λ_n x_n − λx) → 0 as n → ∞.

A paranorm p for which p(x) = 0 implies x = 0 is called a total paranorm, and the pair (X, p) is called a total paranormed space. It is well known that the metric of any linear metric space is given by some total paranorm (see [28], Theorem 10.4.2, p. 183).
An Orlicz function M : [0, ∞) → [0, ∞) is a continuous, non-decreasing and convex function such that M(0) = 0 and M(x) > 0 for x > 0. Lindenstrauss and Tzafriri [16] used the idea of an Orlicz function to define the following sequence space. The Orlicz sequence space is defined as

ℓ_M = {x ∈ w : Σ_{k=1}^{∞} M(|x_k|/ρ) < ∞ for some ρ > 0}.

The space ℓ_M is a Banach space with the norm

‖x‖ = inf{ρ > 0 : Σ_{k=1}^{∞} M(|x_k|/ρ) ≤ 1}.

It is shown by Lindenstrauss and Tzafriri [16] that every Orlicz sequence space ℓ_M contains a subspace isomorphic to ℓ_p (p ≥ 1), which is an Orlicz sequence space with M(t) = t^p for 1 ≤ p < ∞. An Orlicz function M satisfies the ∆_2-condition if and only if for any constant L > 1 there exists a constant K(L) such that M(Lu) ≤ K(L)M(u) for all values of u ≥ 0. Later on, Orlicz sequence spaces were investigated by Parashar and Chaudhary [20], Esi [4], Tripathy et al. [26], Bhardwaj and Singh [1], Et [2], Esi and Et [5] and many others. Also, if M is an Orlicz function, then M(λx) ≤ λM(x) for all λ with 0 < λ < 1.
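The Luxemburg-type norm above can be evaluated numerically for a finitely supported x by bisection on ρ (the helper below is our sketch, not from the paper; the map ρ ↦ Σ_k M(|x_k|/ρ) is non-increasing in ρ, so bisection applies). For M(t) = t^p it recovers the ℓ_p norm, consistent with the remark about M(t) = t^p:

```python
def luxemburg_norm(x, M, lo=1e-12, hi=1e6, tol=1e-10):
    """||x|| = inf{rho > 0 : sum_k M(|x_k|/rho) <= 1}, found by bisection."""
    S = lambda rho: sum(M(abs(xk) / rho) for xk in x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if S(mid) <= 1:
            hi = mid          # feasible rho: shrink the interval from above
        else:
            lo = mid
    return hi

luxemburg_norm([3.0, 4.0], lambda t: t ** 2)   # -> 5.0 (the l_2 norm of (3, 4))
```
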




Definition 1.1. A sequence (x_k) ∈ w is said to be I-convergent to a number L if for every ε > 0, the set {k ∈ N : |x_k − L| ≥ ε} ∈ I. In this case we write I − lim x_k = L.

Definition 1.2. A sequence (x_k) ∈ w is said to be I-null if it is I-convergent to L = 0. In this case we write I − lim x_k = 0.

Definition 1.3. A sequence (x_k) ∈ w is said to be I-Cauchy if for every ε > 0 there exists a number m = m(ε) such that {k ∈ N : |x_k − x_m| ≥ ε} ∈ I.

Definition 1.4. A sequence (x_k) ∈ w is said to be I-bounded if there exists M > 0 such that {k ∈ N : |x_k| ≥ M} ∈ I.

Definition 1.5. A sequence space E is said to be solid (or normal) if (α_k x_k) ∈ E whenever (x_k) ∈ E, for all sequences (α_k) of scalars with |α_k| < 1 for all k ∈ N.

Definition 1.6. A sequence space E is said to be symmetric if (xk) ∈ E implies (xπ(k)) ∈E, where π is a permutation of N.

Definition 1.7. A sequence space E is said to be sequence algebra if (xkyk) ∈ E whenever(xk), (yk) ∈ E.

Definition 1.8. A sequence space E is said to be convergence free if (yk) ∈ E whenever(xk) ∈ E and xk = 0 implies yk = 0.

Definition 1.9. Let K = {k_1 < k_2 < ...} ⊆ N and let E be a sequence space. A K-step space of E is a sequence space λ^E_K = {(x_{k_n}) ∈ w : (x_k) ∈ E}.

Definition 1.10. A canonical preimage of a sequence (x_{k_n}) ∈ λ^E_K is a sequence (y_k) ∈ w defined by

y_k = x_k if k ∈ K, and y_k = 0 otherwise.

A canonical preimage of a step space λ^E_K is the set of canonical preimages of all the elements in λ^E_K; that is, y is in the canonical preimage of λ^E_K if and only if y is a canonical preimage of some x ∈ λ^E_K.

Definition 1.11. A sequence space E is said to be monotone if it contain the canonicalpreimages of its step spaces.

Throughout the article Z^I, Z^I_0, Z^I_∞, m^I_Z and m^I_{Z_0} denote the Zweier I-convergent, Zweier I-null, Zweier I-bounded, Zweier bounded I-convergent and Zweier bounded I-null sequence spaces, respectively.

Lemma 1.12. If a sequence space E is solid, then E is monotone (see [13]).

Let M = (M_n) be a sequence of Orlicz functions, let u = (u_n) be a sequence of strictly positive real numbers and let p = (p_n) be a bounded sequence of positive real numbers. In the present paper we define the following sequence spaces:


KULDIP RAJ AND SURUCHI PANDOH

Z^I(M, Δ^r, p, u) = { x = (x_k) : {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} ≥ ε} ∈ I, for some L ∈ C },

Z^I_0(M, Δ^r, p, u) = { x = (x_k) : {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x))_n| / ρ ]^{p_n} ≥ ε} ∈ I },

Z^I_∞(M, Δ^r, p, u) = { x = (x_k) : {n ∈ N : ∃ K > 0, ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x))_n| / ρ ]^{p_n} ≥ K} ∈ I }.

Also, m^I_Z(M, Δ^r, p, u) = Z^I(M, Δ^r, p, u) ∩ Z^I_∞(M, Δ^r, p, u) and m^I_{Z_0}(M, Δ^r, p, u) = Z^I_0(M, Δ^r, p, u) ∩ Z^I_∞(M, Δ^r, p, u).

If we take (p_k) = 1 for all k, then we have Z^I(M, Δ^r, p, u) = Z^I(M, Δ^r, u), Z^I_0(M, Δ^r, p, u) = Z^I_0(M, Δ^r, u), Z^I_∞(M, Δ^r, p, u) = Z^I_∞(M, Δ^r, u), m^I_Z(M, Δ^r, p, u) = Z^I(M, Δ^r, u) ∩ Z^I_∞(M, Δ^r, u) and m^I_{Z_0}(M, Δ^r, p, u) = Z^I_0(M, Δ^r, u) ∩ Z^I_∞(M, Δ^r, u).

If we take (u_k) = 1 for all k, then we have Z^I(M, Δ^r, p, u) = Z^I(M, Δ^r, p), Z^I_0(M, Δ^r, p, u) = Z^I_0(M, Δ^r, p), Z^I_∞(M, Δ^r, p, u) = Z^I_∞(M, Δ^r, p), m^I_Z(M, Δ^r, p, u) = Z^I(M, Δ^r, p) ∩ Z^I_∞(M, Δ^r, p) and m^I_{Z_0}(M, Δ^r, p, u) = Z^I_0(M, Δ^r, p) ∩ Z^I_∞(M, Δ^r, p).

If we take (M_n) = M, (p_n) = 1 and (u_n) = 1 for all n ∈ N and r = 0, then we obtain the sequence spaces defined by Hazarika et al. [11].

The main purpose of this paper is to introduce the sequence spaces Z^I(M, Δ^r, p, u), Z^I_0(M, Δ^r, p, u) and Z^I_∞(M, Δ^r, p, u). We study some topological properties of, and inclusion relations between, these sequence spaces. In the third section of this paper we study these sequence spaces over n-normed spaces.
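For readers who wish to experiment with membership in these spaces, the sketch below evaluates a truncated version of the modular ∑_n M_n[|u_n(Z^p(Δ^r x))_n|/ρ]^{p_n}. The Zweier mean and the difference operator are defined earlier in the paper, outside this excerpt, so the code assumes the common choices (Z^t x)_n = t·x_n + (1 − t)·x_{n−1} (with x_0 = 0) and (Δx)_k = x_k − x_{k+1}; all concrete sequences and parameters are illustrative.

```python
# Truncated Orlicz modular underlying Z^I(M, Delta^r, p, u).
# Assumed forms (not restated in this excerpt): Zweier mean
# (Z^t x)_n = t*x_n + (1-t)*x_{n-1} with x_0 = 0, and difference
# (Delta x)_k = x_k - x_{k+1}, iterated r times.
def delta(x, r):
    for _ in range(r):
        x = [x[k] - x[k + 1] for k in range(len(x) - 1)]
    return x

def zweier(x, t=0.5):
    return [t * x[n] + (1 - t) * (x[n - 1] if n > 0 else 0.0)
            for n in range(len(x))]

def modular(x, M, p, u, rho, r=1, t=0.5):
    """sum_n M_n(|u_n * (Z^t(Delta^r x))_n| / rho) ** p_n (finite truncation)."""
    y = zweier(delta(x, r), t)
    return sum(M(abs(u[n] * y[n]) / rho) ** p[n] for n in range(len(y)))

# Example: for x_k = 1/k^2 the differences decay fast, so the modular is small.
N = 200
x = [1.0 / (k + 1) ** 2 for k in range(N)]
val = modular(x, M=lambda s: s * s, p=[1.0] * N, u=[1.0] * N, rho=1.0)
assert 0.0 < val < 1.0
```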

2. Main Results

Theorem 2.1. Let M = (M_n) be a sequence of Orlicz functions, let u = (u_n) be a sequence of strictly positive real numbers and let p = (p_n) be a bounded sequence of positive real numbers. Then the spaces Z^I(M, Δ^r, p, u), Z^I_0(M, Δ^r, p, u) and Z^I_∞(M, Δ^r, p, u) are linear over the complex field C.

Proof. We prove the result for the space Z^I(M, Δ^r, p, u). Let x = (x_k), y = (y_k) ∈ Z^I(M, Δ^r, p, u) and let α, β be scalars. Then there exist positive numbers ρ_1 and ρ_2 such that

{n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x))_n − L_1| / ρ_1 ]^{p_n} ≥ ε} ∈ I for some L_1 ∈ C,

{n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r y))_n − L_2| / ρ_2 ]^{p_n} ≥ ε} ∈ I for some L_2 ∈ C;

that is,

(2.1) A_1 = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x))_n − L_1| / ρ_1 ]^{p_n} ≥ ε/2} ∈ I,

(2.2) A_2 = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r y))_n − L_2| / ρ_2 ]^{p_n} ≥ ε/2} ∈ I.

Let ρ_3 = max{2|α|ρ_1, 2|β|ρ_2}. Since each M_n is nondecreasing and convex, we have

M_n[ |(α u_n(Z^p(Δ^r x))_n + β u_n(Z^p(Δ^r y))_n) − (αL_1 + βL_2)| / ρ_3 ]^{p_n}
≤ M_n[ |α| |u_n(Z^p(Δ^r x))_n − L_1| / ρ_3 + |β| |u_n(Z^p(Δ^r y))_n − L_2| / ρ_3 ]^{p_n}
≤ M_n[ |u_n(Z^p(Δ^r x))_n − L_1| / ρ_1 ]^{p_n} + M_n[ |u_n(Z^p(Δ^r y))_n − L_2| / ρ_2 ]^{p_n}.

Now from equations (2.1) and (2.2), we have

{n ∈ N : ∑_{n=1}^∞ M_n[ |(α u_n(Z^p(Δ^r x))_n + β u_n(Z^p(Δ^r y))_n) − (αL_1 + βL_2)| / ρ_3 ]^{p_n} ≥ ε} ⊂ A_1 ∪ A_2 ∈ I.

Therefore αx + βy ∈ Z^I(M, Δ^r, p, u), and hence Z^I(M, Δ^r, p, u) is a linear space. Similarly, we can prove that Z^I_0(M, Δ^r, p, u) and Z^I_∞(M, Δ^r, p, u) are linear spaces.

Theorem 2.2. The spaces m^I_Z(M, Δ^r, p, u) and m^I_{Z_0}(M, Δ^r, p, u) are paranormed spaces, with the paranorm g defined by

g(x) = inf{ ρ^{p_n/H} : sup_n M_n[ |u_n(Z^p(Δ^r x))_n| / ρ ]^{p_n/H} ≤ 1, for some ρ > 0 },

where H = max{1, sup_n p_n}.
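Numerically, the defining condition sup_n M_n[|t_n|/ρ]^{p_n/H} ≤ 1 is monotone in ρ, so the infimum can be approximated by bisection. The sketch below truncates to finitely many terms and uses illustrative choices of M, p and t (none taken from the paper); when all p_n are equal, the exponent p_n/H is constant, and with M(s) = s and p_n = 1 the paranorm reduces to the sup-norm.

```python
# Sketch: approximating g(x) = inf{ rho^(p_n/H) : sup_n M_n(|t_n|/rho)^(p_n/H) <= 1 }
# by bisection on rho.  Finite truncation; M, p, t are illustrative choices.
def paranorm(t, M, p, lo=0.0, hi=1e12, iters=200):
    H = max(1.0, max(p))
    def feasible(rho):
        return max(M(abs(tn) / rho) ** (pn / H) for tn, pn in zip(t, p)) <= 1.0
    for _ in range(iters):   # shrink [lo, hi] around the smallest feasible rho
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi

# With M(s) = s and all p_n = 1 (so p_n/H = 1), g reduces to max_n |t_n|.
assert abs(paranorm([0.5, -2.0, 1.25], M=lambda s: s, p=[1.0, 1.0, 1.0]) - 2.0) < 1e-6
```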

Proof. Clearly g(−x) = g(x) and g(0) = 0. Let x = (x_k), y = (y_k) ∈ m^I_Z(M, Δ^r, p, u). For ρ_1, ρ_2 > 0 we denote

A_3 = { ρ_1 : sup_n M_n[ |u_n(Z^p(Δ^r x))_n| / ρ_1 ]^{p_n} ≤ 1 } and

A_4 = { ρ_2 : sup_n M_n[ |u_n(Z^p(Δ^r y))_n| / ρ_2 ]^{p_n} ≤ 1 }.

Let ρ = ρ_1 + ρ_2. Then, using Minkowski's inequality, we obtain

M_n[ |u_n(Z^p(Δ^r(x + y)))_n| / ρ ]^{p_n} ≤ (ρ_1/(ρ_1 + ρ_2)) M_n[ |u_n(Z^p(Δ^r x))_n| / ρ_1 ]^{p_n} + (ρ_2/(ρ_1 + ρ_2)) M_n[ |u_n(Z^p(Δ^r y))_n| / ρ_2 ]^{p_n}.

Therefore,

sup_n M_n[ |u_n(Z^p(Δ^r(x + y)))_n| / ρ ]^{p_n} ≤ 1

and

g(x + y) = inf{ (ρ_1 + ρ_2)^{p_n/H} : M_n[ |u_n(Z^p(Δ^r(x + y)))_n| / ρ ]^{p_n/H} ≤ 1, ρ_1 ∈ A_3, ρ_2 ∈ A_4 }
≤ inf{ ρ_1^{p_n/H} : M_n[ |u_n(Z^p(Δ^r x))_n| / ρ_1 ]^{p_n/H} ≤ 1, ρ_1 ∈ A_3 }
+ inf{ ρ_2^{p_n/H} : M_n[ |u_n(Z^p(Δ^r y))_n| / ρ_2 ]^{p_n/H} ≤ 1, ρ_2 ∈ A_4 }
= g(x) + g(y).

Let t_m → L and let g(Δ^r x_m − Δ^r x) → 0 as m → ∞. We show that g(t_m Δ^r x_m − L Δ^r x) → 0 as m → ∞. Put

A_5 = { ρ_m > 0 : sup_n M_n[ |u_n(Z^p(Δ^r x_m))_n| / ρ_m ]^{p_n} ≤ 1 } and

A_6 = { ρ_s > 0 : sup_n M_n[ |u_n(Z^p(Δ^r(x_m − x)))_n| / ρ_s ]^{p_n} ≤ 1 }.

By the continuity of the M_n, we observe that

M_n[ |u_n(Z^p(t_m Δ^r x_m − L Δ^r x))_n| / (|t_m − L|ρ_m + |L|ρ_s) ]^{p_n}
≤ M_n[ |u_n(Z^p(t_m Δ^r x_m − L Δ^r x_m))_n| / (|t_m − L|ρ_m + |L|ρ_s) ]^{p_n} + M_n[ |u_n(Z^p(L Δ^r x_m − L Δ^r x))_n| / (|t_m − L|ρ_m + |L|ρ_s) ]^{p_n}
≤ (|t_m − L|ρ_m / (|t_m − L|ρ_m + |L|ρ_s)) M_n[ |u_n(Z^p(Δ^r x_m))_n| / ρ_m ]^{p_n} + (|L|ρ_s / (|t_m − L|ρ_m + |L|ρ_s)) M_n[ |u_n(Z^p(Δ^r x_m − Δ^r x))_n| / ρ_s ]^{p_n}.

From the above inequality it follows that

sup_n M_n[ |u_n(Z^p(t_m Δ^r x_m − L Δ^r x))_n| / (|t_m − L|ρ_m + |L|ρ_s) ]^{p_n} ≤ 1

and consequently

(2.3) g(t_m Δ^r x_m − L Δ^r x) = inf{ [|t_m − L|ρ_m + |L|ρ_s]^{p_n/H} : ρ_m ∈ A_5, ρ_s ∈ A_6 }
≤ |t_m − L|^{p_n/H} inf{ ρ_m^{p_n/H} : ρ_m ∈ A_5 } + |L|^{p_n/H} inf{ ρ_s^{p_n/H} : ρ_s ∈ A_6 }
≤ max{1, |t_m − L|^{p_n/H}} g(Δ^r x_m) + max{1, |L|^{p_n/H}} g(Δ^r x_m − Δ^r x).

It is obvious that g(Δ^r x_m) ≤ g(Δ^r x) + g(Δ^r x_m − Δ^r x) for all m ∈ N. Clearly, the right-hand side of relation (2.3) tends to 0 as m → ∞, and the result follows. This completes the proof.

Theorem 2.3. Let M_1 and M_2 be Orlicz functions satisfying the ∆_2-condition. Then
(i) W(M_2, Δ^r, p, u) ⊆ W(M_1 ∘ M_2, Δ^r, p, u);
(ii) W(M_1, Δ^r, p, u) ∩ W(M_2, Δ^r, p, u) = W(M_1 + M_2, Δ^r, p, u),
for W = Z^I, Z^I_0, Z^I_∞.

Proof. (i) Let x = (x_k) ∈ Z^I(M_2, Δ^r, p, u) and let ε > 0 be given. For some ρ > 0 we have

(2.4) {n ∈ N : ∑_{n=1}^∞ M_2[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} ≥ ε} ∈ I.

Choose 0 < δ < 1 such that M_1(t) ≤ ε for 0 ≤ t ≤ δ. We define

y_n = M_2[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]

and consider

∑_{n ∈ N} [M_1(y_n)]^{p_n} = ∑_{n ∈ N, y_n ≤ δ} [M_1(y_n)]^{p_n} + ∑_{n ∈ N, y_n > δ} [M_1(y_n)]^{p_n}.

We have

(2.5) ∑_{n ∈ N, y_n ≤ δ} [M_1(y_n)]^{p_n} ≤ [M_1(2)]^G + ∑_{n ∈ N, y_n ≤ δ} y_n^{p_n}, where G = sup_n p_n.

For the second summation (i.e. y_n > δ) we have y_n < y_n/δ < 1 + y_n/δ. Since M_1 is nondecreasing and convex, it follows that

M_1(y_n) < M_1(1 + y_n/δ) ≤ (1/2) M_1(2) + (1/2) M_1(2y_n/δ).

Since M_1 satisfies the ∆_2-condition, we can write

M_1(y_n) < (1/2) K (y_n/δ) M_1(2) + (1/2) K (y_n/δ) M_1(2) = K (y_n/δ) M_1(2).

We get the following estimate:

(2.6) ∑_{n ∈ N, y_n > δ} [M_1(y_n)]^{p_n} ≤ max{1, (Kδ^{−1}M_1(2))^H} ∑_{n ∈ N, y_n > δ} y_n^{p_n}.

From (2.4), (2.5) and (2.6) it follows that

{n ∈ N : ∑_{n=1}^∞ [M_1(M_2[ |u_n(Z^p(Δ^r x))_n − L| / ρ ])]^{p_n} ≥ ε} ∈ I.

Hence Z^I(M_2, Δ^r, p, u) ⊆ Z^I(M_1 ∘ M_2, Δ^r, p, u).

(ii) Let x = (x_k) ∈ Z^I(M_1, Δ^r, p, u) ∩ Z^I(M_2, Δ^r, p, u) and let ε > 0 be given. Then there exists ρ > 0 such that

{n ∈ N : ∑_{n=1}^∞ M_1[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} ≥ ε} ∈ I

and

{n ∈ N : ∑_{n=1}^∞ M_2[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} ≥ ε} ∈ I.

The rest of the proof follows from the inclusion

{n ∈ N : ∑_{n=1}^∞ (M_1 + M_2)[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} ≥ ε}
⊆ {n ∈ N : ∑_{n=1}^∞ D M_1[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} ≥ ε/2} ∪ {n ∈ N : ∑_{n=1}^∞ D M_2[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} ≥ ε/2},

where D = max{1, 2^{H−1}}, since (a + b)^{p_n} ≤ D(a^{p_n} + b^{p_n}).

Taking M_2(x) = x and M_1(x) = M(x) for all x ∈ [0, ∞), we have the following result.

Corollary 2.4. W ⊆ W(M, Δ^r, p, u), where W = Z^I, Z^I_0.

Theorem 2.5. The spaces Z^I_0(M, Δ^r, p, u) and m^I_{Z_0}(M, Δ^r, p, u) are solid and monotone.

Proof. We prove the result for Z^I_0(M, Δ^r, p, u); for m^I_{Z_0}(M, Δ^r, p, u) the result can be proved similarly. Let x = (x_k) ∈ Z^I_0(M, Δ^r, p, u). Then there exists ρ > 0 such that

(2.7) {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x))_n| / ρ ]^{p_n} ≥ ε} ∈ I.

Let (α_k) be a sequence of scalars with |α_k| ≤ 1 for all k ∈ N. Then the result follows from (2.7) and the inequality

M_n[ |α_k u_n(Z^p(Δ^r x))_n| / ρ ]^{p_n} ≤ |α_k| M_n[ |u_n(Z^p(Δ^r x))_n| / ρ ]^{p_n} ≤ M_n[ |u_n(Z^p(Δ^r x))_n| / ρ ]^{p_n}

for all n ∈ N. That Z^I_0(M, Δ^r, p, u) is monotone follows from Lemma 1.12.

Theorem 2.6. The spaces Z^I(M, Δ^r, p, u) and m^I_Z(M, Δ^r, p, u) are neither monotone nor solid in general.

Proof. The proof follows from the following example. Let I = I_f, M_n(x) = x for all x ∈ [0, ∞), p = (p_n) = 1 and u = (u_n) = 1 for all n, and r = 0. Consider the K-step space T_K of T defined as follows: let (x_k) ∈ T and let (y_k) ∈ T_K be such that

y_k = x_k, if k is odd; 0, otherwise.

Consider the sequence (x_k) defined by x_k = 1/2 for all k ∈ N. Then (x_k) ∈ Z^I(M, Δ^r, p, u), but its canonical preimage does not belong to Z^I(M, Δ^r, p, u). Thus Z^I(M, Δ^r, p, u) is not monotone, and hence it is not solid by Lemma 1.12.

Theorem 2.7. The spaces Z^I(M, Δ^r, p, u) and Z^I_0(M, Δ^r, p, u) are sequence algebras.

Proof. We prove that Z^I_0(M, Δ^r, p, u) is a sequence algebra; for the space Z^I(M, Δ^r, p, u) the result can be proved similarly. Let x = (x_k), y = (y_k) ∈ Z^I_0(M, Δ^r, p, u). Then

{n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x))_n| / ρ_1 ]^{p_n} ≥ ε} ∈ I for some ρ_1 > 0

and

{n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r y))_n| / ρ_2 ]^{p_n} ≥ ε} ∈ I for some ρ_2 > 0.

Let ρ = ρ_1 ρ_2 > 0. Then we can show that

{n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r(xy)))_n| / ρ ]^{p_n} ≥ ε} ∈ I.

Thus (x_k y_k) ∈ Z^I_0(M, Δ^r, p, u). Hence Z^I_0(M, Δ^r, p, u) is a sequence algebra.

Theorem 2.8. Let M = (M_n) be a sequence of Orlicz functions. Then Z^I_0(M, Δ^r, p, u) ⊂ Z^I(M, Δ^r, p, u) ⊂ Z^I_∞(M, Δ^r, p, u), and the inclusions are proper.

Proof. Let x = (x_k) ∈ Z^I(M, Δ^r, p, u). Then there exist L ∈ C and ρ > 0 such that

{n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} ≥ ε} ∈ I.

We have

M_n[ |u_n(Z^p(Δ^r x))_n| / 2ρ ]^{p_n} ≤ (1/2) M_n[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} + (1/2) M_n[ |L| / ρ ]^{p_n}.

Taking the supremum over n on both sides, we get x = (x_k) ∈ Z^I_∞(M, Δ^r, p, u). The inclusion Z^I_0(M, Δ^r, p, u) ⊂ Z^I(M, Δ^r, p, u) is obvious. That the inclusions are proper follows from the following examples. Let I = I_d, M_n(x) = x^2 for all x ∈ [0, ∞), u = (u_n) = 1 and p = (p_n) = 1 for all n ∈ N, and r = 0.
(a) Consider the sequence (x_k) defined by x_k = 1 for all k ∈ N. Then (x_k) ∈ Z^I(M, Δ^r, p, u) but (x_k) ∉ Z^I_0(M, Δ^r, p, u).
(b) Consider the sequence (y_k) defined by

y_k = 2, if k is even; 0, otherwise.

Then (y_k) ∈ Z^I_∞(M, Δ^r, p, u) but (y_k) ∉ Z^I(M, Δ^r, p, u).

Theorem 2.9. The spaces Z^I(M, Δ^r, p, u) and Z^I_0(M, Δ^r, p, u) are not convergence free in general.

Proof. The proof follows from the following example. Let I = I_f, M_n(x) = x^2 for all x ∈ [0, ∞), u = (u_n) = 1 and p = (p_n) = 1 for all n ∈ N, and r = 0. Consider the sequences (x_k) and (y_k) defined by x_k = 1/k^2 and y_k = k^2 for all k ∈ N, so that the condition "x_k = 0 implies y_k = 0" holds vacuously. Then (x_k) belongs to both Z^I(M, Δ^r, p, u) and Z^I_0(M, Δ^r, p, u), but (y_k) belongs to neither Z^I(M, Δ^r, p, u) nor Z^I_0(M, Δ^r, p, u). Hence the spaces are not convergence free.
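The example in this proof can be checked numerically. With r = 0 the transform reduces to the Zweier mean, assumed below (as earlier in the paper, outside this excerpt) to be (Z^t x)_n = t·x_n + (1 − t)·x_{n−1} with x_0 = 0; the truncation length is an illustrative choice.

```python
# Sketch for the example of Theorem 2.9: I = I_f, M_n(s) = s^2,
# p_n = u_n = 1, r = 0.  Assumed Zweier mean (not restated here):
# (Z^t x)_n = t*x_n + (1-t)*x_{n-1}, x_0 = 0, with t = 1/2.
def zweier_mean(x, t=0.5):
    return [t * x[n] + (1 - t) * (x[n - 1] if n > 0 else 0.0)
            for n in range(len(x))]

N = 10_000
x = [1.0 / k**2 for k in range(1, N + 1)]   # x_k = 1/k^2
y = [float(k**2) for k in range(1, N + 1)]  # y_k = k^2

M = lambda s: s * s
tx = [M(abs(v)) for v in zweier_mean(x)]
ty = [M(abs(v)) for v in zweier_mean(y)]

assert max(tx) <= 1.0    # transformed x stays bounded (indeed tends to 0)
assert max(ty) > 1e12    # transformed y grows without bound along the truncation
```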

Theorem 2.10. m^I_Z(M, Δ^r, p, u) is a closed subspace of l_∞(M, Δ^r, p, u).

Proof. Let (x^{(i)}) be a Cauchy sequence in m^I_Z(M, Δ^r, p, u) such that x^{(i)} → x. We show that x ∈ m^I_Z(M, Δ^r, p, u). Since x^{(i)} ∈ m^I_Z(M, Δ^r, p, u), there exists a_i such that

{n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x^{(i)}))_n − a_i| / ρ ]^{p_n} ≥ ε} ∈ I.

We need to show that
(i) (a_i) converges to a;
(ii) if U = { x = (x_k) : {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x))_n − L| / ρ ]^{p_n} ≥ ε} }, then U^c ∈ I.

(i) Since (x^{(i)}) is a Cauchy sequence in m^I_Z(M, Δ^r, p, u), for a given ε > 0 there exists k_0 ∈ N such that

sup_n M_n[ |u_n((Z^p(Δ^r x^{(i)}))_n − (Z^p(Δ^r x^{(j)}))_n)| / ρ ]^{p_n} < ε/3 for all i, j ≥ k_0.

For a given ε > 0, let

B_{ij} = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n((Z^p(Δ^r x^{(i)}))_n − (Z^p(Δ^r x^{(j)}))_n)| / ρ ]^{p_n} < ε/3},

B_i = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x^{(i)} − a_i))_n| / ρ ]^{p_n} < ε/3},

B_j = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x^{(j)} − a_j))_n| / ρ ]^{p_n} < ε/3}.

Then B_{ij}^c, B_i^c, B_j^c ∈ I. Let B^c = B_{ij}^c ∪ B_i^c ∪ B_j^c, where

B = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(a_i − a_j)| / ρ ]^{p_n} < ε}.

Then B^c ∈ I. Choosing n_0 ∈ B, for each i, j ≥ n_0 we have

∑_{n=1}^∞ M_n[ |u_n(a_i − a_j)| / ρ ]^{p_n} < ε,

since simultaneously

∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x^{(i)} − a_i))_n| / ρ ]^{p_n} < ε/3,
∑_{n=1}^∞ M_n[ |u_n((Z^p(Δ^r x^{(i)}))_n − (Z^p(Δ^r x^{(j)}))_n)| / ρ ]^{p_n} < ε/3,
∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x^{(j)} − a_j))_n| / ρ ]^{p_n} < ε/3.

Then (a_i) is a Cauchy sequence of scalars in C, and so there exists a scalar a ∈ C such that a_i → a as i → ∞.
(ii) Let 0 < δ < 1 be given. We show that if

U = { x = (x_k) : {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x − a))_n| / ρ ]^{p_n} < δ} }, then U^c ∈ I.

Since x^{(i)} → x, there exists q_0 ∈ N such that

(2.8) P = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n((Z^p(Δ^r x^{(q_0)}))_n − (Z^p(Δ^r x))_n)| / ρ ]^{p_n} < (δ/3D)^H},

which implies that P^c ∈ I. The number q_0 can be chosen so that, together with (2.8), we have

Q = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(a_{q_0} − a)| / ρ ]^{p_n} < (δ/3D)^H}.

Thus we have Q^c ∈ I. Since

{n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x^{(q_0)} − a_{q_0}))_n| / ρ ]^{p_n} ≥ δ} ∈ I,

we have a subset S ⊆ N such that S^c ∈ I, where

S = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x^{(q_0)} − a_{q_0}))_n| / ρ ]^{p_n} < (δ/3D)^H}.

Let U^c = P^c ∪ Q^c ∪ S^c, where

U = {n ∈ N : ∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x − a))_n| / ρ ]^{p_n} < δ}.

Therefore, for each n ∈ P ∩ Q ∩ S we have

∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x − a))_n| / ρ ]^{p_n} < δ,

since simultaneously

∑_{n=1}^∞ M_n[ |u_n((Z^p(Δ^r x^{(q_0)}))_n − (Z^p(Δ^r x))_n)| / ρ ]^{p_n} < (δ/3D)^H,
∑_{n=1}^∞ M_n[ |u_n(Z^p(Δ^r x^{(q_0)} − a_{q_0}))_n| / ρ ]^{p_n} < (δ/3D)^H,
∑_{n=1}^∞ M_n[ |u_n(a_{q_0} − a)| / ρ ]^{p_n} < (δ/3D)^H.

Hence the result follows.

Note that the inclusions m^I_Z(M, Δ^r, p, u) ⊂ l_∞(M, Δ^r, p, u) and m^I_{Z_0}(M, Δ^r, p, u) ⊂ l_∞(M, Δ^r, p, u) are strict. In view of Theorem 2.10, we have the following result.

Theorem 2.11. The spaces m^I_Z(M, Δ^r, p, u) and m^I_{Z_0}(M, Δ^r, p, u) are nowhere dense in l_∞(M, Δ^r, p, u).

Theorem 2.12. The spaces m^I_Z(M, Δ^r, p, u) and m^I_{Z_0}(M, Δ^r, p, u) are not separable.

Proof. We prove the result for the space m^I_Z(M, Δ^r, p, u). Let A be an infinite subset of N of increasing natural numbers such that A ∈ I. Let

p_n = 1, if n ∈ A; 2, otherwise,

and let M_n(x) = x, u = (u_n) = 1 and r = 0 for all n. Let P_0 = { (x_n) : x_n = 0 or 1 for n ∈ A, and x_n = 0 otherwise }. Since A is infinite, P_0 is uncountable. Consider the class of open balls B_1 = { B(z, 1/2) : z ∈ P_0 }. Let C_1 be an open cover of m^I_Z(M, Δ^r, p, u) and of m^I_{Z_0}(M, Δ^r, p, u) containing B_1. Since B_1 is uncountable, C_1 cannot be reduced to a countable subcover for m^I_Z(M, Δ^r, p, u) or for m^I_{Z_0}(M, Δ^r, p, u). Thus m^I_Z(M, Δ^r, p, u) and m^I_{Z_0}(M, Δ^r, p, u) are not separable.


3. Sequence spaces over n-normed spaces

The concept of 2-normed spaces was initially developed by Gähler [6] in the mid-1960s, while that of n-normed spaces can be found in Misiak [18]. Since then, many others have studied these concepts and obtained various results; see Gunawan ([7], [8]) and Gunawan and Mashadi [9]. Let n ∈ N and let X be a linear space over the field R of reals of dimension d, where d ≥ n ≥ 2. A real-valued function ‖·, ···, ·‖ on X^n satisfying the following four conditions:

(1) ‖x_1, x_2, ···, x_n‖ = 0 if and only if x_1, x_2, ···, x_n are linearly dependent in X;
(2) ‖x_1, x_2, ···, x_n‖ is invariant under permutation;
(3) ‖αx_1, x_2, ···, x_n‖ = |α| ‖x_1, x_2, ···, x_n‖ for any α ∈ R; and
(4) ‖x + x′, x_2, ···, x_n‖ ≤ ‖x, x_2, ···, x_n‖ + ‖x′, x_2, ···, x_n‖

is called an n-norm on X, and the pair (X, ‖·, ···, ·‖) is called an n-normed space over the field R. For example, we may take X = R^n equipped with the Euclidean n-norm ‖x_1, x_2, ···, x_n‖_E = the volume of the n-dimensional parallelepiped spanned by the vectors x_1, x_2, ···, x_n, which may be given explicitly by the formula

‖x_1, x_2, ···, x_n‖_E = |det(x_{ij})|,

where x_i = (x_{i1}, x_{i2}, ···, x_{in}) ∈ R^n for each i = 1, 2, ···, n. Let (X, ‖·, ···, ·‖) be an n-normed space of dimension d ≥ n ≥ 2 and let {a_1, a_2, ···, a_n} be a linearly independent set in X. Then the function ‖·, ···, ·‖_∞ on X^{n−1} defined by

‖x_1, x_2, ···, x_{n−1}‖_∞ = max{ ‖x_1, x_2, ···, x_{n−1}, a_i‖ : i = 1, 2, ···, n }

defines an (n−1)-norm on X with respect to {a_1, a_2, ···, a_n}. A sequence (x_k) in an n-normed space (X, ‖·, ···, ·‖) is said to converge to some L ∈ X if

lim_{k→∞} ‖x_k − L, z_1, ···, z_{n−1}‖ = 0 for every z_1, ···, z_{n−1} ∈ X.

A sequence (x_k) in an n-normed space (X, ‖·, ···, ·‖) is said to be Cauchy if

lim_{k,p→∞} ‖x_k − x_p, z_1, ···, z_{n−1}‖ = 0 for every z_1, ···, z_{n−1} ∈ X.

If every Cauchy sequence in X converges to some L ∈ X, then X is said to be complete with respect to the n-norm, and a complete n-normed space is called an n-Banach space. For more details about n-normed spaces, see [23] and the references therein. Let (X, ‖·, ···, ·‖) be an n-normed space and let W(n − X) denote the space of X-valued sequences. Let M = (M_n) be a sequence of Orlicz functions, let u = (u_n) be a sequence of strictly positive real numbers and let p = (p_n) be a bounded sequence of positive real numbers. For some ρ > 0, we define the following classes of sequences:

Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) = { x = (x_k) : {n ∈ N : ∑_{n=1}^∞ M_n[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I, for some L ∈ R },

Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) = { x = (x_k) : {n ∈ N : ∑_{n=1}^∞ M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I },


Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖) = { x = (x_k) : {n ∈ N : ∃ K > 0, ∑_{n=1}^∞ M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ K} ∈ I }.

Also, m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) ∩ Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖) and m^I_{Z_0}(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) ∩ Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖).

If we take (p_n) = 1 for all n, then we have Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I(M, Δ^r, u, ‖·, ···, ·‖), Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I_0(M, Δ^r, u, ‖·, ···, ·‖), Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I_∞(M, Δ^r, u, ‖·, ···, ·‖), m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I(M, Δ^r, u, ‖·, ···, ·‖) ∩ Z^I_∞(M, Δ^r, u, ‖·, ···, ·‖) and m^I_{Z_0}(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I_0(M, Δ^r, u, ‖·, ···, ·‖) ∩ Z^I_∞(M, Δ^r, u, ‖·, ···, ·‖).

If we take (u_n) = 1 for all n, then we get Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I(M, Δ^r, p, ‖·, ···, ·‖), Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I_0(M, Δ^r, p, ‖·, ···, ·‖), Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I_∞(M, Δ^r, p, ‖·, ···, ·‖), m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I(M, Δ^r, p, ‖·, ···, ·‖) ∩ Z^I_∞(M, Δ^r, p, ‖·, ···, ·‖) and m^I_{Z_0}(M, Δ^r, p, u, ‖·, ···, ·‖) = Z^I_0(M, Δ^r, p, ‖·, ···, ·‖) ∩ Z^I_∞(M, Δ^r, p, ‖·, ···, ·‖).

The main aim of this section is to study some topological properties of, and inclusion relations between, the spaces Z^I(M, Δ^r, p, u, ‖·, ···, ·‖), Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) and Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖).

Theorem 3.1. Let M = (M_n) be a sequence of Orlicz functions, let u = (u_n) be a sequence of strictly positive real numbers and let p = (p_n) be a bounded sequence of positive real numbers. Then the spaces Z^I(M, Δ^r, p, u, ‖·, ···, ·‖), Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) and Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖) are linear over the field R of real numbers.

Proof. We prove the result for the space Z^I(M, Δ^r, p, u, ‖·, ···, ·‖). Let x = (x_k), y = (y_k) ∈ Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) and let α, β be scalars. Then there exist positive numbers ρ_1 and ρ_2 such that

{n ∈ N : ∑_{n=1}^∞ M_n[ ‖(u_n(Z^p(Δ^r x))_n − L_1)/ρ_1, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I for some L_1 ∈ R,

{n ∈ N : ∑_{n=1}^∞ M_n[ ‖(u_n(Z^p(Δ^r y))_n − L_2)/ρ_2, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I for some L_2 ∈ R;

that is,

(3.1) D_1 = {n ∈ N : ∑_{n=1}^∞ M_n[ ‖(u_n(Z^p(Δ^r x))_n − L_1)/ρ_1, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε/2} ∈ I,

(3.2) D_2 = {n ∈ N : ∑_{n=1}^∞ M_n[ ‖(u_n(Z^p(Δ^r y))_n − L_2)/ρ_2, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε/2} ∈ I.

Let ρ_3 = max{2|α|ρ_1, 2|β|ρ_2}. Since each M_n is nondecreasing and convex, we have

M_n[ ‖((α u_n(Z^p(Δ^r x))_n + β u_n(Z^p(Δ^r y))_n) − (αL_1 + βL_2))/ρ_3, z_1, ···, z_{n−1}‖ ]^{p_n}
≤ M_n[ |α| ‖(u_n(Z^p(Δ^r x))_n − L_1)/ρ_3, z_1, ···, z_{n−1}‖ + |β| ‖(u_n(Z^p(Δ^r y))_n − L_2)/ρ_3, z_1, ···, z_{n−1}‖ ]^{p_n}
≤ M_n[ ‖(u_n(Z^p(Δ^r x))_n − L_1)/ρ_1, z_1, ···, z_{n−1}‖ ]^{p_n} + M_n[ ‖(u_n(Z^p(Δ^r y))_n − L_2)/ρ_2, z_1, ···, z_{n−1}‖ ]^{p_n}.

Now from equations (3.1) and (3.2), we have

{n ∈ N : ∑_{n=1}^∞ M_n[ ‖((α u_n(Z^p(Δ^r x))_n + β u_n(Z^p(Δ^r y))_n) − (αL_1 + βL_2))/ρ_3, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ⊂ D_1 ∪ D_2 ∈ I.

Therefore αx + βy ∈ Z^I(M, Δ^r, p, u, ‖·, ···, ·‖), and hence Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) is a linear space. Similarly, we can prove that Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) and Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖) are linear spaces.

Theorem 3.2. The spaces m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖) and m^I_{Z_0}(M, Δ^r, p, u, ‖·, ···, ·‖) are paranormed spaces, with the paranorm g defined by

g(x) = inf{ ρ^{p_n/H} : sup_n M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n/H} ≤ 1, for some ρ > 0 },

where H = max{1, sup_n p_n}.

Proof. Clearly g(−x) = g(x) and g(0) = 0. Let x = (x_k), y = (y_k) ∈ m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖). For ρ_1, ρ_2 > 0 we denote

D_3 = { ρ_1 : sup_n M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ_1, z_1, ···, z_{n−1}‖ ]^{p_n} ≤ 1 } and

D_4 = { ρ_2 : sup_n M_n[ ‖u_n(Z^p(Δ^r y))_n/ρ_2, z_1, ···, z_{n−1}‖ ]^{p_n} ≤ 1 }.

Let ρ = ρ_1 + ρ_2. Then, using Minkowski's inequality, we obtain

M_n[ ‖u_n(Z^p(Δ^r(x + y)))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≤ (ρ_1/(ρ_1 + ρ_2)) M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ_1, z_1, ···, z_{n−1}‖ ]^{p_n} + (ρ_2/(ρ_1 + ρ_2)) M_n[ ‖u_n(Z^p(Δ^r y))_n/ρ_2, z_1, ···, z_{n−1}‖ ]^{p_n},

which in turn gives

sup_n M_n[ ‖u_n(Z^p(Δ^r(x + y)))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≤ 1

and

g(x + y) = inf{ (ρ_1 + ρ_2)^{p_n/H} : M_n[ ‖u_n(Z^p(Δ^r(x + y)))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n/H} ≤ 1, ρ_1 ∈ D_3, ρ_2 ∈ D_4 }
≤ inf{ ρ_1^{p_n/H} : M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ_1, z_1, ···, z_{n−1}‖ ]^{p_n/H} ≤ 1, ρ_1 ∈ D_3 }
+ inf{ ρ_2^{p_n/H} : M_n[ ‖u_n(Z^p(Δ^r y))_n/ρ_2, z_1, ···, z_{n−1}‖ ]^{p_n/H} ≤ 1, ρ_2 ∈ D_4 }
= g(x) + g(y).

Let t_m → L and let g(Δ^r x_m − Δ^r x) → 0 as m → ∞. We show that g(t_m Δ^r x_m − L Δ^r x) → 0 as m → ∞. Put

D_5 = { ρ_m > 0 : sup_n M_n[ ‖u_n(Z^p(Δ^r x_m))_n/ρ_m, z_1, ···, z_{n−1}‖ ]^{p_n} ≤ 1 } and

D_6 = { ρ_s > 0 : sup_n M_n[ ‖u_n(Z^p(Δ^r(x_m − x)))_n/ρ_s, z_1, ···, z_{n−1}‖ ]^{p_n} ≤ 1 }.

By the continuity of the M_n, we observe that

M_n[ ‖u_n(Z^p(t_m Δ^r x_m − L Δ^r x))_n/(|t_m − L|ρ_m + |L|ρ_s), z_1, ···, z_{n−1}‖ ]^{p_n}
≤ M_n[ ‖u_n(Z^p(t_m Δ^r x_m − L Δ^r x_m))_n/(|t_m − L|ρ_m + |L|ρ_s), z_1, ···, z_{n−1}‖ ]^{p_n} + M_n[ ‖u_n(Z^p(L Δ^r x_m − L Δ^r x))_n/(|t_m − L|ρ_m + |L|ρ_s), z_1, ···, z_{n−1}‖ ]^{p_n}
≤ (|t_m − L|ρ_m/(|t_m − L|ρ_m + |L|ρ_s)) M_n[ ‖u_n(Z^p(Δ^r x_m))_n/ρ_m, z_1, ···, z_{n−1}‖ ]^{p_n} + (|L|ρ_s/(|t_m − L|ρ_m + |L|ρ_s)) M_n[ ‖u_n(Z^p(Δ^r(x_m − x)))_n/ρ_s, z_1, ···, z_{n−1}‖ ]^{p_n}.

From the above inequality it follows that

sup_n M_n[ ‖u_n(Z^p(t_m Δ^r x_m − L Δ^r x))_n/(|t_m − L|ρ_m + |L|ρ_s), z_1, ···, z_{n−1}‖ ]^{p_n} ≤ 1

and consequently

(3.3) g(t_m Δ^r x_m − L Δ^r x) = inf{ [|t_m − L|ρ_m + |L|ρ_s]^{p_n/H} : ρ_m ∈ D_5, ρ_s ∈ D_6 }
≤ |t_m − L|^{p_n/H} inf{ ρ_m^{p_n/H} : ρ_m ∈ D_5 } + |L|^{p_n/H} inf{ ρ_s^{p_n/H} : ρ_s ∈ D_6 }
≤ max{1, |t_m − L|^{p_n/H}} g(Δ^r x_m) + max{1, |L|^{p_n/H}} g(Δ^r x_m − Δ^r x).

It is obvious that g(Δ^r x_m) ≤ g(Δ^r x) + g(Δ^r x_m − Δ^r x) for all m ∈ N. Clearly, the right-hand side of relation (3.3) tends to 0 as m → ∞, and the result follows. This completes the proof.


Theorem 3.3. Let M_1 and M_2 be Orlicz functions satisfying the ∆_2-condition. Then
(i) W(M_2, Δ^r, p, u, ‖·, ···, ·‖) ⊆ W(M_1 ∘ M_2, Δ^r, p, u, ‖·, ···, ·‖);
(ii) W(M_1, Δ^r, p, u, ‖·, ···, ·‖) ∩ W(M_2, Δ^r, p, u, ‖·, ···, ·‖) = W(M_1 + M_2, Δ^r, p, u, ‖·, ···, ·‖),
for W = Z^I, Z^I_0, Z^I_∞.

Proof. (i) Let x = (x_k) ∈ Z^I(M_2, Δ^r, p, u, ‖·, ···, ·‖) and let ε > 0 be given. For some ρ > 0 we have

(3.4) {n ∈ N : ∑_{n=1}^∞ M_2[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I.

Choose 0 < δ < 1 such that M_1(t) ≤ ε for 0 ≤ t ≤ δ. We define

y_n = M_2[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]

and consider

∑_{n ∈ N} [M_1(y_n)]^{p_n} = ∑_{n ∈ N, y_n ≤ δ} [M_1(y_n)]^{p_n} + ∑_{n ∈ N, y_n > δ} [M_1(y_n)]^{p_n}.

We have

(3.5) ∑_{n ∈ N, y_n ≤ δ} [M_1(y_n)]^{p_n} ≤ [M_1(2)]^G + ∑_{n ∈ N, y_n ≤ δ} y_n^{p_n}, where G = sup_n p_n.

For the second summation (i.e. y_n > δ) we have y_n < y_n/δ < 1 + y_n/δ. Since M_1 is nondecreasing and convex, it follows that

M_1(y_n) < M_1(1 + y_n/δ) ≤ (1/2) M_1(2) + (1/2) M_1(2y_n/δ).

Since M_1 satisfies the ∆_2-condition, we can write

M_1(y_n) < (1/2) K (y_n/δ) M_1(2) + (1/2) K (y_n/δ) M_1(2) = K (y_n/δ) M_1(2).

We get the following estimate:

(3.6) ∑_{n ∈ N, y_n > δ} [M_1(y_n)]^{p_n} ≤ max{1, (Kδ^{−1}M_1(2))^H} ∑_{n ∈ N, y_n > δ} y_n^{p_n}.

From (3.4), (3.5) and (3.6) it follows that

{n ∈ N : ∑_{n=1}^∞ [M_1(M_2[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ])]^{p_n} ≥ ε} ∈ I.

Hence Z^I(M_2, Δ^r, p, u, ‖·, ···, ·‖) ⊆ Z^I(M_1 ∘ M_2, Δ^r, p, u, ‖·, ···, ·‖).

(ii) Let x = (x_k) ∈ Z^I(M_1, Δ^r, p, u, ‖·, ···, ·‖) ∩ Z^I(M_2, Δ^r, p, u, ‖·, ···, ·‖) and let ε > 0 be given. Then there exists ρ > 0 such that

{n ∈ N : ∑_{n=1}^∞ M_1[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I

and

{n ∈ N : ∑_{n=1}^∞ M_2[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I.

The rest of the proof follows from the inclusion

{n ∈ N : ∑_{n=1}^∞ (M_1 + M_2)[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε}
⊆ {n ∈ N : ∑_{n=1}^∞ D M_1[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε/2} ∪ {n ∈ N : ∑_{n=1}^∞ D M_2[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε/2},

where D = max{1, 2^{H−1}}, since (a + b)^{p_n} ≤ D(a^{p_n} + b^{p_n}).

Taking M_2(x) = x and M_1(x) = M(x) for all x ∈ [0, ∞), we have the following result.

Corollary 3.4. W ⊆ W(M, Δ^r, p, u, ‖·, ···, ·‖), where W = Z^I, Z^I_0.

Theorem 3.5. The spaces Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) and m^I_{Z_0}(M, Δ^r, p, u, ‖·, ···, ·‖) are solid and monotone.

Proof. We prove the result for Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖); for m^I_{Z_0}(M, Δ^r, p, u, ‖·, ···, ·‖) the result can be proved similarly. Let x = (x_k) ∈ Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖). Then there exists ρ > 0 such that

(3.7) {n ∈ N : ∑_{n=1}^∞ M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I.

Let (α_k) be a sequence of scalars with |α_k| ≤ 1 for all k ∈ N. Then we have

M_n[ ‖α_k u_n(Z^p(Δ^r x))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≤ |α_k| M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≤ M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n}

for all n ∈ N, and the result follows from (3.7). That Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) is monotone follows from Lemma 1.12.

Theorem 3.6. The spaces Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) and m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖) are neither monotone nor solid in general.

Proof. The proof follows from the example in the proof of Theorem 2.6.

Theorem 3.7. The spaces Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) and Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) are sequence algebras.

Proof. We prove that Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) is a sequence algebra; for the space Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) the result can be proved similarly. Let x = (x_k), y = (y_k) ∈ Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖). Then

{n ∈ N : ∑_{n=1}^∞ M_n[ ‖u_n(Z^p(Δ^r x))_n/ρ_1, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I for some ρ_1 > 0

and

{n ∈ N : ∑_{n=1}^∞ M_n[ ‖u_n(Z^p(Δ^r y))_n/ρ_2, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I for some ρ_2 > 0.

Let ρ = ρ_1 ρ_2 > 0. Then we can show that

{n ∈ N : ∑_{n=1}^∞ M_n[ ‖u_n(Z^p(Δ^r(xy)))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I.

Thus (x_k y_k) ∈ Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖). Hence Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) is a sequence algebra.

Theorem 3.8. Let M = (M_n) be a sequence of Orlicz functions. Then Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) ⊂ Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) ⊂ Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖), and the inclusions are proper.

Proof. Let x = (x_k) ∈ Z^I(M, Δ^r, p, u, ‖·, ···, ·‖). Then there exist L ∈ R and ρ > 0 such that

{n ∈ N : ∑_{n=1}^∞ M_n[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I.

We have

M_n[ ‖u_n(Z^p(Δ^r x))_n/2ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≤ (1/2) M_n[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} + (1/2) M_n[ ‖L/ρ, z_1, ···, z_{n−1}‖ ]^{p_n}.

Taking the supremum over n on both sides, we get x = (x_k) ∈ Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖). The inclusion Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) ⊂ Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) is obvious. That the inclusions are proper follows from the following examples. Let I = I_d, M_n(x) = x^2 for all x ∈ [0, ∞), u = (u_n) = 1 and p = (p_n) = 1 for all n ∈ N, and r = 0.
(a) Consider the sequence (x_k) defined by x_k = 1 for all k ∈ N. Then (x_k) ∈ Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) but (x_k) ∉ Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖).
(b) Consider the sequence (y_k) defined by

y_k = 2, if k is even; 0, otherwise.

Then (y_k) ∈ Z^I_∞(M, Δ^r, p, u, ‖·, ···, ·‖) but (y_k) ∉ Z^I(M, Δ^r, p, u, ‖·, ···, ·‖).

Theorem 3.9. The spaces Z^I(M, Δ^r, p, u, ‖·, ···, ·‖) and Z^I_0(M, Δ^r, p, u, ‖·, ···, ·‖) are not convergence free in general.

Proof. The proof follows from the example in the proof of Theorem 2.9.

Theorem 3.10. m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖) is a closed subspace of l_∞(M, Δ^r, p, u, ‖·, ···, ·‖).

Proof. Let (x^{(i)}) be a Cauchy sequence in m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖) such that x^{(i)} → x. We show that x ∈ m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖). Since x^{(i)} ∈ m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖), there exists a_i such that

{n ∈ N : ∑_{n=1}^∞ M_n[ ‖(u_n(Z^p(Δ^r x^{(i)}))_n − a_i)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} ∈ I.

We need to show that
(i) (a_i) converges to a;
(ii) if V = { x = (x_k) : {n ∈ N : ∑_{n=1}^∞ M_n[ ‖(u_n(Z^p(Δ^r x))_n − L)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} ≥ ε} }, then V^c ∈ I.

(i) Since (x^{(i)}) is a Cauchy sequence in m^I_Z(M, Δ^r, p, u, ‖·, ···, ·‖), for a given ε > 0 there exists k_0 ∈ N such that

sup_n M_n[ ‖u_n((Z^p(Δ^r x^{(i)}))_n − (Z^p(Δ^r x^{(j)}))_n)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} < ε/3 for all i, j ≥ k_0.

For a given ε > 0, let

F_{ij} = {n ∈ N : ∑_{n=1}^∞ M_n[ ‖u_n((Z^p(Δ^r x^{(i)}))_n − (Z^p(Δ^r x^{(j)}))_n)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} < ε/3},

F_i = {n ∈ N : ∑_{n=1}^∞ M_n[ ‖u_n(Z^p(Δ^r x^{(i)} − a_i))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} < ε/3},

F_j = {n ∈ N : ∑_{n=1}^∞ M_n[ ‖u_n(Z^p(Δ^r x^{(j)} − a_j))_n/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} < ε/3}.

Then F_{ij}^c, F_i^c, F_j^c ∈ I. Let F^c = F_{ij}^c ∪ F_i^c ∪ F_j^c, where

F = {n ∈ N : ∑_{n=1}^∞ M_n[ ‖u_n(a_i − a_j)/ρ, z_1, ···, z_{n−1}‖ ]^{p_n} < ε}.

Then F^c ∈ I. We choose n_0 ∈ F^c; then for each i, j ≥ n_0,

    { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n(a_i − a_j)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < ε }

      ⊇ { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x^{(i)}))_n − a_i)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < ε/3 }

      ∩ { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x^{(i)}))_n − (Z^p(∆^r x^{(j)}))_n)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < ε/3 }

      ∩ { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x^{(j)}))_n − a_j)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < ε/3 }.

Then (a_i) is a Cauchy sequence of scalars in R, and so there exists a scalar a ∈ R such that a_i → a as i → ∞.

(ii) Let 0 < δ < 1 be given. We show that if

    V = { x = (x_k) : { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x))_n − a)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < δ } },

then V^c ∈ I. Since x^{(i)} → x, there exists q_0 ∈ N such that

(3.8)    G = { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x^{(q_0)}))_n − (Z^p(∆^r x))_n)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < (δ/3D)^H },


KULDIP RAJ AND SURUCHI PANDOH

which implies that G^c ∈ I. The number q_0 can be chosen so that, together with (3.8), we have

    H = { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n(a_{q_0} − a)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < (δ/3D)^H }.

Thus H^c ∈ I. Since

    { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x^{(q_0)}))_n − a_{q_0})/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} ≥ δ } ∈ I,

we have a subset R of N such that R^c ∈ I, where

    R = { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x^{(q_0)}))_n − a_{q_0})/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < (δ/3D)^H }.

Let V^c = G^c ∪ H^c ∪ R^c, where

    V = { n ∈ N : ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x))_n − a)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < δ }.

On G ∩ R ∩ H the three conditions

    ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x^{(q_0)}))_n − (Z^p(∆^r x))_n)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < (δ/3D)^H,

    ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x^{(q_0)}))_n − a_{q_0})/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < (δ/3D)^H,

    ∑_{n=1}^∞ M_n[‖ u_n(a_{q_0} − a)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < (δ/3D)^H

hold simultaneously, and together they give ∑_{n=1}^∞ M_n[‖ u_n((Z^p(∆^r x))_n − a)/ρ, z_1, · · · , z_{n−1} ‖]^{p_n} < δ. Therefore V^c ⊆ G^c ∪ H^c ∪ R^c ∈ I.

Hence, the result follows.

Note that the inclusions m^I_Z(M, ∆^r, p, u, ‖·, · · · , ·‖) ⊂ l_∞(M, ∆^r, p, u, ‖·, · · · , ·‖) and m^I_{Z_0}(M, ∆^r, p, u, ‖·, · · · , ·‖) ⊂ l_∞(M, ∆^r, p, u, ‖·, · · · , ·‖) are strict. So, in view of Theorem 3.10, we have the following result.

Remark: The spaces m^I_Z(M, ∆^r, p, u, ‖·, · · · , ·‖) and m^I_{Z_0}(M, ∆^r, p, u, ‖·, · · · , ·‖) are nowhere dense in l_∞(M, ∆^r, p, u, ‖·, · · · , ·‖).

Theorem 3.11. The spaces m^I_Z(M, ∆^r, p, u, ‖·, · · · , ·‖) and m^I_{Z_0}(M, ∆^r, p, u, ‖·, · · · , ·‖) are not separable.

Proof. The proof follows from Theorem 2.12.


References

[1] V. K. Bhardwaj and N. Singh, Some sequence spaces defined by Orlicz functions, Demonstratio Math., 33 (2000), 571-582.
[2] M. Et, On some new Orlicz sequence spaces, J. Anal., 9 (2001), 21-28.
[3] M. Et and R. Colak, On generalized difference sequence spaces, Soochow J. Math., 21 (1995), 377-386.
[4] A. Esi, Some new sequence spaces defined by Orlicz functions, Bull. Inst. Math. Acad. Sinica, 27 (1999), 71-76.
[5] A. Esi and M. Et, Some new sequence spaces defined by a sequence of Orlicz functions, Indian J. Pure Appl. Math., 31 (2000), 967-972.
[6] S. Gähler, Lineare 2-normierte Räume, Math. Nachr., 28 (1965), 1-43.
[7] H. Gunawan, On n-inner products, n-norms, and the Cauchy-Schwarz inequality, Sci. Math. Jpn., 5 (2001), 47-54.
[8] H. Gunawan, The space of p-summable sequences and its natural n-norm, Bull. Aust. Math. Soc., 64 (2001), 137-147.
[9] H. Gunawan and M. Mashadi, On n-normed spaces, Int. J. Math. Math. Sci., 27 (2001), 631-639.
[10] B. Hazarika, On generalized difference ideal convergence in random 2-normed spaces, Filomat, 26 (2012), 1265-1274.
[11] B. Hazarika, K. Tamang and B. K. Singh, Zweier ideal convergent sequence spaces defined by Orlicz functions, J. Math. Comput. Sci., 8 (2014), 307-318.
[12] H. Kızmaz, On certain sequence spaces, Canad. Math. Bull., 24 (1981), 169-176.
[13] P. K. Kamthan and M. Gupta, Sequence Spaces and Series, Marcel Dekker, New York, 1980.
[14] P. Kostyrko, T. Šalát and W. Wilczyński, I-convergence, Real Anal. Exchange, 26 (2000), 669-686.
[15] B. K. Lahiri, I and I*-convergence in topological spaces, Math. Bohem., 130 (2005), 153-160.
[16] J. Lindenstrauss and L. Tzafriri, On Orlicz sequence spaces, Israel J. Math., 10 (1971), 379-390.
[17] E. Malkowsky, Recent results in the theory of matrix transformations in sequence spaces, Mat. Vesnik, 49 (1997), 187-196.
[18] A. Misiak, n-inner product spaces, Math. Nachr., 140 (1989), 299-319.
[19] M. Mursaleen and A. Alotaibi, On I-convergence in random 2-normed spaces, Math. Slovaca, 61 (2011), 933-940.
[20] S. D. Parashar and B. Choudhary, Sequence spaces defined by Orlicz functions, Indian J. Pure Appl. Math., 25 (1994), 419-428.
[21] K. Raj and S. K. Sharma, Difference I-convergent sequence spaces defined by a sequence of modulus functions over n-normed spaces, Matematika, 29 (2013), 187-196.
[22] K. Raj, S. Jamwal and S. K. Sharma, New classes of generalized sequence spaces defined by an Orlicz function, J. Comput. Anal. Appl., 15 (2013), 730-737.
[23] K. Raj, S. K. Sharma and A. K. Sharma, Some difference sequence spaces in n-normed spaces defined by Musielak-Orlicz function, Armen. J. Math., 3 (2010), 127-141.
[24] T. Šalát, B. C. Tripathy and M. Ziman, On some properties of I-convergence, Tatra Mt. Math. Publ., 28 (2004), 279-286.
[25] M. Şengönül, On the Zweier sequence space, Demonstratio Math., 40 (2007), 181-196.
[26] B. C. Tripathy, Y. Altin and M. Et, Generalized difference sequence spaces on seminormed spaces defined by Orlicz functions, Math. Slovaca, 58 (2008), 315-324.
[27] B. C. Tripathy and B. Hazarika, Some I-convergent sequence spaces defined by Orlicz functions, Acta Math. Appl. Sin. Engl. Ser., 27 (2011), 149-154.
[28] A. Wilansky, Summability through Functional Analysis, North-Holland Math. Stud., North-Holland, Amsterdam, 1984.

School of Mathematics, Shri Mata Vaishno Devi University, Katra-182320, J & K (India)
E-mail address: [email protected]
E-mail address: [email protected]


Composition operators on some general families of function spaces

R. A. Rashwan
Assiut University, Faculty of Science, Department of Mathematics, Box 71515, Assiut, Egypt

A. El-Sayed Ahmed
Sohag University, Faculty of Science, Department of Mathematics, 82524 Sohag, Egypt
Current address: Taif University, Faculty of Science, Mathematics Department, Box 888, El-Hawiyah, El-Taif 5700, Saudi Arabia
e-mail: [email protected]

M. A. Bakhit
Al-Azhar University, Assiut Branch, Faculty of Science, Department of Mathematics, Egypt

Abstract

In this paper, we give Carleson measure characterizations of the classes F#(p, q, s); we then use these characterizations to study the boundedness and compactness of the composition operator Cφ from the α-normal classes Nα into the F#(p, q, s) classes. Moreover, necessary and sufficient conditions for Cφ from the spherical Dirichlet class D# to the classes F#(p, q, s) to be compact are given in terms of the map φ.

1 Introduction

Let D = {z : |z| < 1} be the unit disc in the complex plane C and ∂D its boundary. Let M(D) be the class of all meromorphic functions and H(D) the class of all holomorphic functions on D. Also, let dA(z) be the normalized area measure, so that A(D) ≡ 1. The Green's function is defined by

    g(z, a) = log (1/|ϕa(z)|),

where ϕa(z) = (a − z)/(1 − āz) is the automorphism of D interchanging the points zero and a ∈ D. Note that ϕa(ϕa(z)) = z, and so ϕa^{−1}(z) = ϕa(z). For all z, a ∈ D, we know that

    (1 − |z|²)|ϕ′a(z)| = 1 − |ϕa(z)|² ≤ 2g(z, a).    (1)
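As a quick sanity check (an illustration, not part of the paper), both the identity and the inequality in (1) can be verified numerically at random points of the disc, using ϕa(z) = (a − z)/(1 − āz) and ϕ′a(z) = (|a|² − 1)/(1 − āz)²:

```python
import cmath, math, random

def phi(a, z):
    """Disc automorphism interchanging 0 and a."""
    return (a - z) / (1 - a.conjugate() * z)

def dphi(a, z):
    """Derivative of phi_a at z."""
    return (abs(a) ** 2 - 1) / (1 - a.conjugate() * z) ** 2

random.seed(0)
for _ in range(1000):
    # random points z, a in the unit disc
    z = cmath.rect(random.random() * 0.99, random.random() * 2 * math.pi)
    a = cmath.rect(random.random() * 0.99, random.random() * 2 * math.pi)
    lhs = (1 - abs(z) ** 2) * abs(dphi(a, z))
    mid = 1 - abs(phi(a, z)) ** 2
    g = math.log(1 / abs(phi(a, z)))     # Green's function g(z, a)
    assert abs(lhs - mid) < 1e-9         # the identity in (1)
    assert mid <= 2 * g + 1e-12          # the inequality in (1)
print("identity and inequality (1) verified at 1000 random points")
```

The inequality reduces to the elementary fact 1 − t² ≤ 2 log(1/t) for 0 < t ≤ 1, with t = |ϕa(z)|.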

For a function f ∈ M(D), a natural analogue of f′(z) is the spherical derivative

    f#(z) = |f′(z)| / (1 + |f(z)|²).

For α ∈ (0, ∞) we say that the function f ∈ M(D) belongs to the α-normal class Nα = Nα(D) if

    ‖f‖_{Nα} = |f(0)| + sup_{z∈D} (1 − |z|²)^α f#(z) < ∞ (see [18]).

The little α-normal class Nα_0 = Nα_0(D) consists of all f ∈ Nα such that

    lim_{|z|→1} (1 − |z|²)^α f#(z) = 0 (see [18]).

AMS 2000 classification: 47B38, 30D99, 46E15.
Key words and phrases: composition operators, normal functions, F#(p, q, s) classes.


J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NOS. 1-2, 164-175, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


For some results on the Nα class, we refer to [1, 8, 11]. For some r ∈ (0, 1), we say that the function f ∈ M(D) belongs to the class of spherical Bloch functions B# (see [21]) if

    ‖f‖²_{B#} := sup_{a∈D} ∫_{D(a,r)} (f#(z))² dA(z) < ∞,

where D(a, r) = {z : ρ(z, a) = |ϕa(z)| = |(z − a)/(1 − āz)| < r} is the pseudo-hyperbolic disc with center a and radius r. We clearly have N ⊂ B#. In the analytic case, we know that the corresponding definitions with f#(z) replaced by |f′(z)| both give the space of Bloch functions B. In the meromorphic case, the situation is different.

A function f ∈ M(D) belongs to the spherical Dirichlet class D# (see [12]) if

    ‖f‖²_{D#} := ∫_D (f#(z))² dA(z) < ∞.

Let f ∈ M(D), 0 < p < ∞, −2 < q < ∞ and 0 < s < ∞. If

    ‖f‖^p_{F#(p,q,s)} = sup_{a∈D} ∫_D (f#(z))^p (1 − |z|²)^q g^s(z, a) dA(z) < ∞,

then f ∈ F#(p, q, s) (see [18]). Moreover, if

    lim_{|a|→1} ∫_D (f#(z))^p (1 − |z|²)^q g^s(z, a) dA(z) = 0,

then f ∈ F#_0(p, q, s). The classes F#(p, q, s) were intensively studied by Rättyä in [18]. The classes F#(p, q, s) and F#_0(p, q, s) sometimes behave in a quite different way than their analytic counterparts. For example, the Green's function g(z, a) cannot in general be replaced by the weight function (1 − |ϕa(z)|²) (see [2] and [21]). Therefore the classes M#(p, q, s) and M#_0(p, q, s) are defined as follows. A function f ∈ M(D) is said to belong to the class M#(p, q, s) if

    ‖f‖^p_{p,q,s} = sup_{a∈D} ∫_D (f#(z))^p (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z) < ∞.

If

    lim_{|a|→1} ∫_D (f#(z))^p (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z) = 0,

then f ∈ M#_0(p, q, s). By the inequality 1 − |ϕa(z)|² ≤ 2g(z, a) for all z, a ∈ D, we see that F#(p, q, s) ⊂ M#(p, q, s) and F#_0(p, q, s) ⊂ M#_0(p, q, s), but, as mentioned before, the opposite inclusions do not hold in general.

If E is any set, the characteristic function χE of the set E is given by

    χE(z) = 1, if z ∈ E;  0, if z ∉ E.

The function χE(z) is measurable if and only if E is measurable (see [19]).

Recall that a linear operator T : X → Y is said to be bounded if there exists a constant C > 0 such that ‖T(f)‖_Y ≤ C‖f‖_X for all f ∈ X. Moreover, T : X → Y is said to be compact if it takes bounded sets in X to sets in Y which have compact closure. For Banach spaces X and Y contained in M(D) or H(D), T : X → Y is compact if and only if for each bounded sequence (x_n) ⊂ X, the sequence (T x_n) ⊂ Y contains a subsequence converging to a function f ∈ Y. Two quantities A and B are said to be equivalent if there exist two finite positive constants C1 and C2 such that C1 B ≤ A ≤ C2 B, written A ≈ B. Throughout this paper, the letter C denotes a positive constant which is not necessarily the same from line to line.


The composition operator Cφ : H(D) → H(D) is defined by Cφf = f ∘ φ, where φ is a holomorphic self-map of the open unit disc in C. There have been several attempts to study compactness and boundedness of composition operators on many function spaces (see e.g. [4, 5, 6, 7, 9, 10, 13, 17, 23] and others). Lappan and Xiao in [14] studied the boundedness of the composition operator Cφ in the meromorphic case from the normal class N to its Möbius-invariant subclasses.

In this paper, we show some clear differences between the analytic and the meromorphic cases of the classes F(p, q, s). We give a Carleson measure characterization of the composition operator Cφ on F#(p, q, s). Also, we characterize, by means of s-Carleson measures and compact s-Carleson measures, the boundedness and compactness of the composition operator Cφ : M(D) → M(D) from the α-normal classes Nα to the F#(p, q, s) classes. Moreover, we investigate the compactness of the composition operator Cφ from the spherical Dirichlet class D# to F#(p, q, s).

Now we quote several auxiliary results which will be used in the proofs of our main results. The following lemma can be found in [18], Corollary 3.2.3.

Lemma 1.1 Let 0 < p < ∞, −2 < q < ∞ and 0 < s < ∞.

• If M#(p, q, s) ⊂ N^{(q+2)/p}, then F#(p, q, s) = M#(p, q, s).

• If M#_0(p, q, s) ⊂ N^{(q+2)/p}_0, then F#_0(p, q, s) = M#_0(p, q, s).

Using the same technique as in Lemma 1 of [14], we get the following lemma:

Lemma 1.2 For α > 0, there are two functions f1, f2 ∈ Nα such that

    M0 = inf_{z∈D} [(1 − |z|²)^α (f1#(z) + f2#(z))] > 0,

where M0 is a positive constant.

Using standard arguments similar to those outlined in Proposition 3.11 of [4], we have the following lemma:

Lemma 1.3 Let α ∈ (0, ∞), 0 < p < ∞, −2 < q < ∞ and 0 < s < ∞. Let φ : D → D be an analytic self-map and X = F#(p, q, s) or B#. Then Cφ : Nα → X is compact if and only if for every bounded sequence (fn) ⊂ Nα which converges to 0 uniformly on compact subsets of D as n → ∞, we have lim_{n→∞} ‖Cφ fn‖_X = 0.

2 The classes F#(p, q, s) and s-Carleson measures

In this section, we show some clear differences between the analytic and the meromorphic cases of the classes F(p, q, s).

For 0 < s < ∞, we say that a positive measure µ defined on D is a bounded s-Carleson measure provided µ(S(I)) = O(|I|^s) for all sub-arcs I of ∂D, where |I| denotes the arc length of I ⊂ ∂D and S(I) denotes the Carleson box based on I, that is,

    S(I) = { z ∈ D : 1 − |I|/2π ≤ |z| < 1, z/|z| ∈ I }.

For 0 < s < ∞, a positive Borel measure µ on D is a bounded s-Carleson measure if and only if

    sup_{a∈D} ∫_D |ϕ′a(z)|^s dµ < ∞;    (2)

and µ is a compact s-Carleson measure if and only if

    lim_{|a|→1} ∫_D |ϕ′a(z)|^s dµ = 0.    (3)
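As a numerical illustration (not from the paper), the box condition µ(S(I)) = O(|I|^s) can be checked directly for the standard example dµ = (1 − |z|²)^s dA(z): in polar coordinates the measure of a Carleson box has a closed form, and the ratio µ(S(I))/|I|^s stays bounded as |I| → 0.

```python
import math

def mu_box(h, s):
    """mu(S(I)) for dmu = (1 - |z|^2)^s dA(z) and an arc I of length h,
    with normalized area measure (A(D) = 1):
    mu(S(I)) = (h/pi) * int_{1-h/(2pi)}^{1} (1 - r^2)^s r dr."""
    t = h / (2 * math.pi)                                   # inner radius of S(I) is 1 - t
    inner = (1 - (1 - t) ** 2) ** (s + 1) / (2 * (s + 1))   # exact value of the radial integral
    return (h / math.pi) * inner

s = 1.5
ratios = [mu_box(h, s) / h ** s for h in (1.0, 0.1, 0.01, 0.001)]
print(ratios)   # bounded (in fact decreasing) as |I| -> 0, so mu(S(I)) = O(|I|^s)
```

In fact µ(S(I)) behaves like |I|^{s+2} here, so the ratio even tends to 0, consistent with this µ being a compact s-Carleson measure.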


Lemma 2.1 Let 0 < p < ∞, −2 < q < ∞ and f ∈ M(D). Then, for 0 < s ≤ 1, f ∈ F#(p, q, s) if and only if dν = (f#(z))^p (1 − |z|²)^{q+s} dA(z) is a bounded s-Carleson measure on D.

Proof: From (2) it follows that dν = (f#(z))^p (1 − |z|²)^{q+s} dA(z) is a bounded s-Carleson measure if and only if

    sup_{a∈D} ∫_D |ϕ′a(z)|^s dν < ∞.    (4)

Using (1), we see that

    sup_{a∈D} ∫_D |ϕ′a(z)|^s dν = sup_{a∈D} ∫_D |ϕ′a(z)|^s (f#(z))^p (1 − |z|²)^{q+s} dA(z)
      = sup_{a∈D} ∫_D (f#(z))^p (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z)
      ≤ C sup_{a∈D} ∫_D (f#(z))^p (1 − |z|²)^q g^s(z, a) dA(z) < ∞.

If dν is a bounded s-Carleson measure, then f ∈ N^{(q+2)/p}, and for 0 < s ≤ 1 we have f ∈ F#(p, q, s).

For 1 < s < ∞ the meromorphic case differs from the analytic case (see Theorem 2.4 in [22]), as the following lemma shows.

Lemma 2.2 Let 0 < p < ∞, −2 < q < ∞ and f ∈ M(D). Then, for 1 < s < ∞, the following statements are not equivalent:

(i) f ∈ F#(p, q, s);

(ii) dν = (f#(z))^p (1 − |z|²)^{q+s} dA(z) is a bounded s-Carleson measure on D.

Proof: An easy calculation shows that Nα ⊂ F#(p, αp − 2, s) for 0 < p < ∞, 0 < α < ∞ and 1 < s < ∞ (see Theorems 3.2.1, 3.3.3 and also Corollary 3.2.2 in [18]). In the case α = (q + 2)/p, we have that

    N^{(q+2)/p} ⊂ F#(p, q, s).    (5)

However, in this case it is possible that F#(p, q, s) ⊄ N^{(q+2)/p}, as in the case of the class Q#_s (see [2] and [3], Lemma 1). Hence F#(p, q, s) ≠ N^{(q+2)/p} (for all 0 < p < ∞, −2 < q < ∞ and 1 < s < ∞), and dν being a bounded s-Carleson measure does not imply that f ∈ F#(p, q, s), by Lemma 1.1. Thus the lemma is proved.

In the case of compact s-Carleson measures we have a more complete description, which is similar to the analytic case.

Lemma 2.3 Let 0 < p < ∞, −2 < q < ∞ and f ∈ M(D). Then, for 0 < s < ∞, the following are equivalent:

(i) f ∈ F#_0(p, q, s);

(ii) dν = (f#(z))^p (1 − |z|²)^{q+s} dA(z) is a compact s-Carleson measure on D.

Proof: (i) ⇒ (ii). If f ∈ F#_0(p, q, s), then, by (1) and (3), dν is a compact s-Carleson measure on D.

(ii) ⇒ (i). For any r ∈ (0, 1) and D(a, r) = {z ∈ D : |ϕa(z)| < r}, we have

    I(a) = ∫_D (f#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
         = ( ∫_{D(a,r)} + ∫_{D\D(a,r)} ) (f#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
         = I1(a) + I2(a).


Suppose now that dν is a compact s-Carleson measure on D. Since

    g(z, a) = log(1/|ϕa(z)|)  { ≥ log 4 > 1,           if |ϕa(z)| ≤ 1/4;
                                ≤ 4(1 − |ϕa(z)|²),     if |ϕa(z)| > 1/4,

we obtain for s0 = max(s, 2) and D(a, 1/4) = {z ∈ D : |ϕa(z)| < 1/4} that

    I1(a) = ∫_{D(a,1/4)} (f#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
          ≤ ∫_{D(a,1/4)} (f#(z))^p (1 − |z|²)^q g^{s0}(z, a) dA(z)    (6)

and

    I2(a) = ∫_{D\D(a,1/4)} (f#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
          ≤ 4^s ∫_{D\D(a,1/4)} (f#(z))^p (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z).    (7)

Hence, by (6) and (7), we get

    I(a) = ∫_D (f#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
         ≤ C ∫_D (f#(z))^p (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z),    (8)

which means that f ∈ F#_0(p, q, s). Hence the proof is completed.
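The two elementary estimates on the Green's function used above depend only on t = |ϕa(z)|. As a quick numerical check (an illustration, not from the paper), both branches can be verified on (0, 1):

```python
import math

# The proof bounds g = log(1/t) piecewise in t = |phi_a(z)|:
#   t <= 1/4  =>  log(1/t) >= log 4 > 1
#   t >  1/4  =>  log(1/t) <= 4 * (1 - t^2)
for i in range(1, 10000):
    t = i / 10000
    g = math.log(1 / t)
    if t <= 0.25:
        assert g >= math.log(4)
    else:
        assert g <= 4 * (1 - t * t)
print("both estimates on g(z, a) hold on (0, 1)")
```

The second branch follows since h(t) = 4(1 − t²) − log(1/t) is positive at t = 1/4 and decreases to h(1) = 0 on the relevant range.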

3 Boundedness of composition operators from Nα to F#(p, q, s)

We are going to work with composition operators acting on Nα and Nα_0. For a function φ : D → D, we say that Cφ : Nα (Nα_0) → F#(p, q, s) is bounded if

    ‖Cφ f‖_{F#(p,q,s)} ≤ C ‖f‖_{Nα},   f ∈ Nα (Nα_0).

For 0 < p < ∞, −2 < q < ∞ and 0 < s < ∞, we define the following notations:

    dµφ(α, p, q, s) = [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^{q+s} dA(z),

    Φφ(α, p, q, s, a) = ∫_D [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q g^s(z, a) dA(z),

    Ψφ(α, p, q, s, a) = ∫_D [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z).

Lemma 3.1 If Ψφ(α, p, q, s, a) is finite at some point a ∈ D, then Ψφ(α, p, q, s, a) is a continuous function of a ∈ D.

Proof: The proof is very similar to that of Lemma 2.3 in [20].

We prove the following result.


Theorem 3.1 Let 2 ≤ p < ∞, −2 < q < ∞ and 0 < s < ∞. Then, the following statements are equivalent:

(i) Cφ : Nα → F#(p, q, s) is bounded;

(ii) For p ≥ 2, Cφ : Nα_0 → F#(p, q, s) is bounded;

(iii) sup_{a∈D} Φφ(α, p, q, s, a) < ∞;

(iv) dµφ(α, p, q, s) is a bounded s-Carleson measure.

Proof: (i) ⇒ (ii) is clear, since Nα_0 ⊂ Nα.

(ii) ⇒ (iii). First, note that if f ∈ Nα and f_r(z) = f(rz) for r ∈ (0, 1), then f_r ∈ Nα_0 with ‖f_r‖_{Nα} ≤ ‖f‖_{Nα}. Secondly, for the two functions f1 and f2 given by Lemma 1.2, we have, for k = 1, 2 and a ∈ D,

    ‖Cφ‖^p ‖f_k‖^p_{Nα} ≥ ‖Cφ‖^p ‖(f_k)_r‖^p_{Nα} ≥ ‖Cφ(f_k)_r‖^p_{F#(p,q,s)}
      ≥ ∫_D (f_k#(rφ(z)))^p |rφ′(z)|^p (1 − |z|²)^q g^s(z, a) dA(z).

Because of Lemma 1.2,

    (1 − |z|²)^α (f1#(z) + f2#(z)) ≥ M0 > 0, for all z ∈ D,

we have that

    |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ≤ (2^p/M0^p) ((f1 ∘ φ)#(z))^p + (2^p/M0^p) ((f2 ∘ φ)#(z))^p.

Thus,

    sup_{a∈D} ∫_D [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ (2^p/M0^p) sup_{a∈D} ∫_D ((f1 ∘ φ)#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
        + (2^p/M0^p) sup_{a∈D} ∫_D ((f2 ∘ φ)#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ (2^p/M0^p) ‖Cφ‖^p ( ‖f1‖^p_{Nα} + ‖f2‖^p_{Nα} ).

This inequality and Fatou's lemma imply that (iii) holds.

(iii) ⇒ (i). For f with ‖f‖_{Nα} ≤ 1, the boundedness of Cφ amounts to ‖Cφ f‖_{F#(p,q,s)} < ∞. Let f ∈ Nα with ‖f‖_{Nα} ≤ 1; we have

    ‖Cφ f‖^p_{F#(p,q,s)} = sup_{a∈D} ∫_D ((f ∘ φ)#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ ‖f‖^p_{Nα} sup_{a∈D} ∫_D [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ sup_{a∈D} Φφ(α, p, q, s, a) < ∞.

Hence (iii) implies (i). It remains to check that (iii) and (iv) are equivalent. Suppose that (iii) holds. By (2), (iv) is equivalent to sup_{a∈D} Ψφ(α, p, q, s, a) < ∞. From (1), we get that

    sup_{a∈D} ∫_D [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z)
      = sup_{a∈D} ∫_D [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^{q+s} |ϕ′a(z)|^s dA(z)
      = sup_{a∈D} ∫_D |ϕ′a(z)|^s dµφ(α, p, q, s) ≤ 2^s sup_{a∈D} Φφ(α, p, q, s, a) < ∞,


so (iv) holds. Now suppose that (iv) is true. For f ∈ Nα, we get

    ‖Cφ f‖^p_{F#(p,q,s)} ≤ C sup_{a∈D} ∫_D ((f ∘ φ)#(z))^p (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z)
      ≤ C ‖f‖^p_{Nα} sup_{a∈D} Ψφ(α, p, q, s, a),

and so Cφ : Nα → F#(p, q, s) is bounded. Thus (iv) implies (i) and the proof is complete.

Theorem 3.2 Let 2 ≤ p < ∞, −2 < q < ∞ and 0 < s < ∞. Then, the following statements are equivalent:

(i) Cφ : Nα_0 → F#_0(p, q, s) is bounded;

(ii) sup_{a∈D} Φφ(α, p, q, s, a) < ∞ and φ ∈ F#_0(p, q, s).

Proof: Suppose that (ii) holds, and let f ∈ Nα_0. Then Cφ : Nα_0 → F#(p, q, s) is bounded by Theorem 3.1, so we only need to show that Cφ f ∈ F#_0(p, q, s). For any ε > 0, we can choose r ∈ (0, 1) such that

    (f#(w))^p (1 − |w|²)^{pα} < ε, for |w| > r.

Thus

    sup_{a∈D} ∫_{|φ(z)|>r} ((f ∘ φ)#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ ε sup_{a∈D} ∫_{|φ(z)|>r} [ |φ′(z)|^p (1 − |z|²)^q / (1 − |φ(z)|²)^{pα} ] g^s(z, a) dA(z)
      ≤ ε sup_{a∈D} Φφ(α, p, q, s, a).

Since φ ∈ F#_0(p, q, s), for r as above we have

    lim_{|a|→1} ∫_{|φ(z)|≤r} ((Cφ f)#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ ‖f‖^p_{Nα} lim_{|a|→1} ∫_{|φ(z)|≤r} [ |φ′(z)|^p (1 − |z|²)^q / (1 − |φ(z)|²)^{pα} ] g^s(z, a) dA(z)
      ≤ [ ‖f‖^p_{Nα} / (1 − r²)^{pα} ] lim_{|a|→1} ∫_{|φ(z)|≤r} |φ′(z)|^p (1 − |z|²)^q g^s(z, a) dA(z) = 0.

Combining the above, we get that Cφ f ∈ F#_0(p, q, s).

Conversely, if Cφ : Nα_0 → F#_0(p, q, s) is bounded, then Cφ : Nα_0 → F#(p, q, s) is bounded. By Theorem 3.1, we get that sup_{a∈D} Φφ(α, p, q, s, a) < ∞. Finally, by considering f(z) = z, the boundedness of Cφ implies φ ∈ F#_0(p, q, s). The proof is complete.

4 Compactness of composition operators from Nα to F#(p, q, s)

Theorem 4.1 Let α ∈ (0, ∞), 0 < p, s < ∞, −2 < q < ∞ with q + s > −1, and let φ : D → D be a holomorphic function. Then, for f ∈ M(D), the following are equivalent:

(i) Cφ : Nα → F#(p, q, s) is compact;

(ii) Cφ : Nα_0 → F#(p, q, s) is compact;

(iii) φ ∈ F#(p, q, s) and

    lim_{r→1} sup_{a∈D} ∫_{|φ(z)|>r} [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q g^s(z, a) dA(z) = 0.    (9)


Proof: The proof follows from the proof of Theorem 1 in [23] by replacing the derivative f′ with the spherical derivative f#.

Theorem 4.2 Let α ∈ (0, ∞), p ≥ 1, −2 < q < ∞, and φ : D → D. Then, for f ∈ M(D), the following statements are equivalent:

(i) Cφ : Nα → F#_0(p, q, s) is bounded;

(ii) Cφ : Nα → F#_0(p, q, s) is compact;

(iii) lim_{|a|→1} Φφ(α, p, q, s, a) = 0;

(iv) dµφ(α, p, q, s) is a compact s-Carleson measure;

(v) Cφ : Nα_0 → F#_0(p, q, s) is compact;

(vi) φ ∈ F#_0(p, q, s) and (9) holds.

Proof: (ii) ⇒ (i). This implication is obvious.

(i) ⇒ (iii). Assume that (i) holds. Let f1, f2 ∈ Nα be the two functions from Lemma 1.2, so that

    (1 − |z|²)^α (f1#(z) + f2#(z)) ≥ M0 > 0, for all z ∈ D.

Since Cφ : Nα → F#_0(p, q, s) is bounded, we get Cφ f1 ∈ F#_0(p, q, s) and Cφ f2 ∈ F#_0(p, q, s). Thus

    lim_{|a|→1} Φφ(α, p, q, s, a) = lim_{|a|→1} ∫_D [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ (2^p/M0^p) lim_{|a|→1} ∫_D [ ((f1 ∘ φ)#(z))^p + ((f2 ∘ φ)#(z))^p ] (1 − |z|²)^q g^s(z, a) dA(z) = 0.

(iii) ⇒ (iv). Suppose (iii) holds. By (2), (iv) is equivalent to lim_{|a|→1} Ψφ(α, p, q, s, a) = 0. By (1) we have

    lim_{|a|→1} Ψφ(α, p, q, s, a) ≤ lim_{|a|→1} Φφ(α, p, q, s, a) = 0.

(iv) ⇒ (i). Suppose that (iv) holds; then for all f ∈ Nα, we get

    lim_{|a|→1} ∫_D ((f ∘ φ)#(z))^p (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z)
      ≤ ‖f‖^p_{Nα} lim_{|a|→1} Ψφ(α, p, q, s, a) = 0.

By Lemma 2.3, Cφ f ∈ F#_0(p, q, s), so Cφ : Nα → F#_0(p, q, s) is bounded.

(iv) ⇒ (ii). Let (fn) ⊂ Nα converge to 0 uniformly on compact subsets of D, with ‖fn‖_{Nα} ≤ 1. We will show that ‖Cφ fn‖_{F#(p,q,s)} converges to 0 as n → ∞. Since lim_{|a|→1} Ψφ(α, p, q, s, a) = 0, for every ε > 0 there is an r > 0 such that Ψφ(α, p, q, s, a) < ε for a ∈ Gr, where Gr = {z ∈ D : |z| > r}. So for any compact set U ⊂ D, we have

    I(a) = ∫_{D−U} [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q g^s(z, a) dA(z) < ε, for a ∈ Gr.    (10)

By Lemma 3.1, I(a) is also continuous. For any given ai there is a compact set Ui ⊂ D such that (10) holds, and the continuity of I(a) tells us that there is a neighborhood U(ai) of ai on which (10) holds. By the compactness of C(Gr) = D − Gr, we can choose finitely many U(ai) (i = 1, 2, . . . , n) such that C(Gr) ⊂ ⋃_{i=1}^n U(ai); then (10) holds for all a ∈ C(Gr), and hence for all a ∈ D.


There exists a number N > 0 such that, for n ≥ N, sup_U [fn#(φ(z))]^p (1 − |φ(z)|²)^{αp} < ε. Then for n ≥ N and any a ∈ D, we have

    ∫_D (fn#(φ(z)))^p |φ′(z)|^p (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ ∫_D (fn#(φ(z)))^p |φ′(z)|^p (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z)
      ≈ ‖fn‖^p_U + ∫_{D−U} [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q (1 − |ϕa(z)|²)^s dA(z) ≤ ε.

Thus ‖Cφ fn‖_{F#(p,q,s)} converges to 0 as n → ∞, i.e., Cφ : Nα → F#_0(p, q, s) is compact.

(ii) ⇒ (v). This is clear.

(v) ⇒ (vi). Since the identity mapping belongs to Nα_0, φ ∈ F#_0(p, q, s). Since Cφ : Nα_0 → F#_0(p, q, s) is compact, Cφ : Nα → F#_0(p, q, s) is compact. By Lemma 3.1, it is easy to see that (9) holds.

(vi) ⇒ (i). Suppose (vi) is satisfied. For every ε > 0 there exists r0, 0 < r0 < 1, such that for r0 < r < 1 and all a ∈ D,

    ∫_{|φ(z)|>r} [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ sup_{a∈D} ∫_{|φ(z)|>r} [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q g^s(z, a) dA(z) < ε.

Therefore, with Ωr = {z ∈ D : |φ(z)| > r},

    ∫_{Ωr} ((fn ∘ φ)#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ ‖fn‖^p_{Nα} ∫_{Ωr} [ |φ′(z)|^p / (1 − |φ(z)|²)^{αp} ] (1 − |z|²)^q g^s(z, a) dA(z) < ε.

On the other hand,

    lim_{|a|→1} ∫_{|φ(z)|≤r} ((Cφ f)#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
      ≤ ‖f‖^p_{Nα} lim_{|a|→1} ∫_{|φ(z)|≤r} [ |φ′(z)|^p (1 − |z|²)^q / (1 − |φ(z)|²)^{pα} ] g^s(z, a) dA(z)
      ≤ [ ‖f‖^p_{Nα} / (1 − r²)^{pα} ] lim_{|a|→1} ∫_{|φ(z)|≤r} |φ′(z)|^p (1 − |z|²)^q g^s(z, a) dA(z) = 0.

Combining the above, we get that Cφ f ∈ F#_0(p, q, s), i.e., Cφ : Nα → F#_0(p, q, s) is bounded. The proof is completed.

Now we consider the composition operators from the spherical Dirichlet class D# into the F#(p, q, s) classes. Our result is stated as follows:

Theorem 4.3 Let α ∈ (0, ∞), 2 ≤ p < ∞, 1 < s < ∞ and −2 < q < ∞. If φ : D → D, then Cφ : D# → F#(p, q, s) is compact if and only if

    lim_{|a|→1} ‖Cφ(ϕa)‖_{F#(p,q,s)} = 0.    (11)

Proof: Assume that Cφ : D# → F#(p, q, s) is compact. Since {ϕa : a ∈ D} is a bounded set in D# and (ϕa − a) → 0 uniformly on compact sets as |a| → 1, the compactness of Cφ yields that

    ‖Cφ(ϕa)‖_{F#(p,q,s)} → 0 as |a| → 1.
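That {ϕa : a ∈ D} is bounded in D# can be seen concretely: by the change of variable w = ϕa(z), one gets ∫_D (ϕa#(z))² dA(z) = ∫_D (1 + |w|²)^{−2} dA(w) = 1/2 for every a (with normalized area measure). The following numerical sketch (an illustration, not from the paper) confirms this invariance by midpoint-rule integration in polar coordinates:

```python
import cmath, math

def phi(a, z):
    return (a - z) / (1 - a.conjugate() * z)

def sph_deriv_phi(a, z):
    """Spherical derivative of phi_a: |phi_a'(z)| / (1 + |phi_a(z)|^2)."""
    dphi = (abs(a) ** 2 - 1) / (1 - a.conjugate() * z) ** 2
    return abs(dphi) / (1 + abs(phi(a, z)) ** 2)

def dirichlet_norm_sq(a, nr=300, nt=300):
    """Midpoint-rule approximation of int_D (phi_a^#)^2 dA (normalized area)."""
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) / nr
        for j in range(nt):
            t = 2 * math.pi * (j + 0.5) / nt
            total += sph_deriv_phi(a, cmath.rect(r, t)) ** 2 * r
    # weight (1/pi) * r * dr * dtheta with dr = 1/nr, dtheta = 2*pi/nt
    return total * (2 / (nr * nt))

for a in (0j, 0.5 + 0j, 0.6 + 0.6j):
    print(a, dirichlet_norm_sq(a))   # each value is close to 1/2, independent of a
```

The a-independence is exactly the Möbius invariance that makes {ϕa} a bounded family in D#.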


Conversely, let (fn) ⊂ D# be a bounded sequence. Since fn ∈ D# ⊂ N,

    |fn(z)| ≤ sup_n ‖fn‖_{D#} ( 1 + (1/2) log((1 + |z|)/(1 − |z|)) ), for z ∈ D.

Hence (fn) is a normal family. Thus, there is a subsequence (fn_k) which converges to a function f analytic on D, and both fn_k → f and fn_k# → f# uniformly on compact subsets of D. It follows that f ∈ D# from the following estimates:

    ∫_D (f#(z))² dA(z) = ∫_D lim_{k→∞} (fn_k#(z))² dA(z)
      ≤ lim inf_{k→∞} ∫_D (fn_k#(z))² dA(z)
      ≤ lim inf_{k→∞} ‖fn_k‖²_{D#} ≤ C < ∞.

Replacing f by Cφ f, we show that Cφ is compact by proving that

    ‖Cφ fn_k − Cφ f‖_{F#(p,q,s)} → 0 as k → ∞.

We write

    ‖Cφ(ϕa)‖^p_{F#(p,q,s)} = sup_{a∈D} ∫_D [(ϕa ∘ φ)#(z)]^p (1 − |z|²)^q g^s(z, a) dA(z)
      = sup_{a∈D} ∫_D (ϕa#(φ(z)))^p |φ′(z)|^p (1 − |z|²)^q g^s(z, a) dA(z)
      = sup_{a∈D} ∫_D [ |ϕ′a(φ(z))| / (1 + |ϕa(φ(z))|²) ]^p |φ′(z)|² |φ′(z)|^{p−2} (1 − |z|²)^q g^s(z, a) dA(z)
      = sup_{a∈D} ∫_D [ |ϕ′a(w)| / (1 + |ϕa(w)|²) ]^p N^{p,q,s}_{φ,a}(w) dA(w)
      = sup_{a∈D} ∫_D [ (1 − |a|²)^p / |1 − āw|^{2p} ] [ 1 + |(a − w)/(1 − āw)|² ]^{−p} N^{p,q,s}_{φ,a}(w) dA(w).

Here

    N^{p,q,s}_{φ,a}(w) = ∑_{z ∈ φ^{−1}(w)} |φ′(z)|^{p−2} (1 − |z|²)^q g^s(z, a)

is the counting function (see [7, 15]). Thus (11) is equivalent to

    lim_{|a|→1} ∫_D [ (1 − |a|²)^p / |1 − āw|^{2p} ] [ 1 + |(a − w)/(1 − āw)|² ]^{−p} N^{p,q,s}_{φ,a}(w) dA(w) = 0.

Hence, for any ε > 0 there exists δ, 0 < δ < 1, such that for |λ| > δ and all a ∈ D,

    sup_{a∈D} ∫_{D(λ,1/2)} N^{p,q,s}_{φ,a}(w) dA(w) < ε (1 − |λ|)^p.    (12)
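For a concrete symbol, the counting function is easy to evaluate. The following sketch (a hypothetical illustration, not from the paper) computes N^{p,q,s}_{φ,a}(w) for φ(z) = z², whose preimages φ^{−1}(w) = {±√w} are explicit; the parameter values p, q, s and the points w, a are arbitrary choices:

```python
import cmath, math

def counting_function(w, a, p=2.0, q=0.0, s=1.5):
    """N^{p,q,s}_{phi,a}(w) for the illustrative symbol phi(z) = z^2:
    the sum over preimages z of |phi'(z)|^{p-2} (1 - |z|^2)^q g^s(z, a)."""
    def g(z, a):
        # Green's function g(z, a) = log 1/|phi_a(z)|
        return math.log(1 / abs((a - z) / (1 - a.conjugate() * z)))
    total = 0.0
    root = cmath.sqrt(w)
    for z in (root, -root):          # the two preimages of w under z -> z^2
        dphi = 2 * z                 # phi'(z) = 2z
        total += abs(dphi) ** (p - 2) * (1 - abs(z) ** 2) ** q * g(z, a) ** s
    return total

print(counting_function(0.25 + 0.1j, a=0.5 + 0j))   # a positive, finite value
```

For p = 2 the derivative factor |φ′(z)|^{p−2} is identically 1, so the sum reduces to the classical Nevanlinna-type counting of preimages weighted by g^s(z, a).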

Thus, by Lemma 2.1 in [16],

    sup_{a∈D} ∫_D (fn_k#(w) − f#(w))^p N^{p,q,s}_{φ,a}(w) dA(w)
      ≤ C sup_{a∈D} ∫_D ( ∫_D χ_{D(w,1/2)}(u) (fn_k#(u) − f#(u))^p dA(u) ) [ N^{p,q,s}_{φ,a}(w) / (1 − |w|)² ] dA(w).    (13)


Observe here that the characteristic function satisfies χ_{D(w,1/2)}(u) = χ_{D(u,1/2)}(w). Using (12) in (13), for sufficiently large k,

    sup_{a∈D} ∫_D (fn_k#(w) − f#(w))^p N^{p,q,s}_{φ,a}(w) dA(w)
      ≤ C sup_{a∈D} ∫_D [ (fn_k#(u) − f#(u))^p / (1 − |u|)² ] ∫_{D(u,1/2)} N^{p,q,s}_{φ,a}(w) dA(w) dA(u)
      = C sup_{a∈D} ( ∫_{|u|>r} + ∫_{|u|≤r} ) [ (fn_k#(u) − f#(u))^p / (1 − |u|)² ] ∫_{D(u,1/2)} N^{p,q,s}_{φ,a}(w) dA(w) dA(u)
      = C (I1(a) + I2(a)).

On the one hand, since fn_k, f ∈ D# ⊂ N and 2 ≤ p < ∞, we have

    I1(a) = sup_{a∈D} ∫_{|u|>r} [ (fn_k#(u) − f#(u))^p / (1 − |u|)² ] ∫_{D(u,1/2)} N^{p,q,s}_{φ,a}(w) dA(w) dA(u)
      ≤ ε sup_{a∈D} ∫_{|u|>r} (fn_k#(u) − f#(u))^p (1 − |u|)^{p−2} dA(u)
      ≤ ε ‖fn_k − f‖^{p−2}_N sup_{a∈D} ∫_{|u|>r} (fn_k#(u) − f#(u))² dA(u)
      ≤ ε ‖fn_k − f‖^{p−2}_N ‖fn_k − f‖²_{D#}
      ≤ ε ‖fn_k − f‖²_{D#}.

On the other hand,

    I2(a) = sup_{a∈D} ∫_{|u|≤r} [ (fn_k#(u) − f#(u))^p / (1 − |u|)² ] ∫_{D(u,1/2)} N^{p,q,s}_{φ,a}(w) dA(w) dA(u)
      ≤ C sup_{a∈D} ∫_D N^{p,q,s}_{φ,a}(w) dA(w) ∫_{|u|≤r} (fn_k#(u) − f#(u))^p dA(u) ≤ C ε.

Therefore, for sufficiently large k, the above discussion gives

    ‖Cφ fn_k − Cφ f‖^p_{F#(p,q,s)} = sup_{a∈D} ∫_D ((fn_k ∘ φ)#(z) − (f ∘ φ)#(z))^p (1 − |z|²)^q g^s(z, a) dA(z)
      = sup_{a∈D} ∫_D (fn_k#(w) − f#(w))^p N^{p,q,s}_{φ,a}(w) dA(w) < C ε.

It follows that Cφ is a compact operator. Therefore, the proof is completed.

References

[1] J. M. Anderson, J. Clunie and Ch. Pommerenke, On Bloch functions and normal functions, J. Reine Angew. Math., 270 (1974), 12-37.
[2] R. Aulaskari, H. Wulan and R. Zhao, Carleson measure and some classes of meromorphic functions, Proc. Amer. Math. Soc., 128(8) (2000), 2329-2335.
[3] R. Aulaskari, J. Xiao and R. Zhao, On subspaces and subsets of BMOA and UBC, Analysis, 15 (1995), 101-121.
[4] C. C. Cowen and B. D. MacCluer, Composition Operators on Spaces of Analytic Functions, CRC Press, 1995.
[5] A. El-Sayed Ahmed, Natural metrics and composition operators in generalized hyperbolic function spaces, Journal of Inequalities and Applications, 185 (2012), 1-12.
[6] A. El-Sayed Ahmed and H. Al-Amri, A class of weighted holomorphic Bergman spaces, J. Comput. Anal. Appl., 13(2) (2011), 321-334.
[7] A. El-Sayed Ahmed and M. A. Bakhit, Composition operators on some holomorphic Banach function spaces, Mathematica Scandinavica, 104(2) (2009), 275-295.
[8] A. El-Sayed Ahmed and M. A. Bakhit, Sequences of some meromorphic function spaces, Bull. Belg. Math. Soc. Simon Stevin, 16(3) (2009), 395-408.
[9] A. El-Sayed Ahmed and M. A. Bakhit, Composition operators in hyperbolic general Besov-type spaces, Cubo A Mathematical Journal, 15(3) (2013), 19-30.
[10] A. El-Sayed Ahmed and A. Kamal, Generalized composition operators on QK,ω(p, q) spaces, Mathematical Sciences, 6:14, doi:10.1186/2251-7456-6-14, (2012).
[11] M. Essén, A survey of Q-spaces and Q#-classes, Analysis and Applications - Proceedings of ISAAC 2001, Kluwer Academic Publishers, (2005), 73-87.
[12] M. Essén and H. Wulan, On analytic and meromorphic functions and spaces of QK-type, Illinois J. Math., 46 (2002), 1233-1258.
[13] L. Jiang and Y. He, Composition operators from Bα to F(p, q, s), Acta Math. Sci. Ser. B, 32 (2003), 252-260.
[14] P. Lappan and J. Xiao, Q#α-bounded composition maps on normal classes, Note di Matematica, 20(1) (2000/2001), 65-72.
[15] S. Li and H. Wulan, Composition operators on QK spaces, J. Math. Anal. Appl., 327 (2007), 948-958.
[16] D. H. Luecking, Forward and reverse Carleson inequalities for functions in Bergman spaces and their derivatives, Amer. J. Math., 107 (1985), 85-111.
[17] S. Makhmutov and M. Tjani, Composition operators on some Möbius invariant Banach spaces, Bull. Austral. Math. Soc., 62 (2000), 1-19.
[18] J. Rättyä, On some complex function spaces and classes, Ann. Acad. Sci. Fenn. Math. Diss., 124 (2001), 1-73.
[19] W. Rudin, Real and Complex Analysis, McGraw-Hill, New York, 1987.
[20] W. Smith and R. Zhao, Composition operators mapping into the Qp spaces, Analysis, 17 (1997), 239-263.
[21] H. Wulan, On some classes of meromorphic functions, Ann. Acad. Sci. Fenn. Math. Diss., 116 (1998), 1-57.
[22] R. Zhao, On a general family of function spaces, Ann. Acad. Sci. Fenn. Math. Diss., 105 (1996), 1-56.
[23] X. Zhu, Composition operators from Bloch type spaces to F(p, q, s) spaces, Filomat, 21(2) (2007), 11-20.

TABLE OF CONTENTS, JOURNAL OF APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 1-2, 2015

Some Results Concerning System of Modulates for L²(ℝ), A. Zothansanga and Suman Panwar, 13-18

Some Definition of a New Integral Transform Between Analogue and Discrete Systems, S.K.Q. Al-Omari, 19-30

Some Fixed Point Theorems for Iterated Contraction Maps, Santosh Kumar, 31-39

Nonlinear Regularized Nonconvex Random Variational Inequalities with Fuzzy Event in q-uniformly Smooth Banach Spaces, Salahuddin, 40-52

New White Noise Functional Solutions for Wick-type Stochastic Coupled KdV Equations Using F-expansion Method, Hossam A. Ghany and M. Zakarya, 53-69

A Note on n-Banach Lattices, Birsen Sağir and Nihan Güngör, 70-77

Differential Subordinations using Ruscheweyh Derivative and a Multiplier Transformation, Alb Lupaș Alina, 78-88

On a Certain Subclass of Analytic Functions Involving Generalized Sălăgean Operator and Ruscheweyh Derivative, Alina Alb Lupaș and Loriana Andrei, 89-94

On the (r,s)-Convexity and Some Hadamard-Type Inequalities, M. Emin Özdemir, Erhan Set and Ahmet Ocak Akdemir, 95-100

Compact Composition Operators on Weighted Hilbert Spaces, Waleed Al-Rawashdeh, 101-108

A New Double Cesàro Sequence Space Defined by Modulus Functions, Oğuz Oğur, 109-116

Right Fractional Monotone Approximation, George A. Anastassiou, 117-124

A Differential Sandwich-Type Result Using a Generalized Sălăgean Operator and Ruscheweyh Operator, Andrei Loriana, 125-132

Predictor Corrector Type Method for Solving Certain Non Convex Equilibrium Problems, Wasim Ul-Haq and Manzoor Hussain, 133-142

On Some Zweier I-Convergent Difference Sequence Spaces, Kuldip Raj and Suruchi Pandoh, 143-163

Composition Operators on Some General Families of Function Spaces, R. A. Rashwan, A. El-Sayed Ahmed and M. A. Bakhit, 164-175

Volume 10, Numbers 3-4 July-October 2015 ISSN:1559-1948 (PRINT), 1559-1956 (ONLINE) EUDOXUS PRESS,LLC

JOURNAL OF APPLIED FUNCTIONAL ANALYSIS

Construction of Spline Type Orthogonal Scaling

Functions and Wavelets

Tung Nguyen and Tian-Xiao He

Department of Mathematics

Illinois Wesleyan University

Bloomington, IL 61702-2900, USA

Abstract

In this paper, we present a method to construct orthogonal spline-type scaling functions by using B-spline functions. B-splines have many useful properties, such as compact support and refinability. However, except for the case of order one, B-splines of order greater than one are not orthogonal. To induce orthogonality while keeping the above properties of B-splines, we multiply the masks of the B-splines by a class of polynomial factors so that they become the masks of spline-type orthogonal, compactly supported, refinable scaling functions in L². In this paper we establish the existence of this class of polynomial factors and give their construction. Hence, the corresponding spline-type wavelets and the decomposition and reconstruction formulas for their Multiresolution Analysis (MRA) are obtained accordingly.

AMS Subject Classification: 42C40, 41A30, 39A70, 65T60

Key Words and Phrases: B-spline, wavelet, MRA.

1 Introduction

We start with the definitions of Multiresolution Analysis (MRA) and scaling functions.

Definition 1.1 A Multiresolution Analysis (MRA) generated by a function φ consists of a sequence of closed subspaces Vj, j ∈ Z, of L²(R) satisfying

(i) (nested) Vj ⊂ Vj+1 for all j ∈ Z;

(ii) (density) the union ∪_{j∈Z} Vj is dense in L²(R);

(iii) (separation) ∩_{j∈Z} Vj = {0};

(iv) (scaling) f(x) ∈ Vj if and only if f(2x) ∈ Vj+1 for all j ∈ Z;

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 3-4, 189-203, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC

(v) (Basis) There exists a function φ ∈ V0 such that {φ(x − k) : k ∈ Z} is an orthonormal basis or a Riesz basis for V0.

The function whose existence is asserted in (v) is called a scaling function of the MRA.
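As a concrete illustration of Definition 1.1 (our addition, not part of the paper), the classical example of a scaling function is the Haar scaling function φ = χ_[0,1), whose integer translates form an orthonormal basis of V0. The sketch below checks this orthonormality numerically with Riemann sums; the grid and tolerances are our own choices.

```python
import numpy as np

# Editor's illustration (not from the paper): the Haar scaling function
# phi = chi_[0,1) generates an MRA, and its integer translates
# {phi(x - k) : k in Z} are orthonormal in L2(R).

def phi(x):
    """Haar scaling function: the indicator of [0, 1)."""
    return np.where((x >= 0.0) & (x < 1.0), 1.0, 0.0)

x = np.linspace(-5.0, 5.0, 200_001)  # grid covering every support used below
dx = x[1] - x[0]

for j in range(-2, 3):
    for k in range(-2, 3):
        inner = np.sum(phi(x - j) * phi(x - k)) * dx  # Riemann sum for <.,.>
        assert abs(inner - (1.0 if j == k else 0.0)) < 1e-3
```

For j ≠ k the supports [j, j+1) and [k, k+1) are disjoint, so the inner product vanishes exactly; for j = k it is the measure of [j, j+1), namely 1.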

A scaling function φ must be a function in L2(R) with ∫φ ≠ 0. Also, since φ ∈ V_0 is also in V_1 and {φ_{1,k} := 2^{1/2} φ(2x − k) : k ∈ Z} is a Riesz basis of V_1, there exists a unique sequence {p_k}_{k=−∞}^{∞} ∈ l2(Z) that describes the two-scale relation of the scaling function

φ(x) = Σ_{k=−∞}^{∞} p_k φ(2x − k),    (1.1)

i.e., φ has the two-scale refinable property. By taking the Fourier transform of both sides of (1.1) and denoting the Fourier transform of φ by φ̂(ξ) := ∫_{−∞}^{∞} φ(x) e^{−iξx} dx, we have

φ̂(ξ) = P(z) φ̂(ξ/2),    (1.2)

where

P(z) = (1/2) Σ_{k=−∞}^{∞} p_k z^k  and  z = e^{−iξ/2}.    (1.3)

Here, P(z) is called the mask of the scaling function. Now, regarding the property that {φ(x − k)} must be either an orthonormal basis or a Riesz basis, we have the following characterization theorem (see, for example, Chs. 5 and 7 of [4]).

Theorem 1.2 Suppose the function φ satisfies the refinement relation φ(x) = Σ_{k=−∞}^{∞} p_k φ(2x − k). Then we have the following statements:

(i) {φ(x − k)} forms an orthonormal basis if and only if |P(z)|² + |P(−z)|² = 1 for z ∈ C with |z| = 1.
(ii) {φ(x − k)} forms a Riesz basis if 0 < |P(z)|² + |P(−z)|² ≤ 1 for z ∈ C with |z| = 1.

Finally, from the scaling function φ, we can construct a corresponding wavelet function ψ by the following theorem (see, for example, [1, 7, 9]).

Theorem 1.3 Let {V_j}_{j∈Z} be an MRA with scaling function φ, and let φ satisfy the refinement relation (1.1). Then we can construct a corresponding wavelet function as


T. Nguyen and T.-X. He

ψ(x) = Σ_{k∈Z} (−1)^k p_{1−k} φ(2x − k)    (1.4)

and denote W_j = span{ψ(2^j x − k) : k ∈ Z}. Furthermore, W_j ⊂ V_{j+1} is the orthogonal complement of V_j in V_{j+1}, and {ψ_{jk}(x) = 2^{j/2} ψ(2^j x − k) : k ∈ Z} is an orthonormal basis of W_j.

In the next sections, we examine the B-spline functions as scaling functions and construct orthogonal spline-type scaling functions from them. The proof of the main result shown in Section 2 and some properties of the constructed spline-type scaling functions are presented in Section 3. The corresponding spline-type wavelets and their regularities, as well as the decomposition and reconstruction formulas, are also given. Finally, we illustrate our construction with some examples in Section 4.

2 Construction of orthogonal scaling functions from B-splines

In this paper we are interested in a family of B-spline functions, B_n(x), the uniform B-splines with integer knots 0, 1, ..., n+1, defined as follows (see [2, 3]).

Definition 2.1 The cardinal B-spline with integer knots in N_0, denoted by B_n(x), is defined inductively by

B_1(x) := 1 if x ∈ [0, 1], and 0 otherwise,

and

B_n(x) := (B_{n−1} * B_1)(x) = ∫_{−∞}^{∞} B_{n−1}(x − t) B_1(t) dt.    (2.1)

From the definition, it is easy to verify that B_n(x) is compactly supported and in L2(R), and satisfies ∫B_n ≠ 0. By using the Fourier transform, we also have the refinement relation of B_n(x):

B_n(x) = Σ_{j=0}^{n} (1/2^{n−1}) C(n, j) B_n(2x − j).    (2.2)

Next, we would like to see whether B_n(x) forms an orthonormal basis or not. We examine the mask P_n(z) of B_n(x). We have

P_n(z) = (1/2) Σ_{j=0}^{n} (1/2^{n−1}) C(n, j) z^j = (1 + z)^n / 2^n = ((1 + z)/2)^n.    (2.3)


Thus, considering Theorem 1.2, we have

|P_n(z)|² + |P_n(−z)|² = |(1 + z)/2|^{2n} + |(1 − z)/2|^{2n}
= |(1 + cos(ξ/2) − i sin(ξ/2))/2|^{2n} + |(1 − cos(ξ/2) + i sin(ξ/2))/2|^{2n}
= cos^{2n}(ξ/4) + sin^{2n}(ξ/4) ≤ cos²(ξ/4) + sin²(ξ/4) = 1.

Equality holds identically only when n = 1. Therefore, except for the case of order one (i.e., n = 1), the B_n(x) are not orthogonal (indeed, their shifts form a Riesz basis). To induce orthogonality, we introduce a class of polynomial factors S_n(z). Hence, instead of B_n(x), we consider a scaling function φ_n(x) with the mask P_n(z)S_n(z), i.e.,

φ̂_n(ξ) = P_n(z) S_n(z) φ̂_n(ξ/2),    (2.4)

where P_n(z) is defined as in (2.3). We want to construct S_n(z) such that the shifts of the new scaling function form an orthogonal basis. In other words, we need S_n(z) to satisfy the following condition:

|P_n(z)S_n(z)|² + |P_n(−z)S_n(−z)|² = 1.    (2.5)

Now we consider S_n(z) of the following type: S_n(z) = a_1 z + a_2 z² + ... + a_n z^n, n ∈ N, and find the following expression.

Lemma 2.2 Let S_n(z) be defined as above. Then there holds

|S_n(z)|² = Σ_{i=1}^{n} a_i² + 2 Σ_{i=1}^{n−1} a_i a_{i+1} cos(ξ/2) + 2 Σ_{i=1}^{n−2} a_i a_{i+2} cos(2ξ/2) + ... + 2 a_1 a_n cos((n − 1)ξ/2).


Proof. We have

|S_n(z)|² = |a_1(cos(ξ/2) − i sin(ξ/2)) + ... + a_n(cos(nξ/2) − i sin(nξ/2))|²
= |(a_1 cos(ξ/2) + ... + a_n cos(nξ/2)) − i(a_1 sin(ξ/2) + ... + a_n sin(nξ/2))|²
= (a_1 cos(ξ/2) + ... + a_n cos(nξ/2))² + (a_1 sin(ξ/2) + ... + a_n sin(nξ/2))²
= a_1²(cos²(ξ/2) + sin²(ξ/2)) + ... + a_n²(cos²(nξ/2) + sin²(nξ/2))
  + Σ_{i<j} 2 a_i a_j (cos(iξ/2) cos(jξ/2) + sin(iξ/2) sin(jξ/2))
= Σ_{i=1}^{n} a_i² + Σ_{i<j} 2 a_i a_j cos((j − i)ξ/2)
= Σ_{i=1}^{n} a_i² + 2 Σ_{i=1}^{n−1} a_i a_{i+1} cos(ξ/2) + 2 Σ_{i=1}^{n−2} a_i a_{i+2} cos(2ξ/2) + ... + 2 a_1 a_n cos((n − 1)ξ/2).

A similar procedure can be applied to find |S_n(−z)|²:

|S_n(−z)|² = Σ_{i=1}^{n} a_i² − 2 Σ_{i=1}^{n−1} a_i a_{i+1} cos(ξ/2) + 2 Σ_{i=1}^{n−2} a_i a_{i+2} cos(2ξ/2) − ... + (−1)^{n−1} 2 a_1 a_n cos((n − 1)ξ/2).

From Lemma 2.2, if we write each cos(kξ/2) as a polynomial in cos(ξ/2), then |S_n(z)|² = Q_n(x), where x = cos(ξ/2). Obviously, Q_n(x) has degree n − 1. It is also easy to observe that |S_n(−z)|² = Q_n(−x). Now equation (2.5) becomes

1 = |P_n(z)S_n(z)|² + |P_n(−z)S_n(−z)|²
= cos^{2n}(ξ/4) Q_n(x) + sin^{2n}(ξ/4) Q_n(−x)
= ((1 + cos(ξ/2))/2)^n Q_n(x) + ((1 − cos(ξ/2))/2)^n Q_n(−x)
= ((1 + x)/2)^n Q_n(x) + ((1 − x)/2)^n Q_n(−x).

So finally we get

((1 + x)/2)^n Q_n(x) + ((1 − x)/2)^n Q_n(−x) = 1.    (2.6)


Next, to show the existence of Q(x) in the above equation, we make use of the polynomial extended Euclidean algorithm (see [6, 16]).

Lemma 2.3 (Polynomial extended Euclidean algorithm) If a and b are two nonzero polynomials, then the extended Euclidean algorithm produces the unique pair of polynomials (s, t) such that as + bt = gcd(a, b), where deg(s) < deg(b) − deg(gcd(a, b)) and deg(t) < deg(a) − deg(gcd(a, b)).

We notice that gcd(((1 + x)/2)^n, ((1 − x)/2)^n) = 1, so by Lemma 2.3 there exist unique Q(x) and R(x) with degrees less than n such that ((1 + x)/2)^n Q(x) + ((1 − x)/2)^n R(x) = 1. If we replace x by −x in the previous equation, we have ((1 − x)/2)^n Q(−x) + ((1 + x)/2)^n R(−x) = 1. Due to the uniqueness in the algorithm, we conclude that R(x) = Q(−x). So we have shown the existence of a unique Q(x) = Q_n(x) satisfying equation (2.6).

To construct Q_n(x) explicitly, we use the Lorentz polynomials shown in [8, 15] and the following technique:

1 = ((1 + x)/2 + (1 − x)/2)^{2n−1}
= Σ_{i=0}^{2n−1} C(2n − 1, i) ((1 + x)/2)^{2n−1−i} ((1 − x)/2)^i
= ((1 + x)/2)^n [Σ_{i=0}^{n−1} C(2n − 1, i) ((1 + x)/2)^{n−1−i} ((1 − x)/2)^i]
+ ((1 − x)/2)^n [Σ_{i=0}^{n−1} C(2n − 1, i) ((1 − x)/2)^{n−1−i} ((1 + x)/2)^i],

where the polynomials appearing in the brackets are the Lorentz polynomials. We notice that the degrees of the two polynomials in the brackets are n − 1, and because Q_n(x) in equation (2.6) is unique, we can conclude that

Q_n(x) = Σ_{i=0}^{n−1} C(2n − 1, i) ((1 + x)/2)^{n−1−i} ((1 − x)/2)^i.    (2.7)
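As a quick sanity check (our own illustrative sketch, not part of the paper), the explicit Q_n of (2.7) can be verified numerically to satisfy the Bezout-type identity (2.6):

```python
from math import comb

def Q(n, x):
    """Q_n(x) from (2.7): sum over i of C(2n-1, i) ((1+x)/2)^(n-1-i) ((1-x)/2)^i."""
    return sum(comb(2 * n - 1, i) * ((1 + x) / 2) ** (n - 1 - i) * ((1 - x) / 2) ** i
               for i in range(n))

def identity_lhs(n, x):
    """Left-hand side of (2.6); it should equal 1 for every x."""
    return ((1 + x) / 2) ** n * Q(n, x) + ((1 - x) / 2) ** n * Q(n, -x)

for n in range(1, 9):
    for x in (-1.0, -0.6, -0.1, 0.0, 0.3, 0.8, 1.0):
        assert abs(identity_lhs(n, x) - 1.0) < 1e-9
```

Since both sides of (2.6) are polynomials of degree at most 2n − 1, agreement at enough sample points already implies the identity.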

With the construction of Q_n(x), we take a step further by showing the existence of Σ a_i², Σ a_i a_{i+1}, ... in Lemma 2.2.

It is well known that the set {1, cos(t), cos(2t), ..., cos((n − 1)t)} is linearly independent. As a result, {1, cos(ξ/2), cos(2ξ/2), ..., cos((n − 1)ξ/2)} forms a basis of the space P_{n−1} = {P(x) : x = cos(ξ/2) and P is a polynomial of degree less than n}. Based on this fact and the existence of Q_n(x) in equation (2.6), the coefficients Σ a_i², Σ a_i a_{i+1}, ... in Lemma 2.2 must exist uniquely.

We now establish the main result of this paper.


Theorem 2.1 Let P_n(z) and S_n(z) be defined as above. Then for n = 1, 2, ..., the spline-type function φ_n(x) with the mask P_n(z)S_n(z) is a scaling function that generates an orthogonal basis of V_0 in its MRA.

From the construction of φ_n(x), we immediately know that these functions are compactly supported and refinable. We will prove that they are in L2(R) in the next section.

3 Properties of the constructed scaling function

3.1 The scaling function φ_n(x) is in L2(R)

To show that the newly constructed scaling function φ_n(x) with mask P_n(z)S_n(z) is in L2(R), we make use of the following theorem.

Theorem 3.1 [11, 12] Let φ be a scaling function with mask P(z)S(z), where P(z) = ((1 + z)/2)^n and S(z) = z^i Σ_{j=0}^{k} a_j z^j. Then φ ∈ L2(R) if

(k + 1) Σ_{j=0}^{k} a_j² < 2^{2n−1}.    (3.1)

We also need the following lemma for the proof of Theorem 2.1.

Lemma 3.2 [14] We have the following formula for binomial coefficients:

C(n, k) = (n^k / k!) e^{−k²/(2n) − k³/(6n²)} (1 + o(1)).    (3.2)

Proof (of Theorem 2.1). In this case, with the S_n(z) we use to construct φ_n(x), condition (3.1) becomes

n Σ_{j=1}^{n} a_j² < 2^{2n−1}.    (3.3)

Recall from Lemma 2.2 that

Q_n(x) = Q_n(cos(ξ/2)) = Σ_{i=1}^{n} a_i² + 2 Σ_{i=1}^{n−1} a_i a_{i+1} cos(ξ/2) + ... + 2 a_1 a_n cos((n − 1)ξ/2).


Integrating both sides from 0 to 2π, we have

∫_0^{2π} Q_n(x) dξ = ∫_0^{2π} (Σ_{i=1}^{n} a_i² + 2 Σ_{i=1}^{n−1} a_i a_{i+1} cos(ξ/2) + ... + 2 a_1 a_n cos((n − 1)ξ/2)) dξ = 2π Σ_{i=1}^{n} a_i².

On the other hand, from the expression of Q_n(x) in (2.7) we have

∫_0^{2π} Q_n(x) dξ = ∫_0^{2π} Σ_{i=0}^{n−1} C(2n − 1, i) ((1 + x)/2)^{n−1−i} ((1 − x)/2)^i dξ
= ∫_0^{2π} Σ_{i=0}^{n−1} C(2n − 1, i) ((1 + cos(ξ/2))/2)^{n−1−i} ((1 − cos(ξ/2))/2)^i dξ.

Combining the two equations above, we have

Σ_{i=1}^{n} a_i² = (1/(2π)) ∫_0^{2π} Σ_{i=0}^{n−1} C(2n − 1, i) ((1 + cos(ξ/2))/2)^{n−1−i} ((1 − cos(ξ/2))/2)^i dξ.    (3.4)
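Formula (3.4) is easy to check numerically. The sketch below (our own illustration; the quadrature and step count are arbitrary choices) compares the integral with the known sums of squares Σ a_i² = 2 for n = 2 and 19/4 for n = 3, computed later in the paper.

```python
from math import comb, cos, pi

def Q(n, x):
    # Q_n(x) from (2.7)
    return sum(comb(2 * n - 1, i) * ((1 + x) / 2) ** (n - 1 - i) * ((1 - x) / 2) ** i
               for i in range(n))

def rhs_of_34(n, steps=100000):
    """(1/2π) ∫_0^{2π} Q_n(cos(ξ/2)) dξ via a midpoint Riemann sum."""
    h = 2 * pi / steps
    return sum(Q(n, cos((j + 0.5) * h / 2)) for j in range(steps)) * h / (2 * pi)

assert abs(rhs_of_34(2) - 2) < 1e-6        # n = 2: sum of a_i^2 equals 2
assert abs(rhs_of_34(3) - 19 / 4) < 1e-6   # n = 3: sum of a_i^2 equals 19/4
```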

Now we need to show that this quantity is smaller than 2^{2n−1}/n. As it is hard to obtain a simplification of the right-hand side of (3.4), we will use the estimate shown in Lemma 3.2 to get an upper bound for it. From Lemma 3.2, we get

C(2n − 1, i) / C(n − 1, i) ≈ [(2n − 1)^i e^{−i²/(2(2n−1)) − i³/(6(2n−1)²)} / i!] · [i! / ((n − 1)^i e^{−i²/(2(n−1)) − i³/(6(n−1)²)})]
= ((2n − 1)/(n − 1))^i e^{(i²/2)(1/(n−1) − 1/(2n−1)) + (i³/6)(1/(n−1)² − 1/(2n−1)²)}.

Using the fact that i ≤ n, we can estimate the right-hand side to be approximately

((2n − 1)/(n − 1))^i e^{(i²/2)(1/(2i)) + (i³/6)(1/(2i²))} = ((2n − 1)/(n − 1))^i e^{i/3} = ((2n − 1) e^{1/3}/(n − 1))^i.

Thus, from (3.4) we obtain


Σ_{i=1}^{n} a_i² = (1/(2π)) ∫_0^{2π} Σ_{i=0}^{n−1} C(2n − 1, i) ((1 + cos(ξ/2))/2)^{n−1−i} ((1 − cos(ξ/2))/2)^i dξ
≈ (1/(2π)) ∫_0^{2π} Σ_{i=0}^{n−1} C(n − 1, i) ((2n − 1) e^{1/3}/(n − 1))^i ((1 + cos(ξ/2))/2)^{n−1−i} ((1 − cos(ξ/2))/2)^i dξ
= (1/(2π)) ∫_0^{2π} ((1 + cos(ξ/2))/2 + ((2n − 1) e^{1/3}/(n − 1)) · (1 − cos(ξ/2))/2)^{n−1} dξ
≤ (1/(2π)) ∫_0^{2π} ((2n − 1) e^{1/3}/(n − 1))^n dξ = ((2n − 1) e^{1/3}/(n − 1))^n.

Notice that the inequality above can be easily verified by checking that the maximum of the expression inside the integral is attained at cos(ξ/2) = −1, where the base equals (2n − 1)e^{1/3}/(n − 1) > 1, so the (n − 1)st power is in turn bounded by the nth power of this value.

For n ≥ 10, we have

((2n − 1) e^{1/3}/(n − 1))^n = ((2 + 1/(n − 1)) e^{1/3})^n ≤ ((2 + 1/9) e^{1/3})^n ≤ 2.95^n.

We now prove that 2.95^n < 2^{2n−1}/n, or equivalently 2n < (4/2.95)^n. Consider the function h(n) = (4/2.95)^n − 2n for n ≥ 10. We have

h′(n) = (4/2.95)^n ln(4/2.95) − 2 ≥ (4/2.95)^{10} ln(4/2.95) − 2 > 0.

Thus, h(n) ≥ h(10) > 0. We have proven Σ_{i=1}^{n} a_i² < 2^{2n−1}/n for all n ≥ 10. Finally, we can check that the inequality is also true for all 1 ≤ n ≤ 9. The case n = 1 is trivial since S_1(z) = z. For the cases 2 ≤ n ≤ 9, denote the difference Σ_{i=1}^{n} a_i² − 2^{2n−1}/n by α_n. Thus,

α_n = Σ_{i=1}^{n} a_i² − 2^{2n−1}/n = (1/(2π)) ∫_0^{2π} Σ_{i=0}^{n−1} C(2n − 1, i) ((1 + cos(ξ/2))/2)^{n−1−i} ((1 − cos(ξ/2))/2)^i dξ − 2^{2n−1}/n.

Using Mathematica, we have the following table:

n:    2    3       4    5           6          7              8          9
α_n:  −2   −71/12  −19  −20223/320  −20687/96  −1343063/1792  −84763/32  −1398580967/147456

As the α_n are negative for n from 2 to 9, inequality (3.3) is completely proven, which indicates that the scaling function φ_n(x) is in L2(R) and also completes the proof of Theorem 2.1.
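The α_n values can be reproduced in exact rational arithmetic without Mathematica. The sketch below (our own check, not part of the paper) averages Q_n(cos(ξ/2)) term by term, using the fact that the mean of cos^k(ξ/2) over [0, 2π] is C(k, k/2)/2^k for even k and 0 for odd k.

```python
from fractions import Fraction
from math import comb

def mean_cos_power(k):
    """(1/2π) ∫_0^{2π} cos^k(ξ/2) dξ, exactly."""
    return Fraction(comb(k, k // 2), 2 ** k) if k % 2 == 0 else Fraction(0)

def alpha(n):
    """α_n = (1/2π) ∫_0^{2π} Q_n(cos(ξ/2)) dξ − 2^(2n−1)/n, with Q_n from (2.7)."""
    total = Fraction(0)
    for i in range(n):
        c = comb(2 * n - 1, i)
        # expand ((1+x)/2)^(n-1-i) ((1-x)/2)^i in powers of x = cos(ξ/2)
        for p in range(n - i):
            for q in range(i + 1):
                coef = c * comb(n - 1 - i, p) * comb(i, q) * (-1) ** q
                total += Fraction(coef, 2 ** (n - 1)) * mean_cos_power(p + q)
    return total - Fraction(2 ** (2 * n - 1), n)

assert alpha(2) == -2 and alpha(3) == Fraction(-71, 12) and alpha(4) == -19
assert all(alpha(n) < 0 for n in range(2, 10))
```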


As examples, we consider the cases n = 1 and 2. It is easy to find that S_1(z) = z and φ_1(x) is the Haar function. For n = 2,

S_2(z) = ((1 + √3)/2) z + ((1 − √3)/2) z²

and the corresponding φ_2(x) is the Daubechies scaling function.
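With these coefficients, the orthonormality condition (2.5) can be checked on the unit circle. The following sketch (our own illustration; the number of sample points is an arbitrary choice) does so for n = 2.

```python
from cmath import exp
from math import pi, sqrt

a1, a2 = (1 + sqrt(3)) / 2, (1 - sqrt(3)) / 2       # the S_2 coefficients above

def P2(z):
    return ((1 + z) / 2) ** 2                       # B-spline mask, eq. (2.3)

def S2(z):
    return a1 * z + a2 * z ** 2

for j in range(128):
    z = exp(-1j * pi * j / 64)                      # z = e^{-iξ/2} on |z| = 1
    total = abs(P2(z) * S2(z)) ** 2 + abs(P2(-z) * S2(-z)) ** 2
    assert abs(total - 1.0) < 1e-12                 # condition (2.5)
```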

3.2 Refinement relation and corresponding wavelets

Let M_n(z) := P_n(z)S_n(z) = (1/2) Σ_{k=0}^{2n} c_k z^k. Then M_n(z) is the mask of φ_n(x), and thus we have the refinement equation

φ_n(x) = Σ_k c_k φ_n(2x − k).    (3.5)

We also have the corresponding wavelets

ψ_n(x) = Σ_k (−1)^k c_{1−k} φ_n(2x − k).    (3.6)

One natural question at this point is how to determine explicitly the scaling function φ_n(x), and thus the wavelet functions ψ_n(x), from the refinement equation. We propose a method using the iterative procedure described in Theorem 5.23 of [1].

We start with φ_n^{(0)}(x) = B_1(x). From φ_n^{(0)}(x) we can construct φ_n^{(1)}(x) by using the refinement equation (3.5):

φ_n^{(1)}(x) = Σ_k c_k φ_n^{(0)}(2x − k).

Now from φ_n^{(1)}(x) we can continue to construct φ_n^{(2)}(x), again by the refinement equation. Continuing the process a certain number of times, we can obtain an approximation of φ_n(x).

Simple as it seems, this iterative method is not a very good way to construct φ, as we do not know how many times we should repeat the process to find a good enough approximation of φ. For this reason, a more direct method has been developed and will be addressed in a future paper.
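The iteration just described is the classical cascade algorithm. A minimal sketch (our own implementation for illustration; the grid layout and iteration count are choices, not prescribed by the paper) is:

```python
def cascade(c, iters=6):
    """Cascade iteration phi <- sum_k c_k phi(2x - k), starting from
    phi^(0) = B_1 (the indicator of [0, 1)).  Returns samples of the
    iterate at spacing 2**-iters on [0, N], where N = len(c) - 1."""
    N = len(c) - 1
    v = [1.0] + [0.0] * N                  # phi^(0) at the integers 0..N
    for m in range(iters):
        scale = 2 ** m                     # current grid: x = i / 2^m
        u = [0.0] * (N * 2 * scale + 1)
        for j in range(len(u)):            # new grid: x = j / 2^(m+1)
            s = 0.0
            for k, ck in enumerate(c):
                i = j - k * scale          # sample index of phi_m(2x - k)
                if 0 <= i < len(v):
                    s += ck * v[i]
            u[j] = s
        v = u
    return v

# sanity check with the Haar case: c = [1, 1] reproduces the indicator of [0, 1)
haar = cascade([1.0, 1.0], iters=3)
assert haar[:8] == [1.0] * 8 and haar[8] == 0.0
```

The refinement coefficients c_k of φ_3 or φ_4 from Section 4 can be passed in the same way to visualize those scaling functions.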

From Theorem 2 of [11], we have the following result on the regularities of the spline-type wavelets ψ_n(x).

Theorem 3.3 Let φ_n(x) be constructed as in the previous section with the mask shown in Theorem 2.1. Then the corresponding wavelets ψ_n(x) established in (3.6) are in C^{β_n}, where β_n is greater than

n − (1/2) log_2(n Σ_{j=1}^{n} a_j²).


3.3 Decomposition and reconstruction formula

Finally, upon obtaining the new scaling function and the corresponding wavelet function, we may establish the corresponding decomposition and reconstruction formulas by a routine argument. Roughly speaking, consider a function f ∈ L2(R) with f_j being an approximation of f in V_j, the space generated by the jth-order scaling function. Then

f_j = Σ_{k∈Z} ⟨f, φ_{jk}⟩ φ_{jk}.    (3.7)

Because we have the orthogonal direct sum decomposition V_j = V_{j−1} ⊕ W_{j−1}, we can express f_j using the bases of V_{j−1} and of W_{j−1} as follows:

f_j = f_{j−1} + g_{j−1} = Σ_{k∈Z} ⟨f, φ_{j−1,k}⟩ φ_{j−1,k} + Σ_{k∈Z} ⟨f, ψ_{j−1,k}⟩ ψ_{j−1,k}.    (3.8)

We now have the following theorem regarding the relationship between these coefficients.

Let {V_j : j ∈ Z} be an MRA with scaling function φ satisfying the scaling relation φ(x) = Σ_{k∈Z} p_k φ(2x − k), and let W_j be the orthogonal complement of V_j in V_{j+1} with wavelet function ψ. Then the coefficients relative to the different bases in (3.7) and (3.8) satisfy the following decomposition formulas

⟨f, φ_{j−1,l}⟩ = 2^{−1/2} Σ_{k∈Z} p_{k−2l} ⟨f, φ_{jk}⟩,
⟨f, ψ_{j−1,l}⟩ = 2^{−1/2} Σ_{k∈Z} (−1)^k p_{1−k+2l} ⟨f, φ_{jk}⟩,

and the reconstruction formula

⟨f, φ_{jl}⟩ = 2^{−1/2} Σ_{k∈Z} p_{l−2k} ⟨f, φ_{j−1,k}⟩ + 2^{−1/2} Σ_{k∈Z} (−1)^l p_{1−l+2k} ⟨f, ψ_{j−1,k}⟩.

4 Examples for n = 3 and n = 4

Besides the examples of the cases n = 1 and 2 shown at the end of Section 3.1, in this section we give examples for the cases n = 3 and n = 4. According to the previous sections, we first construct an orthogonal scaling function φ_3(x) from the third-order B-spline function B_3(x).

In order to construct the function φ_3(x), we start with its mask P_3(z)S_3(z), where P_3(z) = ((1 + z)/2)³ is the mask of the third-order B-spline. Let S_3(z) = a_1 z + a_2 z² + a_3 z³;


then by Lemma 2.2, we have

Q_3(x) = |S_3(z)|² = (a_1² + a_2² + a_3²) + 2(a_1 a_2 + a_2 a_3) cos(ξ/2) + 2 a_1 a_3 cos(ξ)
= (a_1² + a_2² + a_3²) + 2(a_1 a_2 + a_2 a_3) cos(ξ/2) + 2 a_1 a_3 (2 cos²(ξ/2) − 1)
= (a_1² + a_2² + a_3² − 2 a_1 a_3) + 2(a_1 a_2 + a_2 a_3) x + 4 a_1 a_3 x²,

where x = cos(ξ/2) and z = e^{−iξ/2}.

On the other hand, by equation (2.7) we have

Q_3(x) = Σ_{i=0}^{2} C(5, i) ((1 + x)/2)^{2−i} ((1 − x)/2)^i
= ((1 + x)/2)² + 5((1 + x)/2)((1 − x)/2) + 10((1 − x)/2)²
= (3/2) x² − (9/2) x + 4.

Thus, from the above equations, we have the following system of equations:

a_1² + a_2² + a_3² − 2 a_1 a_3 = 4,
2(a_1 a_2 + a_2 a_3) = −9/2,
4 a_1 a_3 = 3/2.    (4.1)

Simplifying (4.1), we get

a_1² + a_2² + a_3² = 19/4,
a_1 a_2 + a_2 a_3 = −9/4,
a_1 a_3 = 3/8.    (4.2)

From this system, we have (a_1 + a_2 + a_3)² = a_1² + a_2² + a_3² + 2(a_1 a_2 + a_2 a_3 + a_1 a_3) = 1. Without loss of generality, consider the case a_1 + a_2 + a_3 = 1. Combining this with a_1 a_2 + a_2 a_3 = −9/4 and a_1 a_3 = 3/8, we have the following solution:

a_1 = (1/4)(1 + √10 − √(5 + 2√10)),
a_2 = (1/2)(1 − √10),
a_3 = (1/4)(1 + √10 + √(5 + 2√10)).    (4.3)
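One can verify directly (an illustrative check of ours) that (4.3) satisfies the system (4.2):

```python
from math import isclose, sqrt

r10 = sqrt(10.0)
t = sqrt(5 + 2 * r10)
a1 = (1 + r10 - t) / 4        # solution (4.3)
a2 = (1 - r10) / 2
a3 = (1 + r10 + t) / 4

assert isclose(a1 ** 2 + a2 ** 2 + a3 ** 2, 19 / 4)   # first equation of (4.2)
assert isclose(a1 * a2 + a2 * a3, -9 / 4)             # second equation
assert isclose(a1 * a3, 3 / 8)                        # third equation
assert isclose(a1 + a2 + a3, 1.0)                     # the normalization chosen
```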

We verify the condition for φ_3 to be in L2(R):

a_1² + a_2² + a_3² = 19/4 < 32/3 = 2^{2·3−1}/3.

Thus φ_3 is indeed in L2(R). Now we will attempt to construct φ_3 explicitly. It has the mask

P_3(z)S_3(z) = ((1 + z)/2)³ (a_1 z + a_2 z² + a_3 z³)
= 0.0249z − 0.0604z² − 0.095z³ + 0.325z⁴ + 0.571z⁵ + 0.2352z⁶.


By (1.1), (1.2) and (1.3), we have the refinement equation

φ_3(x) = 0.0498 φ_3(2x − 1) − 0.121 φ_3(2x − 2) − 0.191 φ_3(2x − 3) + 0.650 φ_3(2x − 4) + 1.141 φ_3(2x − 5) + 0.4705 φ_3(2x − 6).    (4.4)

The corresponding spline-type wavelet ψ_3(x) can be expressed as

ψ_3(x) = 0.0498 φ_3(2x) + 0.121 φ_3(2x + 1) − 0.191 φ_3(2x + 2) − 0.650 φ_3(2x + 3) + 1.141 φ_3(2x + 4) − 0.4705 φ_3(2x + 5),

which is in C^{β_3} with

β_3 > 4 − (1/2) log_2(57).

We now consider the case n = 4. Again, we construct an orthogonal scaling function φ_4(x) from the fourth-order B-spline B_4(x) with the mask P_4(z) = ((1 + z)/2)⁴. We examine the mask P_4(z)S_4(z) of φ_4(x), where S_4(z) = a_1 z + a_2 z² + a_3 z³ + a_4 z⁴.

By Lemma 2.2, we have

Q_4(x) = |S_4(z)|² = (a_1² + a_2² + a_3² + a_4²) + 2(a_1 a_2 + a_2 a_3 + a_3 a_4) cos(ξ/2) + 2(a_1 a_3 + a_2 a_4) cos(ξ) + 2 a_1 a_4 cos(3ξ/2)
= (a_1² + a_2² + a_3² + a_4²) + 2(a_1 a_2 + a_2 a_3 + a_3 a_4) cos(ξ/2) + 2(a_1 a_3 + a_2 a_4)(2 cos²(ξ/2) − 1) + 2 a_1 a_4 (4 cos³(ξ/2) − 3 cos(ξ/2))
= (a_1² + a_2² + a_3² + a_4² − 2 a_1 a_3 − 2 a_2 a_4) + (2 a_1 a_2 + 2 a_2 a_3 + 2 a_3 a_4 − 6 a_1 a_4) x + (4 a_1 a_3 + 4 a_2 a_4) x² + 8 a_1 a_4 x³,

where x = cos(ξ/2) and z = e^{−iξ/2}.

Using equation (2.7), we get another expression for Q_4(x):

Q_4(x) = Σ_{i=0}^{3} C(7, i) ((1 + x)/2)^{3−i} ((1 − x)/2)^i
= ((1 + x)/2)³ + 7((1 + x)/2)²((1 − x)/2) + 21((1 + x)/2)((1 − x)/2)² + 35((1 − x)/2)³
= 8 − (29/2) x + 10 x² − (5/2) x³.


Thus, from the above equations, we have the following system of equations:

a_1² + a_2² + a_3² + a_4² − 2 a_1 a_3 − 2 a_2 a_4 = 8,
2 a_1 a_2 + 2 a_2 a_3 + 2 a_3 a_4 − 6 a_1 a_4 = −29/2,
4 a_1 a_3 + 4 a_2 a_4 = 10,
8 a_1 a_4 = −5/2.    (4.5)

Simplifying (4.5), we have the following system:

a_1² + a_2² + a_3² + a_4² = 13,
a_1 a_2 + a_2 a_3 + a_3 a_4 = −131/16,
a_1 a_3 + a_2 a_4 = 5/2,
a_1 a_4 = −5/16.    (4.6)

Solving this system of equations yields 8 solutions. One of the numerical solutions is

a_1 = 2.6064, a_2 = −2.3381, a_3 = 0.8516, a_4 = −0.1199.    (4.7)
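A quick check of ours confirms that the rounded values in (4.7) satisfy (4.6) to the displayed precision:

```python
a1, a2, a3, a4 = 2.6064, -2.3381, 0.8516, -0.1199    # numerical solution (4.7)

targets = [
    (a1 ** 2 + a2 ** 2 + a3 ** 2 + a4 ** 2, 13),
    (a1 * a2 + a2 * a3 + a3 * a4, -131 / 16),
    (a1 * a3 + a2 * a4, 5 / 2),
    (a1 * a4, -5 / 16),
]
for value, target in targets:
    assert abs(value - target) < 1e-2    # agreement up to 4-digit rounding
```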

We verify the condition for φ_4 to be in L2(R):

a_1² + a_2² + a_3² + a_4² = 13 < 32 = 2^{2·4−1}/4.

Thus φ_4 is indeed in L2(R). Now we will attempt to construct φ_4 explicitly. It has the mask

P_4(z)S_4(z) = ((1 + z)/2)⁴ (a_1 z + a_2 z² + a_3 z³ + a_4 z⁴)
= 0.1629z + 0.5055z² + 0.4461z³ − 0.0198z⁴ − 0.1323z⁵ + 0.0218z⁶ + 0.0233z⁷ − 0.0075z⁸.

By (1.1), (1.2) and (1.3), we have the refinement equation

φ_4(x) = 0.3258 φ_4(2x − 1) + 1.011 φ_4(2x − 2) + 0.8922 φ_4(2x − 3) − 0.0396 φ_4(2x − 4) − 0.2646 φ_4(2x − 5) + 0.0436 φ_4(2x − 6) + 0.0466 φ_4(2x − 7) − 0.015 φ_4(2x − 8).    (4.8)

The corresponding spline-type wavelet ψ_4(x) can be expressed as

ψ_4(x) = 0.3258 φ_4(2x) − 1.011 φ_4(2x + 1) + 0.8922 φ_4(2x + 2) + 0.0396 φ_4(2x + 3) − 0.2646 φ_4(2x + 4) − 0.0436 φ_4(2x + 5) + 0.0466 φ_4(2x + 6) + 0.015 φ_4(2x + 7),

which is in C^{β_4} with

β_4 > 4 − (1/2) log_2(52).


References

[1] A. Boggess and F. J. Narcowich, A First Course in Wavelets with Fourier Analysis, Second edition, John Wiley & Sons, Inc., Hoboken, NJ, 2009.

[2] C. de Boor, B(asic)-Spline Basics, http://www.cs.unc.edu/ dm/UNC/COMP258/Papers/bsplbasic.pdf.

[3] C. de Boor, A Practical Guide to Splines, Springer-Verlag, New York, 1978.

[4] C. K. Chui, An Introduction to Wavelets, Wavelet Analysis and Its Applications, Vol. 1, Academic Press, Inc., New York, 1992.

[5] O. Christensen, An Introduction to Frames and Riesz Bases, Birkhäuser, Boston, 2002.

[6] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, Second Edition, MIT Press and McGraw-Hill, 2001.

[7] I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics, 61, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1992.

[8] T. Erdelyi and J. Szabados, On polynomials with positive coefficients, J. Approx. Theory 54 (1988), no. 1, 107-122.

[9] E. Hernandez and G. Weiss, A First Course on Wavelets, With a foreword by Yves Meyer, Studies in Advanced Mathematics, CRC Press, Boca Raton, FL, 1996.

[10] D. Han, K. Kornelson, D. Larson, and E. Weber, Frames for Undergraduates, AMS, Providence, Rhode Island, 2007.

[11] T.-X. He, Biorthogonal wavelets with certain regularities, Appl. Comput. Harmon. Anal. 11 (2001), no. 2, 227-242.

[12] T.-X. He, Biorthogonal spline type wavelets, Comput. Math. Appl. 48 (2004), no. 9, 1319-1334.

[13] T.-X. He, Construction of biorthogonal B-spline type wavelet sequences with certain regularities, J. Appl. Funct. Anal. 2 (2007), no. 4, 339-360.

[14] S. Jukna, Extremal Combinatorics, With Applications in Computer Science, Texts in Theoretical Computer Science, An EATCS Series, Springer-Verlag, Berlin, 2001.

[15] G. G. Lorentz, The degree of approximation by polynomials with positive coefficients, Math. Ann. 151 (1963), 239-251.

[16] http://en.wikipedia.org/wiki/Extended_Euclidean_algorithm.


Fixed point results for commuting mappings in modular metric spaces

Emine Kılınç 1, Cihangir Alaca 2,*

1 Department of Mathematics, Institute of Natural and Applied Sciences, Celal Bayar University, 45140 Manisa, Turkey
2 Department of Mathematics, Faculty of Science and Arts, Celal Bayar University, 45140 Manisa, Turkey

Abstract: The purpose of this paper is to prove two main fixed point theorems for commuting mappings in modular metric spaces. Our main results are more general than those of Jungck [17] and Fisher [16].

2010 Mathematics Subject Classification: 46A80, 47H10, 54E35.

Key words and phrases: Modular metric spaces, fixed point, commuting mappings.

1 Introduction

The Banach Contraction Principle, published in 1922 [5], marks the beginning of fixed point theory in metric spaces; results related to it in particular situations had already been obtained by Liouville, Picard, and Goursat. A number of authors have defined and studied contractive-type mappings on a complete metric space X which generalize the well-known Banach contraction (see [18], [19], [26], [11]). The notion of modular space was introduced by Nakano [24] and was intensively developed by Koshi, Shimogaki, Yamamuro (see [21, 27]) and others. Many mathematicians are interested in fixed points in modular spaces. In 2008, Chistyakov introduced the notion of modular metric space generated by an F-modular and developed the theory of this space [7]; based on the same idea, in 2010 he defined the notion of a modular on an arbitrary set and developed the theory of the metric space generated by a modular, called a modular metric space [8]. Afrah A. N. Abdou [1] studied and proved some new fixed point theorems for pointwise and asymptotic pointwise contraction mappings in modular metric spaces. Azadifar et al. [2] introduced the notion of modular G-metric spaces and proved some fixed point theorems for contractive mappings, and Azadifar et al. [4] proved the existence and uniqueness of a common fixed point of compatible mappings of integral type in this

*Corresponding author. Tel.: +90 236 2013209; fax: +90 236 2412158. E-mail addresses: [email protected] (E. Kılınç), [email protected] (C. Alaca).


J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 3-4, 204-210, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


space. Kılınç and Alaca [20] defined (ε, k)-uniformly locally contractive mappings and the ε-chainable concept and proved a fixed point theorem for these concepts in complete modular metric spaces. Recently, many authors [3, 6, 9, 10, 14, 23] have studied different fixed point results for modular metric spaces. In this paper, we give modular metric versions of the fixed point theorem given by Jungck [17] and of a generalization of this theorem given by Fisher [16].

given by Jungck [17] and a generalization of this theorem which was given by Fisher [16].

2 Preliminaries

In this section, we give some basic concepts and definitions about modular metric spaces.

Definition 2.1 [[8], Definition 2.1] Let X be a nonempty set. A function w : (0, ∞) × X × X → [0, ∞] is said to be a metric modular on X if, for all x, y, z ∈ X, the following conditions hold:

(i) w_λ(x, y) = 0 for all λ > 0 if and only if x = y;

(ii) w_λ(x, y) = w_λ(y, x) for all λ > 0;

(iii) w_{λ+μ}(x, y) ≤ w_λ(x, z) + w_μ(z, y) for all λ, μ > 0.

If instead of (i) we have only the condition (i′) w_λ(x, x) = 0 for all λ > 0, then w is said to be a (metric) pseudomodular on X.
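A standard example (our illustration, following the usual choice in the literature) is w_λ(x, y) = |x − y|/λ on X = R; the sketch below spot-checks the three axioms on random samples.

```python
import random

def w(lam, x, y):
    """Example metric modular on the real line: w_lam(x, y) = |x - y| / lam."""
    return abs(x - y) / lam

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    lam, mu = random.uniform(0.1, 5), random.uniform(0.1, 5)
    assert w(lam, x, x) == 0                                          # (i)
    assert w(lam, x, y) == w(lam, y, x)                               # (ii)
    assert w(lam + mu, x, y) <= w(lam, x, z) + w(mu, z, y) + 1e-12    # (iii)
```

Axiom (iii) holds here because |x − y| ≤ |x − z| + |z − y| while a/λ ≥ a/(λ + μ) and b/μ ≥ b/(λ + μ).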

The main property of a metric modular w on a set X [8] is the following: given x, y ∈ X, the function 0 < λ ↦ w_λ(x, y) ∈ [0, ∞] is nonincreasing on (0, ∞). In fact, if 0 < μ < λ, then (iii), (i′) and (ii) imply

w_λ(x, y) ≤ w_{λ−μ}(x, x) + w_μ(x, y) = w_μ(x, y).

It follows that at each point λ > 0 the right limit w_{λ+0}(x, y) = lim_{μ→λ+0} w_μ(x, y) and the left limit w_{λ−0}(x, y) = lim_{ε→+0} w_{λ−ε}(x, y) exist in [0, ∞], and the following two inequalities hold:

w_{λ+0}(x, y) ≤ w_λ(x, y) ≤ w_{λ−0}(x, y).    (2.1)

Definition 2.2 [[23]] Let X_w be a complete modular metric space. A self-mapping T on X_w is said to be a contraction if there exists 0 ≤ k < 1 such that

w_λ(Tx, Ty) ≤ k w_λ(x, y)    (2.2)

for all x, y ∈ X_w and λ > 0.

Theorem 2.1 [[23]] Let X_w be a complete modular metric space and T a contraction on X_w. Then the sequence (T^n x)_{n∈N} converges to the unique fixed point of T in X_w for any initial x ∈ X_w.
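For the example modular w_λ(x, y) = |x − y|/λ, Theorem 2.1 can be illustrated with a concrete contraction (our own sketch; the map T and the starting point are arbitrary choices):

```python
def w(lam, x, y):
    return abs(x - y) / lam          # example metric modular on R

def T(x):
    return x / 2 + 1                 # contraction with k = 1/2; fixed point u = 2

x = 50.0
for _ in range(60):
    x = T(x)                         # Picard iterates T^n x_0
assert w(1.0, x, 2.0) < 1e-12        # w_lam(T^n x_0, u) -> 0 for every lam > 0
assert abs(T(x) - x) < 1e-12         # x is (numerically) the fixed point
```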

Definition 2.3 [[23], Definition 2.4] Let X_w be a modular metric space. Then the following definitions hold:


(1) The sequence (x_n)_{n∈N} in X_w is said to be convergent to x ∈ X_w if w_λ(x_n, x) → 0 as n → ∞ for all λ > 0.

(2) The sequence (x_n)_{n∈N} in X_w is said to be Cauchy if w_λ(x_m, x_n) → 0 as m, n → ∞ for all λ > 0.

(3) A subset C of X_w is said to be closed if the limit of a convergent sequence of C always belongs to C.

(4) A subset C of X_w is said to be complete if any Cauchy sequence in C is a convergent sequence and its limit is in C.

3 Main Results

In this section we give two main theorems for commuting mappings in modular metric spaces: modular metric versions of the fixed point theorem given by Jungck in 1976 [17] and of a generalization of this theorem given by Fisher [16] in 1981 for metric spaces.

Theorem 3.1 Let I and T be commuting mappings of a complete modular metric space X_w into itself satisfying the inequality

w_λ(Tx, Ty) ≤ k·w_λ(Ix, Iy)    (3.1)

for all x, y ∈ X_w, where 0 < k < 1. If the range of I contains the range of T, I is continuous, and w_λ(x, Ix) < ∞, then T and I have a unique common fixed point.

Proof. Let x_0 ∈ X_w be arbitrary. Then Tx_0 and Ix_0 are well defined. Since Tx_0 ∈ I(X_w), there is some x_1 ∈ X_w such that Ix_1 = Tx_0. Then choose x_2 ∈ X_w such that Ix_2 = Tx_1. In general, once x_n is chosen, we choose a point x_{n+1} in X_w such that Ix_{n+1} = Tx_n. We show that (Ix_n) is a Cauchy sequence. From (3.1) we have

w_λ(Ix_n, Ix_{n+1}) = w_λ(Tx_{n−1}, Tx_n) ≤ k·w_λ(Ix_{n−1}, Ix_n).

Then by induction we have

w_λ(Ix_n, Ix_{n+1}) ≤ k·w_λ(Ix_{n−1}, Ix_n) ≤ ... ≤ k^n w_λ(Ix_0, Ix_1).

Taking the limit as n → ∞, we get

w_λ(Ix_n, Ix_{n+1}) → 0.

Without loss of generality, we can assume that for λ/(m − n) > 0 there exists ε/(m − n) such that

w_{λ/(m−n)}(Ix_n, Ix_{n+1}) ≤ ε/(m − n).

For m, n ∈ N with m > n, we have

w_λ(Ix_n, Ix_m) ≤ w_{λ/(m−n)}(Ix_n, Ix_{n+1}) + w_{λ/(m−n)}(Ix_{n+1}, Ix_{n+2}) + ... + w_{λ/(m−n)}(Ix_{m−1}, Ix_m)
≤ ε/(m − n) + ε/(m − n) + ... + ε/(m − n) = ε.


Thus (Ix_n) is a Cauchy sequence. Since X_w is complete, there is some u ∈ X_w such that

lim_{n→∞} Ix_n = lim_{n→∞} Tx_{n−1} = u.

Since I is continuous and T and I commute, we get

Iu = I(lim_{n→∞} Ix_n) = lim_{n→∞} I²x_n,
Iu = I(lim_{n→∞} Tx_n) = lim_{n→∞} ITx_n = lim_{n→∞} TIx_n.

From (3.1) we get

w_λ(TIx_n, Tu) ≤ k·w_λ(I²x_n, Iu).

Taking the limit as n → ∞, we obtain

w_λ(Iu, Tu) ≤ k·w_λ(Iu, Iu) = 0.

Hence Iu = Tu. Again from (3.1),

w_λ(Tx_n, Tu) ≤ k·w_λ(Ix_n, Iu).

Letting n tend to ∞, we get

w_λ(u, Tu) ≤ k·w_λ(u, Iu) = k·w_λ(u, Tu).

Hence Tu = u. To show the uniqueness of u, assume that there exists v ∈ X_w such that Tv = Iv = v. Then

w_λ(u, v) = w_λ(Tu, Tv) ≤ k·w_λ(Iu, Iv) = k·w_λ(u, v).

Hence u = v. This completes the proof.

Theorem 3.2 Let S and T be commuting mappings, and let I and J be commuting mappings, of a complete modular metric space X_w into itself satisfying

w_λ(Sx, Ty) ≤ k·w_λ(Ix, Jy)    (3.2)

for all x, y ∈ X_w, where 0 ≤ k < 1. If I and J are continuous and w_λ(Ix_0, Jx_1) < ∞, then S, T, I and J have a unique common fixed point.

Proof. Let x_0 ∈ X_w be arbitrary. Since Sx_0 ∈ J(X_w), let x_1 ∈ X_w be such that Jx_1 = Sx_0; also, as Tx_1 ∈ I(X_w), let x_2 ∈ X_w be such that Ix_2 = Tx_1. In general, x_{2n+1} ∈ X_w is chosen such that Jx_{2n+1} = Sx_{2n}, and x_{2n+2} ∈ X_w such that Ix_{2n+2} = Tx_{2n+1}, n = 0, 1, .... Denote

y_{2n} = Jx_{2n+1} = Sx_{2n},
y_{2n+1} = Ix_{2n+2} = Tx_{2n+1}, n ≥ 0.


Then it can be proved that (y_n) is a Cauchy sequence:

w_λ(Jx_{2n+1}, Ix_{2n+2}) = w_λ(Sx_{2n}, Tx_{2n+1}) ≤ k·w_λ(Ix_{2n}, Jx_{2n+1}) = k·w_λ(Tx_{2n−1}, Sx_{2n}) ≤ k²·w_λ(Jx_{2n−1}, Ix_{2n}) ≤ ... ≤ k^{2n}·w_λ(Jx_1, Ix_2) ≤ k^{2n+1}·w_λ(Ix_0, Jx_1).

Then, taking n → ∞, we have

w_λ(Jx_{2n+1}, Ix_{2n+2}) → 0.

Then there exists ε > 0 such that w_λ(Jx_{2n+1}, Ix_{2n+2}) ≤ ε. Without loss of generality, we can assume that for λ/(m − n) > 0 there exists ε/(m − n) > 0 such that

w_{λ/(m−n)}(Jx_{2n+1}, Ix_{2n+2}) ≤ ε/(m − n).

Hence, choosing m = 2k + 1 and n = 2l + 2 with k > l and using the triangle inequality, we get

w_λ(y_m, y_n) = w_λ(y_{2k+1}, y_{2l+2}) = w_λ(Ix_{2k+2}, Jx_{2l+1})
≤ w_{λ/(m−n)}(Ix_{2k+2}, Jx_{2k+1}) + w_{λ/(m−n)}(Jx_{2k+1}, Ix_{2k}) + ... + w_{λ/(m−n)}(Ix_{2l+2}, Jx_{2l+1})
≤ ε/(m − n) + ε/(m − n) + ... + ε/(m − n) ≤ ε.

Hence (y_n) is a Cauchy sequence. Let u ∈ X_w be such that

lim_{n→∞} Sx_{2n} = lim_{n→∞} Jx_{2n+1} = lim_{n→∞} Tx_{2n+1} = lim_{n→∞} Ix_{2n+2} = u.

Since I is continuous and I and S commute, it follows that

lim_{n→∞} I²x_{2n+2} = Iu,  lim_{n→∞} SIx_{2n} = lim_{n→∞} ISx_{2n} = Iu.

From (3.2),

w(SIx_{2n}, Tx_{2n+1}) ≤ k·w(I²x_{2n}, Jx_{2n+1}).

Taking the limit as n → ∞ we get

w(Iu, u) ≤ k·w(Iu, u).

Hence Iu = u. Similarly we obtain Ju = u, as J is continuous and T and J commute.


From (3.2) we have, as Iu = u,

w(Su, Tx_{2n+1}) ≤ k·w(u, Jx_{2n+1}).

Taking the limit as n → ∞ we get

w(Su, u) ≤ k·w(u, u) = 0.

Hence Su = u. Again from (3.2), considering the results above, we have

w(Su, Tu) ≤ k·w(Iu, Ju) = k·w(u, u) = 0.

Hence Tu = Su. Thus we have proved that

Su = Tu = Iu = Ju = u.

Clearly u is the unique common fixed point of S, T, I and J. This completes the proof.
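The scheme of the proof can be seen numerically. The following minimal sketch uses assumed maps, not taken from the paper: S = T : x ↦ x/2 + 1 and I = J = identity on the real line with w(x, y) = |x − y| (a modular metric that does not depend on its parameter). These maps commute and satisfy w(Sx, Ty) ≤ (1/2)·w(Ix, Jy), and the Jungck-type iteration Jx_{n+1} = Sx_n drives x_n to the unique common fixed point.

```python
# Minimal numeric sketch of the iteration in Theorem 3.2 (all maps are
# illustrative assumptions): S = T : x -> x/2 + 1 and I = J = identity
# commute and satisfy w(Sx, Ty) <= 0.5 * w(Ix, Jy) for w(x, y) = |x - y|.
S = T = lambda x: x / 2 + 1
I = J = lambda x: x

x = 0.0
for _ in range(60):
    x = S(x)                  # J x_{n+1} = S x_n, with J = identity

# x is (numerically) the unique common fixed point u = 2,
# and Su = Tu = Iu = Ju = u there.
```

The contraction constant k = 1/2 makes the error shrink geometrically, mirroring the k^{2n+1} estimate in the proof above.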


A general common fixed point theorem for weakly compatible mappings

Kristaq Kikina, Luljeta Kikina and Sofokli Vasili

Department of Mathematics and Computer Sciences,

University of Gjirokastra, Gjirokastra, Albania. [email protected], [email protected]

Abstract. The purpose of this paper is to complete and generalize many existing results in the literature for weakly compatible mappings, and to carry out the proofs without the Hausdorff condition.

Mathematics Subject Classification: 47H10, 54H25

Keywords: Cauchy sequence, fixed point, generalized metric space, generalized quasi-metric space, quasi-contraction.

1. Introduction and preliminaries

It is known that common fixed point theorems are generalizations of fixed point theorems. Over the past few decades, many researchers have been interested in generalizing fixed point theorems to coincidence point theorems and common fixed point theorems.

The concept of metric space, as an ambient space in fixed point theory, has been generalized in several directions. Among such generalizations are the quasi-metric spaces, the generalized metric spaces and the generalized quasi-metric spaces.

The notion of quasi-metric, also known as b-metric, is in line with that of metric, with the triangle inequality d(x, y) ≤ d(x, z) + d(z, y) replaced by the quasi-triangular inequality

d(x, y) ≤ k[d(x, z) + d(z, y)], k ≥ 1

(see [1], [6], [7], [8], [11], [12], [25], [30], [34], [35]). In 2000 Branciari [8] introduced the concept of generalized metric space (gms), by replacing the triangle inequality with a tetrahedral inequality. Starting with the paper of Branciari, many fixed point and common fixed point results have been established in this interesting space. For further information, the reader can refer to [2, 13-16, 20, 22-24, 28-29, 32-33, 36].

Recently L. Kikina and K. Kikina [26] introduced the concept of generalized quasi-metric space (gqms) by replacing the tetrahedral inequality d(x, y) ≤ d(x, z) + d(z, w) + d(w, y) with a more general inequality, namely the quasi-tetrahedral inequality

d(x, y) ≤ k[d(x, z) + d(z, w) + d(w, y)], k ≥ 1.

The generalized metric space is a special case of generalized quasi-metric space (for k = 1). Also, every quasi-metric space is a gqms, while the converse is not true [26]. Further aspects may be found in [21], [26], [27]. In a generalized quasi-metric space one can successfully define an "open" ball and hence a topology. On the other hand, this topology fails to provide some useful topological properties: an "open" ball in a generalized quasi-metric space need not be an open set; the generalized quasi-metric need not be continuous; a convergent sequence in a generalized quasi-metric space need not be

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NOS. 3-4, 211-218, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


Cauchy; and the generalized metric space need not be Hausdorff, hence the uniqueness of limits cannot be guaranteed.

The above properties of generalized metric spaces (the special case of gqms for k = 1) that do not hold for metric spaces were first observed by Das and Dey [13], [14], and these facts were also observed independently by Samet [32] and by Sarma, Rao and Rao [33]. Initially these properties were assumed to hold, leading to incorrect proofs of several theorems. This caused some of the previous results to be reconsidered and corrected. Under these circumstances, not every fixed point theorem for metric spaces can be extended to gms or gqms. Even when such an extension can be done, the proof of the theorem is technically more complicated than in the usual setting. Because of those difficulties, some authors have taken Hausdorffity as an additional condition in their theorems, but this is not always necessary. For example, the assertion in [33] that the space needs to be Hausdorff is superfluous, a fact first noted by Kikina and Kikina [23]. See also [36]. The aim of this paper is to present a general theorem from which a number of current results on common fixed points in different kinds of spaces (metric, quasi-metric, gms, gqms, etc.) follow as corollaries. The following definition was given by Kikina in 2012.

Definition 1.1 [26] Let X be a set. A function d : X × X → R⁺ is called a generalized quasi-metric on X if and only if there exists a constant k ≥ 1 such that for all x, y ∈ X and for all distinct points z, w ∈ X, each of them different from x and y, the following conditions hold:

(i) d(x, y) = 0 ⇔ x = y;
(ii) d(x, y) = d(y, x);
(iii) d(x, y) ≤ k[d(x, z) + d(z, w) + d(w, y)] (quasi-tetrahedral inequality).

Inequality (iii) is often called the quasi-tetrahedral inequality, and k is often called the coefficient of d. A pair (X, d) is called a generalized quasi-metric space (gqms) if X is a set and d is a generalized quasi-metric on X.

The set B(a, r) = {x ∈ X : d(x, a) < r} is called the "open" ball with center a ∈ X and radius r > 0.

The family τ = {Q ⊂ X : ∀a ∈ Q, ∃r > 0, B(a, r) ⊂ Q} is a topology on X, called the topology induced by the generalized quasi-metric d.

Definition 1.2 Let (X, d) be a gqms.

(1) A sequence {x_n} in X is said to be convergent to a point x ∈ X if and only if lim_{n→∞} d(x_n, x) = 0. We denote this by x_n → x.

(2) A sequence {x_n} is a Cauchy sequence if and only if for each ε > 0 there exists a natural number n(ε) such that d(x_n, x_m) < ε for all m > n > n(ε).

(3) (X, d) is called complete if every Cauchy sequence is convergent in X.

Definition 1.3 Let (X, d) be a gqms. The generalized quasi-metric d is said to be continuous if and only if for any sequences {x_n}, {y_n} ⊆ X such that lim_{n→∞} x_n = x and lim_{n→∞} y_n = y, we have lim_{n→∞} d(x_n, y_n) = d(x, y).

A general common fixed point theorem for weakly compatible mappings,

Kristaq Kikina et al212


The following example illustrates the existence of a generalized quasi-metric space for an arbitrary coefficient k ≥ 1:

Example 1.4 [26] Let X = {1 − 1/n : n = 1, 2, ...} ∪ {1, 2}. Define d : X × X → R as follows:

d(x, y) = 0, for x = y;
d(x, y) = 1/n, for x ∈ {1, 2} and y = 1 − 1/n, or y ∈ {1, 2} and x = 1 − 1/n, x ≠ y;
d(x, y) = 3k, for x, y ∈ {1, 2}, x ≠ y;
d(x, y) = 1, otherwise.

Then it is easy to see that (X, d) is a generalized quasi-metric space and is not a generalized metric space (for k > 1).

Note that the sequence x_n = 1 − 1/n converges to both 1 and 2, and it is not a Cauchy sequence:

d(x_n, x_m) = d(1 − 1/n, 1 − 1/m) = 1, for all n ≠ m in N.
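A quick numeric check of this behavior (treating the case values of d as reconstructed above as an assumption) confirms that x_n approaches both 1 and 2 while distinct sequence points stay a fixed distance apart:

```python
# Numeric illustration of Example 1.4; the exact case values of d are
# an assumption from the reconstruction above, not a verified quote.
k = 4.0  # any coefficient k >= 1

def d(x, y):
    if x == y:
        return 0.0
    if x in (1, 2) and y in (1, 2):
        return 3 * k
    # distance between a limit candidate (1 or 2) and a point 1 - 1/n
    for lim, pt in ((x, y), (y, x)):
        if lim in (1, 2) and pt not in (1, 2):
            n = round(1 / (1 - pt))
            return 1 / n
    return 1.0  # two distinct sequence points

xs = [1 - 1 / n for n in range(1, 50)]
# d(x_n, 1) = d(x_n, 2) = 1/n -> 0: the sequence converges to both
# limits, yet d(x_n, x_m) = 1 for n != m, so it is not Cauchy.
```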

The above example shows us some of the properties of metric spaces which do not hold in gqms (see [26]).

Kikina and Kikina in [27] extended the well-known Ciric quasi-contraction principle [10] to generalized quasi-metric spaces. We restate Theorem 2.6 of [27] as follows:

Theorem 1.5 Let (X, d) be a complete gqms with coefficient k ≥ 1 and let T : X → X be a self-mapping satisfying

d(Tx, Ty) ≤ h·max{d(x, y), d(x, Tx), d(y, Ty), d(x, Ty), d(y, Tx)} (1)

for all x, y ∈ X, where 0 ≤ h < 1/k. Then T has a unique fixed point in X.

By this theorem, in the special case k = 1, we obtain the Ciric quasi-contraction principle in a generalized metric space, as presented in [15] and [22].

Let F be the set of functions ξ : [0, ∞) → [0, ∞) satisfying the condition ξ(t) = 0 if and only if t = 0. We denote by Ψ the set of functions ψ ∈ F such that ψ is continuous and non-decreasing. We reserve Φ for the set of functions α ∈ F such that α is continuous. Finally, by Γ we denote the set of functions β ∈ F satisfying the following condition: β is lower semi-continuous.

Lakzian and Samet in [28] established the following fixed point theorem.

Theorem 1.6 ([28]) Let (X, d) be a Hausdorff and complete gms and let T : X → X be a self-mapping satisfying ψ(d(Tx, Ty)) ≤ ψ(d(x, y)) − ϕ(d(x, y)) for all x, y ∈ X, where ψ ∈ Ψ and ϕ ∈ Φ. Then T has a unique fixed point in X.

In [5], a slightly improved version of Theorem 1.6 is given, obtained by replacing the continuity condition on ϕ with lower semi-continuity.

Definition 1.7 ([19]) Let X be a non-empty set and T, F : X → X. The mappings T and F are said to be weakly compatible if they commute at their coincidence points (i.e. TFx = FTx whenever Tx = Fx). A point y ∈ X is called a point of coincidence of T and F if there exists a point x ∈ X such that y = Tx = Fx.


Di Bari and Vetro [16] extended Theorem 1.6 of Lakzian and Samet to the following common fixed point theorem:

Theorem 1.8 ([16]) Let (X, d) be a Hausdorff gms and let T and F be self-mappings on X such that TX ⊆ FX. Assume that (FX, d) is a complete gms and that the following condition holds:

ψ(d(Tx, Ty)) ≤ ψ(d(Fx, Fy)) − ϕ(d(Fx, Fy))

for all x, y ∈ X, where ψ ∈ Ψ and ϕ ∈ Γ. Then T and F have a unique point of coincidence in X. Moreover, if T and F are weakly compatible, then T and F have a unique common fixed point.

In [9], Choudhury and Kundu established the (ψ, α, β)-weak contraction principle in partially ordered metric spaces. Recently, Isik and Turkoglu [18] and Bilgili et al. [5] extended the results of [9] to the setting of generalized metric spaces:

Theorem 1.9 ([5]) Let (X, d) be a Hausdorff and complete gms, and let T : X → X be a self-mapping such that

ψ(d(Tx, Ty)) ≤ α(d(x, y)) − β(d(x, y))

for all x, y ∈ X, where ψ ∈ Ψ, α ∈ Φ, β ∈ Γ and these mappings satisfy the condition

ψ(t) − α(t) + β(t) > 0, for all t > 0.

Then T has a unique fixed point in X.

Theorem 1.10 (Isik and Turkoglu [18]) Let (X, d) be a Hausdorff and complete gms and let T, F : X → X be self-mappings such that TX ⊆ FX and FX is a closed subspace of X, and that the following condition holds:

ψ(d(Tx, Ty)) ≤ α(d(Fx, Fy)) − β(d(Fx, Fy))

for all x, y ∈ X, where ψ ∈ Ψ, α ∈ Φ, β ∈ Γ and these mappings satisfy the condition

ψ(t) − α(t) + β(t) > 0, for all t > 0.

Then T and F have a unique coincidence point in X. Moreover, if T and F are weakly compatible, then T and F have a unique common fixed point. The following lemma will play an important role in obtaining the main results of this paper.

Lemma 1.11 [17] Let X be a nonempty set and f : X → X a function. Then there exists a subset E ⊆ X such that f(E) = f(X) and f : E → X is one-to-one.
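The content of Lemma 1.11 is constructive: pick one representative preimage for each value of f. A small sketch over a finite set (the set X and map f below are hypothetical illustrations, not from the paper):

```python
# Sketch of Lemma 1.11 on a finite set: keeping one preimage per image
# value yields E with f(E) = f(X) and f one-to-one on E.
# X and f are illustrative assumptions.
X = [0, 1, 2, 3, 4, 5]
f = lambda x: x % 3

reps = {}
for x in X:
    reps.setdefault(f(x), x)   # keep the first preimage of each value
E = sorted(reps.values())
```

Here E = [0, 1, 2], f(E) = {0, 1, 2} = f(X), and f is injective on E, which is exactly what the reduction in the proof of Theorem 2.3 below needs.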

2. Main results

In this section, we state and prove our main results. First, we prove an important proposition that makes the Hausdorffity condition unnecessary for most of the well-known theorems on gms and gqms.

Proposition 2.1 If {y_n} is a Cauchy sequence in a generalized quasi-metric space (X, d) with coefficient k ≥ 1 and lim_{n→∞} y_n = u, then the limit u is unique.

Proof. Assume that u' ≠ u is also a limit of {y_n}.


If {y_n} is a sequence of distinct points, y_n ≠ y_m for all n ≠ m, then there exists n_0 ∈ N such that u and u' are different from y_n for all n > n_0. For n > n_0, by the quasi-tetrahedral inequality of Definition 1.1 we obtain

d(u, u') ≤ k[d(u, y_n) + d(y_n, y_{n+1}) + d(y_{n+1}, u')].

Letting n tend to infinity we get d(u, u') = 0, and so u = u', a contradiction. If {y_n} is not a sequence of distinct points, then we have the following two cases:

a) The sequence {y_n} contains a subsequence {y_{n_p}} of distinct points, and in this case we can use the same reasoning as above.

b) The sequence {y_n} contains a constant subsequence {y_{n_p}}: y_{n_p} = c, for all p ∈ N. Then c = lim_{n→∞} y_n = u = u', a contradiction.

This completes the proof of the Proposition.

Remark 2.2 From the above proposition it follows that, even in the cases of gms and gqms, including the Hausdorffity condition in a theorem in order to ensure the uniqueness of the limit of a Cauchy sequence is unnecessary.

The following results are inspired by the techniques and ideas of [17].

The main Theorem

A natural way to unify, and to prove in a simple manner, several fixed point and common fixed point theorems is to consider a "functional" contractive condition of the form

F(d(Tx, Ty), d(x, y), d(x, Tx), d(y, Ty), d(x, Ty), d(y, Tx)) ≤ 0, for all x, y ∈ X,

where F : R₊⁶ → R is an appropriate function. For more details about the possible choices of F we refer to the 1977 paper by Rhoades [31]; see also Turinici [37], Berinde and Vetro [3], Berinde [4], etc.

Let (X, d) be a space (metric, quasi-metric, generalized metric, generalized quasi-metric, etc.) in which the following theorem has been proved:

Theorem *: If (X, d) is a complete space and T : X → X is a self-mapping satisfying

F(d(Tx, Ty), d(x, y), d(x, Tx), d(y, Ty), d(x, Ty), d(y, Tx)) ≤ 0 (4)

for all x, y ∈ X, then T has a unique fixed point in X.

We will prove here the main theorem:

Theorem 2.3 Let (X, d) be a space on which Theorem * holds and let T, f : X → X be self-mappings such that T(X) ⊆ f(X). Assume that (fX, d) is a complete space and that the following condition holds:

F(d(Tx, Ty), d(fx, fy), d(fx, Tx), d(fy, Ty), d(fx, Ty), d(fy, Tx)) ≤ 0 (5)

for all x, y ∈ X. Then T and f have a unique coincidence point in X. Moreover, if T and f are weakly compatible, then T and f have a unique common fixed point.

Proof. By Lemma 1.11, there exists E ⊆ X such that f(E) = f(X) and f : E → X is one-to-one. Now, define a map h : f(E) → f(E) by h(fx) = Tx. Since f is one-to-one on E, h is well-defined. Note that

F(d(h(fx), h(fy)), d(fx, fy), d(fx, h(fx)), d(fy, h(fy)), d(fx, h(fy)), d(fy, h(fx))) ≤ 0


for all fx, fy ∈ f(E). Since f(E) = f(X) is complete, by Theorem * there exists x_0 ∈ X such that h(fx_0) = fx_0. Hence T and f have a point of coincidence, which is also unique. It is clear that T and f have a unique common fixed point whenever T and f are weakly compatible.

Corollary 2.4 (A generalization of the Ciric quasi-contraction principle [10]) Let (X, d) be a gqms with coefficient k ≥ 1 and let T, f : X → X be self-mappings such that T(X) ⊆ f(X). Assume that (fX, d) is a complete gqms and that the following condition holds:

d(Tx, Ty) ≤ h·max{d(fx, fy), d(fx, Tx), d(fy, Ty), d(fx, Ty), d(fy, Tx)}

for all x, y ∈ X, where 0 ≤ h < 1/k. Then T and f have a unique coincidence point in X. Moreover, if T and f are weakly compatible, then T and f have a unique common fixed point.

Proof. Corollary 2.4 follows from Theorem 2.3 in the case when Theorem * is Theorem 1.5 and

F(d(Tx, Ty), d(x, y), d(x, Tx), d(y, Ty), d(x, Ty), d(y, Tx)) = d(Tx, Ty) − h·max{d(x, y), d(x, Tx), d(y, Ty), d(x, Ty), d(y, Tx)}, where 0 ≤ h < 1/k.

From this Corollary, in the special case f = I_X, we obtain Theorem 1.5.

Corollary 2.5 Theorem 1.8 follows from Theorem 2.3 in the case when Theorem * is Theorem 1.6 and

F(d(Tx, Ty), d(x, y), d(x, Tx), d(y, Ty), d(x, Ty), d(y, Tx)) = ψ(d(Tx, Ty)) − ψ(d(x, y)) + ϕ(d(x, y)).

Corollary 2.6 Theorem 1.10 follows from Theorem 2.3 in the case when Theorem * is Theorem 1.9 and

F(d(Tx, Ty), d(x, y), d(x, Tx), d(y, Ty), d(x, Ty), d(y, Tx)) = ψ(d(Tx, Ty)) − α(d(x, y)) + β(d(x, y)).

Corollary 2.7 Let (X, d) be a gqms with coefficient k ≥ 1 and let T, f : X → X be self-mappings such that T(X) ⊆ f(X). Assume that (fX, d) is a complete gqms and that the following condition holds:

d(Tx, Ty) ≤ h·max U_xy (2)

for all x, y ∈ X, where 0 ≤ h < 1/k and

U_xy ⊆ {d(fx, fy), d(fx, Tx), d(fy, Ty), d(fx, Ty), d(fy, Tx)}. (3)

Then T and f have a unique coincidence point in X. Moreover, if T and f are weakly compatible, then T and f have a unique common fixed point.

Proof. By (2) and (3), we have

d(Tx, Ty) ≤ h·max U_xy ≤ h·max{d(fx, fy), d(fx, Tx), d(fy, Ty), d(fx, Ty), d(fy, Tx)}.

Then it suffices to apply Corollary 2.4. For different choices of U_xy in Corollary 2.7 we get different theorems. For example:

Corollary 2.8 For U_xy = {d(fx, fy)}, we have an extension and generalization of the Banach contraction principle in a gqms.


Corollary 2.9 For U_xy = {d(fx, fy), d(fx, Tx), d(fy, Ty)}, we have an extension and generalization of the Rhoades theorem [31] in a gqms.

Remark 2.10 We can obtain many other similar results from the Rhoades classification [31].

Conclusion 2.11 If in the future any new theorem of the type of Theorem * is proved, with a new appropriate form of the function F, then the corresponding version of Theorem 2.3 holds as well.

References

[1] Aydi, H., Bota, M.F., Karapinar, E., Moradi, S., A common fixed point for weak ϕ-contractions on b-metric spaces, Fixed Point Theory 13(2) (2012), 337-346.
[2] Azam, A., Arshad, M., Kannan fixed point theorem on generalized metric spaces, J. Nonlinear Sci. Appl. 1(1) (2008), 45-48.
[3] Berinde, V., Vetro, F., Common fixed points of mappings satisfying implicit contractive conditions, Fixed Point Theory and Applications 2012, 2012:105.
[4] Berinde, V., Approximating fixed points of implicit almost contractions, Hacet. J. Math. Stat. 40 (2012), 93-102.
[5] Bilgili, N., Karapinar, E., Turkoglu, D., A note on common fixed points for (ψ,α,β)-weakly contractive mappings in generalized metric spaces, Fixed Point Theory Appl. 2013, 2013:287.
[6] Boriceanu, M., Bota, M., Petrusel, A., Multivalued fractals in b-metric spaces, Cent. Eur. J. Math. 8(2) (2010), 367-377.
[7] Bota, M., Molnar, A., Csaba, V., On Ekeland's variational principle in b-metric spaces, Fixed Point Theory 12 (2011), 21-28.
[8] Branciari, A., A fixed point theorem of Banach-Caccioppoli type on a class of generalized metric spaces, Publ. Math. (Debr.) 57 (2000), 31-37.
[9] Choudhury, B.S., Kundu, A., (ψ,α,β)-weak contractions in partially ordered metric spaces, Appl. Math. Lett. 25(1) (2012), 6-10.
[10] Ciric, Lj.B., A generalization of Banach's contraction principle, Proc. Amer. Math. Soc. 45(2) (1974), 267-273.
[11] Czerwik, S., Contraction mappings in b-metric spaces, Acta Math. Inform. Univ. Ostraviensis 1 (1993), 5-11.
[12] Czerwik, S., Nonlinear set-valued contraction mappings in b-metric spaces, Atti Sem. Mat. Univ. Modena 46 (1998), 263-276.
[13] Das, P., Dey, L.K., Fixed point of contractive mappings in a generalized metric space, Math. Slovaca 59(4) (2009), 499-504.
[14] Das, P., Dey, L.K., Porosity of certain classes of operators in generalized metric spaces, Demonstratio Math. 42(1) (2009), 163-174.
[15] Das, P., Lahiri, B.K., Fixed point of a Ljubomir Ciric's quasi-contraction mapping in a generalized metric space, Publ. Math. (Debr.) 61 (2002), 589-594.
[16] Di Bari, C., Vetro, P., Common fixed points in generalized metric spaces, Appl. Math. Comput. 218(13) (2012), 7322-7325.
[17] Haghi, R.H., Rezapour, S., Shahzad, N., Some fixed point generalizations are not real generalizations, Nonlinear Anal. 74 (2011), 1799-1803.
[18] Isik, H., Turkoglu, D., Common fixed points for (ψ,α,β)-weakly contractive mappings in generalized metric spaces, Fixed Point Theory and Applications 2013, 2013:131.


[19] Jungck, G., Rhoades, B.E., Fixed point for set valued functions without continuity, Indian J. Pure Appl. Math. 29(3) (1998), 227-238.
[20] Kikina, L., Kikina, K., A fixed point theorem in generalized metric spaces, Demonstratio Math. 46(1) (2013), 181-190.
[21] Kikina, L., Kikina, K., Gjino, K., A new fixed point theorem on generalized quasimetric spaces, ISRN Math. Analysis, Volume 2012, Article ID 457846.
[22] Kikina, L., Kikina, K., On fixed point of a Ljubomir Ciric quasi-contraction mapping in generalized metric spaces, Publ. Math. Debrecen 83(3) (2013), 353-358.
[23] Kikina, L., Kikina, K., Fixed points on two generalized metric spaces, Int. J. Math. Anal. 5(30) (2011), 1459-1467.
[24] Kikina, L., Kikina, K., Vardhami, I., Fixed point theorems for almost contraction in generalized metric spaces, Creat. Math. Inform., accepted.
[25] Kikina, L., Kikina, K., Generalized fixed point theorem for three mappings on three quasi-metric spaces, Journal of Computational Analysis and Applications 14(2) (2012), 228-238.
[26] Kikina, L., Kikina, K., Two fixed point theorems on a class of generalized quasi-metric spaces, Journal of Computational Analysis and Applications 14(5) (2012), 950-957.
[27] Kikina, L., Kikina, K., Gjino, K., Fixed point theorem for Ciric's type contractions in generalized quasi-metric spaces, Journal of Computational Analysis and Applications 15(7) (2013), 1257-1265.
[28] Lakzian, H., Samet, B., Fixed point for (ψ,ϕ)-weakly contractive mappings in generalized metric spaces, Appl. Math. Lett. 25(5) (2012), 902-906.
[29] Mihet, D., On Kannan fixed point principle in generalized metric spaces, J. Nonlinear Sci. Appl. 2(2) (2009), 92-96.
[30] Peppo, C., Fixed point theorems for (ϕ, k, i, j)-mappings, Nonlinear Anal. 72 (2010), 562-570.
[31] Rhoades, B.E., A comparison of various definitions of contractive mappings, Trans. Amer. Math. Soc. 226 (1977), 257-290.
[32] Samet, B., Discussion on "A fixed point theorem of Banach-Caccioppoli type on a class of generalized metric spaces" by A. Branciari, Publ. Math. (Debr.) 76(4) (2010), 493-494.
[33] Sarma, I.R., Rao, J.M., Rao, S.S., Contractions over generalized metric spaces, J. Nonlinear Sci. Appl. 2(3) (2009), 180-182.
[34] Vulpe, M.I., Ostraih, D., Hoiman, F., The topological structure of a quasimetric space (Russian), Investigations in Functional Analysis and Differential Equations, pp. 14-19, 137, Shtiintsa, Kishinev, 1981.
[35] Xia, Q., The geodesic problem in quasimetric spaces, J. Geom. Anal. 19 (2009), 452-479.
[36] Turinici, M., Functional contractions in local Branciari metric spaces, arXiv:1208.4610v1 [math.GN], 22 Aug 2012, 1-9.
[37] Turinici, M., Fixed points in complete metric spaces, in Proc. Inst. Math. Iaşi (Romanian Academy, Iaşi Branch), pp. 179-182, Editura Academiei R.S.R., Bucureşti, 1976.


Integro-Differential equations of fractional order on unbounded domains

Xuhuan Wang1,2, Yongfang Qi1

1Department of Mathematics, Pingxiang University, Pingxiang, Jiangxi 337055, P. R. China2Department of Mathematics, Baoshan University, Baoshan, Yunnan 678000, P. R. China

Email: Xuhuan Wang - [email protected];

∗Corresponding author


Abstract

In this paper, we discuss the nonlocal conditions for fractional order nonlinear integro-differential equations

on an infinite interval, and prove the existence and uniqueness of bounded solutions. Our analysis relies on some standard fixed point theorems.

Keywords: Fractional order; Integro-Differential equations; Existence; Fixed point theorem.

1 Introduction

It is well known that fractional differential equations are generalizations of classical differential equations

to an arbitrary order. Fractional differential equations arise in many engineering and scientific disciplines, as the mathematical modeling of systems and processes in physics, chemistry, aerodynamics, electrodynamics of complex media, polymer rheology, etc. involves derivatives of fractional order [3,4]. On the other hand, integro-differential equations are an important branch of nonlinear analysis [5].

In 2008, Lakshmikantham and Vatsala [1] studied the basic theory of fractional differential equations

D^q u(t) = f(t, u(t)), u(0) = u_0, 0 < q < 1. (1.1)

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NOS. 3-4, 219-224, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC

In 2012, Karthikeyan and Trujillo [2] considered existence and uniqueness results for fractional integro-differential equations with boundary value conditions by using a fixed point theorem:

D^α y(t) = f(t, y(t), Sy(t)), 0 < α ≤ 1, t ∈ [0, T]; ay(0) + by(T) = c. (1.2)

Motivated by Lakshmikantham and Vatsala [1] and Karthikeyan and Trujillo [2], in this paper we consider the following nonlocal problem for fractional order nonlinear integro-differential equations in E on an infinite interval:

D^α u = f(t, u, Tu), ∀t ∈ J, u(0) + g(u) = u_0, 0 < α ≤ 1, (1.3)

where J = [0, ∞) is an infinite interval, u_0 ∈ E, f ∈ C(J × E × E, E), g ∈ C(J, E),

(Tu)(t) = ∫_0^t k(t, s)u(s) ds, ∀t ∈ J, (1.4)

k ∈ C[D, R], D = {(t, s) ∈ J × J : t ≥ s}.

Let BC[J,E] = {u ∈ C[J,E] : sup_{t∈J} ‖u(t)‖ < ∞} with norm ‖u‖_BC = sup_{t∈J} ‖u(t)‖. Then BC[J,E] is a Banach space.

2 Preliminaries

Let us list some conditions.

(H1) k* = sup_{t∈J} ∫_0^t |k(t, s)| ds < ∞.

(H2) ‖f(t, u, v) − f(t, ū, v̄)‖ ≤ β(t)[a‖u − ū‖ + b‖v − v̄‖] and ‖g(u) − g(v)‖ ≤ L‖u − v‖ for all t ∈ J, u, ū, v, v̄ ∈ E, where the constants satisfy a ≥ 0, b ≥ 0, L ≥ 0, β ∈ C[J, R⁺] and

β* = sup_{t∈J} ∫_0^t (t − s)^{α−1} β(s) ds < ∞,  a* = sup_{t∈J} ∫_0^t (t − s)^{α−1} ‖f(s, θ, θ)‖ ds < ∞;

here θ denotes the zero element of E.

(H3) c_0 = L + (a + bk*)β*/Γ(α) < 1.
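As a quick arithmetic illustration, condition (H3) can be checked directly once the constants are known; the numbers below are assumed for illustration only, not taken from the paper.

```python
import math

# Hypothetical constants: Lipschitz data a, b, L, kernel bound k*,
# weight bound beta*, and fractional order alpha.
L, a, b = 0.2, 0.5, 0.3
k_star, beta_star, alpha = 2.0, 0.4, 0.5

c0 = L + (a + b * k_star) * beta_star / math.gamma(alpha)
# gamma(0.5) = sqrt(pi) ~ 1.7725, so c0 ~ 0.448 < 1 and (H3) holds.
```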

3 Main Results

Theorem 3.1. If conditions (H1)−(H3) are satisfied, then Eq. (1.3) has a unique solution u ∈ C^1[J,E] ∩ BC[J,E]; moreover, for any v_0 ∈ BC[J,E], the iterative sequence defined by

v_n(t) = u_0 − g(u) + (1/Γ(α)) ∫_0^t (t − s)^(α−1) f(s, v_{n−1}(s), (Tv_{n−1})(s)) ds (n = 1, 2, 3, · · ·) (1.5)

converges to u(t) uniformly in t ∈ J.
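The scheme (1.5) is a Picard-type iteration and can be sketched numerically. The sketch below is a minimal illustration, not the authors' method: it takes E = R, g ≡ 0, drops the T-term, and uses a left-endpoint quadrature for the weakly singular kernel; all concrete choices (f, α, grid sizes) are illustrative assumptions.

```python
import math

def picard_iterate(u0, f, alpha, t_max=1.0, n_grid=300, n_iter=25):
    """Sketch of scheme (1.5) with g == 0 and no T-term:
    v_n(t) = u0 + (1/Gamma(alpha)) * int_0^t (t-s)^(alpha-1) f(s, v_{n-1}(s)) ds,
    approximated by a left-endpoint Riemann sum on a uniform grid."""
    h = t_max / n_grid
    ts = [i * h for i in range(n_grid + 1)]
    v = [u0] * (n_grid + 1)          # v_0 == u0 is an admissible starting guess
    gam = math.gamma(alpha)
    for _ in range(n_iter):
        w = [u0] * (n_grid + 1)
        for i in range(1, n_grid + 1):
            t = ts[i]
            acc = 0.0
            for j in range(i):       # left endpoints s = ts[j] < t keep (t-s)^(alpha-1) finite
                acc += (t - ts[j]) ** (alpha - 1.0) * f(ts[j], v[j]) * h
            w[i] = u0 + acc / gam
        v = w
    return ts, v

# For alpha = 1, f(t, u) = u, u0 = 1 the unique fixed point is u(t) = e^t.
ts, v = picard_iterate(1.0, lambda t, u: u, alpha=1.0)
print(abs(v[-1] - math.e))           # small discretisation error at t = 1
```

For α = 1 the kernel weight is constant and the iteration reduces to the classical Picard iteration for u' = u, so the computed v approaches e^t up to quadrature error.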

Integro-Differential equations of fractional order on unbounded domains, Xuhuan Wang, Yongfang Qi

Page 221: JOURNAL OF APPLIED FUNCTIONAL ANALYSIS - … OF APPLIED FUNCTIONAL ANALYSIS ... Karayiannis@mail.gr Neural Network Models, ... Athens, Greece tel:: +30(210)

Proof. It is clear that u ∈ C^1[J,E] ∩ BC[J,E] is a solution of Eq. (1.3) if and only if u ∈ BC[J,E] is a solution of the following integral equation:

u(t) = u_0 − g(u) + (1/Γ(α)) ∫_0^t (t − s)^(α−1) f(s, u(s), (Tu)(s)) ds. (1.6)

Define the operator A by

(Au)(t) = u_0 − g(u) + (1/Γ(α)) ∫_0^t (t − s)^(α−1) f(s, u(s), (Tu)(s)) ds. (1.7)

By (H2), we have

‖f(t, u, v)‖ ≤ ‖f(t, θ, θ)‖+ β(t)[a‖u‖+ b‖v‖], ∀t ∈ J, u, v ∈ E. (1.8)

It follows from (H1), (H2) and (1.7), (1.8) that, for u ∈ BC[J,E],

‖(Au)(t)‖ ≤ ‖u_0‖ + ‖g(u)‖ + (1/Γ(α)) ∫_0^t (t − s)^(α−1) [‖f(s, θ, θ)‖ + β(s)(a‖u(s)‖ + b‖(Tu)(s)‖)] ds
≤ ‖u_0‖ + ‖g(u)‖ + (1/Γ(α)) ∫_0^t (t − s)^(α−1) ‖f(s, θ, θ)‖ ds + ((a + bk*)‖u‖_BC/Γ(α)) ∫_0^t (t − s)^(α−1) β(s) ds
≤ ‖u_0‖ + ‖g(u)‖ + (α* + (a + bk*)β*‖u‖_BC)/Γ(α), ∀t ∈ J,

so Au ∈ BC[J,E], i.e. A : BC[J,E] → BC[J,E]. On the other hand, for u, v ∈ BC[J,E], we have, by (1.7), (H1) and (H2),

‖(Au)(t) − (Av)(t)‖ ≤ ‖g(u) − g(v)‖ + (1/Γ(α)) ∫_0^t (t − s)^(α−1) β(s)(a‖u(s) − v(s)‖ + b‖(Tu)(s) − (Tv)(s)‖) ds
≤ ‖g(u) − g(v)‖ + (1/Γ(α)) ∫_0^t (t − s)^(α−1) β(s)(a‖u(s) − v(s)‖ + bk*‖u(s) − v(s)‖) ds
≤ L‖u − v‖_BC + ((a + bk*)‖u − v‖_BC/Γ(α)) ∫_0^t (t − s)^(α−1) β(s) ds
≤ (L + (a + bk*)β*/Γ(α))‖u − v‖_BC = c_0‖u − v‖_BC, ∀t ∈ J, (1.9)

so

‖Au − Av‖_BC ≤ c_0‖u − v‖_BC, ∀u, v ∈ BC[J,E].

Since c_0 = L + (a + bk*)β*/Γ(α) < 1 on account of (H3), the Banach fixed point theorem implies that A has a unique fixed point u in BC[J,E], and the sequence v_n defined by (1.5) converges to u uniformly in t ∈ J. The theorem is proved.


Theorem 3.2. Let conditions (H1)−(H3) be satisfied and let u(t) be the unique solution in C^1[J,E] ∩ BC[J,E] of Eq. (1.3). Then

lim_{t→∞} u(t) = u(∞) (2.0)

exists, and

u(∞) = u_0 − g(u) + (1/Γ(α)) ∫_0^∞ (t − s)^(α−1) f(s, u(s), (Tu)(s)) ds. (2.1)

Proof. By the proof of Theorem 3.1, u(t) satisfies Eq. (1.6). Let t_2 > t_1 > 0. Then, by a simple computation, we get

‖u(t_2) − u(t_1)‖ ≤ ‖(1/Γ(α)) ∫_{t_1}^{t_2} (t − s)^(α−1) f(s, u(s), (Tu)(s)) ds‖
≤ ‖(1/Γ(α)) ∫_{t_1}^{t_2} (t − s)^(α−1) f(s, θ, θ) ds‖ + (1/Γ(α)) ∫_{t_1}^{t_2} (t − s)^(α−1) β(s)(a‖u(s)‖ + b‖(Tu)(s)‖) ds
≤ (1/Γ(α)) ∫_{t_1}^{t_2} (t − s)^(α−1) ‖f(s, θ, θ)‖ ds + ((a + bk*)‖u‖_BC/Γ(α)) ∫_{t_1}^{t_2} (t − s)^(α−1) β(s) ds,

which implies, by virtue of (H2), that the limit (2.0) exists and (2.1) holds.

Theorem 3.3. Let conditions (H1)−(H3) be satisfied. Denote by u(t) and u*(t) the unique solutions in C^1[J,E] ∩ BC[J,E] of Eq. (1.3) and of the initial value problem

D^α u = f(t, u, Tu), ∀t ∈ J, 0 < α ≤ 1; u(0) = u_0*, (2.2)

respectively. Then

‖u* − u‖_BC ≤ (1/(1 − c_0))‖u_0* − u_0‖, (2.3)

where c_0 is defined in (H3).

Proof. By Theorem 3.1, we have

‖v_n − u‖_BC → 0, ‖v_n* − u*‖_BC → 0 as n → ∞, (2.4)

where v_n(t) and v_n*(t) are defined by (1.5) and

v_n*(t) = u_0* + (1/Γ(α)) ∫_0^t (t − s)^(α−1) f(s, v_{n−1}*(s), (Tv_{n−1}*)(s)) ds (n = 1, 2, 3, · · ·) (2.5)


(v_0* ∈ BC[J,E] is arbitrarily given). Similarly to (1.9), we get from (1.5) and (2.5)

‖v_n*(t) − v_n(t)‖ ≤ ‖u_0* − u_0‖ + (1/Γ(α)) ∫_0^t (t − s)^(α−1) β(s)(a‖v_{n−1}*(s) − v_{n−1}(s)‖ + b‖(Tv_{n−1}*)(s) − (Tv_{n−1})(s)‖) ds
≤ ‖u_0* − u_0‖ + ((a + bk*)‖v_{n−1}* − v_{n−1}‖_BC/Γ(α)) ∫_0^t (t − s)^(α−1) β(s) ds
≤ ‖u_0* − u_0‖ + (a + bk*)β*‖v_{n−1}* − v_{n−1}‖_BC/Γ(α), ∀t ∈ J,

so

‖v_n* − v_n‖_BC ≤ ‖u_0* − u_0‖ + c_0‖v_{n−1}* − v_{n−1}‖_BC (n = 1, 2, 3, · · ·),

and hence

‖v_n* − v_n‖_BC ≤ (1 + c_0 + · · · + c_0^(n−1))‖u_0* − u_0‖ + c_0^n‖v_0* − v_0‖_BC
= ((1 − c_0^n)/(1 − c_0))‖u_0* − u_0‖ + c_0^n‖v_0* − v_0‖_BC (n = 1, 2, 3, · · ·). (2.6)

Observing (2.4) and 0 ≤ c0 < 1, and taking limits in (2.6), we obtain (2.3).

Remark. From (2.3) we know that u*(t) → u(t) uniformly in t ∈ J as u_0* → u_0. This means that, under conditions (H1)−(H3), the unique solution of Eq. (1.3) depends continuously on the initial value u_0.
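The stability estimate (2.3) can be checked numerically in a scalar toy case. The sketch below uses purely illustrative assumptions (E = R, α = 1, g ≡ 0, f(t, u, Tu) = 0.5u, and J truncated to [0, 1], under which c_0 = 0.5) and compares the observed deviation of two solutions with the bound (1/(1 − c_0))·|u_0* − u_0|.

```python
# Toy check of the continuous-dependence estimate (2.3); all concrete choices
# (E = R, alpha = 1, g == 0, f(t, u, Tu) = 0.5*u, J truncated to [0, 1]) are
# illustrative assumptions, under which c0 = 0.5 and the bound is 2*|u0* - u0|.
u0, u0_star = 1.0, 1.3
n = 1000
h = 1.0 / n
u, us = u0, u0_star
sup_diff = 0.0
for _ in range(n + 1):               # forward-Euler integration of u' = 0.5*u
    sup_diff = max(sup_diff, abs(us - u))
    u += h * 0.5 * u
    us += h * 0.5 * us
bound = (1.0 / (1.0 - 0.5)) * abs(u0_star - u0)
print(sup_diff, bound)               # the observed deviation stays below the bound
```

Here the two exact solutions are u_0·e^{0.5t} and u_0*·e^{0.5t}, so the supremum deviation is about 1.65·|u_0* − u_0|, comfortably below the predicted factor 2.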

In the same way, we can discuss the nonlocal problem for the fractional order nonlinear integro-differential equation of mixed type:

D^α u = f(t, u, Tu, Su), ∀t ∈ J, u(0) + g(u) = u_0, 0 < α ≤ 1, (2.7)

where J = [0, ∞), u_0 ∈ E, f ∈ C(J × E × E × E, E), and Tu, Su are defined by

(Tu)(t) = ∫_0^t k(t, s)u(s) ds, ∀t ∈ J, (Su)(t) = ∫_0^∞ h(t, s)u(s) ds, ∀t ∈ J, (2.8)

k ∈ C[D, R], h ∈ C[J × J, R], where D = {(t, s) ∈ J × J : t ≥ s}. In this situation, conditions (H1)−(H3) are replaced by the following conditions (H1′)−(H3′):

(H1′) k* = sup_{t∈J} ∫_0^t |k(t, s)| ds < ∞, h* = sup_{t∈J} ∫_0^∞ |h(t, s)| ds < ∞ and

lim_{t′→t} ∫_0^∞ |h(t′, s) − h(t, s)| ds = 0, ∀t ∈ J.

(H2′) ‖f(t, u, v, w) − f(t, ū, v̄, w̄)‖ ≤ β(t)[a‖u − ū‖ + b‖v − v̄‖ + c‖w − w̄‖], ‖g(u) − g(v)‖ ≤ L‖u − v‖_BC, ∀t ∈ J, u, ū, v, v̄, w, w̄ ∈ E, where the constants satisfy a ≥ 0, b ≥ 0, c ≥ 0, L ≥ 0, β ∈ C[J, R+], and

β* = ∫_0^∞ (t − s)^(α−1) β(s) ds < ∞; α* = ∫_0^∞ (t − s)^(α−1) ‖f(s, θ, θ, θ)‖ ds < ∞,


here θ denotes the zero element of E.

(H3′) c_0 = L + (a + bk* + ch*)β*/Γ(α) < 1.

We can prove similarly that, if conditions (H1′)−(H3′) are satisfied, then

(a) Eq. (2.7) has a unique solution u ∈ C^1[J,E] ∩ BC[J,E]; moreover, for any v_0 ∈ BC[J,E], the iterative sequence defined by

v_n(t) = u_0 − g(u) + (1/Γ(α)) ∫_0^t (t − s)^(α−1) f(s, v_{n−1}(s), (Tv_{n−1})(s), (Sv_{n−1})(s)) ds (n = 1, 2, 3, · · ·)

converges to u(t) uniformly in t ∈ J.

(b) the limit (2.0) exists, and

u(∞) = u_0 − g(u) + (1/Γ(α)) ∫_0^∞ (t − s)^(α−1) f(s, u(s), (Tu)(s), (Su)(s)) ds.

(c) inequality (2.3) holds, where c_0 is defined in (H3′) and u*(t) is the unique solution in C^1[J,E] ∩ BC[J,E] of the initial value problem

D^α u = f(t, u, Tu, Su), ∀t ∈ J, u(0) = u_0*, 0 < α ≤ 1.

4 Acknowledgements

This work was supported by the NSF of Yunnan Province, China (2013FD054).

References

1. V. Lakshmikantham, A.S. Vatsala, Basic theory of fractional differential equations, Nonlinear Anal. 69 (2008), 2677-2682.

2. K. Karthikeyan, J.J. Trujillo, Existence and uniqueness results for fractional integrodifferential equations with boundary value conditions, Commun. Nonlinear Sci. Numer. Simulat. 17 (2012), 4037-4043.

3. I. Podlubny, Fractional Differential Equations, Mathematics in Science and Engineering, Academic Press, New York, 1999.

4. A.A. Kilbas, H.M. Srivastava, J.J. Trujillo, Theory and Applications of Fractional Differential Equations, in: North-Holland Mathematics Studies, vol. 204, Elsevier, Amsterdam, 2006.

5. D.J. Guo, V. Lakshmikantham, X.Z. Liu, Nonlinear Integral Equations in Abstract Spaces, Kluwer Academic Publishers, 1996.


An iterative method for implicit general Wiener-Hopf equations and nonexpansive mappings

Manzoor Hussain, Wasim Ul-Haq
Department of Mathematics, Abdul Wali Khan University Mardan, Pakistan
Email: [email protected], [email protected]

Abstract: In this note, we suggest a new iterative method for finding the common element of the set of common fixed points of a finite family of pseudo-contractive mappings and the set of solutions of the extended general quasi variational inequality problem on the basis of Wiener-Hopf equations. The convergence criteria of this new method under some mild restrictions are also studied. New and known methods for solving various classes of Wiener-Hopf equations and variational inequalities also appear as special consequences of our work.

Key Words: Variational inequality problem, Wiener-Hopf equations, nonexpansive mapping, pseudo-contractive mapping.
MSC: 49J40, 91B50

1. Introduction

A variety of problems arising in optimization, economics, finance, networking, transportation, structural analysis and elasticity can be addressed via the framework of variational inequalities. The theory of variational inequalities provides productive and innovative techniques to investigate a wide range of such problems. The origin of the variational inequality problem can be traced back to Stampacchia's variational inequality problem [1], introduced in early 1964. It is a known fact that the variational inequality problem is equivalent to a fixed point problem, which plays a central role in suggesting new and novel iterative schemes for solving the variational inequality problem and its variant forms. The classical variational inequality has been generalized in many directions by several researchers; see [3, 6, 7, 8, 9] and the references therein. Recently, Noor et al. [3, 5] have introduced and investigated a new class of variational inequality problem known as the extended general quasi variational inequality problem. It has been shown by Noor et al. [3] that the extended general quasi variational inequalities are equivalent to the implicit fixed point problem.

On the other hand, several modifications and extensions have been made to the classical projection iterative methods in many directions through a number of techniques. It was Shi [2] who proved the equivalence between the variational inequality problem and the Wiener-Hopf equations by considering the problem of solving a system of nonlinear projections, known as Wiener-Hopf equations.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 3-4, 225-233, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


Noor et al. [3] have extended the projection method and showed that a class of variational inequalities known as extended general variational inequalities are equivalent to implicit general Wiener-Hopf equations.

In 2008, Lu et al. [4] considered an iterative scheme for finding a common element of the set of fixed points of a finite family of nonexpansive mappings and the set of solutions of variational inequalities based on Wiener-Hopf equations. Motivated by the above and the recent work of Noor et al. [3], we suggest an iterative algorithm to find a common element of the set of fixed points of a finite family of pseudo-contractive mappings and the set of solutions of extended general quasi variational inequalities on the basis of Wiener-Hopf equations. We also study the convergence of this new method under some suitable conditions. The results contained in this paper bring a significant improvement to already known results and continue to hold for them.

2. Preliminaries

Let ⟨·, ·⟩ and ‖·‖ denote the inner product and norm, respectively, in a real Hilbert space H. Now, we recall some basic definitions and tools of a real Hilbert space H.

Definition 1.1. A nonlinear mapping T : H → H is called
(a) strongly monotone, if there exists a constant α > 0 such that ⟨Tx − Ty, x − y⟩ ≥ α‖x − y‖², ∀y ∈ H;
(b) relaxed monotone, if there exists a constant γ > 0 such that ⟨Tx − Ty, x − y⟩ ≥ (−γ)‖x − y‖², ∀y ∈ H. Note that for α = 0, Definition 1.1(a) reduces to monotonicity of T;
(c) Lipschitz continuous, if there exists a constant β > 0 such that ‖Tx − Ty‖ ≤ β‖x − y‖, ∀y ∈ H.

Definition 1.2. A mapping S : K → K is called k-strictly pseudo-contractive in the sense of [13] if there exists a constant 0 ≤ k < 1 such that

‖Sx − Sy‖² ≤ ‖x − y‖² + k‖(I − S)x − (I − S)y‖², ∀x, y ∈ K,

where I is the identity map. Observe that k = 0 implies that S is the usual nonexpansive map.

Lemma 1.2 [4]. Let the sequence {a_n} of non-negative real numbers satisfy

a_{n+1} ≤ (1 − λ_n)a_n + b_n, ∀n ≥ n_0,

where n_0 is some non-negative integer and {λ_n} is a sequence in (0, 1) such that Σ λ_n = ∞ and b_n = o(λ_n); then lim_{n→∞} a_n = 0.

Lemma 1.3 [3, 5]. Let K(x) be a closed convex set in H. Then, for a given z ∈ H, x ∈ K(x) satisfies the inequality

⟨x − z, y − x⟩ ≥ 0, ∀y ∈ K(x),

if and only if

x = P_{K(x)}z,

where P_{K(x)} is the projection of H onto the closed convex-valued set K(x) in H.

Lemma 1.4 [3, 5]. The function x ∈ H : h(x) ∈ K(x) is a solution of the extended general quasi variational inequality if and only if x ∈ H : h(x) ∈ K(x) satisfies the relation

h(x) = P_{K(x)}{g(x) − ρTx},

where P_{K(x)} is the projection mapping and ρ > 0 is a positive constant.
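Lemma 1.2 drives the convergence argument in Section 3, and its mechanism is easy to see numerically. The sketch below uses illustrative choices (λ_n = 1/(n+1), b_n = 1/(n+1)², so Σλ_n = ∞ and b_n = o(λ_n)):

```python
# Numerical illustration of Lemma 1.2: a_{n+1} <= (1 - lam_n)*a_n + b_n
# with sum(lam_n) divergent and b_n = o(lam_n) forces a_n -> 0.
a = 1.0
for n in range(1, 200_000):
    lam = 1.0 / (n + 1)      # sum of lam_n diverges (harmonic series)
    b = 1.0 / (n + 1) ** 2   # b_n = o(lam_n)
    a = (1.0 - lam) * a + b
print(a)                     # decays roughly like log(n)/n here
```

With these choices the recursion satisfies (n+1)a_{n+1} = n·a_n + 1/(n+1), so a_n behaves like (1 + H_n)/n and indeed tends to 0.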

Let S : K → K be a k-strict pseudo-contractive mapping with a fixed point. Let us denote by {S_k} the finite family of nonexpansive mappings defined as:

S_k x = kx + (1 − k)Sx, ∀x ∈ K, (2.1)

with the set of fixed points F(S_k) (i.e., F(S_k) = {x ∈ K : x = S_k x}). Setting z(S) = ∩ F(S_k), the following assertions hold.

Lemma 1.6 [11]. Let K be a nonempty closed convex subset of a real Hilbert space H and let S : K → K be a k-strict pseudo-contraction with a fixed point. Then F(S) is closed and convex. Define a mapping S_k : K → K by S_k x = kx + (1 − k)Sx, ∀x ∈ K. Then S_k is nonexpansive such that F(S_k) = F(S).

Lemma 1.7 [12]. Let K be a nonempty closed convex subset of a strictly convex Banach space X. Let {T_n : n ∈ N} be a sequence of nonexpansive mappings on K. Suppose ∩_{n=1}^∞ F(T_n) ≠ ∅. Let {λ_n} be a sequence of positive numbers such that Σ_{n=1}^∞ λ_n = 1. Then the mapping S on K defined by

Sx = Σ_{n=1}^∞ λ_n T_n x (2.2)

for each x ∈ K is well defined and nonexpansive, and F(S) = ∩_{n=1}^∞ F(T_n) holds.
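Lemma 1.6 can be checked on a concrete one-dimensional example. The sketch below is an illustrative assumption, not taken from the paper: S(x) = −2x on R is a k-strict pseudo-contraction with k = 1/3, and the averaged map S_k x = kx + (1 − k)Sx from (2.1) becomes S_k(x) = −x, which is nonexpansive with the same fixed point 0.

```python
def S(x):
    # S(x) = -2x is a k-strict pseudo-contraction on R with k = 1/3:
    # |Sx - Sy|^2 = 4|x - y|^2 = |x - y|^2 + (1/3)*|3(x - y)|^2.
    return -2.0 * x

def S_k(x, k=1.0 / 3.0):
    # Averaged map from (2.1); here S_k(x) = (1/3)x + (2/3)(-2x) = -x.
    return k * x + (1.0 - k) * S(x)

k = 1.0 / 3.0
for x, y in [(-2.0, 1.5), (0.3, -0.7), (5.0, 4.0)]:
    lhs = (S(x) - S(y)) ** 2
    rhs = (x - y) ** 2 + k * ((x - S(x)) - (y - S(y))) ** 2
    assert lhs <= rhs + 1e-12                          # Definition 1.2 holds
    assert abs(S_k(x) - S_k(y)) <= abs(x - y) + 1e-12  # S_k is nonexpansive
print(S_k(0.0))  # 0 is the common fixed point: F(S_k) = F(S) = {0}
```

Note that S itself is expansive (Lipschitz constant 2), so the averaging in (2.1) is what restores nonexpansiveness without changing the fixed point set.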

3. Iterative methods and convergence analysis

Let T, g, h : H → H be certain nonlinear mappings and let K(x) be a nonempty closed convex-valued set in H. Consider the problem of finding an element x ∈ H, h(x) ∈ K(x), such that

⟨Tx, g(y) − h(x)⟩ ≥ 0, ∀y ∈ H, g(y) ∈ K(x). (3.1)

This type of inequality is known as the extended general quasi variational inequality, introduced and studied by Noor et al. [3]. We denote by Ω the set of solutions of the variational inequality (3.1).

The variational inequality (3.1) entails the extended general variational inequality, the general variational inequality, the classical variational inequality and numerous other problems as special cases, as quoted in [3].

We also face the problem of solving the Wiener-Hopf equations, which bear a close relation to variational inequalities, as proved by Shi [2]. A huge amount of interest has been shown by many researchers in this regard; see e.g. [3, 4, 10]. To be more specific, let us denote Q_{K(x)} = I − S_k g h^(−1) P_{K(x)}, where P_{K(x)} is the implicit projection mapping, S_k is the finite family of nonexpansive mappings defined by (2.1), I is the identity map, and assume that h^(−1) exists. Let us consider the problem of finding z ∈ H such that

T h^(−1) S_k P_{K(x)} z + ρ^(−1) Q_{K(x)} z = 0, (3.2)

where T, g, h : H → H are given nonlinear mappings and ρ is a positive constant. Equation (3.2) is known as the implicit general Wiener-Hopf equation containing the finite family of nonexpansive mappings S_k.

Now we suggest the iterative method as follows.

Algorithm 1. For a given iterate z_0 ∈ H, compute z_{n+1} via the scheme

h(x_n) = (1 − α_n)P_{K(x_n)}z_n + α_n S_k P_{K(x_n)}z_n,
z_{n+1} = (1 − λ_n)z_n + λ_n[g(x_n) − ρTx_n],

where {α_n} and {λ_n} are two sequences in [0, 1] and S_k is defined by (2.1).

If α_n ≡ 1, then Algorithm 1 becomes:

Algorithm 2. For a given iterate z_0 ∈ H, compute z_{n+1} via the scheme

h(x_n) = S_k P_{K(x_n)}z_n,
z_{n+1} = (1 − λ_n)z_n + λ_n[g(x_n) − ρTx_n],

where {λ_n} is a sequence in [0, 1] and S_k is defined by (2.1).

If S_k = I, then Algorithm 1 reduces to:

Algorithm 3. For a given iterate z_0 ∈ H, compute z_{n+1} via the scheme

h(x_n) = (1 − α_n)P_{K(x_n)}z_n + α_n P_{K(x_n)}z_n,
z_{n+1} = (1 − λ_n)z_n + λ_n[g(x_n) − ρTx_n],

where {α_n} and {λ_n} are two sequences in [0, 1].

If S_k = I and α_n ≡ 1, then Algorithm 1 collapses to the one suggested by Noor et al. [3], i.e.:

Algorithm 4. For a given iterate z_0 ∈ H, compute z_{n+1} via the scheme

h(x_n) = P_{K(x_n)}z_n,
z_{n+1} = (1 − λ_n)z_n + λ_n[g(x_n) − ρTx_n],

where {λ_n} is a sequence in [0, 1].

In short, suitable and appropriate choices of the mappings may lead one to obtain certain new and known algorithms for solving various classes of Wiener-Hopf equations and variational inequality problems.

Now we study the convergence of our proposed algorithm, which is the main objective of this section. The implicit projection P_{K(·)} is not nonexpansive, but it satisfies a Lipschitz-type continuity condition, as quoted by Noor et al. [3]. They also pointed out that this mapping satisfies the inequality stated below.

Assumption 1 [3]. The implicit projection mapping P_{K(·)} satisfies the inequality

‖P_{K(x)}z − P_{K(y)}z‖ ≤ ν‖x − y‖,

where ν is a positive constant. This assumption plays an important part in studying the existence and convergence of iterative algorithms; see [3, 5].

We also need the following.

Lemma 1.5 [3, 5]. The solution x ∈ H : h(x) ∈ K(x) satisfies the extended general quasi variational inequality if and only if z ∈ H is a solution of the extended general implicit Wiener-Hopf equation, where

h(x) = S_k P_{K(x)} z,
z = g(x) − ρTx,

where ρ > 0 is a positive constant.

Theorem 3.1. Let T, g, h : H → H be relaxed monotone mappings with constants γ_1, γ_2 and γ_3 and Lipschitz continuous with constants β_1, β_2 and β_3, respectively. Let S_k be defined by (2.1) such that z(S) ∩ Ω ≠ ∅. Let {z_n} and {x_n} be the two sequences generated by Algorithm 1. Let {λ_n} and {α_n} be two sequences in [0, 1] satisfying the following conditions:

(i) Σ_{n=0}^∞ λ_n = ∞;

(ii) |ρ − γ_1/β_1²| < √(γ_1² − β_1²κ(2 − κ))/β_1², with γ_1 > β_1√(κ(2 − κ)) and κ < 1, where κ = √(1 + 2γ_2 + β_2²) + √(1 + 2γ_3 + β_3²) + ν.

If Assumption 1 and conditions (i) and (ii) hold, then the sequences {z_n} and {x_n} converge strongly to z ∈ WHE(T, g, h) and x ∈ Ω, respectively.

Proof. Let x ∈ Ω and z ∈ WHE(T, g, h) be the solution. Then

h(x) = (1 − α_n)P_{K(x)}z + α_n S_k P_{K(x)}z,
z = (1 − λ_n)z + λ_n[g(x) − ρTx]. (3.3)

Consider

‖z_{n+1} − z‖ = ‖(1 − λ_n)z_n + λ_n[g(x_n) − ρTx_n] − z‖
= ‖(1 − λ_n)z_n + λ_n[g(x_n) − ρTx_n] − {(1 − λ_n)z + λ_n[g(x) − ρTx]}‖
≤ (1 − λ_n)‖z_n − z‖ + λ_n‖g(x_n) − g(x) − ρ(Tx_n − Tx)‖
≤ (1 − λ_n)‖z_n − z‖ + λ_n{‖x_n − x − (g(x_n) − g(x))‖ + ‖x_n − x − ρ(Tx_n − Tx)‖}. (3.4)

Next, we estimate

‖x_n − x − ρ(Tx_n − Tx)‖² = ‖x_n − x‖² − 2ρ⟨x_n − x, Tx_n − Tx⟩ + ρ²‖Tx_n − Tx‖²
≤ ‖x_n − x‖² + 2ργ_1‖x_n − x‖² + ρ²β_1²‖x_n − x‖²
= (1 + 2ργ_1 + ρ²β_1²)‖x_n − x‖²,

which yields

‖x_n − x − ρ(Tx_n − Tx)‖ ≤ √(1 + 2ργ_1 + ρ²β_1²)‖x_n − x‖. (3.5)

Similarly, we have

‖x_n − x − (g(x_n) − g(x))‖² = ‖x_n − x‖² − 2⟨x_n − x, g(x_n) − g(x)⟩ + ‖g(x_n) − g(x)‖²
≤ ‖x_n − x‖² + 2γ_2‖x_n − x‖² + β_2²‖x_n − x‖²
= (1 + 2γ_2 + β_2²)‖x_n − x‖²,

which yields

‖x_n − x − (g(x_n) − g(x))‖ ≤ √(1 + 2γ_2 + β_2²)‖x_n − x‖. (3.6)

Exploiting (3.5) and (3.6) in (3.4), we obtain

‖z_{n+1} − z‖ ≤ (1 − λ_n)‖z_n − z‖ + λ_n(√(1 + 2ργ_1 + ρ²β_1²) + √(1 + 2γ_2 + β_2²))‖x_n − x‖. (3.7)


Now we estimate ‖x_n − x‖. For this, consider

‖x_n − x‖ = ‖x_n − x − (h(x_n) − h(x)) + P_{K(x_n)}z_n − P_{K(x)}z‖
≤ ‖x_n − x − (h(x_n) − h(x))‖ + ‖P_{K(x_n)}z_n − P_{K(x)}z_n‖ + ‖P_{K(x)}z_n − P_{K(x)}z‖.

Invoking Assumption 1 and the Lipschitz-type continuity of the implicit projection mapping, we have

‖x_n − x‖ ≤ ‖x_n − x − (h(x_n) − h(x))‖ + ν‖x_n − x‖ + ‖z_n − z‖. (3.8)

Since

‖x_n − x − (h(x_n) − h(x))‖² = ‖x_n − x‖² − 2⟨x_n − x, h(x_n) − h(x)⟩ + ‖h(x_n) − h(x)‖²
≤ ‖x_n − x‖² + 2γ_3‖x_n − x‖² + β_3²‖x_n − x‖²
= (1 + 2γ_3 + β_3²)‖x_n − x‖²,

which implies

‖x_n − x − (h(x_n) − h(x))‖ ≤ √(1 + 2γ_3 + β_3²)‖x_n − x‖. (3.9)

Using (3.9) in (3.8) and simplifying, we get

‖x_n − x‖ ≤ (1/(1 − (ν + √(1 + 2γ_3 + β_3²))))‖z_n − z‖. (3.10)

Substituting (3.10) into (3.7), we obtain

‖z_{n+1} − z‖ ≤ (1 − λ_n)‖z_n − z‖ + λ_n{(√(1 + 2ργ_1 + ρ²β_1²) + √(1 + 2γ_2 + β_2²))/(1 − (ν + √(1 + 2γ_3 + β_3²)))}‖z_n − z‖.

Letting ξ = (√(1 + 2ργ_1 + ρ²β_1²) + √(1 + 2γ_2 + β_2²))/(1 − (ν + √(1 + 2γ_3 + β_3²))), we get

‖z_{n+1} − z‖ ≤ (1 − λ_n)‖z_n − z‖ + λ_n ξ‖z_n − z‖ = {1 − λ_n(1 − ξ)}‖z_n − z‖.

Then clearly

‖z_{n+1} − z‖ ≤ Π_{k=0}^{n} {1 − λ_k(1 − ξ)}‖z_0 − z‖.


Since Σ_{k=0}^∞ λ_k = ∞ and ξ < 1, we have

lim_{n→∞} Π_{k=0}^{n} {1 − λ_k(1 − ξ)} = 0.

Consequently, by virtue of Lemma 1.2, we obtain

lim_{n→∞} ‖z_n − z‖ = 0,

which implies that the sequence {z_n} converges strongly to z satisfying (3.2). Similarly, from (3.10) we get

lim_{n→∞} ‖x_n − x‖ = 0,

which implies that {x_n} converges strongly to x satisfying (3.1). This completes the proof.
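The final step rests on the fact that Π_k (1 − λ_k(1 − ξ)) → 0 whenever Σλ_k = ∞ and ξ < 1. A quick numerical sketch, with λ_k = 1/(k+1) and ξ = 0.9 as illustrative choices:

```python
# The error bound prod_{k=0}^{n} (1 - lam_k*(1 - xi)) tends to 0 when
# sum(lam_k) diverges and xi < 1; here lam_k = 1/(k+1), xi = 0.9.
xi = 0.9
prod = 1.0
for k in range(100_000):
    lam = 1.0 / (k + 1)      # divergent step-size series
    prod *= 1.0 - lam * (1.0 - xi)
print(prod)                  # roughly exp(-(1 - xi) * H_n), shrinking to 0 as n grows
```

Taking logarithms shows log(prod) is about −(1 − ξ)·Σλ_k, so divergence of Σλ_k is exactly what forces the product, and hence ‖z_{n+1} − z‖, to 0.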

REFERENCES

1. G. Stampacchia, Formes bilineaires coercitives sur les ensembles convexes, C. R. Acad. Sci. Paris, 258 (1964), 4413-4416.

2. P. Shi, Equivalence of variational inequalities with Wiener-Hopf equations, Proc. Amer. Math. Soc., 111 (1991), 339-346.

3. M.A. Noor, K.I. Noor, A.G. Gul, Some iterative schemes for solving extended general quasi variational inequalities, Appl. Math. Inf. Sci., 7 (2013), No. 3, 917-925.

4. A.X. Lu, S.J. Yang, W.L. Li, Convergence theorems for variational inequalities based on Wiener-Hopf equations method in Hilbert spaces, Int. J. of Appl. Math. and Mech., 4 (2008), No. 3, 46-53.

5. M.A. Noor, K.I. Noor, Sensitivity analysis of some quasi variational inequalities, J. Adv. Math. Stud., 7 (2013), in press.

6. C. Baiocchi, A. Capelo, Variational and Quasi Variational Inequalities, J. Wiley and Sons, New York, 1984.

7. M.A. Noor, General variational inequalities, Appl. Math. Letters, 1 (1988), 119-121.

8. M.A. Noor, Quasi variational inequalities, Appl. Math. Letters, 1 (1988), 367-370.

9. M.A. Noor, Extended general variational inequalities, Appl. Math. Letters, 22 (2009), 182-185.

10. M.A. Noor, Wiener-Hopf equations and variational inequalities, J. Optim. Theory Appl., 79 (1993), 197-206.

11. H. Zhou, Convergence theorems of fixed points for k-strict pseudo-contractions in Hilbert spaces, Nonlinear Analysis: Theory, Methods & Applications, Series A, 69 (2008), 456-462.


12. R.E. Bruck Jr., Properties of fixed point sets of nonexpansive mappings in Banach spaces, Transactions of the American Mathematical Society, 179 (1973), 251-262.

13. F.E. Browder, W.V. Petryshyn, Construction of fixed points of nonlinear mappings in Hilbert spaces, J. Math. Anal. Appl., 20 (1967), 197-228.


Parametric Duality Models for Multiobjective Fractional Programming Based on New Generation Hybrid Invexities

Ram U. Verma
Texas State University
Department of Mathematics
San Marcos, TX 78666, USA
[email protected]

Abstract

In this paper we intend to introduce the new-generation second order hybrid B − (b, ρ, ω, ξ, θ, p, r)-invexities, which generalize most of the existing invexity concepts in the literature, and then we develop a wide range of higher order parametric duality models and results for multiobjective fractional programming problems. The obtained results generalize and unify, based on this new class of second order hybrid invexities, a wider spectrum of investigations in the literature on applications to other results on multiobjective fractional programming.

Keywords: New class of generalized invexities, multiobjective fractional programming, efficient solutions, parametric duality models

2000 MSC: 90C30, 90C32, 90C34

1 Introduction

In a series of publications, Zalmai [42] introduced Hanson-Antczak type invexities and applied them to a class of second order parametric duality models for semiinfinite multiobjective fractional programming problems. Verma [26 - 28] investigated, based on higher order univexities, some results on parametric duality models in the context of fractional programming, while Mishra, Jaiswal and Verma [16] formulated some results on duality models for multiobjective fractional programming problems based on generalized invex functions. Verma [25] also introduced a general framework for a class of (ρ, η, θ)-invex functions to examine some parametric sufficient efficiency conditions for multiobjective fractional programming problems for weakly ε-efficient solutions.

Inspired by the accelerated recent advances on higher order invexities and other developments in the context of multiobjective fractional programming problems, we first introduce the new generation second order hybrid (b, ρ, ω, ξ, θ, p, r)-invexities, which largely generalize most of the existing notions, including Antczak type B − (p, r)-invexities; second, we develop some duality models for two dual problems with relatively suitable constraints for proving weak and strong duality theorems under the new invexity conditions. The results established in this paper not only generalize the results on exponential type hybrid invexities regarding duality models for multiobjective fractional programming problems, but also generalize the higher order invexity results in general settings.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 3-4, 234-253, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC

We consider, under the general framework of the new-generation second order hybrid (b, ρ, ω, ξ, θ, p, r)-invexities of functions, the following multiobjective fractional programming problem:

(P) Minimize (f_1(x)/g_1(x), f_2(x)/g_2(x), · · ·, f_p(x)/g_p(x))

subject to x ∈ Q = {x ∈ X : H_j(x) ≤ 0, j ∈ {1, 2, · · ·, m}}, where X is an open convex subset of R^n (n-dimensional Euclidean space), f_i and g_i for i ∈ {1, · · ·, p} and H_j for j ∈ {1, · · ·, m} are real-valued functions defined on X such that f_i(x) ≥ 0 and g_i(x) > 0 for i ∈ {1, · · ·, p} and for all x ∈ Q. Here Q denotes the feasible set of (P).

Next, we observe that problem (P) is equivalent to the nonfractional programming problem:

(Pλ) Minimize (f_1(x) − λ_1 g_1(x), · · ·, f_p(x) − λ_p g_p(x))

subject to x ∈ Q, with

λ = (λ_1, λ_2, · · ·, λ_p) = (f_1(x*)/g_1(x*), f_2(x*)/g_2(x*), · · ·, f_p(x*)/g_p(x*)),

where x* is an efficient solution to (P).
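The parametric link between (P) and (Pλ) is the basis of Dinkelbach-type schemes. The minimal single-ratio sketch below (p = 1; the problem instance f(x) = x² + 1, g(x) = x + 1 on [0, 2] and the grid search are illustrative assumptions, not part of the paper) iterates between the two formulations:

```python
# Single-ratio illustration of the (P) <-> (P_lambda) correspondence:
# repeatedly minimize the parametric objective f - lam*g, then update
# lam = f(x)/g(x); at the fixed point the parametric minimum is 0 and
# x minimizes the ratio f/g.
def f(x): return x * x + 1.0
def g(x): return x + 1.0

def argmin_on_grid(h, lo=0.0, hi=2.0, n=20001):
    # Crude grid search standing in for the inner minimization.
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(xs, key=h)

lam = 0.0
for _ in range(30):                  # Dinkelbach-type update
    x = argmin_on_grid(lambda t: f(t) - lam * g(t))
    lam = f(x) / g(x)
print(x, lam)  # x ~ sqrt(2) - 1, lam ~ 2*sqrt(2) - 2
```

In the multiobjective case (p > 1) the same idea applies componentwise: λ is the vector of ratios at an efficient point x*, and x* is then efficient for (Pλ).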

Based on efficiency results, we develop some duality models and prove a number of appropriate duality theorems under several hybrid invexity constraints. We also recognize the significant role of second order hybrid invexities in semiinfinite programming problems, especially in terms of applications to game theory, statistical analysis, engineering design (including design of control systems, design of earthquake-resistant structures, digital filters, and electronic circuits), random graphs, boundary value problems, wavelet analysis, environmental protection planning, decision and management sciences, optimal control problems, continuum mechanics, robotics, data envelopment analysis, and beyond. For more details, we refer the reader to [1 - 43].

2 New Generation Second Order Invexities

The general invexity has been investigated in several directions. We generalize the notion of the first order Antczak type B − (p, r)-invexities to the case of the second order (b, ρ, ω, ξ, θ, p, r)-invexities. These notions of second order invexity encompass most of the existing notions in the literature. Let f be a twice continuously differentiable real-valued function defined on X.

Definition 2.1. The function f is said to be second order (b, ρ, ω, ξ, θ, p, r)-invex at x* ∈ X if there exist functions ω, ξ : X × X → R^n, b : X × X → R_+, and real numbers p and r such that for all x ∈ X and z ∈ R^n,

b(x, x*)((1/r)(e^{r[f(x) − f(x*)]} − 1)) ≥ (1/p)⟨∇f(x*) + ∇²f(x*)z, e^{pω(x,x*)} − 1⟩ − (1/2p)⟨∇²f(x*)z, e^{pξ(x,x*)} − 1⟩ + ρ(x, x*)‖θ(x, x*)‖² for p ≠ 0 and r ≠ 0,

b(x, x*)((1/r)(e^{r[f(x) − f(x*)]} − 1)) ≥ ⟨∇f(x*) + (1/2)∇²f(x*)z, ω(x, x*)⟩ − (1/2)⟨∇²f(x*)z, ξ(x, x*)⟩ + ρ(x, x*)‖θ(x, x*)‖² for p = 0 and r ≠ 0,

b(x, x*)([f(x) − f(x*)]) ≥ (1/p)⟨∇f(x*) + ∇²f(x*)z, e^{pω(x,x*)} − 1⟩ − (1/2)⟨∇²f(x*)z, e^{pξ(x,x*)} − 1⟩ + ρ(x, x*)‖θ(x, x*)‖² for p ≠ 0 and r = 0,

b(x, x*)([f(x) − f(x*)]) ≥ ⟨∇f(x*) + ∇²f(x*)z, ω(x, x*)⟩ − (1/2)⟨∇²f(x*)z, ξ(x, x*)⟩ + ρ(x, x*)‖θ(x, x*)‖² for p = 0 and r = 0.

Definition 2.2. The function f is said to be second order (b, ρ, ω, ξ, θ, p, r)-pseudoinvex at x* ∈ X if there exist functions ω, ξ : X × X → R^n, a function b : X × X → R_+, and real numbers p and r such that for all x ∈ X and z ∈ R^n,

(1/p)⟨∇f(x*) + ∇²f(x*)z, e^{pω(x,x*)} − 1⟩ − (1/p)⟨(1/2)∇²f(x*)z, e^{pξ(x,x*)} − 1⟩ + ρ(x, x*)‖θ(x, x*)‖² ≥ 0
⇒ b(x, x*)((1/r)(e^{r[f(x) − f(x*)]} − 1)) ≥ 0 for p ≠ 0 and r ≠ 0,

⟨∇f(x*) + ∇²f(x*)z, ω(x, x*)⟩ − ⟨(1/2)∇²f(x*)z, ξ(x, x*)⟩ + ρ(x, x*)‖θ(x, x*)‖² ≥ 0
⇒ b(x, x*)((1/r)(e^{r[f(x) − f(x*)]} − 1)) ≥ 0 for p = 0 and r ≠ 0,

(1/p)⟨∇f(x*) + ∇²f(x*)z, e^{pω(x,x*)} − 1⟩ − (1/p)⟨(1/2)∇²f(x*)z, e^{pξ(x,x*)} − 1⟩ + ρ(x, x*)‖θ(x, x*)‖² ≥ 0
⇒ b(x, x*)([f(x) − f(x*)]) ≥ 0 for p ≠ 0 and r = 0,

⟨∇f(x*) + ∇²f(x*)z, ω(x, x*)⟩ − ⟨(1/2)∇²f(x*)z, ξ(x, x*)⟩ + ρ(x, x*)‖θ(x, x*)‖² ≥ 0
⇒ b(x, x*)([f(x) − f(x*)]) ≥ 0 for p = 0 and r = 0.

Definition 2.3 The function f is said to be second order strictly (b, ρ, ω, ξ, θ, p, r)-pseudoinvexat x∗ ∈ X if there exist a function ω, ξ : X × X → Rn, a function b : X × X → R+, and realnumbers r and p such that for all x ∈ X and z ∈ Rn,

(1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≥ 0
⇒ b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) > 0  for p ≠ 0 and r ≠ 0,

⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≥ 0
⇒ b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) > 0  for p = 0 and r ≠ 0,

(1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≥ 0
⇒ b(x, x∗)([f(x) − f(x∗)]) > 0  for p ≠ 0 and r = 0,

⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≥ 0
⇒ b(x, x∗)([f(x) − f(x∗)]) > 0  for p = 0 and r = 0.


New Generation Hybrid Invexities 5

Definition 2.4 The function f is said to be second order prestrictly B-(b, ρ, ω, ξ, θ, p, r)-pseudoinvex at x∗ ∈ X if there exist functions ω, ξ : X × X → R^n, a function b : X × X → R_+, and real numbers r and p such that for all x ∈ X and z ∈ R^n,

(1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² > 0
⇒ b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) ≥ 0  for p ≠ 0 and r ≠ 0,

⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² > 0
⇒ b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) ≥ 0  for p = 0 and r ≠ 0,

(1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² > 0
⇒ b(x, x∗)([f(x) − f(x∗)]) ≥ 0  for p ≠ 0 and r = 0,

⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² > 0
⇒ b(x, x∗)([f(x) − f(x∗)]) ≥ 0  for p = 0 and r = 0.

Definition 2.5 The function f is said to be second order (b, ρ, ω, ξ, θ, p, r)-quasiinvex at x∗ ∈ X if there exist functions ω, ξ : X × X → R^n, a function b : X × X → R_+, and real numbers r and p such that for all x ∈ X and z ∈ R^n,

b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) ≤ 0  for p ≠ 0 and r ≠ 0
⇒ (1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≤ 0,


b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) ≤ 0  for p = 0 and r ≠ 0
⇒ ⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≤ 0,

b(x, x∗)([f(x) − f(x∗)]) ≤ 0  for p ≠ 0 and r = 0
⇒ (1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≤ 0,

b(x, x∗)([f(x) − f(x∗)]) ≤ 0  for p = 0 and r = 0
⇒ ⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≤ 0.

Definition 2.6 The function f is said to be second order strictly (b, ρ, ω, ξ, θ, p, r)-quasiinvex at x∗ ∈ X if there exist functions ω, ξ : X × X → R^n, a function b : X × X → R_+, and real numbers r and p such that for all x ∈ X and z ∈ R^n,

b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) ≤ 0  for p ≠ 0 and r ≠ 0
⇒ (1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² < 0,

b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) ≤ 0  for p = 0 and r ≠ 0
⇒ ⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² < 0,


b(x, x∗)([f(x) − f(x∗)]) ≤ 0  for p ≠ 0 and r = 0
⇒ (1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² < 0,

b(x, x∗)([f(x) − f(x∗)]) ≤ 0  for p = 0 and r = 0
⇒ ⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² < 0.

Definition 2.7 The function f is said to be second order prestrictly (b, ρ, ω, ξ, θ, p, r)-quasiinvex at x∗ ∈ X if there exist functions ω, ξ : X × X → R^n, a function b : X × X → R_+, and real numbers r and p such that for all x ∈ X and z ∈ R^n,

b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) < 0  for p ≠ 0 and r ≠ 0
⇒ (1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≤ 0,

b(x, x∗)((1/r)(e^{r[f(x)−f(x∗)]} − 1)) < 0  for p = 0 and r ≠ 0
⇒ ⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≤ 0,

b(x, x∗)([f(x) − f(x∗)]) < 0  for p ≠ 0 and r = 0
⇒ (1/p)⟨∇f(x∗) + ∇²f(x∗)z, e^{pω(x,x∗)} − 1⟩ − (1/p)⟨(1/2)∇²f(x∗)z, e^{pξ(x,x∗)} − 1⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≤ 0,


b(x, x∗)([f(x) − f(x∗)]) < 0  for p = 0 and r = 0
⇒ ⟨∇f(x∗) + ∇²f(x∗)z, ω(x, x∗)⟩ − ⟨(1/2)∇²f(x∗)z, ξ(x, x∗)⟩ + ρ(x, x∗)‖θ(x, x∗)‖² ≤ 0.

Now we consider the efficiency solvability conditions for the problems (P) and (Pλ). We need to recall some auxiliary results crucial to the problem at hand.

Definition 2.8 A point x∗ ∈ Q is an efficient solution to (P) if there exists no x ∈ Q such that

f_i(x)/g_i(x) ≤ f_i(x∗)/g_i(x∗)  ∀ i = 1, …, p.

Next, in this context, we have the following auxiliary problem:

(Pλ)  minimize (f_1(x) − λ_1g_1(x), …, f_p(x) − λ_pg_p(x))  subject to x ∈ Q,

where λ_i, for i ∈ {1, …, p}, are parameters with λ_i = f_i(x∗)/g_i(x∗).

Next, we introduce the solvability conditions for the problem (Pλ).

x∗ is an efficient solution to (P) if there exists no x ∈ Q such that

(f_1(x)/g_1(x), f_2(x)/g_2(x), …, f_p(x)/g_p(x)) ≤ (f_1(x∗)/g_1(x∗), f_2(x∗)/g_2(x∗), …, f_p(x∗)/g_p(x∗)).

Lemma 2.1 [28] Let x∗ ∈ Q. Suppose that f_i(x∗) ≥ g_i(x∗) for i = 1, …, p. Then the following statements are equivalent:

(i) x∗ is an efficient solution to (P).

(ii) x∗ is an efficient solution to (Pλ), where

λ = (f_1(x∗)/g_1(x∗), …, f_p(x∗)/g_p(x∗)).

Lemma 2.2 [28] Let x∗ ∈ Q. Suppose that f_i(x∗) ≥ g_i(x∗) for i = 1, …, p. Then the following statements are equivalent:

(i) x∗ is an efficient solution to (P).

(ii) There exists c = (c_1, …, c_p) ∈ R^p_+ \ {0} such that

Σ_{i=1}^p c_i[f_i(x) − (f_i(x∗)/g_i(x∗))g_i(x)] ≥ 0 = Σ_{i=1}^p c_i[f_i(x∗) − (f_i(x∗)/g_i(x∗))g_i(x∗)]  for any x ∈ Q.
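The parametric choice λ_i = f_i(x∗)/g_i(x∗) used in (Pλ) and in Lemma 2.2(ii) makes every term f_i(x∗) − λ_ig_i(x∗) vanish, which is why the right-hand sum in (ii) equals 0 at x∗. A toy one-dimensional check (the functions f_i, g_i and the point x∗ below are made up purely for illustration):

```python
# Hypothetical data: p = 2 fractional objectives f_i/g_i, evaluated at a chosen x*.
f = [lambda x: x**2 + 1.0, lambda x: x**2 + 2.0]
g = [lambda x: x + 1.0,    lambda x: 2.0 * x + 1.0]
x_star = 1.0

lam = [fi(x_star) / gi(x_star) for fi, gi in zip(f, g)]       # lambda_i = f_i(x*)/g_i(x*)
residual_at_star = [fi(x_star) - li * gi(x_star)
                    for fi, li, gi in zip(f, lam, g)]         # f_i(x*) - lambda_i g_i(x*)
print(residual_at_star)   # every entry is 0, so the sum in Lemma 2.2(ii) vanishes at x*
```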

Next, we recall the following result (Verma [28]), which is crucial to developing the results of the next section based on second order (φ, ρ, η, θ, p, r)-invexities.

Theorem 2.1 Let x∗ ∈ F and λ∗ = max_{1≤i≤p} f_i(x∗)/g_i(x∗); for each i ∈ p, let f_i and g_i be twice continuously differentiable at x∗; for each j ∈ q, let the function z ↦ G_j(z, t) be twice continuously differentiable at x∗ for all t ∈ T_j; and for each k ∈ r, let the function z ↦ H_k(z, s) be twice continuously differentiable at x∗ for all s ∈ S_k. If x∗ is an optimal solution of (P), if the second order generalized Abadie constraint qualification holds at x∗, and if for any critical direction y the set

cone({(∇G_j(x∗, t), ⟨y, ∇²G_j(x∗, t)y⟩) : t ∈ T_j(x∗), j ∈ q}) + span({(∇H_k(x∗, s), ⟨y, ∇²H_k(x∗, s)y⟩) : s ∈ S_k, k ∈ r}),

where T_j(x∗) ≡ {t ∈ T_j : G_j(x∗, t) = 0},

is closed, then there exist u∗ ∈ U ≡ {u ∈ R^p : u ≥ 0, Σ_{i=1}^p u_i = 1} and integers ν∗_0 and ν∗, with 0 ≤ ν∗_0 ≤ ν∗ ≤ n + 1, such that there exist ν∗_0 indices j_m, with 1 ≤ j_m ≤ q, together with ν∗_0 points t^m ∈ T_{j_m}(x∗), m ∈ ν∗_0, ν∗ − ν∗_0 indices k_m, with 1 ≤ k_m ≤ r, together with ν∗ − ν∗_0 points s^m ∈ S_{k_m} for m ∈ ν∗\ν∗_0, and ν∗ real numbers v∗_m, with v∗_m > 0 for m ∈ ν∗_0, with the property that

Σ_{i=1}^p u∗_i[∇f_i(x∗) − λ∗∇g_i(x∗)] + Σ_{m=1}^{ν∗_0} v∗_m∇G_{j_m}(x∗, t^m) + Σ_{m=ν∗_0+1}^{ν∗} v∗_m∇H_{k_m}(x∗, s^m) = 0, (2.1)

⟨y, [Σ_{i=1}^p u∗_i[∇²f_i(x∗) − λ∗∇²g_i(x∗)] + Σ_{m=1}^{ν∗_0} v∗_m∇²G_{j_m}(x∗, t^m) + Σ_{m=ν∗_0+1}^{ν∗} v∗_m∇²H_{k_m}(x∗, s^m)]y⟩ ≥ 0, (2.2)

where T_{j_m}(x∗) = {t ∈ T_{j_m} : G_{j_m}(x∗, t) = 0}, U = {u ∈ R^p : u ≥ 0, Σ_{i=1}^p u_i = 1}, and ν∗\ν∗_0 is the complement of the set ν∗_0 relative to the set ν∗.


We next recall a set of second-order necessary optimality conditions for (P). This result will be needed for proving the strong and strict converse duality theorems.

Theorem 2.2 [30] Let x∗ be an optimal solution of (P), let λ∗ = φ(x∗) ≡ max_{1≤i≤p} f_i(x∗)/g_i(x∗), and assume that the functions f_i, g_i, i ∈ p, G_j, j ∈ q, and H_k, k ∈ r, are twice continuously differentiable at x∗, and that the second-order Guignard constraint qualification holds at x∗. Then for each z∗ ∈ C(x∗), there exist

u∗ ∈ U ≡ {u ∈ R^p : u ≥ 0, Σ_{i=1}^p u_i = 1},

v∗ ∈ R^q_+ ≡ {v ∈ R^q : v ≥ 0}, and w∗ ∈ R^r such that

Σ_{i=1}^p u∗_i[∇f_i(x∗) − λ∗∇g_i(x∗)] + Σ_{j=1}^q v∗_j∇G_j(x∗) + Σ_{k=1}^r w∗_k∇H_k(x∗) = 0,

⟨z∗, [Σ_{i=1}^p u∗_i[∇²f_i(x∗) − λ∗∇²g_i(x∗)] + Σ_{j=1}^q v∗_j∇²G_j(x∗) + Σ_{k=1}^r w∗_k∇²H_k(x∗)]z∗⟩ ≥ 0,

u∗_i[f_i(x∗) − λ∗g_i(x∗)] = 0, i ∈ p,

v∗_jG_j(x∗) = 0, j ∈ q,

where C(x∗) is the set of all critical directions of (P) at x∗, that is,

C(x∗) = {z ∈ R^n : ⟨∇f_i(x∗) − λ∗∇g_i(x∗), z⟩ = 0, i ∈ A(x∗), ⟨∇H_j(x∗), z⟩ ≤ 0, j ∈ B(x∗)},

A(x∗) = {j ∈ p : f_j(x∗)/g_j(x∗) = max_{1≤i≤p} f_i(x∗)/g_i(x∗)}, and B(x∗) = {j ∈ m : H_j(x∗) = 0}.

We shall assume that the functions f_i, g_i, i ∈ p, and H_j, j ∈ m, are twice continuously differentiable on the open set X. Moreover, we shall assume, without loss of generality, that g_i(x) > 0, i ∈ p, and ϕ(x) ≥ 0 for all x ∈ X.

3 Duality Model I

In this section, we consider two duality models with relatively standard constraint structures and prove weak and strong duality theorems. Consider the following two problems:

(DI) Maximize λ

subject to

Σ_{i=1}^p u_i[∇f_i(y) − λ∇g_i(y)] + Σ_{j=1}^m v_j∇H_j(y) = 0, (3.1)

(1/2)⟨z, [Σ_{i=1}^p u_i[∇²f_i(y) − λ∇²g_i(y)] + Σ_{j=1}^m v_j∇²H_j(y)]z⟩ ≥ 0, (3.2)

Σ_{i=1}^p u_i[f_i(y) − λg_i(y)] ≥ 0, (3.3)

Σ_{j=1}^m v_jH_j(y) ≥ 0, (3.4)

y ∈ X, z ∈ R^n, u ∈ U, v ∈ R^m_+, λ ∈ R_+.

(D̃I) Maximize λ

subject to (3.2)–(3.4) and

⟨Σ_{i=1}^p u_i[∇f_i(y) − λ∇g_i(y)] + Σ_{j=1}^m v_j∇H_j(y), ω(x, y)⟩ ≥ 0, (3.5)

⟨[Σ_{i=1}^p u_i[∇²f_i(y) − λ∇²g_i(y)] + Σ_{j=1}^m v_j∇²H_j(y)]z, ω(x, y)⟩ ≥ 0  for all x ∈ F, (3.6)

−⟨(1/2)[Σ_{i=1}^p u_i[∇²f_i(y) − λ∇²g_i(y)] + Σ_{j=1}^m v_j∇²H_j(y)]z, ξ(x, y)⟩ ≥ 0  for all x ∈ F, (3.7)

where ω, ξ : X × X → R^n are given functions.

Comparing (DI) and (D̃I), we see that (D̃I) is relatively more general than (DI), in the sense that any feasible solution of (DI) is also feasible for (D̃I), but the converse is not necessarily true. Furthermore, we observe that (3.1) is a system of n equations, whereas (3.5) is a single inequality. From a computational point of view, (DI) is preferable to (D̃I) because of the dependence of (3.5)–(3.7) on the feasible set of (P). We will establish the duality theorems only for the pair (P)–(DI), since the duality results for the pair (P)–(D̃I) are similar in nature.

We shall use the following list of symbols in the statements and proofs of our duality theorems:

C(x, v) = Σ_{j=1}^m v_jH_j(x),

E_i(x, λ) = f_i(x) − λg_i(x), i ∈ p,

E(x, u, λ) = Σ_{i=1}^p u_i[f_i(x) − λg_i(x)],

I_+(u) = {i ∈ p : u_i > 0}, J_+(v) = {j ∈ m : v_j > 0}.

The next two theorems show that (DI) is a dual problem for (P).


Theorem 3.1 (Weak Duality) Let x and S ≡ (y, z, u, v, λ) be arbitrary feasible solutions of (P) and (DI), respectively, and assume that either one of the following two sets of hypotheses is satisfied:

(a) (i) for each i ∈ I_+ ≡ I_+(u), E_i(·, λ) is prestrictly (b, ω, ξ, ρ_i, θ, p, r)-quasiinvex at y;

(ii) for each j ∈ J_+ ≡ J_+(v), H_j is strictly (b, ω, ξ, ρ_j, θ, p, r)-quasiinvex at y;

(iii) Σ_{i∈I_+} u_iρ_i(x, y) + Σ_{j∈J_+} v_jρ_j(x, y) ≥ 0;

(b) the Lagrangian-type function

ξ ↦ L(ξ, u, v, λ) = Σ_{i=1}^p u_i[f_i(ξ) − λg_i(ξ)] + Σ_{j=1}^m v_jH_j(ξ)

is (b, ω, ξ, ρ, θ, p, r)-pseudoinvex at y, with ρ(x, y) ≥ 0.

Then ϕ(x) ≥ λ, where ϕ(x) = (f_1(x)/g_1(x), …, f_p(x)/g_p(x)).

Proof: (a): Based on the assumptions, we have

H_j(x) ≤ 0 ≤ H_j(y), j ∈ m.

This implies

(1/r)(e^{r Σ_{j=1}^m [H_j(x)−H_j(y)]} − 1) ≤ 0, (3.8)

which further implies

b(x, y)((1/r)(e^{r Σ_{j=1}^m [H_j(x)−H_j(y)]} − 1)) ≤ 0. (3.9)

Then by (ii), we have (for j ∈ J_+)

(1/p)⟨∇H_j(y) + ∇²H_j(y)z, e^{pω(x,y)} − 1⟩ − (1/2p)⟨∇²H_j(y)z, e^{pξ(x,y)} − 1⟩ + ρ_j(x, y)‖θ(x, y)‖² < 0. (3.10)

Using (3.1), (3.2), (3.10) and (iii), we have (for i ∈ I_+)

(1/p)⟨∇E_i(y, λ) + ∇²E_i(y, λ)z, e^{pω(x,y)} − 1⟩ − (1/p)⟨(1/2)∇²E_i(y, λ)z, e^{pξ(x,y)} − 1⟩ > −ρ_i(x, y)‖θ(x, y)‖², (3.11)

which, applying (i), implies

(1/r)(e^{r[E_i(x,λ)−E_i(y,λ)]} − 1) ≥ 0, i ∈ I_+. (3.12)


Since the above inequality holds for both cases of r, we have

Σ_{i=1}^p u_i[f_i(x) − λg_i(x)] ≥ Σ_{i=1}^p u_i[f_i(y) − λg_i(y)] ≥ 0.

It follows that ϕ(x) ≰ λ.
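Both parts of the proof pass freely between transformed and untransformed differences ("for both cases of r") because, for every r ≠ 0, the map d ↦ (1/r)(e^{rd} − 1) is increasing and vanishes at 0, hence preserves the sign of d; at r = 0 it reduces to d itself. A quick numerical check of this sign-preservation claim:

```python
import math

def r_transform(r, d):
    """(1/r)(e^{r*d} - 1) for r != 0; reduces to d when r = 0."""
    return d if r == 0 else (math.exp(r * d) - 1.0) / r

for r in (-2.0, -0.5, 0.0, 0.5, 2.0):
    for d in (-1.0, -0.1, 0.0, 0.1, 1.0):
        out = r_transform(r, d)
        # the transform never changes the sign of d, for positive or negative r
        assert (out > 0) == (d > 0) and (out < 0) == (d < 0)
print("sign preserved for all tested (r, d) pairs")
```

This is exactly why inequalities such as (3.8) and (3.12) transfer directly to the plain differences of H and E values.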

(b): Using the dual feasibility of S, the nonnegativity of ρ(x, y), (3.1), and (3.2), we obtain the following inequality:

(1/p)⟨∇L(y, u, v, λ) + ∇²L(y, u, v, λ)z, e^{pω(x,y)} − 1⟩ − (1/p)⟨(1/2)∇²L(y, u, v, λ)z, e^{pξ(x,y)} − 1⟩ ≥ −ρ(x, y)‖θ(x, y)‖²,

which, in view of our (b, ω, ξ, ρ, θ, p, r)-pseudoinvexity assumption, implies that

(1/r) b(x, y)(e^{r[L(x,u,v,λ)−L(y,u,v,λ)]} − 1) ≥ 0,

which implies

L(x, u, v, λ) − L(y, u, v, λ) ≥ 0.

Since x ∈ F, v ≥ 0, and (3.3) holds, we get

Σ_{i=1}^p u_i[f_i(x) − λg_i(x)] ≥ 0.

As seen in the proof of part (a), this inequality leads to the desired conclusion. □

Now we consider the second theorem, which shows that (DI) is a dual of (P). We adopt notation similar to that of Theorem 3.1 and define the real-valued functions E_i(·, u, λ) and B_j(·, v) by

E_i(x, u, λ) = u_i[f_i(x) − λg_i(x)], i ∈ {1, …, p},

and

B_j(x, v) = v_jH_j(x), j = 1, …, m.

Theorem 3.2 (Weak Duality) Let x and D = (y, z, u, v, λ) be arbitrary feasible solutions of (P) and (DI), respectively. Let f_i, g_i for i ∈ {1, …, p} and H_j for j ∈ {1, …, m} be twice continuously differentiable at y ∈ X, and let there exist u∗ ∈ U = {u ∈ R^p : u > 0, Σ_{i=1}^p u_i = 1} and v∗ ∈ R^m_+.

Suppose, in addition, that any one of the following assumptions holds:


(i) E_i(· ; y, u∗, λ), ∀ i ∈ {1, …, p}, are second order (b, ρ, ω, ξ, θ, p, r)-pseudoinvex at y ∈ X, with functions ω, ξ : X × X → R^n, b : X × X → R_+, and real numbers r and p, for all x ∈ X and z ∈ R^n; B_j(·, v∗), ∀ j ∈ {1, …, m}, are second order (b, ρ, ω, ξ, θ, p, r)-quasiinvex at y ∈ X for the same data; and ρ(x, x∗) ≥ 0.

(ii) E_i(· ; y, u∗, λ), ∀ i ∈ {1, …, p}, are second order (b, ρ1, ω, ξ, θ, p, r)-pseudoinvex at y ∈ X, with functions ω, ξ : X × X → R^n, b : X × X → R_+, and real numbers r and p, for all x ∈ X and z ∈ R^n; B_j(·, v∗), ∀ j ∈ {1, …, m}, are second order (b, ρ2, ω, ξ, θ, p, r)-quasiinvex at y ∈ X for the same data; and ρ1(x, x∗), ρ2(x, x∗) ≥ 0 with ρ2(x, x∗) ≥ ρ1(x, x∗).

(iii) E_i(· ; x∗, u∗, λ), ∀ i ∈ {1, …, p}, are second order prestrictly (b, ρ1, ω, ξ, θ, p, r)-pseudoinvex at y ∈ X, with functions ω, ξ : X × X → R^n, b : X × X → R_+, and real numbers r and p, for all x ∈ X and z ∈ R^n; B_j(·, v∗), ∀ j ∈ {1, …, m}, are second order strictly (b, ρ2, ω, ξ, θ, p, r)-quasiinvex at y ∈ X for the same data; and ρ1(x, x∗), ρ2(x, x∗) ≥ 0 with ρ2(x, x∗) ≥ ρ1(x, x∗).

(iv) E_i(· ; y, u∗, λ), ∀ i ∈ {1, …, p}, are second order prestrictly (b, ρ1, ω, ξ, θ, p, r)-quasiinvex at y ∈ X, with functions ω, ξ : X × X → R^n, b : X × X → R_+, and real numbers r and p, for all x ∈ X and z ∈ R^n; B_j(·, v∗), ∀ j ∈ {1, …, m}, are second order strictly (b, ρ2, ω, ξ, θ, p, r)-pseudoinvex at y ∈ X for the same data; and ρ1(x, x∗), ρ2(x, x∗) ≥ 0 with ρ2(x, x∗) ≥ ρ1(x, x∗).

(v) For each i ∈ {1, …, p}, f_i is second order (b, ρ1, ω, ξ, θ, p, r)-invex and −g_i is second order B-(b, ρ2, ω, ξ, θ, p, r)-invex at x∗; H_j(·, v∗), ∀ j ∈ {1, …, m}, is (b, ρ3, ω, ξ, θ, p, r)-quasiinvex at y; and Σ_{j=1}^m v∗_jρ3 + ρ∗ ≥ 0 for ρ∗ = Σ_{i=1}^p u∗_i(ρ1 + φ(x∗)ρ2) and φ(x∗) = f_i(x∗)/g_i(x∗).

Then ϕ(x) ≥ λ, where ϕ(x) = (f_1(x)/g_1(x), …, f_p(x)/g_p(x)).

Proof: If (i) holds, then, since x ∈ Q and y ∈ D, it follows from (3.1) and (3.2) that

(1/p)⟨Σ_{i=1}^p u∗_i[∇f_i(y) − λ∇g_i(y)] + Σ_{j=1}^m v∗_j∇H_j(y) + [Σ_{i=1}^p u∗_i[∇²f_i(y) − λ∇²g_i(y)] + Σ_{j=1}^m v∗_j∇²H_j(y)]z, e^{pω(x,y)} − 1⟩
− (1/2p)⟨[Σ_{i=1}^p u∗_i[∇²f_i(y) − λ∇²g_i(y)] + Σ_{j=1}^m v∗_j∇²H_j(y)]z, e^{pξ(x,y)} − 1⟩ ≥ 0. (3.13)


Since v∗ ≥ 0, x ∈ Q, and (3.4) holds, we have

Σ_{j=1}^m v∗_jH_j(x) ≤ 0 ≤ Σ_{j=1}^m v∗_jH_j(y),

and so

b(x, y)((1/r)(e^{r Σ_{j=1}^m v∗_j[H_j(x)−H_j(y)]} − 1)) ≤ 0,

since r ≠ 0 and b(x, y) ≥ 0 for all x ∈ Q. In light of the (b, ρ, ω, ξ, θ, p, r)-quasiinvexity of B_j(·, v∗) at y, it follows that

(1/p)⟨∇H_j(y) + ∇²H_j(y)z, e^{pω(x,y)} − 1⟩ − (1/p)⟨(1/2)∇²H_j(y)z, e^{pξ(x,y)} − 1⟩ + ρ(x, y)‖θ(x, y)‖² ≤ 0,

and hence

(1/p)⟨Σ_{j=1}^m ∇H_j(y) + Σ_{j=1}^m ∇²H_j(y)z, e^{pω(x,y)} − 1⟩ − (1/p)⟨(1/2)Σ_{j=1}^m ∇²H_j(y)z, e^{pξ(x,y)} − 1⟩ ≤ −ρ(x, y)‖θ(x, y)‖². (3.14)

It follows from (3.13) and (3.14) that

(1/p)⟨Σ_{i=1}^p u∗_i[∇f_i(y) − λ∇g_i(y)] + Σ_{i=1}^p u∗_i[∇²f_i(y) − λ∇²g_i(y)]z, e^{pω(x,y)} − 1⟩ − (1/p)⟨(1/2)Σ_{i=1}^p u∗_i[∇²f_i(y) − λ∇²g_i(y)]z, e^{pξ(x,y)} − 1⟩ ≥ ρ(x, y)‖θ(x, y)‖². (3.15)

Since ρ(x, y) ≥ 0, applying the (b, ρ, ω, ξ, θ, p, r)-pseudoinvexity at y to (3.15), we have

(1/r) b(x, y)(e^{r[E(x,u∗,λ)−E(y,u∗,λ)]} − 1) ≥ 0. (3.16)

Since b(x, y) ≥ 0, (3.16) implies

Σ_{i=1}^p u∗_i[f_i(x) − λg_i(x)] ≥ Σ_{i=1}^p u∗_i[f_i(y) − λg_i(y)] ≥ 0.

Thus, we have

Σ_{i=1}^p u∗_i[f_i(x) − λg_i(x)] ≥ 0. (3.17)

Since u∗_i > 0 for each i ∈ {1, …, p}, we conclude that

f_i(x)/g_i(x) − λ ≥ 0  ∀ i = 1, …, p.

Hence, ϕ(x) ≥ λ, i.e., (DI) is a dual for (P).


The proof for (ii) is similar to that of (i), but we include it for the sake of completeness. If (ii) holds and x ∈ Q, then it follows from (3.1) and (3.2) that

(1/p)⟨Σ_{i=1}^p u∗_i[∇f_i(y) − λ∇g_i(y)] + Σ_{j=1}^m v∗_j∇H_j(y) + [Σ_{i=1}^p u∗_i[∇²f_i(y) − λ∇²g_i(y)] + Σ_{j=1}^m v∗_j∇²H_j(y)]z, e^{pω(x,y)} − 1⟩
− (1/2p)⟨[Σ_{i=1}^p u∗_i[∇²f_i(y) − λ∇²g_i(y)] + Σ_{j=1}^m v∗_j∇²H_j(y)]z, e^{pξ(x,y)} − 1⟩ ≥ 0. (3.18)

Since v∗ ≥ 0, x ∈ Q, and (3.4) holds, we have

Σ_{j=1}^m v∗_jH_j(x) ≤ 0 = Σ_{j=1}^m v∗_jH_j(y),

and so

b(x, y)((1/r)(e^{r Σ_{j=1}^m v∗_j[H_j(x)−H_j(y)]} − 1)) ≤ 0,

since r ≠ 0 and b(x, y) ≥ 0 for all x ∈ Q. In light of the (b, ρ2, ω, ξ, θ, p, r)-quasiinvexity of B_j(·, v∗) at y, it follows that

(1/p)⟨∇H_j(y) + ∇²H_j(y)z, e^{pω(x,y)} − 1⟩ − (1/p)⟨(1/2)∇²H_j(y)z, e^{pξ(x,y)} − 1⟩ + ρ2(x, y)‖θ(x, y)‖² ≤ 0.

Now it follows from (3.18) that

(1/p)⟨Σ_{i=1}^p u∗_i[∇f_i(y) − λ∇g_i(y)] + Σ_{i=1}^p u∗_i[∇²f_i(y) − λ∇²g_i(y)]z, e^{pω(x,y)} − 1⟩ − (1/p)⟨(1/2)Σ_{i=1}^p u∗_i[∇²f_i(y) − λ∇²g_i(y)]z, e^{pξ(x,y)} − 1⟩ ≥ ρ2(x, y)‖θ(x, y)‖². (3.19)

Since ρ1(x, y), ρ2(x, y) ≥ 0 with ρ2(x, y) ≥ ρ1(x, y), applying the B-(b, ρ1, ω, ξ, θ, p, r)-pseudoinvexity at y to (3.19), we have

(1/r) b(x, y)(e^{r[E_i(x,u∗,λ)−E_i(y,u∗,λ)]} − 1) ≥ 0. (3.20)

It follows from (3.20) that


Σ_{i=1}^p u∗_i[f_i(x) − λg_i(x)] ≥ Σ_{i=1}^p u∗_i[f_i(y) − λg_i(y)] ≥ 0.

Thus, we have

Σ_{i=1}^p u∗_i[f_i(x) − λg_i(x)] ≥ 0. (3.21)

Finally, the proofs for (iii)–(v) are similar to that of (i). □

Before we prove the strong duality theorem, we recall that an efficient solution x∗ ∈ F is referred to as a normal efficient solution of (P) if the generalized Guignard constraint qualification is satisfied at x∗ and, for each i_0 ∈ p, the set

cone({∇G_j(x∗, t) : t ∈ T_j(x∗), j ∈ q} ∪ {∇f_i(x∗) − λ∗_i∇g_i(x∗) : i ∈ p, i ≠ i_0}) + span({∇H_k(x∗, s) : s ∈ S_k, k ∈ r})

is closed, where λ∗_i = f_i(x∗)/g_i(x∗), i ∈ p.

Theorem 3.3 (Strong Duality) Let x∗ be a normal efficient solution of (P), and assume that for each feasible solution (y, z, u, v, λ) of (DI) any one of the five sets of conditions specified in Theorem 3.2 is satisfied. Then there exist z∗, u∗, v∗, λ∗ such that (x∗, z∗, u∗, v∗, λ∗) is an efficient solution of (DI) and ϕ(x∗) = λ∗.

Proof: The proof is similar to that of Theorem 3.2. □

4 Concluding Remarks

We observe that the results obtained in this paper can be generalized to the case of multiobjective fractional subset programming with generalized invex functions, for instance based on the work of Mishra, Jaiswal and Verma [16] and of Verma [29], and to ε-efficiency and weak ε-efficiency conditions in the context of minimax fractional programming problems involving n-set functions.

References

[1] T. Antczak, A class of B-(p, r)-invex functions and mathematical programming, J. Math. Anal. Appl. 268 (2003), 187–206.

[2] T. Antczak, B-(p, r)-pre-invex functions, Folia Math. Acta Univ. Lodziensis 11 (2004), 3–15.

[3] T. Antczak, Generalized B-(p, r)-invexity functions and nonlinear mathematical programming, Numer. Funct. Anal. Optim. 30 (2009), 1–22.


[4] A. Ben-Israel and B. Mond, What is invexity?, Journal of the Australian Mathematical Society Ser. B 28 (1986), 1–9.

[5] L. Caiping and Y. Xinmin, Generalized (ρ, θ, η)-invariant monotonicity and generalized (ρ, θ, η)-invexity of non-differentiable functions, Journal of Inequalities and Applications 2009 (2009), Article ID 393940, 16 pages.

[6] M.A. Hanson, On sufficiency of the Kuhn-Tucker conditions, Journal of Mathematical Analysis and Applications 80 (1981), 545–550.

[7] V. Jeyakumar, Strong and weak invexity in mathematical programming, Methods Oper. Res. 55 (1985), 109–125.

[8] M.H. Kim, G.S. Kim and G.M. Lee, On ε-optimality conditions for multiobjective fractional optimization problems, Fixed Point Theory & Applications 2011:6 (2011), doi:10.1186/1687-1812-2011-6.

[9] G.S. Kim and G.M. Lee, On ε-optimality theorems for convex vector optimization problems, Journal of Nonlinear and Convex Analysis 12 (3) (2013), 473–482.

[10] J.C. Liu, Second order duality for minimax programming, Utilitas Math. 56 (1999), 53–63.

[11] O.L. Mangasarian, Second- and higher-order duality theorems in nonlinear programming, J. Math. Anal. Appl. 51 (1975), 607–620.

[12] S.K. Mishra, Second order generalized invexity and duality in mathematical programming, Optimization 42 (1997), 51–69.

[13] S.K. Mishra, Second order symmetric duality in mathematical programming with F-convexity, European J. Oper. Res. 127 (2000), 507–518.

[14] S.K. Mishra and N.G. Rueda, Higher-order generalized invexity and duality in mathematical programming, J. Math. Anal. Appl. 247 (2000), 173–182.

[15] S.K. Mishra and N.G. Rueda, Second-order duality for nondifferentiable minimax programming involving generalized type I functions, J. Optim. Theory Appl. 130 (2006), 477–486.

[16] S.K. Mishra, M. Jaiswal and R.U. Verma, Optimality and duality for nonsmooth multiobjective fractional semi-infinite programming problems, Advances in Nonlinear Variational Inequalities 16 (1) (2013), 69–83.

[17] B. Mond and T. Weir, Generalized convexity and higher-order duality, J. Math. Sci. 16–18 (1981–1983), 74–94.

[18] B. Mond and J. Zhang, Duality for multiobjective programming involving second-order V-invex functions, in Proceedings of the Optimization Miniconference II (B.M. Glover and V. Jeyakumar, eds.), University of New South Wales, Sydney, Australia, 1997, pp. 89–100.


[19] B. Mond and J. Zhang, Higher order invexity and duality in mathematical programming, in Generalized Convexity, Generalized Monotonicity: Recent Results (J.P. Crouzeix et al., eds.), Kluwer Academic Publishers, Dordrecht, 1998, pp. 357–372.

[20] R.B. Patel, Second order duality in multiobjective fractional programming, Indian J. Math. 38 (1997), 39–46.

[21] M.K. Srivastava and M. Bhatia, Symmetric duality for multiobjective programming using second order (F, ρ)-convexity, Opsearch 43 (2006), 274–295.

[22] K.K. Srivastava and M.G. Govil, Second order duality for multiobjective programming involving (F, ρ, σ)-type I functions, Opsearch 37 (2000), 316–326.

[23] S.K. Suneja, C.S. Lalitha and S. Khurana, Second order symmetric duality in multiobjective programming, European J. Oper. Res. 144 (2003), 492–500.

[24] M.N. Vartak and I. Gupta, Duality theory for fractional programming problems under η-convexity, Opsearch 24 (1987), 163–174.

[25] R.U. Verma, Weak ε-efficiency conditions for multiobjective fractional programming, Applied Mathematics and Computation 219 (2013), 6819–6827.

[26] R.U. Verma, A generalization to Zalmai type second order univexities and applications to parametric duality models to discrete minimax fractional programming, Advances in Nonlinear Variational Inequalities 15 (2) (2012), 113–123.

[27] R.U. Verma, The (G, β, φ, h(·, ·), ρ, θ)-univexities of higher orders with applications to parametric duality models in minimax fractional programming, Theory and Applications of Mathematics & Computer Science 3 (1) (2013), 65–84.

[28] R.U. Verma, Generalized (G, b, β, φ, h, ρ, θ)-univexities with applications to parametric duality models for discrete minimax fractional programming, Transactions on Mathematical Programming and Applications 1 (1) (2013), 1–14.

[29] R.U. Verma, New ε-optimality conditions for multiobjective fractional subset programming problems, Transactions on Mathematical Programming and Applications 1 (1) (2013), 69–89.

[30] R.U. Verma, Second order (Φ, Ψ, ρ, η, θ)-invexity frameworks and ε-efficiency conditions for multiobjective fractional programming, Theory and Applications of Mathematics & Computer Science 2 (1) (2012), 31–47.

[31] X.M. Yang, Second order symmetric duality for nonlinear programs, Opsearch 32 (1995), 205–209.

[32] X.M. Yang, On second order symmetric duality in nondifferentiable multiobjective programming, J. Ind. Manag. Optim. 5 (2009), 697–703.

[33] X.M. Yang and S.H. Hou, Second-order symmetric duality in multiobjective programming, Appl. Math. Lett. 14 (2001), 587–592.


[34] X.M. Yang, K.L. Teo and X.Q. Yang, Higher-order generalized convexity and duality in nondifferentiable multiobjective mathematical programming, J. Math. Anal. Appl. 297 (2004), 48–55.

[35] X.M. Yang, X.Q. Yang and K.L. Teo, Nondifferentiable second order symmetric duality in mathematical programming with F-convexity, European J. Oper. Res. 144 (2003), 554–559.

[36] X.M. Yang, X.Q. Yang and K.L. Teo, Huard type second-order converse duality for nonlinear programming, Appl. Math. Lett. 18 (2005), 205–208.

[37] X.M. Yang, X.Q. Yang and K.L. Teo, Higher-order symmetric duality in multiobjective programming with invexity, J. Ind. Manag. Optim. 4 (2008), 385–391.

[38] X.M. Yang, X.Q. Yang, K.L. Teo and S.H. Hou, Second order duality for nonlinear programming, Indian J. Pure Appl. Math. 35 (2004), 699–708.

[39] K. Yokoyama, Epsilon approximate solutions for multiobjective programming problems, Journal of Mathematical Analysis and Applications 203 (1) (1996), 142–149.

[40] G.J. Zalmai, General parametric sufficient optimality conditions for discrete minmax fractional programming problems containing generalized (θ, η, ρ)-V-invex functions and arbitrary norms, Journal of Applied Mathematics & Computing 23 (1-2) (2007), 1–23.

[41] G.J. Zalmai, Hanson-Antczak-type generalized invex functions in semiinfinite minmax fractional programming, Part I: Sufficient optimality conditions, Communications on Applied Nonlinear Analysis 19 (4) (2012), 1–36.

[42] G.J. Zalmai, Hanson-Antczak-type generalized (α, β, γ, ξ, η, ρ, θ)-V-invex functions in semiinfinite multiobjective fractional programming, Part III: Second order parametric duality models, Advances in Nonlinear Variational Inequalities 16 (2) (2013), 91–126.

[43] J. Zhang and B. Mond, Second order b-invexity and duality in mathematical programming, Utilitas Math. 50 (1996), 19–31.


On the Best Linear Prediction of Linear Processes

Mohammad Mohammadi^{a,b} and Adel Mohammadpour^{b}

^{a} Department of Statistics, Faculty of Basic Science, Khatam Alanbi University of Technology, Behbahan, Iran. Email: [email protected]

^{b} Department of Statistics, Faculty of Mathematics & Computer Science, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Ave., Tehran, Iran. Email: [email protected]

Abstract

Under mild conditions, we find the best linear prediction of linear processes with arbitrary innovations. The prediction method is based on the theory of Hilbert spaces. Using the presented method we can predict non-causal ARMA processes. Finally, we utilize the presented method for prediction of natural logarithms of volumes of Wal-Mart stock traded daily.

2000 Mathematics Subject Classification: Primary: 60G25; Secondary: 62M20.

Key words: ARMA process; Domain of attraction; Hilbert space; Linear process; Prediction.

J. Applied Functional Analysis, Vol. 10, No.'s 3-4, 254-265, 2015, Copyright 2015 Eudoxus Press, LLC


1 Introduction

Infinite variance distributions have been used for modeling data that arise in finance, telecommunications, insurance and medicine; see Kabasinskas et al. (2009), Nolan (2003), Tsay (2002), Rachev and Mittnik (2000) and Gallardo (2000), among many others. These applications justify the need to develop an effective methodology for the prediction of heavy tailed processes.

The set of all finite variance random variables constitutes a Hilbert space (denoted by H) with the inner product <X, Y>_H = E(XY) for every X, Y in H. If the values of a random process are in a Hilbert space, then one can find the best linear prediction based on projections on the past values. When the second moment is infinite, this classical method (prediction based on the Hilbert space H) fails. Several researchers have studied the prediction of infinite variance processes. Cline and Brockwell (1985) used the minimum dispersion criterion for the prediction of causal ARMA processes with i.i.d. innovations. They assumed that the innovations have regularly varying tails. In the case of α-stable random processes, the minimum dispersion criterion focuses on the tails of the errors. This criterion is used in Kokoszka (1996) for predicting FARIMA processes with innovations in the domain of attraction of an α-stable distribution with 1 < α ≤ 2. Generally, the predictor obtained from the minimum dispersion criterion is not unique and, except in some cases, the calculations for finding predictors are not routine. We refer the reader interested in the prediction of infinite variance processes to Cambanis and Soltani (1984), Miamee and Pourahmadi (1988) and Soltani and Moeanaddin (1994).

In Mohammadi and Mohammadpour (2009), using the integral representation of α-stable random variables, the following Hilbert space is defined:

\[
\Big\{ \int_E f(x)\,dM(x) \;\Big|\; f \in L^{\alpha}(E, \mathcal{E}, p) \Big\},
\]

where M is an α-stable, 0 < α ≤ 2, random measure on (E, 𝓔) with finite control measure p. The prediction method is based on the introduced Hilbert space with an inner product known as the stable covariation. In Karcher et al. (2013), three methods for predicting α-stable random fields are investigated.

On the Best Linear Prediction of Linear Processes, Mohammad Mohammadi and Adel Mohammadpour


Non-causal ARMA processes with finite or infinite variance innovations are important processes to which the methods described above cannot be applied. To our knowledge, there is no model-based method for predicting such processes. It should be noted that a best linear predictor exists for non-causal ARMA processes with finite variance innovations, but in practice this predictor cannot be computed.

In this article, our aim is to find the best linear prediction of the linear process {X_t | t ∈ Z} of the form

\[
X_t = \sum_{j=-\infty}^{+\infty} c_j W_{t-j}, \qquad t \in \mathbb{Z}, \qquad (1)
\]

where {W_t} is an arbitrary sequence of random variables satisfying some weak assumptions (see Theorem 2.1). Consider the process {X_t | t ∈ Z} which satisfies the system

\[
\Phi(B) X_t = \Theta(B) Z_t,
\]

where {Z_t} is an arbitrary sequence, \Phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p, \Theta(B) = 1 + \theta_1 B + \cdots + \theta_q B^q, p, q ∈ N, and B is the backward shift operator. Obviously, the two processes \Phi(B) X_t and \Theta(B) Z_t satisfy (1).

Section 2 is devoted to the construction of Hilbert spaces based on linear processes of the form (1). In Section 3, we explain the prediction method. Finally, in Section 4, we apply the new prediction method to a financial time series for which a non-causal ARMA(2,0) process is an appropriate model.

2 Hilbert space generated by a linear process

In this section, under mild conditions, we present a Hilbert space that contains the linear process of the form (1).

We can rewrite the linear process of the form (1) as

\[
X_t = \sum_{j=-\infty}^{+\infty} c_{t+j} W_{-j}. \qquad (2)
\]


In the sequel, we consider a sequence of random variables {W_t} such that, for every sequence {a_j} ⊂ C,

\[
\sum_{j=-\infty}^{+\infty} a_{t+j} W_{-j} = 0 \quad \text{if and only if} \quad a_j = 0, \ \text{for every } j \in \mathbb{Z}. \qquad (3)
\]

Also, we consider linear processes of the form (2) such that

\[
\sum_{j=-\infty}^{+\infty} |c_j|^2 < \infty \quad \text{and} \quad \Big| \sum_{j=-\infty}^{+\infty} c_{t+j} W_{-j} \Big| < \infty, \quad \text{a.s.} \qquad (4)
\]

Theorem 2.1. Let {W_t} be a sequence of random variables satisfying (3). Let

\[
A = \Big\{ \sum_{j=-\infty}^{+\infty} c_{t+j} W_{-j} + \mu \;\Big|\; t \in \mathbb{Z},\ \mu \in \mathbb{C},\ \text{and } \{c_j\} \text{ satisfies (4)} \Big\}.
\]

For every two elements X = \sum_{j=-\infty}^{+\infty} c_{t_1+j} W_{-j} + \mu_1 and Y = \sum_{j=-\infty}^{+\infty} d_{t_2+j} W_{-j} + \mu_2 in A, we assign the following value to them:

\[
\langle X, Y \rangle_A = \sum_{j=-\infty}^{+\infty} c_{t_1+j}\, \overline{d_{t_2+j}} + \mu_1 \overline{\mu_2}.
\]

Then there is a Hilbert space (denoted by \overline{A}) containing A such that

\[
\langle X, Y \rangle_{\overline{A}} = \langle X, Y \rangle_{A}, \qquad (5)
\]

for every X, Y ∈ A.

Proof. It is obvious that A is a vector space over the field C. It suffices to show that A is an inner product space. Condition (3) ensures that the function ⟨·,·⟩_A is well defined, and ⟨·,·⟩_A satisfies the properties of an inner product. Therefore, A is an inner product space. Now, the completion of A is a Hilbert space satisfying (5).

Definition 2.2. We call the space \overline{A} the Hilbert space generated by the process {W_t}.

Remark 2.3. If {W_t} is an i.i.d. sequence of random variables such that Var(W_t) < +∞ and E(W_t) = 0 for every t ∈ Z, then A is a subset of the Hilbert space H generated by all finite variance random variables. In this case, we have

\[
\langle X, Y \rangle_H = \mathrm{Var}(W_1)\, \langle X, Y \rangle_A,
\]

for every X, Y ∈ A.


Remark 2.4. The inner product of the space A is not necessarily determined by distribution functions. For example, even if the distributions of W_1 and W_2 in A are different, it is possible to have ||W_1||_A = ||W_2||_A, where ||·||_A = (⟨·,·⟩_A)^{1/2}. Conversely, consider two non-zero random variables W_1 and aW_2 in A that have the same distribution for some a ≠ 1; it is nevertheless possible to have ||W_1||_A ≠ ||aW_2||_A.

Example 2.5. Assume that the process {X_t} satisfies the system

\[
X_t = \sum_{j=-\infty}^{+\infty} c_j W_{t-j}, \qquad t \in \mathbb{Z},
\]

where {c_j} and {W_j} satisfy (3) and (4). By Theorem 2.1, the process {X_t} takes its values in the space \overline{A}. On the other hand, by assuming that {X_t} satisfies (3), the process {X_t} takes its values in the space generated by

\[
A_1 = \Big\{ \sum_{j=-\infty}^{+\infty} d_{t+j} X_{-j} \;\Big|\; t \in \mathbb{Z},\ \{d_j\} \subset \mathbb{C} \text{ satisfies (4)} \Big\}.
\]

The two norms ||·||_A and ||·||_{A_1} are not necessarily equal. This follows from the fact that the introduced inner product is not based on the distribution function.

By an argument similar to the proof of Theorem 2.1, we can generalize the space A to a larger space, so that sums of linear processes take their values in a Hilbert space.

Theorem 2.6. Let the two sets {W^{\nu}_t | t ∈ Z}, ν = 1, ..., I, and {c^{\nu}_t | t ∈ Z}, ν = 1, ..., I, satisfy

\[
\sum_{\nu=1}^{I} \sum_{j=-\infty}^{+\infty} c^{\nu}_{t_\nu + j} W^{\nu}_{-j} = 0 \quad \text{if and only if} \quad c^{\nu}_j = 0, \ \text{for all } j \in \mathbb{Z} \text{ and } \nu = 1, \ldots, I, \qquad (6)
\]

where I ∈ N. Moreover, assume that

\[
\sum_{j=-\infty}^{+\infty} |c^{\nu}_j|^2 < \infty \quad \text{and} \quad \Big| \sum_{j=-\infty}^{+\infty} c^{\nu}_{t_\nu + j} W^{\nu}_{-j} \Big| < \infty, \quad \text{a.s.,} \qquad (7)
\]


for every ν = 1, ..., I. Consider the space

\[
B = \Big\{ \sum_{\nu=1}^{I} \sum_{j=-\infty}^{+\infty} c^{\nu}_{t_\nu + j} W^{\nu}_{-j} + \mu \;\Big|\; t_\nu \in \mathbb{Z},\ \mu \in \mathbb{C},\ \text{and } \{c^{\nu}_j\} \text{ satisfies (6) and (7)} \Big\}.
\]

For every two elements

\[
X = \sum_{\nu=1}^{I} \sum_{j=-\infty}^{+\infty} c^{\nu}_{t_\nu + j} W^{\nu}_{-j} + \mu_1 \quad \text{and} \quad Y = \sum_{\nu=1}^{I} \sum_{j=-\infty}^{+\infty} d^{\nu}_{s_\nu + j} W^{\nu}_{-j} + \mu_2
\]

in B, assign the following value to them:

\[
\langle X, Y \rangle_B = \sum_{\nu=1}^{I} \sum_{j=-\infty}^{+\infty} c^{\nu}_{t_\nu + j}\, \overline{d^{\nu}_{s_\nu + j}} + \mu_1 \overline{\mu_2}.
\]

Then there is a Hilbert space (denoted by \overline{B}) containing B such that

\[
\langle X, Y \rangle_{\overline{B}} = \langle X, Y \rangle_{B},
\]

for every X, Y ∈ B.

3 Prediction method

In the previous section, under conditions (3) and (4), we showed that every linear process {X_t} of the form (2) takes its values in a Hilbert space. Therefore, using the projection theorem (see Brockwell and Davis (1991), Chapter 2), one can find the best linear prediction of X_{n+h} in terms of the past values X_1, ..., X_n. In other words, using the projection theorem, we can find the coefficient vector a_n = (a_{1n}, ..., a_{nn})' such that

\[
\Big\| \sum_{i=1}^{n} a_{in} X_{n+1-i} - X_{n+h} \Big\|_A = \inf_{K \in \mathrm{span}\{X_1, \ldots, X_n\}} \big\| K - X_{n+h} \big\|_A,
\]

where A is the Hilbert space generated by the process {W_t}. The coefficient vector a_n satisfies the equations

\[
\Big\langle \sum_{i=1}^{n} a_{in} X_{n+1-i} - X_{n+h},\ X_{n+1-j} \Big\rangle_A = 0, \qquad j = 1, \ldots, n. \qquad (8)
\]


Set γ(h) := ⟨X_0, X_h⟩_A, h ∈ Z. Then equations (8) can be rewritten as

\[
\Gamma_n a_n = \gamma_n, \qquad (9)
\]

where

\[
\Gamma_n = [\gamma(i-j)]_{i,j=1,\ldots,n} \quad \text{and} \quad \gamma_n = (\gamma(h), \ldots, \gamma(n+h-1))'.
\]

If Γ_n is non-singular, then we have the unique solution a_n = \Gamma_n^{-1} \gamma_n.

Remark 3.1. Similarly to Proposition 5.1.1 in Brockwell and Davis (1991), it can be shown that if γ(0) > 0 and γ(h) → 0 as h → +∞, then Γ_n is non-singular. For the linear process {X_t} of the form (2) with conditions (3) and (4), we have γ(h) → 0 as h → +∞. Therefore, Γ_n in (9) is non-singular.

3.1 Prediction method for a class of linear processes with infinite variance

Consider the real-valued process {X_t = \sum_{j=-\infty}^{+\infty} c_j W_{t-j} | t ∈ Z} with conditions (3) and (4), where {W_t} is an i.i.d. sequence of random variables such that

\[
P(|W_t| > x) = x^{-\alpha} L(x), \qquad \frac{P(W_t > x)}{P(|W_t| > x)} \to p \quad \text{and} \quad \frac{P(W_t < -x)}{P(|W_t| > x)} \to q, \qquad (10)
\]

as x → +∞, where L(·) is a slowly varying function at infinity, α ∈ (0, 2), p ∈ [0, 1] and q = 1 − p. In the sequel, for the process {X_t} we describe a reasonable method for estimating the coefficient vector a_n in (9). Assume that

\[
\rho(h) = \frac{\sum_{k=-\infty}^{+\infty} c_k c_{k+h}}{\sum_{k=-\infty}^{+\infty} c_k^2} \quad \text{and} \quad \rho_n(h) = \frac{\sum_{t=1}^{n} X_t X_{t+h}}{\sum_{t=1}^{n} X_t^2}, \qquad h \in \mathbb{Z}.
\]

In Davis and Resnick (1985) it is shown that

\[
\big( \rho_n(1), \ldots, \rho_n(H) \big) \xrightarrow{P} \big( \rho(1), \ldots, \rho(H) \big), \qquad H > 0, \qquad (11)
\]


as n → +∞, where "→_P" denotes convergence in probability. Dividing both sides of (9) by \sum_{k=-\infty}^{+\infty} |c_k|^2, we obtain

\[
\Gamma^{*}_n a_n = \gamma^{*}_n, \qquad (12)
\]

where

\[
\Gamma^{*}_n = [\rho(i-j)]_{i,j=1,\ldots,n} \quad \text{and} \quad \gamma^{*}_n = (\rho(h), \ldots, \rho(n+h-1))'.
\]

Using (11), we can estimate Γ*_n and γ*_n. We summarize the obtained results in the following lemma, which presents a reasonable estimator for a_n.

Lemma 3.2. Let X_1, ..., X_N be a path of length N from the linear process X_t = \sum_{j=-\infty}^{+\infty} c_j W_{t-j} with conditions (3) and (4), where {W_t} is an i.i.d. sequence of random variables satisfying (10). Let

\[
\Gamma^{*}_{n,N} = [\rho_N(|i-j|)]_{i,j=1,\ldots,n}
\]

be non-singular and \gamma^{*}_{n,N} = (\rho_N(h), \ldots, \rho_N(n+h-1))'. Let a_{n,N} = (\Gamma^{*}_{n,N})^{-1} \gamma^{*}_{n,N}. Then a_{n,N} \xrightarrow{P} a_n, as N → +∞.
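Lemma 3.2 suggests a plug-in estimator computable from data alone. The following Python sketch is an illustration under stated assumptions (the helper names are hypothetical, and the signed Pareto innovations are only a stand-in for an i.i.d. sequence with regularly varying tails as in (10)):

```python
import numpy as np

def rho_hat(x, h):
    """Sample correlation rho_N(h) = sum_t x_t x_{t+h} / sum_t x_t^2."""
    x = np.asarray(x, dtype=float)
    return np.dot(x[:len(x) - h], x[h:]) / np.dot(x, x)

def estimate_an(x, n, h=1):
    """Plug-in estimator a_{n,N} = (Gamma*_{n,N})^{-1} gamma*_{n,N} of Lemma 3.2."""
    G = np.array([[rho_hat(x, abs(i - j)) for j in range(n)] for i in range(n)])
    g = np.array([rho_hat(x, h + k) for k in range(n)])
    return np.linalg.solve(G, g)

rng = np.random.default_rng(0)
# heavy-tailed innovations: signed Pareto with tail index about 1.8
w = rng.pareto(1.8, size=5000) * rng.choice([-1.0, 1.0], size=5000)
x = w[1:] + 0.5 * w[:-1]                 # an MA(1) path: X_t = W_t + 0.5 W_{t-1}
a = estimate_an(x, n=3, h=1)             # approximates a_3 for one-step prediction
```

By (11) the sample correlations, and hence the estimated coefficients, stabilize as the path length N grows.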

4 Application in finance

In this section, we utilize the new prediction method for predicting a set of financial data.

Andrews et al. (2009) showed that the natural logarithms of the volumes of Wal-Mart stock traded daily on the New York Stock Exchange between December 1, 2003 and December 31, 2004 can be fitted by the following non-causal ARMA(2,0) process:

\[
X_t + 2.0766\, X_{t-1} - 2.0772\, X_{t-2} = Z_t,
\]

where {Z_t} is an i.i.d. sequence of α-stable random variables with α = 1.8335. To our knowledge, there is no model-based method for predicting this process. For the values {x_t}_{t=1}^{274} we found \sum_{t=1}^{274} x_t / 274 = 16.08896. Our aim is to predict X_{n+1} in terms of some past values. First, we find the best linear prediction X̂*_{n+1} for the process {X*_t = X_t − 16.08896 | t ∈ Z}. Then, we consider X̂*_{n+1} + 16.08896 as a predictor of X_{n+1}.


We have X*_t + 2.0766 X*_{t−1} − 2.0772 X*_{t−2} = Z*_t, where Z*_t = Z_t − 16.07931. We can rewrite X*_t as X*_t = Z*_t − 2.0766 X*_{t−1} + 2.0772 X*_{t−2}. Using the space introduced in Theorem 2.6, the values of the process {Z*_t − 2.0766 X*_{t−1} + 2.0772 X*_{t−2} | t ∈ Z} lie in a Hilbert space. Therefore, for the process {X*_t} we have

\[
\gamma(h) =
\begin{cases}
1 + 2.0766^2 + 2.0772^2, & h = 0, \\
-2.0766 \times 2.0772, & h = 1, \\
0, & \text{otherwise.}
\end{cases}
\]

Figure 1 shows the value of X_{n+1} and its prediction in terms of 20 past values, for n = 20, ..., 273. We calculated the mean square error between the predicted and true values:

\[
\frac{1}{253} \sum_{n=21}^{273} (x_{n+1} - \hat{x}_{n+1})^2 = 0.2435914.
\]

The procedure described in this section can be used for predicting both causal and non-causal ARMA processes.


[Figure 1 appears here.]

Figure 1: The dotted line displays the natural logarithms of the volumes of Wal-Mart stock traded daily on the New York Stock Exchange between December 21, 2003 and December 31, 2004. The smooth line displays the predicted values for X_{n+1} in terms of 20 past values, where n = 20, ..., 273.


References

[1] B. Andrews, M. Calder, R.A. Davis, Maximum likelihood estimation for α-stable autoregressive processes, Annals of Statistics, 37, 1946-1982 (2009).

[2] P.J. Brockwell, R.A. Davis, Time Series: Theory and Methods (2nd ed.), Springer-Verlag, New York, 1991.

[3] S. Cambanis, A.R. Soltani, Prediction of stable processes: Spectral and moving average representation, Probability Theory and Related Fields, 66, 593-612 (1984).

[4] D.B.H. Cline, P.J. Brockwell, Linear prediction of ARMA processes with infinite variance, Stochastic Processes and their Applications, 19, 281-296 (1985).

[5] R.A. Davis, S. Resnick, Limit theory for moving averages of random variables with regularly varying tail probabilities, Annals of Probability, 13, 179-195 (1985).

[6] J.R. Gallardo, Fractional Stable Noise Processes and Their Application to Traffic Models and Fast Simulation of Broadband Telecommunications Networks, Ph.D. dissertation, Department of Electrical and Computer Science, George Washington University, 2000.

[7] A. Kabasinskas, S.T. Rachev, L. Sakalauskas, W. Sun, I. Belovas, Alpha-stable paradigm in financial markets, Journal of Computational Analysis and Applications, 11(4), 641-668 (2009).

[8] W. Karcher, E. Shmileva, E. Spodarev, Extrapolation of stable random fields, Journal of Multivariate Analysis, 115, 516-536 (2013).

[9] P.S. Kokoszka, Prediction of infinite variance fractional ARIMA, Probability and Mathematical Statistics, 16, 65-83 (1996).

[10] A.G. Miamee, M. Pourahmadi, Wold decomposition, prediction and parameterization of stationary processes with infinite variance, Probability Theory and Related Fields, 79, 145-164 (1988).


[11] M. Mohammadi, A. Mohammadpour, Best linear prediction for α-stable random processes, Statistics and Probability Letters, 79, 2266-2272 (2009).

[12] J.P. Nolan, Modeling financial distributions with stable distributions, Chapter 3 of Handbook of Heavy Tailed Distributions in Finance, Elsevier Science, Amsterdam (2003).

[13] S.T. Rachev, S. Mittnik, Stable Paretian Models in Finance, Wiley, New York, 2000.

[14] G. Samorodnitsky, M. Taqqu, Stable Non-Gaussian Random Processes, Chapman & Hall, New York, 1994.

[15] A.R. Soltani, R. Moeanaddin, On dispersion of stable random vectors and its application in the prediction of multivariate stable processes, Journal of Applied Probability, 31, 3, 691-699 (1994).

[16] R.S. Tsay, Analysis of Financial Time Series, Wiley, New York, 2002.


SOME ESTIMATE FOR THE EXTENDED FRESNEL TRANSFORM AND ITS PROPERTIES IN A CLASS OF BOEHMIANS

S.K.Q. Al-Omari

Department of Applied Sciences, Faculty of Engineering Technology, Al-Balqa Applied University, Amman 11134, Jordan

E-mail: [email protected]

Abstract. The construction of Boehmians is similar to the construction of the field of quotients and, in some cases, it gives just the field of quotients. Strong Boehmians are introduced as a subclass of generalized functions that generalizes distributions [4]. In this article, we continue the analysis obtained in [20] and [21]. We discuss the Fresnel transform on a certain space of strong Boehmians. The Fresnel transform and its inverse are extended to a context of Boehmians and are shown to be well-defined, linear, analytic and one-to-one mappings. A new notion of convergence is also defined.

Keywords: Fresnel Transform; Generalized Fresnel Transform; Tempered Distribution; Boehmian; Strong Boehmian.

1 INTRODUCTION

In wave optics theory, the generalized Fresnel transform is frequently used to describe beam propagation in the paraxial approximation and the Fresnel diffraction of light. The generalized Fresnel transform (the diffraction Fresnel transform) of an arbitrary function f(x) is defined as [16, 9]

\[
F_d f(\sigma) = \int_{\mathbb{R}} E(\alpha_1, \beta_1, \alpha_2, \beta_2; x, \sigma)\, f(x)\, dx, \qquad (1)
\]

where

\[
E(\alpha_1, \beta_1, \alpha_2, \beta_2; x, \sigma) := \frac{1}{\sqrt{2\pi i \beta_1}}\, \exp\Big( \frac{i}{2\beta_1} \big( \alpha_1 x^2 - 2\sigma x + \alpha_2 \sigma^2 \big) \Big)
\]

is the transform kernel, parameterized by a real matrix M restricted by \alpha_1 \beta_2 - \beta_1 \alpha_2 = 1. Since diffraction Fresnel transforms are related to a wide class of optical transforms, their various properties and applications have recently attracted great interest among physicists. In this article, we pay attention to a particular case of the diffraction Fresnel transform, namely the Fresnel transform (\alpha_1 = \alpha_2). We choose this simplified form of the diffraction Fresnel transform for two reasons: the first is that the Fresnel transform can be analyzed more conveniently, which provides a favourable performance; the second reason is related

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 3-4, 266-280, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC


to the complexity of finding a strong Boehmian space that can handle the diffraction Fresnel transform entangled with four parameters.

However, our results are new even for usual Boehmian spaces. We consider a certain space of Boehmians, named the "strong Boehmian space", for the distributional Fresnel transform. Enlightened by this space, the extended Fresnel transform is shown to constitute a usual Boehmian. Further properties are also discussed.

We spread the paper over five sections. The Fresnel transform is reviewed in Section 1. The strong Boehmian space is obtained in Section 2. Section 3 presents a general construction of usual Boehmian spaces. The image space of Boehmians is described in Section 4. Properties of the extended Fresnel transform and its inverse are discussed in Section 5.

By considering the case \alpha_1 = \alpha_2, Equ. (1) reduces to the so-called Fresnel transform, whose integral equation is given by [8, p. 207]

\[
Tf(\sigma) := F(\sigma) := \int_{\mathbb{R}} f(x)\, \exp\big( i\pi (\sigma - x)^2 \big)\, dx, \qquad (2)
\]

where f(x) is a single-valued function over R. Let E'(R) be the dual of the space E(R) of all infinitely smooth complex-valued functions \phi(x) over R (the set of real numbers) such that

\[
\gamma_k(\phi) = \sup_{x \in K} \big| D^k_x \phi(x) \big| < \infty, \qquad (3)
\]

where k ∈ N (k = 0, 1, 2, ...) and K runs through compact subsets of R. Elements of E'(R) are called distributions of compact support. The topology of E(R) is generated by the sequence (\gamma_k(\phi))_0^{\infty} of multinorms, which makes E(R) a locally convex Hausdorff topological vector space. We have D(R) ⊂ E(R), where D(R) is the test function space of compact support. The topology of D(R) is therefore stronger than that induced on D(R) by E'(R), and the restriction of any element of E'(R) to D(R) is in D'(R), the dual space of Schwartz distributions.

The classical theory of the Fresnel transform is extended to distributions in E'(R) by the formula

\[
Tf(\sigma) := F(\sigma) = \big\langle f(x),\ \exp\big( i\pi (\sigma - x)^2 \big) \big\rangle, \qquad (4)
\]

where σ ∈ R is arbitrary but fixed. We state without proof the following theorem.

Theorem 1.1 (Analyticity of T). Let f ∈ E'(R); then Tf is differentiable and

\[
D^k_\sigma\, Tf(\sigma) = \big\langle f(x),\ D^k_\sigma \exp\big( i\pi (\sigma - x)^2 \big) \big\rangle,
\]

for every nonnegative integer k.

Theorem 1.2. The mapping T is linear and one-to-one.

The proof is straightforward.
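The transform (2) can also be explored numerically. The sketch below is an illustration, not part of the paper; it assumes the kernel exp(iπ(σ − x)²) and truncates the real line, so it is adequate only for rapidly decaying inputs. It checks the linearity asserted in Theorem 1.2 at a sample point:

```python
import numpy as np

def fresnel_transform(f, sigma, xmax=10.0, num=40001):
    """Trapezoid-rule sketch of (2): (Tf)(sigma) = int f(x) exp(i*pi*(sigma-x)^2) dx,
    with R truncated to [-xmax, xmax]."""
    x = np.linspace(-xmax, xmax, num)
    dx = x[1] - x[0]
    vals = f(x) * np.exp(1j * np.pi * (sigma - x) ** 2)
    return (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * dx   # trapezoid rule

f = lambda x: np.exp(-x ** 2)                  # rapidly decaying test input
g = lambda x: x * np.exp(-x ** 2)

# linearity of T (Theorem 1.2), checked numerically at sigma = 0.7
lhs = fresnel_transform(lambda x: f(x) + 2 * g(x), 0.7)
rhs = fresnel_transform(f, 0.7) + 2 * fresnel_transform(g, 0.7)
assert np.isclose(lhs, rhs)
```

For the Gaussian input, the quadrature also agrees with the closed Gaussian integral of exp(−(1 − iπ)x²), which gives a quick sanity check on the truncation.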


Definition 1.3. Let f and g ∈ E'(R). The generalized convolution product f ⊛ g of f and g is defined as

\[
\langle (f \circledast g)(x),\ \phi(x) \rangle = \big\langle f(x),\ \langle g(\xi),\ \phi(x + \xi) \rangle \big\rangle,
\]

for every φ ∈ E(R).

Theorem 1.4. For every f ∈ E'(R), \psi(x) = \langle f(\xi), \phi(x + \xi) \rangle is infinitely smooth and satisfies the relation

\[
D^k_x \psi(x) = \big\langle f(\xi),\ D^k_x \phi(x + \xi) \big\rangle,
\]

for all k ∈ N.

Proof: see [16] for more details.

2 THE STRONG SPACE OF BOEHMIANS B_s

Let Y = N × R, R = (−∞, +∞); then f(n, x) ∈ E(Y) if and only if f(n, x) is infinitely smooth and

\[
\gamma_k(f(n, x)) = \sup_{x \in K} \big| D^k_x f(n, x) \big| < \infty, \qquad (5)
\]

for every k ∈ N, K being a compact subset of R.

Let f(n, x) ∈ E(Y) and ω ∈ D(R). The usual convolution product of f(n, x) and ω with respect to x is given by

\[
(f \circledast \omega)(n, x) = \int_{\mathbb{R}} f(n, y)\, \omega(x - y)\, dy, \qquad n \in \mathbb{N}. \qquad (6)
\]

Let E_δ(R) (⊂ D(R)) be the set of all test functions ω such that

\[
\int_{\mathbb{R}} \omega(x)\, dx = 1.
\]

The pair (f, ω), or f(n, x)/ω, is said to be a quotient of functions if and only if

\[
f(n, x) \circledast d_m\omega(x) = f(m, x) \circledast d_n\omega(x),
\]

n, m ∈ N, where d_r\omega(x) = r\,\omega(rx).

Two quotients of functions are equivalent, f(n, x)/ω ∼ g(m, x)/ψ, if and only if

\[
f(n, x) \circledast d_m\psi(x) = g(m, x) \circledast d_n\omega(x). \qquad (7)
\]

Let

\[
Q = \Big\{ \frac{f(n, x)}{\omega} : f(n, x) \in E(\mathrm{Y}),\ \omega \in E_\delta(\mathbb{R}) \Big\};
\]

then the equivalence class [f(n, x)/ω] containing f(n, x)/ω is said to be a strong Boehmian. The space of all such Boehmians is denoted by B_s.
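The dilations d_rω(x) = rω(rx) appearing in these quotients behave like an approximate identity: they preserve the unit integral while their support shrinks, so f ⊛ d_rω approaches f. The following numerical sketch illustrates this with a hypothetical triangle kernel ω (an illustration only, not part of the construction):

```python
import numpy as np

def d(r, omega):
    """Dilation (d_r omega)(x) = r * omega(r x); keeps the integral equal to 1."""
    return lambda x: r * omega(r * x)

omega = lambda x: np.maximum(1.0 - np.abs(x), 0.0)   # triangle kernel, integral 1

x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]
f = np.exp(-x ** 2)                                  # a smooth function to smooth

errs = {}
for r in (1, 4, 16):
    w = d(r, omega)(x)
    conv = np.convolve(f, w, mode="same") * dx       # discretized (f conv d_r omega)(x)
    errs[r] = np.max(np.abs(conv - f))               # deviation from f shrinks with r
```

As r grows, the support of d_rω shrinks toward 0, which is exactly the defining property of the delta sequences used later, and the convolution returns f.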


Addition, scalar multiplication, differentiation and convolution in B_s are defined by

\[
\Big[ \frac{f(n, x)}{\omega} \Big] + \Big[ \frac{g(m, x)}{\psi} \Big] = \Big[ \frac{f(n, x) \circledast \psi + g(m, x) \circledast \omega}{\omega \circledast \psi} \Big],
\]

\[
k \Big[ \frac{f(n, x)}{\omega} \Big] = \Big[ \frac{k f(n, x)}{\omega} \Big], \qquad D^k_x \Big[ \frac{f(n, x)}{\omega} \Big] = \Big[ \frac{D^k_x f(n, x)}{\omega} \Big]
\]

and

\[
\Big[ \frac{f(n, x)}{\omega} \Big] \circledast \psi = \Big[ \frac{f(n, x) \circledast \psi}{\omega} \Big],
\]

respectively.

E-convergence: \beta_\nu \xrightarrow{E} \beta in B_s if there are f_\nu(n, x), f(n, x) ∈ E'(Y) such that

\[
\beta_\nu = \Big[ \frac{f_\nu(n, x)}{\omega} \Big], \qquad \beta = \Big[ \frac{f(n, x)}{\omega} \Big],
\]

and f_\nu(n, x) → f(n, x) as ν → ∞ in E'(Y).

Theorem 2.1. The mapping

\[
f(n, x) \longmapsto \Big[ \frac{f(n, x) \circledast d_n\omega}{\omega} \Big]
\]

is a continuous embedding of E'(Y) into B_s.

Proof: see [7, Theorem 3.1] for a detailed proof.

3 STRUCTURE OF USUAL BOEHMIANS

One of the youngest generalizations of functions, and more particularly of distributions, is the theory of Boehmians. The idea of the construction of Boehmians was initiated by the concept of regular operators introduced by Boehme [6]. Regular operators form a subalgebra of the field of Mikusinski operators, and they include only those functions whose support is bounded from the left. In the concrete case, the space of Boehmians contains all regular operators, all distributions and some objects which are neither operators nor distributions.

The construction of Boehmians is similar to the construction of the field of quotients and, in some cases, it gives just the field of quotients. On the other hand, the construction is possible even where there are zero divisors, such as the space C of continuous functions with the operations of pointwise addition and convolution.

Let G be a linear space and S be a subspace of G. We assume that to each pair of elements f ∈ G and ω ∈ S a product f • ω is assigned such that the following conditions are satisfied:

(1) If ω, ψ ∈ S, then ω ∗ ψ ∈ S and ω ∗ ψ = ψ ∗ ω.

(2) If f ∈ G and ω, ψ ∈ S, then (f • ω) • ψ = f • (ω ∗ ψ).

(3) If f, g ∈ G, ω ∈ S and λ ∈ R, then (f + g) • ω = f • ω + g • ω and λ(f • ω) = (λf) • ω.

Let Δ be a family of sequences from S such that:


(Δ1) If f, g ∈ G, (φ_n) ∈ Δ and f • φ_n = g • φ_n (n = 1, 2, ...), then f = g.

(Δ2) If (ω_n), (φ_n) ∈ Δ, then (ω_n ∗ φ_n) ∈ Δ.

Elements of Δ will be called delta sequences. Consider the class A of pairs of sequences defined by

\[
A = \big\{ ((f_n), (\omega_n)) : (f_n) \subset G^{\mathbb{N}},\ (\omega_n) \in \Delta \big\},
\]

for each n ∈ N.

An element ((f_n), (ω_n)) ∈ A is called a quotient of sequences, denoted by f_n/ω_n, if

\[
f_n \bullet \omega_m = f_m \bullet \omega_n, \qquad \forall n, m \in \mathbb{N}.
\]

Two quotients of sequences f_n/ω_n and g_n/ψ_n are said to be equivalent, f_n/ω_n ∼ g_n/ψ_n, if

\[
f_n \bullet \psi_m = g_m \bullet \omega_n, \qquad \forall n, m \in \mathbb{N}.
\]

The relation ∼ is an equivalence relation on A and hence splits A into equivalence classes. The equivalence class containing f_n/ω_n is denoted by [f_n/ω_n]. These equivalence classes are called Boehmians, or usual Boehmians, and the space of all Boehmians is denoted by B.

The sum of two Boehmians and multiplication by a scalar can be defined in a natural way:

\[
\Big[ \frac{f_n}{\omega_n} \Big] + \Big[ \frac{g_n}{\psi_n} \Big] = \Big[ \frac{f_n \bullet \psi_n + g_n \bullet \omega_n}{\omega_n * \psi_n} \Big] \quad \text{and} \quad \lambda \Big[ \frac{f_n}{\omega_n} \Big] = \Big[ \frac{\lambda f_n}{\omega_n} \Big], \qquad \lambda \in \mathbb{C},
\]

the space of complex numbers.

The operation • and the differentiation are defined by

\[
\Big[ \frac{f_n}{\omega_n} \Big] \bullet \Big[ \frac{g_n}{\psi_n} \Big] = \Big[ \frac{f_n \bullet g_n}{\omega_n * \psi_n} \Big] \quad \text{and} \quad D \Big[ \frac{f_n}{\omega_n} \Big] = \Big[ \frac{D f_n}{\omega_n} \Big].
\]

Many a time, G is equipped with a notion of convergence. The relationship between the notion of convergence and • is given by:

(4) If f_n → f as n → ∞ in G and ω ∈ S is any fixed element, then f_n • ω → f • ω in G (as n → ∞).

(5) If f_n → f as n → ∞ in G and (ω_n) ∈ Δ, then f_n • ω_n → f in G (as n → ∞).

The operation • can be extended to B × S as follows: if [f_n/ω_n] ∈ B and ω ∈ S, then

\[
\Big[ \frac{f_n}{\omega_n} \Big] \bullet \omega = \Big[ \frac{f_n \bullet \omega}{\omega_n} \Big].
\]

In B, two types of convergence, δ-convergence and Δ-convergence, are defined as follows:

δ-convergence: A sequence of Boehmians (β_n) in B is said to be δ-convergent to a Boehmian β in B, denoted by β_n →^δ β, if there exists a delta sequence (ω_n) such that

(β_n • ω_n), (β • ω_n) ∈ G, ∀ k, n ∈ N,


and (β_n • ω_k) → (β • ω_k) as n → ∞, in G, for every k ∈ N.

The following is an equivalent statement of δ-convergence:

Theorem 3.1. β_n →^δ β (n → ∞) in B if and only if there are f_{n,k}, f_k ∈ G and (ω_k) ∈ Δ such that β_n = [f_{n,k}/ω_k], β = [f_k/ω_k] and, for each k ∈ N, f_{n,k} → f_k as n → ∞ in G.

Δ-convergence: A sequence of Boehmians (β_n) in B is said to be Δ-convergent to a Boehmian β in B, denoted by β_n →^Δ β, if there exists a (ω_n) ∈ Δ such that (β_n − β) • ω_n ∈ G, ∀ n ∈ N, and (β_n − β) • ω_n → 0 as n → ∞ in G. See [1-7, 15-17, 19] for further investigation.

4 THE IMAGE SPACE OF BOEHMIANS

The function ℓ(n, σ) is a member of L(Y), Y = N × R, if there is f(n, x) ∈ E'(Y) such that ℓ(n, σ) := (Tf)(n, σ) := F(n, σ).

A sequence (ℓ_ν(n, σ))_{ν=1}^{∞} → ℓ(n, σ) as ν → ∞ in L(Y) if and only if there are f_ν(n, x), f(n, x) in E'(Y) such that

f_ν(n, x) → f(n, x) as ν → ∞,

where ℓ_ν(n, σ) = (Tf_ν)(n, σ) and ℓ(n, σ) = (Tf)(n, σ).

For L(Y) and E_δ(R), the convolution product ⊚ is interpreted to mean

\[
(\ell \circledcirc \omega)(n, \sigma) = \int_{\mathbb{R}} \ell(n, \sigma - y)\, d_m\omega(y)\, dy, \qquad n, m \in \mathbb{N}, \qquad (8)
\]

where ℓ = Tf, ω ∈ E_δ(R). Let

where ` = Tf; ! 2 E (R) :Let

~E (R) = f(dn!) : ! 2 E (R) ; supp dn! ! 0 as n!1g

then ~E (R) is a family of delta sequences.

Following results constitute the space BL; (E;f) ; ~E;g

of general Boehmians.

Lemma 4.1. Let f ∈ E'(Y) and ω ∈ E_δ(R); then the following holds:

\[
T\big( f(n, x) \circledast d_m\omega(x) \big)(n, \sigma) = (\ell \circledcirc \omega)(n, \sigma),
\]

where ℓ = Tf.

Proof. Let f ∈ E'(Y) and ω ∈ E_δ(R); then, using Equ. (4), we get

\[
T\big( f(n, x) \circledast d_m\omega \big)(n, \sigma) = \big\langle (f(n, x) \circledast d_m\omega)(x),\ \exp\big( i\pi(\sigma - x)^2 \big) \big\rangle.
\]


Definition 1.3 implies

\[
T\big( f(n, x) \circledast d_m\omega \big)(n, \sigma) = \Big\langle f(n, x),\ \big\langle d_m\omega(y),\ \exp\big( i\pi(\sigma - (x + y))^2 \big) \big\rangle \Big\rangle = \Big\langle f(n, x),\ \big\langle d_m\omega(y),\ \exp\big( i\pi((\sigma - y) - x)^2 \big) \big\rangle \Big\rangle.
\]

That is,

\[
T\big( f(n, x) \circledast d_m\omega \big)(n, \sigma) = \int_{\mathbb{R}} \big\langle f(n, x),\ \exp\big( i\pi((\sigma - y) - x)^2 \big) \big\rangle\, d_m\omega(y)\, dy.
\]

Equ. (8) gives

\[
T\big( f(n, x) \circledast d_m\omega \big)(n, \sigma) = \int_{\mathbb{R}} \ell(n, \sigma - y)\, d_m\omega(y)\, dy = (\ell \circledcirc \omega)(n, \sigma),
\]

where ℓ = Tf.

This completes the proof of the lemma.

Lemma 4.2. Let ℓ ∈ L(Y) and ω ∈ E_δ(R); then ℓ ⊚ ω ∈ L(Y).

Proof. Let f(n, x) ∈ E'(Y) be such that ℓ(n, σ) = Tf(n, σ). From Lemma 4.1 we see that

\[
(\ell \circledcirc \omega)(n, \sigma) = T\big( f(n, x) \circledast d_m\omega \big)(n, \sigma),
\]

m, n ∈ N. The fact that ω ∈ E_δ(R) implies (d_m\omega) ∈ E_δ(R), by [7, p. 886, (2.4)]. Hence,

f(n, x) ⊛ d_mω ∈ E'(Y).

Thus, ℓ ⊚ ω ∈ L(Y).

This completes the proof of the lemma.

Lemma 4.3. Let ℓ(n, σ) ∈ L(Y) and ω, ψ ∈ E_δ(R); then ℓ ⊚ (ω ⊛ ψ) = (ℓ ⊚ ω) ⊛ ψ.

Proof. The hypothesis of the lemma and the Leibniz theorem yield

\[
(\ell \circledcirc (\omega \circledast \psi))(n, \sigma) = \int_{\mathbb{R}} \ell(n, \sigma - y)\, d_m(\omega \circledast \psi)(y)\, dy = \int_{\mathbb{R}} \int_{\mathbb{R}} \ell(n, \sigma - y)\, m\,\omega(m y - t)\, \psi(t)\, dy\, dt.
\]

The substitution m y − t = x implies

\[
(\ell \circledcirc (\omega \circledast \psi))(n, \sigma) = \int_{\mathbb{R}} \Big( \int_{\mathbb{R}} \ell\Big( n,\ \sigma - \frac{t + x}{m} \Big)\, \omega(x)\, dx \Big)\, \psi(t)\, dt.
\]

The substitution x = m z implies

\[
(\ell \circledcirc (\omega \circledast \psi))(n, \sigma) = \int_{\mathbb{R}} \Big( \int_{\mathbb{R}} \ell\Big( n,\ \sigma - z - \frac{t}{m} \Big)\, d_m\omega(z)\, dz \Big)\, \psi(t)\, dt.
\]


Once again, the substitution t = m r, together with the Leibniz theorem, implies

\[
(\ell \circledcirc (\omega \circledast \psi))(n, \sigma) = \int_{\mathbb{R}} \int_{\mathbb{R}} \ell\big( n,\ (\sigma - r) - z \big)\, d_m\omega(z)\, dz\, d_m\psi(r)\, dr.
\]

Hence, by Equ. (8), we get

\[
(\ell \circledcirc (\omega \circledast \psi))(n, \sigma) = \int_{\mathbb{R}} (\ell \circledcirc \omega)(n, \sigma - r)\, d_m\psi(r)\, dr = \big( (\ell \circledcirc \omega) \circledast \psi \big)(n, \sigma).
\]

The lemma is therefore completely proved.

Lemma 4.4. Let ℓ_1, ℓ_2 ∈ L(Y) and ω ∈ E_δ(R); then

(i) (ℓ_1 + ℓ_2) ⊚ ω = ℓ_1 ⊚ ω + ℓ_2 ⊚ ω;

(ii) k(ℓ_1 ⊚ ω) = (kℓ_1) ⊚ ω = ℓ_1 ⊚ (kω).

The proof is straightforward from the properties of integral operators.

Lemma 4.5. Let ℓ_1, ℓ_2 ∈ L(Y), (d_nω) ∈ Ẽ(R) and ℓ_1 ⊚ ω = ℓ_2 ⊚ ω; then ℓ_1 = ℓ_2 in L(Y).

Proof. For every ℓ_1(n, σ), ℓ_2(m, σ) ∈ L(Y) there are corresponding distributions f_1(n, x), f_2(m, x) ∈ E'(Y) such that ℓ_1 = Tf_1 and ℓ_2 = Tf_2. Hence, the hypothesis of the lemma and the linearity of the distributional Fresnel transform imply T(f_1(n, x) − f_2(m, x)) ⊚ ω = 0, ∀ n, m ∈ N. Therefore T(f_1(n, x) − f_2(m, x)) = 0, n, m ∈ N, x ∈ R, in L(Y). Thus ℓ_1(n, σ) = ℓ_2(m, σ) in L(Y), ∀ n, m ∈ N, σ ∈ R.

Hence the lemma.

Lemma 4.6. Let ℓ_ν → ℓ in L(Y) as ν → ∞ and ω ∈ E_δ(R); then ℓ_ν ⊚ ω → ℓ ⊚ ω in L(Y) as ν → ∞.

The proof is a straightforward consequence of the definitions.

Lemma 4.7. Let ℓ_ν → ℓ in L(Y) as ν → ∞ and (d_nω) ∈ Ẽ(R); then ℓ_ν ⊚ ω → ℓ as ν → ∞.

Proof. We have ℓ_ν(n, σ) ⊚ ω − ℓ(n, σ) = Tf_ν(n, σ) ⊚ ω − Tf(n, σ), for some f_ν(n, x), f(n, x) ∈ E'(Y). Hence, Lemma 4.1 implies

ℓ_ν(n, σ) ⊚ ω − ℓ(n, σ) = T(f_ν(n, x) ⊛ d_rω)(n, σ) − Tf(n, σ), for some r ∈ N.

Once again, the linearity of T and the fact that (d_rω) is a delta sequence yield

\[
\ell_\nu(n, \sigma) \circledcirc \omega - \ell(n, \sigma) = T\big( f_\nu(n, x) \circledast d_r\omega - f(n, x) \big)(n, \sigma) \to T\big( f(n, x) - f(n, x) \big)(n, \sigma) \quad \text{as } r \to \infty.
\]

Hence ℓ_ν(n, σ) ⊚ ω − ℓ(n, σ) → 0 as ν → ∞.

The proof is therefore completed.


The Boehmian space B(L, (E, ⊛), Ẽ, ⊚) is thus constructed.

A typical element in B(L, (E, ⊛), Ẽ, ⊚) is denoted by [ℓ(n, σ)/d_nω].

Two Boehmians [ℓ_1(n, σ)/d_nω] and [ℓ_2(m, σ)/d_mψ] are said to be equal in B(L, (E, ⊛), Ẽ, ⊚) if and only if

\[
\ell_1(n, \sigma) \circledcirc \psi = \ell_2(m, \sigma) \circledcirc \omega. \qquad (9)
\]

In B(L, (E, ⊛), Ẽ, ⊚), we define addition, scalar multiplication, differentiation and convolution between Boehmians by

\[
\Big[ \frac{\ell_1(n, \sigma)}{d_n\omega} \Big] + \Big[ \frac{\ell_2(m, \sigma)}{d_m\psi} \Big] = \Big[ \frac{\ell_1(n, \sigma) \circledcirc \psi + \ell_2(m, \sigma) \circledcirc \omega}{\omega \circledcirc \psi} \Big],
\]

\[
k \Big[ \frac{\ell(n, \sigma)}{d_n\omega} \Big] = \Big[ \frac{k\,\ell(n, \sigma)}{d_n\omega} \Big], \qquad D^k \Big[ \frac{\ell(n, \sigma)}{d_n\omega} \Big] = \Big[ \frac{D^k \ell(n, \sigma)}{d_n\omega} \Big]
\]

and

\[
\Big[ \frac{\ell(n, \sigma)}{d_n\omega} \Big] \circledcirc \psi = \Big[ \frac{\ell(n, \sigma) \circledcirc \psi}{d_n\omega} \Big],
\]

respectively.

Definition 4.8 (δ-convergence). β_ν →^δ β in B(L, (E, ⊛), Ẽ, ⊚) if there exists a delta sequence (d_nω) ∈ Ẽ(R) such that

(β_ν ⊚ ω_n), (β ⊚ ω_n) ∈ L, ∀ ν, n ∈ N,

and

(β_ν ⊚ ω_n) → (β ⊚ ω_n) as ν → ∞, in L, for every n ∈ N.

Theorem 4.9. β_ν →^δ β (ν → ∞) in B(L, (E, ⊛), Ẽ, ⊚) if and only if there are ℓ_ν(n, σ), ℓ(n, σ) ∈ L and (d_nω) ∈ Ẽ(R) such that

\[
\beta_\nu = \Big[ \frac{\ell_\nu(n, \sigma)}{d_n\omega} \Big], \qquad \beta = \Big[ \frac{\ell(n, \sigma)}{d_n\omega} \Big],
\]

and, for each n ∈ N, ℓ_ν(n, σ) → ℓ(n, σ) as ν → ∞ in L.

Definition 4.10 (Δ-convergence). (β_ν) →^Δ β in B(L, (E, ⊛), Ẽ, ⊚) if there exists a (d_nω) ∈ Ẽ(R) such that (β_ν − β) ⊚ ω_ν ∈ L, ∀ ν ∈ N, and (β_ν − β) ⊚ ω_ν → 0 as ν → ∞ in L.


10 S.K.Q. Al-Omari

5 FRESNEL TRANSFORMS TO BOEHMIANS

This section is devoted to extending the Fresnel transform to Boehmians in $\mathcal{B}$. Let $\left[\dfrac{f(n,x)}{\omega_n}\right] \in \mathcal{B}$; then the extended Fresnel transform of $\left[\dfrac{f(n,x)}{\omega_n}\right]$ is defined as
$$\tilde{T}\left[\frac{f(n,x)}{\omega_n}\right] = \left[\frac{\ell(n,\xi)}{\omega_n}\right] \in \mathcal{B}_{\mathcal{L}}, \qquad (10)$$
where $\ell = Tf$.
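For orientation, we recall the classical one-parameter (diffraction) Fresnel transform underlying $T$; the normalization below follows the generalized Fresnel transform of James and Agarwal [8] only up to constant factors, so it should be read as an illustrative form rather than the exact kernel used in this paper.

```latex
% One-dimensional Fresnel (diffraction) transform with parameter \lambda > 0;
% the normalization is illustrative and may differ from the kernel used in the text.
(Tf)(\xi) \;=\; \frac{1}{\sqrt{2\pi i\,\lambda}}
  \int_{-\infty}^{\infty} f(x)\,
  \exp\!\Big(\frac{i\,(\xi - x)^{2}}{2\lambda}\Big)\,dx .
```

Since the kernel is smooth, for distributions $f \in E'(\mathbb{R})$ the integral is interpreted by duality, $(Tf)(\xi) = \langle f, K(\xi - \cdot) \rangle$, which is what makes the extension to compactly supported distributions, and then to Boehmians, possible.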

Theorem 5.1. The mapping $\tilde{T} : \mathcal{B} \to \mathcal{B}_{\mathcal{L}}$ is well-defined.

Proof. Let $\left[\dfrac{f(n,x)}{\omega_n}\right] = \left[\dfrac{g(m,x)}{d_m}\right] \in \mathcal{B}$; then, using Eq. (7), yields
$$f(n,x) \star d_m = g(m,x) \star \omega_n. \qquad (11)$$
Applying the Fresnel transform to both sides of Eq. (11) and applying Lemma 4.1 imply $\ell_1(n,\xi) \bullet d_m = \ell_2(m,\xi) \bullet \omega_n$, where $\ell_1 = Tf$, $\ell_2 = Tg$, $f, g \in E'(\mathbb{R})$. Hence, from Eq. (9), we obtain
$$\left[\frac{\ell_1(n,\xi)}{\omega_n}\right] = \left[\frac{\ell_2(m,\xi)}{d_m}\right]$$
in the space $\mathcal{B}_{\mathcal{L}}$.

The proof is completed.

Theorem 5.2. The mapping $\tilde{T} : \mathcal{B} \to \mathcal{B}_{\mathcal{L}}$ is linear.

Proof. Let $\left[\dfrac{f(n,x)}{\omega_n}\right], \left[\dfrac{g(m,x)}{d_m}\right] \in \mathcal{B}$; then
$$\left[\frac{f(n,x)}{\omega_n}\right] + \left[\frac{g(m,x)}{d_m}\right] = \left[\frac{f(n,x) \star d_m + g(m,x) \star \omega_n}{\omega_n \star d_m}\right]. \qquad (12)$$
Employing the Fresnel transform for Eq. (12) yields
$$\tilde{T}\left(\left[\frac{f(n,x)}{\omega_n}\right] + \left[\frac{g(m,x)}{d_m}\right]\right) = \left[\frac{\ell_1(n,\xi) \bullet d_m + \ell_2(m,\xi) \bullet \omega_n}{\omega_n \bullet d_m}\right] = \left[\frac{\ell_1(n,\xi)}{\omega_n}\right] + \left[\frac{\ell_2(m,\xi)}{d_m}\right].$$
By using Lemma 4.1, we get
$$\tilde{T}\left(\left[\frac{f(n,x)}{\omega_n}\right] + \left[\frac{g(m,x)}{d_m}\right]\right) = \tilde{T}\left[\frac{f(n,x)}{\omega_n}\right] + \tilde{T}\left[\frac{g(m,x)}{d_m}\right],$$
where $\ell_1 = Tf$, $\ell_2 = Tg$. Further, for each $k \in \mathbb{R}$, we have
$$k\,\tilde{T}\left[\frac{f(n,x)}{\omega_n}\right] = \tilde{T}\left(k\left[\frac{f(n,x)}{\omega_n}\right]\right) = \tilde{T}\left[\frac{k f(n,x)}{\omega_n}\right] = \left[\frac{k\,\ell_1(n,\xi)}{\omega_n}\right], \qquad \ell_1 = Tf.$$
Hence the theorem is proved.

Theorem 5.3. $\tilde{T} : \mathcal{B} \to \mathcal{B}_{\mathcal{L}}$ is a one-one mapping.

Proof. Let $\left[\dfrac{f(n,x)}{\omega_n}\right], \left[\dfrac{g(m,x)}{d_m}\right] \in \mathcal{B}$ be such that
$$\tilde{T}\left[\frac{f(n,x)}{\omega_n}\right] = \tilde{T}\left[\frac{g(m,x)}{d_m}\right] \text{ in } \mathcal{B}_{\mathcal{L}}.$$
Eq. (10) then yields
$$\left[\frac{\ell_1(n,\xi)}{\omega_n}\right] = \left[\frac{\ell_2(m,\xi)}{d_m}\right],$$
where $\ell_1(n,\xi) = Tf(n,x)$ and $\ell_2(m,\xi) = Tg(m,x)$. From Eq. (9) we get $\ell_1(n,\xi) \bullet d_m = \ell_2(m,\xi) \bullet \omega_n$. Applying Lemma 4.1 leads to
$$f(n,x) \star d_m = g(m,x) \star \omega_n.$$
Hence, upon considering Eq. (7), the proof of our theorem is completed.

Definition 5.4. Let $\left[\dfrac{\ell(n,\xi)}{\omega_n}\right] \in \mathcal{B}_{\mathcal{L}}$; then we define its inverse Fresnel transform by
$$\tilde{T}^{-1}\left[\frac{\ell(n,\xi)}{\omega_n}\right] = \left[\frac{f(n,x)}{\omega_n}\right],$$
where $\ell = Tf$.

Theorem 5.5. The mapping $\tilde{T}^{-1} : \mathcal{B}_{\mathcal{L}} \to \mathcal{B}$ is well defined.

Proof. Let $\left[\dfrac{\ell_1(n,\xi)}{\omega_n}\right] = \left[\dfrac{\ell_2(m,\xi)}{d_m}\right] \in \mathcal{B}_{\mathcal{L}}$; then $\ell_1(n,\xi) \bullet d_m = \ell_2(m,\xi) \bullet \omega_n$. By virtue of Lemma 4.1 we get
$$T\big(f(n,x) \star d_m\big) = T\big(g(m,x) \star \omega_n\big),$$
for some $f$ and $g$ in $E'(\mathbb{R})$, where $\ell_1 = Tf$, $\ell_2 = Tg$. The property that $T$ is one-one results in $f(n,x) \star d_m = g(m,x) \star \omega_n$. Therefore,
$$\left[\frac{f(n,x)}{\omega_n}\right] = \left[\frac{g(m,x)}{d_m}\right] \text{ in } \mathcal{B}.$$
The proof is therefore completed.

Theorem 5.6. The mapping $\tilde{T}^{-1} : \mathcal{B}_{\mathcal{L}} \to \mathcal{B}$ is linear.

The proof of this theorem is analogous to that of Theorem 5.2; the details are thus omitted.

Theorem 5.7. $\tilde{T}^{-1} : \mathcal{B}_{\mathcal{L}} \to \mathcal{B}$ is continuous with respect to δ-convergence.

Proof. Let $\beta_k \xrightarrow{\delta} \beta$ in $\mathcal{B}_{\mathcal{L}}$. Then, using Theorem 4.9, there are $\ell_k(n,\xi), \ell(n,\xi) \in \mathcal{L}$ and $(\omega_n) \in \tilde{E}(\mathbb{R})$ such that
$$\beta_k = \left[\frac{\ell_k(n,\xi)}{\omega_n}\right], \qquad \beta = \left[\frac{\ell(n,\xi)}{\omega_n}\right],$$
and $\ell_k(n,\xi) \to \ell(n,\xi)$ as $k \to \infty$ in $\mathcal{L}$. Hence, for some $f_k(n,x), f(n,x) \in E'(Y)$, where $\ell_k(n,\xi) = Tf_k(n,\xi)$ and $\ell(n,\xi) = Tf(n,\xi)$, we have $f_k(n,x) \to f(n,x)$ as $k \to \infty$ in $E'(Y)$. This implies
$$\left[\frac{f_k(n,x)}{\omega_n}\right] \to \left[\frac{f(n,x)}{\omega_n}\right] \text{ in } \mathcal{B}.$$
That is, $\tilde{T}^{-1}\beta_k \to \tilde{T}^{-1}\beta$ as $k \to \infty$.

This completes the proof of the theorem.

Theorem 5.8. $\tilde{T}^{-1} : \mathcal{B}_{\mathcal{L}} \to \mathcal{B}$ is continuous with respect to Δ-convergence.

Proof. Let $\beta_k \xrightarrow{\Delta} \beta$ as $k \to \infty$ in $\mathcal{B}_{\mathcal{L}}$; then we find $\ell_k(n,\xi) \in \mathcal{L}$ and $(\omega_n) \in \tilde{E}(\mathbb{R})$ such that
$$(\beta_k - \beta) \bullet \omega_k = \left[\frac{\ell_k(n,\xi) \bullet \omega_n}{\omega_n}\right]$$
and $\ell_k(n,\xi) \to 0$ as $k \to \infty$, where $\ell_k(n,\xi) = Tf_k(n,\xi)$ for some $f_k(n,x) \in E'(Y)$. By the aid of Lemma 4.1 we get
$$(\beta_k - \beta) \bullet \omega_k = \left[\frac{T\big(f_k(n,x) \star \omega_n\big)}{\omega_n}\right].$$
Employing $\tilde{T}^{-1}$ yields
$$\tilde{T}^{-1}\big((\beta_k - \beta) \bullet \omega_k\big) = \left[\frac{f_k(n,x) \star \omega_n}{\omega_n}\right] = f_k(n,x) \to 0$$
as $k \to \infty$, by [7, Theorem 3.1]. Thus $\tilde{T}^{-1}\beta_k \xrightarrow{\Delta} \tilde{T}^{-1}\beta$ in $\mathcal{B}$.

The theorem is proved.

Theorem 5.9. $\tilde{T} : \mathcal{B} \to \mathcal{B}_{\mathcal{L}}$ is continuous with respect to δ-convergence.

Proof. Let $\beta_k \xrightarrow{\delta} \beta$ in $\mathcal{B}$; then for some $f_k(n,x), f(n,x) \in E'(Y)$ we have
$$\beta_k = \left[\frac{f_k(n,x)}{\omega_n}\right], \qquad \beta = \left[\frac{f(n,x)}{\omega_n}\right],$$
and $f_k(n,x) \to f(n,x)$ as $k \to \infty$. Hence, applying the Fresnel transform implies $\ell_k(n,\xi) \to \ell(n,\xi)$ as $k \to \infty$. Therefore $\ell_k(n,\xi) \bullet \omega_n \to \ell(n,\xi) \bullet \omega_n$ as $k \to \infty$, or equivalently,
$$\left[\frac{\ell_k(n,\xi)}{\omega_n}\right] \to \left[\frac{\ell(n,\xi)}{\omega_n}\right]$$
as $k \to \infty$. Thus $\tilde{T}\beta_k \xrightarrow{\delta} \tilde{T}\beta$ as $k \to \infty$. The proof is, therefore, completed.

It can further be observed that:

Theorem 5.10. $\tilde{T} : \mathcal{B} \to \mathcal{B}_{\mathcal{L}}$ is continuous with respect to Δ-convergence.

Proof. Let $\beta_k \xrightarrow{\Delta} \beta$ in $\mathcal{B}$ as $k \to \infty$. Then we find $f_k(n,x) \in E'(Y)$ and $(\omega_n) \in \tilde{E}(\mathbb{R})$ such that
$$(\beta_k - \beta) \star \omega_k = \left[\frac{f_k(n,x) \star \omega_n}{\omega_n}\right]$$
and $f_k(n,x) \to 0$ as $k \to \infty$. Hence, we have
$$\tilde{T}\big((\beta_k - \beta) \star \omega_k\big) = \left[\frac{T\big(f_k(n,x) \star \omega_n\big)}{\omega_n}\right] = \left[\frac{\ell_k(n,\xi) \bullet \omega_n}{\omega_n}\right] = \ell_k(n,\xi) \to 0$$
as $k \to \infty$ in $E'(Y)$. Therefore
$$\tilde{T}\big((\beta_k - \beta) \star \omega_k\big) = \big(\tilde{T}\beta_k - \tilde{T}\beta\big) \bullet \omega_k \to 0 \text{ as } k \to \infty.$$
Hence, $\tilde{T}\beta_k \xrightarrow{\Delta} \tilde{T}\beta$ as $k \to \infty$ in $\mathcal{B}_{\mathcal{L}}$.

The theorem is proved.

Conflict of Interests. The author declares that there is no conflict of interests regarding the publication of this paper.

References

[1] Al-Omari, S. K. Q., Loonker, D., Banerji, P. K. and Kalla, S. L. (2008). Fourier sine (cosine) transform for ultradistributions and their extensions to tempered and ultraBoehmian spaces, Integ. Trans. Spl. Funct. 19(6), 453-462.

[2] Al-Omari, S. K. Q. and Kilicman, A. (2012). On the generalized Hartley-Hilbert and Fourier-Hilbert transforms, Advances in Difference Equations, 2012:232, doi:10.1186/1687-1847-2012-232.

[3] Al-Omari, S. K. Q. (2011). On the distributional Mellin transformation and its extension to Boehmian spaces, Int. J. Contemp. Math. Sciences, 6(17), 801-810.

[4] Dill, E. R. and Mikusinski, P. (1993). Strong Boehmians, Proc. Amer. Math. Soc. 119, 885-888.

[5] Banerji, P. K., Al-Omari, S. K. and Debnath, L. (2006). Tempered distributional Fourier sine (cosine) transform, Integ. Trans. Spl. Funct. 17(11), 759-768.

[6] Boehme, T. K. (1973). The support of Mikusinski operators, Trans. Amer. Math. Soc. 176, 319-334.

[7] Dill, E. R. and Mikusinski, P. (1993). Strong Boehmians, Proc. Amer. Math. Soc. 119(3), 885-888.

[8] James, D. F. V. and Agarwal, G. S. (1996). The generalized Fresnel transform and its application to optics, Optics Communications 126, 207-212.

[9] Fan, H.-Y. and Lu, H.-L. (2006). Wave-function transformations by general SU(1,1) single-mode squeezing and analogy to Fresnel transformations in wave optics, Optics Commun. 258, 51-58.

[10] Pandey, J. N. (1972). On the Stieltjes transform of generalized functions, Proc. Cambridge Phil. Soc. 71, 85-96.

[11] Mikusinski, P. (1987). Fourier transform for integrable Boehmians, Rocky Mountain J. Math. 17(3), 577-582.

[12] Mikusinski, P. (1983). Convergence of Boehmians, Japan. J. Math. 9, 159-179.

[13] Pathak, R. S. (1997). Integral Transforms of Generalized Functions and Their Applications, Gordon and Breach Science Publishers, Australia, Canada, India, Japan.

[14] Pathak, R. S. (1976). Distributional Stieltjes transformation, Proc. Edinburgh Math. Soc. Ser. 20(1), 15-22.

[15] Roopkumar, R. (2007). Stieltjes transform for Boehmians, Integral Transf. Spl. Funct. 18(11), 845-853.

[16] Al-Omari, S. K. Q. and Kilicman, A. (2012). On diffraction Fresnel transforms for Boehmians, Abstract and Applied Analysis, Volume 2011, Article ID 712746.

[17] Karunakaran, V. and Roopkumar, R. (2000). Boehmians and their Hilbert transforms, Integ. Trans. Spl. Funct. 13, 131-141.

[18] Zemanian, A. H. (1987). Generalized Integral Transformations, Dover Publications, Inc., New York. First published by Interscience Publishers, New York (1968).

[19] Al-Omari, S. K. Q. and Kilicman, A. (2012). Note on Boehmians for a class of optical Fresnel wavelet transforms, Journal of Function Spaces and Applications, Volume 2012, Article ID 405368, doi:10.1155/2012/405368.

[20] Al-Omari, S. K. Q. (2013). Hartley transforms on a certain space of generalized functions, Georgian Mathematical Journal, 20(3), 415-426.

[21] Al-Omari, S. K. Q. (2014). On the Sumudu transform and its extension to a class of Boehmians, ISRN Mathematical Analysis, Volume 2014, Article ID 463901, 1-6.

Resonance Problems for Nonlinear Elliptic Equations with

Nonlinear Boundary Conditions

N. Mavinga

Department of Mathematics & Statistics, Swarthmore College,

Swarthmore, PA 19081-1390

E-mail: [email protected]

and

M. N. Nkashama

Department of Mathematics, University of Alabama at Birmingham,

Birmingham, AL 35294-1170

E-mail: [email protected]

Abstract

We study the solvability of nonlinear second order elliptic partial differential equations with nonlinear boundary conditions. We impose asymptotic conditions on both nonlinearities, in the differential equation and on the boundary, in such a way that resonance occurs at a generalized eigenvalue; that is, an eigenvalue of the linear problem in which the spectral parameter appears both in the differential equation and on the boundary. The proofs are based on variational techniques and topological degree arguments.

Keywords: nonlinear elliptic equations, nonlinear boundary conditions, weighted Robin-Neumann-Steklov eigenproblem, resonance conditions.

1 Introduction

In this paper we prove the existence of (weak) solutions to nonlinear second order elliptic partial differential equations with (possibly) nonlinear boundary conditions
$$-\Delta u + c(x)u = f(x,u) \ \text{in}\ \Omega, \qquad \frac{\partial u}{\partial\nu} + \sigma(x)u = g(x,u) \ \text{on}\ \partial\Omega, \qquad (1)$$
where the nonlinear reaction-function f(x, u) and the nonlinear boundary function g(x, u) interact, in some sense, with the generalized spectrum of the following linear problem (with possibly singular (m, ρ)-weights)
$$-\Delta u + c(x)u = \mu m(x)u \ \text{in}\ \Omega, \qquad \frac{\partial u}{\partial\nu} + \sigma(x)u = \mu\rho(x)u \ \text{on}\ \partial\Omega. \qquad (2)$$

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NOS. 3-4, 281-295, 2015. COPYRIGHT 2015 EUDOXUS PRESS, LLC

Notice that the eigenproblem (2) includes as special cases the weighted Steklov eigenproblem (when m ≡ 0 and ρ ≢ 0), which was considered in [3, 4, 6, 17, 24], as well as the weighted Robin-Neumann eigenproblem (when ρ ≡ 0 and m ≢ 0); the latter is also referred to in the literature as the Neumann or regular oblique derivative boundary condition (see e.g. [1, 16] and references therein). When m ≢ 0 and ρ ≢ 0, we have the eigenparameter µ both in the differential equation and in the boundary condition; we refer for instance to [5, 7, 8, 18]. Unlike previous results in the literature, what sets our results apart is that we compare both the reaction nonlinearity f in the differential equation and the boundary nonlinearity g in Eq. (1) with higher eigenvalues of the spectrum of problem (2), where the spectral parameter is both in the differential equation and on the boundary (with weights).

The nonlinear problem (1) has received much attention in recent years. This problem (and its parabolic analog) has been studied in [9, 22] as a model for heat conduction in a body where cooling and heating appear inside and at the boundary at a rate proportional to a power of u. Problem (1) has also been considerably studied by many authors in the framework of the sub- and super-solutions method; we refer e.g. to [1, 2, 21] and references therein. Since it is based on (so-called) comparison techniques, the (ordered) sub-super solutions method does not apply when the nonlinearities are compared with higher eigenvalues.

After Landesman-Lazer [14], much work has been devoted to the study of the solvability of elliptic boundary value problems (with linear homogeneous boundary conditions) where the reaction nonlinearity in the differential equation interacts with the eigenvalues of the corresponding linear differential equation with linear homogeneous boundary conditions (resonance and nonresonance problems). For some recent results in this direction we refer e.g. to [11, 12, 13, 19, 20, 23] and references therein.

A few results on a disk (n = 2) were obtained in the case of linear elliptic equations with nonlinear boundary conditions, where the nonlinearity on the boundary was compared with the first Steklov eigenvalue (that is, m ≡ 0 in Eq. (2)). We refer to Cushing [10] and Klingelhofer [15] (the results in [15] were significantly generalized to higher dimensions in [1] in the framework of the sub- and super-solutions method, as aforementioned). In [3, 17] the resonance problem for elliptic equations with nonlinear boundary conditions was analyzed using bifurcation theory (see Remark 3.6 herein). More recently, the authors in [19] proved nonresonance results for problem (1) in which the nonlinearities interact, in some sense, only with either the Steklov or the Neumann spectrum. In a very recent paper of one of the authors [18], nonresonance results for problem (1) were proved in which both nonlinearities, in the differential equation and on the boundary, interact, in some sense, with the generalized spectrum of problem (2).

It is our purpose in this paper to establish existence results for problem (1) by imposing asymptotic conditions on both nonlinearities, in the differential equation and on the boundary, in such a way that resonance occurs at a generalized eigenvalue of problem (2). Our results generalize earlier ones in [1, 2, 12, 23], and in some instances some of those in [3, 11, 17], which were obtained in a different setting.

The content of this paper is organized as follows. In Section 2, we state and prove some preliminary results that we shall need in the sequel. Section 3 is devoted to the statement and proof of existence results for Eq. (1) at resonance. The proof of the main result is based on variational and topological degree arguments. We conclude the paper with some remarks which show (among other things) how our result can be extended to problems with variable

N. Mavinga and M. N. Nkashama


coefficients.

Throughout this paper we assume that Ω is a bounded domain in $\mathbb{R}^n$ (n ≥ 2) with boundary ∂Ω of class $C^{0,1}$, and ∂/∂ν is the (unit) outer normal derivative. By a weak solution of Eq. (1) we mean a function u ∈ H¹(Ω) such that
$$\int \nabla u\nabla v + \int c(x)uv + \oint \sigma(x)uv = \int f(x,u)v + \oint g(x,u)v \quad \text{for all } v \in H^1(\Omega), \qquad (3)$$
where $\int$ denotes the (volume) integral on Ω and $\oint$ denotes the (surface) integral on ∂Ω.

Let us mention that H¹(Ω) denotes the usual real Sobolev space of functions on Ω endowed with the (c, σ)-inner product defined by
$$(u,v)_{(c,\sigma)} = \int \nabla u\nabla v + \int c(x)uv + \oint \sigma(x)uv \qquad (4)$$
with the associated norm denoted by $\|u\|_{(c,\sigma)}$. (The conditions on c and σ which imply that (4) is an inner product are given below.) This norm is equivalent to the standard H¹(Ω)-norm.

Besides the Sobolev spaces, we shall make use, in what follows, of the real Lebesgue spaces $L^q(\partial\Omega)$, 1 ≤ q ≤ ∞, and the compactness of the trace operator $\Gamma : H^1(\Omega) \to L^q(\partial\Omega)$ for $1 \le q < \dfrac{2(n-1)}{n-2}$. (Sometimes we will just use u in place of Γu when considering the trace of a function on ∂Ω.)

The functions $c : \Omega \to \mathbb{R}$, $\sigma : \partial\Omega \to \mathbb{R}$, $f : \Omega\times\mathbb{R} \to \mathbb{R}$ and $g : \partial\Omega\times\mathbb{R} \to \mathbb{R}$ satisfy the following conditions.

(C1) $c \in L^p(\Omega)$ with p ≥ n/2 when n ≥ 3 (p > 1 when n = 2) and $\sigma \in L^q(\partial\Omega)$ with q ≥ n−1 when n ≥ 3 (q > 1 when n = 2), with (c, σ) > 0; that is, c(x) ≥ 0 a.e. on Ω and σ(x) ≥ 0 a.e. on ∂Ω such that
$$\int c(x)\,dx + \oint \sigma(x)\,dx \ne 0.$$

(C2) $f : \Omega\times\mathbb{R}\to\mathbb{R}$ and $g : \partial\Omega\times\mathbb{R}\to\mathbb{R}$ are Caratheodory functions (i.e., measurable in x for each u, and continuous in u for a.e. x).

(C3) There exist constants $a_1, a_2 > 0$ such that for a.e. x ∈ ∂Ω and all u ∈ R,
$$|g(x,u)| \le a_1 + a_2|u|^s \quad \text{with } 0 \le s < \frac{n}{n-2}.$$

(C3') There exist constants $b_1, b_2 > 0$ such that for a.e. x ∈ Ω and all u ∈ R,
$$|f(x,u)| \le b_1 + b_2|u|^s \quad \text{with } 0 \le s < \frac{n+2}{n-2}.$$
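As a concrete illustration (ours, not taken from the paper), pure-power nonlinearities show that these exponents are exactly the subcritical ranges for the relevant embeddings:

```latex
% Model nonlinearities (illustrative), with constants b_2, a_2 > 0:
f(x,u) = b_2\,|u|^{s-1}u \ \text{ satisfies (C3')} \iff 0 \le s < \tfrac{n+2}{n-2},
\qquad
g(x,u) = a_2\,|u|^{s-1}u \ \text{ satisfies (C3)} \iff 0 \le s < \tfrac{n}{n-2}.
```

These ranges correspond to $s+1 < \frac{2n}{n-2}$ for the compact embedding $H^1(\Omega) \hookrightarrow L^{s+1}(\Omega)$ and $s+1 < \frac{2(n-1)}{n-2}$ for the compact trace operator $\Gamma : H^1(\Omega) \to L^{s+1}(\partial\Omega)$, which is what keeps the associated Nemytskii operators well behaved.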

2 Generalized Eigenproblems and Weighted Nonresonance

To put our results into context, we have collected in this short section some relevant results on generalized linear eigenproblems and nonresonance for the nonlinear elliptic problem (1) that are needed for our purposes. We refer to a paper of one of the authors [18] for the proofs of these results.


Consider the linear problem
$$-\Delta u + c(x)u = \mu m(x)u \ \text{in}\ \Omega, \qquad \frac{\partial u}{\partial\nu} + \sigma(x)u = \mu\rho(x)u \ \text{on}\ \partial\Omega, \qquad (5)$$
where $(m,\rho) \in L^p(\Omega)\times L^q(\partial\Omega)$ with p and q as in Section 1, and (m, ρ) > 0; that is,
$$m(x) \ge 0 \ \text{a.e. on}\ \Omega \ \text{and}\ \rho(x) \ge 0 \ \text{a.e. on}\ \partial\Omega \ \text{with} \int m(x)\,dx + \oint \rho(x)\,dx \ne 0. \qquad (6)$$

(We stress the fact that the weight-functions m and ρ may vanish on subsets of positive measure.)

The (generalized) eigenproblem is to find a pair $(\mu,\varphi) \in \mathbb{R}\times H^1(\Omega)$ with $\varphi \not\equiv 0$ such that
$$\int\nabla\varphi\nabla v + \int c(x)\varphi v + \oint\sigma(x)\varphi v = \mu\left(\int m(x)\varphi v + \oint\rho(x)\varphi v\right) \quad\text{for all } v \in H^1(\Omega). \qquad (7)$$
Picking v = ϕ, it follows that, if there is such an eigenpair, then µ > 0 and $\int m(x)\varphi^2 + \oint\rho(x)\varphi^2 > 0$. Therefore, one can split the Hilbert space H¹(Ω) as a direct (c, σ)-orthogonal sum in the following way,
$$H^1(\Omega) = V_{(m,\rho)}(\Omega)\oplus H^1_{(m,\rho)}(\Omega), \qquad (8)$$
where $V_{(m,\rho)}(\Omega) := \left\{u \in H^1(\Omega) : \int m(x)u^2 + \oint\rho(x)u^2 = 0\right\}$ and $H^1_{(m,\rho)}(\Omega) = \left[V_{(m,\rho)}(\Omega)\right]^\perp$.

On $H^1_{(m,\rho)}(\Omega)\subset H^1(\Omega)$, we will also consider the inner product defined by
$$(u,v)_{(m,\rho)} := \int m(x)uv + \oint\rho(x)uv, \qquad (9)$$
with corresponding norm denoted by $\|\cdot\|_{(m,\rho)}$ (see e.g. [18] for details).

Assuming that the above assumptions are satisfied, one of the authors [18] proved that, for n ≥ 2, the eigenproblem (5) has a sequence of real eigenvalues
$$0 < \mu_1 < \mu_2 \le \dots \le \mu_j \le \dots \to \infty \quad\text{as } j \to \infty,$$
and each eigenvalue has a finite-dimensional eigenspace. The eigenfunctions $\varphi_j$ corresponding to these eigenvalues form a complete orthonormal family in the (proper) subspace $H^1_{(m,\rho)}(\Omega)$. Moreover, the first eigenvalue µ₁ is simple, and its associated eigenfunction $\varphi_1$ is strictly positive (or strictly negative) in Ω, and the following inequalities hold.

(i) For all u ∈ H¹(Ω),
$$\mu_1\left(\int m(x)u^2 + \oint\rho(x)u^2\right) \le \int|\nabla u|^2 + \int c(x)u^2 + \oint\sigma(x)u^2, \qquad (10)$$
where µ₁ > 0 is the least eigenvalue for Eq. (5). If equality holds in (10), then u is a multiple of an eigenfunction of Eq. (5) corresponding to µ₁.


(ii) For every $v \in \oplus_{i\le j}E(\mu_i)$ and $w \in \oplus_{i\ge j+1}E(\mu_i)$, we have that
$$\|v\|^2_{(c,\sigma)} \le \mu_j\|v\|^2_{(m,\rho)} \quad\text{and}\quad \|w\|^2_{(c,\sigma)} \ge \mu_{j+1}\|w\|^2_{(m,\rho)}, \qquad (11)$$
where $E(\mu_i)$ is the $\mu_i$-eigenspace, $\oplus_{i\le j}E(\mu_i)$ is the span of eigenfunctions associated with eigenvalues below and up to $\mu_j$, and $\|\cdot\|_{(m,\rho)}$ is the norm induced by (9).
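A simple illustrative special case (ours, not from [18]): take Ω the unit disk in $\mathbb{R}^2$ (n = 2), c ≡ 0, σ ≡ 1, m ≡ 0 and ρ ≡ 1, so that (5) reduces to a Steklov-type problem whose spectrum can be computed by separation of variables.

```latex
% Harmonic eigenfunctions on the unit disk (polar coordinates), k = 0, 1, 2, \dots
-\Delta u = 0 \ \text{in}\ \Omega, \qquad
\frac{\partial u}{\partial\nu} + u = \mu\,u \ \text{on}\ \partial\Omega;
\qquad
u_k(r,\theta) = r^{k}\big(A\cos k\theta + B\sin k\theta\big),
\quad \mu_k = k + 1 .
```

Here $\mu_1 = 1$ is simple with positive eigenfunction $\varphi_1 \equiv \text{const}$, consistent with the statements above; each $\mu_k$ with $k \ge 1$ has a two-dimensional eigenspace; and $V_{(m,\rho)}(\Omega) = H^1_0(\Omega)$ consists of the functions with vanishing trace.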

The next theorem concerns an existence result for the nonlinear problem (1) in the case of weighted nonresonance. We refer to [18] for the proof of this result.

Theorem 2.1 (Weighted nonresonance between consecutive generalized eigenvalues). Suppose that the assumptions (C1)-(C3') and (6) are met, and that the following condition holds.

(C4) There exist constants a, b, α, β ∈ R such that
$$\alpha m(x) \le \liminf_{|u|\to\infty}\frac{f(x,u)}{u} \le \limsup_{|u|\to\infty}\frac{f(x,u)}{u} \le \beta m(x)$$
and
$$a\rho(x) \le \liminf_{|u|\to\infty}\frac{g(x,u)}{u} \le \limsup_{|u|\to\infty}\frac{g(x,u)}{u} \le b\rho(x),$$
uniformly for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, where
$$\mu_j < \min(a,\alpha) \le \max(b,\beta) < \mu_{j+1}. \qquad (12)$$

Then, Eq. (1) has at least one (weak) solution u ∈ H¹(Ω).

Remark 2.2. The result in Theorem 2.1 remains valid if one replaces the functions f(x, u) and g(x, u) with f(x, u) + A(x) and g(x, u) + B(x) respectively, where A ∈ L²(Ω) and B ∈ L²(∂Ω).

3 Main Result

In this section, we prove an existence result for problem (1) at resonance which includes the Steklov as well as the Neumann and Robin problems with appropriate choices of the weights m and ρ, as aforementioned. Notice that the nonlinearity in the boundary condition is at resonance as well. We require that the nonlinearities satisfy some sign conditions and a Landesman-Lazer type condition (possibly at a generalized higher eigenvalue). Consider the following nonlinear problem
$$-\Delta u + c(x)u = \mu_j m(x)u + f(x,u) \ \text{in}\ \Omega, \qquad \frac{\partial u}{\partial\nu} + \sigma(x)u = \mu_j\rho(x)u + g(x,u) \ \text{on}\ \partial\Omega, \qquad (13)$$
where $\mu_j$ is a generalized eigenvalue of the (weighted) problem (5).

Theorem 3.1 (Resonance at any generalized eigenvalue). Suppose that the assumptions (C1)–(C3') and (6) are met, and that the following conditions hold.


(C5) There exists a constant β such that
$$0 \le \liminf_{|u|\to\infty}\frac{f(x,u)}{u} \le \limsup_{|u|\to\infty}\frac{f(x,u)}{u} \le \beta m(x)$$
and
$$0 \le \liminf_{|u|\to\infty}\frac{g(x,u)}{u} \le \limsup_{|u|\to\infty}\frac{g(x,u)}{u} \le \beta\rho(x),$$
uniformly for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, where $\beta < (\mu_{j+1} - \mu_j)$.

(C6) Sign conditions: There exist functions a, A ∈ L²(Ω), b, B ∈ L²(∂Ω), and constants r < 0 < R such that
$$f(x,u) \ge A(x) \quad\text{and}\quad g(x,u) \ge B(x)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ≥ R, and
$$f(x,u) \le a(x) \quad\text{and}\quad g(x,u) \le b(x)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ≤ r.

Then, Eq. (13) has at least one (weak) solution u ∈ H¹(Ω) provided that the following Landesman-Lazer type condition holds:
$$\int_{\varphi>0} f_+\varphi + \int_{\varphi<0} f_-\varphi + \oint_{\varphi>0} g_+\varphi + \oint_{\varphi<0} g_-\varphi > 0 \quad\text{for all } \varphi \in E(\mu_j)\setminus\{0\}, \qquad (14)$$
where $E(\mu_j)$ is the $\mu_j$-eigenspace,
$$f_+(x) := \liminf_{u\to\infty} f(x,u), \quad g_+(x) := \liminf_{u\to\infty} g(x,u), \quad f_-(x) := \limsup_{u\to-\infty} f(x,u), \quad g_-(x) := \limsup_{u\to-\infty} g(x,u),$$
and $\int_{\varphi>0}$ and $\oint_{\varphi>0}$ denote the integrals on the sets $\{x \in \Omega : \varphi(x) > 0\}$ and $\{x \in \partial\Omega : \varphi(x) > 0\}$ respectively.
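For the first eigenvalue, condition (14) reduces to the classical Landesman-Lazer condition; since this specialization is not spelled out in the text, we record it here as a sketch. With j = 1 one has $E(\mu_1) = \operatorname{span}\{\varphi_1\}$ with $\varphi_1 > 0$ in Ω, so it suffices to test (14) with $\varphi = \pm\varphi_1$:

```latex
% \varphi = \varphi_1 > 0 gives \{\varphi < 0\} = \emptyset;
% \varphi = -\varphi_1 gives \{\varphi > 0\} = \emptyset and reverses the sign.
\int_\Omega f_+\varphi_1 + \oint_{\partial\Omega} g_+\varphi_1 > 0
\qquad\text{and}\qquad
\int_\Omega f_-\varphi_1 + \oint_{\partial\Omega} g_-\varphi_1 < 0 .
```

The second inequality comes from testing with $-\varphi_1$, where only the $f_-$ and $g_-$ terms survive with a factor $-\varphi_1$.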

Unlike previous results in the literature, what sets our results apart is that we compare both the reaction nonlinearity f in the differential equation and the boundary nonlinearity g with higher eigenvalues of the spectrum of problem (2), where the spectral parameter is both in the differential equation and on the boundary (with possibly singular weights). It should also be noted that the presence of the boundary nonlinearity extends the range of allowable 'forcing' terms in condition (14). Our results generalize earlier ones in [1, 2, 3, 12, 17, 23] (see Remark 3.6 for details).

We will use variational and topological degree techniques combined with some duality arguments. Before giving a proof of our main result, we first prove several lemmas that are relevant in order to obtain a priori estimates. (For some of these lemmas, we borrow some techniques of proof from [12, 13].)

For u ∈ H¹(Ω), we shall write
$$u = u^0 + \overline{u} + \widetilde{u} + v,$$
where $u^0 \in E_0 := E(\mu_j)$, $\overline{u} \in \oplus_{i\le j-1}E(\mu_i)$, $\widetilde{u} \in \oplus_{i\ge j+1}E(\mu_i)$, and $v \in V_{(m,\rho)}$. Moreover, we shall set
$$w := \widetilde{u} + v - \overline{u} - u^0 \quad\text{and}\quad u^\perp := \widetilde{u} + v + \overline{u}.$$

Lemma 3.2. Let β > 0 be as in Theorem 3.1. Then there exists δ = δ(β) > 0 such that for all u ∈ H¹(Ω),
$$\int\nabla u\nabla w + \int c(x)uw + \oint\sigma(x)uw - (\mu_j+\beta)\left(\int m(x)uw + \oint\rho(x)uw\right) \ \ge\ \delta\left\|u^\perp\right\|^2_{(c,\sigma)}.$$

Proof. Let u ∈ H¹(Ω) and define $D_\beta(u)$ by
$$D_\beta(u) := \int\nabla u\nabla w + \int c(x)uw + \oint\sigma(x)uw - (\mu_j+\beta)\left(\int m(x)uw + \oint\rho(x)uw\right).$$
Taking into account the (c, σ)-orthogonality of $\overline u$, $v$, $\widetilde u$ and $u^0$ in H¹(Ω) and the fact that $v \in V_{(m,\rho)}$ and $u^0 \in E_0$, one has that
$$\begin{aligned}
D_\beta(u) &= \|\widetilde u\|^2_{(c,\sigma)} + \|v\|^2_{(c,\sigma)} - \|\overline u\|^2_{(c,\sigma)} - (\mu_j+\beta)\|\widetilde u\|^2_{(m,\rho)} + (\mu_j+\beta)\|\overline u\|^2_{(m,\rho)} + \beta\|u^0\|^2_{(m,\rho)}\\
&\ge \left(\|\widetilde u\|^2_{(c,\sigma)} - \frac{\mu_j+\beta}{\mu_{j+1}}\|\widetilde u\|^2_{(c,\sigma)}\right) + \|v\|^2_{(c,\sigma)} + \left(\frac{\mu_j}{\mu_{j-1}}\|\overline u\|^2_{(c,\sigma)} - \|\overline u\|^2_{(c,\sigma)}\right)\\
&\ge \delta\left(\|\widetilde u\|^2_{(c,\sigma)} + \|v\|^2_{(c,\sigma)} + \|\overline u\|^2_{(c,\sigma)}\right) = \delta\left\|u^\perp\right\|^2_{(c,\sigma)},
\end{aligned}$$
where $\delta = \min\left\{1,\ 1-\dfrac{\mu_j+\beta}{\mu_{j+1}},\ \dfrac{\mu_j}{\mu_{j-1}}-1\right\}$. The proof is complete. □

Lemma 3.3. Let β > 0 be as in Theorem 3.1, let δ > 0 be associated with β by Lemma 3.2, and let ε > 0. Then for all $\tau \in L^p(\Omega)$ and $\overline\tau \in L^q(\partial\Omega)$ satisfying $\tau(x) \le \beta m(x) + \varepsilon$ and $\overline\tau(x) \le \beta\rho(x) + \varepsilon$ respectively, and all u ∈ H¹(Ω), one has
$$(u,w)_{(c,\sigma)} - \mu_j\left(\int m(x)uw + \oint\rho(x)uw\right) - \int\tau(x)uw - \oint\overline\tau(x)uw \ \ge\ C_\delta\left\|u^\perp\right\|^2_{(c,\sigma)},$$
where $C_\delta > 0$ is a constant depending on δ and ε, provided that ε > 0 is sufficiently small.

Proof. Let
$$D_\tau(u) := (u,w)_{(c,\sigma)} - \mu_j\left(\int m(x)uw + \oint\rho(x)uw\right) - \int\tau(x)uw - \oint\overline\tau(x)uw.$$
Using the computations in the proof of Lemma 3.2, we obtain that
$$D_\tau(u) \ge \delta\left\|u^\perp\right\|^2_{(c,\sigma)} - \varepsilon K\left\|u^\perp\right\|^2_{(c,\sigma)} = (\delta - \varepsilon K)\left\|u^\perp\right\|^2_{(c,\sigma)} = C_\delta\left\|u^\perp\right\|^2_{(c,\sigma)},$$
where K is a constant. If ε is sufficiently small, then $C_\delta > 0$. The proof is complete. □

Lemma 3.4. Assume (C1)–(C3') and (6) are met, and that (in addition) f and g satisfy the sign condition (C6). Then, for each real number K > 0, there are decompositions
$$f(x,u) = p_K(x,u) + f_K(x,u) \quad\text{and}\quad g(x,u) = q_K(x,u) + g_K(x,u) \qquad (15)$$
of f and g such that
$$0 \le u\,p_K(x,u) \quad\text{and}\quad 0 \le u\,q_K(x,u) \qquad (16)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ∈ R. Moreover, there exist functions ω ∈ L²(Ω) and $\overline\omega$ ∈ L²(∂Ω), depending on a, A, f and b, B, g respectively, such that
$$|f_K(x,u)| \le \omega(x) \quad\text{and}\quad |g_K(x,u)| \le \overline\omega(x) \qquad (17)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ∈ R.

Proof. Given K > 0, define
$$\overline g_K(x,u) := \begin{cases}\inf\{g(x,u),\,K\} & \text{if } u \ge 1,\\ \sup\{g(x,u),\,-K\} & \text{if } u \le -1,\end{cases}$$
and $\overline q_K(x,u) := g(x,u) - \overline g_K(x,u)$ for x ∈ ∂Ω and |u| ≥ 1. Set
$$q_K(x,u) := \begin{cases}\overline q_K(x,u) & \text{if } |u| \ge 1,\\ u\,\overline q_K(x, u/|u|) & \text{if } 0 < |u| \le 1,\\ 0 & \text{if } u = 0.\end{cases}$$
Finally, define $g_K := g - q_K$. By an easy calculation, one can check that all the conditions of the lemma are satisfied with $\overline\omega = C + \max\{|b(x)|, |B(x)|, K\}$, where the constant C > 0 depends on R, −r and the growth constants in (C3). Similar arguments are used in the case of the function f. The proof is complete. □

Lemma 3.5. Assume (C1)–(C3') and (6) are met, and that (in addition) f and g satisfy (C5) and (C6). Then, for each real number K > 0 and each ε > 0, the functions $p_K$ and $q_K$ provided by Lemma 3.4 satisfy the following additional conditions:
$$|p_K(x,u)| \le (\beta m(x)+\varepsilon)|u| - K \quad\text{and}\quad |q_K(x,u)| \le (\beta\rho(x)+\varepsilon)|u| - K \qquad (18)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ∈ R with |u| ≥ max{1, κ}, where κ = κ(ε) is given in (19) below.

Proof. It follows from (C5) that for all ε > 0 there exists κ = κ(ε) > 0 such that
$$|f(x,u)| \le (\beta m(x)+\varepsilon)|u| \quad\text{and}\quad |g(x,u)| \le (\beta\rho(x)+\varepsilon)|u| \qquad (19)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ∈ R with |u| ≥ κ.

Let u ∈ R with |u| ≥ 1. Then
$$\overline g_K(x,u) = \begin{cases} g(x,u) & \text{if } u \ge 1 \text{ and } g(x,u) \le K,\\ K & \text{if } u \ge 1 \text{ and } g(x,u) \ge K,\\ g(x,u) & \text{if } u \le -1 \text{ and } g(x,u) \ge -K,\\ -K & \text{if } u \le -1 \text{ and } g(x,u) \le -K.\end{cases}$$
It follows that
$$q_K(x,u) = \begin{cases} 0 & \text{if } u \ge 1 \text{ and } g(x,u) \le K,\\ g(x,u) - K & \text{if } u \ge 1 \text{ and } g(x,u) \ge K,\\ 0 & \text{if } u \le -1 \text{ and } g(x,u) \ge -K,\\ g(x,u) + K & \text{if } u \le -1 \text{ and } g(x,u) \le -K.\end{cases}$$
By (19) we get that
$$0 \le q_K(x,u) \le (\beta\rho(x)+\varepsilon)u - K \quad\text{if } u \ge \max\{1,\kappa\}$$
and
$$0 \ge q_K(x,u) \ge (\beta\rho(x)+\varepsilon)u + K \quad\text{if } u \le -\max\{1,\kappa\}.$$
Therefore
$$|q_K(x,u)| \le (\beta\rho(x)+\varepsilon)|u| - K$$
for a.e. x ∈ ∂Ω and all u ∈ R with |u| ≥ max{1, κ}. Similar arguments are used in the case of f. The proof is complete. □

Proof of Theorem 3.1. The proof is divided into four steps.

Step 1. Let δ be associated to the constant β by Lemma 3.2. Then, by assumption (C5), there exists $\kappa \equiv \kappa_\delta > 0$ such that
$$|f(x,u)| \le (\beta m(x)+d)|u| \quad\text{and}\quad |g(x,u)| \le (\beta\rho(x)+d)|u| \qquad (20)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ∈ R with |u| ≥ κ, where d is a sufficiently small constant such that 0 < d ≪ δ. By using Lemma 3.4 with K = 1, Eq. (13) is equivalent to
$$-\Delta u + c(x)u = \mu_j m(x)u + p_1(x,u) + f_1(x,u) \ \text{in}\ \Omega, \qquad \frac{\partial u}{\partial\nu} + \sigma(x)u = \mu_j\rho(x)u + q_1(x,u) + g_1(x,u) \ \text{on}\ \partial\Omega, \qquad (21)$$
where $p_1, f_1, q_1$ and $g_1$ are defined in Lemma 3.4 and satisfy conditions (16) and (17). Moreover, since f and g verify the inequalities (20), by Lemma 3.5 we get that
$$|p_1(x,u)| \le (\beta m(x)+d)|u| + 1 \quad\text{and}\quad |q_1(x,u)| \le (\beta\rho(x)+d)|u| + 1 \qquad (22)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ∈ R with |u| ≥ max{1, κ} (see the construction of $p_1$ and $q_1$ in Lemma 3.4). Let us choose $\overline\kappa \ge \max\{1,\kappa\}$ so that $1/|u| < d$ whenever $|u| \ge \overline\kappa$. It follows that
$$0 \le \frac{p_1(x,u)}{u} \le \beta m(x) + \overline d \quad\text{and}\quad 0 \le \frac{q_1(x,u)}{u} \le \beta\rho(x) + \overline d, \qquad (23)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ∈ R with $|u| \ge \overline\kappa$, where $\overline d = 2d \ll \delta$.

Step 2. Let us define $\gamma : \Omega\times\mathbb{R} \to \mathbb{R}$ and $\overline\gamma : \partial\Omega\times\mathbb{R} \to \mathbb{R}$ by
$$\gamma(x,u) = \begin{cases}\dfrac{p_1(x,u)}{u} & \text{for } |u| \ge \overline\kappa,\\[2ex] \dfrac{p_1(x,\overline\kappa)+p_1(x,-\overline\kappa)}{2\overline\kappa^{\,2}}\,u + \dfrac{p_1(x,\overline\kappa)-p_1(x,-\overline\kappa)}{2\overline\kappa} & \text{for } |u| < \overline\kappa,\end{cases}$$
$$\overline\gamma(x,u) = \begin{cases}\dfrac{q_1(x,u)}{u} & \text{for } |u| \ge \overline\kappa,\\[2ex] \dfrac{q_1(x,\overline\kappa)+q_1(x,-\overline\kappa)}{2\overline\kappa^{\,2}}\,u + \dfrac{q_1(x,\overline\kappa)-q_1(x,-\overline\kappa)}{2\overline\kappa} & \text{for } |u| < \overline\kappa.\end{cases}$$
The functions γ and $\overline\gamma$ are Caratheodory since $p_1$ and $q_1$ are. Moreover, by (23) one has
$$0 \le \gamma(x,u) \le \beta m(x) + \overline d \quad\text{and}\quad 0 \le \overline\gamma(x,u) \le \beta\rho(x) + \overline d, \qquad (24)$$


for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ∈ R. Define $h : \Omega\times\mathbb{R} \to \mathbb{R}$ and $\overline h : \partial\Omega\times\mathbb{R} \to \mathbb{R}$ by
$$h(x,u) = f_1(x,u) + p_1(x,u) - \gamma(x,u)u \quad\text{and}\quad \overline h(x,u) = g_1(x,u) + q_1(x,u) - \overline\gamma(x,u)u;$$
then it follows from (17) that, for some ζ ∈ L²(Ω) and $\overline\zeta$ ∈ L²(∂Ω),
$$|h(x,u)| \le \zeta(x) \quad\text{and}\quad |\overline h(x,u)| \le \overline\zeta(x)$$
for a.e. x ∈ Ω, respectively for a.e. x ∈ ∂Ω, and all u ∈ R, where ζ, $\overline\zeta$ depend on β, m, ρ, $\overline\kappa$ and the bounds of $f_1$ and $g_1$. Finally, Eq. (21) is equivalent to
$$-\Delta u + c(x)u = \mu_j m(x)u + \gamma(x,u)u + h(x,u) \ \text{in}\ \Omega, \qquad \frac{\partial u}{\partial\nu} + \sigma(x)u = \mu_j\rho(x)u + \overline\gamma(x,u)u + \overline h(x,u) \ \text{on}\ \partial\Omega. \qquad (25)$$

We will use the Leray-Schauder Fixed Point Theorem to prove that Eq.(25) has at least one(weak) solution. In order to apply this theorem, we need to show the existence of an a prioribound for all possible (weak) solutions of the family of equations−∆u+ c(x)u− µjm(x)u− (1− λ)dm(x))u− λ

[γ(x, u) + h(x, u)

]= 0 in Ω,

∂u

∂ν+ σ(x)u− µj ρ(x)u− (1− λ)d ρ(x)u− λ

[γ(x, u) + h(x, u)

]= 0 on ∂Ω,

(26)

where λ ∈ [0, 1]. It is clear that for λ = 0, Eq. (26) has only the trivial weak solution. Now, if u is a (weak) solution of (26) for some λ ∈ (0, 1], it follows from inequalities (24) that

(1 − λ)d m(x) + λγ(x, u) ≤ (β + d)m(x) + d and (1 − λ)d ρ(x) + λγ̄(x, u) ≤ (β + d)ρ(x) + d.

Therefore, using Lemma 3.3, the Hölder inequality, the Sobolev Embedding Theorem, and the continuity of the trace operator, one gets that

0 = (u, w)₍c,σ₎ − μⱼ(∫ m(x)uw + ∮ ρ(x)uw) − (∫ τ(x, u)uw + ∮ τ̄(x, u)uw) − λ∫ h(x, u)w − λ∮ h̄(x, u)w
  ≥ (δ − kd)‖u⊥‖²₍c,σ₎ − k(‖u⊥‖₍c,σ₎ + ‖u⁰‖₍c,σ₎)
  ≥ C_δ ‖u⊥‖²₍c,σ₎ − k(‖u⊥‖₍c,σ₎ + ‖u⁰‖₍c,σ₎),

where w = u⁺ + v − u⁻ − u⁰, u⊥ = u⁺ + v + u⁻, τ(x, u) = (1 − λ)d m(x) + λγ(x, u) and τ̄(x, u) = (1 − λ)d ρ(x) + λγ̄(x, u). For d sufficiently small, it follows that C_δ > 0. Taking a = k/(2C_δ) we get that


‖u⊥‖₍c,σ₎ ≤ a + (a² + 2a‖u⁰‖₍c,σ₎)^{1/2}.   (27)
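The passage to (27) is the standard quadratic-root bound; a sketch of the intermediate step, writing s := ‖u⊥‖₍c,σ₎: from C_δ s² − k(s + ‖u⁰‖₍c,σ₎) ≤ 0 and a = k/(2C_δ), one obtains

s² − 2as − 2a‖u⁰‖₍c,σ₎ ≤ 0,

so s lies below the larger root of the corresponding quadratic, that is, s ≤ a + (a² + 2a‖u⁰‖₍c,σ₎)^{1/2}.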

Step 3. We claim that there exists a constant C > 0 such that

‖u‖_{H¹} < C   (28)

for any (possible) weak solution u ∈ H¹(Ω) of (26) (C is independent of u and λ). If we assume that the claim does not hold, then there exist sequences (λₙ) in the interval (0, 1] and (uₙ) in H¹(Ω) with ‖uₙ‖_{H¹} → ∞ such that uₙ is a (weak) solution of the following problem:

−∆u + c(x)u − μⱼ m(x)u − (1 − λₙ)d m(x)u − λₙ f(x, u) = 0 in Ω,
∂u/∂ν + σ(x)u − μⱼ ρ(x)u − (1 − λₙ)d ρ(x)u − λₙ g(x, u) = 0 on ∂Ω.   (29)

That is,

0 = (uₙ, v)₍c,σ₎ − μⱼ(uₙ, v)₍m,ρ₎ − (1 − λₙ)d(uₙ, v)₍m,ρ₎ − λₙ∫ f(x, uₙ)v − λₙ∮ g(x, uₙ)v   (30)

for every v ∈ H¹(Ω). From (27), it follows that

‖u⁰ₙ‖₍c,σ₎ → ∞ and ‖u⊥ₙ‖₍c,σ₎ / ‖u⁰ₙ‖₍c,σ₎ → 0.   (31)

Therefore, uₙ/‖u⁰ₙ‖₍c,σ₎ is bounded in H¹(Ω). By the reflexivity of H¹(Ω), the compact embedding of H¹(Ω) into L²(Ω) and the compactness of the trace operator, one can assume (taking a subsequence if necessary) that

uₙ/‖u⁰ₙ‖₍c,σ₎ ⇀ w in H¹(Ω);  uₙ/‖u⁰ₙ‖₍c,σ₎ → w in L²(Ω) (also in L²(∂Ω));  u⁰ₙ/‖u⁰ₙ‖₍c,σ₎ → w in L²(Ω) (also in L²(∂Ω)).

Set vₙ = u⁰ₙ/‖u⁰ₙ‖₍c,σ₎. Substituting v in (30) by vₙ/λₙ, and taking into account the orthogonality and the fact that vₙ ∈ E₀, we get

0 ≤ (1 − λₙ)λₙ⁻¹ d ‖u⁰ₙ‖⁻¹₍c,σ₎ ‖u⁰ₙ‖²₍m,ρ₎ = −∫ f(x, uₙ)vₙ − ∮ g(x, uₙ)vₙ.   (32)

By taking the liminf as n → ∞, we have that

lim infₙ→∞ ( ∫ f(x, uₙ)vₙ + ∮ g(x, uₙ)vₙ ) ≤ 0.   (33)


Let I⁺ = {x ∈ Ω : w(x) > 0} and I⁻ = {x ∈ Ω : w(x) < 0}. Then for a.e. x ∈ I⁺ there exists ν(x) ∈ N such that for all n ≥ ν(x), one has (passing to a subsequence if necessary)

|u⊥ₙ(x)| ‖u⁰ₙ‖⁻¹₍c,σ₎ < (1/4)w(x) and |u⁰ₙ(x)‖u⁰ₙ‖⁻¹₍c,σ₎ − w(x)| < (1/4)w(x).

Therefore, for all n ≥ ν(x) one has

uₙ(x)‖u⁰ₙ‖⁻¹₍c,σ₎ = (u⁰ₙ(x) + u⊥ₙ(x))‖u⁰ₙ‖⁻¹₍c,σ₎ ≥ (u⁰ₙ(x) − |u⊥ₙ(x)|)‖u⁰ₙ‖⁻¹₍c,σ₎ ≥ (1/2)w(x).

It follows that for a.e. x ∈ I⁺, there exists an integer ν(x) ∈ N such that for all n ≥ ν(x),

vₙ(x) > 0 and uₙ(x) ≥ (1/2)w(x)‖u⁰ₙ‖₍c,σ₎ → ∞ (since ‖u⁰ₙ‖₍c,σ₎ → ∞).

On the other hand, for a.e. x ∈ I⁻ there exists ϑ(x) ∈ N such that for all n ≥ ϑ(x),

uₙ(x)‖u⁰ₙ‖⁻¹₍c,σ₎ = (u⁰ₙ(x) + u⊥ₙ(x))‖u⁰ₙ‖⁻¹₍c,σ₎ ≤ (u⁰ₙ(x) + |u⊥ₙ(x)|)‖u⁰ₙ‖⁻¹₍c,σ₎ ≤ (1/2)w(x).

Therefore, for n ≥ ϑ(x), uₙ(x) ≤ (1/2)w(x)‖u⁰ₙ‖₍c,σ₎ → −∞.

In order to apply Fatou's Lemma we need to find n₀ ∈ N such that for n ≥ n₀,

f(x, uₙ)vₙ ≥ l(x) a.e. and g(x, uₙ)vₙ ≥ l̄(x) a.e.,   (34)

for some l ∈ L¹(Ω) and l̄ ∈ L¹(∂Ω). Indeed, from (27) one gets

‖u⊥ₙ‖²₍c,σ₎ ‖u⁰ₙ‖⁻¹₍c,σ₎ ≤ 2a‖u⊥ₙ‖₍c,σ₎ ‖u⁰ₙ‖⁻¹₍c,σ₎ + 2a.

Therefore, by (31) one has that for n ≥ n₀, ‖u⊥ₙ‖²₍c,σ₎ ‖u⁰ₙ‖⁻¹₍c,σ₎ ≤ C, where C is a constant independent of n. Since γ(x, uₙ(x)) ≥ 0, one has that for n ≥ n₀,

γ(x, uₙ(x))uₙ(x)vₙ(x) = γ(x, uₙ(x))uₙ(x)u⁰ₙ(x)‖u⁰ₙ‖⁻¹₍c,σ₎
  = (1/2)γ(x, uₙ(x))‖u⁰ₙ‖⁻¹₍c,σ₎ [(uₙ(x))² + (u⁰ₙ(x))² − (uₙ(x) − u⁰ₙ(x))²]
  ≥ −(1/2)γ(x, uₙ(x))(u⊥ₙ(x))² ‖u⁰ₙ‖⁻¹₍c,σ₎ ≥ −C γ(x, uₙ(x)) l₁(x),

where l₁ ∈ L¹(Ω) and C > 0 are independent of n. Therefore, for n ≥ n₀,

γ(x, uₙ(x))uₙ(x)vₙ(x) ≥ −C(β m(x) + d) l₁(x).

Now, using the decomposition of f, one has that for n ≥ n₀,

f(x, uₙ(x))vₙ(x) = γ(x, uₙ(x))uₙ(x)vₙ(x) + h(x, uₙ(x))vₙ(x) ≥ −C(β m(x) + d) l₁(x) − K₁ l₂(x) = l(x),


where l₂ ∈ L¹(Ω). We use similar arguments to obtain the function l̄ in (34). Notice that it follows from (32) and (34) that sup ∫ f(x, uₙ)vₙ < ∞ and sup ∮ g(x, uₙ)vₙ < ∞. Therefore, by Fatou's Lemma and the properties of lim inf, one has

∫_{w>0} f₊w ≤ lim infₙ→∞ ∫_{w>0} f(x, uₙ)vₙ;  ∮_{w>0} g₊w ≤ lim infₙ→∞ ∮_{w>0} g(x, uₙ)vₙ;
∫_{w<0} f₋w ≤ lim infₙ→∞ ∫_{w<0} f(x, uₙ)vₙ;  ∮_{w<0} g₋w ≤ lim infₙ→∞ ∮_{w<0} g(x, uₙ)vₙ,

and that

∫_{w>0} f₊w + ∮_{w>0} g₊w + ∫_{w<0} f₋w + ∮_{w<0} g₋w ≤ 0,

which contradicts the assumption (14). Thus the claim holds.

Step 4. We use the Leray–Schauder Fixed Point Theorem combined with some duality arguments. Define T : H¹(Ω) → (H¹(Ω))* by

T(u)v = ∫ ∇u∇v + ∫ c(x)uv + ∮ σ(x)uv − (μⱼ + d)( ∫ m(x)uv + ∮ ρ(x)uv ).

It follows from Theorem 2.1 and Remark 2.2 that T is linear, continuous and bijective. Therefore, by the Open Mapping Theorem, T⁻¹ is continuous. From (26) one sees that

0 = T(u)v − λ( ∫ (f(x, u) + d m(x)u)v + ∮ (g(x, u) + d ρ(x)u)v ),

where λ ∈ [0, 1]. Applying T⁻¹ we get 0 = u − λ[T⁻¹J′_f(u) + T⁻¹J′_g(u)], where J′_f(u)v = ∫ (f(x, u) + d m(x)u)v and J′_g(u)v = ∮ (g(x, u) + d ρ(x)u)v. Now, let M be defined by

Mu := T⁻¹J′_f(u) + T⁻¹J′_g(u).

Notice that from the continuity of T⁻¹ and the compactness of J′_f and J′_g (see [19]) we have that M is a compact operator from H¹(Ω) to itself. Therefore, one sees that u − λMu = 0. It follows from the a priori estimate (28) and the Leray–Schauder Fixed Point Theorem that M has a fixed point. Thus, Problem (13) has a (weak) solution. The proof is complete. □

Remark 3.6 We (briefly) indicate how some of our results extend previous ones in the literature.

(i) In [3] no reaction term f is considered and the nonlinear perturbation g is sublinear at infinity.

(ii) In [17] the p-Laplacian is considered and the nonlinear perturbations f and g are bounded.

Here, we consider the case p = 2 and the nonlinear perturbations f and g may be unbounded, with at most linear growth asymptotically.


Remark 3.7 If the boundary nonlinearity g is Lipschitz in u, uniformly in x, and the functions c, m ∈ L∞(Ω), σ, ρ ∈ C¹(Ω), one can show with a slight modification of the proof that the solution obtained in Theorem 3.1 is actually in H²(Ω).

Remark 3.8 Our resonance results remain valid if one considers nonlinear equations with a more general linear part (in divergence form) with variable coefficients:

−Σ_{i,j=1}^{n} ∂/∂xⱼ ( aᵢⱼ(x) ∂u/∂xᵢ ) + c(x)u = f(x, u) in Ω,
∂u/∂ν + σ(x)u = g(x, u) on ∂Ω,   (35)

where σ ∈ L∞(∂Ω) with σ(x) ≥ 0 a.e. on ∂Ω, and ∂/∂ν := ν · A∇ is the (unit) outer conormal derivative. The matrix A(x) := (aᵢⱼ(x)) is symmetric with aᵢⱼ ∈ L∞(Ω), and there is a constant γ > 0 such that for all ξ ∈ Rⁿ,

⟨A(x)ξ, ξ⟩ ≥ γ|ξ|² a.e. on Ω.

References

[1] H. Amann, Nonlinear elliptic equations with nonlinear boundary conditions, Proc. 2nd Scheveningen Conf. Diff. Eq., North-Holland Math. Studies 21 (1976), pp. 43-64.

[2] P. Amster, M. C. Mariani and O. Mendez, Nonlinear boundary conditions for elliptic equations, Electron. J. Diff. Equations (2005), No. 144, pp. 1-8.

[3] J. M. Arrieta, R. Pardo and A. Rodríguez-Bernal, Bifurcation and stability of equilibria with asymptotically linear boundary conditions at infinity, Proceedings Royal Society Edinburgh 137A (2007), 225-252.

[4] G. Auchmuty, Steklov eigenproblems and the representation of solutions of elliptic boundary value problems, Numer. Funct. Anal. Optim. 25, Nos. 3&4 (2004), 321-348.

[5] G. Auchmuty, Bases and comparison results for linear elliptic eigenproblems, J. Math. Analysis and Applications 390 (2012), 394-406.

[6] C. Bandle, Isoperimetric Inequalities and Applications, Pitman, London, 1980.

[7] C. Bandle, A Rayleigh-Faber-Krahn inequality and some monotonicity properties for eigenvalue problems with mixed boundary conditions, Internat. Ser. Numer. Math., vol. 157, Birkhäuser, Basel, 2009, 3-12.

[8] B. Belinskiy, Eigenvalue problems for elliptic type partial differential operators with spectral parameter contained linearly in boundary conditions, in: Partial Differential and Integral Equations, H. G. W. Begehr et al., eds., Kluwer Academic Publ. (1999), 319-333.

[9] M. Chipot, M. Fila, and P. Quittner, Stationary solutions, blow up and convergence to stationary solutions for semilinear parabolic equations with nonlinear boundary conditions, Acta Math. Univ. Comenianae, Vol. LX, 1 (1991), 35-103.


[10] J. M. Cushing, Nonlinear Steklov problems on the unit circle, J. Math. Anal. Appl. 38 (1972), 766-783.

[11] P. Drábek and S. B. Robinson, Resonance problems for the p-Laplacian, J. Functional Analysis 169 (1999), 189-200.

[12] D. G. de Figueiredo, Semilinear elliptic equations at resonance: higher eigenvalues and unbounded nonlinearities, in "Recent Advances in Differential Equations" (R. Conti, Ed.), pp. 89-99, Academic Press, London, 1981.

[13] R. Iannacci and M. Nkashama, Unbounded perturbations of forced second order ordinary differential equations at resonance, J. Differential Equations 69 (1986), 289-309.

[14] E. M. Landesman and A. C. Lazer, Nonlinear perturbations of linear elliptic boundary value problems at resonance, J. Math. Mech. 19 (1970), 609-623.

[15] K. Klingelhöfer, Nonlinear harmonic boundary value problems I, Arch. Rat. Mech. Anal. 31 (1968), 364-371.

[16] A. Manes and A. M. Micheletti, Un'estensione della teoria variazionale classica degli autovalori per operatori ellittici del secondo ordine, Bollettino U.M.I. Vol. 7 (1973), 285-301.

[17] S. Martínez and J. D. Rossi, Weak solutions for the p-Laplacian with a nonlinear boundary condition at resonance, Electron. J. Diff. Equations (2003), No. 27, pp. 1-14.

[18] N. Mavinga, Generalized eigenproblem and nonlinear elliptic equations with nonlinear boundary conditions, Proc. Royal Soc. Edinburgh 142A (2012), 137-153.

[19] N. Mavinga and M. N. Nkashama, Steklov-Neumann eigenproblems and nonlinear elliptic equations with nonlinear boundary conditions, J. Differential Equations 248 (2010), 1212-1229.

[20] J. Mawhin, Landesman-Lazer conditions for boundary value problems: a nonlinear version of resonance, Bol. de la Sociedad Española de Mat. Aplicada 16 (2000), 45-65.

[21] J. Mawhin and K. Schmitt, Corrigendum: Upper and lower solutions and semilinear second order elliptic equations with non-linear boundary conditions, Proceedings Royal Society Edinburgh 100A (1985), 361.

[22] K. Pflüger, On indefinite nonlinear Neumann problems, in: Partial Differential and Integral Equations, H. G. W. Begehr et al., eds., Kluwer Academic Publ. (1999), 335-346.

[23] P. H. Rabinowitz, Minimax Methods in Critical Point Theory with Applications to Differential Equations, CBMS Regional Conf. Series in Math., no. 65, Amer. Math. Soc., Providence, RI, 1986.

[24] M. W. Steklov, Sur les problèmes fondamentaux de la physique mathématique, Ann. Sci. École Norm. Sup. 19 (1902), 455-490.


Approximation by Complex Generalized Discrete Singular Operators

George A. Anastassiou & Merve Kester

Department of Mathematical Sciences, The University of Memphis, Memphis, TN 38152, U.S.A.

Abstract. In this article, we study the general complex-valued discrete singular operators over the real line regarding their convergence to the unit operator with rates in the Lp norm, 1 ≤ p ≤ ∞. The related established inequalities contain the higher order Lp modulus of smoothness of the engaged function or its higher order derivative. Also we study the complex-valued fractional generalized discrete singular operators on the real line, regarding their convergence to the unit operator with rates in the uniform norm. The related established inequalities involve the higher order moduli of smoothness of the associated right and left Caputo fractional derivatives of the related function.

AMS 2010 Mathematics Subject Classification: 26A15, 26A33, 26D15, 41A17, 41A25, 41A28, 41A35.

Key Words and Phrases: general singular integral, fractional singular integral, discrete singular operator, modulus of smoothness, fractional derivative, complex-valued function.

1 Background

In [5], Chapter 19, the authors studied the approximation properties of the general complex-valued singular integral operators Θ_{r,ξ}(f; x) defined as follows. They considered complex-valued Borel measurable functions f : R → C such that f = f₁ + if₂, i := √−1. Here f₁, f₂ : R → R are implied to be real-valued Borel measurable functions.

J. APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.'S 3-4, 296-345, 2015, COPYRIGHT 2015 EUDOXUS PRESS, LLC

Let ξ > 0 and μ_ξ be a Borel probability measure on R. For r ∈ N and n ∈ Z₊ they put

α_j = (−1)^{r−j} \binom{r}{j} j^{−n}, j = 1, …, r;  α₀ = 1 − Σ_{i=1}^{r} (−1)^{r−i} \binom{r}{i} i^{−n}.   (1)

Then, they defined the general complex-valued singular integral operators as

Θ_{r,ξ}(f; x) := ∫_{−∞}^{∞} ( Σ_{j=0}^{r} α_j f(x + jt) ) dμ_ξ(t), ∀ ξ > 0.   (2)

Clearly, by the definition of the R.H.S. of (2), they had

Θ_{r,ξ}(f; x) = Θ_{r,ξ}(f₁; x) + iΘ_{r,ξ}(f₂; x).   (3)

They supposed that Θ_{r,ξ}(f_j; x) ∈ R, ∀x ∈ R, j = 1, 2.

In [5], Chapter 19, the authors defined the modulus of smoothness as follows. Let f₁, f₂ ∈ Cⁿ(R), n ∈ Z₊, with the r-th modulus of smoothness finite, that is,

ω_r(f_j̃^{(n)}, h) := sup_{|t|≤h} ‖Δ_t^r f_j̃^{(n)}(x)‖_{∞,x} < ∞,   (4)

h > 0, where

Δ_t^r f_j̃^{(n)}(x) := Σ_{j=0}^{r} (−1)^{r−j} \binom{r}{j} f_j̃^{(n)}(x + jt),   (5)

j̃ = 1, 2.

In [5], Chapter 19, the authors also defined the r-th Lp-modulus of smoothness for f₁, f₂ ∈ Cⁿ(R) with f₁^{(n)}, f₂^{(n)} ∈ Lp(R), 1 ≤ p < ∞, as

ω_r(f_j̃^{(n)}, h)_p := sup_{|t|≤h} ‖Δ_t^r f_j̃^{(n)}(x)‖_{p,x}, h > 0, with j̃ = 1, 2.   (6)

There they supposed that ω_r(f_j̃^{(n)}, h)_p < ∞, h > 0, j̃ = 1, 2.

They denoted

δ_k := Σ_{j=1}^{r} α_j j^k, k = 1, …, n ∈ N.

They assumed that the integrals

c_{k,ξ} := ∫_{−∞}^{∞} t^k dμ_ξ(t)

are finite, k = 1, …, n.

They noticed the inequalities

|Θ_{r,ξ}(f; x) − f(x)| ≤ |Θ_{r,ξ}(f₁; x) − f₁(x)| + |Θ_{r,ξ}(f₂; x) − f₂(x)|,   (7)
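The difference (5) and the modulus of smoothness (4) are easy to evaluate numerically; a minimal Python sketch (grid-based, so it yields only a lower estimate of the supremum, and the grid sizes are illustrative choices, not from the source):

```python
import math

def delta_r(f, x, t, r):
    # r-th forward difference of (5): Δ_t^r f(x) = Σ_{j=0}^r (-1)^(r-j) C(r,j) f(x+jt)
    return sum((-1) ** (r - j) * math.comb(r, j) * f(x + j * t)
               for j in range(r + 1))

def omega_r(f, h, r):
    # sup-norm modulus ω_r(f, h) of (4), approximated on finite grids of x and t;
    # the sup over |t| ≤ h equals the sup over 0 < t ≤ h after a shift in x,
    # so positive steps t suffice here
    xs = [i * 0.01 for i in range(-500, 501)]
    ts = [h * (i + 1) / 50 for i in range(50)]
    return max(abs(delta_r(f, x, t, r)) for x in xs for t in ts)

# For f = sin: ω_1(sin, h) = 2 sin(h/2) ≤ h, since |sin'| ≤ 1
print(omega_r(math.sin, 0.1, 1))
```

The printed grid value stays below the Lipschitz bound h = 0.1, in line with the general estimate ω₁(f, h) ≤ h‖f′‖_∞.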


‖Θ_{r,ξ}(f; x) − f(x)‖_{∞,x} ≤ ‖Θ_{r,ξ}(f₁; x) − f₁(x)‖_{∞,x} + ‖Θ_{r,ξ}(f₂; x) − f₂(x)‖_{∞,x},   (8)

and

‖Θ_{r,ξ}(f; x) − f(x)‖_{p,x} ≤ ‖Θ_{r,ξ}(f₁; x) − f₁(x)‖_{p,x} + ‖Θ_{r,ξ}(f₂; x) − f₂(x)‖_{p,x}, p ≥ 1.   (9)

Furthermore, they stated that the equality

f^{(k)}(x) = f₁^{(k)}(x) + if₂^{(k)}(x)   (10)

holds for all k = 1, …, n.

The authors stated

Theorem 1 ([5], p. 365) Let f : R → C be such that f = f₁ + if₂; here j = 1, 2. Let f_j ∈ Cⁿ(R), n ∈ Z₊. Assume that

∫_{−∞}^{∞} |t|ⁿ (1 + |t|/ξ)^r dμ_ξ(t) < ∞.

Suppose also that ω_r(f_j^{(n)}, h) < ∞, ∀h > 0. Then

‖Θ_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k c_{k,ξ}‖_{∞,x}   (11)
  ≤ (1/n!) [ ∫_{−∞}^{∞} |t|ⁿ (1 + |t|/ξ)^r dμ_ξ(t) ] ( ω_r(f₁^{(n)}, ξ) + ω_r(f₂^{(n)}, ξ) ).

When n = 0 the sum in the L.H.S. of (11) collapses.

They gave their results for the n = 0 case as follows.

Corollary 2 ([5], p. 366) Let f : R → C, f = f₁ + if₂; here j = 1, 2. Let f_j ∈ C(R). Assume

∫_{−∞}^{∞} (1 + |t|/ξ)^r dμ_ξ(t) < ∞.

Suppose also ω_r(f_j, h) < ∞, ∀h > 0. Then

‖Θ_{r,ξ}(f) − f‖_∞ ≤ [ ∫_{−∞}^{∞} (1 + |t|/ξ)^r dμ_ξ(t) ] ( ω_r(f₁, ξ) + ω_r(f₂, ξ) ).   (12)

Next they stated


Theorem 3 ([5], p. 366) Let g ∈ C^{n−1}(R) such that g^{(n)} exists, n, r ∈ N. Furthermore assume that for each x ∈ R the function g^{(j̃)}(x + jt) ∈ L₁(R, μ_ξ) as a function of t, for all j̃ = 0, 1, …, n − 1, j = 1, …, r. Suppose that there exist φ_{j̃,j} ≥ 0, j̃ = 1, …, n, j = 1, …, r, with φ_{j̃,j} ∈ L₁(R, μ_ξ) such that for each x ∈ R we have

|g^{(j̃)}(x + jt)| ≤ φ_{j̃,j}(t),   (13)

for almost all t ∈ R, all j̃ = 1, …, n, j = 1, 2, …, r. Then g^{(j̃)}(x + jt) defines a μ_ξ-integrable function with respect to t for each x ∈ R, all j̃ = 1, …, n, j = 1, …, r, and

(Θ_{r,ξ}(g; x))^{(j̃)} = Θ_{r,ξ}(g^{(j̃)}; x),   (14)

for all x ∈ R, all j̃ = 1, …, n.

They presented the following approximation result.

Theorem 4 ([5], p. 366) Let f : R → C, f = f₁ + if₂; here j = 1, 2. Let f_j ∈ C^{n+ℓ}(R), n, ℓ ∈ Z₊, and ω_r(f_j^{(n+j̃)}, h) < ∞, ∀h > 0, for j̃ = 0, 1, …, ℓ. Assume

∫_{−∞}^{∞} |t|ⁿ (1 + |t|/ξ)^r dμ_ξ(t) < ∞.

We consider the assumptions of Theorem 3 valid regarding f₁, f₂ for n = ℓ. Then

‖(Θ_{r,ξ}(f; x))^{(j̃)} − f^{(j̃)}(x) − Σ_{k=1}^{n} (f^{(k+j̃)}(x)/k!) δ_k c_{k,ξ}‖_{∞,x}   (15)
  ≤ (1/n!) [ ∫_{−∞}^{∞} |t|ⁿ (1 + |t|/ξ)^r dμ_ξ(t) ] ( ω_r(f₁^{(n+j̃)}, ξ) + ω_r(f₂^{(n+j̃)}, ξ) ),

for all j̃ = 0, 1, …, ℓ. When n = 0 the sum in the L.H.S. of (15) collapses.

They started to present their Lp results with the following theorem.

Theorem 5 ([5], p. 367) Let f : R → C, f = f₁ + if₂; here j = 1, 2. Let f_j ∈ Cⁿ(R) with f_j^{(n)} ∈ Lp(R), n ∈ N, p, q > 1 : 1/p + 1/q = 1. Suppose that

∫_{−∞}^{∞} [ (1 + |t|/ξ)^{rp+1} − 1 ] |t|^{np−1} dμ_ξ(t) < ∞,

and c_{k,ξ} ∈ R, k = 1, …, n. Then

‖Θ_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k c_{k,ξ}‖_{p,x}   (16)
  ≤ [ 1 / ((n−1)! (q(n−1)+1)^{1/q} (rp+1)^{1/p}) ] [ ∫_{−∞}^{∞} ( (1 + |t|/ξ)^{rp+1} − 1 ) |t|^{np−1} dμ_ξ(t) ]^{1/p} ξ^{1/p} ( ω_r(f₁^{(n)}, ξ)_p + ω_r(f₂^{(n)}, ξ)_p ).

For the case p = 1, they obtained

Theorem 6 ([5], p. 368) Let f : R → C, f = f₁ + if₂; here j = 1, 2. Let f_j ∈ Cⁿ(R) with f_j^{(n)} ∈ L₁(R), n ∈ N. Suppose that

∫_{−∞}^{∞} [ (1 + |t|/ξ)^{r+1} − 1 ] |t|^{n−1} dμ_ξ(t) < ∞,

and c_{k,ξ} ∈ R, k = 1, …, n. Then

‖Θ_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k c_{k,ξ}‖_{1,x}   (17)
  ≤ [ 1 / ((n−1)! (r+1)) ] [ ∫_{−∞}^{∞} ( (1 + |t|/ξ)^{r+1} − 1 ) |t|^{n−1} dμ_ξ(t) ] ( ω_r(f₁^{(n)}, ξ)₁ + ω_r(f₂^{(n)}, ξ)₁ ).

For n = 0, they showed

Proposition 7 ([5], p. 368) Let f : R → C, f = f₁ + if₂, where f₁, f₂ ∈ C(R) ∩ Lp(R), p, q > 1 : 1/p + 1/q = 1. Suppose that

∫_{−∞}^{∞} (1 + |t|/ξ)^{rp} dμ_ξ(t) < ∞.

Then

‖Θ_{r,ξ}(f) − f‖_p ≤ [ ∫_{−∞}^{∞} (1 + |t|/ξ)^{rp} dμ_ξ(t) ]^{1/p} ( ω_r(f₁, ξ)_p + ω_r(f₂, ξ)_p ).   (18)

Finally, they gave the case n = 0, p = 1 as

Proposition 8 ([5], p. 368) Let f : R → C, f = f₁ + if₂, where f₁, f₂ ∈ C(R) ∩ L₁(R). Suppose

∫_{−∞}^{∞} (1 + |t|/ξ)^r dμ_ξ(t) < ∞.

Then

‖Θ_{r,ξ}(f) − f‖₁ ≤ [ ∫_{−∞}^{∞} (1 + |t|/ξ)^r dμ_ξ(t) ] ( ω_r(f₁, ξ)₁ + ω_r(f₂, ξ)₁ ).   (19)

Next, the authors demonstrated their simultaneous Lp approximation results. They started with

Theorem 9 ([5], p. 369) Let f : R → C, f = f₁ + if₂; here j = 1, 2. Let f_j ∈ C^{n+ℓ}(R), n ∈ N, ℓ ∈ Z₊, with f_j^{(n+j̃)} ∈ Lp(R), j̃ = 0, 1, …, ℓ. Let p, q > 1 : 1/p + 1/q = 1. Suppose that

∫_{−∞}^{∞} [ (1 + |t|/ξ)^{rp+1} − 1 ] |t|^{np−1} dμ_ξ(t) < ∞,

and c_{k,ξ} ∈ R, k = 1, …, n. We consider the assumptions of Theorem 3 valid regarding f₁, f₂ for n = ℓ. Then

‖(Θ_{r,ξ}(f; x))^{(j̃)} − f^{(j̃)}(x) − Σ_{k=1}^{n} (f^{(k+j̃)}(x)/k!) δ_k c_{k,ξ}‖_{p,x}   (20)
  ≤ [ 1 / ((n−1)! (q(n−1)+1)^{1/q} (rp+1)^{1/p}) ] ( ω_r(f₁^{(n+j̃)}, ξ)_p + ω_r(f₂^{(n+j̃)}, ξ)_p ) [ ∫_{−∞}^{∞} ( (1 + |t|/ξ)^{rp+1} − 1 ) |t|^{np−1} dμ_ξ(t) ]^{1/p} ξ^{1/p},

for all j̃ = 0, 1, …, ℓ.

They had

Proposition 10 ([5], p. 369) Let f : R → C, f = f₁ + if₂. Let f₁^{(j̃)}, f₂^{(j̃)} ∈ C(R) ∩ Lp(R), j̃ = 0, 1, …, ℓ ∈ Z₊, p, q > 1 : 1/p + 1/q = 1. Suppose that

∫_{−∞}^{∞} (1 + |t|/ξ)^{rp} dμ_ξ(t) < ∞.

We consider the assumptions of Theorem 3 valid regarding f₁, f₂ for n = ℓ. Then

‖(Θ_{r,ξ}(f))^{(j̃)} − f^{(j̃)}‖_p ≤ [ ∫_{−∞}^{∞} (1 + |t|/ξ)^{rp} dμ_ξ(t) ]^{1/p} ( ω_r(f₁^{(j̃)}, ξ)_p + ω_r(f₂^{(j̃)}, ξ)_p ),   (21)

j̃ = 0, 1, …, ℓ.

Next they gave the related L₁ result as

Theorem 11 ([5], p. 369) Let f : R → C, f = f₁ + if₂, and j = 1, 2. Let f₁, f₂ ∈ C^{n+ℓ}(R), with f_j^{(n+j̃)} ∈ L₁(R), n ∈ N, j̃ = 0, 1, …, ℓ ∈ Z₊. Suppose that

∫_{−∞}^{∞} ( (1 + |t|/ξ)^{r+1} − 1 ) |t|^{n−1} dμ_ξ(t) < ∞,

and c_{k,ξ} ∈ R, k = 1, …, n. We consider the assumptions of Theorem 3 valid regarding f₁, f₂ for n = ℓ. Then

‖(Θ_{r,ξ}(f; x))^{(j̃)} − f^{(j̃)}(x) − Σ_{k=1}^{n} (f^{(k+j̃)}(x)/k!) δ_k c_{k,ξ}‖_{1,x}   (22)
  ≤ [ 1 / ((n−1)! (r+1)) ] [ ∫_{−∞}^{∞} ( (1 + |t|/ξ)^{r+1} − 1 ) |t|^{n−1} dμ_ξ(t) ] ( ω_r(f₁^{(n+j̃)}, ξ)₁ + ω_r(f₂^{(n+j̃)}, ξ)₁ ),

for all j̃ = 0, 1, …, ℓ.

Their last simultaneous approximation result follows.

Proposition 12 ([5], p. 370) Let f : R → C, f = f₁ + if₂. Here f₁^{(j̃)}, f₂^{(j̃)} ∈ C(R) ∩ L₁(R), j̃ = 0, 1, …, ℓ ∈ Z₊. Suppose

∫_{−∞}^{∞} (1 + |t|/ξ)^r dμ_ξ(t) < ∞.

We consider the assumptions of Theorem 3 valid regarding f₁, f₂ for n = ℓ. Then

‖(Θ_{r,ξ}(f))^{(j̃)} − f^{(j̃)}‖₁ ≤ [ ∫_{−∞}^{∞} (1 + |t|/ξ)^r dμ_ξ(t) ] ( ω_r(f₁^{(j̃)}, ξ)₁ + ω_r(f₂^{(j̃)}, ξ)₁ ),   (23)

for all j̃ = 0, 1, …, ℓ.

Next in [5], Chapter 19, the authors defined the generalized complex fractional integral operators as follows. Let α > 0, x, x₀ ∈ R, and f : R → C Borel measurable, such that f = f₁ + if₂. Suppose f₁, f₂ ∈ C^m(R), with ‖f₁^{(m)}‖_∞ < ∞ and ‖f₂^{(m)}‖_∞ < ∞. Let μ_ξ be a probability Borel measure on R, ∀ξ > 0. Then

Θ_{r,ξ}(f; x) := ∫_{−∞}^{∞} ( Σ_{j=0}^{r} α_j f(x + jt) ) dμ_ξ(t)   (24)
  = ∫_{−∞}^{∞} ( Σ_{j=0}^{r} α_j f₁(x + jt) ) dμ_ξ(t) + i ∫_{−∞}^{∞} ( Σ_{j=0}^{r} α_j f₂(x + jt) ) dμ_ξ(t)
  = Θ_{r,ξ}(f₁; x) + iΘ_{r,ξ}(f₂; x),

where

α_j = (−1)^{r−j} \binom{r}{j} j^{−α}, j = 1, …, r;  α₀ = 1 − Σ_{i=1}^{r} (−1)^{r−i} \binom{r}{i} i^{−α},   (25)

for r ∈ N and α > 0.

Additionally, they denoted

δ_k = Σ_{j=1}^{r} α_j j^k, k = 1, …, m − 1,   (26)

where m = ⌈α⌉ (⌈·⌉ is the ceiling of the number).

They supposed here that Θ_{r,ξ}(f₁; x), Θ_{r,ξ}(f₂; x) ∈ R, ∀x ∈ R. The authors also assumed the existence of c_{k,ξ} := ∫_{−∞}^{∞} t^k dμ_ξ(t), k = 1, …, m − 1, and the existence of ∫_{−∞}^{∞} |t|^{α+k} dμ_ξ(t), k = 0, 1, …, r.

Finally, they gave their fractional result as

Theorem 13 ([5], p. 371) It holds

‖Θ_{r,ξ}(f; ·) − f(·) − Σ_{k=1}^{m−1} (f^{(k)}(·)/k!) δ_k c_{k,ξ}‖_∞   (27)
  ≤ [ Σ_{k=0}^{r} ( r! / ((r−k)! Γ(α+k+1)) ) ξ^k ∫_{−∞}^{∞} |t|^{α+k} dμ_ξ(t) ]
    × ( sup_{x∈R} max{ ω_r(D^α_{x−}f₁, ξ), ω_r(D^α_{*x}f₁, ξ) } + sup_{x∈R} max{ ω_r(D^α_{x−}f₂, ξ), ω_r(D^α_{*x}f₂, ξ) } ).

If m = 1, the sum in the L.H.S. of (27) disappears.

Let j = 1, 2. Above, D^α_{x₀−}f_j is the right Caputo fractional derivative of order α > 0, given by

D^α_{x₀−}f_j(x) := ( (−1)^m / Γ(m−α) ) ∫_x^{x₀} (ζ − x)^{m−α−1} f_j^{(m)}(ζ) dζ,   (28)

∀x ≤ x₀, x₀ ∈ R fixed. They supposed D^α_{x₀−}f_j(x) = 0, ∀x > x₀.

Also, D^α_{*x₀}f_j is the left Caputo fractional derivative of order α > 0, given by

D^α_{*x₀}f_j(x) := ( 1 / Γ(m−α) ) ∫_{x₀}^x (x − t)^{m−α−1} f_j^{(m)}(t) dt,   (29)

∀x ≥ x₀, x₀ ∈ R fixed, where Γ(z) := ∫₀^∞ e^{−t} t^{z−1} dt, z > 0. They assumed D^α_{*x₀}f_j(x) = 0 for x < x₀.

On the other hand, in [1], the authors defined important special cases of the Θ_{r,ξ} operators for discrete probability measures as follows. Let f ∈ Cⁿ(R), n ∈ Z₊, 0 < ξ ≤ 1, x ∈ R.

i) When

μ_ξ(ν) = e^{−|ν|/ξ} / Σ_{ν=−∞}^{∞} e^{−|ν|/ξ},   (30)

they defined the generalized discrete Picard operators as

P*_{r,ξ}(f; x) := [ Σ_{ν=−∞}^{∞} ( Σ_{j=0}^{r} α_j f(x + jν) ) e^{−|ν|/ξ} ] / [ Σ_{ν=−∞}^{∞} e^{−|ν|/ξ} ].   (31)

ii) When

μ_ξ(ν) = e^{−ν²/ξ} / Σ_{ν=−∞}^{∞} e^{−ν²/ξ},   (32)

they defined the generalized discrete Gauss–Weierstrass operators as

W*_{r,ξ}(f; x) := [ Σ_{ν=−∞}^{∞} ( Σ_{j=0}^{r} α_j f(x + jν) ) e^{−ν²/ξ} ] / [ Σ_{ν=−∞}^{∞} e^{−ν²/ξ} ].   (33)

iii) Let α ∈ N and β > 1/(2α). When

μ_ξ(ν) = (ν^{2α} + ξ^{2α})^{−β} / Σ_{ν=−∞}^{∞} (ν^{2α} + ξ^{2α})^{−β},   (34)

they defined the generalized discrete Poisson–Cauchy operators as

Q*_{r,ξ}(f; x) := [ Σ_{ν=−∞}^{∞} ( Σ_{j=0}^{r} α_j f(x + jν) ) (ν^{2α} + ξ^{2α})^{−β} ] / [ Σ_{ν=−∞}^{∞} (ν^{2α} + ξ^{2α})^{−β} ].   (35)
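As a quick numerical sanity check of the discrete operators above, the Python sketch below truncates the bilateral sums at |ν| ≤ N (an illustrative cutoff; the kernels decay fast for ξ ≤ 1) and verifies that Σ_{j=0}^{r} α_j = 1, which is what forces the operators to reproduce constants:

```python
import math

def alpha(j, r, n):
    # coefficients α_j of (1): α_j = (-1)^(r-j) C(r,j) j^(-n) for j ≥ 1,
    # and α_0 = 1 - Σ_{i=1}^r (-1)^(r-i) C(r,i) i^(-n), so Σ_j α_j = 1
    if j == 0:
        return 1.0 - sum((-1) ** (r - i) * math.comb(r, i) * i ** (-n)
                         for i in range(1, r + 1))
    return (-1) ** (r - j) * math.comb(r, j) * j ** (-n)

def discrete_picard(f, x, r, n, xi, N=200):
    # truncated generalized discrete Picard operator of (31)
    num = den = 0.0
    for nu in range(-N, N + 1):
        w = math.exp(-abs(nu) / xi)
        num += sum(alpha(j, r, n) * f(x + j * nu) for j in range(r + 1)) * w
        den += w
    return num / den

print(sum(alpha(j, 3, 2) for j in range(4)))           # 1, by construction
print(discrete_picard(lambda t: 7.0, 0.5, 3, 2, 0.4))  # constants are reproduced
```

The same two checks apply verbatim to the Gauss–Weierstrass and Poisson–Cauchy kernels, since only the weights w change.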

They observed that for a constant c they have

P*_{r,ξ}(c; x) = W*_{r,ξ}(c; x) = Q*_{r,ξ}(c; x) = c.   (36)

They assumed that P*_{r,ξ}(f; x), W*_{r,ξ}(f; x), and Q*_{r,ξ}(f; x) ∈ R for x ∈ R. This is the case when ‖f‖_{∞,R} < ∞.

iv) When

μ_ξ(ν) := μ_{ξ,P}(ν) := e^{−|ν|/ξ} / (1 + 2e^{−1/ξ}),   (37)

they defined the generalized discrete non-unitary Picard operators as

P̄_{r,ξ}(f; x) := [ Σ_{ν=−∞}^{∞} ( Σ_{j=0}^{r} α_j f(x + jν) ) e^{−|ν|/ξ} ] / (1 + 2e^{−1/ξ}).   (38)

Here μ_{ξ,P}(ν) has mass

m_{ξ,P} := Σ_{ν=−∞}^{∞} e^{−|ν|/ξ} / (1 + 2e^{−1/ξ}).   (39)

They observed that

μ_{ξ,P}(ν) / m_{ξ,P} = e^{−|ν|/ξ} / Σ_{ν=−∞}^{∞} e^{−|ν|/ξ},   (40)

which is the probability measure (30) defining the operators P*_{r,ξ}.

v) When

μ_ξ(ν) := μ_{ξ,W}(ν) := e^{−ν²/ξ} / ( √(πξ) erf(1/√ξ) + 1 ),   (41)

with erf(x) = (2/√π) ∫₀^x e^{−t²} dt, erf(∞) = 1, they defined the generalized discrete non-unitary Gauss–Weierstrass operators as

W̄_{r,ξ}(f; x) := [ Σ_{ν=−∞}^{∞} ( Σ_{j=0}^{r} α_j f(x + jν) ) e^{−ν²/ξ} ] / ( √(πξ) erf(1/√ξ) + 1 ).   (42)

Here μ_{ξ,W}(ν) has mass

m_{ξ,W} := Σ_{ν=−∞}^{∞} e^{−ν²/ξ} / ( √(πξ) erf(1/√ξ) + 1 ).   (43)

They observed that

μ_{ξ,W}(ν) / m_{ξ,W} = e^{−ν²/ξ} / Σ_{ν=−∞}^{∞} e^{−ν²/ξ},   (44)

which is the probability measure (32) defining the operators W*_{r,ξ}.

The authors observed that P̄_{r,ξ}(f; x), W̄_{r,ξ}(f; x) ∈ R for x ∈ R.

In [1], for k = 1, …, n, the authors defined the ratios of sums

c*_{k,ξ} := Σ_{ν=−∞}^{∞} ν^k e^{−|ν|/ξ} / Σ_{ν=−∞}^{∞} e^{−|ν|/ξ},   (45)

p*_{k,ξ} := Σ_{ν=−∞}^{∞} ν^k e^{−ν²/ξ} / Σ_{ν=−∞}^{∞} e^{−ν²/ξ},   (46)

and for α ∈ N, β > (n + r + 1)/(2α), they introduced

q*_{k,ξ} := Σ_{ν=−∞}^{∞} ν^k (ν^{2α} + ξ^{2α})^{−β} / Σ_{ν=−∞}^{∞} (ν^{2α} + ξ^{2α})^{−β}.   (47)

Furthermore, they proved that these ratios of sums c*_{k,ξ}, p*_{k,ξ}, and q*_{k,ξ} are finite for all ξ ∈ (0, 1].
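These ratios are easy to probe numerically; a small sketch for (45), with the bilateral sums truncated at |ν| ≤ N (an illustrative cutoff). Odd moments vanish by the symmetry of the kernel, and even ones stay small for small ξ:

```python
import math

def c_star(k, xi, N=300):
    # c*_{k,ξ} of (45) with the bilateral sums truncated at |ν| ≤ N
    num = sum(nu ** k * math.exp(-abs(nu) / xi) for nu in range(-N, N + 1))
    den = sum(math.exp(-abs(nu) / xi) for nu in range(-N, N + 1))
    return num / den

print(c_star(1, 0.5))  # odd moment: cancels by symmetry of e^(-|ν|/ξ)
print(c_star(2, 0.5))  # even moment: finite, small for ξ in (0, 1]
```

Analogous probes work for p*_{k,ξ} and q*_{k,ξ} by swapping in the Gauss–Weierstrass or Poisson–Cauchy weights.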

In [1], the authors also proved that

m_{ξ,P} = Σ_{ν=−∞}^{∞} e^{−|ν|/ξ} / (1 + 2e^{−1/ξ}) → 1 as ξ → 0+   (48)

and

m_{ξ,W} = Σ_{ν=−∞}^{∞} e^{−ν²/ξ} / ( 1 + √(πξ) erf(1/√ξ) ) → 1 as ξ → 0+.   (49)
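The limits (48)–(49) can be observed numerically; a sketch assuming the normalizations (39) and (43) as reconstructed above, with the bilateral sums truncated at |ν| ≤ N:

```python
import math

def mass_P(xi, N=400):
    # m_{ξ,P} of (39): Σ_ν e^(-|ν|/ξ) / (1 + 2 e^(-1/ξ))
    num = sum(math.exp(-abs(nu) / xi) for nu in range(-N, N + 1))
    return num / (1.0 + 2.0 * math.exp(-1.0 / xi))

def mass_W(xi, N=400):
    # m_{ξ,W} of (43): Σ_ν e^(-ν²/ξ) / (√(πξ) erf(1/√ξ) + 1)
    num = sum(math.exp(-nu * nu / xi) for nu in range(-N, N + 1))
    return num / (math.sqrt(math.pi * xi) * math.erf(1.0 / math.sqrt(xi)) + 1.0)

for xi in (0.5, 0.1, 0.01, 0.001):
    print(xi, mass_P(xi), mass_W(xi))  # both masses drift toward 1 as ξ → 0+
```

The Picard mass converges very fast (the defect is of order e^(−2/ξ)), while the Gauss–Weierstrass mass approaches 1 only at the rate √(πξ).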


The authors also introduced

δ_k := Σ_{j=1}^{r} α_j j^k, k = 1, …, n ∈ N.   (50)

Additionally, in [1], the authors defined the following error quantities:

E_{0,P}(f, x) := P̄_{r,ξ}(f; x) − f(x) = [ Σ_{ν=−∞}^{∞} ( Σ_{j=0}^{r} α_j f(x + jν) ) e^{−|ν|/ξ} ] / (1 + 2e^{−1/ξ}) − f(x),   (51)

E_{0,W}(f, x) := W̄_{r,ξ}(f; x) − f(x) = [ Σ_{ν=−∞}^{∞} ( Σ_{j=0}^{r} α_j f(x + jν) ) e^{−ν²/ξ} ] / ( √(πξ) erf(1/√ξ) + 1 ) − f(x).   (52)

Furthermore, they introduced the errors (n ∈ N):

E_{n,P}(f, x) := P̄_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k Σ_{ν=−∞}^{∞} ν^k e^{−|ν|/ξ} / (1 + 2e^{−1/ξ})   (53)

and

E_{n,W}(f, x) := W̄_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k Σ_{ν=−∞}^{∞} ν^k e^{−ν²/ξ} / ( √(πξ) erf(1/√ξ) + 1 ).   (54)

Next, they obtained the inequalities

|E_{0,P}(f, x)| ≤ m_{ξ,P} |P*_{r,ξ}(f; x) − f(x)| + |f(x)| |m_{ξ,P} − 1|,   (55)

|E_{0,W}(f, x)| ≤ m_{ξ,W} |W*_{r,ξ}(f; x) − f(x)| + |f(x)| |m_{ξ,W} − 1|,   (56)

and

|E_{n,P}(f, x)| ≤ m_{ξ,P} |P*_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k c*_{k,ξ}| + |f(x)| |m_{ξ,P} − 1|,   (57)

with

|E_{n,W}(f, x)| ≤ m_{ξ,W} |W*_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k p*_{k,ξ}| + |f(x)| |m_{ξ,W} − 1|.   (58)

In [1], they first gave the following simultaneous approximation results for unitary operators. They showed

Theorem 14 Let f ∈ Cⁿ(R) with f^{(n)} ∈ C_u(R) (the uniformly continuous functions on R).

i) For n ∈ N,

‖P*_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k c*_{k,ξ}‖_{∞,x}   (59)
  ≤ (ω_r(f^{(n)}, ξ)/n!) [ Σ_{ν=−∞}^{∞} |ν|ⁿ (1 + |ν|/ξ)^r e^{−|ν|/ξ} / Σ_{ν=−∞}^{∞} e^{−|ν|/ξ} ],

and

‖W*_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k p*_{k,ξ}‖_{∞,x}   (60)
  ≤ (ω_r(f^{(n)}, ξ)/n!) [ Σ_{ν=−∞}^{∞} |ν|ⁿ (1 + |ν|/ξ)^r e^{−ν²/ξ} / Σ_{ν=−∞}^{∞} e^{−ν²/ξ} ].

ii) For n = 0,

‖P*_{r,ξ}(f; x) − f(x)‖_{∞,x} ≤ ω_r(f, ξ) [ Σ_{ν=−∞}^{∞} (1 + |ν|/ξ)^r e^{−|ν|/ξ} / Σ_{ν=−∞}^{∞} e^{−|ν|/ξ} ],   (61)

and

‖W*_{r,ξ}(f; x) − f(x)‖_{∞,x} ≤ ω_r(f, ξ) [ Σ_{ν=−∞}^{∞} (1 + |ν|/ξ)^r e^{−ν²/ξ} / Σ_{ν=−∞}^{∞} e^{−ν²/ξ} ].   (62)

In the above inequalities (59)-(62), the ratios of sums in their right-hand sides (R.H.S.) are uniformly bounded with respect to ξ ∈ (0, 1].
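The uniform boundedness claimed for these ratios can be probed numerically; a sketch for the Picard ratio on the R.H.S. of (59), where the truncation cutoff N and the parameter values n = 1, r = 2 are illustrative choices:

```python
import math

def picard_ratio(n, r, xi, N=400):
    # ratio of sums on the R.H.S. of (59):
    #   Σ_ν |ν|^n (1 + |ν|/ξ)^r e^(-|ν|/ξ)  /  Σ_ν e^(-|ν|/ξ)
    num = sum(abs(nu) ** n * (1.0 + abs(nu) / xi) ** r * math.exp(-abs(nu) / xi)
              for nu in range(-N, N + 1))
    den = sum(math.exp(-abs(nu) / xi) for nu in range(-N, N + 1))
    return num / den

vals = [picard_ratio(1, 2, xi) for xi in (0.05, 0.1, 0.25, 0.5, 1.0)]
print(vals)  # bounded on (0, 1]; shrinks toward 0 as ξ → 0+
```

The ν = 0 term vanishes when n ≥ 1 and every ν ≠ 0 term is killed exponentially as ξ → 0+, which is why the ratio not only stays bounded but tends to 0.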


In [1], they had also

Theorem 15 Let f ∈ Cⁿ(R) with f^{(n)} ∈ C_u(R), n ∈ N, and β > (n + r + 1)/(2α).

i) For n ∈ N,

‖Q*_{r,ξ}(f; x) − f(x) − Σ_{k=1}^{n} (f^{(k)}(x)/k!) δ_k q*_{k,ξ}‖_{∞,x}   (63)
  ≤ (ω_r(f^{(n)}, ξ)/n!) [ Σ_{ν=−∞}^{∞} |ν|ⁿ (1 + |ν|/ξ)^r (ν^{2α} + ξ^{2α})^{−β} / Σ_{ν=−∞}^{∞} (ν^{2α} + ξ^{2α})^{−β} ].

ii) For n = 0,

‖Q*_{r,ξ}(f; x) − f(x)‖_{∞,x} ≤ ω_r(f, ξ) [ Σ_{ν=−∞}^{∞} (1 + |ν|/ξ)^r (ν^{2α} + ξ^{2α})^{−β} / Σ_{ν=−∞}^{∞} (ν^{2α} + ξ^{2α})^{−β} ].   (64)

In the above inequalities (63)-(64), the ratios of sums in their R.H.S. are uniformly bounded with respect to ξ ∈ (0, 1].

Next, they stated their results in [1] for the errors E_{0,P}, E_{0,W}, E_{n,P}, and E_{n,W}. They had

Corollary 16 Let f ∈ C_u(R). Then

i) |E_{0,P}(f, x)| ≤ [ Σ_{ν=−∞}^{∞} ω_r(f, |ν|) e^{−|ν|/ξ} / (1 + 2e^{−1/ξ}) ] + |f(x)| |m_{ξ,P} − 1|,   (65)

ii) |E_{0,W}(f, x)| ≤ [ Σ_{ν=−∞}^{∞} ω_r(f, |ν|) e^{−ν²/ξ} / ( √(πξ) erf(1/√ξ) + 1 ) ] + |f(x)| |m_{ξ,W} − 1|.   (66)

In [1], for E_{n,P} and E_{n,W}, the authors presented

Theorem 17 Let f ∈ Cⁿ(R) with f^{(n)} ∈ C_u(R), n ∈ N, and ‖f‖_{∞,R} < ∞. Then

i) ‖E_{n,P}(f, x)‖_{∞,x}   (67)
  ≤ (ω_r(f^{(n)}, ξ)/n!) [ Σ_{ν=−∞}^{∞} |ν|ⁿ (1 + |ν|/ξ)^r e^{−|ν|/ξ} / (1 + 2e^{−1/ξ}) ] + ‖f‖_{∞,R} |m_{ξ,P} − 1|,

ii) ‖E_{n,W}(f, x)‖_{∞,x}   (68)
  ≤ (ω_r(f^{(n)}, ξ)/n!) [ Σ_{ν=−∞}^{∞} |ν|ⁿ (1 + |ν|/ξ)^r e^{−ν²/ξ} / ( √(πξ) erf(1/√ξ) + 1 ) ] + ‖f‖_{∞,R} |m_{ξ,W} − 1|.

In the above inequalities (67)-(68), the ratios of sums in their R.H.S. are uniformly bounded with respect to ξ ∈ (0, 1].

In [2], the authors represented simultaneous Lp approximation results. Theystarted with

Theorem 18 i) Let f 2 Cn(R); with f (n) 2 Lp (R) ; n 2 N; p; q > 1 : 1p+1q = 1,

and rest as above in this section:Then P r; (f ;x) f(x)nXk=1

f (k)(x)

k!kc

k;

p

(69)

1

((n 1)!)(q(n 1) + 1)1q (rp+ 1)

1p

Mp;

1p

1p!r(f

(n); )p

where

Mp; :=

1X=1

1 + jj

rp+1 1jjnp1 e

jj

1X=1

ejj

(70)

which is uniformly bounded for all 2 (0; 1] :Additionally, as ! 0+ we obtain that R:H:S: of (69) goes to zero.ii) When p = 1; let f 2 Cn(R), f (n) 2 L1(R); and n 2 N f1g: Then P r; (f ;x) f(x)

nXk=1

f (k)(x)

k!kc

k;

1

(71)

1

(n 1)! (r + 1)M1;!r(f

(n); )1

15

Approximation by Complex Generalized Discrete Singular Operators,

George A. Anastassiou & Merve Kester310

Page 311: JOURNAL OF APPLIED FUNCTIONAL ANALYSIS - … OF APPLIED FUNCTIONAL ANALYSIS ... Karayiannis@mail.gr Neural Network Models, ... Athens, Greece tel:: +30(210)

holds, where $M_{1,\xi}$ is defined as in (70). Hence, as $\xi \to 0^+$, we obtain that the R.H.S. of (71) goes to zero.

iii) When $n=0$, let $f \in (C(\mathbb{R}) \cap L_p(\mathbb{R}))$, $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, and the rest as above in this section. Then
$$\left\|P^*_{r,\xi}(f;x)-f(x)\right\|_p \le \left(\widetilde{M}_{p,\xi}\right)^{1/p}\omega_r(f,\xi)_p, \quad (72)$$
where
$$\widetilde{M}_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}, \quad (73)$$
which is uniformly bounded for all $\xi \in (0,1]$. Hence, as $\xi \to 0^+$, we obtain that $P^*_{r,\xi} \to$ unit operator $I$ in the $L_p$ norm for $p>1$.

iv) When $n=0$ and $p=1$, let $f \in (C(\mathbb{R}) \cap L_1(\mathbb{R}))$ and the rest as above in this section. Then the inequality
$$\left\|P^*_{r,\xi}(f;x)-f(x)\right\|_1 \le \widetilde{M}_{1,\xi}\,\omega_r(f,\xi)_1 \quad (74)$$
holds, where $\widetilde{M}_{1,\xi}$ is defined as in (73). Furthermore, we get $P^*_{r,\xi} \to I$ in the $L_1$ norm as $\xi \to 0^+$.
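The constant $M_{p,\xi}$ of (70) can be probed numerically. The following sketch (not from the paper) evaluates the reconstructed expression for $M_{p,\xi}$ with a truncated series; the parameters `p`, `n`, `r` and the $\xi$ grid are illustrative assumptions.

```python
import math

def M_p_xi(p, n, r, xi, N=3000):
    """Truncated evaluation of M_{p,xi} as reconstructed in (70):
    sum over nu of ((1+|nu|/xi)^{rp+1} - 1) |nu|^{np-1} e^{-|nu|/xi},
    divided by sum over nu of e^{-|nu|/xi}."""
    num = sum(((1 + abs(v) / xi) ** (r * p + 1) - 1) * abs(v) ** (n * p - 1)
              * math.exp(-abs(v) / xi) for v in range(-N, N + 1) if v != 0)
    den = sum(math.exp(-abs(v) / xi) for v in range(-N, N + 1))
    return num / den

vals = [M_p_xi(p=2, n=1, r=2, xi=xi) for xi in (0.1, 0.25, 0.5, 1.0)]
```

For these sample parameters the quantity increases with $\xi$ yet remains finite over $(0,1]$, in line with the uniform-boundedness claim after (70).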

Next, the authors presented their quantitative results for the Gauss-Weierstrass operators; see [2]. They started with

Theorem 19 i) Let $f \in C^n(\mathbb{R})$ with $f^{(n)} \in L_p(\mathbb{R})$, $n \in \mathbb{N}$, $p,q>1$ with $\frac{1}{p}+\frac{1}{q}=1$, and the rest as above in this section. Then
$$\left\|W^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,p_{k,\xi}\right\|_p \le \frac{\xi^{1/p}\,\omega_r(f^{(n)},\xi)_p}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\,\left(N_{p,\xi}\right)^{1/p}, \quad (75)$$
where
$$N_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{rp+1}-1\right)|\nu|^{np-1}\,e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}, \quad (76)$$
which is uniformly bounded for all $\xi \in (0,1]$. Additionally, as $\xi \to 0^+$ we obtain that the R.H.S. of (75) goes to zero.

ii) For $p=1$, let $f \in C^n(\mathbb{R})$, $f^{(n)} \in L_1(\mathbb{R})$, and $n \in \mathbb{N}-\{1\}$. Then
$$\left\|W^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,p_{k,\xi}\right\|_1 \le \frac{\xi}{(n-1)!\,(r+1)}\,N_{1,\xi}\,\omega_r(f^{(n)},\xi)_1 \quad (77)$$
holds, where $N_{1,\xi}$ is defined as in (76). Hence, as $\xi \to 0^+$, we obtain that the R.H.S. of (77) goes to zero.

iii) For $n=0$, let $f \in (C(\mathbb{R}) \cap L_p(\mathbb{R}))$, $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, and the rest as above in this section. Then
$$\left\|W^*_{r,\xi}(f;x)-f(x)\right\|_p \le \left(\widetilde{N}_{p,\xi}\right)^{1/p}\omega_r(f,\xi)_p, \quad (78)$$
where
$$\widetilde{N}_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}, \quad (79)$$
which is uniformly bounded for all $\xi \in (0,1]$. Hence, as $\xi \to 0^+$, we obtain that $W^*_{r,\xi} \to$ unit operator $I$ in the $L_p$ norm for $p>1$.

iv) For $n=0$ and $p=1$, let $f \in (C(\mathbb{R}) \cap L_1(\mathbb{R}))$ and the rest as above in this section. Then the inequality
$$\left\|W^*_{r,\xi}(f;x)-f(x)\right\|_1 \le \widetilde{N}_{1,\xi}\,\omega_r(f,\xi)_1 \quad (80)$$
holds, where $\widetilde{N}_{1,\xi}$ is defined as in (79). Furthermore, we get $W^*_{r,\xi} \to I$ in the $L_1$ norm as $\xi \to 0^+$.

For the Poisson-Cauchy operators, in [2], the authors showed

Theorem 20 i) Let $f \in C^n(\mathbb{R})$ with $f^{(n)} \in L_p(\mathbb{R})$, $n \in \mathbb{N}$, $p,q>1$ with $\frac{1}{p}+\frac{1}{q}=1$, $\alpha > \frac{p(r+n)+1}{2\beta}$, $\beta \in \mathbb{N}$, and the rest as above in this section. Then
$$\left\|Q^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,q_{k,\xi}\right\|_p \le \frac{\xi^{1/p}\,\omega_r(f^{(n)},\xi)_p}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\,\left(S_{p,\xi}\right)^{1/p}, \quad (81)$$
where
$$S_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{rp+1}-1\right)|\nu|^{np-1}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}} \quad (82)$$
is uniformly bounded for all $\xi \in (0,1]$. Additionally, as $\xi \to 0^+$, we obtain that the R.H.S. of (81) goes to zero.

ii) When $p=1$, let $f \in C^n(\mathbb{R})$, $f^{(n)} \in L_1(\mathbb{R})$, $\alpha > \frac{r+n+1}{2\beta}$, and $n \in \mathbb{N}-\{1\}$. Then
$$\left\|Q^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,q_{k,\xi}\right\|_1 \le \frac{\xi}{(n-1)!\,(r+1)}\,S_{1,\xi}\,\omega_r(f^{(n)},\xi)_1 \quad (83)$$
holds, where $S_{1,\xi}$ is defined as in (82). Hence, as $\xi \to 0^+$, we obtain that the R.H.S. of (83) goes to zero.

iii) When $n=0$, let $f \in (C(\mathbb{R}) \cap L_p(\mathbb{R}))$, $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $\alpha > \frac{p(r+2)+1}{2\beta}$, and the rest as above in this section. Then
$$\left\|Q^*_{r,\xi}(f;x)-f(x)\right\|_p \le \left(\widetilde{S}_{p,\xi}\right)^{1/p}\omega_r(f,\xi)_p, \quad (84)$$
where
$$\widetilde{S}_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}, \quad (85)$$
which is uniformly bounded for all $\xi \in (0,1]$. Hence, as $\xi \to 0^+$, we obtain that $Q^*_{r,\xi} \to$ unit operator $I$ in the $L_p$ norm for $p>1$.

iv) When $n=0$ and $p=1$, let $f \in (C(\mathbb{R}) \cap L_1(\mathbb{R}))$, $\alpha > \frac{r+3}{2\beta}$, and the rest as above in this section. The inequality
$$\left\|Q^*_{r,\xi}(f;x)-f(x)\right\|_1 \le \widetilde{S}_{1,\xi}\,\omega_r(f,\xi)_1 \quad (86)$$
holds, where $\widetilde{S}_{1,\xi}$ is defined as in (85). Furthermore, we get $Q^*_{r,\xi} \to I$ in the $L_1$ norm as $\xi \to 0^+$.

Next in [2], they stated their results for the errors $E_{0,P}$, $E_{0,W}$, $E_{n,P}$, and $E_{n,W}$ as follows.

Theorem 21 i) Let $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $n \in \mathbb{N}$ such that $np \ne 1$, $f \in L_p(\mathbb{R})$, and the rest as above in this section. Then
$$\|E_{n,P}(f,x)\|_p \le \frac{\xi^{1/p}\,\omega_r(f^{(n)},\xi)_p\left(\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}\right)^{1/q}}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{rp+1}-1\right)|\nu|^{np-1}e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right]^{1/p}+\|f(x)\|_p\,|m_{\xi,P}-1| \quad (87)$$
holds. Additionally, as $\xi \to 0^+$, we obtain that the R.H.S. of (87) goes to zero.

ii) When $p=1$, let $f \in C^n(\mathbb{R})$, $f \in L_1(\mathbb{R})$, $f^{(n)} \in L_1(\mathbb{R})$, and $n \in \mathbb{N}-\{1\}$. Then
$$\|E_{n,P}(f,x)\|_1 \le \frac{\xi\,\omega_r(f^{(n)},\xi)_1}{(n-1)!\,(r+1)}\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{r+1}-1\right)|\nu|^{n-1}e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right]+\|f(x)\|_1\,|m_{\xi,P}-1| \quad (88)$$
holds. Additionally, as $\xi \to 0^+$, we obtain that the R.H.S. of (88) goes to zero.

iii) When $n=0$, let $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $f \in L_p(\mathbb{R})$, and the rest as above in this section. Then
$$\|E_{0,P}(f,x)\|_p \le \omega_r(f,\xi)_p\left(\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}\right)^{1/q}\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right]^{1/p}+\|f(x)\|_p\,|m_{\xi,P}-1| \quad (89)$$
holds. Hence, as $\xi \to 0^+$, we obtain that the R.H.S. of (89) goes to zero.

iv) When $n=0$ and $p=1$, the inequality
$$\|E_{0,P}(f,x)\|_1 \le \left(\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{r}e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right)\omega_r(f,\xi)_1+\|f(x)\|_1\,|m_{\xi,P}-1| \quad (90)$$
holds. Hence, as $\xi \to 0^+$, we obtain that the R.H.S. of (90) goes to zero.

Next in [2], the authors gave quantitative results for $E_{n,W}(f,x)$.

Theorem 22 i) Let $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $n \in \mathbb{N}$ such that $np \ne 1$, $f \in L_p(\mathbb{R})$, and the rest as above in this section. Then
$$\|E_{n,W}(f,x)\|_p \le \frac{\xi^{1/p}\,\omega_r(f^{(n)},\xi)_p\left(\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}\right)^{1/q}}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{rp+1}-1\right)|\nu|^{np-1}e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right]^{1/p}+\|f(x)\|_p\,|m_{\xi,W}-1| \quad (91)$$
holds. Additionally, as $\xi \to 0^+$, we obtain that the R.H.S. of (91) goes to zero.

ii) For $p=1$, let $f \in C^n(\mathbb{R})$, $f \in L_1(\mathbb{R})$, $f^{(n)} \in L_1(\mathbb{R})$, and $n \in \mathbb{N}-\{1\}$. Then
$$\|E_{n,W}(f,x)\|_1 \le \frac{\xi\,\omega_r(f^{(n)},\xi)_1}{(n-1)!\,(r+1)}\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{r+1}-1\right)|\nu|^{n-1}e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right]+\|f(x)\|_1\,|m_{\xi,W}-1| \quad (92)$$
holds. Additionally, as $\xi \to 0^+$, we obtain that the R.H.S. of (92) goes to zero.

iii) For $n=0$, let $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $f \in L_p(\mathbb{R})$, and the rest as above in this section. Then
$$\|E_{0,W}(f,x)\|_p \le \omega_r(f,\xi)_p\left(\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}\right)^{1/q}\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right]^{1/p}+\|f(x)\|_p\,|m_{\xi,W}-1| \quad (93)$$
holds. Hence, as $\xi \to 0^+$, we obtain that the R.H.S. of (93) goes to zero.

iv) For $n=0$ and $p=1$, the inequality
$$\|E_{0,W}(f,x)\|_1 \le \left(\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{r}e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right)\omega_r(f,\xi)_1+\|f(x)\|_1\,|m_{\xi,W}-1| \quad (94)$$
holds. Hence, as $\xi \to 0^+$, we obtain that the R.H.S. of (94) goes to zero.

Furthermore, in [3], the authors gave their fractional approximation results as follows.

Theorem 23 Let $f \in C^m(\mathbb{R})$, $m = \lceil\lambda\rceil$, $\lambda > 0$, with $\|f^{(m)}\|_\infty < \infty$, $0 < \xi \le 1$. Then

i)
$$\left\|P^*_{r,\xi}(f)-f-\sum_{k=1}^{m-1}\frac{f^{(k)}\,\delta_k\,c_{k,\xi}}{k!}\right\|_\infty \le \left[\sum_{k=0}^{r}\frac{r!\,\xi^k}{(r-k)!\,\Gamma(\lambda+k+1)}\,S1_{\lambda,k}\right]\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f,\xi\right),\,\omega_r\left(D^\lambda_{*x}f,\xi\right)\right\}, \quad (95)$$
where
$$S1_{\lambda,k} := \frac{\sum_{\nu=-\infty}^{\infty}|\nu|^{\lambda+k}\,e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}} \quad (96)$$
is uniformly bounded for all $\xi \in (0,1]$.

ii)
$$\left\|W^*_{r,\xi}(f)-f-\sum_{k=1}^{m-1}\frac{f^{(k)}\,\delta_k\,p_{k,\xi}}{k!}\right\|_\infty \le \left[\sum_{k=0}^{r}\frac{r!\,\xi^k}{(r-k)!\,\Gamma(\lambda+k+1)}\,S2_{\lambda,k}\right]\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f,\xi\right),\,\omega_r\left(D^\lambda_{*x}f,\xi\right)\right\}, \quad (97)$$
where
$$S2_{\lambda,k} := \frac{\sum_{\nu=-\infty}^{\infty}|\nu|^{\lambda+k}\,e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}} \quad (98)$$
is uniformly bounded for all $\xi \in (0,1]$.

iii) For $\alpha > \frac{\lambda+r+1}{2\beta}$, $\beta \in \mathbb{N}$,
$$\left\|Q^*_{r,\xi}(f)-f-\sum_{k=1}^{m-1}\frac{f^{(k)}\,\delta_k\,q_{k,\xi}}{k!}\right\|_\infty \le \left[\sum_{k=0}^{r}\frac{r!\,\xi^k}{(r-k)!\,\Gamma(\lambda+k+1)}\,S3_{\lambda,k}\right]\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f,\xi\right),\,\omega_r\left(D^\lambda_{*x}f,\xi\right)\right\}, \quad (99)$$
where
$$S3_{\lambda,k} := \frac{\sum_{\nu=-\infty}^{\infty}|\nu|^{\lambda+k}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}} \quad (100)$$
is uniformly bounded for all $\xi \in (0,1]$.

In [3], the authors also demonstrated their fractional results for the error terms as

Theorem 24 Let $f \in C^m(\mathbb{R})$, $m = \lceil\lambda\rceil$, $\lambda > 0$, with $\|f^{(m)}\|_\infty < \infty$, $0 < \xi \le 1$, and $\delta_k$ be as in (26). Then

i)
$$\|E_{\lambda,P}(f)\|_\infty \le \left[\sum_{k=0}^{r}\frac{r!\,\xi^k}{(r-k)!\,\Gamma(\lambda+k+1)}\,S4_{\lambda,k}\right]\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f,\xi\right),\,\omega_r\left(D^\lambda_{*x}f,\xi\right)\right\}+\|f\|_\infty\,|m_{\xi,P}-1|, \quad (101)$$
where
$$S4_{\lambda,k} := \frac{\sum_{\nu=-\infty}^{\infty}|\nu|^{\lambda+k}\,e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}} \quad (102)$$
is uniformly bounded for all $\xi \in (0,1]$.

ii)
$$\|E_{\lambda,W}(f)\|_\infty \le \left[\sum_{k=0}^{r}\frac{r!\,\xi^k}{(r-k)!\,\Gamma(\lambda+k+1)}\,S5_{\lambda,k}\right]\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f,\xi\right),\,\omega_r\left(D^\lambda_{*x}f,\xi\right)\right\}+\|f\|_\infty\,|m_{\xi,W}-1|, \quad (103)$$
where
$$S5_{\lambda,k} := \frac{\sum_{\nu=-\infty}^{\infty}|\nu|^{\lambda+k}\,e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1} \quad (104)$$
is uniformly bounded for all $\xi \in (0,1]$.


2 Main Results

Here we study important special cases of the generalized discrete operators for discrete probability measures $\mu_\xi$.

Let $f : \mathbb{R} \to \mathbb{C}$ be a Borel measurable complex-valued function such that $f = f_1 + i f_2$, where $f_1, f_2 : \mathbb{R} \to \mathbb{R}$ are real-valued Borel measurable functions and $i = \sqrt{-1}$. Additionally, assume that $f_1, f_2 \in C^n(\mathbb{R})$, $n \in \mathbb{Z}^+$, and $0 < \xi \le 1$.

i) When
$$\mu_\xi(\nu) = \frac{e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}, \quad (105)$$
we define the complex generalized discrete Picard operators as
$$P^*_{r,\xi}(f;x) := \frac{\sum_{\nu=-\infty}^{\infty}\left(\sum_{j=0}^{r}\alpha_j f(x+j\nu)\right)e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}. \quad (106)$$

Here we observe that
$$P^*_{r,\xi}(f;x) = P^*_{r,\xi}(f_1;x) + i\,P^*_{r,\xi}(f_2;x), \quad (107)$$
$$\left|P^*_{r,\xi}(f;x)-f(x)\right| \le \left|P^*_{r,\xi}(f_1;x)-f_1(x)\right|+\left|P^*_{r,\xi}(f_2;x)-f_2(x)\right|, \quad (108)$$
$$\left\|P^*_{r,\xi}(f;x)-f(x)\right\|_\infty \le \left\|P^*_{r,\xi}(f_1;x)-f_1(x)\right\|_\infty+\left\|P^*_{r,\xi}(f_2;x)-f_2(x)\right\|_\infty, \quad (109)$$
and, for $p \ge 1$,
$$\left\|P^*_{r,\xi}(f;x)-f(x)\right\|_p \le \left\|P^*_{r,\xi}(f_1;x)-f_1(x)\right\|_p+\left\|P^*_{r,\xi}(f_2;x)-f_2(x)\right\|_p. \quad (110)$$
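The operator (106) and the decomposition (107) can be sketched numerically. In the code below (not from the paper) the coefficients $\alpha_j$ are defined earlier in the paper and are treated as given inputs; the numeric values supplied here, the truncation bound `N`, and the test function are illustrative assumptions only.

```python
import cmath
import math

def picard_op(f, x, xi, alphas, N=200):
    """Truncated evaluation of the complex discrete Picard operator (106):
    sum over nu of (sum_j alpha_j f(x + j*nu)) e^{-|nu|/xi},
    normalized by sum over nu of e^{-|nu|/xi}."""
    w = [math.exp(-abs(v) / xi) for v in range(-N, N + 1)]
    num = sum(wv * sum(a * f(x + j * v) for j, a in enumerate(alphas))
              for v, wv in zip(range(-N, N + 1), w))
    return num / sum(w)

f  = lambda t: cmath.exp(1j * t)      # f = f1 + i f2 with f1 = cos, f2 = sin
f1 = lambda t: math.cos(t)
f2 = lambda t: math.sin(t)
alphas = (0.5, 0.25, 0.25)            # placeholder alpha_j (assumption)
x, xi = 0.3, 0.5
lhs = picard_op(f, x, xi, alphas)
# Decomposition (107): P*(f) = P*(f1) + i P*(f2)
rhs = picard_op(f1, x, xi, alphas) + 1j * picard_op(f2, x, xi, alphas)
```

Evaluating both sides confirms the splitting (107) to floating-point precision.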

ii) When
$$\mu_\xi(\nu) = \frac{e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}, \quad (111)$$
we define the complex generalized discrete Gauss-Weierstrass operators as
$$W^*_{r,\xi}(f;x) := \frac{\sum_{\nu=-\infty}^{\infty}\left(\sum_{j=0}^{r}\alpha_j f(x+j\nu)\right)e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}. \quad (112)$$

Here we observe that
$$W^*_{r,\xi}(f;x) = W^*_{r,\xi}(f_1;x) + i\,W^*_{r,\xi}(f_2;x), \quad (113)$$
$$\left|W^*_{r,\xi}(f;x)-f(x)\right| \le \left|W^*_{r,\xi}(f_1;x)-f_1(x)\right|+\left|W^*_{r,\xi}(f_2;x)-f_2(x)\right|, \quad (114)$$
$$\left\|W^*_{r,\xi}(f;x)-f(x)\right\|_\infty \le \left\|W^*_{r,\xi}(f_1;x)-f_1(x)\right\|_\infty+\left\|W^*_{r,\xi}(f_2;x)-f_2(x)\right\|_\infty, \quad (115)$$
and, for $p \ge 1$,
$$\left\|W^*_{r,\xi}(f;x)-f(x)\right\|_p \le \left\|W^*_{r,\xi}(f_1;x)-f_1(x)\right\|_p+\left\|W^*_{r,\xi}(f_2;x)-f_2(x)\right\|_p. \quad (116)$$

iii) Let $\beta \in \mathbb{N}$ and $\alpha > \frac{1}{2\beta}$. When
$$\mu_\xi(\nu) = \frac{\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}, \quad (117)$$
we define the complex generalized discrete Poisson-Cauchy operators as
$$Q^*_{r,\xi}(f;x) := \frac{\sum_{\nu=-\infty}^{\infty}\left(\sum_{j=0}^{r}\alpha_j f(x+j\nu)\right)\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}. \quad (118)$$

Here we observe that
$$Q^*_{r,\xi}(f;x) = Q^*_{r,\xi}(f_1;x) + i\,Q^*_{r,\xi}(f_2;x), \quad (119)$$
$$\left|Q^*_{r,\xi}(f;x)-f(x)\right| \le \left|Q^*_{r,\xi}(f_1;x)-f_1(x)\right|+\left|Q^*_{r,\xi}(f_2;x)-f_2(x)\right|, \quad (120)$$
$$\left\|Q^*_{r,\xi}(f;x)-f(x)\right\|_\infty \le \left\|Q^*_{r,\xi}(f_1;x)-f_1(x)\right\|_\infty+\left\|Q^*_{r,\xi}(f_2;x)-f_2(x)\right\|_\infty, \quad (121)$$
and, for $p \ge 1$,
$$\left\|Q^*_{r,\xi}(f;x)-f(x)\right\|_p \le \left\|Q^*_{r,\xi}(f_1;x)-f_1(x)\right\|_p+\left\|Q^*_{r,\xi}(f_2;x)-f_2(x)\right\|_p. \quad (122)$$

Observe that for a constant $c \in \mathbb{C}$ we have
$$P^*_{r,\xi}(c;x) = W^*_{r,\xi}(c;x) = Q^*_{r,\xi}(c;x) = c. \quad (123)$$

We assume that for $x \in \mathbb{R}$ the operators $P^*_{r,\xi}(f_j;x)$, $W^*_{r,\xi}(f_j;x)$, and $Q^*_{r,\xi}(f_j;x)$ are in $\mathbb{R}$, where $j = 1,2$. This is the case when $\|f_j\|_{\infty,\mathbb{R}} < \infty$ for $j = 1,2$.

iv) When
$$\mu(\nu) := \mu_{\xi,P}(\nu) := \frac{e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}, \quad (124)$$
we define the complex generalized discrete non-unitary Picard operators as
$$P_{r,\xi}(f;x) := \frac{\sum_{\nu=-\infty}^{\infty}\left(\sum_{j=0}^{r}\alpha_j f(x+j\nu)\right)e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}. \quad (125)$$
Here $\mu_{\xi,P}(\nu)$ has mass
$$m_{\xi,P} := \frac{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}. \quad (126)$$
We observe that
$$\frac{\mu_{\xi,P}(\nu)}{m_{\xi,P}} = \frac{e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}, \quad (127)$$
which is the probability measure (105) defining the operators $P^*_{r,\xi}$.

v) When

$$\mu(\nu) := \mu_{\xi,W}(\nu) := \frac{e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}, \quad (128)$$
we define the complex generalized discrete non-unitary Gauss-Weierstrass operators as
$$W_{r,\xi}(f;x) := \frac{\sum_{\nu=-\infty}^{\infty}\left(\sum_{j=0}^{r}\alpha_j f(x+j\nu)\right)e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}. \quad (129)$$
Here $\mu_{\xi,W}(\nu)$ has mass
$$m_{\xi,W} := \frac{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}. \quad (130)$$
We observe that
$$\frac{\mu_{\xi,W}(\nu)}{m_{\xi,W}} = \frac{e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}, \quad (131)$$
which is the probability measure (111) defining the operators $W^*_{r,\xi}$.
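The masses $m_{\xi,P}$ of (126) and $m_{\xi,W}$ of (130) both approach $1$ as $\xi \to 0^+$, which is what makes the factors $|m_{\xi,P}-1|$ and $|m_{\xi,W}-1|$ in the error estimates vanish in the limit. The sketch below (not from the paper) evaluates both masses with truncated series; the truncation bound and the $\xi$ grid are illustrative assumptions.

```python
import math

def picard_mass(xi, N=2000):
    # m_{xi,P} of (126): sum_nu e^{-|nu|/xi} over (1 + 2*xi*e^{-1/xi}), truncated
    s = sum(math.exp(-abs(v) / xi) for v in range(-N, N + 1))
    return s / (1 + 2 * xi * math.exp(-1 / xi))

def weierstrass_mass(xi, N=2000):
    # m_{xi,W} of (130): sum_nu e^{-nu^2/xi} over sqrt(pi*xi)(1 - erf(1/sqrt(xi))) + 1
    s = sum(math.exp(-v * v / xi) for v in range(-N, N + 1))
    return s / (math.sqrt(math.pi * xi) * (1 - math.erf(1 / math.sqrt(xi))) + 1)

m_P = [picard_mass(xi) for xi in (1.0, 0.1, 0.02)]
m_W = [weierstrass_mass(xi) for xi in (1.0, 0.1, 0.02)]
```

Numerically, both masses deviate from $1$ at $\xi = 1$ and become indistinguishable from $1$ for small $\xi$.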

We state our first result as follows.

Theorem 25 Let $f : \mathbb{R} \to \mathbb{C}$ be such that $f = f_1 + i f_2$. Here $f_j \in C^n(\mathbb{R})$, $f_j^{(n)} \in C_u(\mathbb{R})$, $j = 1,2$, and $n \in \mathbb{N}$. Then

i)
$$\left\|P^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,c_{k,\xi}\right\|_{\infty,x} \le \left(\frac{\omega_r(f_1^{(n)},\xi)+\omega_r(f_2^{(n)},\xi)}{n!}\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}|\nu|^n\left(1+\frac{|\nu|}{\xi}\right)^r e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}\right); \quad (132)$$

ii)
$$\left\|W^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,p_{k,\xi}\right\|_{\infty,x} \le \left(\frac{\omega_r(f_1^{(n)},\xi)+\omega_r(f_2^{(n)},\xi)}{n!}\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}|\nu|^n\left(1+\frac{|\nu|}{\xi}\right)^r e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}\right); \quad (133)$$

iii) for $\beta \in \mathbb{N}$ and $\alpha > \frac{n+r+1}{2\beta}$,
$$\left\|Q^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,q_{k,\xi}\right\|_{\infty,x} \le \left(\frac{\omega_r(f_1^{(n)},\xi)+\omega_r(f_2^{(n)},\xi)}{n!}\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}|\nu|^n\left(1+\frac{|\nu|}{\xi}\right)^r\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}\right). \quad (134)$$

Proof. By Theorems 1, 14, 15.

For the case $n = 0$, we give the following result.


Corollary 26 Let $f : \mathbb{R} \to \mathbb{C}$ be such that $f = f_1 + i f_2$. Here $f_j \in C_u(\mathbb{R})$ for $j = 1,2$. Then

i)
$$\left\|P^*_{r,\xi}(f;x)-f(x)\right\|_{\infty,x} \le \left(\omega_r(f_1,\xi)+\omega_r(f_2,\xi)\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^r e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}\right); \quad (135)$$

ii)
$$\left\|W^*_{r,\xi}(f;x)-f(x)\right\|_{\infty,x} \le \left(\omega_r(f_1,\xi)+\omega_r(f_2,\xi)\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^r e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}\right); \quad (136)$$

iii) for $\beta \in \mathbb{N}$ and $\alpha > \frac{r+1}{2\beta}$,
$$\left\|Q^*_{r,\xi}(f;x)-f(x)\right\|_{\infty,x} \le \left(\omega_r(f_1,\xi)+\omega_r(f_2,\xi)\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^r\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}\right). \quad (137)$$

Proof. By Corollary 2 and Theorems 14, 15.
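The convergence behavior behind Corollary 26 i) can be observed numerically: as $\xi \to 0^+$ the kernel concentrates at $\nu = 0$ and the sup-norm error shrinks. The sketch below (not from the paper) uses placeholder coefficients $\alpha_j$ summing to $1$ (an assumption; the paper fixes them earlier) and measures the error on a grid of points for $f = \cos$.

```python
import math

def picard_op(f, x, xi, alphas, N=400):
    # truncated complex/real discrete Picard operator (106); alphas are placeholders
    w = [math.exp(-abs(v) / xi) for v in range(-N, N + 1)]
    num = sum(wv * sum(a * f(x + j * v) for j, a in enumerate(alphas))
              for v, wv in zip(range(-N, N + 1), w))
    return num / sum(w)

def sup_error(xi, alphas=(0.5, 0.25, 0.25), pts=21):
    # grid approximation of the sup-norm error in the spirit of (135), with f = cos
    xs = [2 * math.pi * k / (pts - 1) for k in range(pts)]
    return max(abs(picard_op(math.cos, x, xi, alphas) - math.cos(x)) for x in xs)

e_big, e_small = sup_error(0.5), sup_error(0.05)
```

Shrinking $\xi$ by an order of magnitude drops the measured error by many orders of magnitude, consistent with the R.H.S. of (135) tending to zero.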

In [4], the authors gave

Theorem 27 Let $f \in C^{n-1}(\mathbb{R})$ be such that $f^{(n)}$ exists, $n, r \in \mathbb{N}$, $0 < \xi \le 1$. Additionally, suppose that for each $x \in \mathbb{R}$ the function $f^{(\tilde{j})}(x+j\nu) \in L_1(\mathbb{R},\mu_\xi)$ as a function of $\nu$, for all $\tilde{j} = 0,1,\ldots,n-1$, $j = 1,\ldots,r$. Assume that there exist $\eta_{\tilde{j},j} \ge 0$, $\tilde{j} = 1,\ldots,n$, $j = 1,\ldots,r$, with $\eta_{\tilde{j},j} \in L_1(\mathbb{R},\mu_\xi)$ such that for each $x \in \mathbb{R}$ we have
$$\left|f^{(\tilde{j})}(x+j\nu)\right| \le \eta_{\tilde{j},j}(\nu), \quad (138)$$
for almost all $\nu \in \mathbb{R}$, all $\tilde{j} = 1,\ldots,n$, $j = 1,\ldots,r$. Then $f^{(\tilde{j})}(x+j\nu)$ defines a $\mu_\xi$-integrable function of $\nu$ for each $x \in \mathbb{R}$, all $\tilde{j} = 1,\ldots,n$, $j = 1,\ldots,r$.

i) When
$$\mu_\xi(\nu) = \frac{e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}},$$
we get
$$\left(P^*_{r,\xi}(f;x)\right)^{(\tilde{j})} = P^*_{r,\xi}\left(f^{(\tilde{j})};x\right), \quad (139)$$
for all $x \in \mathbb{R}$ and all $\tilde{j} = 1,\ldots,n$.

ii) When
$$\mu_\xi(\nu) = \frac{e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}},$$
we have
$$\left(W^*_{r,\xi}(f;x)\right)^{(\tilde{j})} = W^*_{r,\xi}\left(f^{(\tilde{j})};x\right), \quad (140)$$
for all $x \in \mathbb{R}$ and all $\tilde{j} = 1,\ldots,n$.

iii) Let $\beta \in \mathbb{N}$ and $\alpha > \frac{1}{2\beta}$. When
$$\mu_\xi(\nu) = \frac{\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}},$$
we obtain
$$\left(Q^*_{r,\xi}(f;x)\right)^{(\tilde{j})} = Q^*_{r,\xi}\left(f^{(\tilde{j})};x\right), \quad (141)$$
for all $x \in \mathbb{R}$ and all $\tilde{j} = 1,\ldots,n$.

Now, we present the following simultaneous results for the operators $P^*_{r,\xi}$, $W^*_{r,\xi}$, and $Q^*_{r,\xi}$.

Theorem 28 Let $f : \mathbb{R} \to \mathbb{C}$ be such that $f = f_1 + i f_2$ and $0 < \xi \le 1$. Here $f_j \in C^{n+\theta}(\mathbb{R})$, $n \in \mathbb{N}$, $\theta \in \mathbb{Z}^+$, $j = 1,2$, and $f_j^{(n+\tilde{j})} \in C_u(\mathbb{R})$, where $\tilde{j} = 0,1,\ldots,\theta$. We consider the assumptions of Theorem 27 valid for $n = \theta$ there. Then

i)
$$\left\|\left(P^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)-\sum_{k=1}^{n}\frac{f^{(\tilde{j}+k)}(x)}{k!}\,\delta_k\,c_{k,\xi}\right\|_{\infty,x} \le \left(\frac{\omega_r(f_1^{(n+\tilde{j})},\xi)+\omega_r(f_2^{(n+\tilde{j})},\xi)}{n!}\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}|\nu|^n\left(1+\frac{|\nu|}{\xi}\right)^r e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}\right); \quad (142)$$

ii)
$$\left\|\left(W^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)-\sum_{k=1}^{n}\frac{f^{(\tilde{j}+k)}(x)}{k!}\,\delta_k\,p_{k,\xi}\right\|_{\infty,x} \le \left(\frac{\omega_r(f_1^{(n+\tilde{j})},\xi)+\omega_r(f_2^{(n+\tilde{j})},\xi)}{n!}\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}|\nu|^n\left(1+\frac{|\nu|}{\xi}\right)^r e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}\right); \quad (143)$$

iii)
$$\left\|\left(Q^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)-\sum_{k=1}^{n}\frac{f^{(\tilde{j}+k)}(x)}{k!}\,\delta_k\,q_{k,\xi}\right\|_{\infty,x} \le \left(\frac{\omega_r(f_1^{(n+\tilde{j})},\xi)+\omega_r(f_2^{(n+\tilde{j})},\xi)}{n!}\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}|\nu|^n\left(1+\frac{|\nu|}{\xi}\right)^r\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}\right), \quad (144)$$
where $\alpha > \frac{n+r+1}{2\beta}$, $\beta \in \mathbb{N}$.

Proof. By Theorems 25, 27.

Next, we state our $L_p$ results for the operators $P^*_{r,\xi}$, $W^*_{r,\xi}$, and $Q^*_{r,\xi}$. We start with

Theorem 29 Let $f : \mathbb{R} \to \mathbb{C}$ be such that $f = f_1 + i f_2$ and $0 < \xi \le 1$.
i) Here $f_j \in C^n(\mathbb{R})$ with $f_j^{(n)} \in L_p(\mathbb{R})$, $n \in \mathbb{N}$, $j = 1,2$, and $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$. Then
$$\left\|P^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,c_{k,\xi}\right\|_p \le \xi^{1/p}\left(\frac{\omega_r(f_1^{(n)},\xi)_p+\omega_r(f_2^{(n)},\xi)_p}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\right)\left(M_{p,\xi}\right)^{1/p}, \quad (145)$$
where
$$M_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{rp+1}-1\right)|\nu|^{np-1}\,e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}, \quad (146)$$
which is uniformly bounded for all $\xi \in (0,1]$.

ii) When $p=1$, let $f_j \in C^n(\mathbb{R})$, $f_j^{(n)} \in L_1(\mathbb{R})$, $j = 1,2$, and $n \in \mathbb{N}-\{1\}$. Then
$$\left\|P^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,c_{k,\xi}\right\|_1 \le \xi\left(\frac{\omega_r(f_1^{(n)},\xi)_1+\omega_r(f_2^{(n)},\xi)_1}{(n-1)!\,(r+1)}\right)M_{1,\xi} \quad (147)$$
holds, where $M_{1,\xi}$ is defined as in (146).

iii) When $n=0$, let $f_j \in (C(\mathbb{R}) \cap L_p(\mathbb{R}))$, $j = 1,2$, and $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$. Then
$$\left\|P^*_{r,\xi}(f;x)-f(x)\right\|_p \le \left(\omega_r(f_1,\xi)_p+\omega_r(f_2,\xi)_p\right)\left(\widetilde{M}_{p,\xi}\right)^{1/p}, \quad (148)$$
where
$$\widetilde{M}_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}e^{-|\nu|/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}}, \quad (149)$$
which is uniformly bounded for all $\xi \in (0,1]$.

iv) When $n=0$ and $p=1$, let $f_j \in (C(\mathbb{R}) \cap L_1(\mathbb{R}))$, $j = 1,2$. Then the inequality
$$\left\|P^*_{r,\xi}(f;x)-f(x)\right\|_1 \le \left(\omega_r(f_1,\xi)_1+\omega_r(f_2,\xi)_1\right)\widetilde{M}_{1,\xi} \quad (150)$$
holds, where $\widetilde{M}_{1,\xi}$ is defined as in (149).

Proof. By Theorems 5, 6, 18, and Propositions 7, 8.

For the operators $W^*_{r,\xi}$, we obtain

Theorem 30 Let $f : \mathbb{R} \to \mathbb{C}$ be such that $f = f_1 + i f_2$ and $0 < \xi \le 1$.
i) Here $f_j \in C^n(\mathbb{R})$ with $f_j^{(n)} \in L_p(\mathbb{R})$, $n \in \mathbb{N}$, $j = 1,2$, $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$. Then
$$\left\|W^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,p_{k,\xi}\right\|_p \le \xi^{1/p}\left(\frac{\omega_r(f_1^{(n)},\xi)_p+\omega_r(f_2^{(n)},\xi)_p}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\right)\left(N_{p,\xi}\right)^{1/p}, \quad (151)$$
where
$$N_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{rp+1}-1\right)|\nu|^{np-1}\,e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}, \quad (152)$$
which is uniformly bounded for all $\xi \in (0,1]$.

ii) When $p=1$, let $f_j \in C^n(\mathbb{R})$, $f_j^{(n)} \in L_1(\mathbb{R})$, $j = 1,2$, and $n \in \mathbb{N}-\{1\}$. Then
$$\left\|W^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,p_{k,\xi}\right\|_1 \le \xi\left(\frac{\omega_r(f_1^{(n)},\xi)_1+\omega_r(f_2^{(n)},\xi)_1}{(n-1)!\,(r+1)}\right)N_{1,\xi} \quad (153)$$
holds, where $N_{1,\xi}$ is defined as in (152).

iii) When $n=0$, let $f_j \in (C(\mathbb{R}) \cap L_p(\mathbb{R}))$, $j = 1,2$, and $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$. Then
$$\left\|W^*_{r,\xi}(f;x)-f(x)\right\|_p \le \left(\omega_r(f_1,\xi)_p+\omega_r(f_2,\xi)_p\right)\left(\widetilde{N}_{p,\xi}\right)^{1/p}, \quad (154)$$
where
$$\widetilde{N}_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}e^{-\nu^2/\xi}}{\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}}, \quad (155)$$
which is uniformly bounded for all $\xi \in (0,1]$.

iv) When $n=0$ and $p=1$, let $f_j \in (C(\mathbb{R}) \cap L_1(\mathbb{R}))$, $j = 1,2$. Then the inequality
$$\left\|W^*_{r,\xi}(f;x)-f(x)\right\|_1 \le \left(\omega_r(f_1,\xi)_1+\omega_r(f_2,\xi)_1\right)\widetilde{N}_{1,\xi} \quad (156)$$
holds, where $\widetilde{N}_{1,\xi}$ is defined as in (155).

Proof. By Theorems 5, 6, 19, and Propositions 7, 8.

For the operators $Q^*_{r,\xi}$, we get

Theorem 31 Let $f : \mathbb{R} \to \mathbb{C}$ be such that $f = f_1 + i f_2$ and $0 < \xi \le 1$.
i) Let $f_j \in C^n(\mathbb{R})$ with $f_j^{(n)} \in L_p(\mathbb{R})$, $n \in \mathbb{N}$, $j = 1,2$, $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $\alpha > \frac{p(r+n)+1}{2\beta}$, $\beta \in \mathbb{N}$. Then
$$\left\|Q^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,q_{k,\xi}\right\|_p \le \xi^{1/p}\left(\frac{\omega_r(f_1^{(n)},\xi)_p+\omega_r(f_2^{(n)},\xi)_p}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\right)\left(S_{p,\xi}\right)^{1/p}, \quad (157)$$
where
$$S_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{rp+1}-1\right)|\nu|^{np-1}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}} \quad (158)$$
is uniformly bounded for all $\xi \in (0,1]$.

ii) When $p=1$, let $f_j \in C^n(\mathbb{R})$, $f_j^{(n)} \in L_1(\mathbb{R})$, $j = 1,2$, $n \in \mathbb{N}-\{1\}$, and $\alpha > \frac{r+n+1}{2\beta}$. Then
$$\left\|Q^*_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,q_{k,\xi}\right\|_1 \le \xi\left(\frac{\omega_r(f_1^{(n)},\xi)_1+\omega_r(f_2^{(n)},\xi)_1}{(n-1)!\,(r+1)}\right)S_{1,\xi} \quad (159)$$
holds, where $S_{1,\xi}$ is defined as in (158).

iii) When $n=0$, let $f_j \in (C(\mathbb{R}) \cap L_p(\mathbb{R}))$, $j = 1,2$, $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, and $\alpha > \frac{p(r+2)+1}{2\beta}$. Then
$$\left\|Q^*_{r,\xi}(f;x)-f(x)\right\|_p \le \left(\omega_r(f_1,\xi)_p+\omega_r(f_2,\xi)_p\right)\left(\widetilde{S}_{p,\xi}\right)^{1/p}, \quad (160)$$
where
$$\widetilde{S}_{p,\xi} := \frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}{\sum_{\nu=-\infty}^{\infty}\left(\nu^{2\beta}+\xi^{2\beta}\right)^{-\alpha}}, \quad (161)$$
which is uniformly bounded for all $\xi \in (0,1]$.

iv) When $n=0$ and $p=1$, let $f_j \in (C(\mathbb{R}) \cap L_1(\mathbb{R}))$, $j = 1,2$, $\alpha > \frac{r+3}{2\beta}$. The inequality
$$\left\|Q^*_{r,\xi}(f;x)-f(x)\right\|_1 \le \left(\omega_r(f_1,\xi)_1+\omega_r(f_2,\xi)_1\right)\widetilde{S}_{1,\xi} \quad (162)$$
holds, where $\widetilde{S}_{1,\xi}$ is defined as in (161).

Proof. By Theorems 5, 6, 20, and Propositions 7, 8.

Now, we give our simultaneous results in the $L_p$ norm. For the case $p > 1$, we have

Theorem 32 Here $f : \mathbb{R} \to \mathbb{C}$ is such that $f = f_1 + i f_2$ and $0 < \xi \le 1$. Let $f_j \in C^{n+\theta}(\mathbb{R})$, with $f_j^{(n+\tilde{j})} \in L_p(\mathbb{R})$, $n \in \mathbb{N}$, $\tilde{j} = 0,1,\ldots,\theta$, $\theta \in \mathbb{Z}^+$. Let $p,q>1$ with $\frac{1}{p}+\frac{1}{q}=1$. We consider the assumptions of Theorem 27 as valid for $n = \theta$ there. Then

i)
$$\left\|\left(P^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)-\sum_{k=1}^{n}\frac{f^{(\tilde{j}+k)}(x)}{k!}\,\delta_k\,c_{k,\xi}\right\|_{p,x} \le \xi^{1/p}\left(\frac{\omega_r(f_1^{(n+\tilde{j})},\xi)_p+\omega_r(f_2^{(n+\tilde{j})},\xi)_p}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\right)\left(M_{p,\xi}\right)^{1/p}; \quad (163)$$

ii)
$$\left\|\left(W^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)-\sum_{k=1}^{n}\frac{f^{(\tilde{j}+k)}(x)}{k!}\,\delta_k\,p_{k,\xi}\right\|_{p,x} \le \xi^{1/p}\left(\frac{\omega_r(f_1^{(n+\tilde{j})},\xi)_p+\omega_r(f_2^{(n+\tilde{j})},\xi)_p}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\right)\left(N_{p,\xi}\right)^{1/p}; \quad (164)$$

iii) for $\alpha > \frac{p(n+r)+1}{2\beta}$, $\beta \in \mathbb{N}$,
$$\left\|\left(Q^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)-\sum_{k=1}^{n}\frac{f^{(\tilde{j}+k)}(x)}{k!}\,\delta_k\,q_{k,\xi}\right\|_{p,x} \le \xi^{1/p}\left(\frac{\omega_r(f_1^{(n+\tilde{j})},\xi)_p+\omega_r(f_2^{(n+\tilde{j})},\xi)_p}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\right)\left(S_{p,\xi}\right)^{1/p}. \quad (165)$$

Proof. By Theorems 9, 27, 29-31.

Now, we give our results for the special case $n = 0$.

Proposition 33 Here $f : \mathbb{R} \to \mathbb{C}$ is such that $f = f_1 + i f_2$ and $0 < \xi \le 1$. Let $f_j^{(\tilde{j})} \in (C(\mathbb{R}) \cap L_p(\mathbb{R}))$, $j = 1,2$, $\tilde{j} = 0,1,\ldots,\theta$, $\theta \in \mathbb{Z}^+$, $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$. We consider the assumptions of Theorem 27 as valid for $n = \theta$ there. Then for all $\tilde{j} = 0,1,\ldots,\theta$ we have

i)
$$\left\|\left(P^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)\right\|_{p,x} \le \left(\omega_r(f_1^{(\tilde{j})},\xi)_p+\omega_r(f_2^{(\tilde{j})},\xi)_p\right)\left(\widetilde{M}_{p,\xi}\right)^{1/p}; \quad (166)$$

ii)
$$\left\|\left(W^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)\right\|_{p,x} \le \left(\omega_r(f_1^{(\tilde{j})},\xi)_p+\omega_r(f_2^{(\tilde{j})},\xi)_p\right)\left(\widetilde{N}_{p,\xi}\right)^{1/p}; \quad (167)$$

iii) for $\alpha > \frac{p(r+2)+1}{2\beta}$, $\beta \in \mathbb{N}$,
$$\left\|\left(Q^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)\right\|_{p,x} \le \left(\omega_r(f_1^{(\tilde{j})},\xi)_p+\omega_r(f_2^{(\tilde{j})},\xi)_p\right)\left(\widetilde{S}_{p,\xi}\right)^{1/p}. \quad (168)$$

Proof. By Theorems 27, 29-31 and Proposition 10.

For the special case $p = 1$, we obtain

Theorem 34 Here $f : \mathbb{R} \to \mathbb{C}$ is such that $f = f_1 + i f_2$ and $0 < \xi \le 1$. Let $f_j \in C^{n+\theta}(\mathbb{R})$, with $f_j^{(n+\tilde{j})} \in L_1(\mathbb{R})$, $n \in \mathbb{N}-\{1\}$, $\tilde{j} = 0,1,\ldots,\theta$, $\theta \in \mathbb{Z}^+$. We consider the assumptions of Theorem 27 as valid for $n = \theta$ there. Then for all $\tilde{j} = 0,1,\ldots,\theta$ we have

i)
$$\left\|\left(P^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)-\sum_{k=1}^{n}\frac{f^{(\tilde{j}+k)}(x)}{k!}\,\delta_k\,c_{k,\xi}\right\|_{1,x} \le \xi\left(\frac{\omega_r(f_1^{(n+\tilde{j})},\xi)_1+\omega_r(f_2^{(n+\tilde{j})},\xi)_1}{(n-1)!\,(r+1)}\right)M_{1,\xi}; \quad (169)$$

ii)
$$\left\|\left(W^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)-\sum_{k=1}^{n}\frac{f^{(\tilde{j}+k)}(x)}{k!}\,\delta_k\,p_{k,\xi}\right\|_{1,x} \le \xi\left(\frac{\omega_r(f_1^{(n+\tilde{j})},\xi)_1+\omega_r(f_2^{(n+\tilde{j})},\xi)_1}{(n-1)!\,(r+1)}\right)N_{1,\xi}; \quad (170)$$

iii) for $\alpha > \frac{n+r+1}{2\beta}$, $\beta \in \mathbb{N}$,
$$\left\|\left(Q^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)-\sum_{k=1}^{n}\frac{f^{(\tilde{j}+k)}(x)}{k!}\,\delta_k\,q_{k,\xi}\right\|_{1,x} \le \xi\left(\frac{\omega_r(f_1^{(n+\tilde{j})},\xi)_1+\omega_r(f_2^{(n+\tilde{j})},\xi)_1}{(n-1)!\,(r+1)}\right)S_{1,\xi}. \quad (171)$$

Proof. By Theorems 11, 27, 29-31.

For $p = 1$ and $n = 0$, we give

Proposition 35 Here $f : \mathbb{R} \to \mathbb{C}$ is such that $f = f_1 + i f_2$ and $0 < \xi \le 1$. Let $f_j^{(\tilde{j})} \in (C(\mathbb{R}) \cap L_1(\mathbb{R}))$, $j = 1,2$, $\tilde{j} = 0,1,\ldots,\theta$, $\theta \in \mathbb{Z}^+$. We consider the assumptions of Theorem 27 as valid for $n = \theta$ there. Then for all $\tilde{j} = 0,1,\ldots,\theta$ we have

i)
$$\left\|\left(P^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)\right\|_{1,x} \le \left(\omega_r(f_1^{(\tilde{j})},\xi)_1+\omega_r(f_2^{(\tilde{j})},\xi)_1\right)\widetilde{M}_{1,\xi}; \quad (172)$$

ii)
$$\left\|\left(W^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)\right\|_{1,x} \le \left(\omega_r(f_1^{(\tilde{j})},\xi)_1+\omega_r(f_2^{(\tilde{j})},\xi)_1\right)\widetilde{N}_{1,\xi}; \quad (173)$$

iii) for $\alpha > \frac{r+3}{2\beta}$, $\beta \in \mathbb{N}$,
$$\left\|\left(Q^*_{r,\xi}(f;x)\right)^{(\tilde{j})}-f^{(\tilde{j})}(x)\right\|_{1,x} \le \left(\omega_r(f_1^{(\tilde{j})},\xi)_1+\omega_r(f_2^{(\tilde{j})},\xi)_1\right)\widetilde{S}_{1,\xi}. \quad (174)$$

Proof. By Theorems 27, 29-31 and Proposition 12.

Next, we give the fractional approximation results for the operators $P^*_{r,\xi}$, $W^*_{r,\xi}$, and $Q^*_{r,\xi}$. We start with

Theorem 36 Here $f : \mathbb{R} \to \mathbb{C}$ is such that $f = f_1 + i f_2$ and $0 < \xi \le 1$. Let $f_j \in C^m(\mathbb{R})$, $m = \lceil\lambda\rceil$, $\lambda > 0$, with $\|f_j^{(m)}\|_\infty < \infty$, $j = 1,2$. Then

i)
$$\left\|P^*_{r,\xi}(f)-f-\sum_{k=1}^{m-1}\frac{f^{(k)}\,\delta_k\,c_{k,\xi}}{k!}\right\|_\infty \le \left[\sum_{k=0}^{r}\frac{r!\,\xi^k}{(r-k)!\,\Gamma(\lambda+k+1)}\,S1_{\lambda,k}\right]\left(\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f_1,\xi\right),\,\omega_r\left(D^\lambda_{*x}f_1,\xi\right)\right\}+\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f_2,\xi\right),\,\omega_r\left(D^\lambda_{*x}f_2,\xi\right)\right\}\right), \quad (175)$$
where $S1_{\lambda,k}$ is defined as in (96).

ii)
$$\left\|W^*_{r,\xi}(f)-f-\sum_{k=1}^{m-1}\frac{f^{(k)}\,\delta_k\,p_{k,\xi}}{k!}\right\|_\infty \le \left[\sum_{k=0}^{r}\frac{r!\,\xi^k}{(r-k)!\,\Gamma(\lambda+k+1)}\,S2_{\lambda,k}\right]\left(\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f_1,\xi\right),\,\omega_r\left(D^\lambda_{*x}f_1,\xi\right)\right\}+\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f_2,\xi\right),\,\omega_r\left(D^\lambda_{*x}f_2,\xi\right)\right\}\right), \quad (176)$$
where $S2_{\lambda,k}$ is defined as in (98).

iii) For $\alpha > \frac{\lambda+r+1}{2\beta}$, $\beta \in \mathbb{N}$,
$$\left\|Q^*_{r,\xi}(f)-f-\sum_{k=1}^{m-1}\frac{f^{(k)}\,\delta_k\,q_{k,\xi}}{k!}\right\|_\infty \le \left[\sum_{k=0}^{r}\frac{r!\,\xi^k}{(r-k)!\,\Gamma(\lambda+k+1)}\,S3_{\lambda,k}\right]\left(\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f_1,\xi\right),\,\omega_r\left(D^\lambda_{*x}f_1,\xi\right)\right\}+\sup_{x\in\mathbb{R}}\max\left\{\omega_r\left(D^\lambda_{x-}f_2,\xi\right),\,\omega_r\left(D^\lambda_{*x}f_2,\xi\right)\right\}\right), \quad (177)$$
where $S3_{\lambda,k}$ is defined as in (100).

Proof. By Theorems 13, 23.

Let $f : \mathbb{R} \to \mathbb{C}$ be such that $f = f_1 + i f_2$. We define the following error quantities:
$$E_{0,P}(f,x) := P_{r,\xi}(f;x)-f(x) = \frac{\sum_{\nu=-\infty}^{\infty}\left(\sum_{j=0}^{r}\alpha_j f(x+j\nu)\right)e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}-f(x), \quad (178)$$
$$E_{0,W}(f,x) := W_{r,\xi}(f;x)-f(x) = \frac{\sum_{\nu=-\infty}^{\infty}\left(\sum_{j=0}^{r}\alpha_j f(x+j\nu)\right)e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}-f(x). \quad (179)$$

Additionally, we introduce the errors ($n \in \mathbb{N}$):
$$E_{n,P}(f,x) := P_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,\frac{\sum_{\nu=-\infty}^{\infty}\nu^k e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}} \quad (180)$$
and
$$E_{n,W}(f,x) := W_{r,\xi}(f;x)-f(x)-\sum_{k=1}^{n}\frac{f^{(k)}(x)}{k!}\,\delta_k\,\frac{\sum_{\nu=-\infty}^{\infty}\nu^k e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}. \quad (181)$$
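The error quantity (178) and the triangle-inequality splitting it obeys can be sketched numerically. In the code below (not from the paper) the coefficients $\alpha_j$ are placeholders for the values fixed earlier in the paper, and the truncation bound `N` and test functions are illustrative assumptions.

```python
import cmath
import math

def e0_picard(f, x, xi, alphas, N=400):
    """E_{0,P}(f,x) of (178): non-unitary discrete Picard operator minus f(x),
    with the series over nu truncated at |nu| <= N."""
    den = 1 + 2 * xi * math.exp(-1 / xi)
    num = sum(math.exp(-abs(v) / xi) *
              sum(a * f(x + j * v) for j, a in enumerate(alphas))
              for v in range(-N, N + 1))
    return num / den - f(x)

f  = lambda t: cmath.exp(1j * t)      # f = f1 + i f2
f1 = lambda t: math.cos(t)
f2 = lambda t: math.sin(t)
alphas, x, xi = (0.5, 0.25, 0.25), 0.7, 0.3
lhs = abs(e0_picard(f, x, xi, alphas))
# The split |E_{0,P}(f,x)| <= |E_{0,P}(f1,x)| + |E_{0,P}(f2,x)| (cf. (182) below)
rhs = abs(e0_picard(f1, x, xi, alphas)) + abs(e0_picard(f2, x, xi, alphas))
```

Since $E_{0,P}(f,x) = E_{0,P}(f_1,x) + i\,E_{0,P}(f_2,x)$, the modulus of the left side is bounded by the sum on the right, which the computation confirms.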

We observe that
$$|E_{0,P}(f,x)| \le |E_{0,P}(f_1,x)|+|E_{0,P}(f_2,x)|, \quad (182)$$
$$|E_{0,W}(f,x)| \le |E_{0,W}(f_1,x)|+|E_{0,W}(f_2,x)|. \quad (183)$$

Next, we give the following results for the errors $E_{0,P}(f,x)$ and $E_{0,W}(f,x)$.

Corollary 37 Here $f : \mathbb{R} \to \mathbb{C}$ is such that $f = f_1 + i f_2$. Let $f_j \in C_u(\mathbb{R})$ for $j = 1,2$. Then

i)
$$|E_{0,P}(f,x)| \le \left(\frac{\sum_{\nu=-\infty}^{\infty}\left(\omega_r(f_1,|\nu|)+\omega_r(f_2,|\nu|)\right)e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right)+\left(|f_1(x)|+|f_2(x)|\right)|m_{\xi,P}-1|; \quad (184)$$

ii)
$$|E_{0,W}(f,x)| \le \left(\frac{\sum_{\nu=-\infty}^{\infty}\left(\omega_r(f_1,|\nu|)+\omega_r(f_2,|\nu|)\right)e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right)+\left(|f_1(x)|+|f_2(x)|\right)|m_{\xi,W}-1|. \quad (185)$$

Proof. By Corollary 16, (182), and (183).

We also observe that
$$\|E_{n,P}(f,x)\|_\infty \le \|E_{n,P}(f_1,x)\|_\infty+\|E_{n,P}(f_2,x)\|_\infty \quad (186)$$
and
$$\|E_{n,W}(f,x)\|_\infty \le \|E_{n,W}(f_1,x)\|_\infty+\|E_{n,W}(f_2,x)\|_\infty. \quad (187)$$

For $E_{n,P}$ and $E_{n,W}$, we present

Theorem 38 Here $f : \mathbb{R} \to \mathbb{C}$ is such that $f = f_1 + i f_2$. Let $f_j \in C^n(\mathbb{R})$ with $f_j^{(n)} \in C_u(\mathbb{R})$, $j = 1,2$, $n \in \mathbb{N}$, and $\|f_j\|_{\infty,\mathbb{R}} < \infty$. Then

i)
$$\|E_{n,P}(f,x)\|_{\infty,x} \le \left(\frac{\omega_r(f_1^{(n)},\xi)+\omega_r(f_2^{(n)},\xi)}{n!}\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}|\nu|^n\left(1+\frac{|\nu|}{\xi}\right)^r e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right)+\left(\|f_1\|_{\infty,\mathbb{R}}+\|f_2\|_{\infty,\mathbb{R}}\right)|m_{\xi,P}-1|; \quad (188)$$

ii)
$$\|E_{n,W}(f,x)\|_{\infty,x} \le \left(\frac{\omega_r(f_1^{(n)},\xi)+\omega_r(f_2^{(n)},\xi)}{n!}\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}|\nu|^n\left(1+\frac{|\nu|}{\xi}\right)^r e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right)+\left(\|f_1\|_{\infty,\mathbb{R}}+\|f_2\|_{\infty,\mathbb{R}}\right)|m_{\xi,W}-1|. \quad (189)$$

In the above inequalities (188)-(189), the ratios of sums on their R.H.S. are uniformly bounded with respect to $\xi \in (0,1]$.

Proof. By Theorem 17, (186), and (187).

Furthermore, for $p \ge 1$, we obtain that
$$\|E_{n,P}(f,x)\|_p \le \|E_{n,P}(f_1,x)\|_p+\|E_{n,P}(f_2,x)\|_p \quad (190)$$
and
$$\|E_{n,W}(f,x)\|_p \le \|E_{n,W}(f_1,x)\|_p+\|E_{n,W}(f_2,x)\|_p. \quad (191)$$

Next, we state our $L_p$ approximation results for the errors $E_{0,P}$, $E_{0,W}$, $E_{n,P}$, and $E_{n,W}$. We start with

Theorem 39 i) Let $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $n \in \mathbb{N}$ such that $np \ne 1$, $f_j \in L_p(\mathbb{R})$, $j = 1,2$, and the rest as above in this section. Then
$$\|E_{n,P}(f,x)\|_p \le \left(\omega_r(f_1^{(n)},\xi)_p+\omega_r(f_2^{(n)},\xi)_p\right)\left(\frac{\xi^{1/p}\left(\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}\right)^{1/q}}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\right)\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{rp+1}-1\right)|\nu|^{np-1}e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right]^{1/p}+\left(\|f_1(x)\|_p+\|f_2(x)\|_p\right)|m_{\xi,P}-1| \quad (192)$$
holds.

ii) When $p=1$, let $f_j \in C^n(\mathbb{R})$, $f_j \in L_1(\mathbb{R})$, $f_j^{(n)} \in L_1(\mathbb{R})$, and $n \in \mathbb{N}-\{1\}$. Then
$$\|E_{n,P}(f,x)\|_1 \le \xi\left(\frac{\omega_r(f_1^{(n)},\xi)_1+\omega_r(f_2^{(n)},\xi)_1}{(n-1)!\,(r+1)}\right)\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{r+1}-1\right)|\nu|^{n-1}e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right]+\left(\|f_1(x)\|_1+\|f_2(x)\|_1\right)|m_{\xi,P}-1| \quad (193)$$
holds.

iii) When $n=0$, let $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $f_j \in L_p(\mathbb{R})$, and the rest as above in this section. Then
$$\|E_{0,P}(f,x)\|_p \le \left(\omega_r(f_1,\xi)_p+\omega_r(f_2,\xi)_p\right)\left(\sum_{\nu=-\infty}^{\infty}e^{-|\nu|/\xi}\right)^{1/q}\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right]^{1/p}+\left(\|f_1(x)\|_p+\|f_2(x)\|_p\right)|m_{\xi,P}-1| \quad (194)$$
holds.

iv) When $n=0$ and $p=1$, the inequality
$$\|E_{0,P}(f,x)\|_1 \le \left(\omega_r(f_1,\xi)_1+\omega_r(f_2,\xi)_1\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^r e^{-|\nu|/\xi}}{1+2\xi e^{-1/\xi}}\right)+\left(\|f_1(x)\|_1+\|f_2(x)\|_1\right)|m_{\xi,P}-1| \quad (195)$$
holds.

Proof. By Theorem 21 and (190).

We also have

Theorem 40 i) Let $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $n \in \mathbb{N}$ such that $np \ne 1$, $f_j \in L_p(\mathbb{R})$, $j = 1,2$, and the rest as above in this section. Then
$$\|E_{n,W}(f,x)\|_p \le \left(\omega_r(f_1^{(n)},\xi)_p+\omega_r(f_2^{(n)},\xi)_p\right)\left(\frac{\xi^{1/p}\left(\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}\right)^{1/q}}{((n-1)!)\,(q(n-1)+1)^{1/q}\,(rp+1)^{1/p}}\right)\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{rp+1}-1\right)|\nu|^{np-1}e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right]^{1/p}+\left(\|f_1(x)\|_p+\|f_2(x)\|_p\right)|m_{\xi,W}-1| \quad (196)$$
holds.

ii) For $p=1$, let $f_j \in C^n(\mathbb{R})$, $f_j \in L_1(\mathbb{R})$, $f_j^{(n)} \in L_1(\mathbb{R})$, $j = 1,2$, and $n \in \mathbb{N}-\{1\}$. Then
$$\|E_{n,W}(f,x)\|_1 \le \xi\left(\frac{\omega_r(f_1^{(n)},\xi)_1+\omega_r(f_2^{(n)},\xi)_1}{(n-1)!\,(r+1)}\right)\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(\left(1+\frac{|\nu|}{\xi}\right)^{r+1}-1\right)|\nu|^{n-1}e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right]+\left(\|f_1(x)\|_1+\|f_2(x)\|_1\right)|m_{\xi,W}-1| \quad (197)$$
holds.

iii) For $n=0$, let $p,q>1$ such that $\frac{1}{p}+\frac{1}{q}=1$, $f_j \in L_p(\mathbb{R})$, and the rest as above in this section. Then
$$\|E_{0,W}(f,x)\|_p \le \left(\omega_r(f_1,\xi)_p+\omega_r(f_2,\xi)_p\right)\left(\sum_{\nu=-\infty}^{\infty}e^{-\nu^2/\xi}\right)^{1/q}\left[\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^{rp}e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right]^{1/p}+\left(\|f_1(x)\|_p+\|f_2(x)\|_p\right)|m_{\xi,W}-1| \quad (198)$$
holds.

iv) For $n=0$ and $p=1$, the inequality
$$\|E_{0,W}(f,x)\|_1 \le \left(\omega_r(f_1,\xi)_1+\omega_r(f_2,\xi)_1\right)\left(\frac{\sum_{\nu=-\infty}^{\infty}\left(1+\frac{|\nu|}{\xi}\right)^r e^{-\nu^2/\xi}}{\sqrt{\pi\xi}\left(1-\operatorname{erf}\left(1/\sqrt{\xi}\right)\right)+1}\right)+\left(\|f_1(x)\|_1+\|f_2(x)\|_1\right)|m_{\xi,W}-1| \quad (199)$$
holds.

Proof. By Theorem 22 and (191).

Next, we demonstrate our simultaneous approximation results for the deriv-atives of the errors E0;P ; E0;W ; En;P ; and En;W : We start with

Corollary 41. Let $f_j^{(\tilde{j})} \in C_u(\mathbb{R})$, $\tilde{j} = 0, 1, \ldots, \delta$; $\delta \in \mathbb{Z}_+$, and $0 < \xi \le 1$. We consider the assumptions of Theorem 27 as valid for $n = \delta$ there. Then for all $\tilde{j} = 0, 1, \ldots, \delta$, we have

i)
\[
\left| \left( E_{0,P}(f;x) \right)^{(\tilde{j})} \right| \le
\frac{\sum_{\nu=1}^{\infty} \left( \omega_r\!\left(f_1^{(\tilde{j})}, |\nu|\right) + \omega_r\!\left(f_2^{(\tilde{j})}, |\nu|\right) \right) e^{-\frac{\nu}{\xi}}}
{1 + 2\xi e^{-\frac{1}{\xi}}}
+ \left( \left|f_1^{(\tilde{j})}(x)\right| + \left|f_2^{(\tilde{j})}(x)\right| \right) \left| m_{\xi,P} - 1 \right|, \tag{200}
\]

ii)
\[
\left| \left( E_{0,W}(f;x) \right)^{(\tilde{j})} \right| \le
\frac{\sum_{\nu=1}^{\infty} \left( \omega_r\!\left(f_1^{(\tilde{j})}, |\nu|\right) + \omega_r\!\left(f_2^{(\tilde{j})}, |\nu|\right) \right) e^{-\frac{\nu^2}{\xi^2}}}
{\sqrt{\pi}\,\xi \left(1-\operatorname{erf}\frac{1}{\xi}\right) + 1}
+ \left( \left|f_1^{(\tilde{j})}(x)\right| + \left|f_2^{(\tilde{j})}(x)\right| \right) \left| m_{\xi,W} - 1 \right|. \tag{201}
\]

Proof. By Corollary 37 and Theorem 27.
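Analogously for the Picard kernel $e^{-|\nu|/\xi}$ (again an illustrative addition, function names ours): the bilateral sum has a geometric closed form, and the same integral comparison yields the lower bound $1 + 2\xi e^{-1/\xi}$ used in the Picard denominators.

```python
import math

def picard_kernel_sum(xi):
    """Closed form of the bilateral Picard kernel sum (geometric series):
    sum_{nu in Z} e^{-|nu|/xi} = 1 + 2 * e^{-1/xi} / (1 - e^{-1/xi})."""
    q = math.exp(-1.0 / xi)
    return 1.0 + 2.0 * q / (1.0 - q)

def picard_lower_bound(xi):
    """Integral comparison: sum_{nu >= 1} e^{-nu/xi} >= int_1^inf e^{-t/xi} dt
    = xi * e^{-1/xi}, hence the bilateral sum is >= 1 + 2 * xi * e^{-1/xi}."""
    return 1.0 + 2.0 * xi * math.exp(-1.0 / xi)
```

The closed form agrees with a direct truncated summation, and the lower bound holds for every $0 < \xi \le 1$.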

We have also

Theorem 42. Let $f_j \in C^{n+\delta}(\mathbb{R})$, $n \in \mathbb{N}$, $\delta \in \mathbb{Z}_+$, and $f_j^{(n+\tilde{j})} \in C_u(\mathbb{R})$, $\tilde{j} = 0, 1, \ldots, \delta$, $0 < \xi \le 1$, with $\left\| f_j^{(\tilde{j})} \right\|_{\infty,\mathbb{R}} < \infty$. We consider the assumptions of Theorem 27 valid for $n = \delta$. Then for all $\tilde{j} = 0, 1, \ldots, \delta$, we have

i)
\[
\left\| \left( E_{n,P}(f;x) \right)^{(\tilde{j})} \right\|_{\infty,x} \le
\frac{\omega_r\!\left(f_1^{(n+\tilde{j})},\xi\right) + \omega_r\!\left(f_2^{(n+\tilde{j})},\xi\right)}{n!}
\left( \frac{\sum_{\nu=1}^{\infty} |\nu|^n \left(1+\frac{|\nu|}{\xi}\right)^{r} e^{-\frac{\nu}{\xi}}}
{1 + 2\xi e^{-\frac{1}{\xi}}} \right)
+ \left( \left\|f_1^{(\tilde{j})}\right\|_{\infty,\mathbb{R}} + \left\|f_2^{(\tilde{j})}\right\|_{\infty,\mathbb{R}} \right) \left| m_{\xi,P} - 1 \right|, \tag{202}
\]

and

ii)
\[
\left\| \left( E_{n,W}(f;x) \right)^{(\tilde{j})} \right\|_{\infty,x} \le
\frac{\omega_r\!\left(f_1^{(n+\tilde{j})},\xi\right) + \omega_r\!\left(f_2^{(n+\tilde{j})},\xi\right)}{n!}
\left( \frac{\sum_{\nu=1}^{\infty} |\nu|^n \left(1+\frac{|\nu|}{\xi}\right)^{r} e^{-\frac{\nu^2}{\xi^2}}}
{\sqrt{\pi}\,\xi \left(1-\operatorname{erf}\frac{1}{\xi}\right) + 1} \right)
+ \left( \left\|f_1^{(\tilde{j})}\right\|_{\infty,\mathbb{R}} + \left\|f_2^{(\tilde{j})}\right\|_{\infty,\mathbb{R}} \right) \left| m_{\xi,W} - 1 \right|. \tag{203}
\]

Proof. By Theorems 27 and 38.
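The factors $\left(1+\frac{|\nu|}{\xi}\right)^{r}$ in these bounds come from the standard scaling property of the modulus of smoothness, $\omega_r(f, \lambda h) \le (1+\lambda)^r\, \omega_r(f, h)$. A grid-based numerical illustration (the discrete approximation of $\omega_r$ below is ours, for illustration only):

```python
import math

def forward_difference(f, x, t, r):
    """r-th forward difference: Delta_t^r f(x) = sum_{k=0}^r (-1)^(r-k) C(r,k) f(x + k*t)."""
    return sum((-1) ** (r - k) * math.comb(r, k) * f(x + k * t) for k in range(r + 1))

def modulus_of_smoothness(f, h, r, steps=50):
    """Grid approximation of omega_r(f, h) = sup_{0 < t <= h, x in R} |Delta_t^r f(x)|,
    sampling x in [-5, 5] with step 0.1 and t in (0, h]."""
    xs = [i / 10.0 for i in range(-50, 51)]
    ts = [h * (i + 1) / steps for i in range(steps)]
    return max(abs(forward_difference(f, x, t, r)) for x in xs for t in ts)
```

For $f = \sin$, $r = 2$, $h = 0.1$, $\lambda = 3$, the sampled values respect both the monotonicity of $\omega_r$ in $h$ and the $(1+\lambda)^r$ scaling bound.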

Next, we give the following $L_p$ approximation results. We obtain


Theorem 43. i) Let $f_j \in C^{n+\delta}(\mathbb{R})$, with $f_j^{(n+\tilde{j})} \in L_p(\mathbb{R})$, $n \in \mathbb{N}$, $\tilde{j} = 0, 1, \ldots, \delta \in \mathbb{Z}_+$. Let $p, q > 1$ : $\frac{1}{p} + \frac{1}{q} = 1$, $np \ne 1$. We consider the assumptions of Theorem 27 as valid for $n = \delta$ there. Then for all $\tilde{j} = 0, 1, \ldots, \delta$,

\[
\left\| \left( E_{n,P}(f;x) \right)^{(\tilde{j})} \right\|_p \le
\left( \omega_r\!\left(f_1^{(n+\tilde{j})},\xi\right) + \omega_r\!\left(f_2^{(n+\tilde{j})},\xi\right) \right)
\left( \frac{\xi^{\frac{1}{p}} \left( \sum_{\nu=1}^{\infty} e^{-\frac{\nu}{\xi}} \right)^{\frac{1}{q}}}
{\left((n-1)!\right)\left(q(n-1)+1\right)^{\frac{1}{q}}\left(rp+1\right)^{\frac{1}{p}}} \right)
\left[ \frac{\sum_{\nu=1}^{\infty} \left( \left(1+\frac{|\nu|}{\xi}\right)^{rp+1} - 1 \right) |\nu|^{np-1}\, e^{-\frac{\nu}{\xi}}}
{1 + 2\xi e^{-\frac{1}{\xi}}} \right]^{\frac{1}{p}}
+ \left( \left\|f_1^{(\tilde{j})}(x)\right\|_p + \left\|f_2^{(\tilde{j})}(x)\right\|_p \right) \left| m_{\xi,P} - 1 \right| \tag{204}
\]

holds.

ii) Let $f_j \in C^{n+\delta}(\mathbb{R})$, with $f_j^{(n+\tilde{j})} \in L_1(\mathbb{R})$, $n \in \mathbb{N} \setminus \{1\}$, $\tilde{j} = 0, 1, \ldots, \delta \in \mathbb{Z}_+$. We consider the assumptions of Theorem 27 as valid for $n = \delta$ there. Then for all $\tilde{j} = 0, 1, \ldots, \delta$,

\[
\left\| \left( E_{n,P}(f;x) \right)^{(\tilde{j})} \right\|_1 \le
\frac{\left( \omega_r\!\left(f_1^{(n+\tilde{j})},\xi\right)_1 + \omega_r\!\left(f_2^{(n+\tilde{j})},\xi\right)_1 \right)\xi}{(n-1)!\,(r+1)}
\left[ \frac{\sum_{\nu=1}^{\infty} \left( \left(1+\frac{|\nu|}{\xi}\right)^{r+1} - 1 \right) |\nu|^{n-1}\, e^{-\frac{\nu}{\xi}}}
{1 + 2\xi e^{-\frac{1}{\xi}}} \right]
+ \left( \left\|f_1^{(\tilde{j})}(x)\right\|_1 + \left\|f_2^{(\tilde{j})}(x)\right\|_1 \right) \left| m_{\xi,P} - 1 \right| \tag{205}
\]

holds.

iii) Let $f_j^{(\tilde{j})} \in \left( C(\mathbb{R}) \cap L_p(\mathbb{R}) \right)$, $\tilde{j} = 0, 1, \ldots, \delta \in \mathbb{Z}_+$; $p, q > 1$ such that $\frac{1}{p} + \frac{1}{q} = 1$. We consider the assumptions of Theorem 27 as valid for $n = \delta$ there. Then for all $\tilde{j} = 0, 1, \ldots, \delta$,

\[
\left\| \left( E_{0,P}(f;x) \right)^{(\tilde{j})} \right\|_p \le
\left( \omega_r\!\left(f_1^{(\tilde{j})},\xi\right)_p + \omega_r\!\left(f_2^{(\tilde{j})},\xi\right)_p \right)
\left( \sum_{\nu=1}^{\infty} e^{-\frac{\nu}{\xi}} \right)^{\frac{1}{q}}
\left[ \frac{\left( \sum_{\nu=1}^{\infty} \left(1+\frac{|\nu|}{\xi}\right)^{rp} e^{-\frac{\nu}{\xi}} \right)^{\frac{1}{p}}}
{1 + 2\xi e^{-\frac{1}{\xi}}} \right]
+ \left( \left\|f_1^{(\tilde{j})}(x)\right\|_p + \left\|f_2^{(\tilde{j})}(x)\right\|_p \right) \left| m_{\xi,P} - 1 \right| \tag{206}
\]

holds.

iv) Let $f_j^{(\tilde{j})} \in \left( C(\mathbb{R}) \cap L_1(\mathbb{R}) \right)$, $\tilde{j} = 0, 1, \ldots, \delta \in \mathbb{Z}_+$. We consider the assumptions of Theorem 27 as valid for $n = \delta$ there. Then for all $\tilde{j} = 0, 1, \ldots, \delta$,

\[
\left\| \left( E_{0,P}(f;x) \right)^{(\tilde{j})} \right\|_1 \le
\left( \omega_r\!\left(f_1^{(\tilde{j})},\xi\right)_1 + \omega_r\!\left(f_2^{(\tilde{j})},\xi\right)_1 \right)
\left( \frac{\sum_{\nu=1}^{\infty} \left(1+\frac{|\nu|}{\xi}\right)^{r} e^{-\frac{\nu}{\xi}}}
{1 + 2\xi e^{-\frac{1}{\xi}}} \right)
+ \left( \left\|f_1^{(\tilde{j})}(x)\right\|_1 + \left\|f_2^{(\tilde{j})}(x)\right\|_1 \right) \left| m_{\xi,P} - 1 \right| \tag{207}
\]

holds.

Proof. By Theorems 27 and 39.

For $E_{n,W}(f;x)$, we have

Theorem 44. i) Let $f_j \in C^{n+\delta}(\mathbb{R})$, with $f_j^{(n+\tilde{j})} \in L_p(\mathbb{R})$, $n \in \mathbb{N}$, $\tilde{j} = 0, 1, \ldots, \delta \in \mathbb{Z}_+$. Let $p, q > 1$ : $\frac{1}{p} + \frac{1}{q} = 1$, $np \ne 1$. We consider the assumptions of Theorem 27 as valid for $n = \delta$ there. Then for all $\tilde{j} = 0, 1, \ldots, \delta$,

\[
\left\| \left( E_{n,W}(f;x) \right)^{(\tilde{j})} \right\|_p \le
\left( \omega_r\!\left(f_1^{(n+\tilde{j})},\xi\right) + \omega_r\!\left(f_2^{(n+\tilde{j})},\xi\right) \right)
\left( \frac{\xi^{\frac{1}{p}} \left( \sum_{\nu=1}^{\infty} e^{-\frac{\nu^2}{\xi^2}} \right)^{\frac{1}{q}}}
{\left((n-1)!\right)\left(q(n-1)+1\right)^{\frac{1}{q}}\left(rp+1\right)^{\frac{1}{p}}} \right)
\left[ \frac{\sum_{\nu=1}^{\infty} \left( \left(1+\frac{|\nu|}{\xi}\right)^{rp+1} - 1 \right) |\nu|^{np-1}\, e^{-\frac{\nu^2}{\xi^2}}}
{\sqrt{\pi}\,\xi \left(1-\operatorname{erf}\frac{1}{\xi}\right) + 1} \right]^{\frac{1}{p}}
+ \left( \left\|f_1^{(\tilde{j})}(x)\right\|_p + \left\|f_2^{(\tilde{j})}(x)\right\|_p \right) \left| m_{\xi,W} - 1 \right| \tag{208}
\]

holds.

ii) Let $f_j \in C^{n+\delta}(\mathbb{R})$, with $f_j^{(n+\tilde{j})} \in L_1(\mathbb{R})$, $n \in \mathbb{N} \setminus \{1\}$, $\tilde{j} = 0, 1, \ldots, \delta \in \mathbb{Z}_+$. We consider the assumptions of Theorem 27 as valid for $n = \delta$ there. Then for all $\tilde{j} = 0, 1, \ldots, \delta$, we have

\[
\left\| \left( E_{n,W}(f;x) \right)^{(\tilde{j})} \right\|_1 \le
\frac{\left( \omega_r\!\left(f_1^{(n+\tilde{j})},\xi\right)_1 + \omega_r\!\left(f_2^{(n+\tilde{j})},\xi\right)_1 \right)\xi}{(n-1)!\,(r+1)}
\left[ \frac{\sum_{\nu=1}^{\infty} \left( \left(1+\frac{|\nu|}{\xi}\right)^{r+1} - 1 \right) |\nu|^{n-1}\, e^{-\frac{\nu^2}{\xi^2}}}
{\sqrt{\pi}\,\xi \left(1-\operatorname{erf}\frac{1}{\xi}\right) + 1} \right]
+ \left( \left\|f_1^{(\tilde{j})}(x)\right\|_1 + \left\|f_2^{(\tilde{j})}(x)\right\|_1 \right) \left| m_{\xi,W} - 1 \right| \tag{209}
\]

holds.

iii) Let $f_j^{(\tilde{j})} \in \left( C(\mathbb{R}) \cap L_p(\mathbb{R}) \right)$, $\tilde{j} = 0, 1, \ldots, \delta \in \mathbb{Z}_+$; $p, q > 1$ such that $\frac{1}{p} + \frac{1}{q} = 1$. We consider the assumptions of Theorem 27 as valid for $n = \delta$ there. Then for all $\tilde{j} = 0, 1, \ldots, \delta$,

\[
\left\| \left( E_{0,W}(f;x) \right)^{(\tilde{j})} \right\|_p \le
\left( \omega_r\!\left(f_1^{(\tilde{j})},\xi\right)_p + \omega_r\!\left(f_2^{(\tilde{j})},\xi\right)_p \right)
\left( \sum_{\nu=1}^{\infty} e^{-\frac{\nu^2}{\xi^2}} \right)^{\frac{1}{q}}
\left[ \frac{\left( \sum_{\nu=1}^{\infty} \left(1+\frac{|\nu|}{\xi}\right)^{rp} e^{-\frac{\nu^2}{\xi^2}} \right)^{\frac{1}{p}}}
{\sqrt{\pi}\,\xi \left(1-\operatorname{erf}\frac{1}{\xi}\right) + 1} \right]
+ \left( \left\|f_1^{(\tilde{j})}(x)\right\|_p + \left\|f_2^{(\tilde{j})}(x)\right\|_p \right) \left| m_{\xi,W} - 1 \right| \tag{210}
\]

holds.

iv) Let $f_j^{(\tilde{j})} \in \left( C(\mathbb{R}) \cap L_1(\mathbb{R}) \right)$, $\tilde{j} = 0, 1, \ldots, \delta \in \mathbb{Z}_+$. We consider the assumptions of Theorem 27 as valid for $n = \delta$ there. Then for all $\tilde{j} = 0, 1, \ldots, \delta$,

\[
\left\| \left( E_{0,W}(f;x) \right)^{(\tilde{j})} \right\|_1 \le
\left( \omega_r\!\left(f_1^{(\tilde{j})},\xi\right)_1 + \omega_r\!\left(f_2^{(\tilde{j})},\xi\right)_1 \right)
\left( \frac{\sum_{\nu=1}^{\infty} \left(1+\frac{|\nu|}{\xi}\right)^{r} e^{-\frac{\nu^2}{\xi^2}}}
{\sqrt{\pi}\,\xi \left(1-\operatorname{erf}\frac{1}{\xi}\right) + 1} \right)
+ \left( \left\|f_1^{(\tilde{j})}(x)\right\|_1 + \left\|f_2^{(\tilde{j})}(x)\right\|_1 \right) \left| m_{\xi,W} - 1 \right| \tag{211}
\]

holds.

Proof. By Theorems 27 and 40.
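To see the qualitative content of these estimates, one can evaluate a simplified discrete Gauss–Weierstrass average numerically: as $\xi \to 0$ the weight $e^{-\nu^2/\xi^2}$ concentrates at $\nu = 0$ and the pointwise error shrinks. This sketch assumes a plain normalized average, omitting the paper's generalized finite-difference weights, so it is illustrative only:

```python
import math

def discrete_weierstrass(f, x, xi, terms=60):
    """Simplified discrete Gauss-Weierstrass average (illustrative; the paper's
    generalized operator carries extra finite-difference weights not reproduced here):
        W(f; x) = sum_{|nu| <= N} f(x + nu) e^{-nu^2/xi^2} / sum_{|nu| <= N} e^{-nu^2/xi^2}."""
    nus = range(-terms, terms + 1)
    weights = [math.exp(-(nu / xi) ** 2) for nu in nus]
    total = sum(weights)
    return sum(w * f(x + nu) for w, nu in zip(weights, nus)) / total
```

For $f = \sin$ at $x = 0.5$, the error drops by several orders of magnitude between $\xi = 1$ and $\xi = 0.3$, illustrating the role of $\xi$ in the rates above.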

Finally, we give the fractional results for the error terms as

Theorem 45. Here $f : \mathbb{R} \to \mathbb{C}$ such that $f = f_1 + i f_2$. Let $f_j \in C^m(\mathbb{R})$, $m = \lceil \alpha \rceil$, $\alpha > 0$, with $\left\| f_j^{(m)} \right\|_\infty < \infty$, $0 < \xi \le 1$, and $\delta_k$ be as in (26). Then

i)
\[
\| E_{\alpha,P}(f) \|_\infty \le
\left[ \sum_{k=0}^{r} \frac{r!}{(r-k)!\,\Gamma(\alpha+k+1)}\, |\delta_k|\, S4_{\xi,k} \right]
\left( \sup_{x \in \mathbb{R}} \max\!\left( \omega_r\!\left(D^{\alpha}_{x-}f_1, \xi\right), \omega_r\!\left(D^{\alpha}_{*x}f_1, \xi\right) \right)
+ \sup_{x \in \mathbb{R}} \max\!\left( \omega_r\!\left(D^{\alpha}_{x-}f_2, \xi\right), \omega_r\!\left(D^{\alpha}_{*x}f_2, \xi\right) \right) \right)
+ \left( \|f_1\|_\infty + \|f_2\|_\infty \right) \left| m_{\xi,P} - 1 \right|, \tag{212}
\]

where $S4_{\xi,k}$ is defined as in (102);

ii)
\[
\| E_{\alpha,W}(f) \|_\infty \le
\left[ \sum_{k=0}^{r} \frac{r!}{(r-k)!\,\Gamma(\alpha+k+1)}\, |\delta_k|\, S5_{\xi,k} \right]
\left( \sup_{x \in \mathbb{R}} \max\!\left( \omega_r\!\left(D^{\alpha}_{x-}f_1, \xi\right), \omega_r\!\left(D^{\alpha}_{*x}f_1, \xi\right) \right)
+ \sup_{x \in \mathbb{R}} \max\!\left( \omega_r\!\left(D^{\alpha}_{x-}f_2, \xi\right), \omega_r\!\left(D^{\alpha}_{*x}f_2, \xi\right) \right) \right)
+ \left( \|f_1\|_\infty + \|f_2\|_\infty \right) \left| m_{\xi,W} - 1 \right|, \tag{213}
\]

where $S5_{\xi,k}$ is defined as in (104).

Proof. By Theorems 13 and 24.
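The coefficients $\frac{r!}{(r-k)!\,\Gamma(\alpha+k+1)}$ in (212)–(213) can be tabulated directly with the Gamma function; for $\alpha = 0$ they reduce to binomial coefficients, which gives a convenient consistency check (the function name below is ours, for illustration):

```python
import math

def fractional_coefficients(r, alpha):
    """Coefficients r! / ((r - k)! * Gamma(alpha + k + 1)) for k = 0, ..., r,
    as they appear in the fractional error bounds; Gamma(n + 1) = n! for integer n,
    so alpha = 0 recovers the binomial coefficients C(r, k)."""
    return [math.factorial(r) / (math.factorial(r - k) * math.gamma(alpha + k + 1))
            for k in range(r + 1)]
```

Tabulation also uses the recurrence $\Gamma(z+1) = z\,\Gamma(z)$, which ties consecutive $k$-terms together.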


TABLE OF CONTENTS, JOURNAL OF APPLIED FUNCTIONAL ANALYSIS, VOL. 10, NO.’S 3-4, 2015

Construction of Spline Type Orthogonal Scaling Functions and Wavelets, Tung Nguyen, and Tian-Xiao He, 189-203

Fixed Point Results for Commuting Mappings in Modular Metric Spaces, Emine Kılınç, and Cihangir Alaca, 204-210

A General Common Fixed Point Theorem for Weakly Compatible Mappings, Kristaq Kikina, Luljeta Kikina, and Sofokli Vasili, 211-218

Integro-Differential Equations of Fractional Order on Unbounded Domains, Xuhuan Wang, and Yongfang Qi, 219-224

An Iterative Method for Implicit General Wiener-Hopf Equations and Nonexpansive Mappings, Manzoor Hussain, and Wasim Ul-Haq, 225-233

Parametric Duality Models for Multiobjective Fractional Programming Based on New Generation Hybrid Invexities, Ram U. Verma, 234-253

On the Best Linear Prediction of Linear Processes, Mohammad Mohammadi, and Adel Mohammadpour, 254-265

Some Estimate for the Extended Fresnel Transform and its Properties in a Class of Boehmians, S.K.Q. Al-Omari, 266-280

Resonance Problems for Nonlinear Elliptic Equations with Nonlinear Boundary Conditions, N. Mavinga, and M. N. Nkashama, 281-295

Approximation by Complex Generalized Discrete Singular Operators, George A. Anastassiou, and Merve Kester, 296-345