
Applied Mathematics and Computation 387 (2020) 124489


Convergence analysis of space-time Jacobi spectral collocation method for solving time-fractional Schrödinger equations

Yin Yang (a,*), Jindi Wang (a), Shangyou Zhang (b), Emran Tohidi (c)

(a) Hunan Key Laboratory for Computation and Simulation in Science and Engineering, Key Laboratory of Intelligent Computing & Information Processing of Ministry of Education, School of Mathematics and Computational Science, Xiangtan University, Xiangtan, Hunan 411105, China
(b) Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, USA
(c) Department of Mathematics, Kosar University of Bojnord, Bojnord, P. O. Box 9415615458, Iran

Article info

Article history: Available online 5 July 2019

MSC: 33C45, 35Q40, 41A55, 41A25, 65M70

Keywords: Convergence analysis; Time-fractional Schrödinger equation; Jacobi spectral-collocation method; Gauss-type quadrature

Abstract

In this paper, the space-time Jacobi spectral collocation method (JSC method) is used to solve time-fractional nonlinear Schrödinger equations subject to appropriate initial and boundary conditions. First, the considered problem is transformed into the associated system of nonlinear Volterra integro partial differential equations (PDEs) with weakly singular kernels by means of the definition and related properties of the fractional derivative and integral operators. Then, by collocating the associated system of integro-PDEs in both the space and time variables and approximating the integral in the equation with a Jacobi-Gauss-type quadrature formula, the problem is reduced to a set of nonlinear algebraic equations, which can be solved by robust iterative solvers. To support the convergence of the proposed method, we provide several numerical examples and report their $L^{\infty}$ and weighted $L^{2}$ errors at the end of the article.

© 2019 Elsevier Inc. All rights reserved.

1. Introduction

The Schrödinger equation is a significant development in the theory of quantum mechanics [1]. This equation is a powerful differential structure that describes how a quantum system changes over time when quantum effects are considerable. The classic variant of this equation is stated in terms of an integer first-order temporal derivative and second-order spatial partial derivatives. Because of the non-locality of fractional differential operators, the fractional variant of the Schrödinger equation can better describe physical and chemical events in real-world applications [2,3]. The notion of the fractional Schrödinger equation was first introduced by Laskin [4], in which the Feynman path integral is extended. Because of the nonlinearity, complexity and non-locality of fractional Schrödinger equations, classical methods for solving PDEs are not efficient. On the other hand, the solution of such equations is of considerable importance for researchers who want a deterministic description for simulating the events accurately. Therefore, numerical and analytical schemes should be explored and extended to compute the solution of fractional Schrödinger equations successfully.

    ∗ Corresponding author. E-mail address: [email protected] (Y. Yang).

    https://doi.org/10.1016/j.amc.2019.06.003



In recent years, in order to solve the fractional Schrödinger equation, scientists have proposed many analytical and numerical methods. Among the analytical methods, one can point to the homotopy analysis method (HAM) [5]. Analytical methods are very straightforward for solving nonlinear PDEs, since they need no discretization or linearization process. One of their disadvantages, however, is that they are time consuming, because the integration and differentiation are performed symbolically. Meanwhile, numerical methods have two robust tools, operational matrices of differentiation and Gaussian quadrature rules, which accelerate (and reduce the computational time of) differentiation and integration, respectively. Moreover, in some numerical approaches such as Krylov subspace methods [6], nonsmooth solutions may be computed without any regularization tool, since the solutions of these schemes depend on (and keep the behavior of) the right-hand side functions of the operator equations. Therefore, numerical methods are more attractive for researchers to implement. Among the numerical methods, one can point to low-order methods such as finite elements [7], local discontinuous Galerkin (DG) [8,9] and reproducing kernel [10] techniques, which were proposed recently for solving time-fractional Schrödinger equations. Since fractional differential operators are global, it is better to use global numerical approaches to solve the considered equations. If, for solving a PDE, the space variables are discretized by a global method such as a radial basis function collocation technique while the time variable is treated by a local scheme such as a Galerkin finite element method (FEM), as used for nonlinear time-fractional parabolic problems in [11], or a finite difference method (FDM), the result may be an unbalanced numerical scheme [12-14] that has spectral accuracy in the space variable but only low-order algebraic accuracy in the time variable. Therefore, it is desirable to propose a balanced numerical scheme that has spectral accuracy in both the time and space directions.

Fractional differential equations (FDEs) can be treated conveniently by spectral Galerkin methods, especially nonlinear FDEs [15]. These methods have been successfully applied to nonlinear fractional boundary value problems (BVPs) [16,18] and integral equations (IEs) [19,20] with a rigorous convergence analysis, and have also been regularized for solving FDEs with nonsmooth solutions [21]. In [22,23] the authors solved fractional Schrödinger equations from a purely numerical-implementation point of view via the spectral collocation method, which is also used in [17]. In [24,25] a linearized L1-Galerkin finite element method and linearized compact alternating direction implicit (ADI) schemes are proposed to solve the multidimensional nonlinear time-fractional Schrödinger equation, respectively. To the authors' knowledge, there are few results on space-time Jacobi spectral collocation methods (supported by a rigorous convergence analysis) for solving nonlinear time-fractional Schrödinger equations. This motivates us to propose a Jacobi spectral collocation scheme, together with a full convergence analysis, for solving time-fractional Schrödinger equations; the approach is balanced and has spectral accuracy in both the time and space directions.

The time-fractional PDE we consider is the following:

$$i\,\frac{\partial^{\mu}\psi(x,y,t)}{\partial t^{\mu}}=a_1\frac{\partial^{2}\psi(x,y,t)}{\partial x^{2}}+a_2\frac{\partial^{2}\psi(x,y,t)}{\partial y^{2}}+\gamma|\psi(x,y,t)|^{2}\psi(x,y,t)+\delta R(x,y,t),\quad 0<\mu<1,\ (x,y,t)\in\Omega_1\times\Omega_2\times\Omega_3, \qquad(1.1)$$

where $\Omega_1=[0,L_1]$, $\Omega_2=[0,L_2]$ and $\Omega_3=[0,T]$, with the initial time condition

$$\psi(x,y,0)=\zeta_1(x,y),\quad (x,y)\in\Omega_1\times\Omega_2, \qquad(1.2)$$

and two-dimensional boundary space conditions

$$\psi(0,y,t)=\zeta_2(y,t),\quad \psi(L_1,y,t)=\zeta_3(y,t),\quad (y,t)\in\Omega_2\times\Omega_3,$$
$$\psi(x,0,t)=\zeta_4(x,t),\quad \psi(x,L_2,t)=\zeta_5(x,t),\quad (x,t)\in\Omega_1\times\Omega_3,$$

while $\zeta_1,\zeta_2,\zeta_3,\zeta_4,\zeta_5$ and $R(x,y,t)$ are given functions. The Caputo fractional derivative is defined as follows:

$$\frac{\partial^{\mu}}{\partial\tau^{\mu}}\big(g(\tau)\big)=\begin{cases}\dfrac{\partial^{m}g(\tau)}{\partial\tau^{m}}, & \mu=m\in\mathbb{N},\\[2mm] \dfrac{1}{\Gamma(m-\mu)}\displaystyle\int_{0}^{\tau}\frac{g^{(m)}(s)}{(\tau-s)^{\mu-m+1}}\,ds, & m-1<\mu<m,\end{cases}$$

where $\frac{\partial^{\mu}}{\partial t^{\mu}}(\cdot)$ denotes the Caputo derivative of order $\mu$.

The Riemann-Liouville (R-L) fractional integral, indicated by $I^{\mu}_{\tau}$, is defined as

$$I^{\mu}_{\tau}\big(g(\tau)\big)=\frac{1}{\Gamma(\mu)}\int_{0}^{\tau}(\tau-s)^{\mu-1}g(s)\,ds,\quad \tau>0.$$

Moreover,

$$I^{\mu}_{\tau}\Big(\frac{\partial^{\mu}}{\partial\tau^{\mu}}g(\tau)\Big)=g(\tau)-\sum_{i=0}^{m-1}g^{(i)}(0)\,\frac{\tau^{i}}{i!},\quad m-1<\mu<m.$$
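These two definitions can be checked numerically on a monomial, for which the Riemann-Liouville integral has the closed form $I^{\mu}_{\tau}(\tau^{p})=\frac{\Gamma(p+1)}{\Gamma(p+\mu+1)}\tau^{p+\mu}$. The following minimal Python sketch (not part of the original paper; it assumes SciPy is available and uses illustrative values of $\mu$, $p$ and $\tau$) compares a direct quadrature of the weakly singular integral with this closed form:

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

mu, p, tau = 0.75, 2.0, 1.3          # illustrative values, not from the paper
g = lambda s: s**p

# Riemann-Liouville integral: (1/Gamma(mu)) * int_0^tau (tau - s)^(mu-1) g(s) ds.
# The algebraic weight option handles the integrable singularity at s = tau.
num, _ = quad(g, 0.0, tau, weight='alg', wvar=(0.0, mu - 1.0))
rl_numeric = num / gamma(mu)

# Closed form for a monomial: I^mu tau^p = Gamma(p+1)/Gamma(p+mu+1) * tau^(p+mu)
rl_exact = gamma(p + 1.0) / gamma(p + mu + 1.0) * tau**(p + mu)

print(rl_numeric, rl_exact)          # the two values agree to quadrature accuracy
```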

In the next section, we propose the space-time Jacobi spectral collocation method to solve Eq. (1.1). Some useful lemmas and preliminaries, such as the error estimates for interpolation at Jacobi-Gauss points, are provided in Section 3. A rigorous convergence analysis of the numerical method in the weighted $L^{2}$ norm and the $L^{\infty}$ norm is given in Section 4. In Section 5, the algorithm implementation and numerical results are presented. Finally, in the last section, some concluding remarks and problems for future research are stated.

    2. Jacobi spectral collocation methods

We first split all of the known and unknown functions into real and imaginary parts and transform the basic equation into a system of coupled time-fractional PDEs in the Caputo sense. In the next step, we apply the Riemann-Liouville fractional integral operator to both sides of the equations to change this system into a nonlinear system of Volterra integro-PDEs with weakly singular kernels that incorporates the initial conditions. Finally, both the space and time variables are collocated and the existing integrals are approximated by Gaussian quadrature rules, which turns the considered problem into a set of nonlinear algebraic equations; this system can then be solved by robust iterative solvers.

Now, one can split all of the known and unknown functions into real and imaginary parts as follows:

$$\psi=u+iv,\quad R=f+ig,\quad \zeta_1=g_1+ig_2,\quad \zeta_2=g_3+ig_4,\quad \zeta_3=g_5+ig_6,\quad \zeta_4=g_7+ig_8,\quad \zeta_5=g_9+ig_{10}, \qquad(2.1)$$

where $u,v,f,g,g_1,g_2,\ldots,g_{10}$ are real functions. Under the assumptions of (2.1), the basic Eq. (1.1) can be rewritten in the form

$$i\left(\frac{\partial^{\mu}u}{\partial t^{\mu}}-a_1\frac{\partial^{2}v}{\partial x^{2}}-a_2\frac{\partial^{2}v}{\partial y^{2}}-\gamma(u^{2}+v^{2})v-\delta g\right)-\left(\frac{\partial^{\mu}v}{\partial t^{\mu}}+a_1\frac{\partial^{2}u}{\partial x^{2}}+a_2\frac{\partial^{2}u}{\partial y^{2}}+\gamma(u^{2}+v^{2})u+\delta f\right)=0. \qquad(2.2)$$

Therefore, the aforementioned complex equation can be transformed into the following coupled real time-fractional PDEs:

$$\frac{\partial^{\mu}u}{\partial t^{\mu}}=a_1\frac{\partial^{2}v}{\partial x^{2}}+a_2\frac{\partial^{2}v}{\partial y^{2}}+\gamma(u^{2}+v^{2})v+\delta g,$$
$$-\frac{\partial^{\mu}v}{\partial t^{\mu}}=a_1\frac{\partial^{2}u}{\partial x^{2}}+a_2\frac{\partial^{2}u}{\partial y^{2}}+\gamma(u^{2}+v^{2})u+\delta f, \qquad(2.3)$$

and the initial and boundary conditions become

$$u(x,y,0)=g_1(x,y),\quad v(x,y,0)=g_2(x,y),\quad (x,y)\in\Omega_1\times\Omega_2,$$
$$u(0,y,t)=g_3(y,t),\quad v(0,y,t)=g_4(y,t),\quad u(L_1,y,t)=g_5(y,t),\quad v(L_1,y,t)=g_6(y,t),\quad (y,t)\in\Omega_2\times\Omega_3,$$
$$u(x,0,t)=g_7(x,t),\quad v(x,0,t)=g_8(x,t),\quad u(x,L_2,t)=g_9(x,t),\quad v(x,L_2,t)=g_{10}(x,t),\quad (x,t)\in\Omega_1\times\Omega_3. \qquad(2.4)$$

Note that Eqs. (2.3) and (2.4) are equivalent to Eqs. (1.1) and (1.2). So, instead of solving (1.1) and (1.2) numerically, we compute the numerical solution of (2.3) and (2.4) using the Jacobi spectral collocation method. Before the space-time collocation, we first apply the R-L fractional integral of order $\mu$ to transform (2.3) into the associated system of weakly singular Volterra integro-PDEs,

$$u(x,y,t)=\frac{1}{\Gamma(\mu)}\int_{0}^{t}(t-\tau)^{\mu-1}k_1\big(x,y,\tau,u(x,y,\tau),v(x,y,\tau)\big)\,d\tau+\delta\tilde g(x,y,t)+g_1(x,y),$$
$$v(x,y,t)=\frac{1}{\Gamma(\mu)}\int_{0}^{t}(t-\tau)^{\mu-1}k_2\big(x,y,\tau,u(x,y,\tau),v(x,y,\tau)\big)\,d\tau+\delta\tilde f(x,y,t)+g_2(x,y), \qquad(2.5)$$

where

$$k_1\big(x,y,\tau,u(x,y,\tau),v(x,y,\tau)\big)=a_1\frac{\partial^{2}v(x,y,\tau)}{\partial x^{2}}+a_2\frac{\partial^{2}v(x,y,\tau)}{\partial y^{2}}+\gamma\big(u^{2}(x,y,\tau)+v^{2}(x,y,\tau)\big)v(x,y,\tau),$$
$$k_2\big(x,y,\tau,u(x,y,\tau),v(x,y,\tau)\big)=-a_1\frac{\partial^{2}u(x,y,\tau)}{\partial x^{2}}-a_2\frac{\partial^{2}u(x,y,\tau)}{\partial y^{2}}-\gamma\big(u^{2}(x,y,\tau)+v^{2}(x,y,\tau)\big)u(x,y,\tau),$$
$$\tilde g(x,y,t)=\frac{1}{\Gamma(\mu)}\int_{0}^{t}(t-\tau)^{\mu-1}g(x,y,\tau)\,d\tau,\qquad \tilde f(x,y,t)=-\frac{1}{\Gamma(\mu)}\int_{0}^{t}(t-\tau)^{\mu-1}f(x,y,\tau)\,d\tau. \qquad(2.6)$$
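The splitting of the cubic term used in (2.2)-(2.3) and in the kernels $k_1,k_2$ above, namely that $|\psi|^{2}\psi$ has real part $(u^{2}+v^{2})u$ and imaginary part $(u^{2}+v^{2})v$, can be verified symbolically. A short, hedged check (assuming SymPy; not from the paper):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
psi = u + sp.I * v

# |psi|^2 * psi written as psi * conj(psi) * psi, then split into real/imaginary parts
nonlinear = sp.expand(psi * sp.conjugate(psi) * psi)

print(sp.simplify(sp.re(nonlinear) - (u**2 + v**2) * u))   # 0
print(sp.simplify(sp.im(nonlinear) - (u**2 + v**2) * v))   # 0
```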

Since (2.5) has weakly singular kernels around $t=0^{+}$, its numerical treatment may be difficult. To apply the orthogonal Jacobi polynomials to the equations in (2.5), we consider the following variable transformations:


$$x=\frac{L_1}{2}(1+\bar x),\quad \bar x=\frac{2x}{L_1}-1,\quad \bar x\in[-1,1],\qquad y=\frac{L_2}{2}(1+\bar y),\quad \bar y=\frac{2y}{L_2}-1,\quad \bar y\in[-1,1],$$
$$t=\frac{T}{2}(1+\bar t),\quad \bar t=\frac{2t}{T}-1,\quad \bar t\in[-1,1],\qquad \tau=\frac{T}{2}(1+s),\quad s=\frac{2\tau}{T}-1,\quad s\in[-1,t].$$

For simplicity, we still use $x,y,t$ to denote $\bar x,\bar y,\bar t$; the equations in (2.5) can then be rewritten in the following form:

$$\bar u(x,y,t)=\int_{-1}^{t}(t-s)^{\mu-1}\bar k_1\big(x,y,s,\bar u(x,y,s),\bar v(x,y,s)\big)\,ds+\delta\bar g(x,y,t)+\bar g_1(x,y),$$
$$\bar v(x,y,t)=\int_{-1}^{t}(t-s)^{\mu-1}\bar k_2\big(x,y,s,\bar u(x,y,s),\bar v(x,y,s)\big)\,ds+\delta\bar f(x,y,t)+\bar g_2(x,y), \qquad(2.7)$$

with the boundary conditions

$$\bar u(-1,y,t)=\bar g_3(y,t),\quad \bar u(1,y,t)=\bar g_5(y,t),\quad \bar u(x,-1,t)=\bar g_7(x,t),\quad \bar u(x,1,t)=\bar g_9(x,t),$$
$$\bar v(-1,y,t)=\bar g_4(y,t),\quad \bar v(1,y,t)=\bar g_6(y,t),\quad \bar v(x,-1,t)=\bar g_8(x,t),\quad \bar v(x,1,t)=\bar g_{10}(x,t),$$

where

$$\bar u(x,y,t)=u\Big(\frac{L_1}{2}(1+x),\frac{L_2}{2}(1+y),\frac{T}{2}(1+t)\Big),\qquad \bar v(x,y,t)=v\Big(\frac{L_1}{2}(1+x),\frac{L_2}{2}(1+y),\frac{T}{2}(1+t)\Big),$$
$$\bar g(x,y,t)=\tilde g\Big(\frac{L_1}{2}(1+x),\frac{L_2}{2}(1+y),\frac{T}{2}(1+t)\Big),\qquad \bar f(x,y,t)=\tilde f\Big(\frac{L_1}{2}(1+x),\frac{L_2}{2}(1+y),\frac{T}{2}(1+t)\Big),$$
$$\bar g_i(x,y)=g_i\Big(\frac{L_1}{2}(1+x),\frac{L_2}{2}(1+y)\Big),\quad i=1,2,$$
$$\bar g_i(y,t)=g_i\Big(\frac{L_2}{2}(1+y),\frac{T}{2}(1+t)\Big),\quad i=3,4,5,6,$$
$$\bar g_i(x,t)=g_i\Big(\frac{L_1}{2}(1+x),\frac{T}{2}(1+t)\Big),\quad i=7,8,9,10,$$
$$\bar k_1\big(x,y,s,\bar u(x,y,s),\bar v(x,y,s)\big)=\frac{1}{\Gamma(\mu)}\Big(\frac{T}{2}\Big)^{\mu}\left(a_1\Big(\frac{2}{L_1}\Big)^{2}\frac{\partial^{2}\bar v}{\partial x^{2}}+a_2\Big(\frac{2}{L_2}\Big)^{2}\frac{\partial^{2}\bar v}{\partial y^{2}}+\gamma(\bar u^{2}+\bar v^{2})\bar v\right),$$
$$\bar k_2\big(x,y,s,\bar u(x,y,s),\bar v(x,y,s)\big)=\frac{1}{\Gamma(\mu)}\Big(\frac{T}{2}\Big)^{\mu}\left(-a_1\Big(\frac{2}{L_1}\Big)^{2}\frac{\partial^{2}\bar u}{\partial x^{2}}-a_2\Big(\frac{2}{L_2}\Big)^{2}\frac{\partial^{2}\bar u}{\partial y^{2}}-\gamma(\bar u^{2}+\bar v^{2})\bar u\right).$$

Now, we collocate the variable $t$ in the equations of (2.7) at the Jacobi-Gauss nodes corresponding to $\theta=\vartheta=-\mu$, for which $\omega^{\theta,\vartheta}(t)=(1-t)^{-\mu}(1+t)^{-\mu}=(1-t^{2})^{-\mu}$. For $0\le l\le M$,

$$\bar u(x,y,t_l)=\int_{-1}^{t_l}(t_l-s)^{\mu-1}\bar k_1\big(x,y,s,\bar u(x,y,s),\bar v(x,y,s)\big)\,ds+\delta\bar g(x,y,t_l)+\bar g_1(x,y),$$
$$\bar v(x,y,t_l)=\int_{-1}^{t_l}(t_l-s)^{\mu-1}\bar k_2\big(x,y,s,\bar u(x,y,s),\bar v(x,y,s)\big)\,ds+\delta\bar f(x,y,t_l)+\bar g_2(x,y). \qquad(2.8)$$

We apply the following change of variables in order to implement the Jacobi-Gauss quadrature rule:

$$s(\theta)=s_l(\theta)=\frac{1+t_l}{2}\,\theta+\frac{t_l-1}{2},\quad 0\le l\le M,\ \theta\in[-1,1]. \qquad(2.9)$$
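A small sketch (illustrative only, with an arbitrary node $t_l$; not from the paper) confirms that the map (2.9) sends $[-1,1]$ onto $[-1,t_l]$ and that $t_l-s=\frac{1+t_l}{2}(1-\theta)$, which is what produces the factor $\big(\frac{1+t_l}{2}\big)^{\mu}$ and the Jacobi weight $(1-\theta)^{\mu-1}$ in (2.10):

```python
import numpy as np

t_l = 0.3                               # an arbitrary collocation node in (-1, 1)
mu = 0.75
theta = np.linspace(-1.0, 1.0, 5)

# affine map (2.9): sends theta in [-1, 1] onto s in [-1, t_l]
s = 0.5 * (1.0 + t_l) * theta + 0.5 * (t_l - 1.0)
print(s[0], s[-1])                      # -1.0 and t_l

# under this map, (t_l - s) = (1 + t_l)/2 * (1 - theta),
# so the singular factor transforms as (t_l - s)^(mu-1) = ((1+t_l)/2)^(mu-1) (1-theta)^(mu-1)
lhs = (t_l - s[1:-1])**(mu - 1.0)       # interior points only, to avoid the endpoint singularity
rhs = (0.5 * (1.0 + t_l))**(mu - 1.0) * (1.0 - theta[1:-1])**(mu - 1.0)
print(np.allclose(lhs, rhs))            # True
```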

  • Y. Yang, J. Wang and S. Zhang et al. / Applied Mathematics and Computation 387 (2020) 124489 5

Applying (2.9), we therefore obtain

$$\bar u(x,y,t_l)=\Big(\frac{1+t_l}{2}\Big)^{\mu}\int_{-1}^{1}(1-\theta)^{\mu-1}\bar k_1\big(x,y,s(\theta),\bar u(x,y,s(\theta)),\bar v(x,y,s(\theta))\big)\,d\theta+\delta\bar g(x,y,t_l)+\bar g_1(x,y),$$
$$\bar v(x,y,t_l)=\Big(\frac{1+t_l}{2}\Big)^{\mu}\int_{-1}^{1}(1-\theta)^{\mu-1}\bar k_2\big(x,y,s(\theta),\bar u(x,y,s(\theta)),\bar v(x,y,s(\theta))\big)\,d\theta+\delta\bar f(x,y,t_l)+\bar g_2(x,y). \qquad(2.10)$$

Applying the Gaussian quadrature formula to replace the integral parts of the above formulas by summations,

$$\int_{-1}^{1}(1-\theta)^{\mu-1}\bar k_1\big(x,y,s(\theta),\bar u(x,y,s(\theta)),\bar v(x,y,s(\theta))\big)\,d\theta\approx\sum_{k=0}^{L}\bar k_1\big(x,y,s(\theta_k),\bar u(x,y,s(\theta_k)),\bar v(x,y,s(\theta_k))\big)\,\omega_k^{\mu-1,0},$$
$$\int_{-1}^{1}(1-\theta)^{\mu-1}\bar k_2\big(x,y,s(\theta),\bar u(x,y,s(\theta)),\bar v(x,y,s(\theta))\big)\,d\theta\approx\sum_{k=0}^{L}\bar k_2\big(x,y,s(\theta_k),\bar u(x,y,s(\theta_k)),\bar v(x,y,s(\theta_k))\big)\,\omega_k^{\mu-1,0}, \qquad(2.11)$$

where $\{\theta_k\}_{k=0}^{L}$ are the Jacobi-Gauss-Lobatto nodes, $\{\omega_k^{\mu-1,0}\}_{k=0}^{L}$ are the corresponding quadrature weights on $[-1,1]$, $L\ge M$, and the weight function is $\omega^{\mu-1,0}(t)=(1-t)^{\mu-1}$.
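The quadrature in (2.11) can be realized with standard Jacobi-Gauss nodes and weights. The sketch below is an assumption-laden illustration, not the paper's implementation: it uses SciPy's Gauss nodes via `roots_jacobi` rather than the Gauss-Lobatto variant, and a generic smooth integrand in place of $\bar k_1$; it only demonstrates the weight index $(\mu-1,0)$:

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

mu, L = 0.75, 10                          # illustrative fractional order and node count

# nodes/weights for the Jacobi weight (1 - theta)^(mu-1) * (1 + theta)^0 on [-1, 1]
theta, w = roots_jacobi(L + 1, mu - 1.0, 0.0)

# smooth stand-in integrand h(theta) in place of \bar{k}_1
h = np.exp(theta)
approx = np.dot(w, h)

# reference value of int_{-1}^{1} (1 - theta)^(mu-1) exp(theta) dtheta
ref, _ = quad(np.exp, -1.0, 1.0, weight='alg', wvar=(0.0, mu - 1.0))
print(approx, ref)                        # agree to near machine precision for smooth h
```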

Let $\bar u_l(x,y)$ and $\bar v_l(x,y)$ denote $\bar u(x,y,t_l)$ and $\bar v(x,y,t_l)$, respectively. One can approximate $\bar u(x,y,t)$ and $\bar v(x,y,t)$ by their Lagrange interpolation polynomials in the following form:

$$\bar u(x,y,t)\approx\bar u^{M}(x,y,t)=\sum_{l=0}^{M}\bar u_l(x,y)F_l(t),\qquad \bar v(x,y,t)\approx\bar v^{M}(x,y,t)=\sum_{l=0}^{M}\bar v_l(x,y)F_l(t),$$

where $F_l(t)$ is the $l$-th Lagrange polynomial associated with $t_l$ for $0\le l\le M$. Using these approximations together with the Gauss quadrature rule, (2.10) reduces to the following ($0\le l\le M$):

$$\bar u_l(x,y)=\Big(\frac{1+t_l}{2}\Big)^{\mu}\sum_{k=0}^{L}\bar k_1\big(x,y,s(\theta_k),\bar u^{M}(x,y,s(\theta_k)),\bar v^{M}(x,y,s(\theta_k))\big)\,\omega_k^{\mu-1,0}+\delta\bar g(x,y,t_l)+\bar g_1(x,y),$$
$$\bar v_l(x,y)=\Big(\frac{1+t_l}{2}\Big)^{\mu}\sum_{k=0}^{L}\bar k_2\big(x,y,s(\theta_k),\bar u^{M}(x,y,s(\theta_k)),\bar v^{M}(x,y,s(\theta_k))\big)\,\omega_k^{\mu-1,0}+\delta\bar f(x,y,t_l)+\bar g_2(x,y). \qquad(2.12)$$

To apply the JSC method in the space variables, we use the Legendre-Gauss-Lobatto points $\{x_i\}_{i=0}^{N_1}$, $\{y_j\}_{j=0}^{N_2}$ on the interval $[-1,1]$, corresponding to the weight $\omega^{0,0}(x,y)=(1-x)^{0}(1+x)^{0}(1-y)^{0}(1+y)^{0}=1$. Then (2.12) takes the following form ($1\le i\le N_1-1$, $1\le j\le N_2-1$):

$$\bar u_l(x_i,y_j)=\Big(\frac{1+t_l}{2}\Big)^{\mu}\sum_{k=0}^{L}\bar k_1\big(x_i,y_j,s(\theta_k),\bar u^{M}(x_i,y_j,s(\theta_k)),\bar v^{M}(x_i,y_j,s(\theta_k))\big)\,\omega_k^{\mu-1,0}+\delta\bar g(x_i,y_j,t_l)+\bar g_1(x_i,y_j),$$
$$\bar v_l(x_i,y_j)=\Big(\frac{1+t_l}{2}\Big)^{\mu}\sum_{k=0}^{L}\bar k_2\big(x_i,y_j,s(\theta_k),\bar u^{M}(x_i,y_j,s(\theta_k)),\bar v^{M}(x_i,y_j,s(\theta_k))\big)\,\omega_k^{\mu-1,0}+\delta\bar f(x_i,y_j,t_l)+\bar g_2(x_i,y_j). \qquad(2.13)$$

Again, let $\bar u^{l}_{ij}$ and $\bar v^{l}_{ij}$ denote $\bar u(x_i,y_j,t_l)$ and $\bar v(x_i,y_j,t_l)$, respectively. One can consider the Lagrange interpolation polynomials of both functions $\bar u$ and $\bar v$ and write

$$\bar u(x,y,t)\approx\bar u^{M}_{N_1N_2}(x,y,t)=\sum_{l=0}^{M}\sum_{i=0}^{N_1}\sum_{j=0}^{N_2}\bar u^{l}_{ij}H_i(x)H_j(y)F_l(t)=\sum_{l=0}^{M}\sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1}\bar u^{l}_{ij}H_i(x)H_j(y)F_l(t)+\sum_{l=0}^{M}\sum_{j=0}^{N_2}\big(\bar g_3H_0(x)+\bar g_5H_{N_1}(x)\big)H_j(y)F_l(t)+\sum_{l=0}^{M}\sum_{i=0}^{N_1}\big(\bar g_7H_0(y)+\bar g_9H_{N_2}(y)\big)H_i(x)F_l(t),$$

$$\bar v(x,y,t)\approx\bar v^{M}_{N_1N_2}(x,y,t)=\sum_{l=0}^{M}\sum_{i=0}^{N_1}\sum_{j=0}^{N_2}\bar v^{l}_{ij}H_i(x)H_j(y)F_l(t)=\sum_{l=0}^{M}\sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1}\bar v^{l}_{ij}H_i(x)H_j(y)F_l(t)+\sum_{l=0}^{M}\sum_{j=0}^{N_2}\big(\bar g_4H_0(x)+\bar g_6H_{N_1}(x)\big)H_j(y)F_l(t)+\sum_{l=0}^{M}\sum_{i=0}^{N_1}\big(\bar g_8H_0(y)+\bar g_{10}H_{N_2}(y)\big)H_i(x)F_l(t), \qquad(2.14)$$

  • 6 Y. Yang, J. Wang and S. Zhang et al. / Applied Mathematics and Computation 387 (2020) 124489

where $H_i(x)$ is the $i$-th Lagrange polynomial corresponding to $x_i$ for $0\le i\le N_1$, and $H_j(y)$ is the $j$-th Lagrange polynomial corresponding to $y_j$ for $0\le j\le N_2$.
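For completeness, here is a hedged sketch of how an expansion of the form (2.14) can be evaluated at a point once the one-dimensional Lagrange basis values are available; the coefficient array and node sets below are random placeholders, not quantities from the paper:

```python
import numpy as np

def lagrange_basis(nodes, x):
    """Values of all Lagrange basis polynomials for `nodes` at a scalar point x."""
    vals = np.ones(len(nodes))
    for i, xi in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if j != i:
                vals[i] *= (x - xj) / (xi - xj)
    return vals

# illustrative coefficient array coeff[l, i, j] standing in for \bar{u}^l_{ij} in (2.14)
M, N1, N2 = 4, 5, 5
coeff = np.random.rand(M + 1, N1 + 1, N2 + 1)
t_nodes = np.linspace(-1, 1, M + 1)      # stand-ins for the Jacobi-Gauss nodes in time
x_nodes = np.linspace(-1, 1, N1 + 1)     # stand-ins for the Legendre-Gauss-Lobatto nodes
y_nodes = np.linspace(-1, 1, N2 + 1)

# evaluate sum_{l,i,j} coeff[l,i,j] H_i(x) H_j(y) F_l(t) at one point (x, y, t)
x, y, t = 0.3, -0.2, 0.5
value = np.einsum('lij,i,j,l->', coeff,
                  lagrange_basis(x_nodes, x), lagrange_basis(y_nodes, y),
                  lagrange_basis(t_nodes, t))
print(value)
```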

Therefore, the full discrete system of algebraic equations arising from the space-time Jacobi spectral collocation method for solving (1.1) and (1.2) can be stated as ($1\le i\le N_1-1$, $1\le j\le N_2-1$, $0\le l\le M$)

$$\bar u^{l}_{ij}=\Big(\frac{1+t_l}{2}\Big)^{\mu}\sum_{k=0}^{L}\bar k_1\big(x_i,y_j,s(\theta_k),\bar u^{M}_{N_1N_2}(x_i,y_j,s(\theta_k)),\bar v^{M}_{N_1N_2}(x_i,y_j,s(\theta_k))\big)\,\omega_k^{\mu-1,0}+\delta\bar g(x_i,y_j,t_l)+\bar g_1(x_i,y_j)$$
$$=\Big(\frac{1+t_l}{2}\Big)^{\mu}\sum_{k=0}^{L}\bar k_1\Big(x_i,y_j,s(\theta_k),\sum_{n_1=0}^{N_1}\sum_{n_2=0}^{N_2}\sum_{m=0}^{M}\bar u^{m}_{n_1n_2}H_{n_1}(x_i)H_{n_2}(y_j)F_m(s(\theta_k)),\sum_{n_1=0}^{N_1}\sum_{n_2=0}^{N_2}\sum_{m=0}^{M}\bar v^{m}_{n_1n_2}H_{n_1}(x_i)H_{n_2}(y_j)F_m(s(\theta_k))\Big)\,\omega_k^{\mu-1,0}+\delta\bar g(x_i,y_j,t_l)+\bar g_1(x_i,y_j),$$

$$\bar v^{l}_{ij}=\Big(\frac{1+t_l}{2}\Big)^{\mu}\sum_{k=0}^{L}\bar k_2\big(x_i,y_j,s(\theta_k),\bar u^{M}_{N_1N_2}(x_i,y_j,s(\theta_k)),\bar v^{M}_{N_1N_2}(x_i,y_j,s(\theta_k))\big)\,\omega_k^{\mu-1,0}+\delta\bar f(x_i,y_j,t_l)+\bar g_2(x_i,y_j)$$
$$=\Big(\frac{1+t_l}{2}\Big)^{\mu}\sum_{k=0}^{L}\bar k_2\Big(x_i,y_j,s(\theta_k),\sum_{n_1=0}^{N_1}\sum_{n_2=0}^{N_2}\sum_{m=0}^{M}\bar u^{m}_{n_1n_2}H_{n_1}(x_i)H_{n_2}(y_j)F_m(s(\theta_k)),\sum_{n_1=0}^{N_1}\sum_{n_2=0}^{N_2}\sum_{m=0}^{M}\bar v^{m}_{n_1n_2}H_{n_1}(x_i)H_{n_2}(y_j)F_m(s(\theta_k))\Big)\,\omega_k^{\mu-1,0}+\delta\bar f(x_i,y_j,t_l)+\bar g_2(x_i,y_j). \qquad(2.15)$$

The above system of nonlinear algebraic equations can be solved using the Newton-Raphson iterative method. The practical implementation can be carried out with the well-known command fsolve in MATLAB or Maple.
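For illustration, the analogous step in Python would use `scipy.optimize.fsolve` on the residual of the algebraic system. The toy system below merely mimics the fixed-point structure of (2.15) with a cubic nonlinearity; it is a hedged stand-in, not the actual discretization:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy stand-in for (2.15): two unknowns U = [u, v] coupled through a cubic term
#   u = 0.1*(u^2 + v^2)*v + 0.5
#   v = -0.1*(u^2 + v^2)*u + 0.2
def residual(U):
    u, v = U
    r1 = u - 0.1 * (u**2 + v**2) * v - 0.5
    r2 = v + 0.1 * (u**2 + v**2) * u - 0.2
    return [r1, r2]

U0 = np.zeros(2)                 # initial guess
U = fsolve(residual, U0)
print(U, residual(U))            # the residual is ~0 at the computed root
```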

    3. Some preliminaries and useful Lemmas

In the next section, we will present the convergence analysis of the proposed space-time JSC method for solving (1.1) and (1.2). To establish the proof of the main theorems, we need some definitions and lemmas, including the error of Gauss quadrature rules, estimates of interpolation errors, the Lebesgue constant corresponding to the Legendre series, and the Gronwall inequality.

Definition 3.1 [26]. Let $I$ be a bounded interval in $\mathbb{R}$ and let $L^{p}(I)$, $1\le p<\infty$, be the space of measurable functions $u:I\to\mathbb{R}$ with $\int_a^b|u(x)|^{p}\,dx<\infty$, equipped with the norm

$$\|u\|_{L^{p}(I)}=\Big(\int_{a}^{b}|u(x)|^{p}\,dx\Big)^{\frac1p}. \qquad(3.1)$$

It is a Banach space.

Definition 3.2 [26]. Let $I$ be a bounded interval in $\mathbb{R}$. The space $H^{m}(I)$ consists of all functions $u\in L^{2}(I)$ whose derivatives $\frac{d^{k}u}{dx^{k}}$ also belong to $L^{2}(I)$; in other words,

$$H^{m}(I)=\Big\{u\in L^{2}(I):\ \frac{d^{k}u}{dx^{k}}\in L^{2}(I)\ \text{for}\ 0\le k\le m\Big\}, \qquad(3.2)$$

with the inner product

$$(u,v)_{m}=\sum_{k=0}^{m}\int_{a}^{b}\frac{d^{k}u(x)}{dx^{k}}\,\frac{d^{k}v(x)}{dx^{k}}\,dx. \qquad(3.3)$$

Then $H^{m}(I)$ is a Hilbert space, with norm

$$\|v\|_{H^{m}(I)}=\Big(\sum_{k=0}^{m}\Big\|\frac{d^{k}v}{dx^{k}}\Big\|^{2}_{L^{2}(I)}\Big)^{\frac12}. \qquad(3.4)$$

Lemma 3.1 [27] (Integration error). Let the quadrature nodes be those of one of the three types of Gaussian formulas (Gauss, Gauss-Radau or Gauss-Lobatto). For $u\in H^{m}(I)$ with some $m\ge1$ and $\phi\in\mathcal{P}_N$ (the set of all algebraic polynomials of degree $\le N$), where $I:=(-1,1)$, there exists a constant $C$ independent of $N$ such that

$$\Big|\int_{-1}^{1}u(x)\phi(x)\,dx-(u,\phi)_N\Big|\le CN^{-m}|u|_{H^{m,N}(I)}\|\phi\|_{L^{2}(I)}, \qquad(3.5)$$

where

$$|u|_{H^{m,N}(I)}=\Big(\sum_{j=\min(m,N+1)}^{m}\|u^{(j)}\|^{2}_{L^{2}(I)}\Big)^{\frac12},\qquad (u,\phi)_N=\sum_{j=0}^{N}\omega_j\,u(x_j)\phi(x_j). \qquad(3.6)$$

Lemma 3.2 [27]. For $u\in H^{m,N}_{\omega^{\alpha,\beta}}(I)$, denote by $I_N^{\alpha,\beta}u$ its interpolation polynomial at the Jacobi-Gauss points $\{x_i\}_{i=0}^{N}$, namely

$$I_N^{\alpha,\beta}u=\sum_{i=0}^{N}u(x_i)F_i(x). \qquad(3.7)$$

Then the following estimates hold:

$$\|u-I_N^{\alpha,\beta}u\|_{L^{2}_{\omega^{\alpha,\beta}}(I)}\le CN^{-m}|u|_{H^{m,N}_{\omega^{\alpha,\beta}}(I)},$$
$$\|u-I_N^{\alpha,\beta}u\|_{L^{\infty}(I)}\le\begin{cases}CN^{\frac12-m}|u|_{H^{m,N}_{\omega^{c}}(I)}, & -1\le\alpha,\beta<-\frac12,\\ CN^{1+\bar\gamma-m}\log N\,|u|_{H^{m,N}_{\omega^{c}}(I)}, & \text{otherwise},\end{cases} \qquad(3.8)$$

where $F_i(x)$, $i=0,1,\ldots,N$, are the Lagrange interpolation basis functions associated with the Jacobi collocation points $\{x_i\}_{i=0}^{N}$, $\bar\gamma=\max(\alpha,\beta)$, and $\omega^{c}=\omega^{-\frac12,-\frac12}$ denotes the Chebyshev weight function.

For $u\in H^{m,N}(I)$, let $I_Nu$ denote its interpolation polynomial at a set of Legendre-Gauss points $\{x_i\}_{i=0}^{N}$, namely

$$I_Nu=\sum_{i=0}^{N}u(x_i)F_i(x). \qquad(3.9)$$

Then the following estimates hold [20]:

$$\|u-I_Nu\|_{L^{2}(I)}\le CN^{-m}|u|_{H^{m,N}(I)},\qquad \|u-I_Nu\|_{L^{\infty}(I)}\le CN^{\frac34-m}|u|_{H^{m,N}(I)}. \qquad(3.10)$$
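The spectral decay predicted by (3.10) is easy to observe numerically. The following hedged sketch (assuming NumPy; the test function is arbitrary and not from the paper) interpolates a smooth function at Legendre-Gauss points and prints the maximum error on a fine grid:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def lagrange_eval(x_nodes, f_nodes, x):
    """Evaluate the Lagrange interpolant of (x_nodes, f_nodes) at the points x."""
    P = np.zeros_like(x)
    for i, xi in enumerate(x_nodes):
        Li = np.ones_like(x)
        for j, xj in enumerate(x_nodes):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        P += f_nodes[i] * Li
    return P

f = lambda x: np.exp(x) * np.sin(3 * x)      # smooth test function
xx = np.linspace(-1, 1, 2001)                # fine grid for the max-norm error

for N in (4, 8, 12, 16):
    nodes, _ = leggauss(N + 1)               # Legendre-Gauss points, as in (3.9)
    err = np.max(np.abs(f(xx) - lagrange_eval(nodes, f(nodes), xx)))
    print(N, err)                            # errors decay roughly exponentially in N
```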

Lemma 3.3 [26]. For every bounded function $u$, there exists a constant $C$, independent of $u$, such that

$$\sup_{N}\Big\|\sum_{i=0}^{N}u(x_i)F_i(x)\Big\|_{L^{2}_{\omega^{\alpha,\beta}}(I)}\le C\max_{x\in[-1,1]}|u(x)|, \qquad(3.11)$$

where $F_i(x)$ are the Lagrange interpolation basis functions and $\{x_i\}_{i=0}^{N}$ are the Jacobi collocation points.

From Mastroianni and Occorsto [28], we obtain the following result on the Lebesgue constant for the Lagrange interpolation polynomial associated with the zeros of the Jacobi polynomial.

Lemma 3.4 [29]. Let $\{F_j(x)\}_{j=0}^{N}$ be the Lagrange interpolation polynomials at the Gauss nodes of the Jacobi polynomial. Then

$$\|I_N^{\alpha,\beta}\|_{L^{\infty}(I)}\le\max_{x\in[-1,1]}\sum_{j=0}^{N}|F_j(x)| \qquad(3.12)$$
$$=\begin{cases}O(\log N), & -1<\alpha,\beta\le-\frac12,\\ O\big(N^{\bar\gamma+\frac12}\big), & \bar\gamma=\max(\alpha,\beta),\ \text{otherwise}.\end{cases} \qquad(3.13)$$
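The growth rates in (3.12)-(3.13) can likewise be checked numerically by evaluating the Lebesgue function $\max_x\sum_j|F_j(x)|$ at Jacobi-Gauss nodes. The sketch below is an illustration under stated assumptions (SciPy's `roots_jacobi` with $\alpha=\beta=-1/2$, i.e. the Chebyshev case of the first branch), not code from the paper:

```python
import numpy as np
from scipy.special import roots_jacobi

def lebesgue_constant(nodes, xx):
    """max over xx of sum_j |F_j(x)| for the Lagrange basis at the given nodes."""
    total = np.zeros_like(xx)
    for i, xi in enumerate(nodes):
        Li = np.ones_like(xx)
        for j, xj in enumerate(nodes):
            if j != i:
                Li *= (xx - xj) / (xi - xj)
        total += np.abs(Li)
    return total.max()

xx = np.linspace(-1, 1, 4001)
for N in (8, 16, 32, 64):
    nodes, _ = roots_jacobi(N + 1, -0.5, -0.5)   # alpha = beta = -1/2
    print(N, lebesgue_constant(nodes, xx))       # grows slowly, consistent with O(log N)
```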

The following lemma is a generalization of the Gronwall lemma to weakly singular kernels; it is very important in partial differential equations, can be found, for example, in [30], and strongly influences the main results of our research.

Lemma 3.5 [31]. Suppose $u$ and $v$ are non-negative, locally integrable functions defined on the interval $[-1,1]$, $L\ge0$ and $0<\mu<1$. If, in addition,

$$u(x)\le v(x)+L\int_{-1}^{x}(x-\tau)^{-\mu}u(\tau)\,d\tau,$$

then there exists a constant $C=C(\mu)$ such that

$$u(x)\le v(x)+CL\int_{-1}^{x}(x-\tau)^{-\mu}v(\tau)\,d\tau,\quad -1\le x<1.$$

If $E(x)$ satisfies

$$E(x)\le L\int_{-1}^{x}E(s)\,ds+J(x),\quad -1<x<1,$$

where $E(x)$ is a non-negative integrable function and $J(x)$ is an integrable function, then

$$\|E\|_{L^{\infty}(-1,1)}\le C\|J\|_{L^{\infty}(-1,1)},\qquad \|E\|_{L^{p}_{\omega^{\alpha,\beta}}(-1,1)}\le C\|J\|_{L^{p}_{\omega^{\alpha,\beta}}(-1,1)}. \qquad(3.14)$$

Lemma 3.6 [32]. For every $\nu\in C^{r,\kappa}[-1,1]$, with $r$ a non-negative integer and $\kappa\in(0,1)$, there exist a polynomial $T_N\nu\in\mathcal{P}_N$ and a constant $C_{r,\kappa}>0$ such that

$$\|\nu-T_N\nu\|_{L^{\infty}(I)}\le C_{r,\kappa}N^{-(r+\kappa)}\|\nu\|_{r,\kappa}, \qquad(3.15)$$

where $\|\cdot\|_{r,\kappa}$ is the standard norm of $C^{r,\kappa}[-1,1]$ and $T_N$ is a linear operator from $C^{r,\kappa}[-1,1]$ to $\mathcal{P}_N$.

Lemma 3.7 [32]. Let $\kappa\in(0,1)$ and define

$$(\mathcal{M}\nu)(x)=\int_{-1}^{x}(x-\tau)^{-\mu}K(x,\tau)\nu(\tau)\,d\tau.$$

Then, for every $\nu\in C[-1,1]$ and $0<\kappa<1-\mu$, there exists a positive constant $C$ such that

$$\frac{|\mathcal{M}\nu(x')-\mathcal{M}\nu(x'')|}{|x'-x''|^{\kappa}}\le C\max_{x\in[-1,1]}|\nu(x)|$$

for any $x',x''\in[-1,1]$ with $x'\ne x''$. This implies that

$$\|\mathcal{M}\nu\|_{0,\kappa}\le C\max_{x\in[-1,1]}|\nu(x)|,\quad 0<\kappa<1-\mu.$$

    4. Convergence analysis of Jacobi spectral collocation method

The main purpose of this section is to carry out a rigorous convergence analysis of the numerical scheme presented in the previous section. The convergence result shows that the approximate solution obtained by the Jacobi collocation method in (2.15) approximates the exact solution exponentially. First, we derive the error estimate in the $L^{\infty}$-norm.

Here, we assume that the kernel function $K(x,s,U(s))$ has the following two properties, which are required for the proof of the convergence analysis:

(1) the Lipschitz property, in other words,

$$|K(x,s,\hat U(s))-K(x,s,U(s))|\le L_K|\hat U(s)-U(s)|,\quad \forall\,\hat U,U\in[-1,1], \qquad(4.1)$$

where $U(s)=[u(s),v(s)]^{T}$ and $\hat U(s)=[\hat u(s),\hat v(s)]^{T}$;

(2) $K(x,s,0)=0_{2\times1}$.

Since the one-dimensional and two-dimensional cases are treated by the same method and give the same convergence results in space, to keep the proof concise we present the rigorous convergence analysis only for the one-dimensional Schrödinger equation.

Theorem 1. Suppose $U(x,t)$ is the exact solution of Eq. (2.7) and $U^{M}(x,t)=\sum_{j=0}^{M}U_j(x)F_j(t)$ is the discrete solution of Eq. (2.12) in the time direction, and set $\bar\gamma=\max\{\alpha,\beta\}$. Then, when $M$ is large enough,

$$\|U(x,t)-U^{M}(x,t)\|_{L^{\infty}(I)}\le\begin{cases}CM^{-m}\log M\,K^{*}+CM^{\frac12-m}U^{*}, & -1<\alpha,\beta\le-\frac12,\\ CM^{\bar\gamma+\frac12-m}K^{*}+CM^{1+\bar\gamma-m}\log M\,U^{*}, & -\frac12\le\bar\gamma<\mu-\frac12,\end{cases}$$

$$\|U(x,t)-U^{M}(x,t)\|_{L^{2}(I)}\le\begin{cases}CM^{-m}(K^{*}+U^{*})+CM^{-m-\kappa}\log M\,K^{*}+CM^{\frac12-m-\kappa}U^{*}, & -1<\alpha,\beta\le-\frac12,\\ CM^{-m}(K^{*}+U^{*})+CM^{\bar\gamma+\frac12-m-\kappa}K^{*}+CM^{1+\bar\gamma-m-\kappa}\log M\,U^{*}, & -\frac12\le\bar\gamma<\mu-\frac12,\end{cases} \qquad(4.2)$$

where

$$K^{*}=\max_{x\in I}\big|K\big(x,s(\theta),U^{M}(x,s(\theta))\big)\big|_{H^{m,M}_{\omega^{c}}(I)},\qquad U^{*}=|U(x,t)|_{H^{m,M}_{\omega^{c}}(I)}.$$

Proof. According to Lemma 3.1, define

$$\big(K(x,s(\theta),U^{M}(x,s(\theta)))\big)_{N,s}:=\sum_{k=0}^{N}K\big(x,s(\theta_k),U^{M}(x,s(\theta_k))\big)\,\omega_k^{\mu-1,0}; \qquad(4.3)$$


then the numerical scheme (2.12) can be written as

$$U_j(x)-\Big(\frac{1+t_j}{2}\Big)^{\mu-1}\big(K(x,s(\theta),U^{M}(x,s(\theta)))\big)_{N,s}=\delta G(x,t_j)+F(x), \qquad(4.4)$$

which gives

$$U_j(x)-\Big(\frac{1+t_j}{2}\Big)^{\mu-1}\int_{-1}^{1}(1-\theta)^{\mu-1}K\big(x,s(\theta),U^{M}(x,s(\theta))\big)\,d\theta=\delta G(x,t_j)+F(x)+J_1(x), \qquad(4.5)$$

where

$$J_1(x)=\Big(\frac{1+t_j}{2}\Big)^{\mu-1}\big(K(x,s(\theta),U^{M}(x,s(\theta)))\big)_{N,s}-\Big(\frac{1+t_j}{2}\Big)^{\mu-1}\int_{-1}^{1}(1-\theta)^{\mu-1}K\big(x,s(\theta),U^{M}(x,s(\theta))\big)\,d\theta; \qquad(4.6)$$

then we have

$$|J_1(x)|\le CN^{-m}\Big(\frac{1+t_j}{2}\Big)^{\mu-1}\big|K\big(x,s(\theta),U^{M}(x,s(\theta))\big)\big|_{H^{m,M}_{\omega^{c}}(I)}\le CN^{-m}\big|K\big(x,s(\theta),U^{M}(x,s(\theta))\big)\big|_{H^{m,M}_{\omega^{c}}(I)}. \qquad(4.7)$$

On the other hand, (4.5) can be rewritten as follows:

$$U_j(x)-\int_{-1}^{t_j}(t_j-s)^{\mu-1}K\big(x,s,U^{M}(x,s)\big)\,ds=\delta G(x,t_j)+F(x),\quad 0\le j\le M. \qquad(4.8)$$

Multiplying both sides of (4.8) by $F_j(t)$ and summing from $0$ to $M$ yields

$$U^{M}(x,t)-I_M\Big(\int_{-1}^{t}(t-s)^{\mu-1}K\big(x,s,U^{M}(x,s)\big)\,ds\Big)=\delta I_MG(x,t)+I_MF(x)+I_M(J_1), \qquad(4.9)$$

which can be restated in the following form:

$$\delta I_MG(x,t)+F(x)+I_M(J_1)=U^{M}(x,t)-I_M\Big(\int_{-1}^{t}(t-s)^{\mu-1}K\big(x,s,U(x,s)\big)\,ds\Big)-I_M\Big(\int_{-1}^{t}(t-s)^{\mu-1}\big[K\big(x,s,U^{M}(x,s)\big)-K\big(x,s,U(x,s)\big)\big]\,ds\Big), \qquad(4.10)$$

where the interpolation operator $I_M$ is defined by (3.7). It follows from (4.10) and (2.7) that

$$\delta I_MG(x,t)+F(x)+I_M(J_1)=U^{M}(x,t)-I_M\big(U(x,t)-\delta G(x,t)-F(x)\big)-I_M\Big(\int_{-1}^{t}(t-s)^{\mu-1}\big[K\big(x,s,U^{M}(x,s)\big)-K\big(x,s,U(x,s)\big)\big]\,ds\Big). \qquad(4.11)$$

Let $e(x,t)=U^{M}(x,t)-U(x,t)$, $x\in[-1,1]$, denote the error function. Then we have

$$I_M(J_1)=e(x,t)+(U-I_MU)(x,t)-I_M\Big(\int_{-1}^{t}(t-s)^{\mu-1}\big[K\big(x,s,U^{M}(x,s)\big)-K\big(x,s,U(x,s)\big)\big]\,ds\Big), \qquad(4.12)$$

and consequently

$$e(x,t)=\int_{-1}^{t}(t-s)^{\mu-1}\big[K\big(x,s,U^{M}(x,s)\big)-K\big(x,s,U(x,s)\big)\big]\,ds+I_M(J_1)+J_2(x,t)+J_3(x,t), \qquad(4.13)$$

where

$$J_2(x,t)=I_MU(x,t)-U(x,t),$$
$$J_3(x,t)=-\int_{-1}^{t}(t-s)^{\mu-1}\big[K\big(x,s,U^{M}(x,s)\big)-K\big(x,s,U(x,s)\big)\big]\,ds+I_M\Big(\int_{-1}^{t}(t-s)^{\mu-1}\big[K\big(x,s,U^{M}(x,s)\big)-K\big(x,s,U(x,s)\big)\big]\,ds\Big).$$

According to the Lipschitz property of the kernel $K$, we have

$$|e(x,t)|\le\Big|\int_{-1}^{t}(t-s)^{\mu-1}\big[K\big(x,s,U^{M}(x,s)\big)-K\big(x,s,U(x,s)\big)\big]\,ds\Big|+|I_M(J_1)+J_2(x,t)+J_3(x,t)|$$
$$\le L_K\int_{-1}^{t}(t-s)^{\mu-1}\big|U^{M}(x,s)-U(x,s)\big|\,ds+|I_M(J_1)+J_2(x,t)+J_3(x,t)|=L_K\int_{-1}^{t}(t-s)^{\mu-1}|e(x,s)|\,ds+|I_M(J_1)+J_2(x,t)+J_3(x,t)|. \qquad(4.14)$$


Using the Gronwall inequality yields

$$\|e(x,\cdot)\|_{L^{2}(I)}\le C\big(\|I_M(J_1)\|_{L^{2}(I)}+\|J_2(x,\cdot)\|_{L^{2}(I)}+\|J_3(x,\cdot)\|_{L^{2}(I)}\big),\qquad \|e(x,\cdot)\|_{L^{\infty}(I)}\le C\big(\|I_M(J_1)\|_{L^{\infty}(I)}+\|J_2(x,\cdot)\|_{L^{\infty}(I)}+\|J_3(x,\cdot)\|_{L^{\infty}(I)}\big). \qquad(4.15)$$

From (4.7) and Lemmas 3.3 and 3.4, we have

$$\|I_M(J_1)\|_{L^{2}(I)}\le C\max_{t\in I}|J_1|\le CM^{-m}K^{*},$$
$$\|I_M(J_1)\|_{L^{\infty}(I)}\le CM^{-m}\max_{x\in I}\big|K\big(x,s(\theta),U^{M}(x,s(\theta))\big)\big|_{H^{m,M}_{\omega^{c}}(I)}\times\max_{x\in I}\sum_{j=0}^{M}|F_j(t)|\le CM^{-m}K^{*}\times\begin{cases}O(\log M), & -1<\alpha,\beta\le-\frac12,\\ O\big(M^{\bar\gamma+\frac12}\big), & \text{otherwise},\end{cases}$$
$$\le\begin{cases}CM^{-m}\log M\,K^{*}, & -1<\alpha,\beta\le-\frac12,\\ CM^{\bar\gamma+\frac12-m}K^{*}, & \text{otherwise}.\end{cases} \qquad(4.16)$$

Using the $L^{2}$- and $L^{\infty}$-error bounds for the interpolation polynomials gives

$$\|J_2\|_{L^{2}(I)}=\|I_MU(x,t)-U(x,t)\|_{L^{2}(I)}\le CM^{-m}|U(x,t)|_{H^{m,M}_{\omega^{c}}(I)},$$
$$\|J_2\|_{L^{\infty}(I)}=\|I_MU(x,t)-U(x,t)\|_{L^{\infty}(I)}\le\begin{cases}CM^{\frac12-m}|U(x,t)|_{H^{m,M}_{\omega^{c}}(I)}, & -1<\alpha,\beta\le-\frac12,\\ CM^{1+\bar\gamma-m}\log M\,|U(x,t)|_{H^{m,M}_{\omega^{c}}(I)}, & \text{otherwise}.\end{cases} \qquad(4.17)$$

For $J_3$, by Lemmas 3.6 and 3.7 we have

$$\|J_3\|_{L^{2}(I)}=\Big\|(I_M-I)\int_{-1}^{t_j}(t_j-s)^{\mu-1}\big[K(x,s,U^{M}(x,s))-K(x,s,U(x,s))\big]\,ds\Big\|_{L^{2}(I)}\le\Big\|(I_M-I)\int_{-1}^{t_j}(t_j-s)^{\mu-1}\big(U^{M}(x,s)-U(x,s)\big)\,ds\Big\|_{L^{2}(I)}$$
$$\le\Big\|(I_M-I)\int_{-1}^{t_j}(t_j-s)^{\mu-1}e(x,s)\,ds\Big\|_{L^{2}(I)}=\|(I_M-I)\mathcal{M}e\|_{L^{2}(I)}=\|(I_M-I)(\mathcal{M}e-T_M\mathcal{M}e)\|_{L^{2}(I)}$$
$$\le\|I_M(\mathcal{M}e-T_M\mathcal{M}e)\|_{L^{2}(I)}+\|\mathcal{M}e-T_M\mathcal{M}e\|_{L^{2}(I)}\le C\|\mathcal{M}e-T_M\mathcal{M}e\|_{L^{\infty}(I)}\le CM^{-\kappa}\|\mathcal{M}e\|_{0,\kappa}\le CM^{-\kappa}\|e\|_{L^{\infty}(I)},$$

$$\|J_3\|_{L^{\infty}(I)}=\|(I_M-I)\mathcal{M}e\|_{L^{\infty}(I)}=\|(I_M-I)(\mathcal{M}e-T_M\mathcal{M}e)\|_{L^{\infty}(I)}\le\big(1+\|I_M\|_{L^{\infty}(I)}\big)CM^{-\kappa}\|\mathcal{M}e\|_{0,\kappa},\quad \kappa\in(0,\mu),$$
$$\le\begin{cases}CM^{-\kappa}\log M\,\|e\|_{L^{\infty}(I)}, & -1<\alpha,\beta\le-\frac12,\\ CM^{\frac12+\bar\gamma-\kappa}\|e\|_{L^{\infty}(I)}, & -\frac12\le\bar\gamma<\mu-\frac12.\end{cases} \qquad(4.18)$$

Combining all of the above estimates yields

$$\|e\|_{L^{2}(I)}\le CM^{\frac12-m}K^{*}+CM^{-m}|U(x,t)|_{H^{m,M}_{\omega^{c}}(I)}+CM^{-\kappa}\|e\|_{L^{\infty}(I)},$$

$$\|e\|_{L^{\infty}(I)}\le\begin{cases}CM^{-m}\log M\,K^{*}+CM^{\frac12-m}|U(x,t)|_{H^{m,M}_{\omega^{c}}(I)}+CM^{-\kappa}\log M\,\|e\|_{L^{\infty}(I)}, & -1<\alpha,\beta\le-\frac12,\\ CM^{\bar\gamma+\frac12-m}K^{*}+CM^{1+\bar\gamma-m}\log M\,|U(x,t)|_{H^{m,M}_{\omega^{c}}(I)}+CM^{\frac12+\bar\gamma-\kappa}\|e\|_{L^{\infty}(I)}, & -\frac12\le\bar\gamma<\mu-\frac12.\end{cases} \qquad(4.19)$$

For $M$ large enough, the terms involving $\|e\|_{L^{\infty}(I)}$ on the right-hand side can be absorbed, which gives the desired estimates (4.2). $\square$

Theorem 2. Suppose $U^{M}(x,t)=\sum_{j=0}^{M}U_j(x)F_j(t)$ is the discrete solution of Eq. (2.12) in the time direction, and $U^{M}_{N}(x,t)=\sum_{i=0}^{N}\sum_{j=0}^{M}u^{j}_{i}H_i(x)F_j(t)$ is the fully discrete solution of Eq. (2.15) in space and time. Then, for $N$ large enough, we obtain the following error estimates:

$$\|U^{M}-U^{M}_{N}\|_{L^{\infty}}\le\begin{cases}C\log M\,N^{\frac34-m}U^{*}, & -1\le\alpha,\beta<-\frac12,\\ CM^{\frac12+\bar\gamma}N^{\frac34-m}U^{*}, & \text{otherwise},\end{cases}\qquad \|U^{M}-U^{M}_{N}\|_{L^{2}}\le CN^{-m}U^{*}. \qquad(4.20)$$

Proof. Subtracting (2.15) from (2.13) and using the Lipschitz condition, we obtain

$$U_j(x_i)-U^{j}_{i}=\frac{1}{\Gamma(\mu)}\Big(\frac{T}{2}\Big)^{\alpha}\int_{-1}^{t_j}(t_j-s)^{\mu-1}\Big(K\big(x_i,s,U^{M}(x_i,s)\big)-K\big(x_i,s,U^{M}_{N}(x_i,s)\big)\Big)\,ds$$
$$\le L_K\int_{-1}^{t_j}(t_j-s)^{\mu-1}\big|U^{M}(x_i,s)-U^{M}_{N}(x_i,s)\big|\,ds\le L_K\int_{-1}^{t_j}(t_j-s)^{\mu-1}\Big|\sum_{m=0}^{M}\Big(U_m(x_i)-\sum_{n=0}^{N}U^{m}_{n}H_n(x_i)\Big)F_m(s)\Big|\,ds$$
$$=L_K\Big|\sum_{m=0}^{M}\Big(U_m(x_i)-\sum_{n=0}^{N}U^{m}_{n}H_n(x_i)\Big)\Big|\int_{-1}^{t_j}(t_j-s)^{\mu-1}F_m(s)\,ds=L_K\sum_{m=0}^{M}C^{j}_{m}\Big(U_m(x_i)-\sum_{n=0}^{N}U^{m}_{n}H_n(x_i)\Big), \qquad(4.21)$$

where

$$C^{j}_{m}=\int_{-1}^{t_j}(t_j-s)^{\mu-1}F_m(s)\,ds.$$

Multiplying both sides of (4.21) by $H_i(x)$ and summing over $i$ from $0$ to $N$, we get

$$\sum_{i=0}^{N}U_j(x_i)H_i(x)-\sum_{i=0}^{N}U^{j}_{i}H_i(x)\le L_K\sum_{i=0}^{N}\Big[\sum_{m=0}^{M}C^{j}_{m}\Big(U_m(x_i)-\sum_{n=0}^{N}U^{m}_{n}H_n(x_i)\Big)\Big]H_i(x), \qquad(4.22)$$

so that

$$I_NU_j(x)-\sum_{i=0}^{N}U^{j}_{i}H_i(x)\le L_K\sum_{m=0}^{M}C^{j}_{m}\Big[I_NU_m(x)-I_N\Big(\sum_{n=0}^{N}U^{m}_{n}H_n(x)\Big)\Big], \qquad(4.23)$$

where

$$I_NU_m(x)=\sum_{i=0}^{N}U_m(x_i)H_i(x),\qquad I_N\Big(\sum_{n=0}^{N}U^{m}_{n}H_n(x)\Big)=\sum_{i=0}^{N}\sum_{n=0}^{N}U^{m}_{n}H_n(x_i)H_i(x). \qquad(4.24)$$

Let $e(x,t_j)=U_j(x)-\sum_{i=0}^{N}U^{j}_{i}H_i(x)$; then we can rewrite (4.23) as

$$\max_{t\in[-1,1]}|e(x,t)|\le C|J_4|, \qquad(4.25)$$

where

$$J_4=I_NU_j(x)-U_j(x). \qquad(4.26)$$

According to Lemma 3.2 and the estimates (3.10),

$$\|J_4\|_{L^{2}}=\|I_NU_j(x)-U_j(x)\|_{L^{2}}\le CN^{-m}|u|_{H^{m,N}(I)},\qquad \|J_4\|_{L^{\infty}}\le CN^{\frac34-m}|u|_{H^{m,N}(I)}. \qquad(4.27)$$

On the other hand,

$$U^{M}-U^{M}_{N}=\sum_{j=0}^{M}\Big(U_j(x)-\sum_{i=0}^{N}U^{j}_{i}H_i(x)\Big)F_j(t)=\sum_{j=0}^{M}e(x,t_j)F_j(t)=I_Me(x,t). \qquad(4.28)$$


Fig. 1. The numerical solution and exact solution of (5.1), μ = 3/4.

Using Lemma 3.4, we have

$$\|U^{M}-U^{M}_{N}\|_{L^{\infty}}\le\begin{cases}C\log M\max_{t\in[-1,1]}|e(x,t)|, & -1\le\alpha,\beta<-\frac12,\\ CM^{1+\bar\gamma}\max_{t\in[-1,1]}|e(x,t)|, & \text{otherwise}.\end{cases} \qquad(4.29)$$

Using Lemma 3.3, we have

$$\|U^{M}-U^{M}_{N}\|_{L^{2}}\le C\log M\max_{t\in[-1,1]}|e(x,t)|. \qquad(4.30)$$

Thus, combining (4.25), (4.27) and (4.29) with (4.30), we obtain the results stated in (4.20). $\square$

Theorem 3. Suppose $U(x,t)$ is the exact solution of Eq. (2.7) and $U^{M}_{N}(x,t)=\sum_{i=0}^{N}\sum_{j=0}^{M}u^{j}_{i}H_i(x)F_j(t)$ is the fully discrete solution of (2.15), satisfying the initial and boundary conditions. If $U(x,t)\in H^{n,N}_{\omega^{\alpha,\beta}}$, then for $M$ and $N$ large enough the following error estimates hold:

$$\|U(x,t)-U^{M}_{N}(x,t)\|_{L^{\infty}(I)}\le\begin{cases}CM^{-m}\log M\,K^{*}+CM^{\frac12-m}U^{*}+C\log M\,N^{\frac34-m}U^{*}, & -1<\alpha,\beta\le-\frac12,\\ CM^{\bar\gamma+\frac12-m}K^{*}+CM^{1+\bar\gamma-m}\log M\,U^{*}+CM^{\frac12+\bar\gamma}N^{\frac34-m}U^{*}, & -\frac12\le\bar\gamma<\mu-\frac12,\end{cases}$$

$$\|U(x,t)-U^{M}_{N}(x,t)\|_{L^{2}(I)}\le\begin{cases}CM^{-m}(K^{*}+U^{*})+CM^{-m-\kappa}\log M\,K^{*}+CM^{\frac12-m-\kappa}U^{*}+CN^{-m}U^{*}, & -1<\alpha,\beta\le-\frac12,\\ CM^{-m}(K^{*}+U^{*})+CM^{\bar\gamma+\frac12-m-\kappa}K^{*}+CM^{1+\bar\gamma-m-\kappa}\log M\,U^{*}+CN^{-m}U^{*}, & -\frac12\le\bar\gamma<\mu-\frac12.\end{cases} \qquad(4.31)$$


Fig. 2. The numerical solution and exact solution of (5.2), μ = 7/8.

Proof. Using the triangle inequality,

$$|U(x,t)-U^{M}_{N}(x,t)|\le|U(x,t)-U^{M}(x,t)|+|U^{M}(x,t)-U^{M}_{N}(x,t)|, \qquad(4.32)$$

and combining Theorems 1 and 2 leads to the estimate (4.31). $\square$

    5. Algorithm implementation and numerical results

    5.1. Example 1. 1D linear equation.

The domain is $(x,t)\in(-1,1)\times(0,2)$:

$$i\,\frac{\partial^{3/4}}{\partial t^{3/4}}\psi(x,t)+\frac{\partial^{2}}{\partial x^{2}}\psi(x,t)=f_1(x,t),\quad (x,t)\in(-1,1)\times(0,2),$$
$$\psi(x,0)=0,\quad x\in(-1,1),\qquad \psi(-1,t)=\psi(1,t)=-t^{2},\quad t\in(0,2), \qquad(5.1)$$

where

$$f_1=\frac{16\sqrt2}{5\pi}\,\Gamma\Big(\frac34\Big)\,t^{5/4}\,(i\cos\pi x-\sin\pi x)+t^{2}\,(-\pi^{2}\cos\pi x-i\pi^{2}\sin\pi x).$$

The exact solution is

$$\psi(x,t)=t^{2}(\cos\pi x+i\sin\pi x).$$

In the space direction $x$, we use $\mathcal{P}_{M+2}$ Lagrange-Gauss-Lobatto orthogonal polynomials, with nodes $x_0=-1$ and $x_{M+1}=1$. In the time direction $t$, we use $\mathcal{P}_N$ Jacobi orthogonal polynomials with the index $(\mu-1,0)=(-1/4,0)$, i.e., $(1+s)^{-1/4}(1-s)^{0}$, where $s=t-1$. The number of collocation points is $MN$, instead of $(M+2)N$, as the boundary values are given.


Fig. 3. The solution and the error for (5.3), μ = 7/8.

Table 1. Example 1: the $L^{\infty}$ error and the $L^{2}$ error at $t=2$ for (5.1).

M, N     ‖ψ − ψ_h‖_{L^∞}    ‖ψ − ψ_h‖_{L^2}
4, 4     2.2628E-001        6.3147E-002
6, 6     9.1637E-003        1.2706E-003
8, 8     2.5137E-004        2.1386E-005
10, 10   5.4994E-006        1.8660E-006
12, 12   8.5810E-007        8.3305E-007

The numerical solution and the exact solution at time $t=2$ are plotted in Fig. 1 for $M=4$ and $N=4$. We list the numerical $L^{\infty}$ error and the $L^{2}$ error at time $t=2$ in Table 1; the method converges exponentially.
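As a hedged consistency check of the forcing term above (not part of the paper), the coefficient $\frac{16\sqrt2}{5\pi}\Gamma(3/4)$ should coincide with $\Gamma(3)/\Gamma(3-3/4)=2/\Gamma(9/4)$, the factor produced by the Caputo derivative of order $3/4$ applied to $t^{2}$:

```python
import numpy as np
from scipy.special import gamma

# Caputo derivative of t^2 of order 3/4: D^(3/4) t^2 = Gamma(3)/Gamma(3 - 3/4) * t^(5/4);
# the paper writes the same coefficient as 16*sqrt(2)*Gamma(3/4)/(5*pi).
coef_paper = 16.0 * np.sqrt(2.0) * gamma(0.75) / (5.0 * np.pi)
coef_caputo = gamma(3.0) / gamma(3.0 - 0.75)
print(coef_paper, coef_caputo, np.isclose(coef_paper, coef_caputo))   # True
```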

    5.2. Example 2. 1D nonlinear equation.

Consider the 1D nonlinear equation

$$i\,\frac{\partial^{7/8}}{\partial t^{7/8}}\psi+\frac{\partial^{2}}{\partial x^{2}}\psi+|\psi|^{2}\psi=f_2,\quad (x,t)\in(-1,1)\times(0,2),$$
$$\psi(x,0)=0,\quad x\in(-1,1),\qquad \psi(-1,t)=\psi(1,t)=0,\quad t\in(0,2), \qquad(5.2)$$

where $f_2$ is defined such that the exact solution is

$$\psi(x,t)=t^{15/8}\Big(\frac18\sin2\pi x+\frac{i}{12}\sin\pi x\Big).$$


Fig. 4. The solution and the error for (5.3), μ = 7/8.

Table 2. Example 2: the $L^{\infty}$ error and the $L^{2}$ error at $t=2$ for (5.2).

M, N     ‖ψ − ψ_h‖_{L^∞}    ‖ψ − ψ_h‖_{L^2}
4, 4     8.1372E-001        5.6689E-001
6, 6     7.3603E-002        4.1019E-002
8, 8     6.6883E-003        2.7107E-003
10, 10   4.7876E-004        1.4701E-004
12, 12   2.7144E-005        1.2426E-005
14, 14   5.0659E-006        5.1181E-006

In the space direction $x$, we use $\mathcal{P}_{M+2}$ Lagrange-Gauss-Lobatto orthogonal polynomials. In the time direction $t$, we use $\mathcal{P}_N$ Jacobi orthogonal polynomials with the weight index $(\mu-1,0)=(-1/8,0)$, i.e., $(1+s)^{-1/8}(1-s)^{0}$, $s=t-1$. The number of collocation points is $MN$. The numerical solution and the exact solution at time $t=2$ are plotted in Fig. 2 for $M=6$ and $N=6$. We list the numerical $L^{\infty}$ error and the $L^{2}$ error at time $t=2$ in Table 2. On the first few refinement levels, the numerical solution converges exponentially.

    5.3. Example 3. 2D nonlinear equation.

We solve the following time-fractional Schrödinger equation:

$$i\,\frac{\partial^{7/8}\psi}{\partial t^{7/8}}+\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{\partial^{2}\psi}{\partial y^{2}}+|\psi|^{2}\psi=f_3,\quad (x,y,t)\in(-1,1)^{2}\times(0,2),$$
$$\psi(x,y,0)=0,\quad (x,y)\in(-1,1)^{2},\qquad \psi(\pm1,\pm1,t)=0,\quad t\in(0,2), \qquad(5.3)$$


Table 3. Example 3: the $L^{\infty}$ error and the $L^{2}$ error at $t=2$ for (5.3).

M, N     ‖ψ − ψ_h‖_{L^∞}    ‖ψ − ψ_h‖_{L^2}
4, 4     6.2351E-001        2.8044E-001
6, 6     7.1289E-002        2.6120E-002
8, 8     7.6262E-003        1.8877E-003
10, 10   1.7519E-003        7.8029E-004

where

$$f_3=\Gamma\Big(\frac78\Big)\,t\Big(-\frac{35}{256}\sin(\pi x)\sin(2\pi y)+i\,\frac{105}{512}\sin(2\pi x)\sin(\pi y)\Big)+t^{15/8}\Big(-\frac58\sin(2\pi x)\sin(\pi y)-i\,\frac{5}{12}\sin(\pi x)\sin(2\pi y)\Big)$$
$$+\,t^{45/8}\Big(\frac{1}{64}\sin^{2}(2\pi x)\sin^{2}(\pi y)+\frac{1}{144}\sin^{2}(\pi x)\sin^{2}(2\pi y)\Big)\cdot\Big(\frac18\sin(2\pi x)\sin(\pi y)+i\,\frac{1}{12}\sin(\pi x)\sin(2\pi y)\Big).$$

The exact solution is

$$\psi(x,y,t)=t^{15/8}\Big(\frac18\sin(2\pi x)\sin(\pi y)+i\,\frac{1}{12}\sin(\pi x)\sin(2\pi y)\Big).$$

In the space direction, we use the 2D tensor product of $\mathcal{P}_{M+2}$ Lagrange-Gauss-Lobatto orthogonal polynomials. In the time direction $t$, we use $\mathcal{P}_N$ Jacobi orthogonal polynomials with the weight index $(\mu-1,0)=(-1/8,0)$, i.e., $(1+s)^{-1/8}(1-s)^{0}$, $s=t-1$. The number of collocation points is $M^{2}N$. The exact solution and the error of the numerical solution at time $t=2$ are plotted in Figs. 3 and 4 for $M=10$ and $N=10$. We list the numerical $L^{\infty}$ error and the $L^{2}$ error at time $t=2$ in Table 3. On the first few refinement levels, the numerical solution converges exponentially.
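A quick hedged check of the exact solution above (not from the paper): evaluating $\psi$ on a tensor grid confirms that it vanishes on the boundary of $(-1,1)^{2}$, consistent with the homogeneous boundary data in (5.3):

```python
import numpy as np

def psi_exact(x, y, t):
    """Exact solution of Example 3 as stated above, complex-valued."""
    return t**(15.0 / 8.0) * ((1.0 / 8.0) * np.sin(2 * np.pi * x) * np.sin(np.pi * y)
                              + 1j * (1.0 / 12.0) * np.sin(np.pi * x) * np.sin(2 * np.pi * y))

x = np.linspace(-1, 1, 41)
y = np.linspace(-1, 1, 41)
X, Y = np.meshgrid(x, y, indexing="ij")
P = psi_exact(X, Y, 2.0)

# boundary values should vanish (homogeneous Dirichlet data in (5.3)), up to roundoff
print(np.max(np.abs(P[0, :])), np.max(np.abs(P[-1, :])),
      np.max(np.abs(P[:, 0])), np.max(np.abs(P[:, -1])))
```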

    6. Conclusions

In this paper, the space-time JSC method has been implemented to solve 1D and 2D time-fractional Schrödinger equations with appropriate initial and boundary conditions. To this end, we first transform the basic equation into the associated system of nonlinear weakly singular integro-PDEs and then apply the collocation scheme, approximating the existing integrals with highly accurate Gaussian quadrature rules; this reduces the basic equation to the corresponding system of nonlinear algebraic equations, which can be solved by robust iterative solvers such as the Newton-Raphson method. The presented method has two basic advantages with respect to recent numerical methods. First, it is a balanced technique that has spectral accuracy in both the spatial and temporal variables. Moreover, a rigorous convergence analysis is provided to support the proposed numerical idea theoretically. From the numerical experiments, one can conclude that, even with a small number of collocation points, highly accurate solutions are computed by the proposed spectral method. We plan to implement and extend this approach to other time-fractional PDEs, although some modifications will need to be applied.

    Acknowledgments

The authors would like to thank the referees for their helpful suggestions. The work was supported by NSFC Projects (11671342, 91430213, 11671157, 11771369), the Project of the Scientific Research Fund of the Hunan Provincial Science and Technology Department (2018JJ2374, 2018WK4006) and the Key Project of the Hunan Provincial Department of Education (17A210).

    References

[1] E. Schrödinger, An undulatory theory of the mechanics of atoms and molecules, Phys. Rev. 28 (1926) 1049-1070.
[2] B.F. Adda, J. Cresson, Fractional differential equations and the Schrödinger equation, Appl. Math. Comput. 161 (2005) 245-323.
[3] P. Rozmej, B. Bandrowski, On fractional Schrödinger equation, Comput. Method Sci. Tech. 16 (2010) 191-204.
[4] N. Laskin, Fractional quantum mechanics and Levy path integrals, Phys. Lett. A 268 (2000) 298-305.
[5] N.A. Khan, M. Jamil, A. Ara, Approximate solutions to time-fractional Schrödinger equation via homotopy analysis method, ISRN Math. Phys. 11 (2012).
[6] R. Garrappa, I. Moret, M. Popolizio, Solving the time-fractional Schrödinger equation by Krylov projection methods, J. Comput. Phys. 293 (2015) 115-134.
[7] A. Esen, O. Tasbozan, Numerical solution of time fractional Schrödinger equation by using quadratic B-spline finite elements, Ann. Math. Silesianae 31 (2017) 83-98.
[8] L. Wei, Y. He, X. Zhang, S. Wang, Analysis of an implicit fully discrete local discontinuous Galerkin method for the time-fractional Schrödinger equation, Finite Elem. Anal. Des. 59 (2012) 28-34.
[9] L. Wei, X. Zhang, S. Kumar, A. Yildirim, A numerical study based on an implicit fully discrete local discontinuous Galerkin method for the time-fractional coupled Schrödinger system, Comput. Math. Appl. 64 (2012) 2603-2615.
[10] N. Liu, W. Jiang, A numerical method for solving the time fractional Schrödinger equation, Adv. Comput. Math. 44 (2018) 1235-1248.
[11] D.F. Li, C.D. Wu, Z.M. Zhang, Linearized Galerkin FEMs for nonlinear time fractional parabolic problems with non-smooth solutions in time direction, J. Sci. Comput. (2019) 1-17.
[12] A. Mohebbi, M. Abbaszadeh, M. Dehghan, The use of a meshless technique based on collocation and radial basis functions for solving the time fractional nonlinear Schrödinger equation arising in quantum mechanics, Eng. Anal. Boundary Elem. 37 (2013) 475-485.
[13] M. Stynes, Too much regularity may force too much uniqueness, Fract. Calc. Appl. Anal. 19 (2016) 1554-1562.
[14] E. Shivanian, A. Jafarabadi, Error and stability analysis of numerical solution for the time fractional nonlinear Schrödinger equation on scattered data of general-shaped domains, Numer. Methods Partial Differ. Equ. 33 (2017) 1043-1069.
[15] Z. Zhang, F. Zeng, G.E. Karniadakis, Optimal error estimates of spectral Petrov-Galerkin and collocation methods for initial value problems of fractional differential equations, SIAM J. Numer. Anal. 53 (2015) 2074-2096.
[16] H. Liang, M. Stynes, Collocation methods for general Caputo two-point boundary value problems, J. Sci. Comput. 76 (2018) 390-425.
[17] Y. Yang, W.Y. Qiao, J.D. Wang, S.Y. Zhang, Spectral collocation methods for nonlinear coupled time fractional Nernst-Planck equations in two dimensions and its convergence analysis, Comput. Math. Appl., https://doi.org/10.1016/j.camwa.2018.12.018.
[18] C. Wang, Z. Wang, L. Wang, A spectral collocation method for nonlinear fractional boundary value problems with a Caputo derivative, J. Sci. Comput. 76 (2018) 166-188.
[19] Y. Yang, Jacobi spectral Galerkin methods for fractional integro-differential equations, Calcolo 52 (2015) 519-542.
[20] Y. Yang, Y.Q. Huang, Y. Zhou, Numerical simulation of time fractional cable equations and convergence analysis, Numer. Methods Partial Differ. Equ. 34 (2018) 1556-1576.
[21] F. Zeng, Z. Mao, G.E. Karniadakis, A generalized spectral collocation method with tunable accuracy for fractional differential equations with end point singularities, SIAM J. Sci. Comput. 39 (2017) 360-383.
[22] A.H. Bhrawy, M.A. Abdelkawy, A fully discrete collocation approximation for multi-dimensional fractional Schrödinger equations, J. Comput. Phys. 294 (2015) 462-483.
[23] A.H. Bhrawy, J.F. Alzaidy, M.A. Abdelkawy, A. Biswas, Jacobi spectral collocation approximation for multi-dimensional time-fractional Schrödinger equations, Nonlinear Dyn. 84 (2016) 1553-1567.
[24] D.F. Li, J.L. Wang, J.W. Zhang, Unconditionally convergent L1-Galerkin FEMs for nonlinear time-fractional Schrödinger equations, SIAM J. Sci. Comput. 39 (2017) A3067-A3088.
[25] X.L. Chen, Y.N. Di, J.Q. Duan, D.F. Li, Linearized compact ADI schemes for nonlinear time-fractional Schrödinger equations, Appl. Math. Lett. 84 (2018) 160-167.
[26] J. Shen, T. Tang, L.L. Wang, Spectral Methods: Algorithms, Analysis and Applications, Springer Series in Computational Mathematics, Springer, Berlin, 2011.
[27] C. Canuto, M.Y. Hussaini, A. Quarteroni, Spectral Methods: Fundamentals in Single Domains, Springer-Verlag, New York, 2006.
[28] G. Mastroianni, D. Occorsto, Optimal systems of nodes for Lagrange interpolation on bounded intervals: a survey, J. Comput. Appl. Math. 134 (2001) 325-341.
[29] Y. Yang, Y.P. Chen, Y.Q. Huang, H.Y. Wei, Spectral collocation method for the time-fractional diffusion-wave equation and convergence analysis, Comput. Math. Appl. 73 (2017) 1218-1232.
[30] D. Henry, Geometric Theory of Semilinear Parabolic Equations, Springer-Verlag, New York, 2006.
[31] Y. Yang, Y.P. Chen, Y.Q. Huang, Convergence analysis of the Jacobi spectral collocation method for fractional integro-differential equations, Acta Math. Sci. 34B (2014) 673-690.
[32] Y. Yang, Y.P. Chen, Spectral collocation methods for nonlinear Volterra integro-differential equations with weakly singular kernels, B. Malays. Math. Sci. Soc. 42 (2019) 297-314.
