

Hindawi Publishing Corporation, Journal of Function Spaces and Applications, Volume 2013, Article ID 542897, 6 pages. http://dx.doi.org/10.1155/2013/542897

Research Article
Numerical Solutions of Singularly Perturbed Reaction Diffusion Equation with Sobolev Gradients

Nauman Raza¹,² and Asma Rashid Butt³,⁴

1 Department of Mathematics and Statistics, McMaster University, Hamilton, ON, Canada L8S 4K1
2 Department of Mathematics, University of the Punjab, Lahore 54590, Pakistan
3 Department of Mathematics, Brock University, St. Catharines, ON, Canada L2S 3A1
4 Department of Mathematics, University of Engineering and Technology, Lahore 54890, Pakistan

Correspondence should be addressed to Nauman Raza; raza nauman@yahoo.com

Received 31 May 2013; Accepted 14 September 2013

Academic Editor: Chengbo Zhai

Copyright © 2013 N. Raza and A. R. Butt. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Critical points related to singularly perturbed reaction diffusion models are calculated using the weighted Sobolev gradient method in a finite element setting. The performance of different Sobolev gradients is discussed for varying values of the diffusion coefficient. A comparison is shown between the weighted and unweighted Sobolev gradients in two and three dimensions. The superiority of the method is also demonstrated by a comparison with Newton's method.

1. Introduction

Many problems in biology and chemistry can be represented in terms of partial differential equations (PDEs). One of the important models is the reaction diffusion problem, and much attention has been devoted to the solution of such problems. In the literature it is shown that numerical solutions of these problems can be computed provided the diffusion coefficients, reaction excitations, and initial and boundary data are given in a deterministic way. The solution of these PDEs is extremely challenging when they have singularly perturbed behavior. In this paper we discuss the numerical solution of

p_t = ε Δp + p(1 − p), p ∈ Ω, (1)

where ε is a small and strictly positive parameter called the diffusion coefficient and Ω is some two- or three-dimensional region. Dirichlet boundary conditions are used to solve the equation. Numerous numerical algorithms have been designed to solve these kinds of systems [1, 2]. We also use numerical techniques to solve these systems, based on the Sobolev gradient methods. A weighted Sobolev gradient approach is presented, which provides an iterative method for nonlinear elliptic problems.

Weighted Sobolev gradients [3] have been used for the solution of nonlinear singular differential equations. It is shown in [4] that significant improvement can be achieved by careful consideration of the weighting. Numerous Sobolev norms can be used as preconditioning strategies. In Sobolev gradient methods, linear operators are formed to improve the condition number in the steepest descent minimization process. The efficiency of Sobolev gradient methods has been shown in many situations, for example, in physics [4–11], image processing [12, 13], geometric modelling [14], material sciences [15–20], differential algebraic equations (DAEs) [21], and the solution of integrodifferential equations [22].

We refer to [23] for motivation and background on Sobolev gradients. For some applications and open problems in this area, an interesting article has been written by Renka and Neuberger [24]. For the computational comparison an Intel Pentium 1.4 GHz Core i3 machine with 1 GB RAM was used. All the programs were written in FreeFem++ [25], which is freely available to solve these kinds of problems.

The paper is organized as follows. In Section 2 we introduce the basic Sobolev gradient approach; this section contains some important results on the existence and convergence of the solution. In Section 3 we find the expressions for the Sobolev and weighted Sobolev gradients. Section 4 is composed of numerical results, Section 5 gives a comparison with Newton's method, and the summary and conclusions are discussed in Section 6.

2. Sobolev Gradient Approach

This section is devoted to showing how Sobolev gradient methods work. A detailed analysis regarding the construction of Sobolev gradients can be seen in [23], and these lines are also taken from the same reference.

Let k be a positive integer and let ψ be a real-valued C¹ function on Rᵏ. The gradient ∇ψ is defined by

lim_{t→0} (1/t)(ψ(x + th) − ψ(x)) = ψ′(x)h = ⟨h, ∇ψ(x)⟩_{Rᵏ}, x, h ∈ Rᵏ. (2)

For ψ as in (2), but with an inner product ⟨·,·⟩_S on Rᵏ different from the usual inner product ⟨·,·⟩_{Rᵏ}, there is a function ∇_S ψ : Rᵏ → Rᵏ which satisfies

ψ′(x)h = ⟨h, ∇_S ψ(x)⟩_S, x, h ∈ Rᵏ. (3)

The linear functional ψ′(x) can be represented using any inner product on Rᵏ. We say that ∇_S ψ is the gradient of ψ with respect to the inner product ⟨·,·⟩_S, and note that ∇_S ψ has the same properties as ∇ψ. By applying a linear transformation

A : Rᵏ → Rᵏ, (4)

we can relate these two inner products by

⟨x, y⟩_S = ⟨x, Ay⟩_{Rᵏ}, x, y ∈ Rᵏ, (5)

and by inverting A we have

(∇_S ψ)(x) = A⁻¹ ∇ψ(x), x ∈ Rᵏ. (6)

To each x ∈ Rᵏ an inner product ⟨·,·⟩_x can be associated on Rᵏ. Therefore, for x ∈ Rᵏ, one can define ∇_x ψ : Rᵏ → Rᵏ such that

ψ′(x)h = ⟨h, ∇_x ψ(x)⟩_x, x, h ∈ Rᵏ. (7)

So corresponding to each metric we can define a gradient for a function ψ, and these gradients may have vastly different numerical properties. Therefore the choice of metric plays a vital role in the steepest descent minimization process. A gradient of a function ψ is said to be a Sobolev gradient when the underlying space is a Sobolev space. Readers who are unfamiliar with Sobolev spaces are referred to [26]. Steepest descent can be considered both in discrete and in continuous versions.

Suppose ψ is a real-valued C¹ function defined on a Hilbert space H and ∇_S ψ is its gradient with respect to the inner product ⟨·,·⟩_S defined on H. Discrete steepest descent is a procedure of establishing a sequence {x_j} such that for a given x₀ we have

x_j = x_{j−1} − δ_j (∇_S ψ)(x_{j−1}), j = 1, 2, ..., (8)

where for each j, δ_j is chosen so that it minimizes, if possible,

ψ(x_{j−1} − δ_j (∇_S ψ)(x_{j−1})). (9)
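As a toy illustration of (8)-(9), the following sketch (illustrative Python; the quadratic ψ is a made-up example, not the paper's functional) runs discrete steepest descent with the step δ_j chosen by exact line search, which has a closed form for quadratics:

```python
import numpy as np

# Discrete steepest descent (8)-(9) on a small quadratic test function
# psi(x) = x^T Q x / 2 (a made-up example). For a quadratic, the step
# delta_j minimizing (9) has the closed form used below.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])

x = np.array([4.0, -3.0])
for j in range(1, 101):
    g = Q @ x                          # gradient of psi at x
    if np.linalg.norm(g) < 1e-10:      # stop once the gradient vanishes
        break
    delta = (g @ g) / (g @ (Q @ g))    # exact minimizer of (9) along -g
    x = x - delta * g
print(j, x)                            # x ends up numerically at the minimizer 0
```

With a nonquadratic ψ the scalar minimization over δ_j would instead be done by a line-search routine, but the structure of the iteration is the same.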

In continuous steepest descent a function z : [0, ∞) → H is constructed so that

dz/dt = −∇ψ(z(t)), z(0) = z_initial. (10)

Under suitable conditions on ψ, z(t) → z_∞, where ψ(z_∞) is the minimum value of ψ. So we conclude that discrete steepest descent (8) can be considered as a numerical method for approximating solutions of (10), while continuous steepest descent provides a theoretical starting point for proving convergence of discrete steepest descent.

Using (8), one seeks p = lim_{j→∞} x_j so that

ψ(p) = 0 or (∇_S ψ)(p) = 0. (11)

Using (10), we seek p = lim_{t→∞} z(t) so that (11) holds.

For the solution of PDEs we can formulate ψ by a variational principle such that p satisfies the given PDE if and only if p is a critical point of ψ.

To find a zero of the gradient of ψ we use the steepest descent minimization process. The energy functional associated with (1) is given by

ψ(p) = ∫_Ω [ δ_t p³/3 + (1 − δ_t) p²/2 − f p + δ_t (ε/2) |∇p|² ]. (12)

For the solution of PDEs other functionals are also possible; one of the prime examples in this direction is the least squares formulation. Such functionals are shown in [11, 18]. The existence and convergence of z(t) → z_∞ for different linear and nonlinear forms of F is discussed in [23].

Here we quote a result from [23] on the convergence of z(t) under a convexity assumption.

Theorem 1 (see [23]). Let H be a Hilbert space and let ψ be a nonnegative C² function defined on H for which there is ε > 0 such that

⟨ψ″(x)(∇ψ(x)), ∇ψ(x)⟩ ≥ ε ‖∇ψ(x)‖²_H, x ∈ H. (13)

Suppose also that z satisfies (10) for some x ∈ H with ∇ψ(x) ≠ 0. Then p = lim_{t→∞} z(t) exists and ∇ψ(p) = 0.

In this paper only discrete steepest descent is used, and for numerical computation discretized versions of the functional ψ are considered.


3. Gradients and Minimization

Each inner product corresponds to a gradient and a descent direction. The performance of the steepest descent minimization process can be improved by defining gradients in a suitably chosen Sobolev space. It is still an open problem how to choose the inner product best suited to a given problem.

A Sobolev space H¹(Ω) is defined as

H¹(Ω) = {p ∈ L²(Ω) : D^α p ∈ L²(Ω), 0 ≤ α ≤ 1}, (14)

where D^α denotes the weak derivative of p of order α. H¹(Ω) is a Hilbert space with the norm defined by

‖p‖²_{H¹} = ∫_Ω |∇p|² + |p|². (15)

Inspired by Mahavier's work on weighted Sobolev gradients, we define a new norm as

‖p‖²_{H¹_w} = ∫_Ω |w∇p|² + |p|². (16)

This norm takes care of the weight w = εδ_t, which affects the derivative term in (24). We call the Sobolev space with the norm (16) a weighted Sobolev space. We now show that the weighted Sobolev space with the norm defined by (16) is a Hilbert space, denoted by H¹_w. First we show that it satisfies the properties of an inner product space, and then we show that it is complete.
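On a uniform 1D grid the two norms (15) and (16) can be compared directly. The sketch below (illustrative Python; the function p is a made-up example, and the constant weight w = εδ_t follows the choice above) shows how the weight damps the derivative contribution to the norm:

```python
import numpy as np

# Discrete versions of the norms (15) and (16) on a uniform 1D grid.
# p and the constant weight w = eps*delta_t are illustrative choices.
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
p = np.sin(np.pi * x)
eps, dt = 0.01, 0.95
w = eps * dt                           # the weight suggested in the text

gp = np.diff(p) / dx                   # forward-difference gradient
H1 = np.sum(p**2) * dx + np.sum(gp**2) * dx          # squared H^1 norm
H1w = np.sum(p**2) * dx + np.sum((w * gp)**2) * dx   # squared weighted norm
print(H1, H1w)   # the weight strongly damps the derivative contribution
```

For small ε the weighted norm is dominated by the L² part, which is exactly what makes the corresponding gradient better scaled for the singularly perturbed problem.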

Let ⟨·,·⟩_w denote the inner product associated with the norm defined by (16),

⟨p, p⟩_w = ⟨p, p⟩ + ⟨w∇p, w∇p⟩, (17)

where ⟨·,·⟩ is the usual L² inner product, defined in the vector space R^M by ⟨p, q⟩ = Σ_i p(i) q(i). Consider the following:

⟨p, p⟩_w = ∫_Ω |w∇p|² + |p|² > 0 for p ≠ 0, (18)

while if p = 0 then ⟨p, p⟩_w = 0. Symmetry holds since

⟨p, q⟩_w = ⟨p, q⟩ + ⟨w∇p, w∇q⟩ = ⟨q, p⟩ + ⟨w∇q, w∇p⟩ = ⟨q, p⟩_w,

and linearity follows from

⟨p, q + r⟩_w = ⟨p, q + r⟩ + ⟨w∇p, w∇(q + r)⟩
= ⟨p, q⟩ + ⟨p, r⟩ + ⟨w∇p, w∇q⟩ + ⟨w∇p, w∇r⟩
= ⟨p, q⟩_w + ⟨p, r⟩_w,

⟨αp, q⟩_w = ⟨αp, q⟩ + ⟨αw∇p, w∇q⟩ = α⟨p, q⟩ + α⟨w∇p, w∇q⟩ = α⟨p, q⟩_w. (19)

Let {p_n} be a Cauchy sequence in H¹_w. Since L² is complete, there is p ∈ L² such that ‖p_n − p‖ → 0 and likewise ‖∇p_n − ∇p‖ → 0. Therefore

‖p_n − p‖²_w = ∫_Ω |w∇(p_n − p)|² + |p_n − p|²
= ∫_Ω |w|² |∇p_n − ∇p|² + |p_n − p|² → 0. (20)

Hence H¹_w is complete with respect to the norm defined by (16).

To incorporate the Dirichlet boundary conditions we define a perturbation subspace L²₀(Ω) of functions such that

L²₀(Ω) = {p ∈ L²(Ω) : p = 0 on Γ}. (21)

Here Γ denotes the boundary of the domain Ω. The perturbation subspaces related to H¹ and H¹_w are H¹₀ = L²₀ ∩ H¹ and H¹_{w,0} = L²₀ ∩ H¹_w, respectively.

We need to find the gradient ∇ψ(p) of a functional ψ(p) associated with the original problem and then find a zero of that gradient.

Given an energy functional

ψ(p) = ∫_Ω R(p, ∇p), (22)

the Fréchet derivative of ψ(p) is a bounded linear functional ψ′(p) defined by

ψ′(p)h = lim_{α→0} (ψ(p + αh) − ψ(p))/α for h ∈ H¹₀(Ω). (23)

The energy functional in our case is given by

ψ(p) = ∫_Ω [ δ_t p³/3 + (1 − δ_t) p²/2 − f p + δ_t (ε/2) |∇p|² ]; (24)

then according to (23) we have

ψ′(p)h = lim_{α→0} (1/α) ∫_Ω [ δ_t (p + αh)³/3 + (1 − δ_t)(p + αh)²/2 − f(p + αh) + δ_t (ε/2) |∇(p + αh)|² − δ_t p³/3 − (1 − δ_t) p²/2 + f p − δ_t (ε/2) |∇p|² ]. (25)

Simplifying and applying the limit, we have

ψ′(p)h = ∫_Ω (δ_t p² + (1 − δ_t) p − f) h − ∫_Ω ε δ_t ∇p · ∇h. (26)
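A quick finite-difference check of this directional derivative is possible on a 1D discretization of (24). In the sketch below (illustrative Python; the grid, f, p, and the test direction h are made-up choices, with h vanishing on the boundary), the ∇p · ∇h term carries the sign obtained by differentiating the +δ_t(ε/2)|∇p|² term of (24):

```python
import numpy as np

# Finite-difference check of the directional derivative of the
# discretized functional (24) in 1D. Grid, f, p, and the direction h
# are made-up test data; h vanishes at both endpoints.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
eps, dt = 0.01, 0.95
f = np.sin(np.pi * x)
p = np.cos(3.0 * x)
h = x * (1.0 - x)

def psi(p):
    bulk = dt * p**3 / 3 + (1 - dt) * p**2 / 2 - f * p
    grad = np.diff(p) / dx
    return np.sum(bulk) * dx + dt * eps / 2 * np.sum(grad**2) * dx

# analytic directional derivative of the discrete functional; the
# diffusion term's sign comes from differentiating +dt*(eps/2)|grad p|^2
lin = np.sum((dt * p**2 + (1 - dt) * p - f) * h) * dx \
    + eps * dt * np.sum((np.diff(p) / dx) * (np.diff(h) / dx)) * dx

a = 1e-6
fd = (psi(p + a * h) - psi(p - a * h)) / (2 * a)   # central difference in alpha
print(abs(fd - lin))                               # agreement to rounding level
```

Because the discrete functional is a polynomial in the step α, the central difference matches the analytic expression to essentially machine precision.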

Let ∇ψ(p), ∇_s ψ(p), and ∇_w ψ(p) denote the gradients in L², H¹, and H¹_w, respectively. By using (7) we can write

ψ′(p)h = ⟨∇ψ(p), h⟩_{L²} = ⟨∇_s ψ(p), h⟩_{H¹} = ⟨∇_w ψ(p), h⟩_{H¹_w}. (27)


Table 1: Numerical results of steepest descent in H¹, H¹_w using δ_t = 0.95, ε = 0.01, for fifteen time steps in the two-dimensional case.

λ (H¹)   λ (H¹_w)   Iterations (H¹)   Iterations (H¹_w)   CPU (H¹)   CPU (H¹_w)   M    Triangles
0.8      0.035      948               793                 6.68       2.52         10   24
0.8      0.035      1782              1137                45.29      14.87        20   152
0.8      0.035      2627              1674                126.05     40.7         30   322
0.8      0.035      2309              1621                215.16     155.84       40   348

Table 2: Numerical results of steepest descent in H¹, H¹_w using δ_t = 0.95 for fifteen time steps in the three-dimensional case.

λ (H¹)   λ (H¹_w)   Iterations (H¹)   Iterations (H¹_w)   CPU (H¹)   CPU (H¹_w)   M    Triangles
1.2      0.5        3962              1245                2293       462          5    50
1.2      0.5        45291             14872               3302       474          10   200
1.2      0.5        252143            81403               3986       499          20   800

Thus the gradient in L² is

∇ψ(p) = (δ_t p² + (1 − δ_t) p − f) + ε δ_t ∇²p. (28)

To satisfy the boundary conditions we look for gradients that are zero on the boundary of Ω. So instead of using ∇ψ(p) we use π∇ψ(p), where π is a projection that sets the values of the test function h to zero on the boundary of the system. For implementing the Sobolev gradient method the software FreeFem++ [25] is used, which is designed to solve PDEs using the finite element method. Therefore to find π∇ψ(p) we need to solve the following:

π ∫_Ω (δ_t p² + (1 − δ_t) p − f) h − ∫_Ω ε δ_t ∇p · ∇h
= π ∫_Ω ∇ψ(p) h + ∫_Ω ∇(∇ψ(p)) · ∇h. (29)

In other words, for p in L²(Ω), find ∇ψ(p) ∈ L²₀ such that

⟨∇ψ(p), h⟩_{L²} = ⟨A(p) − f, h⟩_{L²} ∀h ∈ L²₀(Ω), (30)

⟨A(p), h⟩_{L²} = ∫_Ω (δ_t p² + (1 − δ_t) p) h − ∫_Ω ε δ_t ∇p · ∇h. (31)

This gradient suffers from the CFL conditions, as is well known to numerical analysts. By using (15) and (27) we can relate the L² gradient and the unweighted Sobolev gradient in the weak form as

⟨(1 − ∇²) ∇_s ψ(p), h⟩_{L²} = ⟨A(p) − f, h⟩_{L²}. (32)

Similarly, using (16) and (27), the weighted Sobolev gradient can be related to the L² gradient:

⟨(1 − w²∇²) ∇_w ψ(p), h⟩_{L²} = ⟨A(p) − f, h⟩_{L²}. (33)

In order to find the gradients we represent the above systems in weak formulation as

π ∫_Ω (δ_t p² + (1 − δ_t) p − f) h − ∫_Ω ε δ_t ∇p · ∇h
= π ∫_Ω ∇_s ψ(p) h + ∫_Ω ∇(∇_s ψ(p)) · ∇h, (34)

π ∫_Ω (δ_t p² + (1 − δ_t) p − f) h − ∫_Ω ε δ_t ∇p · ∇h
= π ∫_Ω ∇_w ψ(p) h + ∫_Ω w² ∇(∇_w ψ(p)) · ∇h. (35)

It is seen that when using the weighted Sobolev gradient the step size λ does not have to be reduced as the numerical grid becomes finer, and the number of minimization steps remains reasonable. At the same time, the conceptual simplicity and elegance of the steepest descent algorithm is retained.
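The effect of the weighting can be illustrated on a linearized (quadratic) model of one implicit time step, minimized by steepest descent with exact line search. The sketch below is illustrative Python, not the authors' FreeFem++ code; the 1D grid, the coefficient c, and the right-hand side b are made-up stand-ins, and the weighted metric puts εδ_t on the derivative term:

```python
import numpy as np

# Steepest descent for a linearized (quadratic) model problem:
# minimize 1/2 p^T A p - b^T p with A = c*I + eps*dt*D, where D is a
# Dirichlet finite-difference (negative) Laplacian. P = None gives the
# plain L2 gradient; P = M descends in a weighted H^1-style metric.
N = 50
dx = 1.0 / (N + 1)
eps, dt, c = 0.01, 0.95, 0.05
D = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2
A = c * np.eye(N) + eps * dt * D
M = np.eye(N) + eps * dt * D           # weighted metric on the gradient
b = np.ones(N)

def descend(P=None, maxit=200000):
    p = np.zeros(N)
    for it in range(1, maxit + 1):
        g = A @ p - b                  # L2 gradient of the functional
        if np.max(np.abs(g)) < 1e-6:   # same stopping rule as the paper
            return it
        d = g if P is None else np.linalg.solve(P, g)
        lam = (d @ g) / (d @ (A @ d))  # exact line search, cf. (9)
        p = p - lam * d
    return None

it_plain = descend()
it_weighted = descend(M)
print(it_plain, it_weighted)           # weighted descent needs far fewer steps
```

Even with an optimal step at every iteration, the plain L² descent needs thousands of steps on this grid, while the weighted descent converges in a few dozen, mirroring the behavior reported in Tables 1 and 2.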

4. Numerical Results

Let us consider a two-dimensional domain Ω in the form of a circle with centre at the origin and radius 12. An elliptic region is removed that has border x(t) = 6 cos(t), y(t) = 2 sin(t) with t ∈ [0, 2π]. To start the minimization process we specify the value p = 2.0 as the initial state; the boundary conditions were p = 1 on the outer boundary and p = −1 on the inner boundary. We also let ε = 0.01 and time step δ_t = 0.95.

For the implementation in FreeFem++ one needs to specify the number of nodes on each border. After that, a mesh is formed by triangulating the region.

FreeFem++ then solves equations of the type given by (35), which can be used to compute gradients in H¹ and H¹_w. We performed numerical experiments by specifying different numbers of nodes on each border. For each time step δ_t the functional defined by (24) was minimized using steepest descent steps with both H¹ and H¹_w until the infinity norm of the gradient was less than 10⁻⁶. The system was evolved over 15 time steps. The results are reported in Table 1.

Table 3: Comparison of Newton's method and steepest descent in H¹_w for different values of ε.

           ε = 1.0           ε = 0.1           ε = 0.01
Error      Newton   H¹_w     Newton   H¹_w     Newton   H¹_w
10⁻⁵       10       13       30       10       NC       13
10⁻⁷       11       20       38       16       NC       22
10⁻⁹       11       28       NC       22       NC       34

Table 1 shows that as the mesh is refined the weighted Sobolev gradient becomes more and more efficient compared to the unweighted Sobolev gradient. It is also observed that by decreasing ε the performance of steepest descent in H¹_w improves relative to H¹.

Let us now consider a three-dimensional domain Ω in the form of a cube with center at the origin and radius 8. To start the minimization process we specify the value p = 0.0 as the initial state; the boundary condition was set as p = 1 on the top and bottom faces and p = −1 on the front, back, left, and right faces of the cube. We also let ε = 0.1 and time step δ_t = 0.95. Once again the functional was minimized using steepest descent with both H¹ and H¹_w until the infinity norm of the gradient was less than 10⁻⁶, and the same software was used. Numerical experiments were performed by varying the number of nodes specified on each border. The results are recorded in Table 2.

By refining the mesh (increasing M), the efficiency of the weighted Sobolev gradient, in terms of both the number of minimization steps taken for convergence and the CPU time, increases further. By reducing the value of ε, the advantage of steepest descent in H¹_w over H¹ becomes more pronounced.

5. Comparison with Newton's Method

In this section a comparison is given between Newton's method and the Sobolev gradient method with weighting. Newton's method is considered one of the optimal methods for solving these kinds of problems, but its convergence requires a good initial guess. For the comparison between the two methods we therefore take a good initial guess in the implementation of Newton's method, so that the comparison is fair. In variational form the given nonlinear problem can be written as

⟨ψ(p), h⟩ = ∫_Ω δ_t (p³/3) h + (1 − δ_t)(p²/2) h − f p h + δ_t (ε/2) |∇p|². (36)

To apply Newton's method we need the Gateaux derivative, which is defined by

⟨F′(u_n) c_n, v⟩ = ⟨F(u_n), v⟩ ∀v ∈ H¹₀(Ω). (37)

A linear solver is required to solve (37). Thus Newton's iteration scheme reads

u_{n+1} = u_n − c_n. (38)
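A minimal sketch of the iteration (37)-(38) for a 1D discretization follows (illustrative Python; D is a finite-difference Laplacian and f a made-up right-hand side, with the diffusion term taken positive definite, so this is a stand-in for the problem behind (36), not the authors' setup):

```python
import numpy as np

# Newton's method (37)-(38) for the 1D discretized semilinear equation
# dt*p^2 + (1 - dt)*p + eps*dt*(D p) = f, with D a finite-difference
# (negative) Laplacian under Dirichlet conditions. Grid and f are
# made-up test data.
N = 50
dx = 1.0 / (N + 1)
eps, dt = 1.0, 0.95
D = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2
f = np.ones(N)

def F(p):
    return dt * p**2 + (1 - dt) * p + eps * dt * (D @ p) - f

u = np.zeros(N)                        # initial guess
for n in range(1, 51):
    r = F(u)
    if np.max(np.abs(r)) < 1e-10:
        break
    J = np.diag(2 * dt * u + (1 - dt)) + eps * dt * D   # Gateaux derivative
    c = np.linalg.solve(J, r)          # the linear solve in (37)
    u = u - c                          # the update (38)
print(n, np.max(np.abs(F(u))))         # quadratic convergence in a few steps
```

Each step requires assembling and inverting the Jacobian, which is exactly the cost that makes Newton's method expensive, and its convergence here relies on the mild nonlinearity and a reasonable starting point.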

An example is solved in the two-dimensional case. We let Ω be a circle with centre at the origin and radius 12, from which an elliptic region with border x(t) = 6 cos(t), y(t) = 2 sin(t), t ∈ [0, 2π], is removed. The initial state was p = 2.0 and the boundary conditions were p = 1 on the outer boundary and p = −1 on the inner boundary. The value of the time step was set as δ_t = 0.95, and the system was evolved over one time step. The minimization was performed until the infinity norm of the gradient was less than a set tolerance. We specified 20 nodes on each border to obtain the results.

Results for different values of ε are recorded in Table 3, where NC denotes no convergence. From the table one can observe that Newton's method performs better than steepest descent in H¹_w when it converges, but for a strict stopping criterion Newton's method does not converge. Newton's method is superior when the minimization process starts, but it is not able to handle low-frequency errors. On the other hand, steepest descent in H¹_w takes more iterations but continues to converge even for very tight stopping criteria. As the diffusion coefficient ε decreases, steepest descent in H¹_w always manages to converge, whereas this is not the case for Newton's method.

6. Summary and Conclusions

In this paper a minimization scheme based on the Sobolev gradient methods [23] is developed for the solution of the concerned problem. The performance of descent in H¹_w is better than that of descent in H¹ as the spacing of the numerical grid is refined. We have indicated a systematic way to choose the underlying space and demonstrated the importance of weighting for developing efficient codes.

The convergence of Newton's method depends on a good initial guess which is sufficiently close to a local minimum. To implement Newton's method we need to evaluate the inverse of the Hessian matrix, which can at times be expensive. A failure of Newton's method has been shown in [11, 18] by taking a large jump discontinuity and by refining the grid, but this is not the case for the steepest descent method. So a broader range of problems can be addressed by using steepest descent in an appropriate Sobolev space.

One of the advantages of the Sobolev gradient methods is that they still manage to converge even if we take a rough initial guess. The weighted Sobolev gradient offers a robust alternative for problems which are highly nonlinear and have large discontinuities. An interesting project would be to compare the performance of Sobolev gradient methods with a multigrid technique in a finite element setting.


Acknowledgments

The authors are thankful for the precise and kind remarks of the learned referees. The first author acknowledges the Higher Education Commission (HEC) of Pakistan for providing a research grant through a postdoctoral fellowship.

References

[1] M. Stynes, "Steady-state convection-diffusion problems," Acta Numerica, vol. 14, pp. 445–508, 2005.

[2] C. Xenophontos and S. R. Fulton, "Uniform approximation of singularly perturbed reaction-diffusion problems by the finite element method on a Shishkin mesh," Numerical Methods for Partial Differential Equations, vol. 19, no. 1, pp. 89–111, 2003.

[3] W. T. Mahavier, "A numerical method utilizing weighted Sobolev descent to solve singular differential equations," Nonlinear World, vol. 4, no. 4, pp. 435–455, 1997.

[4] A. Majid and S. Sial, "Application of Sobolev gradient method to Poisson-Boltzmann system," Journal of Computational Physics, vol. 229, no. 16, pp. 5742–5754, 2010.

[5] J. Karátson and L. Lóczi, "Sobolev gradient preconditioning for the electrostatic potential equation," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1093–1104, 2005.

[6] J. Karátson and I. Faragó, "Preconditioning operators and Sobolev gradients for nonlinear elliptic problems," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1077–1092, 2005.

[7] J. Karátson, "Constructive Sobolev gradient preconditioning for semilinear elliptic systems," Electronic Journal of Differential Equations, vol. 75, pp. 1–26, 2004.

[8] J. J. García-Ripoll, V. V. Konotop, B. Malomed, and V. M. Pérez-García, "A quasi-local Gross-Pitaevskii equation for attractive Bose-Einstein condensates," Mathematics and Computers in Simulation, vol. 62, no. 1-2, pp. 21–30, 2003.

[9] N. Raza, S. Sial, S. S. Siddiqi, and T. Lookman, "Energy minimization related to the nonlinear Schrödinger equation," Journal of Computational Physics, vol. 228, no. 7, pp. 2572–2577, 2009.

[10] N. Raza, S. Sial, and J. W. Neuberger, "Numerical solution of Burgers' equation by the Sobolev gradient method," Applied Mathematics and Computation, vol. 218, no. 8, pp. 4017–4024, 2011.

[11] A. Majid and S. Sial, "Approximate solutions to Poisson-Boltzmann systems with Sobolev gradients," Journal of Computational Physics, vol. 230, no. 14, pp. 5732–5738, 2011.

[12] W. B. Richardson Jr., "Sobolev preconditioning for the Poisson-Boltzmann equation," Computer Methods in Applied Mechanics and Engineering, vol. 181, no. 4, pp. 425–436, 2000.

[13] W. B. Richardson Jr., "Sobolev gradient preconditioning for image-processing PDEs," Communications in Numerical Methods in Engineering, vol. 24, no. 6, pp. 493–504, 2008.

[14] R. J. Renka, "Constructing fair curves and surfaces with a Sobolev gradient method," Computer Aided Geometric Design, vol. 21, no. 2, pp. 137–149, 2004.

[15] S. Sial, J. Neuberger, T. Lookman, and A. Saxena, "Energy minimization using Sobolev gradients: application to phase separation and ordering," Journal of Computational Physics, vol. 189, no. 1, pp. 88–97, 2003.

[16] S. Sial, J. Neuberger, T. Lookman, and A. Saxena, "Energy minimization using Sobolev gradients: finite-element setting," in Proceedings of the World Conference on 21st Century Mathematics, Lahore, Pakistan, 2005.

[17] N. Raza, S. Sial, and S. Siddiqi, "Approximating time evolution related to Ginzburg-Landau functionals via Sobolev gradient methods in a finite-element setting," Journal of Computational Physics, vol. 229, no. 5, pp. 1621–1625, 2010.

[18] N. Raza, S. Sial, and S. S. Siddiqi, "Sobolev gradient approach for the time evolution related to energy minimization of Ginzburg-Landau functionals," Journal of Computational Physics, vol. 228, no. 7, pp. 2566–2571, 2009.

[19] S. Sial, "Sobolev gradient algorithm for minimum energy states of s-wave superconductors: finite-element setting," Superconductor Science and Technology, vol. 18, pp. 675–677, 2005.

[20] B. M. Brown, M. Jais, and I. W. Knowles, "A variational approach to an elastic inverse problem," Inverse Problems, vol. 21, no. 6, pp. 1953–1973, 2005.

[21] R. Nittka and M. Sauter, "Sobolev gradients for differential algebraic equations," Electronic Journal of Differential Equations, vol. 42, pp. 1–31, 2008.

[22] N. Raza, S. Sial, and J. W. Neuberger, "Numerical solutions of integrodifferential equations using Sobolev gradient methods," International Journal of Computational Methods, vol. 9, Article ID 1250046, 2012.

[23] J. W. Neuberger, Sobolev Gradients and Differential Equations, vol. 1670 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1997.

[24] R J Renka and J W Neuberger ldquoSobolev gradients Introduc-tion applications problemsrdquo Contemporary Mathematics vol257 pp 85ndash99 2004

[25] F Hecht O Pironneau and K Ohtsuka ldquoFreeFem++ Manualrdquohttpwwwfreefemorg

[26] R A Adams Sobolev spaces Academic Press New York NYUSA 1975



for Sobolev and weighted Sobolev gradients. Section 4 is composed of numerical results. Summary and conclusions are discussed in Section 5.

2. Sobolev Gradient Approach

This section describes the working of Sobolev gradient methods. A detailed analysis of the construction of Sobolev gradients can be found in [23], from which these lines are also taken.

Let k be a positive integer and let ψ be a real-valued C¹ function on R^k. The gradient ∇ψ is defined by

lim_{t→0} (1/t)(ψ(x + th) − ψ(x)) = ψ′(x)h = ⟨h, ∇ψ(x)⟩_{R^k},  x, h ∈ R^k.  (2)

For ψ as in (2), but with ⟨·,·⟩_S an inner product on R^k different from the usual inner product ⟨·,·⟩_{R^k}, there is a function ∇_S ψ : R^k → R^k which satisfies

ψ′(x)h = ⟨h, ∇_S ψ(x)⟩_S,  x, h ∈ R^k.  (3)

The linear functional ψ′(x) can be represented using any inner product on R^k. We say that ∇_S ψ is the gradient of ψ with respect to the inner product ⟨·,·⟩_S, and note that ∇_S ψ has the same properties as ∇ψ. By applying a linear transformation

A : R^k → R^k,  (4)

we can relate these two inner products by

⟨x, y⟩_S = ⟨x, Ay⟩_{R^k},  x, y ∈ R^k,  (5)

and consequently

(∇_S ψ)(x) = A⁻¹ ∇ψ(x),  x ∈ R^k.  (6)

To each x ∈ R^k an inner product ⟨·,·⟩_x on R^k may be associated. For each x ∈ R^k one can then define ∇_x ψ : R^k → R^k such that

ψ′(x)h = ⟨h, ∇_x ψ(x)⟩_x,  x, h ∈ R^k.  (7)

So, corresponding to each metric, we can define a gradient for a function ψ, and these gradients may have vastly different numerical properties. Therefore the choice of metric plays a vital role in the steepest descent minimization process. A gradient of a function ψ is called a Sobolev gradient when the underlying space is a Sobolev space. Readers unfamiliar with Sobolev spaces are referred to [26]. Steepest descent can be considered both in discrete and in continuous versions.

Suppose ψ is a real-valued C¹ function defined on a Hilbert space H and ∇_S ψ is its gradient with respect to the inner product ⟨·,·⟩_S defined on H. Discrete steepest descent is a procedure of constructing a sequence {x_j} such that, for a given x_0,

x_j = x_{j−1} − δ_j (∇ψ)(x_{j−1}),  j = 1, 2, …,  (8)

where for each j the step size δ_j is chosen so that it minimizes, if possible,

ψ(x_{j−1} − δ_j (∇ψ)(x_{j−1})).  (9)
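To make the iteration (8)-(9) concrete, here is a minimal Python sketch (ours, not from the paper) of discrete steepest descent with a crude sampled line search; the quadratic test function is an illustrative choice, for which the exact minimizer solves Ax = b.

```python
import numpy as np

def steepest_descent(psi, grad, x0, tol=1e-8, max_iter=10000):
    """Discrete steepest descent: x_j = x_{j-1} - delta_j * grad(x_{j-1}),
    with delta_j chosen to (approximately) minimize psi along the ray."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g, np.inf) < tol:   # infinity-norm stopping rule
            break
        # crude line search: sample a geometric grid of step sizes
        deltas = 2.0 ** -np.arange(0, 30)
        vals = [psi(x - d * g) for d in deltas]
        x = x - deltas[int(np.argmin(vals))] * g
    return x

# Example: psi(x) = 0.5 x^T A x - b^T x, so grad(x) = A x - b,
# and the critical point of psi solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
psi = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = steepest_descent(psi, grad, np.zeros(2))
print(np.allclose(x_star, np.linalg.solve(A, b), atol=1e-6))  # True
```

The sampled line search stands in for the exact minimization over δ_j in (9); any monotone step-size rule would serve the same illustrative purpose.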

In continuous steepest descent, a function z : [0, ∞) → H is constructed so that

dz/dt = −∇ψ(z(t)),  z(0) = z_initial.  (10)

Under suitable conditions on ψ, z(t) → z_∞, where ψ(z_∞) is the minimum value of ψ. Thus discrete steepest descent (8) can be considered a numerical method for approximating solutions of (10), while continuous steepest descent provides a theoretical starting point for proving convergence of discrete steepest descent.

Using (8), one seeks p = lim_{j→∞} x_j such that

ψ(p) = 0 or (∇_S ψ)(p) = 0.  (11)

Using (10), we seek p = lim_{t→∞} z(t) so that (11) holds.

For the solution of PDEs we can formulate ψ via a variational principle such that p satisfies the given PDE if and only if p is a critical point of ψ.

To find a zero of the gradient of ψ we use the steepest descent minimization process. The energy functional associated with (1) is given by

ψ(p) = ∫_Ω δ_t p³/3 + (1 − δ_t) p²/2 − f p + δ_t (ε/2)|∇p|².  (12)

For the solution of PDEs other functionals are also possible; one prime example in this direction is the least-squares formulation. Such functionals are shown in [11, 18]. The existence and convergence of z(t) → z_∞ for different linear and nonlinear forms of F is discussed in [23].

Here we quote a result from [23] on convergence of z(t) under a convexity assumption.

Theorem 1 (see [23]). Let H be a Hilbert space and let ψ be a nonnegative C² function defined on H for which there is ε > 0 such that

⟨ψ″(x)(∇ψ(x)), ∇ψ(x)⟩ ≥ ε ‖∇ψ(x)‖²_H,  x ∈ H.  (13)

Suppose also that z satisfies (10) for some x ∈ H with ∇ψ(x) ≠ 0. Then p = lim_{t→∞} z(t) exists and ∇ψ(p) = 0.

In this paper only discrete steepest descent is used, and for numerical computation discretized versions of the functional ψ are considered.


3. Gradients and Minimization

Each inner product gives rise to a gradient and hence a descent direction. The performance of the steepest descent minimization process can be improved by defining gradients in a suitably chosen Sobolev space. How to choose the inner product best suited to a given problem is still an open question.

A Sobolev space H¹(Ω) is defined as

H¹(Ω) = {p ∈ L²(Ω) : D^α p ∈ L²(Ω), 0 ≤ α ≤ 1},  (14)

where D^α denotes the weak derivative of p of order α. H¹(Ω) is a Hilbert space with the norm defined by

‖p‖²_{H¹} = ∫_Ω |∇p|² + |p|².  (15)

Inspired by Mahavier's work on weighted Sobolev gradients [3], we define a new norm as

‖p‖²_{H¹_w} = ∫_Ω |w∇p|² + |p|².  (16)

This norm takes account of the weight w = εδ_t, which multiplies the derivative term in (24). We call the Sobolev space with the norm (16) a weighted Sobolev space.

We now show that the weighted Sobolev space with the norm (16), denoted H¹_w, is a Hilbert space. First we show that it satisfies the properties of an inner product space, and then we show that it is complete.

Let ⟨·,·⟩_w denote the inner product associated with the norm (16),

⟨p, q⟩_w = ⟨p, q⟩ + ⟨w∇p, w∇q⟩,  (17)

where ⟨·,·⟩ is the usual L² inner product; on the discrete vector space R^M it is ⟨p, q⟩ = Σ_i p(i)q(i). The inner-product axioms are verified as follows:

⟨p, p⟩_w = ∫_Ω |w∇p|² + |p|² > 0 for p ≠ 0, and ⟨p, p⟩_w = 0 if p = 0;  (18)

⟨p, q⟩_w = ⟨p, q⟩ + ⟨w∇p, w∇q⟩ = ⟨q, p⟩ + ⟨w∇q, w∇p⟩ = ⟨q, p⟩_w,

⟨p, q + r⟩_w = ⟨p, q + r⟩ + ⟨w∇p, w∇(q + r)⟩
             = ⟨p, q⟩ + ⟨w∇p, w∇q⟩ + ⟨p, r⟩ + ⟨w∇p, w∇r⟩
             = ⟨p, q⟩_w + ⟨p, r⟩_w,

⟨αp, q⟩_w = ⟨αp, q⟩ + ⟨αw∇p, w∇q⟩ = α⟨p, q⟩ + α⟨w∇p, w∇q⟩ = α⟨p, q⟩_w.  (19)
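The axioms (18)-(19) can also be checked numerically on a discrete grid. The following sketch is illustrative only: the grid size, the weight w, and the forward-difference operator with zero Dirichlet values are our assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx, w = 64, 1.0 / 65, 0.05      # illustrative grid and weight

def deriv(p):
    """Forward difference of p, extended by zero Dirichlet boundary values."""
    pe = np.concatenate(([0.0], p, [0.0]))
    return np.diff(pe) / dx

def ip_w(p, q):
    """Discrete weighted inner product (17): <p,q> + <w p', w q'>."""
    return dx * ((p * q).sum() + (w * deriv(p) * w * deriv(q)).sum())

p, q, r = rng.standard_normal((3, n))
alpha = 0.7

print(ip_w(p, p) > 0)                                        # positivity for p != 0
print(np.isclose(ip_w(p, q), ip_w(q, p)))                    # symmetry
print(np.isclose(ip_w(p, q + r), ip_w(p, q) + ip_w(p, r)))   # additivity
print(np.isclose(ip_w(alpha * p, q), alpha * ip_w(p, q)))    # homogeneity
```

All four checks print True: the discrete analogue of (17) inherits the axioms directly from the bilinearity of the sums.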

Let {p_n} be a Cauchy sequence in H¹_w. Since L² is complete, there is p ∈ L² such that p_n − p → 0 and ∇p_n − ∇p → 0. Therefore

‖p_n − p‖²_w = ∫_Ω |w∇(p_n − p)|² + |p_n − p|²
             = ∫_Ω |w|² |∇p_n − ∇p|² + |p_n − p|² → 0.  (20)

Hence H¹_w is complete with respect to the norm defined by (16).

To incorporate the Dirichlet boundary conditions we define a perturbation subspace L²_0(Ω) of functions such that

L²_0(Ω) = {p ∈ L²(Ω) : p = 0 on Γ},  (21)

where Γ denotes the boundary of the domain Ω. The perturbation subspaces related to H¹ and H¹_w are H¹_0 = L²_0 ∩ H¹ and H¹_{w,0} = L²_0 ∩ H¹_w, respectively.

We need to find the gradient ∇ψ(p) of the functional ψ(p) associated with the original problem and then locate a zero of this gradient.

Given an energy functional

ψ(p) = ∫_Ω R(p, ∇p),  (22)

the Fréchet derivative of ψ(p) is a bounded linear functional ψ′(p) defined by

ψ′(p)h = lim_{α→0} (ψ(p + αh) − ψ(p))/α  for h ∈ H¹_0(Ω).  (23)

The energy functional in our case is given by

ψ(p) = ∫_Ω δ_t p³/3 + (1 − δ_t) p²/2 − f p + δ_t (ε/2)|∇p|²;  (24)

then according to (23) we have

ψ′(p)h = lim_{α→0} (1/α) ∫_Ω [ δ_t (p + αh)³/3 + (1 − δ_t)(p + αh)²/2 − f(p + αh) + δ_t (ε/2)|∇(p + αh)|²
       − δ_t p³/3 − (1 − δ_t) p²/2 + f p − δ_t (ε/2)|∇p|² ].  (25)

Simplifying and applying the limit, we have

ψ′(p)h = ∫_Ω (δ_t p² + (1 − δ_t)p − f)h + ∫_Ω εδ_t ∇p·∇h.  (26)
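As a sanity check (ours, not the paper's), the first variation of the functional (24) can be compared against a central difference quotient on a 1-D discretization with zero Dirichlet values; all parameter values and test functions below are illustrative.

```python
import numpy as np

# 1-D discretization of the functional (24) with zero Dirichlet values,
# used to check the Gateaux-derivative formula against a difference
# quotient (parameter values are illustrative).
eps, dt = 0.01, 0.95
n = 200
dx = 1.0 / n
x = np.arange(1, n) * dx                       # interior nodes of [0, 1]
f = np.sin(np.pi * x)

def deriv(p):                                  # p' by forward differences,
    pe = np.concatenate(([0.0], p, [0.0]))     # p = 0 on the boundary
    return np.diff(pe) / dx

def psi(p):                                    # discrete functional (24)
    bulk = dt * p**3 / 3 + (1 - dt) * p**2 / 2 - f * p
    return dx * (bulk.sum() + dt * (eps / 2) * (deriv(p) ** 2).sum())

def dpsi(p, h):                                # formula (26) applied to h
    return dx * (((dt * p**2 + (1 - dt) * p - f) * h).sum()
                 + eps * dt * (deriv(p) * deriv(h)).sum())

p = np.sin(2 * np.pi * x)
h = np.sin(3 * np.pi * x)
alpha = 1e-5
quotient = (psi(p + alpha * h) - psi(p - alpha * h)) / (2 * alpha)
print(abs(quotient - dpsi(p, h)) < 1e-8)  # True
```

The central difference cancels the second-order term, so the agreement with the analytic derivative is essentially to machine precision.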

Let ∇ψ(p), ∇_s ψ(p), and ∇_w ψ(p) denote the gradients in L², H¹, and H¹_w, respectively. By using (7) we can write

ψ′(p)h = ⟨∇ψ(p), h⟩_{L²} = ⟨∇_s ψ(p), h⟩_{H¹} = ⟨∇_w ψ(p), h⟩_{H¹_w}.  (27)


Table 1: Numerical results of steepest descent in H¹ and H¹_w using δ_t = 0.95, ε = 0.01, for fifteen time steps in the two-dimensional case.

λ (H¹)   λ (H¹_w)   Iterations (H¹)   Iterations (H¹_w)   CPU (H¹)   CPU (H¹_w)   M    Triangles
0.8      0.035      948               793                 668        252          10   24
0.8      0.035      1782              1137                4529       1487         20   152
0.8      0.035      2627              1674                12605      407          30   322
0.8      0.035      2309              1621                21516      15584        40   348

Table 2: Numerical results of steepest descent in H¹ and H¹_w using δ_t = 0.95 for fifteen time steps in the three-dimensional case.

λ (H¹)   λ (H¹_w)   Iterations (H¹)   Iterations (H¹_w)   CPU (H¹)   CPU (H¹_w)   M    Triangles
1.2      0.5        3962              1245                2293       462          5    50
1.2      0.5        45291             14872               3302       474          10   200
1.2      0.5        252143            81403               3986       499          20   800

Thus the gradient in L² is

∇ψ(p) = δ_t p² + (1 − δ_t)p − f − εδ_t ∇²p.  (28)

To satisfy the boundary conditions we look for gradients that are zero on the boundary of Ω. So instead of using ∇ψ(p) we use π∇ψ(p), where π is a projection that sets the values of the test function h to zero on the boundary of the system. For the implementation of the Sobolev gradient method the software FreeFem++ [25] is used, which is designed to solve PDEs using the finite element method. Therefore, to find π∇ψ(p) we need to solve the following:

π∫_Ω (δ_t p² + (1 − δ_t)p − f)h + ∫_Ω εδ_t ∇p·∇h = π∫_Ω ∇ψ(p) h + ∫_Ω ∇(∇ψ(p))·∇h.  (29)

In other words, for p in L²(Ω), find ∇ψ(p) ∈ L²_0 such that

⟨∇ψ(p), h⟩_{L²} = ⟨A(p) − f, h⟩_{L²}  ∀h ∈ L²_0(Ω),  (30)

where

⟨A(p), h⟩_{L²} = ∫_Ω (δ_t p² + (1 − δ_t)p)h + ∫_Ω εδ_t ∇p·∇h.  (31)

This gradient suffers from CFL-type step-size restrictions, as is well known to numerical analysts. By using (15) and (27) we can relate the L² gradient and the unweighted Sobolev gradient in weak form as

⟨(1 − ∇²)∇_s ψ(p), h⟩_{L²} = ⟨A(p) − f, h⟩_{L²}.  (32)

Similarly, using (16) and (27), the weighted Sobolev gradient can be related to the L² gradient:

⟨(1 − w²∇²)∇_w ψ(p), h⟩_{L²} = ⟨A(p) − f, h⟩_{L²}.  (33)

In order to find the gradients, we represent the above systems in weak formulation as

π∫_Ω (δ_t p² + (1 − δ_t)p − f)h + ∫_Ω εδ_t ∇p·∇h = π∫_Ω ∇_s ψ(p) h + ∫_Ω ∇(∇_s ψ(p))·∇h,  (34)

π∫_Ω (δ_t p² + (1 − δ_t)p − f)h + ∫_Ω εδ_t ∇p·∇h = π∫_Ω ∇_w ψ(p) h + ∫_Ω w² ∇(∇_w ψ(p))·∇h.  (35)
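To illustrate the linear solves behind (32)-(35), here is a self-contained 1-D finite-difference sketch (ours, not the paper's FreeFem++ code): given an L² gradient g on interior grid points with zero Dirichlet values, the unweighted and weighted Sobolev gradients are obtained by solving (1 − ∇²)g_s = g and (1 − w²∇²)g_w = g. Grid size, the oscillatory test gradient, and the weight value are illustrative choices.

```python
import numpy as np

def sobolev_gradient(g, dx, w=1.0):
    """Return g_S solving (I - w^2 d^2/dx^2) g_S = g on interior grid
    points with zero Dirichlet values -- a discrete form of (32)/(33)."""
    n = len(g)
    main = np.full(n, 1.0 + 2.0 * w**2 / dx**2)
    off = np.full(n - 1, -(w**2) / dx**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, g)

# an oscillatory L2 gradient sampled on interior nodes of [0, 1]
dx = 1.0 / 100
x = np.arange(dx, 1.0, dx)
g = np.sin(np.pi * x) + 0.5 * np.sin(40 * np.pi * x)

g_s = sobolev_gradient(g, dx)          # unweighted Sobolev gradient (w = 1)
g_w = sobolev_gradient(g, dx, w=0.1)   # weighted, w playing the role of eps*delta_t

# the preconditioner damps high-frequency content, so the descent
# direction varies far less sharply than the raw L2 gradient
print(np.abs(np.diff(g_s)).max() < np.abs(np.diff(g)).max())  # True
print(np.abs(np.diff(g_w)).max() < np.abs(np.diff(g)).max())  # True
```

The smoothed directions are why, as noted above, the step size λ need not shrink as the mesh is refined: the stiff high-frequency modes that force small steps in L² descent are strongly damped by the (weighted) Sobolev preconditioner.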

It is seen that when using the weighted Sobolev gradient the step size λ does not have to be reduced as the numerical grid becomes finer, and the number of minimization steps remains reasonable. At the same time the conceptual simplicity and elegance of the steepest descent algorithm is retained.

4. Numerical Results

Let us consider a two-dimensional domain Ω in the form of a circle with centre at the origin and radius 12. An elliptic region with border x(t) = 6 cos(t), y(t) = 2 sin(t), t ∈ [0, 2π], is removed. To start the minimization process we specify p = 2.0 as the initial state; the boundary condition is p = 1 on the outer boundary and p = −1 on the inner boundary. We also let ε = 0.01 and time step δ_t = 0.95.

For the implementation in FreeFem++ one needs to specify the number of nodes on each border. After that a mesh is formed by triangulating the region.

FreeFem++ then solves equations of the type given by (35), which can be used to compute the gradients in H¹ and H¹_w. We performed numerical experiments by specifying different numbers of nodes on each border. For each time step δ_t, the functional defined by (24) was minimized using steepest descent steps with both the H¹ and H¹_w gradients until the infinity norm of the gradient was less than 10⁻⁶. The system was evolved over 15 time steps. The results are reported in Table 1.

Table 3: Comparison of Newton's method and steepest descent in H¹_w for different values of ε (NC denotes no convergence; the Error column gives the stopping tolerance on the infinity norm of the gradient).

         ε = 1.0            ε = 0.1            ε = 0.01
Newton   H¹_w      Newton   H¹_w      Newton   H¹_w      Error
10       13        30       10        NC       13        10⁻⁵
11       20        38       16        NC       22        10⁻⁷
11       28        NC       22        NC       34        10⁻⁹

Table 1 shows that, as the mesh is refined, the weighted Sobolev gradient becomes more and more efficient compared with the unweighted Sobolev gradient. It is also observed that as ε decreases, the performance of steepest descent in H¹_w improves relative to that in H¹.

Let us now consider a three-dimensional domain Ω in the form of a cube with center at the origin and radius 8. To start the minimization process we specify p = 0.0 as the initial state, and the boundary condition is set as p = 1 on the top and bottom faces and p = −1 on the front, back, left, and right faces of the cube. We also let ε = 0.1 and time step δ_t = 0.95. Once again the functional was minimized using steepest descent with both the H¹ and H¹_w gradients until the infinity norm of the gradient was less than 10⁻⁶, using the same software. Numerical experiments were performed by varying the number of nodes specified on each border. The results are recorded in Table 2.

By refining the mesh with increasing M, the efficiency of the weighted Sobolev gradient, in terms of both the minimization steps taken for convergence and the CPU time, increases. By reducing the value of ε, the advantage of steepest descent in H¹_w over H¹ becomes even more pronounced.

5. Comparison with Newton's Method

In this section a comparison is given between Newton's method and the Sobolev gradient method with weighting. Newton's method is considered one of the optimal methods for solving these kinds of problems, but its convergence requires a good initial guess. For the comparison between the two methods we therefore take a good initial guess when implementing Newton's method, so that the two methods can be compared fairly. In variational form the given nonlinear problem can be written as

⟨F(p), h⟩ = ∫_Ω (δ_t p² + (1 − δ_t)p − f)h + ∫_Ω εδ_t ∇p·∇h = 0  ∀h ∈ H¹_0(Ω),  (36)

where F(p) denotes the weak residual ψ′(p) of the functional (24).

To apply Newton's method we need the Gateaux derivative F′, which is defined by

⟨F′(u_n)c_n, v⟩ = ⟨F(u_n), v⟩  ∀v ∈ H¹_0(Ω).  (37)

A linear solver is required to solve (37) for the correction c_n. The Newton iteration scheme then reads

u_{n+1} = u_n − c_n.  (38)
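As an illustrative (not the authors') implementation, the iteration (37)-(38) can be sketched for a 1-D analogue of the problem, with residual F(p) = δ_t p² + (1 − δ_t)p − f − εδ_t p″ on a uniform grid with zero Dirichlet values; the grid size, ε, δ_t, and f ≡ 1 are our choices.

```python
import numpy as np

# 1-D model residual F(p) = dt*p^2 + (1-dt)*p - f - eps*dt*p''
# discretized on a uniform grid with zero Dirichlet boundary values.
eps, dt = 0.1, 0.95
n, dx = 99, 1.0 / 100
f = np.ones(n)

D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / dx**2

def F(p):
    return dt * p**2 + (1 - dt) * p - f - eps * dt * (D2 @ p)

def J(p):
    # Gateaux derivative F'(p): c -> (2*dt*p + 1 - dt)*c - eps*dt*c''
    return np.diag(2 * dt * p + (1 - dt)) - eps * dt * D2

p = np.zeros(n)
for _ in range(50):               # Newton: p <- p - c, with F'(p) c = F(p)
    c = np.linalg.solve(J(p), F(p))
    p = p - c
    if np.linalg.norm(F(p), np.inf) < 1e-10:
        break
print(np.linalg.norm(F(p), np.inf) < 1e-10)  # True
```

Each iteration requires one linear solve with the Jacobian, which is the cost the text refers to; from a reasonable starting point the residual drops quadratically once the iterate is close to the solution.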

An example is solved in the two-dimensional case. We let Ω be a circle with centre at the origin and radius 12, from which an elliptic region with border x(t) = 6 cos(t), y(t) = 2 sin(t), t ∈ [0, 2π], is removed. The initial state was p = 2.0, and the boundary condition was p = 1 on the outer boundary and p = −1 on the inner boundary. The time step was set as δ_t = 0.95. The system was evolved over one time step. The minimization was performed until the infinity norm of the gradient was less than a set tolerance. We specified 20 nodes on each border to obtain the results.

Results for different values of ε are recorded in Table 3; the term NC denotes no convergence. From the table one can observe that, when it converges, Newton's method performs better than steepest descent in H¹_w. For strict stopping criteria, however, Newton's method does not converge: it was superior at the start of the minimization process but was not able to handle low-frequency errors. On the other hand, steepest descent in H¹_w takes more iterations but continues to converge even for very tight stopping criteria. As the diffusion coefficient ε decreases, steepest descent in H¹_w always manages to converge, whereas this is not the case for Newton's method.

6. Summary and Conclusions

In this paper, a minimization scheme based on the Sobolev gradient methods [23] is developed for the solution of the problem under consideration. The performance of descent in H¹_w is better than that of descent in H¹ as the spacing of the numerical grid is refined. We have indicated a systematic way to choose the underlying space and demonstrated the importance of weighting for developing efficient codes.

The convergence of Newton's method depends on a good initial guess, sufficiently close to a local minimum. To implement Newton's method we need to evaluate the inverse of the Hessian matrix, which can be expensive. A failure of Newton's method has been shown in [11, 18] for a large jump discontinuity and under grid refinement, but this is not the case for the steepest descent method. So a broader range of problems can be addressed by using steepest descent in an appropriate Sobolev space.

One of the advantages of the Sobolev gradient methods is that they still manage to converge even from a rough initial guess. The weighted Sobolev gradient offers a robust alternative for problems which are highly nonlinear or have large discontinuities. An interesting project would be to compare the performance of Sobolev gradient methods with a multigrid technique in a finite element setting.


Acknowledgments

The authors are thankful for the precise and kind remarks of the learned referees. The first author acknowledges the Higher Education Commission (HEC), Pakistan, for providing a research grant through a postdoctoral fellowship.

References

[1] M. Stynes, "Steady-state convection-diffusion problems," Acta Numerica, vol. 14, pp. 445–508, 2005.

[2] C. Xenophontos and S. R. Fulton, "Uniform approximation of singularly perturbed reaction-diffusion problems by the finite element method on a Shishkin mesh," Numerical Methods for Partial Differential Equations, vol. 19, no. 1, pp. 89–111, 2003.

[3] W. T. Mahavier, "A numerical method utilizing weighted Sobolev descent to solve singular differential equations," Nonlinear World, vol. 4, no. 4, pp. 435–455, 1997.

[4] A. Majid and S. Sial, "Application of Sobolev gradient method to Poisson-Boltzmann system," Journal of Computational Physics, vol. 229, no. 16, pp. 5742–5754, 2010.

[5] J. Karatson and L. Loczi, "Sobolev gradient preconditioning for the electrostatic potential equation," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1093–1104, 2005.

[6] J. Karatson and I. Farago, "Preconditioning operators and Sobolev gradients for nonlinear elliptic problems," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1077–1092, 2005.

[7] J. Karatson, "Constructive Sobolev gradient preconditioning for semilinear elliptic systems," Electronic Journal of Differential Equations, vol. 75, pp. 1–26, 2004.

[8] J. J. Garcia-Ripoll, V. V. Konotop, B. Malomed, and V. M. Perez-Garcia, "A quasi-local Gross-Pitaevskii equation for attractive Bose-Einstein condensates," Mathematics and Computers in Simulation, vol. 62, no. 1-2, pp. 21–30, 2003.

[9] N. Raza, S. Sial, S. S. Siddiqi, and T. Lookman, "Energy minimization related to the nonlinear Schrodinger equation," Journal of Computational Physics, vol. 228, no. 7, pp. 2572–2577, 2009.

[10] N. Raza, S. Sial, and J. W. Neuberger, "Numerical solution of Burgers' equation by the Sobolev gradient method," Applied Mathematics and Computation, vol. 218, no. 8, pp. 4017–4024, 2011.

[11] A. Majid and S. Sial, "Approximate solutions to Poisson-Boltzmann systems with Sobolev gradients," Journal of Computational Physics, vol. 230, no. 14, pp. 5732–5738, 2011.

[12] W. B. Richardson Jr., "Sobolev preconditioning for the Poisson-Boltzmann equation," Computer Methods in Applied Mechanics and Engineering, vol. 181, no. 4, pp. 425–436, 2000.

[13] W. B. Richardson Jr., "Sobolev gradient preconditioning for image-processing PDEs," Communications in Numerical Methods in Engineering, vol. 24, no. 6, pp. 493–504, 2008.

[14] R. J. Renka, "Constructing fair curves and surfaces with a Sobolev gradient method," Computer Aided Geometric Design, vol. 21, no. 2, pp. 137–149, 2004.

[15] S. Sial, J. Neuberger, T. Lookman, and A. Saxena, "Energy minimization using Sobolev gradients: application to phase separation and ordering," Journal of Computational Physics, vol. 189, no. 1, pp. 88–97, 2003.

[16] S. Sial, J. Neuberger, T. Lookman, and A. Saxena, "Energy minimization using Sobolev gradients: finite-element setting," in Proceedings of the World Conference on 21st Century Mathematics, Lahore, Pakistan, 2005.

[17] N. Raza, S. Sial, and S. Siddiqi, "Approximating time evolution related to Ginzburg-Landau functionals via Sobolev gradient methods in a finite-element setting," Journal of Computational Physics, vol. 229, no. 5, pp. 1621–1625, 2010.

[18] N. Raza, S. Sial, and S. S. Siddiqi, "Sobolev gradient approach for the time evolution related to energy minimization of Ginzburg-Landau functionals," Journal of Computational Physics, vol. 228, no. 7, pp. 2566–2571, 2009.

[19] S. Sial, "Sobolev gradient algorithm for minimum energy states of s-wave superconductors: finite-element setting," Superconductor Science and Technology, vol. 18, pp. 675–677, 2005.

[20] B. M. Brown, M. Jais, and I. W. Knowles, "A variational approach to an elastic inverse problem," Inverse Problems, vol. 21, no. 6, pp. 1953–1973, 2005.

[21] R. Nittka and M. Sauter, "Sobolev gradients for differential algebraic equations," Electronic Journal of Differential Equations, vol. 42, pp. 1–31, 2008.

[22] N. Raza, S. Sial, and J. W. Neuberger, "Numerical solutions of integrodifferential equations using Sobolev gradient methods," International Journal of Computational Methods, vol. 9, Article ID 1250046, 2012.

[23] J. W. Neuberger, Sobolev Gradients and Differential Equations, vol. 1670 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1997.

[24] R. J. Renka and J. W. Neuberger, "Sobolev gradients: introduction, applications, problems," Contemporary Mathematics, vol. 257, pp. 85–99, 2004.

[25] F. Hecht, O. Pironneau, and K. Ohtsuka, "FreeFem++ Manual," http://www.freefem.org.

[26] R. A. Adams, Sobolev Spaces, Academic Press, New York, NY, USA, 1975.


Page 3: Research Article Numerical Solutions of Singularly ...downloads.hindawi.com/journals/jfs/2013/542897.pdf · Department of Mathematics and Statistics, McMaster University, Hamilton,

Journal of Function Spaces and Applications 3

3. Gradients and Minimization

Each inner product corresponds to a gradient and a descent direction. The performance of the steepest descent minimization process can be improved by defining gradients in a suitably chosen Sobolev space. How to choose the inner product best suited to a given problem remains an open question.

The Sobolev space $H^1(\Omega)$ is defined as
$$H^1(\Omega) = \{p \in L^2(\Omega) : D^{\alpha}p \in L^2(\Omega), \ 0 \le \alpha \le 1\}, \qquad (14)$$

where $D^{\alpha}$ denotes the weak derivative of $p$ of order $\alpha$. $H^1(\Omega)$ is a Hilbert space with the norm defined by
$$\|p\|_{H^1}^2 = \int_\Omega |\nabla p|^2 + |p|^2. \qquad (15)$$

Inspired by Mahavier's work on weighted Sobolev gradients [3], we define a new norm:
$$\|p\|_{H^1_w}^2 = \int_\Omega |w\nabla p|^2 + |p|^2. \qquad (16)$$

This norm accounts for the weight $w = \epsilon\delta_t$, which affects the derivative term in (24). We call the Sobolev space equipped with the norm (16) a weighted Sobolev space.
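On a uniform one-dimensional grid the two norms are easy to compare directly. The sketch below is illustrative only (plain Python, forward differences, a hypothetical sample function); the paper itself works in a finite element setting:

```python
# Discrete analogues of the H^1 norm (15) and weighted H^1_w norm (16)
# on a uniform 1D grid (illustrative sketch; the paper uses finite elements).

def h1_norm_sq(p, dx, w=1.0):
    """Riemann-sum version of ∫ |w p'|^2 + |p|^2; w = 1 gives the H^1 norm."""
    grad_part = sum((w * (p[i + 1] - p[i]) / dx) ** 2
                    for i in range(len(p) - 1)) * dx
    l2_part = sum(v * v for v in p) * dx
    return grad_part + l2_part

n = 100
dx = 1.0 / n
p = [i * dx * (1 - i * dx) for i in range(n + 1)]   # sample function x(1 - x)

eps, dt = 0.01, 0.95
w = eps * dt                    # the weight w = ϵδt used in the text
unweighted = h1_norm_sq(p, dx)
weighted = h1_norm_sq(p, dx, w=w)
print(weighted < unweighted)    # True: the small weight damps the derivative term
```

With $w \ll 1$ the weighted norm deliberately de-emphasizes the derivative term, which is what later makes the associated gradient tolerant of fine grids.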

We now show that the weighted Sobolev space with the norm (16), denoted $H^1_w$, is a Hilbert space: first we verify that it satisfies the properties of an inner product space, and then we show that it is complete.

Let $\langle\cdot,\cdot\rangle_w$ denote the inner product associated with the norm (16):
$$\langle p, p\rangle_w = \langle p, p\rangle + \langle w\nabla p, w\nabla p\rangle, \qquad (17)$$

where $\langle\cdot,\cdot\rangle$ is the usual $L^2$ inner product on the vector space $\mathbb{R}^M$, defined by $\langle p, q\rangle = \sum_i p(i)q(i)$. Consider the following:

$$\langle p, p\rangle_w = \int_\Omega |w\nabla p|^2 + |p|^2 > 0, \qquad (18)$$
and $\langle p, p\rangle_w = 0$ if $p = 0$;
$$\begin{aligned}
\langle p, q\rangle_w &= \langle p, q\rangle + \langle w\nabla p, w\nabla q\rangle = \langle q, p\rangle + \langle w\nabla q, w\nabla p\rangle = \langle q, p\rangle_w,\\
\langle p, q + r\rangle_w &= \langle p, q + r\rangle + \langle w\nabla p, w\nabla(q + r)\rangle\\
&= \langle p, q\rangle + \langle w\nabla p, w\nabla q\rangle + \langle p, r\rangle + \langle w\nabla p, w\nabla r\rangle\\
&= \langle p, q\rangle_w + \langle p, r\rangle_w,\\
\langle \alpha p, q\rangle_w &= \langle \alpha p, q\rangle + \langle w\alpha\nabla p, w\nabla q\rangle = \alpha\langle p, q\rangle + \alpha\langle w\nabla p, w\nabla q\rangle = \alpha\langle p, q\rangle_w. \qquad (19)
\end{aligned}$$

Let $p_1, p_2, \ldots$ be a Cauchy sequence in $H^1_w$. Since $L^2$ is complete, there exists $p \in L^2$ such that $\|p_n - p\| \to 0$ and $\|\nabla p_n - \nabla p\| \to 0$. Therefore
$$\|p_n - p\|_w^2 = \int_\Omega |w\nabla(p_n - p)|^2 + |p_n - p|^2 = \int_\Omega |w|^2|\nabla p_n - \nabla p|^2 + |p_n - p|^2 \to 0. \qquad (20)$$

Hence $H^1_w$ is complete with respect to the norm defined by (16).

To incorporate the Dirichlet boundary conditions, we define a perturbation subspace $L^2_0(\Omega)$ of functions such that
$$L^2_0(\Omega) = \{p \in L^2(\Omega) : p = 0 \text{ on } \Gamma\}. \qquad (21)$$

Here $\Gamma$ denotes the boundary of the domain $\Omega$. The perturbation subspaces related to $H^1$ and $H^1_w$ are $H^1_0 = L^2_0 \cap H^1$ and $H^1_{w,0} = L^2_0 \cap H^1_w$, respectively.

We need to find the gradient $\nabla\psi(p)$ of a functional $\psi(p)$ associated with the original problem, and then locate a zero of that gradient.

Given an energy functional
$$\psi(p) = \int_\Omega R(p, \nabla p), \qquad (22)$$

the Fréchet derivative of $\psi(p)$ is a bounded linear functional $\psi'(p)$ defined by
$$\psi'(p)h = \lim_{\alpha\to 0} \frac{\psi(p + \alpha h) - \psi(p)}{\alpha} \quad \text{for } h \in H^1_0(\Omega). \qquad (23)$$

The energy functional in our case is given by
$$\psi(p) = \int_\Omega \delta_t\frac{p^3}{3} + (1 - \delta_t)\frac{p^2}{2} - fp + \delta_t\frac{\epsilon}{2}|\nabla p|^2; \qquad (24)$$

then according to (23) we have
$$\begin{aligned}
\psi'(p)h = \lim_{\alpha\to 0}\frac{1}{\alpha}\int_\Omega &\ \delta_t\frac{(p+\alpha h)^3}{3} + (1-\delta_t)\frac{(p+\alpha h)^2}{2} - f(p+\alpha h) + \delta_t\frac{\epsilon}{2}|\nabla(p+\alpha h)|^2\\
&- \delta_t\frac{p^3}{3} - (1-\delta_t)\frac{p^2}{2} + fp - \delta_t\frac{\epsilon}{2}|\nabla p|^2. \qquad (25)
\end{aligned}$$

Simplifying and applying the limit, we have
$$\psi'(p)h = \int_\Omega (\delta_t p^2 + (1-\delta_t)p - f)h + \int_\Omega \epsilon\delta_t\nabla p\cdot\nabla h. \qquad (26)$$
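As a sanity check, the directional derivative of the energy (24) can be compared with the difference quotient (23) on a one-dimensional grid. This is an illustrative sketch in plain Python (hypothetical grid, $f \equiv 1$, Riemann sums), not the paper's finite element code:

```python
# Compare the analytic directional derivative of the energy (24) with the
# difference quotient (23) on a uniform 1D grid (illustrative sketch).
import math

n = 200
dx = 1.0 / n
dt, eps = 0.95, 0.01
x = [i * dx for i in range(n + 1)]
p = [math.sin(math.pi * xi) for xi in x]    # trial state
h = [xi * (1 - xi) for xi in x]             # direction, zero on the boundary
f = [1.0] * (n + 1)                         # hypothetical source term

def psi(p):
    """Energy functional (24) via simple Riemann sums."""
    bulk = sum(dt * pi ** 3 / 3 + (1 - dt) * pi ** 2 / 2 - fi * pi
               for pi, fi in zip(p, f)) * dx
    grad = sum(((p[i + 1] - p[i]) / dx) ** 2 for i in range(n)) * dx
    return bulk + dt * eps / 2 * grad

def dpsi(p, h):
    """Directional derivative: ∫(δt p² + (1-δt)p - f)h + ϵδt ∫ p'·h'."""
    bulk = sum((dt * pi ** 2 + (1 - dt) * pi - fi) * hi
               for pi, fi, hi in zip(p, f, h)) * dx
    grad = sum((p[i + 1] - p[i]) * (h[i + 1] - h[i]) for i in range(n)) / dx
    return bulk + eps * dt * grad

alpha = 1e-6
quotient = (psi([pi + alpha * hi for pi, hi in zip(p, h)]) - psi(p)) / alpha
print(abs(quotient - dpsi(p, h)) < 1e-5)    # True: the two derivatives agree
```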

Let $\nabla\psi(p)$, $\nabla_s\psi(p)$, and $\nabla_w\psi(p)$ denote the gradients in $L^2$, $H^1$, and $H^1_w$, respectively. By using (7) we can write
$$\psi'(p)h = \langle\nabla\psi(p), h\rangle_{L^2} = \langle\nabla_s\psi(p), h\rangle_{H^1} = \langle\nabla_w\psi(p), h\rangle_{H^1_w}. \qquad (27)$$


Table 1: Numerical results of steepest descent in $H^1$, $H^1_w$ using $\delta_t = 0.95$, $\epsilon = 0.01$ for fifteen time steps in the two-dimensional case.

    λ(H¹)   λ(H¹_w)   Iter(H¹)   Iter(H¹_w)   CPU(H¹)   CPU(H¹_w)   M    Triangles
    0.8     0.035     948        793          668       252         10   24
    0.8     0.035     1782       1137         4529      1487        20   152
    0.8     0.035     2627       1674         12605     407         30   322
    0.8     0.035     2309       1621         21516     15584       40   348

Table 2: Numerical results of steepest descent in $H^1$, $H^1_w$ using $\delta_t = 0.95$ for fifteen time steps in the three-dimensional case.

    λ(H¹)   λ(H¹_w)   Iter(H¹)   Iter(H¹_w)   CPU(H¹)   CPU(H¹_w)   M    Triangles
    1.2     0.5       3962       1245         2293      462         5    50
    1.2     0.5       45291      14872        3302      474         10   200
    1.2     0.5       252143     81403        3986      499         20   800

Thus the gradient in $L^2$ is
$$\nabla\psi(p) = \delta_t p^2 + (1-\delta_t)p - f - \epsilon\delta_t\nabla^2 p. \qquad (28)$$

To satisfy the boundary conditions we require gradients that are zero on the boundary of $\Omega$, so instead of $\nabla\psi(p)$ we use $\pi\nabla\psi(p)$, where $\pi$ is a projection that sets the values of the test function $h$ to zero on the boundary of the system. To implement the Sobolev gradient method we use the software FreeFem++ [25], which is designed to solve PDEs by the finite element method. To find $\pi\nabla\psi(p)$ we therefore need to solve the following:

$$\pi\int_\Omega (\delta_t p^2 + (1-\delta_t)p - f)h + \int_\Omega \epsilon\delta_t\nabla p\cdot\nabla h = \pi\int_\Omega \nabla\psi(p)h + \int_\Omega \nabla\nabla\psi(p)\cdot\nabla h. \qquad (29)$$

In other words, for $p$ in $L^2(\Omega)$, find $\nabla\psi(p) \in L^2_0$ such that
$$\langle\nabla\psi(p), h\rangle_{L^2} = \langle A(p) - f, h\rangle_{L^2} \quad \forall h \in L^2_0(\Omega), \qquad (30)$$

where
$$\langle A(p), h\rangle_{L^2} = \int_\Omega (\delta_t p^2 + (1-\delta_t)p)h + \int_\Omega \epsilon\delta_t\nabla p\cdot\nabla h. \qquad (31)$$

This gradient suffers from the CFL condition, a fact well known to numerical analysts. By using (15) and (27) we can relate the $L^2$ gradient and the unweighted Sobolev gradient in the weak form as

$$\langle(1 - \nabla^2)\nabla_s\psi(p), h\rangle_{L^2} = \langle A(p) - f, h\rangle_{L^2}. \qquad (32)$$

Similarly, using (16) and (27), the weighted Sobolev gradient can be related to the $L^2$ gradient:

$$\langle(1 - w^2\nabla^2)\nabla_w\psi(p), h\rangle_{L^2} = \langle A(p) - f, h\rangle_{L^2}. \qquad (33)$$
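In one space dimension, (33) reduces to a tridiagonal linear solve: given the $L^2$ gradient $g$, the weighted gradient satisfies $(1 - w^2\nabla^2)g_w = g$ with $g_w = 0$ on the boundary. Below is a minimal sketch (plain Python, Thomas algorithm, hypothetical grid and weight); the paper instead solves the weak form in FreeFem++:

```python
# Recover the weighted Sobolev gradient from the L^2 gradient by solving
# (1 - w^2 D^2) g_w = g on a uniform 1D grid with g_w = 0 at both ends.

def weighted_gradient(g, dx, w):
    """Thomas algorithm for the tridiagonal system arising from (33)."""
    N = len(g)                         # grid points 0..N-1; ends held at zero
    a = -(w / dx) ** 2                 # off-diagonal entries of 1 - w^2 D^2
    b = 1 + 2 * (w / dx) ** 2          # diagonal entries
    cp, dp = [0.0] * N, [0.0] * N
    cp[1], dp[1] = a / b, g[1] / b
    for i in range(2, N - 1):          # forward elimination over the interior
        m = b - a * cp[i - 1]
        cp[i], dp[i] = a / m, (g[i] - a * dp[i - 1]) / m
    gw = [0.0] * N
    for i in range(N - 2, 0, -1):      # back substitution
        gw[i] = dp[i] - cp[i] * gw[i + 1]
    return gw

n = 50
dx = 1.0 / n
g = [1.0 if 0 < i < n else 0.0 for i in range(n + 1)]   # rough L^2 gradient
gw = weighted_gradient(g, dx, w=0.1)
# Maximum principle: the preconditioned gradient is smoothed and bounded by g.
print(gw[0] == 0.0 and gw[-1] == 0.0 and 0.0 < max(gw) < 1.0)
```

The operator $(1 - w^2\nabla^2)^{-1}$ damps exactly the high-frequency components that force the tiny CFL-limited steps of plain $L^2$ descent, which is why the step size need not shrink with the mesh.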

To find the gradients, we represent the above systems in weak formulation as

$$\pi\int_\Omega (\delta_t p^2 + (1-\delta_t)p - f)h + \int_\Omega \epsilon\delta_t\nabla p\cdot\nabla h = \pi\int_\Omega \nabla_s\psi(p)h + \int_\Omega \nabla\nabla_s\psi(p)\cdot\nabla h, \qquad (34)$$

$$\pi\int_\Omega (\delta_t p^2 + (1-\delta_t)p - f)h + \int_\Omega \epsilon\delta_t\nabla p\cdot\nabla h = \pi\int_\Omega \nabla_w\psi(p)h + \int_\Omega w^2\nabla\nabla_w\psi(p)\cdot\nabla h. \qquad (35)$$

It is seen that when using the weighted Sobolev gradient, the step size $\lambda$ does not have to be reduced as the numerical grid becomes finer, and the number of minimization steps remains reasonable. At the same time, the conceptual simplicity and elegance of the steepest descent algorithm is retained.
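Putting the pieces together, a single time step of the weighted descent can be sketched as follows. This is a one-dimensional plain-Python analogue with a manufactured solution and hypothetical parameters, not the paper's 2D/3D finite element runs; for this sketch the weight is taken as $w = \sqrt{\epsilon\delta_t}$ so that $w^2$ matches the coefficient $\epsilon\delta_t$ of the derivative term in (24):

```python
# One time step of steepest descent in H^1_w for a 1D analogue of (24):
# L^2 gradient, preconditioning by (1 - w^2 D^2)^{-1}, fixed step lambda.
import math

n = 64
dx = 1.0 / n
dt, eps, lam = 0.95, 0.01, 0.5
w = math.sqrt(eps * dt)      # sketch choice: w^2 equals eps*dt from (24)

x = [i * dx for i in range(n + 1)]
target = [math.sin(math.pi * xi) for xi in x]    # manufactured critical point

def d2(u, i):                                    # discrete Laplacian
    return (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2

# choose f so that `target` satisfies  dt p^2 + (1-dt) p - f - eps dt p'' = 0
f = [dt * target[i] ** 2 + (1 - dt) * target[i] - eps * dt * d2(target, i)
     if 0 < i < n else 0.0 for i in range(n + 1)]

def precondition(g):         # Thomas solve of (1 - w^2 D^2) gw = g, zero ends
    a, b = -(w / dx) ** 2, 1 + 2 * (w / dx) ** 2
    cp, dp = [0.0] * (n + 1), [0.0] * (n + 1)
    cp[1], dp[1] = a / b, g[1] / b
    for i in range(2, n):
        m = b - a * cp[i - 1]
        cp[i], dp[i] = a / m, (g[i] - a * dp[i - 1]) / m
    gw = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        gw[i] = dp[i] - cp[i] * gw[i + 1]
    return gw

p = [0.0] * (n + 1)          # initial state; boundary values already exact
steps = 0
while True:
    g = [dt * p[i] ** 2 + (1 - dt) * p[i] - f[i] - eps * dt * d2(p, i)
         if 0 < i < n else 0.0 for i in range(n + 1)]
    if max(abs(v) for v in g) < 1e-6 or steps >= 5000:
        break
    p = [p[i] - lam * gwi for i, gwi in enumerate(precondition(g))]
    steps += 1

print(max(abs(p[i] - target[i]) for i in range(n + 1)) < 1e-3)
```

The same fixed $\lambda$ works unchanged as $n$ grows, whereas a plain $L^2$ descent would need $\lambda = O(\mathrm{d}x^2)$; this is the behaviour summarized in Tables 1 and 2.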

4. Numerical Results

Let us consider a two-dimensional domain $\Omega$ in the form of a circle with centre at the origin and radius 12, from which an elliptic region with border $x(t) = 6\cos(t)$, $y(t) = 2\sin(t)$, $t \in [0, 2\pi]$, is removed. To start the minimization process we specify $p = 2.0$ as the initial state; the boundary conditions are $p = 1$ on the outer boundary and $p = -1$ on the inner boundary. We also let $\epsilon = 0.01$ and time step $\delta_t = 0.95$.

For the FreeFem++ implementation one needs to specify the number of nodes on each border; a mesh is then formed by triangulating the region.

FreeFem++ then solves equations of the type given by (35), which can be used to compute the gradients in $H^1$ and $H^1_w$. We performed numerical experiments by specifying different numbers of nodes on each border. For each time step $\delta_t$, the functional defined by (24) was minimized using steepest descent steps in both $H^1$ and $H^1_w$ until the infinity norm of the gradient was less than $10^{-6}$. The system was evolved over fifteen time steps. The results are reported in Table 1.

Table 3: Comparison of Newton's method and steepest descent in $H^1_w$ for different values of $\epsilon$.

    Error   ϵ=1.0: Newton   H¹_w   ϵ=0.1: Newton   H¹_w   ϵ=0.01: Newton   H¹_w
    10⁻⁵    10              13     30              10     NC               13
    10⁻⁷    11              20     38              16     NC               22
    10⁻⁹    11              28     NC              22     NC               34

Table 1 shows that as the mesh is refined, the weighted Sobolev gradient becomes more and more efficient compared with the unweighted Sobolev gradient. It is also observed that as $\epsilon$ decreases, the performance of steepest descent in $H^1_w$ improves relative to $H^1$.

Let us now consider a three-dimensional domain $\Omega$ in the form of a cube with centre at the origin and radius 8. To start the minimization process we specify $p = 0.0$ as the initial state; the boundary conditions are $p = 1$ on the top and bottom faces and $p = -1$ on the front, back, left, and right faces of the cube. We also let $\epsilon = 0.1$ and time step $\delta_t = 0.95$. Once again the functional was minimized, with the same software, using steepest descent in both $H^1$ and $H^1_w$ until the infinity norm of the gradient was less than $10^{-6}$. Numerical experiments were performed by varying the number of nodes specified on each border. The results are recorded in Table 2.

By refining the mesh (increasing $M$), the advantage of the weighted Sobolev gradient, both in minimization steps taken for convergence and in CPU time, increases. As the value of $\epsilon$ is reduced, the superiority of steepest descent in $H^1_w$ over $H^1$ becomes more pronounced.

5. Comparison with Newton's Method

In this section a comparison is given between Newton's method and the weighted Sobolev gradient method. Newton's method is considered one of the optimal methods for solving these kinds of problems, but it requires a good initial guess for convergence. For a fair comparison between the two methods, we therefore supply Newton's method with a good initial guess. In variational form the given nonlinear problem can be written as

$$\langle\psi(p), h\rangle = \int_\Omega \delta_t\frac{p^3}{3}h + (1-\delta_t)\frac{p^2}{2}h - fph + \delta_t\frac{\epsilon}{2}|\nabla p|^2. \qquad (36)$$

To apply Newton's method we need the Gateaux derivative, defined by
$$\langle F'(u_n)c_n, v\rangle = \langle F(u_n), v\rangle \quad \forall v \in H^1_0(\Omega). \qquad (37)$$

A linear solver is required to solve (37). Newton's iteration scheme then reads
$$u_{n+1} = u_n - c_n. \qquad (38)$$
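The scheme (37)-(38) can be sketched for a one-dimensional analogue of the problem. This is plain Python with a tridiagonal Jacobian, a manufactured solution, and a deliberately good initial guess, as the text requires; the parameters are hypothetical and the paper's actual runs are 2D FreeFem++ computations:

```python
# Newton iteration (37)-(38) for a 1D analogue: solve F'(u_n) c_n = F(u_n)
# with a tridiagonal Jacobian, then update u_{n+1} = u_n - c_n.
import math

n = 64
dx = 1.0 / n
dt, eps = 0.95, 0.01
x = [i * dx for i in range(n + 1)]
target = [math.sin(math.pi * xi) for xi in x]    # manufactured solution

def d2(u, i):
    return (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2

f = [dt * target[i] ** 2 + (1 - dt) * target[i] - eps * dt * d2(target, i)
     if 0 < i < n else 0.0 for i in range(n + 1)]

def residual(u):       # F(u) = dt u^2 + (1 - dt) u - f - eps dt u''
    return [dt * u[i] ** 2 + (1 - dt) * u[i] - f[i] - eps * dt * d2(u, i)
            if 0 < i < n else 0.0 for i in range(n + 1)]

def newton_step(u, F):
    """Thomas solve of the tridiagonal system F'(u) c = F(u)."""
    off = -eps * dt / dx ** 2
    diag = [2 * dt * u[i] + (1 - dt) + 2 * eps * dt / dx ** 2
            for i in range(n + 1)]
    cp, dp = [0.0] * (n + 1), [0.0] * (n + 1)
    cp[1], dp[1] = off / diag[1], F[1] / diag[1]
    for i in range(2, n):
        m = diag[i] - off * cp[i - 1]
        cp[i], dp[i] = off / m, (F[i] - off * dp[i - 1]) / m
    c = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        c[i] = dp[i] - cp[i] * c[i + 1]
    return c

u = [0.9 * t for t in target]      # good initial guess, as the text requires
iters = 0
while max(abs(v) for v in residual(u)) > 1e-9 and iters < 25:
    c = newton_step(u, residual(u))
    u = [u[i] - c[i] for i in range(n + 1)]
    iters += 1

print(max(abs(u[i] - target[i]) for i in range(n + 1)) < 1e-7)
```

The residual collapses in a handful of iterations, matching the table's observation that Newton is fast when it converges; robustness under tighter tolerances and smaller $\epsilon$ is where the weighted descent wins.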

An example is solved in the two-dimensional case. We let $\Omega$ be the circle of radius 12 centred at the origin, with the elliptic region of border $x(t) = 6\cos(t)$, $y(t) = 2\sin(t)$, $t \in [0, 2\pi]$, removed. The initial state was $p = 2.0$, and the boundary conditions were $p = 1$ on the outer boundary and $p = -1$ on the inner boundary. The time step was set to $\delta_t = 0.95$, and the system was evolved over one time step. The minimization was performed until the infinity norm of the gradient was less than a set tolerance, with 20 nodes specified on each border.

Results for different values of $\epsilon$ are recorded in Table 3; the entry NC denotes no convergence. From the table one can observe that, when it converges, Newton's method takes fewer iterations than steepest descent in $H^1_w$, but for strict stopping criteria Newton's method fails to converge. Newton's method is superior when the minimization process starts, but it is not able to handle low-frequency errors. Steepest descent in $H^1_w$ takes more iterations, yet it continues to converge even for very tight stopping criteria. Moreover, as the diffusion coefficient $\epsilon$ decreases, steepest descent in $H^1_w$ always manages to converge, whereas Newton's method does not.

6. Summary and Conclusions

In this paper a minimization scheme based on Sobolev gradient methods [23] is developed for the solution of the problem under consideration. The performance of descent in $H^1_w$ surpasses that of descent in $H^1$ as the spacing of the numerical grid is refined. We have indicated a systematic way to choose the underlying space and demonstrated the importance of weighting for developing efficient codes.

The convergence of Newton's method depends on a good initial guess, sufficiently close to a local minimum. Implementing Newton's method also requires evaluating the inverse of the Hessian matrix, which can be expensive. Failures of Newton's method under large jump discontinuities and grid refinement have been shown in [11, 18]; this is not the case for the steepest descent method, so a broader range of problems can be addressed by using steepest descent in an appropriate Sobolev space.

One advantage of the Sobolev gradient methods is that they still manage to converge even from a rough initial guess. The weighted Sobolev gradient offers a robust alternative for problems which are highly nonlinear and have large discontinuities. An interesting project would be to compare the performance of Sobolev gradient methods with a multigrid technique in a finite element setting.


Acknowledgments

The authors are thankful for the precise and kind remarks of the learned referees. The first author acknowledges the Higher Education Commission (HEC) of Pakistan for providing a research grant through a postdoctoral fellowship.

References

[1] M. Stynes, "Steady-state convection-diffusion problems," Acta Numerica, vol. 14, pp. 445-508, 2005.

[2] C. Xenophontos and S. R. Fulton, "Uniform approximation of singularly perturbed reaction-diffusion problems by the finite element method on a Shishkin mesh," Numerical Methods for Partial Differential Equations, vol. 19, no. 1, pp. 89-111, 2003.

[3] W. T. Mahavier, "A numerical method utilizing weighted Sobolev descent to solve singular differential equations," Nonlinear World, vol. 4, no. 4, pp. 435-455, 1997.

[4] A. Majid and S. Sial, "Application of Sobolev gradient method to Poisson-Boltzmann system," Journal of Computational Physics, vol. 229, no. 16, pp. 5742-5754, 2010.

[5] J. Karatson and L. Loczi, "Sobolev gradient preconditioning for the electrostatic potential equation," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1093-1104, 2005.

[6] J. Karatson and I. Farago, "Preconditioning operators and Sobolev gradients for nonlinear elliptic problems," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1077-1092, 2005.

[7] J. Karatson, "Constructive Sobolev gradient preconditioning for semilinear elliptic systems," Electronic Journal of Differential Equations, vol. 75, pp. 1-26, 2004.

[8] J. J. García-Ripoll, V. V. Konotop, B. Malomed, and V. M. Pérez-García, "A quasi-local Gross-Pitaevskii equation for attractive Bose-Einstein condensates," Mathematics and Computers in Simulation, vol. 62, no. 1-2, pp. 21-30, 2003.

[9] N. Raza, S. Sial, S. S. Siddiqi, and T. Lookman, "Energy minimization related to the nonlinear Schrödinger equation," Journal of Computational Physics, vol. 228, no. 7, pp. 2572-2577, 2009.

[10] N. Raza, S. Sial, and J. W. Neuberger, "Numerical solution of Burgers' equation by the Sobolev gradient method," Applied Mathematics and Computation, vol. 218, no. 8, pp. 4017-4024, 2011.

[11] A. Majid and S. Sial, "Approximate solutions to Poisson-Boltzmann systems with Sobolev gradients," Journal of Computational Physics, vol. 230, no. 14, pp. 5732-5738, 2011.

[12] W. B. Richardson Jr., "Sobolev preconditioning for the Poisson-Boltzmann equation," Computer Methods in Applied Mechanics and Engineering, vol. 181, no. 4, pp. 425-436, 2000.

[13] W. B. Richardson Jr., "Sobolev gradient preconditioning for image-processing PDEs," Communications in Numerical Methods in Engineering, vol. 24, no. 6, pp. 493-504, 2008.

[14] R. J. Renka, "Constructing fair curves and surfaces with a Sobolev gradient method," Computer Aided Geometric Design, vol. 21, no. 2, pp. 137-149, 2004.

[15] S. Sial, J. Neuberger, T. Lookman, and A. Saxena, "Energy minimization using Sobolev gradients: application to phase separation and ordering," Journal of Computational Physics, vol. 189, no. 1, pp. 88-97, 2003.

[16] S. Sial, J. Neuberger, T. Lookman, and A. Saxena, "Energy minimization using Sobolev gradients: finite-element setting," in Proceedings of the World Conference on 21st Century Mathematics, Lahore, Pakistan, 2005.

[17] N. Raza, S. Sial, and S. Siddiqi, "Approximating time evolution related to Ginzburg-Landau functionals via Sobolev gradient methods in a finite-element setting," Journal of Computational Physics, vol. 229, no. 5, pp. 1621-1625, 2010.

[18] N. Raza, S. Sial, and S. S. Siddiqi, "Sobolev gradient approach for the time evolution related to energy minimization of Ginzburg-Landau functionals," Journal of Computational Physics, vol. 228, no. 7, pp. 2566-2571, 2009.

[19] S. Sial, "Sobolev gradient algorithm for minimum energy states of s-wave superconductors: finite-element setting," Superconductor Science and Technology, vol. 18, pp. 675-677, 2005.

[20] B. M. Brown, M. Jais, and I. W. Knowles, "A variational approach to an elastic inverse problem," Inverse Problems, vol. 21, no. 6, pp. 1953-1973, 2005.

[21] R. Nittka and M. Sauter, "Sobolev gradients for differential algebraic equations," Electronic Journal of Differential Equations, vol. 42, pp. 1-31, 2008.

[22] N. Raza, S. Sial, and J. W. Neuberger, "Numerical solutions of integrodifferential equations using Sobolev gradient methods," International Journal of Computational Methods, vol. 9, Article ID 1250046, 2012.

[23] J. W. Neuberger, Sobolev Gradients and Differential Equations, vol. 1670 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1997.

[24] R. J. Renka and J. W. Neuberger, "Sobolev gradients: introduction, applications, problems," Contemporary Mathematics, vol. 257, pp. 85-99, 2004.

[25] F. Hecht, O. Pironneau, and K. Ohtsuka, "FreeFem++ Manual," http://www.freefem.org.

[26] R. A. Adams, Sobolev Spaces, Academic Press, New York, NY, USA, 1975.



[3] W T Mahavier ldquoA numerical method utilizing weightedSobolev descent to solve singular differential equationsrdquo Non-linear World vol 4 no 4 pp 435ndash455 1997

[4] AMajid and S Sial ldquoApplication of Sobolev gradientmethod toPoisson-Boltzmann systemrdquo Journal of Computational Physicsvol 229 no 16 pp 5742ndash5754 2010

[5] J Karatson and L Loczi ldquoSobolev gradient preconditioning forthe electrostatic potential equationrdquo Computers amp Mathematicswith Applications vol 50 no 7 pp 1093ndash1104 2005

[6] J Karatson and I Farago ldquoPreconditioning operators andSobolev gradients for nonlinear elliptic problemsrdquo Computersamp Mathematics with Applications vol 50 no 7 pp 1077ndash10922005

[7] J Karatson ldquoConstructive Sobolev gradient preconditioningfor semilinear elliptic systemsrdquo Electronic Journal of DifferentialEquations vol 75 pp 1ndash26 2004

[8] J J Garcıa-Ripoll V V Konotop B Malomed and V M Perez-Garcıa ldquoA quasi-local Gross-Pitaevskii equation for attractiveBose-Einstein condensatesrdquo Mathematics and Computers inSimulation vol 62 no 1-2 pp 21ndash30 2003

[9] N Raza S Sial S S Siddiqi and T Lookman ldquoEnergyminimization related to the nonlinear Schrodinger equationrdquoJournal of Computational Physics vol 228 no 7 pp 2572ndash25772009

[10] N Raza S Sial and J W Neuberger ldquoNumerical solution ofBurgersrsquo equation by the Sobolev gradient methodrdquo AppliedMathematics and Computation vol 218 no 8 pp 4017ndash40242011

[11] A Majid and S Sial ldquoApproximate solutions to Poisson-Boltzmann systems with Sobolev gradientsrdquo Journal of Compu-tational Physics vol 230 no 14 pp 5732ndash5738 2011

[12] W B Richardson Jr ldquoSobolev preconditioning for the Poisson-Boltzmann equationrdquo Computer Methods in Applied Mechanicsand Engineering vol 181 no 4 pp 425ndash436 2000

[13] W B Richardson Jr ldquoSobolev gradient preconditioning forimage-processing PDEsrdquo Communications in Numerical Meth-ods in Engineering vol 24 no 6 pp 493ndash504 2008

[14] R J Renka ldquoConstructing fair curves and surfaces with aSobolev gradient methodrdquo Computer Aided Geometric Designvol 21 no 2 pp 137ndash149 2004

[15] S Sial J Neuberger T Lookman and A Saxena ldquoEnergyminimization using Sobolev gradients application to phaseseparation and orderingrdquo Journal of Computational Physics vol189 no 1 pp 88ndash97 2003

[16] S Sial J Neuberger T Lookman and A Saxena ldquoenergyminimization using Sobolev gradients finite-element settingrdquo

in Proceedings of the World Conference on 21st Century Math-ematics Lahore Pakistan 2005

[17] N Raza S Sial and S Siddiqi ldquoApproximating time evolutionrelated to Ginzburg-Landau functionals via Sobolev gradientmethods in a finite-element settingrdquo Journal of ComputationalPhysics vol 229 no 5 pp 1621ndash1625 2010

[18] N Raza S Sial and S S Siddiqi ldquoSobolev gradient approach forthe time evolution related to energyminimization of Ginzburg-Landau functionalsrdquo Journal of Computational Physics vol 228no 7 pp 2566ndash2571 2009

[19] S Sial ldquoSobolev gradient algorithm for minimum energy statesof s-wave superconductors finite-element settingrdquo Supercon-ductor Science and Technology vol 18 pp 675ndash677 2005

[20] BM BrownM Jais and IWKnowles ldquoA variational approachto an elastic inverse problemrdquo Inverse Problems vol 21 no 6 pp1953ndash1973 2005

[21] R Nittka and M Sauter ldquoSobolev gradients for differentialalgebraic equationsrdquoElectronic Journal of Differential Equationsvol 42 pp 1ndash31 2008

[22] N Raza S Sial and J W Neuberger ldquoNumerical solutions ofintegrodifferential equations using Sobolev gradient methodsrdquoInternational Journal of Computational Methods vol 9 ArticleID 1250046 2012

[23] J W Neuberger Sobolev Gradients and Differential Equationsvol 1670 of Lecture Notes in Mathematics Springer BerlinGermany 1997

[24] R J Renka and J W Neuberger ldquoSobolev gradients Introduc-tion applications problemsrdquo Contemporary Mathematics vol257 pp 85ndash99 2004

[25] F Hecht O Pironneau and K Ohtsuka ldquoFreeFem++ Manualrdquohttpwwwfreefemorg

[26] R A Adams Sobolev spaces Academic Press New York NYUSA 1975


Table 3: Comparison of Newton's method and steepest descent in H^1_w for different values of ε (iterations to convergence; NC denotes no convergence).

             ε = 1.0           ε = 0.1           ε = 0.01
   Newton    H^1_w    Newton   H^1_w    Newton   H^1_w     Error
     10       13        30      10        NC      13       10^-5
     11       20        38      16        NC      22       10^-7
     11       28        NC      22        NC      34       10^-9

norm of the gradient was less than 10^-6. The system was evolved over 15 time steps, and the results are reported in Table 1.

Table 1 shows that, as the mesh is refined, the weighted Sobolev gradient becomes increasingly efficient compared with the unweighted Sobolev gradient. It is also observed that, as ε decreases, steepest descent in H^1_w performs better than descent in H^1.

Consider now a three-dimensional domain Ω in the form of a cube with center at the origin and radius 8. To start the minimization process we specified the initial state p = 0.0, with the boundary condition p = 1 on the top and bottom faces and p = -1 on the front, back, left, and right faces of the cube. We set ε = 0.1 and time step δ_t = 0.95. Once again the functional was minimized by steepest descent in both H^1 and H^1_w until the infinity norm of the gradient was less than 10^-6, using the same software. Numerical experiments were performed by varying the number of nodes M specified on each border, and the results are recorded in Table 2.

Refining the mesh by increasing M increases the advantage of the weighted Sobolev gradient, both in the number of minimization steps taken for convergence and in CPU time. As ε decreases, the superiority of steepest descent in H^1_w over descent in H^1 becomes more pronounced.
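The idea behind descent in a weighted Sobolev space can be sketched in one dimension: the L2 gradient of a discretized double-well energy is smoothed through the metric M = I + ε D^T D before each step, which damps the high-frequency components that plague plain gradient descent. The model energy, scalings, and step size below are illustrative assumptions, not the authors' FreeFem++ implementation.

```python
import numpy as np

def weighted_sobolev_descent(n=101, eps=0.01, lr=0.2, steps=200):
    """Steepest descent for a 1D double-well model energy
    E(p) = sum_i (p_i^2 - 1)^2 / 4 + (eps/2) * p^T (D^T D) p,
    with the L2 gradient smoothed through M = I + eps * D^T D
    (a stand-in for descent in the weighted space H^1_w).
    Illustrative sketch only; energy and scalings are assumptions."""
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    # rough initial guess: a jump plus high-frequency noise
    p = np.where(x < 0.5, -1.0, 1.0) + 0.1 * np.sin(8 * np.pi * x)
    D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / h  # forward differences
    L = D.T @ D                                         # discrete -Laplacian
    M = np.eye(n) + eps * L                             # Sobolev metric
    norms = []
    for _ in range(steps):
        g = (p**2 - 1.0) * p + eps * (L @ p)   # L2 gradient, per node
        norms.append(np.max(np.abs(g)))
        p = p - lr * np.linalg.solve(M, g)     # preconditioned (Sobolev) step
    return p, norms

p, norms = weighted_sobolev_descent()
```

With M = I the same loop is plain L2 steepest descent, which stalls on the stiff high-frequency modes for small ε; the single linear solve per step is what buys the robustness reported in Tables 1 and 2.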

5. Comparison with Newton's Method

In this section, a comparison is given between Newton's method and the weighted Sobolev gradient method. Newton's method is considered one of the most effective methods for solving this kind of problem, but its convergence requires a good initial guess. For a fair comparison between the two methods, we therefore supply a good initial guess to Newton's method. In variational form, the given nonlinear problem can be written as

⟨ψ(p), h⟩ = ∫_Ω [ δ_t (p³/3) h + (1 − δ_t) (p²/2) h − f p h + δ_t (ε/2) |∇p|² ].   (36)

To apply Newton's method we need the Gateaux derivative, which is defined by

⟨F′(u_n) c_n, v⟩ = ⟨F(u_n), v⟩   for all v ∈ H^1_0(Ω).   (37)

A linear solver is required to solve (37). Newton's iteration scheme then reads

u_{n+1} = u_n − c_n.   (38)

An example is solved in the two-dimensional case. We let Ω be a circle centered at the origin with radius 12, from which an elliptic region with border x(t) = 6 cos(t), y(t) = 2 sin(t), t ∈ [0, 2π], is removed. The initial state was p = 2.0, and the boundary conditions were p = 1 on the outer boundary and p = −1 on the inner boundary. The time step was set to δ_t = 0.95, and the system was evolved over one time step. The minimization was performed until the infinity norm of the gradient was less than a set tolerance. We specified 20 nodes on each border to obtain the results.
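For reference, the two borders just described can be sampled with 20 nodes each, as one would when feeding a mesh generator; `border_nodes` is a hypothetical helper written for illustration, not part of the authors' code.

```python
import numpy as np

def border_nodes(m=20, R=12.0, a=6.0, b=2.0):
    """Sample m nodes on the outer circle of radius R and on the inner
    elliptic border x = a*cos(t), y = b*sin(t), t in [0, 2*pi)."""
    t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    outer = np.column_stack((R * np.cos(t), R * np.sin(t)))
    inner = np.column_stack((a * np.cos(t), b * np.sin(t)))
    return outer, inner

outer, inner = border_nodes()
```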

Results for different values of ε are recorded in Table 3; the entry NC denotes no convergence. From the table one can observe that Newton's method performs better than steepest descent in H^1_w for loose tolerances, but for strict stopping criteria Newton's method does not converge. Newton's method is superior when the minimization process starts but is not able to handle low-frequency errors. Steepest descent in H^1_w, on the other hand, takes more iterations but continues to converge even for very tight stopping criteria. As the diffusion coefficient ε decreases, steepest descent in H^1_w always manages to converge, which is not the case for Newton's method.

6. Summary and Conclusions

In this paper, a minimization scheme based on Sobolev gradient methods [23] is developed for the solution of the problem under consideration. Descent in H^1_w outperforms descent in H^1 as the spacing of the numerical grid is refined. We also indicate a systematic way to choose the underlying space and highlight the importance of weighting in developing efficient codes.

The convergence of Newton's method depends on a good initial guess that is sufficiently close to a local minimum. Implementing Newton's method also requires solving linear systems involving the Hessian matrix, which can at times be expensive. Failures of Newton's method under large jump discontinuities and under grid refinement have been shown in [11, 18], whereas the steepest descent method does not fail in these cases. A broader range of problems can therefore be addressed by using steepest descent in an appropriate Sobolev space.

One of the advantages of Sobolev gradient methods is that they still manage to converge even from a rough initial guess. The weighted Sobolev gradient offers a robust alternative for problems that are highly nonlinear and have large discontinuities. An interesting future project would be to compare the performance of Sobolev gradient methods with multigrid techniques in a finite element setting.


Acknowledgments

The authors are thankful for the precise and kind remarks of the learned referees. The first author acknowledges and appreciates the Higher Education Commission (HEC), Pakistan, for providing a research grant through a postdoctoral fellowship.

References

[1] M. Stynes, "Steady-state convection-diffusion problems," Acta Numerica, vol. 14, pp. 445–508, 2005.

[2] C. Xenophontos and S. R. Fulton, "Uniform approximation of singularly perturbed reaction-diffusion problems by the finite element method on a Shishkin mesh," Numerical Methods for Partial Differential Equations, vol. 19, no. 1, pp. 89–111, 2003.

[3] W. T. Mahavier, "A numerical method utilizing weighted Sobolev descent to solve singular differential equations," Nonlinear World, vol. 4, no. 4, pp. 435–455, 1997.

[4] A. Majid and S. Sial, "Application of Sobolev gradient method to Poisson-Boltzmann system," Journal of Computational Physics, vol. 229, no. 16, pp. 5742–5754, 2010.

[5] J. Karatson and L. Loczi, "Sobolev gradient preconditioning for the electrostatic potential equation," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1093–1104, 2005.

[6] J. Karatson and I. Farago, "Preconditioning operators and Sobolev gradients for nonlinear elliptic problems," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1077–1092, 2005.

[7] J. Karatson, "Constructive Sobolev gradient preconditioning for semilinear elliptic systems," Electronic Journal of Differential Equations, vol. 75, pp. 1–26, 2004.

[8] J. J. García-Ripoll, V. V. Konotop, B. Malomed, and V. M. Pérez-García, "A quasi-local Gross-Pitaevskii equation for attractive Bose-Einstein condensates," Mathematics and Computers in Simulation, vol. 62, no. 1-2, pp. 21–30, 2003.

[9] N. Raza, S. Sial, S. S. Siddiqi, and T. Lookman, "Energy minimization related to the nonlinear Schrödinger equation," Journal of Computational Physics, vol. 228, no. 7, pp. 2572–2577, 2009.

[10] N. Raza, S. Sial, and J. W. Neuberger, "Numerical solution of Burgers' equation by the Sobolev gradient method," Applied Mathematics and Computation, vol. 218, no. 8, pp. 4017–4024, 2011.

[11] A. Majid and S. Sial, "Approximate solutions to Poisson-Boltzmann systems with Sobolev gradients," Journal of Computational Physics, vol. 230, no. 14, pp. 5732–5738, 2011.

[12] W. B. Richardson Jr., "Sobolev preconditioning for the Poisson-Boltzmann equation," Computer Methods in Applied Mechanics and Engineering, vol. 181, no. 4, pp. 425–436, 2000.

[13] W. B. Richardson Jr., "Sobolev gradient preconditioning for image-processing PDEs," Communications in Numerical Methods in Engineering, vol. 24, no. 6, pp. 493–504, 2008.

[14] R. J. Renka, "Constructing fair curves and surfaces with a Sobolev gradient method," Computer Aided Geometric Design, vol. 21, no. 2, pp. 137–149, 2004.

[15] S. Sial, J. Neuberger, T. Lookman, and A. Saxena, "Energy minimization using Sobolev gradients: application to phase separation and ordering," Journal of Computational Physics, vol. 189, no. 1, pp. 88–97, 2003.

[16] S. Sial, J. Neuberger, T. Lookman, and A. Saxena, "Energy minimization using Sobolev gradients: finite-element setting," in Proceedings of the World Conference on 21st Century Mathematics, Lahore, Pakistan, 2005.

[17] N. Raza, S. Sial, and S. Siddiqi, "Approximating time evolution related to Ginzburg-Landau functionals via Sobolev gradient methods in a finite-element setting," Journal of Computational Physics, vol. 229, no. 5, pp. 1621–1625, 2010.

[18] N. Raza, S. Sial, and S. S. Siddiqi, "Sobolev gradient approach for the time evolution related to energy minimization of Ginzburg-Landau functionals," Journal of Computational Physics, vol. 228, no. 7, pp. 2566–2571, 2009.

[19] S. Sial, "Sobolev gradient algorithm for minimum energy states of s-wave superconductors: finite-element setting," Superconductor Science and Technology, vol. 18, pp. 675–677, 2005.

[20] B. M. Brown, M. Jais, and I. W. Knowles, "A variational approach to an elastic inverse problem," Inverse Problems, vol. 21, no. 6, pp. 1953–1973, 2005.

[21] R. Nittka and M. Sauter, "Sobolev gradients for differential algebraic equations," Electronic Journal of Differential Equations, vol. 42, pp. 1–31, 2008.

[22] N. Raza, S. Sial, and J. W. Neuberger, "Numerical solutions of integrodifferential equations using Sobolev gradient methods," International Journal of Computational Methods, vol. 9, Article ID 1250046, 2012.

[23] J. W. Neuberger, Sobolev Gradients and Differential Equations, vol. 1670 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1997.

[24] R. J. Renka and J. W. Neuberger, "Sobolev gradients: introduction, applications, problems," Contemporary Mathematics, vol. 257, pp. 85–99, 2004.

[25] F. Hecht, O. Pironneau, and K. Ohtsuka, "FreeFem++ Manual," http://www.freefem.org.

[26] R. A. Adams, Sobolev Spaces, Academic Press, New York, NY, USA, 1975.
