

Available online at http://jprm.sms.edu.pk/
Journal of Prime Research in Mathematics, 17(2) (2021), 111–122

A new parallel numerical algorithm for solving singular perturbation problems in partial differential equations

Khalid Mindeel Mohammed Al-Abrahemee a,*, Madeha Shaltagh Yousif a

a Department of Mathematics, College of Education, University of AL-Qadisiyhah, Iraq; Department of Applied Sciences, University of Technology, Baghdad, Iraq.

Abstract

In this study, a new method based on a neural network is used to solve singular perturbation problems in partial differential equations (SPPDEs). Specifically, a modified neural network with a parallel numerical algorithm was used to train the Levenberg-Marquardt (L-M) algorithm with new data and hypotheses. This method is generally applicable to SPPDEs. We consider the convergence of the L-M algorithm under ϑ_k = min(‖E_k‖, ‖J_kᵀ E_k‖), where ‖E_k‖ provides a local error bound and J(ϖ_k) = E′(ϖ_k) is the Jacobian. The sequence generated by the L-M algorithm converges quadratically to the solution set. We use some examples to show that the proposed algorithm, implemented in MATHEMATICA 11.2, is more efficient and accurate than the standard algorithm.

Keywords: Nonlinear equations, Local error bound, Levenberg-Marquardt algorithm.

1. Introduction

Singularly perturbed problems (SPPs) in partial differential equations (PDEs) are characterized by a small parameter multiplying the highest derivative. These problems are stiff. Many approaches have been established for solving singularly perturbed boundary value problems (SPBVPs). Currently, there are various solution methods based on artificial intelligence, one of which is Artificial Neural Networks (ANNs) (see [1]–[10]). These approaches can handle the fuzziness and uncertainty that arise when solving problems connected to real-life applications (see [11]–[20]).

* Corresponding author.
Email addresses: [email protected] (Khalid Mindeel Mohammed Al-Abrahemee), [email protected] (Madeha Shaltagh Yousif)

Received: 2 March 2021; Accepted: 26 August 2021; Published Online: 28 August 2021.

K.M.M.A. Abrahemee, M.S. Yousif, Journal of Prime Research in Mathematics, 17(2) (2021), 111–122 112

In this paper, we present a new method based on a neural network for finding solutions to singular perturbation problems in partial differential equations (SPPDEs) that avoids the difficulties associated with the Levenberg-Marquardt (L-M) algorithm. The proposed method updates the L-M algorithm with the parameter

ϑ_k = min(‖E_k‖, ‖J_kᵀ E_k‖). (1.1)

Let E(ϖ) = 0, where E(ϖ): Rⁿ → Rᵐ is continuously differentiable and E(ϖ) is Lipschitz continuous, i.e., ‖E(ϖ₂) − E(ϖ₁)‖ ≤ L‖ϖ₂ − ϖ₁‖.

We shall use a procedure with fast local convergence under a local error bound (see [1]–[3]). At the k-th iteration of the L-M algorithm [4]–[8], the following trial step is computed:

d_k = −(J_kᵀ J_k + ϑ_k I)⁻¹ J_kᵀ E_k, (1.2)

where J_k = J(ϖ_k) = E′(ϖ_k) is the Jacobian, and ϑ_k > 0 is the L-M parameter, which is updated from iteration to iteration.

2. Algorithm for Updating the L-M Algorithm

Algorithm 2.1 (L-M Algorithm).
Step 1: Choose ϖ_0 ∈ Rⁿ and ε ≥ 0; set k = 0.
Step 2: If ‖J_kᵀ E_k‖ ≤ ε, stop.
Step 3: Set ϑ_k = min(‖E_k‖, ‖J_kᵀ E_k‖) and compute d_k = −(J_kᵀ J_k + ϑ_k I)⁻¹ J_kᵀ E_k.
Step 4: Set ϖ_{k+1} = ϖ_k + d_k and k = k + 1; go to Step 2.
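As a concrete illustration, Algorithm 2.1 can be sketched in a few lines of Python (a minimal translation for a generic residual E and Jacobian J; the small test system at the bottom is a hypothetical example, not one from the paper):

```python
import numpy as np

def lm_algorithm(E, J, w0, eps=1e-12, max_iter=100):
    """Algorithm 2.1: L-M iteration with the adaptive parameter
    theta_k = min(||E_k||, ||J_k^T E_k||)."""
    w = np.asarray(w0, dtype=float)
    for k in range(max_iter):
        Ek, Jk = E(w), J(w)
        g = Jk.T @ Ek                                        # J_k^T E_k
        if np.linalg.norm(g) <= eps:                         # Step 2: stopping test
            break
        theta = min(np.linalg.norm(Ek), np.linalg.norm(g))   # Step 3: adaptive parameter
        d = np.linalg.solve(Jk.T @ Jk + theta * np.eye(w.size), -g)
        w = w + d                                            # Step 4: update
    return w, k

# Hypothetical over-determined test system E(w) = 0 with solution (1, 1)
E = lambda w: np.array([w[0] - 1.0, w[1]**2 - 1.0, (w[0] - 1.0) * w[1]])
J = lambda w: np.array([[1.0, 0.0], [0.0, 2.0 * w[1]], [w[1], w[0] - 1.0]])
w, iters = lm_algorithm(E, J, [2.0, 2.0])
```

Near the solution ϑ_k shrinks with the residual, so the step approaches the Gauss-Newton step; this is the source of the fast local convergence analyzed in Section 3.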

Hypothesis 2.2. (a) Equation (1.1) has a nonempty solution set W*.

(b) Let E(ϖ): Rⁿ → Rᵐ be a continuously differentiable function. Then, for some solution ϖ* ∈ W*, there exist constants b > 0, q_1 > 0, q_2 > 0, and L > 0 such that the following inequalities hold:

(i) ‖E(ϖ)‖ ≥ q_1 dist(ϖ, W*) ∀ ϖ ∈ N(ϖ*, b) = {ϖ : ‖ϖ − ϖ*‖ < b}, (2.1)

(ii) ‖E(ϖ′) − E(ϖ) − J(ϖ)(ϖ′ − ϖ)‖ ≤ q_2 ‖ϖ′ − ϖ‖² ∀ ϖ, ϖ′ ∈ N(ϖ*, b), (2.2)

(iii) ‖E(ϖ′) − E(ϖ)‖ ≤ L ‖ϖ′ − ϖ‖ ∀ ϖ, ϖ′ ∈ N(ϖ*, b). (2.3)

3. Convergence of L-M Algorithm 2.1

In this section, we discuss the quadratic convergence of L-M Algorithm 2.1. Let ϖ′_k ∈ W* denote the solution closest to ϖ_k, so that

‖ϖ′_k − ϖ_k‖ = dist(ϖ_k, W*). (3.1)

Lemma 3.1. Let Hypothesis 2.2 hold. If ϖ_k ∈ N(ϖ*, b), then there exist constants q_3 > 0 and L_1 > 0 such that

q_3 dist(ϖ_k, W*) ≤ ϑ_k = min(‖E_k‖, ‖J_kᵀ E_k‖) ≤ L_1 ‖ϖ′_k − ϖ_k‖. (3.2)

Proof. First, we take up the right inequality. If ϑ_k = ‖E_k‖, then, since E(ϖ′_k) = 0,

‖E_k‖ = ‖E(ϖ′_k) − E(ϖ_k)‖ ≤ L ‖ϖ′_k − ϖ_k‖,

and if ϑ_k = ‖J_kᵀ E_k‖, then

ϑ_k = ‖J_kᵀ E_k‖ ≤ ‖J_k‖ ‖E_k‖ ≤ ‖J_k‖ L ‖ϖ′_k − ϖ_k‖ ≤ L² ‖ϖ′_k − ϖ_k‖.

Let L_1 = max(L, L²); then the right-hand inequality holds.

Next, we show that the left inequality in (3.2) holds. When ϑ_k = ‖E_k‖, (2.1) gives

‖E_k‖ ≥ q_1 dist(ϖ_k, W*) ∀ ϖ_k ∈ N(ϖ*, b).

When ϑ_k = ‖J_kᵀ E_k‖, write

‖E_k‖² = E_kᵀ E_k = E_kᵀ [E(ϖ′_k) + J_k(ϖ_k − ϖ′_k)] + E_kᵀ V_k,

where

V_k = E_k − E(ϖ′_k) − J_k(ϖ_k − ϖ′_k),

so that

E_kᵀ J_k (ϖ_k − ϖ′_k) = ‖E_k‖² − E_kᵀ V_k.

From (2.1), (2.2) and (3.1), we have

‖J_kᵀ E_k‖ ‖ϖ_k − ϖ′_k‖ ≥ q_1² ‖ϖ_k − ϖ′_k‖² − L q_2 ‖ϖ_k − ϖ′_k‖³,

which means that

‖J_kᵀ E_k‖ ≥ q_1² ‖ϖ_k − ϖ′_k‖ − L q_2 ‖ϖ_k − ϖ′_k‖².

Hence, for ϖ_k sufficiently close to W*, there exists q′ > 0 such that

‖J_kᵀ E_k‖ ≥ q′ ‖ϖ_k − ϖ′_k‖.

Let q_3 = min(q_1, q′); then ϑ_k ≥ q_3 dist(ϖ_k, W*).

Lemma 3.2. Let Hypothesis 2.2 hold. If ϖ_k ∈ N(ϖ*, b/2), then there exist constants q_4 > 0 and q_5 > 0 such that the following inequalities hold:

(a) ‖d_k‖ ≤ q_4 dist(ϖ_k, W*);

(b) ‖J_k d_k + E_k‖ ≤ q_5 dist(ϖ_k, W*)^{3/2}.

Proof. Consider the function ξ_k(d) = ‖E_k + J_k d‖² + ϑ_k ‖d‖², and let d_k be its global minimizer, which exists by the convexity of ξ_k(d). Then

ξ_k(d_k) ≤ ξ_k(ϖ′_k − ϖ_k), (3.3)

and ϖ′_k ∈ N(ϖ*, b), since ‖ϖ′_k − ϖ*‖ ≤ ‖ϖ′_k − ϖ_k‖ + ‖ϖ* − ϖ_k‖ ≤ 2‖ϖ* − ϖ_k‖ ≤ b.

From the definition of ξ_k(d), (3.3), Hypothesis 2.2(b) and Lemma 3.1, we have

‖d_k‖² ≤ (1/ϑ_k) ξ_k(d_k) ≤ (1/ϑ_k) ξ_k(ϖ′_k − ϖ_k)
= (1/ϑ_k)[‖E_k + J_k(ϖ′_k − ϖ_k)‖² + ϑ_k ‖ϖ′_k − ϖ_k‖²]
≤ (1/ϑ_k)[q_2² ‖ϖ′_k − ϖ_k‖⁴ + ϑ_k ‖ϖ′_k − ϖ_k‖²]
= ((q_2² ‖ϖ′_k − ϖ_k‖²)/ϑ_k + 1) ‖ϖ′_k − ϖ_k‖²
≤ ((q_2²/q_3) ‖ϖ′_k − ϖ_k‖ + 1) ‖ϖ′_k − ϖ_k‖²,

and ‖ϖ′_k − ϖ_k‖ ≤ ‖ϖ_k − ϖ*‖ ≤ b/2.

Hence ‖d_k‖ ≤ q_4 dist(ϖ_k, W*) with q_4 = √((b q_2² + 2q_3)/(2q_3)), proving (a).

Next, we show (b). From (3.3), we have

‖J_k d_k + E_k‖² ≤ ξ_k(d_k) ≤ ξ_k(ϖ′_k − ϖ_k)
= ‖J_k(ϖ′_k − ϖ_k) + E_k‖² + ϑ_k ‖ϖ′_k − ϖ_k‖²
≤ q_2² ‖ϖ′_k − ϖ_k‖⁴ + ϑ_k ‖ϖ′_k − ϖ_k‖².

From (3.2), we get

‖J_k d_k + E_k‖ ≤ √(q_2² ‖ϖ′_k − ϖ_k‖ + L_1) ‖ϖ′_k − ϖ_k‖^{3/2}
≤ √((q_2² b + 2L_1)/2) ‖ϖ′_k − ϖ_k‖^{3/2}.

Hence,

‖J_k d_k + E_k‖ ≤ q_5 dist(ϖ_k, W*)^{3/2},

where q_5 = √((q_2² b + 2L_1)/2).

Lemma 3.3. If ϖ_{k+1}, ϖ_k ∈ N(ϖ*, b/2), then there exists a constant q_6 such that dist(ϖ_{k+1}, W*) ≤ q_6 dist(ϖ_k, W*)^{3/2}.

Proof. Since ϖ_{k+1}, ϖ_k ∈ N(ϖ*, b/2) and ϖ_{k+1} = ϖ_k + d_k, from Hypothesis 2.2 and Lemma 3.2 we get

q_1 dist(ϖ_{k+1}, W*) ≤ ‖E(ϖ_k + d_k)‖
≤ ‖J_k d_k + E_k‖ + q_2 ‖d_k‖²
≤ q_5 dist(ϖ_k, W*)^{3/2} + q_2 q_4² dist(ϖ_k, W*)²
≤ (q_5 + q_2 q_4²) dist(ϖ_k, W*)^{3/2}.

Thus dist(ϖ_{k+1}, W*) ≤ q_6 dist(ϖ_k, W*)^{3/2}, where q_6 = (q_5 + q_2 q_4²)/q_1.

We assume that {ϖ_k} converges to some ϖ* ∈ W*; the singular value decomposition (SVD) of J_k is used in the proof of the next theorem.

Theorem 3.4. Let Hypothesis 2.2 hold and let {ϖ_k} be a sequence of weights generated by L-M Algorithm 2.1. Then {ϖ_k} converges quadratically to the solution of Equation (1.1).

Proof. The iteration step of the L-M algorithm is

d_k = −(J_kᵀ J_k + ϑ_k I)⁻¹ J_kᵀ E_k.

Let the SVD of J_k be J_k = ψ_k χ_k φ_kᵀ. Then

d_k = −[(ψ_k χ_k φ_kᵀ)ᵀ(ψ_k χ_k φ_kᵀ) + ϑ_k I]⁻¹ (ψ_k χ_k φ_kᵀ)ᵀ E_k
= −[φ_k χ_kᵀ (ψ_kᵀ ψ_k) χ_k φ_kᵀ + ϑ_k I]⁻¹ (φ_k χ_kᵀ ψ_kᵀ) E_k.

Since ψ_k is orthogonal (ψ_k⁻¹ = ψ_kᵀ, so ψ_kᵀ ψ_k = I) and χ_kᵀ = χ_k is a diagonal matrix, we have

d_k = −(φ_k χ_k² φ_kᵀ + ϑ_k I)⁻¹ (φ_k χ_k ψ_kᵀ) E_k
= −[φ_k χ_k² φ_kᵀ + ϑ_k (φ_k φ_kᵀ)]⁻¹ (φ_k χ_k ψ_kᵀ) E_k   (φ_k φ_kᵀ = I)
= −φ_k (χ_k² + ϑ_k I)⁻¹ (φ_kᵀ φ_k) χ_k ψ_kᵀ E_k.

Since φ_k is orthogonal (φ_k⁻¹ = φ_kᵀ), this yields

d_k = −φ_k (χ_k² + ϑ_k I)⁻¹ χ_k ψ_kᵀ E_k. (3.4)

Now, writing J_k = ψ_1 χ_1 φ_1ᵀ + ψ_2 χ_2 φ_2ᵀ, we have

d_k = −φ_1 (χ_1² + ϑ_k I)⁻¹ χ_1 ψ_1ᵀ E_k − φ_2 (χ_2² + ϑ_k I)⁻¹ χ_2 ψ_2ᵀ E_k. (3.5)
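The equivalence between the direct step (1.2) and the SVD form (3.4) is easy to verify numerically; a small Python sketch (with a random matrix standing in for J_k, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((5, 3))     # stand-in for the Jacobian J_k
E = rng.standard_normal(5)          # stand-in for the residual E_k
theta = 0.1                         # L-M parameter theta_k > 0

# Direct step (1.2): d = -(J^T J + theta I)^{-1} J^T E
d_direct = np.linalg.solve(J.T @ J + theta * np.eye(3), -J.T @ E)

# SVD form (3.4): d = -phi (chi^2 + theta I)^{-1} chi psi^T E
psi, chi, phiT = np.linalg.svd(J, full_matrices=False)
d_svd = -phiT.T @ ((chi / (chi**2 + theta)) * (psi.T @ E))

assert np.allclose(d_direct, d_svd)
```

The SVD form makes explicit how ϑ_k damps the directions with small singular values, which is what the remainder of the proof exploits.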

Next, we need to prove that ‖ϖ_{k+1} − ϖ*‖ = O(‖ϖ_k − ϖ*‖²). By the Taylor expansion and the fact that ϖ_{k+1} = ϖ_k + d_k, we have

E(ϖ_{k+1}) = E(ϖ_k) + E′(ϖ_k)(ϖ_{k+1} − ϖ_k) + O(‖ϖ_{k+1} − ϖ_k‖²) ≈ E_k + J_k d_k.

A computation yields

E_k + J_k d_k = E_k + (ψ_1 χ_1 φ_1ᵀ + ψ_2 χ_2 φ_2ᵀ)[−φ_1(χ_1² + ϑ_k I)⁻¹ χ_1 ψ_1ᵀ E_k − φ_2(χ_2² + ϑ_k I)⁻¹ χ_2 ψ_2ᵀ E_k]
= E_k − ψ_1 χ_1 (φ_1ᵀ φ_1)(χ_1² + ϑ_k I)⁻¹ χ_1 ψ_1ᵀ E_k − ψ_1 χ_1 (φ_1ᵀ φ_2)(χ_2² + ϑ_k I)⁻¹ χ_2 ψ_2ᵀ E_k
− ψ_2 χ_2 (φ_2ᵀ φ_1)(χ_1² + ϑ_k I)⁻¹ χ_1 ψ_1ᵀ E_k − ψ_2 χ_2 (φ_2ᵀ φ_2)(χ_2² + ϑ_k I)⁻¹ χ_2 ψ_2ᵀ E_k.

Since φ is orthogonal, φ_1ᵀ φ_1 = φ_2ᵀ φ_2 = I and φ_iᵀ φ_j = 0 for i ≠ j, so this reduces to

E_k + J_k d_k = E_k − ψ_1 χ_1²(χ_1² + ϑ_k I)⁻¹ ψ_1ᵀ E_k − ψ_2 χ_2²(χ_2² + ϑ_k I)⁻¹ ψ_2ᵀ E_k.

Using I = ψ_1 ψ_1ᵀ + ψ_2 ψ_2ᵀ + ψ_3 ψ_3ᵀ and I − χ_i²(χ_i² + ϑ_k I)⁻¹ = ϑ_k(χ_i² + ϑ_k I)⁻¹, we arrive at

E(ϖ_{k+1}) ≈ ϑ_k ψ_1(χ_1² + ϑ_k I)⁻¹ ψ_1ᵀ E_k + ϑ_k ψ_2(χ_2² + ϑ_k I)⁻¹ ψ_2ᵀ E_k + ψ_3 ψ_3ᵀ E_k. (3.6)

Since {ϖ_k} converges to ϖ* superlinearly, we may assume that

L_1 ‖ϖ_k − ϖ*‖ < σ_r*/2 for all sufficiently large k, (3.7)

where σ_r* denotes the smallest singular value in χ_1*. Then ‖(χ_1² + ϑ_k I)⁻¹‖ ≤ ‖χ_1⁻²‖.

By the Lipschitz continuity of the Jacobian and the perturbation theory of singular values, we have

‖χ_1 − χ_1*‖ ≤ L_1 ‖ϖ_k − ϖ*‖,

so every singular value in χ_1 satisfies σ ≥ σ_r* − L_1 ‖ϖ_k − ϖ*‖, and hence

‖χ_1⁻²‖ ≤ 1/(σ_r* − L_1 ‖ϖ_k − ϖ*‖)².

From (3.7), we have

‖χ_1⁻²‖ < 1/(σ_r* − σ_r*/2)² = 4/σ_r*². (3.8)

Also, χ_2² + ϑ_k I ⪰ ϑ_k I, so (χ_2² + ϑ_k I)⁻¹ ⪯ ϑ_k⁻¹ I. This leads to

‖(χ_2² + ϑ_k I)⁻¹‖ ≤ ϑ_k⁻¹. (3.9)

From inequalities (3.8) and (3.9), we get

‖E_k + J_k d_k‖ ≤ ϑ_k ‖ψ_1(χ_1² + ϑ_k I)⁻¹ ψ_1ᵀ E_k‖ + ϑ_k ‖ψ_2(χ_2² + ϑ_k I)⁻¹ ψ_2ᵀ E_k‖ + ‖ψ_3 ψ_3ᵀ E_k‖
≤ (4/σ_r*²) ϑ_k ‖ψ_1 ψ_1ᵀ E_k‖ + ϑ_k ϑ_k⁻¹ ‖ψ_2 ψ_2ᵀ E_k‖ + ‖ψ_3 ψ_3ᵀ E_k‖

(because ψ is orthogonal, i.e., ψ_k⁻¹ = ψ_kᵀ)

≤ (4/σ_r*²) q_2² ‖ϖ_k − ϖ*‖ · q_2 ‖ϖ_k − ϖ*‖ + ‖(ψ_2 ψ_2ᵀ + ψ_3 ψ_3ᵀ) E_k‖
≤ (4/σ_r*²) q_2³ ‖ϖ_k − ϖ*‖² + 2 q_2 ‖ϖ_k − ϖ*‖²
= ((4/σ_r*²) q_2³ + 2 q_2) ‖ϖ_k − ϖ*‖².

Let q_7 = (4/σ_r*²) q_2³ + 2 q_2; then we get

‖E_k + J_k d_k‖ ≤ q_7 ‖ϖ_k − ϖ*‖². (3.10)

From (2.1), we get q_1 dist(ϖ_{k+1}, W*) ≤ ‖E(ϖ_{k+1})‖ = ‖E(ϖ_k + d_k)‖. By the Taylor series, we have

q_1 dist(ϖ_{k+1}, W*) ≤ ‖E_k + J_k d_k‖ + q_2 ‖d_k‖².

From (3.10) and Lemma 3.2, we get

q_1 dist(ϖ_{k+1}, W*) ≤ q_7 ‖ϖ_k − ϖ*‖² + q_4² q_2 ‖ϖ_k − ϖ*‖² = (q_7 + q_4² q_2) ‖ϖ_k − ϖ*‖².

Since dist(ϖ_k, W*) ≤ ‖d_k‖ + dist(ϖ_{k+1}, W*), we can use Lemma 3.2 to show that

‖d_{k+1}‖ = O(‖d_k‖²).

Hence, we get

‖ϖ_{k+1} − ϖ*‖ = O(‖ϖ_k − ϖ*‖²).

4. Numerical experiments

In this section, we present some numerical experiments involving L-M Algorithm 2.1. To test the efficiency of the proposed algorithm, we use the method in [10] to solve some SPPDEs. Comparing the standard L-M algorithm (SLM) with L-M Algorithm 2.1 (LMM), in which the ϑ_k values generated by Equation (1.1) are used, we found that the proposed method significantly increases the accuracy and speed of convergence and decreases the number of iterations needed to reach the goal, as shown in Tables 1 and 2 below. We implemented both algorithms in MATHEMATICA 11.2.
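For readers without MATHEMATICA, the qualitative effect in Tables 1 and 2, namely far fewer iterations with the adaptive ϑ_k than with a fixed parameter, can be reproduced in Python on a small toy system (a hypothetical example; the fixed-parameter rule below is only a simplified stand-in for the standard L-M scheme):

```python
import numpy as np

def lm_solve(E, J, w0, theta_rule, tol=1e-12, max_iter=500):
    """Generic L-M loop; theta_rule maps (E_k, g_k) to the L-M parameter."""
    w = np.asarray(w0, dtype=float)
    for k in range(1, max_iter + 1):
        Ek, Jk = E(w), J(w)
        g = Jk.T @ Ek
        if np.linalg.norm(g) <= tol:
            break
        theta = theta_rule(Ek, g)
        w = w + np.linalg.solve(Jk.T @ Jk + theta * np.eye(w.size), -g)
    return w, k

# Toy residual with solution w = (1, 1)
E = lambda w: np.array([w[0] - 1.0, w[0] * w[1] - 1.0])
J = lambda w: np.array([[1.0, 0.0], [w[1], w[0]]])

slm_rule = lambda Ek, g: 0.5                                         # fixed parameter
lmm_rule = lambda Ek, g: min(np.linalg.norm(Ek), np.linalg.norm(g))  # Algorithm 2.1

w_slm, it_slm = lm_solve(E, J, [3.0, 3.0], slm_rule)
w_lmm, it_lmm = lm_solve(E, J, [3.0, 3.0], lmm_rule)
# Both reach the solution, but the adaptive rule needs far fewer iterations.
```

With the fixed parameter the local contraction factor stays bounded away from zero, while the adaptive ϑ_k shrinks with the residual, recovering the fast local convergence of Theorem 3.4.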


Figure 1: The analytic solution of ∇²(ϑ) = 0, ϑ(u, v) = c₁(v − u) + c₂(u + v)

Figure 2: The artificial solution of ∇²(ϑ) = 0, ϑ(u, v) = c₁(v − u) + c₂(u + v), with test loss 0.587 and training loss 0.572

4.1. Examples

Consider the following second-order two-dimensional problems. A comparison of SLM and LMM is given in Tables 1 and 2.

1. Let

∇²(ϑ) = ε(∂²ϑ(u, v)/∂u² − ∂²ϑ(u, v)/∂v²), (4.1)

on the unit disk with Dirichlet boundary condition ϑ(u, v) = ε sin(uv). The analytic solution of (4.1) when ε = 0.01 is given in Fig. 1, while Fig. 2 represents the numerical approximate solution.

2. Now, we discuss the solution of ∇²(ϑ) = c under the same conditions as in Example 1, where c ∈ [0, 1] is a constant. The solution (see Fig. 3) is given by ϑ(u, v) = c₁(v − u) + c₂(u + v) + c u²/2, while the numerical solution appears in Fig. 4. This example shows that the perturbation is carried by the function itself (c = ε).

3. Next, we consider the problem ∇²(ϑ) = 0 with ϑ(u, v) = 1 for u ≥ 0.35 and ϑ(u, v) = 0 for u < 0.35 (see Fig. 5). In this example, the perturbation equation is valid for all ε. The approximate solution is given in Fig. 6.

4. We consider the 3D problem ε(u_xx + u_yy + u_zz) = 1 on the unit ball with Dirichlet boundary condition u(x, y, z) = ε sin(xyz). Fig. 7 shows the 3D solution in the ball. This satisfies Equation (1.1) when ϑ_k = min(‖E_k‖, ‖J_kᵀ E_k‖) ≅ 0.

5. Let ε(∂²ϑ(u, v)/∂u² + ∂²ϑ(u, v)/∂v²) = exp(−(u² + 10v²)) with ϑ(u, −1) = ϑ(u, 1) = ϑ(−1, v) = ϑ(1, v) = 0. Fig. 9 shows the solution of this Poisson equation in two variables on a rectangular region with zero boundary conditions, under the condition ϑ_k = min(‖E_k‖, ‖J_kᵀ E_k‖) ≅ 0.


Figure 3: The solution behaviour of ∇²(ϑ) = c, c = ε

Figure 4: The artificial solution of ∇²(ϑ) = c, with test loss 0.557 and training loss 0.541

Figure 5: The solution of ∇²(ϑ) = 0, ϑ(u, v) = 1 for u ≥ 0.35, ϑ(u, v) = 0 for u < 0.35

Figure 6: The artificial solution of ∇²(ϑ) = 0, ϑ(u, v) = 1, with test loss 0.628 and training loss 0.599


Figure 7: The analytic solution of 0.9(u_xx + u_yy + u_zz) = 1

Figure 8: The artificial solution of ε(u_xx + u_yy + u_zz) = 1, with test loss 0.561 and training loss 0.511

Figure 9: The analytic solution of 0.96(∂²ϑ(u, v)/∂u² + ∂²ϑ(u, v)/∂v²) = exp(−(u² + 10v²))


Figure 10: The neural solution of 0.96(∂²ϑ(u, v)/∂u² + ∂²ϑ(u, v)/∂v²) = exp(−(u² + 10v²)) using four neurons in the first hidden layer and two neurons in the second hidden layer, with test loss 0.568 and training loss 0.540

Figure 11: The analytic outcome of P(u, v)

6. Finally, we have the complicated problem whose solution is given in Figs. 11 and 12:

P_ε(u, v) = −ε ∇·(∇ϑ/√(1 + ‖∇ϑ‖²)) = 0, ϑ(u, v) = sin(2πε(u + v)).

The solution is the minimal surface over a unit disk with the sinusoidal boundary condition sin(2π(u + v)); this satisfies Equation (1.1) with ϑ_k = min(‖E_k‖, ‖J_kᵀ E_k‖), and the training loss is 0.501.

5. Conclusion

From the above presentation, we conclude that generalized ANNs can be effectively employed to produce the artificial solution (AS) of a class of singular perturbation problems in partial differential equations. The same principle can be utilized for other classes of PDEs. The examples showed that the approximate AS converges to the analytic solution by satisfying condition (1.1).

Figure 12: The artificial solution of P(u, v) using sin(·), with test loss 0.499


Table 1: Comparison of training performance: iterations, times and MSEs

MSE                     Time      Iters   Training Function
2.53762846397243e-6     0:00:10   700     Trainlm (SLM)
1.80571207840797e-18    0:00:02   34      Trainlm (LMM)

Table 2: Comparison of training performance: iterations, times and MSEs

MSE                     Time      Iters   Training Function
3.87386211973529e-09    0:00:18   1276    Trainlm (SLM)
2.32860157820317e-22    0:00:10   11      Trainlm (LMM)

References

[1] Bellavia, S., Macconi, M., & Morini, B. (2003). An affine scaling trust-region approach to bound-constrained nonlinear systems. Applied Numerical Mathematics, 44(3), 257-280.

[2] Gabriel, S. A., & Pang, J. S. (1994). A trust region method for constrained nonsmooth equations. In Large Scale Optimization (pp. 155-181). Springer, Boston, MA.

[3] Kanzow, C. (2001). An active set-type Newton method for constrained nonlinear systems. In Complementarity: Applications, Algorithms and Extensions (pp. 179-200). Springer, Boston, MA.

[4] Levenberg, K. (1944). A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2), 164-168.

[5] Marquardt, D. W. (1963). An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2), 431-441.

[6] Yamashita, N., & Fukushima, M. (2001). On the rate of convergence of the Levenberg-Marquardt method. In Topics in Numerical Analysis (pp. 239-249). Springer, Vienna.

[7] Haq, F. I. (2009). Numerical Solution of Boundary-Value and Initial-Boundary-Value Problems Using Spline Functions. Ph.D. Thesis, Faculty of Engineering Sciences, GIK Institute of Engineering Sciences and Technology, Topi, Pakistan.

[8] Valanarasu, T., & Ramanujam, N. (2007). Asymptotic initial-value method for second-order singular perturbation problems of reaction-diffusion type with discontinuous source term. Journal of Optimization Theory and Applications, 133(3), 371-383.

[9] Yamashita, N., & Fukushima, M. (2001). On the rate of convergence of the Levenberg-Marquardt method. In Topics in Numerical Analysis (pp. 239-249). Springer, Vienna.

[10] Tawfiq, L. N., & Al-Abrahemee, K. M. M. (2014). Design Collocation Neural Network to Solve Singular Perturbed Problems with Initial Conditions. International Journal of Modern Engineering Sciences, 3(1), 29-38.

[11] Tawfiq, L. N. M., & Abdul-Jabbar, S. A. (2015). Diagnosis of cancer using artificial neural network. International Journal of Advanced Applied Mathematics and Mechanics, 3(2), 45-49.

[12] Suhhiem, M. H. (2017). Artificial Neural Network for Solving Fuzzy Differential Equations under Generalized H-Derivation. International Journal, 5(1), 1-9.

[13] Zhumanazarova, A., & Im Cho, Y. (2020, October). Principles of Neural Network Approaches to Solving Singularly Perturbed Problems. In 2020 International Conference on Information and Communication Technology Convergence (ICTC) (pp. 446-448). IEEE.

[14] Kim, J. H., & Cho, Y. I. (2021). The Possibility of Neural Network Approach to Solve Singular Perturbed Problems. Journal of the Korea Society of Computer and Information, 26(1), 69-76.

[15] Betelin, V. B., Kryzhanovsky, B. V., Smirnov, N. N., Nikitin, V. F., Karandashev, I. M., Malsagov, M. Yu., & Mikhalchenko, E. V. (2021). Neural network approach to solve gas dynamics problems with chemical transformations. Acta Astronautica, 180, 58-65.

[16] Zhang, Z., Chen, S., Deng, X., & Liang, J. (2021). A Circadian Rhythms Neural Network for Solving Redundant Robot Manipulators Tracking Problem Perturbed by Periodic Noise. IEEE/ASME Transactions on Mechatronics.

[17] Tran, D. T., Truong, H. V. A., & Ahn, K. K. (2021). Adaptive Nonsingular Fast Terminal Sliding Mode Control of Robotic Manipulator Based Neural Network Approach. International Journal of Precision Engineering and Manufacturing, 22(3), 417-429.

[18] Guo, Y., Li, J., & Duan, R. (2021). Extended dissipativity-based control for persistent dwell-time switched singularly perturbed systems and its application to electronic circuits. Applied Mathematics and Computation, 402, 126114.

[19] Chakraborty, S. (2021). Transfer learning based multi-fidelity physics informed deep neural network. Journal of Computational Physics, 426, 109942.

[20] Suganthan, P. N., & Katuwal, R. (2021). On the origins of randomization-based feedforward neural networks. Applied Soft Computing, 105, 107239.