High dimensional sparse polynomial approximations of parametric and stochastic PDEs

Albert Cohen

Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie, Paris

with Ronald DeVore and Christoph Schwab; numerical results by Abdellah Chkifa
Banff, 2011
The curse of dimensionality
Consider a continuous function y ↦ u(y) with y ∈ [0,1]. Sample at equispaced points. Reconstruct, for example by piecewise linear interpolation.
[Figure: a smooth function u on [0,1] and its piecewise linear interpolant R(u) built from equispaced samples.]
Error in terms of the point spacing h > 0: if u has C² smoothness,

‖u − R(u)‖_{L∞} ≤ C ‖u″‖_{L∞} h².

Using piecewise polynomials of higher order, if u has C^m smoothness,

‖u − R(u)‖_{L∞} ≤ C ‖u^{(m)}‖_{L∞} h^m.

In terms of the number of samples N ∼ h^{−1}, the error is estimated by N^{−m}.

In d dimensions, u(y) = u(y₁, …, y_d) with y ∈ [0,1]^d. With uniform sampling we still have

‖u − R(u)‖_{L∞} ≤ C ‖d^m u‖_{L∞} h^m,

but the number of samples is now N ∼ h^{−d}, so the error estimate is only N^{−m/d}.
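The one dimensional rate is easy to observe numerically. Below is a minimal sketch (the test function u(y) = sin(2πy) and the grid sizes are arbitrary choices for illustration): halving h should divide the sup-norm error by about 4, matching the h² bound.

```python
# Empirical check of the O(h^2) bound for piecewise linear interpolation
# of a C^2 function u on [0, 1], sampled at equispaced points.
import numpy as np

def interp_error(u, n):
    """Sup-norm error of piecewise linear interpolation on n+1 equispaced points."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    fine = np.linspace(0.0, 1.0, 20001)        # dense grid to estimate the sup norm
    recon = np.interp(fine, nodes, u(nodes))   # R(u): the piecewise linear interpolant
    return np.max(np.abs(u(fine) - recon))

u = lambda y: np.sin(2 * np.pi * y)            # assumed smooth test function
errors = [interp_error(u, n) for n in (10, 20, 40, 80)]
rates = [errors[i] / errors[i + 1] for i in range(3)]
print(rates)   # each ratio is close to 4, i.e. error ~ h^2
```

In d dimensions the same experiment on a tensor grid would need N ∼ h^{−d} samples for the same h, which is the curse of dimensionality.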
Other sampling/reconstruction methods cannot do better!

This can be explained by the nonlinear manifold width (DeVore-Howard-Micchelli).

Let X be a normed space and K ⊂ X a compact set.

Consider maps E : K → ℝ^N (encoding) and R : ℝ^N → X (reconstruction).

Introducing the distortion of the pair (E, R) over K,

max_{u∈K} ‖u − R(E(u))‖_X,

we define the nonlinear N-width of K as

d_N(K) := inf_{E,R} max_{u∈K} ‖u − R(E(u))‖_X,

where the infimum is taken over all continuous maps (E, R).

If X = L∞ and K is the unit ball of C^m([0,1]^d), it is known that

c N^{−m/d} ≤ d_N(K) ≤ C N^{−m/d}.
High dimensional problems occur frequently
PDEs with solutions u(x, v, t) defined in phase space: d = 7.

Post-processing of numerical codes: u is the solver output as a function of input parameters (y₁, …, y_d).

Learning theory: u is the regression function of input parameters (y₁, …, y_d).

In these applications d may be of order up to 10³.

Approximation of stochastic/parametric PDEs (this talk): d = +∞.

Smoothness properties of functions should be revisited by other means than C^m classes, and appropriate approximation tools should be used.

Key ingredients:

(i) Sparsity
(ii) Variable reduction
(iii) Anisotropy
A model elliptic PDE
We consider the steady-state diffusion equation

−div(a∇u) = f in D ⊂ ℝ^m and u = 0 on ∂D,

where f = f(x) ∈ L²(D) and a = a(x, y) is a variable coefficient depending on x ∈ D and on a vector y of parameters in an affine manner:

a(x, y) = ā(x) + Σ_{j>0} y_j ψ_j(x), x ∈ D, y = (y_j)_{j>0} ∈ U := [−1,1]^ℕ,

where (ψ_j)_{j>0} is a given family of functions.

The parameters may be deterministic (control, optimization) or random (uncertainty modeling and propagation, reliability assessment).

Uniform ellipticity assumption:

(UEA) 0 < r ≤ a(x, y) ≤ R, x ∈ D, y ∈ U.

Then u : y ↦ u(y) = u(·, y) is a bounded map from U to V := H¹₀(D):

‖u(y)‖_V ≤ C₀ := ‖f‖_{V*}/r, y ∈ U, where ‖v‖_V := ‖∇v‖_{L²}.

Proof: multiply the equation by u and integrate:

r ‖u‖²_V ≤ ∫_D a∇u · ∇u = −∫_D u div(a∇u) = ∫_D u f ≤ ‖u‖_V ‖f‖_{V*}.

Objective: build a computable approximation to this map at reasonable cost, i.e. simultaneously approximate u(y) for all y ∈ U.
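The parametric problem can be sketched in one space dimension. The coefficient family ψ_j, the mean field ā ≡ 2, the right-hand side f ≡ 1, and the finite difference discretization below are all assumptions made for illustration; they satisfy (UEA), so the computed u(y) stays uniformly bounded over U.

```python
# Minimal sketch of the parametric diffusion problem with m = 1, D = (0,1):
# solve -(a(x,y) u')' = 1, u(0) = u(1) = 0, by centered finite differences.
import numpy as np

def solve(y, n=200):
    x = np.linspace(0.0, 1.0, n + 1)
    xm = 0.5 * (x[:-1] + x[1:])                      # cell midpoints
    # assumed affine coefficient: abar ≡ 2, psi_j(x) = cos(pi (j+1) x)/(j+2)^2,
    # so sum_j |psi_j| < 1/2 and a(x,y) >= r > 1.5 on U (UEA holds)
    psi = [np.cos(np.pi * (j + 1) * xm) / (j + 2) ** 2 for j in range(len(y))]
    a = 2.0 + sum(yj * p for yj, p in zip(y, psi))
    h = 1.0 / n
    A = np.zeros((n - 1, n - 1))                     # tridiagonal stiffness matrix
    for i in range(n - 1):
        A[i, i] = (a[i] + a[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -a[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -a[i + 1] / h**2
    u = np.linalg.solve(A, np.ones(n - 1))           # f ≡ 1
    return x, np.concatenate(([0.0], u, [0.0]))

# u(y) stays uniformly bounded as y ranges over the parameter box [-1,1]^3
rng = np.random.default_rng(0)
sups = [np.max(np.abs(solve(rng.uniform(-1, 1, 3))[1])) for _ in range(5)]
print(sups)
```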
Polynomial expansions
Use of multivariate polynomials in the y variable.

Sometimes referred to as "polynomial chaos" in the random setting (Ghanem-Spanos, Babuška-Tempone-Nobile-Zouraris, Karniadakis, Schwab, ...).

We study the convergence of the Taylor expansion

u(y) = Σ_{ν∈F} t_ν y^ν, where y^ν := Π_{j>0} y_j^{ν_j}.

Here F is the set of all finitely supported sequences ν = (ν_j)_{j>0} of integers (only finitely many ν_j are non-zero). The Taylor coefficients t_ν ∈ V are

t_ν := (1/ν!) ∂^ν u|_{y=0}, with ν! := Π_{j>0} ν_j! and 0! := 1.

We also studied Legendre series u(y) = Σ_{ν∈F} u_ν L_ν, where L_ν(y) := Π_{j>0} L_{ν_j}(y_j).
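The multi-index notation can be made concrete with a small hypothetical helper (multi-indices stored as short tuples, trailing zeros implied):

```python
# Hypothetical helpers for the multi-index notation y^nu and nu!.
from math import factorial, prod

def y_pow(y, nu):
    """y^nu = prod_j y_j^{nu_j} for a finitely supported multi-index nu."""
    return prod(yj ** nj for yj, nj in zip(y, nu))

def fact(nu):
    """nu! = prod_j nu_j!, with 0! = 1."""
    return prod(factorial(nj) for nj in nu)

print(y_pow((0.5, -1.0, 0.25), (2, 1, 0)))  # 0.5^2 * (-1)^1 * 0.25^0 = -0.25
print(fact((2, 1, 0)))                      # 2! * 1! * 0! = 2
```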
Sparse N-term polynomial approximation
The sequence (t_ν)_{ν∈F} is indexed by a countable set of multi-indices.

[Figure: a set Λ of multi-indices plotted in the (ν₁, ν₂, ν₃) octant.]

Objective: identify a set Λ ⊂ F with #(Λ) ≤ N such that u is well approximated in the space

V_Λ := { Σ_{ν∈Λ} c_ν y^ν ; c_ν ∈ V },

for example by the partial Taylor expansion

u_Λ(y) := Σ_{ν∈Λ} t_ν y^ν.
Best N-term approximation
A priori choices of Λ have been proposed: (anisotropic) sparse grids defined by restrictions of the type

Σ_j α_j ν_j ≤ A(N) or Π_j (1 + β_j ν_j) ≤ B(N).

Instead, we study a choice of Λ optimally adapted to u.

For all y ∈ U = [−1,1]^ℕ we have

‖u(y) − u_Λ(y)‖_V ≤ ‖Σ_{ν∉Λ} t_ν y^ν‖_V ≤ Σ_{ν∉Λ} ‖t_ν‖_V.

Best N-term approximation in the ℓ¹(F) norm: take for Λ the indices of the N largest ‖t_ν‖_V.

Observation (Stechkin): if (‖t_ν‖_V)_{ν∈F} ∈ ℓ^p(F) for some p < 1, then for this Λ,

Σ_{ν∉Λ} ‖t_ν‖_V ≤ C N^{−s}, s := 1/p − 1, C := ‖(‖t_ν‖_V)‖_{ℓ^p}.

Proof: with (t_n)_{n>0} the decreasing rearrangement of (‖t_ν‖_V), we combine

Σ_{ν∉Λ} ‖t_ν‖_V = Σ_{n>N} t_n = Σ_{n>N} t_n^{1−p} t_n^p ≤ t_N^{1−p} C^p and N t_N^p ≤ Σ_{n=1}^{N} t_n^p ≤ C^p.

Question: do we have (‖t_ν‖_V)_{ν∈F} ∈ ℓ^p(F) for some p < 1?
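Stechkin's bound is easy to check numerically. The sketch below uses a synthetic decreasing sequence t_n = n^{−3}, which belongs to ℓ^p for p = 1/2 (so s = 1/p − 1 = 1), and compares the tail after the N largest terms with C N^{−s}:

```python
# Numerical illustration of Stechkin's lemma on a synthetic sequence.
import numpy as np

p = 0.5
s = 1.0 / p - 1.0                                      # s = 1
t = np.arange(1, 100001, dtype=float) ** (-3.0)        # already decreasing
# t^p = n^{-1.5} is summable, so t is in ell^p; C is its ell^p quasi-norm
C = np.sum(t ** p) ** (1.0 / p)
for N in (10, 100, 1000):
    tail = np.sum(t[N:])                               # tail after the N largest
    print(N, tail, C * N ** (-s))                      # tail <= C * N^{-s}
```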
The main result
Theorem (Cohen-DeVore-Schwab, 2009): under the uniform ellipticity assumption (UEA), for any p < 1,

(‖ψ_j‖_{L∞})_{j≥0} ∈ ℓ^p(ℕ) ⇒ (‖t_ν‖_V)_{ν∈F} ∈ ℓ^p(F).

Interpretations:

(i) The Taylor expansion of u(y) inherits the sparsity properties of the expansion of a(y) in the ψ_j.

(ii) We approximate u(y) in L∞(U) with algebraic rate N^{−s} despite the curse of (infinite) dimensionality, because y_j becomes less influential as j grows.

(iii) The set K := {u(y) ; y ∈ U} is compact in V and has small N-width d_N(K) := inf_{dim(E)≤N} max_{v∈K} dist(v, E)_V: for all y,

u_Λ(y) := Σ_{ν∈Λ} t_ν y^ν = Σ_{ν∈Λ} y^ν t_ν ∈ E_Λ := span{t_ν ; ν ∈ Λ}.

With Λ corresponding to the N largest ‖t_ν‖_V, we find that

d_N(K) ≤ max_{y∈U} dist(u(y), E_Λ)_V ≤ max_{y∈U} ‖u(y) − u_Λ(y)‖_V ≤ C N^{−s}.
Such approximation rates cannot be proved for the usual a-priori choices of Λ.
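The advantage of the adapted Λ over an a priori choice can be illustrated on synthetic coefficient norms of product form ‖t_ν‖ = Π_j b_j^{ν_j} (the sequence b and the degree cap are assumptions for illustration, chosen strongly anisotropic): with the same budget of indices, the best N-term set leaves a smaller ℓ¹ tail than the total-degree set.

```python
# Best N-term selection versus an a priori (total-degree) choice of Lambda,
# on assumed coefficient norms ||t_nu|| = prod_j b_j^{nu_j}.
from itertools import product

b = (0.5, 0.2, 0.05, 0.01)          # influence of y_j decays quickly with j
K = 8                               # cap the degree per coordinate
coeffs = {nu: 1.0 for nu in product(range(K + 1), repeat=len(b))}
for nu in coeffs:
    for bj, nj in zip(b, nu):
        coeffs[nu] *= bj ** nj

total = sum(coeffs.values())
# a priori choice: total degree <= 2, i.e. 15 multi-indices in dimension 4
apriori = [v for nu, v in coeffs.items() if sum(nu) <= 2]
N = len(apriori)                    # same budget for both choices
best = sorted(coeffs.values(), reverse=True)[:N]
tail_best = total - sum(best)
tail_apriori = total - sum(apriori)
print(N, tail_best, tail_apriori)   # the adapted set leaves a smaller tail
```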
Idea of proof : extension to a complex variable

Estimates on $\|t_\nu\|_V$ by complex analysis : extend $u(y)$ to $u(z)$ with $z=(z_j)\in\mathbb{C}^{\mathbb{N}}$.

Uniform ellipticity, $0 < r \le a(x)+\sum_{j>0} y_j\psi_j(x)$ for all $x\in D$, $y\in U$, is equivalent to
$$\sum_{j>0} |\psi_j(x)| \le a(x) - r, \quad x \in D.$$
This allows to say that with $a(x,z) = a(x)+\sum_{j>0} z_j\psi_j(x)$,
$$0 < r \le \Re(a(x,z)) \le |a(x,z)| \le 2R,$$
for all $z \in \mathcal{U} := \{|z|\le 1\}^{\mathbb{N}} = \otimes\{|z_j|\le 1\}$.

Lax-Milgram theory applies : $\|u(z)\| \le C_0 = \frac{\|f\|_{V^*}}{r}$ for all $z \in \mathcal{U}$. The function $z \mapsto u(z)$ is holomorphic in each variable $z_j$ at any $z \in \mathcal{U}$.

Extended domains of holomorphy : if $\rho=(\rho_j)_{j\ge 0}$ is any positive sequence such that for some $\delta>0$
$$\sum_{j>0} \rho_j|\psi_j(x)| \le a(x)-\delta, \quad x \in D,$$
then $u$ is holomorphic, with uniform bound $\|u(z)\| \le C_\delta = \frac{\|f\|_{V^*}}{\delta}$, in the polydisc $\mathcal{U}_\rho := \otimes\{|z_j|\le\rho_j\}$.

We call such sequences $\rho$ "$\delta$-admissible". If $\delta < r$, we can take $\rho_j > 1$.
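For intuition, here is a toy computation (all coefficients invented for illustration) in the disjoint-support, piecewise-constant setting: it determines $r$, then $\delta$-admissible radii with $\delta = r/2$, and confirms that $\delta < r$ indeed allows $\rho_j > 1$:

```python
# Toy check (coefficients invented for illustration): piecewise-constant psi_j with
# disjoint supports D_j, and a(x) = 1.  On D_j, the admissibility condition reads
# rho_j * |psi_j| <= 1 - delta, so the largest admissible radius is (1 - delta)/|psi_j|.
psi = [0.4, 0.2, 0.1, 0.05]            # value of |psi_j| on its own subdomain D_j
abar = 1.0

r = abar - max(psi)                     # uniform ellipticity constant: a(x,y) >= r on U
assert r > 0

delta = r / 2.0
rho = [(abar - delta) / pj for pj in psi]   # delta-admissible radii

# Since delta < r, every radius exceeds 1: holomorphy beyond the unit polydisc.
assert all(rj > 1.0 for rj in rho)
```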
Estimate on the Taylor coefficients

Use the Cauchy formula. In one complex variable, if $z \mapsto u(z)$ is holomorphic and bounded in a neighbourhood of the disc $\{|z|\le a\}$, then for all $z$ in this disc
$$u(z) = \frac{1}{2i\pi} \int_{|z'|=a} \frac{u(z')}{z'-z}\, dz',$$
which leads, by differentiating $m$ times at $z=0$, to $|u^{(m)}(0)| \le m!\, a^{-m} \max_{|z|\le a}|u(z)|$.

Recursive application of this to all variables $z_j$ such that $\nu_j \ne 0$, with $a=\rho_j$, for a $\delta$-admissible sequence $\rho$ gives
$$\|\partial^\nu u_{|z=0}\|_V \le C_\delta\, \nu! \prod_{j>0} \rho_j^{-\nu_j},$$
and therefore
$$\|t_\nu\|_V \le C_\delta \prod_{j>0} \rho_j^{-\nu_j} = C_\delta\, \rho^{-\nu}.$$
Since $\rho$ is not fixed, we have
$$\|t_\nu\|_V \le C_\delta\, \inf\{\rho^{-\nu}\,;\, \rho \text{ is } \delta\text{-admissible}\}.$$
We do not know the general solution to this minimization problem, except when the $\psi_j$ have disjoint supports. Instead we design a particular choice $\rho=\rho(\nu)$ of $\delta$-admissible sequences with $\delta = r/2$, for which we prove that
$$(\|\psi_j\|_{L^\infty})_{j\ge 0} \in \ell^p(\mathbb{N}) \;\Rightarrow\; (\rho(\nu)^{-\nu})_{\nu\in\mathcal{F}} \in \ell^p(\mathcal{F}).$$
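The one-variable Cauchy bound can be checked numerically by computing Taylor coefficients through the contour integral (trapezoidal rule on the circle $|z'|=a$). The test function $u(z) = 1/(2-z)$ below is our own choice, not from the talk; it is holomorphic on $|z|<2$ with $u^{(m)}(0) = m!/2^{m+1}$:

```python
import cmath, math

# Illustrative check with a hand-picked test function: u(z) = 1/(2 - z),
# holomorphic on |z| < 2, exact derivatives u^(m)(0) = m!/2^(m+1).
u = lambda z: 1.0 / (2.0 - z)
a = 1.0                         # radius of the disc used in the Cauchy formula
M = 512                         # trapezoidal points on the contour |z'| = a

def derivative_at_0(m):
    """u^(m)(0) = m!/(2*pi*i) * integral of u(z')/z'^(m+1) dz' over |z'| = a."""
    total = 0.0 + 0.0j
    for k in range(M):
        zp = a * cmath.exp(2j * cmath.pi * k / M)
        total += u(zp) / zp ** (m + 1) * zp   # dz' = i*zp*dtheta; i and 2*pi cancel
    return math.factorial(m) * (total / M).real

max_on_disc = max(abs(u(a * cmath.exp(2j * cmath.pi * k / M))) for k in range(M))
for m in range(6):
    exact = math.factorial(m) / 2.0 ** (m + 1)
    assert abs(derivative_at_0(m) - exact) < 1e-9        # contour quadrature is accurate
    # Cauchy bound: |u^(m)(0)| <= m! a^{-m} max_{|z|<=a} |u(z)|
    assert abs(derivative_at_0(m)) <= math.factorial(m) * a ** -m * max_on_disc
```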
A simple case

Assume that the $\psi_j$ have disjoint supports. Then we may maximize each $\rho_j$ separately, subject to
$$\sum_{j>0} \rho_j|\psi_j(x)| \le a(x)-\frac{r}{2}, \quad x \in D,$$
which leads to
$$\rho_j := \min_{x\in D} \frac{a(x)-\frac{r}{2}}{|\psi_j(x)|}.$$
We have
$$\|t_\nu\|_V \le 2C_0\, \rho^{-\nu} = 2C_0\, b^\nu,$$
where $b=(b_j)$ and
$$b_j := \rho_j^{-1} = \max_{x\in D} \frac{|\psi_j(x)|}{a(x)-\frac{r}{2}} \le \frac{\|\psi_j\|_{L^\infty}}{R-\frac{r}{2}}.$$
Therefore $b \in \ell^p(\mathbb{N})$. From (UAE), we have $|\psi_j(x)| \le a(x)-r$ and thus $\|b\|_{\ell^\infty} < 1$.

We finally observe that
$$b \in \ell^p(\mathbb{N}) \text{ and } \|b\|_{\ell^\infty} < 1 \;\Leftrightarrow\; (b^\nu)_{\nu\in\mathcal{F}} \in \ell^p(\mathcal{F}).$$
Proof : factorize
$$\sum_{\nu\in\mathcal{F}} b^{p\nu} = \prod_{j>0} \Big(\sum_{n\ge 0} b_j^{pn}\Big) = \prod_{j>0} \frac{1}{1-b_j^p}.$$
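The factorization above can be checked numerically by truncating the countable index set $\mathcal{F}$ to finitely many variables and finitely many degrees; the sequence $b$ below is a toy choice for illustration:

```python
from itertools import product

# Numerical check of the factorization (toy sequence b, chosen for illustration):
# sum over F of b^{p*nu} equals the product over j of 1/(1 - b_j^p).
b = [0.5, 0.3, 0.2, 0.1]    # ||b||_inf < 1, so every geometric factor converges
p = 1.0
K = 30                      # truncate each exponent nu_j to 0..K (tail ~ b_j^K, negligible)

lhs = 0.0
for nu in product(range(K + 1), repeat=len(b)):
    term = 1.0
    for bj, nj in zip(b, nu):
        term *= bj ** (p * nj)
    lhs += term

rhs = 1.0
for bj in b:
    rhs *= 1.0 / (1.0 - bj ** p)

assert abs(lhs - rhs) < 1e-8   # truncated sum matches the closed-form product
```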
An adaptive algorithm

Strategies to build the set $\Lambda$ :

(i) Non-adaptive, based on the available a-priori estimates for the $\|t_\nu\|_V$.

(ii) Adaptive, based on a-posteriori information gained in the computation : $\Lambda_1 \subset \Lambda_2 \subset \cdots \subset \Lambda_N$.

Objective : develop adaptive strategies that converge with optimal rate (similar to adaptive wavelet methods for elliptic PDE's : Cohen-Dahmen-DeVore, Stevenson).

Recursive computation of the Taylor coefficients : with $e_j$ the Kronecker sequence,
$$\int_D a\, \nabla t_\nu \cdot \nabla v = -\sum_{j\,:\,\nu_j\ne 0} \int_D \psi_j\, \nabla t_{\nu-e_j} \cdot \nabla v, \quad v \in V.$$
We compute the $t_\nu$ on sets $\Lambda$ with monotone structure : $\nu\in\Lambda$ and $\mu\le\nu \Rightarrow \mu\in\Lambda$.

Given such a $\Lambda_k$ and the $(t_\nu)_{\nu\in\Lambda_k}$, we compute the $t_\nu$ for $\nu$ in the margin
$$\mathcal{M}_k := \{\nu\notin\Lambda_k\,;\, \nu-e_j\in\Lambda_k \text{ for some } j\},$$
and build the new set by bulk search : $\Lambda_{k+1} = \Lambda_k \cup S_k$, with $S_k \subset \mathcal{M}_k$ smallest such that
$$\sum_{\nu\in S_k} \|t_\nu\|_V^2 \ge \theta \sum_{\nu\in\mathcal{M}_k} \|t_\nu\|_V^2, \quad \text{with } \theta\in(0,1).$$
Such a strategy can be proved to converge with the optimal rate $\#(\Lambda_k)^{-s}$.
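The margin/bulk-search loop can be sketched as follows. Everything in this sketch is illustrative, not the talk's implementation: the exact norms $\|t_\nu\|_V$ are replaced by the surrogate $b^\nu$ from the disjoint-support bound (so no PDE is solved), and we use the *reduced* margin (all predecessors in $\Lambda$) so that each $\Lambda_k$ automatically stays monotone:

```python
# Sketch of the margin / bulk-search loop with a surrogate weight (illustrative only).
b = [0.6, 0.4, 0.25, 0.15]     # toy decay sequence, ||b||_inf < 1
theta = 0.5                     # bulk parameter in (0, 1)

def weight(nu):                 # surrogate for ||t_nu||_V^2, namely (b^nu)^2
    w = 1.0
    for bj, nj in zip(b, nu):
        w *= bj ** nj
    return w * w

def predecessors(nu):
    for j in range(len(nu)):
        if nu[j] > 0:
            mu = list(nu); mu[j] -= 1
            yield tuple(mu)

def reduced_margin(Lam):
    """Indices nu outside Lam all of whose predecessors nu - e_j lie in Lam."""
    M = set()
    for nu in Lam:
        for j in range(len(b)):
            cand = list(nu); cand[j] += 1; cand = tuple(cand)
            if cand not in Lam and all(mu in Lam for mu in predecessors(cand)):
                M.add(cand)
    return M

Lam = {(0, 0, 0, 0)}            # Lambda_0: the zero multi-index
for _ in range(8):              # Lambda_{k+1} = Lambda_k plus the smallest bulk S_k
    M = sorted(reduced_margin(Lam), key=weight, reverse=True)
    total, acc = sum(weight(nu) for nu in M), 0.0
    for nu in M:                # greedy: largest weights first until the bulk is met
        Lam.add(nu); acc += weight(nu)
        if acc >= theta * total:
            break

# Monotone structure is preserved: every predecessor of a selected index is selected.
assert all(mu in Lam for nu in Lam for mu in predecessors(nu))
```

Selecting from the reduced margin is a standard variant that guarantees downward closedness; with the "some $j$" margin of the slide, the selected set must be completed to remain monotone.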
Test case in moderate dimension d = 16

Physical domain $D = [0,1]^2 = \cup_{j=1}^{d} D_j$.

Diffusion coefficient $a(x,y) = 1 + \sum_{j=1}^{d} y_j\, \frac{0.9}{j^2}\, \chi_{D_j}$.

Adaptive search of $\Lambda$ implemented in C++, spatial discretization by FreeFem++.

Comparison between the $\Lambda_k$ generated by the adaptive algorithm (red) and non-adaptive choices $\sup\nu_j \le k$ (blue), $\sum\nu_j \le k$ (green), or the $k$ largest a-priori bounds on the $\|t_\nu\|_V$ (pink).

[Figure : interpolation error vs. $\#\Lambda$ on a log-log scale, for the strategies "algo BS", "algo Pk", "algo Qk", "algo ME", "algo NN"; the error decreases from about $10^{-2}$ toward $10^{-10}$ as $\#\Lambda$ grows from $10^1$ to $10^4$.]

Highest polynomial degree reached with $\#(\Lambda) = 1000$ coefficients : 1, 4, 115 and 81.
What I did not speak about

Use of Legendre polynomials instead of Taylor series : leads to approximation error estimates in $L^2(U,d\mu)$, with $d\mu$ the tensor product probability measure on $U$.

Computation of the approximate Legendre coefficients : either use a Galerkin (projection) method or a collocation (interpolation) method. For the latter, designing optimal collocation points is an open problem.

Strategies to build the set $\Lambda$ :

(i) A-priori, based on the available estimates for the $\|t_\nu\|_V$.

(ii) A-posteriori, based on error indicators in the Galerkin framework : $\Lambda_1 \subset \Lambda_2 \subset \cdots \subset \Lambda_N$. Optimal convergence of this strategy may be proved by techniques similar to those for adaptive wavelet methods for elliptic PDE's (Cohen-Dahmen-DeVore, Stevenson).

(iii) Reconstruction of a sparse orthogonal series from random sampling : techniques from compressed sensing (sparse Fourier series : Gilbert-Strauss-Tropp, Candes-Romberg-Tao, Rudelson-Vershynin; sparse Legendre series : Rauhut-Ward 2010).

Space discretization : should be properly tuned (use a different resolution for each $t_\nu$) and injected in the final error analysis.

Our results can be used in the analysis of reduced basis methods.
Conclusion and perspective

Rich topic : involves a variety of tools such as stochastic processes, high dimensional approximation, complex analysis, sparsity and non-linear approximation, adaptivity and a-posteriori analysis.

First numerical results in moderate dimensionality reveal the advantages of an adaptive approach. Goal : implementation for very high or infinite dimensionality.

Many applications in engineering.

Many other models to be studied :

(i) Non-affine dependence of $a$ on the variable $y$.

(ii) Other linear or non-linear PDE's.

Papers : www.ann.jussieu.fr/cohen

THANKS !
Many applications in engineering.
Many other models to be studied :
(i) Non-affine dependence of a in the variable y .
(ii) Other linear or non-linear PDE’s.
Papers : www.ann.jussieu.fr/cohen
THANKS !