Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization (Chaitanya Swamy)
TRANSCRIPT
Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization
Chaitanya Swamy, University of Waterloo
Joint work with David Shmoys, Cornell University
Stochastic Optimization
• A way of modeling uncertainty.
• Exact data is unavailable or expensive: the data is uncertain, specified by a probability distribution.
Want to make the best decisions given this uncertainty in the data.
• Applications in logistics, transportation models, financial instruments, network design, production planning, …
• Dates back to the 1950s and the work of Dantzig.
Stochastic Recourse Models
Given: a probability distribution over inputs.
Stage I: Make some advance decisions, planning ahead or hedging against uncertainty.
Uncertainty evolves through various stages.
Learn new information in each stage.
Can take recourse actions in each stage: augment the earlier solution, paying a recourse cost.
Choose initial (stage I) decisions to minimize
(stage I cost) + (expected recourse cost).
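The objective above can be sketched numerically; the stage I cost, scenario probabilities, and recourse costs below are purely illustrative, not from the talk.

```python
# Two-stage objective: (stage I cost) + (expected recourse cost).
# All numbers are hypothetical, chosen only to illustrate the formula.
stage1_cost = 10.0
# Each scenario: (probability, recourse cost paid if that scenario occurs).
scenarios = [(0.5, 4.0), (0.3, 8.0), (0.2, 20.0)]

expected_recourse = sum(p * c for p, c in scenarios)
total_cost = stage1_cost + expected_recourse
print(total_cost)  # 10.0 + 8.4 = 18.4
```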
2-stage problem: 2 decision points. k-stage problem: k decision points.
[Figure: scenario trees. Stage I at the root; stage II scenarios as branches with probabilities (e.g., 0.2, 0.3, 0.1, 0.02); in the k-stage tree, scenarios in stage k.]
2-stage problem: 2 decision points. k-stage problem: k decision points.
Choose stage I decisions to minimize expected total cost =
(stage I cost) + E_{all scenarios}[cost of stages 2, …, k].
[Figure: the same scenario trees as on the previous slide, with branch probabilities.]
Stochastic Set Cover (SSC)
Universe U = {e1, …, en}, subsets S1, S2, …, Sm ⊆ U; set S has weight wS.
Deterministic problem: Pick a minimum-weight collection of sets that covers each element.
Stochastic version: The target set of elements to be covered is given by a probability distribution.
– target subset A ⊆ U to be covered (the scenario) is revealed after k stages
– choose some sets Si initially (stage I)
– can pick additional sets Si in each stage, paying a recourse cost.
[Figure: stage I choices, then scenarios A1, …, Ak ⊆ U.]
Minimize expected total cost = E_{scenarios A ⊆ U}[cost of sets picked for scenario A in stages 1, …, k].
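On tiny instances the 2-stage version can be solved by brute force, which makes the objective concrete. A minimal sketch; the sets, weights, and scenario distribution are hypothetical, and stage II weights are simply doubled stage I weights.

```python
from itertools import product

# Brute-force solver for a toy 2-stage stochastic set cover instance.
# The sets, weights, and scenario distribution are hypothetical.
sets = {"S1": {1, 2}, "S2": {2, 3}, "S3": {3}}
w1 = {"S1": 1.0, "S2": 1.0, "S3": 1.0}        # stage I weights
w2 = {S: 2.0 * w1[S] for S in sets}            # inflated stage II weights
scenarios = [({1, 2}, 0.7), ({2, 3}, 0.3)]     # (target A, probability pA)

def subsets(names):
    for bits in product([0, 1], repeat=len(names)):
        yield {S for S, b in zip(names, bits) if b}

def recourse_cost(first, target):
    # Cheapest stage II augmentation so that `target` is covered.
    best = float("inf")
    for extra in subsets(sets):
        covered = set()
        for S in first | extra:
            covered |= sets[S]
        if target <= covered:
            best = min(best, sum(w2[S] for S in extra))
    return best

# Expected total cost = (stage I cost) + E_A[recourse cost in scenario A];
# minimize over all first-stage choices.
best_cost, best_first = float("inf"), None
for first in subsets(sets):
    cost = (sum(w1[S] for S in first)
            + sum(p * recourse_cost(first, A) for A, p in scenarios))
    if cost < best_cost:
        best_cost, best_first = cost, first
print(best_first, best_cost)
```

On this instance the optimum buys S1 up front (cost 1.0) and pays for the rarer scenario {2, 3} only when it occurs, illustrating the hedging trade-off.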
[Figure: a 3-stage scenario tree. Stage I at the root; stage II branches with probabilities 0.2 and 0.8; stage III leaves labeled A, B, C, D reached with probabilities such as 0.5, 0.5, 0.3, 0.7.]
Stochastic Set Cover (SSC)
Universe U = {e1, …, en}, subsets S1, S2, …, Sm ⊆ U; set S has weight wS.
Deterministic problem: Pick a minimum-weight collection of sets that covers each element.
Stochastic version: The target set of elements to be covered is given by a probability distribution.
How is the probability distribution on subsets specified?
• A short (polynomial-size) list of possible scenarios
• Independent probabilities that each element exists
• A black box that can be sampled.
Approximation Algorithm
Hard to solve the problem exactly; even special cases are #P-hard. Settle for approximate solutions: give a polytime algorithm that always finds near-optimal solutions.
A is a ρ-approximation algorithm if:
• A runs in polynomial time.
• A(I) ≤ ρ·OPT(I) on all instances I.
ρ is called the approximation ratio of A.
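For the deterministic set cover problem, the textbook greedy algorithm gives a ln(n)-approximation. A minimal sketch of that standard algorithm (not specific to this talk); the instance is made up:

```python
# Greedy weighted set cover: repeatedly pick the set minimizing
# weight per newly covered element. This is the classic ln(n)-approximation.
# The instance below is made up for illustration.
def greedy_set_cover(universe, sets, weight):
    uncovered, picked, cost = set(universe), [], 0.0
    while uncovered:
        S = min((s for s in sets if sets[s] & uncovered),
                key=lambda s: weight[s] / len(sets[s] & uncovered))
        uncovered -= sets[S]
        picked.append(S)
        cost += weight[S]
    return picked, cost

sets = {"S1": {1, 2, 3}, "S2": {3, 4}, "S3": {4, 5}, "S4": {1, 5}}
weight = {"S1": 3.0, "S2": 1.0, "S3": 1.0, "S4": 1.0}
picked, cost = greedy_set_cover({1, 2, 3, 4, 5}, sets, weight)
print(picked, cost)  # greedy pays 5.0 here; the optimum {S1, S3} pays 4.0
```

The gap between greedy's 5.0 and the optimal 4.0 on this instance shows the approximation ratio at work: the output is within the guaranteed factor, not necessarily optimal.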
Previous Models Considered
• 2-stage problems
– polynomial-scenario model: Dye, Stougie & Tomasgard; Ravi & Sinha; Immorlica, Karger, Minkoff & Mirrokni.
– Immorlica et al. also consider the independent-activation model with proportional costs: (stage II cost) = λ·(stage I cost), e.g., wAS = λ·wS for each set S, in each scenario A.
– Gupta, Pál, Ravi & Sinha: black-box model, but also with proportional costs.
– Shmoys, S (SS04): black-box model with arbitrary costs; gave an approximation scheme for 2-stage LPs + a rounding procedure that "reduces" stochastic problems to their deterministic versions.
Previous Models (contd.)
• Multi-stage problems
– Hayrapetyan, S & Tardos: O(k)-approximation algorithm for k-stage Steiner tree.
– Gupta, Pál, Ravi & Sinha: also other k-stage problems; 2^k-approximation algorithm for Steiner tree, factors exponential in k for vertex cover and facility location.
Both only consider proportional, scenario-dependent costs.
Our Results
• Give the first fully polynomial approximation scheme (FPAS) for a broad class of k-stage stochastic linear programs, for any fixed k.
– black-box model: arbitrary distribution.
– no assumptions on costs.
– the algorithm is the Sample Average Approximation (SAA) method. First proof that SAA works for (a class of) k-stage LPs with polynomially bounded sample size.
Shapiro '05: k-stage programs, but with independent stages.
Kleywegt, Shapiro & Homem-De-Mello '01: bounds for 2-stage programs.
S, Shmoys '05: unpublished note that SAA works for 2-stage LPs.
Charikar, Chekuri & Pál '05: another proof that SAA works for (a class of) 2-stage programs.
Results (contd.)
• FPAS + rounding technique of SS04 gives approximation algorithms for k-stage stochastic integer programs.
– no assumptions on distribution or costs.
– improve upon various results obtained in more restricted models: e.g., O(k)-approx. for k-stage vertex cover (VC) and facility location. Munagala; Srinivasan: improved the factor for k-stage VC to 2.
A Linear Program for 2-stage SSC
pA: probability of scenario A ⊆ U. Let cost wAS = WS for each set S and each scenario A.
xS: 1 if set S is picked in stage I; yA,S: 1 if S is picked in scenario A.
Minimize ∑S wS xS + ∑A⊆U pA ∑S WS yA,S
s.t. ∑S:e∈S xS + ∑S:e∈S yA,S ≥ 1 for each A ⊆ U, e ∈ A
xS, yA,S ≥ 0 for each S, A.
Exponentially many variables and constraints.
[Figure: stage I, then stage II scenarios A ⊆ U with branch probabilities.]
Equivalent compact, convex program:
Minimize h(x) = ∑S wS xS + ∑A⊆U pA fA(x) s.t. 0 ≤ xS ≤ 1 for each S, where
fA(x) = min {∑S WS yA,S : ∑S:e∈S yA,S ≥ 1 – ∑S:e∈S xS for each e ∈ A; yA,S ≥ 0 for each S}.
Sample Average Approximation
Sample Average Approximation (SAA) method:
– Sample N times from the distribution.
– Estimate pA by qA = frequency of occurrence of scenario A = nA/N.
True problem: min_{x∈P} (h(x) = w·x + ∑A⊆U pA fA(x)) (P)
Sample average problem: min_{x∈P} (h'(x) = w·x + ∑A⊆U qA fA(x)) (SA-P)
Wanted result: with polynomially bounded N, x solves (SA-P) ⇒ h(x) ≈ OPT.
The size of (SA-P) as an LP depends on N; how large should N be?
Possible approach: try to show that h'(.) and h(.) take similar values.
Problem: rare scenarios can significantly influence the value of h(.), but will almost never be sampled.
Sample Average Approximation (contd.)
The true problem (P) and sample average problem (SA-P) are as on the previous slide; wanted result: with polynomially bounded N, x solves (SA-P) ⇒ h(x) ≈ OPT.
Possible approach: try to show that h'(.) and h(.) take similar values.
Problem: rare scenarios can significantly influence the value of h(.), but will almost never be sampled.
Key insight: rare scenarios do not much affect the optimal first-stage decisions x*.
Instead of the function value, look at how the function varies with x: show that the "slopes" of h'(.) and h(.) are "close" to each other.
[Figure: h(x) and h'(x) plotted against x; their minimizers lie close to x*.]
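The SAA method above can be sketched on a toy 2-stage problem with two first-stage choices; the costs, choice names, and distribution below are all hypothetical.

```python
import random

# SAA sketch: estimate p_A by sampled frequencies q_A, then minimize the
# sample average objective h'(x) = w·x + sum_A q_A f_A(x) over the
# first-stage choices. All data below is hypothetical.
random.seed(1)
p = {"A1": 0.7, "A2": 0.3}                     # true (black-box) distribution
w = {"buy": 1.0, "skip": 0.0}                  # first-stage cost of each choice
f = {("buy", "A1"): 0.0, ("buy", "A2"): 2.0,   # recourse cost f_A(x)
     ("skip", "A1"): 3.0, ("skip", "A2"): 2.0}

N = 5000
samples = random.choices(list(p), weights=list(p.values()), k=N)
q = {A: samples.count(A) / N for A in p}       # q_A = n_A / N

def h(x, dist):
    return w[x] + sum(dist[A] * f[(x, A)] for A in dist)

x_saa = min(w, key=lambda x: h(x, q))          # solve (SA-P)
print(x_saa, h(x_saa, p))                      # SAA choice and its true cost
```

Here the SAA solution ("buy") is also the true optimum: with N = 5000 samples, q is close enough to p that both objectives rank the two choices the same way.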
Closeness-in-subgradients
True problem: min_{x∈P} (h(x) = w·x + ∑A⊆U pA fA(x)) (P)
Sample average problem: min_{x∈P} (h'(x) = w·x + ∑A⊆U qA fA(x)) (SA-P)
Slope ⇒ subgradient:
d ∈ ℝ^m is a subgradient of h(.) at u if, for every v, h(v) – h(u) ≥ d·(v–u).
d is an ε-subgradient of h(.) at u if, for every v ∈ P, h(v) – h(u) ≥ d·(v–u) – ε·h(v) – ε·h(u).
Closeness-in-subgradients: at "most" points u in P, there exists a vector d'u such that
(*) d'u is a subgradient of h'(.) at u, AND an ε-subgradient of h(.) at u.
This holds with high probability for h(.) and h'(.).
Lemma: For any convex functions g(.), g'(.), if (*) holds then: x solves min_{x∈P} g'(x) ⇒ x is a near-optimal solution to min_{x∈P} g(x).
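The two definitions above can be checked numerically on a one-dimensional example; the function h(x) = x², the point u, ε, and the grid are all my choice for illustration.

```python
# Numeric check of the (ε-)subgradient definitions on h(x) = x² at u = 1.
# The function, the point, and ε are illustrative choices, not from the talk.
def h(x):
    return x * x

u, eps = 1.0, 0.01
d_exact, d_perturbed = 2.0, 2.1              # true slope h'(u) = 2u = 2
points = [v / 100.0 for v in range(0, 201)]  # grid on P = [0, 2]

def is_subgradient(d):
    return all(h(v) - h(u) >= d * (v - u) - 1e-12 for v in points)

def is_eps_subgradient(d):
    return all(h(v) - h(u) >= d * (v - u) - eps * h(v) - eps * h(u) - 1e-12
               for v in points)

print(is_subgradient(d_exact))        # True
print(is_subgradient(d_perturbed))    # False: fails for v slightly above u
print(is_eps_subgradient(d_perturbed))  # True: the eps·(h(v)+h(u)) slack absorbs the error
```

This is exactly the role ε-subgradients play in the analysis: a slightly wrong slope estimate still certifies near-optimality.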
Closeness-in-subgradients (contd.)
Closeness-in-subgradients: at "most" points u in P, there exists a vector d'u such that
(*) d'u is a subgradient of h'(.) at u, AND an ε-subgradient of h(.) at u.
Lemma: For any convex functions g(.), g'(.), if (*) holds then: x solves min_{x∈P} g'(x) ⇒ x is a near-optimal solution to min_{x∈P} g(x).
[Figure: the subgradient du at a point u ∈ P defines a half-space containing {x : g(x) ≤ g(u)}.]
Intuition:
• The minimizer of a convex function is determined by its subgradients.
• The ellipsoid-based algorithm of SS04 for convex minimization only uses (ε-)subgradients: it uses an (ε-)subgradient to cut the ellipsoid at a feasible point u in P.
(*) ⇒ can run the SS04 algorithm on both min_{x∈P} g(x) and min_{x∈P} g'(x), using the same vector d'u to cut the ellipsoid at u ∈ P; the algorithm will return an x that is near-optimal for both problems.
Proof for 2-stage SSC
True problem: min_{x∈P} (h(x) = w·x + ∑A⊆U pA fA(x)) (P)
Sample average problem: min_{x∈P} (h'(x) = w·x + ∑A⊆U qA fA(x)) (SA-P)
Let λ = maxS WS/wS, and let zA be an optimal dual solution for scenario A at the point u ∈ P.
Facts from SS04:
A. The vector du = {du,S} with du,S = wS – ∑A pA ∑e∈A∩S zA,e is a subgradient of h(.) at u; we can write du,S = E[XS], where XS = wS – ∑e∈A∩S zA,e in scenario A.
B. XS ∈ [–WS, wS] ⇒ Var[XS] ≤ WS² for every set S.
C. If d' = {d'S} is a vector such that |d'S – du,S| ≤ ε·wS for every set S, then d' is an ε-subgradient of h(.) at u.
A ⇒ the vector d'u with components d'u,S = wS – ∑A qA ∑e∈A∩S zA,e = E_q[XS] is a subgradient of h'(.) at u.
B, C ⇒ with poly(λ²/ε² · log(1/δ)) samples, d'u is an ε-subgradient of h(.) at u with probability ≥ 1–δ.
⇒ polynomially many samples ensure that, with high probability, at "most" points u ∈ P, d'u is an ε-subgradient of h(.) at u ⇒ property (*).
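Facts A and B are what make sampling work: each subgradient component is an expectation du,S = E[XS] of a bounded random variable, so a sample average concentrates around it. A minimal Monte Carlo sketch; the per-scenario values of XS below are hypothetical, chosen only to respect the bound XS ∈ [–WS, wS].

```python
import random

# d_{u,S} = E[X_S]: estimate a subgradient component by a sample average.
# The per-scenario values of X_S are hypothetical, kept inside [-W_S, w_S].
random.seed(0)
w_S, W_S = 1.0, 4.0                          # so X_S ranges inside [-4, 1]
X = {"A1": 1.0, "A2": -3.0, "A3": 0.5}       # X_S in each scenario
p = {"A1": 0.6, "A2": 0.1, "A3": 0.3}

true_d = sum(p[A] * X[A] for A in p)         # exact E[X_S] = 0.45

def sampled_d(N):
    draws = random.choices(list(p), weights=list(p.values()), k=N)
    return sum(X[A] for A in draws) / N

est = sampled_d(20000)
print(true_d, abs(est - true_d))  # the error shrinks like W_S / sqrt(N)
```

Since Var[XS] ≤ WS², Chebyshev/Chernoff-style bounds give the poly(λ²/ε²·log(1/δ)) sample count quoted on the slide.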
3-stage SSC
[Figure: true distribution, a tree TA: stage I, then stage II scenario A with probability pA, then stage III scenario (A,B) with probability pA,B specifying the set of elements to cover. Sampled distribution: the same tree with qA and qA,B in place of pA and pA,B.]
– The true distribution {pA} is estimated by {qA}.
– The true distribution {pA,B} in TA is only estimated by the distribution {qA,B}.
⇒ The true and sample average problems solve different recourse problems for a given scenario A.
True problem: min_{x∈P} (h(x) = w·x + ∑A pA fA(x)) (3-P)
Sample avg. problem: min_{x∈P} (h'(x) = w·x + ∑A qA gA(x)) (3SA-P)
fA(x), gA(x): 2-stage set-cover problems specified by the tree TA.
3-stage SSC (contd.)
True problem: min_{x∈P} (h(x) = w·x + ∑A pA fA(x)) (3-P)
Sample avg. problem: min_{x∈P} (h'(x) = w·x + ∑A qA gA(x)) (3SA-P)
Main difficulty: h(.) and h'(.) solve different recourse problems.
From the current 2-stage theorem, we can infer that for "most" x ∈ P, any second-stage solution y that minimizes gA(x) also "nearly" minimizes fA(x). Is this enough to prove the desired theorem for h(.) and h'(.)?
Suppose H(x) = min_y a(x,y) and H'(x) = min_y b(x,y), such that for every x, each y that minimizes b(x,.) also minimizes a(x,.).
If x minimizes H'(.), does it also approximately minimize H(.)?
NO: e.g., a(x,y) = A(x) + (y – y0)², b(x,y) = B(x) + (y – y0)², where A(.) and B(.) are arbitrary, unrelated functions of x, and a(.), b(.) are convex functions.
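The counterexample can be checked numerically: both inner problems are minimized at the same y = y0, yet minimizing H' says nothing about H. The particular A(.) and B(.) below are arbitrary unrelated convex choices of mine.

```python
# a(x,y) = A(x) + (y - y0)^2 and b(x,y) = B(x) + (y - y0)^2: for every x,
# the unique minimizer over y of both is y = y0, yet minimizing
# H'(x) = min_y b(x,y) = B(x) says nothing about H(x) = min_y a(x,y) = A(x).
# A and B here are arbitrary, unrelated convex functions (my choice).
y0 = 0.0
A = lambda x: (x - 5.0) ** 2      # H  is minimized at x = 5
B = lambda x: (x + 5.0) ** 2      # H' is minimized at x = -5

H = lambda x: A(x)                # min_y a(x, y), attained at y = y0
H_prime = lambda x: B(x)          # min_y b(x, y), attained at y = y0

xs = [i / 10.0 for i in range(-100, 101)]
x_star = min(xs, key=H_prime)     # minimizer of the surrogate objective
print(x_star, H(x_star), min(H(x) for x in xs))  # -5.0 100.0 0.0
```

The surrogate's minimizer x* = –5 has true cost 100 while the true optimum is 0, so agreement of the inner (recourse) minimizers alone is not enough.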
Proof sketch for 3-stage SSC
True problem: min_{x∈P} (h(x) = w·x + ∑A pA fA(x)) (3-P)
Sample avg. problem: min_{x∈P} (h'(x) = w·x + ∑A qA gA(x)) (3SA-P)
Will show that h(.) and h'(.) are close in subgradients.
Main difficulty: h(.) and h'(.) solve different recourse problems.
Subgradient of h(.) at u is du, with du,S = wS – ∑A pA·(dual soln. to fA(u)).
Subgradient of h'(.) at u is d'u, with d'u,S = wS – ∑A qA·(dual soln. to gA(u)).
To show that d'u is an ε-subgradient of h(.), we need: (dual soln. to gA(u)) is a near-optimal (dual soln. to fA(u)).
This is a sample average theorem for the dual of a 2-stage problem!
Proof sketch for 3-stage SSC (contd.)
As above: to show that d'u is an ε-subgradient of h(.), we need that (dual soln. to gA(u)) is a near-optimal (dual soln. to fA(u)).
Idea: show that the two dual objective functions are close in subgradients.
Problem: we cannot get closeness-in-subgradients by looking at the standard exponential-size LP duals of fA(x), gA(x).
fA(x) = min ∑S wAS yA,S + ∑scenarios (A,B), S pA,B·wAB,S zA,B,S
s.t. ∑S:e∈S yA,S + ∑S:e∈S zA,B,S ≥ 1 – ∑S:e∈S xS for all scenarios (A,B), e ∈ E(A,B)
yA,S, zA,B,S ≥ 0 for all scenarios (A,B), S.
The dual is: max ∑A,B,e (1 – ∑S:e∈S xS)·αA,B,e
s.t. ∑scenarios (A,B) ∑e∈S αA,B,e ≤ wAS for all S
∑e∈S αA,B,e ≤ pA,B·wAB,S for all scenarios (A,B), S
αA,B,e ≥ 0 for all scenarios (A,B), e ∈ E(A,B).
[Figure: the tree TA, with stage II scenario A (probability pA) and stage III scenarios (A,B) (probability pA,B) specifying the set of elements to cover.]
Proof sketch for 3-stage SSC (contd.)
Recall: we cannot get closeness-in-subgradients from the standard exponential-size LP duals of fA(x), gA(x). Instead:
– Formulate a new compact, non-linear dual of polynomial size.
– An (approximate) subgradient of this dual objective function comes from a (near-)optimal solution to a 2-stage primal LP: use the earlier SAA result.
Recursively apply this idea to solve k-stage stochastic LPs.
Summary of Results
• Give the first approximation scheme to solve a broad class of k-stage stochastic linear programs, for any fixed k.
– Prove that the Sample Average Approximation method works for our class of k-stage programs.
• Obtain approximation algorithms for k-stage stochastic integer problems, with no assumptions on costs or distribution.
– k·log n-approx. for k-stage set cover. (Srinivasan: log n)
– O(k)-approx. for k-stage vertex cover, multicut on trees, uncapacitated facility location (FL), and some other FL variants.
– (1+ε)-approx. for multicommodity flow.
These results improve previous results obtained in restricted k-stage models.
Open Questions
• Obtain approximation factors independent of k for k-stage (integer) problems: e.g., k-stage FL, k-stage Steiner tree.
• Improve the analysis of the SAA method, or obtain some other (polynomial) sampling algorithm:
– any α-approx. solution to the constructed problem gives an (α+ε)-approx. solution to the true problem
– better dependence on k: are exp(k) samples required?
– improved sample bounds when the stages are independent?
Thank You.