Computing the Rational Univariate Reduction by Sparse Resultants
Koji Ouchi, John Keyser, J. Maurice Rojas
Departments of Computer Science and Mathematics
Texas A&M University
ACA 2004
Outline
- What is Rational Univariate Reduction?
- Computing RUR by Sparse Resultants
- Complexity Analysis
- Exact Implementation
Rational Univariate Reduction
Problem: Solve a system of n polynomials f1, …, fn
in n variables X1, …, Xn
with coefficients in the field K
Reduce the system to
n + 1 univariate polynomials h, h1, …, hn
with coefficients in K s.t.
if α is a root of h, then (h1(α), …, hn(α)) is a solution to the system.
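For instance (a toy example, not from the slides): for the system f1 = X1^2 + X2^2 - 1, f2 = X1 - X2, one valid RUR is h(T) = 2T^2 - 1, h1(T) = T, h2(T) = T; the roots of h are ±1/√2, and for each root α the pair (h1(α), h2(α)) = (α, α) solves both equations.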
RUR via Sparse Resultant
Notation:
- ei : the i-th standard basis vector
- A0 = {o, e1, …, en}
- u0, u1, …, un : indeterminates
- Ai = Supp(fi)
- K̄ : the algebraic closure of K
Toric Perturbation
Toric Generalized Characteristic Polynomial: Let f1*, …, fn* be n polynomials in n variables X1, …, Xn with coefficients in K, with Supp(fi*) ⊆ Ai = Supp(fi), i = 1, …, n, that have only finitely many solutions in (K̄ \ {0})^n.

Define TGCP(u, Y) = Res_(A0, A1, …, An)(∑_{a ∈ A0} ua X^a, f1 - Y f1*, …, fn - Y fn*), where ∑_{a ∈ A0} ua X^a = u0 + u1 X1 + … + un Xn.
Toric Perturbation
Toric Perturbation [Rojas 99]: Define Pert(u) to be the non-zero coefficient of the lowest degree term (in Y) of TGCP(u, Y).
Pert(u) is well-defined.
A version of the "projection operator" technique [Rojas 98, D'Andrea and Emiris 03].
Toric Perturbation
If (ζ1, …, ζn) ∈ (K̄ \ {0})^n is an isolated root of the input system f1, …, fn, then ∑_{a ∈ A0} ua ζ^a = u0 + u1 ζ1 + … + un ζn divides Pert(u).
Pert(u) completely splits into linear factors over (K̄ \ {0})^n.
For every irreducible component of the zero set of the input system, there is at least one factor of Pert(u).
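As a concrete illustration (a minimal sympy sketch, not part of the slides), take n = 1, where the toric resultant Res_(A0, A1) reduces to the classical resultant with respect to X. The polynomials f1 and f1* below are arbitrary choices for the example; the sketch forms TGCP(u, Y), reads off Pert(u) as the lowest-degree non-zero coefficient in Y, and factors it into the linear forms u0 + u1·ζ predicted above.

```python
# Toy illustration (n = 1) of TGCP and Pert: for a single univariate f1, the
# sparse (toric) resultant is just the classical resultant with respect to X.
from sympy import symbols, resultant, Poly, factor

X, Y, u0, u1 = symbols("X Y u0 u1")

f1 = (X - 2) * (X + 3)      # example input, roots 2 and -3
f1s = X**2 + 1              # example perturbation f1* with Supp(f1*) ⊆ Supp(f1)

# TGCP(u, Y) = Res_X(u0 + u1*X, f1 - Y*f1*)
tgcp = resultant(u0 + u1 * X, f1 - Y * f1s, X)

# Pert(u) = non-zero coefficient of the lowest degree term in Y
coeffs = Poly(tgcp, Y).all_coeffs()[::-1]        # constant term first
pert = next(c for c in coeffs if c != 0)

# Splits into (u0 + 2*u1)*(u0 - 3*u1) up to a constant: one linear
# factor u0 + u1*ζ per root ζ of f1, as stated above.
print(factor(pert))
```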
Computing RUR, Step 1: Compute Pert(u)
- Use Emiris' sparse resultant algorithm [Canny and Emiris 93, 95, 00] to construct the Newton matrix, whose determinant is some multiple of the resultant
- Evaluate the resultant at distinct values of u and interpolate
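The Newton matrix construction itself is not reproduced here, but the evaluate-and-interpolate idea can be sketched on a small stand-in matrix (a hypothetical example in one variable, not an actual Newton matrix): evaluate the determinant at enough values and recover it as a polynomial by interpolation rather than expanding it symbolically.

```python
# Evaluate-and-interpolate sketch (single variable, for simplicity): recover
# det(M(u0)) as a polynomial in u0 from point evaluations, instead of
# expanding the determinant symbolically.  M is a small stand-in matrix,
# NOT an actual Newton matrix.
from sympy import Matrix, symbols, interpolate, expand

u0 = symbols("u0")

M = Matrix([[u0, 2, 1],
            [3, u0, 4],
            [1, 0, u0]])            # entries of degree <= 1 in u0, so deg det <= 3

deg_bound = 3                        # a priori bound on deg_{u0} det(M)
samples = [(t, M.subs(u0, t).det()) for t in range(deg_bound + 1)]

p = expand(interpolate(samples, u0))
print(p)                             # the interpolated determinant
print(p == expand(M.det()))          # True: interpolation recovers it exactly
```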
Computing RUR, Step 2: Compute h(T)
- Set h(T) = Pert(T, u1, …, un) for some values of u1, …, un
- Evaluate Pert(u) at distinct values of u0 and interpolate
Computing RUR, Step 3: Compute h1(T), …, hn(T)
Computing each hi involves:
- evaluating Pert(u),
- interpolating, and
- some univariate polynomial operations
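The slides do not spell out these operations; the following toy sympy sketch of Steps 2 and 3 assumes a Pert(u) that already splits completely, built from two made-up roots (1, 2) and (3, -1), and recovers each hi from the ratio of the partial derivatives ∂Pert/∂ui and ∂Pert/∂u0 at the roots of h. It illustrates the idea only and need not match the authors' exact procedure.

```python
# Toy sketch of Steps 2-3.  Pert is built here directly from two made-up
# roots (1, 2) and (3, -1); in the real algorithm it comes from resultant
# evaluations.  We write T for u0.
from sympy import symbols, expand, diff, interpolate, Poly

T, u1, u2 = symbols("T u1 u2")
zetas = [(1, 2), (3, -1)]

# Pert specialized at u0 = T: product of the linear forms T + u1*z1 + u2*z2
pert = expand((T + u1 * zetas[0][0] + u2 * zetas[0][1]) *
              (T + u1 * zetas[1][0] + u2 * zetas[1][1]))

# Step 2: fix generic values for u1, u2 and read off h(T)
vals = {u1: 5, u2: 7}
h = expand(pert.subs(vals))                      # (T + 19)*(T + 8)

# Step 3: at each root a of h, the i-th coordinate equals
# (dPert/dui)(a) / (dPert/du0)(a); interpolating these values gives hi(T).
dT = diff(pert, T).subs(vals)                    # dPert/du0
alphas = Poly(h, T).all_roots()                  # roots of h (here -19 and -8)
h1 = interpolate([(a, (diff(pert, u1).subs(vals) / dT).subs(T, a)) for a in alphas], T)
h2 = interpolate([(a, (diff(pert, u2).subs(vals) / dT).subs(T, a)) for a in alphas], T)

for a in alphas:
    print(h1.subs(T, a), h2.subs(T, a))          # recovers (1, 2) and (3, -1)
```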
Complexity Analysis
Count the number of arithmetic operations
Notation: Õ( ) means that polylogarithmic factors are ignored.
Gaussian elimination of an m × m matrix requires O(m^ω) arithmetic operations.
Complexity Analysis: Quantities
- M_A : the mixed volume MV(A1, …, An) of the convex hulls of A1, …, An
- R_A : MV(A1, …, An) + ∑_{i=1,…,n} MV(A0, A1, …, Ai-1, Ai+1, …, An), the total degree of the sparse resultant
- S_A : the dimension of the Newton matrix, possibly exponentially bigger than R_A
Complexity Analysis
[Emiris and Canny 95] Evaluating Res_(A0, A1, …, An)(∑_{a ∈ A0} ua X^a, f1, …, fn) requires Õ(n R_A S_A), or Õ(S_A^{1+ε}) if char K = 0.
Complexity Analysis
[Rojas 99] Evaluating Pert(u) requires Õ(n R_A^2 S_A), or Õ(S_A^{1+ε}) if char K = 0.
Complexity Analysis
Computing h(T) requires Õ(n M_A R_A^2 S_A), or Õ(M_A S_A^{1+ε}) if char K = 0.
Complexity Analysis
Computing each hi(T) requires Õ(n M_A R_A^2 S_A), or Õ(M_A S_A^{1+ε}) if char K = 0.
Complexity Analysis
Computing the RUR h(T), h1(T), …, hn(T) for fixed u1, …, un requires Õ(n^2 M_A R_A^2 S_A), or Õ(n M_A S_A^{1+ε}) if char K = 0.
Complexity Analysis
Derandomizing the choice of u1, …, un, computing the RUR h(T), h1(T), …, hn(T) requires Õ(n^4 M_A^3 R_A^2 S_A), or Õ(n^3 M_A^3 S_A^{1+ε}) if char K = 0.
Complexity Analysis

|                 | Emiris Division      | Emiris GCD, char K = 0 | "Small" Newton matrix |
| Evaluating Res  | n R_A S_A            | S_A^{1+ε}              | R_A^{1+ε}             |
| Evaluating Pert | n R_A^2 S_A          | S_A^{1+ε}              | R_A^{1+ε}             |
| RUR for fixed u | n^2 M_A R_A^2 S_A    | n M_A S_A^{1+ε}        | n M_A R_A^{1+ε}       |
| RUR             | n^4 M_A^3 R_A^2 S_A  | n^3 M_A^3 S_A^{1+ε}    | n^3 M_A^3 R_A^{1+ε}   |
Complexity Analysis
A great speedup would be achieved if we could compute a "small" Newton matrix whose determinant is the resultant. No such method is known.
Khetan's Method
Khetan's method gives a Newton matrix whose determinant is the resultant of unmixed systems when n = 2 or 3 [Khetan 03, 04].
Let B = A1 = … = An. Then computing the RUR requires Õ(n^3 M_A^3 R_B^{1+ε}) arithmetic operations.
ERUR: Implementation
Current implementation:
- The coefficients are rational numbers
- Uses the sparse resultant algorithm [Emiris and Canny 93, 95, 00] to construct the Newton matrix
- All the coefficients of the RUR h, h1, …, hn are exact
ERUR
- A non-square system is converted to a square system
- Solutions in K̄^n are computed by adding the origin o to the supports
ERUR
Exact Sign: Given an expression e, tell whether or not e(h1(α), …, hn(α)) = 0.
- Use an (extended) root bound approach.
- Use Aberth's method [Aberth 73] to numerically compute an approximation for a root α of h to any precision.
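For illustration only, here is a minimal fixed-precision sketch of the Aberth iteration in plain Python; the actual ERUR implementation needs arbitrary-precision arithmetic and rigorous root bounds, and the helper below (its starting points and iteration count) is an assumption of this sketch. The test polynomial is the h from the toy RUR example above.

```python
# Minimal fixed-precision sketch of the Aberth simultaneous iteration for
# approximating all complex roots of a polynomial (coefficients listed from
# the highest degree down).  ERUR would run such an iteration with arbitrary
# precision and combine it with root bounds to decide signs exactly.
import cmath

def horner(coeffs, x):
    """Evaluate the polynomial with the given coefficients at x."""
    y = 0
    for c in coeffs:
        y = y * x + c
    return y

def aberth(coeffs, iterations=30):
    n = len(coeffs) - 1
    deriv = [c * (n - i) for i, c in enumerate(coeffs[:-1])]        # derivative
    radius = 1 + max(abs(c / coeffs[0]) for c in coeffs[1:])        # Cauchy-type bound
    z = [radius * cmath.exp(1j * (2 * cmath.pi * k / n + 0.7)) for k in range(n)]
    for _ in range(iterations):
        for k in range(n):
            w = horner(coeffs, z[k]) / horner(deriv, z[k])          # Newton correction
            s = sum(1 / (z[k] - z[j]) for j in range(n) if j != k)
            z[k] -= w / (1 - w * s)                                 # Aberth update
    return z

# h(T) = 2T^2 - 1 from the toy RUR above: roots approximately +-0.70710678
print(aberth([2, 0, -1]))
```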
Applications of ERUR
Real Roots: Given a system of polynomial equations, list all the real roots of the system.
Positive-Dimensional Component: Given a system of polynomial equations, tell whether or not the zero set of the system has a positive-dimensional component.
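Once an RUR is available, the real-root application reduces to univariate real root isolation; a minimal sympy sketch on the toy RUR from earlier (h = 2T^2 - 1, h1 = h2 = T) might look like this:

```python
# List the real solutions of a system from its RUR (h, h1, ..., hn):
# take the real roots of h and push them through the coordinate polynomials.
from sympy import symbols, real_roots

T = symbols("T")
h = 2 * T**2 - 1                 # toy RUR from the earlier example
h1, h2 = T, T

for r in real_roots(h):          # exact algebraic numbers -1/sqrt(2), 1/sqrt(2)
    print((h1.subs(T, r), h2.subs(T, r)))
```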
Applications of ERUR
Presented in today's last talk in Session 3, "Applying Computer Algebra Techniques for Exact Boundary Evaluation", 4:30 – 5:00 pm.
The Other RUR
- GB+RS [Rouillier 99, 04]: computes the exact RUR for the real solutions of a 0-dimensional system; GB computes the Gröbner basis
- [Giusti, Lecerf and Salvy 01]: an iterative method
Conclusion
- ERUR is strong at handling degeneracies
- Needs more optimizations and faster algorithms
Future Work
RUR:
- Faster sparse resultant algorithms
- Take advantage of the sparseness of matrices [Emiris and Pan 97]
- Faster univariate polynomial operations
Thank you for listening!
Contact:
- Koji Ouchi, [email protected]
- John Keyser, [email protected]
- Maurice Rojas, [email protected]
Visit our web page: http://research.cs.tamu.edu/keyser/geom/erur/
Thank you