Arrow Research search

Author name cluster

Victor Y. Pan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

37 papers
2 author rows

Possible papers

37

TCS Journal 2025 Journal Article

A new fast root-finder for black box polynomials

  • Victor Y. Pan
  • Soo Go
  • Qi Luan
  • Liang Zhao

Univariate polynomial root-finding has been studied for four millennia and very intensively in recent decades. Our black box root-finder involves no coefficients and works for a black box polynomial, defined by an oracle (that is, a black box subroutine) for its evaluation. Such root-finders have various benefits; e.g., they are particularly efficient where a polynomial can be evaluated fast, say, where it is the sum of a small number of shifted monomials (x − c)^a. By incorporating a fast algorithm by the first author for compression of a disc on the complex plane without losing roots, our root-finder approximates all d complex zeros of a dth degree polynomial p(x) (aka roots of the equation p(x) = 0) by using a near-optimal Las Vegas expected number of bit-operations, that is, the root-finder is expected to run almost as fast as one accesses the coefficients with the precision required for the solution within a prescribed error bound. The only other known near-optimal polynomial root-finder was presented by the first author at ACM STOC 1995. It is quite involved and has never been implemented, while already in its initial implementation our new root-finder competed with the user's choice package of root-finding subroutines MPSolve, according to extensive numerical experiments with standard test polynomials. Furthermore, we readily extend our black box root-finder to approximation of the eigenvalues of a matrix in record expected bit-operation time, while the root-finder of STOC 1995, using the coefficients of p(x), does not support such an extension.
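The notion of a black box polynomial is easy to make concrete. Below is a minimal sketch (the names are illustrative, not from the paper): a polynomial represented only by an evaluation oracle, here a sum of a few shifted monomials (x − c)^a, which costs one term per summand to evaluate regardless of its expanded degree.

```python
# Illustrative sketch (not the authors' algorithm): a "black box" polynomial
# is just an evaluation oracle. A sum of a few shifted monomials b*(x - c)^a
# is evaluated term by term, independently of the expanded degree.
def black_box_poly(terms):
    """terms: list of (b, c, a) triples for the sum of b*(x - c)**a."""
    def p(x):
        return sum(b * (x - c) ** a for (b, c, a) in terms)
    return p

# p(x) = (x - 1)**3 + 2*(x + 1)**2 -- degree 3, but only two terms to evaluate
p = black_box_poly([(1, 1, 3), (2, -1, 2)])
```

A root-finder in this model may query `p` at chosen points but never sees coefficients.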

SODA Conference 2024 Conference Paper

Nearly Optimal Black Box Polynomial Root-finders

  • Victor Y. Pan

Univariate polynomial root-finding has been studied for four millennia and very intensively in recent decades. Our novel nearly optimal Las Vegas randomized root-finders approximate all zeros of a polynomial almost as fast as one accesses its coefficients with the precision required for the solution within a prescribed error bound. Moreover, our root-finders can be applied to a black box polynomial, defined by an oracle (that is, a black box subroutine) for its evaluation rather than by its coefficients. Such root-finders are particularly fast for polynomials that can be evaluated fast, e.g., the sum of a few shifted monomials, but the only other known black box root-finder is the pioneering one by Louis and Vempala at FOCS 2016, and it only approximates the absolutely largest root of a real-rooted polynomial. Our deterministic divide and conquer algorithm of ACM STOC 1995 is the only other known nearly optimal polynomial root-finder; it extensively uses the coefficients, is quite involved, and has never been implemented, while according to extensive numerical experiments with standard test polynomials, already initial implementations of our new root-finders compete with the user's choice package of root-finding subroutines MPSolve and outperform it more and more significantly as the degree of the polynomial grows large. Our root-finders are readily extended to support approximation of the eigenvalues of a matrix within a record Las Vegas expected bit-operation time bound. Our auxiliary algorithms and techniques for computations with black box polynomials can be of independent interest.
Keywords: polynomial computations, matrix eigenvalues, complexity, polynomial zeros, black box polynomials, root-squaring, symbolic-numeric computing.

TCS Journal 2017 Journal Article

Accelerated approximation of the complex roots and factors of a univariate polynomial

  • Victor Y. Pan
  • Elias Tsigaridas

The algorithms of Pan (1995) [20] and Pan (2002) [22] approximate the roots of a complex univariate polynomial in nearly optimal arithmetic and Boolean time but require a precision of computing that exceeds the degree of the polynomial. This causes numerical stability problems when the degree is large. We observe, however, that such a difficulty disappears at the initial stage of the algorithms, and in the present paper we extend this stage to root-finding within nearly optimal arithmetic and Boolean complexity bounds, provided that some mild initial isolation of the roots of the input polynomial has been ensured. Furthermore, our algorithm is nearly optimal for the approximation of the roots isolated in a fixed disc, square, or another region of the complex plane rather than of all complex roots of a polynomial. Moreover, the algorithm can be applied to a polynomial given by a black box for its evaluation (even if its coefficients are not known); it promises to be of practical value for polynomial root-finding and factorization, the latter task being of interest in its own right. We conclude by summarizing our algorithms and their extension to the approximation of isolated multiple roots and root clusters.

TCS Journal 2017 Journal Article

Nearly optimal computations with structured matrices

  • Victor Y. Pan
  • Elias P. Tsigaridas

We estimate the Boolean complexity of multiplying structured matrices by a vector and of solving nonsingular linear systems of equations with these matrices. We study four basic and most popular classes, namely Toeplitz, Hankel, Cauchy, and Vandermonde matrices, for which the cited computational problems are equivalent to the tasks of polynomial multiplication and division and of polynomial and rational multipoint evaluation and interpolation. The Boolean cost estimates for the latter problems have been obtained by Kirrinnis in [10], except for rational interpolation. We supply them now, as well as the Boolean complexity estimates for the important problems of multiplying a transposed Vandermonde matrix and its inverse by a vector. All known Boolean cost estimates from [10] for such problems rely on using the Kronecker product. This implies a d-fold precision increase for the d-th degree output, but we avoid such an increase by relying on distinct techniques based on the FFT. Furthermore, we simplify the analysis and make it more transparent by combining the representations of our tasks and algorithms both via structured matrices and via polynomials and rational functions. This also enables further extensions of our estimates to cover Trummer's important problem and computations with the popular classes of structured matrices that generalize the four cited basic matrix classes, as well as the transposed Vandermonde matrices. It is known that the solution of Toeplitz, Hankel, Cauchy, Vandermonde, and transposed Vandermonde linear systems of equations is generally prone to numerical stability problems, and numerical problems arise even for multiplication of Cauchy, Vandermonde, and transposed Vandermonde matrices by a vector. Thus our FFT-based results on the Boolean complexity of these important computations could be quite interesting, because our estimates remain reasonable even for more general classes of structured matrices, showing rather moderate growth of the complexity as the input size increases.
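A concrete instance of the FFT technique behind such fast structured matrix-vector products is the standard circulant-embedding trick for Toeplitz matrices. The sketch below (illustrative names, not from the paper) embeds an n×n Toeplitz matrix into a 2n×2n circulant, whose matrix-vector product is a convolution computed by FFT in O(n log n) operations.

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """T*x for the Toeplitz matrix T with first column `col` and first row
    `row` (col[0] == row[0]), via a circulant embedding of size 2n and FFT."""
    n = len(x)
    # First column of the circulant that contains T as its top-left block.
    c = np.concatenate([col, [0.0], row[:0:-1]])          # length 2n
    v = np.concatenate([x, np.zeros(n)])                  # zero-pad the input
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v))        # circular convolution
    return y[:n].real                                     # first n entries give T*x
```

Hankel matrices reduce to the Toeplitz case by reversing the row order, which is one reason the four basic classes share these complexity bounds.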

TCS Journal 2017 Journal Article

Real polynomial root-finding by means of matrix and polynomial iterations

  • Victor Y. Pan
  • Liang Zhao

Univariate polynomial root-finding is a classical subject, still important for modern computing. Frequently one seeks just the real roots of a polynomial with real coefficients. They can be approximated at a low computational cost if the polynomial has no nonreal roots, but for high-degree polynomials, nonreal roots are typically far more numerous than the real ones. The challenge has been known for a long time, and the subject has been intensively studied. Nevertheless, we propose some novel ideas and techniques and obtain a dramatic acceleration of the known algorithms. To achieve this progress we exploit the correlation between computations with matrices and polynomials, randomized matrix computations, and complex plane geometry; we extend the techniques of the matrix sign iterations and use the structure of the companion matrix of the input polynomial. The results of our extensive tests with benchmark polynomials and random matrices are quite encouraging. In particular, in our tests the number of iterations required for the convergence of our algorithms grew very slowly (if it grew at all) as we increased the degree of the input polynomials and the dimension of the input matrices from 64 to 1024.
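Two of the ingredients named in the abstract can be sketched in a few lines. This toy version (not the authors' accelerated algorithm) builds the companion matrix of a monic polynomial, whose eigenvalues are the polynomial's roots, and runs the basic matrix sign iteration X ← (X + X⁻¹)/2, whose limit maps each eigenvalue to ±1 according to the sign of its real part.

```python
import numpy as np

def companion(p):
    """Companion matrix of a monic polynomial; coefficients highest degree first."""
    n = len(p) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)                    # ones on the subdiagonal
    C[:, -1] = -np.asarray(p[:0:-1], dtype=float) # last column: -a_0, ..., -a_{n-1}
    return C

def matrix_sign(X, iters=60):
    """Basic (unaccelerated) matrix sign iteration X <- (X + X^{-1})/2."""
    for _ in range(iters):
        X = (X + np.linalg.inv(X)) / 2
    return X
```

The trace of the limit counts eigenvalues in the right half-plane minus those in the left, which is the kind of root-counting information such iterations expose.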

TCS Journal 2013 Journal Article

Preface

  • Ilias S. Kotsireas
  • Bernard Mourrain
  • Victor Y. Pan
  • Lihong Zhi

TCS Journal 2011 Journal Article

Preface

  • Ilias S. Kotsireas
  • Bernard Mourrain
  • Victor Y. Pan

TCS Journal 2004 Journal Article

Iterative inversion of structured matrices

  • Victor Y. Pan
  • Marc Van Barel
  • Xinmao Wang
  • Gianni Codevico

Iterative processes for the inversion of structured matrices can be further improved by using a technique for compression and refinement via least-squares computation. We review such processes and elaborate on the incorporation of this technique into the known frameworks.
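A standard example of such an iterative inversion process is the Newton-Schulz residual-correction iteration, shown below as a generic textbook sketch (the paper's compression and refinement via least-squares is not shown, and the names are illustrative).

```python
import numpy as np

def newton_schulz_inverse(A, iters=50):
    """Approximate A^{-1} by the Newton-Schulz iteration X <- X(2I - AX).
    The starting guess A^T / (||A||_1 ||A||_inf) guarantees convergence
    for any nonsingular A; convergence is quadratic once underway."""
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        X = X @ (2 * np.eye(n) - A @ X)   # residual correction step
    return X
```

For structured matrices the point of compression techniques is that each iterate can be kept in compact (low displacement rank) form, so each step stays cheap; the dense sketch above ignores that and costs a full matrix product per step.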

TCS Journal 1999 Journal Article

Sign determination in residue number systems

  • Hervé Brönnimann
  • Ioannis Z. Emiris
  • Victor Y. Pan
  • Sylvain Pion

Sign determination is a fundamental problem in algebraic as well as geometric computing. It is the critical operation when using real algebraic numbers and exact geometric predicates. We propose an exact and efficient method that determines the sign of a multivariate polynomial expression with rational coefficients. Exactness is achieved by using modular computation. Although this usually requires some multiprecision computation, our novel techniques of recursive relaxation of the moduli and their variants enable us to carry out sign determination and comparisons by using only single precision. Moreover, to exploit modern-day hardware, we exclusively rely on floating point arithmetic, which leads us to a hybrid symbolic-numeric approach to exact arithmetic. We show how our method can be used to generate robust and efficient implementations of real algebraic and geometric algorithms including Sturm sequences, algebraic representation of points and curves, convex hull and Voronoi diagram computations, and solid modeling. This method is highly parallelizable, easy to implement, and compares favorably with known multiprecision methods from a practical complexity point of view. We substantiate these claims by experimental results and comparisons to other existing approaches.
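The modular principle behind such methods can be illustrated in a few lines. This is a toy sketch only (plain Chinese remaindering with multiprecision recombination, not the paper's single-precision recursive relaxation of the moduli): evaluate the integer expression modulo several primes, recover it in the symmetric residue range, and read off the sign.

```python
from functools import reduce

def sign_via_crt(expr_mod, primes):
    """Sign of an integer expression. expr_mod(m) returns the expression
    reduced mod m; the product of the pairwise-coprime primes must exceed
    twice the expression's magnitude."""
    M = reduce(lambda a, b: a * b, primes)
    x = 0
    for p in primes:
        Mi = M // p
        x += expr_mod(p) * Mi * pow(Mi, -1, p)   # Chinese remaindering
    x %= M
    if x > M // 2:                               # lift to the range (-M/2, M/2]
        x -= M
    return (x > 0) - (x < 0)
```

Exactness holds because every residue computation is exact; the sign emerges only at the final lift, which is what makes the approach robust where floating point comparisons fail.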

FOCS Conference 1998 Conference Paper

A Unified Superfast Algorithm for Boundary Rational Tangential Interpolation Problems and for Inversion and Factorization of Dense Structured Matrices

  • Vadim Olshevsky
  • Victor Y. Pan

The classical scalar Nevanlinna-Pick interpolation problem has a long and distinguished history, appearing in a variety of applications in mathematics and electrical engineering. There is a vast literature on this problem and on its various far-reaching generalizations. It is widely known that the now-classical algorithm for solving this problem, proposed by Nevanlinna in 1929, can be seen as a way of computing the Cholesky factorization of the corresponding Pick matrix. Moreover, the classical Nevanlinna algorithm takes advantage of the special structure of the Pick matrix to compute this triangular factorization in only O(n^2) arithmetic operations, where n is the number of interpolation points, or equivalently, the size of the Pick matrix. Since the structure-ignoring standard Cholesky algorithm (though applicable to the wider class of general matrices) has the much higher complexity O(n^3), the Nevanlinna algorithm is an example of what are now called fast algorithms. In this paper we use a divide-and-conquer approach to propose a new superfast O(n log^3 n) algorithm to construct solutions of the more general boundary tangential Nevanlinna-Pick problem. This dramatic speed-up is achieved via a new divide-and-conquer algorithm for the factorization of rational matrix functions; this superfast algorithm seems to have practical and theoretical significance in itself. It can be used to solve similar rational interpolation problems (e.g., the matrix Nehari problem) and a variety of engineering problems. It can also be used for inversion and triangular factorization of matrices with displacement structure, including Hankel-like, Vandermonde-like, and Cauchy-like matrices.

FOCS Conference 1993 Conference Paper

The NC Equivalence of Planar Integer Linear Programming and Euclidean GCD

  • David Shallcross
  • Victor Y. Pan
  • Yu Lin-Kriz

We show an NC-reduction of integer linear programming with two variables to the evaluation of the remainder sequence arising in the application of the Euclidean algorithm to two positive integers. Combined with a previous result of X. Deng (1989), this implies the NC-equivalence of these two problems, whose membership in NC, as well as whose P-completeness, remain unresolved open problems.

FOCS Conference 1992 Conference Paper

Improved Parallel Polynomial Division and Its Extensions

  • Dario Bini
  • Victor Y. Pan

The authors compute the first N coefficients of the reciprocal r(x) of a given polynomial p(x) (r(x)p(x) = 1 mod x^N, p(0) ≠ 0) by using, under the PRAM arithmetic models, O(h log N) time-steps and O((N/h)(1 + 2^(-h) log^(h) N)) processors, for any h, h = 1, 2, ..., log* N, provided that O(log m) steps and m processors suffice to perform the DFT on m points, where log^(0) N = N, log^(h) N = log_2 log^(h-1) N for h = 1, ..., log* N, and log* N = max{h : log^(h) N > 0}. The same complexity estimates apply to some other computations, such as the division with a remainder of two polynomials of degrees O(N) and the inversion of an N × N triangular Toeplitz matrix. The authors also show how to extend the techniques to the parallel implementation of other recursive processes, such as the evaluation modulo x^N of the m-th root p(x)^(1/m) of p(x) (for any fixed natural m), for which O(log N log log N) time-steps and O(N/log log N) processors suffice. The paper demonstrates some new techniques of supereffective slowdown of parallel algebraic computations, which are combined with a technique of stream contraction.
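The sequential recursion underlying this computation is Newton's iteration for a power-series reciprocal, which doubles the number of correct coefficients per step; the paper's contribution lies in its parallel implementation, which this toy sequential sketch (illustrative names, not the authors' algorithm) does not attempt.

```python
import numpy as np

def series_reciprocal(p, N):
    """First N coefficients of 1/p(x), p given as a coefficient list
    (constant term first) with p[0] != 0, via Newton's iteration
    r <- r*(2 - p*r) mod x^k with k doubling each step."""
    p = np.asarray(p, dtype=float)
    r = np.array([1.0 / p[0]])
    k = 1
    while k < N:
        k = min(2 * k, N)
        t = np.convolve(p[:k], r)[:k]   # p*r truncated mod x^k
        s = -t
        s[0] += 2.0                     # 2 - p*r
        r = np.convolve(r, s)[:k]       # r*(2 - p*r) mod x^k
    return r
```

Each step squares the residual 1 − p·r mod x^k, so O(log N) doubling steps suffice; the division-with-remainder and triangular Toeplitz inversion mentioned in the abstract reduce to this reciprocal computation.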

FOCS Conference 1992 Conference Paper

Processor-Efficient Parallel Solution of Linear Systems II: The Positive Characteristic and Singular Cases (Extended Abstract)

  • Erich L. Kaltofen
  • Victor Y. Pan

For Part I, see Proc. 3rd Ann. ACM Symp. on Parallel Algorithms and Architectures, p. 180-91 (1991). The authors show that over any field, the solution set of a system of n linear equations in n unknowns can be computed in parallel with randomization simultaneously in poly-logarithmic time in n and with only as many processors as are utilized to multiply two n × n matrices. A time unit represents an arithmetic operation in the field. For singular systems the parallel timings are asymptotically as fast as those for non-singular systems, due to the avoidance of binary search in the matrix rank problem, except when the field has small positive characteristic; in that case, binary search is avoided at a somewhat higher processor-count measure.

FOCS Conference 1992 Conference Paper

The Power of Combining the Techniques of Algebraic and Numerical Computing: Improved Approximate Multipoint Polynomial Evaluation and Improved Multipole Algorithms

  • Victor Y. Pan
  • John H. Reif
  • Stephen R. Tate

The authors demonstrate the power of combining the techniques of algebraic computation with those of numerical computation. They do this by improving the known methods for polynomial evaluation on a set of real points and for the simulation of n charged particles on the plane. In both cases they approximate (rather than exactly compute) the solutions, and they do this by exploiting algebraic techniques of algorithm design.

FOCS Conference 1985 Conference Paper

Fast and Efficient Algorithms for Sequential and Parallel Evaluation of Polynomial Zeros and of Matrix Polynomials

  • Victor Y. Pan

We evaluate all the real and complex zeros λ_1, ..., λ_n of an n-th degree univariate polynomial with the relative precision 1/2^(n^c) for a given positive constant c. If for all g, h we have |λ_g/λ_h − 1| ≥ 1/2^O(n) unless λ_g = λ_h, then we need O(n^3 log^2 n) arithmetic operations, or O(n^2 log n) steps on n log n processors. O(n^2 log n) operations, or O(n log n) parallel steps on n processors, suffice if either all the zeros are real or for all g, h either |λ_g| = |λ_h| or 2^O(n) ≥ |λ_g/λ_h| − 1 ≥ 1/2^O(n). If all the zeros are either multiple or form complex conjugate pairs, or if their moduli pairwise differ by factors of at least 1 + 1/n^O(1), then O(n log^2 n) operations, or O(log^2 n) steps on n processors, suffice. Replacing 1 + 1/n^O(1) above by 1 + 1/n^O(log^h n) for a positive h only requires increasing the time-complexity bounds by the factor log^h n. Some of the presented algorithms extend Graeffe's method; other algorithms use the power sum techniques and the companion matrix computation. The latter ones are related to Bernoulli's and Leverrier's methods and to the power method, and they are extended in this paper to the evaluation of a matrix polynomial u(X) of degree N (X is an n×n matrix) using O(N log N + n^2.496) arithmetic operations. Such evaluation can be performed using O(log N + log^2 n) parallel steps on Nn + n^3.496 processors, or alternatively O(log^2(nN)) steps on N/log N + n^3.496 processors, over an arbitrary field of constants. Over the rational constants, for almost all matrices X the number of processors can be reduced to Nn + n^2.933 or to N/log N + n^2.933, respectively; the bounds can be further reduced to O(log N + log^2 n) steps on N + n^2.933 processors if u(X) is to be computed with a fixed arbitrarily high precision rather than exactly. For integer and well-conditioned matrices, the exponent 2.933 above can be decreased to 2.496. The results substantially improve the previously known upper estimates for the complexity of sequential and parallel evaluation of polynomial zeros and of matrix polynomials.
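Graeffe's root-squaring step mentioned in the abstract is short enough to sketch. This toy version (coefficient arithmetic via convolution, illustrative only) maps a polynomial p with roots λ_i to a polynomial with roots λ_i², using the identity q(x²) = (−1)^n p(x)p(−x); repeating it drives the root moduli geometrically apart.

```python
import numpy as np

def graeffe_step(p):
    """One root-squaring step. p: coefficients, highest degree first.
    Returns q with q(x^2) = (-1)^n p(x) p(-x), i.e. roots squared."""
    p = np.asarray(p, dtype=float)
    n = len(p) - 1
    signs = (-1.0) ** (n - np.arange(n + 1))  # coefficient signs of p(-x)
    prod = np.convolve(p, p * signs)          # p(x)*p(-x): even polynomial in x
    q = prod[::2]                             # keep coefficients of even powers
    return q if n % 2 == 0 else -q
```

After k steps the roots become λ_i^(2^k), so well-separated moduli can be read off the coefficient ratios, which is the classical use of the method.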

FOCS Conference 1979 Conference Paper

Field Extension and Trilinear Aggregating, Uniting and Canceling for the Acceleration of Matrix Multiplications

  • Victor Y. Pan

The acceleration of matrix multiplication (MM) is based on the combination of the method of algebraic field extension, due to D. Bini, M. Capovani, G. Lotti, F. Romani, and S. Winograd, with the trilinear aggregating, uniting, and canceling due to the author. A fast algorithm of O(N^2.7378) complexity for N × N matrix multiplication is derived. With A. Schönhage's theorem about partial and total MM, our approach yields the exponent 2.6054 at the price of a serious increase in the constant.