Publisher:
Society for Industrial and Applied Mathematics

Issued date:
2013-07

Citation:
SIAM Journal on Matrix Analysis and Applications, 34 (2013) 3, pp. 1112-1128

ISSN:
1095-7162 (Online) 0895-4798 (Print)

DOI:
10.1137/12088642X

Sponsor:
This work was supported by the Ministerio de Economía y Competitividad of Spain through grants MTM-2009-09281, MTM-2012-32542 (Ceballos, Dopico, and Molera) and MTM2010-18057 (Castro-González).

Abstract:
Least squares problems $\min_x \|b - Ax\|_2$, where the matrix $A \in \mathbb{C}^{m \times n}$ ($m \ge n$) has some particular structure, arise frequently in applications. Polynomial data fitting is a well-known instance of problems that yield highly structured matrices, but many other examples exist. Very often, structured matrices have huge condition numbers $\kappa_2(A) = \|A\|_2 \, \|A^\dagger\|_2$ ($A^\dagger$ is the Moore–Penrose pseudoinverse of $A$), and therefore standard algorithms fail to compute accurate minimum 2-norm solutions of least squares problems. In this work, we introduce a framework that allows us to compute minimum 2-norm solutions of many classes of structured least squares problems accurately, i.e., with errors $\|\hat{x}_0 - x_0\|_2 / \|x_0\|_2 = O(u)$, where $u$ is the unit roundoff, independently of the magnitude of $\kappa_2(A)$ for most vectors $b$. The cost of these accurate computations is $O(n^2 m)$ flops, i.e., roughly the same cost as standard algorithms for least squares problems. The approach in this work relies on first computing an accurate rank-revealing decomposition of $A$, an idea that has been widely used in recent decades to compute, for structured ill-conditioned matrices, singular value decompositions, eigenvalues and eigenvectors in the Hermitian case, and solutions of linear systems with high relative accuracy. In order to prove that accurate solutions are computed, a new multiplicative perturbation theory of the least squares problem is needed.
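As a small illustration of the phenomenon the abstract describes (not the paper's algorithm), the following NumPy sketch builds the Vandermonde matrix arising in polynomial data fitting and evaluates its condition number $\kappa_2(A)$; the sizes, nodes, and right-hand side are illustrative assumptions.

```python
import numpy as np

# Polynomial data fitting on nodes t_1, ..., t_m with a degree-(n-1)
# polynomial yields the Vandermonde matrix A[i, j] = t_i**j.
m, n = 20, 12                              # illustrative sizes, m >= n
t = np.linspace(0.0, 1.0, m)               # equispaced nodes on [0, 1]
A = np.vander(t, n, increasing=True)       # m x n Vandermonde matrix

# kappa_2(A) = ||A||_2 * ||A^+||_2; for p=2 np.linalg.cond uses the
# SVD, so it also handles rectangular A.
kappa = np.linalg.cond(A, 2)
print(f"kappa_2(A) = {kappa:.3e}")         # huge even for these modest sizes

# A standard least squares solve: its relative error is typically of
# order u * kappa_2(A), not the O(u) the paper's framework achieves.
b = np.ones(m)
x0, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Even at this modest size the condition number is far beyond $1/u \approx 10^{16}$ being reached for only slightly larger degrees, which is why standard algorithms lose all accuracy on such problems.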
The results presented in this paper are valid for both full rank and rank deficient problems and also in the case of underdetermined linear systems (m < n). Among other types of matrices, the new method applies to rectangular Cauchy, Vandermonde, and graded matrices, and detailed numerical tests for Cauchy matrices are presented.
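A rectangular Cauchy matrix, one of the structured classes named above, can be sketched as follows; the parameter vectors `x` and `y` are illustrative assumptions, chosen to give a Hilbert-like matrix whose severe ill conditioning is easy to observe.

```python
import numpy as np

# Rectangular Cauchy matrix C[i, j] = 1 / (x[i] + y[j]).  Its entries
# are determined with high relative accuracy by the parameters x, y,
# which is what makes accurate rank-revealing decompositions possible.
x = np.arange(1, 13, dtype=float)          # 12 row parameters (illustrative)
y = np.arange(0, 8, dtype=float) + 0.5     # 8 column parameters (illustrative)
C = 1.0 / (x[:, None] + y[None, :])        # 12 x 8 Cauchy matrix

# Despite the tiny size, the matrix is extremely ill conditioned.
print(f"kappa_2(C) = {np.linalg.cond(C, 2):.3e}")
```

This is only a construction of the test-matrix class; the accurate solution method itself goes through a rank-revealing decomposition computed from the parameters, as described in the paper.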