The main contribution of this thesis is a new O(n^2), easily parallelizable algorithm for solving the tridiagonal eigenproblem. Three main advances lead to our new algorithm. A tridiagonal matrix is traditionally represented by its diagonal and off-diagonal elements. Our most important advance is in recognizing that its bidiagonal factors are "better" for computational purposes. The use of bidiagonals enables us to invoke a relative criterion to judge when eigenvalues are "close". The second advance comes from using multiple bidiagonal factorizations in order to compute different eigenvectors independently of each other. Thirdly, we use carefully chosen dqds-like transformations as inner loops to compute eigenpairs that are highly accurate and "faithful" to the various bidiagonal representations. Orthogonality of the eigenvectors is a consequence of this accuracy. Only O(n) work per eigenpair is needed by our new algorithm.
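To make the central representational idea concrete, the following is a minimal sketch (not the thesis implementation) of the factorization in question: a symmetric tridiagonal matrix T, given by its diagonal d and off-diagonal e, can be written as T = L D Lᵀ, where L is unit lower bidiagonal and D is diagonal. The subdiagonal of L together with D form the "bidiagonal factors" referred to above; the function name `ldlt_tridiagonal` is illustrative only.

```python
import numpy as np

def ldlt_tridiagonal(d, e):
    """Factor a symmetric tridiagonal T (diagonal d, off-diagonal e)
    as T = L D L^T, with L unit lower bidiagonal and D diagonal.
    Illustrative sketch only; assumes no zero pivots are encountered."""
    n = len(d)
    D = np.empty(n)
    l = np.empty(n - 1)  # subdiagonal of L
    D[0] = d[0]
    for i in range(n - 1):
        l[i] = e[i] / D[i]                 # eliminate the off-diagonal entry
        D[i + 1] = d[i + 1] - l[i] * e[i]  # updated pivot
    return l, D

# Check the identity T = L D L^T on a small example.
d = np.array([4.0, 5.0, 6.0, 7.0])
e = np.array([1.0, 2.0, 3.0])
l, D = ldlt_tridiagonal(d, e)
L = np.eye(4) + np.diag(l, -1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
assert np.allclose(L @ np.diag(D) @ L.T, T)
```

The point of working with (l, D) rather than (d, e) is that small relative changes in the factors cause only small relative changes in the eigenvalues, which is what makes the relative closeness criterion mentioned above meaningful.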
Conventional wisdom is that there is usually a trade-off between speed and accuracy in numerical procedures, i.e., higher accuracy can be achieved only at the expense of greater computing time. An interesting aspect of our work is that increased accuracy in the eigenvalues and eigenvectors obviates the need for explicit orthogonalization and leads to greater speed.
We present timing and accuracy results comparing a computer implementation of our new algorithm with four existing EISPACK and LAPACK software routines. Our test-bed contains a variety of tridiagonal matrices, some coming from quantum chemistry applications. The numerical results demonstrate the superiority of our new algorithm. For example, on a matrix of order 966 that occurs in the modeling of a biphenyl molecule, our method is about 10 times faster than LAPACK's inverse iteration on a serial IBM RS/6000 processor and nearly 100 times faster on a 128-processor IBM SP2 parallel machine.