Relaxation method for consistently ordered matrices

Definition 6.15. A matrix $A \in \mathbb{R}^{n \times n}$ with the additive splitting $A = D - L - U$ ($D$ the diagonal part, $-L$ the strictly lower and $-U$ the strictly upper triangular part) is called consistently ordered if the eigenvalues of the matrix
\[
J(\alpha) = \alpha D^{-1}L + \alpha^{-1} D^{-1}U, \qquad \alpha \in \mathbb{C} \setminus \{0\},
\]
are independent of the value of the parameter $\alpha$.
Theorem 6.16. Tridiagonal matrices with regular (nonsingular) diagonal part are consistently ordered.
Proof. With the diagonal matrix
\[
Q(\alpha) = \operatorname{diag}(1, \alpha, \alpha^2, \dots, \alpha^{n-1})
\]
we can write for tridiagonal matrices
\[
J(\alpha) = \alpha D^{-1}L + \alpha^{-1} D^{-1}U = Q(\alpha)\bigl(D^{-1}L + D^{-1}U\bigr)Q(\alpha)^{-1} = Q(\alpha)\,J(1)\,Q(\alpha)^{-1}.
\]
From this follows the similarity of the matrices $J(\alpha)$ and $J(1)$, and thus the equality of their eigenvalues.
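
The similarity argument can be checked numerically. The following is a minimal sketch (assuming NumPy; the test matrix $\operatorname{tridiag}(-1,2,-1)$ and the sampled values of $\alpha$ are chosen purely for illustration) that evaluates the spectrum of $J(\alpha)$ for several parameter values:

```python
import numpy as np

# Illustrative tridiagonal matrix with regular diagonal part:
# A = tridiag(-1, 2, -1), split as A = D - L - U.
n = 6
A = 2 * np.eye(n) - np.eye(n, k=-1) - np.eye(n, k=1)
D = np.diag(np.diag(A))     # diagonal part
L = -np.tril(A, k=-1)       # strictly lower triangular part
U = -np.triu(A, k=1)        # strictly upper triangular part
Dinv = np.linalg.inv(D)

def J(alpha):
    """Parameter-dependent matrix J(alpha) from Definition 6.15."""
    return alpha * Dinv @ L + (1.0 / alpha) * Dinv @ U

for alpha in [1.0, 0.5, 3.0]:
    ev = np.sort(np.linalg.eigvals(J(alpha)).real)
    print(f"alpha={alpha}: {np.round(ev, 10)}")
# All three runs print the same spectrum, as Theorem 6.16 predicts.
```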
The desired result for the SOR method in the case of consistently ordered matrices is given by the following theorem.

Theorem 6.17. Let the matrix $A$ be consistently ordered, and let all eigenvalues $\mu$ of the total step iteration matrix $J = D^{-1}(L+U)$ be real with spectral radius $\rho := \operatorname{spr}(J) < 1$. Then the SOR method converges for every $\omega \in (0,2)$; its iteration matrix $H_\omega = (D - \omega L)^{-1}\bigl((1-\omega)D + \omega U\bigr)$ has the spectral radius
\[
\operatorname{spr}(H_\omega) =
\begin{cases}
\Bigl(\tfrac{1}{2}\omega\rho + \sqrt{\tfrac{1}{4}\omega^2\rho^2 - \omega + 1}\Bigr)^2, & 0 < \omega \le \omega_{\mathrm{opt}},\\[4pt]
\omega - 1, & \omega_{\mathrm{opt}} \le \omega < 2,
\end{cases}
\]
and the optimal relaxation parameter is
\[
\omega_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \rho^2}}, \qquad \operatorname{spr}(H_{\omega_{\mathrm{opt}}}) = \omega_{\mathrm{opt}} - 1.
\]
Proof. We note the identity
\[
\det(\lambda I - H_\omega) = \det\bigl((D - \omega L)^{-1}\bigr)\det\bigl(\lambda(D - \omega L) - (1-\omega)D - \omega U\bigr)
= \det(D^{-1})\det\bigl((\lambda + \omega - 1)D - \lambda\omega L - \omega U\bigr).
\]
In conjunction with the regularity of $D - \omega L$ we conclude from this that $\lambda \neq 0$ is an eigenvalue of $H_\omega$ if and only if
\[
\det\Bigl(\frac{\lambda + \omega - 1}{\lambda^{1/2}\omega}\,I - \lambda^{1/2}D^{-1}L - \lambda^{-1/2}D^{-1}U\Bigr) = 0,
\]
i.e., since $A$ is consistently ordered, if and only if
\[
(\lambda + \omega - 1)^2 = \lambda\,\omega^2\mu^2
\tag{579}
\]
for some eigenvalue $\mu$ of $J$. Solving this quadratic equation for $\lambda^{1/2}$ gives
\[
\lambda^{1/2} = \tfrac{1}{2}\omega\mu \pm \sqrt{\tfrac{1}{4}\omega^2\mu^2 - \omega + 1}.
\tag{580}
\]
The discriminant in Eq. (580) is nonnegative if and only if $\tfrac{1}{4}\omega^2\mu^2 - \omega + 1 \ge 0$. Of the two roots
\[
\omega_\pm = \frac{2\bigl(1 \pm \sqrt{1 - \mu^2}\bigr)}{\mu^2}
\]
of the equation $\tfrac{1}{4}\omega^2\mu^2 - \omega + 1 = 0$, there is only one in the interval $(0,2)$, in fact
\[
\omega_- = \frac{2\bigl(1 - \sqrt{1 - \mu^2}\bigr)}{\mu^2} = \frac{2}{1 + \sqrt{1 - \mu^2}} \in (1,2].
\]
For $\omega > \omega_-$ the eigenvalues $\lambda$ become complex with
\[
|\lambda| = \Bigl|\tfrac{1}{2}\omega\mu \pm i\sqrt{\omega - 1 - \tfrac{1}{4}\omega^2\mu^2}\Bigr|^2 = \tfrac{1}{4}\omega^2\mu^2 + \omega - 1 - \tfrac{1}{4}\omega^2\mu^2 = \omega - 1.
\tag{581}
\]
From the equations Eq. (580) and Eq. (581) one sees that $|\lambda|$ is monotonically nondecreasing in $|\mu|$, so that with $\mu = \rho$
\[
\operatorname{spr}(H_\omega) =
\begin{cases}
\Bigl(\tfrac{1}{2}\omega\rho + \sqrt{\tfrac{1}{4}\omega^2\rho^2 - \omega + 1}\Bigr)^2, & 0 < \omega \le \omega_{\mathrm{opt}},\\[4pt]
\omega - 1, & \omega_{\mathrm{opt}} \le \omega < 2,
\end{cases}
\qquad \omega_{\mathrm{opt}} := \frac{2}{1 + \sqrt{1 - \rho^2}}.
\]
We examine the function
\[
f(\omega) := \Bigl(\tfrac{1}{2}\omega\rho + \sqrt{\tfrac{1}{4}\omega^2\rho^2 - \omega + 1}\Bigr)^2, \qquad 0 < \omega \le \omega_{\mathrm{opt}}.
\]
Next, for its derivative we have
\[
f'(\omega) = 2\Bigl(\tfrac{1}{2}\omega\rho + \sqrt{\tfrac{1}{4}\omega^2\rho^2 - \omega + 1}\Bigr)\Bigl(\tfrac{1}{2}\rho + \frac{\tfrac{1}{2}\omega\rho^2 - 1}{2\sqrt{\tfrac{1}{4}\omega^2\rho^2 - \omega + 1}}\Bigr) < 0,
\]
since
\[
\rho\sqrt{\tfrac{1}{4}\omega^2\rho^2 - \omega + 1} < 1 - \tfrac{1}{2}\omega\rho^2
\;\Longleftrightarrow\;
\tfrac{1}{4}\omega^2\rho^4 - \omega\rho^2 + \rho^2 < 1 - \omega\rho^2 + \tfrac{1}{4}\omega^2\rho^4
\;\Longleftrightarrow\;
\rho^2 < 1.
\]
The last inequality arises because of $\rho = \operatorname{spr}(J) < 1$. Hence $\operatorname{spr}(H_\omega)$ is strictly decreasing on $(0, \omega_{\mathrm{opt}}]$ and equal to $\omega - 1$, hence increasing, on $[\omega_{\mathrm{opt}}, 2)$. In particular $\operatorname{spr}(H_\omega) < 1$ for all $\omega \in (0,2)$, and the minimum $\operatorname{spr}(H_{\omega_{\mathrm{opt}}}) = \omega_{\mathrm{opt}} - 1$ is attained for $\omega = \omega_{\mathrm{opt}}$.
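The case distinction in the proof can be reproduced numerically. Below is a sketch (again assuming NumPy; the matrix $\operatorname{tridiag}(-1,2,-1)$ and the sampled values of $\omega$ are arbitrary illustration choices) that compares the computed spectral radius of $H_\omega$ with the closed formula of Theorem 6.17:

```python
import numpy as np

n = 8
A = 2 * np.eye(n) - np.eye(n, k=-1) - np.eye(n, k=1)
D = np.diag(np.diag(A))
L = -np.tril(A, k=-1)
U = -np.triu(A, k=1)

J = np.linalg.inv(D) @ (L + U)              # total step (Jacobi) matrix
rho = np.max(np.abs(np.linalg.eigvals(J)))  # rho = spr(J) < 1 here
w_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho**2))

def spr_H(w):
    """Spectral radius of the SOR matrix H_w = (D - wL)^{-1}((1-w)D + wU)."""
    H = np.linalg.solve(D - w * L, (1.0 - w) * D + w * U)
    return np.max(np.abs(np.linalg.eigvals(H)))

for w in [0.8, 1.0, w_opt, 1.9]:
    if w <= w_opt:
        disc = max(0.25 * w**2 * rho**2 - w + 1.0, 0.0)  # Eq. (580), mu = rho
        pred = (0.5 * w * rho + np.sqrt(disc))**2
    else:
        pred = w - 1.0                                   # Eq. (581)
    print(f"w={w:.4f}  computed={spr_H(w):.6f}  formula={pred:.6f}")
# The computed values match the formula; the minimum w_opt - 1 occurs at w_opt.
```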
Example 6.18. Under the assumptions of Theorem 6.17, the single-step method converges about twice as fast as the total step method (both methods unrelaxed, i.e., with $\omega = 1$).
Proof. Because of equation Eq. (579), for $\omega = 1$ we have $\lambda = \mu^2$, and so for the spectral radii
\[
\operatorname{spr}(H_1) = \operatorname{spr}(J)^2.
\]
The a-priori estimate given in connection with Theorem 5.14 shows that the number of iteration steps required for a given tolerance is inversely proportional to the logarithm of the spectral radius. So it follows from $\ln \operatorname{spr}(H_1) = 2 \ln \operatorname{spr}(J)$ that the single-step method requires about half as many steps as the total step method.
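A minimal numerical illustration (NumPy; the matrix and the tolerance $10^{-6}$ are example choices, not part of the proof) of $\operatorname{spr}(H_1) = \operatorname{spr}(J)^2$ and of the resulting factor of about two in the step counts:

```python
import numpy as np

n = 20
A = 2 * np.eye(n) - np.eye(n, k=-1) - np.eye(n, k=1)
D = np.diag(np.diag(A))
L = -np.tril(A, k=-1)
U = -np.triu(A, k=1)

J  = np.linalg.inv(D) @ (L + U)      # total step (Jacobi) matrix
H1 = np.linalg.solve(D - L, U)       # single step (Gauss-Seidel), w = 1

spr_J  = np.max(np.abs(np.linalg.eigvals(J)))
spr_H1 = np.max(np.abs(np.linalg.eigvals(H1)))
print(spr_H1, spr_J**2)              # agree up to rounding

# Steps for error reduction by 1e-6: roughly ln(tol) / ln(spr).
tol = 1e-6
print(np.log(tol) / np.log(spr_J), np.log(tol) / np.log(spr_H1))
# The Gauss-Seidel count is about half the Jacobi count.
```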
The following example shows the clear influence of an optimally chosen relaxation parameter on the required number of iterations in the case of special tridiagonal matrices.
Example 6.19. In the case of the tridiagonal matrix from Example 6.3 (or 1.3), with the optimal value
\[
\omega_{\mathrm{opt}} = \frac{2}{1 + \sin\bigl(\pi/(n+1)\bigr)}
\]
of the SOR relaxation parameter, the number of iteration steps required to reduce the error by a fixed factor grows only like $O(n)$, compared with $O(n^2)$ for the total and single-step methods.
Proof. Insertion into the eigenvalue equation shows that the GSV iteration matrix
\[
J = D^{-1}(L + U) = \tfrac{1}{2}\operatorname{tridiag}(1, 0, 1)
\]
has the following eigenvalues $\mu_k$ and eigenvectors $w_k$:
\[
\mu_k = \cos\frac{k\pi}{n+1}, \qquad w_k = \Bigl(\sin\frac{ik\pi}{n+1}\Bigr)_{i=1}^{n}, \qquad k = 1, \dots, n.
\]
The addition theorem
\[
\sin\bigl((i-1)x\bigr) + \sin\bigl((i+1)x\bigr) = 2\sin(ix)\cos(x)
\]
is used for this.
The greatest eigenvalue (in modulus) of the GSV iteration matrix is thus
\[
\operatorname{spr}(J) = \cos\frac{\pi}{n+1} = 1 - \frac{\pi^2}{2(n+1)^2} + O\bigl((n+1)^{-4}\bigr),
\qquad
\operatorname{spr}(H_1) = \operatorname{spr}(J)^2 = 1 - \frac{\pi^2}{(n+1)^2} + O\bigl((n+1)^{-4}\bigr).
\]
Theorem 6.17 gives the optimal parameter value
\[
\omega_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \cos^2\bigl(\pi/(n+1)\bigr)}} = \frac{2}{1 + \sin\bigl(\pi/(n+1)\bigr)}
\]
with the associated spectral radius
\[
\operatorname{spr}(H_{\omega_{\mathrm{opt}}}) = \omega_{\mathrm{opt}} - 1 = \frac{1 - \sin\bigl(\pi/(n+1)\bigr)}{1 + \sin\bigl(\pi/(n+1)\bigr)} = 1 - \frac{2\pi}{n+1} + O\bigl((n+1)^{-2}\bigr),
\]
and with the a-priori estimate already used in Example 6.18 the assertion follows, since $-\ln \operatorname{spr}(H_1) = O\bigl((n+1)^{-2}\bigr)$ while $-\ln \operatorname{spr}(H_{\omega_{\mathrm{opt}}}) = O\bigl((n+1)^{-1}\bigr)$.
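The different growth rates can be tabulated directly from the closed formulas derived above; in the following sketch (NumPy; the tolerance $10^{-6}$ is an arbitrary example value) the step counts are estimated via $\ln(\mathrm{tol}) / \ln(\operatorname{spr})$:

```python
import numpy as np

def steps(spr, tol=1e-6):
    """Estimated iteration count for error reduction by tol."""
    return int(np.ceil(np.log(tol) / np.log(spr)))

for n in [10, 20, 40, 80]:
    spr_J   = np.cos(np.pi / (n + 1))          # total step (Jacobi)
    spr_GS  = spr_J**2                         # single step (Gauss-Seidel)
    w_opt   = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))
    spr_SOR = w_opt - 1.0                      # optimally relaxed SOR
    print(f"n={n:3d}  GSV {steps(spr_J):6d}  ESV {steps(spr_GS):6d}  "
          f"SOR(w_opt) {steps(spr_SOR):4d}")
# GSV/ESV counts grow like n^2, the optimally relaxed SOR only like n.
```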
Note 6.21. The sharp estimates for the spectral radii of the total step and (relaxed) single-step methods from Example 6.19 show that the convergence of these basic iteration methods deteriorates steadily with increasing dimension $n$. Figure 6.1 shows this clearly for increasing dimension, even for the optimized SOR method. This calls for a fundamental improvement of the algorithmic approach (see Section 6.1). In the Numerical Mathematics II course such an approach is studied in the form of Krylov subspace methods; the elementary iteration methods examined here are then used for preconditioning.