Fast Matrix Based Computation of Eigenvalues and the Loewner Order in PolSAR Data

We describe the calculation of eigenvalues of <inline-formula> <tex-math notation="LaTeX">$2 \times 2$ </tex-math></inline-formula> or <inline-formula> <tex-math notation="LaTeX">$3 \times 3$ </tex-math></inline-formula> Hermitian matrices as used in the analysis of multilook polarimetric synthetic aperture radar (SAR) data. The eigenvalues are calculated as the roots of quadratic or cubic equations. We also describe the pivot-based calculation of the Loewner order for the partial ordering of differences between such matrices. The methods are well suited for fast matrix-oriented computer implementation, and the speed-up over simpler calculations based on built-in eigenproblem solvers is enormous.


I. INTRODUCTION
IN THE analysis of multilook polarimetric synthetic aperture radar (SAR) data described by Hermitian (variance-) covariance matrices, the complex Wishart distribution can be used for change detection between acquisitions at two time points [1]. Many authors have worked with this type of change detection (see [2]- [7]). In [3] and [5], we specifically work with the change between several time points.
Used in combination with a polarimetric change detector, the Loewner order determines whether the difference between two Hermitian covariance matrices is positive definite, negative definite, or indefinite [8], [9]. This allows us to establish whether the radar response increases or decreases between two time points, or whether its structure or nature changes in such a way that we cannot say whether it has increased or decreased. The Loewner order is thus concerned with the direction of detected change.
This letter deals with fast matrix-based calculations of: 1) eigenvalues of Hermitian matrices and 2) the Loewner order, i.e., the definiteness of differences between Hermitian matrices. Specifically, for the eigenvalues, we compare (MATLAB) computing times for the suggested matrix-oriented calculations with a simple implementation with loops over rows and columns and calls to the built-in eigenvalue/vector function eig. For the Loewner order, we compare computing times for eigenvalue-based calculations with the pivot-based ones.
The data used here to illustrate the techniques are obtained from the airborne Electromagnetics Institute SAR (EMISAR) system [10], [11]. This is a computational letter with very little focus on actual change on the ground.

II. MULTILOOK POLARIMETRIC SAR DATA

In the covariance matrix formulation of multilook polarimetric SAR image data, each pixel can be described by a complex 3 × 3 covariance matrix C. This matrix is Hermitian, also known as self-adjoint, i.e., the matrix is equal to its own conjugate transpose. If we multiply by the number of looks, n, Z = nC will follow a complex Wishart distribution (for fully developed speckle). This is the matrix that is used in the change detection methods described in [2], [3], [5], and [7]. In the following and for the methods in [9], we may work on C or Z, and here we will use Z.

III. EIGENVALUES

Let us use the notation

    Z = | k   a   ρ |
        | a*  ξ   b |
        | ρ*  b*  ζ |

with trace tr Z = k + ξ + ζ and determinant

    det Z = kξζ + a*b*ρ + abρ* − ξ|ρ|² − ζ|a|² − k|b|².

The first, fourth, fifth, and sixth terms are real. The second and third terms are complex conjugates of each other.
Since the imaginary parts cancel out, the sum of the two terms is twice the real part. Both trace and determinant of Z are real. Fast matrix inversion and fast determinant calculation are dealt with in [12].
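As an illustration of the matrix-oriented style advocated here, the trace and determinant can be evaluated elementwise over whole images at once from the six unique elements of Z. The letter's own code is MATLAB; the following is a minimal NumPy sketch (the function name and argument layout are assumptions for illustration):

```python
import numpy as np

def trace_det_3x3(k, xi, zeta, a, b, rho):
    """Elementwise trace and determinant of the Hermitian matrix
    Z = [[k, a, rho], [conj(a), xi, b], [conj(rho), conj(b), zeta]];
    k, xi, zeta are real arrays (or scalars), a, b, rho complex."""
    tr = k + xi + zeta
    # det Z = k xi zeta + 2 Re(a b conj(rho)) - xi|rho|^2 - zeta|a|^2 - k|b|^2
    det = (k * xi * zeta
           + 2.0 * np.real(a * b * np.conj(rho))
           - xi * np.abs(rho) ** 2
           - zeta * np.abs(a) ** 2
           - k * np.abs(b) ** 2)
    return tr, det
```

Because every operation acts on whole arrays, no loops over rows and columns are needed.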
Below we give computational details. For completeness, we start by giving the more well-known solutions to the dual-pol and azimuthal symmetry cases. All solutions given are well suited for fast matrix-based computer implementation.

A. Dual Pol Case
For dual polarimetry we have, say, the upper left 2 × 2 matrix of Z only

    | k   a |
    | a*  ξ |.

The trace is k + ξ and the determinant is kξ − |a|²; both are real. The eigenvalue problem can be written as

    η² − (k + ξ)η + (kξ − |a|²) = 0

which has the well-known solution η = (−B ± √(B² − 4AC))/(2A) for Aη² + Bη + C = 0. Here A = 1, B is minus the trace, and C is the determinant, so we get

    η = [(k + ξ) ± √((k − ξ)² + 4|a|²)]/2.

The discriminant (k − ξ)² + 4|a|² is always nonnegative; therefore, the eigenvalues are real.
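The closed-form dual pol solution is trivially vectorized. A minimal NumPy sketch (function name is illustrative, not from the letter):

```python
import numpy as np

def eig2_hermitian(k, xi, a):
    """Eigenvalues of the 2x2 Hermitian matrix [[k, a], [conj(a), xi]],
    elementwise: roots of eta^2 - (k + xi) eta + (k xi - |a|^2) = 0."""
    tr = k + xi
    disc = np.sqrt((k - xi) ** 2 + 4.0 * np.abs(a) ** 2)  # always >= 0
    return 0.5 * (tr + disc), 0.5 * (tr - disc)           # eta1 >= eta2
```

Applied to image-sized arrays of k, ξ, and a, this replaces a double loop with calls to a built-in eigensolver.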

B. Azimuthal Symmetry Case
For the azimuthal symmetry case, we force the elements a and b in matrix Z to zero so we have

    Z = | k   0   ρ |
        | 0   ξ   0 |
        | ρ*  0   ζ |.

In this case, the eigenvalue problem can be written as

    (ξ − η)[(k − η)(ζ − η) − |ρ|²] = 0

so ξ is an eigenvalue, and the two other eigenvalues are found as in the dual pol case (with ξ replaced by ζ and a by ρ). Again, the eigenvalues are real.
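In NumPy form (a sketch, with an assumed function name), the azimuthal symmetry case simply combines ξ with the dual pol formula applied to the 2 × 2 block formed by k, ζ, and ρ:

```python
import numpy as np

def eig3_azimuthal(k, xi, zeta, rho):
    """Eigenvalues of [[k, 0, rho], [0, xi, 0], [conj(rho), 0, zeta]]:
    xi plus the two eigenvalues of the block [[k, rho], [conj(rho), zeta]]."""
    tr = k + zeta
    disc = np.sqrt((k - zeta) ** 2 + 4.0 * np.abs(rho) ** 2)
    return xi, 0.5 * (tr + disc), 0.5 * (tr - disc)
```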

C. Quad/Full Pol Case
In this case, we have the full matrix Z. The eigenvalues η of Z are the roots of the cubic equation

    f(η) = Aη³ + Bη² + Cη + D = 0

with A = 1, B = −(k + ξ + ζ) = −tr Z, C = (kξ − |a|²) + (kζ − |ρ|²) + (ξζ − |b|²) (the sum of the principal 2 × 2 minors), and D = −det Z. To find the inflection point of f, set the second derivative to zero leading to η = −B/(3A). To solve the cubic equation f(η) = 0, divide by A and introduce

    x = η + B/(3A)

which translates the inflection point along the x-axis so that its abscissa becomes zero, to obtain f(x) = 0 for

    f(x) = x³ + 3px + 2q.

f(x) is the so-called depressed cubic (with no quadratic term) where

    3p = C/A − B²/(3A²)   and   2q = 2B³/(27A³) − BC/(3A²) + D/A.

At the inflection point, x = 0 and f(0) = 2q. Introduce

    g(x) = f(x) − 2q = x³ + 3px

which translates the inflection point along the y-axis so that its ordinate becomes zero, to see that g(−x) = −g(x). This shows that the cubic function is point symmetric (i.e., it has 180° rotational symmetry) around the inflection point.
Inserting our A, B, and C into the expression for 3p, we get

    3p = −[(1/2)((k − ξ)² + (k − ζ)² + (ξ − ζ)²) + 3(|a|² + |b|² + |ρ|²)]/3

so in our case p ≤ 0, and p = 0 if and only if k = ξ = ζ and |a|² = |b|² = |ρ|² = 0, i.e., the matrix we are analyzing is proportional to the identity matrix, in which case the triple eigenvalue is −B/(3A) = tr Z/3 = k. To find the eigenvalues, substitute (due to François Viète, late 1500s)

    x = z − p/z

into x³ + 3px + 2q = 0 to get

    z³ − p³/z³ + 2q = 0

which leads to a quadratic equation for z³

    (z³)² + 2qz³ − p³ = 0.

The solution for z³ is

    z³ = −q ± √(q² + p³).

The solution depends on the sign of the discriminant q² + p³ (q² + p³ < 0 is referred to as the casus irreducibilis). Instead we may use another substitution (also due to Viète) for x in x³ + 3px + 2q = 0, namely x = u cos θ, leading to

    u³cos³θ + 3pu cos θ + 2q = 0.
We divide by u³/4 (u ≠ 0) leading to

    4cos³θ + (12p/u²)cos θ + 8q/u³ = 0

and choose u such that 12p/u² = −3, i.e., u² = −4p and u = 2√(−p), which gives

    4cos³θ − 3cos θ = cos 3θ = q/(p√(−p)).

This gives (arccos is the inverse cosine, sometimes written as cos⁻¹)

    3θ_k = arccos(q/(p√(−p))) + 2π(k − 1),   k = 1, 2, 3

and finally

    η_k = u cos θ_k − B/(3A) = 2√(−p) cos θ_k + tr Z/3.

We see that the three cosines, cos θ_k, sum to 0, so the eigenvalues sum to tr Z as they should. Since arccos gives angles in [0, π], i.e., 0 ≤ 3θ₁ ≤ π, we have η₁ ≥ η₃ ≥ η₂. Note that with the cosine substitution, we easily see that for p < 0 the eigenvalues are real (noncomplex) irrespective of the sign of the discriminant q² + p³. If q² + p³ < 0, the magnitude and cosine of the argument of z³ = −q ± i√(−q² − p³) are −p√(−p) and q/(p√(−p)), respectively, leading to the same solution as immediately above (see [14]).
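The whole trigonometric solution is straightforward to implement elementwise. A self-contained NumPy sketch (the letter's code is MATLAB; the function name and the guard for the p = 0 identity-matrix case are assumptions):

```python
import numpy as np

def eig3_hermitian(k, xi, zeta, a, b, rho):
    """Eigenvalues of the Hermitian Z = [[k, a, rho], [conj(a), xi, b],
    [conj(rho), conj(b), zeta]] via the trigonometric solution of the
    depressed cubic x^3 + 3 p x + 2 q = 0; works elementwise on arrays."""
    a2, b2, r2 = np.abs(a) ** 2, np.abs(b) ** 2, np.abs(rho) ** 2
    tr = k + xi + zeta                                   # -B (with A = 1)
    C = k * xi + k * zeta + xi * zeta - a2 - b2 - r2     # sum of 2x2 minors
    det = (k * xi * zeta + 2.0 * np.real(a * b * np.conj(rho))
           - xi * r2 - zeta * a2 - k * b2)               # -D
    p = (3.0 * C - tr ** 2) / 9.0                        # p <= 0 always
    q = (-2.0 * tr ** 3 + 9.0 * tr * C - 27.0 * det) / 54.0
    sp = np.sqrt(np.maximum(-p, 0.0))                    # sqrt(-p)
    with np.errstate(divide="ignore", invalid="ignore"):
        cos3t = np.where(sp > 0.0, -q / sp ** 3, 1.0)    # cos(3 theta)
    t1 = np.arccos(np.clip(cos3t, -1.0, 1.0)) / 3.0      # 0 <= t1 <= pi/3
    u = 2.0 * sp
    e1 = u * np.cos(t1) + tr / 3.0
    e2 = u * np.cos(t1 + 2.0 * np.pi / 3.0) + tr / 3.0
    e3 = u * np.cos(t1 + 4.0 * np.pi / 3.0) + tr / 3.0
    return e1, e2, e3                                    # e1 >= e3 >= e2
```

Here −q/√(−p)³ equals q/(p√(−p)) since p ≤ 0; the clip guards against rounding pushing the cosine argument marginally outside [−1, 1].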
Under the heading "Cubic equation," Wikipedia has a good description of the problem and a great illustration of the above trigonometric solution to the cubic equation. This illustration is reproduced in Fig. 1. Another useful reference is, for example, [15].

IV. LOEWNER ORDER
For scalar quantities, it is easy to establish whether one quantity is larger than another; for example, we can check whether the difference between them is positive, negative, or zero. For matrices, this is a more complicated matter. The Loewner (or Löwner) order provides a partial ordering of matrices [8], [9]. Here in our context, it gives a direction of change: does the radar response increase or decrease from X at time point one to Y at time point two (or does it possibly change structure or nature)? To establish the Loewner order, we calculate the definiteness of the difference Z = X − Y.

A. Positive Definiteness and Pivots
A matrix A is positive definite if it is symmetric (here, in the complex case if it is Hermitian) and has positive eigenvalues. If all eigenvalues are negative, it is negative definite. If the eigenvalues have different signs, it is indefinite.
If we do not need the eigenvalues but merely need to establish whether A is positive definite, negative definite, or indefinite, it is easier and faster to calculate the pivots. For a symmetric (Hermitian) matrix, the pivots (without row exchanges) and the eigenvalues have the same signs [16], so a matrix is positive definite if it is symmetric (Hermitian) and has all positive pivots, and negative definite if it has all negative pivots. If the pivots have different signs, it is indefinite. A simple example with a real, symmetric (as opposed to a complex, Hermitian) matrix is

    A = | 1  2 |
        | 2  3 |.

To find the pivots, perform elimination: to replace 2 in the second row, first column, with 0, multiply the first row by 2 and subtract the resulting row from the second row to get

    B = | 1   2 |
        | 0  −1 |.

The pivots are the diagonal elements of B, 1 and −1. Since the second pivot is negative, A is not positive definite. Since the first pivot is positive, A is not negative definite either; it is indefinite.
A simple example with a Hermitian matrix is

    A = | 1      2 + i |
        | 2 − i  6     |.

To eliminate 2 − i in the second row, first column, multiply the first row with 2 − i and subtract it from the second row to obtain

    | 1  2 + i |
    | 0  1     |.

Both pivots are positive and the matrix is positive definite. In general, for a 2 × 2 Hermitian matrix

    | k   a |
    | a*  ξ |

to eliminate a* in the second row, first column, multiply the first row with a*/k and subtract the resulting row from the second row to obtain

    | k  a           |
    | 0  ξ − |a|²/k  |.

The pivots are k and ξ − |a|²/k. The second pivot is the determinant divided by k, the first pivot.
This can be extended to test whether a p × p matrix is positive definite by looking at the k × k upper left determinants det A_k, k = 1, . . . , p [16]. The first pivot is the element in the first row, first column. The kth pivot (k = 2, . . . , p) is

    det A_k / det A_(k−1).

All pivots are positive if and only if det A_k is positive for all k. In the above real case, det A₁ = 1 and det A₂ = −1, so A is not positive definite; it is indefinite. In the above complex case, det A₁ = 1 and det A₂ = 1, so A is positive definite.
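The pivot test translates directly into code. A NumPy sketch (the function name is an assumption, and a production version should guard against vanishing minors, which this sketch does not):

```python
import numpy as np

def loewner_definiteness(X, Y):
    """Classify Z = X - Y as 'posdef', 'negdef', or 'indef' from the signs
    of its pivots, computed as ratios of leading principal minors."""
    Z = X - Y
    p = Z.shape[0]
    # k x k upper left determinants, k = 1, ..., p (real for Hermitian Z)
    minors = [np.linalg.det(Z[:k, :k]).real for k in range(1, p + 1)]
    pivots = [minors[0]] + [minors[k] / minors[k - 1] for k in range(1, p)]
    if all(piv > 0 for piv in pivots):
        return "posdef"
    if all(piv < 0 for piv in pivots):
        return "negdef"
    return "indef"
```

On the letter's examples above, the real symmetric matrix comes out indefinite and the complex Hermitian one positive definite.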
In the Loewner order case, looking at Z = X − Y to determine if Z is positive definite, we must examine whether all pivots (equivalently, all upper left determinants) of X − Y are positive.

B. Why Use the Loewner Order . . .

. . . and not, for example, comparisons between trace and determinant before and after? Here is a simple example with large differences between X and Y

    X = | 1   0 |     Y = | 10  0 |
        | 0  10 |,        |  0  1 |.

Here tr X = tr Y = 11 and det X = det Y = 10, i.e., no difference is detected, whereas the Loewner order gives an indefinite difference matrix with eigenvalues 9 and −9.
Another example with a large difference is (here with no change on the diagonal)

    X = | 2       √2 + i |     Y = | 2       √2 − i |
        | √2 − i  2      |,        | √2 + i  2      |.

Here tr X = tr Y = 4 and det X = det Y = 1; again no difference is detected, whereas the Loewner order gives an indefinite difference matrix with eigenvalues 2 and −2.
In both these simple, constructed examples, checking trace and determinant before and after gives no indication of change, whereas the application of the Loewner order successfully does.
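The blind spot is easy to demonstrate numerically. The pair below is one choice of X and Y consistent with the first example's stated traces, determinants, and difference eigenvalues (a sketch, not necessarily the letter's exact matrices):

```python
import numpy as np

# Equal traces (11) and equal determinants (10), yet the difference
# X - Y is indefinite with eigenvalues -9 and 9.
X = np.diag([1.0, 10.0])
Y = np.diag([10.0, 1.0])

print(np.trace(X) == np.trace(Y))                      # traces agree
print(np.isclose(np.linalg.det(X), np.linalg.det(Y)))  # determinants agree
print(np.linalg.eigvalsh(X - Y))                       # indefinite difference
```

Trace and determinant comparisons see no change at all, while the Loewner order flags an indefinite, i.e., structural, change.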

V. DATA EXAMPLE
The data chosen for illustration are the last 512 rows of the 1024 rows by 1024 columns data that were used in our original 2003 work [1], namely airborne EMISAR [10], [11] data from 17 April and 20 May covering an agricultural test site near Foulum, Denmark (see Fig. 2). The gray band across the two images in the top row is a wooded area which, as opposed to most of the surrounding agricultural fields, shows no change. Specifically, the bottom-right image shows significant changes between the two time points color-coded for the direction of change.
Change and direction of change can be determined at patch or segment level also [5].

VI. CONCLUSION
The speed-up factors (all approximate and for a few tests on 1024 × 1024 pixel images) for the fast matrix-based computer implementation of the above eigenvalue calculations, compared with a simple implementation based on calls to MATLAB's built-in eigenvalue solver eig in for loops over rows and columns, are 350 for dual pol, 275 for azimuthal symmetry, and 175 for quad/full pol, all enormous speed-ups. The largest absolute value of the difference between the eigenvalues obtained from the two methods is less than 10⁻¹¹.
Calculating the Loewner order by means of pivots instead of eigenvalues speeds up these calculations further by a factor of around 2.
MATLAB code (with support functions) for change detection in PolSAR data, covering quad/full pol as well as azimuthal symmetry, dual pol, and diagonal-only data, and for calculating eigenvalues, derived quantities (entropy, etc.), and the Loewner order, is available on the author's homepage.
The methods described here can be used in the analysis of Sentinel-1 and Radarsat-2 data also, as well as in other contexts, for example, in the analysis of real symmetric variance-covariance matrices from RGB imagery.