

2011 International Conference on Electromagnetics in Advanced Applications (ICEAA), Torino, Italy, September 12-16, 2011

Designed Basis Functions for Speed and Stability

Francis X. Canning, Mav6, Alexandria, VA, USA

[email protected]

Abstract—Putting physical information into a numerical solution method can be used to produce fast solution methods. These methods can be extended to improve the stability of the solution procedure and of the answer. Iterative solution methods require a stable (well-conditioned) formulation, and truly large problems are especially sensitive to stability issues. This paper shows a range of methods that may be used to make truly large problems manageable. Some methods improve how local interactions are handled and improve stability for dense meshes. Others improve how distant interactions are treated and improve stability when the problem size measured in wavelengths increases.

Keywords—Moment Method, Fast Methods, Well Conditioned, Physical Basis Functions.

I. INTRODUCTION

As larger problems have been solved using numerical methods, several Fast Methods have been introduced. Generally, these Fast Methods are understood to be useful since they permit the solution of a given problem using less computer memory and/or less computation time. Some advances in solving large problems are due to improved and/or special-purpose hardware.

Solving truly large problems of the future will require not only Fast Methods and better hardware, but also numerical algorithms that have the stability and flexibility that flow from putting significant physical properties of the problem being solved into the numerical algorithm. One place where this has already been noticed is in Fast Methods that require the iterative solution of a matrix problem.

Consider using the Fast Multipole Method (FMM) to solve a frequency-domain integral equation problem in electromagnetics. FMM supplies an efficient method for finding the product of the Moment Method matrix with a vector of sources (e.g., electric currents). Iterative methods are then used to solve the electromagnetics problem. A significant amount of work has been done on developing improved integral equations [1,2] that provide a better-conditioned problem, allowing the iterative methods to converge in fewer steps to an accurate solution. The authors of these new integral equations were generally motivated by the physics of the effects described, even when they did not describe that in their papers.
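As a sketch of this division of labor, the loop below solves a linear system using only a matrix-vector product routine, which is the role the FMM plays for the Moment Method matrix. The operator here is a small random symmetric positive definite stand-in (so plain conjugate gradients applies), not an actual electromagnetic matrix; the names `matvec` and `conjugate_gradient` are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
# Stand-in operator: in a real solver, matvec() would be the FMM's
# fast product with the Moment Method matrix instead of a dense one.
Z = rng.standard_normal((n, n))
Z = Z @ Z.T + n * np.eye(n)  # SPD so conjugate gradients applies


def matvec(x):
    return Z @ x


def conjugate_gradient(matvec, b, tol=1e-10, maxiter=200):
    # Standard CG; touches the operator only through matvec().
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x


b = rng.standard_normal(n)
x = conjugate_gradient(matvec, b)
```

The point of the sketch is that iteration count, and hence total cost, depends on the conditioning of the operator, which is why the conditioning work discussed next matters for fast solvers.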

As problems being solved increase in size, the stability of the solution tends to decrease. Thus, methods for controlling the stability of numerical methods for large problems must be developed. Accordingly, this paper looks at new computational methods that have allowed fast solutions of large problems. These methods incorporate the physical properties of the problem to solve it faster. These methods of incorporating physical properties will be used here to develop more stable and more adaptable solutions.

II. BASIS FUNCTIONS FOR COMBINED EQUATIONS

The combined field integral equation (CFIE) has been used to stabilize computations involving a closed region (as has the Combined Source Integral Equation, or CSIE). The driving physics is that EM fields from an external source do not penetrate inside a closed cavity surrounded by a perfect conductor. However, the Moment Method only implicitly enforces this, and instabilities result. The matrix for the EFIE and for the MFIE can have a zero eigenvalue at discrete frequencies. One proves that for the CFIE (and CSIE) there can be no zero eigenvalues. The proof involves showing that energy in the sources must leak out, at least a little. However, while this proves the eigenvalues cannot be exactly zero, they can still be very small, allowing poor conditioning. Large problems may then be numerically unstable.

A first, more stable method will be developed by noting the failing of the CFIE and CSIE: although the integral operators (and hopefully also the associated matrices) do not have any zero eigenvalues (or zero singular values), these values may be very close to zero. Clearly, the way to improve this is to use sources that strongly shed energy to the exterior. Consider the matrices that result from the EFIE and MFIE

Z_E J = E;  Z_M J = H   (1)

Previous work on Impedance Matrix Localization (IML) [3] showed how to use linear combinations of the basis and testing functions used in (1) to produce new functions. Some of these functions produced only a local effect and others produced narrow beams. If the change of basis is given by the unitary matrix T, then the new formulas that result are:

[T Z_E T^h] TJ = TE  ⇔  Z'_E J' = E'   (2)

[T Z_M T^h] TI = TH  ⇔  Z'_M I' = H'   (3)

Each of the testing functions in (2) or (3) receives electric/magnetic fields incident from a narrow range of directions on either side of the conducting surface. The comparison of the narrow beams describing the received E or H inside versus outside the perfectly conducting surface of the cavity is symmetric for (2) and antisymmetric for (3). If one uses a diagonal matrix D to make the strength of the narrow beams describing the electric and magnetic fields tested in (2)

978-1-61284-978-2/11/$26.00 ©2011 IEEE



and (3) have approximately the same magnitudes, and then adds, one produces an equation that physically receives very weakly from the interior and strongly from fields incident from the exterior. Thus, the physical property that the incident fields affect the lit side is manifestly evident in this formulation. This gives the matrix equation

{D Z'_E + Z'_M} J' = D E' + H'   (4)

This method is described in [4]. However, the condition number (or stability properties) of the resulting matrix was not described there. The change in going from (1) to (2) and (3) did not change the condition number of the matrix, since T was a unitary matrix. However, (4) has a much improved condition number from that of either (2) or (3).
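The claim that a unitary change of basis leaves the condition number unchanged can be checked numerically. The sketch below uses a random complex matrix as a stand-in for Z and a random unitary T built by QR factorization; it illustrates only the invariance property, not the IML transformation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Random complex stand-in for a Moment Method matrix Z.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# A random unitary T from the QR factorization of a complex matrix.
T, _ = np.linalg.qr(rng.standard_normal((n, n))
                    + 1j * rng.standard_normal((n, n)))

# Change of basis as in (2)/(3): conjugation by a unitary matrix
# preserves the singular values, hence the condition number.
Zp = T @ Z @ T.conj().T
cond_before = np.linalg.cond(Z)
cond_after = np.linalg.cond(Zp)
```

It is only the non-unitary combination step in (4), weighting with D and adding the two equations, that can actually change (and here improve) the conditioning.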

As an example, consider scattering from the two dimensional problem illustrated below. Figure 1 shows a 2:1 axis ratio ellipse, with the surface partitioned into several regions.

Figure 1. A perfectly conducting 2-D scatterer.

When the matrix of (4) is produced, many of its matrix elements are very small. They are small because the sources on the surface were designed to (approximately) produce only fields that propagate to the exterior. However, some sources produce a field propagating nearly parallel to the surface.

Figure 2. Magnitude of matrix elements in (4), on a dB scale spanning 40 dB.

This produces very small matrix elements, especially for the convex body of Figure 1. The remaining large elements are due to physical effects propagating nearly parallel to the surface.

For any row "i" of the matrix of (4), one may sum the magnitudes of all of the off-diagonal elements and then divide that sum by the magnitude of the diagonal element in that row. Often, people speak loosely of a diagonally dominant matrix, meaning only that the diagonal is somewhat large. The mathematical definition of diagonally dominant (strictly diagonally dominant) is the surprisingly strong condition that the ratio computed above is less than or equal to one (less than one) for every row. That is, for matrix elements M_i,j and for row i define the ratio

R_i = ( Σ_{j ≠ i} |M_i,j| ) / |M_i,i|   (5)

A matrix is diagonally dominant if, for every row i, Ri is less than or equal to one. The usual Moment Method matrices are far from having this property. However, we have nearly achieved it here. Figure 3 shows the ratio Ri for rows in the matrix within the regions 2, 3, and 4 of the scatterer as shown in Figure 1.
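The ratio R_i of (5) takes only a few lines to compute. The sketch below applies it to a small hand-made strictly diagonally dominant matrix as a stand-in; the actual matrix of (4) is not reproduced here.

```python
import numpy as np


def dominance_ratios(M):
    """Ratio R_i of (5): sum of off-diagonal magnitudes in row i
    divided by the magnitude of the diagonal element."""
    absM = np.abs(M)
    diag = np.diag(absM)
    return (absM.sum(axis=1) - diag) / diag


# A strictly diagonally dominant stand-in: every R_i is below one.
M = np.array([[4.0, 1.0, 0.5],
              [0.3, 3.0, 1.0],
              [0.2, 0.4, 2.0]])
R = dominance_ratios(M)
```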

Figure 3. Diagonal Dominance ratio Ri for the matrix of (4).

Examining Figure 3, it is clear that nearly all rows of the matrix satisfy the diagonal dominance condition. In Region 1, where the curvature is greater, none of the rows fails this condition. In Regions 2 and 3, only one row in each fails this condition. Each of these two rows is associated with a testing function that receives only from the direction tangential to the surface.

Although this matrix as a whole is not diagonally dominant, one could permute rows and columns to produce a large diagonally dominant matrix and a small remainder matrix. This could be used along with iterative methods.
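One concrete payoff of (near) diagonal dominance for iterative methods is that classical stationary iterations converge on the dominant part. The sketch below runs Jacobi iteration on a small strictly diagonally dominant stand-in matrix; the matrix and tolerances are illustrative assumptions, not values from the paper.

```python
import numpy as np


def jacobi(M, b, iters=50):
    # Jacobi iteration: guaranteed to converge when M is
    # strictly diagonally dominant (every R_i < 1).
    d = np.diag(M)
    R = M - np.diag(d)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x


M = np.array([[4.0, 1.0, 0.5],
              [0.3, 3.0, 1.0],
              [0.2, 0.4, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(M, b)
```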

If the matrix were diagonally dominant, then Gershgorin's Circle Theorem [5] could be used to find an upper bound on its condition number. That cannot be done here. However, computations of the condition number produced for this and



similar matrices show that this is a powerful method for reducing the condition number.
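For reference, the Gershgorin disc computation itself is straightforward. The sketch below builds the discs for a small stand-in matrix and checks that every eigenvalue lies in at least one disc; for a strictly diagonally dominant matrix no disc contains the origin, which is what would yield the eigenvalue (and hence condition number) bound mentioned above.

```python
import numpy as np


def gershgorin_discs(M):
    """Centers and radii of the Gershgorin discs: every eigenvalue
    lies in at least one disc |z - M[i,i]| <= sum_{j!=i} |M[i,j]|."""
    centers = np.diag(M).astype(complex)
    radii = np.abs(M).sum(axis=1) - np.abs(np.diag(M))
    return centers, radii


M = np.array([[4.0, 1.0, 0.5],
              [0.3, 3.0, 1.0],
              [0.2, 0.4, 2.0]])
centers, radii = gershgorin_discs(M)
eigs = np.linalg.eigvals(M)
# Every eigenvalue lies within the union of the discs.
in_some_disc = [np.any(np.abs(lam - centers) <= radii + 1e-12)
                for lam in eigs]
```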

The method described above works by considering the effect of distant interactions on the condition number. This is in contrast to previous work that has considered local interactions. For example, [6] showed a transformation that gave the EFIE a condition number that scaled the same as that of the MFIE as the sampling density was increased. That transformation could be described as a rescaling of the basis and testing functions. Below, additional methods that consider large scale interactions will be introduced.

III. BASIS FUNCTIONS IMPROVING DISTANT INTERACTIONS

A difficulty of using the method of the previous section is that the transformation matrix T used there is difficult to compute for general three-dimensional problems. Recent research has produced fast methods for three-dimensional problems. This section considers how to extend those results to also reduce the condition number, especially for sources on irregularly shaped surfaces.

A diagonal preconditioner using a wavelet basis was used in [6]. This improved the condition number due to local effects for dense discretizations. A generalization of this method to three-dimensional problems was introduced in [7]. However, these methods were still awkward to use for sources on irregular surfaces. A method for producing wavelet-like sources on arbitrarily irregular surfaces was introduced in [8]. However, as pointed out in [7], the sources need not only to be wavelet-like, but each source must radiate one polarization or the other. Fortunately, more recently [9] shows how to produce basis functions with any physically achievable property in their radiated fields. While the examples given in [9] are primarily targeted at producing narrow beams of radiation, the method described there can be used to take the wavelet-like solutions of [8] and from them produce basis functions with the needed polarization of their radiated fields. Thus, this can be used to take the results of [7] and apply them to arbitrarily irregular source configurations.

Thus, a method for improving the local behavior (or the dense-mesh behavior) that affects the condition number has been outlined in the paragraph above. Next, we consider a much harder problem: how to find a general method for improving the non-local behavior. This method must improve the condition number of the resulting matrix (even for low sampling densities) and must also be easy to implement even for highly irregular distributions of the sources.

The tools of this section are the new methods that have been developed to produce fast solvers. Thus, first some features of those methods will be surveyed.

Wavelet methods and the generalized wavelet methods of [8] (as usually applied) used the fact that one only needs two basis functions per wavelength to describe effects at a large distance (compared to a wavelength). For sources on a two-dimensional surface in three dimensions one needs π sources per square wavelength. One samples much more densely than this in order to describe local effects. Thus, by choosing the right basis one can have π basis functions per square wavelength that have an effect at long distance while the rest have nearly zero effect at long distances.

For methods that produce interesting physical properties at a distance, two stand out. One method takes the physical electromagnetically active object(s) (e.g., a scatterer) and partitions it into R regions. Then, for one region with n sources, m sources are found which have approximately zero effect in all other regions. The other n - m sources have some effect in one or more other regions. That is, a solution is found to the matrix problem

[ B_2 ]         [ 0 ]
[ B_3 ]  J_1 =  [ 0 ]
[  .  ]         [ . ]
[ B_R ]         [ 0 ]    (6)

The blocks B_2 … B_R are off-diagonal blocks of a Moment Method matrix, associated with the n sources for Region 1. J_1 is a block with n rows and m columns. Based on matrix algebra alone, one would not expect a solution for any positive m, even when m < n. However, due to physical considerations, the B blocks have low rank, and (6) does have a solution, at least to a good approximation. These methods are described in each of [9], [10], [11], and [12].
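The approximate solution of (6) can be obtained from the SVD of the stacked B blocks: the right singular vectors associated with (numerically) zero singular values span the desired null space. The sketch below uses a random low-rank matrix as a stand-in for the stacked off-diagonal blocks; the sizes and the rank are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rows, rank = 30, 60, 5

# Stacked off-diagonal blocks B_2 ... B_R as one matrix of low
# numerical rank (a stand-in for the physical interactions).
B = rng.standard_normal((rows, rank)) @ rng.standard_normal((rank, n))

# Right singular vectors past the numerical rank span the
# (approximate) null space: these are the m columns of J_1.
U, s, Vh = np.linalg.svd(B)
num_rank = int(np.sum(s > 1e-10 * s[0]))
m = n - num_rank
J1 = Vh[num_rank:].conj().T  # n x m, orthonormal columns

# The new basis functions have (approximately) zero effect in the
# other regions: B @ J1 is numerically zero.
leakage = np.max(np.abs(B @ J1))
```

Because the physics makes the B blocks low rank, m stays large: here m = n minus the small numerical rank, matching the paper's point that most sources can be made "local".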

Solving (6) produces a block J_1 that describes a transformation to new basis functions that produce approximately zero effect in the desired other regions. Computing J_1 through J_R provides the entire change of basis. Although many of the resulting basis functions have an effect only in their own physical region, others have an effect everywhere. The new basis gives a highly sparse matrix with a structure that allows a fast direct solution. The localization of effects is also useful if a problem must be re-solved many times with one small part of the structure changed [13]. The total computer time per iteration scales so that for large problems it depends primarily on the number of interactions between the changed and unchanged parts.

This method allows localization of effects. One would hope that these new basis functions result in a reduced condition number. The description using (6) did not completely specify these functions. Each column of J_i specifies a linear combination of the original basis functions that gives a new basis function. One could additionally specify either that the columns of J_i are orthonormal or that the physical functions that result are orthonormal. Enforcing either condition might improve the condition number of the resulting Moment Method matrix that is based on these basis functions. However, isn't the condition that matters most that these basis functions produce fields that are, in some sense, orthogonal? Such a method is described next.

In addition to describing the method of (6), [9] also describes a method that considers the orthogonality of the field produced. This method is somewhat more complicated as it first performs a Singular Value Decomposition (SVD) on a matrix that describes how sources radiate. This SVD produces



two unitary matrices. Then, individual SVDs are performed on different parts of one of those unitary matrices.

Figure 4. Partitioning of the Unitary matrix U from first SVD.

Consider a matrix “A” describing how a number of sources in a physical region produce some effect outside that region. For example, “A” might result from taking columns 101 through 150 of a Moment Method matrix, sampling every fifth row, and excluding the self interaction part of rows 105 through 150. As another example, “A” might result from how currents 220 through 280 produce a far field at every degree in azimuth and elevation.

One might then group the rows of "A" into, say, 15 groups. Each group might correspond to a physical region of neighboring locations or a group of angular directions near to each other. Then, compute the SVD of "A":

A = U D V^h   (7)

Take the U of (7) and partition it as shown in Figure 4. The different but nearby rows of U correspond to nearby physical effects. The singular values are ordered from large to small down the diagonal of the diagonal matrix D. The left part of U corresponds to currents that have a stronger physical effect, since they produce larger singular values.

If sources exist that produce a strong effect in region r = k, and very little effect in the other regions, then this method will find them. Notice that in Figure 4, U_k^P is the part of U for effects in Region k. The left part of this is U_r''. If one then performs an SVD on U_r'', one will find many singular values that are slightly less than one. Using the left part only ensures sources with little reactive energy. The associated sources that produce this strong effect will produce almost no effect in the other regions, since this singular value was almost one and U was unitary. Although many details were omitted here, they can be found in [9]. The result of using an SVD to orthogonalize the fields produced by the basis functions is a better condition number for the resulting Moment Method matrix. The better condition number that results will be published elsewhere. The localization of the fields produced by these sources can be used in a matrix factorization algorithm to improve solution speed.
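The two-level SVD can be sketched in a few lines: a first SVD of "A", then a second SVD of the rows of U belonging to one region, restricted to the strong left columns. The matrix, region sizes, and column cutoff below are illustrative stand-ins, not values from [9]; the check exploits the fact that any sub-block of a matrix with orthonormal columns has singular values at most one, with values near one flagging field patterns confined almost entirely to that region.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 90, 30

# "A" maps n sources to m field samples, grouped into 3 regions of
# 30 samples each (an illustrative stand-in, not a real MoM block).
A = rng.standard_normal((m, n))

# First SVD: columns of U describe orthogonal field patterns,
# ordered by singular value (strongest physical effect first).
U, D, Vh = np.linalg.svd(A, full_matrices=False)

# Second-level SVD on the rows of U for one region (here region
# k = 0, rows 0..29), restricted to the strong left columns.
Uk = U[0:30, :20]
_, s2, _ = np.linalg.svd(Uk)
```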

IV. SUMMARY

Several methods have been developed that improve the stability and speed of numerical solutions. These methods may be considered as a change in the basis and/or testing functions used. First, this was accomplished using the local properties of the fields due to the basis functions, using wavelets and a generalized wavelet method that was easier to implement. Next, by combining integral equations, the interior fields in a cavity were explicitly removed from the calculation, resulting in speed and stability. The new basis functions were a novel combination of electric and magnetic sources, or equivalently the testing functions tested a novel combination of electric and magnetic fields. Finally, a method involving two levels of SVD was introduced. This localized the fields produced, for improved speed, while it also put orthogonality (in a certain sense) into the fields produced. This improved the stability of the associated matrices. Overall, several methods were demonstrated that put desired physical properties into a computational method, resulting in improved speed and stability.

REFERENCES

[1] R. J. Adams, "Physical and Analytical Properties of a Stabilized Electric Field Integral Equation," IEEE Trans. on Antennas and Propagat., Vol. 52, pp. 362-372, February 2004.

[2] R. J. Adams, "Combined Field Integral Equation Formulation for Electromagnetic Scattering from Convex Geometries," IEEE Trans. on Antennas and Propagat., Vol. 52, pp. 1294-1303, May 2004.

[3] Francis X. Canning, "Improved Impedance Matrix Localization," IEEE Trans. on Antennas and Propagat., Vol. 41, pp. 659-667, May 1993.

[4] Francis X. Canning, "Fast integral equation solutions using geometrical-theory-of-diffraction-like matrices," Radio Science, Vol. 29, pp. 993-1008, July-August 1994.

[5] S. Gershgorin, "Über die Abgrenzung der Eigenwerte einer Matrix," Izv. Akad. Nauk. USSR Otd. Fiz.-Mat. Nauk, Vol. 7, pp. 749-754, 1931.

[6] Francis X. Canning and James F. Scholl, "Diagonal Preconditioners for the EFIE Using a Wavelet Basis," IEEE Trans. on Antennas and Propagat., Vol. 44, pp. 1239-1246, September 1996.

[7] F. Vipiana, P. Pirinoli, G. Vecchi, "Spectral Properties of the EFIE-MoM Matrix for Dense Meshes with Different Types of Bases," IEEE Trans. on Antennas and Propagat., pp. 3229-3238, November 2007.

[8] Francis X. Canning and Kevin Rogovin, "A Universal Matrix Solver for Integral-Equation-Based Problems," IEEE Antennas and Propagation Magazine, Vol. 45, pp. 19-26, February 2003.

[9] Francis X. Canning, "Compression of Interaction Data using Directional Sources/Testers," U.S. Patent Publication 20040010400, filed February 7, 2003.

[10] P. G. Martinsson and V. Rokhlin, "A fast direct solver for boundary integral equations in two dimensions," Journal of Computational Physics, Vol. 205, pp. 1-23, 2005.

[11] R. J. Adams, F. X. Canning, F. Mev and B. A. Davis, "Beam transform method for plane wave response matrices," Progress in Electromagnetics Research, Vol. 55, pp. 189-208, 2005.

[12] R. J. Adams, A. Zhu, and F. X. Canning, "Efficient solution of integral equations in a localizing basis," Journal of Electromagnetic Waves and Applications, Vol. 19, pp. 1583-1594, December 2005.

[13] Robert J. Adams, Yuan Xu, Xin Xu, Jun-shik Choi, Stephen D. Gedney and Francis X. Canning, "Modular fast direct electromagnetic analysis using local-global solution modes," IEEE Trans. on Antennas and Propagat., Vol. 56, pp. 2427-2441, 2008.
