
Generalised moment methods in electromagnetics

J.J.H. Wang, PhD

Indexing terms: Electromagnetic theory, Mathematical techniques



Abstract: An effort to unify three major numerical methods in electromagnetics is presented. Harrington's direct method of moments, the iterative methods, and the reaction integral equation method are shown to be generally equivalent and are unified as the generalised method of moments. It is shown that the reaction integral equation method is in general a moment method, and that the moment method, when defined in a symmetric space, generally satisfies the reaction theorem, and therefore reciprocity. A broad, though limited, equivalence between the moment and the iterative methods is also demonstrated. A numerical example is discussed to illustrate these and other points.

1 Introduction

In the last three decades there has been a proliferation of numerical methods in electromagnetics. In 1968 Harrington identified a unifying concept for them and called it the method of moments (MM) [1]. Since then several apparently divergent methods have appeared. This paper represents another effort to unify and organise numerical methods which have so far been identified separately as the MM, the iterative methods, and the reaction integral equation method. The MM is a widely used numerical technique for electromagnetic problems such as phased arrays, waveguide discontinuities, antennas, scattering, energy deposition in biological bodies, etc. In this method, one solves the problem by first formulating it as an operator equation, usually of the integral or integro-differential type, that has a finite (and preferably small) spatial domain. The unknowns are then expanded in terms of a finite number of well-chosen basis functions; this process is called discretisation. A set of matrix equations is then generated by taking a symmetric or scalar product between the operator equation and a set of selected weighting functions. At the same time, iterative methods also exist, and in applied mathematics they too have sometimes been called the MM [2]. Although a matrix interpretation for, and equivalence between, these two moment methods is sometimes implied, the relationship between them has not been made clear. The reaction integral equation technique, explored mostly by Richmond, is another well-known matrix technique for solving electromagnetic problems [3-5]. Its relationship to the MM of Harrington has been indicated
Paper 71488 (E11, E12), received 3rd July 1989. The author is with the Georgia Tech Research Institute, Georgia Institute of Technology, Atlanta, Georgia, USA

[3]. However, it is not clear exactly how they are related to each other. Finally, one often wonders why these methods were so contrived, and what their mathematical and physical interpretations are. The relationship between these three methods has recently been investigated [6, 7]. In this paper, it is shown that a broad, though limited, equivalence exists among these three methods, which can be unified and called the generalised method of moments. It also appears desirable and appropriate to incorporate the entire numerical process, including the formulation of the operator equation, within the context of the generalised MM. We will call the MM of Harrington the direct MM and the iterative techniques, such as that of Vorobyev, the iterative MM, for reasons that will soon be clear, and we will show that the reaction integral equation method is a direct MM. Before entering detailed discussions, we would like to point out that the direct and iterative MM are not the direct (exact) and indirect (iterative) matrix solution methods generally referred to in the solution of matrix equations, even though they are related. In the generalised MM, a continuous integral equation (sometimes an integro-differential equation) is to be solved. Thus the use of a matrix to represent a linear operator, as often seen in the literature, though enlightening in many instances, may lead to a narrow and distorted view of the subject. The direct MM is a method that formulates a problem into a specific matrix, which is then solved by an exact or iterative matrix solution algorithm. Because it terminates in an exact, predetermined number of arithmetic steps, it is called a direct method. The iterative MM is, in general, not explicitly associated with a particular matrix, and is an iterative process that terminates after a number of steps that cannot be determined in advance.

2 The direct MM and the iterative MM

In both the direct MM and the iterative MM, the physical problem is formulated into one or more equations over a finite spatial domain. The integral equation can be represented by an operator equation as follows:
A(x) = y on S   (1)

where A is a linear operator on a linear space C, y is the known excitation, and x is an unknown vector (function) to be solved for [1, 2]. In MM for electromagnetics, x and y are both elements of the linear space C consisting of complex vector functions. The operator A denotes a process which assigns to a vector x another vector y in the same space. S is a spatial domain such as a line, a surface, or a volume. The solution of eqn. 1 by either the direct or the iterative MM begins with an approximation of x by xN in terms of

basis functions φ1, φ2, ..., φN, that is,

x ≈ xN = Σ_{n=1}^{N} x_n^N φn   (2)

We are therefore projecting x into a finite N-dimensional subspace CN of C. Substituting eqn. 2 into eqn. 1, we obtain

A(xN) ≈ y on S   (3)

The above discretised equation is, strictly speaking, an ill-defined statement. The approximate equal sign is the source of the problem, but is necessary because the left side cannot equal the right side in the entire domain S (except for trivial cases). Additional weighted measures are needed to make the above equation meaningful. The following step is the point of departure between the two moment methods.

2.1 The direct MM

The direct method is the MM defined by Harrington [1]. In this method, we may define a symmetric product [8] between x and y which is a scalar satisfying

⟨x, y⟩ = ⟨y, x⟩
⟨x1 + x2, y⟩ = ⟨x1, y⟩ + ⟨x2, y⟩
⟨αx, y⟩ = α⟨x, y⟩
⟨x*, x⟩ > 0 if x ≠ 0
⟨x*, x⟩ = 0 if x = 0   (4)

where the superscript * denotes the complex conjugate. An example of the symmetric product is

⟨x, y⟩ = ∫∫ x · y ds   (5)

Note that the symmetric product is not the scalar product or inner product of linear algebra. The scalar product or inner product, denoted by (x, y), is a scalar between two elements x and y satisfying

(x, y) = (y, x)*
(x1 + x2, y) = (x1, y) + (x2, y)
(αx, y) = α(x, y)
(x, x) > 0 for x ≠ 0
(x, x) = 0 for x = 0   (6)

where the superscript * denotes the complex conjugate. A frequent example of the inner product is

(x, y) = ∫∫ x · y* ds   (7)

In a direct MM, we give meaning to eqn. 3 by selecting N direct weighting functions w1, w2, ..., wN in a subspace Cw of C and taking either a symmetric product or a scalar product [4] on eqn. 3 with wm, obtaining

⟨wm, A(xN)⟩ = ⟨wm, y⟩   m = 1, 2, ..., N   (8)

The ill-defined eqn. 3 is now replaced by N well-defined equations in eqn. 8, which can be solved for the unknowns x_1^N, x_2^N, ..., x_N^N. The issue of the convergence of xN toward x is a separate matter not considered here. Note that wm is in C and may or may not be in CN. The wm must be linearly independent and, as a result, span a subspace Cw of C. The subspace Cw is chosen to be N-dimensional so that a unique solution of eqns. 8 may exist.

Eqns. 8 can be written in matrix form as

[Amn]{x_n^N} = {ym}   (9)

where

Amn = ⟨wm, A(φn)⟩   (10)

ym = ⟨wm, y⟩   (11)

The unknown column matrix {x_n^N} is then

{x_n^N} = [Amn]^{-1} {ym}   (12)

where [Amn]^{-1} is the inverse matrix of [Amn]. The matrix solution can be carried out by either a direct (exact) matrix method or an indirect (iterative) method, either of which should lead to the same results barring round-off numerical errors. Note that by choosing various weighting functions, we can have many different sets of matrix equations, and therefore different results in general. Eqns. 8 to 12 can also be derived by taking a scalar product. This reduction of the discretised eqn. 3 to a matrix equation is a key phase of the direct MM. It must be pointed out that in the direct MM the symmetric product, rather than the scalar product, is generally used and preferred. This preference and practice can be justified by a physical consideration. If a symmetric product is used, the resulting matrix equation is not only an approximation to eqn. 3, but also a statement of the reaction theorem, or reciprocity. This additional constraint in the matrix equation based on a symmetric product generally results in a better approximation to the discretised integral eqn. 3. This feature is the basis for our preference for the symmetric product in the direct MM.
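The matrix-fill and solve cycle of eqns. 8-12 can be illustrated with a short program. The following is only a schematic sketch, not taken from the paper: it assumes a hypothetical one-dimensional operator A(x)(s) = x(s) + ∫ K(s, s') x(s') ds' with a smooth, well-behaved kernel, pulse functions for both the basis φn and the weighting wm (a symmetric-product Galerkin choice), and midpoint quadrature for all integrals.

```python
import numpy as np

# Schematic direct MM (eqns. 8-12) for an assumed 1-D model operator
#   A(x)(s) = x(s) + \int_0^1 K(s, s') x(s') ds'
# with pulse basis/weighting functions and the symmetric product <f, g> = \int f g ds.

def K(s, sp):
    return 1.0 / (1.0 + (s - sp) ** 2)      # assumed smooth kernel

def y(s):
    return np.sin(np.pi * s)                # assumed known excitation

N = 40
h = 1.0 / N
s = (np.arange(N) + 0.5) * h                # midpoints of the N pulse cells

# Amn = <wm, A(phi_n)>  ~  h*delta_mn + h*h*K(s_m, s_n)   (eqn. 10)
A = h * np.eye(N) + h * h * K(s[:, None], s[None, :])

# ym = <wm, y>  ~  h*y(s_m)                                (eqn. 11)
b = h * y(s)

# {x_n^N} = [Amn]^{-1} {ym}, here by a direct (exact) matrix method   (eqn. 12)
coeffs = np.linalg.solve(A, b)

# xN(s) = sum_n x_n^N phi_n(s); check that the discretised eqns. 8 are met
print("residual of eqns. 8:", np.abs(A @ coeffs - b).max())
```

Choosing a different set of weighting functions wm would change A and b, and hence, in general, the computed coefficients, exactly as noted above.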
2.2 The iterative MM

In an iterative MM, the imprecise eqn. 3 is written as

A(xN) - y = R on S   (13)

The above equation serves as a definition of R, called the residual. Ideally one would like to seek a solution satisfying R = 0 on the entire S, in which case xN satisfies the continuous operator eqn. 1. This, of course, is generally impossible. We must therefore compromise and accept xN as a solution of eqn. 1 if R is sufficiently minimised. What is meant by 'R is sufficiently minimised'? This depends on the method of pursuing the solution. In an iterative approach, a sequence of guesses x^(1), x^(2), ..., x^(n) to approximate xN is made according to a specific scheme, e.g. a conjugate gradient method. This process results in a sequence of residuals R^(1), R^(2), ..., R^(n) from eqn. 13. A fairly general criterion for minimising R begins with defining a weighted error ERR^(n), computed after the nth iteration, as

ERR^(n) = ⟨w^(n), R^(n)⟩   (14)

where we have chosen an iterative weighting function w^(n) that is generally different for each iteration.

We consider x'") close enough to x N and the problem solved if (15) where E is a small positive number which one chooses after considering the desired accuracy, computational cost, realisability, etc. In order to insure that eqn. 15 is satisfied in subsequent iterations after the nth, we require that the iterative algorithm for MM be monotonically convergent, that is, (16) This condition is met by all the good iterative algorithms used in MM. We further require that the iterative MM leads to a unique solution. For the solution to be unique, the iterative weighting functions must be appropriate, the iterative algorithm must be monotonically convergent, and the problem must be well posed. This uniqueness of solution is essential to the equivalence between direct and iterative MM discussed in the next section. It is obvious that if the iterative weighting functions are not properly chosen, or if E is chosen too small, or both, one may never obtain an iterative solution meeting the criterion eqn. 15. A common choice for w'") is (17) In this case the weighted error becomes the familiar norm or integrated square error which is ordinarily used. It can be shown that unless the problem is ill-posed, there is a one-to-one correspondence between an iterative MM and a scalar-product-based direct MM if the direct weighting functions w, in the direct MM are orthogonal, and if the iterative weighting functions w(,) are chosen as
w(") =
m=l

I ERR'")I < E

I ERR'")I < 1ERR'"- ') I n = 1, 2, ...

=p

ative MM. Thus Galerkin's method in a real-valued orthonormal basis approaches the exact solution to the continuous operator equation when N becomes infinite. This conclusion appears to explain our experience that MM codes based on Galerkin's method are generally more accurate and rapidly convergent. As a result, whenever possible, real-valued orthonormal functions should be chosen as the basis. This observation is consistent with the finding that the characteristic mode currents of conducting scatterers are real-valued 18, 141. However, sometimes non-Galerkin methods may produce better results because the errors due to the first and second steps of approximation (the choice of x, and the choice of w,) may occasionally compensate each other, resulting in a better final result. Other factors such as the choice of basis functions are also essential to the success of the computational results. A rigorous analysis showing that the use of Galerkin's method leads to the exact solution of the continuous operator equation under certain conditions was recently performed [l5]. Opposing opinions have been expressed in the literature with regard to the equivalence of a direct MM and an iterative MM. For some MM, an equivalence can be demonstrated, while for others an equivalence appears impossible. Furthermore, the equivalence between a direct MM and an iterative MM is only in the sense that they lead to the same numerical results barring round-off errors. The equivalence is limited; while the direct MM can handle unknowns limited by the computer's central memory, the iterative MM can handle a much larger number of unknowns. For example, on a CDC Cyber 855, the direct MM can handle up to about 240 unknowns, while the iterative MM can handle up to about 11OOO unknowns.
4 Reaction interpretation of the direct MM

1 w,

for every n

(18)

Here the iterative MM solution is assumed to be monotonically convergent toward a unique solution, satisfying eqns. 15 and 16 with E = 0. The theorem is proved in Reference 9. 3
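For comparison, the iterative MM loop of eqns. 13-17 can be sketched for the same hypothetical model operator used in the direct MM sketch above. The conjugate gradient variant below works on the normal equations and needs only operator-times-vector products, so no N x N matrix is ever formed explicitly; the kernel, the tolerance ε and the iteration limit are all assumed illustrative values.

```python
import numpy as np

# Schematic iterative MM (eqns. 13-17): drive the residual R = A(xN) - y down
# with a conjugate gradient iteration on the normal equations (CGNR).

def K(s, sp):
    return 1.0 / (1.0 + (s - sp) ** 2)      # same assumed kernel as in the direct MM sketch

N = 400
h = 1.0 / N
s = (np.arange(N) + 0.5) * h
b = h * np.sin(np.pi * s)                   # discretised excitation

def apply_A(v):
    # the only place the operator enters; for convolutional kernels this product
    # can be done with an FFT, which is what allows very large N [12]
    return h * v + h * h * (K(s[:, None], s[None, :]) @ v)

x = np.zeros(N)                             # initial guess x^(0)
R = apply_A(x) - b                          # residual, eqn. 13
g = apply_A(R)                              # A^T R (the kernel here is symmetric)
p = -g
eps = 1e-18                                 # the small positive number of eqn. 15

for n in range(1, 1001):
    Ap = apply_A(p)
    alpha = (g @ g) / (Ap @ Ap)
    x = x + alpha * p
    R = R + alpha * Ap                      # R^(n)
    ERR = R @ R                             # integrated square error, eqns. 14 and 17
    if ERR < eps:                           # stopping criterion, eqn. 15
        break
    g_new = apply_A(R)
    p = -g_new + (g_new @ g_new) / (g @ g) * p
    g = g_new

print("iterations:", n, " ERR:", ERR)
```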
3 Comparison between direct and iterative MM

The advantage of the direct MM is that, when based on a symmetric product, it is an approximation that satisfies reciprocity. On the other hand, a distinct advantage of the iterative MM is that it does not have to store a large matrix, which can be as large as N x (N + 1) in size, where N is the number of basis functions or unknowns. Such a large matrix, even though sometimes reducible to a smaller one if the matrix is symmetric, sparse, or persymmetric (Toeplitz), is often too large for ordinary mainframe computers, whose central memory generally cannot handle a matrix of more than, say, 200 x 201 elements. In an iterative method, the core memory of the computer is used to store the functions describing the unknowns. Thus, instead of 200 unknowns in the direct matrix method, several thousand unknowns can be handled by the same computer in an iterative method [10-13]. In addition, the iterative MM has been found to be more stable in solving problems that are not well conditioned. The choice of wm = φm in a direct MM based on either a symmetric product or a scalar product is known as Galerkin's method, which, with real-valued orthonormal basis functions, has advantages of both the direct and the iterative MM. Thus Galerkin's method in a real-valued orthonormal basis approaches the exact solution of the continuous operator equation when N becomes infinite. This conclusion appears to explain our experience that MM codes based on Galerkin's method are generally more accurate and rapidly convergent. As a result, whenever possible, real-valued orthonormal functions should be chosen as the basis. This observation is consistent with the finding that the characteristic mode currents of conducting scatterers are real-valued [8, 14]. However, sometimes non-Galerkin methods may produce better results, because the errors due to the first and second steps of approximation (the choice of the basis functions φn and the choice of the weighting functions wm) may occasionally compensate each other, resulting in a better final result. Other factors, such as the choice of basis functions, are also essential to the success of the computational results. A rigorous analysis showing that the use of Galerkin's method leads to the exact solution of the continuous operator equation under certain conditions was recently performed [15]. Opposing opinions have been expressed in the literature with regard to the equivalence of a direct MM and an iterative MM. For some MM an equivalence can be demonstrated, while for others an equivalence appears impossible. Furthermore, the equivalence between a direct MM and an iterative MM is only in the sense that they lead to the same numerical results barring round-off errors. The equivalence is limited in practice: the direct MM can handle a number of unknowns limited by the computer's central memory, whereas the iterative MM can handle a much larger number of unknowns. For example, on a CDC Cyber 855, the direct MM can handle up to about 240 unknowns, while the iterative MM can handle up to about 11 000 unknowns.
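The storage argument above can be made concrete with a back-of-the-envelope estimate. The sketch below is not from the paper; it simply assumes 16 bytes per complex number (the word sizes on the Cyber 855 cited above differed, so the figures are only indicative) and a conjugate-gradient style iteration that keeps about half a dozen working vectors, and compares the two quoted problem sizes.

```python
# Rough storage comparison between the direct and the iterative MM.

BYTES_PER_COMPLEX = 16          # assumed: two 64-bit floats per complex entry

def direct_mm_bytes(N):
    # the full N x (N + 1) augmented complex matrix held in central memory
    return N * (N + 1) * BYTES_PER_COMPLEX

def iterative_mm_bytes(N, n_vectors=6):
    # a conjugate-gradient style iteration keeps only a few N-length vectors
    return n_vectors * N * BYTES_PER_COMPLEX

for N in (240, 11000):
    print(f"N = {N:5d}:  direct ~ {direct_mm_bytes(N) / 2**20:7.1f} MiB, "
          f"iterative ~ {iterative_mm_bytes(N) / 2**20:5.2f} MiB")
```

The iterative method must, of course, still be able to apply the operator to a vector, which is why fast operator products (e.g. FFT-based ones [12]) usually accompany it in practice.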

4 Reaction interpretation of the direct MM

The solution of electromagnetic problems by a matrix method has also been obtained by the use of the reaction theorem [3-5]. Richmond [3] pointed out that the first type of reaction integral equation is equivalent to the electric field integral equation or the magnetic field integral equation, depending on whether a delta-function electric test source or magnetic test source is applied. Because the reaction theorem is a statement of reciprocity, it follows that the matrix method of Richmond satisfies reciprocity. In the following, we will show that the reaction integral equation method is a special case of the direct MM. At the same time, the direct MM can generally be interpreted as a reaction matching method if a symmetric product is chosen in the process.

4.1 The first type of reaction integral equation

The first type of reaction integral equation for a perfectly conducting scatterer illuminated by an incident field E^i is [3]:

∫S [Jm(r) · E^s(r) - Mm(r) · H^s(r)] ds = -∫S [Jm(r) · E^i(r) - Mm(r) · H^i(r)] ds   (19)

where S is the surface enclosing the scatterer, E^s(r) and H^s(r) are the scattered electric and magnetic fields at r, and Jm(r) and Mm(r) are the mth electric and magnetic test current sources. By choosing different test sources, J1, J2, ..., JN and M1, M2, ..., MN, in eqn. 19, we can obtain N equations to solve for the unknown surface current on S,

which is implicitly contained in E^s and H^s. Strictly speaking, eqn. 19 is not an integral equation but corresponds to the mth matrix equation in eqn. 8. Let us first choose only electric test sources in eqn. 19, thereby reducing eqn. 19 to

∫S Jm(r) · E^s(r) ds = -∫S Jm(r) · E^i(r) ds   (20)

It can then be shown that eqn. 20 corresponds to the mth direct MM matrix equation of eqn. 8 with the basis functions unspecified. We begin by writing the following implicit integral equation:

n̂ × E^s(r) = -n̂ × E^i(r) on S   (21)

where n̂ is a unit vector normal to S at r. Eqn. 21 is a short form of the electric field integral equation, or simply a statement of a boundary condition on S. Now, if we choose n̂ × Jm(r), m = 1, 2, ..., N, as weighting functions and eqn. 21 as the integral equation in eqn. 8, the resulting equations will be identical to eqn. 20. (The facts that Jm is tangential to the surface S and that Jm · n̂ × E = E · Jm × n̂ are used in the derivation.) It is therefore clear that, regardless of what basis is chosen, the direct MM for the electric field integral equation satisfies the reaction theorem, and therefore reciprocity. A similar result can be derived for the magnetic field integral equation by choosing only magnetic test sources in eqn. 19. The general equation, eqn. 19, can then be obtained by superposition.

4.2 The second type of reaction integral equation

The second type of reaction integral equation is [3]

∫S [Jm(r) · E^s(r) - Mm(r) · H^s(r)] ds + ∫V [Ji(r) · E^m(r) - Mi(r) · H^m(r)] dv = 0   (22)

where E^m(r) and H^m(r) are the electric and magnetic fields in free space due to the test sources Jm and Mm. It is also required that Jm and Mm be located at interior points of V. Let us first look at the simple case in which the magnetic current source Mi = 0 and the surface S is perfectly conducting. Eqn. 22 is now reduced to

∫S Jm(r) · E^s(r) ds + ∫V Ji(r) · E^m(r) dv = 0   (23)

Again we will show that eqn. 23 corresponds to one of the mth MM equations in eqn. 8 with an unspecified basis. To prove this, we begin with the implicit integral equation

J = Ji + Js   (24)

where the domain can be considered to be the entire space, or the surface S and the volume Vi occupied by the electric current source Ji. Note that we have now replaced the scatterer enclosed by the surface S by an equivalent surface current Js on S. In this equivalent problem the total fields inside V vanish, that is,

E = 0 in V   (25)

Let us take the symmetric product with E^m on both sides of eqn. 24 over the entire space, obtaining

∫ E^m · J dv = ∫ E^m · Ji dv + ∫ E^m · Js dv   (26)

where ∫ without a domain denotes integration throughout the entire space. Next we note that

∫ E^m · J dv = ∫ Jm · E dv = ∫V Jm · E dv = 0   (27)

The first equality in eqn. 27 is due to reciprocity and is a statement of the reaction theorem. The second equality in eqn. 27 is due to eqn. 25 and to the fact that Jm = 0 outside V and on S; that is, the condition stated earlier that the test source Jm is in the interior region of V. Combining eqns. 26 and 27, we obtain eqn. 23, the difference between the domains of the two volume integrals, V and Vi, being immaterial because Ji is zero outside Vi. To prove that the more general form of equation in 22 is the mth MM equation in eqn. 8, we use the superposition theorem to deal with the electric and magnetic test fields E^m and H^m separately, and also with the electric and magnetic source currents Ji and Mi separately. By using the technique for deriving eqn. 23, we can obtain the two individual equations, which can be superposed to yield eqn. 22.
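A small numerical experiment makes the reaction interpretation tangible. The sketch below is illustrative only and uses an assumed scalar reciprocal kernel in place of the true dyadic Green's function; it shows that a symmetric-product (Galerkin) moment matrix built from a kernel satisfying G(r, r') = G(r', r) is itself symmetric, Amn = Anm, which is the discrete statement that the reaction of test source m on the field of source n equals the reaction of n on m.

```python
import numpy as np

# Illustrative check: a Galerkin/symmetric-product moment matrix
#   Amn = <wm, A(phi_n)> = \int\int wm(s) G(s, s') phi_n(s') ds' ds
# built from a reciprocal kernel is symmetric, i.e. it satisfies the reaction theorem.

def G(s, sp):
    return np.cos(3.0 * np.abs(s - sp))     # assumed reciprocal kernel, G(s, s') = G(s', s)

N = 30
h = 1.0 / N
s = (np.arange(N) + 0.5) * h                # pulse functions for both basis and weighting

A = h * h * G(s[:, None], s[None, :])       # moment matrix by midpoint quadrature

print("max |Amn - Anm| =", np.abs(A - A.T).max())   # ~ 0 up to round-off
```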

5 Reaction and MM interpretation of the mode matching method

The mode matching method is sometimes used in the numerical solution of problems in waveguides, cavities, phased arrays, and screen or frequency-selective surfaces. In these problems the discontinuities are located at canonical interfaces, and in the mode-matching method one tries to match the modal contents of the two regions at the interface. For example, if an iris exists in a rectangular waveguide, one can write down all the possible waveguide modes on both sides of the iris. One then enforces the boundary condition on the plane of the iris and solves the equations numerically by a matrix method. For example, Lee et al. [16] derived a mode-matching formulation for a thick iris in a parallel-plate waveguide for matrix solutions. They also derived an integral equation formulation for an MM solution, and showed that the mode-matching formulation is identical to their integral equation method (or MM). They further emphasised that only the form of mode matching equivalent to an integral equation formulation of the problem (or MM) should be employed. We will demonstrate in the following that this mode-matching formulation is also a reaction matching process. Let z be the axis of a parallel-plate waveguide of height a. Along the z axis an incident TM wave H^i propagates, with

H^i = ŷ Σ_{n=1}^{∞} An exp(-γn z) cos [(n - 1)πx/a]   (28)

The existence of an iris at z = 0, of the type considered in Reference 16, leads to a total field which can be expressed in terms of TM modes as

H = ŷ Σn ψn = H^+ for z > 0
            = H^- for z < 0   (29)

where ψn denotes all the TM propagating modes, whose exact details are of no concern to the following derivation. In the mode-matching method we enforce the boundary condition that the tangential fields be continuous in

the aperture at z = 0; that is,

H^+ = H^- at z = 0, in the aperture A   (30)

The next step in the mode matching is to multiply both sides of eqn. 30 by

Mm = ŷ cos [(m - 1)πx/a]   m = 1, 2, ..., N   (31)

and integrate over the aperture A. The resulting equations are

∫A H^+ · Mm dA = ∫A H^- · Mm dA   (32)

Eqn. 32 can be interpreted as the reaction integral equation between two regions separated by the surface A, in a manner similar to the 'internal' and 'external' reactions demonstrated by Rumsey [17] in his eqns. 43 to 45. Thus the mode matching method, when properly carried out, should be a direct MM and is also equivalent to a reaction integral equation.
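The reaction content of eqn. 32 can be seen in a short computation. The sketch below is not from Reference 16; it merely evaluates, for assumed aperture limits and guide height, the overlap (reaction) integrals of the cosine modes of eqn. 31 over an aperture A. Over the full cross-section the modes are orthogonal and decouple; over a partial aperture they couple, which is precisely why eqn. 32 becomes a matrix equation to be solved.

```python
import numpy as np

# Overlap (reaction) integrals C_mn = \int_A cos[(m-1) pi x / a] cos[(n-1) pi x / a] dx
# that appear when eqn. 30 is tested with the modes Mm of eqn. 31.

a = 1.0                                     # assumed guide height
x1, x2 = 0.25, 0.75                         # assumed aperture extent of the iris opening
N = 6                                       # number of modes retained on each side

x = np.linspace(x1, x2, 4001)
dx = x[1] - x[0]
modes = np.cos(np.outer(np.arange(N), np.pi * x / a))   # row m-1 holds cos[(m-1) pi x / a]

C = (modes * dx) @ modes.T                  # simple quadrature for the N x N overlap matrix
print(np.round(C, 3))                       # a full aperture (x1 = 0, x2 = a) would make C diagonal
```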
6 An example and numerical observations

The application of both the direct and the iterative MM is now demonstrated by the example of a three-dimensional dielectric or biological body illuminated by a time-harmonic source, a case for which difficulties in achieving numerical convergence by the iterative MM have been reported and discussed [18]. An arbitrarily-shaped three-dimensional inhomogeneous dielectric or biological body occupying a volume V and illuminated by a current source Ji can be represented by an equivalent problem as far as the field E is concerned. In the equivalent problem the dielectric body is replaced by free space with an equivalent volume current Js as follows:

Js(r) = jω[ε(r) - ε0]E(r)   (33)

where r is a position vector specifying the point of interest, ω is the radian frequency, E is the total electric field throughout the space, and ε(r) and ε0 denote the permittivity of the medium at r and of free space, respectively. An integral equation can be written as

E(r) = E^i(r) + ∫V Js(r') · G(r, r') dv'   r ∈ V   (34)

where E^i is the field due to Ji in the absence of the dielectric body, and G is the electric dyadic Green's function. Either E or Js in eqn. 34 can be eliminated by using eqn. 33. Solution by the direct MM using this integral equation was studied originally by Livesay and Chen [19] and later by the author. We will show first here that the direct MM of point matching is equivalent to reaction matching, therefore satisfying reciprocity. For simplicity we rewrite eqn. 34 as

E = E^i + E^s   r ∈ V   (35)

In point matching we choose the weighting functions

w_m^k = ûk δ(r - rm)   k = 1, 2, 3 (x, y, z)   m = 1, ..., N   (36)

where ûk = x̂, ŷ or ẑ (all being unit vectors) depending on whether k = 1, 2 or 3, and δ is the delta function. The resulting direct MM matrix equations are

∫V E · w_m^k dv = ∫V E^i · w_m^k dv + ∫V E^s · w_m^k dv   k = 1, 2, 3 (x, y, z)   m = 1, ..., N   (37)

To show that eqn. 37 satisfies reciprocity, or the reaction theorem, let us choose the test source Jp to be w_m^k, that is,

Jp = w_m^k   p = (k - 1)N + m   (38)

We can then write eqn. 37 as

∫ E · Jp dv = ∫ E^i · Jp dv + ∫ E^s · Jp dv   p = 1, 2, ..., 3N   (39)

Now, by definition,

J = Ji + Js   (40)

We have

∫ E^i · Jp dv + ∫ E^s · Jp dv = ∫ E^p · Ji dv + ∫ E^p · Js dv = ∫ E^p · J dv   (41)

Comparing eqns. 39 and 41, we obtain the desired result that eqn. 39 (the direct MM) holds if and only if the following reciprocity relation (or reaction theorem) is true:

∫ E · Jp dv = ∫ E^p · J dv   (42)

Next we discretise the integral eqn. 35 by letting the unknown Js be

Js(r) = Σ_{n=1}^{N} Σ_{j=1}^{3} J_n^j Pn(r) ûj   (43)

where

Pn(r) = 1 for r in vn
      = 0 elsewhere   (44)

We have divided V into N volume cells of sizes v1, v2, ..., vN. By substituting eqn. 43 into eqn. 34, we obtain a discretised integral equation like eqn. 3. Next we apply the direct and iterative MM separately and solve it numerically. The iterative scheme is a conjugate gradient method similar to the one-dimensional algorithm of van den Berg [11]. The numerical results of the direct and the iterative MM differ by less than one part in a million [13]. These results demonstrate the fundamental similarities between the direct and iterative MM discussed in Section 3. In fact, we have demonstrated in this case that the same matrices (an implicit matrix in the case of the iterative MM) are solved in both the direct and the iterative MM.
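The agreement quoted above can be mimicked on a toy problem. The sketch below is schematic only: a scalar surrogate kernel and a one-dimensional row of cells stand in for the dyadic Green's function of eqn. 34 and the volume cells of eqn. 44, and the contrast factor is an assumed constant. Its sole purpose is to show the structural point of Section 3: the point-matched, pulse-basis system is one and the same matrix equation whether it is solved directly or by a conjugate gradient iteration, so the two answers coincide to within the iteration tolerance.

```python
import numpy as np

# Toy stand-in for the discretised eqns. 34, 36, 43-44: point matching with pulse
# cells gives (I - c*h*G) E = E_inc, solved directly and by conjugate gradient (CGNR).

N = 60
h = 1.0 / N
r = (np.arange(N) + 0.5) * h
c = 0.5                                     # assumed contrast factor, standing in for jw[eps(r) - eps0]

def g(r1, r2):
    return np.exp(-np.abs(r1 - r2))         # assumed smooth scalar surrogate kernel

E_inc = np.cos(2.0 * np.pi * r)             # assumed incident field samples
A = np.eye(N) - c * h * g(r[:, None], r[None, :])

E_direct = np.linalg.solve(A, E_inc)        # direct MM: exact matrix solution

x = np.zeros(N)                             # iterative MM on the same (implicit) matrix
res = A @ x - E_inc
z = A.T @ res
p = -z
for _ in range(500):
    Ap = A @ p
    alpha = (z @ z) / (Ap @ Ap)
    x += alpha * p
    res += alpha * Ap
    if res @ res < 1e-20:
        break
    z_new = A.T @ res
    p = -z_new + (z_new @ z_new) / (z @ z) * p
    z = z_new

print("max |direct - iterative| =", np.abs(E_direct - x).max())
```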
7 Conclusions

The method of moments is generalised to encompass and unify the direct and iterative methods, as well as the reaction integral equation method. The reaction integral equation method is shown to be a direct MM if a symmetric product is used. A numerical example is also discussed to illustrate some of the theoretical findings.

8 References


1 HARRINGTON, R.F.: 'Field computation by moment methods' (Macmillan, New York, 1968)
2 VOROBYEV, Y.V.: 'Method of moments in applied mathematics' (Gordon and Breach, New York, 1965)
3 RICHMOND, J.H.: 'Radiation and scattering by thin-wire structures in the complex frequency domain'. NASA Report CR-2396, Contract NGL 36-008-138, also Ohio State University Report TR 2902-10, May 1974
4 RICHMOND, J.H.: 'Computer program for thin wire structures in a homogeneous conducting medium'. NASA Report CR-2399, June 1974
5 WANG, N.N., RICHMOND, J.H., and GILREATH, M.C.: 'Sinusoidal reaction formulation for radiation and scattering from conducting surfaces', IEEE Trans., 1975, AP-23, pp. 376-382
6 WANG, J.J.H.: 'Generalization and interpretation of the moment methods in electromagnetics', submitted to IEEE Trans. Microw. Theory Tech.
7 RAY, S.L., and PETERSON, A.F.: 'Error and convergence in numerical implementations of the conjugate gradient method', IEEE Trans., 1988, AP-36, pp. 1824-1827
8 HARRINGTON, R.F., and MAUTZ, J.R.: 'Theory of characteristic modes for conducting bodies', IEEE Trans., 1971, AP-19, pp. 622-628
9 WANG, J.J.H.: 'Comments on "Error and convergence in numerical implementations of the conjugate gradient method"', submitted to IEEE Trans. Antennas Propag.
10 PETERSON, A.F., and MITTRA, R.: 'Method of conjugate gradients for the numerical solution of large-body electromagnetic scattering problems', J. Opt. Soc. Am. A, 1985, 2, (6), pp. 971-977
11 VAN DEN BERG, P.M.: 'Iterative computational techniques in scattering based upon the integrated square error criterion', IEEE Trans., 1984, AP-32, pp. 1063-1071
12 SARKAR, T.K., ARVAS, E., and RAO, S.M.: 'Application of the fast Fourier transform and the conjugate gradient method for efficient solution of electromagnetic scattering from both electrically large and small conducting bodies', Electromagnetics, 1985, 5, (4), pp. 99-122
13 WANG, J.J.H., and DUBBERLEY, J.R.: 'Computation of fields in an arbitrarily-shaped heterogeneous dielectric or biological body by an iterative conjugate gradient method', IEEE Trans., 1989, MTT-37, pp. 1119-1125
14 GARBACZ, R.J.: 'A generalised expansion for radiated and scattered fields'. PhD dissertation, Ohio State University, Columbus, Ohio, 1968
15 WANG, J.J.H.: 'On the rapid convergence of Galerkin's method', submitted to IEEE Trans. Antennas Propag.
16 LEE, S.W., JONES, W.R., and CAMPBELL, J.J.: 'Convergence of numerical solutions of iris-type discontinuity problems', IEEE Trans., 1971, MTT-19, pp. 528-535
17 RUMSEY, V.H.: 'Reaction concept in electromagnetic theory', Phys. Rev., 1954, 94, pp. 1483-1491
18 Panel discussions in Session 16, Special Session on Iterative Methods, 1988 joint IEEE AP-S Symposium and National Radio Science Meeting, Syracuse, NY, 7 June 1988
19 LIVESAY, D.E., and CHEN, K.M.: 'Electromagnetic fields induced inside arbitrarily shaped biological bodies', IEEE Trans., 1974, MTT-22, pp. 1273-1280
