FIELD OF THE INVENTION
-
The present invention relates to radar systems and methods generally, and more specifically to methods of detecting a jammer. [0001]
BACKGROUND OF THE INVENTION
-
U.S. Pat. No. 5,600,326 to Yu, et al. describes a system for adaptive beamforming so as to null one mainlobe and multiple sidelobe jammers. Yu addresses a problem wherein the monopulse technique for direction of arrival (DOA) estimation failed when there was sidelobe jamming (SLJ) and/or mainlobe jamming (MLJ). If not effectively countered, electronic jamming prevents successful radar target detection and tracking. The situation is exacerbated by the introduction of stealth technology to reduce the radar cross section (RCS) of unfriendly aircraft targets. The frequency dependence of the RCS encourages use of lower microwave frequency bands for detection, which in turn calls for large apertures to achieve angular resolution. A large aperture achieving a small beamwidth results in the interception of more jamming; on the other hand, a constrained aperture leads to a wider beamwidth, which implies interception of more mainlobe jamming. [0002]
-
Adaptive beamforming techniques have been used to form a beam having one or more nulls pointing in the direction of one or more respective jammers. When there is no jamming, Taylor and Bayliss weightings are typically used for sum beams and difference beams, respectively, so as to have a narrow mainlobe and low sidelobes. The quiescent Taylor and Bayliss weightings are designed for reducing the sidelobes in a practical system. In the presence of jamming, the weights are adapted so as to form nulls responsive to the jammers. [0003]
-
Adaptive receiving arrays for radar maximize the ratio of antenna gain in a specified scan direction to the total noise in the output signal. The sum and difference beams at the array outputs are determined by adaptive receiving array techniques, which serve to null the interference sources. The adaptivity involves using multipliers to apply an adaptive weight to the antenna array signals furnished at the multiplier inputs. Because of the adaptivity, the sum and difference patterns vary with the external noise field and are distorted relative to the conventional monopulse sum and difference beams, which possess even and odd symmetry, respectively, about a prescribed boresight angle. Because the adaptive weights for the sum and difference beams are determined by the external noise field, the resulting antenna patterns are distorted. This technique cancels both the mainlobe and sidelobe jammers but distorts the monopulse ratio. [0004]
-
Yu et al. describe a sample matrix inverse approach for jamming cancellation, which effectively forms nulls responsive to jammers. The covariance matrix is inverted in order to form the adaptive weighting coefficients. If one of the jammers is within the mainbeam, a null is formed responsive to the mainlobe jammer and the mainbeam is distorted. In order to maintain the mainbeam without distortion, the mainlobe jammer effect is excluded from the covariance matrix estimate. This may be accomplished by using a modified covariance matrix in forming the adapted beamforming weights, from which information of the mainlobe jammer has been removed (so there is no null responsive to the mainlobe jammer). Yu et al. use prefiltering to block the mainlobe jammer. [0005]
-
Although the matrix inverse approach can generate desired adaptive weights for pointing nulls toward jammers, this technique does not output the locations of the jammers. To implement active countermeasures (such as sending energy at a particular frequency or band of frequencies to a jammer), it is necessary to know the location of the jammer. In order to determine the DOA of a jammer using the prior art techniques, it was necessary to “point” the receiving antenna array at the jammer, essentially placing the jammer in the mainlobe. Thus, an improved system is desired. [0006]
SUMMARY OF THE INVENTION
-
The present invention is a method and system for locating a radar jammer. Sampled aperture data are received from an antenna array. The sampled aperture data include data that do not correspond to echo returns from a beam transmitted by the antenna. A covariance matrix is generated using the sampled aperture data. An eigenvalue decomposition is performed on the covariance matrix. A direction of arrival is determined from which at least one jammer is transmitting a signal included in the sampled aperture data, based on the eigenvalue decomposition. [0007]
BRIEF DESCRIPTION OF THE DRAWINGS
-
FIG. 1 is a diagram showing a plurality of jammers transmitting signals to a radar antenna array in an exemplary system according to the present invention. [0008]
-
FIG. 2 is a block diagram showing exemplary signal processing performed on the signals received by the antenna array of FIG. 1. [0009]
DETAILED DESCRIPTION
-
U.S. Pat. Nos. 5,600,326 to Yu, et al. and 6,087,974 to Yu are incorporated herein by reference in their entireties, for their teachings on monopulse radar systems. [0010]
-
The present invention is a method and system for determining the angular location of at least one mainlobe or sidelobe jammer. The preferred embodiment allows determination of the DOA of one or more mainlobe jammers and/or one or more sidelobe jammers. Conventional radars locate a jammer by periodically pointing the beam at the jammer. In the exemplary embodiment of the present invention, sampled aperture data, collected during normal target surveillance and tracking, are used to obtain the jammer DOA and to update the DOA continuously. [0011]
-
FIG. 1 shows an environment in which the invention may be practiced. A radar antenna array 100 may receive signals from a plurality of jammers 120-123. While jammers 120, 121 are located within the mainlobe 110, jammers 122, 123 are located within sidelobes. Antenna 100 may be part of a system operating in surveillance mode. Thus, at different times, the mainlobe may be pointed in the direction shown by beam 110′ or in the direction shown by beam 110″. Note that at different times, jammer 122 may be in the mainlobe 110′ (while jammers 120, 121 and 123 are in sidelobes) or jammer 123 may be in the mainlobe 110″ (while jammers 120-122 are in sidelobes). [0012]
-
FIG. 2 shows an exemplary system according to the invention. In the exemplary embodiment, it is not necessary to point the mainlobe of the antenna 100 in the direction of a jammer to be able to locate the jammer. The DOA can be determined using sampled aperture data received by the antenna array 100. The sampled aperture data include data that do not correspond to echo returns from a beam transmitted by the antenna 100. Because the jammers 120-123 transmit energy independently of the beam transmitted by antenna 100, the system can passively collect the sampled aperture data while echo returns from the energy transmitted by antenna 100 are not being received. Energy from jammers 120-123 which arrives within the sidelobes is still identified. Thus, the sampled aperture data for jammer location determination can be obtained regardless of whether the radar system is being operated in surveillance or tracking mode. [0013]
-
The invention may be practiced using conventional radar hardware, including an antenna array 100, receivers, and analog-to-digital converters. [0014]
-
In block 210, a covariance matrix is estimated using the sampled aperture data. This can be the same covariance matrix formed to generate the adaptive weighting coefficients. At least the initial covariance matrix is formed by block processing. Preferably, updates are estimated to allow rapid, real-time or near real-time processing, for example, using a sliding window to update the covariance matrix. [0015]
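As a purely illustrative aid (not part of the disclosure itself), the following Python/numpy sketch shows one way such a sliding-window covariance update could be organized; the function name, the normalization by the window length, and the calling convention are assumptions.

```python
import numpy as np

def sliding_window_covariance(R, new_snap, old_snap, window_len):
    """Sliding-window covariance update: add the outer product of the
    newest snapshot and remove that of the oldest (a rank-2 change).
    Snapshots are complex vectors of array samples."""
    R = R + np.outer(new_snap, new_snap.conj()) / window_len
    R = R - np.outer(old_snap, old_snap.conj()) / window_len
    return R
```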
-
In block 220, an eigenvalue decomposition is performed on the covariance matrix. The number of jammers is determined by the number of significant eigenvalues of the covariance matrix found while the antenna array 100 is not transmitting. More formal statistical methods, such as the Akaike Information Criterion (AIC) or Minimum Description Length (MDL), can be used to determine the number of jammers 120-123 within a field of view of the antenna 100 based on the updated eigenvalue decomposition. The eigenvalue decomposition may be performed using a conventional block-processing technique. However, in a preferred embodiment, once the eigenvalue decomposition is initially determined, a fast eigenvalue decomposition update (such as the exemplary technique described below) is performed to estimate the eigenvalues quickly. Eigenvalue decomposition is numerically intensive; thus, a fast eigenvalue decomposition update technique allows estimation of the eigenvalues in real-time or near real-time, without interfering with ongoing surveillance and/or tracking activity. [0016]
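For illustration only, a sketch of one common form of the MDL rule follows, assuming the sorted eigenvalues of the sample covariance matrix and the snapshot count are available; the particular MDL variant shown is an assumption, since the text does not fix one.

```python
import numpy as np

def mdl_num_sources(eigvals, num_snapshots):
    """Estimate the number of sources (jammers) by minimizing the MDL
    criterion over candidate model orders k = 0 .. N-1.
    eigvals: eigenvalues of the sample covariance (assumed positive)."""
    lam = np.sort(np.real(eigvals))[::-1]           # descending order
    N, K = len(lam), num_snapshots
    mdl = np.empty(N)
    for k in range(N):
        tail = lam[k:]                              # assumed noise eigenvalues
        log_gm = np.mean(np.log(tail))              # log geometric mean
        log_am = np.log(np.mean(tail))              # log arithmetic mean
        mdl[k] = -K * (N - k) * (log_gm - log_am) \
                 + 0.5 * k * (2 * N - k) * np.log(K)
    return int(np.argmin(mdl))
```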
-
Block 230 explicitly determines a direction of arrival from which at least one jammer 120-123 is transmitting a signal included in the sampled aperture data, based on the eigenvalue decomposition. Preferably, a modern super-resolution subspace algorithm, such as MUSIC, minimum-norm, or the like, is used. [0017]
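By way of a hedged illustration, the sketch below implements a basic MUSIC pseudospectrum search for a uniform linear array; the half-wavelength element spacing, the angle grid, and the function name are assumptions, and peaks of the returned spectrum indicate candidate jammer DOAs.

```python
import numpy as np

def music_spectrum(R, num_sources, d_over_lambda=0.5):
    """MUSIC pseudospectrum over candidate arrival angles (degrees)
    for a uniform linear array; R is the sample covariance matrix."""
    N = R.shape[0]
    _, vecs = np.linalg.eigh(R)                     # eigenvalues ascending
    En = vecs[:, :N - num_sources]                  # noise-subspace eigenvectors
    grid = np.linspace(-90.0, 90.0, 361)
    spectrum = np.empty(grid.size)
    for i, theta in enumerate(grid):
        a = np.exp(2j * np.pi * d_over_lambda * np.arange(N)
                   * np.sin(np.deg2rad(theta)))     # steering vector
        proj = En.conj().T @ a                      # projection onto noise subspace
        spectrum[i] = 1.0 / np.real(proj.conj() @ proj)
    return grid, spectrum
```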
-
Storage devices 240 and 250 are provided. The storage devices may be any conventional data storage devices, including random access memory (RAM), latches, or registers. Each time the covariance matrix is evaluated (or an update thereof is estimated), the results are stored in storage device 240, to be used when updating the covariance matrix on subsequent iterations. [0018]
-
Similarly, each time the eigenvalue decomposition is evaluated (or an update thereof is estimated), the results are stored in storage device 250, to be used when updating the eigenvalue decomposition on subsequent iterations. Recursive, cumulative covariance processing allows estimation of the DOA to be performed in real-time or near real-time with reduced consumption of processing resources. [0019]
-
In one example, the updates to the covariance matrix are performed at regular intervals. For example, the regular intervals may have a period of between about one millisecond and about ten milliseconds, corresponding to a frequency of between about 100 Hz and about 1000 Hz. Each time the covariance matrix is updated, the eigenvalue decomposition is updated, preferably using a fast parallel decomposition technique. By processing recursively, it is possible to use the maximum number of samples within the computational period, and use of a sliding window allows the most up-to-date samples to be used. The covariance processing can be performed continuously in the background, without interfering with surveillance or tracking of targets. Further, because the sampled aperture data from any given direction can be sampled as often as desired, super-resolution is possible, enabling detection of multiple jammers within a single range cell. [0020]
-
Once determined, the direction of arrival may be provided to a tracker. [0021]
-
Using the above method, the angular locations of one or more jammers can be determined, including one or more jammers within the mainlobe of the antenna array 100 and/or one or more jammers located in sidelobes of the antenna array. [0022]
-
Because the exemplary method does not require the mainlobe to be pointed towards a jammer to determine the jammer location, it is possible to detect and track jammers in normal surveillance mode. [0023]
-
In addition to the conventional abilities for target detection and tracking in the presence of jamming, the exemplary embodiment makes it possible to add awareness of the DOA of multiple jammers. This allows the system to monitor and track the locations of jammers. [0024]
-
Although the exemplary embodiment described above uses recursive processing, block processing may alternatively be used. For block processing, the covariance matrix is periodically generated based on additional sampled aperture data. The eigenvalue decomposition is evaluated by block processing each time the covariance matrix is evaluated. The number of jammers within the field of view of the antenna is evaluated based on the updated eigenvalue decomposition. Block processing may be performed every ten pulses (100 Hz), for example. [0025]
FAST EIGENVALUE DECOMPOSITION PROCESSING ALGORITHM
EIGENVALUE DECOMPOSITION
-
Eigenspace decompositions are used in solving many signal processing problems, such as source location estimation, high-resolution frequency estimation, and beamforming. In each case, either the eigenvalue decomposition (EVD) of a covariance matrix or the singular value decomposition (SVD) of a data matrix is performed. For adaptive applications in a non-stationary environment, the EVD is updated with the acquisition of new data and the deletion of old data. This situation arises where the target sources are moving or the sinusoidal frequencies are varying with time. For computational efficiency or for real-time applications, an algorithm is used to update the EVD without solving the EVD problem from scratch again, i.e., an algorithm that makes use of the EVD of the original covariance matrix. In numerical linear algebra, this problem is called the modified eigenvalue problem. Other examples of modified eigenvalue problems include extension and restriction problems where the order of the matrix is modified, corresponding to filter order updating or downdating. [0026]
-
One aspect of the exemplary embodiment of the invention is the use of a fast recursive eigenvalue decomposition method in processing the radar echo signals in a raid-counting mode. Formally, these problems are concerned with computing the eigensystem of Hermitian matrices that have undergone finite changes of restricted rank. The rank of the modification is usually small compared with the rank of the original matrix. In these situations, perturbation ideas relating the eigensystems of the modified and original matrices can be exploited to derive a computationally efficient algorithm. [0027]
-
In a time-varying environment, there are two strategies for updating the covariance matrix. The most common one is the rank-1 update, which involves applying an exponential forgetting window to the covariance matrix, i.e., [0028]
-
\hat{R} = \mu R + (1 - \mu)\,\alpha\alpha^H    (1)
-
where R and R̂ correspond to the original and updated covariance matrices, respectively, μ is an appropriate exponential forgetting factor, and α is the most recent data vector. The data vectors correspond to linear-prediction sample vectors in frequency estimation or snapshots in array processing. The exponential window approach may have several drawbacks: the influence of old data can last for a very long time, and the exponential forgetting factor μ must be determined appropriately. [0029]
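A one-line illustrative sketch of the rank-1 exponential-window update of equation (1) follows; the default forgetting factor is an arbitrary assumption.

```python
import numpy as np

def exp_window_update(R, a, mu=0.99):
    """Rank-1 exponentially forgetting covariance update, equation (1)."""
    return mu * R + (1.0 - mu) * np.outer(a, a.conj())
```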
-
Another strategy is to use a sliding window, which is analogous to short-time signal analysis, where the data within the window are assumed to be stationary. This strategy corresponds to simultaneously adding a row to and deleting a row from the data matrix, or to the following rank-2 covariance matrix update problem: [0030]
-
\hat{R} = R + \alpha\alpha^H - \beta\beta^H    (2)
-
where α is the data vector to be included and β is the data vector to be deleted. The eigenvalues can be computed explicitly when the order of the matrix after deflation is less than 5 (say, 3 or 4). One can also add and delete blocks of data, leading to the general rank-k modification problem, and one can modify equation (2), or the rank-k modification problem, by adding weighting factors indicating the relative weights of the previous covariance matrix and the new data vectors in forming the new covariance matrix estimate. [0031]
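For the general rank-k case just mentioned, the block form of the update might look as follows; this is an illustrative sketch of the unweighted form of equation (2) extended to blocks of added and deleted data, with assumed names.

```python
import numpy as np

def rank_k_update(R, A, B):
    """General rank-k modification: the columns of A are the data
    vectors added to the window; the columns of B are those deleted."""
    return R + A @ A.conj().T - B @ B.conj().T
```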
-
Simultaneous data addition and deletion have also been considered in other signal processing applications, such as recursive least squares problems. Of course, the modification problem can be solved by applying a rank-1 update as many times as desired, but that approach is computationally involved, because nonlinear searches for the eigenvalues have to be carried out for each rank-1 update and the procedure is repeated k times; solving the eigenvalue search problem efficiently speeds up the process. It should also be noted that subtracting data may lead to ill-conditioning, since the smallest eigenvalue may move toward zero. Theoretically, this should not happen, because only different segments of data are used for forming the covariance matrix; numerically, however, ill-conditioning may occur, so, in general, data subtraction should be avoided. The choice of updating scheme depends on the specific application and on assumptions about the signal sources. The exemplary embodiment of the present invention provides an algorithm for recursively updating the EVD of the covariance matrix when the covariance matrix is under low-rank updating, which may include data deleting. [0032]
-
An overview of the modified eigenvalue problem for raid counting is provided herein, together with its application to recursive updating of the eigen-subspace when the covariance matrix is under low-rank updating. It includes a theoretical framework in which rank-k updates are related to one another. The spectrum-slicing theorem enables location of the eigenvalues to any desired accuracy by means of the inertia of a much-reduced-size Hermitian matrix whose elements depend on the eigenvalue parameter, λ. This idea is incorporated into a complete algorithm, with analysis, for updating the eigen-decomposition of the covariance matrix as it arises in eigenbased techniques for frequency or angle-of-arrival estimation and tracking. A novel algorithm is employed for computing the eigenvectors, and the deflation procedure is worked out for rank-k updates. The efficient procedure for computing eigenvalues is a two-step procedure; it improves the computational complexity from O(N³) to O(N²k). The deflation procedure is important for eigen-subspace updating in high-resolution algorithms, because a high filter order leads to multiplicity of the noise eigenvalues. If only the principal eigenvalues and eigenvectors need to be monitored (as in the PE algorithm), the noise eigenvalue multiplicity leads to a highly efficient algorithm. An analysis of the computational complexity indicates an improvement of an order of magnitude, from O(N³) to O(N²k), when compared with prior known routines. The most efficient conventional EVD routines involve Householder reduction to tridiagonal form followed by QL iteration. [0033]
-
An efficient algorithm for computing the eigensystem of the modified covariance matrix is described below. The algorithm involves a nonlinear search for the eigenvalues that can be done in parallel. Once the eigenvalues are obtained, the eigenvectors can be computed explicitly in terms of an intermediate vector, which is the solution of a deflated homogeneous linear system. The general procedure, coupled with the multiplicity of noise eigenvalues in subspace signal processing applications, leads to a highly efficient algorithm for adaptive estimation or tracking problems. [0034]
-
II. Updating the EVD of the Covariance Matrix [0035]
-
(A) Modified Eigenvalue Problem [0036]
-
The modified eigenvalue problem is concerned with computing the eigensystem of the modified Hermitian matrix, given a priori knowledge of the eigensystem of the original matrix. This specifically concerns the following additive modification: [0037]
-
\hat{R} = R + E    (3)
-
where R̂, R ∈ C^{N×N} are the modified and original covariance matrices, respectively, and E ∈ C^{N×N} is the additive modification matrix. This matrix is also Hermitian and, in general, is of indefinite nature; i.e., it may have negative eigenvalues (corresponding to downdating). Assume E is of rank k, where k is usually much smaller than N. Because E is Hermitian, it has the following weighted outer-product expansion: [0038]
-
E = U S U^H    (4)
-
where U ∈ C^{N×k} and S ∈ R^{k×k} is a nonsingular matrix. For example, equation (4) can be obtained as the eigenvalue-eigenvector expansion of E, where S is a diagonal matrix with the eigenvalues on the diagonal and U is the corresponding orthonormal eigenvector matrix. Another example of the decomposition of E, as shown in equation (4), is expressed directly in terms of the data; in that case, S has diagonal elements equal to 1 or −1, corresponding to updating or downdating, respectively, and U is the matrix of the corresponding data vectors, so U is then not orthonormal. A priori information on the EVD of R is also available, as follows: [0039]
-
R = Q D Q^H    (5)
-
where D ∈ R^{N×N}, with D = diag[d_1, d_2, …, d_N] the diagonal eigenvalue matrix, and Q = [q_1, …, q_N] ∈ C^{N×N} the corresponding eigenvector matrix. Note that Q is an orthonormal matrix, i.e., [0040]
-
Q Q^H = Q^H Q = I    (6)
-
The problem now is to find the modified eigensystem. Assuming that (λ, x) is an eigenpair of R̂, the following expression is obtained: [0041]
-
(\hat{R} - \lambda I)\,x = 0    (7)
-
The eigenvalues can be determined by solving for the zeros of [0042]
-
\det[\hat{R} - \lambda I] = 0    (8)
-
Substituting R̂ from (3) and E from (4) into (7) gives the following expression: [0043]
-
(R - \lambda I)\,x + U S U^H x = 0    (9)
-
It is convenient to split the rank-k modification aspect away from the rest of the problem by inducing the following system of equations, where y = S U^H x and y ∈ C^k: [0044]
-
(R - \lambda I)\,x + U y = 0    (10a)
-
U^H x - S^{-1} y = 0    (10b)
-
Solving for x in terms of y in (10a) and substituting it into (10b) gives the following: [0045]
-
W(\lambda)\, y = 0    (11)
-
where
-
W(\lambda) = S^{-1} + U^H (R - \lambda I)^{-1} U    (12)
-
Note that W(λ) can be identified as the Schur complement of R − λI in M(λ), where [0046]

M(\lambda) = \begin{bmatrix} R - \lambda I & U \\ U^H & -S^{-1} \end{bmatrix}    (13)
-
W(λ) is also called the Weinstein-Aronszajn (W-A) matrix, which arises in perturbation problems for Hilbert-space operators. The modified eigenvalues can be obtained by solving det[W(λ)] = 0 rather than equation (8). In fact, λ is an eigenvalue of R̂, i.e., λ ∈ λ(R̂), if and only if λ is a solution of det[W(λ)] = 0. This can be derived easily by applying the known Schur determinant identity to equation (13), leading to [0047]

\det[M(\lambda)] = \det[-S^{-1}]\,\det[\hat{R} - \lambda I]    (14)
-
Because S is invertible, det[R̂ − λI] = 0 implies det[M(λ)] = 0. Also, det[M(λ)] can be expressed as [0048]
-
\det[M(\lambda)] = (-1)^k \det[R - \lambda I]\,\det[W(\lambda)]
-
leading to [0049]

\det[W(\lambda)] = \det[S^{-1}] \prod_{i=1}^{N} \frac{\hat{\lambda}_i - \lambda}{\lambda_i - \lambda}    (15)
-
with λ̂_i ∈ λ(R̂) and λ_i ∈ λ(R). Thus {λ̂_i} and {λ_i} are the zeros and poles of the rational polynomial det[W(λ)]. Note that the above derivation is valid only when the eigenvalues of R and R̂ are distinct. In fact, R − λI in equation (12) is not invertible when λ coincides with one of the eigenvalues of R. A deflation procedure is prescribed when some of the eigenvalues of the modified and original matrices are in fact the same. Note that w(λ) = det[W(λ)] is a nonlinear equation that is easy to evaluate: W(λ) is a k×k matrix, of much smaller dimension than the N×N matrix R̂ − λI. Moreover, the resolvent (R − λI)⁻¹ is easy to compute if the EVD of R is available, i.e., [0050]
-
W(\lambda) = S^{-1} + U^H Q (D - \lambda I)^{-1} Q^H U    (16)
-
where (D−λI) is a diagonal matrix. The fast algorithm to be described depends on a spectrum-slicing theorem relating the eigenvalues of the modified matrix to the eigenvalues of the original matrix. This theorem enables the search to be localized in any interval. Moreover, the search can be carried out in parallel, leading to an efficient implementation. [0051]
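As an illustration of equation (16), a sketch of evaluating the W-A matrix from the stored EVD follows; the argument names are assumptions, and Q^H U would in practice be cached, as noted later in the text.

```python
import numpy as np

def wa_matrix(lam, S, U, Q, d):
    """Weinstein-Aronszajn matrix W(lam) of equation (16).
    Q, d: eigenvectors/eigenvalues of the original covariance matrix;
    (D - lam*I) is diagonal, so its inverse is an elementwise division."""
    QU = Q.conj().T @ U                             # N x k; cacheable
    return np.linalg.inv(S) + QU.conj().T @ (QU / (d[:, None] - lam))
```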
-
Deflation [0052]
-
Several situations exist for deflating the problem. The details of two situations are described here: (1) U is orthogonal to q_i, and (2) there are multiplicities of the eigenvalues of the original covariance matrix. The consideration here is for the rank-k update. Other special situations exist where the problem can be deflated immediately, such as the existence of zero elements in the update vector. For the first case, q_i^H U = 0; therefore (d_i, q_i) remains an eigenpair of the modified matrix R̂. For the second case, if λ is an eigenvalue of multiplicity m with the corresponding set of eigenvectors Q̃ = [q_1 … q_m], it is then possible to perform a Householder transformation on Q̃ such that the last m − 1 transformed eigenvectors are orthogonal to u_1 (where U = [u_1 … u_k]), i.e., [0053]
-
\tilde{Q} H_1 = [q_1^{(1)} \ldots q_m^{(1)}] \triangleq \tilde{Q}^{(1)}    (17a)
-
q_i^{(1)H} u_1 = 0, \quad i = 2, \ldots, m    (17b)
-
Thus, {q_i^{(1)}}_{i=2}^{m} remain eigenvectors of R̂ corresponding to the eigenvalue λ. The Householder matrix H_1 is an orthogonal matrix given by [0054]

H_1 = I - \frac{2\, b_1 b_1^H}{b_1^H b_1}    (18)
-
where [0055]

b_1 = z + \sigma e_1

with z = Q̃^H u_1, σ = ‖z‖₂, and e_1 = [1, 0, …, 0]^H. Now let Q̂^{(1)} = [q_2^{(1)} … q_m^{(1)}]; after performing another appropriate Householder transformation, we obtain [0056]
-
\hat{Q}^{(1)} H_2 = [q_2^{(2)}\; q_3^{(2)} \ldots q_m^{(2)}], \qquad q_i^{(2)H} u_2 = 0, \quad i = 3, \ldots, m    (19)
-
After continuing this procedure k times, until [0057]
-
\hat{Q}^{(k-1)} H_k = [q_k^{(k)}\; q_{k+1}^{(k)} \ldots q_m^{(k)}], \qquad q_i^{(k)H} u_k = 0, \quad i = k+1, \ldots, m    (20)
-
then {q_i^{(k)}}_{i=k+1}^{m} are the eigenvectors of R̂, with λ ∈ λ(R̂) of multiplicity m − k. [0058]
-
Spectrum-Slicing Theorem [0059]
-
Assuming all deflation procedures have been carried out, i.e., all d_i are distinct and q_i^H U ≠ 0, a computationally efficient algorithm can be used to locate the eigenvalues to any desired accuracy. The fast algorithm that allows one to localize the eigenvalue search depends on the following spectrum-slicing equation: [0060]
-
N_{\hat{R}}(\lambda) = N_R(\lambda) + D_+[W(\lambda)] - D_+[S]    (21)
-
where N_R̂(λ) and N_R(λ) are the numbers of eigenvalues of R̂ and R less than λ, respectively, D_+[W(λ)] is the positive inertia of W(λ) (i.e., the number of positive eigenvalues of W(λ)), and, likewise, D_+[S] is the positive inertia of S. The spectrum-slicing equation (21) provides information of great computational value. W(λ) is easy to compute for each value of λ: if Q^H U is computed and stored initially, the dominant cost is on the order of Nk² floating-point operations per evaluation. Upon evaluating the W-A matrix W(λ), one may then compute its inertia efficiently from an LDL^H or diagonal-pivoting factorization, which requires O(k³) operations. The value of N_R(λ) is readily available from the EVD of R. D_+[S] can be determined easily, either from the EVD of E (because only a few eigenvalues are nonzero) or by counting the number of data vectors to be added (assuming they do not align with each other); it needs to be determined only once. The spectrum-slicing equation provides an easy mechanism for counting the eigenvalues of R̂ in any given interval. This information enables one to localize the search in much the same way as Sturm sequence/bisection techniques are used in polynomial root finding. The bisection scheme can be carried out until convergence occurs; the convergence rate is linear. The eigenvalue search algorithm can be summarized as follows: [0061]
-
1. Use the knowledge of the original spectrum and equation (21) to localize the eigenvalues to disjoint intervals. [0062]
-
2. For each iteration step of each eigenvalue search in the interval (l, u), set [0063]

\lambda = \frac{l + u}{2}
-
and test it with N_R̂(λ) (equation (21)). Repeat until convergence to the desired accuracy. [0064]
-
Since the distribution of the eigenvalues of the original matrix is known, the eigenvalues can be localized to disjoint intervals easily by using equation (21). For the bisection search, the dominant cost of each iteration is the evaluation of W(λ) and of its LDL^H decomposition for the inertia computation. [0065]
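The following hedged sketch combines the counting equation (21) with the bisection search just summarized. For clarity it computes the inertias with a dense eigenvalue call rather than the LDL^H factorization the text prescribes; the function names and tolerance are assumptions.

```python
import numpy as np

def count_eigs_below(lam, d, S, U, Q):
    """Spectrum-slicing count of equation (21): number of eigenvalues
    of the modified matrix lying below lam, obtained from inertias."""
    QU = Q.conj().T @ U
    W = np.linalg.inv(S) + QU.conj().T @ (QU / (d[:, None] - lam))
    n_orig = int(np.sum(d < lam))                   # N_R(lam)
    pos_w = int(np.sum(np.linalg.eigvalsh(W) > 0))  # D+[W(lam)]
    pos_s = int(np.sum(np.linalg.eigvalsh(S) > 0))  # D+[S]
    return n_orig + pos_w - pos_s

def bisect_eigenvalue(lo, hi, target, d, S, U, Q, tol=1e-6):
    """Bisection on the counting function for an eigenvalue already
    isolated in (lo, hi); target is the desired count below it."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(mid, d, S, U, Q) >= target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```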
-
This algorithm can be illustrated by considering the following numerical example. Let the original covariance matrix be given by R = diag(50, 45, 40, 35, 30, 25, 20, 15, 10, 5). Thus the eigenvalues are the diagonal elements, and the eigenvectors are the unit vectors {e_i}_{i=1}^{10}. Now assume the covariance matrix undergoes the following rank-3 modification: [0066]
-
\hat{R} = R + u_1 u_1^t + u_2 u_2^t + u_3 u_3^t
-
where u_1, u_2, and u_3 are randomly generated data vectors given by [0.3563, −0.2105, −0.3559, −0.3566, 2.1652, −0.5062, −1.1989, −0.8823, 0.7211, −0.0067]^t, [−0.5539, −0.4056, −0.3203, −1.0694, −0.5015, 1.6070, 0.0628, −1.6116, −0.4073, −0.5950]^t, and [0.6167, −1.1828, 0.3437, −0.3574, −0.4066, −0.3664, 0.8533, −1.5147, −0.7389, 2.1763]^t. Thus S = diag(1, 1, 1) and D_+[S] = 3. For this example, the resulting eigenvalues are λ̂ = (51.1439, 47.1839, 40.6324, 36.9239, 36.0696, 27.6072, 22.1148, 20.0423, 11.0808, 8.6086). Now let us illustrate how (21) can be used to locate the eigenvalues accurately. Start with the interval (20, 25). The associated W-A matrices are evaluated and the inertias computed. The counting formula evaluated at 20+ε and 25−ε (N_R̂(25−ε) = 4, N_R̂(20+ε) = 2, with ε a small constant depending on the accuracy requirement) indicates that there are two eigenvalues in the interval (20, 25). [0067]
-
This interval is then split into the intervals (20, 22.5) and (22.5, 25). Evaluating the counting equation indicates that an eigenvalue has been isolated in each disjoint interval. Bisection can then be employed to the desired accuracy. Table 1 illustrates the iteration steps for the interval (20, 21.25) for an accuracy of 10⁻³; the search converges to 20.0425 in 12 steps. The maximum number of iterations required for convergence depends on the interval (l, u) to be searched and on the accuracy requirement ε. This number q can be determined such that 2^q > (u − l)/ε, and is given in Table 2. [0068]
TABLE 1
Bisection Search for Eigenvalues Using the Spectrum-Slicing Equation

Step | Interval           | Midpoint λ | N_R(λ) | D_+[W(λ)] | N_R̂(λ)
  1  | (20, 21.25)        | 20.625     | 4      | 2         | 3
  2  | (20, 20.625)       | 20.3125    | 4      | 2         | 3
  3  | (20, 20.3125)      | 20.1563    | 4      | 2         | 3
  4  | (20, 20.1563)      | 20.0782    | 4      | 2         | 3
  5  | (20, 20.0782)      | 20.0391    | 4      | 1         | 2
  6  | (20.0391, 20.0782) | 20.0587    | 4      | 2         | 3
  7  | (20.0391, 20.0587) | 20.0489    | 4      | 2         | 3
  8  | (20.0391, 20.0489) | 20.044     | 4      | 2         | 3
  9  | (20.0391, 20.044)  | 20.0416    | 4      | 1         | 2
 10  | (20.0416, 20.044)  | 20.0428    | 4      | 2         | 3
 11  | (20.0416, 20.0428) | 20.0422    | 4      | 1         | 2
 12  | (20.0422, 20.0428) | 20.0425    | 4      | 2         | 3
-
When an interval has been found to contain a single eigenvalue λ, bisection is a rather slow way to pin λ down to high accuracy. To speed up convergence, a preferable strategy is to switch to a rapidly converging root-finding scheme after the roots have been sufficiently localized within subintervals by using the counting equation. Root-finding schemes, including variations of the secant method, can be employed. The order of convergence is 1.618 for secant methods, as against 1 for bisection; thus convergence can be accelerated. [0069]
-
Eigenvector [0070]
-
Once the eigenvalues are obtained, the eigenvectors can be evaluated efficiently in two steps. First, the intermediate vector y is solved from the k×k homogeneous Hermitian system (11); y can actually be obtained as a byproduct of the LDL^H decomposition of W(λ), i.e., y is the null eigenvector of W(λ) at the convergent eigenvalue λ. The eigenvector x can then be computed explicitly using equation (10a). This two-step procedure is much more efficient than solving the original N×N homogeneous system of equation (7) or using a conventional eigenvector solver based on inverse iteration. The explicit expression for x in terms of y is given by [0071]

x = -(R - \lambda I)^{-1} U y = -Q (D - \lambda I)^{-1} Q^H U y    (22)
-
Normalizing x gives the updated eigenvector q̂, i.e., [0072]

\hat{q} = \frac{x}{\|x\|_2}    (23)
TABLE 2

Number of Subintervals to be Searched | Number of Iteration Steps
10²                                   | 7
10³                                   | 10
10⁴                                   | 14
10⁵                                   | 17
10⁶                                   | 20
-
The computational complexity for solving the k×k homogeneous Hermitian system is O(k³), and the back substitution of equation (22) requires on the order of O(N²k) operations. This is to be contrasted with Householder reduction followed by implicit-shift QL, which requires complexity of order O(N³); similarly, inverse iteration requires O(N³). The present approach thus represents an order-of-magnitude improvement, from O(N³) to O(N²k). [0073]
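A sketch of the two-step eigenvector computation of equations (11) and (22) follows; it takes the null vector of W(λ) from a dense eigen-decomposition rather than as an LDL^H byproduct, which is an implementation assumption.

```python
import numpy as np

def modified_eigenvector(lam, d, S, U, Q):
    """Two-step eigenvector computation: y is the (near-)null vector of
    W(lam), equation (11); x follows by back substitution, equation (22)."""
    QU = Q.conj().T @ U
    W = np.linalg.inv(S) + QU.conj().T @ (QU / (d[:, None] - lam))
    w_vals, w_vecs = np.linalg.eigh(W)
    y = w_vecs[:, np.argmin(np.abs(w_vals))]        # null vector of W(lam)
    x = -Q @ (QU @ y / (d - lam))                   # x = -Q (D - lam I)^-1 Q^H U y
    return x / np.linalg.norm(x)                    # normalization, equation (23)
```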
-
For completeness, a discussion on the rank-1 and rank-2 updates is provided. Since w(λ) and its derivatives can be evaluated easily for these two cases, Newton's method can be employed for the eigenvalue search. Some new results were obtained as we worked out the complete algorithm, including the deflation procedure (for high-order rank updates), bounds for the eigenvalues (derived from the spectrum-slicing theorem instead of DeGroat and Roberts' generalized interlacing theorem), as well as the explicit eigenvector computation. [0074]
-
(B) Rank-1 Modification [0075]
-
For the rank-1 updating modification R̂ = R + αα^H, the modification matrix E can be identified with E = αα^H, corresponding to the decomposition equation (4) with U = α and S = 1. W(λ) is a 1×1 matrix, which is also equal to the determinant w(λ), i.e., [0076]

w(\lambda) = 1 + \sum_{i=1}^{N} \frac{|q_i^H \alpha|^2}{d_i - \lambda}
-
This is a rational polynomial, with N roots corresponding to N eigenvalues. [0077]
-
It is assumed that this is an N×N problem for which no deflation is possible. Thus, all d_i are distinct, and q_i^H α ≠ 0. The eigenvalues λ_i of the rank-1 modified Hermitian matrix satisfy the following interlacing property: [0078]
-
d_i < \lambda_i < d_{i-1}, \quad i = 1, 2, \ldots, N    (24a)
-
with d_0 = d_1 + |α|². Therefore, the search interval for each λ_i can be restricted to I_i = (d_i, d_{i−1}) for all i = 1, 2, …, N. For downdating, R̂ = R − ββ^H, the interlacing relation is given by [0079]
-
d_{i+1} < \lambda_i < d_i, \quad i = 1, 2, \ldots, N    (24b)
-
where d_{N+1} = d_N − |β|². The corresponding search interval for each λ_i can be restricted to I_i = (d_{i+1}, d_i). An iterative search technique can then be used on w(λ) to identify the updated eigenvalues. The function w(λ) is monotonically increasing between its boundary points because [0080]

w'(\lambda) = \sum_{i=1}^{N} \frac{|q_i^H \alpha|^2}{(d_i - \lambda)^2} > 0
-
Thus, the following Newton's method, safeguarded with bisection, can be applied in an iterative search to identify the j-th eigenvalue λ_j. For convenience, denote the endpoints of the interval by l = d_j and u = d_{j−1}. The full set of eigenvalues can be solved in parallel by using the following iterative algorithm for each eigenvalue: [0081]

\lambda^{(k+1)} = \lambda^{(k)} - \frac{w(\lambda^{(k)})}{w'(\lambda^{(k)})}
-
and stopping if |λ^{(k+1)} − λ^{(k)}| < δ λ^{(k+1)}, where δ is the convergence threshold. Note that when an iteration goes outside the restricted zone, the sign of w (whose gradient is greater than zero) indicates the direction in which the solution is to be found; consequently, the interval can be further restricted as indicated. Newton's method is guaranteed to converge, and this convergence is quadratic near the solution. It should be noted that Newton's method is based on a local linear approximation to the function w(λ); since w(λ) is a rational function, it is possible to speed up the convergence rate by using simple rational polynomials for the local approximations. [0082]
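By way of illustration, the rank-1 secular equation and the safeguarded Newton iteration just described might be coded as follows; the interlacing bounds of (24a) supply the search bracket, and the tolerance and iteration cap are assumptions.

```python
import numpy as np

def rank1_eigenvalue(j, d, c2, tol=1e-10, max_iter=100):
    """Safeguarded Newton search for the j-th eigenvalue of R + a a^H.
    d: original eigenvalues, descending; c2[i] = |q_i^H a|^2."""
    lo = d[j]                                       # interlacing bound (24a)
    hi = d[j - 1] if j > 0 else d[0] + c2.sum()     # d_0 = d_1 + |a|^2
    lam = 0.5 * (lo + hi)
    for _ in range(max_iter):
        w = 1.0 + np.sum(c2 / (d - lam))            # secular function w(lam)
        if w > 0.0:                                 # w increases with lam,
            hi = lam                                # so the root lies below
        else:
            lo = lam
        dw = np.sum(c2 / (d - lam) ** 2)            # w'(lam) > 0
        nxt = lam - w / dw                          # Newton step
        if not (lo < nxt < hi):
            nxt = 0.5 * (lo + hi)                   # bisection safeguard
        if abs(nxt - lam) < tol * max(abs(nxt), 1.0):
            return nxt
        lam = nxt
    return lam
```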
-
Once the eigenvalues are determined to sufficient accuracy, the eigenvectors can be computed by the procedure described in equation (22). They are given explicitly as follows: [0083]

\hat{q}_i = \frac{Q (D - \lambda_i I)^{-1} Q^H \alpha}{\left\| Q (D - \lambda_i I)^{-1} Q^H \alpha \right\|_2}
-
Exponential Window Rank-1 Update [0084]
-
This procedure can be modified easily to accommodate the exponential window. For the exponential-window rank-1 updating modification R̂ = μR + (1 − μ)αα^H, the Weinstein-Aronszajn matrix W(λ) is a 1×1 matrix, which is also equal to the determinant w(λ): [0085]

w(\lambda) = 1 + (1 - \mu) \sum_{i=1}^{N} \frac{|q_i^H \alpha|^2}{\mu d_i - \lambda}
-
The N roots are the updated eigenvalues. They satisfy the following modified interlacing property: [0086]
-
\mu d_i < \lambda_i < \mu d_{i-1}, \quad i = 1, 2, \ldots, N    (30)
-
with μd_0 = μd_1 + |α|². Therefore, the search interval for each λ_i can be restricted to I_i = (μd_i, μd_{i−1}) for all i = 1, 2, …, N. Assuming the eigenvalues do not change dramatically from one update to the next, d_i is a good initial estimate for λ_i. Once the eigenvalues are determined to sufficient accuracy, the eigenvector is given explicitly as follows: [0087]

\hat{q}_i \propto Q (\mu D - \lambda_i I)^{-1} Q^H \alpha
-
This can be normalized accordingly. [0088]
-
(C) Rank-2 Modification [0089]
-
For the rank-2 modification problem, equation (4) can be expressed as follows: [0090]

E = \alpha\alpha^H - \beta\beta^H = U S U^H    (32)
-
where U = [α β] and S = diag(1, −1). The Weinstein-Aronszajn matrix W(λ) is now given by [0091]
-
W(\lambda) = S^{-1} + U^H Q (D - \lambda I)^{-1} Q^H U    (33)
-
Let Q^H U = [y z]; then W(λ) can be computed easily, with the following expression for the determinant w(λ): [0092]

w(\lambda) = \left(1 + y^H (D - \lambda I)^{-1} y\right)\left(-1 + z^H (D - \lambda I)^{-1} z\right) - \left|y^H (D - \lambda I)^{-1} z\right|^2    (34)
-
Assuming all deflation procedures have been carried out, i.e., all d_i are distinct and q_i^H α ≠ 0, q_i^H β ≠ 0 simultaneously, the interlacing property relating the modified eigenvalues to the original eigenvalues is much more complicated than in the rank-1 case. Combining the interlacing theorems for the rank-1 update, equation (24a), and the rank-1 downdate, equation (24b), simultaneously provides the following generalized interlacing theorem for the rank-2 update: [0093]
-
d_{i+1} < \lambda_i < d_{i-1}    (35)
-
where d_{N+1} = d_N − |β|² and d_0 = d_1 + |α|². That is, the modified eigenvalues are bounded by the neighboring upper and lower eigenvalues of the original covariance matrix. In this situation, each interval may contain zero, one, or two eigenvalues. Fortunately, the spectrum-slicing equation can be used to isolate the eigenvalues in disjoint intervals. Once the eigenvalues are isolated, Newton's method can be employed for the eigenvalue search. This nonlinear search can be implemented in parallel, as in the rank-1 case. [0094]
-
The eigenvector can be determined in two steps. First, the intermediate vector y is obtained as the solution of the homogeneous system (11); the eigenvector x is then obtained explicitly in terms of y, as in equation (22). Since k = 2, we can take the homogeneous solution in the form y = [1 v]^H. Solving W(λ)y = 0 gives v: [0095]

v = -\frac{W_{11}(\lambda)}{W_{12}(\lambda)}    (36)
-
Using equation (22), we have the following explicit expression for the eigenvector: [0096]

x = -Q (D - \lambda I)^{-1} Q^H U y    (37)
-
(D) Signal and Noise Subspace Updates with Multiplicity [0097]
-
In many applications, it is convenient to decompose the eigenvalues and eigenvectors into signal and noise eigenvalues and eigenvectors, respectively, i.e., [0098]

R = \sum_{i=1}^{M} \lambda_i q_i q_i^H + \sigma^2 \sum_{i=M+1}^{N} q_i q_i^H    (38)
-
and [0099]
-
S = [q_1\; q_2 \ldots q_M]    (39)
-
N = [q_{M+1} \ldots q_N]    (40)
-
The first M eigenvalues are the signal eigenvalues, corresponding to M complex sinusoids in frequency estimation or M target sources in array processing. The last N − M eigenvalues cluster together and correspond to the noise level. S and N are the signal and noise subspaces, respectively. For a rank-1 update, the first M + 1 eigenvalues and eigenvectors are modified, and the last N − M − 1 eigenpairs stay the same. In order to conform to the model for M sources, we have the following update equation for the eigenvalues: [0100]
-
\hat{\lambda}_1 \geq \hat{\lambda}_2 \geq \ldots \geq \hat{\lambda}_M \geq \hat{\sigma}^2    (41)
-
where [0101]

\hat{\sigma}^2 = \frac{\hat{\lambda}_{M+1} + (N - M - 1)\,\sigma^2}{N - M}    (42)
-
Similarly, for a rank-k update, the first M + k eigenpairs are modified, and the rest remain the same. We have the following update equation for the noise eigenvalue according to the model given by equation (38): [0102]

\hat{\sigma}^2 = \frac{\sum_{i=1}^{k} \hat{\lambda}_{M+i} + (N - M - k)\,\sigma^2}{N - M}    (43)
-
If, in fact, there are only M sources, {λ̂_{M+i}}_{i=1}^{k} should be close to σ². If {λ̂_{M+i}}_{i=1}^{k} are not close to σ², there may be another sinusoid or target. This observation can be used for the detection of new sources. [0103]
-
(E) Computational Complexity [0104]
-
Note that rather different techniques are used in the fast algorithm and in the conventional algorithm for solving the eigensystem. Basically, the present algorithm involves an iterative search for the eigenvalues (using the LDL^H decomposition together with the spectrum-slicing equation) followed by explicit eigenvector computation, whereas all modern eigensystem solvers involve similarity transformations for diagonalization. If a priori knowledge of the EVD of the original covariance matrix were not available, the similarity-transformation technique would generally be preferred, since it has better computational efficiency and numerical properties. The present fast algorithm makes use of the fact that the eigenvalues of the modified matrix are related to the eigenvalues of the original matrix. Moreover, the eigenvalues can be isolated to disjoint intervals easily and searched simultaneously in parallel; each can be located to any desired accuracy by using a spectrum-slicing equation requiring evaluation of the inertia of a much-reduced-size (k×k) Hermitian matrix. In this situation, the computational complexity is evaluated and compared with the complexity of a conventional eigensystem solver; the results are compared for the distinct-eigenvalue situation. Further reduction in computational complexity can be achieved when the multiplicity of the noise eigenvalues is exploited, as discussed above in the Signal and Noise Subspace Updates section. [0105]
-
The grand strategy of almost all modern eigensystem solvers is to push the matrix R toward diagonal form by a sequence of similarity transformations. If the diagonal form is obtained, then the eigenvalues are the diagonal elements, and the eigenvectors are the columns of the product matrices. The most popular algorithms, such as those used in IMSL, EISPACK, or the Handbook for Automatic Computation, involve the following two steps: (1) Householder transformation to reduce the matrix to tridiagonal form and (2) the QL algorithm with an implicit shift to diagonalize it. In the limit of large N, the operation count is proportional to N³. [0106]
-
For the algorithm described herein, the efficiency depends on the fact that the eigenvalues can be isolated easily and thus searched in parallel. For each iteration step, W(λ) is evaluated and its inertia computed. The overall computational complexity is summarized as follows: [0107]

Function                                    | No. of Operations
Evaluate S⁻¹                                | k
Evaluate Q_1^H = Q^H U                      | N²k
Evaluate {q_{1i}^H q_{1i}}_{i=1}^{N}        | Nk²
Overhead total                              | N²k + Nk² + k ≈ N²k
Evaluate W(λ)                               | Nk²
LDL^H decomposition                         | k³
Total for N_i iterations                    | N_i(Nk² + k³) ≈ N_iNk²
Back substitution                           | N²(k + 1) ≈ N²k
Total (overhead, eigenvalues, eigenvectors) | 2N²k + N_iNk²
-
Thus, in general, the computational complexity is of order 2N²k + N_iNk². This corresponds to an improvement from O(N³) to O(N²k). [0108]
-
(F) Numerical Properties [0109]
-
It should be noted that the eigenvalues can be determined to any desired accuracy by using the spectrum-slicing equation. The nonlinear search is bisection, with a linear convergence rate; the number of iterations depends on the accuracy requirement, and details are discussed above in Section II-A. A numerical example is also included in Section II-A to illustrate how the spectrum-slicing equation can be used to locate the eigenvalues to any desired accuracy. The calculation of the updated eigenvectors depends upon the accurate calculation of the eigenvalues. A possible drawback of recursive procedures is the potential error accumulation from one update to the next. Thus the eigenpairs should be kept as accurate as possible to avoid excessive degradation of the next decomposition, which can be accomplished either by pairwise Gram-Schmidt or by full-scale Gram-Schmidt orthogonalization of the derived eigenvectors. Experimental results indicate that pairwise Gram-Schmidt partial orthogonalization at each update controls the error buildup in recursive rank-1 updating. Another approach is to refresh the procedure from time to time to avoid any possible roundoff-error buildup. [0110]
-
III. Adaptive EVD for Frequency or Direction of Arrival Estimation and Tracking [0111]
-
This section applies the modified EVD algorithms to the eigenbased techniques for frequency or angle of arrival estimation and tracking. Specifically, the adaptive versions of the principal eigenvector (PE) method of Tufts and Kumaresan, the total least squares (TLS) method of Rahman and Yu, and the MUSIC algorithm are applied. [0112]
-
(A) Adaptive PE Method [0113]
-
In the linear prediction (LP) method for frequency estimation, the following LP equation is set up: [0114]

\begin{bmatrix} x(0) & x(1) & \cdots & x(L-1) \\ x(1) & x(2) & \cdots & x(L) \\ \vdots & & & \vdots \\ x(N-L-1) & x(N-L) & \cdots & x(N-2) \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_L \end{bmatrix} = \begin{bmatrix} x(L) \\ x(L+1) \\ \vdots \\ x(N-1) \end{bmatrix}    (46)
-
or [0115]
-
X^H c = x    (47)
-
where L is the prediction order, chosen such that M ≤ L ≤ N − 1, M is the number of sinusoids, and X^H is the data matrix. Note that in the case of array processing for angle-of-arrival estimation, each row of the data matrix is a snapshot of sensor measurements. Premultiplying both sides of equation (47) by X gives the following covariance-matrix normal equation: [0116]
-
R c = r    (48)
-
where R = XX^H and r = Xx. The frequency estimates can be derived from the linear-prediction coefficient vector c. In a non-stationary environment, it is desirable to update the frequency estimates by modifying the LP system continuously as more data become available. Specifically, the LP system is modified by deleting the first k rows and appending another k rows (corresponding to a rank-2k update) as follows: [0117]

\hat{X}^H \hat{c} = \hat{x}    (49)
-
which leads to the following modification of the covariance-matrix normal equation: [0118]
-
\hat{R}\,\hat{c} = \hat{r}    (50)
-
where [0119]

\hat{R} = R + \sum_{i=1}^{k} \alpha_i \alpha_i^H - \sum_{i=1}^{k} \beta_i \beta_i^H    (51)

\hat{r} = r + \sum_{i=1}^{k} \alpha_i\, x(N+i-1) - \sum_{i=1}^{k} \beta_i\, x(L+i-1)    (52)
-
where α_i = [x(N+i−L−1) … x(N+i−2)]^H and β_i = [x(i−1) … x(L+i−2)]^H. The PE solution for ĉ is obtained using the following pseudo-rank approximation (assuming that there are M sources): [0120]

\hat{c} = \sum_{i=1}^{M} \frac{\hat{q}_i^H \hat{r}}{\hat{\lambda}_i}\, \hat{q}_i    (53)
-
where λ̂_i and q̂_i are the eigenpair solutions at each time instant. The frequencies can then be obtained by first solving for the zeros of the characteristic polynomial formed from ĉ; the M zeros closest to the unit circle are then used to determine the sinusoidal frequency estimates. [0121]
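The zero-extraction step just described might be sketched as below; the polynomial convention (monic, with negated LP coefficients) and the normalized-frequency output are assumptions about details the text leaves open.

```python
import numpy as np

def lp_frequencies(c_hat, num_sinusoids):
    """Root the LP characteristic polynomial and keep the zeros closest
    to the unit circle; their angles give normalized frequencies."""
    poly = np.concatenate(([1.0], -np.asarray(c_hat)))   # z^L - c1 z^(L-1) - ...
    zeros = np.roots(poly)
    order = np.argsort(np.abs(np.abs(zeros) - 1.0))      # distance to |z| = 1
    signal_zeros = zeros[order[:num_sinusoids]]
    return np.angle(signal_zeros) / (2.0 * np.pi)
```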
-
Because only the M principal eigenvalues and eigenvectors are used in equation (53) for solving the LP coefficient vector ĉ, it is desirable to modify the previous algorithm to monitor only the M principal eigenvalues and eigenvectors, instead of the complete set of N eigenvalues and eigenvectors, to improve computational efficiency. In order to implement the update, the noise eigenvalue needs to be monitored; it is not necessary to monitor the entire set of noise eigenvectors, however. Now assume the principal eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_M, the noise eigenvalue σ² (of multiplicity N − M), and the signal subspace S = [q_1 … q_M] are available. The rank-2k update algorithm requires knowledge of 2k noise eigenvectors, the noise eigenvalue, and the principal eigenvalues and eigenvectors. The 2k noise eigenvectors can be obtained as the normalized orthogonal projections of {α_i} and {β_i} onto the noise subspace N = [q_{M+1} … q_N]. These constructions lead to q_j^H α_i = 0 and q_j^H β_i = 0 for i = 1, …, k and j = M+2k, …, N, and it is not necessary to construct {q_j}_{j=M+2k+1}^{N}. (This procedure corresponds to the deflation situation and the subspace update with multiplicity discussed in an earlier section.) σ² remains as the last N − M − 2k + 1 eigenvalues. The algorithm is summarized as follows: [0122]
-
1. Construct {q_{M+i}}_{i=1}^{2k} such that they are orthogonal to {q_i}_{i=M+2k+1}^{N}, i.e., as the normalized orthogonal projections of {α_i} and {β_i} onto the noise subspace. [0123]
-
2. Conduct a nonlinear parallel search for {λ̂_1, λ̂_2, …, λ̂_{M+2k}} using (21). [0124]
-
3. Update the noise eigenvalue σ̂². [0125]
-
4. Update the signal-subspace eigenvectors using (22). [0126]
-
(B) Adaptive Total Least Squares Method [0127]
-
The TLS method is a refined and improved method for solving a linear system of equations when both the data matrix X^H and the observed vector x are contaminated by noise. In the LP method for frequency estimation, both sides of the equation are contaminated by noise; thus it is desirable to apply the TLS method to solve the LP equation. The relative factor weighting the noise contributions of the two sides of the equation is equal to 1, because X^H and x are derived from the same noisy samples {x(i)}. This leads to the following homogeneous system of equations: [0128]

\begin{bmatrix} X^H & x \end{bmatrix} \begin{bmatrix} c \\ -1 \end{bmatrix} = 0    (54)
-
or, in terms of the covariance matrix, [0129]

R \begin{bmatrix} c \\ -1 \end{bmatrix} = 0    (55)
-
where R = XX^H and X^H is the (augmented) data matrix of (54). In a time-varying environment, we delete k rows from and append k rows to the data matrix X^H in (54), as done in the adaptive PE method discussed above, leading to the following rank-2k modification of equation (55): [0130]

\hat{R} \begin{bmatrix} \hat{c} \\ -1 \end{bmatrix} = 0    (56)
-
where [0131]

\hat{R} = R + \sum_{i=1}^{k} \alpha_i \alpha_i^H - \sum_{i=1}^{k} \beta_i \beta_i^H
-
and {α_i}, {β_i} are the appended and deleted rows, respectively. The TLS solution is obtained as the null-vector solution of R̂. In general, for M sinusoids or M target sources, there are N − M minimum eigenvalues and eigenvectors, and any vector in the noise subspace is a solution. Out of this infinite number of solutions, one can choose the following minimum-norm solution: [0132]

\begin{bmatrix} \hat{c} \\ -1 \end{bmatrix} = -\frac{N N^H e}{e^H N N^H e}    (57)

where N = [q_{M+1} … q_N] and e = [0 … 0 1]^H selects the last component.
-
Thus, in the TLS solution, it is the noise subspace N = [q_{M+1} … q_N] that must be monitored. Because the signal eigenvalues and eigenvectors are also used for updating the noise eigensystem, the reduction in computational complexity achieved in the adaptive PE method is not obtained here. [0133]
-
(C) Adaptive MUSIC [0134]
-
In this subsection, the recursive version of a class of high-resolution algorithms is considered for multiple-target angle estimation or frequency estimation based on the eigenvalue decomposition of the ensemble-averaged covariance matrix of the received signal. Consider a system of K moving targets to be tracked by an array of M sensors. The sensors are linearly distributed, with each sensor separated by a distance d from the adjacent sensor. For a narrowband signal, the model of the output of the m-th sensor becomes [0135]

r_m(t) = \sum_{k=1}^{K} A_k(t)\, e^{j 2\pi (d/\lambda)(m-1) T_k(t)} + N_m(t)    (58)
-
where A_k(t) is the complex amplitude of the k-th target at time t, T_k(t) = sin{θ_k(t)}, where θ_k(t) is the angle of arrival of the k-th target at time t, and N_m(t) is the m-th sensor noise. Using vector notation, equation (58) can be written as [0136]
-
r(t) = A(t)\, s(t) + N(t)    (59)
-
where [0137]

r(t) = [r_1(t) \ldots r_M(t)]^T, \quad s(t) = [A_1(t) \ldots A_K(t)]^T, \quad N(t) = [N_1(t) \ldots N_M(t)]^T
-
and the M×K direction-of-arrival (DOA) matrix A(t) is defined as [0138]

A(t) = [a(\theta_1(t)) \; \ldots \; a(\theta_K(t))], \qquad a(\theta) = \left[1,\; e^{j 2\pi (d/\lambda)\sin\theta},\; \ldots,\; e^{j 2\pi (d/\lambda)(M-1)\sin\theta}\right]^T
-
The output covariance matrix can then be expressed as follows: [0139]
-
R(t) = A(t) S(t) A^H(t) + \sigma^2(t)\, I    (60)
-
where S(t) = E[s(t)s^H(t)] is the signal covariance matrix, and σ²(t) is the noise power. Assuming that K < M, the MUSIC algorithm applied at time t yields an estimate of the number of targets K, their DOAs {θ_k(t)}, the signal covariance matrix S(t), and the noise power σ²(t), by examining the eigenstructure of the output covariance matrix R(t). R(t) can be estimated from an ensemble of outer products of snapshots in a sliding window or in an exponential forgetting window, as discussed in Section I. [0140]
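For concreteness, a sketch of generating one snapshot according to the model of equations (58)-(59) follows, assuming a uniform linear array with half-wavelength spacing and circular complex Gaussian noise; all names and defaults are illustrative.

```python
import numpy as np

def array_snapshot(thetas_deg, amps, M, noise_std=0.1, d_over_lambda=0.5,
                   rng=np.random.default_rng()):
    """One snapshot r(t) = A(t) s(t) + N(t) of equation (59)."""
    thetas = np.deg2rad(np.asarray(thetas_deg))
    m = np.arange(M)[:, None]                       # sensor index 0..M-1
    A = np.exp(2j * np.pi * d_over_lambda * m * np.sin(thetas)[None, :])
    noise = noise_std / np.sqrt(2.0) * (rng.standard_normal(M)
                                        + 1j * rng.standard_normal(M))
    return A @ np.asarray(amps) + noise
```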
-
The MUSIC algorithm and its root-finding variations are briefly reviewed here. Suppose that at time t the estimated covariance matrix has the following EVD: [0141]

R(t) = \sum_{i=1}^{M} \lambda_i q_i q_i^H, \qquad \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_M    (61)
-
The algorithm depends on the fact that the noise subspace E_N = [q_{K+1} … q_M] is orthogonal to the signal manifold, i.e., [0142]

E_N^H u(\theta_k) = 0, \quad k = 1, \ldots, K    (62)
-
where u(θ) is the steering vector of the angles to be searched. The conventional MUSIC algorithm involves searching for the peaks of the following eigenspectrum: [0143]

J(\theta) = \frac{1}{u^H(\theta)\, E_N E_N^H\, u(\theta)}    (63)
-
To do this, the complete angular interval −π/2 ≤ θ ≤ π/2 is scanned. [0144] One can avoid the need for this one-dimensional scanning by using a root-finding approach. This can be accomplished, for example, by using a known root-MUSIC or minimum-norm algorithm. [0145]
-
In the root-MUSIC algorithm, the quantity e^{j2π(d/λ)sin θ} is replaced by the complex variable z in the eigenspectrum J(θ) defined in (63). [0146] Let D(z) denote the resulting denominator polynomial. The polynomial D(z) can be expressed as the product of two polynomials, H(z) and H(z⁻¹), each with real coefficients. The first polynomial, H(z), has its zeros inside or on the unit circle; K of them will be on (or very close to) the unit circle and represent the signal zeros, while the remaining ones are extraneous zeros. The zeros of the other polynomial, H(z⁻¹), lie on or outside the unit circle, exhibiting inverse symmetry with respect to the zeros of H(z). The angle estimation is thus performed by extracting the zeros of the polynomial D(z) and identifying the signal zeros from the knowledge that they should lie on the unit circle. [0147]
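A hedged sketch of the root-MUSIC variant just described follows; building D(z) by summing the diagonals of the noise-subspace projector is standard practice, but the clipping and root-selection details are implementation assumptions.

```python
import numpy as np

def root_music_doas(R, num_sources, d_over_lambda=0.5):
    """Root-MUSIC: root the polynomial D(z) built from the noise
    subspace and keep the roots just inside the unit circle."""
    M = R.shape[0]
    _, vecs = np.linalg.eigh(R)                     # eigenvalues ascending
    En = vecs[:, :M - num_sources]                  # noise subspace
    C = En @ En.conj().T                            # noise-subspace projector
    # Coefficient of z^k in D(z) is the sum of the k-th diagonal of C.
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    zeros = np.roots(coeffs)
    zeros = zeros[np.abs(zeros) < 1.0]              # zeros of H(z), inside |z| = 1
    signal = zeros[np.argsort(1.0 - np.abs(zeros))[:num_sources]]
    u = np.clip(np.angle(signal) / (2.0 * np.pi * d_over_lambda), -1.0, 1.0)
    return np.rad2deg(np.arcsin(u))                 # estimated arrival angles
```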
-
The minimum norm algorithm is derived by linearly combining the noise eigenvectors such that: [0148]
-
1. The first element of the resulting noise eigenvector is unity. [0149]
-
2. The resulting noise eigenvector lies in the noise subspace. [0150]
-
3. The resulting vector norm is minimum. [0151]
-
Equation (62) is then modified to [0152]

A(z) = \sum_{m=0}^{M-1} d_m z^{-m}, \qquad d = \frac{E_N E_N^H \delta_1}{\delta_1^H E_N E_N^H \delta_1}    (64)
-
where δ_1^H = [1 0 … 0]. The angle-estimation problem is then solved by computing the zeros of the resulting polynomial of equation (64) and identifying the signal zeros as the K zeros of A(z) that lie on (or very close to) the unit circle. [0153]
-
IV. Simulation Results [0154]
-
(A) Numerical Properties [0155]
-
In this section, the numerical performance of the discussed algorithms, as demonstrated by simulation, is considered. The simulations are performed with a time-varying signal in additive white noise. Consider the measurement model of equation (58) for a scenario in which three sources (K = 3) impinge on a linear array of 10 sensors (M = 10). The signal-to-noise ratio for each source is 20 dB. The angles are given by θ_1(t) = 5°, θ_2(t) = 22°, and θ_3(t) = 12°. [0156]
-
In each experiment, 100 updates are carried out. Each EVD update is obtained by updating the covariance matrix derived from the data snapshots within a window of length [0157] 41.
-
Because recursive procedures may suffer error accumulation from one update to the next, the sensitivity of the exemplary algorithm was investigated as a function of the order of the matrix and of the accuracy requirement for the eigenvalue search. Stability and sensitivity tests were conducted for this algorithm, and the performance of the recursive algorithms was compared against conventional measures. [0158]
-
The angle estimates for various matrix sizes (M = 7, 10) and eigenvalue-search accuracy requirements (tolerances of 10⁻¹⁰ and 10⁻⁵) were compared with the estimates obtained from a conventional routine. It was observed that the performance is a function only of the size of the matrix used; it does not depend on whether a conventional eigenvalue decomposition routine or the present recursive procedure is used, nor on the accuracy requirement of the eigenvalue search. In fact, the results are practically the same in each case. [0159]
-
Although the invention has been described in terms of exemplary embodiments, it is not limited thereto. Rather, the appended claims should be construed broadly, to include other variants and embodiments of the invention, which may be made by those skilled in the art without departing from the scope and range of equivalents of the invention. [0160]