Article

3D Target Localization of Modified 3D MUSIC for a Triple-Channel K-Band Radar

Ying-Chun Li, Byunggil Choi, Jong-Wha Chong and Daegun Oh
1 Department of Electronic Engineering, Hanyang University, Seoul 04763, Korea
2 Collaborative Robots Research Center, Daegu Gyeongbuk Institute of Science and Technology, Daegu 42988, Korea
* Author to whom correspondence should be addressed.
Sensors 2018, 18(5), 1634; https://doi.org/10.3390/s18051634
Submission received: 17 April 2018 / Revised: 18 May 2018 / Accepted: 18 May 2018 / Published: 20 May 2018
(This article belongs to the Section Physical Sensors)

Abstract

In this paper, a modified 3D multiple signal classification (MUSIC) algorithm is proposed for joint estimation of the range, azimuth, and elevation angles of a K-band radar with a small 2 × 2 horn antenna array. Three channels of the 2 × 2 horn antenna array are utilized as receiving channels, and the remaining one is the transmitting antenna. The proposed modified 3D MUSIC makes use of a stacked autocorrelation matrix whose element matrices are related to each other in the spatial domain. An augmented 2D steering vector based on the stacked autocorrelation matrix is proposed for the modified 3D MUSIC, instead of the conventional 3D steering vector. The effectiveness of the proposed modified 3D MUSIC is verified by implementing it on a K-band frequency-modulated continuous-wave (FMCW) radar with the 2 × 2 horn antenna array and through a variety of experiments in a chamber.

1. Introduction

In recent years, much research has addressed localization by estimating the directions-of-arrival (DOA) (i.e., azimuth and elevation angles) and ranges of multiple targets in applications such as radar, sonar, and wireless communications [1,2,3]. Many methods based on a one-dimensional (1D) uniform linear array (ULA) have been proposed for azimuth angle estimation [4,5,6,7] or joint range and azimuth angle estimation [8,9], such as the well-known multiple signal classification (MUSIC) algorithm [4], the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm [5], and the generalized MUSIC and ESPRIT [6,7]. Two-dimensional (2D) DOA estimation of azimuth and elevation angles [10,11,12] has further been developed from the conventional 1D estimation methods by using 2D arrays. Among the conventional works [8,9,10,11,12] on 2D estimation, in [9] a frequency-modulated continuous-wave (FMCW) radar system was implemented with the simplest form of ULA (only two receiving channels), and a dual smoothing algorithm was proposed for joint range and azimuth angle estimation. However, the 1D structure of the two receiving channels implemented in [9] cannot provide three-dimensional (3D) localization of targets, because a 1D array cannot resolve the two angular dimensions, azimuth and elevation. 3D localization in realistic environments is of greater interest and will drive further development of localization technology. To the best of our knowledge, there has been no research on a 3D MUSIC method implemented in conjunction with a K-band FMCW radar system. In [8], a 2D MUSIC algorithm for joint range and azimuth angle estimation was developed for an FMCW radar system, and its performance was analyzed using simulated radar data. In [10], an ESPRIT-like method was proposed for 2D DOA estimation of coherent signals with a rectangular array and verified only by simulation. In [11], 2D DOAs were estimated using 2D DFT-ESPRIT algorithms with a cylindrical conformal array. In [12], a recursive procedure based on an extended Kalman filter was proposed for 2D DOA estimation with a 3D antenna array composed of three ULAs. Although the effectiveness of the algorithms in [8,10,11,12] has been demonstrated through simulations, they have not been verified by implementation in a system and corresponding experiments. In this work, by contrast, the proposed modified 3D MUSIC method was implemented with a 2 × 2 horn antenna array, and its feasibility was verified through experiments with the implemented K-band radar system.
In this paper, we extend the antenna array structure from the 1D array in [9] to a 2D array to perform joint 3D estimation of range, azimuth angle, and elevation angle. The implemented 2D antenna array has the simplest planar geometry and is constructed as one miniaturized 2 × 2 horn antenna array composed of four small horn antennas, one serving as the transmitting antenna and the other three as receiving antennas. The design of a K-band FMCW radar equipped with the constructed 2 × 2 horn antenna array is also presented.
The conventional 3D MUSIC algorithm requires a 3D steering vector to calculate the 3D pseudo-spectrum, and the 3D steering vector is constructed by the Kronecker product of three 1D steering vectors. In our modified 3D MUSIC algorithm, an augmented 2D steering vector is proposed for the 3D MUSIC spectrum calculation. The proposed augmented 2D steering vector is obtained by connecting two 2D steering vectors in a specific way, which is related to how the stacked autocorrelation matrix is composed. The augmented 2D steering vector leads to lower computational complexity in the 3D pseudo-spectrum calculation than the conventional 3D steering vector.
Several experiments were conducted in a chamber, and the results verified that the proposed modified 3D MUSIC algorithm, implemented on the radar system with the 2 × 2 horn antenna array, achieves good performance.

2. System Model

As depicted in Figure 1a, we implemented a 2 × 2 horn antenna array in which the array element at the upper-right corner is the transmitting antenna and the other three elements form a triple-channel receiving antenna array. We consider the 2 × 2 horn antenna array in the x–z plane as shown in Figure 1b, and the receiving antenna elements at the origin, on the z-axis, and on the x-axis are numbered l = 0, 1, and 2, respectively. Thus, one ULA containing two elements is located on the x-axis, and the other ULA containing two elements is located on the z-axis. The array element spacing is d = λ (λ denotes the wavelength). Further specifications of the implemented antenna are introduced in Section 4.
In this paper, the triple-channel receiving antenna array processes narrowband plane waves incident on the sensor elements of the array. There are two types of system models: the vector wave model and the scalar wave model. In [13], the MUSIC method was handled with the vector wave model based on an electromagnetic field scattering model. In [8,9,14,15,16,17], scalar models of the FMCW signals for estimating range and angles have been suggested. Since our proposed method is developed with FMCW signals for joint estimation of range, azimuth, and elevation, the mathematical model in the Equations (1)–(6) comes from the mathematical models suggested for FMCW radar signals in [8,9,14,15,16,17]. The relationship between the vector wave model and the scalar wave model was explained in [18] (the relationship between the Equations (7) and (20) therein).
We assume that the received signals from K targets impinge on the triple-channel receiving antenna array, carrying the information {ϕk, θk, τk}, k = 0, 1, …, K − 1, where ϕk, θk, and τk are the azimuth angle, elevation angle, and time delay of the k-th target, respectively. We define xl(t) as the received signal of the l-th antenna element, and the signal representation of the two ULAs along the x-axis and z-axis can be extended from [17] as
$$\mathbf{x}(t) = [x_0(t)\;\; x_1(t)\;\; x_2(t)]^T = \mathbf{A}\mathbf{s}(t) + \mathbf{n}(t),$$
where
$$\mathbf{A} = [\mathbf{a}_0\;\; \mathbf{a}_1\;\; \cdots\;\; \mathbf{a}_{K-1}], \quad \mathbf{a}_k = [1\;\; p_k\;\; q_k]^T, \quad p_k = \exp\!\left(j\frac{2\pi d\cos\theta_k}{\lambda}\right), \quad q_k = \exp\!\left(j\frac{2\pi d\cos\theta_k\sin\phi_k}{\lambda}\right), \quad \mathbf{s}(t) = [s(t-\tau_0)\;\; s(t-\tau_1)\;\; \cdots\;\; s(t-\tau_{K-1})]^T.$$
In the Equation (1), A denotes the array manifold matrix, s(t) denotes the vector composed of the received signals of the K targets, and n(t) is the additive white Gaussian noise (AWGN) vector whose elements have zero mean and variance σn². We define two electrical angles of the k-th target as αk = −2πdcosθk/λ and βk = −2πdcosθksinϕk/λ, and the array factor vector of the k-th source, ak, is given in the Equation (2). Hence, the array manifold matrix A is defined as A = [a0 a1 … aK−1]. In our implemented system, the components s(t − τk) of the received signal vector s(t) are delayed FMCW chirp signals, defined by
$$s(t) = \begin{cases} \exp\!\left[j\left(f_c t + \frac{\mu}{2}t^2\right)\right] & \text{for } 0 \le t < T_{sym} \\ 0 & \text{elsewhere,} \end{cases}$$
where fc denotes the carrier frequency, μ is the rate of change of the instantaneous frequency of a chirp signal, and Tsym is the duration of the FMCW chirp pulse. Based on the Equations (2) and (3), xl(t) in the Equation (1) can be rewritten as
$$x_l(t) = \sum_{k=1}^{K} a_k(l)\, s(t-\tau_k) + n_l(t) \quad \text{for } l = 0, 1, 2,$$
where ak(l) denotes the l-th element of the vector ak in the Equation (2). The received signal in the Equation (4) is converted by a mixer into a beat signal with a sinusoidal model as
$$y_l(t) = \sum_{k=1}^{K} a_k(l)\, y(t-\tau_k) + \tilde{n}_l(t),$$
where
$$y(t-\tau_k) = \exp\!\left(j\left(\mu\tau_k t - \frac{\mu}{2}\tau_k^2 + f_c\tau_k\right)\right),$$
and ñl(t) denotes the transformed AWGN for the l-th antenna. Here, y(t − τk) is the mathematical model for the beat signal, whose frequency is proportional to τk, as in [14]. Thus, the signal model can be rewritten as
$$\mathbf{Y} = \mathbf{A}\,[y(t-\tau_0)\;\; y(t-\tau_1)\;\; \cdots\;\; y(t-\tau_{K-1})]^T + \tilde{\mathbf{n}}(t) = \mathbf{A}\mathbf{y}(t) + \tilde{\mathbf{n}}(t).$$
In (7), ñ(t) denotes the transformed AWGN vector composed of ñl(t). The resultant beat signal in (7) is converted with sampling frequency fs = 1/Ts into sequences of samples yl[n] = yl(nTs) for n = 0, …, N − 1, where N = Tsym/Ts.
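To make the signal model concrete, the following short Python sketch simulates the triple-channel beat signals of the Equations (1)–(7) for a few point targets. It is an illustration only, not the authors' processing code; the 2π phase convention, the noise level, and the target positions are assumptions, while the bandwidth, sweep time, sampling rate, and element spacing d = λ are taken from Section 2 and Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Radar parameters (Table 2) and assumed constants
c, fc = 3e8, 24.125e9          # speed of light, assumed mid-band carrier [Hz]
Bw, Tsym = 200e6, 100e-6       # sweep bandwidth [Hz], chirp duration [s]
mu = Bw / Tsym                 # chirp slope [Hz/s]
fs = 12.5e6                    # ADC sampling rate [Hz]
Ts = 1 / fs
N = int(round(Tsym * fs))      # samples per chirp
lam = c / fc
d = lam                        # element spacing d = lambda (Section 2)

# Assumed targets: (range [m], azimuth phi, elevation theta)
targets = [(6.2, np.deg2rad(-11.0), np.deg2rad(87.0)),
           (6.0, np.deg2rad(4.0), np.deg2rad(87.0))]

t = np.arange(N) * Ts
Y = np.zeros((3, N), dtype=complex)            # channels l = 0, 1, 2
for R, phi, theta in targets:
    tau = 2 * R / c
    # Beat signal of the Equation (6); the 2*pi factor is an assumed phase convention
    beat = np.exp(1j * 2 * np.pi * (mu * tau * t - 0.5 * mu * tau**2 + fc * tau))
    p = np.exp(1j * 2 * np.pi * d * np.cos(theta) / lam)                # z-axis element
    q = np.exp(1j * 2 * np.pi * d * np.cos(theta) * np.sin(phi) / lam)  # x-axis element
    a = np.array([1.0, p, q])                                           # Equation (2)
    Y += np.outer(a, beat)
Y += 0.05 * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape))  # AWGN
print(Y.shape)  # (3, 1250): one sampled beat sequence per receiving channel
```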

3. Proposed Algorithm

Since the proposed method is developed for joint estimation of the elevation angle, azimuth angle, and range for an FMCW radar with a triple-channel receiving array, we propose a stacked autocorrelation matrix, instead of a stacked Hankel matrix, for 3D pseudo-spectrum estimation. Prior to explaining the spatially stacked autocorrelation matrix, the temporally averaged autocorrelation matrix and its factorization for one antenna are addressed with a mathematical model for noiseless data. Then, the matrix factorization is extended to the spatially stacked autocorrelation matrix. In the presence of noise, singular-value decomposition (SVD) of the stacked autocorrelation matrix and the proposed 3D pseudo-spectrum estimation are developed.

3.1. Temporal Autocorrelation Matrix

The temporal autocorrelation matrix for the l-th element of the triple-channel receiving array can be defined based on the sampled sequences as in [19,20] by
$$\mathbf{R}_l = \sum_{n=0}^{N-L_r} \mathbf{y}_{l,n}\mathbf{y}_{l,n}^H,$$
where yl,n = [yl[n], yl[n + 1], …, yl[n + Lr − 1]]T, [•]T denotes the transpose of a vector or a matrix contained within, and Lr is the selection parameter that satisfies Lr > K.
Assuming no AWGN, we provide a factorization model for the matrix Rl with a transformation matrix T1 such that
$$\mathbf{R}_l = \mathbf{P}\mathbf{H}_l\mathbf{Q}\mathbf{P}^H\mathbf{T}_1,$$
where
$$\mathbf{P} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \zeta_0 & \zeta_1 & \cdots & \zeta_{K-1} \\ \vdots & \vdots & \ddots & \vdots \\ \zeta_0^{L_r-1} & \zeta_1^{L_r-1} & \cdots & \zeta_{K-1}^{L_r-1} \end{bmatrix} \in \mathbb{C}^{L_r \times K}, \quad \text{and} \quad \zeta_k = \exp(j\mu\tau_k T_s),$$
$$\mathbf{Q} = \mathrm{diag}[\rho_0, \rho_1, \ldots, \rho_{K-1}] \quad \text{and} \quad \rho_k = \exp\!\left(j\left(f_c\tau_k - \frac{\mu}{2}\tau_k^2\right)\right),$$
$$\mathbf{H}_l = \mathrm{diag}[a_0(l), a_1(l), \ldots, a_{K-1}(l)].$$
Here, diag[•] denotes the diagonal matrix with the elements contained within the main diagonal.
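As a concrete illustration of the Equation (8), a minimal Python sketch of the temporal autocorrelation matrix is given below. It is not the authors' code; the window length Lr and the synthetic single-tone beat sequence used in the usage example are assumptions for demonstration only.

```python
import numpy as np

def temporal_autocorrelation(y, Lr):
    """R_l of the Equation (8): sum of outer products of length-Lr sliding
    windows y_{l,n} of one channel's sampled beat sequence y."""
    N = len(y)
    R = np.zeros((Lr, Lr), dtype=complex)
    for n in range(N - Lr + 1):                 # n = 0, ..., N - Lr
        w = y[n:n + Lr].reshape(-1, 1)          # column vector y_{l,n}
        R += w @ w.conj().T
    return R

# Usage with an assumed single-tone beat sequence (normalized frequency 0.07)
y0 = np.exp(1j * 2 * np.pi * 0.07 * np.arange(256))
R0 = temporal_autocorrelation(y0, Lr=16)
print(R0.shape)  # (16, 16)
```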

3.2. Spatially Stacked Autocorrelation Matrix

A spatially stacked form of the autocorrelation matrix in the Equation (8) is expressed as
$$\mathbf{R} = \begin{bmatrix} \mathbf{R}_0 \\ \mathbf{R}_1 \\ \mathbf{R}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{P}\mathbf{H}_0\mathbf{Q}\mathbf{P}^H\mathbf{T}_1 \\ \mathbf{P}\mathbf{H}_1\mathbf{Q}\mathbf{P}^H\mathbf{T}_1 \\ \mathbf{P}\mathbf{H}_2\mathbf{Q}\mathbf{P}^H\mathbf{T}_1 \end{bmatrix} = \mathbf{A}\mathbf{Q}\mathbf{P}^H\mathbf{T}_1,$$
where
$$\mathbf{A} = \begin{bmatrix} \mathbf{P}\mathbf{H}_0 \\ \mathbf{P}\mathbf{H}_1 \\ \mathbf{P}\mathbf{H}_2 \end{bmatrix} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \zeta_0 & \zeta_1 & \cdots & \zeta_{K-1} \\ \vdots & \vdots & \ddots & \vdots \\ \zeta_0^{L_r-1} & \zeta_1^{L_r-1} & \cdots & \zeta_{K-1}^{L_r-1} \\ p_0 & p_1 & \cdots & p_{K-1} \\ p_0\zeta_0 & p_1\zeta_1 & \cdots & p_{K-1}\zeta_{K-1} \\ \vdots & \vdots & \ddots & \vdots \\ p_0\zeta_0^{L_r-1} & p_1\zeta_1^{L_r-1} & \cdots & p_{K-1}\zeta_{K-1}^{L_r-1} \\ q_0 & q_1 & \cdots & q_{K-1} \\ q_0\zeta_0 & q_1\zeta_1 & \cdots & q_{K-1}\zeta_{K-1} \\ \vdots & \vdots & \ddots & \vdots \\ q_0\zeta_0^{L_r-1} & q_1\zeta_1^{L_r-1} & \cdots & q_{K-1}\zeta_{K-1}^{L_r-1} \end{bmatrix}.$$
The phase shifts ζk in (10), and pk and qk in (2), are induced by the time delay, by the elevation angle, and by the elevation and azimuth angles together, respectively. The 3D shift-invariant structure is shown in Figure 2.
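A sketch of how the spatially stacked matrix of the Equation (13) could be assembled from the three per-channel matrices follows; it reuses the temporal autocorrelation of the previous sketch, and the identical tone on all three channels in the usage example is an assumption used only to illustrate the dimensions.

```python
import numpy as np

def temporal_autocorrelation(y, Lr):
    """Per-channel matrix R_l of the Equation (8)."""
    R = np.zeros((Lr, Lr), dtype=complex)
    for n in range(len(y) - Lr + 1):
        w = y[n:n + Lr].reshape(-1, 1)
        R += w @ w.conj().T
    return R

def stacked_autocorrelation(Y, Lr):
    """Spatially stacked matrix R = [R_0; R_1; R_2] of the Equation (13),
    built from the triple-channel beat data Y of shape (3, N)."""
    return np.vstack([temporal_autocorrelation(y, Lr) for y in Y])

# Usage with an assumed tone repeated on all three channels (shape demo only)
Y = np.exp(1j * 2 * np.pi * 0.07 * np.arange(256))[None, :] * np.ones((3, 1))
R = stacked_autocorrelation(Y, Lr=16)
print(R.shape)  # (48, 16), i.e. (3 * Lr, Lr)
```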

3.3. SVD and Noise Subspace

In the presence of noise, the spatially stacked autocorrelation matrix R can be factorized, as in [21,22,23], in the subspace domain by SVD:
$$\mathbf{R} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^H,$$
where
$$\mathbf{U} = [\mathbf{U}_s\;\; \mathbf{U}_n], \quad \boldsymbol{\Sigma} = \begin{bmatrix} \boldsymbol{\Sigma}_s & \\ & \boldsymbol{\Sigma}_n \end{bmatrix}, \quad \text{and} \quad \mathbf{V} = [\mathbf{V}_s\;\; \mathbf{V}_n].$$
Here, the submatrix Us = [u1 … uK] contains the K eigenvectors that span the signal subspace of the matrix R, and the submatrix Un = [uK+1 … uLr] contains the Lr − K eigenvectors spanning the noise subspace of the matrix R. The values in the diagonal matrix Σs = diag[δ0, δ1, …, δK−1] represent the eigenvalues of the K-dimensional signal subspace, and the values in the matrix Σn = diag[δK, δK+1, …, δLr−1], with δK = δK+1 = … = δLr−1 = σn², denote the noise variance, i.e., the eigenvalues of the noise subspace of the matrix R.
Since the signal subspace spanning matrix Us contains the first K eigenvectors of the matrix U as described previously, exact knowledge of the number of signals K is required in order to separate the signal and noise subspaces. The number of signals can be estimated from the singular values. The criterion of the minimum description length (MDL) [24,25] is adopted in this paper for classification of the signal and noise subspaces. We assume that the estimated signal subspace is correctly separated from the noise subspace, and the estimated number of targets is K̂:
$$\hat{K} = \arg\min_{m \in \{0, 1, \ldots, L_r-1\}} \mathrm{MDL}(m),$$
where
$$\mathrm{MDL}(m) = -N(L_r - m)\log f(m) + \frac{1}{2}m(2L_r - m)\log N \quad \text{and} \quad f(m) = \frac{\left(\prod_{i=m}^{L_r-1}\delta_i\right)^{\frac{1}{L_r-m}}}{\frac{1}{L_r-m}\sum_{i=m}^{L_r-1}\delta_i}.$$
Once the number of signals K̂ is obtained by the Equation (17), the signal subspace and noise subspace can be defined by
$$\mathbf{R} = \underbrace{\mathbf{U}_s\boldsymbol{\Sigma}_s\mathbf{V}_s^H}_{\text{signal subspace}} + \underbrace{\mathbf{U}_n\boldsymbol{\Sigma}_n\mathbf{V}_n^H}_{\text{noise subspace}}.$$
The matrix A in the Equation (14) and the matrix Us are related by
$$\mathbf{U}_s = \mathbf{A}\mathbf{T}_2,$$
where T2 is a K̂ × K̂ non-singular transformation matrix, as in [26]. The matrix Us of the Equation (19) is the estimated signal subspace, and we assume the signal and noise subspaces are separated correctly. Since the steering matrix A of the Equation (14) shares the same signal subspace with the matrix Us, the steering matrix A can be related to the estimated matrix Us through the full-rank transformation matrix T2. If the signal and noise subspaces are separated incorrectly, this transformation between the matrix A and the matrix Us is not available.
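A minimal sketch of the subspace separation of this subsection is given below: the SVD of the stacked matrix and an MDL-based estimate of the number of targets as in the Equation (17). It follows the standard criterion of Wax and Kailath [24], which is assumed here to be the form used; N is the number of temporal snapshots, and the small epsilon is a numerical safeguard of this sketch.

```python
import numpy as np

def subspace_split_mdl(R, N):
    """SVD of the stacked matrix R, MDL estimate of the number of targets
    (the Equation (17)), and the resulting noise-subspace basis U_n."""
    U, s, _ = np.linalg.svd(R, full_matrices=False)   # s: L_r singular values
    Lr = len(s)
    mdl = np.empty(Lr)
    for m in range(Lr):
        tail = s[m:] + 1e-12                          # epsilon avoids log(0)
        geo = np.exp(np.mean(np.log(tail)))           # geometric mean
        ari = np.mean(tail)                           # arithmetic mean
        mdl[m] = -N * (Lr - m) * np.log(geo / ari) + 0.5 * m * (2 * Lr - m) * np.log(N)
    K_hat = int(np.argmin(mdl))
    Un = U[:, K_hat:]                                 # noise subspace (3*L_r rows)
    return K_hat, Un
```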

3.4. Modified 3D Steering Vector

Since the signal subspace and noise subspace are orthogonal, we propose a modified 3D MUSIC pseudo-spectrum estimation for the azimuth angle, elevation angle, and time delay. Three pseudo-spectrum steering vectors are defined: sz for time-delay estimation, sw for estimation of the electrical angle αk, and sg for estimation of the electrical angle βk, such that
$$\mathbf{s}_z = \left[1\;\; \exp\!\left(j\frac{2\pi}{Z}z_k\right)\;\; \cdots\;\; \exp\!\left(j\frac{2\pi}{Z}(L_z-1)z_k\right)\right]_{1\times L_z}, \quad \mathbf{s}_w = \left[1\;\; \exp\!\left(j\frac{2\pi}{W}w_k\right)\right]_{1\times 2}, \quad \mathbf{s}_g = \left[1\;\; \exp\!\left(j\frac{2\pi}{G}g_k\right)\right]_{1\times 2},$$
respectively, where zk, wk, and gk denote the estimates for ξk, αk, and βk, respectively, and Lz is a selection parameter. In the 2D MUSIC algorithms [27,28,29] for joint elevation and azimuth angle estimation or joint range and azimuth angle estimation, the Kronecker product between the steering vectors of the two electrical angles is organized as sw ⊗ sg. If the Kronecker product is directly extended to the 3D steering vector sz ⊗ sw ⊗ sg, the stacked autocorrelation matrix must be defined as
$$\mathbf{R} = \begin{bmatrix} \mathbf{R}_0 \\ \mathbf{R}_1 \\ \mathbf{R}_0 \\ \mathbf{R}_2 \end{bmatrix}.$$
Comparing (21) to our definition in the Equation (13), R0 is used only once in the Equation (13), so a smaller matrix size is obtained. We therefore propose an augmented 2D steering vector corresponding to the proposed stacked structure in the Equation (13). We first obtain two 2D steering vectors by performing the Kronecker product between sz and sw, and between sz and sg, respectively, such that
$$\mathbf{s}_{zw} = \mathbf{s}_z \otimes \mathbf{s}_w \quad \text{and} \quad \mathbf{s}_{zg} = \mathbf{s}_z \otimes \mathbf{s}_g.$$
In the Equation (22), we obtain two steering vectors szw and szg. As shown in Figure 1, the implemented triple-channel receiving antenna array is composed of two ULAs, and the two ULAs share the same antenna element at the origin. Since the two ULAs are used for the elevation and azimuth angles, respectively, the received signal from the antenna element at the origin is used twice. Thus, the two steering vectors szw and szg in the Equation (22) share common elements, namely, the second half of szw is the same as the first half of szg. Herein, we choose to delete the first half of szg and connect szw with the second half of szg, and the augmented 2D steering vector for the modified 3D MUSIC algorithm is obtained by
$$\mathbf{s}_{zwg} = \left[\mathbf{s}_{zw}\;\; (\text{the second half of } \mathbf{s}_{zg})\right].$$
The 3D pseudo-spectrum can be obtained through the augmented 2D steering vector szwg and the noise subspace spanning matrix Un in the Equation (16), such that
$$\mathrm{Pseudo}(z, w, g) = \frac{1}{\mathbf{s}_{zwg}^H \mathbf{U}_n \mathbf{U}_n^H \mathbf{s}_{zwg}}.$$
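The pseudo-spectrum value at one grid point can be evaluated with the sketch below. Rather than forming szw and szg separately, the augmented vector is assembled directly in the order [sz, w·sz, g·sz] that matches the stacking [R0; R1; R2] of the Equation (13), which is equivalent to concatenating szw with the non-duplicated half of szg. The normalized grid indices, and the assumption Lz = Lr so that the vector length matches the rows of Un, are choices of this sketch rather than the paper's exact settings.

```python
import numpy as np

def modified_3d_music_spectrum(Un, z, w, g, Lz, Z, W, G):
    """Modified 3D MUSIC pseudo-spectrum at one grid point (z, w, g) using
    the augmented 2D steering vector (a sketch, not the authors' code)."""
    sz = np.exp(1j * 2 * np.pi * np.arange(Lz) * z / Z)   # time-delay part
    pw = np.exp(1j * 2 * np.pi * w / W)                   # elevation-induced phase
    pg = np.exp(1j * 2 * np.pi * g / G)                   # azimuth-induced phase
    s = np.concatenate([sz, pw * sz, pg * sz])            # length 3*Lz, matches [R0; R1; R2]
    proj = Un.conj().T @ s
    return 1.0 / np.real(np.vdot(proj, proj))
```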

3.5. Transformation

Employing a peak detection method, the K̂ peaks can be detected by searching the 3D pseudo-spectrum, and the three estimated indexes {zk, wk, gk}, k = 0, …, K̂ − 1, at which the K̂ peaks are found are given by
$$\{z_k, w_k, g_k\} = \left\{\max_k\left[\mathrm{Pseudo}(z, w, g)\right]\right\}_{k=0}^{\hat{K}-1},$$
where maxk[•] denotes the k-th largest value of the elements contained within. Note that parameter matching among the range, azimuth, and elevation sets is avoided. With the three indexes of the 3D pseudo-spectrum estimated, the estimates of the elevation angle, azimuth angle, and time delay of the K̂ targets can be obtained by
$$\hat{\theta}_k = \mathrm{acos}\!\left(\frac{\lambda}{\pi d} \times \frac{w_k}{W}\right),$$
$$\hat{\phi}_k = \mathrm{asin}\!\left(\frac{\lambda}{\pi d \cos\hat{\theta}_k} \times \frac{g_k}{G}\right),$$
$$\hat{\tau}_k = \frac{c}{2\mu}\left(\frac{z_k}{Z} \times \frac{1}{T_s}\right),$$
respectively, where acos(•) denotes the inverse cosine function and asin(•) denotes the inverse sine function of the value contained within.
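The sketch below shows how the peaks of a precomputed pseudo-spectrum grid could be picked and mapped to elevation, azimuth, and the delay-related parameter with the transformation formulas above. Taking the K̂ largest cells is a stand-in for proper peak detection, and the direct index-to-parameter mapping (with clipping and no sign or wrapping handling) is a simplification assumed in this sketch.

```python
import numpy as np

def extract_targets(spec, K_hat, lam, d, mu, Ts, c=3e8):
    """Pick the K_hat largest values of a 3D pseudo-spectrum array spec[z, w, g]
    and apply the transformations of Section 3.5 (simplified sketch)."""
    Z, W, G = spec.shape
    top = np.argsort(spec, axis=None)[::-1][:K_hat]       # K_hat largest cells
    results = []
    for idx in top:
        z, w, g = np.unravel_index(idx, spec.shape)
        u = lam / (np.pi * d) * w / W
        theta = np.arccos(np.clip(u, -1.0, 1.0))          # elevation angle
        v = lam / (np.pi * d * np.cos(theta)) * g / G
        phi = np.arcsin(np.clip(v, -1.0, 1.0))            # azimuth angle
        tau = c / (2.0 * mu) * (z / Z) / Ts               # delay-related term
        results.append((tau, np.rad2deg(phi), np.rad2deg(theta)))
    return results
```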

3.6. Computational Burden Analysis

The costs of the required individual operations are summarized in Table 1. For the given data matrix R in Equation (13), the computational burden of the modified 3D MUSIC can be derived to be O(13L_r^3 + L_r^2 K + B^3 K^3 L_r^3), where B is the iteration number for the three-dimensional search. In general, B is set much larger than M and 13 to obtain a high-resolution 3D pseudo-spectrum. Hence, the derived computational complexity of the modified 3D MUSIC can be simplified to O(B^3 K^3 L_r^3). This implies that the computational burden is still expensive, since the 3D spectrum search is unavoidable.

4. System Implementation

In this section, implementation of the proposed K-band FMCW radar system with a small 2 × 2 horn antenna array is presented.

4.1. Transceiver and IF

The proposed K-band FMCW radar system operates from 24.025 GHz to 24.225 GHz, with a 200 MHz bandwidth and a 100 μs sweep period. The radar system comprises a 2 × 2 horn antenna array, an FMCW chirp generator (ADF5901 evaluation module), a four-channel receiver, an intermediate frequency (IF) amplifier, and a data-logging platform. A block diagram of the whole radar system is shown in Figure 3. The FMCW signals of the Equation (3), generated by the ADF5901 evaluation module, are fed to the receiver as the local oscillator (LO) signal and to the transmitting antenna through a power amplifier (PA). The LO signals from the ADF5901 evaluation board are fed to the mixers of the three receiving channels (one of the four channels is not used).
As depicted in Figure 4, the ADF5901 evaluation board provides the transmitted FMCW signal using the ADF5901 (a voltage-controlled oscillator (VCO) with two output channels) in conjunction with the ADF4159 (a fractional-N frequency synthesizer). Only one output channel of the ADF5901 evaluation board is used to feed the input of the PA for the transmitting antenna.
The triple-channel receiver and IF amplifier are implemented as shown in Figure 5. Each receiving channel has three low-noise amplifiers (LNAs) with −15 dBm output P1dB and one mixer, and the implemented receiving channel has a 10 dB maximum noise figure. The generated beat signals are amplified by the IF amplifier and a variable-gain amplifier (VGA), and the amplified IF signals are then processed by the proposed algorithm. One high-pass filter (HPF) is placed between the mixer and the IF amplifier; further specifications of the HPF are explained in the next subsection.
The received beat signal was sampled through a field-programmable gate array (FPGA) and a digital signal processing (DSP) board, as shown in Figure 6. The analog-to-digital converter (ADC) in the implemented radar system has a 12-bit output (72 dB dynamic range) and a 12.5 MHz sampling rate.
The parameters of the implemented K-band radar system are summarized in Table 2.

4.2. 2 × 2 Horn Antenna Array

The 2 × 2 horn antenna array is implemented as shown in Figure 7. As mentioned in Section 2, the array element (Ant. 1) at the upper-right corner is the transmitting antenna, and the other three elements (Ant. 2–4) are a triple-channel receiving antenna array. We identify the input ports as Port 1–4 for the antennas Ant. 1–4, respectively, to explain the characteristics as shown in Figure 8 and Figure 9.
Figure 8 shows the transmitting antenna return loss and the adjacent receiving antenna isolation characteristics over the range of 21 GHz to 30 GHz. The performance was measured with a network analyzer. S11 denotes the return loss of the transmitting antenna Ant. 1, and S21, S31, and S41 represent the power transferred from Port 1 to Port 2, Port 1 to Port 3, and Port 1 to Port 4, respectively. It should be noted that the isolation between the transmitting antenna and the receiving antenna array was measured as −28 dB (max.) at 24 GHz; therefore, the HPF, as shown in Figure 3 and Figure 5b, is used in the receiver to mitigate the transmitter leakage caused by the limited isolation.
Figure 9 shows the normalized radiation pattern of a single element of the implemented 2 × 2 horn antenna array; each element has a similar pattern, with an 8 dBi antenna gain and a 45° 3 dB beamwidth.

5. Experiments and Results

Experiments were conducted to test the performance of the proposed 3D MUSIC algorithm and the developed K-band FMCW radar with the 2 × 2 horn antenna array. The received data of the radar system were sampled by the FPGA and DSP board and then processed by the proposed algorithm in MATLAB 2016b. For each experiment, 100 repeated detection trials were conducted.
The experimental setup in the chamber at the Daegu Gyeongbuk Institute of Science & Technology (DGIST, Daegu, Korea) is shown in Figure 10. The implemented antenna array was mounted on a pillar at the "Radar system" location in Figure 10, and the position of Ant. 3 was taken as the origin of the Cartesian coordinates. The Cartesian coordinate system was therefore oriented as in Figure 1, with the z-axis along the pillar and the y-axis along the boresight of the antenna array. Four targets (small iron blocks with 10 cm side length) were mounted in the same horizontal plane on four different rails inside the test zone.
Four sets of experiments were conducted to detect the immobile targets in the chamber, and the locations of the targets for the four sets of experiments are listed in the form (x, y, z) in Table 3.
We conducted 100 trials for each set of experiments, and the estimated locations are presented as the azimuth-elevation angle map and range-azimuth map in Figure 11, Figure 12, Figure 13 and Figure 14, respectively.
The root mean square error (RMSE) was selected as the performance measure, and we defined the RMSE for each target as $\sqrt{\frac{1}{100}\sum_{b=1}^{100}\left[(\hat{x}_b - x)^2 + (\hat{y}_b - y)^2 + (\hat{z}_b - z)^2\right]}$, where $(\hat{x}_b, \hat{y}_b, \hat{z}_b)$ denotes the estimated position of the target located at (x, y, z) in the b-th trial. The measured RMSE values for all experiments are shown in Table 4.
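The RMSE defined above can be computed as in the following minimal sketch, where the 100 estimated positions for one target are represented by an assumed array of trial results rather than actual measurement data.

```python
import numpy as np

# Assumed stand-in for 100 trial estimates of one target at (-1.2, 6.2, -0.3) m
rng = np.random.default_rng(1)
estimates = rng.normal(loc=[-1.2, 6.2, -0.3], scale=0.1, size=(100, 3))
truth = np.array([-1.2, 6.2, -0.3])

# RMSE over the 100 trials, as defined in the text
rmse = np.sqrt(np.mean(np.sum((estimates - truth) ** 2, axis=1)))
print(f"RMSE = {rmse:.4f} m")
```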

6. Conclusions

This paper extended the previous work [9] to joint 3D estimation of range, azimuth angle, and elevation angle. An augmented 2D steering vector based on the proposed stacked autocorrelation matrix was introduced, and the 3D pseudo-spectrum of the modified 3D MUSIC algorithm is constructed with this augmented 2D steering vector. Since the augmented 2D steering vector has a smaller size than the conventional 3D steering vector, the 3D pseudo-spectrum calculation based on it has lower computational complexity than that based on the conventional 3D steering vector. At the same time, the modified 3D MUSIC algorithm avoids parameter matching among the range, azimuth, and elevation sets. Several sets of chamber experiments were conducted to verify the performance of the proposed algorithm implemented on a K-band radar system with the 2 × 2 horn antenna array. The experimental results demonstrated the effectiveness of the proposed algorithm for 3D estimation of range, azimuth, and elevation angles. Nevertheless, although the augmented 2D steering vector is utilized in the modified 3D MUSIC algorithm, the computational complexity is still high for real-time applications, since a 3D spectrum search over the obtained pseudo-spectrum is required.

Author Contributions

All authors conceived and designed the system and experiments together; Y.-C.L. and B.C. performed the experiments; Y.-C.L. analyzed the data. D.O. was the lead developer for the hardware used and contributed to the experimental work. C.-W.J. contributed analysis tools/evaluation of system.

Funding

This work was supported by the DGIST R&D Program of the Ministry of Science and ICT (18-ST-01).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Krim, H.; Viberg, M. Two decades of array signal processing research: The parametric approach. IEEE Signal Process. Mag. 1996, 13, 67–94. [Google Scholar] [CrossRef]
  2. Colone, F.; Bongioanni, C.; Lombardo, P. Multifrequency integration in FM radio-based passive bistatic radar. Part II: Direction of arrival estimation. IEEE Aerosp. Electron. Syst. Mag. 2013, 28, 40–47. [Google Scholar] [CrossRef]
  3. Kazemi, I.; Moniri, M.R.; Kandovan, R.S. Optimization of angle-of-arrival estimation via real-valued sparse representation with circular array radar. IEEE Access 2013, 1, 404–407. [Google Scholar] [CrossRef]
  4. Schmidt, R.O. Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 1986, 34, 276–280. [Google Scholar] [CrossRef]
  5. Roy, R.; Kailath, T. ESPRIT—Estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 984–995. [Google Scholar] [CrossRef]
  6. Gao, F.F.; Gershman, A.B. A generalized ESPRIT approach to direction-of-arrival estimation. IEEE Signal Process. Lett. 2005, 12, 254–257. [Google Scholar] [CrossRef]
  7. Lizzi, L.; Viani, F.; Benedetti, M.; Rocca, P.; Massa, A. The MDSO-ESPRIT method for maximum likelihood DOA estimation. Prog. Electromagn. Res. 2008, 80, 477–497. [Google Scholar] [CrossRef]
  8. Manokhin, G.O.; Erdyneev, Z.T.; Geltser, A.A.; Monastyrev, E.A. MUSIC-based algorithm for range-azimuth FMCW radar data processing without estimating number of targets. In Proceedings of the 2015 IEEE 15th Mediterranean Microwave Symposium (MMS), Lecce, Italy, 30 November–2 December 2015; pp. 1–4. [Google Scholar] [CrossRef]
  9. Oh, D.; Ju, Y.; Nam, H.; Lee, J.H. Dual smoothing DOA estimation of two-channel FMCW radar. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 904–917. [Google Scholar] [CrossRef]
  10. Chen, F.J.; Kwong, S.; Kok, C.W. ESPRIT-like two-dimensional DOA estimation for coherent signals. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 1477–1484. [Google Scholar] [CrossRef]
  11. Yang, P.; Yang, F.; Nie, Z.-P. DOA estimation with subarray divided technique and interpolated ESPRIT algorithm on a cylindrical conformal array antenna. Prog. Electromagn. Res. 2010, 103, 201–216. [Google Scholar] [CrossRef]
  12. Harabi, F.; Changuel, H.; Gharsallah, A. Direction of arrival estimation method using a 2-L shape arrays antenna. Prog. Electromagn. Res. 2007, 69, 145–160. [Google Scholar] [CrossRef]
  13. Zhong, Y.; Chen, X. MUSIC Imaging and Electromagnetic Inverse Scattering of Multiply Scattering Small Anisotropic Spheres. IEEE Trans. Antennas Propag. 2007, 55, 3542–3549. [Google Scholar] [CrossRef]
  14. Winkler, V. Range Doppler detection for automotive FMCW radars. In Proceedings of the 2007 European Microwave Conference, Munich, Germany, 9–12 October 2007; pp. 1445–1448. [Google Scholar] [CrossRef]
  15. Lee, M.S. Signal Modeling and Analysis of a Planar Phased-Array FMCW Radar with Antenna Switching. IEEE Antennas Wirel. Propag. Lett. 2011, 10, 179–182. [Google Scholar] [CrossRef]
  16. Gu, J.F.; Wang, K.; Wu, K. System Architecture and Signal Processing for Frequency-Modulated Continuous-Wave Radar Using Active Backscatter Tags. IEEE Trans. Signal Process. 2018, 66, 2258–2272. [Google Scholar] [CrossRef]
  17. Cao, R.; Liu, B.; Gao, F.; Zhang, X. A Low-Complex One-Snapshot DOA Estimation Algorithm with Massive ULA. IEEE Commun. Lett. 2017, 21, 1071–1074. [Google Scholar] [CrossRef]
  18. Fazli, R.; Nakhkash, M.; Heidari, A.A. Alleviating the Practical Restrictions for MUSIC Algorithm in Actual Microwave Imaging Systems: Experimental Assessment. IEEE Trans. Antennas Propag. 2014, 62, 3108–3118. [Google Scholar] [CrossRef]
  19. Nishimura, T.; Endo, T.; Ohgane, T.; Ogawa, Y. Parameter settings on DOA estimation of multi-band signals using a compressed sensing technique. In Proceedings of the 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Jeju, Korea, 13–16 December 2016; pp. 1–6. [Google Scholar] [CrossRef]
  20. Yoo, D.-S. A Low Complexity Subspace-based DOA Estimation Algorithm with Uniform Linear Array Correlation Matrix Subsampling. Int. J. Antennas Propag. 2015, 2015, 323545. [Google Scholar] [CrossRef]
  21. Chen, Y.; Huang, J.; He, C. High resolution direction-of-arrival estimation based on compressive sensing with novel compression matrix. In Proceedings of the 2012 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC 2012), Hong Kong, China, 12–15 August 2012; pp. 764–767. [Google Scholar] [CrossRef]
  22. Feng, G.; Liu, Z. Parallel computation of SVD for high resolution DOA estimation. In Proceedings of the 1998 IEEE International Symposium on Circuits and Systems (ISCAS ’98), Monterey, CA, USA, 31 May–3 June 1998; Volume 5, pp. 25–28. [Google Scholar] [CrossRef]
  23. Gu, J.F.; Wei, P.; Tai, H.M. Two-Dimensional DOA Estimation by Cross-Correlation Matrix Stacking. Circuits Syst. Signal Process. 2011, 30, 339–353. [Google Scholar] [CrossRef]
  24. Wax, M.; Kailath, T. Detection of Signals by Information Theoretic Criteria. IEEE Trans. Acoust. Speech Signal Process. 1985, 33, 387–392. [Google Scholar] [CrossRef]
  25. Grünwald, P.D.; Myung, J.I.; Pitt, M.A. Advances in Minimum Description Length: Theory and Applications; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  26. Lemma, A.N.; van der Veen, A.J.; Deprettere, E.F. Analysis of joint angle-frequency estimation using ESPRIT. IEEE Trans. Signal Process. 2003, 51, 1264–1283. [Google Scholar] [CrossRef]
  27. Yan, F.; Chen, Z.; Sun, M.; Shen, Y.; Jin, M. Two-Dimensional Direction-of-Arrivals Estimation Based on One-Dimensional Search Using Rank Deficiency Principle. Int. J. Antennas Propag. 2015, 2015, 127621. [Google Scholar] [CrossRef]
  28. Si, W.; Wang, Y.; Hou, C.; Wang, H. Real-Valued 2D MUSIC Algorithm Based on Modified Forward/Backward Averaging Using an Arbitrary Centrosymmetric Polarization Sensitive Array. Sensors 2017, 17, 2241. [Google Scholar] [CrossRef] [PubMed]
  29. Van Trees, H.L. Detection, Estimation, and Modulation Theory, Part IV: Optimum Array Processing; Wiley: New York, NY, USA, 2002. [Google Scholar]
Figure 1. (a) Implemented 2 × 2 planar array with horn antennas; (b) Illustration of array configuration for joint range, azimuth, and elevation angle estimation.
Figure 2. 3D shift-invariant structure.
Figure 3. Block diagram of implemented triple-channel K-band radar system.
Figure 4. Frequency-modulated continuous-wave (FMCW) chirp generator ADF5901 evaluation board and 20 dB power amplifier.
Figure 5. (a) Triple-channel receiver; (b) high-pass filter; (c) intermediate frequency (IF) amplifier.
Figure 6. Data-logging platform.
Figure 7. Transmitting antenna and receiving antennas.
Figure 8. Transmitting antenna return loss and receiving antennas isolation characteristics.
Figure 9. Radiation pattern of antenna elements.
Figure 10. Experiment scenario in the chamber: (a) layout; (b) inside view.
Figure 11. Results of the first set of experiments: (a) azimuth angle and elevation angle estimation; (b) range and azimuth angle estimation; (c) 3D view of target locations; (d) distribution of singular values.
Figure 12. Results of the second set of experiments: (a) azimuth angle and elevation angle estimation; (b) range and azimuth angle estimation; (c) 3D view of target locations; (d) distribution of singular values.
Figure 13. Results of the third set of experiments: (a) azimuth angle and elevation angle estimation; (b) range and azimuth angle estimation; (c) 3D view of target locations; (d) distribution of singular values.
Figure 14. Results of the fourth set of experiments: (a) azimuth angle and elevation angle estimation; (b) range and azimuth angle estimation; (c) 3D view of target locations; (d) distribution of singular values.
Table 1. Costs of individual operations.

Operation Description | Computational Complexity
SVD of R | O((3L_r)^2 L_r + 3L_r L_r^2 + L_r^3) = O(13L_r^3)
U_s | O(L_r^2 K)
Three-dimensional searching | O(B^3 K^3 L_r^3)
Table 2. Summary of system specifications.

Parameter | Specification
Modulation type | FMCW
Carrier frequency | 24.025–24.225 GHz
Bandwidth | 200 MHz
Sweep time | 100 μs
Tx and Rx antenna | 2 × 2 horn antenna array
Number of Rx channels | 3 channels
EIRP | 28 dBm
Receiver noise figure | 10 dB
Receiver RF maximum gain | 50 dB (max.)
Maximum IF gain | 40 dB (max.)
Receiver dynamic range | 72 dB
RF power consumption | 4 W

EIRP: equivalent isotropically radiated power.
Table 3. The locations of targets (unit: m).

Experiment | Target 1 | Target 2 | Target 3 | Target 4
1st experiment | (−1.2, 6.2, −0.3) | – | – | –
2nd experiment | (−1.2, 6.2, −0.3) | (0.4, 6, −0.3) | – | –
3rd experiment | (−1.2, 6.2, −0.3) | (−0.4, 6.2, −0.3) | (0.4, 6, −0.3) | –
4th experiment | (−1.2, 6.2, −0.3) | (−0.4, 6.2, −0.3) | (0.4, 6, −0.3) | (1.2, 6, −0.3)
Table 4. Measured RMSE values.

Experiment | Target 1 | Target 2 | Target 3 | Target 4
1st experiment | 0.1725 | – | – | –
2nd experiment | 0.1773 | 0.1768 | – | –
3rd experiment | 0.1806 | 0.1784 | 0.1788 | –
4th experiment | 0.1832 | 0.1812 | 0.1795 | 0.1821
