J. Electromagn. Eng. Sci > Volume 25(3); 2025 > Article
Kang, Jun, Park, Kim, and Jeong: Partially Binarized Deep MUSIC for Multiple Target Angle Estimation Using Wireless Sensor Array Systems

Abstract

In this paper, a partially binarized deep learning-based MUltiple SIgnal Classification (MUSIC) algorithm for estimating the angles-of-arrival (AoAs) of multiple targets using wireless sensor array systems is proposed. Since the sensor array system has limited computing power, it is not desirable to put the entire neural network on a single sensor node. Accordingly, the neural network was partitioned into two parts: the sensor node and the ground server. The neural network output (that is, the partially processed data) at the node was forwarded to the server through a noisy backhaul link channel. By modeling the noisy backhaul link as a binarized feedforward layer, we developed a new neural network architecture suitable for AoA estimation using wireless sensor array systems, and we trained it using a straight-through gradient estimator. Furthermore, unlike conventional deep learning-based MUSIC in which the MUSIC pseudospectrum for multiple targets is exploited as a label for training neural networks, a new training dataset generation method is proposed. Specifically, we generated the label by using the weighted sum of the MUSIC pseudospectra of each single target, resulting in more apparent peaks in the target angles and enhancing multiple target angle estimation accuracy.

Introduction

Angle-of-arrival (AoA) estimation has been extensively investigated in the past few decades due to its applications in wireless communications, vehicular navigation, unmanned aerial vehicle (UAV) navigation, and military and commercial surveillance [1–3]. The MUltiple SIgnal Classification (MUSIC) algorithm, a subspace-based technique [4, 5], has received considerable attention for its accuracy and stability in performing the eigenvalue decomposition of the received signal covariance matrix to compute the noise subspace. However, to achieve high estimation accuracy, a process for accurately estimating the number of targets (relevant to the model order of the signal/noise subspace) is required.
To avoid model order estimation, data-driven deep learning (DL)-based approaches have been developed to estimate the AoA of targets [6–8]. In [6], a deep convolutional neural network (CNN) was exploited for AoA estimation, and a neural network architecture consisting of multiple subnetworks, each supporting a subregion of the azimuth angle range, was used to reduce the complexity of the estimation of multiple target AoAs. In [7], an unsupervised learning strategy for AoA estimation was proposed. In [8], DL-based AoA estimation was validated by utilizing sensor data measured from an automotive multiple-input multiple-output radar environment. However, most DL approaches assume that the deep neural network is mounted on a single platform. When a sensor node has limited computing power, it is not desirable to put the entire neural network on it.
This paper proposes a partially binarized DL-based MUSIC algorithm for estimating the AoAs of multiple targets using a wireless sensor array system. Given the limited computing power of the sensor node, the neural network was partitioned into two parts: the sensor node and the ground server. The output of the neural network (that is, the partially processed data) at the node was forwarded to the server through the noisy backhaul link channel. By modeling the noisy backhaul link as a binarized feedforward layer, we developed a new neural network architecture suitable for AoA estimation using wireless sensor array systems. Through the proposed network architecture, we can reduce the communication overhead at the backhaul link instead of sending the output of the network at the sensor node in decimal or real numbers.
Binarized neural networks have been actively investigated because they can reduce memory size and improve power efficiency by replacing most arithmetic operations with bit-wise operations (see [9] and [10] and the references therein). In those studies, the weights and activations of the neural network are binary. Unlike previous works, we replaced the backhaul link channel with the binarized feedforward layer, which enabled us to connect two neural networks separately assigned to the sensor node and the ground server and train them accordingly as a single neural network. Therefore, we were able to train them using a straight-through gradient estimator [11–14].
Furthermore, unlike conventional DL-based MUSIC in which the MUSIC pseudospectrum for multiple targets is exploited as the label to train the neural networks, we proposed a new training dataset generation method. Specifically, we generated the label by using the weighted sum of the MUSIC pseudospectra of each single target, resulting in more apparent peaks in the target angles and enhancing multiple target angle estimation accuracy.
In the context of wireless sensor array systems, the contributions of this paper are as follows:
• The neural network architecture is newly proposed for a wireless sensor array system in which the sensor has limited computing resources. As shown in Fig. 1, the proposed neural network is partitioned into two parts: the sensor node and the ground server.
• We propose the utilization of the binarized feedforward layer to connect two partitioned neural networks. Since these networks are connected through the noisy backhaul link channel, introducing the binarized feedforward layer causes the quantized output of the partitioned neural network on the sensor node to be forwarded to the server. In addition, unlike DeepMUSIC [6], replacing the backhaul link channel with the binarized feedforward layer enabled us to connect two neural networks separately assigned to the sensor node and the ground server with reduced communication overhead at the backhaul link and train them accordingly as a single neural network. Therefore, we were able to train them using a straight-through gradient estimator [11]–[14].
• To train the proposed neural network, a new training dataset generation method is presented. We generated the label by using the weighted sum of the MUSIC pseudospectra of each single target, resulting in more apparent peaks in the target angles and enhancing multiple target angle estimation accuracy. In particular, when targets are located close together in the angular domain, the conventional MUSIC pseudospectrum exhibits less apparent peaks (In [6], the conventional MUSIC pseudospectrum was exploited, but it was assumed that one single target was located in each subregion of the angular domain, guaranteeing sufficient space between the AoAs of multiple targets).
• Through computer simulations, we show that the proposed partially binarized deep MUSIC exhibits a mean square error (MSE) performance comparable to that of the conventional deep MUSIC without quantization. Specifically, the binarized layer in the proposed deep MUSIC has noise immunity when the signal-to-noise ratio (SNR) of the backhaul link is higher than a certain threshold.
The rest of this paper is organized as follows. In Section II, the wireless sensor array system model is introduced, and the conventional MUSIC algorithm is presented. Section III describes the development of the neural network architecture for the wireless sensor array system. In addition, a new training dataset generation method is proposed. Sections IV and V present the simulation results and conclusions, respectively.

System Model and Conventional MUSIC Algorithm

1. Signal Model

As shown in Fig. 1, we considered a wireless sensor array system in which a remote sensor node with a uniform linear array (ULA) of M antenna elements was wirelessly connected to the ground server, and we estimated the target angles. When uncorrelated signals were emitted from K point targets in the far field with directions of θk, k = 1,…,K, the n-th signal snapshot measured at the sensor node was expressed as follows:
(1)
$\mathbf{x}[n]=\sum_{k=1}^{K}\beta_k\mathbf{a}(\theta_k)s_k[n]+\mathbf{n}[n],$
where $\beta_k$ is the aggregated channel coefficient, and $s_k[n]$ is the signal emitted from the k-th target. Here, $\mathbf{n}[n]$ is a zero-mean spatially white additive Gaussian noise vector with variance $\sigma_n^2$, and $\mathbf{a}(\theta_k)$ is the array response vector, which was expressed as follows:
(2)
$\mathbf{a}(\theta_k)=\left[1,\,e^{j\frac{2\pi}{\lambda}d\sin\theta_k},\,\ldots,\,e^{j\frac{2\pi}{\lambda}(M-1)d\sin\theta_k}\right]^T,$
where λ is the wavelength of the incident signal, and d is the inter-element distance.
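The signal model in Eqs. (1) and (2) can be sketched numerically. The following is a minimal NumPy illustration, not the authors' code; the helper names (`steering_vector`, `snapshots`) and the chosen noise level are hypothetical:

```python
import numpy as np

def steering_vector(theta_deg, M, d_over_lambda=0.5):
    """Array response vector a(theta) of Eq. (2) for an M-element ULA."""
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))

def snapshots(thetas_deg, M, N, noise_std=0.1, rng=None):
    """Generate N snapshots x[n] per Eq. (1) for K far-field point targets."""
    rng = np.random.default_rng(rng)
    K = len(thetas_deg)
    A = np.stack([steering_vector(t, M) for t in thetas_deg], axis=1)   # M x K
    beta = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    s = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
    noise = noise_std * (rng.standard_normal((M, N))
                         + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    return A @ (beta[:, None] * s) + noise                              # M x N

# Two targets at the angles used in Fig. 5, with M = 12 and N = 50 (Table 1).
X = snapshots([10.08, -7.03], M=12, N=50, rng=0)
```

Each column of `X` is one snapshot $\mathbf{x}[n]$; the aggregated coefficients $\beta_k$ are drawn circularly symmetric complex Gaussian, matching the simulation setup in Section IV.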
Since the sensor node had limited computing power, it was connected to the server over a wireless link and exploited the computation resource to estimate multiple target angles. Accordingly, throughout this study, we considered that the partially processed data were forwarded to the main server through the limited backhaul link with the capacity of N bits per channel use. In other words, N bits could be transmitted error-free in one channel use.

2. Conventional MUSIC Algorithm

To accurately estimate multiple target angles, MUSIC can be applied to the sample covariance matrix of the received signal [4]. This matrix, $\hat{\mathbf{R}}_x\in\mathbb{C}^{M\times M}$, is expressed as follows:
(3)
$\hat{\mathbf{R}}_x=\frac{1}{N}\sum_{n=1}^{N}\mathbf{x}[n]\mathbf{x}^H[n].$
For a large N, $\hat{\mathbf{R}}_x$ is approximately decomposed as follows:
(4)
$\hat{\mathbf{R}}_x=[\mathbf{E}_s\ \mathbf{E}_n]\begin{bmatrix}\boldsymbol{\Sigma}_s & \mathbf{0}\\ \mathbf{0} & \boldsymbol{\Sigma}_n\end{bmatrix}[\mathbf{E}_s\ \mathbf{E}_n]^H,$
where $\mathbf{E}_s\in\mathbb{C}^{M\times K}$ (respectively, $\mathbf{E}_n\in\mathbb{C}^{M\times(M-K)}$) has the eigenvectors associated with the signal subspace (resp., the noise subspace) as its columns. In addition, $\boldsymbol{\Sigma}_s\in\mathbb{R}^{K\times K}$ (resp., $\boldsymbol{\Sigma}_n\in\mathbb{R}^{(M-K)\times(M-K)}$) is a diagonal matrix whose diagonal entries are the eigenvalues associated with the signal subspace (resp., the noise subspace). The noise subspace spanned by the columns of $\mathbf{E}_n$ is orthogonal to the actual target steering vector $\mathbf{a}(\theta_k)$. Accordingly, the pseudospectrum of the MUSIC estimator for the angle was expressed as follows:
(5)
$J(\theta)=\frac{1}{\mathbf{a}^H(\theta)\mathbf{E}_n\mathbf{E}_n^H\mathbf{a}(\theta)}$
and the estimates of $\theta_k$, k = 1,…,K, corresponded to the K angles associated with the K largest peaks of $J(\theta)$. The number of targets (K) was estimated using the minimum description length (MDL) or Akaike information criterion (AIC) methods (see [15] and the references therein). Specifically, in Eq. (4), the diagonal elements of $\boldsymbol{\Sigma}_s$ (resp., $\boldsymbol{\Sigma}_n$) are the eigenvalues associated with the signal subspace (resp., noise subspace). Therefore, the number of targets was determined from the multiplicity of the smallest eigenvalues of the sample covariance matrix, $\hat{\mathbf{R}}_x$. Using the MDL method, the number of targets was estimated as follows:
(6)
$\hat{K}=\arg\min_{k\in\{0,\ldots,M-1\}}\mathrm{MDL}(k),$
where $\mathrm{MDL}(k)=-\log\left(\frac{\prod_{i=k+1}^{M}\hat{\lambda}_i^{1/(M-k)}}{\frac{1}{M-k}\sum_{i=k+1}^{M}\hat{\lambda}_i}\right)^{(M-k)N}+\frac{1}{2}k(2M-k)\log N$ [15]. Here, $\hat{\lambda}_i$ is the i-th eigenvalue of the sample covariance matrix, $\hat{\mathbf{R}}_x$.
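Eqs. (3)–(6) can be prototyped as follows. This is a hedged NumPy sketch, not the authors' implementation; the function names and the two-target test scenario (angles, noise level, seed) are illustrative assumptions:

```python
import numpy as np

def music_spectrum(X, K, grid_deg, d_over_lambda=0.5):
    """Eqs. (3)-(5): sample covariance, EVD, and MUSIC pseudospectrum J(theta)."""
    M, N = X.shape
    Rx = X @ X.conj().T / N                         # Eq. (3)
    _, E = np.linalg.eigh(Rx)                       # eigenvalues in ascending order
    En = E[:, : M - K]                              # noise subspace: M-K smallest
    A = np.exp(1j * 2 * np.pi * d_over_lambda
               * np.outer(np.arange(M), np.sin(np.deg2rad(grid_deg))))  # M x P
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)          # Eq. (5)

def mdl_order(X):
    """Eq. (6): estimate the number of targets via the MDL criterion [15]."""
    M, N = X.shape
    lam = np.sort(np.linalg.eigvalsh(X @ X.conj().T / N))[::-1]  # descending
    mdl = np.empty(M)
    for k in range(M):
        tail = lam[k:]                              # M-k smallest eigenvalues
        geo, ari = np.exp(np.mean(np.log(tail))), np.mean(tail)
        mdl[k] = -N * (M - k) * np.log(geo / ari) + 0.5 * k * (2 * M - k) * np.log(N)
    return int(np.argmin(mdl))

# Illustrative two-target scenario at high SNR.
rng = np.random.default_rng(0)
M, N = 12, 200
thetas = np.array([-20.0, 15.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(thetas))))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
grid = np.linspace(-30.0, 30.0, 256)
J = music_spectrum(X, K=2, grid_deg=grid)
```

At this SNR the MDL estimate recovers the correct order and the pseudospectrum peaks align with the true angles to within the grid resolution.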

Deep Learning for Multiple Target Angle Estimation using a Wireless Sensor Array System

1. System Architecture for Conventional Deep MUSIC

Fig. 2 shows the block diagram of the conventional deep MUSIC for estimating multiple target angles [6]. First, the sample covariance matrix of the received signal was computed. Then, the covariance matrix was passed to the input layer of the neural network, which generated the desired MUSIC pseudospectrum. The input data vector was constructed with the real, imaginary, and angular values of the covariance matrix, which were expressed as follows:
(7)
$\mathbf{I}=\left[\mathrm{Re}[\hat{\mathbf{R}}_x]\,\big|\,\mathrm{Im}[\hat{\mathbf{R}}_x]\,\big|\,\angle[\hat{\mathbf{R}}_x]\right]\in\mathbb{R}^{M\times M\times 3}.$
In addition, supervised learning with the synthetic dataset of the sample covariance matrices of the received signals and the associated MUSIC pseudospectra was exploited to train the neural network for deep MUSIC. For a given received signal covariance matrix, $\hat{\mathbf{R}}_x$ (equivalently, $\mathbf{I}$ in Eq. (7)), $J(\Theta_i)$ was evaluated for $\Theta_i=\Theta_L+\frac{\Theta_U-\Theta_L}{P}(i-1)$, i = 1,…,P, where $\Theta_U$ and $\Theta_L$ are the upper and lower boundaries of the searching grid points for the azimuth angle, respectively, and P is the number of grid points.
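The input construction of Eq. (7) is a simple channel stack. A minimal sketch, assuming a NumPy covariance matrix (`network_input` is a hypothetical helper name, not from the paper):

```python
import numpy as np

def network_input(Rx):
    """Eq. (7): stack Re, Im, and phase of R_hat_x into an M x M x 3 tensor."""
    return np.stack([Rx.real, Rx.imag, np.angle(Rx)], axis=-1)

# Tiny 2 x 2 Hermitian example for illustration.
Rx = np.array([[2.0 + 0j, 1.0 - 1j],
               [1.0 + 1j, 2.0 + 0j]])
I = network_input(Rx)
```

The three channels play the role of an image's color channels for the CNN layers that follow.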
Then, letting $\mathbf{y}\in\mathbb{R}^P$ denote the output of the neural network at the ground server, the network model parameters were trained such that $\mathbf{y}\to\mathbf{J}$, where $\mathbf{J}=[J(\Theta_1),\ldots,J(\Theta_P)]$. Therefore, throughout this study, the MSE was used as the loss function in the training phase, expressed as follows:
(8)
$L(\mathbf{y},\mathbf{J})=\sum_{i=1}^{P}\left(y_i-J(\Theta_i)\right)^2.$
Furthermore, to reduce the computational complexity in [6], the searching grid points for the azimuth angle were partitioned into Q subregions, and the associated pseudospectrum output vector J was rewritten as follows:
(9)
$\mathbf{J}=[\mathbf{J}_1,\ldots,\mathbf{J}_Q],\quad \mathbf{J}_q=\left[J\!\left(\Theta_{\frac{P}{Q}(q-1)+1}\right),\ldots,J\!\left(\Theta_{\frac{P}{Q}q}\right)\right],$
where P/Q is assumed to be an integer. Then, the overall neural network was split into Q neural networks, and the q-th subnetwork was trained to generate the pseudospectrum output vector for the q-th subregion, which enabled us to train the neural networks separately with lower computational complexity.
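The subregion partitioning of Eq. (9) amounts to a reshape of the length-P label. A small illustration with the Table 1 values (the Gaussian stand-in label is purely illustrative):

```python
import numpy as np

P, Q = 256, 4                        # grid points and subregions (Table 1)
grid = np.linspace(-30.0, 30.0, P)   # search grid over [Theta_L, Theta_U]
J = np.exp(-0.5 * ((grid - 10.0) / 2.0) ** 2)  # stand-in pseudospectrum label
J_sub = J.reshape(Q, P // Q)         # Eq. (9): row q is J_q, of length P/Q
```

Each row `J_sub[q]` is the training target of the q-th subnetwork.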

2. Neural Network Architecture for a Wireless Sensor Array System

2.1 Partially binarized deep MUSIC for the wireless backhaul link

Fig. 3 shows the block diagram of the proposed neural network architecture for the wireless sensor array system. Unlike [6], since the sensor array system had limited computing power, it was not desirable to put the entire neural network on the sensor node. Therefore, the neural network was partitioned into two parts: the sensor node and the ground server. Without loss of generality, we considered that multiple CNN layers were mounted on the sensor node, while fully connected feedforward neural network (FNN) layers were mounted on the server. The output of the CNN layers was flattened and binarized with two levels. The binarized data were then forwarded to the server through the noisy backhaul link channel.

2.2 Training method for partially binarized deep MUSIC

The input matrix $\mathbf{I}$ in Eq. (7) was exploited as the input data of the CNN mounted on the sensor node. Then, the output of the CNN for the q-th subregion, $\mathbf{X}^q$, was expressed as $\mathbf{X}^q = S_q(\mathbf{I}) \in \mathbb{R}^{o\times n\times m}$ for q = 1,…,Q.
Here, $S_q(\cdot)$ denotes the q-th CNN module on the sensor node, o denotes the number (or size) of the convolution layer output channels, and n, m represent the convolution kernel size. Then, $\mathbf{X}^q$ was flattened as $\mathbf{x}^q = fl(\mathbf{X}^q)$ and forwarded to the server through the backhaul link channel. The architecture of each CNN module in the sensor node is shown in Fig. 4.
Because the output of the CNN module, $\mathbf{x}^q$, needs to be reported through the limited backhaul link channel, it cannot be forwarded to the ground server in floating-point format. Therefore, in this study, we introduced a partially binarized scheme for quantization. As in [14], we binarized the output data, $\mathbf{X}^q$, before flattening rather than binarizing the CNN weights. The output data of the sensor node, $\mathbf{X}_i^q\in\mathbb{R}^{n\times m}$ for the i-th kernel output, can be binarized as follows:
(10)
$\tilde{\mathbf{X}}_i^q=B(\mathbf{X}_i^q)=\alpha_i^q\mathbf{H}_i^q,\quad i=1,\ldots,o,$
where $B(\cdot)$ denotes the binarization function. In addition, $\alpha_i^q\in\mathbb{R}^+$ and $\mathbf{H}_i^q\in\{+1,-1\}^{n\times m}$ were expressed as follows:
(11)
$\alpha_i^q=\frac{1}{nm}\left\|\mathbf{X}_i^q\right\|_{\ell 1},\quad \mathbf{H}_i^q=\mathrm{sign}(\mathbf{X}_i^q).$
The flattened and quantized output $\tilde{\mathbf{x}}^q$ ($=fl(\tilde{\mathbf{X}}^q)$) was then forwarded to the FNN of the ground server for the q-th subregion as $\hat{\mathbf{y}}_q=G_q(\tilde{\mathbf{x}}^q)$ for q = 1,…,Q, and the output vector of the feedforward network at the ground server was expressed as follows:
(12)
$\hat{\mathbf{y}}=[\hat{\mathbf{y}}_1,\ldots,\hat{\mathbf{y}}_Q]=[G_1(\tilde{\mathbf{x}}^1),\ldots,G_Q(\tilde{\mathbf{x}}^Q)].$
The binarization process in Eq. (10) can be regarded as an activation function, and the wireless link between the sensor node and the ground server can accordingly be replaced by the binarization layer. The CNN at the sensor node and the FNN at the ground server were connected through the binarization layer and trained as a single neural network. The gradient of the binarization layer was then required to train the neural network parameters during the training phase. As in [14], the derivative of the binarization layer at backpropagation was computed as follows:
(13)
$\left[\frac{\partial B(\cdot)}{\partial W}\right]_{jk}=\begin{cases}1, & [\mathbf{H}_i^q]_{jk}>0,\\ 0, & [\mathbf{H}_i^q]_{jk}<0.\end{cases}$
In addition, by exploiting the pseudospectrum output vector J in Eq. (9) with the synthetic dataset of the sample covariance matrices of the received signals, supervised learning can be applied such that the overall network is trained to generate the pseudospectrum output vector, J. This training process is summarized in Algorithm 1.
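The forward binarization of Eqs. (10)–(11) and the gradient rule of Eq. (13) can be sketched as follows. This NumPy fragment is illustrative only (`binarize` and `ste_grad_mask` are hypothetical names); in a real training loop, the straight-through backward rule would be wired into the framework's autograd:

```python
import numpy as np

def binarize(Xi):
    """Eqs. (10)-(11): alpha = (1/nm)*l1-norm of X, H = sign(X), X_tilde = alpha*H."""
    alpha = np.abs(Xi).mean()        # (1/nm) ||X||_l1 for an n x m kernel output
    H = np.sign(Xi)
    return alpha * H, alpha, H

def ste_grad_mask(H):
    """Eq. (13): straight-through mask at backpropagation -- the gradient passes
    where [H]_jk > 0 and is zeroed where [H]_jk < 0 (the paper's stated rule)."""
    return (H > 0).astype(float)

# Toy 2 x 2 kernel output to illustrate the quantization.
Xq = np.array([[0.5, -1.2],
               [2.0, -0.1]])
Xb, alpha, H = binarize(Xq)          # Xb takes only the two values +/- alpha
```

Each binarized kernel output thus costs one scalar (`alpha`) plus nm sign bits over the backhaul link, rather than nm floating-point values.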

3. Proposed Training Dataset Generation Method for Multiple Target Angle Estimation

When the AoAs of the targets were close to each other, the MUSIC spectra became vague, which may make them unsuitable for training a neural network for target angle estimation. For example, Fig. 5(a) shows the MUSIC pseudospectrum obtained from Eq. (5) when two AoAs were θ1 = 10.08° and θ2 = −7.03°, with M = 12. Fig. 5(b) shows the associated neural network output of the deep MUSIC when the dataset from the conventional MUSIC pseudospectra of Eq. (5) was exploited.
As shown in Fig. 5(b), the AoA of the second target could not be clearly detected. This occurred because, when the AoAs of multiple targets are similar (or in the same azimuth angle subregion, Θi), the neural network is not trained effectively. Fig. 5(e) shows the MUSIC pseudospectrum obtained from Eq. (5) when there were three targets with (θ1, θ2, θ3) = (−14.06°, −10.08°, −2.11°). As shown in Fig. 5(f), only one target was detected.
Accordingly, we proposed a new training dataset generation method in which we generated the label by using the weighted sum of the MUSIC pseudospectra of each single target. The proposed method is described in Fig. 6, and the pseudospectrum was modified as follows:
(14)
$J'(\theta)=\sum_{k=1}^{K}\frac{\alpha'_k}{\mathbf{a}^H(\theta)\mathbf{E}_n^k(\mathbf{E}_n^k)^H\mathbf{a}(\theta)},$
where $\mathbf{E}_n^k$ is obtained from the eigenvalue decomposition (EVD) of the sample covariance matrix of $\mathbf{x}_k^s[n]\ (=\beta_k\mathbf{a}(\theta_k)s_k[n]+\mathbf{n}[n])$, n = 1,…,N. Here, $\alpha'_k$ was expressed as follows:
(15)
$\alpha'_k=\frac{\alpha_k}{\alpha_{\max}},\quad \alpha_k=\log\frac{1}{\mathbf{a}^H(\theta_k)\mathbf{E}_n^k(\mathbf{E}_n^k)^H\mathbf{a}(\theta_k)},$
and $\alpha_{\max}=\max(\alpha_1,\ldots,\alpha_K)$, resulting in more apparent peaks at the target angles. Accordingly, $\hat{\mathbf{R}}_x$ was exploited as the input for the neural network, and J′(θ) was exploited as its desired output in the training phase. Fig. 5(c) shows J′(θ) in Eq. (14) when θ1 = 10.08° and θ2 = −7.03° with M = 12. We were able to identify clearer peaks for the targets than those in Fig. 5(a). When the proposed dataset generation method was exploited, its associated neural network output is given in Fig. 5(d), and the two peaks were easily detected. As shown in Fig. 5(g) and 5(h), similar observations were made for K = 3.
We noted that supervised learning was exploited with the synthetic dataset of the sample covariance matrices of the received signals and the associated pseudospectra. That is, when the sample covariance matrix was applied to the neural network as the input, it was trained to give the associated pseudospectra as the output. Therefore, in the proposed algorithm, estimation of the number of targets and eigenvalue decomposition for the signal subspace estimation were not required. Instead, the number of targets corresponded to the number of peaks in the output spectra of the neural network (Fig. 5(d) and 5(h)).
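The label generation of Eqs. (14) and (15) can be prototyped as below. The per-target snapshot sets stand in for $x_k^s[n]$; all function names and the two-target test scenario are assumptions for illustration, not the authors' code:

```python
import numpy as np

def single_target_spectrum(Xk, grid_deg, d_over_lambda=0.5):
    """MUSIC pseudospectrum of one single-target snapshot set (K = 1 per target)."""
    M, N = Xk.shape
    Rk = Xk @ Xk.conj().T / N
    _, E = np.linalg.eigh(Rk)
    Enk = E[:, : M - 1]                         # noise subspace for a single source
    A = np.exp(1j * 2 * np.pi * d_over_lambda
               * np.outer(np.arange(M), np.sin(np.deg2rad(grid_deg))))
    return 1.0 / np.sum(np.abs(Enk.conj().T @ A) ** 2, axis=0)

def weighted_label(per_target_X, thetas_deg, grid_deg):
    """Eqs. (14)-(15): label J'(theta) as a weighted sum of per-target spectra,
    with alpha_k = log of each spectrum's value at its own angle, over alpha_max."""
    spectra = [single_target_spectrum(Xk, grid_deg) for Xk in per_target_X]
    alphas = np.array([np.log(single_target_spectrum(Xk, np.array([tk]))[0])
                       for Xk, tk in zip(per_target_X, thetas_deg)])
    alphas = alphas / alphas.max()              # alpha'_k = alpha_k / alpha_max
    return sum(a * s for a, s in zip(alphas, spectra))

# Illustrative two-target example at the Fig. 5 angles, high SNR.
rng = np.random.default_rng(1)
M, N = 12, 200
thetas = [10.08, -7.03]
per_X = []
for t in thetas:
    a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(t)))
    s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    per_X.append(np.outer(a, s) + 0.05 * (rng.standard_normal((M, N))
                                          + 1j * rng.standard_normal((M, N))))
grid = np.linspace(-30.0, 30.0, 256)
label = weighted_label(per_X, thetas, grid)
```

Because each summand in Eq. (14) comes from a single-target covariance, the label exhibits a distinct peak per target even when the angles fall in the same subregion.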

Simulation Results

Through computer simulations, we validated the performance of the proposed deep MUSIC for multiple target angle estimation. The operating frequency was set at 77 GHz, and the interelement distance was set at a half-wavelength. For the weight updating, an Adam optimizer [16] with a learning rate of 0.0005 was adopted. βk followed a circularly symmetric complex Gaussian distribution with a zero-mean and unit variance. Table 1 summarizes the simulation parameters.
Fig. 7 compares the MSE of the deep MUSIC algorithm for various SNRs at the sensor node with two different dataset generation methods: the proposed dataset generation method based on Eq. (15) and the conventional method based on Eq. (5). The SNR at the sensor node refers to the SNR of the received data at the sensor node in Eq. (1). Throughout the simulations, the AoAs of the two targets were in the same subregion with a probability of 1/2; otherwise, they were in different subregions (in [6], it was assumed that the AoAs of the targets were not in the same subregion; accordingly, that study did not consider a situation in which the AoAs of the targets were close to each other).
Fig. 7 also shows that the deep MUSIC exhibited lower MSE performance with the proposed dataset generation method than with the dataset obtained from the MUSIC spectra of Eq. (5). As discussed in Section III-3, since the peaks were clearly recognized in the dataset obtained from Eq. (15), the neural network was efficiently trained.
As shown in Fig. 8, the MSE of the partially binarized deep MUSIC was evaluated for various SNRs at the sensor node. The CNN module at the sensor node consisted of four two-dimensional CNN layers. The length of the flattened and quantized output $\tilde{\mathbf{x}}^q$ was then set to 65,536. The FNN at the ground server consisted of x FNN layers. The proposed dataset generation method using Eq. (15) was exploited to train the partially binarized neural network.
For comparison purposes, the MSE of the deep MUSIC without binarization was also evaluated. The number of CNN/FNN layers was set to equal the number of CNN/FNN layers in the partially binarized deep MUSIC. In addition, instead of the binarized layer, the output of the sensor node was forwarded to the FNN layers on the ground server without quantization.
Here, additive white Gaussian noise was added to the output of the sensor node in consideration of the noisy backhaul link. As shown in Fig. 8, when the SNR at the backhaul link (SNR_BL) was 0 dB, the partially binarized deep MUSIC exhibited the worst MSE performance. In contrast, when SNR_BL was 25 dB, the MSE performance was comparable to that of the deep MUSIC without quantization. Therefore, when SNR_BL is high, the partially binarized deep MUSIC can be effectively trained to estimate the AoAs of multiple targets.
As shown in Fig. 8, we also evaluated the MSEs of the conventional MUSIC and Cramer-Rao lower bound (CRLB) [4]. We noted that the proposed deep MUSIC exhibited a similar MSE performance as the conventional MUSIC algorithm, while the proposed deep MUSIC did not require estimation of the signal/noise subspace or the number of targets. Furthermore, while the conventional MUSIC process could not be divided and allocated to two different nodes (i.e., the sensor node and the ground server), the neural network architecture in the proposed deep MUSIC was partitioned into two parts and effectively trained as a single neural network over a noisy backhaul link.
As shown in Fig. 9, the MSE of the partially binarized deep MUSIC was evaluated for various SNRs at the backhaul link. If SNR_BL was higher than a certain threshold (e.g., 10 dB in Fig. 9), the binarized deep MUSIC was well trained, and its MSE was much improved, regardless of the SNR at the sensor node. Therefore, the binarized layer can provide noise immunity when the backhaul-link SNR is higher than a certain threshold.
As shown in Fig. 10, we also evaluated the resolution probability of the proposed partially binarized deep MUSIC. Resolution probability is widely used to analyze the resolution capabilities of AoA estimators [17, 18]. For comparison purposes, the resolution probabilities of the deep MUSIC with the conventional training method and the conventional MUSIC algorithm were also evaluated. The resolution probability improved as the SNR at the backhaul link increased. In addition, the proposed partially binarized deep MUSIC with SNR_BL = 25 dB showed the highest resolution probability when the SNR at the sensor node was larger than 0 dB.

Conclusion

This paper proposes a partially binarized deep MUSIC to estimate the AoAs of multiple targets using a wireless sensor array system. The neural network for the AoA estimation was partitioned into two parts: the sensor node and the ground server. The output of the neural network (that is, the partially processed data) at the node was forwarded to the server through the noisy backhaul link channel. By modeling the noisy backhaul link as a binarized feedforward layer, we developed a new neural network architecture suitable for AoA estimation using wireless sensor array systems. Furthermore, by exploiting the straight-through gradient estimation technique, the proposed neural network was efficiently trained, and a new training dataset generation method was utilized such that the peaks in the pseudospectrum were clearly recognized during the training phase.
Through computer simulations, the proposed partially binarized deep MUSIC exhibited an MSE performance comparable to that of the conventional deep MUSIC without quantization. Specifically, the binarized layer in the proposed deep MUSIC can provide noise immunity when the SNR of the backhaul link is higher than a certain threshold. Therefore, the neural network for the partially binarized deep MUSIC can be effectively sliced into the sensor node and the ground server. Our simulation results offer useful insights into the design of the partially binarized neural network architecture for wireless sensing systems. Verification with measurement data through hardware implementation is left for our future work.

Notes

This work was partially supported by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government (No. 24ZH1100, Study on 3D communication technology for hyperconnectivity) and partially supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. RS-2023-00241706).

Fig. 1
Wireless sensor array system for multiple target angle estimation.
Fig. 2
System architecture for conventional deep MUSIC.
Fig. 3
Block diagram of the proposed neural network architecture for the wireless sensor array system.
Fig. 4
Sensor node CNN layers.
Fig. 5
(a, e) Pseudospectrum from the conventional MUSIC algorithm and (b, f) its associated neural network output. (c, g) Pseudospectrum from the proposed training dataset generation method and (d, h) its associated neural network output with K = (2, 3).
Fig. 6
Proposed training dataset generation method.
Fig. 7
MSE of deep MUSIC with two different dataset generation methods for various SNRs at the sensor node.
Fig. 8
MSE of the partially binarized deep MUSIC for various SNRs when K = 2 and M = 8.
Fig. 9
MSE of the partially binarized deep MUSIC for various SNRs at the backhaul link when K = 2 and M = 8.
Fig. 10
Resolution probability of the partially binarized deep MUSIC for various SNRs at the backhaul link when K = 2 and M = 8.
Table 1
Simulation parameters for the proposed deep MUSIC using wireless sensor array systems
Parameter                                  Value
Number of antenna elements, M              12
Carrier frequency (GHz)                    77
Number of subregions, Q                    4
Target angle range, [Θ_L, Θ_U] (°)         [−30, 30]
Number of grid points, P                   256
Number of received signal snapshots, N     50
Number of targets, K                       2
Algorithm 1
Training for partially binarized deep MUSIC
Input: A minibatch of input data I and labels J, current weights W_t, current learning rate η_t.
Output: MUSIC pseudospectrum estimate ŷ.
1. Compute the sample covariance matrix R̂_x.
2. Formulate the input vector I, as in Eq. (7).
3. for t = 1 to epochs do
4.   for q = 1 to Q do
5.     Sensor node: X^q = S_q(I)
6.     for i = 1 to o do
7.       α_i^q = (1/nm)‖X_i^q‖_ℓ1
8.       H_i^q = sign(X_i^q)
9.       X̃_i^q = α_i^q H_i^q
10.      x̃_i^q = fl(X̃_i^q)
11.    end for
12.    Ground server: ŷ_q = G_q(x̃^q)
13.    ŷ = [ŷ, ŷ_q]
14.  end for
15.  W_{t+1} = Adam(W_t, ∂L(ŷ, J)/∂W, η_t)
16. end for

References

1. Y. Zheng, M. Sheng, J. Liu, and J. Li, "Exploiting AoA estimation accuracy for indoor localization: a weighted AoA-based approach," IEEE Wireless Communications Letters, vol. 8, no. 1, pp. 65–68, 2019. https://doi.org/10.1109/LWC.2018.2853745
2. A. Fascista, G. Ciccarese, A. Coluccia, and G. Ricci, "A localization algorithm based on V2I communications and AOA estimation," IEEE Signal Processing Letters, vol. 24, no. 1, pp. 126–130, 2017. https://doi.org/10.1109/LSP.2016.2639098
3. W. Wang, P. Bai, Y. Zhou, X. Liang, and Y. Wang, "Optimal configuration analysis of AOA localization and optimal heading angles generation method for UAV swarms," IEEE Access, vol. 7, pp. 70117–70129, 2019. https://doi.org/10.1109/ACCESS.2019.2918299
4. P. Stoica and A. Nehorai, "MUSIC, maximum likelihood, and Cramer-Rao bound," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 5, pp. 720–741, 1989. https://doi.org/10.1109/29.17564
5. C. P. Mathews and M. D. Zoltowski, "Eigenstructure techniques for 2-D angle estimation with uniform circular arrays," IEEE Transactions on Signal Processing, vol. 42, no. 9, pp. 2395–2407, 1994. https://doi.org/10.1109/78.317861
6. A. M. Elbir, "DeepMUSIC: Multiple signal classification via deep learning," IEEE Sensors Letters, vol. 4, no. 4, article no. 7001004, 2020. https://doi.org/10.1109/LSENS.2020.2980384
7. Y. Yuan, S. Wu, M. Wu, and N. Yuan, "Unsupervised learning strategy for direction-of-arrival estimation network," IEEE Signal Processing Letters, vol. 28, pp. 1450–1454, 2021. https://doi.org/10.1109/LSP.2021.3096117
8. J. Fuchs, M. Gardill, M. Lubke, A. Dubey, and F. Lurz, "A machine learning perspective on automotive radar direction of arrival estimation," IEEE Access, vol. 10, pp. 6775–6797, 2022. https://doi.org/10.1109/ACCESS.2022.3141587
9. I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, "Binarized neural networks," Advances in Neural Information Processing Systems, vol. 29, pp. 4107–4115, 2016.

10. T. Simons and D. J. Lee, "A review of binarized neural networks," Electronics, vol. 8, no. 6, article no. 661, 2019. https://doi.org/10.3390/electronics8060661
11. M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, Binarized neural networks: training deep neural networks with weights and activations constrained to +1 or −1, 2016. [Online]. Available: https://arxiv.org/abs/1602.02830

12. S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou, DoReFa-Net: training low bitwidth convolutional neural networks with low bitwidth gradients, 2016. [Online]. Available: https://arxiv.org/abs/1606.06160v1

13. J. Choi, Z. Wang, S. Venkataramani, P. I. J. Chuang, V. Srinivasan, and K. Gopalakrishnan, PACT: parameterized clipping activation for quantized neural networks, 2018. [Online]. Available: https://arxiv.org/abs/1805.06085

14. M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," in Computer Vision – ECCV 2016. Cham, Switzerland: Springer, 2016, pp. 525–542. https://doi.org/10.1007/978-3-319-46493-0_32
15. M. Wax and T. Kailath, "Detection of signals by information theoretic criteria," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 33, no. 2, pp. 387–392, 1985. https://doi.org/10.1109/TASSP.1985.1164557
16. D. P. Kingma and J. Ba, Adam: a method for stochastic optimization, 2014. [Online]. Available: https://arxiv.org/abs/1412.6980v1

17. Q. T. Zhang, "Probability of resolution of the MUSIC algorithm," IEEE Transactions on Signal Processing, vol. 43, no. 4, pp. 978–987, 1995. https://doi.org/10.1109/78.376849
18. D. Schenck, X. Mestre, and M. Pesavento, "Probability of resolution of MUSIC and g-MUSIC: an asymptotic approach," IEEE Transactions on Signal Processing, vol. 70, pp. 3566–3581, 2022. https://doi.org/10.1109/TSP.2022.3178820

Biography

Jongsung Kang, https://orcid.org/0000-0002-9963-119X received his B.S. degree in electronic engineering in 2023 and is currently pursuing an M.S. degree in the Department of Smart Robot Convergence and Application Engineering at Pukyong National University, Busan, Korea. His research interests include radar signal processing and deep learning-based radar imaging.

Biography

Joonhyeon Jun, https://orcid.org/0000-0002-6415-3861 received his B.S. degree in electronic engineering in 2023 and is currently pursuing an M.S. degree in the Department of Smart Robot Convergence and Application Engineering at Pukyong National University, Busan, South Korea. His research interests include the implementation of orthogonal frequency division multiplexing (OFDM)-based multiple-input multiple-output (MIMO) radar systems and signal processing for radar systems.

Biography

Jaehyun Park, https://orcid.org/0000-0001-5327-9111 received his B.S. and Ph.D. (M.S.-Ph.D. joint program) degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 2003 and 2010, respectively. From 2010 to 2013, he was a senior researcher at the Electronics and Telecommunications Research Institute (ETRI), where he worked on transceiver design and spectrum sensing for cognitive radio systems. From 2013 to 2014, he was a postdoctoral research associate in the Electrical and Electronic Engineering Department at Imperial College London. He is currently an associate professor in the Electronic Engineering Department at Pukyong National University, South Korea. His research interests include signal processing for wireless communications and radar systems, with a focus on detection and estimation for MIMO systems, MIMO radar, cognitive radio networks, and joint information and energy transfer.

Biography

Hyungju Kim, https://orcid.org/0000-0003-3593-4113 received his B.S. degree in electronics engineering from Kyungpook National University, Daegu, Korea, in 2010, and M.S. and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2012 and 2018, respectively. Since 2018, he has been a senior researcher at the Electronics and Telecommunications Research Institute (ETRI), Daejeon, Korea. His research interests include EM numerical analysis and radar signal processing.

Biography

Byung Jang Jeong, https://orcid.org/0000-0003-3606-0593 received his B.S. degree from Kyungpook National University, Daegu, Korea, in 1988, and his M.S. and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 1992 and 1997, respectively. He was with the Samsung Advanced Institute of Technology (SAIT) from 1994 to 2003. Since 2003, he has been a principal member of the research staff at the Electronics and Telecommunications Research Institute (ETRI). His research interests include signal processing for wireless communications, MIMO systems, cognitive radio networks, and radar signal processing.