1. Overview
The geometry pertaining to the radar and the target can be represented using a 3D coordinate system, as shown in Fig. 1. In Fig. 1, the radar is assumed to be located at the origin $\vec{O}$ and directed along $\hat{D}$ while using a stepped-frequency waveform. The target is located at position $\vec{R}$ in a 3D coordinate system that satisfies the conditions $y > 0$ and $z > 0$. It exhibits both bulk motion, which refers to the movement of the entire target over time, and micro motion, which involves the movement of specific structures within the target, such as propellers.
Fig. 2 provides an overview of the proposed method using a block diagram. In accordance with Fig. 2, the “Generate 3D Scene” block first creates a 3D mesh that captures the micro motions of the specific structures within the target. Next, the “Generate Field” block calculates the backscattered field of the 3D mesh and stores it in Target Storage. The “Synthesize” block then generates a dataset similar to the measurements by blending the target and background data obtained from the storage. In this process, the backscattered field for a single radar pulse is obtained by performing the Generate 3D Scene and Generate Field steps in conjunction; this is repeated for the desired number of pulses, after which the results are stored in Target Storage. Throughout, the time step of the Generate 3D Scene block is set equal to the pulse repetition interval (PRI) of the radar.
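As a minimal sketch of this per-pulse loop (with hypothetical placeholder functions standing in for the blocks in Fig. 2; the PRI and pulse count are assumed values, not taken from the paper):

```python
import numpy as np

# Hypothetical stand-ins for the blocks in Fig. 2; the actual implementations
# are described in the following sections.
def generate_3d_scene(t):
    """Return the target's 3D mesh (an M x 3 x 3 vertex array) at scene time t."""
    return np.zeros((1, 3, 3))            # placeholder mesh

def generate_field(mesh):
    """Return the backscattered field of the mesh (GPU-based PO in the paper)."""
    return np.zeros(2048, dtype=complex)  # placeholder: Q frequency samples

PRI = 1e-3        # pulse repetition interval [s]; assumed value
num_pulses = 128  # pulses along the Doppler axis

# The scene time advances by one PRI per pulse, so each stored field reflects
# the target's state at the moment of that pulse's emission.
target_storage = [generate_field(generate_3d_scene(n * PRI))
                  for n in range(num_pulses)]
```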
2. Generate 3D Scene Block
The Generate 3D Scene block generates a 3D mesh representing the shape and posture of the target, with the target $V_{\text{object}}$ considered the collection of the $M$ triangular patches in the mesh:

$$V_{\text{object}} = \left\{ \left( \vec{P}_{1,m}, \vec{P}_{2,m}, \vec{P}_{3,m} \right) \,\middle|\, m = 1, \ldots, M \right\}, \tag{1}$$

where $M$ denotes the total number of triangular patches in the 3D mesh, while $\vec{P}_{1,m} \in \mathbb{R}^3$, $\vec{P}_{2,m} \in \mathbb{R}^3$, and $\vec{P}_{3,m} \in \mathbb{R}^3$ are the position vectors representing the vertices of the $m$-th triangular patch. Therefore, if $V_{\text{object}}$ is a rigid body, all of its positions and orientations can be expressed by repeatedly applying Eqs. (2) and (3) to its vertices:

$$\vec{P}'_{i,m} = R_{\text{rotate}} \, \vec{P}_{i,m}, \quad i = 1, 2, 3, \tag{2}$$

$$\vec{P}''_{i,m} = \vec{P}'_{i,m} + \vec{P}_{\text{translate}}, \tag{3}$$
where $R_{\text{rotate}} \in \mathbb{R}^{3 \times 3}$ is the Euler rotation matrix and $\vec{P}_{\text{translate}} \in \mathbb{R}^3$ is the position vector used to translate $V_{\text{object}}$. However, targets with complex movements, such as drones, cannot be represented using simple rotational and translational transformations because the entire shape changes as the propellers rotate. Therefore, assuming that the target can be represented as a combination of rigid bodies, all its possible poses can be expressed by applying rotational and translational transformations to the 3D mesh of each component that constitutes the target and then assembling them. Taking this into account, Fig. 3 shows an internal block diagram of the Generate 3D Scene block.
According to Fig. 3, each element of the target is loaded into a separate coordinate system, after which different rotational and translational transformations are applied. Following this, the elements are assembled into a single coordinate system. Subsequently, rotational transformations are applied to the assembled 3D target to determine the pose angles.
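A minimal NumPy sketch of this transform-and-assemble step, assuming a drone-like target with a body and one propeller (the meshes, Euler angles, rotation speed, and mounting offset are all illustrative values, not taken from the paper):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def transform(vertices, euler_deg, translate):
    """Apply an Eq. (2)-style rotation and an Eq. (3)-style translation
    to an (N, 3) array of vertex positions."""
    R = Rotation.from_euler("zyx", euler_deg, degrees=True).as_matrix()
    return vertices @ R.T + np.asarray(translate)

# Hypothetical component meshes: (N, 3) vertex arrays for body and propeller.
body = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
prop = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.2, 0.0]])

t = 2.5e-3     # scene time of the current pulse [s]
rpm = 6000.0   # assumed propeller rotation speed
blade_deg = 360.0 * (rpm / 60.0) * t

# Rotate the propeller in its own frame, then mount it on the airframe.
prop_assembled = transform(prop, [blade_deg, 0.0, 0.0], [0.5, 0.0, 0.1])
target = np.vstack([body, prop_assembled])

# Finally, apply the pose angles (yaw, pitch, roll) to the assembled target.
target = transform(target, [30.0, 5.0, 0.0], [0.0, 0.0, 0.0])
```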
Figs. 4 and 5 show examples of the 3D meshes obtained using the proposed method. In Figs. 4 and 5, the targets represented in the 3D coordinate system not only show the orientation angles but also exhibit movements, such as blade rotation or wing flapping. Therefore, by applying the above method and advancing the target's time according to the PRI, the state of the target at the moment of each radar pulse emission can be represented as a 3D mesh.
3. Generate Field Block
Fig. 6 shows a block diagram of the Generate Field block. In Fig. 6, the Generate Field block uses the GPU-based PO method proposed in [7] to calculate the backscattered field of the target from the 3D mesh.
Fig. 7 compares the computation results obtained using FEKO, a commercial electromagnetic numerical analysis software, with those obtained from the implemented GPU-based PO with interpolation. The simulation was performed using the following hardware: an Intel Core i7-10750H CPU @ 2.60 GHz, 64 GB of RAM, and an NVIDIA GeForce RTX 2070 Super with Max-Q.
We measured the computation time of the GPU-based PO in the same environment. The computation time required by the GPU-based PO to calculate 2,048 frequency samples for a target containing 10,782 triangular patches was approximately 1.54 seconds. Although this is substantially faster than the conventional PO method, it is still too slow for acquiring data on moving targets. For example, it takes approximately 197 seconds to simulate one RDMAP using 128 samples along the Doppler axis and 2,048 samples along the range axis. Assuming that a radar updates 4 RDMAPs per second, it would take about 5.5 days (approximately 132 hours) to generate the 2,400 frames of RDMAPs corresponding to a duration of 10 minutes.
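These timing figures follow directly from the per-pulse cost:

$$128 \times 1.54\,\text{s} \approx 197\,\text{s per RDMAP}, \qquad 2{,}400 \times 197\,\text{s} \approx 4.7 \times 10^{5}\,\text{s} \approx 132\,\text{hours} \approx 5.5\,\text{days}.$$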
We addressed this problem by employing range compensation and linear interpolation. In the range profile, targets situated at a closer range are represented by lower-frequency components in the frequency domain, whereas targets situated at a farther range are represented by higher-frequency components. Therefore, by aligning the target's position at the origin and conducting the scattered field analysis using PO, we can fully reconstruct the target's range profile from far fewer calculated samples. In other words, the number of samples $Q'$ used for the analysis only needs to be approximately 10 times (or more) the number of range cells spanned by the target's length in the range profile, which is far smaller than the number of samples in the radar waveform.
Therefore, the following methods were implemented:
Step 1. The target's 3D mesh was aligned so that its starting point is at the origin, and the $Q'$ scattered fields were calculated.
Step 2. Linear interpolation was applied to the scattered fields to generate the same number of samples $Q$ as that of the radar received signal.
Step 3. Range compensation was applied to shift the target’s position in the range profile back to its original range.
This methodology can also be expressed in mathematical terms. For instance, the backscattered field $\vec{E}'$ calculated by the PO can be formulated as Eq. (4):

$$\vec{E}' = \left[ s'_{1}, s'_{2}, \ldots, s'_{Q'} \right]^{T}, \tag{4}$$

where $Q'$ is the number of frequency samples calculated by the PO, $q'$ refers to the index over $Q'$, and $s'_{q'}$ is the $q'$-th element of the vector $\vec{E}'$.
In Step 1, since the target location is aligned at the origin, an approximate estimate of the frequency samples can be obtained by acquiring fewer frequency samples and then interpolating them. In this context, $Q'$ should satisfy the following condition:

$$Q' \geq \alpha \cdot \frac{2 B r'_{\max}}{c}, \tag{5}$$

where $B$ is the bandwidth of the radar, $c$ refers to the velocity of light in vacuum, and $r'_{\max}$ indicates the maximum range that can be represented in the range profile calculated by the PO. Furthermore, the oversampling coefficient is denoted as $\alpha$, and the value $\alpha = 10$ is used.
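For illustration, with hypothetical values $B = 500\,\text{MHz}$ and $r'_{\max} = 3\,\text{m}$ (a drone-sized extent), Eq. (5) gives

$$Q' \geq 10 \times \frac{2 \times (500 \times 10^{6}\,\text{Hz}) \times 3\,\text{m}}{3 \times 10^{8}\,\text{m/s}} = 100,$$

so roughly 100 PO samples suffice in place of the full 2,048 radar frequency samples.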
In Step 2, the interpolated field $\vec{E}''$ can be obtained using the following equation:

$$\vec{E}'' = \left[ s''_{1}, s''_{2}, \ldots, s''_{Q} \right]^{T} = \text{interp}\left( \vec{E}', Q \right), \tag{6}$$

where $Q$ is the number of frequency samples acquired from the radar, $q$ refers to the index over $Q$, $s''_{q}$ signifies the $q$-th element of the vector $\vec{E}''$, and $\text{interp}(\cdot)$ is an interpolation operation. Notably, a linear interpolation method was employed in this study.
In Step 3, the backscattered field of the moving target is obtained by substituting the position vector of the target into Eq. (7):

$$s_{q} = s''_{q} \exp\left( -j \frac{4 \pi f_{q}}{c} \left\| \vec{R} - \vec{O} \right\| \right), \tag{7}$$

where $\vec{O}$ and $\vec{R}$ are the position vectors of the radar and the target, respectively, and $f_{q}$ indicates the frequency value of the $q$-th frequency sample.
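A minimal NumPy sketch of Steps 2 and 3 (Eqs. (6) and (7)); the carrier frequency, bandwidth, sample counts, target position, and the placeholder PO samples are all assumed values:

```python
import numpy as np

c = 3e8                        # speed of light [m/s]
f0, B = 10e9, 500e6            # assumed carrier frequency and bandwidth
Q_prime, Q = 100, 2_048        # coarse PO samples vs. radar frequency samples

# Frequency grids of the coarse PO result and the full radar waveform.
f_prime = f0 + np.linspace(0.0, B, Q_prime)
f = f0 + np.linspace(0.0, B, Q)

# E_prime: the Q' complex field samples of Eq. (4); random placeholder here.
rng = np.random.default_rng(0)
E_prime = rng.standard_normal(Q_prime) + 1j * rng.standard_normal(Q_prime)

# Step 2 (Eq. (6)): linear interpolation up to the Q radar samples.
# np.interp is real-valued, so real and imaginary parts are handled separately.
E_interp = (np.interp(f, f_prime, E_prime.real)
            + 1j * np.interp(f, f_prime, E_prime.imag))

# Step 3 (Eq. (7)): range compensation shifts the target from the origin back
# to its true range via a two-way phase ramp over frequency.
radar_pos = np.zeros(3)                    # O, the radar at the origin
target_pos = np.array([0.0, 800.0, 50.0])  # R, an assumed target position
r = np.linalg.norm(target_pos - radar_pos)
s = E_interp * np.exp(-1j * 4.0 * np.pi * f * r / c)
```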
Table 1 compares the computation time required to calculate 2,048 frequency samples for a target containing 10,782 triangular patches using the two methods explained above.
The results show a significant improvement in calculation speed when interpolation is used, with the GPU with interpolation performing approximately 100 times faster than the GPU-only case.
Fig. 8 shows the spectrograms obtained by simulating the Drone and Bird targets, each flying at 20 m/s, using the proposed method; the spectrograms display the velocity of the targets. For the Drone target, the presence of micro-Doppler signatures generated by the blades is revealed.
4. Synthesize Block
The signal obtained from radar measurements can be considered the sum of the target and background signals. Notably, the background signal, which includes clutter, is challenging to obtain using numerical methods but can be readily acquired through measurement. Therefore, to obtain a signal similar to that measured from a target in the presence of background clutter, we employed a strategy that mixes the background signal obtained through measurement with the target signal obtained through numerical methods.
In accordance with the steps in Fig. 9, the backscattered field of the target was obtained using a numerical method, whereas the backscattered field of the background was acquired through measurement. Blending was performed by adding the two signals.
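Since both fields are complex-valued samples on the same frequency grid, the blend reduces to an addition; a minimal sketch with placeholder data (the gain term is an assumption, not stated in the text):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 2_048
# Placeholders: E_target would come from the Generate Field block, and
# E_background from a background (clutter) measurement on the same grid.
E_target = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)
E_background = 0.1 * (rng.standard_normal(Q) + 1j * rng.standard_normal(Q))

gain = 1.0  # assumed calibration factor between simulated and measured levels
s_synth = gain * E_target + E_background
```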
Fig. 10 presents a comparison of the synthesized data and the measured data for the Drone target. In this context, it should be noted that the classification performance of a deep learning model may vary based on the method used to normalize the training and test data. In this paper, the RDMAP data included in the presented dataset were normalized to pixel values ranging from 0 to 255, with the maximum value set to 0 dB and values extending down to −15 dB.
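A minimal sketch of this normalization, assuming `rdmap` is a complex-valued range-Doppler map; the peak is mapped to 0 dB (pixel value 255) and values at or below −15 dB are mapped to 0:

```python
import numpy as np

def normalize_rdmap(rdmap, dyn_range_db=15.0):
    """Map an RDMAP to 8-bit pixels: peak -> 0 dB -> 255; <= -15 dB -> 0."""
    power_db = 20.0 * np.log10(np.abs(rdmap) + 1e-12)  # magnitude in dB
    power_db -= power_db.max()                         # place the peak at 0 dB
    clipped = np.clip(power_db, -dyn_range_db, 0.0)
    return np.round((clipped + dyn_range_db) / dyn_range_db * 255.0).astype(np.uint8)
```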
As shown in Fig. 10, the RDMAPs of the synthesized and measured data are so similar that identifying any discernible differences with the naked eye is challenging.