I. Introduction
In an antenna array, thinning refers to switching off a certain proportion of the antenna elements of a full array without causing major degradation of the performance metrics. These metrics include the gain, half-power beam width (HPBW), and peak side-lobe level (PSLL) [1]. In general, thinned arrays offer reduced power consumption and a less complex beamforming network [2, 3]. However, reducing the number of active elements can reduce the gain or increase the PSLL compared to the full array.
In recent years, optimization techniques such as the genetic algorithm (GA) and particle swarm optimization (PSO), which do not require gradient information of the objective function, have been applied to the thinning of arrays [4, 5]. These methods can find the global optimum when the objective function is smooth. However, when the objective function has many local optima, these methods are likely to converge to one of them. To mitigate this problem, they require larger populations and more iterations.
Similarly, in array antennas with many elements, optimal thinning is hindered by numerous local optima. Various hybrid GA methods have been proposed to enhance the search performance of thinning designs [6, 7]. However, these methods have only been applied to linear or two-dimensional (2D) arrays with a small number of elements. In this paper, a nonlinear chirp function (NCF) is used to achieve an efficient thinning algorithm. With the oscillation of the NCF determined by a set of design variables, the proposed thinning process switches off the array elements whose NCF values fall below a threshold.
We propose a new hybrid GA that improves the convergence rate and the global search performance by exploiting the moving least squares (MLS) method. The MLS method constructs local polynomial functions from non-uniform sample data and interpolates the sample data [8]. By applying MLS to the population and the corresponding objective function values, an interpolated objective function is obtained. Subsequently, the conjugate gradient (CG) method is applied to the interpolated objective function to obtain improved local optima. After the additional GA processes, the obtained solution is added to the population of the next generation. The proposed hybrid method (MLS-GA) enhances the rate of convergence and increases the possibility of finding the global optimum. It is then applied to test functions and a thinning design problem for verification.
Hybrid algorithms that combine the conventional GA and CG can be divided into three types according to the timing of the CG application: pre-hybridization methods that generate the initial solutions of the GA through CG, organic-hybridization methods that use CG as an operator of the GA, and post-hybridization methods that apply CG to the results of the GA. The first type exhibits good performance on specific problems but cannot be applied to general problems. The second type improves convergence but has limitations in finding global solutions. In the third type, the GA is performed n times and CG is applied to the best solution [9]. Unlike previous studies, we obtain local optima in each generation by applying gradient information to the data interpolated through MLS. These local optima and the solutions of the conventional GA are preserved for the next generation.
A GA that adds gradient information through the MLS concept is also presented in [10]. However, the algorithm design in this paper is based on discrete design variables compared to the continuous design variable utilized in [10]. In addition, the element position (presented as 2D information) is also regarded as a design variable, and the solution can be derived from the NCF described previously.
II. Proposed Thinning Design Algorithm
The conventional GA for a thinned array encodes the on/off state of each array element location in binary form: “1” (switch-on) or “0” (switch-off) [4, 6]. However, with this approach, the number of local optima increases as the number of array elements increases; consequently, the optimization performance decreases. In this paper, we propose an indirect thinning algorithm based on an NCF, which provides multiple oscillations, in contrast to the conventional thinning algorithm. In general, chirp signals are used in radar and sonar systems, and their frequencies increase (up-chirp) or decrease (down-chirp) with time [11]. The proposed NCF ℑ(t) is defined as follows.
where t represents the normalized location of the array elements, and ψ(t) is the phase function of ℑ(t). As the phase function ψ(t) increases, the oscillation density of ℑ(t) also increases. In Eq. (2), α controls the interval of the oscillation of ℑ(t), and β is the starting point of the oscillation. These parameters (α, β, τl) determine the shape of the NCF.
The proposed thinning algorithm obtains the NCF value of each array element and switches off every element whose NCF value falls below the threshold χ, where −1 ≤ χ ≤ 1.
The process of the proposed thinning algorithm with 12 array elements is presented in Fig. 1. Here, the design variables are α, β, {τl: l = 1,2,…, L}, and the threshold χ.
We assume that α = 70, β = 0, τl = [0.16, −0.12, −0.1, 0.25, 0.21, 0.5], and χ = −0.2; hence, NCF ℑ(t) is determined as shown in Fig. 1. Because the NCF values of array elements 10 and 11 are less than the threshold, they are “OFF.”
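Since Eqs. (1) and (2) are not reproduced above, the thresholding step alone can be summarized by a minimal sketch, assuming the NCF values of the elements have already been evaluated (the helper name apply_threshold and the sample values are illustrative only):

```python
import numpy as np

def apply_threshold(ncf_values, chi):
    """Switch off every element whose NCF value is below the threshold chi
    (-1 <= chi <= 1). Returns 1 for "ON" and 0 for "OFF" per element."""
    return (np.asarray(ncf_values, dtype=float) >= chi).astype(int)

# Illustrative use with 12 elements and chi = -0.2, as in the Fig. 1 setup
# (these NCF values are random placeholders, not those of Fig. 1).
state = apply_threshold(np.random.uniform(-1.0, 1.0, 12), chi=-0.2)
```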
For simplicity of computation, the 2D planar array is rearranged as a one-dimensional (1D) linear array. The rearrangement order is set according to the distance from each element to the center of the 2D array. After the proposed thinning algorithm is applied to the converted 1D linear array, the result is reconverted back into a 2D array. In addition, this paper assumes that the planar array has rotational symmetry; accordingly, only the 1st quadrant of the planar array is designed.
An example of the conversion and reconversion process is shown in Fig. 2. If the number of array elements in the 1st quadrant is 9, the elements are rearranged as shown in Fig. 2(a). The proposed thinning algorithm is then applied to the rearranged 1D linear array. The result of the algorithm, with the seventh and ninth elements switched off, is displayed in Fig. 2(b). The result is then reconverted into a 2D planar array, and the result for the entire array is obtained by rotational symmetry.
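A minimal sketch of the conversion and reconversion, assuming the 1st-quadrant element coordinates are supplied as an (N, 2) array; the helper names are hypothetical, not the authors' implementation:

```python
import numpy as np

def to_linear_order(element_xy, center=(0.0, 0.0)):
    """Order the planar-array elements by distance to the array center,
    yielding the 1D arrangement used by the thinning algorithm."""
    xy = np.asarray(element_xy, dtype=float)
    dist = np.hypot(xy[:, 0] - center[0], xy[:, 1] - center[1])
    return np.argsort(dist)                     # element indices, nearest first

def back_to_planar(order, state_1d):
    """Map the 1D on/off result back onto the original 2D element indices."""
    state_2d = np.empty(len(order), dtype=int)
    state_2d[order] = state_1d
    return state_2d
```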
In this paper, the thinning coefficient is defined as $\eta = (N_{\mathrm{OFF}}/N) \times 100\%$, where N is the number of array elements and NOFF is the number of “OFF” elements. For example, in the case shown in Fig. 1, η = 16.7%.
III. Proposed Hybrid GA based on MLS
The MLS is a local approximation method used to estimate the value of a function at an arbitrary point. Moreover, the MLS can obtain local interpolation functions from scattered or non-uniform sample data. The local interpolation function in the local domain D can be represented as follows [8]:
Here, D denotes the local domain of estimation, f̃D(x) denotes the local interpolation function, pk(x) is the kth basis function, K represents the number of basis functions, and ak,D(x) denotes the kth coefficient in the local domain D. In addition, x is the position vector, which corresponds to a member of the population. In general, the basis functions for the MLS consist of polynomial functions. For example, the quadratic basis is expressed as $\mathbf{p}(x) = \{1, x_1, x_2, x_1^2, x_1 x_2, x_2^2\}$. The weighted least squares method is used to obtain the coefficient vector aD(x). This has the form
where
Here, ND is the number of sample data instances in the local domain D, LD is the size of the local domain D, xi is the ith sample datum, fi,D is the value of the objective function at xi, wi(x) is the weighting function based on the distance between xi and x, and W(x) is a diagonal matrix of size ND × ND. The weighting function is subject to the following conditions: it should be (1) equal to 1 if x = xi and (2) equal to 0 if |x − xi| > LD [8]. Various weighting functions can be used in the MLS interpolation process [12]. In this paper, the weighting function expressed by Eq. (9) is used. From Eq. (7), the local interpolation function that minimizes JD(x) is expressed as follows:
where
$\phi_i(\mathbf{x}) = \sum_{k=1}^{K} p_k(\mathbf{x}) \left[ \mathbf{A}^{-1}(\mathbf{x}) \mathbf{B}(\mathbf{x}) \right]_{ki}$, $\mathbf{A}(\mathbf{x}) = \mathbf{P}\mathbf{W}\mathbf{P}^{T}$, and $\mathbf{B}(\mathbf{x}) = \mathbf{P}\mathbf{W}$.
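A compact sketch of the MLS evaluation for a 2D position vector with the quadratic basis described above; because Eq. (9) is not reproduced here, an illustrative compactly supported weighting function is used instead:

```python
import numpy as np

def basis(x):
    """Quadratic basis p(x) = [1, x1, x2, x1^2, x1*x2, x2^2] for a 2D point."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1**2, x1 * x2, x2**2])

def weight(r, L_D):
    """Illustrative weight: 1 at r = 0 and 0 for r >= L_D (assumed form)."""
    s = np.clip(r / L_D, 0.0, 1.0)
    return (1.0 - s) ** 2

def mls_interpolate(x, samples, values, L_D):
    """Evaluate the local interpolation f~_D(x) from scattered sample data.
    Requires at least as many samples in the domain as basis functions."""
    x = np.asarray(x, dtype=float)
    samples = np.asarray(samples, dtype=float)            # shape (N_D, 2)
    values = np.asarray(values, dtype=float)              # shape (N_D,)
    P = np.stack([basis(xi) for xi in samples], axis=1)   # K x N_D
    W = np.diag(weight(np.linalg.norm(samples - x, axis=1), L_D))
    A = P @ W @ P.T                                       # A(x) = P W P^T
    B = P @ W                                             # B(x) = P W
    a = np.linalg.solve(A, B @ values)                    # coefficients a_D(x)
    return basis(x) @ a                                   # f~_D(x) = p(x)^T a_D(x)
```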
After obtaining the local interpolation function, the CG method is applied to find the local optimal population in each local domain, as shown in Fig. 3.
For enhanced clarity, Fig. 3 depicts the interpolation concept in one dimension. The blue circles represent the previous populations, and the red circles are the local optima obtained from the interpolation function of each local domain. If the local optimum in the local domain D is better than the previous best value of the objective function (as depicted in Fig. 3), it becomes a candidate parent for the next generation.
A flowchart of the proposed MLS-GA is presented in Fig. 4. In the first step, initial populations are generated and the objective function value is evaluated. The next step involves checking whether new local optima are created in the local domain.
The MLS interpolation is only conducted when the new local optima have been created; otherwise, the MLS process is skipped for computational efficiency.
When performing MLS interpolation, the local optimal point of the obtained local interpolation function is found using the CG method [13]. Subsequently, the selection, crossover, and mutation processes are performed as in the conventional GA [14]. The next step is the replacement process, in which the population of the next generation is selected. This keeps only the best solutions among those generated by the MLS process, those generated by the conventional GA process, and the previous parent solutions. The procedure then returns to the MLS interpolation step and is repeated until the termination criteria are fulfilled.
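A sketch of the MLS step within one generation is given below, reusing the mls_interpolate helper above and SciPy's conjugate-gradient routine. For brevity, only a single local domain centered on the current best individual is shown, whereas the flowchart in Fig. 4 applies this per local domain; the acceptance test against the true objective is one plausible reading of the replacement rule:

```python
import numpy as np
from scipy.optimize import minimize

def mls_local_search(population, fitness, objective, L_D):
    """One MLS/CG refinement step of the hybrid GA (2D illustration).
    Returns an improved candidate parent, or None if no improvement is found."""
    best = population[np.argmin(fitness)]
    in_domain = np.linalg.norm(population - best, axis=1) <= L_D
    surrogate = lambda x: mls_interpolate(x, population[in_domain],
                                          fitness[in_domain], L_D)
    result = minimize(surrogate, best, method="CG")   # local optimum of f~_D
    candidate = result.x
    if objective(candidate) < fitness.min():          # keep only if it beats
        return candidate                              # the previous best value
    return None
```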
The proposed MLS-GA is a hybrid optimization algorithm that combines the CG method (offering fast local search performance) and the conventional GA. Accordingly, the algorithm is capable of strong global search performance and offers a fast convergence rate. To verify the performance, the algorithm was applied to the test functions for optimization and to the optimal design of a thinned array.
IV. Application to Test Functions
The proposed MLS-GA was applied to test functions for verification, and the conventional GA was applied for comparison. Because the proposed thinning design algorithm requires nine design variables and the test function should have many local optima, a 9D Rastrigin function was selected [15]. In addition, a 9D Rosenbrock function was selected because its gentle slope makes it difficult to find the precise optimal value [16]. The two test functions are represented in Eqs. (13) and (14):
The shape of the 9D test functions cannot be drawn; hence, the 2D versions are depicted in Fig. 5 for clarity. The Rastrigin function has many local optima whose values are close to the global optimum, making the global optimum very difficult to find.
For the Rosenbrock function, finding the exact global optimum is difficult because the slope becomes gentler as the global optimum is approached.
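Eqs. (13) and (14) are not reproduced here; the sketch below uses the standard definitions of the two test functions, which are consistent with the minima stated in this section:

```python
import numpy as np

def rastrigin(x):
    """Standard Rastrigin function: global minimum 0 at x = (0, ..., 0)."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def rosenbrock(x):
    """Standard Rosenbrock function: global minimum 0 at x = (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

# 9D evaluation at the respective optima, matching the dimension used here
print(rastrigin(np.zeros(9)), rosenbrock(np.ones(9)))   # 0.0 0.0
```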
For the optimization, the population size was 200, the maximum number of generations was 200, and the termination condition was that the change rate of the average value of the top 20% of solutions over five generations was less than 0.1%. The GA and MLS-GA were applied to the Rastrigin and Rosenbrock functions under these conditions. Both algorithms shared the same initial set of solutions to obtain results under identical conditions [15].
The Rastrigin function has a minimum value of 0 at x = (0,…,0), while the Rosenbrock function has a minimum value of 0 at x = (1,…,1). The optimization was performed 100 times, and the average values were obtained. The optimization results are shown in Figs. 6 and 7 and Table 1. The results in Fig. 6 demonstrate that the MLS-GA found a better solution than the GA. The mean value of the objective function is the average over the entire solution set, and the best value of the objective function is the minimum value. For the MLS-GA, the best value of the objective function was close to the exact solution. Therefore, we conclude that the MLS-GA handles problems with many local optima better than the GA. In Fig. 7, the values are difficult to distinguish because the graphs overlap; therefore, the region from 0 to 50 on the y-axis is enlarged. This also demonstrates that the MLS-GA outperforms the GA.
To analyze the results quantitatively, the optimal values and the function calls were compared, as shown in Table 1. In the Rastrigin function, the MLS-GA had a better optimal value. However, it can be observed that the convergence speed was slower than that of the GA.
As shown in Fig. 6, the MLS-GA quickly found a value close to the optimum. However, it did not satisfy the termination condition until later generations and therefore converged later. This reflects the trade-off in optimization algorithms between solution diversity and convergence rate. For the Rosenbrock function, the MLS-GA is superior to the GA in terms of both optimal value and convergence speed.
V. Optimal Design of a Thinned Array with the MLS-GA
The proposed design algorithm was then applied to a circular planar array with a rectangular grid and circular boundary [1]. The radius was 4.75λ, N = 316, and the uniform spacing of elements was d = λ/2, where λ denotes the wavelength. In this model, 79 elements of the 1st quadrant were aligned linearly according to the distance from the array center. The beam pattern in the array can be calculated according to [17] as follows:
where
In (15), wn is the weighting factor of the nth array element, which is assumed to be uniform with wn = 1. Further, (xn, yn) is the location of the nth array element, and (θ0, φ0) is the steering angle.
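Eq. (15) itself is not reproduced in this excerpt; the sketch below uses the standard planar array-factor expression, which is consistent with the quantities defined above (element positions (xn, yn), weights wn, and steering angle (θ0, φ0)). Normalization and conversion of the pattern to gain in dB are omitted:

```python
import numpy as np

def array_factor(theta, phi, element_xy, w, theta0, phi0, wavelength=1.0):
    """Planar array factor at observation angle (theta, phi), steered toward
    (theta0, phi0); angles in radians, positions in the same unit as wavelength."""
    k = 2.0 * np.pi / wavelength
    u = np.sin(theta) * np.cos(phi) - np.sin(theta0) * np.cos(phi0)
    v = np.sin(theta) * np.sin(phi) - np.sin(theta0) * np.sin(phi0)
    phase = k * (element_xy[:, 0] * u + element_xy[:, 1] * v)
    return np.sum(w * np.exp(1j * phase))             # complex array factor
```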
The GA parameters were set as follows: population number Np = 200, number of iterations NG = 50, and probability of mutation Pu = 0.05. In this model, the thinning objective is to minimize the gain loss and the PSLL while maintaining a thinning coefficient η of ≥ 25%. This minimization problem was performed for all desired azimuth and elevation angles. The objective function of the kth population is defined as
where ζk is the kth population and Nθ and Nφ are the number of elevation and azimuth steering angles, respectively. In (17), [[·]] is defined as follows:
In (17), Gcal refers to the gain derived from the designed thinned array, and Gmin is the minimum desired gain. In this case, Gmin was set to 45 dB. If Gcal ≥ Gmin, the objective function is completely dependent on the PSLL; otherwise, the objective function is determined by adding the gain losses and the PSLLs.
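A hedged sketch of the objective evaluation described above, assuming the bracket operator [[·]] acts as a ramp (i.e., [[x]] = x for x > 0 and 0 otherwise), which matches the stated behavior that the gain-loss term is added only when Gcal < Gmin; the averaging over steering angles follows Eq. (17) only in spirit:

```python
import numpy as np

def thinning_objective(gains_dB, pslls_dB, g_min_dB=45.0):
    """Average, over all steering angles, of the PSLL plus a gain-loss penalty
    that is active only when the calculated gain falls below G_min."""
    gains = np.asarray(gains_dB, dtype=float)     # G_cal per steering angle
    pslls = np.asarray(pslls_dB, dtype=float)     # PSLL per steering angle
    penalty = np.maximum(g_min_dB - gains, 0.0)   # [[G_min - G_cal]]
    return np.mean(pslls + penalty)
```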
To consider the effects of steering angles, we calculated the objective function at two elevation angles, θs = {0°, 30°}, and four azimuth angles, φs = {0°, 30°, 60°, 90°}. Here, only the 1st-quadrant angles in the azimuth direction are considered, in accordance with the rotational symmetry of the array antenna. As mentioned previously, the parameters (α, β, τl) determine the shape of the NCF, while the threshold χ determines the selected array elements. In this paper, it is assumed that 0 < α < 10⁴ and −1 < β < 1, and the number of τl parameters is L = 6. Accordingly, there are nine design variables.
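For illustration, the nine design variables and their bounds can be encoded as follows; the α, β, and χ bounds follow the text, whereas the τl bounds are an assumption made only for this sketch:

```python
import numpy as np

# Design vector: [alpha, beta, tau_1 ... tau_6, chi]
L = 6
lower = np.array([0.0,   -1.0] + [-1.0] * L + [-1.0])   # tau bounds assumed
upper = np.array([1.0e4,  1.0] + [ 1.0] * L + [ 1.0])
assert lower.size == upper.size == 9    # nine design variables, as stated above
```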
Fig. 8 displays the mean and best value of the objective function of each generation after averaging the results of 50 trials. To mitigate any effects due to the choice of initial populations, the same randomized population set was used for the conventional GA and the MLS-GA. As demonstrated in Fig. 8, the MLS-GA achieved better performance in finding the optimal solution and with faster convergence compared to the conventional GA.
At boresight (θs = 0°, φs = 0°), the gains and PSLLs designed by the conventional GA and the MLS-GA are compared in Table 2. It can be observed that the designed gain and PSLL obtained by the GA decreased by 4.7 dB and 3.22 dB, respectively, compared to the full array, while the designed gain and PSLL obtained by the MLS-GA decreased by 4.51 dB and 2.82 dB, respectively. The result of the MLS-GA was superior to that of the GA with regard to gain. However, the GA was superior for the PSLL, because the optimization was performed over every elevation and azimuth angle; this implies that there are some angles at which the results of the GA are superior to those of the MLS-GA. Therefore, it is reasonable to compare the results for various angles, as presented in Fig. 9. In addition, the thinning coefficient of the MLS-GA was slightly less than that of the GA. However, this disadvantage can be ignored because the design satisfied the constraint of η ≥ 25%.
Fig. 10 shows the optimal thinning configurations designed by the conventional GA and the MLS-GA. In this paper, only the elements in the 1st quadrant were designed; however, owing to rotational symmetry, the result can be extended to the entire array, as shown in Fig. 10.
To confirm the beam-steering performance, the gain and PSLL were evaluated as functions of the azimuth angle at a fixed elevation angle, as shown in Fig. 9. In the figure, the gain and PSLL are displayed for azimuth angles ranging from 0° to 360° at an elevation angle of θs = 30°. Generally, the higher the steering angle, the lower the beam-steering performance; therefore, only the result for θs = 30° is considered. The results of both the GA and the MLS-GA satisfied the requirement of Gcal > Gmin (the minimum desired gain of 45 dB) at all azimuth angles. The gain of the MLS-GA was approximately 0.2 dB better than that of the GA, and the PSLL of the MLS-GA was also better than that of the conventional GA. Therefore, the proposed MLS-GA achieves superior performance compared to the GA when designing thinned arrays.
VI. Conclusion
In this paper, a new hybrid GA based on the MLS method was proposed to enhance the convergence rate and the global search performance. With the MLS, a local interpolation function is constructed in each local domain, and solutions that are superior to the previous best value of the objective function are obtained.
These solutions are included in the new solutions for the next generation. A nonlinear chirp function was proposed to achieve an efficient thinning design, and the design variable that determined the shape of the nonlinear chirp function was defined. To verify the proposed MLS-GA, it was applied to a test function for optimization and then to a design problem of a thinned array. The proposed algorithm was found to be superior to the conventional GA in terms of both the global search performance and the convergence rate.