1. Principle of the PSO-PID Algorithm
The PID algorithm is a feedback control algorithm that has been widely employed in industrial control owing to its simple structure, excellent stability, reliable operation, and convenient adjustability. Its fundamental concept is to adjust the system's control input based on the disparity between the desired output and the actual output by applying proportional, integral, and derivative control actions. By appropriately setting the proportional, integral, and derivative coefficients, this adjustment minimizes the deviation between the system output and the desired value, thereby ensuring a rapid response time, accurate tracking, and improved overall system stability.
Furthermore, in contrast to the proportional-control-based negative feedback iterative method, the PID algorithm is capable of eliminating steady-state error [6]. Steady-state error refers to the discrepancy between the desired output value and the actual output value once the control process approaches stability. Notably, the integral component plays the primary role in reducing steady-state error and enhancing system accuracy. The strength of the integral action depends on the integration time constant: a larger integration time constant results in a weaker integral action, and vice versa. Meanwhile, the derivative component reflects the trend of the deviation signal (i.e., its rate of change) and introduces an early correction signal into the system before the deviation becomes too large, thereby accelerating the system response and shortening the adjustment period.
Considering this context, the relationship between the expected output of the system, denoted ỹ(n), and the actual output y(n), referred to as the error e(n), can be expressed as Eq. (1):

e(n) = ỹ(n) − y(n).   (1)
Therefore, according to the definition of the PID algorithm, the control law can be expressed as Eq. (2), where kp, ki, and kd are the coefficients for the proportional, integral, and derivative terms, respectively.
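For reference, a standard discrete-time (positional) PID control law consistent with this description is

u(n) = kp·e(n) + ki·Σ(j=0..n) e(j) + kd·[e(n) − e(n−1)],

where u(n) denotes the control input (a symbol introduced here for illustration); the exact form of Eq. (2) may differ, for example by using the incremental rather than the positional formulation.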
As previously mentioned, the appropriate selection of PID parameters plays a crucial role in a negative feedback system. The estimation methods for these parameters can be categorized into traditional approaches and intelligent algorithms. Common traditional methods include empirical techniques, trial-and-error procedures, and root locus analysis. These methods rely heavily on manual expertise, involve time-consuming trial-and-error iterations, lack global search capabilities, require an accurate system model, and have a limited range of applicability.
Intelligent algorithms, on the other hand, are capable of conducting global searches, which enables them to avoid being trapped in local optima and find a global optimal solution or a solution close to it for the PID parameters. Furthermore, they can optimize parameters automatically, thus reducing manual intervention and improving the efficiency and accuracy of parameter optimization. Additionally, they exhibit strong adaptability and a considerable degree of generalization ability, which allows them to quickly identify suitable PID parameters for different control systems, thereby enhancing control performance. Moreover, intelligent algorithms can be easily implemented in parallel computing, where they take full advantage of the multicore processors of modern computers to improve computational speed.
As shown in Fig. 2, in this study, we employed the PSO algorithm to optimize the PID parameters. PSO is a swarm intelligence-based optimization algorithm that emulates the foraging behavior of bird flocks to identify the optimal solution to a given problem; as a result, it handles continuous nonlinear problems effectively [7]. Originally proposed by Eberhart and Kennedy [8], this approach conceptualizes the entire population as a swarm in which each particle explores a predefined search space to attain its individual best solution while interacting with the other particles to approach the global optimum. The algorithm comprises displacement and velocity update equations, which are governed by factors such as the inertia weight, the personal and global best acceleration coefficients, the number of particles, and the number of iterations.
In this heuristic search technique, each agent evaluates its position in the solution space to update its personal best position, which in turn updates the global best position toward the globally optimal solution. Meanwhile, the velocity components are calculated based on the displacement of each particle. The acceleration constants c1 and c2, which weight the movement of the particles toward their individual and global best positions, respectively, determine the speed and displacement of the particles. In this context, it is crucial to balance these constants: a higher value compels the particles to move abruptly toward, or past, the target region, whereas a lower value allows them to roam far from the target region before being pulled back. Notably, the inertia weight ω governs the trade-off between exploration and convergence toward the individual and global best positions.
Eqs. (3) and (4) present the fundamental velocity and displacement update formulas for standard swarm particles:

Vi^(k+1) = ω·Vi^k + c1·r1·(Pi^k − Xi^k) + c2·r2·(Pg^k − Xi^k)   (3)

Xi^(k+1) = Xi^k + Vi^(k+1)   (4)

where ω is the inertia weight; c1 and c2 are non-negative acceleration constants; Vi^k and Xi^k are the velocity and displacement of particle i in the k-th iteration; r1 and r2 are random numbers uniformly distributed in the interval [0, 1]; Pi^k is the individual best position of particle i in the k-th iteration; and Pg^k is the population extremum (global best position) in the k-th iteration.
To better balance the global and local search capabilities of the algorithm, Shi and Eberhart [8] proposed a linearly decreasing inertia weight, as depicted in Eq. (5):

ω = ωstart − (ωstart − ωend)·k/Tmax   (5)

In this equation, ωstart represents the initial inertia weight, ωend denotes the inertia weight at the maximum iteration count, k is the current iteration number, and Tmax signifies the maximum number of iterations. Optimal performance is usually achieved by setting ωstart = 0.9 and ωend = 0.4.
The flowchart in Fig. 3 illustrates the procedural steps involved in PSO. Initially, the positions and velocities of the particles within a population are randomly initialized, after which the fitness value of each particle is computed to find the individual and global optima. Based on these optima, the positions and velocities of the particles are updated. This iterative process continues until a termination condition is met. Notably, each particle represents the three parameters of a PID controller: kp, ki, and kd. Furthermore, the fitness of each particle is evaluated by calculating the normalized mean square error (NMSE) between the actual output and the distortion-free output obtained through the PID negative feedback iteration. The NMSE is then compared against a predefined threshold to determine whether it satisfies the specified condition. If it fails to meet this condition, the particle swarm is updated. The program terminates either when the NMSE meets the set condition or when the maximum number of iterations is reached.
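As a minimal illustrative sketch of this procedure (not the authors' implementation; the fitness function, parameter bounds, and constants below are assumptions), the PSO loop for tuning kp, ki, and kd with an NMSE-based fitness and the linearly decreasing inertia weight of Eq. (5) could be written as follows:

import numpy as np

def nmse(actual, desired):
    """Normalized mean square error between actual and desired outputs."""
    return np.sum((actual - desired) ** 2) / np.sum(desired ** 2)

def pso_pid(fitness, n_particles=30, n_iter=100, bounds=(0.0, 10.0),
            c1=2.0, c2=2.0, w_start=0.9, w_end=0.4, tol=1e-4):
    """Tune [kp, ki, kd] with standard PSO.

    `fitness(gains)` is a user-supplied (hypothetical) function that runs the
    PID negative-feedback iteration with the given gains and returns the NMSE
    between the actual output and the distortion-free output.
    """
    dim = 3                                              # kp, ki, kd
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, dim))  # random initial positions
    vel = np.zeros((n_particles, dim))                   # zero initial velocities
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    g = int(np.argmin(pbest_val))
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]

    for k in range(n_iter):
        w = w_start - (w_start - w_end) * k / n_iter     # Eq. (5)
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # Eq. (3)
        pos = np.clip(pos + vel, lo, hi)                                   # Eq. (4)
        val = np.array([fitness(p) for p in pos])
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        g = int(np.argmin(pbest_val))
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
        if gbest_val < tol:          # NMSE meets the preset accuracy condition
            break
    return gbest, gbest_val

Here, the nmse() helper would typically be called inside fitness() to score each candidate gain set.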
3. Accuracy Evaluation Function
During the conversion of digital signals to analog signals, the DAC introduces harmonic distortion into the output signal owing to non-ideal device characteristics such as nonlinearity, limited bandwidth, and noise. To mitigate the generation of harmonics, a digital predistorter can be cascaded before the DAC to preprocess the signal. In this regard, a traditional memory polynomial [9] can be expressed as Eq. (7), where q represents the memory term, k denotes the nonlinear order, and ckq signifies the model coefficients.
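A common form of such a real-valued memory polynomial, consistent with the power terms x^k(n − q) discussed below (the maximum nonlinear order K and memory depth Q are symbols assumed here), is

y(n) = Σ(k=1..K) Σ(q=0..Q) ckq·x^k(n − q).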
Notably, in Eq. (7), the k-th-order power term x^k(n) of the input signal encompasses not only the k-th harmonic components and their associated sum and difference frequency components but also the nearby lower-order harmonics and their corresponding sum and difference frequency components. As illustrated in Fig. 4, when the input signal x(n) is a two-tone signal, x^6(n) generates the fourth harmonic, the second harmonic, and related spectral components. Similarly, x^4(n) also produces second harmonic components. However, if we intend to modify only the second harmonic component, it becomes necessary to adjust x^2(n), x^4(n), and x^6(n). Furthermore, altering x^4(n) and x^6(n) would inevitably affect both the fourth and the sixth harmonics, which would compel us to deviate from our intended objective. Hence, one of the primary focuses of this paper is to modify x^6(n) while minimizing any influence on lower-order harmonics.
First, we applied the Hilbert transform [10] to the real-valued signal x(n), as shown in Eqs. (8) and (9), where * denotes convolution and j represents the imaginary unit. In this way, we transformed x(n) into a complex-valued signal z(n); its k-th power z^k(n) then contains the pure k-th harmonic components along with the surrounding parasitic components.
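A standard analytic-signal construction consistent with this description (the exact discrete Hilbert kernel h(n) used in Eq. (8) is an assumption here) takes the form

x̂(n) = x(n) * h(n),   z(n) = x(n) + j·x̂(n),

where h(n) denotes the impulse response of the Hilbert transformer.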
Fig. 5 illustrates the spectra of z(n), z^2(n), z^3(n), and z^4(n) for a two-tone signal. It is observed that, compared with x^4(n), modifying z^4(n) does not affect the fourth or second harmonic. However, the lower-order components are missing in z^4(n), which requires compensation.
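As a small numerical illustration of this distinction (the sampling rate, tone frequencies, and band edges below are arbitrary assumptions), the following sketch compares the energy of x^4(n) and z^4(n) in the second- and fourth-harmonic zones of a two-tone signal; the real power term shows energy in both zones, whereas the analytic power term is confined to the fourth-harmonic zone:

import numpy as np
from scipy.signal import hilbert

fs = 10_000                          # sampling rate (arbitrary)
n = np.arange(4096)
f1, f2 = 190.0, 210.0                # two tones around a 200 Hz center frequency
x = np.cos(2 * np.pi * f1 * n / fs) + np.cos(2 * np.pi * f2 * n / fs)

z = hilbert(x)                       # analytic signal z(n) = x(n) + j*x_hat(n)

freqs = np.fft.fftfreq(n.size, d=1 / fs)
spec_x4 = np.abs(np.fft.fft(x ** 4))   # energy in the 2nd- and 4th-harmonic zones (and baseband)
spec_z4 = np.abs(np.fft.fft(z ** 4))   # energy confined to the 4th-harmonic zone near 800 Hz

def zone_energy(spec, f_center, half_width=60.0):
    """Sum spectral magnitude within a band around f_center (positive frequencies)."""
    mask = (freqs > f_center - half_width) & (freqs < f_center + half_width)
    return spec[mask].sum()

for name, spec in (("x^4", spec_x4), ("z^4", spec_z4)):
    print(name,
          "2nd-harmonic zone:", round(zone_energy(spec, 400.0), 1),
          "4th-harmonic zone:", round(zone_energy(spec, 800.0), 1))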
We exploited the spectral displacement of z^k(n) to align it with the k-th power term of the lower-order components of x(n). In digital communication systems, the center frequency fc and the bandwidth of the digital signal sent to the DAC are often known a priori. Therefore, by taking the complex conjugate of the center-frequency carrier, we obtained a digitally sampled carrier at −fc in the negative frequency domain. Following this, we calculated the spectrum-shifting sequence fc,m(n) using Eq. (10), where m represents the number of shifts.
By multiplying z^k(n) with fc,m(n), the frequency-spectrum shift of z^k(n) is completed, as demonstrated in Eq. (11). Following this shift, we obtained the low-order harmonic components of x^k(n), along with the sum and difference frequency components surrounding these harmonics. Notably, to differentiate it from z^k(n), we denote the result obtained after the frequency-spectrum shift as z^k_l(n). Fig. 6 presents the simulation results obtained using z^6(n) of a two-tone signal as an example.
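A minimal sketch of such a spectrum shift, assuming a complex-exponential shifting sequence fc,m(n) = exp(−j·2π·m·fc·n/fs) (the exact definition in Eq. (10) may differ), is:

import numpy as np
from scipy.signal import hilbert

def shift_power_term(x, k, m, fc, fs):
    """Shift the spectrum of z^k(n) down by m*fc to cover lower-order zones.

    Assumes a complex-exponential shifting sequence; the paper's exact
    fc,m(n) from Eq. (10) may differ in detail.
    """
    n = np.arange(x.size)
    zk = hilbert(x) ** k                                 # k-th power of the analytic signal
    fcm = np.exp(-1j * 2 * np.pi * m * fc * n / fs)      # assumed shifting sequence
    return zk * fcm                                      # z^k_l(n): spectrum-shifted term

# Example: move z^6(n) of a two-tone signal down by two harmonic zones (m = 2)
fs, fc = 10_000, 200.0
n = np.arange(4096)
x = np.cos(2 * np.pi * 190.0 * n / fs) + np.cos(2 * np.pi * 210.0 * n / fs)
z6_l = shift_power_term(x, k=6, m=2, fc=fc, fs=fs)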
Using Eq. (9), we calculated z^k(n) and then multiplied it by the shifting sequence of Eq. (10) to obtain z^k_l(n), as depicted in Eq. (11). Subsequently, by optimizing x^k(n) in Eq. (7), we formulated Eq. (12). In this context, it is important to note that the optimization process for the memory terms follows a similar approach to that for the current term; elaborating on it further is, however, beyond the scope of this study. Notably, Eq. (12) reveals that x^k(n − q) exhibits a higher degree of parameterization.
By substituting Eq. (12) into Eq. (7) and applying the Hilbert transform to y(n), we obtained Eq. (13), which represents the HTBMP model. The flowchart of this model is illustrated in Fig. 7.
The original signal was used as the input to the predistorter, while the PSO-PID algorithm was employed to obtain the optimal input signal xd(n) that achieves a harmonic-free DAC output. Once sufficient accuracy was achieved, the optimized signal was used as the output of the predistorter. This can be expressed as Eq. (14), where a⃗ represents the parameter vector of the HTBMP model. Subsequently, the parameter vector a⃗ was extracted using the least squares method proposed in [11], formulated as Eq. (16).
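A standard least-squares solution consistent with this description, assuming a basis matrix X whose columns are the HTBMP basis functions evaluated on the original signal and a target vector xd collecting the optimized samples xd(n), is

a⃗ = (X^H X)^(−1) X^H xd,

where (·)^H denotes the conjugate transpose.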