Preamble
Below is a brief writeup I'd like to share with the community, in particular with those of you who are concerned with the design of control components. There are subtle (and sometimes counterintuitive) aspects in determining the control rate that we should account for; they are often underrated and can even trick the experts.
Put differently, find below a couple of lifesavers that can help you get around those difficult situations when your boss pushes you to increase the control rate – especially if the HW resources seem to allow doing so – and you lack the right words to argue back 😆
ℹ Intro
We all know that there are good arguments for increasing the control rate or, equivalently, for making the sampling time $T_s$ smaller. Just to cite the most common ones:
to be able to fully account for the signal bandwidth and avoid aliasing effects (the Nyquist theorem);
to reproduce the open-loop time response of the system precisely enough;
to react promptly to disturbances.
But did you know that making $T_s$ smaller is not always a good move and can even be detrimental?
⚪ Instability and numerical errors
When we discretize a continuous system, we basically transform the poles $s_i$ into their discrete counterparts $z_i$ by applying the relation:
$$
z_i = e^{s_iT_s}.
$$
Now, what if $T_s \rightarrow 0$? The discrete poles move toward the border of the stability region delimited by the unit circle, i.e., $z_i \rightarrow 1$, regardless of the position of $s_i$!
This is an undesired effect and poses problems when working with finite-precision machines!
Consider the example proposed in 📚 Automatic Control – Sampling, where two poles $s_1=-1$ and $s_2=-10$ need to be discretized.
|  | $s_1 = -1$ | $s_2 = -10$ |
| --- | --- | --- |
| $T_s = 0.001$ s | $z_1 \approx 0.9990$ | $z_2 \approx 0.9900$ |
| $T_s = 0.1$ s | $z_1 \approx 0.90$ | $z_2 \approx 0.36$ |
If we truncate after two decimal digits, it's easy to check how quickly we get in trouble with the fastest sampling time $T_s = 1 \text{ ms}$: both discrete poles collapse onto $0.99$, so two very different continuous-time dynamics become numerically indistinguishable.
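As a quick sanity check, here's a minimal Python sketch (not part of the referenced material, just an illustration) that reproduces the numbers in the table and applies the two-decimal truncation:

```python
import math

# Continuous-time poles and sampling times taken from the table above.
s_poles = [-1.0, -10.0]

for Ts in (0.001, 0.1):
    z = [math.exp(s * Ts) for s in s_poles]             # exact discrete poles
    z_trunc = [math.floor(zi * 100) / 100 for zi in z]  # keep two decimal digits
    print(f"Ts = {Ts:5} s -> z = {[round(zi, 4) for zi in z]} -> truncated: {z_trunc}")

# With Ts = 1 ms both truncated poles collapse onto 0.99, so two very different
# continuous-time dynamics become numerically indistinguishable, whereas with
# Ts = 0.1 s the truncated poles remain clearly distinct.
```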
⚪ Noise and uncertainties
The derivative term is the naughtiest beast among the three ingredients of a PID controller. Even when the D term is applicable because the system has slow and well-predictable dynamics, we should limit its gain since it amplifies high-frequency noise.
Sometimes, we're forced to make use of the derivative action to reduce overshoots or stabilize systems where a PI alone is not sufficient (e.g., a double-integrator plant).
In the latter context, we should pay attention not only to the frequency content of the noise but also to the magnitude of uncertainties $\omega$ that may affect the measurements.
In a discrete derivative computation, the uncertainty $\omega$ in the feedback in fact gets divided by the sampling time $T_s$; therefore, the smaller $T_s$ is, the larger the error in the computation of the derivative term!
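To make this explicit, here's a short sketch of the computation, assuming a plain backward-difference implementation of the derivative and measurements $\tilde{y}[k] = y[k] + \omega_k$ with bounded uncertainty $|\omega_k| \le \bar{\omega}$ (notation introduced here just for illustration):
$$
\frac{\tilde{y}[k] - \tilde{y}[k-1]}{T_s}
= \frac{y[k] - y[k-1]}{T_s} + \frac{\omega_k - \omega_{k-1}}{T_s},
$$
where the second term is bounded by $2\bar{\omega}/T_s$: halving $T_s$ doubles the worst-case error contributed by the measurement uncertainty.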
You can find an extensive demonstration of this impact in this nice 🌐 SE post, which warns the reader to be careful when judging the effects of an increased sampling rate in relation to the structure of the controller at hand.
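A minimal simulation sketch shows the same scaling in practice; the unit-ramp signal, the noise amplitude, and the sampling times below are arbitrary choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def max_derivative_error(Ts, noise_amplitude=1e-3, T_end=1.0):
    """Worst-case backward-difference derivative error due to bounded noise.

    The clean signal is a unit ramp, whose backward difference is exactly 1,
    so any deviation from 1 is caused by the measurement noise alone.
    """
    t = np.arange(0.0, T_end, Ts)
    y_meas = t + noise_amplitude * rng.uniform(-1.0, 1.0, t.size)  # noisy measurement
    d_est = np.diff(y_meas) / Ts        # backward-difference derivative
    return np.max(np.abs(d_est - 1.0))  # error w.r.t. the true derivative (= 1)

for Ts in (0.1, 0.01, 0.001):
    print(f"Ts = {Ts:6} s -> max derivative error ~ {max_derivative_error(Ts):.3f}")

# The error stays within the 2*noise_amplitude/Ts bound sketched above:
# every tenfold decrease of Ts makes the derivative roughly ten times noisier.
```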
🔳 Outro
The take-home message is clearly that:
There exist subtle conditions, mostly related to the numerical precision, the magnitude of noise/uncertainties in the system, and the structure of the controller, in which closing the loop faster can cause unexpected headaches.