
Control Theory


Written by Kerem Muldur



For the last ten years, we have been swimming in a pool of news that sounds like it comes from 2050: self-driving cars, drones, and humanoid robots. What all those systems have in common is that we want them to move or act in a specific way by a particular amount. I want my drone to fly at an altitude of 20 meters and turn 30 degrees to the left, or I want my self-driving car to follow the road from home to the office on its own. Have you ever wondered how non-linear actuators and motors drive a system so that it moves in a way that matches your input values?


Even we, as people, cannot always act linearly. For example, suppose we have a sheet of paper tilted at one edge and we want to flatten it. If it is tilted to the right, we tilt it to the left so that the two effects balance out and we get a fixed, flat surface. However, after you apply a moment to the edge in the negative direction, you will probably end up with the edge tilted to the left. You apply a moment again, and now it is tilted to the right once more. Even though you are not doing this intentionally, you are comparing your desired output with the current situation and applying a force or moment to change that situation.


Control theory is the branch of science and engineering that deals with designing optimal control mechanisms for dynamic systems, steering the system's current state toward a desired outcome. That system might be your folded paper or your self-driving automobile. Your action to change the system is the control input, like the force applied to the gas pedal to change the car's state, its position.


A basic dynamic model consists of the system, the control input, and the state. However, there are also other factors interacting with the system that are outside our control, called disturbances, such as air drag and internal efficiency losses.


Sometimes the input you give a system is not the same as the input the system actually acts on, because of the system's architecture. To make this concrete: when you fly a drone, you move a joystick to make the drone go from one place to another; however, it is the propellers' speed that determines its position.


The feed-forward mechanism converts your input into another input that matches the system, just like converting a joystick's signal into a propeller speed. Feed-forward mechanisms are crucial for making a system behave linearly, adjusting its inner mechanism according to your reference.


However, sometimes you give a system a reference, but the outcome does not align with that reference. In that case, we need a mechanism that compares the current situation of the system to our reference and adds or subtracts values to approach it. That is exactly what feedback mechanisms do.


Feedback mechanisms subtract the current state from the reference value to obtain the error term, and then create a control input by transforming that error term. How the error term is evaluated varies between feedback mechanisms, each suited to a different type of system.
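The idea above fits in a few lines. Here is a minimal sketch, with hypothetical names: the controller never sees the system directly, only the error between reference and measured state.

```python
# Minimal sketch of one step of a feedback loop (names are illustrative).
def feedback_step(reference, measured_state, controller):
    error = reference - measured_state   # compare desired vs. actual state
    return controller(error)             # transform the error into a control input

# A trivial controller: push harder the larger the error is.
proportional = lambda error: 2.0 * error

u = feedback_step(50.0, 30.0, proportional)
print(u)  # 40.0: control input produced for an error of 20
```

Swapping in a different `controller` function is exactly what distinguishes the feedback mechanisms discussed below.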


There are linear feedback mechanisms, like PID and full-state feedback, that assume the system behaves linearly and predictably. If that is not the case, there are nonlinear feedback controllers like on-off and sliding-mode controllers. To deal with even more unexpected behavior, robust feedback mechanisms such as μ-synthesis and active disturbance rejection control are used. The list goes on and on, because control engineers have developed many kinds of control mechanisms for particular needs.


Let’s break down a single feedback mechanism, one of the most commonly used ways of controlling a system.


PID is a linear feedback mechanism that combines three transfer-function terms to decide the control input. The P stands for proportional, the I for integral, and the D for derivative. Each part has a coefficient called a gain, which decides how much that part contributes to the combined control input. Let’s walk through three cases to understand why we need each of these parts.


Imagine you want to walk to a spot 50 meters ahead. Your reference is 50 meters, and your current position is 0. That means your error term is 50 meters.


Our brain sets the velocity of our legs depending on that error term. In a proportional control mechanism, we pick a coefficient, the gain, and multiply it by the error term to decide the input, which here is the walking speed. Thus, if the error term is 50 meters, my speed is 5 m/s, and if it is 40 meters, my speed is 4 m/s: the control input scales with the error. Simply put, we multiply the gain by the error term and feed the result to the system as the control input. This proportional mechanism naturally slows down as I approach my goal and works well for the walking situation.
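A short simulation makes the slowing-down behavior visible. The gain of 0.1 is chosen to match the text (an error of 50 m gives 5 m/s); the one-second time step is an assumption of this sketch.

```python
# Proportional control of the walking example: speed = gain * error.
def p_controller(gain, reference, position):
    return gain * (reference - position)

position, gain, reference, dt = 0.0, 0.1, 50.0, 1.0
for _ in range(60):
    speed = p_controller(gain, reference, position)
    position += speed * dt          # walk for one second at that speed
print(round(position, 2))           # creeps up on 50 m, slowing as the error shrinks
```

Each step covers 10% of the remaining distance, so the walker closes in on 50 meters ever more slowly but never oscillates past it.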


Let’s test our mechanism in another situation. Take a drone that needs to reach an altitude of 50 meters. The control input is the propeller speed (RPM), and the state is the altitude. If the propeller speed were set purely in proportion to the error term, the drone would command zero speed on reaching 50 meters, fall back, build up an error term that raises it again, and then fall once more at the top, oscillating unstably. To fix this, we can raise the gain so that at some point the rpm equals the minimum needed to hover, say 100 rpm. If my gain is 5, the error term is initially 50 meters, so the rpm is 5 × 50 = 250; but once the error term drops to 20 meters, the output is exactly 100 rpm, just enough to hover, and the drone can no longer increase its altitude to 50 meters.
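A toy simulation shows the stall. The dynamics here are an assumption of this sketch, not a real drone model: the climb rate is taken to be proportional to the rpm above the 100-rpm hover point.

```python
# P-only control of a toy drone: climb rate proportional to excess rpm.
HOVER_RPM, GAIN, K = 100.0, 5.0, 0.02   # hover rpm, P gain, climb (m/s) per excess rpm
altitude, dt = 0.0, 0.1
for _ in range(2000):                    # simulate 200 seconds
    rpm = GAIN * (50.0 - altitude)       # proportional control on the error
    altitude += K * (rpm - HOVER_RPM) * dt
print(round(altitude, 1))                # 30.0: stuck 20 m short of the 50 m reference
```

The drone settles exactly where the proportional output equals the hover rpm, at an error of 100 / 5 = 20 meters, just as the paragraph above predicts.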


We might try increasing the proportional gain, so that even a small error produces enough RPM to hover. But that only reduces the error, it never fully removes it; and if you tune it perfectly for 50 meters, it will not work when you change the reference to 100 meters. That leftover offset is the steady-state error. To eliminate it, we use an integrator, which accumulates the error over time and produces an output based on that accumulation. In our drone case, while the error sits at the level where the proportional controller alone cannot push the drone any higher, the area under the error-time graph keeps growing, adding extra output to the system. As the drone climbs, the error shrinks, so that area grows more slowly; the accumulated sum changes less over time, and the drone's climb rate eases off. When the drone reaches 50 meters, the error term is zero, so the integrator stops accumulating; its output no longer changes, holding steady at exactly the value needed to hover. This combination of proportional and integral control is called a proportional-integral, or PI, controller. Let’s see if this algorithm works for our final case.
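Adding the integrator to the toy drone from before removes the offset. The integral gain of 2.0 is an illustrative choice for this sketch, not a tuned value from the text.

```python
# PI control of the same toy drone: the integrator accumulates the error.
HOVER_RPM, KP, KI, K = 100.0, 5.0, 2.0, 0.02
altitude, integral, dt = 0.0, 0.0, 0.1
for _ in range(5000):                    # simulate 500 seconds
    error = 50.0 - altitude
    integral += error * dt               # area under the error-time graph
    rpm = KP * error + KI * integral
    altitude += K * (rpm - HOVER_RPM) * dt
print(round(altitude, 1))                # 50.0: the integral term removed the offset
```

At steady state the error is zero, the proportional term contributes nothing, and the accumulated integral alone supplies the 100 rpm needed to hover.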


Still, a PI system cannot anticipate the overshoot it is about to cause. In our drone case, say we are approaching the 50-meter setpoint quickly; until we arrive, the integrator keeps adding the remaining error, so at the setpoint it commands more than the 100 rpm needed to hover. In other words, even though the error term is zero the moment I reach my setpoint, the accumulation from before that moment is nonzero, giving a speed larger than hovering even though I have already arrived. This is where the derivative part comes in. The derivative term calculates the rate of change of the error and produces a counterbalancing value. To put it another way, it takes the slope of the error curve over time; as we rush toward the setpoint, that slope is negative, so the derivative term pulls against the over-grown output of the integrator. The designer's main job is to tune the gain of each part so that the whole system is well balanced.
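Putting all three parts together, one step of a PID controller can be sketched as a single function. The names and gains here are hypothetical; the derivative is approximated, as is common, by the difference between consecutive error samples.

```python
# One step of a PID controller (illustrative sketch, not a tuned design).
def pid_step(reference, state, gains, memory, dt):
    kp, ki, kd = gains
    error = reference - state
    memory["integral"] += error * dt                       # I: accumulate the error
    derivative = (error - memory["prev_error"]) / dt       # D: slope of the error
    memory["prev_error"] = error
    return kp * error + ki * memory["integral"] + kd * derivative

memory = {"integral": 0.0, "prev_error": 0.0}
u = pid_step(50.0, 48.0, (5.0, 2.0, 1.0), memory, 0.1)
print(round(u, 1))  # 30.4: P contributes 10, I contributes 0.4, D contributes 20
```

On each call, the dictionary carries the controller's memory (the accumulated integral and the previous error) from one time step to the next.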






