Tests and experiments are an inherent part of every research and development process. When developing a flying vehicle, we can broadly distinguish between static tests of the whole system (or a part of it), conducted safely in the comfort of a laboratory, and flight tests, in which the newly designed and developed assets are put through their paces under real-life conditions. The former are of vital importance, never to be underestimated, and often determine the outcome of the latter. Flight tests are inherently expensive and dangerous, and thorough preparation is essential to maximize the chances of a positive result; this is why the static tests are conducted. Let us now describe the testing methodology used in the development of the RAMA system and present the most important testing results.
The goal of the static tests is to validate the system as far as possible without flying and to prove its reliability before the flight tests. The latter is especially important, because every part of a UAV control system is more or less safety-critical and any in-flight failure may have grave consequences. By validation we mean verifying that the newly developed part of the system really does what it is supposed to do and fulfills all the requirements. The goal of the reliability testing, on the other hand, is to verify that the system is dependable and safe enough to fly.
There is no practical way to describe the validation process in general, because every feature is different and the validation tests must be tailored accordingly. In contrast, the testing of the overall system reliability and safety is, and should be, a codified process, ensuring that no corners are cut when preparing for a flight test. The term “reliability” describes the probability of a failure; the term “safety”, on the other hand, describes the consequences of a failure. When testing reliability, the system must pass a set of test cases without any error; to test safety, errors or non-standard situations are deliberately induced into the system and its reaction is evaluated.
For the RAMA system, a general testing methodology has been developed and practiced over the years. It is called the Flight Readiness Test (FRT). It is a set of test cases and focal points for both the software and the hardware, covering the avionics as well as the vehicle.
New functions implemented in the avionics are validated and tested using some kind of processor-in-the-loop test. The testing setup for this kind of test is depicted in the figure. A Matlab/Simulink model of the newly tested feature runs on simulated data. The same data are fed into the control algorithm running on the real hardware, and the results of the simulation and the implementation must match. The evaluation is done offline, for the sake of convenience. This kind of test not only validates the algorithm, but can also discover systematic implementation errors.
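The offline comparison step can be sketched as follows; this is a minimal illustration, not the actual RAMA tooling, and the function and parameter names are assumptions.

```python
import numpy as np

def compare_pil_logs(reference, implementation, tol=1e-6):
    """Offline comparison of a control algorithm simulated in
    Matlab/Simulink against the same algorithm running on the real
    hardware, both fed with identical input data (a sketch; names
    and the tolerance are illustrative).

    Returns (passed, worst_deviation)."""
    reference = np.asarray(reference, dtype=float)
    implementation = np.asarray(implementation, dtype=float)
    if reference.shape != implementation.shape:
        raise ValueError("logs differ in length - samples were lost")
    # Element-wise deviation; a systematic implementation error
    # typically shows up as a constant or growing offset.
    error = np.abs(reference - implementation)
    return bool(np.all(error <= tol)), float(error.max())
```

A run of the test would feed the recorded hardware output and the Simulink output into this check and require it to pass for every logged channel.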
Once the system is validated, the reliability tests take place. The avionics is left running for a sufficiently long time (an hour or so) and is stimulated with an arbitrary signal emulating likely in-flight conditions. The control outputs, as well as the internal states of the algorithms, are monitored through the telemetry and checked for error codes or any other anomalies.
The hardware is also inspected:
Safety is tested by intentionally inducing errors into the system. In some special cases these can be induced by the software itself, but the standard test involves disconnecting, reconnecting and resetting various system parts at run time. The standard testing procedure involves:
The control system must respond correctly to all these states and the appropriate fail-safe mode must be engaged. Manual control must be preserved at all times (the obvious exception being a control signal loss; in that case, the Critical Failure Mode must be engaged, and disengaged again after the signal is reacquired).
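The required reaction can be summarized in a small decision function; this is an illustrative sketch only, and the mode and fault names are assumptions rather than the actual RAMA interface.

```python
def failsafe_mode(faults):
    """Return the mode the control system should engage for a set of
    active faults (a sketch; names are illustrative). Any fault other
    than a control-signal loss must still leave manual control
    available; only signal loss engages the Critical Failure Mode."""
    if "control_signal_lost" in faults:
        return "critical_failure"       # disengaged again on reacquisition
    if faults:
        return "manual_with_failsafe"   # degraded, but manual control preserved
    return "normal"
```

During the safety tests, each induced fault (disconnect, reconnect, reset) would be checked against the mode the system actually entered.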
The Hirobo Freya helicopter is also inspected before each flight test:
The flight tests are the ultimate proving ground for each system part. They play an irreplaceable role in the development process and provide invaluable real-life experience. However, each flight test is a substantial logistical undertaking and also poses an inherent risk of vehicle damage or even complete loss. There is no second chance if anything goes fatally wrong in flight; that is why the static tests are so important. Each flight test has to be well prepared, the testing setup outlined and proven, and the testing priorities set, to avoid any confusion in the field. A flight test usually lasts 1-3 days, and only a limited number of flights is available (at most 10-14 flights a day) due to several constraints (weather, battery capacity, pilot fatigue and others). It is not possible to introduce any major last-minute changes to the testing setup during the process, which emphasizes the need to statically test the hardware and software thoroughly beforehand and to clearly define the testing means, goals and priorities. It would be technically possible to update the flight software in the field, but this is never done as a matter of principle, because such behavior would greatly increase the probability of introducing a human error.
A flight test is also relatively expensive in both money and labor, because of the logistical challenge and the hardware attrition and maintenance, so what is worth flight-testing should be carefully considered, so as not to waste precious flight time. It is not possible to flight-test every bright idea that comes along; the testing objectives must be carefully selected.
After each flight day, a post-flight review takes place. The data are inspected from several viewpoints. The performance of the tested feature is naturally evaluated, but not only that. Several standard procedures also take place:
Many flight days were dedicated to instrumentation development, because many problems were encountered with nearly all the sensors under flight conditions. The Inertial Measurement Unit proved to be the most problematic, and countless flights had to be dedicated to remedying the situation. See the figure to the left, where the difference between the data provided by the same sensor within the IMU is shown, depending on the mounting of the unit. The mass damper, described in section Inertial Measurement Issues, had to be developed and tuned properly before any serious control experiments could begin. The x-axis accelerometer measurements in hover are shown in the figure: the blue signal corresponds to an early type of IMU fixture, the red signal to the IMU mounted on the mass damper.
The Garmin 18-LVC GPS receiver also proved problematic in various ways (see section EMC Issues). Its mass, located at the very tip of the tailboom, acted as a perfect vibration resonator and induced severe shivering into the airframe. It was very hard to de-tune the system to rectify the issue, because of a hard-to-discover interaction between the horizontal fin and the receiver itself. The fin eventually had to be removed and the receiver mounted relatively loosely on foam dampers. Relocating the receiver was impossible, because it did not work under the main rotor blades, which obscured the signal. The first figure shows an example. The GPS position fix expresses the mode the GPS receiver works in: 1 corresponds to no fix (no signal received), 2 to the 2D mode (no altitude data), 3 to the 3D mode, 4 to the 2D differential mode and 5 to the 3D differential mode. The first case in the figure shows the loss of the GPS signal at the 43 s mark, corresponding to the moment the main rotor started spinning. The second case shows the normal signal acquisition after startup: an initial 2D fix is obtained, then the 3D mode is entered, and after the differential signal is acquired the 3D differential mode is engaged and never lost again during the flight. The second figure shows the GPS recording of a flight path.
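The fix-mode encoding above lends itself to a simple post-flight check; the sketch below is illustrative (the helper names and the sample-rate parameter are assumptions), but the mode numbering follows the description in the text.

```python
# GPS fix modes as reported by the receiver (numbering per the text).
GPS_FIX_MODES = {
    1: "no fix (no signal received)",
    2: "2D (no altitude data)",
    3: "3D",
    4: "2D differential",
    5: "3D differential",
}

def first_signal_loss(fix_log, sample_rate_hz=1.0):
    """Return the time (in seconds) of the first transition into
    'no fix', or None if the signal was never lost - e.g. to correlate
    a signal loss with the moment the main rotor started spinning."""
    for i, fix in enumerate(fix_log):
        if fix == 1 and i > 0 and fix_log[i - 1] != 1:
            return i / sample_rate_hz
    return None
```

Running this over a recorded fix log would, for the first case described above, return a value around the 43 s mark.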
Other system development flights focused on the hardware of the control system itself; the Electronic Container was repositioned several times and its mounting points altered, the connectors and wiring were changed and the mounting of the PCBs inside the EC was also altered many times to obtain the best solution.
There were also some software development flights, verifying the reliability and timing of the system and the performance of the control loops under flight conditions. The Automatic Control Mode (ACM) was emulated in order to verify the feasibility of the whole solution. The testing setup was as follows: the control system operated in the ACM during the whole flight, the only difference being that the manual commands, instead of the computed automatic actions, were actually applied to the actuators at the end of each working cycle. The Servo Control Unit (SCU) operated in the ACM, measuring the positions of the control sticks, sending them to the Main Control Computer (MCC) and waiting for commands. The MCC waited for the complete data acquisition and executed the control loops, but at the very end of its working cycle it sent the measured control stick positions back to the SCU instead of the computed commands.
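One working cycle of this emulation can be sketched as follows; the function names and the telemetry interface are illustrative assumptions, not the RAMA code.

```python
def acm_emulation_cycle(stick_positions, control_loops, telemetry):
    """One working cycle of the emulated Automatic Control Mode
    (a sketch). The control loops run exactly as they would in real
    automatic flight and their output is recorded for offline
    verification, but the manual stick positions are what is actually
    sent back to the SCU and applied to the actuators."""
    computed = control_loops(stick_positions)   # full ACM computation
    telemetry.append(computed)                  # recorded, never applied
    return stick_positions                      # manual commands applied
```

The manual commands thus incur the same cycle delay as automatic commands would, which is exactly what the emulation was meant to verify.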
The vehicle was therefore controlled manually, but the manual commands passed through the whole control loop (so they were subjected to the same delays as the automatic commands would be) and the control system was fully running. After these flights, the performance of the control system was thoroughly evaluated: the timing was analyzed, looking for possible jitter, the internal states of all system parts were verified, and the commands computed by the control loops were verified using a Matlab model of the control algorithms. The Matlab implementation was fed with the recorded telemetry data and its output was compared to the output of the MCC control loops, also recorded in the telemetry; the results had to be identical. This was the ultimate “dress rehearsal” of the control system as a whole before the first attempts at semi-automatically controlled flights. In fact, it was a complete “hardware-in-the-loop” simulation. By this term, the most realistic testing setup is meant, where all control algorithms run in real time on the hardware as they would in the real case, and the input signals are not emulated: the sensors are actually subjected to the real physical conditions they are supposed to measure. This is the major difference from the “processor-in-the-loop” testing mentioned in section Avionics Tests.
As stated in section Angular Rate Control Layer, the yaw rate control is the fastest of all Angular Rate Control Layer (ARCL) loops. It is also the only one which cannot, in principle, be properly controlled by a P controller, because the vehicle dynamics in the yaw axis contains a first-order astatism: because of the reaction torque of the fuselage, which the tail rotor has to compensate for, the actuator is required to be out of the neutral position in order to maintain zero control error. This can be achieved only when an I component is present, so an I, PI or PID controller is required for this axis.
Initial experiments with the PI controller did not quite work out, because the controller proved rather unstable under variable angular rates. It usually worked fine in near-hover conditions, but as the angular rates grew higher, it tended to oscillate excessively (see the left figure below). This was partly caused by the rather low sampling rate at the time; 32 Hz was clearly not enough to capture the fast yaw movements. A PID controller was introduced in an attempt to suppress these oscillations, but initially the D component did not work very well because of the noise superimposed on the measured signal. Additional signal filtering improved the situation, but the PID controller proved rather hard to tune “by hand”; it was impossible to find the correct setup intuitively.
Therefore, identification experiments were performed to determine the natural frequency and the critical gain of the vehicle in yaw. A P controller was used for this purpose, whose gain was gradually increased up to the point when the tail started to oscillate. The natural frequency and the critical gain were determined and the PID constants were computed according to the Ziegler-Nichols method. The PID controller set this way worked reasonably well from the beginning and required only a little additional tuning. Reference weighting was also introduced into the P branch of the controller to help suppress the tail oscillations at higher angular rates. The final setup of the controller is given in the table ARCL Controller Settings. Please note that instead of the proportional, integration and derivative gains p, i and d, the overall gain k and the circular frequencies ωi and ωd are given.
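The classic Ziegler-Nichols computation from the measured critical gain and oscillation period is sketched below. The conversion to the (k, ωi, ωd) form is shown under the common convention ωi = 1/Ti, ωd = 1/Td; the exact convention used in the RAMA tables may differ, so treat that part as an assumption.

```python
def ziegler_nichols_pid(critical_gain, critical_period):
    """Classic Ziegler-Nichols PID tuning from the critical gain Ku
    (the P gain at which sustained oscillation begins) and the
    oscillation period Tu measured at that point."""
    kp = 0.6 * critical_gain        # proportional gain
    ti = critical_period / 2.0      # integration time
    td = critical_period / 8.0      # derivative time
    return kp, ti, td

def to_gain_and_frequencies(kp, ti, td):
    """Convert (kp, Ti, Td) to an overall gain and circular
    frequencies, assuming wi = 1/Ti and wd = 1/Td (an assumed
    convention, not necessarily the one used in the tables)."""
    return kp, 1.0 / ti, 1.0 / td
```

For example, a measured critical gain of 10 and a 0.8 s oscillation period would yield kp = 6, Ti = 0.4 s and Td = 0.1 s.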
The sampling rate was later increased to 64 Hz, improving things further. The step response (measured in flight) of a properly set yaw rate PID controller is shown in the right figure below; the control quality is considerably improved over the PI controller.
So far, the initial tests of the Attitude Control Layer (ACL) have been performed only in the yaw axis. Because the ARCL works very well and the attitude excursions of the vehicle are very slow, it can be safely assumed that only conservatively set P or PI controllers will be required for the ACL layer.
The initial test of the ACL in yaw was hampered by the fact that the attitude computation algorithm was not finalized in time for the test, so an intermediate solution was used: the yaw angle was computed by simple triangulation of the x and y magnetic intensity measurements. The yaw measurements therefore worked properly only when the vehicle was nearly level; even small bank angles in pitch and roll would affect the yaw angle measurements, because the magnetic field variations induced by the banking of the vehicle were not taken into account.
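A scheme of this kind can be sketched with a two-argument arctangent over the horizontal field components. This is an illustrative reconstruction, not the actual RAMA algorithm, and the sign convention is an assumption.

```python
import math

def yaw_from_magnetometer(mag_x, mag_y):
    """Intermediate yaw estimate from the x and y magnetic intensity
    components only (a sketch of the simple scheme described above;
    the sign convention is illustrative). Valid only when the vehicle
    is nearly level: any pitch or roll bank leaks the vertical field
    component into x/y and biases the result, which is exactly the
    limitation noted in the text."""
    return math.atan2(-mag_y, mag_x)
```

A tilt-compensated solution would additionally rotate the measured field vector by the pitch and roll angles before taking the arctangent.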
Even under those adverse conditions the ACL worked, although the control quality was not up to the required level; this is expected to improve with better attitude measurement. The in-flight performance of the yaw angle control is shown in the figure. A simple P controller was used and only an attitude-holding test was performed (the reference was constant). The attitude excursions are caused by the measurement inaccuracy rather than by the controller performance. Overall, the attitude was held reasonably well, although the control quality was rather unimpressive.
The pitch rate control proved to be somewhat tricky. The dynamics of the vehicle in the pitch axis is the slowest, and most of the vehicle inertia acts in this axis. The experiments with the P, PI and PID controllers were only partially successful, and the same problem plagued all of them: either the controller suffered a very large phase delay, the response lagging far behind the reference (see the left figure), or the control loop was unstable, experiencing severe oscillations, or both.
The D component in the controller was not very helpful and the Ziegler-Nichols rule did not work. In fact, the attempt to determine the natural frequency and the critical gain indirectly led to the loss of a vehicle, as will be described later (see section Roll and Pitch Identification Experiment).
The PI controller with a rather large i constant proved to be the most promising solution. It was steady and stabilized the vehicle perfectly in hover, but suffered a considerable lag in response to reference changes (as shown in the figure below left). This problem was ultimately solved by introducing a feed-forward loop into the controller. The feed-forward allows a part of the reference signal to be applied directly to the actuator, ensuring a rapid response to reference changes, while the P and I branches work as before, stabilizing the vehicle. The considerable improvement in the response to reference variations brought by the feed-forward technique is shown in the right figure below; compare it to the performance shown in the left figure.
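The structure described above can be sketched as a PI controller with an added feed-forward branch; the class name, gains and discretization are illustrative assumptions, not the RAMA implementation.

```python
class PIWithFeedForward:
    """PI controller with a feed-forward branch (a sketch of the
    structure described above). The feed-forward path applies a
    fraction of the reference directly to the actuator, removing the
    lag of the integral branch on reference changes, while the P and
    I branches stabilize the vehicle and hold the trim."""

    def __init__(self, kp, ki, kff, dt):
        self.kp, self.ki, self.kff, self.dt = kp, ki, kff, dt
        self.integral = 0.0

    def step(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        return (self.kff * reference          # fast feed-forward path
                + self.kp * error             # stabilizing P branch
                + self.ki * self.integral)    # trim-holding I branch
```

On a reference step, the `kff * reference` term moves the actuator immediately, before the integral has had time to wind up, which is the source of the improved response.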
The final setup of the pitch rate controller can be found in the table here: ARCL Controller Settings.
The roll rate control characteristics are naturally similar to those of the pitch, although there is a little less momentum in this axis. Similar problems were therefore encountered when setting up the roll rate controller. The lag issue was present in the roll rate control too, although it was not as severe as in the pitch (see the left figure below). The Ziegler-Nichols rule did not work for the roll rate controller either. The final solution for the roll was achieved in the same way as for the pitch rate: by introducing a feed-forward branch into the controller. The flight performance of a properly set roll rate controller can be seen in the right figure below, and the final parameters are given in the table ARCL Controller Settings.
Figure: D Component Filtering.
When setting up the pitch and roll rate controllers, an identification experiment was carried out to determine the natural frequencies and critical gains of the vehicle in both axes. Instead of a P controller, a bang-bang controller was used for this purpose. By the term “bang-bang” a two-state controller is meant, actuating fully to one side or the other according to the sign of the control error; an infinite-gain P controller would behave analogously.
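The bang-bang law described above reduces to a sign test on the control error; a minimal sketch (function and parameter names are illustrative):

```python
def bang_bang(reference, measurement, throw):
    """Two-state controller used for the identification experiment:
    the actuator goes fully to one side or the other according to the
    sign of the control error, with the allowed throw (the 'gain' of
    the experiment) restricted to a fraction of the full range."""
    error = reference - measurement
    return throw if error > 0 else -throw
```

Gradually increasing `throw` between flights corresponds to the gain increase described in the next paragraph.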
The gain (i.e. the allowed actuator throw) of the bang-bang controller was gradually increased and the response of the vehicle was measured. The results can be seen in the figures below (left for the roll, right for the pitch). The identification experiment was successful, or at least seemed to be, but in the very next flight the testing vehicle crashed badly due to a mechanical failure of the powertrain. The engine suddenly revved up, but the rotor lost all power and slowed down. The pilot contributed to the failure by not engaging auto-rotation soon enough, letting the momentum accumulated in the rotor dissipate. The crash was then inevitable; it destroyed most of the vehicle, badly damaging the avionics in the process. It was later determined that a needle one-way bearing in the transmission had cracked and failed, causing the sudden loss of power. The root cause was traced back to the identification experiments.
The one-way bearing is situated inside the main gear (indicated by the red arrow in the figure to the left), transmitting the engine torque to the main rotor shaft. All forces and moments induced by the main rotor on the fuselage are therefore transferred through this bearing. When the identification experiments described above were performed, the bearing was overstressed by the relatively long-lasting high-frequency periodic oscillations induced by the bang-bang controller. Moreover, because of the additional weight of the on-board avionics, the fuselage is much heavier, and therefore has larger moments of inertia, than what the bearing was designed for by the vehicle manufacturer. The housing of the bearing is made of plastic and can flex, so it does not protect the bearing sufficiently. All this resulted in a particularly high rate of wear, eventually causing the bearing to fail.
The problem was fixed by introducing a non-flexible metal housing for the bearing, protecting it much better than the original plastic housing.
The roll and pitch rate bang-bang controller in-flight performance is shown in the figures above. The maximum actuator throw (the red line) was restricted to 0.1, 0.2, 0.3 and 0.4 of the maximal throw in the figures going from top to bottom. The blue line is the measured angular rate.