Sim-to-real transfer of active suspension control using deep reinforcement learning
Wiberg V., Wallin E., Fälldin A., Semberg T., Rossander M., Wadbro E., and Servin M. Sim-to-real transfer of active suspension control using deep reinforcement learning. Robotics and Autonomous Systems, 104731 (2024). doi:10.1016/j.robot.2024.104731, arXiv:2306.11171
We explore sim-to-real transfer of deep reinforcement learning controllers for a heavy vehicle with active suspensions designed for traversing rough terrain. While related research primarily focuses on lightweight robots with electric motors and fast actuation, this study uses a forestry vehicle with a complex hydraulic driveline and slow actuation. We simulate the vehicle using multibody dynamics and apply system identification to find an appropriate set of simulation parameters. We then train policies in simulation using several techniques to mitigate the sim-to-real gap, including domain randomization, action delays, and a reward penalty that encourages smooth control. In reality, the policies trained with action delays and a penalty for erratic actions perform at nearly the same level as in simulation. In experiments on level ground, the motion trajectories closely overlap when turning to either side, as well as in a route-tracking scenario. When faced with a ramp that requires active use of the suspensions, the simulated and real motions are in close alignment. This shows that the actuator model, combined with system identification, is sufficiently accurate. Policies trained without the additional action penalty exhibit fast switching, or bang-bang, control: in simulation they produce smooth motions and high performance, but they transfer poorly to reality. We find that the policies make marginal use of the local height map for perception, showing no indications of look-ahead planning. However, the strong transfer capabilities mean that further development concerning perception and performance can be largely confined to simulation.
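The three sim-to-real techniques named above can be sketched in a few lines. The snippet below is an illustrative outline, not the paper's implementation: the penalty coefficient, delay length, randomization range, and the `neutral_action`/`step` interface of the environment are all assumptions made for the example.

```python
import random
from collections import deque

def smooth_reward(task_reward, action, prev_action, penalty_coeff=0.1):
    """Task reward minus a penalty on action changes.

    Penalizing the squared difference between consecutive actions
    discourages the fast-switching (bang-bang) control that the
    abstract reports transfers poorly to the real vehicle.
    (penalty_coeff=0.1 is an illustrative value.)
    """
    change = sum((a - b) ** 2 for a, b in zip(action, prev_action))
    return task_reward - penalty_coeff * change

class DelayedActionEnv:
    """Wrapper that applies each action only after `delay_steps` control steps,
    mimicking slow hydraulic actuation during training."""

    def __init__(self, env, delay_steps=2):
        self.env = env
        # Pre-fill the queue with neutral actions so early steps are defined.
        self.queue = deque([env.neutral_action()] * delay_steps)

    def step(self, action):
        self.queue.append(action)
        return self.env.step(self.queue.popleft())

def randomize_dynamics(params, rng):
    """Domain randomization: perturb each physical parameter (mass,
    friction, actuator gain, ...) by +/-10% at the start of an episode."""
    return {name: value * rng.uniform(0.9, 1.1)
            for name, value in params.items()}
```

In a training loop, `randomize_dynamics` would be called once per episode to resample the simulator's parameters, while the delay wrapper and smoothness penalty act at every control step.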
Videos

The research was supported in part by Troedsson Teleoperation Lab, Mistra Digital Forest, Algoryx Simulation AB, Swedish National Infrastructure for Computing at High-Performance Computing Center North (HPC2N), eSSENCE, and eXtractor AB.

UMIT Research Lab, Digital Physics