V. Wiberg, E. Wallin, M. Servin, and T. Nordfjell, Control of rough terrain vehicles using deep reinforcement learning. arXiv:2107.01867. Submitted manuscript (2021).
We explore the potential to control terrain vehicles using deep reinforcement learning in scenarios where
human operators and traditional control methods are inadequate. This letter presents a controller that perceives, plans, and successfully controls a 16-tonne forestry vehicle with two frame
articulation joints, six wheels, and their actively articulated suspensions to traverse rough terrain. The carefully shaped reward signal promotes safe, environmentally sound, and efficient driving,
which leads to the emergence of unprecedented driving skills. We test learned skills in a virtual
environment, including terrains reconstructed from high-density laser scans of forest sites. The
controller displays the ability to handle obstacles, slopes up to 27°, and a variety of
natural terrains, all with limited wheel slip and smooth, upright traversal with intelligent use of
the active suspensions. The results confirm that deep reinforcement learning has the potential
to enhance control of vehicles with complex dynamics and high-dimensional observation data
compared to human operators or traditional control methods, especially in rough terrain.
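The shaped reward signal described above can be illustrated as a weighted sum of a progress term and penalty terms. The specific terms and weights below are hypothetical, chosen only to show the general structure of such a multi-objective reward, not the paper's actual formulation:

```python
def shaped_reward(progress, wheel_slip, tilt_deg, energy,
                  weights=(1.0, 0.5, 0.02, 0.001)):
    """Sketch of a shaped reward for rough-terrain driving.

    progress   -- forward distance gained this step (m), rewarded
    wheel_slip -- aggregate wheel slip, penalized (limits ground damage)
    tilt_deg   -- deviation from upright (degrees), penalized
    energy     -- actuator energy used this step (J), penalized

    All terms and weights are illustrative assumptions, not the
    published reward function.
    """
    w_prog, w_slip, w_tilt, w_energy = weights
    return (w_prog * progress
            - w_slip * wheel_slip
            - w_tilt * tilt_deg
            - w_energy * energy)


# A step with progress and no penalties scores higher than one with slip:
clean = shaped_reward(progress=1.0, wheel_slip=0.0, tilt_deg=0.0, energy=0.0)
slippy = shaped_reward(progress=1.0, wheel_slip=0.8, tilt_deg=0.0, energy=0.0)
```

Balancing such weights is what steers the learned policy toward safe, smooth, and efficient traversal rather than raw speed.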
This work has in part been supported by Mistra Digital Forest (Grant DIA
2017/14 6) and Algoryx Simulation AB. The simulations were performed on
resources provided by the Swedish National Infrastructure for Computing (SNIC
dnr 2021/5-234) at High Performance Computing Center North (HPC2N).
UMIT Research Lab, Digital Physics