Learning to Run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments

Łukasz Kidziński, Sharada Prasanna Mohanty, Carmichael Ong, Zhewei Huang, Shuchang Zhou, Anton Pechenko, Adam Stelmaszczyk, Piotr Jarosik, Mikhail Pavlov, Sergey Kolesnikov, Sergey Plis, Zhibo Chen, Zhizheng Zhang, Jiale Chen, Jun Shi, Zhuobin Zheng, Chun Yuan, Zhihui Lin, Henryk Michalewski, Piotr Miłoś, Błażej Osiński, Andrew Melnik, Malte Schilling, Helge Ritter, Sean Carroll, Jennifer Hicks, Sergey Levine, Marcel Salathé, Scott Delp
Published: 02 Apr 2018

Citations: 33

Abstract

Synthesizing physiologically accurate human movement in a variety of conditions can help practitioners plan surgeries, design experiments, or prototype assistive devices in simulated environments, reducing time and costs and improving treatment outcomes. Because the solution spaces of biomechanical models are large and complex, current methods are constrained to specific movements and models, requiring careful controller design and hindering many possible applications. We sought to discover whether modern optimization methods can efficiently explore these complex spaces. To do this, we posed the problem as a competition in which participants were tasked with developing a controller that enables a physiologically based human model to navigate a complex obstacle course as quickly as possible, without using any experimental data. Participants were provided with a human musculoskeletal model and a physics-based simulation environment. In this paper, we discuss the design of the competition, the technical difficulties encountered, the results, and an analysis of the top controllers. The challenge demonstrated that deep reinforcement learning techniques, despite their high computational cost, can be successfully employed as an optimization method for synthesizing physiologically feasible motion in high-dimensional biomechanical systems.
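For context, the simulation environment released for the challenge (the osim-rl package) exposed an OpenAI Gym-style interface: a controller repeatedly receives an observation of the body's state and returns muscle excitations. Below is a minimal sketch of that interaction loop, assuming the RunEnv class, the difficulty argument, and the 18-dimensional muscle-excitation action space as documented for the 2017 release; later versions of the package may differ, and a random policy stands in for a learned controller.

# Minimal sketch of the challenge's control loop, assuming the osim-rl
# RunEnv interface from the 2017 release.
import numpy as np
from osim.env import RunEnv

env = RunEnv(visualize=False)
observation = env.reset(difficulty=2)  # 2 = course with obstacles

total_reward = 0.0
for _ in range(500):  # step limit chosen for illustration only
    # A controller maps the observation (joint angles, velocities,
    # obstacle positions, ...) to excitations for the model's 18
    # muscles, each in [0, 1]. A random policy stands in for a
    # trained one here.
    action = np.random.uniform(0.0, 1.0, size=18)
    observation, reward, done, info = env.step(action)
    total_reward += reward  # reward reflects forward progress of the pelvis
    if done:  # episode ends if the model falls or the time limit is reached
        break

print("Episode return:", total_reward)

In the competition itself, the random policy above would be replaced by a controller trained with a deep reinforcement learning algorithm, as analyzed in the paper.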
