Robot Navigation in Crowded Environments: A Reinforcement Learning Approach

Caruso, Matteo; Regolin, Enrico; Camerota Verdù, Federico Julian; Russo, Stefano Alberto; Bortolussi, Luca; Seriani, Stefano
2023-01-01

Abstract

For a mobile robot, navigating a densely crowded space can be a challenging and sometimes impossible task, especially with traditional techniques. In this paper, we present a framework for training neural controllers for differential-drive mobile robots that must safely navigate a crowded environment while trying to reach a target location. To learn the robot's policy, we train a convolutional neural network using two Reinforcement Learning algorithms, Deep Q-Networks (DQN) and Asynchronous Advantage Actor-Critic (A3C), and develop a training pipeline that allows the process to scale across several compute nodes. We show that the asynchronous training procedure in A3C can be leveraged to quickly train neural controllers and test them on a real robot in a crowded environment.
Files in this record:

File: machines-11-00268.pdf
Access: open access
Type: Published version
License: Creative Commons
Size: 7.78 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11368/3040040
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0