Learning to flock through reinforcement

Durve M.;
2020-01-01

Abstract

Flocks of birds, schools of fish, and insect swarms are examples of the coordinated motion of a group that arises spontaneously from the actions of many individuals. Here, we study flocking behavior from the viewpoint of multiagent reinforcement learning. In this setting, a learning agent tries to stay in contact with the group using as sensory input the velocity of its neighbors. Each learning individual pursues this goal by exerting limited control over its own direction of motion. By means of standard reinforcement learning algorithms, we show that (i) a learning agent exposed to a group of teachers, i.e., hard-wired flocking agents, learns to follow them, and (ii) in the absence of teachers, a group of independently learning agents evolves towards a state where each agent knows how to flock. In both scenarios, the emergent policy (or navigation strategy) corresponds to the polar velocity alignment mechanism of the well-known Vicsek model. These results (a) show that such velocity alignment may have naturally evolved as an adaptive behavior that aims at minimizing the rate of neighbor loss, and (b) prove that this alignment not only favors (local) polar order but also corresponds to the best policy or strategy to maintain group cohesion when the sensory input is limited to the velocity of neighboring agents. In short, to stay together, steer together.
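The polar velocity alignment mechanism that the abstract refers to is the heading update of the standard Vicsek model: each agent steers toward the mean direction of motion of its neighbors within a fixed radius, perturbed by angular noise. A minimal sketch of one such update step, with assumed parameter names (`v0` speed, `r` interaction radius, `eta` noise amplitude, `L` periodic box size) chosen for illustration:

```python
import numpy as np

def vicsek_step(pos, theta, v0=0.03, r=1.0, eta=0.1, L=10.0, rng=None):
    """One update of the Vicsek model: align each heading with the
    mean heading of neighbors within radius r, then add angular noise
    and advance all agents at constant speed v0 (periodic box of side L)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        # displacements to all agents, minimum-image convention
        d = pos - pos[i]
        d -= L * np.round(d / L)
        nbr = np.einsum('ij,ij->i', d, d) <= r * r  # includes agent i itself
        # circular mean of neighbor headings: the polar alignment rule
        new_theta[i] = np.arctan2(np.sin(theta[nbr]).mean(),
                                  np.cos(theta[nbr]).mean())
    new_theta += eta * rng.uniform(-np.pi, np.pi, n)
    new_pos = (pos + v0 * np.column_stack((np.cos(new_theta),
                                           np.sin(new_theta)))) % L
    return new_pos, new_theta
```

With zero noise and a radius covering the whole group, repeated application drives the polar order parameter (the modulus of the mean heading vector) towards 1, which is the locally ordered state the abstract says the learned policies reproduce.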
File in this record:

PhysRevE.102.012601.pdf
Access: closed (copy available on request)
Type: publisher's version
License: publisher copyright
Size: 1.38 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11368/2971559
Citations
  • PMC: 4
  • Scopus: 29
  • Web of Science: 26