
Artificial Intelligence Strategies in Multi-agent Reinforcement Learning and Robotic Agents Evolution / Talamini, Jacopo. - (2021 Mar 15).

Artificial Intelligence Strategies in Multi-agent Reinforcement Learning and Robotic Agents Evolution

TALAMINI, JACOPO
2021-03-15

Abstract

Most of the theoretical foundations that have shaped Artificial Intelligence (AI) as we know it date back to the last century. However, the technological advances of recent decades, mainly in the form of faster parallel computation, larger memory units, and Big Data, have dramatically increased the popularity of AI within the research community. Far from being a pure object of research, AI has been successful in many fields of application and has become deeply integrated into our daily experiences. We live in a society in which on-demand content suggestions are tailored to each customer and products can be ordered online by chatting with bots; smart devices adapt to their owners' behavior, stock exchange brokers are algorithms based on predictive models, and computers are able to discover new medicines and new materials. Despite the knowledge acquired about AI, there are still many aspects of it that we do not fully understand, such as the interplay among multiple autonomous agents that learn and interact in a shared environment while possibly pursuing different goals. In these scenarios, the communication and the regulation of the autonomous agents are both extremely relevant. In this work we analyze how language expressiveness affects the way agents learn to communicate, to what extent the learned communication is affected by the scenario, and how to allow the agents to learn the optimal one. We then investigate which communication strategies might be developed in different scenarios when driven by individual goals, which might lead to improved equality in a cooperative scenario or to more inequality in a competitive one. Another aspect that we consider is the ethics of multiple agents, to which we contribute by proposing a way to discourage unethical behaviors without disabling them, instead enforcing a set of flexible rules that guide the agents' learning.
The success of AI can be measured by its ability to adapt, an aspect that we consider in this work with respect to autonomous soft robotic agents. Soft robots are a new generation of nature-inspired robots, more versatile and adaptable than those made of rigid joints, but their design and control cannot easily be done manually. To this end we investigate the possibility of mimicking the evolution of biological beings by adopting evolutionary meta-heuristics for optimizing these robots. Specifically, we propose to evolve a control algorithm that leverages the body complexity inherent in soft robots through sensory data collected from the environment. Considering the problem of designing adaptable soft robots, we propose an approach that automatically synthesizes robotic agents for solving different tasks, without needing to know the tasks in advance. Agent-based scenarios are also powerful research tools for approximating the behavior of biological actors. Based on this possibility, we propose a model for assessing the publishing-system indicators that are currently used to evaluate authors and journals.
MEDVET, Eric
33
2019/2020
Sector ING-INF/05 - Information Processing Systems
Università degli Studi di Trieste
File in this record: Tesi.pdf (doctoral thesis, open access, 2.95 MB, Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11368/2982151