Emerging Risks and the European Liability Framework in AI-based Market Manipulation / Mecchina, Andrea; Basile, Lorenzo; Bortolussi, Luca. - (In press), pp. 1-11. (Human Vulnerability in Interaction with AI, Rome, Italy, 16-17 May 2024).
Emerging Risks and the European Liability Framework in AI-based Market Manipulation
Andrea Mecchina; Lorenzo Basile; Luca Bortolussi
In press
Abstract
Artificial Intelligence (AI) is rapidly reshaping the global economy, and this transformation is particularly impacting banks and investment firms, which have started to develop and adopt AI-assisted trading algorithms. Because of the very nature and cutting-edge complexity of these technologies, they are bound to give rise to new and unprecedented risks, particularly affecting individuals and human competitors. Industries adopt AI-based solutions in their working pipelines to deliver efficiency gains and increase profits. Owing to their rapid development, these solutions are acting as increasingly independent agents. Despite having been developed by humans, such forms of AI may misbehave in ways that are unacceptable to a human competitor, even without being explicitly programmed to do so. Adequate legal instruments are required to prevent the potential threats arising from concurrent human-machine interactions. AI-based trading solutions are nonetheless difficult to fit into the current European Union legal framework on market manipulation, especially when liability must be assessed under traditional legal concepts such as intent, causation and foreseeability, which were originally developed for human-only agents. The struggle of rules to keep pace with technological innovation is a well-documented phenomenon, observed at least since the rise of industrial capitalism. The main current European Union regulatory instruments to counter AI-based harms in this context are the Market Abuse Regulation (EU) No 596/2014 and the Market Abuse Directive 2014/57/EU, both dating back to 2014. Technological progress is accompanied by a continuous blurring of the boundary between humans and machines. Addressing the liabilities of the new machine agents that coexist and interact with humans calls for an up-to-date legal framework.
This paper aims to highlight emerging human vulnerabilities and critically investigate the current European Union legal framework.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


