Responsibility gaps in artificial intelligence and neurotechnology: the case of symbiotic brain-computer interfaces


There is a remarkable parallel between the consequences of progress in Artificial Intelligence (AI) and in neurotechnology (NT). In both domains, human users function in combination with smart technologies aimed at restoring and/or improving their cognition and behavior. In this talk I would like to explore some similarities and differences between the ethical and legal debates about the implications of these two types of technology. I will do so by examining the notion of a ‘responsibility gap’ (Matthias, 2004). Within the context of AI, responsibility gaps arise when human control over intelligent machines diminishes or even disappears because of, among other factors, their growing autonomy and the inherent opacity of their functioning. The resulting difficulties in establishing and attributing responsibility to the human users of AI create the gap. A similar discussion can be discerned in various debates about the impact of neurotechnology on human identity, agency and responsibility. These two strands of literature tend, however, to remain separate. In the literature on AI, meaningful human control has been identified as a normative requirement for preventing responsibility gaps by keeping autonomous machines’ behavior aligned with human controllers’ intentions and values (Santoni de Sio & van den Hoven, 2018). However, NT introduces the novel possibility that control could come without accompanying intentions, i.e. subconsciously. This is an almost counterintuitive implication of so-called pBCIs (passive brain-computer interfaces, also regularly discussed as symbiotic BCIs). I will therefore argue that the equation “more control = more responsibility”, widely accepted within the various theories of meaningful human control in the context of AI, might not hold as well in the case of pBCIs.
I will claim that these particular technologies create responsibility gaps that are especially challenging for a theory of meaningful human control that aims to fill them. My analysis aims, first of all, to show that debates about responsibility gaps in AI and NT can mutually enrich each other. Secondly, I aim to indicate that, from a legal and moral perspective, the consequences of NT for responsibility may be even more significant than previously thought.


Giulio Mecacci

Giulio Mecacci is an assistant professor in Ethics and Philosophy of AI and Neurotechnology at the Donders Institute for Brain, Cognition and Behaviour at Radboud University. He works at the intersection of philosophy, science and technology, aiming to establish a constructive and mutually enriching dialogue among different disciplines. He also strives to promote the inclusion of human values in the process of scientific research and technological innovation.