In a new paper published in AI & Society: Knowledge, Culture and Communication, the TAS Functionality Node’s Helen Smith, Kerstin Eder and Jonathan Ives argue that using the term ‘autonomous’ to describe the capabilities of highly automated systems is misleading at best.
Drawing on examples from widely-known science fiction, the authors present the Cyberdyne Systems Model T-800, depicted in the films Terminator and Terminator 2, as a striking example of an adaptive system demonstrating evolving functionality and decision-making. However, they observe that it can hardly be described as autonomous when its overall goals and limitations are set by another agent, which removes autonomy, or 'freedom of choice', from its functionality.
The authors examine the implications of describing a system as autonomous, since in doing so we would be assigning moral agency to it. On this basis, the expectation would be that the system is a moral agent and can therefore be held responsible for bad decisions, which is simply not feasible ('a computer cannot be fined or put in jail when a bad decision is made' (Dignum et al., 2018, p.63)).
The authors conclude that the careful use of language to describe critical systems is vital to ensure that responsibility for such systems' decisions and actions is attributed to those designing, developing and operating them, rather than to the systems themselves.
Read the full paper: https://doi.org/10.1007/s00146-023-01662-9
This paper was led by Dr Helen Smith, a Research Associate in Engineering Ethics and Registered Nurse based in the Centre for Ethics in Medicine at the University of Bristol. Helen works as part of the TAS Node team, bringing her expertise in the ethical and legal challenges of AI use in healthcare (and beyond).