Ethics

Lead: Professor Jonathan Ives

Autonomous systems with evolving/changing functionality raise significant ethical questions about the threat they potentially pose to the reasons people can, and do, place trust in systems and consent to their use. When we consent to a technology being in our lives, we do so on the basis of being adequately informed of its function and of seeing how reliably that function is performed. Seeing a technology perform reliably over a period of time becomes the basis of a naïve trust, in which we assume it will continue to function in the future as it has in the past (even if we have no idea how it actually works). The more we understand how and why something works, the less naïve our trust. With most technology, users have no understanding of how it works, and so their trust is usually naïve – but it is bolstered by the trust they place in the developer, manufacturer and retailer/provider.

When systems have the ability to change function over time, it becomes increasingly difficult to understand how they can be trustworthy, because:

  1. it is more difficult to assess reliability over time;
  2. as system autonomy increases, it becomes increasingly difficult for anyone to understand, and therefore predict, how a system will act based on its past actions – because those actions might change.

With that level of uncertainty, trustworthiness is threatened.

At the TAS Functionality Node we are exploring how developers and users think about and frame trustworthiness in relation to the technology they are developing/using and how this is affected by evolving functionality. Approaches will include an ethnographic study of developers, interviews with developers and users, public engagement activities with users, and ethical theorising.

For more information about the interviews with developers and users, and how to take part, see the ARET: Adaptable Robots, Ethics, and Trust study.

We are also researching the ethics of swarm technology in healthcare, focusing on what the first in-human nanoswarm clinical trial should look like. We will use interviews initially, and focus groups in the next phase, to explore stakeholders' attitudes towards swarm technology in healthcare, combined with ethical/legal analysis to consider how swarm medicine should be regulated in clinical trials. For more information on this study, please see the SWARM study – Small robots With collective behaviour as AI-driven cancer therapies; building Regulations for future nanoMedicines.

Find out more about our research