Lead: Dr John Downer
We are exploring how variations in the functionality of autonomous systems translate into differences in their regulatory requirements.
These variations can take many forms, relating both to the ‘nature’ of a system’s function (i.e. its purpose) and to the ‘manner’ of its functioning (i.e. its design). This means that systems with identical designs must sometimes be regulated differently if they serve different functions. (A robot arm used for stacking boxes in an unoccupied room, for example, will have different reliability requirements from one used to assist with surgery in a crowded operating theatre.) It also means that systems with identical functions must sometimes be regulated differently if they have different designs. (A robot with conventional, linear mechanical components and an explicit codebase, for instance, can be assessed and verified in ways that one built from ‘soft’ or ‘machine-learnt’ components cannot.)
These variations have complex implications. On one level, for example, verifying a system for surgery rather than for stacking boxes can be understood as a matter of establishing different levels of reliability. But different levels of reliability must be established in fundamentally different ways. Regulators can establish the performance of a box-stacking robot actuarially, from empirical service experience: each dropped box is a datapoint. They cannot (ethically, or even practically) establish the performance of a surgical robot by counting surgical mishaps.
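The actuarial logic above can be sketched quantitatively. The following is a minimal illustration only, assuming a hypothetical binomial model of independent, identical trials (real regulatory assessment is far richer): under that assumption, the classical ‘rule of three’ gives an approximate 95% upper bound of about 3/n on the per-trial failure rate after n failure-free trials, so the service experience required grows in inverse proportion to the failure rate to be demonstrated.

```python
import math

def upper_bound_failure_rate(trials: int, confidence: float = 0.95) -> float:
    """Approximate one-sided upper confidence bound on a per-trial failure
    rate after `trials` failure-free trials, under a simple binomial model.

    This is the classical 'rule of three' in its exact exponential form:
    p_upper = -ln(1 - confidence) / trials  (~ 3 / trials at 95% confidence).
    Illustrative only -- not a regulatory method.
    """
    return -math.log(1.0 - confidence) / trials

# A box-stacking robot observed over 10,000 failure-free lifts:
print(upper_bound_failure_rate(10_000))      # ~3e-4 failures per lift

# Demonstrating a one-in-a-million failure rate this way would need
# roughly three million failure-free trials:
print(upper_bound_failure_rate(3_000_000))   # ~1e-6 failures per trial
```

The numbers bear out the point in the text: the evidence required scales inversely with the tolerable failure rate, which is why purely actuarial verification is feasible for low-consequence functions like box-stacking but breaks down for high-consequence ones like surgery.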
By exploring these questions in the context of the Functionality Node’s evolving technology development, and its parallel work on ethics and verification, we are developing a richer understanding of the regulatory challenges of (and for) autonomous systems.