Clinicians and AI use: where is the professional guidance?

In a new paper published in BMJ’s Journal of Medical Ethics, the TAS Functionality Node’s Jonathan Ives, John Downer and Helen Smith explore the increasing use of AI in healthcare and medical settings, and the lack of professional guidance around it.

Although AI has great potential to improve medical care and alleviate the burden on healthcare workers, the authors argue that there is no precedent for what happens when an AI, or a clinician influenced by one, makes a mistake. Regulation should therefore be developed as a priority to set out the rights and expectations of those working closely with AI.

Recent reports from the NHS AI Lab and Health Education England focus on healthcare workers’ understanding of, and confidence in, AI clinical decision support systems, and on developing trust in, and the trustworthiness of, these systems. However, while they offer guidance to aid developers and purchasers of such systems, they offer little specific guidance for the clinicians who will be required to use them in patient care.

The clinicians who will have to decide whether or not to act on an AI’s recommendations are subject to the requirements of their professional regulatory bodies in a way that AIs (or AI developers) are not. This means that clinicians carry responsibility not only for their own actions, but also for the effects of the AI they use to inform their practice.

The paper argues that clinical, professional and reputational safety will be put at risk if this deficit of professional guidance for clinicians is not addressed, and that such guidance should be introduced urgently alongside the existing training for clinical users.

The authors end with a call to action for clinical regulators: to unite to draft guidance for users of AI in clinical decision-making that helps manage clinical, professional and reputational risks.

More information

Read the full paper:

http://dx.doi.org/10.1136/jme-2022-108831

Read a blog post about the paper on the BMJ website.


All authors are fully or partly funded via UKRI’s Trustworthy Autonomous Systems Node in Functionality under grant number EP/V026518/1.

Helen Smith is additionally supported by the Elizabeth Blackwell Institute, University of Bristol via the Wellcome Trust Institutional Strategic Support Fund.

Jonathan Ives is in part supported by the NIHR Biomedical Research Centre at University Hospitals Bristol and Weston NHS Foundation Trust and the University of Bristol. The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health and Social Care.


Job opportunity: Research Associate in Human-Swarm Interaction (deadline 29th August)

The role

We have an exciting opportunity for a talented, motivated individual to join us as a Research Associate in Human-Swarm Interaction at the University of Bristol. The successful candidate will play a crucial role in engineering trustworthy robot swarms.

As a Research Associate, you will be involved in designing human trials to assess the trustworthiness of our swarm solutions, and you will contribute to building demonstrations for public engagement and outreach activities. Collaborating with our Engagement Officer, you will explore potential uses of swarm solutions at science festivals and events, and contribute to educational resources for school children.

This position is within the multidisciplinary UKRI Trustworthy Autonomous Systems (TAS) Node in Functionality programme. You will collaborate with a diverse team of researchers from various disciplines, including ethics, sociology, computer science, and engineering. Together, we are developing design-for-trustworthiness techniques and methods applicable to a wide range of autonomous systems.

By joining our team, you become part of the UKRI Trustworthy Autonomous Systems (TAS) programme, a £33 million initiative that brings together research communities and stakeholders to drive cross-disciplinary fundamental research.

Please note this project runs until 30/04/2024. See our website for more information: https://tasfunctionality.bristol.ac.uk/

What will you be doing?

  • Translate results to physical testbeds (existing Industrial Swarm Arena).
  • Create public-facing demonstrators for festivals, public outreach efforts and schools.
  • Perform human trials to assess trustworthiness of swarm solutions.
  • Work within a multidisciplinary team to develop design-for-trustworthiness techniques and methods for autonomous systems that can evolve in functionality.
  • Prepare conference and journal papers for publication of findings.

You should apply if you have:

  • A good honours degree (or equivalent) with subject knowledge in Robotics, Engineering, Computer Science, Maths or a closely related discipline, and some research experience.
  • A relevant postgraduate research degree or equivalent professional qualification/experience, or are working towards one.
  • Expertise running human trials to assess robotic technologies.
  • Expertise managing ethics approval processes.
  • Expertise developing outreach activities.
  • The ability to write up research results and present these to diverse international audiences through conferences, industry interactions and public engagement.

For a full list of requirements, please refer to the job description below; if the above describes you, we’d encourage you to apply.

Why choose us?

Join a top-ranked UK research university and benefit from attractive perks, a strong pension scheme, and access to development and well-being programmes. Embrace our inclusive culture, join diverse staff networks, and become part of a supportive community.

Don’t miss this chance to be at the forefront of engineering trustworthy robot swarms. Apply now to join our dynamic research team!

Additional information

For informal queries please contact Sabine Hauert – sabine.hauert@bristol.ac.uk

To find out more about what it’s like to work in the Faculty of Engineering, and how the Faculty supports people to achieve their potential, please see our staff blog:

https://engineeringincludesme.blogs.bristol.ac.uk/

Interviews are anticipated to take place on 13th September 2023.

Our strategy and mission

We recently launched our strategy to 2030, tying together our mission, vision and values.

The University of Bristol aims to be a place where everyone feels able to be themselves and do their best in an inclusive working environment where all colleagues can thrive and reach their full potential. We want to attract, develop, and retain individuals with different experiences, backgrounds and perspectives – particularly people of colour, LGBT+ and disabled people – because diversity of people and ideas remains integral to our excellence as a global civic institution.

Available documents

Please refer to the Job Description for a complete list of skills and experience that will support your application for this role.

TAS community gathers for our first International Symposium

Over three days in July, researchers working on autonomous systems gathered just outside Edinburgh for the first International Symposium on Trustworthy Autonomous Systems at Heriot-Watt University. Although the TAS programme has been running for a few years, the pandemic had prevented the TAS Hub and nodes from gathering the community to share their research into trustworthiness and autonomous systems. The talks and panels featured a diverse range of engineers, computer scientists and social scientists, including plenaries by Professors Sharon Strover and Gina Neff.


Our Functionality Node presented four posters and two papers over the course of the conference. Dr Sabine Hauert didn’t let unreliable public transport prevent her from giving a talk on “Trustworthy Swarms”, a collaboration between researchers across our node. We also presented a scoping review on “Ethics of trust/worthiness in Autonomous Systems”, with work from Dr Helen Smith, Dr Jonathan Ives, and our former colleague Dr Ariana Manzini.


The first day of the conference focused on early career researchers, and a number of Early Career Researcher awards were presented at the nearby National Robotarium, in categories including Policy and Knowledge Transfer. We were delighted that Dr Helen Smith won one of the awards for Responsible Research and Innovation, which included a £4,000 grant towards her research. We look forward to sharing where this leads.


After the symposium, where we’d been joined by our international colleagues, the nodes had a further day together at the All Hands Meeting to share what we’d done over the previous 12 months. We heard from every node and about plans for UKRI’s new Responsible AI initiative, and one of the panels featured Professor Dame Wendy Hall.

Thank you to the organisers and everyone who came along to make it such a useful, interesting and friendly event.

All images credited to photographer Ryan Warburton.