
Ethics and artificial intelligence in adult social care

09 Dec 2024

8 min read

Caroline Emmer De Albuquerque Green


  • Digital

Dr Caroline Emmer De Albuquerque Green, Director of Research at the Institute for Ethics in AI at Oxford University, speaks to us about how the Institute is working to ensure artificial intelligence (AI) is used ethically to improve the quality of care delivered in our sector.

Artificial intelligence (AI) has the potential to revolutionise many sectors. Adult social care is no different, with a huge number of AI tools being rapidly developed with the intention of improving the lives of those working in our sector and being supported by it.

Popular tools such as ChatGPT and other large language models have already proven valuable to those looking to lessen the administrative burden of their jobs. In social care, a sector that struggles to attract and retain staff, these tools could free up time for other tasks, foremost delivering support to those who need it.

In addition to making things more efficient, such tools have real potential to make social care a better experience for those receiving support. From personalised communication to AI-driven assistive technology, tangible improvements to people’s quality of life are becoming increasingly attainable.

I’m optimistic about the potential these tools have to make care services better. However, there’s also a significant risk of harm to those the sector looks to support, as very limited guidance has been produced on how these tools can be used safely and ethically.

Since February of this year, I’ve been co-chairing a research project titled ‘Oxford Project: The Responsible Use of Generative AI in Social Care’. The project is being delivered by the Institute for Ethics in AI at Oxford University, in partnership with Katie Thorne from the Digital Care Hub, Daniel Casson, Managing Director of Casson Consulting, Reuben College, and 30 organisations and individuals from the adult social care sector.

Our project aims to investigate the responsible and ethical use of AI tools, including large language models, in social care provision. Despite the clear benefits of AI, there are a number of risks we must tackle to ensure we do not expose those this sector supports to exploitation or rights violations.

Perhaps the most obvious of these risks concerns privacy and data protection. At present, many people are unaware that by using today’s popular AI tools, such as ChatGPT or Google’s Gemini, they’re submitting data that may be used to train and refine the language models at the core of these technologies. The submitted data is also stored by the organisations behind these tools and is susceptible to data breaches, something that has already happened to ChatGPT. Ultimately, social care providers could be liable if the sensitive data of those they support is used inappropriately by these organisations or accessed without permission.

To combat these risks, our project looks to co-produce actionable guidelines for the appropriate use and deployment of generative AI in social care, as well as identify where further resources and training are needed.

To kick off the project, we invited 22 organisations to a roundtable event hosted at Oxford University. Representatives from each organisation discussed the need for guidelines on the appropriate use of AI in social care, and working groups were created to reflect the experience and interests of the participants.

The goal of our project is not simply to produce guidelines, but to create a foundation focused on what it means to use AI well within adult social care. We’ll also be looking to influence how the government responds to the ongoing growth of AI technology. Currently, there’s no regulation governing how products intended for the social care market are developed or deployed, in contrast with markets such as healthcare. Personally, I think it’s imperative that we see similar regulatory oversight for products and technology used in social care settings, and the integration of AI into these products makes this even more critical. I’m hopeful that our work can shine a light on this issue.

For technology providers themselves, we’re actively working on a pledge that we hope will be adopted by those producing AI products for adult social care. The pledge will reflect the guidelines our project looks to produce and acknowledge the responsibility of providers to uphold the shared values of the sector by being human-centred and transparent.

Although our work has several objectives, to me the biggest measure of its success will be whether we’re able to keep this shared movement going into the future. A concerted, persistent effort is the only way we’re likely to make the impact we want. If we continue to promote the importance of our work, government, regulatory bodies and technology providers will be far more likely to recognise and embrace it, which is key if we’re to realise the benefits of AI ethically and safely.

Find out more about the project or visit our campaign landing page.
