Regulation and Standards in the Safe and Ethical Adoption of AI Systems in Healthcare
From Isabelle Hanlon
Abstract: Following a series of high-profile commercial and research developments such as AlphaGo and AlexNet, the 2010s saw a significant upsurge in attention, interest, and investment in AI technologies. As part of this, much has been written about the potential for AI to ‘revolutionise’ healthcare delivery across diverse domains, from radiography to mental health. However, developing systems that deliver concrete and robust healthcare benefits has proven a significant challenge, as high-profile failures such as IBM’s Watson for Oncology demonstrate. Furthermore, significant, and often underappreciated, work has been done to draw attention to the inequitable impacts and safety risks of AI systems developed with little or no consideration of ethics or societal impact.
To ensure that AI systems are effective and safe, and that substantial steps are taken to reduce and mitigate inequitable outcomes, work is now underway to develop methods and instruments for auditing AI in healthcare. This talk will cover the wider healthcare contexts and critical issues that establish the need for rigorous evaluation. It will address the role of regulation and standards in these efforts, with particular attention given to standards. The speakers are members of the development committee for the UK’s first auditable standard for healthcare AI, ‘BS 30440 Validation framework for the use of artificial intelligence (AI) within healthcare’. BS 30440 constitutes an important case study in the opportunities and challenges of developing audit instruments for assessing healthcare AI.
Bios:
Michelle
Professor Michelle Williams is Professor of Cardiovascular Imaging at the University of Edinburgh and a Consultant Radiologist with NHS Lothian. She made substantial technical contributions as a member of the BS 30440 development panel. She is Associate Director of the British Heart Foundation Data Science Centre, where she leads the Imaging theme. Her research centres on multi-modality non-invasive imaging of the heart and blood vessels, including the use of machine learning and other advanced analytic techniques. She is President-Elect of the British Society of Cardiovascular Imaging, a member of the executive committee of the European Society of Cardiovascular Radiology, a member of the Board of Directors of the SCCT, and chair of the SCCT education committee.
Danny
Danny Ruta is the AI Clinical Lead at Guy’s Cancer Centre, the cancer centre for Guy’s and St Thomas’ NHS Foundation Trust. His present work centres on developing and applying a validation framework, evidence standards, and methods for the clinical evaluation of AI technology in cancer care. This work is performed in collaboration with the Clinical Scientific Computing department at Guy’s and St Thomas’ and with colleagues at King’s College London. Danny’s draft validation framework formed the foundation for ‘BS 30440 Validation framework for the use of artificial intelligence (AI) within healthcare’.
Danny has formerly held posts as Director of Public Health for the London Borough of Lewisham and the City of Newcastle, and as Senior Lecturer in Epidemiology and Health Services Research at Newcastle University and Dundee University. He has developed national clinical guidelines for the NHS in Scotland, conducted large randomised trials, and developed Patient-Reported Outcome measures recommended by the US FDA and used with patients across the NHS in England and throughout the world.