Regulatory science for AI in health: A vision for global solidarity
By: Dr. Bilal Mateen, Executive Director, Digital Square at PATH
On June 4th, I was invited by the Brazilian Government to give a keynote address during the 3rd Health Working Group meeting as part of a session on ‘Harnessing artificial intelligence across the G20: Opportunities and challenges for global health’. Speaking to a room full of G20 delegates, donors, academics and scientists, public health advocates, and members of private industry, I called on the global health community to adopt a globally unified approach to artificial intelligence (AI) regulation for health. I’m sharing my remarks here in full so this message may reach beyond those gathered in Salvador today, as it will take a truly global effort to unlock the potential of equitable AI in health.
Ambassador Ghisleni, Secretary Haddad, and esteemed colleagues, it’s a privilege to address you all today. I’d like to start by thanking the Government of Brazil for the honour, and for your leadership on the topic of regulatory science. It should not go without mention that Brazil’s national regulatory agency, ANVISA, is one of only two agencies from low- or middle-income nations that is a full member of the International Medical Device Regulators Forum (IMDRF), a testament to your influence and proof that there is no better sponsor for today’s discussion.
Last year, on the sidelines of the UN General Assembly, I was introduced to a concept. I think this concept could be incredibly valuable for us to reflect on as we discuss what those assembled here can achieve in regulatory ecosystem strengthening and eventual harmonisation, in pursuit of equitable access to safe and effective AI in health.
That concept is global solidarity, and today, I want to share with you all a vision for global solidarity around regulatory science for AI in health.
As the 2023 Global Solidarity Report states, “Solidarity is the basis of community—whether local, national, or international. When we have a sense of belonging together, effective and representative institutions, and powerful stories that show cooperation working, the sacrifices needed to solve common challenges become possible. Without that solidarity, it is much harder to make tough choices and fix crises.”
Whilst we often speak about the positive impact and potential of AI, I think we are all realising that there is, at the same time, a dark reality we need to contend with: the rapid pace of development is quickly creating a series of winners and losers. This is not a distant future; it’s happening now. There are those of us who can access and test best-in-class models, with appropriate safeguards, to inform care, and those who can’t. Where the US FDA has cleared or approved over 800 AI-based tools, the entire continent of Africa has approved fewer than 10. That’s not to suggest these tools aren’t being developed at scale there, but that much of Africa lacks clear safeguards and mechanisms for recourse when things go wrong.
Couple that with the ability to propagate misinformation and disinformation at scale, which could be used to undermine vaccine confidence, and with the fact that these tools have forced us to contend with fundamental questions of ownership and intellectual property, such as whether I own the sound of my voice, and it is unsurprising that our regulatory frameworks have been pushed to breaking point. That is what you can see illustrated on your screens: the Global Solidarity Report published last year found that our collective faith in institutions is at an all-time low. However, there remains a clear opportunity to leverage our identity as global citizens to address this challenge.
To give you a meaningful example to help anchor what I mean by our regulators having been stretched to their limits, let me draw on the disruption caused by large language models (LLMs).
Our current regulatory framework for software as a medical device, which includes AI, is based on the assumption that these technologies are deterministic; in other words, that the same input will reproducibly generate the same output. It is from that assumption that we draw confidence that the evidence of safety and efficacy generated in clinical trials is a generalisable guarantee. However, we know that LLMs are not deterministic. The creativity engines built into them work by injecting randomness into their answers, and once that axiomatic assumption of deterministic behaviour is broken, it is hard to see how we can apply the same framework to regulate them.
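To make that concrete, here is a minimal, self-contained sketch of how temperature-based sampling, the standard mechanism for injecting that randomness, breaks the same-input-same-output assumption. It is a toy next-token sampler rather than any production LLM, and the token scores and function name are hypothetical illustrations.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Pick a token from a softmax over scores; temperature > 0 injects randomness."""
    if temperature == 0:
        # Greedy decoding: deterministic, always the highest-scoring token.
        return max(logits, key=logits.get)
    # Scale scores by temperature, then apply a numerically stable softmax.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    # Draw a token in proportion to its softmax weight.
    r = random.random() * sum(weights.values())
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # floating-point edge case: fall back to the last token

# Hypothetical next-token scores for a clinical question.
logits = {"yes": 2.0, "no": 1.8, "unsure": 1.5}

print([sample_next_token(logits, 0.0) for _ in range(5)])  # identical every run
print([sample_next_token(logits, 1.0) for _ in range(5)])  # varies run to run
```

The first line of output is the deterministic world our clinical-trial-based framework assumes; the second is the world LLMs actually live in.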
Imagine a chatbot that gives a different answer each time it is asked the same question; one that not only sometimes gets it wrong, but sometimes hallucinates evidence to justify its answer. How do we even begin to characterise this behaviour? What rate of hallucination is safe? How do we differentiate the seriousness of one hallucination from another?
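As a sketch of one way we might begin: treat each repeated query as a trial, have experts label each answer, and report the observed hallucination rate with a confidence interval, the kind of quantitative characterisation a regulator could act on. The numbers below are entirely hypothetical.

```python
import math

def wilson_interval(failures: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (here, hallucination rate)."""
    if n == 0:
        return (0.0, 1.0)
    p = failures / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - margin), min(1.0, centre + margin))

# Hypothetical audit: the same clinical question asked 200 times, with 7 answers
# judged by expert review to contain fabricated evidence.
n_runs, n_hallucinated = 200, 7
low, high = wilson_interval(n_hallucinated, n_runs)
print(f"observed rate: {n_hallucinated / n_runs:.1%}, 95% CI: [{low:.1%}, {high:.1%}]")
```

Even this simple framing immediately runs into the harder questions above: a single rate says nothing about how serious each hallucination was.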
What we’ve arrived at is a phenomenally interesting set of research questions, or at least I think so: how do we effectively evaluate a non-deterministic technology when it is used in a safety-critical setting like health? That is the crux of regulatory science, which is the marriage of implementation science, operational research, and policy development. For the last decade, we’ve had notable leadership from the USA, the UK, and several others through their investments in centres of excellence in regulatory science and innovation. But the diffusion of the foundational knowledge generated by these centres, beyond national borders or outside the eleven core members of the IMDRF, has been slow.
If we are honest with ourselves, in response to the novel challenges introduced by AI and especially generative technology, we’ve turned inward. We’re becoming more parochial as we seek to build stability locally, forgetting that this crisis of needing to figure out how to effectively regulate AI in health is so profound that none of us can do it alone in the time we have available.
Thus, my ask of all of you today is simple. First, we need a commitment from each of you that, as knowledge is generated and lessons are learnt in regulatory science for AI, it will be made available immediately and as freely as possible, as with any global public good. In other words, we cannot think of effective safeguards as a competitive advantage that one country can offer and another cannot.
Second, whilst finding the time will always be hard, coordination and collaboration among the institutions we have must improve. Regulation of AI in health at the national level cannot be managed solely by medicines and devices regulators; the technology touches too many aspects of effective governance (with credit to our colleagues in the UK for recognising this need for plurality when deciding which statutory bodies needed to be involved in the ‘AI Airlock’ sandbox). At the international level, there is a proliferation of institutions at a time when we need to consolidate.
This brings me to my final point: we need a mechanism to hold ourselves and each other accountable on this issue. Whether that is an extension of the mandate we’ve already given to member-state organisations, leveraging the new global initiative on AI, or a decision that we need a new entity, much as we have created for other types of crises, I will leave open to debate.
To close, I hope that as we leave here today, we can agree that global solidarity on the regulation of AI in health, however we choose to operationalise it, is necessary. And the good news, to quote Mia Mottley, the Prime Minister of Barbados, is that people everywhere have more solidarity with each other than governments have so far mustered, as we saw earlier. We just need to channel that goodwill into real action.
Thank you.