Imperial experts answer key questions on the future of AI to mark the AI Safety Summit in the UK

The AI Safety Summit 2023 is focussing on the potential risks from frontier AI and how national governments should categorise and manage those risks while ensuring that we can realise the benefits of AI.

Six Imperial researchers working on AI shared their thoughts on the progress of this technology.

  • Professor Christopher Tucci, Professor of Digital Strategy & Innovation at Imperial College Business School and Co-Director (Education) for I-X
  • Dr Saira Ghafur, medical doctor and Policy Fellow and Lead for Digital Health at the Institute of Global Health Innovation
  • Dr Petar Kormushev, Senior Lecturer in Robotics at the Dyson School of Design Engineering and Director of the Robot Intelligence Lab
  • Professor Francesca Toni, Professor in Computational Logic at the Department of Computing
  • Professor Aldo Faisal, Professor of AI & Neuroscience at the Departments of Computing and Bioengineering
  • Professor Hamed Haddadi, Professor of Human-Centred Systems at the Department of Computing and I-X, and Security Science Fellow at the Institute for Security Science and Technology

The questions and answers below have been edited for length and clarity.

Q – Why do we need this Summit now?

A – Professor Faisal: “The speed of AI development, as perceived by the public through systems like ChatGPT, has accelerated at a rate that perhaps only professors and students were aware of before. It is clear to me that in 200 years’ time historians will have a name for what is happening this year and in the years ahead – like the French Revolution, for example. That is because we now have AI systems that have arguably passed the Turing test. We have functional AI systems that solve problems at a level which allows them to perform above human performance in society. Rather than being invented or crafted ‘by hand’, these machines find solutions by learning from data.”

A – Professor Haddadi: “I think we are at a point where these algorithms, these models, are coming from industry, from academia, from everywhere, and they are becoming embedded in our lives from all angles. So it is time to have that dialogue and ask: ‘is this ecosystem safe?’, ‘is this the trajectory and direction that we want to go in?’, ‘do we need regulation in this space, or do we let the market and businesses take care of it and see what happens next?’”

A – Dr Kormushev: “With the general awareness of ChatGPT and similar systems, there is also a rising sense of fear across many professions that people might lose their jobs to an AI system. This needs to be addressed early. My opinion is that we need to partner with AI. With the robots we build in my research, I always aim for them to be partners that help make humans’ jobs easier.”

A – Professor Toni: “Additionally, this is not something that can be dealt with unilaterally. I think there is a need to talk internationally about these problems. We can’t just sit in our corner and address the problem locally. In my opinion, there is a need for a broader discussion because we are going to need globalised solutions.”

A – Professor Tucci: “Information technology regulation, in general, does not work very well when only one jurisdiction does it, because you can just rush off to the most lax jurisdiction, develop a system and then sell the solution back into the place that had the restrictions on it. I think that is why it is quite important to think about multilateral approaches to these things.”

Q – What can we expect in terms of future policies and regulations?

A – Professor Faisal: “The European Union has been thought-leading on digital issues such as data privacy with GDPR, and is now embarking on the EU AI Act. Challenges arise in the details of the EU’s legal approach to AI, which in part overlaps, and potentially conflicts, with its other digital regulations, such as GDPR and the Digital Markets Act.

“Moreover, the detail of AI regulation – for example, on the traceability and accountability of AI systems – is not going to be written into EU law itself. It is going to be farmed out to technical standardisation committees that, at a very technocratic level, will have to work out what actually happens in practice.

“The other challenge is that the EU moved so early that it did not account for the global impact of Large Language Models (such as ChatGPT) in their present form, and what this means for everyday use, e.g., in legal communications.

“There is an opportunity here for the UK to come up with a pragmatic, coherent proposal for regulation that can make us the place to take AI technology forward.”

Q – Conclusions reached by AI will have real-world consequences – for example in healthcare or law. Is it more important to make the conclusions of AI explainable to users, or to ensure that they have been run through some sort of safety filter after they emerge?

A – Professor Toni: “I believe they are both very important. If you are a GP who uses a tool that helps to determine the risk of cancer in patients, you will be somewhat more comfortable knowing that the system you are using has been verified and does not have biases or other issues. However, you still want to understand what it is telling you, because, ultimately, you are using the tool and you are deciding for a patient. If you don’t understand the rationale behind the conclusion the machine is giving, you are not going to feel empowered by the tool, so it is important to have both. You need the safety guarantees and the verification of desirable properties of the tool, but it is also crucial that the tool is usable, that the human users understand what it is doing, and that it aligns with their values.

“In terms of trusting a system, you can unfold trust into two different types. You have what some researchers call extrinsic trust – trusting a system at the input and output level, perhaps based upon the system having been verified and having some desirable properties. But there is another type of trust, which is equally important – intrinsic trust. This is trusting how the system functions, how it reasons, the knowledge it embeds and whether that aligns with the knowledge of humans. I think in all the high-stakes applications, such as medicine, finance or law, we need both types of trust – the extrinsic one and the intrinsic one.”

A – Dr Ghafur: “Clinical education is so crucial to using AI in healthcare. As a clinician, you are always trained to know what the evidence is for something before you use it, especially with drugs. You read a paper, you are confident in the results, and then you prescribe a treatment to your patients.

“For AI, the evidence that you have for the algorithm is that it has been trialled and tested on data. When you use it in the real world, you’ve got to carry on studying how that tool is deployed and the unintended consequences that might occur from using the tool in practice. You’ve got to ask: how does it work in a clinical workflow? How is it going to change practice? How does it impact clinicians? And crucially, is it better than what you were doing before? Is there a return on investment? Is it more expedient? Does it give better clinical outcomes?”

A – Professor Tucci: “There is an interesting case study on this. The IBM Watson system was applied to cancer diagnosis and treatment suggestions and implemented successfully at Memorial Sloan Kettering Cancer Center. It was later proposed for use at the MD Anderson Cancer Center. It’s interesting to see that it did not work very well at the second site, even though it seemed to work quite well at Memorial Sloan Kettering.

“There were many explanations for the lack of success, but one of them is that the doctors at MD Anderson wanted to interact more with the system: they wanted to understand the reasoning or logic behind a certain diagnosis, or a certain treatment plan. It wasn’t set up for that, at that point. There were other things going on at the same time, too, but I think it’s an interesting case study.”

A – Professor Faisal: “I think there is an important question here, which is, ‘how do we make sure that AI works, and that it is safe?’ When we think about general AI, what is a good framework for regulation? I believe that healthcare regulation is a very good framework for this, because it scales: it can go from vaccines rolled out all over the planet to small local treatment developments. It knows how to balance business needs for confidentiality with the public’s needs for transparency.

“We have established practices in the UK for involving patients and healthcare workers, and for how to think about this. We have standards for what needs to be disclosed so that the company and its intellectual property are still protected, while the public is assured that this work has been done properly and safely.

“You hear discussions ranging from ‘oh, we don’t need regulation, because it stifles progress’ all the way to ‘it needs to be like nuclear power regulation’, which would preclude any small- or medium-sized players from participating. Healthcare is a beautiful example of how to do technology regulation better, in a pragmatic way that serves society.”

Q – How can we maintain confidentiality of health data and prevent malicious use when using AI?

A – Professor Haddadi: “There have been more than two decades of work on database anonymisation techniques. There are mathematical techniques that we use – for example, differential privacy and similar tools. The problem is that these techniques have a privacy budget, or a privacy guarantee, that starts to degrade over time as you release more data. This is a challenge that we will have. However, there is a strong community of researchers in the UK, the EU and internationally working on this.
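
As a rough illustration of what a ‘privacy budget’ means in practice, here is a minimal sketch – not from the interview – assuming the standard Laplace mechanism applied to a counting query with sensitivity 1; the count, budget and epsilon values are purely illustrative.

```python
import numpy as np

# Minimal sketch of the Laplace mechanism for epsilon-differential privacy.
# A counting query ("how many records match X?") has sensitivity 1: adding or
# removing one person's record changes the true answer by at most 1.

def laplace_count(true_count: int, epsilon: float, rng: np.random.Generator) -> float:
    """Release a count with epsilon-DP by adding Laplace(sensitivity/epsilon) noise."""
    sensitivity = 1.0
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
total_budget = 1.0        # overall privacy budget for the dataset (illustrative)
per_query_epsilon = 0.2   # each released answer "spends" this much of the budget

spent = 0.0
for query in range(5):
    answer = laplace_count(true_count=420, epsilon=per_query_epsilon, rng=rng)
    spent += per_query_epsilon  # basic sequential composition: the costs add up
    print(f"query {query}: noisy count = {answer:6.1f}, budget spent = {spent:.1f}")

# Once `spent` reaches `total_budget`, each further answer pushes the total
# epsilon past the intended cap -- the sense in which the guarantee degrades
# as more information is released.
```

Summing the epsilons is the simplest accounting rule; deployed systems use tighter composition bounds, but the budget intuition is the same.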

“For example, there was a recent Royal Society report on privacy-enhancing technologies, covering in detail the additional techniques we can provide. But as our computational tools get better and as we gain more and more computing power – through quantum computing, for example – we need to bring in more advanced techniques to counter this.

“There is always a trade-off between the anonymity and the utility of data. I can add a lot of noise to a dataset and it becomes fully anonymous, but there is zero utility left in it. This is a game that researchers and industry are actively playing together, and I think we need to understand the trade-offs.”
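
That trade-off is easy to see numerically. Below is a small illustrative sweep, again assuming the Laplace mechanism, in which a smaller epsilon means stronger anonymity but a larger expected error in every released answer; the values are made up.

```python
import numpy as np

# Anonymity/utility trade-off for the Laplace mechanism: the expected absolute
# error of a released count is 1/epsilon, so stronger privacy (smaller epsilon)
# directly costs accuracy.
rng = np.random.default_rng(1)
for epsilon in [10.0, 1.0, 0.1, 0.01]:
    errors = np.abs(rng.laplace(scale=1.0 / epsilon, size=10_000))
    print(f"epsilon = {epsilon:>5}: mean absolute error ~ {errors.mean():8.1f}")

# For a true count in the hundreds, epsilon = 0.01 adds error of around 100:
# close to 'fully anonymous', but with very little utility left in the answer.
```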

A – Dr Ghafur: “Confidentiality is a genuine concern. During the pandemic, we surveyed 10,000 patients in northwest London, and one of their main concerns with using ‘digital-first’ technologies to provide healthcare was trust and data security. The question was, ‘what are people going to do with my data?’

“You can look at what other countries have done here: Israel has secure data environments, the Mayo Clinic in the US is an example too, and the UK is certainly moving towards secure data environments as well. I think we need to reassure patients that these data environments are as anonymous as possible, but also that the committees that review data access requests are secure as well. So, who is working on them? Which researchers have access? Which companies have access? We should be a lot more stringent about who has access to that data, because the trust element is so critical for using data to do anything in research, and for reassuring the public that what we are doing is in their best possible interest.”

Q – What do you think of the way AI is presented in media such as film and television?

A – Professor Haddadi: “Many of the movies about AI share an AI doomsday scenario – ‘Terminator 2: Judgment Day’, for example. Why? The problem I see today with the real-world systems in our homes and environments is that there is such a rush to develop and introduce these things that questions around safety, and more advanced concerns like interpretability and transparency, haven’t been thought through.

“The problem is not so much that these are super advanced robots and AI is about to take over our lives. The problem is that they might be failing at the basic job they are supposed to do, or doing it without the fairness, accountability, or transparency which should be legally required. I think this is why this AI safety summit is well timed, for questions like, ‘do we need a basic level of auditing before AI is used for automatic insulin pumps?’”

Q – What are you most excited about for the future of AI? What value can this technology bring?

A – Professor Faisal: “I think in the near future, it is the lives saved and the patients treated who otherwise would not have been treated. This would help healthcare systems, not just in the UK, but worldwide, to deal better with the rising demand for healthcare. In the more distant future, it would be to have some very interesting conversation partners, or maybe even friends, that may not be biological.”

A – Dr Ghafur: “I would say the possibility of better clinical outcomes and better population health. I don’t know exactly what these tools will be, but there is the possibility that we can deliver better patient care through technology and data, and have a more efficient healthcare system. Currently, it is not as efficient as it ought to be – but imagine what we could do.”

A – Dr Kormushev: “For me, the most exciting thing is the potential for robots to relieve humans of the dangerous or dull aspects of their jobs, so they can focus on more creative tasks and pursue their passions without danger at work.”

A – Professor Tucci: “I agree with the comments on better health outcomes and a better environment, so I do not want to suggest that business value is the ‘be-all and end-all’, but the amount of business value that AI is predicted to create in the next 20 years is enormous – something like four times the GDP of the UK added to total global value every year within a few years. Aside from the business opportunities, I think there are some quite interesting and very important quality-of-life opportunities that could accompany many of these things.”