Event overview
On 14th July 2025, the Royal Academy of Engineering hosted the final event in its People’s AI Stewardship Summit series, concluding a UK-wide journey that had visited Belfast, Glasgow, Liverpool, and Swansea.
Members of the public from London joined entrepreneurs, policymakers, academics, and civil society organisations to explore how AI can be shaped in ways that genuinely serve people and communities.
At the heart of the Summit series was a clear purpose: to hear directly from local community members to ensure their perspectives shape how AI is imagined, developed, and used. The London event focused on two themes with relevance to the city: health and transportation.
Explore a breakdown of the event below to find out what was discussed in each session and how the people of London feel about AI.

These summits are a fantastic opportunity for our founders to engage with the end users of the technology.
Ana Avaliani, Director of Enterprise, Royal Academy of Engineering.
Opening talk
Setting the Scene: On artificial intelligence
With Professor Nick Jennings, Vice Chancellor of Loughborough University

AI might feel new to many of us, but as Professor Jennings reminded us, it has been around for decades. Alan Turing was writing about thinking machines in the 1940s, and the term “artificial intelligence” was first used in 1956. From Netflix recommendations and voice assistants to smart thermostats, facial recognition and warehouse automation, AI is all around us — often unnoticed.
He left us with three takeaways:
- AI lets us do things we couldn’t do before, for good or bad.
- It is already central to solving complex problems.
- Its greatest strength lies in collaboration with people.
Public discussion sessions
Health and wellbeing
Hopes
Faster, more accurate diagnoses came up repeatedly — from “fast and accurate scan reports” and “early identification of illness” to systems trained on more examples than any human could remember. Participants hoped AI could catch the notes that staff miss and help clinicians feel more confident in their decisions.
Many saw AI easing pressure on the NHS by handling admin and triage, reducing wait times and “making simple things fast.”
Beyond clinical use, hopes included fairer resource allocation, data-informed decisions, and improved access to care. Some imagined ambitious possibilities — from “preventative health benefits from birth” to “finding cures for all diseases.”
Uncertainties
Participants raised questions about how AI would function in real healthcare settings. Some wondered whether it would truly lower NHS costs or simply reduce staff, leading to “robot care homes” and “bedside robots.”
Accountability was a key concern; if something goes wrong, who is responsible? Questions were also raised about whether AI could match “GP-level understanding,” and how it might affect staff training, junior roles, and diagnostic consistency.
Fears
Misdiagnosis was a common concern, along with the risk that clinicians might “blindly trust AI opinion” without scrutiny. Some worried that AI, trained only on existing data, might limit medical research, offering no real innovation, just reinforcing what’s already known.
Others raised concerns about bias in training data, particularly relating to race, gender, sexuality, and weight. There was discomfort around AI replacing human interaction, especially in mental health or end-of-life care. Broader fears included poor data governance, vendor lock-in, and the growing power of private companies.
Transport and infrastructure
Hopes
A major theme was efficiency. People wanted smoother, faster journeys, with AI helping to plan routes, reduce traffic, and improve scheduling by adapting in real time.
Some saw potential for AI to support smarter road planning, detect faults in vehicles, and enable more proactive maintenance of public transport systems.
Hopes included self-driving trains, self-repairing infrastructure, and even “superflight” instant transport.
Uncertainties
A key concern was pace: would systems like driverless cars be rolled out too fast, without sufficient testing?
There were also unresolved questions about purpose and need. With more people working from home, how much transport infrastructure is necessary? Would AI-supported systems be designed with the needs of older people, disabled travellers or women travelling alone in mind? Would rural areas be overlooked in favour of cities?
Fears
Driverless vehicles raised fears of accidents, loss of human oversight, and sabotage.
Some worried AI could be misused in human trafficking or identity theft, or used to restrict personal freedom. Surveillance was another fear, especially facial recognition, and digital IDs in public spaces, with little clarity on who’s in control or how that data might be used.
One participant warned of “false optimism,” where AI is sold as a fix-all while infrastructure and services for people are quietly cut. At root, many feared that decisions would be made for efficiency or profit, not for public good.
Next, participants were asked what government, industry, academia, and civil society should do to support their hopes for AI and address their concerns.
For Health and Wellbeing
- Government was expected to act early, regulate clearly, and invest in public education and independent audits. Participants wanted funding not just for new tech, but for better care.
- Industry was urged to prioritise safety, transparency, and accessibility.
- Academia was seen as a leader in ethics and long-term thinking, with a role in producing accessible, applied research and disclosing funding sources.
- Civil society was urged to advocate, question, and elevate underrepresented voices in decisions about AI in healthcare.
- Across all sectors, participants emphasised the need for honesty, openness, and public education so people can have conversations about how AI is used in their care.
For Transport and Infrastructure
- Government was expected to act fast, upskill itself, and resist corporate capture.
- Industry was urged to design with ethics in mind, be transparent about its use of data, and avoid complicity in misuse.
- Academia was expected to provide high quality evidence and treat AI as a social as well as technical challenge.
- Civil society was seen as a watchdog and bridge: surfacing local knowledge, challenging poor decisions, and keeping the public’s voice in the loop.
- There was support for independent regulators “with teeth” — capable of enforcing rules, rectifying harm, and holding both government and industry to account.
Visioning: Positive futures for AI
Participants created posters sharing their visions for AI in health and transport.
Health and wellbeing
Building trust through education and representation
One group noted that “AI can only be as powerful as our data,” raising the risk of failure if underrepresented communities are excluded. One poster imagined the foundation blocks needed for trustworthy AI in healthcare, including education for young people and equitable research.
Ending the postcode lottery
Many envisioned AI helping reduce health inequalities, enabling faster, more precise diagnoses and tackling delays. If built well, an AI-supported NHS could offer fairer outcomes for all, not just the already well-served.
Tech which supports, not replaces humans
Participants imagined smart care homes that use AI for medication reminders or social prompts but keep human care at the centre.
Empowering patients through knowledge
By giving patients more knowledge, AI could help them contribute more confidently to decisions about their care, rebalancing power between patients and doctors.
Public good over private gain
One group imagined AI surrounded by a “halo of governance,” serving people and the planet, not just the powerful.
Healthcare as a whole system
AI wasn’t seen in isolation. Groups spoke about food, social care, and dementia support, and warned against narrow technical fixes. “It’s not just the O-ring that crashed,” one group reminded us, referencing past failures that had social, cultural, and political as well as technical causes.
Transport and infrastructure
Human oversight by design
“Red buttons” were proposed — literal or symbolic safeguards to ensure people stay in control. Even with self-driving cars or pilotless planes, AI must support human judgment, not override it.
Monitoring, security, and independent checks
AI could play a role in real-time monitoring of infrastructure, emergencies, even mass migration, but must be accountable and auditable.
Transport that serves people and planet
AI’s potential to create greener, more affordable transport came through strongly. “No vanity space travel,” one group wrote. The priority was wellbeing, not spectacle.

Expert insights
AI and Robotic Surgery: Professor Prokar Dasgupta, King’s College London
Provocation: Robotic surgery is increasingly common in cancer care, with faster recovery and better outcomes. AI could make it even more accurate, but would you trust a machine to operate on you autonomously?
Reflections: Professor Dasgupta noted that fully autonomous surgery isn’t science fiction: just days earlier, a machine had successfully removed a gallbladder from pigs. Still, most participants said, “not yet.” They stressed the need for a human in the loop.
AI and Deception: Dr Stefan Sarkadi, King’s College London
Provocation: AI-driven deception is an urgent societal challenge. While not all forms are harmful (e.g. entertainment or therapeutic use), how should we manage the risks of malicious deception while preserving any benefits of pro-social applications?
Reflections: The group felt that deceptive AI is a tool shaped by human intent. While it can be used constructively—such as softening the delivery of serious medical news—they were most concerned about its potential for harm: flooding people with misleading content, fuelling disinformation, or manipulating public opinion.
AI-Generated Personas: Daniel Foster-Smith
Provocation: Many companies spend a lot of time and money talking to customers before launching new products, while smaller businesses often can’t afford this research. How would you feel if companies used an AI tool to create realistic profiles using their existing customer data?
Reflections: The group focused on the risks of bias, not just in the data used, but also in how it’s collected and who collects it. They questioned what kind of data would be fed into such systems, and how it could be accounted for.
Medical Imaging and Triage: Professor Ben Glocker, Imperial College London
Provocation: Professor Ben Glocker asked whether people would accept fully automated AI triage systems that assess scans, decide next steps, and refer to specialists, especially if they reduced waiting times. Would people want to opt out?
Reflections: The group focused on trust. People are used to trusting humans, not machines—like trusting product reviews rather than sellers. There was discussion about whether AI could work behind the scenes while a doctor remains the face of care. Some questioned whether we hold AI to higher standards than human clinicians, and whether we even know how accurate human judgement is.
AI in Mental Health Services: Niamh Roberts (South London and Maudsley NHS Foundation Trust)
Provocation: Niamh asked how people would feel about a tool that could read a patient’s mental health history and suggest treatments. What should it consider, and how should it be used?
Reflections: The group valued the idea of AI identifying patterns across fragmented systems, such as GP and local authority data, especially when patients see different clinicians over time. But they recognised that this posed significant ethical and privacy challenges in implementation.
The future of AI in London
1. How can we ensure that the benefits from AI developed in London and using local resident data have local benefit?
The group reflected on the scale of data collected across London, from street CCTV to hospital and transport infrastructure systems, and questioned whether this wealth of data is delivering enough value for local communities. They imagined using real-time traffic data to suggest safer routes, or analysing environmental data to track pests. Another welcome idea was AI systems detecting rail faults, or trees that could fall on tracks, before they cause travel disruption.
2. Should AI use be identified or labelled — and if so, how?
There was strong support for labelling AI use, particularly where it shapes decisions or affects consumers. But the group acknowledged this isn’t simple. Should we label all uses, or only when AI is central to the task? It depends on what the AI is doing. And how would this work in practice when many technology providers are global and not incentivised to be transparent?

Hear from the public on AI
Members of the public were invited to share their views on a number of questions:
- What brought you to participate in the summit today?
- Do you have any hopes and fears around AI?
- How confident are you that the benefits of AI will be seen locally in London?
- What would make you trust AI?
- How confident are you in AI?
Explore all the videos in a playlist.
Related policy work
Swansea
The fourth People’s AI Stewardship Summit took place in Swansea. This summit focused on exploring how AI might impact w…
Liverpool
The event focused on the opportunities and challenges AI poses, particularly in health and infrastructure—salient issue…
Futures and Dialogue
Using foresight techniques and engaging with the public and other stakeholders to inform policy and policymakers.
Engineering Responsible AI
The Academy's Engineering Responsible AI campaign explores how the emerging technology of artificial intelligence (AI)…