Event overview
The fourth People’s AI Stewardship Summit (PAISS) took place in Swansea, bringing together diverse voices to explore how artificial intelligence might shape work and skills in Wales.
Like previous summits, the Swansea event invited a broad spectrum of AI-interested people:
- The public — some familiar with AI, others less so.
- Experts — including people working in AI-related businesses, civil society organisations, policymakers and researchers.
- Facilitators — including Welsh-language support.
- And the team from the Royal Academy of Engineering.

The summit took place at Tramshed Tech—also the new home of the Royal Academy of Engineering’s Enterprise Hub in Wales. Mike McMahon introduced the Hub’s mission to support exceptional entrepreneurs with high-potential ideas. Officially launched in 2023, the Hub provides funding, mentoring, and business development support across Wales.
Explore a breakdown of the event below to find out what was discussed in each session and how the people of Swansea feel about AI.
We want people who don’t work in AI to talk to those who do.
Dr Natasha McCarthy, Associate Director, National Engineering Policy Centre
Opening talks: Setting the scene with Dr Matt Roach

Dr Matt Roach from Swansea University opened the day’s discussions. Describing himself as “unapologetically the super geek in the room”, he gave an overview of what AI is and where the major risks lie. AI, he explained, is a set of computational algorithms that have been in use since the 1960s. What’s changed more recently is the scale and speed of deep learning, which allows systems to learn patterns directly from data without needing hand-written rules. Using a relatable example—teaching a model to distinguish between cats and dogs—he explained how deep learning systems are trained by adjusting “weights” in response to errors, improving accuracy over time.
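The weight-adjustment idea Dr Roach described can be sketched in a few lines. This is a hypothetical toy, not his actual demo: a tiny logistic classifier for "cat vs dog" trained by nudging its weights in whichever direction reduces the prediction error, with invented features (ear pointiness, snout length).

```python
import math

def predict(weights, bias, features):
    """Squash a weighted sum of features into a 0..1 'dog-ness' score."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.5, epochs=200):
    """Adjust weights in response to errors, improving accuracy over time."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in zip(data, labels):
            # Positive error means we overshot; negative means we undershot.
            error = predict(weights, bias, features) - label
            weights = [w - lr * error * x for w, x in zip(weights, features)]
            bias -= lr * error
    return weights, bias

# Invented training data: (ear pointiness, snout length); 1 = dog, 0 = cat.
data = [(0.9, 0.2), (0.8, 0.1), (0.2, 0.9), (0.3, 0.8)]
labels = [0, 0, 1, 1]
weights, bias = train(data, labels)
print(predict(weights, bias, (0.85, 0.15)))  # low score: cat-like
print(predict(weights, bias, (0.25, 0.85)))  # high score: dog-like
```

Real deep learning stacks many layers of such weighted units and learns the features themselves from data, but the training loop is the same in spirit: predict, measure the error, adjust the weights.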
He also highlighted key concerns. "AI reflects our assumptions," he noted: a model can reinforce bias, for instance by associating leadership with white men if most of the images labelled 'leaders' that it is trained on show white men.
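The mechanism behind that bias example can be shown with a hypothetical, invented dataset: a system that simply learns correlations from skewed labels will reproduce the skew.

```python
from collections import Counter

# Invented, deliberately skewed training labels for illustration only.
training = (
    [("white_man", "leader")] * 80 + [("woman", "leader")] * 5
    + [("white_man", "not_leader")] * 20 + [("woman", "not_leader")] * 95
)

def p_leader(group):
    """Learned association: how often this group is labelled 'leader'."""
    counts = Counter(label for g, label in training if g == group)
    return counts["leader"] / sum(counts.values())

print(p_leader("white_man"))  # 0.8
print(p_leader("woman"))      # 0.05
```

The model has learned nothing about leadership itself, only the imbalance in its training labels, which is exactly the risk raised in the talk.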
Large models also use significant computing power and water, with AI-related infrastructure predicted to withdraw up to six times as much water as Denmark annually by 2027.
Public discussion sessions
AI and the Future of Work
Hopes
Participants hoped AI could take on repetitive or laborious tasks, leading to better-quality outputs, shorter working weeks, and more time for creativity or care work. They saw opportunities for delivering better services and support for neurodivergent workers.
Uncertainties
There was concern about dependency on AI tools, particularly if they become expensive or opaque over time. Who is accountable for AI-generated work? Should clients be told when AI is used? What happens if AI becomes self-referential?
If AI draws from itself, will it get more incorrect or just self-confirm?
Fears
Participants voiced fears about AI shrinking teams, normalising mediocrity, or eliminating jobs outright. Receptionists and factory workers were mentioned. One concern was that work might become more intense if AI automates only the easier tasks, leaving humans with the most demanding ones. The fear of bias came up repeatedly, especially if AI is developed without input from social sciences or the humanities.
AI and the Future of Skills
Hopes
On skills, there was enthusiasm for AI’s potential to personalise learning, support underserved learners (e.g., those with dyslexia or limited digital literacy), and improve healthcare through faster screening and diagnosis.
Uncertainties
Some groups raised concerns about dependency: whether AI might reduce curiosity, creative thinking, or discussion. And will some of the population be left behind if they don't adopt AI?
Fears
In creative industries, some participants worried about artistic work being scraped without consent. There were also fears of power becoming concentrated, overqualification without practical skills, and lack of emotional skills. “Smaller brains,” one participant noted, fearing over-reliance could blunt critical faculties.
Regulation and responsibility
Attendees imagined a future where clear laws protect people from misuse without stifling creativity or progress. If AI is here to stay, governance must move with it—not behind it. They didn’t want regulation to become censorship, but also didn’t want to see decisions on AI, its development and uses left unchecked.
Some suggested we involve historians, learning from past experiences, like the adoption of seat belts and the industrial revolution in Wales.
Education and critical thinking

Many posters called for critical thinking in schools: helping students question AI, spot fake content and stay safe online. One participant training in counselling recalled a student saying that an AI therapy tool had offered them the best help they’d ever received. Her message was hopeful: AI might expand support where human systems are overstretched.
Environment and global impact
Several groups touched on AI’s carbon and water use, pointing out that poorer countries might bear the brunt of that impact. Rather than resisting AI entirely, groups focused on making it more sustainable through design choices, regulation, and open conversations about costs and trade-offs.
Work, identity, and social change
Many posters reflected optimism about reshaping work. One well-received suggestion was that AI might allow us to work two days and get paid for five. Another group said that while AI might change our jobs, it shouldn’t replace people’s sense of being needed and valued. The mention of Tata Steel redundancies grounded the conversation in authentic, local experience. Can we anticipate such shifts and train people to move industries?
Open discussions
In the following discussion, participants reflected on how their views had changed. Some expressed growing concern, especially after learning about the extent of the environmental impact. But others felt reassured by seeing how many people cared and that experts were listening.
Opinions were mixed on how much regulation is needed, though most leaned toward stronger rules. Some worried that regulation would restrict innovation. One expert offered a different view: that regulation could actually foster innovation by imposing more explicit constraints within clear guidelines.
There was also debate about labelling. Should people be told when AI is involved in decision making, service delivery or product creation? Most leaned towards yes, but some stakeholders were more sceptical. One participant said people should be able to judge the message, not the method. Another suggested that labelling might be positive, helping people realise all the hidden ways AI is already benefiting them.

Hear from the public on AI
Members of the public were invited to share their views on a number of questions:
- What brought you to participate in the summit today?
- Do you have any hopes and fears around AI?
- How confident are you that the benefits of AI will be seen locally in Swansea?
- What would make you trust AI?
- Do you think the public understand AI?
Explore all the videos in a playlist.
Related policy work
Belfast
The first People's AI Stewardship Summit was held on 5 March 2024 at the Enterprise Hub NI in Belfast. Members of the…
Futures and Dialogue
Using foresight techniques and engaging with the public and other stakeholders to inform policy and policymakers.
Data and AI
The Academy has undertaken a wide range of projects on the role of data and artificial intelligence (AI) in shaping the…
Engineering Responsible AI
The Academy's Engineering Responsible AI campaign explores how the emerging technology of artificial intelligence (AI)…