Practical AI is the Academy’s new event series, launched to share the realities of deploying AI in the systems we all rely on – from transport and housing to energy, utilities and beyond. The focus is on the practical decisions that determine whether AI delivers tangible improvements for people and places: how data is gathered, how risk is managed, how trust is built, and how engineers, policymakers and adopters solve problems together.
The first discussion in the series focused on road safety. Through the experience of Transport for West Midlands and VivaCity, it explored what happens when AI is integrated into physical infrastructure, how teams navigate the messy early stages of adoption, and what lessons can be learned from projects like the West Midlands near-miss pilot.
Takeaways
The pilot can offer the following takeaways for policymakers thinking about how AI fits into infrastructure:
- Collaboration and iteration are vital. The Coventry pilot worked because iteration and feedback were built into the process.
- Local authorities need data but also usable tools. Scaling AI in infrastructure requires investment in analytical capacity and easy-to-use interfaces for non-specialist teams.
- Presenting evidence in diverse formats can accelerate action. Showing short clips of near-misses persuaded the local council to act faster than datasets could.
- Behavioural insights matter. Safety design should reflect how people move, not how planners assume they will.
- Privacy-by-design builds trust. On-device processing and anonymisation meant the system could collect data without crossing the line into surveillance.
- AI insights depend on physical design and local expertise. The Coventry pilot showed that factors such as placement and lighting can make or break an algorithm’s usefulness.
Last year, approximately 1,600 people lost their lives on Britain’s roads and nearly 30,000 were seriously injured, figures that have remained fairly consistent in recent years. Reducing incidents further requires understanding where risk is highest before collisions happen, something that standard road casualty data cannot reveal.
Behind every statistic is a split-second swerve or brake. A pilot in the West Midlands set out to turn these near misses into insights that road safety teams could act on. Working with Transport for West Midlands, VivaCity integrated AI-powered road safety features into 40 of the 440 sensors it operates across the region, including speed measurement, pedestrian and vehicle counts, and the near-miss technology used to identify danger hotspots on the network before incidents happen. These datasets provided the evidence to inform the redesign of a high-risk junction in Coventry, leading to an 88.5% reduction in validated near misses involving pedestrians and significantly lowering the level of risk for this road user group.
Analysing video from junctions and engaging with the public
In Coventry, one junction became the test case. A wide bell-mouth crossing with little refuge space had been flagged by residents as unsafe, but traditional road casualty data only reveals incidents after they have happened; it cannot capture the scale of risk or explain why the danger exists.
It’s tempting to think of AI as a digital intervention, but it’s a complex systems problem.
Academy President John Lazar CBE FREng
VivaCity’s computer-vision sensors helped change that. Mounted on existing street infrastructure, the devices collect and process footage locally, automatically blur faces and car number plates for privacy, and send back anonymised data on how road users move. During early validation of the near-miss product, short redacted video clips were shared with the council team to check accuracy, but no identifiable footage ever left the device.
“As a company, privacy has always been really important to us. Each camera acts more as a sensor than a camera,” explained Mark Nicholson, VivaCity’s Co-founder and CEO. “It’s not sharing video data back, but anonymised data about what’s happening on the roads.” Each sensor is also clearly labelled, allowing members of the public to identify it and raise questions, which is a small but important feature for building trust in new technologies on city streets.
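VivaCity’s on-device pipeline is proprietary, but the privacy-by-design principle it describes – process frames locally, redact identifying detail, transmit only anonymised aggregates – can be sketched in a few lines. The snippet below is a minimal illustration using OpenCV’s bundled face detector; it is not the company’s implementation, and the detector, blur strength and output format are all assumptions made for the example.

```python
# Minimal illustration of on-device privacy-by-design processing (not VivaCity's code).
# Frames are redacted locally and only an anonymised count leaves the device.
import cv2

# OpenCV ships a pretrained Haar cascade face detector; a real roadside sensor
# would use purpose-built models for faces, number plates and road users.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def process_frame(frame):
    """Blur faces in-place and return only aggregate, anonymised data."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Redact the region before anything is stored or transmitted.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0
        )
    # Only counts, not imagery, are sent back from the device.
    return {"pedestrian_detections": int(len(faces))}

if __name__ == "__main__":
    frame = cv2.imread("junction_frame.jpg")  # hypothetical locally captured frame
    if frame is not None:
        print(process_frame(frame))
```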
Improving road crossings in Coventry
What the team quickly found was that the usefulness of each data stream depended not only on the algorithm but also on where the sensor was placed. Field of view, lighting and obstructions all shaped accuracy. Site selection became a design discipline of its own, balancing technical parameters against on-the-ground realities.
Once the data was validated, engineers in Coventry redesigned the junction. They narrowed the road mouth, extended the pavement and added a central island for pedestrians. This led to the 88.5% reduction in near-miss incidents involving pedestrians.
Just as striking was the effect on decision-making. “When you show a councillor a ten-second clip of a near-miss, it hits different,” Mark said. Decision-makers who might have dismissed or delayed acting on aggregate data responded promptly to footage of the split second in which a pedestrian was nearly struck.
The learning curve of AI adoption
The early stages were a learning curve. The AI initially misidentified bikes on car racks as near-misses and produced more footage than the small road safety team could manage. “It was a classic bite-off-more-than-you-can-chew moment,” admitted Callum Bick, senior research analyst at Transport for West Midlands.
Instead of shelving the technology, the partners made the friction part of the process. They refined definitions, built filtering tools and reviewed clips together to decide what should be counted in this new dataset. Rather than a linear rollout, the project evolved through tight feedback loops between developers refining their algorithms and transport analysts learning how to interpret the data.
“Human in the loop is so important to get real-world value,” said Anushka Fernando, VivaCity’s senior research engineer. “Working with the client and understanding what they deem a near miss is how we make sure our AI is doing what it’s supposed to do.”
The iterative approach didn’t end once the system was up and running. Each new deployment now helps refine the underlying models. Every image labelled and every interaction captured incrementally improves the algorithm’s ability to recognise what matters and ignore what doesn’t.
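The pilot’s actual filtering tools and review workflow are not described in detail, but the human-in-the-loop idea can be sketched simply: a pre-filter keeps only plausible conflicts, an analyst reviews what remains, and the reviewer’s labels become the feedback that refines the detector. The field names, thresholds and review flow below are assumptions for illustration, not the partners’ tooling.

```python
# Illustrative sketch of a human-in-the-loop filter for near-miss candidates.
# Thresholds and data fields are assumptions, not the pilot's actual definitions.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    clip_id: str
    min_gap_m: float            # closest approach between road users, in metres
    closing_speed_kmh: float
    involves_pedestrian: bool
    reviewer_label: str = "unreviewed"  # becomes "near_miss" or "false_positive"

def needs_review(c: Candidate, gap_threshold_m: float = 1.5,
                 speed_threshold_kmh: float = 15.0) -> bool:
    """Pre-filter: only plausible conflicts reach the analyst's queue."""
    return (c.min_gap_m <= gap_threshold_m
            and c.closing_speed_kmh >= speed_threshold_kmh
            and c.involves_pedestrian)

def review_queue(candidates: List[Candidate]) -> List[Candidate]:
    return [c for c in candidates if needs_review(c)]

def record_label(c: Candidate, label: str) -> Candidate:
    """Analyst decisions become labelled examples for refining the detector."""
    c.reviewer_label = label
    return c

if __name__ == "__main__":
    raw = [
        Candidate("clip_001", 0.8, 28.0, True),   # plausible near miss
        Candidate("clip_002", 0.3, 2.0, False),   # e.g. a bike carried on a car rack
        Candidate("clip_003", 4.0, 40.0, True),   # plenty of clearance
    ]
    queue = review_queue(raw)
    labelled = [record_label(c, "near_miss") for c in queue]
    print(f"{len(queue)} of {len(raw)} candidates sent for human review")
```

In practice the value of a sketch like this is less the thresholds themselves than the loop: every clip an analyst labels tells the developers where the model’s definitions and theirs diverge.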
These insights need to be understood, used and maintained by local authorities that are often overstretched. “The analytics skills you need to get value from the data are above the level where we see a lot of people in the broader population,” Mark said. “We need to build better tools to help people get the most from it.” Adoption depends on creating usable interfaces that users trust.
Lessons learned and takeaways for future projects
The team was asked what they would have done differently, knowing what they know now. “We cast it too wide at the start,” said Callum from Transport for West Midlands. “Forty sensors pulling hundreds of records each month – we just didn’t have the resource to go through and validate them.” A phased rollout with clearer validation tools would have been more manageable.
Anushka from VivaCity noted that the project also surfaced behavioural surprises. “People don’t always realise how close they came to a serious collision,” she said. “They don’t perceive the risk.” Those insights could inform how road layouts and public awareness campaigns are designed, as behavioural data can be as important as geometric or traffic data in safety planning.
How to scale AI adoption responsibly
Scaling AI-enabled infrastructure is less about blanket coverage of every street with sensors and more about intelligent targeting. This means using fewer, smarter sensors that are affordable enough to capture risk where it matters most. As VivaCity’s Mark Nicholson explained, scaling responsibly means accepting trade-offs: it’s often better to have a system that’s 70-80% right in twice as many places than perfectly precise in just a few.
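The arithmetic behind that trade-off is worth making concrete. The figures below are invented for illustration and are not from the pilot; they simply show how a cheaper, “good enough” system deployed at twice as many sites can surface more genuine near misses overall than a near-perfect system at a few.

```python
# Back-of-the-envelope illustration of the coverage vs. precision trade-off.
# All numbers are made up for illustration; they are not pilot data.

def near_misses_surfaced(sites: int, events_per_site: int, detection_rate: float) -> float:
    """Expected number of genuine near misses detected across a deployment."""
    return sites * events_per_site * detection_rate

# Same notional budget, two strategies.
precise_few = near_misses_surfaced(sites=20, events_per_site=10, detection_rate=0.98)
broad_many  = near_misses_surfaced(sites=40, events_per_site=10, detection_rate=0.75)

print(f"20 highly tuned sites:  ~{precise_few:.0f} near misses surfaced per month")
print(f"40 'good enough' sites: ~{broad_many:.0f} near misses surfaced per month")
```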
If scaled well, near-miss sensing could feed into digital twins, transport planning models and national road safety frameworks. But its success will hinge on the engineers, analysts and policymakers who can interpret the data and act on it. “The systems that learn best,” Mark said, “are the ones that make their learning visible.”
What this tells us about Practical AI
Stepping back, the conversation surfaced several principles that Practical AI will continue to explore.
First, AI adoption is a systems challenge. The pilot worked well because engineers, analysts and local authorities collaborated across technical, physical and organisational boundaries.
Second, iteration is how the system becomes safe and useful. Early false positives, misclassifications and overwhelming volumes of footage were how Transport for West Midlands and VivaCity learned where the models struggled, refined parameters and aligned human judgement with machine outputs. The value came from those feedback loops.
Third, usability matters as much as capability. The analytics skills required to extract insight from these systems are not evenly distributed. Scaling AI in public infrastructure will require usable interfaces and investment in analytical capacity across local government.
Finally, responsible adoption requires transparency. Privacy-by-design, on-device processing and clearly labelled sensors helped maintain public trust. AI in public space relies as much on legitimacy as technical accuracy.
Looking ahead
The next events in the series will continue drawing out these practical lessons, helping engineers, policymakers and adopters navigate the reality of deploying AI responsibly across the UK’s everyday infrastructure.
Join us for the next discussion, exploring how Tideway is using embodied AI through intelligent robots to upgrade sewer maintenance for the huge Thames Tideway Tunnel.