The Impact & Invention Conference 2026

Register Here!

We are delighted to announce that the 3rd Impact & Invention Conference will be held on 26 March 2026 at the Ogden Centre for Fundamental Physics (West), Durham University. This year’s theme is Artificial Intelligence.

For information and videos of last year’s conference, visit here.

Provisional Schedule:

13:00 – 13:20       Reception

13:20 – 13:30       Introductory Address

13:30 – 14:10       Keynote Address

14:10 – 16:10       Presentations

16:10 – 16:30       Tea & Coffee Break

16:30 – 17:30       Panel Discussion

17:30 – 17:45       Closing Remarks

17:45 – 19:00       Poster Session with Drinks & Nibbles

Venue Location

Speakers & Presenters

Biographies

Keynote: Mr Matt Frank, Former Chief Innovation Officer, Ancoris

Matt is an expert in technology, data and AI platforms in public and private sector organisations, with 19 years of UK technology and telecoms experience. He was Chief AI & Innovation Officer and Board Member at Telana, a Google and Microsoft partner, and has also held senior technology leadership roles at the Bank of England, KPMG and Sky. He has extensive experience helping organisations adopt, scale, manage and secure AI and cloud technology to drive innovation and efficiency. Matt has spoken in numerous public forums, including the London AI Summit, events hosted by the Financial Times and Google, and many other industry and public sector groups and events. He holds an MA in Modern History from Oxford University and an MSc in Computation from UMIST (now part of the University of Manchester).

Presentation: Professor Spyros Galanis

Spyros Galanis is a Professor of Economics at Durham University Business School and Director of Research in the Department of Economics. His research interests include decision theory, game theory, experiments, and finance. His main research focuses on the role that uncertainty, information and bounded perception play in single- and multi-agent decision making. Previously, he was Associate Professor (Reader) and Head of the Department of Economics at City, University of London. Between 2007 and 2018, he was first a Lecturer and then an Associate Professor in the Department of Economics at the University of Southampton. He received his PhD from the University of Rochester, his MSc from the University of Warwick and his BSc from the Athens University of Economics and Business, all in Economics.

Abstract: Information Aggregation with AI Agents

Can Large Language Models (AI agents) aggregate dispersed private information through trading and reason about the knowledge of others by observing price movements? We conduct a controlled experiment in which AI agents trade in a prediction market after receiving private signals. We find that although the median market is effective overall at aggregating information, increasing the complexity of the information structure has a significant negative impact, suggesting that AI agents suffer from the same limitations as humans when reasoning about others. Consistent with our theoretical predictions, information aggregation remains unaffected by allowing cheap talk communication, changing the duration of the market or the initial price, and strategic prompting, demonstrating that prediction markets are robust and effective. We establish that “smarter” AI agents perform better at aggregation, but this advantage is limited to preventing catastrophic mispricing of the security rather than improving the performance of the median market. Finally, we find that cheap talk reduces trading volume, whereas being smarter than one’s competitors increases profits.

Presentation: Dr Lindley Lentati

Lindley Lentati is co-founder and Director of Cambridge Inference Limited, a UK technology company building AI control and monitoring infrastructure for regulated industries. He is a Fellow of DIRDI, where he contributes to sovereign AI policy for the UK, advocating for domestic AI model-building capabilities to preserve strategic autonomy. His work focuses on the gap between theoretical AI safety research and production deployment, particularly the challenge of maintaining meaningful human oversight as AI systems become more capable and autonomous. Lindley holds a PhD in Astrophysics from the University of Cambridge, where he researched gravitational waves and precision pulsar timing, and spent six years developing automated trading strategies in quantitative finance.

Abstract: Sovereign AI and the UK’s Strategic Future

The UK is widely recognised as the world’s third AI power, with world-class safety research and demonstrated convening power. But safety expertise without sovereign capability leaves the UK dependent on systems built elsewhere, with values and priorities set by others. This talk argues that the UK must treat sovereign AI as a national security requirement, drawing on the historical parallel of the McMahon Act, which saw the US unilaterally cut Britain off from nuclear technology it had helped develop, and only restore cooperation once Britain demonstrated independent capability. It examines why the dynamics of advanced AI may be more challenging than nuclear, what France, Germany, and the EU are doing to build sovereign AI programmes, and what a phased UK sovereign AI research programme would look like. It concludes with a proposal for a coalition of democratic middle powers, each contributing distinct strengths, to build frontier AI capability under sovereign control.

Presentation: Professor Chris Barnes

Chris Barnes is Head of Science for AI at the National Physical Laboratory (NPL), where he leads strategic research into trustworthy AI, AI for science and AI assurance. His work supports the development of reliable, transparent, and fit-for-purpose AI systems, particularly in critical sectors such as healthcare and manufacturing. He also holds a professorship in Systems and Synthetic Biology at University College London (UCL), where his academic research focuses on AI, computational modelling, engineering biology, and biomedical applications.

Abstract: An Overview of AI at NPL

The National Physical Laboratory (NPL) is the UK’s national metrology institute. It develops and maintains the country’s primary measurement standards and provides the measurement science, calibration, testing, and scientific expertise that underpin industry, innovation, and public services—helping ensure measurements are accurate, comparable, and trustworthy. In this talk, Chris Barnes (Head of Science for AI, NPL) will give an overview of NPL’s AI activity and how it supports the UK’s ambitions for trustworthy and deployable AI across industry.

The talk is structured into three parts covering the main aspects of AI@NPL. First, Trustworthy AI: how measurement thinking translates into AI practice, covering concepts such as uncertainty, robustness, interpretability, and data quality, and how these properties can be characterised, tested, and compared in a repeatable way. Second, AI for Metrology: examples of how modern machine learning can be used to enhance the scientific process. Third, AI assurance: the emerging methods, standards, and evaluation approaches needed to build justified confidence in AI systems for critical applications.

Panel Speaker: Dr Stefan Sarkadi

Dr Stefan Sarkadi is an Associate Professor in AI for Defence and Security and a Royal Academy of Engineering UK Intelligence Community Research Fellow at the University of Lincoln. At Lincoln, he leads research at the newly established Centre for AI in Defence and Security. Before that, he was a Proleptic Lecturer (Assistant Professor) in the Department of Informatics at King’s College London. His background is truly multidisciplinary, built on a PhD in Computer Science, a Master’s in Cognitive Science, and a Bachelor’s in Philosophy. Stefan’s work focuses on Hybrid Intelligence, Human-Machine Teaming, Decision Making, and, most notably, Deceptive AI with Theory of Mind. His contributions to understanding AI Deception and Theory of Mind include the first PhD thesis to cover the topic from a computational perspective, a publication track record in top AI conferences and journals (JAIR, AAMAS, Royal Society, etc.), the founding of the Deceptive AI workshop series (IJCAI, ECAI), and the co-founding of the AAAI Workshop series on Advancing AI Through Theory of Mind. Stefan has also worked on applications of his research in the domain of Intelligence Analysis.

Panel Speaker: Ms Joanna Wake

Joanna Wake is an AI Strategy & Skills consultant and UK Brand Ambassador for She AI, a UN-certified AI education platform that democratises AI learning globally. A multi-award-winning digital skills leader with 15 years of experience across public and private sectors, Joanna is passionate about bringing AI skills to businesses of all sizes and sectors. As a two-time founder who built and sold RAW Digital Training, and who has recently influenced national policy on apprenticeships, Joanna has created innovative digital skills programmes for every sector of society.

Joanna now helps organisations keep pace with fast-moving changes in AI. She works with businesses to navigate AI adoption with practical, business-focused strategies that avoid technology-first pitfalls and focus on keeping pace to seize competitive advantage.

Panel Speaker: Dr David Watson

David Watson is a Lecturer in Artificial Intelligence at King’s College London’s Department of Informatics. His primary research interests include machine learning, philosophy of science, and computational biology. Previously, he was a Postdoctoral Research Fellow in the Department of Statistical Science at University College London, where he developed methods for causal discovery and inference. Before that, he earned his doctorate from the University of Oxford, studying algorithmic fairness and explainability as a member of the Digital Ethics Lab. He is an Associate Editor at Minds & Machines, a Visiting Research Fellow at Meta’s Central Applied Science unit, and an occasional contributor to The Economist.

Panel Speaker: Professor Maz Hardey

Mariann (Maz) Hardey is a Professor of Business and Computing at Durham University and a leading authority on digital culture and the “Digital Human.” Rather than focusing on standard “AI as a tool” narratives, Maz conducts a forensic audit of how technology reshapes systemic intelligence, neurodiversity, and the culture of work.
 
She is the creator of the Substack Permanent Thoughts, where she rethinks technology for different minds and challenges the industry to move beyond sterile productivity metrics. Her current work investigates the “Trillion-Dollar Wall,” arguing that AI’s true value lies not in deployment but in the fundamental evolution of human processes. A proponent of “Rebel Nodes” – the thinkers essential for spotting glitches in the machine – Maz advocates for “sweet spot” leadership to navigate the “complexity tax” of modern AI projects.
 
On the panel, Maz will provide a safety valve of critical thinking, moving the conversation from the sterile lab context to the reality of those actually navigating the digital frontier.