Trust Before Reliance: AI Influence in Decision-Making
Tuesday, July 28, 2026 | 2:15 PM - 2:35 PM | Southport Room 2
Overview
Sara Mills-Kerr, University of Newcastle
Details
Three Key Learnings
1. Why technical AI capability alone is insufficient for trust in high-consequence incidents: Accuracy and performance, even with explainability and transparency models, may still be inadequate under time pressure, uncertainty, and cognitive load.
2. How trust, vulnerability, and governance shape disaster decision-making: Incident leaders rely on systems only when decisions are defensible within established roles, rules, and multi-agency governance.
3. What “operationally admissible” AI could look like in practice: Using trust calibration to examine tolerance for AI influence, with implications for how decision support may be integrated into agencies and governance frameworks.
Speaker
Mrs Sara Mills-Kerr
Firefighter / PhD Candidate
University of Newcastle
Trust Before Reliance: AI Influence in Decision-Making
Abstract
Artificial intelligence is rapidly entering Disaster and Emergency Management, promising faster analysis, improved predictions, and enhanced situational awareness for decision-making. Yet in high-consequence environments, technical performance alone is insufficient. AI systems can hallucinate, be confidently wrong, and reflect hidden bias.
At the same time, decision-makers already operate under significant information and cognitive load, constantly prioritising between competing demands. New technology must reduce this burden rather than add to it. It must meet agencies where they are in terms of organisational maturity and readiness.
High-stakes decisions ultimately depend on trust: trust in information, trust between agencies, and trust that actions taken under uncertainty are made by capable and reliable actors who can justify and defend them. Trust is a human judgement that requires decision-makers to accept vulnerability when acting under uncertainty. This may help explain why many organisations remain cautious about relying on AI outputs, even as technical capabilities continue to advance.
This presentation introduces early-stage PhD research that asks an unresolved but straightforward question:
When should decision-makers trust and rely on AI influence in real-world incidents?
Rather than assuming adoption, the research treats AI as a potential new actor within an already trust-dependent, multi-agency system. It focuses on when reliance becomes operationally and institutionally defensible through the concept of calibrated trust.
The session outlines this emerging research direction and invites incident controllers and senior leaders to inform the next empirical phase, helping define governance conditions for the safe, trusted, and practical use of AI in Disaster and Emergency Management decision-making.
Biography
Sara Mills-Kerr is a firefighter and early-stage PhD researcher at the University of Newcastle examining how trust shapes the use and integration of Artificial Intelligence in Disaster and Emergency Management agencies. With a background in business, safety, and high-risk industry, her research focuses on defining when AI-enabled decision support is trusted and operationally defensible for decision-makers in real incident contexts. Sara aims to bridge frontline practice and research to develop trust-informed, governance-aligned approaches for when and how AI is used in Disaster and Emergency Management.