Research project
A New Approach to Measuring Police Involvement with Vulnerable People
- Start date: 1 May 2024
- End date: 30 April 2027
- Funder: Economic and Social Research Council (ESRC)
- Principal investigator: Professor Daniel Birks
- Co-investigator: Sam Relins, University of Leeds
- External co-investigator: Professor Charlie Lloyd, University of York (Joint PI)
This project explores how new, generative, large language models like those used in ChatGPT and other AI chatbots can be used to automatically extract insights from free-text notes in police incident data. It aims to assess the ability of large language models to label police incident narratives accurately and reliably for situations involving vulnerable people, such as people with mental health problems, people who are homeless, and people with substance abuse problems.
Police frequently encounter vulnerable individuals throughout the course of their everyday duties. Despite growing recognition of this aspect of policing, measuring the extent of these interactions remains challenging due to limitations in data collection methods.
Current estimates typically rely on either categorical “flags” in call and dispatch systems, or require manually analysing or observing small samples of incidents. The limitations of these and other approaches are reflected in significant variation in published estimates of police involvement with vulnerability. To illustrate, the UK’s 2022 Policing Productivity Review suggested that between 5% and 9% of incidents involved mental ill health, while evidence submitted to a UK Parliamentary Inquiry estimated that 20% of police time was spent on mental health-related calls. Similarly, a systematic review of North American studies found estimates ranging from 1% to 9% depending on the measurement method used. Understanding these patterns is key to informing evidence-based problem and demand analyses, training, and inter-agency coordination.
A potentially valuable but underutilised source of information exists in the narrative text that police officers or call handlers write when documenting incidents. These written accounts typically contain rich details about circumstances and behaviours that often aren’t captured in standardised data fields, and could provide deeper insights into police interactions with vulnerable populations. However, analysing these narratives has traditionally required labour-intensive manual review that becomes impractical when dealing with thousands of reports.
Recent advances in artificial intelligence, specifically instruction-tuned large language models (IT-LLMs), offer new possibilities for automating this analysis. These models can understand and follow complex instructions to analyse text, potentially enabling systematic, large-scale qualitative analysis of free-text data without specialised training. Our research explores a generalisable methodology using IT-LLMs for qualitative analysis of police incident narratives, using four specific vulnerabilities (mental ill health, substance abuse, homelessness, and alcohol dependence) as example categories to evaluate the approach’s effectiveness.
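As a rough illustration of the idea described above (this is a hypothetical sketch, not the project's actual code or prompts), an IT-LLM can be given a plain-language instruction listing the four vulnerability categories, asked to answer yes/no for each, and its free-text reply parsed into structured labels. The `build_prompt` and `parse_reply` helpers below are invented for illustration; a real pipeline would send the prompt to a model API and validate the response.

```python
# Hypothetical sketch of instruction-based labelling of an incident narrative.
# The prompt wording and reply format are assumptions, not the study's materials.

CATEGORIES = [
    "mental ill health",
    "substance abuse",
    "homelessness",
    "alcohol dependence",
]

def build_prompt(narrative: str) -> str:
    """Combine labelling instructions with one incident narrative."""
    category_list = ", ".join(CATEGORIES)
    return (
        "You are labelling police incident narratives for indicators of "
        f"vulnerability. For each category ({category_list}), answer on its "
        "own line in the format 'category: yes' or 'category: no'.\n\n"
        f"Narrative:\n{narrative}"
    )

def parse_reply(reply: str) -> dict[str, bool]:
    """Parse a model reply of the form 'category: yes/no' into boolean labels."""
    labels = {}
    for line in reply.strip().splitlines():
        name, _, verdict = line.partition(":")
        name = name.strip().lower()
        if name in CATEGORIES:
            labels[name] = verdict.strip().lower() == "yes"
    return labels

# In practice, build_prompt(...) would be sent to an IT-LLM; here we parse a
# mock reply to show the structured output.
mock_reply = (
    "mental ill health: yes\n"
    "substance abuse: no\n"
    "homelessness: no\n"
    "alcohol dependence: no"
)
print(parse_reply(mock_reply))
```

The point of the parsing step is that a constrained reply format lets free-text model output be aggregated across thousands of incidents, which is what makes large-scale measurement feasible.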
Publications and outputs
Report
Read the research findings summary “Developing AI tools to help quantify police involvement with vulnerable people”.
Journal article
Read the journal article “Using Instruction-Tuned Large Language Models to identify indicators of vulnerability in police incident narratives”, published in the Journal of Quantitative Criminology in June 2025.