
Accelerating Public Consultations with Large Language Models: A Case Study from the UK Planning Inspectorate


PyCon DE & PyData Berlin 2023

Local Planning Authorities (LPAs) in the UK rely on written representations from the community to inform their Local Plans, which outline the development needs of their areas. With an average of 2000 representations per consultation and 4 rounds of consultation per Local Plan, the volume of information can be overwhelming for both LPAs and the Planning Inspectorate tasked with examining the legality and soundness of plans. In this study, we investigate the potential for Large Language Models (LLMs) to streamline representation analysis. We find that LLMs have the potential to significantly reduce the time and effort required to analyse representations, with simulations on historical Local Plans projecting a reduction in processing time of over 30%, and experiments showing classification accuracy of up to 90%. In this presentation, we discuss our experimental process, which used a distributed experimentation environment with Jupyter Lab and cloud resources to evaluate the performance of the BERT, RoBERTa, DistilBERT, and XLNet models. We also discuss the design and prototyping of web applications to support the aided processing of representations using Voilà, FastAPI, and React. Finally, we highlight successes and challenges encountered and suggest areas for future improvement.

In the United Kingdom, Local Planning Authorities (LPAs) are responsible for creating Local Plans that outline the development needs of their areas, including land allocation, infrastructure requirements, housing needs, and environmental protection measures. This process involves consulting the local community and interested parties multiple times, which often results in hundreds or thousands of written representations that must be organised and analysed. On average, LPAs receive approx. 2000 written representations per consultation, and each Local Plan requires 4 rounds of consultation. Analysing these representations takes approx. 3.5 months per round of consultation.

The Planning Inspectorate is tasked with examining Local Plans to ensure they follow national policy and legislation. The Inspectorate examines approx. 60 Local Plans a year, with each examination lasting around a year. The volume of information included in each Local Plan significantly outweighs the capacity of the Planning Inspectorate to read and analyse the content in detail. This can lead to important issues being overlooked, with potential problems for the review process or legal challenges. Conducting a thorough and meticulous analysis of representations takes considerable time and effort for both LPAs and the Planning Inspectorate.

Together with the Planning Inspectorate, we conducted an AI discovery to explore how Large Language Models (LLMs) can help reduce the time taken to analyse representations, improve resource planning, increase consistency in decision-making, and mitigate the risk of a key issue of material concern being missed. We assessed the performance of competing models and demonstrated their effectiveness with proof-of-concept apps for both LPAs and the Planning Inspectorate that unify and streamline the aided processing of representations.
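To make the classification task concrete: the goal is to route each written representation to the Local Plan policy it concerns. The study evaluated fine-tuned transformer models (BERT, RoBERTa, DistilBERT, XLNet) for this; the following stdlib-only sketch is a deliberately much simpler baseline, matching a representation to the most similar policy text by bag-of-words cosine similarity. The policy texts and representation below are hypothetical examples, not data from the study.

```python
# Hypothetical baseline for the task described above: match a written
# representation to the most similar Local Plan policy. This is NOT the
# transformer-based approach from the study, only an illustration of the
# shape of the classification problem.
import math
import re
from collections import Counter

def tokenise(text: str) -> Counter:
    """Lowercase word counts for a piece of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def classify(representation: str, policies: dict[str, str]) -> str:
    """Return the id of the policy whose text is most similar."""
    bag = tokenise(representation)
    return max(policies, key=lambda pid: cosine(bag, tokenise(policies[pid])))

# Hypothetical policies and representation:
policies = {
    "H1": "Housing allocation and affordable housing delivery targets.",
    "E2": "Protection of green belt land and environmental designations.",
}
print(classify("We object to building houses on the green belt.", policies))  # → E2
```

A bag-of-words baseline like this misses paraphrases ("houses" vs "housing" above do not match), which is precisely where fine-tuned transformer models earn their keep.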
Our simulations on historical Local Plans project a reduction of more than 30% in the time taken to analyse representations, and our experiments show that representations can be classified to the relevant policy in a Local Plan with up to 90% accuracy. In this talk, we share our Python-based experimental process and results. We delve into how we approached the problem, sourced and cleaned the data, and used a distributed experimentation environment with Jupyter Lab and cloud resources to evaluate the performance of BERT, RoBERTa, DistilBERT, and XLNet models. We also discuss our strategies for dealing with limited training data. Finally, we present the design and prototyping of two web applications using Voilà, and demonstrate how we iterated on them using FastAPI and React. Throughout the presentation, we highlight the successes and challenges we encountered and suggest areas for future improvement.

Speakers: Michele Dallachiesa, Andreas Leed