From Elicitation Interviews to User Stories: A Prompt Chaining LLM Approach for Automated Candidate Requirements Extraction and Formulation
Document Type: Master Thesis
License: CC-BY-NC-ND
Abstract
Context and motivation: Interviews are the most widely adopted technique for requirements elicitation, as they offer rich and contextualized information about stakeholder needs. Recently, there has been growing interest in leveraging automation to support the analysis of such interview data. Advances in automatic speech-to-text (STT) and Large Language Models (LLMs) present promising new opportunities in this space. However, their application in the field of Requirements Engineering is still in its early stages and remains underexplored.
Problem statement: Elicitation interviews result in lengthy, unstructured transcripts whose manual analysis is time-consuming and susceptible to omissions. This research explores how an LLM-based pipeline can support the reliable extraction of requirement-relevant information and its formulation into User Stories, while maintaining traceability to the source transcript and offering interpretability of the automated process.
Principal ideas/results: A multi-stage prompt chaining approach is proposed that performs: (1) topic-based segmentation of the interview transcript, (2) extraction of requirement-relevant information, (3) construction of traceability objects linking extracted candidates to the source transcript to support interpretability, and (4) detection of user-story issues based on the Quality of User Stories (QUS) framework. A functional prototype integrating all four stages into a single pipeline was developed. The quality of the system is assessed through a mix of intermediate human-rated evaluations and a final evaluation via a repeated focus group. The results indicate that the approach can successfully segment interviews, extract requirement-relevant content in the form of user stories, and preserve traceability links.
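The four stages described above form a sequential prompt chain, each stage consuming the previous stage's output. The following minimal Python sketch illustrates that data flow; the function `call_llm`, the prompt wording, and the result structure are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of a four-stage prompt chain for requirements
# extraction from an interview transcript. `call_llm` is a stand-in
# for any real LLM API call; here it simply echoes the stage's task
# line so the chaining of outputs can be demonstrated end to end.

def call_llm(prompt: str) -> str:
    # Placeholder: a real pipeline would send `prompt` to an LLM here.
    return f"[output of: {prompt.splitlines()[0]}]"

def run_pipeline(transcript: str) -> dict:
    """Chain four prompts; each stage's output feeds the next prompt."""
    segments = call_llm(f"Segment transcript by topic:\n{transcript}")
    candidates = call_llm(f"Extract requirement-relevant info:\n{segments}")
    traces = call_llm(f"Trace candidates to source transcript:\n{candidates}")
    issues = call_llm(f"Detect QUS quality issues in user stories:\n{traces}")
    return {
        "segments": segments,
        "candidates": candidates,
        "traces": traces,
        "issues": issues,
    }

result = run_pipeline("Interviewer: What should the system do? ...")
```

A key design property of such a chain is that intermediate outputs (segments, candidates, traces) remain inspectable, which is what enables the traceability and interpretability the thesis emphasizes.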
Contribution: This thesis provides two main contributions: (1) the design and implementation of a multi-step prompt-chaining prototype for candidate requirements extraction, and (2) an empirical evaluation of this system with experienced professionals and researchers through a repeated focus group.
Keywords
Requirements Engineering, Elicitation Interviews, Large Language Models, Prompt Chaining, Automated Requirements Extraction, User Stories, Traceability, Interpretability, Interview Transcript Analysis