Can LLMs Demystify Bug Reports?

Authors

Laura Plein, Tegawendé F. Bissyandé

Abstract

Bugs are notoriously challenging: they slow down software users and result in time-consuming investigations for developers. These challenges are exacerbated when bugs must be reported in natural language by users. Indeed, we lack reliable tools to automatically handle reported bugs (i.e., to enable their analysis, reproduction, and fixing). Given the recent promise shown by LLMs such as ChatGPT on various tasks, including in software engineering, we ask ourselves: What if ChatGPT could understand bug reports and reproduce them? This question is the main focus of this study. To evaluate whether ChatGPT is capable of capturing the semantics of bug reports, we used the popular Defects4J benchmark with its bug reports. Our study shows that ChatGPT was able to demystify and reproduce 50% of the reported bugs. That ChatGPT can automatically address half of the reported bugs is a promising step toward applying machine learning to bug resolution, with a human in the loop only to report the bug.
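
The abstract outlines the evaluation idea at a high level: feed each Defects4J bug report to ChatGPT and check whether the model can produce a test that reproduces the reported failure. The Python sketch below illustrates one way such a query could look. It is not the authors' pipeline; the model name, prompt wording, file path, and the read_bug_report helper are illustrative assumptions.

# Minimal sketch (assumptions noted above): send a Defects4J bug report to an
# LLM and ask for a JUnit test that reproduces the reported failure.
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT_TEMPLATE = """You are given a bug report for a Java project.
Write a single JUnit test method that reproduces the described bug
(the test should fail on the buggy version of the code).

Bug report:
{report}
"""

def read_bug_report(path: str) -> str:
    # Hypothetical helper: the report is assumed to be stored as plain text.
    return Path(path).read_text(encoding="utf-8")

def ask_for_reproducing_test(report: str, model: str = "gpt-4o-mini") -> str:
    # Query the LLM for a candidate bug-reproducing JUnit test.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(report=report)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    report = read_bug_report("reports/Lang-1.txt")  # hypothetical report path
    candidate_test = ask_for_reproducing_test(report)
    Path("GeneratedReproductionTest.java").write_text(candidate_test, encoding="utf-8")
    # Next step (not shown): compile and run the generated test against the
    # buggy Defects4J checkout and check that it fails as the report describes.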
