Utilizing Context to Improve LLM-assisted Formal Verification

Announcement
Type: Master's thesis
Flyer: LLMFormVer Context.pdf
Supervisors: Please contact the following if you are interested or have questions:

Tobias Hey (e-mail: hey@kit.edu, phone: +49-721-608-44765), Tianhai Liu

Motivation

Formal verification ensures the correctness of software systems but remains a highly specialized and time-consuming process. Recent advances in Large Language Models (LLMs) show promise in automating parts of this workflow; however, their effectiveness is limited by a lack of contextual understanding of the system, its specifications, and its testing environment. Incorporating contextual information from other existing artifacts, such as test cases, requirements, or models, can help LLMs generate better specifications. By utilizing such contextual cues, this research aims to improve the accuracy of LLM-assisted formal verification.

Task Description

The goal of this thesis is to investigate methods for utilizing contextual information from software artifacts to enhance LLM-assisted formal verification. The task involves designing and evaluating approaches that enable LLMs to incorporate such contextual cues when generating formal specifications. This includes identifying relevant contextual sources and suitable datasets, developing an approach that integrates this information into LLM-assisted formal specification generation, and assessing the resulting impact on the accuracy and usefulness of the generated verification artifacts.
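
To illustrate one possible direction (a hypothetical sketch, not part of the thesis description), the following Python fragment shows how contextual artifacts such as test cases or requirements could be bundled into a prompt for specification generation. The names ContextArtifact, build_prompt, generate_specification, and the query_llm callback are illustrative assumptions, and JML is used only as an example target specification language.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ContextArtifact:
    # A piece of project context, e.g. a test case, a requirement,
    # or an excerpt from a design model (illustrative structure).
    kind: str
    name: str
    content: str


def build_prompt(method_source: str, artifacts: List[ContextArtifact]) -> str:
    # Combine the method under verification with related artifacts into a
    # single prompt that asks for a JML specification (example target).
    context_block = "\n\n".join(
        f"### {a.kind}: {a.name}\n{a.content}" for a in artifacts
    )
    return (
        "Generate a JML specification (requires/ensures clauses) for the "
        "following Java method. Use the additional project context to infer "
        "the intended behaviour.\n\n"
        f"## Method under verification\n{method_source}\n\n"
        f"## Project context\n{context_block}\n"
    )


def generate_specification(method_source: str,
                           artifacts: List[ContextArtifact],
                           query_llm: Callable[[str], str]) -> str:
    # query_llm stands in for any chat-completion call; the returned
    # specification would then be checked by a deductive verifier.
    return query_llm(build_prompt(method_source, artifacts))

An evaluation could then compare specifications generated with and without the context block, for example by checking how many of them a verifier accepts.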