Identifying Security Requirements in Natural Language Documents

From SDQ-Institutsseminar
Speaker: Elias Hofele
Type of talk: Master's thesis
Advisor: Sophie Corallo
Date: Fri, 19 January 2024
Language of talk:
Mode of talk: in person
Abstract: Automatically identifying requirements and classifying them according to their security objectives can help derive insights into the security of a given system. However, this task requires significant security expertise. This thesis investigates the capability of modern Large Language Models (such as GPT) to replicate this expertise, which requires transferring the model's general understanding of language to the specific task at hand. In particular, different prompt engineering approaches are combined and compared in order to gain insights into their effects on performance. GPT ultimately performs poorly on the main tasks of identifying requirements and classifying them according to security objectives. Conversely, the model performs well on the sub-task of classifying the security relevance of requirements. Interestingly, prompt components that influence the format of the model's output seem to have a higher performance impact than components containing contextual information.
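
To illustrate the kind of prompt-based classification the abstract describes, the following Python sketch queries an OpenAI chat model about the security relevance of a single requirement. It is a hypothetical illustration, not the thesis's actual setup: the prompt wording, the model name, the label set, and the helper function are all assumptions. The prompt deliberately separates a format-constraining component from a contextual one, mirroring the two kinds of prompt components whose effects the abstract compares.

 # Hypothetical sketch: classifying the security relevance of one
 # requirement via the OpenAI chat API. Prompt wording, model name,
 # and labels are illustrative assumptions, not the thesis's prompts.
 from openai import OpenAI
 
 client = OpenAI()  # expects OPENAI_API_KEY in the environment
 
 # Two prompt components, mirroring the kinds compared in the thesis:
 # one constrains the output format, one adds contextual information.
 FORMAT_COMPONENT = (
     "Answer with exactly one word: 'security-relevant' or 'not-relevant'."
 )
 CONTEXT_COMPONENT = (
     "You are a security analyst reviewing requirements taken from a "
     "natural language specification document."
 )
 
 def classify_security_relevance(requirement: str) -> str:
     """Ask the model whether a requirement is security-relevant."""
     response = client.chat.completions.create(
         model="gpt-4",   # assumed; the thesis evaluates GPT models
         temperature=0,   # deterministic output for classification
         messages=[
             {"role": "system",
              "content": f"{CONTEXT_COMPONENT} {FORMAT_COMPONENT}"},
             {"role": "user",
              "content": f"Requirement: {requirement}"},
         ],
     )
     return response.choices[0].message.content.strip().lower()
 
 print(classify_security_relevance(
     "The system shall encrypt all user data at rest using AES-256."
 ))

Pinning the output to a fixed label vocabulary makes the model's answers machine-checkable, which is one plausible reason why format-related prompt components had a larger performance impact in this setting than contextual ones.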