Institutsseminar/2025-10-31

Session
Date: Friday, 31 October 2025
Time: 11:30 – 12:45 (duration: 75 min)
Location: Room 010 (Building 50.34)
Examiner: Anne Koziolek
Web conference: —
Previous session: Wed, 29 October 2025
Next session: Fri, 7 November 2025


Talks

Integrating Maturity into Consistency Preservation
Speaker: Juxhin Abazi
Type of talk: Master's thesis
Advisor: Thomas Weber
Language: English
Mode: in person
Abstract: In multi-model environments, where system functionality is spread across multiple interrelated models, consistency among these models is ensured through consistency rules. Currently, all changes to these models are treated as equal and are always propagated to the related models. This thesis introduces a mechanism to choose whether changes should propagate, by adding a maturity-based gating mechanism to the Vitruvius framework. The maturity model defines three ordered levels (Draft, Reviewed, and Final), allowing more granular control over consistency propagation while retaining backward compatibility with existing Vitruvius applications. The approach is empirically tested, covering all maturity combinations, fallback rules, and gating outcomes. Additionally, the thesis surveys current research on maturity, including maturity models from multiple industries and analogies to maturity gating in other domains.
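The abstract describes gating consistency propagation on three ordered maturity levels. As a minimal illustration only (the level names Draft, Reviewed, and Final come from the abstract; the `Maturity` enum, the `should_propagate` function, and the default threshold are assumptions for this sketch, not the actual Vitruvius API), such a gate might look like:

```python
from enum import IntEnum


class Maturity(IntEnum):
    """Ordered maturity levels, as named in the abstract."""
    DRAFT = 0
    REVIEWED = 1
    FINAL = 2


def should_propagate(change_maturity: Maturity,
                     threshold: Maturity = Maturity.REVIEWED) -> bool:
    """Gate a model change: propagate it to related models
    only once it has reached the required maturity threshold."""
    return change_maturity >= threshold


# A change still in Draft is held back; Reviewed and Final propagate.
assert not should_propagate(Maturity.DRAFT)
assert should_propagate(Maturity.REVIEWED)
assert should_propagate(Maturity.FINAL)
```

Using an ordered enum makes the "three ordered levels" explicit: comparing levels with `>=` directly encodes the gating decision.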
LLM-based Consistency Preservation in Model-based Low-Code Platforms
Speaker: Til Körnig
Type of talk: Bachelor's thesis
Advisor: Nathan Hagel
Language: English
Mode: in person
Abstract: This bachelor's thesis explores the potential of Large Language Models (LLMs) to assist with consistency management in model-based low-code platforms. In these environments, ensuring that domain models and user-facing forms remain consistent is an important but often demanding task. The work presents a prototype of an LLM-based assistant integrated into a low-code platform, designed to help users identify and address inconsistencies between models and forms. To examine its usefulness, a user study was conducted comparing modeling tasks with and without AI support, focusing on effectiveness, efficiency, and user satisfaction. The results suggest that the assistant can provide helpful guidance and may support users in improving task outcomes. The defense will outline the motivation, implementation, and evaluation of the approach, and reflect on the opportunities and limitations of integrating language models into low-code development environments.