Abstract
|
This bachelor’s thesis explores the potential of Large Language Models (LLMs) to assist with consistency management in model-based low-code platforms. In these environments, keeping domain models and user-facing forms consistent is an important but often demanding task. The work presents a prototype of an LLM-based assistant integrated into a low-code platform, designed to help users identify and resolve inconsistencies between models and forms. To examine its usefulness, a user study was conducted that compared participants’ performance on modeling tasks with and without AI support, focusing on effectiveness, efficiency, and user satisfaction. The results suggest that the assistant can provide helpful guidance and may support users in achieving better task outcomes. The defense will outline the motivation, implementation, and evaluation of the approach, and reflect on the opportunities and limitations of integrating language models into low-code development environments.