A Contrastive Learning Framework for Semantic Consistency Verification of SysML and Simulink Models

From SDQ-Wiki
Type: Bachelor's or Master's thesis
Supervisor: If you are interested or have questions, please contact:

Rahul Sharma (E-Mail: rahul.sharma@kit.edu)

Motivation

Ensuring consistency between different modeling views (e.g., architecture in SysML, behavior in Simulink) is a critical challenge in Cyber-Physical Systems (CPS) design. Current manual and rule-based methods are brittle and miss subtle semantic mismatches. This thesis proposes to leverage Large Language Models (LLMs) to address this problem. We will design, implement, and evaluate "CPS-Align," a deep learning framework that uses contrastive learning to map heterogeneous CPS models into a shared semantic embedding space. In this space, the distance between model embeddings directly correlates with their semantic consistency, enabling robust, automated verification.
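To make the core idea concrete, the following minimal sketch (not the thesis implementation) shows how a consistency check could reduce to a distance computation once both models have been embedded. The embedding vectors and the threshold here are hypothetical placeholders for what the trained encoder would produce.

```python
import numpy as np

def consistency_score(e_sysml: np.ndarray, e_simulink: np.ndarray) -> float:
    """Cosine similarity between two model embeddings in the shared space.

    Values near 1.0 suggest semantically consistent models; lower values
    suggest an inconsistency. The embeddings themselves would come from
    the trained CPS-Align encoder (hypothetical here).
    """
    return float(
        np.dot(e_sysml, e_simulink)
        / (np.linalg.norm(e_sysml) * np.linalg.norm(e_simulink))
    )

def is_consistent(e_sysml: np.ndarray, e_simulink: np.ndarray,
                  threshold: float = 0.8) -> bool:
    # The threshold would be calibrated on a validation set in practice.
    return consistency_score(e_sysml, e_simulink) >= threshold
```

A pass/fail verdict then follows from a single thresholded similarity, which is what makes the approach cheap to run once the encoder is trained.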

Research Questions

How can structured CPS models (represented as serialized graphs) be effectively encoded by a transformer-based LLM to capture both their structural and parametric semantics?

Can a contrastive learning objective (e.g., Triplet Margin Loss) successfully train an LLM to "align" the embedding spaces of two different modeling languages like SysML and Simulink?

How does a fine-tuned, domain-specific model (CPS-Align) compare against general-purpose, zero-shot LLMs (e.g., GPT-4) and traditional rule-based methods in detecting semantic inconsistencies?

Beyond simple pass/fail classification, can the internal states of the model (e.g., attention maps) be used to help localize the specific components that are the source of an inconsistency?

Expected Contributions

The CPS-Align framework: A complete, open-source implementation of the Siamese network and training pipeline.

The final, fine-tuned model weights for semantic consistency checking.

A comprehensive evaluation report comparing CPS-Align against baselines.

A proof-of-concept for inconsistency localization based on attention map analysis.
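One hypothetical way the attention-based localization contribution could start: aggregate the cross-attention mass that the consistency head assigns to each component and rank components by it, treating high-attention components as inconsistency candidates. The attention matrix and component names below are illustrative placeholders, not outputs of a real model.

```python
import numpy as np

def rank_components(attention: np.ndarray, component_names: list[str]) -> list[str]:
    """Rank model components as inconsistency candidates.

    attention: (num_queries, num_components) cross-attention weights,
    e.g. from the final layer of the consistency-checking head
    (hypothetical in this sketch).
    """
    mass = attention.mean(axis=0)          # average attention per component
    order = np.argsort(mass)[::-1]         # highest attention first
    return [component_names[i] for i in order]

# Illustrative attention over three hypothetical Simulink blocks.
attn = np.array([[0.1, 0.8, 0.1],
                 [0.2, 0.7, 0.1]])
ranking = rank_components(attn, ["Sensor", "Controller", "Actuator"])
```

Whether attention mass actually correlates with the source of an inconsistency is exactly what the fourth research question would need to evaluate.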