This page shows the next five seminar dates. For further dates, click "further dates" below or see Alle Termine. New date pages can also be created at Alle Termine.
Calendar file (all upcoming dates): iCal (Download)
Location: Room 010 (Building 50.34)
Web conference: https://sdq.kastel.kit.edu/institutsseminar/Microsoft_Teams
A Structured Approach for Building Descriptive Models from Data
Speaker: Philipp Meyer
Talk type: Bachelor's thesis
Advisor: Raziyeh Dehghani
Language: English
Mode: in person
Abstract:
The development of Cyber-Physical Systems (CPS) is characterized by a high degree of complexity and requires continuous optimization throughout the entire development process.
The feedback cycles of the MODA framework are well suited to systematically controlling these adjustments, but their effective use requires that descriptive models can be derived from runtime data. Established approaches to model derivation, however, were primarily
designed for other domains and applications. Against this background, this work develops an automated pipeline to extract descriptive models from raw data and systematically evaluates the suitability of various modeling approaches for the domain of cyber-physical systems. A central element of the solution approach is the integration of the analysis results into a standardized metamodel based on the Structured Metrics Metamodel in order to give the raw data a semantic structure and ensure interoperability for downstream MDD tools. To objectively evaluate the results, a dedicated evaluation framework was developed
that compares the various approaches using quantitative metrics and qualitative expert feedback. The evaluation confirms that the automated derivation of statistical parameters, segmentations, and discrete system states delivers robust results. In contrast, limitations were identified in the generation of complex process models using process mining, as the conversion of continuous physical signals into discrete logic remains a challenge. Overall, the work demonstrates as a proof of concept how the gap between collected runtime data and formal models can be closed, thus providing a technological basis for MODA feedback cycles in CPS development.
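The derivation steps named in the abstract (statistical parameters, segmentation, discrete system states) can be sketched as follows. This is a minimal illustration, not the thesis pipeline; the function, threshold, and state names are invented for the example.

```python
# Hedged sketch: derive simple descriptive-model elements (statistics,
# segmentation, discrete states) from a raw runtime signal by thresholding.
from statistics import mean, stdev

def derive_descriptive_model(signal, threshold):
    """Summarize a numeric runtime signal and discretize it into states."""
    stats = {"mean": mean(signal), "stdev": stdev(signal),
             "min": min(signal), "max": max(signal)}
    # Discretize: each sample becomes a coarse system state.
    states = ["HIGH" if v >= threshold else "LOW" for v in signal]
    # Collapse consecutive duplicates into a state sequence (segmentation).
    segments = [states[0]] + [s for prev, s in zip(states, states[1:]) if s != prev]
    return stats, segments

stats, segments = derive_descriptive_model([1.0, 1.2, 5.1, 5.3, 0.9], threshold=3.0)
print(segments)  # ['LOW', 'HIGH', 'LOW']
```

A real pipeline would feed such derived elements into a Structured Metrics Metamodel instance rather than plain dictionaries.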
Traceability Link Recovery for Architecture Decision Records
Speaker: Ege Uzhan
Talk type: Master's thesis
Advisor: Jan Keim
Language: English
Mode: in person
Abstract:
Documentation plays a crucial role in software engineering by supporting development, maintenance, and evolution. A structured form of documenting design decisions is Architectural Decision Records (ADRs), which concisely capture architectural choices. However, like most documentation, ADRs are often disconnected from related project artifacts. Traceability Link Recovery (TLR) aims to reconstruct such links automatically, yet it has not previously been applied specifically to ADRs, nor have benchmarks existed for this purpose. This work addresses that gap by applying established TLR approaches to ADRs and introducing the first gold-standard dataset linking ADRs to source code and software architecture documentation at multiple levels of granularity. Results show that ADR traceability is feasible, with file-level recovery yielding more stable precision-recall trade-offs than sentence-level recovery. Effectiveness depends on artifact type, granularity, and candidate selection, highlighting challenges and opportunities for improving ADR traceability.
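The core idea of IR-based trace-link recovery between an ADR and candidate artifacts can be sketched with a term-overlap similarity. This is an illustrative toy (Jaccard over word sets), not the established TLR approaches the thesis applies; all names are invented.

```python
# Hedged sketch: recover candidate trace links from an ADR text to source
# files by Jaccard similarity over extracted terms.
import re

def terms(text):
    # Split words/identifiers and lowercase them (camelCase-naive).
    return set(re.findall(r"[A-Za-z]+", text.lower()))

def recover_links(adr_text, files, threshold=0.2):
    adr_terms = terms(adr_text)
    links = []
    for name, content in files.items():
        overlap = adr_terms & terms(content)
        score = len(overlap) / len(adr_terms | terms(content))  # Jaccard index
        if score >= threshold:
            links.append((name, round(score, 2)))
    return sorted(links, key=lambda x: -x[1])

adr = "Use a message queue for order processing"
files = {"OrderQueue.java": "class order queue processing message",
         "Login.java": "class login user password"}
print(recover_links(adr, files))  # [('OrderQueue.java', 0.5)]
```

File-level recovery, as evaluated in the thesis, corresponds to scoring whole files like this; sentence-level recovery would score individual sentences instead.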
iCal (Download)
Ort: Raum 010 (Gebäude 50.34)
A Reproducible Profiling Framework for an MQTT-to-Kafka Pipeline
Speaker: Jonas Bruer
Talk type: Bachelor's thesis
Advisor: Maximilian Hummel
Language: German
Mode: in person
Abstract:
Bridging MQTT-based IoT communication with Apache Kafka enables scalable data streaming but introduces additional processing stages that may become performance bottlenecks. Existing benchmarks mainly evaluate MQTT brokers in isolation or rely on black-box end-to-end measurements, offering limited insight into internal pipeline behavior.
This thesis presents a reproducible profiling framework for an MQTT-to-Kafka pipeline that combines benchmarking with white-box instrumentation of internal components. The framework models atomic Entry Level System Calls (ELSCs) and composes them into configurable workload classes, enabling automated and systematic performance experiments. The implementation is based on EMQX with integrated Kafka bridging and distributed tracing across protocol boundaries.
Evaluation following a Goal-Question-Metric approach demonstrates that the framework supports reproducible experiments, preserves trace continuity across services, and enables identification of internal bottlenecks while maintaining controlled instrumentation overhead.
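The white-box idea of preserving trace continuity across pipeline stages can be sketched as a timing wrapper that records per-stage spans under one trace id. The stage names below are illustrative, not EMQX internals, and the wrapper is not the thesis framework.

```python
# Hedged sketch: record per-stage latency spans with a propagated trace id,
# loosely mirroring tracing a message through broker, bridge, and producer.
import time, uuid

def traced(stage, spans):
    def wrap(fn):
        def inner(msg):
            start = time.perf_counter()
            out = fn(msg)
            # Each span: (trace id, stage name, elapsed seconds).
            spans.append((msg["trace_id"], stage, time.perf_counter() - start))
            return out
        return inner
    return wrap

spans = []
broker = traced("mqtt_broker", spans)(lambda m: m)
bridge = traced("bridge", spans)(lambda m: m)
producer = traced("kafka_producer", spans)(lambda m: m)

msg = {"trace_id": str(uuid.uuid4()), "payload": b"sensor-reading"}
producer(bridge(broker(msg)))  # one message through all three stages
print([s[1] for s in spans])   # ['mqtt_broker', 'bridge', 'kafka_producer']
```

Comparing span durations per stage is what allows internal bottlenecks to be localized rather than inferred from end-to-end latency alone.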
Location: Room 010 (Building 50.34)
Evaluation of Embedding Models on Model Data
Speaker: David Inca
Talk type: Bachelor's thesis
Advisor: Julian Roßkothen
Language: German
Mode: in person
Abstract:
Reliable retrieval of model-driven software engineering (MDSE) artifacts via semantic similarity is a core prerequisite for capable RAG systems in this domain. Since common embedding models are optimized primarily for unstructured text, this thesis investigates their effectiveness for structured, referential model artifacts. In a controlled benchmark setting, the influence of data preparation, serialization, and embedding-model choice is evaluated systematically.
The results show that the choice of embedding model has the most significant influence on retrieval quality. Dereferencing internal links proves particularly effective, while the choice of serialization approach plays only a marginal role. The study demonstrates the general suitability of embedding-based methods for MDSE data and provides concrete recommendations for configuring efficient retrieval architectures.
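The retrieval setting under study can be sketched as cosine similarity between a query and serialized model elements. The "embedding" below is a toy bag-of-words vector standing in for a learned embedding model, and the artifact texts are invented.

```python
# Hedged sketch: cosine-similarity retrieval over serialized model elements
# using a toy bag-of-words vector in place of a learned embedding.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Serialized model artifacts; the dereferenced "order" link makes the
# Customer element retrievable for order-related queries.
artifacts = {
    "Class Customer": "class customer with attribute name and reference to order",
    "Class Invoice": "class invoice with attribute amount",
}
query = embed("which class references an order")
best = max(artifacts, key=lambda k: cosine(query, embed(artifacts[k])))
print(best)  # 'Class Customer'
```

The example also hints at why dereferencing internal links helps: the serialized text of an element only matches order-related queries if the referenced element's name appears in it.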
Textual Modeling for Cloud-Native Performance Simulation
Speaker: Fabio Freund
Talk type: Bachelor's thesis
Advisor: Maximilian Hummel
Language: German
Mode: in person
Abstract:
Text-based modeling simplifies the creation of software architecture models, yet existing grammars are largely rooted in traditional PCM concepts. Modern cloud-native systems—built around containers, microservices, and Kubernetes-based workflows—do not align well with these abstractions. In addition, current modeling approaches lack an accessible, declarative syntax familiar to DevOps engineers who work with YAML-style configuration files. This thesis addresses this gap by extending an existing textual modeling language to better represent cloud-native patterns while introducing a concise, YAML-inspired syntax. The work includes analyzing and adapting the TPCM/Xtext grammar, designing user-friendly constructs aligned with real-world deployment descriptors, and implementing a transformation layer that maps the extended language to PCM models compatible with Palladio and SimuLizar. The result will improve the usability and relevance of performance simulation in cloud-native environments.
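The transformation idea, mapping a declarative deployment descriptor to performance-model elements, can be sketched as follows. The descriptor is shown as an equivalent Python dict rather than YAML, and the "component"/"allocation" shapes are simplified stand-ins, not the TPCM grammar or PCM metamodel.

```python
# Hedged sketch: map a YAML-style deployment descriptor to simplified
# PCM-like model elements (one component per service, one allocation
# per replica). Service names and fields are illustrative.
descriptor = {
    "services": {
        "checkout": {"replicas": 2, "cpu": "500m"},
        "catalog": {"replicas": 1, "cpu": "250m"},
    }
}

def to_model(desc):
    model = {"components": [], "allocations": []}
    for name, cfg in desc["services"].items():
        model["components"].append({"name": name, "cpu": cfg["cpu"]})
        for i in range(cfg["replicas"]):
            model["allocations"].append(f"{name}-{i}")
    return model

model = to_model(descriptor)
print(len(model["components"]), len(model["allocations"]))  # 2 3
```

The real transformation layer targets PCM repositories and allocation contexts; the point here is only the direction of the mapping, from DevOps-style descriptors to simulation-ready model elements.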
Location: Room 348 (Building 50.34)
Location: Room 010 (Building 50.34)
Web conference: https://sdq.kastel.kit.edu/institutsseminar/Microsoft_Teams
A Context-Based Approach for Change Propagation in Vitruvius
Speaker: Josua Eyl
Talk type: Bachelor's thesis
Advisor: Raziyeh Dehghani
Language: English
Mode: in person
Abstract:
Model-driven software development builds software on different levels of abstraction, so several models describe one system from different viewpoints. Besides describing different parts of the system, these models overlap, and keeping them consistent with each other therefore requires effort. For this purpose, the Vitruvius framework was developed as a view-based software development tool, together with a domain-specific language called Reactions that specifies how changes made in one model are propagated to the corresponding other models. So far, the Reactions language cannot propagate changes based on external context data: changes are applied uniformly, without considering environmental or situational factors. In practice, however, the correct propagation often depends on context such as user roles, system state, or domain-specific constraints. This bachelor's thesis therefore aims to develop a context-aware change propagation mechanism for software development.
To this end, a context metamodel is developed that describes how context can be defined, and the Reactions language is extended to incorporate this context so that propagation decisions can be based on it. The developed approach will be evaluated using a representative use case that demonstrates how context information can influence and improve change propagation decisions.
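The idea of gating propagation on external context can be sketched as a rule that consults a context object before applying a change. The `Context` fields and the rule below are invented for illustration; they are not the Vitruvius or Reactions API.

```python
# Hedged sketch: a propagation step that is allowed or held back depending
# on external context (user role, system state), in the spirit of
# context-aware Reactions-style rules.
from dataclasses import dataclass

@dataclass
class Context:
    user_role: str
    system_state: str

def propagate(change, context, target_model):
    # Illustrative rule: only an architect may propagate interface changes,
    # and only while the system is in maintenance; otherwise hold back.
    if change["kind"] == "interface" and not (
        context.user_role == "architect" and context.system_state == "maintenance"
    ):
        return False
    target_model.append(change)
    return True

target = []
ok = propagate({"kind": "interface", "name": "IOrder"},
               Context("developer", "running"), target)
print(ok, len(target))  # False 0
```

With `Context("architect", "maintenance")` the same change would be applied, which is exactly the uniform-versus-contextual distinction the thesis addresses.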
Integrating Human-Related Factors into Change Management
Speaker: Manuel Odinius
Talk type: Bachelor's thesis
Advisor: Raziyeh Dehghani
Language: English
Mode: in person
Abstract:
Systems are becoming more and more complex, and models are used to handle the complexity of constructing them. The systems, as well as the models, evolve and adapt over time, creating a significant information problem: keeping everyone up to date and distributed knowledge consistent. To address this problem, models are correlated with each other using correspondence models, which can be used to create (virtual) single underlying models that aggregate all information at a single point. However, implementations of this strategy lack information on the human-related aspects of these changes. To solve this problem, this thesis presents a metamodel that can be used to add information on human-related aspects of changes. This information is added through annotations that preserve it in different formats, some more machine-readable, some more human-readable. The thesis discusses human-related aspects that are deemed relevant as annotations, giving additional context on changes that might otherwise go missing. Furthermore, it discusses a concrete implementation of the presented annotations in the Vitruvius framework, together with a theoretical example showing a possible application of the implementation.
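The annotation idea, attaching human-related context to a change without altering the change itself, can be sketched as follows. The attribute set (author, rationale, confidence) is invented for illustration and is not the thesis metamodel.

```python
# Hedged sketch: annotate a model change with human-related context,
# mixing a human-readable rationale with a machine-readable confidence.
from dataclasses import dataclass, asdict

@dataclass
class ChangeAnnotation:
    author: str
    rationale: str     # human-readable explanation of the change
    confidence: float  # machine-readable degree of certainty

def annotate(change, annotation):
    # Attach the annotation without modifying the change payload itself.
    return {**change, "annotation": asdict(annotation)}

change = annotate({"element": "Customer", "op": "rename"},
                  ChangeAnnotation("alice", "align with domain glossary", 0.9))
print(change["annotation"]["author"])  # alice
```

Keeping both formats side by side is what lets downstream tooling filter changes automatically while humans can still read the rationale.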
further dates