Assessing the Scalability of Variability Artifact Transformations using TRAVART


Speaker: Kaan Berk Yaman
Talk type: Master's thesis
Advisor: Kevin Feichtinger
Date: Fri, 26 September 2025, 11:30 (Room 348, Building 50.34)
Language: English
Mode: In person
Abstract: There are different types of artifacts for representing variability in a Software Product Line (SPL). Because different analysis tools exist for different artifact types, it is sometimes necessary to transform models between formats to make them available to a particular tool. There is not always a natural mapping of structures between artifact types, so a model transformation between two arbitrary formats is not trivial. A model transformation usually connects exactly two formats; however, models can be transformed transitively from one format to another via multiple partial transformations. The framework TRAVART exploits this through a pivot model format, the Universal Variability Language (UVL): via a plugin architecture, different plugins provide transformations from and to UVL. There are use cases where TRAVART could be employed in a pipeline, but there is no research on whether TRAVART is actually adequate for this purpose. To investigate this, we implement a benchmarking facility in TRAVART and benchmark three actively maintained TRAVART plugins. We collect over 3000 models from the literature for benchmarking and use various statistical methods to support the claim that TRAVART model transformations scale. Our results show that, for the models in our chosen benchmarking datasets, the mean transformation time is less than five seconds on every possible path and for all plugins. Correlation analysis yields estimators in O(n²) for average transformation time as a function of model size, with high R² across all plugins and paths, apart from a few exceptions where the R² value is more open to interpretation. We use control groups with larger and smaller models to see how well our predictors work in these two edge cases. For smaller models, transformation time is best predicted by a constant mean line, which can be explained by a fixed transformation overhead, i.e., a constant factor in the runtime. For the largest models, the predictors fitted to the benchmarking models underestimate the runtime, and there is a gap in the runtime curve that suggests some other factor we have not considered, such as limitations of the memory hierarchy or single-threaded execution. We also investigate transformation complexity as a scaling metric for model size, which, however, leads to only a nominal improvement in predictor quality. Ultimately, we conclude that TRAVART plugin transformations do scale, especially for the range of model sizes in our benchmarking dataset, with some caveats for larger models where further factors must be investigated for a definitive answer.
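The transitive transformation idea described in the abstract can be sketched in a few lines: each plugin only needs to map its own format to and from the pivot format (UVL), and any source-to-target transformation is then the composition of two such partial transformations. The following Python sketch is purely illustrative; the names Plugin, to_uvl, and from_uvl are hypothetical and do not reflect TRAVART's actual Java API.

<syntaxhighlight lang="python">
class Plugin:
    """Hypothetical plugin: converts between its native format and the pivot (UVL)."""

    def to_uvl(self, model):
        """Translate a native model into a UVL model."""
        raise NotImplementedError

    def from_uvl(self, uvl_model):
        """Translate a UVL model into a native model."""
        raise NotImplementedError


def transform(source_plugin: Plugin, target_plugin: Plugin, model):
    """Transform a model between two formats transitively, via the UVL pivot."""
    uvl_model = source_plugin.to_uvl(model)    # source format -> pivot
    return target_plugin.from_uvl(uvl_model)   # pivot -> target format
</syntaxhighlight>

The quadratic runtime estimators mentioned in the abstract can be read as least-squares fits of average transformation time against model size, judged by their R² value. The sketch below shows one way such a fit could be computed with NumPy; the sample numbers are invented for illustration and are not measurements from the thesis.

<syntaxhighlight lang="python">
import numpy as np

def fit_quadratic(model_sizes, times):
    """Fit time ~ a*n^2 + b*n + c by least squares and report R^2."""
    coeffs = np.polyfit(model_sizes, times, deg=2)   # quadratic fit
    predicted = np.polyval(coeffs, model_sizes)
    ss_res = np.sum((times - predicted) ** 2)        # residual sum of squares
    ss_tot = np.sum((times - np.mean(times)) ** 2)   # total sum of squares
    return coeffs, 1.0 - ss_res / ss_tot             # coefficients, R^2

# Invented example data (model size in number of features, time in seconds):
sizes = np.array([10, 50, 100, 500, 1000, 2000], dtype=float)
times = np.array([0.03, 0.05, 0.09, 0.60, 2.10, 7.90])
coeffs, r2 = fit_quadratic(sizes, times)
print("coefficients:", coeffs, "R^2:", r2)
</syntaxhighlight>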