Assessing the Scalability of Variability Artifact Transformations using TRAVART
| Speaker | Kaan Berk Yaman |
|---|---|
| Talk type | Master's thesis |
| Advisor | Kevin Feichtinger |
| Date | Fri, 26 September 2025, 11:30 (Room 348, Building 50.34) |
| Language | German |
| Mode | in person |
| Abstract | There are different types of artifacts for representing variability in a Software Product Line (SPL). Because different analysis tools exist for different artifact types, it is sometimes necessary to transform models between formats to make them available for analysis with a particular tool. There is not always a natural mapping of structures between artifact types, so a model transformation between two arbitrary formats is not trivial. A model transformation usually connects two specific formats; however, models can also be transformed transitively from one format to another via multiple partial transformations. The framework TRAVART exploits this via a pivot model format, the Universal Variability Language (UVL): through a plugin architecture, different plugins provide transformations to and from UVL. There are use cases in which TRAVART could be employed in a pipeline, but there is no research on whether TRAVART is actually adequate for this purpose. To investigate this, we implement a benchmarking facility in TRAVART and benchmark three actively maintained TRAVART plugins. We collect over 3000 models from the literature for benchmarking and use various statistical methods to support the claim that TRAVART model transformations scale. Our results show that, for the models in our chosen benchmarking datasets, the mean transformation time is less than five seconds for every possible path and all plugins. Correlation analysis yields O(n²) estimators for the average transformation time as a function of model size, with high R² across all plugins and paths, with only a few exceptions where the R² value is more open to interpretation. We use control groups with larger and smaller models to examine how well our predictors work in these two edge cases. For smaller models, transformation time is best predicted by a mean line, which can be explained by a fixed transformation overhead, i.e., a constant factor in the runtime. For the largest models, the predictors fitted to the benchmarking models underestimate the runtime, and a gap in the runtime curve suggests some other factor we have not considered, such as limitations of the memory hierarchy or single-threaded execution. We also investigate transformation complexity as a scaling metric for model size, which, however, leads to only a nominal improvement in predictor quality. Ultimately, we conclude that TRAVART plugin transformations do scale, especially within the range of model sizes covered by our benchmarking dataset, with some caveats for larger models, where further factors must be investigated for a definitive answer. |
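The correlation analysis described in the abstract fits quadratic estimators of transformation time against model size and judges them by R². The following minimal sketch is not part of the thesis and uses purely synthetic numbers; it only illustrates that kind of analysis with NumPy (`np.polyfit`, `np.polyval`) and a hand-computed coefficient of determination.

```python
# Minimal sketch (not from the thesis): fit a quadratic predictor of
# transformation time as a function of model size and report R²,
# analogous to the correlation analysis described in the abstract.
# The data below is synthetic and purely illustrative.
import numpy as np

# Hypothetical benchmark observations: model size (e.g. number of features)
# and measured mean transformation time in seconds.
sizes = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
times = np.array([0.08, 0.11, 0.19, 0.45, 1.30, 4.60])

# Fit a degree-2 polynomial, matching the O(n²) estimators in the abstract.
coeffs = np.polyfit(sizes, times, deg=2)
predicted = np.polyval(coeffs, sizes)

# Coefficient of determination R² as a goodness-of-fit measure.
ss_res = np.sum((times - predicted) ** 2)
ss_tot = np.sum((times - np.mean(times)) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print("quadratic coefficients:", coeffs)
print("R²:", round(r_squared, 3))
```

A degree-2 fit mirrors the O(n²) runtime estimators mentioned above; predictors of other degrees, or the mean-line predictor used for the smallest models, could be compared with the same R² measure.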