Institutsseminar/2022-07-22

{{Termin
|datum=2022-07-22T11:30:00.000Z
|raum=Raum 348 (Gebäude 50.34)
}}

Appointment (all appointments)
Date: Friday, 22 July 2022
Time: 11:30 – 12:15 (duration: 45 min)
Location: Raum 348 (Gebäude 50.34)
Web conference:
Previous appointment: Fri, 15 July 2022
Next appointment: Fri, 12 August 2022

Talks

Speaker: Philipp Uhrich
Title: Empirical Identification of Performance Influences of Configuration Options in High-Performance Applications
Talk type: Master's thesis
Advisor: Larissa Schmid
Talk language:
Talk mode: online
Abstract: Many modern high-performance applications are highly configurable software systems that provide hundreds or even thousands of configuration options. System administrators and application users need to understand these options and their impact on software performance to choose suitable configuration values. Performance prediction models can capture the influence of configuration options on the run-time characteristics of a software system, but building such models for highly configurable high-performance applications is expensive. However, not all configuration options that a software system offers are performance-relevant, and removing the performance-irrelevant ones from the modeling process can reduce the construction cost. In this thesis, we explore and analyze two approaches to empirically identify configuration options that are not performance-relevant and can be excluded from the performance prediction model. The first approach reuses existing performance modeling methods to build much cheaper prediction models from fewer samples and then analyzes these models to identify performance-irrelevant configuration options. The second approach uses white-box knowledge acquired through dynamic taint analysis to systematically construct the minimal number of experiments required to detect performance-irrelevant configuration options. In an evaluation with a case study, we show that the first approach identifies performance-irrelevant configuration options but also produces misclassifications, while the second approach did not perform to our expectations and requires further improvement.
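
As a rough illustration of the first approach, the sketch below (not from the thesis; the option names, synthetic data, model choice, and threshold are all assumptions) fits a sparse linear performance-influence model on a small configuration sample and flags options whose learned coefficients are near zero as candidates for performance irrelevance:

<syntaxhighlight lang="python">
# Illustrative sketch only: fit a cheap, sparse linear performance-influence
# model on few configuration samples, then treat options with near-zero
# coefficients as candidates for removal from the modeling process.
# Option names, synthetic data, and the threshold are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
options = ["cache_size", "num_threads", "log_level", "checksums"]

# Small random sample of binary configurations; few samples keep the model cheap.
X = rng.integers(0, 2, size=(30, len(options))).astype(float)

# Synthetic ground truth: only the first two options influence run time.
runtime = 5.0 + 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0.0, 0.1, size=30)

# Sparse linear model: Lasso shrinks coefficients of irrelevant options toward zero.
model = Lasso(alpha=0.05).fit(X, runtime)

THRESHOLD = 0.1  # assumed cut-off below which an option counts as irrelevant
for name, coef in zip(options, model.coef_):
    verdict = "performance-relevant" if abs(coef) > THRESHOLD else "candidate for removal"
    print(f"{name:12s} coef={coef:+.3f} -> {verdict}")
</syntaxhighlight>

Misclassifications of the kind the abstract reports would show up in such a setup as noise pushing an irrelevant option's coefficient above the threshold, or a weakly relevant one below it, when the sample is small.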
