Talks
Analyzing software extensions through composition of architecture and source code analyzers
Speaker: Velislav Slavov
Talk type: Master's thesis
Advisor: Bahareh Taghavi
Language: English
Mode: in person
Abstract:
Extensibility is a common strategy for addressing the diverse requirements of software users. However, extensions, which are often distributed through online repositories, typically lack guarantees about their performance characteristics. This master's thesis proposes an approach to estimate the performance of software extensions before they are made available to end users. Such early-stage evaluation provides valuable feedback to developers and helps prevent stakeholder dissatisfaction.
Designing a single analyzer that can evaluate both the extension and the base system is often infeasible, because the two parts may be written in different programming languages and depend on each other. To address this, we investigate the composition of static source code analysis and model-based architecture analysis. We evaluate our method in a case study on an extensible Customer Relationship Management (CRM) system, demonstrating both the feasibility of performance estimation and the benefits of analyzer composition.
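To give a rough idea of how such a composition can work in principle, the following minimal sketch combines a hypothetical code-level demand estimate with a simple M/M/1 queueing model standing in for the architecture-level analysis. This is an illustration under our own assumptions, not the tooling developed in the thesis; the functions estimate_extension_demand and predict_response_time and all numbers are invented for the example.

# Illustrative sketch only (not the thesis's actual analyzers): combine a
# code-level estimate of an extension's resource demand with a simple
# architecture-level queueing model to predict response time.
# All names and parameters are hypothetical.

def estimate_extension_demand(loop_bound: int, cost_per_iteration_ms: float) -> float:
    """Stand-in for a static source code analysis: derive the CPU demand (ms)
    of one extension call from structural properties of its code."""
    return loop_bound * cost_per_iteration_ms

def predict_response_time(demand_ms: float, arrival_rate_per_s: float) -> float:
    """Stand-in for a model-based architecture analysis: treat the hosting
    component as an M/M/1 queue and return the mean response time (ms)."""
    service_rate_per_s = 1000.0 / demand_ms          # calls per second the resource can serve
    utilization = arrival_rate_per_s / service_rate_per_s
    if utilization >= 1.0:
        raise ValueError("system is saturated; no steady-state response time")
    return demand_ms / (1.0 - utilization)           # M/M/1 mean response time

if __name__ == "__main__":
    demand = estimate_extension_demand(loop_bound=200, cost_per_iteration_ms=0.05)
    print(f"estimated demand per call: {demand:.1f} ms")
    print(f"predicted response time at 40 req/s: {predict_response_time(demand, 40):.1f} ms")

The point of the sketch is only the division of labor: the code-level step produces a demand estimate for the extension in isolation, and the architecture-level step consumes it to predict system-level behavior.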
Integrating Explainability into Federated Learning: A Non-functional Requirement Perspective
Speaker: Nicolas Schuler
Talk type: Master's thesis
Advisor: Vincenzo Scotti
Language: English
Mode: in person
Abstract:
In many critical applications, e.g., in the medical field, traditional Machine Learning (ML) methods cannot satisfy strict privacy requirements. For this reason, Federated Learning (FL) emerged as a paradigm for training ML models in a decentralized manner. Although both explainability methods and FL are well established today, the integration of explainability into FL is still largely missing. In this master's thesis, we empirically investigated the interaction between FL and explainability by treating explainability as a non-functional requirement. We evaluated different measurements concerning explanation methods, the explanations they produce, and the FL context. In parallel to this bottom-up experimental work, we also analyzed and conceptualized explainability from a top-down perspective to address the human side of the problem. Our results are additionally complemented by a user survey on explainability.
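To make the notion of an explainability measurement in an FL context more concrete, here is a toy sketch under our own assumptions, not the thesis's evaluation setup: each simulated client derives a simple feature-importance explanation from a locally fitted linear model, and the cross-client agreement of these explanations is reported as one possible measurement. All function names and data are illustrative.

# Toy sketch (not the thesis's protocol): quantify one possible explainability
# measurement in a federated setting, namely how consistent simple
# feature-importance explanations are across clients.
import numpy as np

rng = np.random.default_rng(0)

def local_explanation(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit a local linear model in closed form and use the magnitude of its
    coefficients as a (deliberately simple) feature-importance explanation."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    importance = np.abs(w)
    return importance / importance.sum()

def explanation_agreement(explanations: list) -> float:
    """Mean pairwise cosine similarity between client explanations."""
    sims = []
    for i in range(len(explanations)):
        for j in range(i + 1, len(explanations)):
            a, b = explanations[i], explanations[j]
            sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
    return float(np.mean(sims))

# Simulate three clients drawing data from the same underlying linear relation.
true_w = np.array([2.0, 0.0, -1.0])
explanations = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    explanations.append(local_explanation(X, y))

print(f"cross-client explanation agreement: {explanation_agreement(explanations):.3f}")

High agreement in this toy setting simply reflects that the clients share one data distribution; in a real federated study, such a score would be one of several measurements examined alongside properties of the explanation method itself.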
Notes