Abstract

In many critical applications, e.g., in the medical field, traditional Machine Learning (ML) methods cannot satisfy strict privacy requirements. For this reason, the AI paradigm Federated Learning (FL) emerged as a means to train ML models in a decentralized manner. Although both explainability methods and FL are well established today, the integration of explainability into FL is still largely lacking. In this master's thesis, we empirically investigated the interaction between FL and explainability by integrating explainability as a non-functional requirement. We empirically evaluated various metrics concerning explanation methods, their explanations, and the FL context. In parallel to the experimental, bottom-up part of this thesis, we also analyzed, conceptualized, and deepened our understanding of explainability from a top-down perspective to address the human side of the problem. In addition, our results are complemented by a user survey on explainability.