Tuning of Explainable Artificial Intelligence (XAI) tools in the field of text analysis

From IPD-Institutsseminar

Current version as of 26 May 2021, 14:12

Speaker: Philipp Weinmann
Talk type: Bachelor's thesis
Advisor: Clemens Müssener
Date: Fri, 11 June 2021
Presentation mode:
Abstract: The goal of this bachelor's thesis was to analyse classification results using SHAP, a method published in 2017. Explaining how an artificial neural network makes a decision is an interdisciplinary research subject combining computer science, mathematics, psychology, and philosophy. We analysed these explanations from a psychological standpoint and, after presenting our findings, propose a method to improve the interpretability of text explanations using text hierarchies, with little to no loss of accuracy. A secondary goal was to test a framework developed for analysing a multitude of explanation methods. This framework is presented alongside our findings, together with a guide on how to use it to create your own analyses. This bachelor's thesis is addressed to readers familiar with artificial neural networks and other machine learning methods.
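SHAP rests on Shapley values from cooperative game theory: a word's attribution is its average marginal contribution to the model score across all subsets of the other words. A minimal, purely illustrative sketch of that principle (this is not the shap library or the thesis's own code; the toy bag-of-words "sentiment model" and its weights are made up for the example):

```python
from itertools import combinations
from math import factorial

def shapley_values(words, score):
    """Exact Shapley values: each word's weighted average marginal
    contribution to the model score over all subsets of the other words."""
    n = len(words)
    values = {}
    for i, w in enumerate(words):
        rest = [x for j, x in enumerate(words) if j != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(rest, k):
                # Classic Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (score(set(subset) | {w}) - score(set(subset)))
        values[w] = total
    return values

# Hypothetical linear bag-of-words scorer standing in for a real classifier.
WEIGHTS = {"great": 2.0, "movie": 0.5, "boring": -1.5}
def score(word_set):
    return sum(WEIGHTS.get(w, 0.0) for w in word_set)

attribution = shapley_values(["great", "movie", "boring"], score)
# For a linear model the Shapley value of each word equals its weight.
```

Real SHAP approximates these values efficiently for non-linear models, where the exact enumeration above would be exponential in the number of words.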