On the Interpretability of Anomaly Detection via Neural Networks

Speaker: Marco Sturm
Talk type: Master's thesis
Advisor: Edouard Fouché
Date: Fri, 12 October 2018
Abstract: Verifying anomaly detection results in an unsupervised setting is challenging, since manually labelling large datasets is economically infeasible. In this thesis, we generate explanations that help to verify and understand the detected anomalies. We develop a rule-generation algorithm that describes frequent patterns in the output of autoencoders. The number of rules is significantly lower than the number of anomalies, so finding explanations for the rules requires far less effort than explaining every single anomaly. We evaluate the approach on a real-world use case, where it substantially reduces the effort domain experts need to understand the detected anomalies; however, because labels are missing, we cannot quantify its usefulness in exact numbers and therefore also evaluate the approach on benchmark datasets.
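The thesis's actual rule-generation algorithm is not reproduced here. As a rough illustration of the overall pipeline described in the abstract, the following Python sketch uses scikit-learn; the MLP-as-autoencoder, the 97th-percentile error threshold, and the shallow surrogate decision tree used for rule extraction are all illustrative assumptions, not the method from the thesis.

```python
# Illustrative sketch (not the thesis implementation): flag anomalies via
# autoencoder reconstruction error, then summarize them with a small set of
# human-readable rules extracted from a surrogate decision tree.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))   # stand-in for the unlabelled dataset
X[:30, 0] += 6.0                 # inject a frequent anomaly pattern

X_scaled = StandardScaler().fit_transform(X)

# A plain MLP trained to reproduce its own input acts as a simple autoencoder.
autoencoder = MLPRegressor(hidden_layer_sizes=(2,), max_iter=2000, random_state=0)
autoencoder.fit(X_scaled, X_scaled)

# Per-sample reconstruction error; the top few percent are flagged as anomalies
# (the 97% cutoff is an arbitrary choice for this sketch).
errors = np.mean((autoencoder.predict(X_scaled) - X_scaled) ** 2, axis=1)
is_anomaly = errors > np.quantile(errors, 0.97)

# A shallow surrogate tree yields far fewer rules than there are anomalies,
# so a domain expert only has to inspect the rules, not every flagged point.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, is_anomaly)
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

On the injected data, the printed tree reduces ~30 flagged points to one or two threshold rules on f0, which mirrors the abstract's point that explaining a handful of rules is cheaper than explaining each anomaly individually.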