Institutsseminar/2021-08-20

Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy


Session

Date: Friday, 20 August 2021
Time: 11:30 – 12:00 (duration: 30 min)
Location: https://conf.dfn.de/webapp/conference/979160755 (web conference)
Previous session: Fri, 30 July 2021
Next session: Fri, 10 September 2021


Talks

Speaker: Martin Lange
Title: Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy
Talk type: Bachelor's thesis
Advisor: Clemens Müssener
Abstract: Explainable artificial intelligence (XAI) provides reasoning for a model's behavior.

For many explainers, this reasoning reveals additional information about the inner workings of the model or even about the training data. Since data privacy is becoming an increasingly important concern, the question arises whether explainers can leak private data. It is unclear what private data can be obtained from different kinds of explanations. In this thesis, I adapt three machine-learning privacy attacks to the field of XAI: model extraction, membership inference, and training data extraction. I argue how the different kinds of explainers map onto these attack categories and present specific use cases in which an attacker can obtain private data from an explanation. In experiments, I demonstrate membership inference and training data extraction for two specific explainers. Thus, privacy can be breached with the help of explainers.
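
To give an intuition for the membership inference adaptation mentioned in the abstract, the following minimal sketch shows how an attacker could use only an explainer's output to guess whether a point was part of the training data. It is not the setup used in the thesis: the synthetic data, the gradient-based saliency explanation for a logistic regression model, and the median threshold are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary classification task; the first 1000 points are "members"
# (training data), the remaining points are held-out non-members.
X = rng.normal(size=(2000, 20))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=2000) > 0).astype(int)
members, non_members = X[:1000], X[1000:]
model = LogisticRegression(max_iter=1000).fit(members, y[:1000])

def explanation_norm(x):
    # Saliency-style explanation: gradient of the positive-class probability
    # w.r.t. the input features. For logistic regression this is p*(1-p)*w,
    # so its magnitude shrinks as the model becomes more confident on x.
    # A real attacker would only see the attribution vector returned by the
    # explainer; it is computed directly here for brevity.
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    return np.abs(p * (1 - p) * model.coef_[0]).sum()

member_norms = np.array([explanation_norm(x) for x in members])
outsider_norms = np.array([explanation_norm(x) for x in non_members])

# The attacker flags a point as "member" when the explanation magnitude is
# below a threshold. In practice the threshold would be calibrated, e.g. on
# shadow models; the median over all queries is used here for illustration.
threshold = np.median(np.concatenate([member_norms, outsider_norms]))
tpr = (member_norms < threshold).mean()    # members correctly flagged
fpr = (outsider_norms < threshold).mean()  # non-members wrongly flagged
print(f"attack true-positive rate {tpr:.2f}, false-positive rate {fpr:.2f}")

With a model that generalizes well, as in this toy setting, the flag rates for members and non-members are nearly identical; the attack only becomes informative when the model overfits its training data, which is the regime such attacks typically exploit.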

