Institutsseminar/2020-12-11


Session
Date: Friday, 11 December 2020
Time: 11:30 – 12:30 (duration: 60 min)
Location: web conference, https://conf.dfn.de/webapp/conference/979111385
Previous session: Fri, 27 November 2020
Next session: Thu, 17 December 2020

Talks

Speaker: Haiko Thiessen
Title: Detecting Outlying Time-Series with Global Alignment Kernels
Talk type: Proposal
Advisor: Florian Kalinke
Language:
Mode:
Abstract: Using outlier detection algorithms such as Support Vector Data Description (SVDD) to detect outlying time series usually requires extracting domain-specific attributes first. This indirect approach requires expert knowledge, which makes SVDD impractical for many real-world use cases. Incorporating global alignment kernels directly into SVDD to compute distances between time series bypasses the attribute-extraction step and makes the application of SVDD independent of the underlying domain.

In this work, we propose a new time-series outlier detection algorithm that combines global alignment kernels and SVDD. We will evaluate its outlier detection capabilities on synthetic data as well as on real-world data sets and compare its performance to state-of-the-art outlier detection methods, in particular with regard to the types of detected outliers.
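
A minimal sketch of the underlying idea, not the proposed algorithm: a precomputed global alignment kernel (GAK) matrix, here computed with tslearn's cdist_gak, is plugged into scikit-learn's OneClassSVM, a close relative of SVDD that accepts precomputed kernels. The toy data, the bandwidth sigma and the parameter nu are arbitrary choices for illustration.

 # Sketch only: GAK Gram matrix fed into a one-class SVM (a close relative of SVDD).
 import numpy as np
 from sklearn.svm import OneClassSVM
 from tslearn.metrics import cdist_gak
 rng = np.random.default_rng(0)
 # Toy data: 50 noisy sine series of length 100; the first three are perturbed into outliers.
 X = np.sin(np.linspace(0, 6 * np.pi, 100)) + 0.1 * rng.standard_normal((50, 100))
 X[:3] += rng.standard_normal((3, 100))
 K = cdist_gak(X, sigma=2.0)                  # 50 x 50 kernel matrix; sigma chosen ad hoc
 model = OneClassSVM(kernel="precomputed", nu=0.1).fit(K)
 scores = model.decision_function(K)          # lower score = more outlying
 print(np.argsort(scores)[:3])                # indices of the most outlying series

Because the kernel works on the raw series directly, no domain-specific feature extraction is needed, which is exactly the property the proposal exploits.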

Speaker: Patrick Ehrler
Title: Meta-Modeling the Feature Space
Talk type: Proposal
Advisor: Jakob Bach
Language:
Mode:
Abstract: Feature Selection is an important step in Machine Learning for reducing model training times and model complexity. One state-of-the-art approach is Wrapper Feature Selection, where candidate feature subsets are evaluated by training the model on them. Because we cannot evaluate all 2^n subsets, an appropriate search strategy is vital.

Bayesian Optimization has already been used successfully for hyperparameter optimization and in very specific Feature Selection settings. We want to investigate how Bayesian Optimization can be used for Feature Selection in general and discuss its limitations and possible solutions.
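
As a rough, self-contained illustration of wrapper Feature Selection with Bayesian Optimization, not the planned method: the sketch below encodes feature subsets as binary masks, fits a Gaussian-process surrogate that predicts cross-validation accuracy from a mask, and picks the next subset with a simple upper-confidence-bound rule. The data set, classifier, surrogate, acquisition rule and all hyperparameters are arbitrary example choices.

 # Sketch only: Bayesian Optimization over binary feature masks (wrapper feature selection).
 import numpy as np
 from sklearn.datasets import load_breast_cancer
 from sklearn.gaussian_process import GaussianProcessRegressor
 from sklearn.linear_model import LogisticRegression
 from sklearn.model_selection import cross_val_score
 X, y = load_breast_cancer(return_X_y=True)
 n_features = X.shape[1]
 rng = np.random.default_rng(0)
 def evaluate(mask):
     # Wrapper objective: CV accuracy of a classifier trained on the selected features only.
     if mask.sum() == 0:
         return 0.0
     clf = LogisticRegression(max_iter=5000)
     return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()
 masks = rng.integers(0, 2, size=(5, n_features))         # a few random initial subsets
 scores = np.array([evaluate(m) for m in masks])
 gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True)
 for _ in range(15):
     gp.fit(masks, scores)                                # surrogate over binary masks
     candidates = rng.integers(0, 2, size=(200, n_features))
     mu, sigma = gp.predict(candidates, return_std=True)
     best = candidates[np.argmax(mu + 0.5 * sigma)]       # upper-confidence-bound pick
     masks = np.vstack([masks, best])
     scores = np.append(scores, evaluate(best))
 print("best CV accuracy:", scores.max(), "with", int(masks[scores.argmax()].sum()), "features")

Encoding subsets as binary vectors keeps the surrogate generic; only 20 subsets are evaluated with cross-validation here instead of exhaustively searching all 2^n subsets.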

Speaker: Philipp Weinmann
Title: Tuning of Explainable Artificial Intelligence (XAI) tools in the field of text analysis
Talk type: Proposal
Advisor: Clemens Müssener
Language:
Mode:
Abstract: Philipp Weinmann will present the plan for his Bachelor's thesis, "Tuning of Explainable Artificial Intelligence (XAI) tools in the field of text analysis". He will give a general introduction to explainers for artificial intelligence in the context of NLP. We will then explore one of these tools in detail: SHAP, a perturbation-based local explainer, and discuss how SHAP explanations can be evaluated.
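
To give a feel for what a perturbation-based local explainer does, the sketch below scores each word of a sentence by how much deleting it changes a toy classifier's predicted probability. This leave-one-word-out scheme is only a simplified stand-in for SHAP, which additionally averages over coalitions of removed words with Shapley-value weights; the corpus, model and example sentence are invented for illustration.

 # Sketch only: leave-one-word-out perturbation explanation for a bag-of-words classifier.
 from sklearn.feature_extraction.text import TfidfVectorizer
 from sklearn.linear_model import LogisticRegression
 from sklearn.pipeline import make_pipeline
 texts = ["great movie, loved it", "terrible plot and bad acting",
          "loved the acting", "bad movie, terrible",
          "great acting, great plot", "bad, boring, terrible"]
 labels = [1, 0, 1, 0, 1, 0]                  # 1 = positive, 0 = negative
 model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
 def explain(sentence):
     # Score each word by how much its removal changes P(positive);
     # a positive score means the word supports the positive class.
     base = model.predict_proba([sentence])[0, 1]
     words = sentence.split()
     return [(w, base - model.predict_proba([" ".join(words[:i] + words[i + 1:])])[0, 1])
             for i, w in enumerate(words)]
 print(explain("great movie but terrible acting"))

SHAP's kernel explainer generalizes this idea: it removes many words at once, weights the resulting coalitions with the Shapley kernel, and thereby produces attributions that sum to the difference between the prediction and a baseline.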

Notes