Bayesian Optimization for Wrapper Feature Selection: Difference between revisions

Line 5:
  |betreuer=Jakob Bach
  |termin=Institutsseminar/2019-06-07
- |kurzfassung=Wrapper feature selection can lead to highly accurate classifications. However, the computational costs for this are very high in general. Bayesian Optimization on the other hand has already proven to be very efficient in optimizing black box functions. This approach uses Bayesian Optimization in order to minimize the number of evaluations, i.e. the training of models with different feature subsets.
+ |kurzfassung=Wrapper feature selection can lead to highly accurate classifications. However, the computational costs for this are very high in general. Bayesian Optimization on the other hand has already proven to be very efficient in optimizing black-box functions. This approach uses Bayesian Optimization in order to minimize the number of evaluations, i.e. the training of models with different feature subsets. We will use Gaussian processes, random forests and other regression learners for the surrogate model. On 10 different classification datasets the approach will be compared against established wrapper feature selection methods, but also against filter and embedded methods.
  }}

Revision as of 16:10, 4 June 2019

Speaker: Adrian Kruck
Talk type: Proposal
Advisor: Jakob Bach
Date: Fri, 7 June 2019
Presentation mode:
Abstract: Wrapper feature selection can lead to highly accurate classifications. However, the computational costs for this are generally very high. Bayesian Optimization, on the other hand, has already proven to be very efficient at optimizing black-box functions. This approach uses Bayesian Optimization to minimize the number of evaluations, i.e., the training of models with different feature subsets. We will use Gaussian processes, random forests, and other regression learners for the surrogate model. On 10 different classification datasets, the approach will be compared against established wrapper feature selection methods, as well as against filter and embedded methods.
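
To make the idea concrete, the following is a minimal, self-contained Python sketch of Bayesian-optimization-style wrapper feature selection. It is not the implementation evaluated in this work: each feature subset is encoded as a binary mask, the expensive black-box objective is the cross-validated accuracy of a classifier trained on that subset, a random forest acts as the surrogate model, and the next mask to evaluate is chosen by a simple upper-confidence-bound rule over randomly sampled candidate masks. The dataset, classifier, acquisition rule, and evaluation budgets are illustrative placeholders.

<syntaxhighlight lang="python">
# Illustrative sketch only (not the implementation from the talk):
# wrapper feature selection driven by a surrogate-based optimization loop.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # placeholder dataset
n_features = X.shape[1]
rng = np.random.default_rng(0)

def evaluate(mask):
    """Expensive black-box objective: CV accuracy of the classifier on the feature subset."""
    if mask.sum() == 0:                       # an empty subset is invalid
        return 0.0
    clf = LogisticRegression(max_iter=1000)   # placeholder wrapped classifier
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

# Initial design: a few random feature subsets, evaluated exactly
masks = rng.integers(0, 2, size=(10, n_features))
scores = np.array([evaluate(m) for m in masks])

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
for _ in range(20):                           # budget of 20 further model trainings
    surrogate.fit(masks, scores)
    # Acquisition step: rate random candidate masks with the surrogate and
    # pick the one with the best upper confidence bound (mean + std over trees)
    candidates = rng.integers(0, 2, size=(500, n_features))
    per_tree = np.stack([tree.predict(candidates) for tree in surrogate.estimators_])
    ucb = per_tree.mean(axis=0) + per_tree.std(axis=0)
    best = candidates[np.argmax(ucb)]
    masks = np.vstack([masks, best])
    scores = np.append(scores, evaluate(best))

best_mask = masks[np.argmax(scores)].astype(bool)
print(f"best CV accuracy {scores.max():.3f} with {best_mask.sum()} of {n_features} features")
</syntaxhighlight>

A Gaussian-process surrogate would fit into the same loop; the random forest is used here only because it handles the discrete 0/1 search space without a special kernel.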