On the semantics of similarity in deep trajectory representations
Current version as of 30 September 2019, 11:16

Speaker Zdravko Marinov
Talk type Bachelor's thesis
Advisor Saeed Taghizadeh
Date Fri, 11 October 2019
Presentation mode
Abstract Recently, a deep learning model (t2vec) for trajectory similarity computation has been proposed. Instead of operating on the trajectories themselves, it computes similarity between their deep representations. At present, it is unclear how the t2vec similarity values should be interpreted, or what exactly they are based on. This thesis addresses these two issues by analyzing t2vec on its own and then systematically comparing it to the more familiar traditional models.

Firstly, we examine how the model's parameters influence the probability density function (PDF) of the t2vec similarity values. For this purpose, we conduct experiments with various parameter settings and inspect the overall shape and statistical properties of the resulting PDFs. Secondly, we already have an intuitive understanding of the classical models, such as Dynamic Time Warping (DTW) and Longest Common Subsequence (LCSS). We therefore use this intuition to analyze t2vec by systematically comparing it to DTW and LCSS with the help of heat maps.
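As background for the comparison, the classical dynamic-programming approach can be illustrated with DTW. The following is a minimal sketch (not code from the thesis) of DTW distance between two 2D trajectories given as point lists; the function name and representation are illustrative assumptions.

```python
import math

def dtw(traj_a, traj_b):
    """Dynamic-programming DTW distance between two trajectories,
    each a list of (x, y) points. Illustrative sketch only."""
    n, m = len(traj_a), len(traj_b)
    # cost[i][j] = DTW distance between the first i points of traj_a
    # and the first j points of traj_b
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(traj_a[i - 1], traj_b[j - 1])  # Euclidean point distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch traj_b
                                 cost[i][j - 1],      # stretch traj_a
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

Because every cell is filled from its three neighbors, the computation is quadratic in the trajectory lengths, which is one source of the scalability issues of the traditional models that t2vec avoids by comparing fixed-size deep representations instead.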