On the semantics of similarity in deep trajectory representations
Current version as of 30 September 2019, 11:16
|Date||Fri, 11 October 2019|
|Abstract||Recently, a deep learning model (t2vec) for trajectory similarity computation has been proposed. Instead of operating on the raw trajectories, it computes the similarity between their deep representations. At present, we have no clear idea how to interpret the t2vec similarity values, nor what exactly they are based on. This thesis addresses these two issues by analyzing t2vec on its own and then systematically comparing it to the more familiar traditional models.
Firstly, we examine how the model's parameters influence the probability density function (PDF) of the t2vec similarity values. For this purpose, we conduct experiments with various parameter settings and inspect the abstract shape and statistical properties of the resulting PDFs. Secondly, we exploit the fact that we already have an intuitive understanding of the classical models, such as Dynamic Time Warping (DTW) and Longest Common Subsequence (LCSS). Therefore, we use this intuition to analyze t2vec by systematically comparing it to DTW and LCSS with the help of heat maps.|
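For readers unfamiliar with the classical baselines named in the abstract, the following is a minimal sketch of DTW and LCSS on one-dimensional trajectories. It is illustrative only: real trajectories are 2-D point sequences, and the matching threshold `eps` in LCSS is a hypothetical choice, not a value used in the thesis.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance via the classic DP recurrence.

    Lower values mean the trajectories are more similar.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def lcss_similarity(a, b, eps=0.5):
    """Longest Common Subsequence similarity, normalized to [0, 1].

    Two points "match" if they lie within eps of each other
    (eps is an illustrative threshold). Higher values mean
    the trajectories are more similar.
    """
    n, m = len(a), len(b)
    L = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) <= eps:
                L[i, j] = L[i - 1, j - 1] + 1
            else:
                L[i, j] = max(L[i - 1, j], L[i, j - 1])
    return L[n, m] / min(n, m)
```

Note the opposite orientations: DTW is a distance (0 for identical trajectories), while LCSS here is a similarity (1 for identical trajectories). This difference is one reason a learned similarity such as t2vec's needs a systematic comparison before its values can be interpreted.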