Network of Excellence Peer-to-Peer Tagged Media

3D Clustering of Social Media

This software aims to find and rank similar video sequences. Similarity is measured using MPEG-7 visual descriptors (colour, edge and motion), hierarchical clustering methods and probabilistic latent semantic analysis.


The visual distances between key frames are calculated as the weighted sum of predefined dissimilarity functions per descriptor (picture on the left: 3D view of the calculated video similarities). Similarities between shots and their key frames are determined by hierarchical clustering: the video sequences are segmented into shots, a key frame is generated for each shot, and the visual descriptors are applied to these key frames. In addition, the motion features are applied to the whole video sequence. To compare sequences of different lengths, the motion activity is transformed into the frequency domain; the similarity of the motion activity is then measured by the normalized cross-correlation function Rxy(τ) evaluated at τ = 0. The motion direction characteristics of each video are captured by histograms of quantized motion-vector angles, and the dissimilarity of two video sequences is measured by the weighted sum of the L1-norms of their direction histograms (picture on the right: similarity ranking for a sample video).
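The sketch below illustrates these measures in Python/NumPy. The descriptor names, weights, spectrum length and the stand-in L1 per-descriptor distance are assumptions for illustration, not the MPEG-7 dissimilarity functions used in the software.

```python
import numpy as np

def weighted_keyframe_distance(desc_a, desc_b, weights):
    """Weighted sum of per-descriptor dissimilarities between two key frames.

    desc_a, desc_b: dict mapping descriptor name -> feature vector (np.ndarray)
    weights:        dict mapping descriptor name -> weight
    """
    total = 0.0
    for name, w in weights.items():
        # L1 distance as a stand-in for the descriptor-specific dissimilarity
        total += w * np.abs(desc_a[name] - desc_b[name]).sum()
    return total

def motion_activity_similarity(act_x, act_y, n_coeffs=64):
    """Compare motion-activity curves of different lengths.

    Each curve is transformed into the frequency domain (DFT magnitude,
    zero-padded or truncated to a fixed number of coefficients), and the
    similarity is the normalized cross-correlation Rxy(tau) at tau = 0.
    """
    def spectrum(a):
        return np.abs(np.fft.rfft(np.asarray(a, dtype=float), n=2 * n_coeffs))[:n_coeffs]
    x, y = spectrum(act_x), spectrum(act_y)
    # normalized cross-correlation evaluated at lag 0
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

def direction_histogram_distance(hist_x, hist_y):
    """L1 distance between quantized motion-direction histograms."""
    return float(np.abs(np.asarray(hist_x) - np.asarray(hist_y)).sum())
```

A condensed form of the resulting key-frame distance matrix could then be passed, for example, to scipy.cluster.hierarchy.linkage for the hierarchical clustering step.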
Textual similarities of video sequences are measured by applying probabilistic latent semantic analysis (pLSA) to keywords, descriptions and comments in order to determine the similarity of the associated metadata. From these metadata, document-term matrices are extracted after stop-word elimination. pLSA reduces the document-term matrix of a video sequence to a few latent concepts, and the textual similarity between metadata is measured by the cosine similarity of these concept vectors.
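A minimal, self-contained pLSA sketch (EM training on a term-count matrix, then cosine similarity of the per-document concept vectors) is shown below. It assumes tokenization and stop-word removal have already produced the count matrix; the number of topics, iterations and all names are illustrative, not the project's implementation.

```python
import numpy as np

def plsa_concepts(counts, n_topics=8, n_iter=50, seed=0):
    """Tiny pLSA trainer (EM) for a document-term count matrix.

    counts: (n_docs, n_terms) array of term counts
    returns P(z|d), i.e. one concept vector per document
    """
    rng = np.random.default_rng(seed)
    n_docs, n_terms = counts.shape
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, n_terms)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape (n_docs, n_topics, n_terms)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        joint /= joint.sum(1, keepdims=True) + 1e-12
        # M-step: re-estimate P(w|z) and P(z|d) from N(d,w) * P(z|d,w)
        weighted = counts[:, None, :] * joint
        p_w_z = weighted.sum(0)
        p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(2)
        p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d

def cosine_similarity(u, v):
    """Textual similarity between two concept vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```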
The weighted sum of the distance matrices is used to fuse the textual and visual similarities. The resulting distances can be visualized by applying the FastMap algorithm three times to generate three coordinates per data point.
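As a sketch of this last step, the fragment below fuses two distance matrices with an assumed weight alpha and runs a textbook FastMap heuristic for k = 3 axes; the pivot-selection strategy and all names are illustrative assumptions.

```python
import numpy as np

def fuse_distances(d_visual, d_textual, alpha=0.5):
    """Weighted sum of the visual and textual distance matrices
    (alpha is an assumed fusion weight, not a value from the project)."""
    return alpha * d_visual + (1.0 - alpha) * d_textual

def fastmap(dist, k=3, seed=0):
    """Embed items given only a distance matrix into k coordinates
    (k = 3 here for the 3D view) using the FastMap heuristic."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    d2 = dist.astype(float) ** 2              # work with squared distances
    coords = np.zeros((n, k))
    for axis in range(k):
        # pick a far-apart pivot pair with the usual farthest-point heuristic
        a = int(rng.integers(n))
        b = int(np.argmax(d2[a]))
        a = int(np.argmax(d2[b]))
        dab2 = d2[a, b]
        if dab2 <= 1e-12:                     # remaining distances are ~0
            break
        # project every item onto the line through the two pivots
        x = (d2[a, :] + dab2 - d2[b, :]) / (2.0 * np.sqrt(dab2))
        coords[:, axis] = x
        # residual squared distances in the hyperplane orthogonal to this axis
        d2 = np.maximum(d2 - (x[:, None] - x[None, :]) ** 2, 0.0)
    return coords
```

The three columns of the returned array would serve as the x, y and z coordinates of each video in the 3D view.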


Contact: Pascal Kelm, [Last name]@nue.tu-berlin.de