Title | Capturing the human action semantics using a query-by-example |
Publication Type | Conference Paper |
Year of Publication | 2008 |
Authors | Montesanto A., Baldassarri P., Dragoni A.F., Vallesi G., Puliti P.
Conference Name | SIGMAP 2008 - Proceedings of the International Conference on Signal Processing and Multimedia Applications |
Abstract | The paper describes a method for extracting human action semantics from videos using queries-by-example. Here we consider the indexing and matching problems of content-based human motion data retrieval. The query formulation is based on trajectories that may be easily built or extracted by following relevant points in a video, even by a novice user. The resulting trajectories carry a high degree of action semantics. The semantic schema is built by splitting a trajectory into time-ordered sub-sequences that contain the features of the extracted points. This kind of semantic representation reduces the dimensionality of the search space and, being human-oriented, allows a selective recognition of actions that are very similar to one another. A neural network system analyzes the video semantic similarity using a two-layer architecture of multilayer perceptrons, which is able to learn the semantic schema of the actions and to recognize them. |
URL | http://www.scopus.com/inward/record.url?eid=2-s2.0-55849129652&partnerID=40&md5=a8587b1b9833a9ccf64510cc75e426ab |
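The abstract outlines a two-layer architecture of multilayer perceptrons operating on time-ordered trajectory sub-sequences. The sketch below is only an illustration of that general idea, not the authors' implementation: the use of scikit-learn, the feature dimensions, the number of sub-sequences, and the stacking of per-sub-sequence class probabilities into a second-layer MLP are all assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): each first-layer MLP
# classifies one time-ordered sub-sequence of trajectory features, and a
# second-layer MLP fuses their outputs to recognise the whole action.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

N_ACTIONS = 3     # number of action classes (assumed)
N_SUBSEQS = 4     # time-ordered sub-sequences per trajectory (assumed)
FEAT_DIM = 10     # features extracted per sub-sequence (assumed)
N_SAMPLES = 300   # synthetic training trajectories

# Synthetic stand-in data: one feature vector per sub-sequence per trajectory.
X = rng.normal(size=(N_SAMPLES, N_SUBSEQS, FEAT_DIM))
y = rng.integers(0, N_ACTIONS, size=N_SAMPLES)
X[:, :, 0] += y[:, None]            # inject a class-dependent signal

# First layer: one MLP per sub-sequence, trained on that sub-sequence's features.
first_layer = []
for s in range(N_SUBSEQS):
    mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=s)
    mlp.fit(X[:, s, :], y)
    first_layer.append(mlp)

# Second layer: an MLP that fuses the per-sub-sequence class probabilities.
stacked = np.hstack([mlp.predict_proba(X[:, s, :])
                     for s, mlp in enumerate(first_layer)])
second_layer = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
second_layer.fit(stacked, y)

def recognise(trajectory):
    """Classify one trajectory given as (N_SUBSEQS, FEAT_DIM) sub-sequence features."""
    probs = np.hstack([mlp.predict_proba(trajectory[s:s + 1, :])
                       for s, mlp in enumerate(first_layer)])
    return int(second_layer.predict(probs)[0])

print(recognise(X[0]))  # predicted action class for the first synthetic trajectory
```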