Advanced integration of multimedia assistive technologies: A prospective outlook

Title: Advanced integration of multimedia assistive technologies: A prospective outlook
Publication Type: Conference Paper
Year of Publication: 2014
Authors: Liciotti D., Ferroni G., Frontoni E., Squartini S., Principi E., Bonfigli R., Zingaretti P., Piazza F.
Conference Name: MESA 2014 - 10th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, Conference Proceedings
Abstract

In recent years, several studies on population ageing in the most advanced countries have argued that the share of people older than 65 years is steadily increasing. To tackle this phenomenon, a significant effort has been devoted to the development of advanced technologies for supervising domestic environments and their inhabitants, in order to provide them with assistance in their own homes. In this context, the present paper aims to delineate a novel, highly integrated system for the advanced analysis of human behaviours. It is based on the fusion of the audio and vision frameworks developed at the Multimedia Assistive Technology Laboratory (MATeLab) of the Università Politecnica delle Marche, so as to operate in the ambient assisted living context by exploiting audio-visual domain features. The existing video framework exploits vertical RGB-D sensors for people tracking, interaction analysis and user activity detection in domestic scenarios. The depth information has been used to remove the effect of appearance variation and to evaluate users' activities inside the home and in front of the fixtures. In addition, group interactions are monitored and analysed. On the other hand, the audio framework recognises voice commands by continuously monitoring the acoustic home environment. Moreover, hands-free communication with a relative or a healthcare centre is automatically triggered when a distress call is detected. Echo and interference cancellation algorithms guarantee high-quality communication and reliable speech recognition, respectively. The system we intend to delineate thus exploits multi-domain information gathered from both the audio and video frameworks, and stores it in a remote cloud for instant processing and analysis of the scene. Related actions are consequently performed.

URL: http://www.scopus.com/inward/record.url?eid=2-s2.0-84911977133&partnerID=40&md5=3a8fad94ccf6268631dbf553e9360956
DOI: 10.1109/MESA.2014.6935629