Human activity recognition systems, operating on static images or video sequences, are increasingly present in everyday life. Many computer vision applications, such as human-computer interaction, virtual reality, public security, smart home monitoring, and autonomous robotics, to name a few, rely heavily on human activity recognition. Basic human activities, such as "walking" and "running", are relatively easy to recognize. Identifying more complex activities, on the other hand, remains a challenging task, one that can be addressed by retrieving contextual information from the scene, such as objects, events, or concepts. Indeed, a careful analysis of the scene can help recognize the human activities taking place. In this work, we address a holistic video understanding task to provide a complete semantic-level description of the scene. Our solution can bring significant improvements to human activity recognition tasks and, moreover, can equip a robotic, autonomous system with contextual knowledge of its environment. In particular, we show how this vision module can be integrated into a social robot to build a more natural and realistic context-based Human-Robot Interaction. We believe that social robots must be aware of their surrounding environment in order to react in a proper and socially acceptable way across different scenarios.
Publication details
2022, Lecture Notes in Computer Science, Pages 310-325 (volume: 13196)
Vision-Based Holistic Scene Understanding for Context-Aware Human-Robot Interaction (04b Conference paper in volume)
DE MAGISTRIS Giorgio, Caprari Riccardo, Castro Giulia, Russo Samuele, Iocchi Luca, Nardi Daniele, Napoli Christian
ISBN: 978-3-031-08420-1; 978-3-031-08421-8
Research group: Artificial Intelligence and Robotics