

Research Achievements

Here you can find information about the UNIST Artificial Intelligence Graduate School and its research achievements.

Learning 3D Skeletal Representation From Transformer for Action Recognition (IEEE Access), Prof. Seungryul Baek

Learning 3D Skeletal Representation From Transformer for Action Recognition


Junuk Cha, Muhammad Saqlain, Donguk Kim, Seungeun Lee, Seongyeong Lee, Seungryul Baek

Abstract:
Skeleton-based human action recognition has attracted significant interest due to its simplicity and good accuracy. Diverse end-to-end trainable frameworks built on skeletal representations have been proposed to better map those representations to human action classes. However, most skeleton-based approaches rely on skeletons that are heuristically pre-defined by commercial sensors, and it has not been confirmed that such sensor-captured skeletons are the best representation of the human body for action recognition; in general, a dedicated representation is required to achieve strong performance on a downstream task such as action recognition. In this paper, we address this issue by explicitly learning the skeletal representation in the context of the human action recognition task. We begin by reconstructing 3D meshes of human bodies from RGB videos. We then employ a transformer architecture to sample the most informative skeletal representation from the reconstructed 3D meshes, considering the intra- and inter-structural relationships of the 3D meshes and the sensor-captured skeletons. Experimental results on challenging human action recognition benchmarks (i.e., the SYSU and UTD-MHAD datasets) show that our learned skeletal representation outperforms sensor-captured skeletons for the action recognition task.
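
The pipeline described in the abstract (mesh reconstruction, transformer-based joint sampling, then action classification) can be pictured with a minimal sketch. The PyTorch code below is an illustrative assumption, not the authors' implementation: learnable joint queries cross-attend over mesh vertices to regress K joint positions, and a small temporal transformer classifies the resulting skeleton sequence. All module names, dimensions, and the decoder-style cross-attention are assumptions made for illustration.

```python
# Illustrative sketch only: learnable joint queries cross-attend over mesh
# vertices to "sample" a skeleton, which a temporal transformer classifies.
# Dimensions, module structure, and the classifier head are assumptions,
# not the authors' released code.
import torch
import torch.nn as nn


class SkeletonSampler(nn.Module):
    """Regresses K learned 3D joints from mesh vertices via cross-attention."""

    def __init__(self, num_joints=20, d_model=128, num_layers=2, num_heads=4):
        super().__init__()
        self.vertex_proj = nn.Linear(3, d_model)      # lift xyz vertices to tokens
        self.joint_queries = nn.Parameter(torch.randn(num_joints, d_model))
        layer = nn.TransformerDecoderLayer(d_model, num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.to_xyz = nn.Linear(d_model, 3)           # one 3D position per joint query

    def forward(self, vertices):                      # vertices: (B, V, 3)
        tokens = self.vertex_proj(vertices)
        queries = self.joint_queries.expand(vertices.size(0), -1, -1)
        joints = self.decoder(queries, tokens)        # joints attend to mesh vertices
        return self.to_xyz(joints)                    # (B, K, 3) learned skeleton


class ActionClassifier(nn.Module):
    """Classifies a sequence of sampled skeletons into action classes."""

    def __init__(self, num_joints=20, num_classes=12, d_model=128):
        super().__init__()
        self.frame_proj = nn.Linear(num_joints * 3, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, skeletons):                     # skeletons: (B, T, K, 3)
        b, t, k, _ = skeletons.shape
        x = self.frame_proj(skeletons.reshape(b, t, k * 3))
        x = self.temporal(x).mean(dim=1)              # average-pool over time
        return self.head(x)


if __name__ == "__main__":
    sampler, classifier = SkeletonSampler(), ActionClassifier()
    meshes = torch.randn(2, 16, 512, 3)               # (B, T, V, 3); V=512 stands in
                                                      # for a downsampled body mesh
    joints = torch.stack([sampler(meshes[:, t]) for t in range(16)], dim=1)
    logits = classifier(joints)                       # (B, num_classes)
    print(logits.shape)                               # torch.Size([2, 12])
```

Cross-attention with a fixed set of learned queries is one standard way to pool a variable-size point set (here, mesh vertices) into a fixed number of tokens; the paper's actual sampling mechanism and temporal modeling may differ from this sketch.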