3D CHARACTER EXPRESSION ANIMATION ACCORDING TO VIETNAMESE SENTENCE SEMANTICS
Abstract
Facial expressions are a primary means of communicating social information between people and a key form of nonverbal communication. In a virtual reality program or game, a compelling 3D character needs to act and express emotions clearly and coherently. Animation studies show that characters need to represent at least six basic emotions: happiness, sadness, fear, disgust, anger, and surprise. However, generating expression animations for virtual characters is time-consuming and requires considerable creativity. The main objective of this article is to generate expression animations, combined with lip synchronization, for 3D characters according to the semantics of Vietnamese sentences. Our method is based on the blendshape weights of a 3D face model: after emotion prediction, the input text is passed to the lip-sync and emotion generators to produce the 3D facial animation. In our experiments, 200 Vietnamese sentences were automatically classified into the six emotions. We then conducted a survey in which participants were asked to recognize the emotion expressed by the 3D virtual face for each input sentence. The results show that anger is the most recognizable emotion, while happiness and excitement are easily confused.