Perceptually Guided Processing of Style and Affect in Human Motion for Multimedia Applications


Creator: 

Etemad, Seyed Ali

Date: 

2014

Abstract: 

Computer graphics and animation have become broad and demanding areas of research. Animation of human motion is a major component, attracting many researchers due to its significance in movies, games, and virtual environments. We propose that processing features for style and affect should be carried out through perceptually guided techniques. In this dissertation, we employ this approach and develop a set of tools for extraction, synthesis, and analysis of stylistic/affective features.
Temporal alignment is a common issue in motion processing. Accordingly, we first propose a time warping technique for motion. Our method outperforms several existing techniques while offering customizability and low distortion.
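The abstract does not detail the proposed technique; as context, a minimal sketch of classical dynamic time warping between two motion feature sequences is shown below. The function name, frame distance, and sequence shapes are illustrative assumptions, not the customizable low-distortion method described in the dissertation.

```python
import numpy as np

def dtw_cost(a, b):
    """Classical dynamic time warping between two motion feature
    sequences a (n x d) and b (m x d). Illustrative baseline only."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # per-frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # stretch a
                                 cost[i, j - 1],       # stretch b
                                 cost[i - 1, j - 1])   # match frames
    return cost[n, m]
```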
Many motion processing techniques operate incrementally (joint by joint), and some process only selected joints. It is therefore important to verify whether such partial computational solutions lead to perceptually accurate results. Accordingly, we investigate and validate the notion of additivity in the perception of affect from motion.
A system for extracting style/affect features from motion is then introduced. Our method has several advantages over existing techniques, notably extracting the features as separate movement, posture, and time components, which are the perceptual and functional sources of stylistic/affective motion.
Towards synthesis of style/affect features, a perception-based approach is used. Radial basis functions (RBFs) are introduced as mathematical constructs for the features. An interface is developed through which animators use these functions to synthesize the desired features. By analyzing the collected data, expert-driven perceptual shortcuts for generating stylistic/affective themes are derived. The features also shed light on various aspects of the execution and perception of style/affect.
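The dissertation's exact RBF formulation is not given in the abstract; the sketch below illustrates the general construct, a one-dimensional feature curve built as a weighted sum of Gaussian basis functions. All centers, widths, and weights are hypothetical example values an animator might tune, not values from the dissertation.

```python
import numpy as np

def rbf_curve(t, centers, widths, weights):
    """Evaluate a 1-D feature curve as a weighted sum of Gaussian
    radial basis functions over normalized time t in [0, 1]."""
    t = np.asarray(t)[:, None]                               # (T, 1)
    phi = np.exp(-((t - centers) ** 2) / (2 * widths ** 2))  # (T, K)
    return phi @ weights                                     # (T,)

# Example: a posture offset that peaks early and dips late (made-up values).
t = np.linspace(0.0, 1.0, 100)
curve = rbf_curve(t,
                  centers=np.array([0.3, 0.7]),
                  widths=np.array([0.10, 0.15]),
                  weights=np.array([1.0, -0.5]))
```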
A unified system capable of both classification and translation of stylistic/affective features is subsequently developed using neural networks. The recognition module outperforms several other classifiers, and viewers validate the translation results as perceptually accurate.
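The network architecture is not specified in the abstract; as a generic illustration, a minimal feed-forward classifier over extracted style/affect feature vectors might look like the following. The feature dimensionality, class set, layer sizes, and synthetic data are all arbitrary assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical data: 200 motion clips, 30-dim style/affect feature
# vectors, 4 affect classes -- illustrative stand-ins only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 4, size=200)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))   # predicted affect classes for five clips
```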
Finally, an empirical paradigm is proposed to provide animators with guidelines for modifying motion features or adding them to motion data. The paradigm describes the characteristics required to make motion scenes perceptually valid with respect to motion features and context. The existing body of literature, sensible examples, case studies, and the opinions of experienced animators support this paradigm.

Subject: 

Computer Science
Artificial Intelligence

Language: 

English

Publisher: 

Carleton University

Thesis Degree Name: 

Doctor of Philosophy (Ph.D.)

Thesis Degree Level: 

Doctoral

Thesis Degree Discipline: 

Engineering, Electrical and Computer

Parent Collection: 

Theses and Dissertations

Items in CURVE are protected by copyright, with all rights reserved, unless otherwise indicated. They are made available with permission from the author(s).