Perceptually Guided Processing of Style and Affect in Human Motion for Multimedia Applications

Abstract
  • Computer graphics and animation have become broad and demanding areas of research. Animation of human motion is a major component, attracting many researchers because of its significance in movies, games, and virtual environments. We propose that features for style and affect should be processed through perceptually guided techniques. In this dissertation, we adopt this approach and develop a set of tools for the extraction, synthesis, and analysis of stylistic/affective features. Temporal alignment is a common issue in motion processing; accordingly, we first propose a time warping technique for motion. Our method outperforms several existing techniques and offers customizability and low distortion. Many motion processing techniques rely on incremental (joint-to-joint) processing, and some process only selected joints. It is therefore imperative to verify whether such piecewise computational solutions lead to perceptually accurate results. Accordingly, we investigate and validate the notion of additivity in the perception of affect from motion. A system for extracting style/affect features from motion is then introduced. Our method has several advantages over existing techniques, chiefly that it extracts the features as separate movement, posture, and time components, which are the perceptual and functional sources of stylistic/affective motion. For the synthesis of style/affect features, a perception-based approach is used: radial basis functions (RBFs) are introduced as mathematical constructs for the features, and an interface is developed with which animators apply these functions to synthesize the desired features. By analyzing the collected data, expert-driven perceptual shortcuts for generating stylistic/affective themes are derived. The features also shed light on various aspects of the execution and perception of style/affect.
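The temporal alignment problem mentioned above can be illustrated with a standard dynamic time warping (DTW) baseline. This is a textbook sketch for 1-D signals, not the customizable, low-distortion warping technique the dissertation proposes:

```python
# Minimal dynamic time warping (DTW) sketch for aligning two 1-D motion
# signals. A standard baseline for comparison, not the dissertation's method.

def dtw(a, b):
    """Return the minimal cumulative alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical signals align at zero cost; a time-shifted copy also aligns cheaply.
print(dtw([0, 1, 2, 1, 0], [0, 1, 2, 1, 0]))     # → 0.0
print(dtw([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0]))  # → 0.0
```

Techniques of this family differ mainly in the local distance used and in the constraints placed on the warping path, which is where customizability and distortion control come in.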
A unified system capable of both classification and translation of stylistic/affective features is subsequently developed using neural networks. The recognition module outperforms several other classifiers, and viewers validate the translation results as perceptually accurate. Finally, an empirical paradigm is proposed to give animators a set of guidelines for modifying motion features or adding them to motion data. The paradigm describes the characteristics required to make motion scenes perceptually valid with respect to motion features and context, and it is supported by the existing literature, illustrative examples, case studies, and the opinions of experienced animators.
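To make the RBF construct concrete, the sketch below evaluates a stylistic feature curve as a weighted sum of Gaussian radial basis functions over normalized time. The centers, widths, and weights are illustrative values, not parameters from the dissertation's interface:

```python
import math

# Sketch of a radial basis function (RBF) curve as a construct for a
# stylistic/affective feature over time. All numeric values are illustrative.

def rbf_curve(t, centers, widths, weights):
    """Evaluate a weighted sum of Gaussian RBFs at normalized time t."""
    return sum(w * math.exp(-((t - c) / s) ** 2)
               for c, s, w in zip(centers, widths, weights))

# A single bump of stylistic emphasis peaking mid-motion (t in [0, 1]):
curve = [rbf_curve(t / 10, [0.5], [0.2], [1.0]) for t in range(11)]
print(max(curve))  # peaks at t = 0.5 → 1.0
```

An animator-facing interface of the kind described would expose the centers (when a feature acts), widths (how long it acts), and weights (how strongly) as editable controls.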

Rights Notes
  • Copyright © 2014 the author(s). Theses may be used for non-commercial research, educational, or related academic purposes only. Such uses include personal study, research, scholarship, and teaching. Theses may only be shared by linking to Carleton University Institutional Repository and no part may be used without proper attribution to the author. No part may be used for commercial purposes directly or indirectly via a for-profit platform; no adaptation or derivative works are permitted without consent from the copyright owner.

Date Created
  • 2014
