Learning State-Based Behavior Using Deep Neural Networks
- Abstract
Imitation learning is a supervised learning problem that involves training a model to perform a task in a given environment from expert demonstrations. In this thesis, we propose five metrics to evaluate the performance of imitation learning agents. We compare state-of-the-art imitation learning models to deep neural networks at imitating state-based and reactive behavior. To compare the imitation learning techniques, we use two partially observable domains: the continuous RoboCup domain and the discrete Vacuum Cleaner domain. We show how our proposed metrics provide more qualitative information than state-of-the-art metrics about the performance of imitation learners when imitating state-based behavior. In addition, we show how our testing methodology produces results that resemble the eye test, which current testing methodologies fail to provide. We also show how Long Short-Term Memory (LSTM) networks outperform state-of-the-art models at imitating state-based behavior in the RoboCup soccer domain.
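The reduction of imitation learning to supervised learning described above can be sketched in a few lines. The sketch below uses a toy one-dimensional domain and a 1-nearest-neighbour lookup as a stand-in for a trained network; the domain, data, and function names are illustrative assumptions, not the thesis' actual Vacuum Cleaner or RoboCup setups.

```python
import numpy as np

# Hypothetical expert demonstrations: (observation, action) pairs
# from a toy 1-D domain (illustrative only, not from the thesis).
expert_obs = np.array([[0.0], [0.2], [0.5], [0.8], [1.0]])
expert_act = np.array([0, 0, 1, 1, 1])  # 0 = move left, 1 = move right

def cloned_policy(obs):
    """Behavioral cloning reduced to supervised learning: a
    1-nearest-neighbour lookup stands in for a trained network."""
    idx = np.argmin(np.abs(expert_obs[:, 0] - obs))
    return int(expert_act[idx])

# The cloned policy reproduces the expert's reactive mapping.
print(cloned_policy(0.1))  # nearest expert state acts with action 0
print(cloned_policy(0.9))  # nearest expert state acts with action 1
```

Note that a policy conditioned only on the current observation, as above, can at best imitate reactive behavior; imitating state-based behavior in a partially observable domain requires memory over past observations, which is why the thesis turns to recurrent models such as LSTMs.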
- Rights Notes
Copyright © 2021 the author(s). Theses may be used for non-commercial research, educational, or related academic purposes only. Such uses include personal study, research, scholarship, and teaching. Theses may only be shared by linking to Carleton University Institutional Repository and no part may be used without proper attribution to the author. No part may be used for commercial purposes directly or indirectly via a for-profit platform; no adaptation or derivative works are permitted without consent from the copyright owner.
- Date Created: 2021
Relations
- In Collection:

Items
| Title | Date Uploaded | Visibility |
|---|---|---|
| zalat-learningstatebasedbehaviorusingdeepneural.pdf | 2023-05-05 | Public |