Learning State-Based Behavior Using Deep Neural Networks

Creator: 

Zalat, Mohamed Elsayed Gaber Ali

Date: 

2021

Abstract: 

Imitation learning is a supervised learning problem in which a model is trained to perform a task in a given environment using demonstrations from an expert. In this thesis, we propose five metrics to evaluate the performance of imitation learning agents. We compare state-of-the-art imitation learning models against deep neural networks at imitating state-based and reactive behavior. To compare the imitation learning techniques, we use two partially observable domains: the continuous RoboCup domain and the discrete Vacuum Cleaner domain. We show that our proposed metrics provide more qualitative information about the performance of imitation learners when imitating state-based behavior than state-of-the-art metrics do. In addition, we show that our testing methodology produces results that align with the eye test, which current testing methodologies fail to provide. We also show that Long Short-Term Memory (LSTM) networks outperform state-of-the-art models at imitating state-based behavior in the RoboCup soccer domain.
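To make the framing of imitation learning as supervised learning concrete, the following is a minimal sketch of behavioral cloning with an LSTM policy, assuming expert demonstrations are available as paired observation and action sequences. The network, dimensions, and synthetic data are illustrative assumptions and are not taken from the thesis.

```python
import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    """Recurrent policy whose hidden state lets it condition on observation
    history, which is what allows imitation of state-based behavior under
    partial observability."""
    def __init__(self, obs_dim, hidden_dim, n_actions):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim) -> action logits per time step
        out, _ = self.lstm(obs_seq)
        return self.head(out)

# Hypothetical dimensions, chosen only for illustration.
obs_dim, hidden_dim, n_actions = 10, 64, 4
policy = LSTMPolicy(obs_dim, hidden_dim, n_actions)
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for expert demonstrations: 32 trajectories of length 50.
expert_obs = torch.randn(32, 50, obs_dim)
expert_actions = torch.randint(0, n_actions, (32, 50))

for epoch in range(10):
    logits = policy(expert_obs)                    # (32, 50, n_actions)
    loss = loss_fn(logits.reshape(-1, n_actions),  # supervised imitation loss
                   expert_actions.reshape(-1))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

A purely feed-forward (reactive) policy would drop the recurrent layer and map each observation to an action independently; the recurrent state is what distinguishes the state-based imitators discussed above.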

Subject: 

Information Science
Education - Technology

Language: 

English

Publisher: 

Carleton University

Thesis Degree Name: 

Master of Applied Science (M.App.Sc.)

Thesis Degree Level: 

Master's

Thesis Degree Discipline: 

Engineering, Electrical and Computer

Parent Collection: 

Theses and Dissertations
