Deep Reinforcement Learning for Quantitative Finance: Time Series Forecasting using Proximal Policy Optimization

Creator: 

Chiumera, David Joseph

Date: 

2022

Abstract: 

This work focuses on price prediction and concurrent strategy building. The chosen modelling approach is deep reinforcement learning, specifically of the actor-critic class: the proximal policy optimization (PPO) architecture is applied individually to each stock's market history in order to address the price prediction problem. A custom RL environment was built to run the proposed experimental sequence and to determine which parameter values should be used for the learning rate, discount factor, feature space, action space, and look-back length. These values were subsequently used in experiments on different datasets, exploring the portability of the model, the effect of transfer learning, and the portability of the parameter configuration. The results show that our experimental sequence can be applied effectively to the price prediction problem and, in some instances, outperforms a practical buy-and-hold (B&H) strategy.
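To make the setup concrete, the sketch below shows what a minimal custom RL environment of the kind described in the abstract might look like: a gym-style interface over a single stock's price history, with a look-back window of returns as the observation, a discrete action space (short/flat/long), and the position-weighted next return as the reward. This is an illustrative assumption, not the thesis's actual implementation; the class and parameter names are hypothetical, and a PPO agent (e.g. from a standard RL library) would be trained against this interface.

```python
import numpy as np

class PriceEnv:
    """Minimal gym-style environment for single-asset price prediction.

    Observation: the last `lookback` relative price changes (returns).
    Actions: 0 = short, 1 = flat, 2 = long.
    Reward: position (-1, 0, +1) times the next period's return.
    """

    def __init__(self, prices, lookback=10):
        self.prices = np.asarray(prices, dtype=float)
        self.lookback = lookback
        self.t = lookback  # current time index

    def reset(self):
        # Restart at the first step with a full look-back window.
        self.t = self.lookback
        return self._obs()

    def _obs(self):
        # Returns over the look-back window ending at the current step.
        window = self.prices[self.t - self.lookback : self.t + 1]
        return np.diff(window) / window[:-1]

    def step(self, action):
        position = action - 1  # map {0, 1, 2} -> {-1, 0, +1}
        ret = (self.prices[self.t + 1] - self.prices[self.t]) / self.prices[self.t]
        reward = position * ret
        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self._obs(), reward, done, {}
```

Under this interface, the hyperparameters the abstract mentions map directly onto the experiment: look-back length and feature space shape the observation, the action space is the discrete position set, and learning rate and discount factor are passed to the PPO learner itself.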

Subject: 

Artificial Intelligence
Finance

Language: 

English

Publisher: 

Carleton University

Thesis Degree Name: 

Master of Information Technology (M.I.T.)

Thesis Degree Level: 

Master's

Thesis Degree Discipline: 

Digital Media

Parent Collection: 

Theses and Dissertations

Items in CURVE are protected by copyright, with all rights reserved, unless otherwise indicated. They are made available with permission from the author(s).