Decentralized Cache-aided Offloading in Edge Cloud Collaborative Environment using Deep Q Reinforcement Learning

Abstract
  • Many emerging applications, such as augmented reality and facial recognition, require heavy computation, and the processed results must be available to the user within milliseconds. Edge computing combined with cloud computing can address this challenge by distributing the load (offloading) across different connected computing resources. This thesis introduces a novel adaptive offloading framework using online Deep Q reinforcement learning. The proposed framework accounts for strict latency constraints, a large state space, rapidly changing user mobility, heterogeneous resources, and stochastic task arrival rates. It also highlights the importance of caching and introduces a novel concept called "container caching," which caches the dependencies of popular applications. Offloading decisions are therefore made to minimize energy consumption, latency, and caching costs. Simulation results and comparisons with existing benchmark algorithms show strong large-scale performance in terms of energy consumption, network traffic, task failures, and remaining power, demonstrating the feasibility of the proposed approach.
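
  The abstract gives no implementation details, so the sketch below is only a rough illustration of how an online Deep Q-learning offloading agent of this general shape might be structured in Python with PyTorch. The state features, action set, network size, and reward weights are hypothetical assumptions for illustration, not taken from the thesis.

    import random
    from collections import deque

    import torch
    import torch.nn as nn
    import torch.optim as optim

    # Hypothetical state (not from the thesis): [task size, edge queue length,
    # user speed, battery level, 1 if the app's container is cached at the edge else 0]
    STATE_DIM = 5
    # Hypothetical actions: 0 = run locally, 1 = offload to edge, 2 = offload to cloud
    N_ACTIONS = 3


    class QNet(nn.Module):
        """Small MLP approximating Q(s, a) for each offloading action."""

        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, N_ACTIONS),
            )

        def forward(self, x):
            return self.net(x)


    def reward(energy, latency, cache_cost, w=(1.0, 1.0, 0.5)):
        # Negative weighted cost: the agent is rewarded for jointly
        # reducing energy, latency, and container-caching cost.
        return -(w[0] * energy + w[1] * latency + w[2] * cache_cost)


    qnet = QNet()
    optimizer = optim.Adam(qnet.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)  # stores (state, action, reward, next_state)
    gamma, epsilon = 0.95, 0.1


    def act(state):
        # Epsilon-greedy selection over offloading targets (online exploration).
        if random.random() < epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(qnet(torch.tensor(state)).argmax())


    def train_step(batch_size=32):
        # One gradient step on a minibatch sampled from the replay buffer.
        if len(replay) < batch_size:
            return
        batch = random.sample(replay, batch_size)
        s, a, r, s2 = (torch.tensor(x) for x in zip(*batch))
        q = qnet(s.float()).gather(1, a.view(-1, 1)).squeeze(1)
        with torch.no_grad():
            target = r.float() + gamma * qnet(s2.float()).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

  The epsilon-greedy policy with a replay buffer is a standard online DQN setup, and the negative weighted-cost reward mirrors the abstract's stated goal of jointly minimizing energy consumption, latency, and caching cost.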

Rights Notes
  • Copyright © 2022 the author(s). Theses may be used for non-commercial research, educational, or related academic purposes only. Such uses include personal study, research, scholarship, and teaching. Theses may only be shared by linking to Carleton University Institutional Repository and no part may be used without proper attribution to the author. No part may be used for commercial purposes directly or indirectly via a for-profit platform; no adaptation or derivative works are permitted without consent from the copyright owner.

Date Created
  • 2022
