Masoud Mokhtari
MSc (Amirkabir University of Technology, 2016)
Topic
Reinforcement Learning Based Resource Allocation in Fog Computing
Department of Computer Science
Date & location
Wednesday, October 15, 2025
10:00 A.M.
Engineering Computer Science Building
Room 468 and Virtual Defence
Reviewers
Supervisory Committee
Dr. Sudhakar Ganti, Department of Computer Science, University of Victoria (Supervisor)
Dr. Alex Thomo, Department of Computer Science, UVic (Member)
Dr. Lin Cai, Department of Electrical and Computer Engineering, UVic (Outside Member)
External Examiner
Dr. Muhammad Jaseemuddin, Department of Electrical, Computer & Biomedical Engineering, Toronto Metropolitan University
Chair of Oral Examination
Dr. Cheng Lin, Department of Civil Engineering, UVic
Abstract
The Internet of Things (IoT) has revolutionized connectivity by enabling seamless data exchange among diverse devices, fostering intelligent services and informed decision-making. However, the rapid surge in data traffic has exposed the limitations of traditional cloud-based solutions, particularly in meeting the quality-of-service (QoS) demands of latency-sensitive applications. Fog computing has emerged as a transformative paradigm, extending computational resources closer to end-users and bridging the gap between centralized cloud systems and edge devices. This approach addresses QoS challenges by providing critical services and resources at the network's edge.
Despite its advantages, fog computing faces resource limitations at the node level, necessitating efficient resource allocation to optimize performance and meet application-specific QoS requirements. Deciding whether to process data at the fog or cloud level involves navigating complex trade-offs dictated by resource availability, offloading criteria, and diverse application scenarios.
This thesis addresses these challenges through a comprehensive approach to resource allocation in fog and cloud computing environments. First, a reinforcement learning-based method is introduced to optimize resource allocation for a single fog node. By formulating the problem as a Markov Decision Process (MDP), the approach maximizes fog resource utilization while considering the number of resource blocks and delay tolerance for each request. Experimental evaluations demonstrate the superiority of the E-SARSA algorithm in terms of speed, utilization, and adaptability compared to Q-learning, SARSA, and a Fixed-Threshold approach.
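To give a concrete flavour of the approach described above, the sketch below shows an on-policy SARSA learner deciding, per request, whether a single fog node serves it or offloads it to the cloud. The state (free resource blocks, requested blocks), the two actions, the reward shaping, and the release dynamics are illustrative stand-ins, not the thesis's actual MDP formulation or its E-SARSA variant.

```python
# Minimal SARSA sketch for single-fog resource allocation (illustrative only).
import random

class SarsaAllocator:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # Q[(free_blocks, demand)] -> [Q(offload), Q(serve at fog)]

    def _q(self, state):
        return self.q.setdefault(state, [0.0, 0.0])

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randint(0, 1)
        q = self._q(state)
        return 1 if q[1] > q[0] else 0

    def update(self, s, a, r, s2, a2):
        # SARSA is on-policy: the target uses the action actually taken next.
        q = self._q(s)
        q[a] += self.alpha * (r + self.gamma * self._q(s2)[a2] - q[a])

random.seed(0)
agent = SarsaAllocator()
total = 8                      # resource blocks at the fog node (assumed)
s = (total, random.randint(1, 4))
a = agent.act(s)
for _ in range(2000):
    free, demand = s
    if a == 1:  # serve at the fog: succeeds only if enough blocks are free
        r = 1.0 if demand <= free else -1.0
        free = free - demand if demand <= free else free
    else:       # offload: always succeeds, but smaller reward (extra delay)
        r = 0.1
    free = min(total, free + 1)  # one block released per step as tasks finish
    s2 = (free, random.randint(1, 4))
    a2 = agent.act(s2)
    agent.update(s, a, r, s2, a2)
    s, a = s2, a2
```

The delay tolerance mentioned in the abstract would enter through the reward (or the state) rather than the toy shaping used here.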
The study then extends to multi-fog/cloud systems, introducing a two-phase process. In the first phase, the optimal fog node for resource allocation is identified. In the second phase, reinforcement learning is applied to determine whether tasks should be processed locally or offloaded to the cloud. This method ensures efficient resource utilization, with experimental results highlighting the superior performance of the Selection-2 approach compared to Genetic Algorithms (GA), Round Robin (RR), and Random strategies, particularly in speed, utilization, and load balancing.
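The two-phase flow can be sketched as follows: phase 1 scores candidate fog nodes and picks one; phase 2 consults a learned policy to choose local processing versus cloud offloading. The scoring rule, the threshold standing in for the learned policy, and the node fields are assumptions for illustration, not the thesis's Selection-2 method.

```python
# Illustrative two-phase placement: node selection, then offloading decision.
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    free_blocks: int
    load: float  # fraction of capacity in use, 0.0-1.0

def select_fog(nodes, demand):
    # Phase 1: among nodes that can fit the request, prefer the least loaded.
    feasible = [n for n in nodes if n.free_blocks >= demand]
    return min(feasible, key=lambda n: n.load) if feasible else None

def place(nodes, demand, delay_tolerance, policy=None):
    # Phase 2: a learned policy would weigh fog utilization against the
    # request's delay tolerance; a simple threshold stands in for it here.
    node = select_fog(nodes, demand)
    if node is None:
        return ("cloud", None)  # no fog node can host the request
    decide = policy or (lambda n, d, t: "fog" if t < 50 or n.load < 0.5 else "cloud")
    choice = decide(node, demand, delay_tolerance)
    return (choice, node.name if choice == "fog" else None)

nodes = [FogNode("fog-1", 2, 0.9), FogNode("fog-2", 6, 0.3)]
print(place(nodes, demand=4, delay_tolerance=20))   # tight deadline, fits fog-2
print(place(nodes, demand=8, delay_tolerance=200))  # too large for any fog node
```

Passing a trained agent via the `policy` argument is where the reinforcement-learning component of the abstract's second phase would plug in.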
Finally, the framework is further enhanced with a hybrid approach combining Genetic Algorithms and Reinforcement Learning (GA/RL) for dynamic resource allocation in integer-based multi-fog/cloud systems. This method applies the two-phase process, achieving significant improvements in speed, utilization, and load balancing compared to existing methods.
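As a rough illustration of the genetic half of such a hybrid, the sketch below evolves integer allocation vectors (blocks assigned per fog node) that an RL policy could then refine online. The capacities, demand, fitness function, and operators are invented for the example and do not reflect the thesis's GA/RL formulation.

```python
# Toy GA over integer allocation vectors (illustrative assumptions throughout).
import random

CAPACITY = [4, 6, 8]   # per-fog-node resource blocks (assumed)
DEMAND = 12            # total blocks requested this round (assumed)

def fitness(alloc):
    # Reward serving demand; lightly penalize a high peak load (load balancing).
    served = min(sum(alloc), DEMAND)
    peak = max(a / c for a, c in zip(alloc, CAPACITY))
    return served - peak

def mutate(alloc):
    # Nudge one node's allocation by +/-1, clamped to its capacity.
    i = random.randrange(len(alloc))
    child = list(alloc)
    child[i] = max(0, min(CAPACITY[i], child[i] + random.choice((-1, 1))))
    return child

def crossover(a, b):
    # Single-point crossover; children stay capacity-feasible per component.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

random.seed(1)
pop = [[random.randint(0, c) for c in CAPACITY] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]  # keep the best half, breed the rest from it
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(10)]
best = max(pop, key=fitness)
print(best, fitness(best))
```

In a hybrid scheme of this shape, the GA would supply a good initial allocation and the RL agent would adapt it as requests arrive; how the two are actually coupled is specific to the thesis.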
By dynamically allocating fog resources and optimizing offloading strategies, this work addresses the limitations of traditional cloud computing systems and ensures seamless performance for latency-sensitive IoT applications. The proposed approaches advance resource allocation strategies in fog and cloud computing, offering scalable, efficient, and adaptive solutions for future IoT ecosystems.