Abstract: To address the problems of poor optimization performance and insufficient generalization capability of offloading strategies in current edge computing, a computation offloading strategy based on deep reinforcement learning is proposed, in which an improved Rainbow Deep Q-Network (Rainbow DQN) is used to optimize task offloading decisions. A system model with an "edge-end" architecture is constructed, and mathematical models of the task computation cost and the optimization objective are established. The offloading strategy is described as a Markov Decision Process and optimized with the improved Rainbow DQN algorithm. Experimental results show that the proposed strategy outperforms DQN, Dueling DQN, and other baseline algorithms in optimizing task offloading latency and energy consumption, achieving an offloading cost that is 72.6% of that of the second-best offloading strategy while converging faster, thereby effectively improving the quality of task offloading decisions in edge computing environments.
Keywords: edge computing; deep reinforcement learning; computation offloading; Rainbow Deep Q-Network
CLC number:
Document code: A

Fund projects: Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (2022L439); Youth Scientific Research Fund of Shanxi Datong University (2021Q1)

Deep Reinforcement Learning-Based Computation Offloading Strategy in Edge Computing
ZHANG Shengtian, GUO Wenjun, YAO Jingshu
(School of Computer and Network Engineering, Shanxi Datong University, Datong 037009, China)
zstnewman@sxdtdx.edu.cn; sxdtdxgwj@163.com; 707797020@qq.com

Abstract: To address the issues of poor optimization performance and insufficient generalization capability in current edge computing offloading strategies, this study proposes a computation offloading strategy based on deep reinforcement learning, employing an improved Rainbow Deep Q-Network (Rainbow DQN) to optimize task offloading decisions. First, a system model with an "edge-end" architecture is constructed, and a mathematical model of the task computation cost and the optimization objective is established. The offloading strategy is then formulated as a Markov Decision Process (MDP) and optimized using the improved Rainbow DQN algorithm. Experimental results demonstrate that the proposed strategy outperforms comparative algorithms such as DQN and Dueling DQN in optimizing task offloading latency and energy consumption, achieving an offloading cost of only 72.6% of that of the second-best strategy while exhibiting faster convergence. This approach significantly enhances the quality of task offloading decisions in edge computing environments.
Keywords: edge computing; deep reinforcement learning; computation offloading; Rainbow Deep Q-Network