The mobile edge computing (MEC) network assisted by unmanned aerial vehicles (UAVs) shows great potential in emergency response, real-time monitoring, and other fields. However, the efficient operation of MEC networks is challenged by multiple competing optimization objectives, such as high energy consumption and high latency. Therefore, this paper introduces a Multi-Objective Evolution with Deep Deterministic Policy Gradient (MOE-DDPG) algorithm for UAV-assisted MEC network optimization. Firstly, an integrated multi-objective optimization model is established to ensure comprehensive MEC network performance by minimizing latency and energy consumption while maximizing the number of completed UAV tasks. Secondly, a bidirectional selection strategy for matching weight vectors with individuals is proposed to address the difficulty traditional DDPG algorithms have in balancing multiple objectives, thereby significantly enhancing population diversity. Finally, by fusing the MOE algorithm with the DDPG algorithm, a novel MOE-DDPG framework is proposed that optimizes the overall performance of the MEC network in real time. Experimental results show that the MOE-DDPG algorithm not only significantly improves the distribution and convergence of the Pareto solution set but also reduces energy consumption and latency while increasing the number of completed tasks.
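To make the abstract's fusion of evolutionary multi-objective search and DDPG more concrete, the sketch below shows one plausible structure: a population of policies, each paired with a weight vector over the three objectives (latency, energy, negated completed tasks), scalarized with a weighted Chebyshev operator and re-matched to weight vectors each generation. This is a minimal illustration under stated assumptions, not the paper's implementation: the per-individual DDPG actor-critic update is abstracted into a placeholder `local_improve`, the MEC simulator is replaced by a toy `evaluate` function, and the bidirectional selection is simplified to a greedy mutual-preference assignment.

```python
"""Conceptual sketch (not the paper's code) of a MOE-DDPG-style loop:
weight vectors scalarize the multi-objective reward, and individuals are
re-matched to weight vectors each generation before being improved."""
import numpy as np

rng = np.random.default_rng(0)

N_OBJ = 3   # latency, energy, negated completed tasks (all minimized)
POP = 8     # population size = number of weight vectors
DIM = 4     # dimensionality of each (toy) policy parameter vector


def evaluate(theta: np.ndarray) -> np.ndarray:
    """Toy stand-in for one episode of the UAV-MEC environment:
    maps policy parameters to a 3-vector of objective values."""
    latency = np.sum((theta - 1.0) ** 2)
    energy = np.sum((theta + 1.0) ** 2)
    neg_tasks = -np.sum(np.tanh(theta))  # more completed tasks = smaller value
    return np.array([latency, energy, neg_tasks])


def scalarize(objs: np.ndarray, w: np.ndarray, ideal: np.ndarray) -> float:
    """Weighted Chebyshev scalarization, a common MOEA/D-style choice
    (an assumption here, not necessarily the paper's exact operator)."""
    return float(np.max(w * np.abs(objs - ideal)))


def local_improve(theta: np.ndarray, w: np.ndarray, ideal: np.ndarray) -> np.ndarray:
    """Placeholder for the per-individual DDPG update: here a simple
    random-perturbation hill climb on the scalarized objective."""
    cand = theta + 0.1 * rng.standard_normal(DIM)
    better = scalarize(evaluate(cand), w, ideal) < scalarize(evaluate(theta), w, ideal)
    return cand if better else theta


# One weight vector per individual, spread over the objective simplex.
weights = rng.dirichlet(np.ones(N_OBJ), size=POP)
population = rng.standard_normal((POP, DIM))

for gen in range(50):
    objs = np.array([evaluate(t) for t in population])
    ideal = objs.min(axis=0)  # current ideal point

    # Cost of assigning individual j to weight vector i.
    cost = np.array([[scalarize(objs[j], weights[i], ideal)
                      for j in range(POP)] for i in range(POP)])

    # Simplified bidirectional matching: repeatedly pick the globally best
    # still-unmatched (weight vector, individual) pair.
    assigned = np.full(POP, -1)
    taken: set[int] = set()
    for _ in range(POP):
        masked = cost.copy()
        masked[assigned != -1, :] = np.inf
        masked[:, list(taken)] = np.inf
        i, j = np.unravel_index(np.argmin(masked), masked.shape)
        assigned[i] = j
        taken.add(j)

    # Each weight vector trains "its" individual (DDPG step abstracted away).
    population = np.array([local_improve(population[assigned[i]], weights[i], ideal)
                           for i in range(POP)])

final_objs = np.array([evaluate(t) for t in population])
print("final objective vectors (latency, energy, -tasks):")
print(np.round(final_objs, 3))
```

Running the sketch prints one objective vector per weight vector, i.e. a rough approximation of a Pareto front under the toy objectives; in the paper's setting the evaluation would instead come from the UAV-assisted MEC simulation and the improvement step from DDPG training.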