[1] ZENG Y, ZHANG R, LIM T J. Wireless communications with unmanned aerial vehicles: Opportunities and challenges[J]. IEEE Communications Magazine, 2016, 54(5): 36-42.
[2] LI B, FEI Z S, ZHANG Y. UAV communications for 5G and beyond: Recent advances and future trends[J]. IEEE Internet of Things Journal, 2019, 6(2): 2241-2263.
[3] WU Q Q, XU J, ZENG Y, et al. A comprehensive overview on 5G-and-beyond networks with UAVs: From communications to sensing and intelligence[J]. IEEE Journal on Selected Areas in Communications, 2021, 39(10): 2912-2945.
[4] ZANELLA A, BUI N, CASTELLANI A, et al. Internet of things for smart cities[J]. IEEE Internet of Things Journal, 2014, 1(1): 22-32.
[5] LI J X, ZHAO H T, WANG H J, et al. Joint optimization on trajectory, altitude, velocity, and link scheduling for minimum mission time in UAV-aided data collection[J]. IEEE Internet of Things Journal, 2020, 7(2): 1464-1475.
[6] LI X, TAN J W, LIU A F, et al. A novel UAV-enabled data collection scheme for intelligent transportation system through UAV speed control[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(4): 2100-2110.
[7] HU J, TIAN J X, ZOU S M, et al. A hybrid RSS/TOA/INS UAV localization algorithm based on adaptive Kalman filter[J]. Unmanned Systems Technology, 2022, 5(2): 62-70 (in Chinese).
[8] SUNG Y, TOKEKAR P. GM-PHD filter for searching and tracking an unknown number of targets with a mobile sensor with limited FOV[J]. IEEE Transactions on Automation Science and Engineering, 2022, 19(3): 2122-2134.
[9] YANG X, DING M Y, ZHOU C P. Fast marine route planning for UAV using improved sparse A* algorithm[C]∥2010 Fourth International Conference on Genetic and Evolutionary Computing. Piscataway: IEEE Press, 2010: 190-193.
[10] KALA R, WARWICK K. Planning of multiple autonomous vehicles using RRT[C]∥2011 IEEE 10th International Conference on Cybernetic Intelligent Systems (CIS). Piscataway: IEEE Press, 2011: 20-25.
[11] DORIGO M, MANIEZZO V, COLORNI A. Ant system: Optimization by a colony of cooperating agents[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 1996, 26(1): 29-41.
[12] ZHOU B, GUO Y, LI N, et al. Path planning of UAV using guided enhancement Q-learning algorithm[J]. Acta Aeronautica et Astronautica Sinica, 2021, 42(9): 325109 (in Chinese).
[13] MOON J, PAPAIOANNOU S, LAOUDIAS C, et al. Deep reinforcement learning multi-UAV trajectory control for target tracking[J]. IEEE Internet of Things Journal, 2021, 8(20): 15441-15455.
[14] WANG Y, GAO Z, ZHANG J, et al. Trajectory design for UAV-based Internet of Things data collection: A deep reinforcement learning approach[J]. IEEE Internet of Things Journal, 2022, 9(5): 3899-3912.
[15] SINGLA A, PADAKANDLA S, BHATNAGAR S. Memory-based deep reinforcement learning for obstacle avoidance in UAV with limited environment knowledge[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(1): 107-118.
[16] SHIRI H, SEO H, PARK J, et al. Attention-based communication and control for multi-UAV path planning[J]. IEEE Wireless Communications Letters, 2022, 11(7): 1409-1413.
[17] LI B H, HUANG Z L, CHEN T W, et al. MSN: Mapless short-range navigation based on time critical deep reinforcement learning[J]. IEEE Transactions on Intelligent Transportation Systems, 2023, 24(8): 8628-8637.
[18] XU S, ZHANG X Y, LI C G, et al. Deep reinforcement learning approach for joint trajectory design in multi-UAV IoT networks[J]. IEEE Transactions on Vehicular Technology, 2022, 71(3): 3389-3394.
[19] TANG X M, CHAI Y, LIU Q. A 2D UAV path planning method based on reinforcement learning in the presence of dense obstacles and kinematic constraints[C]∥2022 IEEE 11th Data Driven Control and Learning Systems Conference (DDCLS). Piscataway: IEEE Press, 2022: 306-311.
[20] KHAMIDEHI B, SOUSA E S. Reinforcement-learning-aided safe planning for aerial robots to collect data in dynamic environments[J]. IEEE Internet of Things Journal, 2022, 9(15): 13901-13912.
[21] AL-HOURANI A, KANDEEPAN S, LARDNER S. Optimal LAP altitude for maximum coverage[J]. IEEE Wireless Communications Letters, 2014, 3(6): 569-572.
[22] YAO J J, ANSARI N. QoS-aware power control in Internet of drones for data collection service[J]. IEEE Transactions on Vehicular Technology, 2019, 68(7): 6649-6656.
[23] ZHU B T, BEDEER E, NGUYEN H H, et al. UAV trajectory planning for AoI-minimal data collection in UAV-aided IoT networks by transformer[J]. IEEE Transactions on Wireless Communications, 2023, 22(2): 1343-1358.
[24] ZENG Y, XU J, ZHANG R. Energy minimization for wireless communication with rotary-wing UAV[J]. IEEE Transactions on Wireless Communications, 2019, 18(4): 2329-2345.
[25] TENG T H, TAN A H, ZURADA J M. Self-organizing neural networks integrating domain knowledge and reinforcement learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(5): 889-902.
[26] LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous control with deep reinforcement learning[DB/OL]. arXiv preprint: 1509.02971, 2015.
[27] WU J, WANG R, LI R Y, et al. Multi-critic DDPG method and double experience replay[C]∥2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). Piscataway: IEEE Press, 2018: 165-171.
[28] PARDALOS P M. Convex optimization theory[J]. Optimization Methods & Software, 2010, 25(3): 487.