Deep Reinforcement Learning-Based Optimal Deployment Strategy for UAV-Assisted Wireless Communication
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used to improve wireless communication networks, especially in dynamic and complex environments. This research presents a novel UAV deployment optimization framework based on deep reinforcement learning (DRL), specifically a deep Q-network (DQN), to enhance user coverage and power efficiency while adapting dynamically to environmental conditions. In contrast to traditional methods such as K-means clustering, the proposed approach uses an adaptive learning mechanism and a multi-metric reward function to optimize UAV placement in real time as a function of altitude and noise variance. Simulation results show that the DRL-based method achieves reward values of up to 11.2 at a 300 m altitude with small noise variance, compared with a maximum of 9.4 for conventional techniques under similar scenarios. Furthermore, power efficiency improved by 18% and energy consumption decreased by 15% relative to static optimization methods. User coverage increased by 12% on average, corroborating the model’s effectiveness in handling unpredictable environments. These results confirm the superiority of DRL over traditional UAV deployment techniques, making it a viable solution for future autonomous aerial communication networks. This work contributes to enhancing UAV adaptability in real-world applications, providing a more efficient and intelligent approach to wireless network optimization.
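The multi-metric reward mentioned in the abstract can be illustrated with a minimal sketch. The weights (`w_cov`, `w_pow`) and the specific metric definitions below are illustrative assumptions, not the paper's exact formulation:

```python
def multi_metric_reward(coverage_ratio, power_used, power_budget,
                        w_cov=1.0, w_pow=0.5):
    """Hypothetical multi-metric reward for UAV placement:
    rewards high user coverage and penalizes power consumption.
    Weights and metric definitions are illustrative assumptions."""
    # Normalized power efficiency: 1.0 when no power is used,
    # 0.0 when the full budget (or more) is consumed.
    power_efficiency = 1.0 - min(power_used / power_budget, 1.0)
    return w_cov * coverage_ratio + w_pow * power_efficiency

# Example: 90% user coverage while consuming 60% of the power budget
r = multi_metric_reward(0.9, 60.0, 100.0)  # -> 0.9 + 0.5 * 0.4 = 1.1
```

A DQN agent would receive such a scalar reward after each placement action (e.g., a discrete altitude or position adjustment), allowing it to learn placements that trade off coverage against energy use as conditions change.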
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.