Deep Reinforcement Learning-Based Optimal Deployment Strategy for UAV-Assisted Wireless Communication


Sara A. Owaid
https://orcid.org/0009-0006-5188-2033
Abbas H. Miry
https://orcid.org/0000-0002-7456-287X
Tariq M. Salman

Abstract

Unmanned aerial vehicles (UAVs) are increasingly used to improve wireless communication networks, especially in dynamic and complicated environments. This research presents a novel UAV deployment optimization framework utilizing deep reinforcement learning (DRL), specifically a deep Q-network (DQN), to enhance user coverage and power efficiency while dynamically adapting to environmental conditions. In contrast to traditional methods such as K-means clustering, the proposed approach uses an adaptive learning mechanism and a multi-metric reward function to optimize UAV placement in real time depending on altitude and noise variance. Simulation outcomes show that the DRL-based method achieves reward values of up to 11.2 at a 300 m altitude with low noise variance, compared with a maximum of 9.4 for conventional techniques under similar scenarios. Furthermore, power efficiency improved by 18% and energy consumption decreased by 15% relative to static optimization methods. User coverage increased by 12% on average, corroborating the model's effectiveness in unpredictable environments. These results confirm the superiority of DRL over traditional UAV deployment techniques, making it a viable solution for future autonomous aerial communication networks. This work contributes to enhancing UAV adaptability in real-world applications, providing a more efficient and intelligent approach to wireless network optimization.
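The abstract describes a multi-metric reward function that balances user coverage against power efficiency. A minimal sketch of how such a reward might be composed is shown below; the weights, the specific metrics, and the function signature are illustrative assumptions, not the paper's actual formulation.

```python
def multi_metric_reward(covered_users, total_users, power_used, power_budget,
                        w_cov=0.7, w_pow=0.3):
    """Hypothetical multi-metric reward for UAV placement.

    Combines the user-coverage ratio with a power-efficiency term
    (fraction of the power budget left unused) as a weighted sum.
    The weights w_cov and w_pow are illustrative, not from the paper.
    """
    coverage = covered_users / total_users          # fraction of users served
    power_eff = 1.0 - min(power_used / power_budget, 1.0)  # unused-power fraction
    return w_cov * coverage + w_pow * power_eff

# Example: 80 of 100 users covered while consuming 60 W of a 100 W budget
r = multi_metric_reward(80, 100, 60.0, 100.0)
```

In a DQN setting, a scalar reward of this form would be returned by the environment after each placement action, letting the agent trade coverage against energy use through the chosen weights.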

Article Details

How to Cite
A. Owaid, Sara, H. Miry, A., & M. Salman, T. (2026). Deep Reinforcement Learning-Based Optimal Deployment Strategy for UAV-Assisted Wireless Communication. Journal of Applied Research and Technology, 24(2), 262–273. https://doi.org/10.22201/icat.24486736e.2026.24.2.3029
Section
Articles