AI-Driven Self-Optimizing Framework for Real-Time Wireless Network Performance Enhancement
Keywords:
Deep learning; networks; 5G NR; spectral efficiency; neural networks; network optimization
Abstract
The rapid proliferation of heterogeneous wireless devices and increasingly dynamic and unpredictable spectrum usage patterns have exposed the limitations of traditional network management paradigms based on fixed configurations and reactive optimization. This paper introduces a self-optimizing AI-driven framework, termed the Adaptive Neural Radio Environment Manager (ANREM), designed to provide continuous real-time performance optimization in multi-tier wireless network architectures. In contrast to conventional approaches that optimize network parameters independently, ANREM performs joint optimization of spectral efficiency, end-to-end latency, energy consumption, and user quality of experience (QoE) through a unified multi-objective reward formulation.
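To make the unified multi-objective reward concrete, the sketch below shows one common way such a formulation can be scalarized: normalize each objective, give costs a negative sign, and take a weighted sum. The metric ranges, normalization constants, and weights here are illustrative assumptions, not ANREM's actual parameters.

```python
# Hypothetical scalarized multi-objective reward of the kind described in the
# abstract. All normalization constants and weights are assumed for illustration.
from dataclasses import dataclass


@dataclass
class NetworkMetrics:
    spectral_efficiency: float  # bits/s/Hz, higher is better
    latency_ms: float           # end-to-end latency, lower is better
    energy_j: float             # energy consumed this step, lower is better
    qoe: float                  # mean opinion score in [1, 5], higher is better


def multi_objective_reward(m: NetworkMetrics,
                           w=(0.4, 0.2, 0.2, 0.2)) -> float:
    """Combine normalized objectives into a single scalar reward.

    Costs (latency, energy) enter with a negative sign, so the agent
    is rewarded for reducing them rather than maximizing them.
    """
    se_term = m.spectral_efficiency / 10.0  # normalize to roughly [0, 1]
    latency_term = -m.latency_ms / 100.0    # penalize latency
    energy_term = -m.energy_j / 50.0        # penalize energy use
    qoe_term = (m.qoe - 1.0) / 4.0          # map MOS [1, 5] onto [0, 1]
    terms = (se_term, latency_term, energy_term, qoe_term)
    return sum(wi * ti for wi, ti in zip(w, terms))
```

In practice the weight vector encodes operator policy (e.g. an energy-constrained deployment would raise the energy weight), which is what allows a single DRL agent to trade these objectives off jointly instead of tuning them in isolation.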
ANREM combines a multi-level deep reinforcement learning (DRL) engine operating on three temporal scales (millisecond-scale radio resource management, second-scale handover coordination, and minute-scale load balancing), a graph neural network (GNN) module for topology-aware interference estimation, and a federated learning coordination layer that enables privacy-preserving model updates across distributed base stations. Experiments on a 5G Non-Standalone (NSA) testbed with 48 gNodeBs and 1,200 user equipment nodes, driven by a stochastic urban mobility model derived from real city-scale traces, show that ANREM improves aggregate throughput by 34.7 percent, reduces the handover failure rate by 41.2 percent, and lowers base station energy expenditure. Convergence remains stable under non-stationary traffic without catastrophic forgetting, owing to an elastic weight consolidation mechanism integrated into the DRL training loop. These findings make ANREM a feasible and deployable candidate for next-generation self-organizing network (SON) design and open a path toward fully autonomous management of wireless infrastructure.
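The elastic weight consolidation (EWC) mechanism mentioned above can be sketched as a quadratic penalty added to the DRL loss: parameters that were important under a previously learned traffic regime are anchored near their old values, so adapting to new traffic does not erase them. The minimal pure-Python sketch below treats parameters as flat lists of floats; the regularization strength `lam` and the use of a diagonal Fisher approximation are assumptions, not details taken from the paper.

```python
# Illustrative EWC penalty of the kind the abstract attributes to ANREM's
# training loop. Parameter layout and lam are hypothetical; real DRL code
# would operate on framework tensors rather than Python lists.

def ewc_penalty(params, old_params, fisher, lam=0.4):
    """Quadratic penalty anchoring important weights near old values.

    params, old_params, fisher: dicts mapping a parameter name to a
    list of floats. `fisher` holds diagonal Fisher-information estimates:
    large values mark weights whose change would hurt old behavior.
    Returns (lam / 2) * sum_i F_i * (theta_i - theta_i_old)^2.
    """
    penalty = 0.0
    for name, theta in params.items():
        for t, t_old, f in zip(theta, old_params[name], fisher[name]):
            penalty += f * (t - t_old) ** 2
    return lam / 2.0 * penalty

# During training the agent would minimize:
#   total_loss = task_loss + ewc_penalty(params, old_params, fisher)
# so that fitting the current traffic regime trades off against
# preserving behavior learned on earlier regimes.
```

The key design point is that the penalty is per-weight: unimportant weights (small Fisher values) remain free to adapt to non-stationary traffic, which is why convergence stays stable without freezing the whole network.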