* For correspondence.

Journal of Environmental Protection and Ecology 26, No 3, 1137–1147 (2025)
Computer science
EDGE COMPUTING-DRIVEN RESOURCE ALLOCATION
FOR LATENCY-SENSITIVE 5G APPLICATIONS
SHILPA CHOUDHARYa*, M. V. KAVITHAb, CHALLAPALLI SUJANAc,
R. V. S. LALITHAc, WARISH PATELd, NATASHA MARTINe,
K. LINGARAJf, JIM MATHEW PHILIPg
aDepartment of Electronics and Communication Engineering, G. L. Bajaj Institute of Technology and Management, Uttar Pradesh 201 306, India
E-mail: shilpadchoudhary@gmail.com
bDepartment of Electronics and Communication Engineering, Cambridge Institute of Technology, Bengaluru, Karnataka 560 036, India
cDepartment of Computer Science and Engineering, Aditya University, Surampalem, Andhra Pradesh 533 437, India
dDepartment of Computer Science and Engineering, Parul Institute of Engineering and Technology, Gujarat 391 760, India
eFaculty of Law, Symbiosis Law School Pune, Symbiosis International (Deemed University), Pune, Maharashtra 411 014, India
fDepartment of Computer Science and Engineering, Rao Bahadur Y. Mahabaleswarappa Engineering College, Karnataka 583 104, India
gDepartment of Computer Science and Engineering, Sri Ramakrishna Institute of Technology, Coimbatore, Tamil Nadu, India
Abstract. In 5G networks, the exponential growth of latency-sensitive applications has increased the need for effective resource allocation. Traditional models often struggle to satisfy tight latency constraints because of the inherent delays imposed by centralised processing. To overcome these issues, this paper proposes a hybrid method for resource allocation in edge computing at the network's edge. To optimise resource allocation while ensuring low latency and high energy efficiency, the proposed method combines artificial intelligence (AI) with an optimisation model. The method consists of a multi-layered architecture that begins with data collection at edge devices, followed by pre-processing, and uses a Long Short-Term Memory (LSTM) model for feature extraction. Deep Reinforcement Learning (DRL) and the Special Forces Algorithm (SFA) perform real-time resource demand forecasting and task distribution analysis at the edge nodes, adaptively distributing resources according to network conditions and application requirements. Comprehensive simulation results show that the proposed method increases energy efficiency while lowering end-to-end latency, surpassing conventional approaches. The system also meets user demands without losing efficiency. This study elaborates the potential of edge computing to overcome the drawbacks of existing cloud-based architectures and provides a reliable solution for latency-sensitive 5G applications.
Keywords: 5G networks, artificial intelligence, long short-term memory, deep reinforcement learning, special forces algorithm, resource allocation.
AIMS AND BACKGROUND
Low-latency IoT applications, such as autonomous vehicles, augmented/virtual reality devices, and security applications, require substantial computation resources to make decisions, which leads to delay in a cloud infrastructure1. Nowadays, as the need for capacity continues to grow, entirely novel services are emerging to deliver services in a real-time responsive and scalable way2. A recent paradigm shift in support of 5G-and-beyond (5GB), Human-to-Machine/Robot (H2M/R), and the Tactile Internet has resulted in latency-sensitive applications being delivered over communication networks. These applications, together with the exponential growth in connected devices, have pushed for architectural and capacity changes in both wired and wireless networks3. With the rise of advanced internet technologies and smart machines, several emerging Internet of Things (IoT) applications have been deployed that offer significant benefits to humans. Mobile cloud computing technology can reduce the latency of IoT application execution by providing the necessary computation processing and data caching facilities to resource-limited smart devices4. The integration of mobile edge computing with ultra-dense networks is not only capable of handling traffic from a large number of smart devices but also delivers substantial processing capabilities to the users5.
To fully utilise the potential of an integrated infrastructure, a strong cloud network is needed in which communication and computing resources are regulated jointly, as the cloud and network domains have different features but limited capabilities6. A collaborative communication and computing resource allocation (CCRA) framework has been suggested7 for solving this issue in 5G networks. To overcome connection and queuing delays, that study prioritises the traffic and Virtual Network Functions (VNFs) when selecting paths. The framework satisfies QoS criteria while reducing cost, with its B&B-CCRA and WF-CCRA models addressing the problem optimally and near-optimally, respectively. Vital ultra-reliable low-latency communications (URLLC) services have been introduced8 for serving out-of-coverage areas by using unmanned aerial vehicles (UAVs). To enhance the sum-rate and reduce UAV transmit power while satisfying URLLC criteria, that work concentrates on resource optimisation, namely resource block (RB) and power allocation, integrated with a UAV deployment strategy. To enhance the scheduling and power strategies, a Gaussian Process Regression (GPR)-based online URLLC traffic model is utilised, and a minimisation model is used to tackle the resulting mixed-integer nonlinear programming (MINLP) problem. A deep neural network (DNN) model has been offered9 for energy-efficient deployment of AI applications on edge computing resources; to schedule and place services with low latency and reduced energy consumption, a heuristic technique is utilised. The simulations show that its performance is better than the baseline methods, packing services onto a few edge nodes while satisfying the latency constraints. In another study10, Multi-access Edge Computing (MEC) applications and models are utilised for emphasising the best possible application placement in 5G networks subject to a number of business and technological limitations such as security. It follows the MEC specification and offers data formats for virtualised MEC infrastructure utilising virtual machines. The OpenStack platform for MEC applications is presented, and the study increases the effectiveness and security of MEC deployments in 5G networks.
EXPERIMENTAL
DATA ACQUISITION AND PRE-PROCESSING
The data acquisition and pre-processing steps are essential for edge computing-driven frameworks to allocate resources accurately and efficiently. The raw data are gathered and converted into a format suitable for further processing and analysis. Data from various sources, such as real-time data generated by edge servers, mobile devices, or Internet of Things nodes, including sensor readings, user interactions, and application requests, are gathered. Data on device capacity, latency, and bandwidth utilisation are collected to optimise resource allocation. The acquired data Draw are expressed as a multi-dimensional matrix as follows:

Draw = {di,j,k | i ∈ {1, …, N}, j ∈ {1, …, M}, k ∈ {1, …, K}}, (1)

where i denotes the data points, j – the features, and k – the time intervals.
DATA PRE-PROCESSING
The raw data might contain missing values, noise, and inconsistencies, which are addressed in this step. Interpolation techniques are utilised to fill in missing values and eliminate outliers. The mean (or median) of the observed values is substituted for missing data:

d̄i,j = (1/K) ∑Kk=1 di,j,k, if di,j,k is missing, (2)

where d̄i,j replaces the missing entry.
The min-max scaling method is utilised to ensure uniformity, transforming the features to a standard scale. It is expressed as:

di,jnorm = (di,j – min (Dj))/(max (Dj) – min (Dj)), (3)

where min (Dj) and max (Dj) refer to the minimum and maximum values of feature j, respectively.
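The imputation in Eq. (2) and the min-max scaling in Eq. (3) can be sketched in Python (the language used for the simulations). The NaN encoding of missing values and the (N, M, K) array layout are assumptions, since the paper does not specify them:

```python
import numpy as np

def preprocess(D_raw):
    """Impute missing entries (Eq. 2) and min-max scale features (Eq. 3).

    D_raw: array of shape (N, M, K) -- data points x features x time
    intervals, with missing values encoded as np.nan (an assumption).
    """
    D = D_raw.astype(float).copy()
    N, M, K = D.shape
    # Eq. (2): replace a missing d_{i,j,k} with the mean of that
    # (data point, feature) series over the K time intervals.
    for i in range(N):
        for j in range(M):
            series = D[i, j, :]
            mask = np.isnan(series)
            if mask.any():
                series[mask] = np.nanmean(series)
    # Eq. (3): min-max scale each feature j over all points and times.
    for j in range(M):
        lo, hi = D[:, j, :].min(), D[:, j, :].max()
        if hi > lo:
            D[:, j, :] = (D[:, j, :] - lo) / (hi - lo)
    return D
```

After this step, every feature lies in [0, 1] and no NaN values remain, which is the precondition the LSTM stage relies on.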
FEATURE EXTRACTION USING LSTM MODEL
The feature extraction process is an essential component, where a Long Short-Term Memory (LSTM) model is utilised to examine the input data, determine workload trends, and extract the most pertinent features for precise resource allocation11. LSTMs are recurrent neural networks (RNNs) suited to dynamic workloads and resource allocation in edge computing, thanks to their ability to capture temporal relationships and sequential patterns. The input data Dt form a sequence of observations over time t:

Dt = {x1, x2, …, xT}, (4)

where xt refers to the data at time step t.

Analysis of the sequential patterns of Dt reveals temporal dependencies such as workload changes and resource utilisation trends. Three gates are utilised in the LSTM model to handle long-term dependencies while processing the input sequences12.
Forget gate (Ft). Decides which information should be removed from the cell state:

Ft = σ (WF [Ht–1, xt] + bF), (5)

where WF and bF denote the weights and biases of the forget gate, respectively, Ht–1 – the hidden state, and σ – the sigmoid activation function.
Input gate (It). Adds new information to the cell state:

It = σ (WI [Ht–1, xt] + bI), (6)
Čt = tanh (WC [Ht–1, xt] + bC), (7)

where Čt denotes the candidate values for the cell state.
Output gate (Ot). Produces the output according to the updated cell state:

Ot = σ (WO [Ht–1, xt] + bO), (8)
Ht = Ot tanh (Ct), (9)

where Ht refers to the updated hidden state and Ct – the updated cell state. The hidden states {H1, H2, …, HT} capture the temporal patterns of the data. These states are employed for extracting features that reveal workload trends and for forecasting resource requirements. The final hidden state HT maps the retrieved features to the future resource needs Rt:

Rt = Wr HT + br, (10)

where Wr and br refer to the weights and biases of the prediction layer, respectively. High-quality features capturing the temporal dependencies of the data are thus extracted at the edge nodes with the help of the LSTM model. These features ensure an accurate forecast of resource requirements, efficient resource allocation for 5G applications, and low latency.
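The gate equations (5)–(9) can be traced in a minimal NumPy sketch of a single LSTM layer; the weight matrices here are illustrative placeholders rather than trained parameters, and the layer sizes are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_features(X, W, b, hidden=8):
    """Run one LSTM layer over a sequence and return the final hidden
    state H_T, which Eq. (10) then maps to the demand forecast R_t.

    X: (T, d) sequence of observations x_1..x_T.
    W: dict of weight matrices W_F, W_I, W_C, W_O, each (hidden, hidden + d).
    b: dict of bias vectors b_F, b_I, b_C, b_O, each (hidden,).
    """
    H = np.zeros(hidden)          # hidden state H_{t-1}
    C = np.zeros(hidden)          # cell state C_{t-1}
    for x_t in X:
        z = np.concatenate([H, x_t])          # [H_{t-1}, x_t]
        F = sigmoid(W['F'] @ z + b['F'])      # Eq. (5): forget gate
        I = sigmoid(W['I'] @ z + b['I'])      # Eq. (6): input gate
        C_hat = np.tanh(W['C'] @ z + b['C'])  # Eq. (7): candidate state
        C = F * C + I * C_hat                 # cell-state update
        O = sigmoid(W['O'] @ z + b['O'])      # Eq. (8): output gate
        H = O * np.tanh(C)                    # Eq. (9): new hidden state
    return H
```

In practice the weights would be learnt by backpropagation through time; the sketch only shows how the three gates combine per time step.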
DYNAMIC RESOURCE ALLOCATION USING AI AND SFA
The proposed method uses a hybrid model that integrates the Special Forces Algorithm (SFA) and Deep Reinforcement Learning (DRL) for the dynamic resource allocation process. The integration of bio-inspired algorithms and AI enables effective and flexible resource allocation that satisfies the energy and latency demands of latency-sensitive 5G applications. The combined DRL and SFA method offers a strong system that can constantly learn and adapt to dynamic network conditions.
DEEP REINFORCEMENT LEARNING (DRL) FOR OPTIMAL POLICY LEARNING
Deep Reinforcement Learning (DRL) is a machine learning technique in which an agent learns the best decision-making strategies through interaction with its environment13. The agent, here an edge node, monitors the system's current state, such as workload needs, network conditions, and available resources, and allocates resources as efficiently as possible. The state St captures the environment, latency conditions, network traffic, and resource usage at time t. Computation offloading and bandwidth assignment are examples of resource allocation decisions forming the agent's actions At. The DRL agent optimises a reward function Rt defined by variables such as energy efficiency, throughput, and latency reduction.

The agent adheres to a policy π that relates states to actions. From the observed rewards, the DRL agent updates its policy. A value function Q(St, At) is employed for enhancing the policy; it approximates the long-term reward obtained from taking action At in state St. The Bellman equation is employed to iteratively improve the policy by modifying Q(St, At):

Q(St, At) = Rt + γ maxA′ Q(St+1, A′), (11)

where γ refers to the discount factor and A′ – the possible future actions. The DRL agent continuously discovers the course of action that maximises the cumulative reward and ensures effective resource allocation while satisfying the latency restrictions. High-dimensional states and actions are handled by a Deep Q-Network (DQN), which approximates the Q-values using deep neural networks.
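Eq. (11) can be illustrated with a tabular Q-learning backup. The paper uses a DQN over high-dimensional states, so the table over discretised states/actions and the learning rate alpha are simplifying assumptions for illustration only:

```python
import numpy as np

def q_update(Q, s, a, reward, s_next, gamma=0.9, alpha=0.1):
    """One Bellman backup in the spirit of Eq. (11).

    Q: array of shape (num_states, num_actions) over discretised
    states (e.g. load levels) and actions (e.g. offload targets).
    """
    # R_t + gamma * max_{A'} Q(S_{t+1}, A') -- the Bellman target
    target = reward + gamma * Q[s_next].max()
    # Move Q(S_t, A_t) a fraction alpha toward the target
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

A DQN replaces the table with a neural network trained to minimise the squared difference between Q(St, At) and the same Bellman target.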
SPECIAL FORCES ALGORITHM (SFA) FOR TASK ALLOCATION
The Special Forces Algorithm (SFA) mimics the decision-making process of military forces, who are renowned for their capacity to make choices in dynamic circumstances and adapt to high-pressure scenarios. The SFA strikes a balance between exploitation (concentrating on established strategies) and exploration (looking for new strategies) during resource allocation. Based on this equilibrium, the system adapts to dynamic circumstances while ensuring an effective allocation of resources.

Each candidate solution in the SFA's population represents a possible resource distribution. The first step of the process is to evaluate the solution's fitness, which assesses how well it utilises resources, conserves energy, and reduces latency. The fitness function F(S) can be represented as:

F(S) = α latency(S) + β energy(S) – γ throughput(S), (12)

where α, β, and γ denote the weighting factors for low latency, energy conservation, and improved throughput, respectively. In SFA, both exploration and exploitation serve as guiding principles for determining the optimal solutions. The exploration phase enables the algorithm to discover new methods by generating and assessing solutions at random. In the exploitation phase, crossover and mutation operations improve the pre-existing solutions. The probability of finding high-performing strategies is improved by combining components from various solutions to form new candidates. A parameter p governs the behaviour of the algorithm and determines the proportion of exploration to exploitation:

p = (Exploration rate)/(Total search iterations). (13)

This equilibrium ensures that the algorithm neither gets mired in sub-optimal solutions nor spends too much time on investigation. The combined DRL and SFA model suits both short-term and long-term dynamic network conditions and yields effective dynamic resource allocation. Real-time rewards are employed by the DRL component to continuously enhance its allocation policies, and the SFA ensures a task allocation method that adapts to provide an optimal resource distribution. The flexibility of the SFA in handling dynamic and unexpected circumstances and the capacity of DRL to develop intricate, long-term strategies are the advantages of the hybrid method. The DRL agent makes decisions based on the network conditions, while the SFA refines the resource allocation process by evaluating candidate allocations and ensuring optimal performance, leading to the best possible resource allocation for latency-sensitive applications in 5G networks.
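A minimal sketch of the fitness function in Eq. (12) and of one SFA-style iteration follows. The solution encoding (a dict of measured metrics) and the decaying exploration probability are assumptions, since the paper specifies only the exploration/exploitation ratio of Eq. (13):

```python
import random

def fitness(S, alpha=1.0, beta=1.0, gamma=1.0):
    """Eq. (12): weighted cost of an allocation S (lower is better).
    The field names are illustrative, not from the paper."""
    return alpha * S['latency'] + beta * S['energy'] - gamma * S['throughput']

def sfa_step(population, evaluate, iteration, total_iters, mutate, explore):
    """One SFA-style iteration: explore (random new solutions) with
    probability p, else exploit (perturb the current best).
    A simplified stand-in for the balance governed by Eq. (13)."""
    p = max(0.05, 1.0 - iteration / total_iters)  # exploration rate decays
    best = min(population, key=evaluate)
    new_pop = [best]                              # keep the elite solution
    while len(new_pop) < len(population):
        if random.random() < p:
            new_pop.append(explore())             # exploration phase
        else:
            new_pop.append(mutate(best))          # exploitation phase
    return new_pop
```

Because the elite solution is carried over, the best fitness in the population never worsens from one iteration to the next.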
FEEDBACK MECHANISM
The feedback mechanism constantly monitors performance metrics such as latency, energy efficiency, and resource use, which are significant to the proposed system. The system's flexibility is ensured by a real-time feedback loop that enables it to enhance overall performance. Performance data are collected at each time step and used to evaluate how the current resource allocation affects the system's effectiveness and the user experience. The SFA and DRL components are adjusted based on this input. The performance indicators form the reward signal that updates the Q-values, and the feedback adjusts the learnt policies of the DRL element. Feedback also assists the SFA in maintaining the balance between exploration and exploitation by upgrading the allocation techniques based on real-time performance results. Over time, these ongoing enhancements allow the system to adjust to changing circumstances and to make sound decisions about how to allocate resources. The feedback loop keeps the system flexible, so that it dynamically improves its resource allocation according to the current network conditions and application requirements14, enhancing the overall performance and efficiency of 5G applications.
EXECUTION AND MONITORING
In the execution and monitoring phase, the tasks are handled locally at the edge nodes, ensuring utilisation of the allocated resources and effectively managing the network and computing demands. This reduces latency and yields rapid reaction times by minimising the dependency on centralised cloud servers, which is significant for latency-sensitive applications in 5G networks.

Local processing enhances energy efficiency by offloading tasks near the source and minimising data transmission to remote cloud servers. The system continuously monitors key performance indicators, and the monitoring process helps to detect anomalies in resource usage, such as unexpected delays or overloads at the edge nodes, that compromise service quality. In addition, the system detects resource bottlenecks, where demands exceed the available capacity and cause delays or failures in processing tasks15. To preserve the necessary performance levels, the system applies adaptive measures to reallocate resources, adjust workloads, or offload tasks to other nodes when problems such as latency violations or resource bottlenecks are identified. With real-time monitoring, the system effectively satisfies strict latency needs while preserving optimal resource consumption in dynamic networks.
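The monitoring-and-reallocation step described above can be sketched as follows; the node fields and the offloading rule (shift excess load to the least-loaded node) are assumptions, as the paper does not give the concrete policy:

```python
def check_and_reallocate(nodes, latency_limit_ms):
    """Flag nodes whose demand exceeds capacity or whose latency violates
    the limit, and offload the excess load to the least-loaded node.

    nodes: list of dicts with 'demand', 'capacity', 'latency_ms'
    (illustrative field names).
    """
    # Least-loaded node (by relative load) receives offloaded work
    target = min(nodes, key=lambda n: n['demand'] / n['capacity'])
    for n in nodes:
        overloaded = n['demand'] > n['capacity']       # resource bottleneck
        too_slow = n['latency_ms'] > latency_limit_ms  # latency violation
        if (overloaded or too_slow) and n is not target:
            excess = max(n['demand'] - n['capacity'], 0)
            # If only the latency limit is violated, shed 20% of the load
            shift = excess if excess > 0 else n['demand'] * 0.2
            n['demand'] -= shift
            target['demand'] += shift
    return nodes
```

A real deployment would run this check on every monitoring interval and feed the outcome back into the DRL reward signal.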
SCALABILITY AND ADAPTATION
The scalability and adaptation mechanism effectively manages dynamic user demands by distributing resources across the edge nodes. To accommodate increased demand, the system scales with the growing count of users and application demands. This is accomplished by offloading tasks from lightly used edge nodes and directing more resources to those with greater loads. Dynamic scaling minimises resource congestion, so that each node continues to operate at its best even in dynamic network conditions.

To ensure a rapid response to network conditions and application needs, the SFA is essential: it constantly evaluates workload needs and network conditions by modelling the decision-making process of elite military units. The method achieves optimal resource allocation by striking a balance between exploration and exploitation. The SFA also rapidly changes the resource distribution to preserve optimal performance when network conditions change, such as a sudden increase in demand or bandwidth limits. This adaptive method enables the system to respond rapidly to real-time variations, ensuring effective resource allocation and continuously meeting the latency and energy efficiency requirements of applications.
RESULTS AND DISCUSSION
In a 5G scenario, the simulation setup of the proposed method contains several small devices inside a 10 × 10 km area with a base station covering 20 × 20 km. Resource management and allocation are simulated using Python. The performance of the proposed method was evaluated using latency, energy efficiency, and resource consumption, and detailed explanations are presented below.

Latency (ms) assesses the average time taken to complete tasks or data transfers. It is defined as:

Latency = (Time taken to complete tasks)/(Number of tasks processed). (14)

Energy efficiency (J/task) evaluates the energy utilised by the system to complete a task. It is computed as:

Energy efficiency = (Energy consumed (J))/(Number of tasks processed). (15)

Resource utilisation (%) evaluates the ratio of used to available resources during task processing. It is defined as:

Resource utilisation = ((Resources used)/(Total available resources)) × 100. (16)
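The three metrics in Eqs (14)–(16) reduce to simple ratios; the helper names and the example figures below are illustrative:

```python
def latency_ms(total_time_ms, tasks_processed):
    """Eq. (14): average time per task, in milliseconds."""
    return total_time_ms / tasks_processed

def energy_efficiency(energy_joules, tasks_processed):
    """Eq. (15): energy consumed per completed task (J/task)."""
    return energy_joules / tasks_processed

def resource_utilisation(resources_used, total_available):
    """Eq. (16): percentage of available resources in use."""
    return resources_used / total_available * 100.0
```

For example, 50 tasks completed in 2500 ms total gives 50 ms/task, matching the 50-device operating point reported for the proposed method.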
Figure 1 shows the comparative analysis of the existing and proposed methods with varying device densities. Latency increases for all methods as the count of devices increases. The proposed method has the lowest latency at each device density and continuously outperforms the existing methods. For instance, the proposed method achieves a latency of 50 ms at 50 devices, while R-JSRIN reaches 85 ms, WF-JSRIN 72 ms, and Gradient Boosting 60 ms. The trend continues as the count of devices increases. When the device density reaches 300, the proposed method has a latency of 180 ms, against 270 ms for R-JSRIN, 245 ms for WF-JSRIN, and 220 ms for Gradient Boosting. This result shows that the proposed method keeps latency low even as device demand increases.
Fig. 1. Latency comparison (latency, ms, versus device count; R-JSRIN, WF-JSRIN, Gradient Boosting, proposed method)
Figure 2 compares the energy efficiency of the methods in J per task. The proposed method has the highest energy efficiency at all device densities. For instance, at 50 devices the proposed method uses 0.15 J/task, while the existing models R-JSRIN, WF-JSRIN, and Gradient Boosting use 0.25, 0.21, and 0.18 J/task, respectively. The proposed method retains better energy efficiency as the device density increases to 300, using 0.45 J/task, which surpasses the other existing methods. The results showcase the benefits of the proposed method in minimising energy usage, which is essential in 5G edge computing.
Fig. 2. Energy efficiency comparison (energy efficiency, J/task, versus device count; R-JSRIN, WF-JSRIN, Gradient Boosting, proposed method)
Figure 3 displays the resource consumption of the proposed and existing methods, depicting the proportion of resources used efficiently. At 50 devices the proposed method efficiently uses the available resources, achieving a utilisation rate of 92%, while R-JSRIN uses 82%, WF-JSRIN 87%, and Gradient Boosting 90%. The resource utilisation of the existing methods decreases to 55–62% as the count of devices increases, while the proposed method maintains the highest utilisation, reaching 70% at 300 devices. This result showcases the enhanced resource allocation capabilities of the proposed method, ensuring greater utilisation in 5G edge computing situations.
Fig. 3. Resource utilisation comparison (resource utilisation, %, versus device count; R-JSRIN, WF-JSRIN, Gradient Boosting, proposed method)
CONCLUSIONS
In summary, the proposed resource allocation model for 5G edge computing demonstrated significant advances over existing methods such as R-JSRIN, WF-JSRIN, and Gradient Boosting by combining DRL and the Special Forces Algorithm (SFA). The results demonstrate the enhanced performance of the proposed method in resource usage, energy efficiency, and latency: a latency of 180 ms at 300 devices, an improved energy efficiency of 0.45 J/task, and a maximum resource utilisation of 70–92%. However, the study has certain limitations. There may be additional challenges in real-world deployment, such as network interference and dynamic user mobility, since the simulation runs under simplified settings with fixed device counts. An investigation of the model's scalability at high device densities is necessary to find how it affects long-term performance. Future research should focus on extending the model to include heterogeneous devices and dynamic movement patterns. Furthermore, the integration of advanced AI methods enables more efficient resource management, and the application of additional optimisation methods can result in enhanced performance in real-time 5G edge computing applications.
REFERENCES
1. A. ABOUAOMAR, S. CHERKAOUI, Z. MLIKA, A. KOBBANE: Resource Provisioning in Edge Computing for Latency-sensitive Applications. IEEE Internet Things J, 8 (14), 11088–11099 (2021).
2. M. SHOKRNEZHAD, T. TALEB, P. DAZZI: Double Deep Q-Learning-based Path Selection and Service Placement for Latency-sensitive beyond 5G Applications. IEEE Trans Mob Comput, (2023).
3. I. DIAS, L. RUAN, C. RANAWEERA, E. WONG: From 5G to beyond: Passive Optical Network
and Multi-access Edge Computing Integration for Latency-sensitive Applications. Opt Fiber
Technol, 75, 103191 (2023).
4. M. CHOWDHURY: Flexible Heuristic-based Prioritised Latency-sensitive IoT Application
Execution Scheme in the 5G Era. International Journal of Embedded Systems (IJES), 14 (4),
363–377 (2021).
5. N. SHARMA, K. KUMAR: A Novel Latency Aware Resource Allocation and Offloading Strat-
egy with Improved Prioritization and DDQN for Edge-enabled UDNs. IEEE Transactions on
Network and Service Management (IEEE TNSM), (2024).
6. M. MAHBUB, B. BARUA: Joint Energy and Latency-sensitive Computation and Communica-
tion Resource Allocation for Multi-access Edge Computing in a Two-tier 5G HetNet. Int J Inf
Technol, 15 (1), 457–464 (2023).
7. M. SHOKRNEZHAD, T. TALEB: Near-optimal Cloud-Network Integrated Resource Allocation for Latency-sensitive B5G. In: Proceedings of the IEEE Global Communications Conference (GLOBECOM), 2022, 4498–4503.
8. S. R. PANDEY, K. KIM, M. ALSENWI et al.: Latency-sensitive Service Delivery with UAV-
assisted 5G Networks. IEEE Wirel Commun Lett, 10 (7), 1518–1522 (2021).
9. G. PREMSANKAR, B. GHADDAR: Energy-efficient Service Placement for Latency-sensitive
Applications in Edge Computing. IEEE Internet Things J, 9 (18), 17926–17937 (2022).
10. R. ARTYCH, K. BOCIANIAK, Y. CARLINET et al.: Security Constraints for Placement of
Latency-sensitive 5G MEC Applications. In: Proceedings of the 9th International Conference
on Future Internet Things Cloud (FiCloud), 2022, 40–45.
11. V. S. CHANDRIKA, N. M. G. KUMAR, V. V. KAMESH et al.: Advanced LSTM-based Time
Series Forecasting for Enhanced Energy Consumption Management in Electric Power Systems.
Int J Renew Energy Res (IJRER), 14 (1), 127–139 (2024).
12. M. PUSHPAVALLI, D. DHANYA, M. KULKARNI et al.: Enhancing Electrical Power Demand
Prediction Using LSTM-based Deep Learning Models for Local Energy Communities. Electr
Power Compon Syst, 1–18 (2024).
13. R. P. ANAND, V. SENTHILKUMAR, G. KUMAR et al.: Dynamic Link Utilization Empowered
by Reinforcement Learning for Adaptive Storage Allocation in MANET. Soft Comput, 28 (6),
5275–5285 (2024).
14. S. GUPTA, N. PATEL, A. KUMAR et al.: Intelligent Resource Optimization for Scalable and
Energy-efficient Heterogeneous IoT Devices. Multimed Tools Appl, 1–25 (2024).
15. H. SHEKHAR, C. BHUSHAN MAHATO, S. K. SUMAN et al.: Demand Side Control for En-
ergy Saving in Renewable Energy Resources Using Deep Learning Optimization. Electr Power
Compon Syst, 51 (19), 2397–2413 (2023).
Received 30 January 2025
Accepted 14 March 2025