Sustainable Resource Allocation in Edge Environments Using Deep Reinforcement Learning
I’m thrilled to share that my research paper titled “Sustainable Resource Allocation in Edge Environment Using Deep Deterministic Policy Gradient-Based Reinforcement Learning” has been published on IEEE Xplore!
This paper addresses a critical challenge in modern distributed systems: how to efficiently and sustainably allocate computing resources in edge environments—particularly as IoT deployments and latency-sensitive applications become more prevalent.
The Problem
Edge computing pushes computation closer to the data source, reducing latency and bandwidth usage. However, it introduces a new set of resource management challenges. How can resources like CPU, memory, and energy be used efficiently while keeping the overall system sustainable?
Our Approach
We leveraged Deep Reinforcement Learning (DRL), specifically the Deep Deterministic Policy Gradient (DDPG) algorithm, to dynamically learn optimal resource allocation strategies in real time. DDPG is well-suited for continuous action spaces and enables fine-grained control, making it ideal for edge scenarios.
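To make the continuous-action idea concrete, here is a minimal, illustrative sketch of what a deterministic policy for edge resource allocation looks like. This is not the actor network from the paper: the linear-plus-sigmoid form, the state features, and the weights are placeholder assumptions of my own, meant only to show the shape of the mapping (continuous state in, continuous action out) that DDPG learns.

```python
import math

def deterministic_policy(state, weights, bias):
    """Map a continuous state vector (e.g. CPU load, queue length,
    battery level) to a continuous action in (0, 1), interpreted here
    as the fraction of edge resources to allocate.

    A trained DDPG actor would be a neural network optimized against a
    critic; this hand-weighted linear model is only a stand-in.
    """
    z = sum(w * s for w, s in zip(weights, state)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squash into (0, 1)

# Example (illustrative numbers): moderately loaded node, short queue,
# healthy battery -> policy allocates a bit over half the resources.
action = deterministic_policy(
    state=[0.6, 0.2, 0.9],      # cpu_load, queue_len, battery (normalized)
    weights=[1.0, 0.5, -0.3],   # placeholder weights, not learned values
    bias=0.0,
)
```

Because the action is a real number rather than a choice from a fixed menu, the agent can fine-tune allocations smoothly, which is exactly the property that makes DDPG a good fit here.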
Key Contributions:
- 📊 Modeling edge environments as dynamic Markov Decision Processes (MDPs).
- 🤖 Employing DDPG to learn policies that optimize for both performance and sustainability.
- 🌿 Incorporating energy efficiency and latency into the reward structure.
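To give a feel for the kind of reward shaping the last point describes, here is a small sketch of a reward that credits the agent for keeping both latency and energy use under budget. The budgets, weights, and functional form are illustrative placeholders of mine, not the exact reward used in the paper.

```python
def reward(latency_ms, energy_j,
           latency_budget_ms=100.0, energy_budget_j=5.0,
           w_latency=0.5, w_energy=0.5):
    """Illustrative multi-objective reward: each term is 1.0 when the
    resource is unused and decays to 0.0 at (or beyond) its budget.
    All constants here are hypothetical, chosen only for the sketch.
    """
    latency_term = 1.0 - min(latency_ms / latency_budget_ms, 1.0)
    energy_term = 1.0 - min(energy_j / energy_budget_j, 1.0)
    return w_latency * latency_term + w_energy * energy_term

# A fast, frugal step earns more reward than a slow one at equal energy.
good = reward(latency_ms=40.0, energy_j=2.0)   # well under both budgets
bad = reward(latency_ms=200.0, energy_j=10.0)  # blows both budgets
```

Weighting the two terms lets the operator trade performance against sustainability without changing the learning algorithm itself.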
Results
In a simulated edge environment, our model delivered higher resource utilization, lower energy consumption, and faster response times than traditional allocation methods, and the results suggest promising real-world applicability.
Acknowledgments
This work was done as part of my undergraduate research, and I am deeply grateful to my co-authors and mentors for their guidance and support throughout the process.
Stay tuned—more research updates and technical breakdowns coming soon!