Ajairaj

Software Engineer


Exploring LLM Architectures: Key Research Insights

I’ve recently been diving deep into the world of Large Language Models (LLMs), exploring foundational papers, architectural improvements, and optimization techniques that shape modern generative AI.

🏗 Transformer Architecture: The Foundation of LLMs

One of the most important breakthroughs in LLM design was introduced in the paper "Attention Is All You Need". This paper proposed the Transformer architecture, which replaced traditional recurrent layers with a self-attention mechanism, leading to improved scalability and performance.
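The self-attention mechanism at the heart of the Transformer can be sketched in a few lines of NumPy. This is a minimal single-head illustration, not the full multi-head architecture from the paper; the projection matrices and dimensions here are made-up examples:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention (single head, illustrative).

    X: (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # similarity of every token pair, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                          # each output is a weighted mix of values

# Toy example: 4 tokens with d_model = 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one attended vector per token
```

Because every token attends to every other token in one matrix multiply, the whole sequence is processed in parallel, which is what gives Transformers their scalability edge over recurrent layers.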

Read more...

Sustainable Resource Allocation in Edge Environments Using Deep Reinforcement Learning

I’m thrilled to share that my research paper, “Sustainable Resource Allocation in Edge Environment Using Deep Deterministic Policy Gradient-Based Reinforcement Learning”, has been published on IEEE Xplore! This paper addresses a critical challenge in modern distributed systems: how to efficiently and sustainably allocate computing resources in edge environments, particularly as IoT deployments and latency-sensitive applications become more prevalent.

The Problem

Edge computing pushes computation closer to the data source, reducing latency and bandwidth usage.

Read more...