Latency Optimization in Edge vs. Cloud Computing: A Comparative Study for Real-Time Applications

Asif Irshad

Abstract

Background and Purpose: The increasing demand for real-time applications such as autonomous vehicles, industrial automation, and telemedicine has driven the need for low-latency computing solutions. While traditional cloud computing offers extensive computational power, it incurs latency from long-distance data transmission. In contrast, edge computing reduces latency by processing data closer to the source, despite facing resource limitations. This study compares latency optimization strategies in edge and cloud computing to support the unique demands of real-time applications.


Methods: The research employs a comparative analysis framework that focuses on key performance metrics and real-world case studies. It evaluates various optimization techniques including caching, network slicing, AI-driven workload allocation, and data compression. This approach allows for a systematic assessment of the latency performance and resource efficiency of both computing paradigms.
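The latency comparison described above can be illustrated with a simple model. The sketch below is an illustrative assumption, not the study's actual methodology: it treats total latency as network round-trip time plus compute time, with parameter values (RTT, capacity) chosen only to show how transmission cost dominates for light workloads while compute cost dominates for heavy ones.

```python
# Hypothetical latency model (illustrative parameters, not from the study):
# total latency = network round-trip time + processing time.
# Edge nodes sit close to the data source (low RTT) but have limited
# compute; the cloud is distant but has abundant compute.

def total_latency_ms(rtt_ms: float, workload_ops: float,
                     capacity_ops_per_ms: float) -> float:
    """Network round trip plus compute time for a workload."""
    return rtt_ms + workload_ops / capacity_ops_per_ms

# Assumed parameters for illustration only.
EDGE_RTT_MS, EDGE_CAPACITY = 2.0, 50.0        # near the source, modest compute
CLOUD_RTT_MS, CLOUD_CAPACITY = 60.0, 5000.0   # distant, abundant compute

light_job = 500.0       # ops: transmission dominates, so the edge wins
heavy_job = 500_000.0   # ops: compute dominates, so the cloud wins

for ops in (light_job, heavy_job):
    edge = total_latency_ms(EDGE_RTT_MS, ops, EDGE_CAPACITY)
    cloud = total_latency_ms(CLOUD_RTT_MS, ops, CLOUD_CAPACITY)
    print(f"{ops:>9.0f} ops -> edge {edge:.1f} ms, cloud {cloud:.1f} ms")
```

Under these assumed numbers, the light job finishes in 12 ms at the edge versus 60.1 ms in the cloud, while the heavy job takes roughly 10 s at the edge versus 160 ms in the cloud, mirroring the trade-off the comparative framework evaluates.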


Findings: Results indicate that edge computing substantially reduces latency by minimizing the distance data must travel, though it is constrained by limited resources. In contrast, cloud computing, while offering high computational capabilities, introduces latency overhead due to extended data transmission. The findings suggest that hybrid models, which integrate edge and cloud computing, provide the most effective balance by leveraging the strengths of each paradigm to enhance overall system performance.
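A hybrid model of the kind the findings favor can be sketched as a dispatcher that prefers the edge when it can meet a task's deadline and falls back to the cloud otherwise. All names, parameters, and thresholds below are hypothetical illustrations, not the paper's implementation.

```python
# Hypothetical hybrid edge-cloud dispatcher (illustrative sketch):
# route a task to the edge if its estimated latency there meets the
# deadline, accounting for work already queued; otherwise use the cloud.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    rtt_ms: float               # round-trip time to this node
    capacity_ops_per_ms: float  # processing throughput
    load_ops: float = 0.0       # ops already queued on this node

    def est_latency_ms(self, ops: float) -> float:
        """Estimated latency: transmission plus queued + new compute time."""
        return self.rtt_ms + (self.load_ops + ops) / self.capacity_ops_per_ms

def dispatch(task_ops: float, deadline_ms: float,
             edge: Node, cloud: Node) -> Node:
    """Prefer the edge when it can meet the deadline; else use the cloud."""
    target = edge if edge.est_latency_ms(task_ops) <= deadline_ms else cloud
    target.load_ops += task_ops  # record the queued work
    return target

edge = Node("edge", rtt_ms=2.0, capacity_ops_per_ms=50.0)
cloud = Node("cloud", rtt_ms=60.0, capacity_ops_per_ms=5000.0)

print(dispatch(500.0, 20.0, edge, cloud).name)       # light task fits the edge
print(dispatch(500_000.0, 500.0, edge, cloud).name)  # heavy task overflows to cloud
```

The design choice here reflects the abstract's conclusion: latency-sensitive, lightweight tasks stay near the source, while compute-heavy tasks exploit the cloud's scalability.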


Theoretical Contributions: This study advances the theoretical understanding of distributed computing architectures by clarifying the inherent trade-offs between latency reduction and resource availability. It contributes a novel framework for evaluating hybrid edge-cloud models, thereby enriching the academic discourse on real-time system optimization.


Conclusions and Policy Implications: The research concludes that hybrid computing models offer a promising solution for real-time applications by combining low latency with scalable computing power. Policymakers and industry leaders are encouraged to consider integrated edge-cloud infrastructures in future technology deployments, as these models can enhance performance while addressing the limitations of each individual approach.
