Latency Comparison of Cloud Datacenters and Edge Servers

Batyr Charyyev, Engin Arslan, Mehmet Hadi Gunes

Research output: Contribution to journal › Conference article › peer-review


Edge computing has recently emerged as an approach to bring computing resources closer to the end-user. While offline processing and aggregate data reside in the cloud, edge computing is promoted for latency-critical and bandwidth-hungry tasks. To this end, it is crucial to quantify the expected latency reduction when edge servers are preferred over cloud locations. In this paper, we performed an extensive measurement study to assess the latency characteristics of end-users with respect to edge servers and cloud data centers. We also evaluated the impact of the capacity limitations of edge servers on latency under various user workloads. We measured latency from 8,456 end-users to 6,341 Akamai edge servers and 69 cloud locations. Our measurements show that while 58% of end-users can reach a nearby edge server in less than 10 ms, only 29% of end-users obtain a similar latency from a nearby cloud location. Additionally, we observe that the latency distribution of end-users to edge servers follows a power-law distribution, which emphasizes the need for non-uniform server deployment and load balancing by an edge provider.
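The edge-versus-cloud comparison in the abstract boils down to asking, for each end-user, whether the nearest server meets a latency budget. The sketch below (not the authors' code; all latency values are synthetic, and the 10 ms budget is taken from the abstract) shows how such a per-user comparison could be computed:

```python
# Illustrative sketch, not the paper's measurement pipeline: given per-user
# round-trip latencies (ms) to the nearest edge server and the nearest cloud
# location, compute the share of users within a latency budget, mirroring
# the paper's 10 ms comparison. All latency values below are synthetic.

def fraction_within(latencies_ms, budget_ms=10.0):
    """Fraction of users whose nearest-server latency is under the budget."""
    if not latencies_ms:
        return 0.0
    return sum(1 for lat in latencies_ms if lat < budget_ms) / len(latencies_ms)

# Hypothetical per-user latencies to the nearest edge server and cloud region.
edge_latencies = [3.1, 7.8, 5.2, 12.4, 9.0, 25.6, 4.4, 8.9]
cloud_latencies = [18.3, 9.5, 31.2, 22.7, 14.1, 45.0, 11.8, 8.2]

print(f"edge  < 10 ms: {fraction_within(edge_latencies):.0%}")   # 6 of 8 users
print(f"cloud < 10 ms: {fraction_within(cloud_latencies):.0%}")  # 2 of 8 users
```

In the paper this fraction is 58% for edge servers and 29% for cloud locations, computed over 8,456 real end-users rather than a synthetic sample.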

Original language: English
Article number: 9322406
Journal: Proceedings - IEEE Global Communications Conference, GLOBECOM
State: Published - 2020
Event: 2020 IEEE Global Communications Conference, GLOBECOM 2020 - Virtual, Taipei, Taiwan, Province of China
Duration: Dec 7, 2020 – Dec 11, 2020

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Hardware and Architecture
  • Signal Processing

Keywords

  • Cloud computing
  • Edge computing
  • Fog computing
  • Latency measurement
