What is Latency?

The time it takes for data to travel from one point to another across a network, often measured in milliseconds (ms).

Latency Example

You say “hello” on a call, but the person on the other end hears it half a second later. That delay? That’s latency, and it can make conversations feel awkward or unnatural.

In networking, latency refers to the delay between sending a request and receiving a response. It’s a critical factor in real-time communication, like voice, video, and live chat.
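As a rough illustration, you can observe this delay directly by timing how long a request takes to come back. The sketch below is a minimal example, not a production tool: it times a TCP connection handshake as a stand-in for a request/response delay, and the host and port (example.com on 443) are placeholder assumptions.

```python
import socket
import time

def request_response_delay(host: str, port: int, timeout: float = 2.0) -> float:
    """Time how long it takes to open a TCP connection to host:port.

    The TCP handshake requires a full round trip, so the elapsed time is a
    rough proxy for the request/response delay to that endpoint.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about the elapsed time
    return (time.perf_counter() - start) * 1000  # convert seconds to ms

if __name__ == "__main__":
    # example.com is used here purely as a placeholder endpoint
    delay_ms = request_response_delay("example.com", 443)
    print(f"Request/response delay: {delay_ms:.1f} ms")
```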

There are different types of latency:

  • Network latency: Time for a packet to travel between endpoints.
  • Processing latency: Time spent decoding or encoding audio/video.
  • Queuing latency: Time packets spend waiting in buffers or routers.
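One way to think about the components listed above is that they add up along the path. The sketch below is illustrative only: the class name and the figures are made up, and real values vary widely from call to call.

```python
from dataclasses import dataclass

@dataclass
class LatencyBudget:
    """One-way latency broken into the components listed above (all in ms)."""
    network_ms: float     # packet travel time between endpoints
    processing_ms: float  # encoding/decoding of audio or video
    queuing_ms: float     # time spent waiting in buffers or routers

    @property
    def total_ms(self) -> float:
        # Total one-way latency is the sum of the three components
        return self.network_ms + self.processing_ms + self.queuing_ms

# Hypothetical figures for a voice call
budget = LatencyBudget(network_ms=40.0, processing_ms=20.0, queuing_ms=10.0)
print(f"Total one-way latency: {budget.total_ms:.0f} ms")
```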

Low latency is essential for a good user experience in:

  • VoIP and WebRTC calls (to avoid talk-over)
  • Gaming (to prevent lag)
  • Financial systems (to ensure timely trades)

High latency doesn’t necessarily mean packet loss or outright failure; it means slowness. In contact centres, high latency can lead to:

  • Talk-over or echo on calls
  • Delays in agent responses
  • Frustrated customers

Latency is typically measured as round-trip time (RTT). Anything under 150 ms is generally considered acceptable for voice; beyond that, users start to notice the delay.
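To make that guideline concrete, here is a small sketch that classifies a measured RTT against the 150 ms rule of thumb above. The function name and the sample readings are assumptions made for illustration.

```python
def rate_voice_rtt(rtt_ms: float) -> str:
    """Classify a round-trip time against the ~150 ms rule of thumb for voice."""
    if rtt_ms < 150:
        return "acceptable for voice"
    return "noticeable delay; users may experience talk-over"

# Example RTT readings in milliseconds (made-up values)
for rtt in (40, 120, 150, 300):
    print(f"RTT {rtt} ms -> {rate_voice_rtt(rtt)}")
```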