A brief explanation of latency
In this video, Informa TechTarget managing editor and multimedia manager Kelsey Waddill gives a quick explanation of latency and how it impacts data sharing.
Latency is a measure of how long data takes to travel from one point to another -- in other words, the delay between a request and its response.
Ideally, latency will be as close to zero as possible. Low latency means less delay, which makes for a positive user experience. High latency means a greater delay, which causes long loading times, interrupted streams and an overall poor user experience.
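The request-to-response definition above can be sketched in code. This is a minimal, hypothetical example (not from the video): it times a simulated request handler with Python's `time.perf_counter` to show how latency is typically measured as elapsed time between sending a request and receiving its response.

```python
import time

def handle_request():
    """Stand-in for a real request handler; sleeps to simulate transit and processing delay."""
    time.sleep(0.05)  # 50 ms of simulated delay
    return "response"

# Latency: elapsed time from issuing the request to receiving the response.
start = time.perf_counter()
handle_request()
latency_ms = (time.perf_counter() - start) * 1000
print(f"latency: {latency_ms:.1f} ms")
```

In a real system, the handler would be a network call or disk read, and the measured value would include every source of delay listed below.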
Latency can be caused by many things, including the following:
- Physical distance between the source and the destination.
- The type of transmission media used to transport data packets.
- Large data packet size.
- A weak signal.
- Other computer and storage delays.
Latency can be reduced by taking these steps:
- Tuning and upgrading computer hardware and software.
- Using techniques like prefetching, multithreading or parallelism.
- Uninstalling unnecessary programs.
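One of the techniques above, parallelism, can be illustrated with a short sketch. This hypothetical Python example (the function names and delays are assumptions, not from the video) compares fetching four simulated I/O-bound items sequentially versus in parallel with a thread pool; overlapping the waits hides most of the per-request delay.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(item):
    """Simulated I/O-bound fetch, e.g. a network call with 50 ms of latency."""
    time.sleep(0.05)
    return item

items = [1, 2, 3, 4]

# Sequential: total wait is roughly the sum of each request's latency.
start = time.perf_counter()
sequential = [fetch(i) for i in items]
sequential_s = time.perf_counter() - start

# Parallel: requests wait concurrently, so total wait approaches a single request's latency.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(fetch, items))
parallel_s = time.perf_counter() - start

print(f"sequential: {sequential_s:.2f}s, parallel: {parallel_s:.2f}s")
```

Parallelism does not make any single request faster; it reduces the total delay the user experiences when many independent requests are in flight.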
Latency can be an inconvenience in mild cases, like slow loading times when streaming a movie. But in extreme scenarios -- like self-driving cars -- minimizing latency can be critical. In these kinds of applications, it's important to put the computer as close to the data source as possible -- a practice called edge computing.
Sabrina Polin is a senior managing editor of video content for the Learning Content team. She plans and develops video content for Informa TechTarget's editorial YouTube channel, Eye on Tech. Previously, Sabrina was a reporter for the Products Content team.