
How round-trip time and limiting data rate impact network performance


This tip reviews how network parameters such as round-trip time and limiting data rate affect network performance. For more background, view part 1 of this article series: How much bandwidth is enough?

The real-world network: Network parameters and network-protocol parameters

Unfortunately, other real-life network parameters have more insidious effects that none of the above adjustments will improve. Knowing when these nasties are in charge is partly science, partly art. Even if the end-node systems are perfect, two network parameters are equally important and crucial to the throughput of any network exchange over any path:

A. Round-trip delay (round-trip time or RTT in TCP parlance)
B. Limiting data rate
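
These two interact with the transport protocol in a way that is easy to quantify before going further: a sender that keeps at most one window of data unacknowledged can never average more than that window divided by the RTT, no matter how fast the link. Here is a minimal Python sketch of that bound (the 8 KB window, T1 link rate and RTT values are illustrative assumptions, not figures from this article):

    # Window-limited vs. rate-limited throughput (illustrative values).
    def max_throughput_bps(window_bytes, rtt_sec, link_rate_bps):
        # A sender can't average more than window/RTT or the link's
        # limiting rate, whichever is smaller.
        window_limit_bps = 8 * window_bytes / rtt_sec
        return min(window_limit_bps, link_rate_bps)

    # Example: an 8 KB window over a T1 (1.544 Mbps) path.
    for rtt_ms in (10, 50, 200):
        bps = max_throughput_bps(8 * 1024, rtt_ms / 1000.0, 1_544_000)
        print(f"RTT {rtt_ms:3d} ms -> at most {bps / 1e6:.3f} Mbps")

At 10 msec RTT the link is the bottleneck; at 200 msec the same window caps throughput near 0.33 Mbps on the very same T1.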

Round-trip delay grows in importance because the transport protocol (i.e., TCP) has error-recovery limitations; limiting data rate grows in importance as the amount of data transmitted increases. Both interact with the transport protocol's behavior. While round-trip delay may seem intuitive, its relations to the other network-path and protocol parameters are often missed. We therefore need to know which network-protocol parameters may interact with a network path's own properties:

C. Sender's transmit window (unacknowledged data, Layer 4 and higher)
D. Sender's and receiver's maximum transfer unit (or "maximum transmission unit" -- MTU, frame size)
E. Sender's transport (Layer 4) time-out and retransmission policies
F. Receiver's window (packet-buffer size)
G. Receiver's acknowledgment (ACK) policies (Layer 4 and higher)
H. Error detection/correction
I. Path-congestion notification, if any (Layer 2 and higher)
J. Protocol overhead

These are the main protocol-stack parameters and related algorithms that allow throughput to be maximized in the face of actual, imperfect network paths and end-node capabilities. This doesn't mean throughput will be maximized in the ideal sense; it just means that, for a given real path, protocol properties can be adjusted (Layers 1 and higher) to yield the best throughput for that protocol/path combination -- choosing a different protocol or path might well improve overall throughput. This is where the network architect needs experience and knowledge to design well, and the network manager and technician need savvy to perform tests and computations.
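
As a rough illustration of how several of these parameters combine, the sketch below folds the effective window, MTU, header overhead, RTT and limiting rate into one back-of-the-envelope goodput estimate (the constants are assumptions for illustration; real stacks add congestion control, delayed ACKs and retransmission timers on top):

    # Back-of-the-envelope goodput from a few of the parameters above:
    # effective window (C/F), MTU (D), header overhead (J), plus the
    # path's RTT and limiting data rate. Illustrative only.
    def goodput_bps(link_rate_bps, rtt_sec, window_bytes,
                    mtu_bytes=1500, header_bytes=40):  # 20 IP + 20 TCP
        payload_fraction = (mtu_bytes - header_bytes) / mtu_bytes
        window_limit_bps = 8 * window_bytes / rtt_sec
        return payload_fraction * min(link_rate_bps, window_limit_bps)

    # Example: 8 KB effective window, 50 msec RTT, T1 limiting rate.
    print(f"{goodput_bps(1_544_000, 0.05, 8 * 1024) / 1e6:.3f} Mbps")

Changing any one parameter -- a larger window, a bigger MTU, a shorter path -- moves the answer, which is exactly the tuning space the architect works in.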

Consider this statistical picture of thousands of real network packets sent over a real network path to meet a simple but large file-transfer request over TCP/IP:

[Diagram: NetCalibrator packet statistics for the transfer, shown in four quadrants]

The upper-left quadrant shows it took about 35 seconds to transfer 1.5 MB of payload, so throughput is 8 * 1,500,000 / 35 = 343 Kbps, typical of a fractional T1 link's speed, such as ADSL uploading. But wait -- the lower-right quadrant shows many inter-packet delays of only 8 milliseconds (msec), and the lower left shows all packets are 1518 bytes -- Ethernet's traditional MTU. These two facts mean that 1518 bytes can sometimes arrive every 8 msec, i.e., at full T1 speed (about 1.5 Mbps). Clearly, there's a disparity between the limiting data rate (good) and actual throughput (not so good).

Protocol overhead per packet is 18 + 20 + 20 bytes (Ethernet + IP + TCP), so that penalty leaves 1518 - 58 = 1460 bytes of payload per packet, for a maximum rate of 8 * 1460 / 0.008 = 1.46 Mbps: close to T1, but far above the 343 Kbps average observed over the 35 seconds. What's going on? Our burst throughput is near T1, but our sustained average is under one-fourth of that.
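
Those figures are easy to verify; this quick check reproduces the arithmetic of the last two paragraphs:

    # Reproduce the article's numbers for the 1.5 MB transfer.
    payload_total = 1_500_000            # bytes transferred
    elapsed = 35.0                       # seconds for the whole transfer
    avg_bps = 8 * payload_total / elapsed
    print(f"average throughput: {avg_bps / 1000:.0f} Kbps")     # ~343

    frame = 1518                         # Ethernet frame size in the trace
    overhead = 18 + 20 + 20              # Ethernet + IP + TCP headers
    payload_per_frame = frame - overhead                         # 1460
    burst_bps = 8 * payload_per_frame / 0.008   # frames 8 msec apart
    print(f"burst payload rate: {burst_bps / 1e6:.2f} Mbps")    # ~1.46
    print(f"burst / average: {burst_bps / avg_bps:.1f}x")       # ~4.3

The sustained average sits more than a factor of four below the burst rate -- that gap is what needs explaining.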

The right-hand quadrants offer more than hints at path issues:

  1. Acknowledgment times for the receiving node are spread widely, but most are very quick (~200 microseconds) -- the stack is basically fast
  2. Some ACK times are very long (Delayed ACKs, >100 msec) -- why? (The toy model after this list shows how costly such stalls can be.)
  3. The limiting rate is often reached, but more often the lower-right quadrant shows a wide spread in the time between adjacent packets (the packet inter-arrival time) -- why?
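
Why would a minority of slow ACKs matter so much? Because every delayed-ACK stall idles the pipe for tens of frame-times. The toy model below makes the point; its stall count and stall duration are illustrative assumptions, not values taken from the trace:

    # Toy model: a few delayed-ACK stalls dominate total transfer time.
    # Stall count and duration are assumptions for illustration only.
    frames = 1_500_000 // 1460 + 1       # ~1028 full-payload frames
    per_frame = 0.008                    # seconds per frame at burst rate
    stalls = 150                         # frames that wait on a delayed ACK
    stall_time = 0.15                    # seconds per stall (>100 msec ACK)

    busy = frames * per_frame            # time spent moving data: ~8.2 s
    total = busy + stalls * stall_time   # ~30.7 s once stalls are added
    print(f"busy {busy:.1f} s, total {total:.1f} s, effective "
          f"{8 * 1_500_000 / total / 1000:.0f} Kbps")

A few hundred stalls turn an 8-second transfer into a 30-second one, which matches the shape of the disparity measured above.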

⇒ Continue reading part 3: Multiple effects of multiple causes.

About the author:
Alexander B. Cannara, PhD, is an electrical engineer, a software and networking consultant, and an educator. He has 18 years of experience in the computer-networking field, including 11 years in managing, developing and delivering technical training. He is experienced in many computer languages and network protocols and is a member of IEEE, the Computer Society, and the AAAS. Alex lives with his wife and son in Menlo Park, California.
