In datacenter applications, predictable service times and controlled latency, especially tail latency, are essential for building performant services. This is especially true for applications built by accessing data across thousands of servers to generate a single user response. Current practice is to run such services at low utilization to rein in latency outliers, which decreases efficiency and limits the number of service invocations developers can issue while still meeting tight latency budgets. In this paper, we analyze three datacenter applications, Memcached, OpenFlow, and web search, to measure the effect on tail latency of 1) kernel socket handling, NIC interaction, and the network stack; 2) application locks contested in the kernel; and 3) application-layer queuing of requests stalled behind straggler threads. We propose a novel approach that reduces these sources of latency by relying on support from the NIC hardware, and we find that the resulting improvements dramatically reduce end-to-end application latency.
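As a minimal illustration of the kind of measurement the abstract describes (not code from the paper itself), the sketch below computes tail-latency percentiles from a set of per-request service times using only the Python standard library. The sample data is synthetic; in a real study the samples would come from instrumented requests against Memcached, OpenFlow, or web search.

```python
# Hedged sketch: computing tail-latency percentiles (p50/p95/p99) from
# per-request service-time samples. All names here are illustrative.
import random
import statistics


def tail_latencies(samples, percentiles=(50, 95, 99)):
    """Return the requested latency percentiles, in the samples' time unit."""
    # quantiles(n=100) yields the 99 cut points q1..q99; the p-th
    # percentile is at index p - 1.
    qs = statistics.quantiles(samples, n=100, method="inclusive")
    return {p: qs[p - 1] for p in percentiles}


# Synthetic service times: mostly fast, with a long exponential tail
# standing in for stragglers and queuing delay (mean ~100 us).
random.seed(0)
samples = [random.expovariate(1 / 100) for _ in range(10_000)]
print(tail_latencies(samples))
```

The gap between the median and the 99th percentile in such a distribution is exactly the "tail" that the paper attributes to kernel overheads, lock contention, and straggler-induced queuing.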
The authors of these documents have submitted their reports to this technical report series for the purpose of non-commercial dissemination of scientific work. The reports are copyrighted by the authors, and their existence in electronic format does not imply that the authors have relinquished any rights. You may copy a report for scholarly, non-commercial purposes, such as research or instruction, provided that you agree to respect the author's copyright. For information concerning the use of this document for other than research or instructional purposes, contact the authors. Other information concerning this technical report series can be obtained from the Computer Science and Engineering Department at the University of California at San Diego, firstname.lastname@example.org.