This white paper focuses on best practices for reducing latency on Dell™ PowerEdge™ 12th generation server hardware. With today's multi-socket, multi-core, highly threaded PowerEdge servers, the operating system, applications, and drivers are expected to be written to take advantage of this massively parallel architecture. Most industry-standard benchmarks and tools (for example, SPECrate®, SPECjbb®2005, VMware® VMmark™, and database benchmarks from the Transaction Processing Performance Council) can be configured and optimized to saturate the full processing power of these servers, but they typically measure throughput (for example, transactions, input/output (I/O) operations, or pages per second).

Many organizations, however, especially in the financial industry (where high-frequency trading occurs), care most about reducing the time it takes to complete a single task. In these cases, the focus must be on reducing system latency (typically measured in nanoseconds, microseconds, or milliseconds) rather than on increasing throughput. Because network latency depends in part on system latency, tuning for these environments is similar.
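To make the throughput-versus-latency distinction concrete, the sketch below times a single task with a monotonic nanosecond clock and reports best-case and typical (median) latency, rather than counting operations per second. It is a minimal illustration only; the helper name and the sample workload are assumptions for this example, not part of the paper.

```python
import statistics
import time

def measure_latency_ns(task, iterations=100_000):
    """Time one call of `task` per iteration and return (min, median) latency in ns.

    Uses time.perf_counter_ns(), a monotonic clock, so the numbers are
    per-operation latencies rather than an aggregate throughput figure.
    """
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        task()
        samples.append(time.perf_counter_ns() - start)
    return min(samples), statistics.median(samples)

if __name__ == "__main__":
    # Stand-in for the single task being tuned (e.g., one trade decision).
    best, typical = measure_latency_ns(lambda: sum(range(100)))
    print(f"best-case: {best} ns, median: {typical} ns")
```

The minimum is the usual figure of merit when tuning for low latency, since it approximates what the hardware can do when the OS and other noise sources stay out of the way; the gap between minimum and median hints at jitter, which the tuning practices in this paper aim to reduce.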