E-mail servers, database servers, file servers, and web servers all struggle with LATENCY when accessing millions of small files under heavy concurrent load.
Low latency is a key factor in a good user experience.
With this Debian GNU/Linux 6.x kernel tuning, combined with the hints from the previous articles about XenServer I/O latency, filesystem tuning, multipath configuration, and data storage, we reduced our virtual block device latency eighty-four-fold in our environment.
Thousands of concurrent synchronous writes of random small files, on filesystems already holding millions of them, while honoring write barriers for POSIX compliance, stress the limits of the data storage, the filesystems, kernel I/O, virtualization I/O, and the hardware data paths.
These are our findings and Linux kernel tunings for a Cyrus IMAP e-mail server grid that has been under heavy production load since November 2011.
First of all, do not blindly apply these settings to your servers. Read the whole bibliography to understand what you are about to tune.
Understand what each parameter is doing.
Understand your server workload profile pattern.
Understand your hardware and storage performance and behaviour.
Understand your Fiber Channel SAN or iSCSI data path and network segment behaviour.
OUR environment has servers using Fiber Channel HBA multipath, connected over FC to WAFL data storage with very low latency and lots of ECC NVRAM for cache. From a latency standpoint, it almost behaves like a giant SSD array.
This is important: if you are going to "flush" your servers' kernel buffers as fast as they can, your data storage and data path must be able to cope with the resulting high IOPS and throughput.
Read YOUR deployed kernel version documentation. The links below are for the latest kernel.org version.
Also, the net result is only as fast as the slowest link in your application, hardware, and network stacks.
Carefully configure your application stack for low latency and performance.
In other articles we covered filesystem tuning, data storage LUN configuration, and some aspects of Cyrus IMAP configuration. More on performance tuning in future articles.
Carefully watch the iostat output during the tests, looking for bottlenecks.
Read the previous article bibliography and test results about XenServer I/O latency.
Read the iostat manpage and learn how to correlate its output columns with the kernel parameters and your data storage behaviour.
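As a starting point (the device name xvdb and the 5-second interval are our assumptions, adjust them to your setup), an extended-statistics run looks like:

```shell
# Extended per-device statistics, in kB, refreshed every 5 seconds.
# Watch await (average time per I/O, in ms), avgqu-sz (request queue
# depth) and %util (device saturation) while the load test runs.
iostat -dxk 5 xvdb
```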
Configuration not persistent between reboots
Use these during your test, tuning and evaluation phase.
Logged as root:
echo 4 > /sys/block/xvdb/queue/nr_requests
echo deadline > /sys/block/xvdb/queue/scheduler
echo 1 > /sys/block/xvdb/queue/iosched/front_merges
A very aggressive configuration (watch your %sys during the test phase):
echo 1 > /sys/block/xvdb/queue/iosched/fifo_batch
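To confirm the values took effect, read the same sysfs files back (a sketch, using xvdb as above; the active scheduler is printed in brackets):

```shell
cat /sys/block/xvdb/queue/nr_requests           # expect 4 after the tuning above
cat /sys/block/xvdb/queue/scheduler             # e.g. noop anticipatory [deadline] cfq
cat /sys/block/xvdb/queue/iosched/front_merges  # expect 1
cat /sys/block/xvdb/queue/iosched/fifo_batch    # expect 1
```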
Debian GNU/Linux configuration persistent between reboots
Append to your /etc/sysfs.conf
block/xvdb/queue/nr_requests = 4
block/xvdb/queue/scheduler = deadline
block/xvdb/queue/iosched/front_merges = 1
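On Debian, /etc/sysfs.conf is applied at boot by the sysfsutils init script (assuming the sysfsutils package is installed). To apply the file immediately without rebooting:

```shell
# Re-read /etc/sysfs.conf and apply the attribute values now.
/etc/init.d/sysfsutils restart
```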
Append to your /etc/sysctl.conf
vm.swappiness = 10
vm.dirty_background_ratio = 1
vm.dirty_expire_centisecs = 500
vm.dirty_ratio = 15
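These vm.* values can also be loaded into the running kernel without a reboot, then read back to verify, for example:

```shell
# Load all values from /etc/sysctl.conf into the running kernel...
sysctl -p
# ...and read one back: background writeback now starts once dirty
# pages reach 1% of memory.
sysctl vm.dirty_background_ratio
```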
Some default values for the deadline I/O scheduler
For Debian GNU/Linux 6.x Squeeze, kernel 2.6.32
kswapd0, ksoftirqd0
Linux kernel tuning for low latency block devices and small files
Search on the web:
- Red Hat Enterprise Linux 5 IO Tuning Guide (whitepaper)
- Performance Tuning Whitepaper for Red Hat Enterprise Linux 5.2