
Debian GNU / Linux kernel tuning for low latency on small files

E-mail servers, database servers, file servers, and web servers struggle to cope with LATENCY when accessing millions of small files under heavy concurrent load.

Low latency is a key factor in a good user experience.

With this Debian GNU/Linux 6.x kernel tuning, combined with the hints from the previous articles about XenServer I/O latency, filesystem tuning, multipath configuration, and data storage, we reduced our virtual block device latency eighty-four-fold in our environment.

Thousands of concurrent synchronous writes of random small files, on filesystems holding millions of them, while honoring write barriers for POSIX compliance, stress the limits of the data storage, the file systems, the kernel I/O, the virtualization I/O, and the hardware data paths.

These are our findings and Linux kernel tunings from a Cyrus IMAP e-mail server grid, in heavy-load production since November 2011.

First of all, do not blindly apply these settings to your servers. Read the whole bibliography to understand what you are about to tune.

Understand what each parameter is doing.

Understand your server workload profile pattern.

Understand your hardware and storage performance and behaviour.

Understand your Fiber Channel SAN or iSCSI data path and network segment behaviour.

OUR environment has servers using Fiber Channel HBAs with multipath, connected to an FC-disk WAFL data storage with very low latency and lots of ECC NVRAM cache. From a latency standpoint it almost behaves as a giant SSD storage.

This is important: if you are going to flush your servers' kernel buffers as fast as they can, your data storage and data path must be able to cope with such high IOPS and throughput.

You could even get out-of-memory errors caused by high-latency storage hardware. The author of the linked article went in the opposite tuning direction because of HIS hardware environment.

Read the documentation for YOUR deployed kernel version. The links below are for the latest version.

Also, the net result is only as fast as the slowest link in your application, hardware, and network stacks.

Carefully configure your application stack for low latency and performance.

In other articles we covered filesystem tuning, data storage LUN configurations, and some aspects of Cyrus IMAP configuration. More on performance tuning in future articles.

Carefully watch the iostat output during the tests, looking for bottlenecks.

Read the previous article's bibliography and test results about XenServer I/O latency.

Learn how to correlate the output columns with the kernel parameters and your data storage behaviour. Read the iostat manpage.

iostat -dmxthN 1 /dev/xvd*
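As a hypothetical example of that correlation (the helper name, field number, and threshold below are ours, not from the article), a small awk filter can flag devices whose await column — the average time in ms a request spends waiting — stands out. Column positions differ between sysstat versions, so check the field number against your own iostat header first.

```shell
#!/bin/sh
# flag_slow: read `iostat -dmx`-style output on stdin and print devices
# whose await column exceeds a threshold (in ms).  AWAIT_COL=10 is an
# assumption that matched one sysstat layout -- CHECK YOUR OWN HEADER.
flag_slow() {
    awk -v max="${1:-20}" -v col="${AWAIT_COL:-10}" \
        '$col+0 > max { print $1, "await:", $col }'
}
# usage: iostat -dmx 1 /dev/xvd* | flag_slow 20
```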

Configuration not persistent between reboots

Use these during your test, tuning and evaluation phase.

Logged as root:

# Virtual memory writeback: flush dirty pages early and often
sysctl vm.swappiness=10

sysctl vm.dirty_background_ratio=1

sysctl vm.dirty_expire_centisecs=500

sysctl vm.dirty_ratio=15

sysctl vm.dirty_writeback_centisecs=100

# Block device queue: small queue depth, deadline scheduler
# (cat before and after each echo to confirm the change)
cat /sys/block/xvdb/queue/nr_requests

echo "4" > /sys/block/xvdb/queue/nr_requests

cat /sys/block/xvdb/queue/nr_requests

cat /sys/block/xvdb/queue/scheduler

echo deadline > /sys/block/xvdb/queue/scheduler

cat /sys/block/xvdb/queue/scheduler

# Deadline scheduler: favor front merges and small batches for low latency
cat /sys/block/xvdb/queue/iosched/front_merges

echo 1 > /sys/block/xvdb/queue/iosched/front_merges

cat /sys/block/xvdb/queue/iosched/front_merges

cat /sys/block/xvdb/queue/iosched/fifo_batch

echo 1 > /sys/block/xvdb/queue/iosched/fifo_batch

cat /sys/block/xvdb/queue/iosched/fifo_batch
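Between test runs it helps to confirm which values are actually in effect. This small helper (ours, not from the article) reads them straight from /proc/sys, so it works without root:

```shell
#!/bin/sh
# show_vm_writeback: print the current vm.* writeback settings straight
# from /proc/sys -- the same values sysctl reports, no root needed.
show_vm_writeback() {
    for p in swappiness dirty_background_ratio dirty_expire_centisecs \
             dirty_ratio dirty_writeback_centisecs; do
        printf 'vm.%s = %s\n' "$p" "$(cat /proc/sys/vm/"$p")"
    done
}
show_vm_writeback
```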

A very aggressive configuration (watch your %sys during the test phase):

sysctl vm.dirty_expire_centisecs=50 

sysctl vm.dirty_writeback_centisecs=10 

sysctl vm.dirty_background_ratio=0

Debian GNU/Linux configuration persistent between reboots

You will need the “sysfsutils” package installed.

Append to your /etc/sysfs.conf

#AFM 20120523

block/xvdb/queue/nr_requests = 4 

block/xvdb/queue/scheduler = deadline 

block/xvdb/queue/iosched/front_merges = 1 

block/xvdb/queue/iosched/fifo_batch = 1

Append to your /etc/sysctl.conf

#AFM 20120523 

vm.swappiness = 10 

vm.dirty_background_ratio = 1 

vm.dirty_expire_centisecs = 500 

vm.dirty_ratio = 15 

vm.dirty_writeback_centisecs = 100
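To apply both files immediately without a reboot, a minimal sketch (as root) — the sysfsutils init script name is the one the Debian package ships; confirm it on your system:

```shell
sysctl -p /etc/sysctl.conf
/etc/init.d/sysfsutils restart
```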

Some default values for the deadline I/O scheduler

For Debian GNU / Linux 6.x Squeeze, kernel 2.6.32

cat /sys/block/xvdb/queue/iosched/fifo_batch
cat /sys/block/xvdb/queue/iosched/front_merges
cat /sys/block/xvdb/queue/iosched/read_expire
cat /sys/block/xvdb/queue/iosched/write_expire
cat /sys/block/xvdb/queue/iosched/writes_starved
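As far as we recall, the stock 2.6.32 deadline defaults are fifo_batch=16, front_merges=1, read_expire=500, write_expire=5000, and writes_starved=2 — verify on your own kernel with the commands above. The loop below (a helper of ours, not from the article) prints them all at once; the sysfs root is parameterized only so the function can be exercised against a fake tree.

```shell
#!/bin/sh
# show_iosched: print the deadline scheduler tunables for a block device.
# $1 = device name, $2 = sysfs root (defaults to /sys; parameterized so
# the function can be tested against a fake directory tree).
show_iosched() {
    dev=$1; root=${2:-/sys}
    for p in fifo_batch front_merges read_expire write_expire writes_starved; do
        f="$root/block/$dev/queue/iosched/$p"
        [ -r "$f" ] && printf '%s=%s\n' "$p" "$(cat "$f")"
    done
}
# usage: show_iosched xvdb
```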



Search on the web:

  • Red Hat Enterprise Linux 5 IO Tuning Guide (whitepaper)
  • Performance Tuning Whitepaper for Red Hat Enterprise Linux 5.2
  • iSCSI vs. Fiber Channel benchmarking comparison
  • QLA2300 HBA specifications
  • Citrix XenServer and NetApp Storage Best Practices
  • The complete Q&A from the Citrix XEN Masterclass webinar

How to configure maximum performance storage space for Debian GNU / Linux on IBM DS 8300 Data Storage Systems

The IBM DS 8300 Data Storage Systems are multi-million-dollar SAN machines built for flexibility, high availability, and performance.

But you may leave much of that performance and availability behind if you do not configure them correctly for Debian GNU/Linux.

See how to ask for high-performance data storage space on them, and what you need to configure on them.

Read about an actual configuration running with Debian GNU / Linux hosts at SERPRO.

How to configure multipath for high availability and performance on Debian and CentOS for storage at IBM DS8300 SAN

This detailed how-to guides you through achieving high availability and performance on Debian and CentOS when accessing storage space on IBM DS8300 Data Storage Systems.

Tested on Debian GNU/Linux 5.x Lenny 64-bit and CentOS 5.3 64-bit running on 8-core blades with QLogic and Emulex LightPulse Fiber Channel Host Bus Adapters, in systems deployed at SERPRO.

Observations showed that Debian Lenny had the best performance for our application load profile and hardware.

LVM, RAID, XFS and EXT3 filesystem tuning for massive heavy-load concurrent parallel I/O on small files on Debian

Thousands of concurrent parallel read/write accesses over tens of millions of small files are a terrible performance tuning problem for e-mail servers.

You must understand and fine-tune your whole infrastructure chain, following the previous articles about data storage and multipath on Debian 5.x Lenny.

We reduced CPU I/O wait from 30% to 0.3% (XFS) and 5% (EXT3) with these combined, previously undocumented, filesystem tuning tips.

XenServer: how to reduce network and disk I/O latency and packet loss under high load

For a better user experience, through faster service responses, reducing network and disk I/O LATENCY is a key factor.

Especially on e-mail and database servers.

We will see how to reduce the I/O latency of virtual machines on XenServer virtualization hosts.

Tutorial: Cyrus IMAP aggregator (murder) 2.3.16 on Debian GNU/Linux 5.x Lenny

This complete tutorial shows how to build a highly scalable e-mail infrastructure LAB, for hundreds of thousands of accounts.

