UPDATE: This article is now slightly dated. In the comments below, Null4Ever provides more contemporary kernel parameter settings. 

Siege is a great tool to measure web site performance and establish benchmark metrics. What do you do if siege shows your performance stinks? This document illustrates how to tune your server and validate those tunings with siege.

For the purpose of this exercise we will tune a very old Linux server to effectively serve static content at a rate of 200 requests per second. We’ll start with an out-of-box apache configuration and build from there. By the way, when I say “old” I mean old:

Ben: $ uname -a 
Linux ben 2.4.10-4GB #1 Fri Sep 28 17:20:21 GMT 2001 i586 unknown
Ben: $ bin/httpd -V
Server version: Apache/2.0.47
Server built:   Jul 18 2003 18:16:04

Damn, that’s old! Let’s see how this Model-T performs. The first thing we need to do is establish a benchmark:

Bully $ siege -d1 -c200 -t1m http://ben.home.joedog.org/apache_pb.gif
Lifting the server siege...      done.
Transactions:                10230 hits
Availability:               100.00 %
Elapsed time:                59.90 secs
Data transferred:            22.69 MB
Response time:                0.15 secs
Transaction rate:           170.78 trans/sec
Throughput:                   0.38 MB/sec
Concurrency:                 26.18
Successful transactions:     10230
Failed transactions:             0
Longest transaction:          2.34
Shortest transaction:         0.00
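As a quick sanity check on siege's report, the transaction rate is simply transactions divided by elapsed time:

```shell
# Transaction rate = transactions / elapsed time
awk 'BEGIN { printf "%.2f trans/sec\n", 10230 / 59.90 }'
# prints "170.78 trans/sec", matching siege's report
```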

There’s no two ways about it: that’s crappy. What steps can we take to improve it? An obvious place to look is the apache configuration. Is the server configured to handle 200 simultaneous connections? If apache is configured with fewer than 200 workers, then requests are queued until workers are available to handle them. Let’s check our config:

<IfModule prefork.c>
  StartServers         5
  MinSpareServers      5
  MaxSpareServers     10
  MaxClients         150
  MaxRequestsPerChild  0
</IfModule>
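If you’re not sure where your build keeps these directives, a quick grep turns them up. A sketch; the config path below is a guess for a default Apache 2.0 source install, so adjust it to your ServerRoot:

```shell
# Locate the active prefork limits; the path is an assumption --
# adjust it to your own ServerRoot.
conf=/usr/local/apache2/conf/httpd.conf
grep -in 'StartServers\|SpareServers\|MaxClients' "$conf" 2>/dev/null \
  || echo "no prefork directives found in $conf"
```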

Sure enough, we have fewer workers than requests. Let’s bump those numbers in order to meet our requirement. While we’re at it, we’ll increase the initial pool so we pay less of a penalty to fork new workers:

<IfModule prefork.c>
  StartServers        50
  MinSpareServers     15
  MaxSpareServers     25
  MaxClients         225
  MaxRequestsPerChild  0
</IfModule>

After an apache restart we find thirty-three httpd processes in memory. With a bigger pool in memory, it’s easier to accommodate a large burst of incoming requests, and with the expanded worker pool we have the capacity to meet our load requirement. Let’s see how these changes affected our performance:
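You can check the worker count yourself. A minimal sketch, assuming the process is named `httpd` as in this build (on Debian-style systems it is `apache2`):

```shell
# Count httpd worker processes currently in memory (parent + children).
# The process name "httpd" matches this Apache build; on some distros
# the binary is called "apache2" instead.
ps -C httpd --no-headers 2>/dev/null | wc -l
```

Between requests, prefork culls idle children back toward MaxSpareServers, so the resident count floats with load rather than staying pinned at StartServers.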

Lifting the server siege...      done.
Transactions:                10290 hits
Availability:                99.44 %
Elapsed time:                59.07 secs
Data transferred:            22.83 MB
Response time:                0.05 secs
Transaction rate:           174.20 trans/sec
Throughput:                   0.39 MB/sec
Concurrency:                  9.55
Successful transactions:     10290
Failed transactions:            58
Longest transaction:          3.07
Shortest transaction:         0.00

Our transaction rate improved by nearly 3.5 transactions per second. This is because requests spent less time waiting for a worker to handle them. If you watch the web server’s socket table during a siege, one thing stands out: the number of sockets in TIME_WAIT. This is what I saw during the last run:

Ben: $ netstat -a | grep TIME_WAIT | wc -l   

If we could recycle those connections, we could improve our performance. The kernel parameter that controls TIME_WAIT reuse is ‘net.ipv4.tcp_tw_recycle’. The sysctl command will enable you to view and set kernel parameters. Let’s check the value on our web server:

Ben: $ sysctl net.ipv4.tcp_tw_recycle
net.ipv4.tcp_tw_recycle = 0

We’re not reusing sockets in TIME_WAIT. Let’s change it and view our results:

Ben: $ sysctl -w net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_recycle = 1

For readers with newer systems, we recommend this additional setting:

sysctl -w net.ipv4.tcp_tw_reuse=1

I set those parameters on both the computer running siege and the server running apache. To make the changes permanent, add them to /etc/sysctl.conf. Now let’s run another test:
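The corresponding /etc/sysctl.conf entries look like this (a sketch; note that tcp_tw_recycle was removed entirely in Linux 4.12, so on modern kernels keep only the reuse line):

```
# /etc/sysctl.conf -- reload with `sysctl -p`
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
```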

Lifting the server siege...      done.
Transactions:                10231 hits
Availability:               100.00 %
Elapsed time:                31.25 secs
Data transferred:            22.69 MB
Response time:                0.05 secs
Transaction rate:           327.39 trans/sec
Throughput:                   0.73 MB/sec
Concurrency:                 16.68
Successful transactions:     10231
Failed transactions:             0
Longest transaction:          3.12
Shortest transaction:         0.00

Our transactions per second sky-rocketed to 327.39! That’s a significant improvement, but we still have a problem. Notice the elapsed time: I stopped the siege short of a minute because it was hung. In run after run, siege hung at about 10,200 transactions. We’re exhausting something, but what? Let’s check our kernel parameters and see what we find:

Ben: $ /sbin/sysctl -a  | grep 102
net.ipv6.neigh.default.gc_thresh3 = 1024
net.ipv6.route.gc_thresh = 1024
net.ipv4.ip_conntrack_max = 10232
net.ipv4.neigh.default.gc_thresh3 = 1024
net.ipv4.route.gc_thresh = 1024
net.ipv4.tcp_max_syn_backlog = 1024
net.core.optmem_max = 10240
kernel.sem = 250    32000    32    1024
kernel.rtsig-max = 1024

It looks like we hit the ip_conntrack_max limit. If the kernel dropped packets, it will have let us know. Let’s check the system logs:

Apr  2 15:00:58 ben kernel: ip_conntrack: table full, dropping packet.

Sure enough. Let’s double it.

Ben: $ sysctl -w net.ipv4.ip_conntrack_max=`echo 10232*2 | bc`
net.ipv4.ip_conntrack_max = 20464
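To confirm the new ceiling took and to watch how full the table gets during a run, compare the live entry count against the maximum. A hedged sketch; the proc paths vary by kernel generation (2.4/2.6 kernels expose ip_conntrack, newer ones nf_conntrack):

```shell
# Report connection-tracking table usage; falls back gracefully
# when the conntrack module isn't loaded on this kernel.
f=/proc/net/ip_conntrack                  # 2.4/2.6 kernels
[ -r "$f" ] || f=/proc/net/nf_conntrack   # modern kernels
if [ -r "$f" ]; then
  echo "tracked connections: $(wc -l < "$f")"
else
  echo "conntrack table not available on this kernel"
fi
```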

One more run, let’s see how we do:

Bully # siege -d1 -c200 -t1m http://ben.home.joedog.org/apache_pb.gif
Lifting the server siege...      done.
Transactions:                20462 hits
Availability:               100.00 %
Elapsed time:                59.65 secs
Data transferred:            45.39 MB
Response time:                0.04 secs
Transaction rate:           343.03 trans/sec
Throughput:                   0.76 MB/sec
Concurrency:                 15.31
Successful transactions:     20462
Failed transactions:             0
Longest transaction:          1.26
Shortest transaction:         0.00

Much better. With a little bit of tuning, we were able to double our server’s transaction rate.







4 Responses to “Using Siege to Tune Apache on GNU/Linux”

  1. Damien says:

    Good article! Thanks for the tips, I’m trying to improve my Apache configuration and as a simple web-developer I wasn’t aware of those kernel settings and stuffs… :)

  2. Herbert says:

    Hi I tried your tips but it didn’t seem to have any effect at all. My servers are behind a load balancer, I made sure to apply the change on all of them.

    Before tweaking net.ipv4.tcp_tw_recycle

    Transactions: 11269 hits
    Availability: 100.00 %
    Elapsed time: 59.47 secs
    Data transferred: 13.00 MB
    Response time: 0.02 secs
    Transaction rate: 189.49 trans/sec
    Throughput: 0.22 MB/sec
    Concurrency: 3.35
    Successful transactions: 11269
    Failed transactions: 0
    Longest transaction: 3.01
    Shortest transaction: 0.00

    After setting the value to 1

    Transactions: 11199 hits
    Availability: 100.00 %
    Elapsed time: 59.97 secs
    Data transferred: 12.92 MB
    Response time: 0.03 secs
    Transaction rate: 186.74 trans/sec
    Throughput: 0.22 MB/sec
    Concurrency: 5.86
    Successful transactions: 11199
    Failed transactions: 0
    Longest transaction: 3.00
    Shortest transaction: 0.00

    • Jeff Fulmer says:

      Was there evidence that your system wasn’t reclaiming resources? This seems to indicate that it didn’t suffer from that constraint. In my example, an old, under-powered server had difficulty recycling connections and the kernel tweak was in response to that reality.

  3. Null4Ever says:


    Thanks for this article.

    However, may I share information about how to set any Linux distro using optimal settings on “modern” boxes (just remember that mono core CPU ended almost a decade ago)..

    They come from the publication of the Intel experts Bryan Veal and Annie Foong (Performance Scalability of a Multi-Core Server, Nov 2007, page 4/10).

    Add all the following lines to the file: “/etc/sysctl.conf”

    fs.file-max = 5000000
    net.core.netdev_max_backlog = 400000
    net.core.optmem_max = 10000000
    net.core.rmem_default = 10000000
    net.core.rmem_max = 10000000
    net.core.somaxconn = 100000
    net.core.wmem_default = 10000000
    net.core.wmem_max = 10000000
    net.ipv4.conf.all.rp_filter = 1
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.tcp_congestion_control = bic
    net.ipv4.tcp_ecn = 0
    net.ipv4.tcp_max_syn_backlog = 12000
    net.ipv4.tcp_max_tw_buckets = 2000000
    net.ipv4.tcp_mem = 30000000 30000000 30000000
    net.ipv4.tcp_rmem = 30000000 30000000 30000000
    net.ipv4.tcp_sack = 1
    net.ipv4.tcp_syncookies = 0
    net.ipv4.tcp_timestamps = 1
    net.ipv4.tcp_wmem = 30000000 30000000 30000000
    # Optionally, avoid TIME_WAIT states on localhost no-HTTP Keep-Alive tests:
    # “error: connect() failed: Cannot assign requested address (99)”
    # On Linux, the 2MSL time is hardcoded to 60 seconds in /include/net/tcp.h:
    # define TCP_TIMEWAIT_LEN (60*HZ). This option is safe to use in production.
    net.ipv4.tcp_tw_reuse = 1
    # WARNING:
    # --------
    # The option below lets you reduce TIME_WAITs by several orders of magnitude
    # but this option is for benchmarks, NOT for production servers (NAT issues)
    # So, uncomment the line below if you know what you’re doing.
    #net.ipv4.tcp_tw_recycle = 1
    net.ipv4.ip_local_port_range = 1024 65535
    net.ipv4.ip_forward = 0
    net.ipv4.tcp_dsack = 0
    net.ipv4.tcp_fack = 0
    net.ipv4.tcp_fin_timeout = 30
    net.ipv4.tcp_orphan_retries = 0
    net.ipv4.tcp_keepalive_time = 120
    net.ipv4.tcp_keepalive_probes = 3
    net.ipv4.tcp_keepalive_intvl = 10
    net.ipv4.tcp_retries2 = 15
    net.ipv4.tcp_retries1 = 3
    net.ipv4.tcp_synack_retries = 5
    net.ipv4.tcp_syn_retries = 5
    net.ipv4.tcp_moderate_rcvbuf = 1
    kernel.sysrq = 0
    kernel.shmmax = 67108864

    Then add also the 2 following lines to the file: /etc/security/limits.conf

    * soft nofile 1000000
    * hard nofile 1000000

    Then you’ll get a multi core box ready for the war.

    Hope this helps.
