Concurrency and the Single Siege

We’re frequently asked about concurrency. When a siege run finishes, one of the statistics it reports is “Concurrency,” expressed as a decimal number. This stat is known to make eyebrows furrow. People want to know, “What the hell does that mean?”

In computer science, concurrency is a trait of systems that handle two or more simultaneous processes. Those processes may be executed by multiple cores, processors or threads. From siege’s perspective, they may even be handled by separate nodes in a server cluster.

When the run is over, we try to infer how many processes, on average, were executed simultaneously by the web server. The calculation is simple: total transactions divided by elapsed time. If we did 100 transactions in 10 seconds, then our concurrency was 10.00.
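As a sketch, that arithmetic can be checked in a couple of lines of shell. The figures here are the hypothetical run above, not real siege output:

```shell
# Hypothetical figures from a finished run: 100 transactions
# completed over an elapsed time of 10 seconds.
transactions=100
elapsed=10

# Concurrency as described above: transactions / elapsed time.
awk -v t="$transactions" -v e="$elapsed" \
    'BEGIN { printf "Concurrency: %.2f\n", t / e }'
# prints "Concurrency: 10.00"
```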

Bigger is not always better

Generally, web servers are prized for their ability to handle simultaneous connections. Maybe your benchmark run was 100 transactions in 10 seconds. Then you tuned your server and your final run was 100 transactions in five seconds. That is good. Concurrency rose as the elapsed time fell.

But sometimes high concurrency is a trait of a poorly functioning website. The longer it takes to process a transaction, the more likely requests are to queue. When the queue swells, concurrency rises. The reasons for this rise can vary. An obvious cause is load: if a server has more connections than thread handlers, requests are going to queue. Another is competence: poorly written apps can take longer to complete than well-written ones.
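One way to see why slower transactions swell concurrency is Little’s law (average requests in flight = arrival rate times response time). That framing isn’t from the original calculation above, but it’s consistent with it; the numbers below are made up for illustration:

```shell
# Same arrival rate, two hypothetical response times. By Little's
# law, requests in flight = rate * response_time, so the slower
# app holds four times as many requests open at once.
rate=10   # requests per second
for response_time in 0.5 2.0; do
    awk -v r="$rate" -v w="$response_time" \
        'BEGIN { printf "%.1fs response -> %.0f in flight\n", w, r * w }'
done
# prints "0.5s response -> 5 in flight"
#        "2.0s response -> 20 in flight"
```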

We can illustrate this point with a simple example. I ran siege against a two-node clustered website and my concurrency was 6.97. Then I removed a node and ran the same test against the same page. My concurrency rose to 18.33 while my elapsed time grew by 65%.

Sweeping conclusions

Concurrency must be evaluated in context. If it rises while the elapsed time falls, then that’s a Good Thing™. But if it rises while the elapsed time increases, then Not So Much™. When you reach the point where concurrency rises and elapsed time is extended, it might be time to consider more capacity.


3 Responses to “Concurrency and the Single Siege”

  1. Priom says:

    Hi,

    Is a transaction equivalent to a hit or a pageview?

    Suppose I want to test 50 million people requesting my site at the same time. What should my configuration be?

    • Jeff says:

      A transaction is a hit.

      Assuming you have enough bandwidth to accommodate 50 million simultaneous hits, your configuration should be a large pile of money on which to sleep because you’ll soon be a very wealthy individual.

      Seriously, I don’t recommend scheduling more than 1024 hits per second with siege so if you really need to scale to 50 million, this isn’t the tool for you. But if you add that capacity, we’ll take a patch.

      To increase simulated users, I’d recommend forking additional processes. Each fork could then spawn its own thread pool. 1024 child processes each spawning 1024 threads will give you a million users per server. Then all you need is 50 sieging computers to generate 50 million hits.
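The fork-and-thread-pool scheme described in that reply could be sketched in shell. The URL and counts here are hypothetical, and it assumes siege is installed:

```shell
# Sketch: fork several siege processes, each simulating its own
# pool of users, then wait for all of them to finish.
PROCS=4                     # processes to fork (the reply suggests up to 1024)
USERS=256                   # simulated users per process
URL="http://example.com/"   # hypothetical target

for i in $(seq 1 "$PROCS"); do
    siege -c "$USERS" -t 60S "$URL" &
done
wait   # total simulated users: PROCS * USERS
```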

  2. hotfloppy says:

    So concurrency is not the same as concurrent users? If I want to simulate 2000 users hitting my site at the same time, is this the correct way?

    $ siege -c 2000 -d 1 -r 1 http://mysite.com/path

    Thanks.
