Performance Tuning Using Data from Load Impact

Posted by Load Impact on Apr 7, 2015

When it comes to serving up websites, nothing is more important than fast, dependable service.

Website designers don’t always consider system performance, but the people in charge of maintaining the server it runs on are always looking critically at performance and bottlenecks.

At my company, Binary Computer Solutions, Inc., we’re often involved in both the design and the development of sites, so performance is always top of mind for us and our clients.

When dependability matters, you don’t always have to scale out to redundant servers, multiple databases and replication, several backend web servers, or even a CDN. This example shows a before-and-after comparison of performance tuning on a server.

Test Server Information:

Servers are running on a Xen Virtual Machine.

Site 1

  • Intel Xeon E3-1230v3 CPU @ 3.30GHz (2 cores assigned)
  • 4GB RAM
  • Lighttpd web server in fastcgi mode (running in HTTP only)
  • php-fpm using PHP 5.5.23 with Suhosin v0.9.37.1
  • MySQL Server 5.5.42-37.1
  • 1Gbps uplink

Site 2

  • Intel Xeon E3-1230v3 CPU @ 3.30GHz (2 cores assigned)
  • 4GB RAM
  • Lighttpd web server in fastcgi mode
  • php-fpm using PHP 5.5.23 with Suhosin v0.9.37.1 and Zend OPCache v7.0.4-dev
  • Percona MySQL Server 5.5.42-37.1
  • Memcached v 1.4.22
  • Varnish Reverse Proxy / Caching Server v3.0.6 revision 1799836 (Running 2GB malloc storage)
  • Pound SSL Reverse Proxy (version 2.6)
  • 1Gbps uplink
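To give a sense of how the caching layer on Site 2 fits together: Varnish sits in front of Lighttpd and keeps a 2GB in-memory cache. A startup line along these lines would match the setup described above (the backend port 8080 is an assumption on my part, not something from our config):

```shell
# Illustrative varnishd invocation for Varnish 3.x with 2GB malloc storage.
# Varnish listens on port 80 and forwards cache misses to the web server;
# the backend address/port here are assumptions for the sake of the example.
varnishd -a :80 -b 127.0.0.1:8080 -s malloc,2G
```

With this in place, repeat visits to cacheable pages never touch PHP or MySQL at all, which is a big part of why Site 2 holds up under load.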

Both servers are running the same website. The major difference is Site 2 is a live production site with real traffic on it, on top of the load test. Site 1 has no traffic and is not generally publicly accessible.

Before running the load test, run the vmstat command so you can see the current state of load on the server.

For those who aren’t familiar with vmstat output, the columns to really watch are the “r” column under “procs” and the “wa” column under “cpu”. The “r” value is the number of processes waiting for run time.

The “wa” value is the time spent waiting for IO. High values in either column point to a bottleneck, because the system is stuck waiting before it can run a process.
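As a quick illustration of where those two columns sit, here is how you could pull them out of a vmstat-style line with awk. This assumes the standard procps column layout (with the trailing “st” column, so “r” is field 1 and “wa” is field 16); the sample line is made up for illustration, not real data from the test:

```shell
# Extract the "r" (runnable processes) and "wa" (IO wait) columns from a
# vmstat data line. Sample values below are illustrative only.
sample="3 0 0 123456 7890 45678 0 0 1 2 100 200 10 5 80 5 0"
echo "$sample" | awk '{print "r=" $1, "wa=" $16}'
```

On a healthy, lightly loaded box you want “r” near zero and “wa” in the low single digits.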


I then ran “vmstat 5 84 > /root/vmstat.details”, which takes a snapshot every five seconds, 84 times (seven minutes), and records the system load information into the vmstat.details file.

For the purposes of the test, this is going to be 50 VUs over a duration of five minutes, from two locations. This simulates a very plausible real-life traffic scenario: traffic arriving from multiple locations, triggered by something as simple as posting a link to your blog on social media.


One minute into the test, we have eight Active VUs and are seeing a good, steady stream of requests and activity over 30 active TCP connections. The data transfer rate seems decent, and the server appears to be keeping up. We would expect that, though, given that it’s only eight Active VUs.


After two minutes, we are up to 20 Active VUs, and the server is still responding and handling the connections well.


Unfortunately, after three minutes, and with just 30 Active VUs, the server decided it had had enough.


While the test made it to 50 Active VUs, the server absolutely refused to respond. It died after three minutes. At this point it’s effectively a denial of service. Pack it up; the server is done.

After looking through the vmstat log, it was obvious where the server gave up and stopped responding.
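One quick way to find that point in a vmstat log is to flag the intervals where the run queue or IO wait spikes. The thresholds and the sample data below are illustrative, not from the actual test; in practice you would point the awk command at /root/vmstat.details:

```shell
# Build a tiny sample vmstat log: one healthy interval, one overloaded one.
# These numbers are made up for illustration.
printf '%s\n' \
  "1 0 0 900000 5000 40000 0 0 1 2 100 200 10 5 85 0 0" \
  "25 0 0 120000 5000 40000 0 0 1 2 100 200 60 20 0 20 0" \
  > vmstat.sample

# Flag intervals where the run queue ("r", field 1) or IO wait ("wa", field 16)
# crosses a threshold; thresholds here are arbitrary examples.
awk '$1 > 4 || $16 > 10 { print "bottleneck at interval", NR ":", "r=" $1, "wa=" $16 }' vmstat.sample
```

With a snapshot every five seconds, the interval number tells you almost exactly when in the test the server fell over.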


Even the graph shows the drop-off. So, what happened? It took 20 VUs and just under 80 active TCP connections to bring down this web server. That seems awfully low, yet the numbers don’t lie.

Maybe it’s time for round 2, where we run the same test on our performance-tuned server. Keep in mind: this is a live production server we are running this test on. If anything, we should expect performance that merely equals the server with no usage at all. Here is the vmstat of our production server:


We are going to run the identical test, using 50 VUs over five minutes, and log the vmstat for seven minutes, noting load information every five seconds.


One minute in, we have 13 Active VUs and what looks like a good request rate. We have almost double the number of Active TCP Connections of the first test at the same point, while the number of requests is down by almost half.


After two minutes, we have more Active TCP Connections than the server that died before ever had, and the server is still clicking along, serving up requests.


After three minutes, our Active TCP Connections have bumped up to 120, almost double what crashed our first server.


After four minutes, with 180 Active TCP Connections, the server is still handling requests at a steady rate. We are now up to 36 Active VUs, and seeing good bandwidth usage.


Still around four minutes, we see the Active TCP Connections continue to rise, and the bandwidth usage is starting to get up there, as well as the request rate. Yet the server never hesitates, and continues to serve up the pages.


Nearing the end of our test, we have over 240 Active TCP connections, with a request rate of 65/second and the server is still not slowing down.


As you can see from the chart, the server actually managed to serve all the clients, and the VU load time, with two minor exceptions, never exceeds two seconds.

Simple performance tuning allows a site to sustain load without so much as slowing down a user on the website. The impact to the server was minimal under load, and it certainly didn’t crash. Remember, this is all without a CDN.
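Much of that tuning is just enabling the pieces listed in the Site 2 stack. The opcode cache is a good example: Zend OPcache keeps compiled PHP bytecode in shared memory so scripts aren’t recompiled on every request. A php.ini fragment along these lines would turn it on (the values here are common illustrative settings, not our actual configuration):

```ini
; Illustrative Zend OPcache settings; values are typical defaults,
; not the tuning used on the production server in this article.
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
```

On a PHP-heavy site, this single change often removes a large share of the per-request CPU cost.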

One more VERY important note: the server without performance tuning was running standard HTTP. The performance-tuned server was running strictly HTTPS. (Yes, we are running a fast, dependable website on HTTPS.)
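The HTTPS termination on Site 2 is handled by Pound, which decrypts TLS and passes plain HTTP back to Varnish. A minimal Pound listener would look roughly like this; the certificate path and backend port are assumptions for illustration, not our real configuration:

```
# Illustrative Pound 2.x HTTPS front end. Paths and ports are assumed.
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/site.pem"
    Service
        BackEnd
            Address 127.0.0.1
            Port    80
        End
    End
End
```

Offloading TLS to a dedicated proxy like this is one reason the HTTPS server in round 2 could absorb the crypto overhead and still outperform the untuned HTTP server.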

The data gathered from Load Impact makes it clear that the performance tuning and optimizations we’re making are working, and that can only mean good things for us and our clients.

— Blaine Bouvier is a contributing writer for Load Impact and the owner of Binary Computer Solutions, Inc., a website design, development, and performance company. Blaine has been working in the technology field for over 10 years, and is currently developing several high-end, high-performance websites for clients.

