What’s holding you back?
When you hit the limit of how much load your website can handle, you almost always want to know what is holding you back. You already know you've reached the limit, but which part needs to change in order to go higher?
The more load a website gets, the more resources it consumes. One of the many types of resources the server needs to function will run out before the others. Sure, an extremely well-balanced server setup would run out of all resource types at the same time, but that's probably not very common. To figure out which resource type is causing the bottleneck, you need to look at different things. Loadimpact.com offers several interesting performance metrics that will reveal what's holding you back. Then of course, as soon as you fix that, the next bottleneck becomes visible, but that's another blog post.
In this post, I'll share some information about how you can determine whether your website's performance is held back by bandwidth issues, and a bit about what you can do to solve it.
How do I know it's the bandwidth?
Depending on where you host your website, you may have access to tools and graphs from the hosting company that can give you a lot of information. But assuming you don't have such tools available, let's look at how you can use Loadimpact.com to tell.
To be able to show you, I created a very simple website that is severely bandwidth limited. The website contains one single .html file; there is absolutely no Python, PHP, Java, Perl or anything similar involved at all. The file is called heavy.html and contains roughly 16 MB of the letter A. When lots of concurrent users request heavy.html, a lot of bits have to leave the web server all at the same time. This is the graph from the test (click to enlarge):
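For the record, a test file like that takes one line to create. Here's a sketch of how such a page could be generated (the name heavy.html and the 16 MB size are from the setup described above):

```shell
# Create a ~16 MB file containing nothing but the letter A
# (yes repeats "A\n", tr strips the newlines, head cuts at 16,000,000 bytes)
yes A | tr -d '\n' | head -c 16000000 > heavy.html
wc -c heavy.html
```

Serving this from any plain web server gives you a workload that is almost pure bandwidth, with next to no CPU or disk involvement.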
The graph reveals two interesting things. First of all, if you didn't know already, you can add more than the two standard data series to your Loadimpact graphs. By default, Loadimpact gives you the number of active clients and the average response time. In this case, I've added the Bandwidth data series.
Second, the bandwidth graph pinpoints exactly what I was hoping for: bandwidth usage hits a plateau at roughly 70 Mbit/s. This means that somewhere between the software on my test server and the software on the measuring probe, there is a bandwidth limitation of about 70 Mbit/s. It's important to point out that this result doesn't reveal the exact location of the bottleneck; it just tells you it's there. To make sure the bottleneck is actually in your hosting environment, you should run the same test from different test servers. Loadimpact currently offers 8 different load zones, each in a different geographic location. Make sure you run tests from different load zones, or, even more interesting, add 4-5 load zones to the same test. If you still see a plateau at the same bandwidth usage, you can be fairly sure that you've found your limit.
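A quick back-of-envelope calculation shows what a 70 Mbit/s ceiling means for a 16 MB page (idealized numbers, ignoring protocol overhead):

```shell
# What does a 70 Mbit/s ceiling mean for a 16 MB page?
awk 'BEGIN {
  ceiling_bps = 70000000 / 8     # 70 Mbit/s expressed in bytes per second
  page_bytes  = 16000000         # size of heavy.html
  printf "one user, full pipe: %.1f s per page load\n", page_bytes / ceiling_bps
  printf "50 concurrent users: %.0f s per page load\n", 50 * page_bytes / ceiling_bps
}'
```

Even one user with the whole pipe to themselves waits almost two seconds, and the wait grows linearly with concurrency. That is why the response time curve climbs as soon as the bandwidth series flattens out.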
And don't worry if you've already run a series of tests with Loadimpact without adding bandwidth to the graph. The data is still stored on our servers, and you can add the bandwidth series when looking at older tests as well. So you might already have interesting data to analyze.
Ok, so what do I do about it?
If you are held back by a bandwidth limitation, the next step is obviously to try to do something about it. There are many potential ways to reduce your bandwidth needs.
Use compression
Make sure you use compression like gzip or deflate. By compressing the content before it's sent from the server to the browser, you pay with some CPU resources to save bandwidth. It's safe to enable, since the server will only send compressed content if the browser says it can handle it. Check whether your website uses compression with our Page Analyzer service. Enter the URL you want to test and, when the result comes back, click the green plus sign to expand:
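To see just how much compression can buy you, here's a quick illustration using the same 16 MB of the letter A as in the test above. Repetitive text is an extreme case, but ordinary HTML, CSS and JavaScript also compress very well:

```shell
# gzip 16 MB of the letter A and count what would actually go over the wire
yes A | tr -d '\n' | head -c 16000000 | gzip -c | wc -c
# the result is a tiny fraction of the original 16,000,000 bytes
```

In other words, had compression been enabled on my test server, the 70 Mbit/s pipe would have carried the compressed bytes instead, and the plateau would have appeared at a far higher number of concurrent users.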
Reduce image quality
It may sound backwards, but a lot of websites send images to the browser at 300 DPI. That's great if the user wants to print the image, but most images are just displayed on the actual web page, where 72-96 DPI is sufficient. Not that the term DPI means all that much on web pages, but still. A good text about the why and how can be found here: http://www.webdesignerdepot.com/2010/02/the-myth-of-dpi/
Make sure your cache settings are correct.
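Every request the browser can answer from its own cache is bandwidth your server never has to spend. What "correct" looks like depends on your stack, but as a sketch, if you run Apache with mod_expires enabled, far-future expiry headers for static assets can look like this:

```apache
# Hypothetical Apache snippet (requires mod_expires) that tells browsers
# to reuse static assets instead of re-fetching them on every page view
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png  "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType text/css   "access plus 1 week"
</IfModule>
```

The exact lifetimes are illustrative; pick values that match how often your assets actually change.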
Use a CDN.
A method that actually covers a lot of the above tips in one go is to use a content delivery network (CDN). A CDN provider will store your static content on their servers and serve it for you. Unless you are one of the bigger Internet companies, chances are that the CDN provider has more bandwidth available than you do. They almost always have more than one physical location too, so that a user from Spain gets the content from a server in or near Spain, while a UK user gets it from a server in the UK. The end result is that the user gets the content faster and your server never has to see the traffic. The better CDN providers can also do some of the things mentioned above, like minifying or even image quality reduction, automatically for you. So chances are you end up saving both time and bandwidth.
Opinions? Questions? Tell us what you think in the comments below.