We’re all about performance testing here, and we love to empower developers to intelligently build apps with performance in mind. But performance testing isn’t enough: it works hand-in-hand with performance monitoring. Here’s how.
Performance testing does what the name implies: it measures your project’s performance. In a performance test, you generate “load” - simulated users and usage - to gauge how your site or app holds up.
Generally speaking, the major variable you watch in performance testing is response time. For a website, it’s page response time: under three seconds at minimum, ideally under two. Research shows those are the thresholds past which revenue nosedives. For an app - desktop or mobile - the same thresholds apply before user frustration sets in. For an API, even though it’s consumed by other code rather than by a human, response time still needs to be as short as possible, lest your API customers find a faster provider because your service is a bottleneck.
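As a rough illustration (the function name and verdicts here are ours, not any tool’s API), a check like this captures those two- and three-second marks:

```python
# Hypothetical sketch: classify a measured page response time against
# the two-second (ideal) and three-second (minimum) thresholds above.

def classify_response_time(seconds: float) -> str:
    """Return a verdict for a single page response time."""
    if seconds < 2.0:
        return "ideal"        # under two seconds: users stay happy
    if seconds < 3.0:
        return "acceptable"   # under three seconds: the bare minimum
    return "too slow"         # revenue-nosedive territory

print(classify_response_time(1.4))  # -> ideal
print(classify_response_time(2.5))  # -> acceptable
print(classify_response_time(4.1))  # -> too slow
```

In practice the measured value would come from timing real requests against your site, app, or API.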
Full performance tests take some time and are best done with a daily or other regular build (but probably not more often than daily). A full performance test can illuminate where code changes affect performance.
When you set a performance benchmark - the response time that’s acceptable under normal traffic and normal use - performance testing can tell you whether that benchmark is being met. You can also test peak traffic and usage, and watch for potential bottlenecks in your own project.
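The idea of checking a benchmark under both normal and peak load can be sketched like this - a toy example, assuming a stand-in `fake_request` in place of a real HTTP call to your site or API:

```python
# Hypothetical sketch: run the same check under "normal" and "peak"
# simulated-user counts and compare against a response-time benchmark.
import time
from concurrent.futures import ThreadPoolExecutor

BENCHMARK_SECONDS = 2.0  # acceptable response time under normal traffic

def fake_request(user_id: int) -> float:
    """Placeholder for a real request; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for network + server time
    return time.perf_counter() - start

def run_load(simulated_users: int) -> float:
    """Fire one request per simulated user; return the worst response time."""
    with ThreadPoolExecutor(max_workers=simulated_users) as pool:
        times = list(pool.map(fake_request, range(simulated_users)))
    return max(times)

for label, users in (("normal", 10), ("peak", 50)):
    worst = run_load(users)
    status = "OK" if worst <= BENCHMARK_SECONDS else "benchmark exceeded"
    print(f"{label}: {users} users, worst {worst:.3f}s -> {status}")
```

A real load-testing tool handles ramp-up, think time, and far larger user counts, but the comparison at the end - worst observed response time versus your benchmark - is the same.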
Performance testing illuminates performance issues, which you can fix through performance tuning. Those performance issues are often exposed via performance monitoring during a test. Let’s explain.
Performance testing can be performed by any team under the general “DevOps” umbrella - developers, operations, or testing - though in our experience it’s usually developers and QA engineers who handle it. Performance monitoring, in contrast, is most often performed by the “ops” part of “DevOps” - in bigger organizations, by a dedicated operations team.
During a performance test, performance is closely monitored using performance monitoring tools. Often those are called analytics tools, instrumentation, and so forth. (A nice list is here).
Performance monitoring generally focuses on the entire system: server health and response time, network response time, app server health and response time, cloud performance, database performance, app performance, caching, and so on. Performance testing - load testing - by contrast, often focuses only on the app, site, or API. Why? Because that’s where the user interacts with the entire system, so it’s the best place to test the client or user experience.
While performance testing tools can illuminate potential areas to tune, we recommend using performance monitoring tools during a test, or at least comparing monitoring logs to performance test logs. This can help you pinpoint the performance issues you discover. For example, perhaps your cloud instance hits its peak performance under a relatively small number of simulated users, and you may need to tune its settings to allow greater throughput or CPU cycles under load. Or perhaps your network infrastructure becomes clogged under too much inter-server communication. Or maybe it’s a hardware issue (can’t we always blame it on a hardware issue?).
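Correlating the two kinds of logs can be as simple as lining up timestamps. This sketch uses made-up field names and data - not any particular tool’s schema - to show the idea: find the moments when responses slowed down and see what the monitored system was doing then.

```python
# Hypothetical sketch: line up performance-test results with monitoring
# samples by timestamp to spot what the system was doing when responses
# slowed down. All field names and values here are illustrative.

test_log = [  # (timestamp, response_seconds) from the load test
    (100, 0.8), (110, 0.9), (120, 3.5), (130, 4.1),
]
monitor_log = [  # (timestamp, cpu_percent) from the monitoring tool
    (100, 35), (110, 40), (120, 97), (130, 99),
]

cpu_at = dict(monitor_log)  # timestamp -> CPU sample
slow = [(ts, rt, cpu_at.get(ts)) for ts, rt in test_log if rt > 3.0]
for ts, rt, cpu in slow:
    print(f"t={ts}: {rt:.1f}s response with CPU at {cpu}%")
```

Here the slow responses coincide with the CPU pegging near 100%, which points at the cloud instance rather than, say, the network - exactly the kind of pinpointing that comparing the two logs makes possible.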
Good luck with your performance testing and monitoring. We’re here to help you integrate these systems - just ask!