Ugh. I swear there is some good excuse for why I haven’t blogged about QA for almost two months. Probably because I’ve actually been doing QA, and then some. In addition to my day job, where we have a delivery deadline approaching on multiple projects, I’ve also been working on some freelance stuff. Then there’s the whole matter of some hairy issues in my personal life, plus being a mom, and…well, that’s where the time goes.
As for the “day job,” I’ve been doing load testing again recently. This time we’ve implemented a content switch to load balance our two web application servers, so I tested performance at various load levels on the switch, verified that performance didn’t suffer when one app server went down and traffic was rerouted, and compared performance through the switch versus going direct to the web server (surprisingly, almost identical).
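For the curious, the rerouting behavior under test can be sketched in a few lines of Python. This is a toy round-robin balancer, purely illustrative — the backend names and methods are made up for the sketch, not the actual content switch configuration:

```python
# Toy model of the failover behavior a content switch provides:
# round-robin across healthy backends, skipping any marked down.
from itertools import cycle

class Balancer:
    def __init__(self, backends):
        self.up = {b: True for b in backends}   # health status per backend
        self._ring = cycle(backends)            # round-robin order

    def mark_down(self, backend):
        """Simulate a backend failure (e.g., one app server going down)."""
        self.up[backend] = False

    def pick(self):
        """Return the next healthy backend, round-robin."""
        for _ in range(len(self.up)):
            b = next(self._ring)
            if self.up[b]:
                return b
        raise RuntimeError("no healthy backends")

if __name__ == "__main__":
    lb = Balancer(["app1", "app2"])
    print(lb.pick(), lb.pick())  # alternates between app1 and app2
    lb.mark_down("app2")
    print(lb.pick(), lb.pick())  # all traffic now reroutes to app1
```

The load test then just hammers the balancer before and after `mark_down` and compares response times — which is essentially what the switch testing above did at the network level.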
There are two kinds of management on projects like this – the kind that understands the real numbers and wants to know about them in detail, and the kind that wants to check off a box that says “load testing” and look at some pretty graphs and charts. Luckily, I’m able to satisfy both types. I do my load testing with JMeter, and generate some results data directly from that. For something simpler and prettier, I plug a few numbers into Excel and generate something like this:
This shows the increasing number of samples (HTTP requests) thrown at the switch as a larger and larger number of users (threads) loops through the test, alongside the average response time at each data point. So when there are 100 users, the average response time is 42 ms; when that increases to 300 users, the response time is 154 ms; and so on.
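If you want to see the ramp-up pattern itself rather than the Excel chart, here’s a minimal sketch in Python: fire N concurrent requests, time each one, and average. The request function is a stand-in that just sleeps — a real test (like the JMeter runs above) would hit the actual server — and the user counts are arbitrary:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for an HTTP request: sleep briefly, return elapsed ms.
    A real load test would call the server here instead."""
    start = time.perf_counter()
    time.sleep(0.005)  # simulated server work
    return (time.perf_counter() - start) * 1000

def run_load_step(users):
    """Fire `users` concurrent requests; return average response time in ms."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(lambda _: fake_request(), range(users)))
    return sum(times) / len(times)

if __name__ == "__main__":
    for users in (10, 50, 100):  # ramp up the concurrent-user count
        print(f"{users:>4} users: avg {run_load_step(users):.1f} ms")
```

Each pass through the loop is one data point like those in the chart: a user count paired with the average response time it produced.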
Is this a good or bad result? I don’t know. I’m not a statistician or an analyst; I just run the tests and report the results. In my opinion, you could look at it either way. I will say that the levels of concurrent users in this test were much higher than what the server will realistically deal with in production.
What do you think?