How do you test performance of code between software release versions?

I am developing software in C#/.NET but I guess the question can be asked for other programming languages as well. How do you test performance of your software between release versions? Let me elaborate some more.

Before the production release of the software, I would like to compare the performance of the software for a set of functions which were available in the previous release. Let's assume that we are talking about a software library project (no GUI) which results in the release of one or more DLLs. How would one achieve this? What are some of the best practices? I cannot simply swap the current DLL with the DLL from the previous release and run the same test.

One way I can think of is to add the same performance test to both the main branch (which is used for the current release) and the earlier release branch, and then compare the results. I think there is some pain involved in doing this, but it is possible.

Another way I can think of is to start with the last release branch, stub out the new code and features that have been added since the last release, and then run the test. I do not think this would produce correct results, not to mention that this approach is even more painful than the previous one.

Thanks for any other ideas. I would prefer C#/.NET-specific answers.

Edit 1: This and this are somewhat related questions.


We have a suite of performance tests. These are just NUnit tests. Each test sets up some objects, starts a timer (Stopwatch works well), does the operation we're interested in (e.g., loading the data for a certain screen), and then writes the elapsed time to a CSV file. (NUnit logs how long each test takes, but we want to exclude the setup logic, which in some cases will vary from test to test -- so doing our own timers and logging makes more sense.)
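
A minimal sketch of that kind of test, assuming NUnit's attribute-based test style; the DataLoader class, the operation being timed, and the CSV layout are illustrative stand-ins rather than the answerer's actual code:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ScreenLoadPerformanceTests
{
    private const string ResultsFile = "perf-results.csv";

    [Test]
    public void LoadCustomerScreenData()
    {
        // Setup that should not count towards the measured time.
        var loader = new DataLoader();   // hypothetical class under test

        var stopwatch = Stopwatch.StartNew();
        loader.LoadCustomerScreen();     // the operation we're interested in
        stopwatch.Stop();

        // Append one row per run: timestamp, test name, elapsed milliseconds.
        File.AppendAllText(ResultsFile,
            $"{DateTime.UtcNow:O},{nameof(LoadCustomerScreenData)},{stopwatch.ElapsedMilliseconds}{Environment.NewLine}");
    }
}
```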

We run these tests from time to time, always on the same hardware and network environment. We import the results into a database. Then it's easy to build graphs that show trends, or that call out large percentage changes.


If you want to actually compare performance between releases, then you will need some test that performs the same functions across releases. Unit tests often work well for this.

Another, more active, thing you can do is instrument your code with logging based on pre-defined performance thresholds. For example, running the code in the old version gives you a baseline metric. Then add timing code to your application so that if the same function takes a certain amount longer at any point in time, it logs a message (or broadcasts an event which the caller can optionally log). Of course, you don't want to overdo this to the extent that the timing code could itself cause a performance degradation.
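
A rough sketch of that kind of threshold check, using a small helper class; the Timed.Run name and the Console.WriteLine call are made up for illustration (in a real application you would route the message to whatever logging facility you already use):

```csharp
using System;
using System.Diagnostics;

public static class Timed
{
    // Runs an operation and logs a warning when it exceeds the given threshold.
    public static void Run(string operationName, TimeSpan threshold, Action operation)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            operation();
        }
        finally
        {
            stopwatch.Stop();
            if (stopwatch.Elapsed > threshold)
            {
                // Stand-in for whatever logging you already have in place.
                Console.WriteLine(
                    $"WARN: {operationName} took {stopwatch.ElapsedMilliseconds} ms " +
                    $"(threshold {threshold.TotalMilliseconds} ms)");
            }
        }
    }
}

// Usage, e.g. around a database call:
// Timed.Run("LoadOrders SQL", TimeSpan.FromMilliseconds(200), () => repository.LoadOrders());
```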

We do this in our applications with SQL calls. We have a threshold for the maximum time any single SQL call should take, and if a SQL call exceeds the threshold we log it as a warning. We also track the number of SQL calls in a given HTTP request in the same way. Your goal should be to reduce the thresholds over time.

You can wrap these checks in #if sections so they are not included in production, but it can also be really useful to have them in production.
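
For the #if approach, a minimal sketch assuming a PERF_TRACKING compilation symbol; the symbol name, the OrderService class, and the threshold are all illustrative:

```csharp
using System;
using System.Diagnostics;

public class OrderService
{
    private const long MaxMilliseconds = 200;   // illustrative threshold

    public void ProcessOrders()
    {
#if PERF_TRACKING
        var stopwatch = Stopwatch.StartNew();
#endif
        DoProcessOrders();
#if PERF_TRACKING
        stopwatch.Stop();
        if (stopwatch.ElapsedMilliseconds > MaxMilliseconds)
            Console.WriteLine($"WARN: ProcessOrders took {stopwatch.ElapsedMilliseconds} ms");
#endif
    }

    private void DoProcessOrders()
    {
        // The real work would go here.
    }
}
```

When the PERF_TRACKING symbol is not defined in the build, the timing code compiles away entirely, so production builds pay no overhead at all.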


You could append the test results to a source-controlled text file for each new release. That would give you an easily accessible history of every version's performance.
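
A small sketch of what appending one line per run could look like; the file name, column order, and use of the assembly version as the release identifier are assumptions for illustration:

```csharp
using System;
using System.IO;
using System.Reflection;

public static class PerfHistory
{
    // Appends one line per measurement: assembly version, test name, elapsed ms.
    public static void Append(string testName, long elapsedMs)
    {
        var version = Assembly.GetExecutingAssembly().GetName().Version;
        File.AppendAllText("perf-history.csv",
            $"{version},{testName},{elapsedMs}{Environment.NewLine}");
    }
}
```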

Your idea of running the performance test against the branch and trunk is essentially the same, but saving the results would probably save you the effort of switching your working copy back and forth.


We have a special setting that a user (or tester) can enable. When we enable it, it generates a CSV file which we can feed into Excel to see the performance report.

It'll report individual counts of certain operations and how long they took. Excel shows this in a nice visual way for us.
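
A rough sketch of what such a tracker might look like, assuming a simple global switch for the setting; all the names here are made up for illustration and are not the answerer's actual code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;

public static class PerfTracker
{
    // The user/tester-facing setting; when off, tracking adds almost no work.
    public static bool Enabled { get; set; }

    private static readonly ConcurrentDictionary<string, (long Count, long TotalMs)> Stats =
        new ConcurrentDictionary<string, (long Count, long TotalMs)>();

    // Wraps an operation, counting how often it ran and how long it took in total.
    public static void Record(string operation, Action action)
    {
        if (!Enabled) { action(); return; }

        var sw = Stopwatch.StartNew();
        action();
        sw.Stop();

        Stats.AddOrUpdate(operation,
            (1L, sw.ElapsedMilliseconds),
            (_, s) => (s.Count + 1, s.TotalMs + sw.ElapsedMilliseconds));
    }

    // Dumps everything collected so far as a CSV that Excel can open directly.
    public static void WriteReport(string path)
    {
        IEnumerable<string> lines = Stats.Select(kv => $"{kv.Key},{kv.Value.Count},{kv.Value.TotalMs}");
        File.WriteAllLines(path, new[] { "Operation,Count,TotalMs" }.Concat(lines));
    }
}

// Usage:
// PerfTracker.Enabled = true;
// PerfTracker.Record("LoadCustomers", () => customerService.LoadAll());
// PerfTracker.WriteReport("perf-report.csv");
```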

All the code is custom; the only disadvantage is the overhead of the performance-tracking code, but we benchmarked it and it's virtually nothing. It's well optimized and very short.

The beauty of this approach is also that you can get good feedback from customers if they are experiencing performance issues that you cannot reproduce.
