I want to see if there is an elegant way to create a mechanism that can track whether the runtime of a function has degraded for a particular test, release over release.
Say my software has 100 high-level functions; I would like to see which functions have degraded in runtime from release to release. Assume I am running the same test across releases and logging the runtime of the top-level (100 high-level) functions in a text file for comparison. A few people in other threads suggested using a macro, but wrapping 100 function calls in a macro is ugly and painful. Is there a better way to solve this problem?
If you created your unit tests correctly (if at all), you can create a macro to define your tests and automatically log the time for each one. Since ideally each unit test exercises as little distinct functionality as possible, if one of your tests takes longer than expected by a certain margin you can flag it in your data, and looking at that test will tell you what is slowing down (usually whatever it tests).
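A minimal sketch of this idea, assuming C++: a hypothetical `TIMED_TEST` macro that defines the test function and wraps its body with timing once, at the definition site, so no per-call wrapping is needed. The macro name, the global `g_last_elapsed_us`, and the output format are all illustrative assumptions.

```cpp
#include <chrono>
#include <cstdio>

// Elapsed time of the most recently run test (illustrative assumption).
static long long g_last_elapsed_us = 0;

// Hypothetical macro: defines a test whose wall-clock time is measured
// and printed automatically in a machine-parsable "<name> <us> us" form.
#define TIMED_TEST(name)                                              \
    static void name##_body();                                        \
    static void name() {                                              \
        auto t0 = std::chrono::steady_clock::now();                   \
        name##_body();                                                \
        auto t1 = std::chrono::steady_clock::now();                   \
        g_last_elapsed_us = std::chrono::duration_cast<               \
            std::chrono::microseconds>(t1 - t0).count();              \
        std::printf("%s %lld us\n", #name, g_last_elapsed_us);        \
    }                                                                 \
    static void name##_body()

// Example test: the body is written exactly like a normal function body.
TIMED_TEST(test_accumulate) {
    volatile long sum = 0;
    for (int i = 0; i < 100000; ++i) sum += i;
}
```

Calling `test_accumulate()` then prints one log line per run; collecting those lines per release gives you the comparison data without touching the 100 functions themselves.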
- Time your functions and write the results to a log file.
- Compare the log file for each release with the results from earlier releases.
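The first step above can be done without macros at all, assuming C++: a small RAII timer whose destructor appends one `"<function> <microseconds>"` line to a log file, so instrumenting a function costs a single line. The class name `ScopedTimer` and the log format are assumptions for illustration.

```cpp
#include <chrono>
#include <fstream>
#include <string>

// Hypothetical RAII timer: constructed at the top of a function, it logs
// the elapsed time when the scope ends, even on early returns.
class ScopedTimer {
public:
    ScopedTimer(const std::string& name, const std::string& logPath)
        : name_(name), logPath_(logPath),
          start_(std::chrono::steady_clock::now()) {}

    ~ScopedTimer() {
        auto end = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      end - start_).count();
        // Append "<function> <microseconds>" so releases can be diffed.
        std::ofstream log(logPath_, std::ios::app);
        log << name_ << ' ' << us << '\n';
    }

private:
    std::string name_;
    std::string logPath_;
    std::chrono::steady_clock::time_point start_;
};

// Instrumenting one of the 100 top-level functions takes one added line:
void highLevelFunction() {
    ScopedTimer t(__func__, "runtimes.log");
    // ... real work ...
}
```

Because the destructor does the logging, this is less intrusive than wrapping every call site, which is the complaint in the question.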
You can also use automated tools for the job, like gprof. Other profiling tools are listed on Wikipedia.
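The comparison step in the list above can also be automated. A sketch, assuming logs with `"<function> <microseconds>"` lines as produced above; the function names and the 1.2x regression threshold are assumptions to tune for your own noise level.

```cpp
#include <fstream>
#include <iostream>
#include <map>
#include <string>

// Read "<function> <microseconds>" lines into a name -> time map.
std::map<std::string, long long> readLog(const std::string& path) {
    std::map<std::string, long long> times;
    std::ifstream in(path);
    std::string name;
    long long us;
    while (in >> name >> us) times[name] = us;
    return times;
}

// Print every function that got slower than `threshold` times its old
// runtime between two releases' logs (hypothetical helper).
void reportRegressions(const std::string& oldLog, const std::string& newLog,
                       double threshold = 1.2) {
    auto oldTimes = readLog(oldLog);
    auto newTimes = readLog(newLog);
    for (const auto& [name, us] : newTimes) {
        auto it = oldTimes.find(name);
        if (it != oldTimes.end() && us > it->second * threshold)
            std::cout << name << " degraded: " << it->second
                      << " us -> " << us << " us\n";
    }
}
```

Running this as part of the release test job turns the log files into an automatic regression report instead of a manual diff.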