
Computationally intensive algorithms

开发者 https://www.devze.com 2023-02-18 12:20 Source: web
I am planning to write a bunch of programs on computationally intensive algorithms. The programs would serve as an indicator of different compiler/hardware performance.


I would want to pick a common set of algorithms used in different fields, like bioinformatics, gaming, image processing, et al. The reason I want to do this is to learn the algorithms and have a personal mini benchmark suite that is small, useful, and easy to maintain.

Any advice on algorithm selection would be tremendously helpful.


Benchmarks are not worthy of your attention!

The very best guide to processor performance is: http://www.agner.org/optimize/

And then someone will chuck it in a box with 3GB of RAM, defeating dual-channel hopes, and your beautifully tuned benchmark will give wildly different results again.

If you have a piece of performance-critical code and you are sure you've picked the winning algorithm, you can't then use a generic benchmark to determine the best compiler. You have to actually compile your specific piece of code with each compiler and benchmark it with that. And the results you gain, whilst useful for you, will not extrapolate to others.

Case in point: people who make compression software - like zip and 7zip, and the high-end stuff like PPM variants and context mixing - are very careful about performance and benchmark their programs. They hang out on www.encode.ru.

And the situation is this: engineers developing the same basic algorithm - say LZ, or entropy coding like arithmetic coding and Huffman - all find that very different compilers are better.

That is to say, two engineers solving the same problem with the same high-level algorithm will each benchmark their implementation and have results recommending different compilers...

(I have seen the same thing repeat itself in competition programming, e.g. Al Zimmermann's Programming Contests, which is an equally performance-attentive community.)

(The newer GCC 4.x series is very good all round, but that's just my data point; others still favour ICC.)

(Platform benchmarks for IO-related tasks are altogether another thing; people don't appreciate how differently Linux, Windows, and FreeBSD (and the rest) perform when under stress. And benchmarks there - the same workload on the same machine, on different machines, or with different core counts - would be very generally informative. There aren't enough benchmarks like that about, sadly.)


There was some work done at Berkeley on this a few years ago. They identified 13 common application patterns for parallel programming, the "13 Dwarfs". They include things like linear algebra, n-body models, FFTs, etc.

http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html

See page 10 onwards.

There are some sample .NET implementations here:

http://paralleldwarfs.codeplex.com/
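As a flavour of how compact one dwarf's kernel can be, here is a minimal sketch of the spectral-methods dwarf: a recursive radix-2 Cooley–Tukey FFT in plain Python (my own illustration, not taken from the Berkeley suite):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                      # transform of even-indexed samples
    odd = fft(x[1::2])                       # transform of odd-indexed samples
    # twiddle factors combine the two half-size transforms
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + t[k] for k in range(n // 2)] + \
           [even[k] - t[k] for k in range(n // 2)]
```

For benchmarking purposes you would time it on progressively larger power-of-two inputs; a tuned C implementation would restructure this iteratively, but the arithmetic pattern is the same.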


The typical one is Fast Fourier Transform, perhaps you can also do something like the Lucas–Lehmer primality test.
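For reference, the Lucas–Lehmer test itself is only a few lines; a sketch in Python (assuming p is an odd prime):

```python
def lucas_lehmer(p):
    """Return True iff the Mersenne number 2**p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1           # the Mersenne number M_p
    s = 4                      # seed of the Lucas-Lehmer sequence
    for _ in range(p - 2):
        s = (s * s - 2) % m    # big-integer squaring dominates for large p
    return s == 0
```

The repeated big-integer squaring modulo M_p is exactly the kind of arbitrary-precision arithmetic workload that exercises both the compiler and the multiplication routines underneath.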


I remember a guy who tested the computational performance of machines and compiler versions by inverting Hilbert matrices.
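A sketch of what such a test might look like: build an n×n Hilbert matrix and invert it by Gauss–Jordan elimination, in pure Python (my own illustration, not the original tester's code):

```python
def hilbert(n):
    """The n-by-n Hilbert matrix, H[i][j] = 1 / (i + j + 1)."""
    return [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]

def invert(a):
    """Invert a square matrix via Gauss-Jordan elimination with partial pivoting."""
    n = len(a)
    # augment with the identity: [A | I] -> [I | A^-1]
    m = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        for r in range(n):
            if r != col and m[r][col] != 0.0:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]
```

Hilbert matrices are notoriously ill-conditioned, so this doubles as a floating-point accuracy stress test: beyond modest n the computed inverse degrades badly in double precision.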

For image processing, median filtering (used for removing noise and bad pixels) is always too slow. It might make a good test, given a large enough image, say 1000x1000.

