Is (pure) functional programming antagonistic with "algorithm classics"?


The classic algorithm books (TAOCP, CLR) (and not so classic ones, such as the fxtbook) are full of imperative algorithms. This is most obvious with algorithms whose implementation is heavily based on arrays, such as combinatorial generation (where both array index and array value are used in the algorithm) or the union-find algorithm.

The worst-case complexity analysis of these algorithms depends on array accesses being O(1). If you replace arrays with persistent array-like structures, as Clojure does, array accesses are no longer O(1), and the complexity analysis of those algorithms no longer holds.
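To make the cost shift concrete, here is a small Haskell sketch (the names PArray, pwrite and pread are my own, purely for illustration) that simulates an array with a persistent balanced-tree map. Every read and write becomes O(log n) instead of O(1), so an algorithm that performs n array updates goes from O(n) to O(n log n) under this encoding.

import qualified Data.Map as Map

-- Simulating a mutable array with a persistent map: each operation returns
-- a new version of the structure and costs O(log n) rather than O(1).
type PArray a = Map.Map Int a

pwrite :: Int -> a -> PArray a -> PArray a
pwrite = Map.insert            -- O(log n), yields a new version

pread :: Int -> PArray a -> Maybe a
pread = Map.lookup             -- O(log n)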

Which brings me to the following question: is pure functional programming incompatible with the classical algorithms literature?


With respect to data structures, Chris Okasaki has done substantial research into adapting classic data structures to a purely functional setting, since many of the standard data structures no longer work without destructive updates. His book "Purely Functional Data Structures" shows how some structures, like binomial heaps and red/black trees, can be implemented quite well in a functional setting, while other basic structures like arrays and queues must be implemented with more elaborate concepts.
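As a small taste of the book's style, here is a sketch (mine, not lifted from the book) of the two-list "batched" queue that Okasaki analyses: the rear list is reversed onto the front only when the front runs out, which gives amortized O(1) enqueue and dequeue with no mutation at all.

data Queue a = Queue [a] [a]   -- front (read end) and rear (write end)

emptyQueue :: Queue a
emptyQueue = Queue [] []

enqueue :: a -> Queue a -> Queue a
enqueue x (Queue front rear) = check front (x : rear)

dequeue :: Queue a -> Maybe (a, Queue a)
dequeue (Queue [] _)             = Nothing
dequeue (Queue (x : front) rear) = Just (x, check front rear)

-- Restore the invariant that the front is non-empty whenever the queue is.
check :: [a] -> [a] -> Queue a
check []    rear = Queue (reverse rear) []
check front rear = Queue front rear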

If you're interested in pursuing this branch of the core algorithms, his book would be an excellent starting point.


The short answer is that, so long as an algorithm has no effects that can be observed after it finishes (other than what it returns), it is pure. This holds even when you do things like destructive array updates or mutation.

If you had an algorithm like, say:

function zero(array):
    ix <- 0
    while(ix < length(array)):
        array[ix] <- 0
        ix <- ix + 1
    return array

Assuming our pseudocode above is lexically scoped, so long as the array parameter is first copied over and the returned array is a wholly new thing, this algorithm represents a pure function (in this case, the Haskell function fmap (const 0) would probably work). Most "imperative" algorithms shown in books are really pure functions, and it is perfectly fine to write them that way in a purely functional setting using something like ST.
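Here is one way (a sketch, not the only one) to render the pseudocode above in Haskell with ST: the array is built from the input, mutated in place inside runSTUArray, and read back out, so the mutation is invisible from outside and zero remains a pure function. The one-line pure equivalent is of course just map (const 0).

import Control.Monad (forM_)
import Data.Array.ST (getBounds, newListArray, runSTUArray, writeArray)
import Data.Array.Unboxed (UArray, elems)

-- Destructive updates hidden behind runSTUArray: observationally pure.
zero :: [Int] -> [Int]
zero xs = elems zeroed
  where
    zeroed :: UArray Int Int
    zeroed = runSTUArray $ do
      arr <- newListArray (0, length xs - 1) xs
      (lo, hi) <- getBounds arr
      forM_ [lo .. hi] $ \ix -> writeArray arr ix 0
      return arr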

I would recommend looking at Mercury or the Disciplined Disciple Compiler (DDC) to see pure languages that still thrive on destructive update.


You may be interested in this related question: Efficiency of purely functional programming.

is there any problem for which the best known non-destructive algorithm is asymptotically worse than the best known destructive algorithm, and if so by how much?


It is not. But it is true that many books present algorithms that look as if they were only usable in imperative languages. The main reason is that pure functional programming was confined to academic use for a long time, so the authors of these algorithms relied heavily on imperative features to stay in the mainstream. Now consider two widespread algorithms: quicksort and merge sort. Quicksort is more "imperative" than merge sort; one of its advantages is that it works in place. Merge sort is more "pure" than quicksort (in some sense), since it copies its data and keeps it persistent. In fact, many algorithms can be implemented in pure functional programming without losing much efficiency. This is true of many of the algorithms in the famous Dragon Book, for example.
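For instance, the textbook merge sort translates almost word for word. This sketch never mutates anything, yet the usual O(n log n) analysis carries over unchanged.

mergeSort :: Ord a => [a] -> [a]
mergeSort []  = []
mergeSort [x] = [x]
mergeSort xs  = merge (mergeSort left) (mergeSort right)
  where
    (left, right) = splitAt (length xs `div` 2) xs

-- Merge two sorted lists into one sorted list.
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys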
