I don't understand how FP compilers make code that works with immutable data structures fast, without blowing up the stack, and so on.
For example, an insert operation on a tree seems to have to copy the whole tree before adding the new node and then return the copy, whereas the imperative counterpart only needs to attach a pointer to the new node. If the insert operation runs millions of times, that would use a huge amount of memory, and copying would get slower and slower as the tree grows. How do FP compilers actually optimize this?
You don't have to copy the whole tree to make a change; you can share most of the structure. See e.g. the diagrams in this blog, or this talk by Rich Hickey on Clojure (see the discussion of hash tries about halfway through).
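Here's a minimal Haskell sketch of the idea, using an (unbalanced, for brevity) binary search tree; the type and names are illustrative, not taken from the linked material. Insert rebuilds only the nodes on the path from the root down to the new leaf, so the new tree shares every untouched subtree with the old one:

```haskell
-- A persistent binary search tree: insert copies only the nodes on the
-- path from the root to the insertion point; every subtree it doesn't
-- touch is shared, pointer-for-pointer, with the old tree.
data Tree a = Leaf | Node (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r   -- fresh node; r is shared unchanged
  | x > v     = Node l v (insert x r)   -- fresh node; l is shared unchanged
  | otherwise = t                       -- already present: share the whole tree
```

For a balanced tree that's O(log n) fresh nodes per insert, not a full copy, and the old version remains valid because nothing was mutated.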
The compiler won't really optimize this; it's something you have to design for specifically when writing your code. Techniques for doing this are explained in the excellent Purely Functional Data Structures (book, thesis).
Take a look at the Zipper data structure.
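In rough terms, a zipper pairs the subtree you're currently focused on with a list of "breadcrumbs" recording the path back to the root, so a sequence of local edits costs O(1) per step instead of a root-to-leaf rebuild each time. A minimal sketch for a binary tree, using the same Tree shape as the sketch above (the names here are illustrative, not a standard library API):

```haskell
-- A zipper: the focused subtree plus breadcrumbs for the path back to
-- the root. Moves and edits at the focus are O(1); everything off the
-- path stays shared with the original tree.
data Tree a = Leaf | Node (Tree a) a (Tree a)

data Crumb a = WentLeft a (Tree a)   -- parent's value and its right subtree
             | WentRight a (Tree a)  -- parent's value and its left subtree

type Zipper a = (Tree a, [Crumb a])

goLeft, goRight, goUp :: Zipper a -> Maybe (Zipper a)
goLeft  (Node l v r, bs)     = Just (l, WentLeft v r : bs)
goLeft  (Leaf, _)            = Nothing
goRight (Node l v r, bs)     = Just (r, WentRight v l : bs)
goRight (Leaf, _)            = Nothing
goUp (t, WentLeft v r : bs)  = Just (Node t v r, bs)
goUp (t, WentRight v l : bs) = Just (Node l v t, bs)
goUp (_, [])                 = Nothing

-- Replace the focused subtree; only the path gets rebuilt on the way up.
modify :: (Tree a -> Tree a) -> Zipper a -> Zipper a
modify f (t, bs) = (f t, bs)
```

Each move or edit allocates only a constant amount; the nodes along the path are rebuilt lazily as you walk back up with goUp.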
If the garbage collector is doing its job, the older copies of the data structure will be reclaimed when they're no longer being used.