Finite difference in Haskell, or how to disable potential optimizations

I'd like to implement the following naive (first order) finite differencing function:

finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + h) - (f x)) / h

As you may know, there is a subtle problem: one has to make sure that (x + h) and x differ by an exactly representable number. Otherwise, the result has a huge error, amplified by the fact that (f $ x + h) - (f x) involves catastrophic cancellation (one also has to choose h carefully, but that is not my problem here).
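For example, a quick check (with arbitrarily chosen values) shows that the step actually taken, (x + h) - x, is generally not the nominal h:

main :: IO ()
main = do
  let x = 0.1 :: Double
      h = 1.0e-7 :: Double
  print ((x + h) - x)         -- typically close to, but not exactly, 1.0e-7
  print (((x + h) - x) == h)  -- typically False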

In C or C++, the problem can be solved like this:

volatile double temp = x + h;
h = temp - x;

and the volatile qualifier disables any optimization pertaining to the variable temp, so we are assured that a "clever" compiler will not optimize away those two lines.

I don't know enough Haskell yet to know how to solve this. I'm afraid that

let temp = x + h
    hh = temp - x 
in ((f $ x + hh) - (f x)) / h

will get optimized away by the compiler (or the backend it uses). How do I get the equivalent of volatile here (if possible without sacrificing laziness)? I don't mind GHC-specific answers.


I have two solutions and a suggestion:

First solution: You can guarantee that this won't be optimized out with two helper functions and the NOINLINE pragma:

norm1 x h = x+h
{-# NOINLINE norm1 #-}

norm2 x tmp = tmp-x
{-# NOINLINE norm2 #-}

normh x h = norm2 x (norm1 x h)

This will work, but introduces a small runtime cost: the two helpers remain genuine (non-inlined) function calls.
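For illustration, a sketch of how the helper slots into the original function (dividing by hh, the step actually taken, mirrors the C idiom above where h is overwritten):

finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + hh) - f x) / hh
  where hh = normh x h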

Second solution: Write the normalization function in C using volatile and call it through the FFI. The performance penalty will be minimal.
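A sketch of what that could look like (file and function names are hypothetical):

{-# LANGUAGE ForeignFunctionInterface #-}

-- The C side (say, norm.c, compiled and linked in) would be:
--
--   double norm_h(double x, double h)
--   {
--       volatile double temp = x + h;
--       return temp - x;
--   }

foreign import ccall unsafe "norm_h"
  normH :: Double -> Double -> Double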

Now for the suggestion: currently the math isn't optimized out, so it works properly today. You're afraid it will break under a future compiler. I think that is unlikely, but not so unlikely that I wouldn't want to guard against it. So write some unit tests that cover the cases in question; then if it does break in the future (for any reason), you'll know exactly why.
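For example, a QuickCheck property along these lines (a sketch, assuming the normh helper from the first solution) would start failing if a compiler ever folded the normalization away:

import Test.QuickCheck

-- After normalization the step hh should round-trip exactly; if normh x h
-- were ever simplified back to plain h, this would fail for typical inputs.
prop_exactStep :: Double -> Double -> Property
prop_exactStep x h =
  not (isNaN x || isInfinite x || isNaN h || isInfinite h) ==>
    (x + hh) - x == hh
  where hh = normh x h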


One way is to look at the Core that GHC produces (for example, by compiling with ghc -O2 -ddump-simpl).

Specializing to Doubles (which will be the case most likely to trigger some optimization):

finite_difference :: Double -> (Double -> Double) -> Double -> Double
finite_difference h f x = ((f $ x + hh) - (f x)) / h
  where
    temp = x + h
    hh   = temp - x

Is compiled to:

A.$wfinite_difference h f x =
    case f (case x of
                  D# x' -> D# (+## x' (-## (+## x' h) x'))
           ) of 
        D# x'' -> case f x of D# y -> /## (-## x'' y) h

And similarly (with even less rewriting) for the polymorphic version.

So while the variables are inlined, the math isn't optimized away. Beyond looking at the Core, I can't think of a way to guarantee the property you want.


I don't think that

temp = unsafePerformIO $ return $ x + h

would get optimised out. Just a guess.
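Spelled out, that guess would look something like this (a sketch only, with no guarantee that GHC must preserve it):

import System.IO.Unsafe (unsafePerformIO)

-- Route the sum through IO in the hope that GHC will not look through it.
normh :: Double -> Double -> Double
normh x h = temp - x
  where temp = unsafePerformIO (return (x + h))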
