
Benchmarking and lazy evaluation

开发者 https://www.devze.com 2023-01-21 13:38 (Source: Web)

What is wrong here, is Lazy Evaluation too?

teste.hs

module Main where  

import Control.Parallel(par,pseq)  
import Text.Printf  
import Control.Exception  
import System.CPUTime  
import Data.List  
import System.IO  
import Data.Char  
import Control.DeepSeq  

-- Measures the time between the start and the end of the program run  
time :: IO t -> IO t  
time a = do  
    start <- getCPUTime  
    v <- a
    end   <- getCPUTime  
    let diff = (fromIntegral (end - start)) / (10^12)  
    printf "Computation time: %0.3f sec\n" (diff :: Double)  
    return v  

learquivo :: FilePath -> IO [[Int]]  
learquivo s = do  
    conteudo <- readFile s  
    return (read conteudo)  

main :: IO ()  
main = do   
    conteudo <- learquivo "mkList1.txt"   
    mapasort <- return (map sort conteudo)
    time $ mapasort  `seq` return ()  

*Main> main

Computation time: 0.125 sec

mkList1.txt is a list of 100 lists, with 100 random numbers in each, more or less like this: [[23,45,89,78,89 ...], [4783, 44, 34 ...] ...]

I did a test printing mapasort:

  • time $ print ("Sort usando map = ", mapasort)

And the computation time increased considerably, so I think something is wrong.

Computation time: 1.188 sec

Thanks


Yes, this is due to Haskell's laziness. You're trying to get around the laziness by using seq, but seq is "shallow": it evaluates an expression only to its outermost constructor rather than traversing the whole structure. So it forces the result of the map to its first cons cell, but it never evaluates the sorts inside.
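A quick way to see how shallow seq is (this snippet is illustrative, not from the original post): the list below contains an error as its only element, yet seq succeeds, because it stops at the outermost (:) constructor:

```haskell
import Control.Exception (evaluate)

main :: IO ()
main = do
  -- the single element of xs is an error thunk; seq never looks at it
  let xs = [error "element never forced"] :: [Int]
  _ <- evaluate (xs `seq` ())
  putStrLn "seq stopped at the first cons cell"
```

The same thing happens with `mapasort \`seq\` return ()`: the spine and elements of the sorted lists stay unevaluated, so almost no work is timed.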

To fix this, either use deepseq instead of seq or, even better, use a benchmarking library instead of hand-rolling timings with getCPUTime.
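A minimal self-contained sketch of the deepseq fix (the input is generated in code here instead of being read from mkList1.txt, and the timing scaffolding is omitted):

```haskell
module Main where

import Control.DeepSeq (deepseq)  -- the deepseq package ships with GHC as a boot library
import Data.List (sort)

main :: IO ()
main = do
  -- stand-in for the file contents: 100 lists of 100 numbers each
  let conteudo = [ [ (i * 37 + j * 101) `mod` 997 | j <- [1 .. 100] ]
                 | i <- [1 .. 100] ] :: [[Int]]
      mapasort = map sort conteudo
  -- deepseq traverses the entire structure, so every sort really runs
  mapasort `deepseq` putStrLn "fully evaluated"
```

With `time $ mapasort \`deepseq\` return ()` in place of the original seq line, the measured time reflects the actual sorting work. For serious measurements, a benchmarking library such as criterion handles the forcing (via its `nf` combinator) and the statistical sampling for you.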
