
How can iterative deepening search be implemented efficiently in Haskell?

https://www.devze.com · 2023-01-16 19:20 · source: web
I have an optimization problem I want to solve. You have some kind of data-structure:

data Foo = Foo
  { fooA :: Int
  , fooB :: Int
  , fooC :: Int
  , fooD :: Int
  , fooE :: Int
  }

and a rating function:

rateFoo :: Foo -> Int

I have to optimize the result of rateFoo by changing the values in the structure. In this specific case, I decided to use iterative deepening search to solve the problem. The (infinite) search tree for the best optimization is created by another function, which simply applies all possible changes recursively to the tree:

fooTree :: Foo -> Tree Foo

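A lazy tree of this shape can be built with `Data.Tree.unfoldTree`. Here is a sketch under assumptions not in the question: `mutations` is a hypothetical function enumerating all single-step changes to a `Foo` (only `fooA` is varied here for brevity):

```haskell
import Data.Tree

data Foo = Foo
  { fooA :: Int
  , fooB :: Int
  , fooC :: Int
  , fooD :: Int
  , fooE :: Int
  } deriving (Eq, Show)

-- Hypothetical: enumerate all single-step changes to a Foo.
-- A real version would cover every field.
mutations :: Foo -> [Foo]
mutations foo = [ foo { fooA = fooA foo + 1 }
                , foo { fooA = fooA foo - 1 } ]

-- The infinite search tree: each node's children are all one-step
-- mutations of its value. Thanks to laziness, only the parts the
-- consumer actually forces are ever built.
fooTree :: Foo -> Tree Foo
fooTree = unfoldTree (\f -> (f, mutations f))
```

Because `unfoldTree` is lazy, the tree exists only as thunks until a traversal demands it, which is half of what the question asks for; whether already-forced parts can be reclaimed is the other half, addressed below.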
My searching function looks something like this:

optimize :: Int -> Foo -> Foo
optimize threshold foo = undefined

The questions I had before starting are these:

As the tree can be generated from the data at each point, is it possible to have only those parts of the tree generated which are currently needed by the algorithm? Is it possible to have the memory freed and the tree regenerated when needed, in order to save memory? (A leaf at level n can be generated in O(n), and n remains small, but not small enough to keep the whole tree in memory over time.)

Is this something I can expect from the runtime? Can the runtime unevaluate expressions (turn an evaluated expression back into an unevaluated one)? Or what is the dirty hack I have to do for this?


The runtime does not unevaluate expressions.

There's a straightforward way to get what you want however.

Consider a zipper-like structure for your tree. Each node holds a value and a thunk representing down, up, etc. When you move to the next node, you can either move normally (placing the previous node value in the corresponding slot) or forgetfully (placing an expression which evaluates to the previous node in the right slot). Then you have control over how much "history" you hang on to.
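As a sketch of that idea, assuming the tree is a `Data.Tree.Tree` and that you have some generator `gen :: a -> Tree a` that can rebuild any subtree from its label (the question's `fooTree` would fit; `Crumb`, `Zipper`, `down`, `downForget`, and `up` are illustrative names, not a library API):

```haskell
import Data.Tree

-- A breadcrumb: the parent's label plus the siblings to the left
-- and right of the focused child.
data Crumb a = Crumb a [Tree a] [Tree a]

-- A zipper: the focused subtree plus the path back to the root.
data Zipper a = Zipper (Tree a) [Crumb a]

-- Ordinary move to the first child: the crumb keeps the sibling
-- subtrees, so moving back up is cheap but everything stays live.
down :: Zipper a -> Maybe (Zipper a)
down (Zipper (Node x (c:cs)) bs) = Just (Zipper c (Crumb x [] cs : bs))
down _                           = Nothing

-- Forgetful move: the crumb stores only thunks that regenerate the
-- siblings from the parent label via 'gen'. Once the original
-- subtrees are unreachable the GC can free them; they are rebuilt
-- on demand if you ever move back up.
downForget :: (a -> Tree a) -> Zipper a -> Maybe (Zipper a)
downForget gen (Zipper (Node x (_:_)) bs) =
  case subForest (gen x) of
    (c:cs) -> Just (Zipper c (Crumb x [] cs : bs))
    []     -> Nothing
downForget _ _ = Nothing

-- Moving up rebuilds the parent node from the crumb.
up :: Zipper a -> Maybe (Zipper a)
up (Zipper t (Crumb x ls rs : bs)) =
  Just (Zipper (Node x (reverse ls ++ t : rs)) bs)
up _ = Nothing
```

With a regenerating `gen`, a forgetful descent keeps live memory proportional to the path length rather than to the explored portion of the tree, at the cost of recomputing subtrees on the way back up.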


Here's my advice:

  1. Just implement your algorithm in the most straightforward way possible.
  2. Profile.
  3. Optimize for speed or memory use if necessary.

I very quickly learned that I'm not smart and/or experienced enough to reason about what GHC will do or how garbage collection will behave. Sometimes things that I'm sure will be disastrously memory-inefficient work smoothly the first time around, and, less often, things that seem simple require lots of fussing with strictness annotations, etc.

The Real World Haskell chapter on profiling and optimization is incredibly helpful once you get to steps 2 and 3.


For example, here's a very simple implementation of IDDFS (iterative deepening depth-first search), where f expands a node's children, p is the search predicate, and x is the starting point.

search :: (a -> [a]) -> (a -> Bool) -> a -> Bool
search f p x = any (\d -> searchTo f p d x) [1..]
  where
    -- Depth-limited DFS: give up at depth 0, otherwise test the
    -- current node and recurse into its children with one less depth.
    searchTo f p d x
      | d == 0    = False
      | p x       = True
      | otherwise = any (searchTo f p $ d - 1) (f x)

I tested by searching for "abbaaaaaacccaaaaabbaaccc" with children x = [x ++ "a", x ++ "bb", x ++ "ccc"] as f. It seems reasonably fast and requires very little memory (linear with the depth, I think). Why not try something like this first and then move to a more complicated data structure if it isn't good enough?
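For reference, that experiment as a self-contained program (with a shorter target string here so it finishes quickly; note that search loops forever if the target is unreachable, since the list of depths [1..] is infinite):

```haskell
search :: (a -> [a]) -> (a -> Bool) -> a -> Bool
search f p x = any (\d -> searchTo f p d x) [1..]
  where
    searchTo f p d x
      | d == 0    = False
      | p x       = True
      | otherwise = any (searchTo f p $ d - 1) (f x)

-- The move set from the test above.
children :: String -> [String]
children x = [x ++ "a", x ++ "bb", x ++ "ccc"]

main :: IO ()
main = print (search children (== "abbccc") "")  -- prints True
```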

