Minimizing memory usage of a breadth-first search

In the following code, I am traversing a graph through breadth-first search. The code constructs the graph while it is traversing. This is a very large graph with a fan-out of 12. Because of this, any time the depth of the breadth-first search increases, I want to destroy the layer above it in an attempt to minimize memory usage. How could I design an algorithm to do so?

string Node::bfs(Node * rootNode) {
    QQueue<Node *> q;          // queue of nodes still to be expanded
    q.enqueue(rootNode);

    while (!q.empty()) {
        Node * currNode = q.dequeue();
        currNode->constructChildren();   // the graph is built lazily
        foreach (Node * child, currNode->getListOfChildren()) {
            q.enqueue(child);
        }
        if (currNode->isGoalNode()) {
            return currNode->path;
        }
    }
    return string();           // queue exhausted without reaching a goal
}


With constant fan-out and assuming a tree-like graph, the number of nodes a BFS has already visited is of the same order as the number of nodes on the fringe: in a tree with fan-out K, level n has K^n nodes, and all the shallower levels together hold (K^n - 1)/(K - 1) = Theta(K^n) nodes.

Hence, storing the fringe will already take up a lot of memory. So if memory is a very big problem, an "equivalent" technique such as iterative deepening DFS may be much better.

But if you want to destroy the "visited" nodes anyway, you need some way of tracking what has been visited (that is the difficulty in a general graph; if it is a tree then there is no problem, as in the sketch below). For that, more information on the graph is needed.
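For the tree case, one way to realize "destroy the layer above" is to run the BFS one level at a time and delete a level as soon as all of its nodes have been expanded. This is only a minimal sketch in plain C++, not the question's Qt code: Node here is a hypothetical stand-in whose member names (constructChildren, getListOfChildren, isGoalNode, path) are copied from the posted code, with stub bodies so the example compiles.

#include <string>
#include <vector>

// Hypothetical stand-in for the question's Node; the stub bodies are
// placeholders so the sketch compiles.
struct Node {
    std::string path;
    std::vector<Node *> children;
    void constructChildren() { /* build this->children on demand */ }
    bool isGoalNode() const { return false; }
    const std::vector<Node *> & getListOfChildren() const { return children; }
};

// BFS processed one level at a time: once every node of the current
// level has been expanded, that level is the "layer above" the new
// fringe and can be deleted (safe only if the graph is a tree).
std::string bfsFreeingLevels(Node * root) {
    std::vector<Node *> level{root};
    while (!level.empty()) {
        std::vector<Node *> next;
        for (Node * n : level) {
            if (n->isGoalNode())
                return n->path;        // remaining nodes are leaked in this sketch
            n->constructChildren();
            for (Node * child : n->getListOfChildren())
                next.push_back(child);
        }
        for (Node * n : level)         // whole level expanded: free it
            delete n;
        level = std::move(next);
    }
    return std::string();              // no goal found
}

Note that, per the argument above, this only frees the Theta(K^n) visited layers while the Theta(K^n) fringe still has to be stored, so the saving is at best a constant factor.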

EDIT: why iterative deepening DFS is better.

The fringe (the unvisited nodes adjacent to the visited ones) of a BFS is O(K^n) in size, n being the current depth. The fringe of a DFS is only O(n) in size for constant fan-out, since it holds just the current path and the unexpanded siblings along it.

Iterative deepening DFS has the same fringe size as DFS, yet gives the same result as BFS, because it "simulates" BFS by re-running a depth-limited DFS with an ever-increasing depth bound (see the sketch below).
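A minimal sketch of that simulation, reusing the hypothetical Node defined in the sketch above and additionally assuming that constructChildren rebuilds the child list from scratch on every call (nodes freed during one pass are reconstructed in the next pass):

#include <string>

// Depth-limited DFS: returns true and fills `result` when a goal is
// found within `limit` edges of `n`. Children are deleted as soon as
// their subtree has been fully explored, so at any moment only the
// current root-to-node path (plus its sibling lists) is alive.
bool depthLimitedSearch(Node * n, int limit, std::string & result) {
    if (n->isGoalNode()) {
        result = n->path;              // copy out before the node is freed
        return true;
    }
    if (limit == 0)
        return false;
    n->constructChildren();
    for (Node * child : n->getListOfChildren()) {
        bool found = depthLimitedSearch(child, limit - 1, result);
        delete child;                  // subtree done (result already copied)
        if (found)
            return true;               // unexplored siblings are leaked here;
                                       // a real version would clean them up
    }
    return false;
}

// Iterative deepening: rerun the depth-limited DFS with a growing bound.
std::string iddfs(Node * root, int maxDepth) {
    std::string result;
    for (int limit = 0; limit <= maxDepth; ++limit)
        if (depthLimitedSearch(root, limit, result))
            return result;
    return std::string();              // not found within maxDepth
}

Each pass redoes the work of all shallower passes, but with constant fan-out K the deepest level dominates, so the total work is only about a factor K/(K - 1) more than a single BFS while the memory in use at any moment is proportional to the depth.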


Breadth-first search inherently has exponential space complexity on graphs like this. Any trick will have only a marginal impact on the memory requirements for large graphs. You're better off using depth-first search if you want tractable space complexity.

