
Finding the width of a directed acyclic graph... with only the ability to find parents

https://www.devze.com 2022-12-29 07:04 (source: web)

I'm trying to find the width of a directed acyclic graph... as represented by an arbitrarily ordered list of nodes, without even an adjacency list.

The graph/list is for a parallel GNU Make-like workflow manager that uses files as its criteria for execution order. Each node has a list of source files and target files. We have a hash table in place so that, given a file name, the node which produces it can be determined. In this way, we can figure out a node's parents by examining the nodes which generate each of its source files using this table.
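For illustration, the hash-table lookup just described might be sketched like this in Python (all names here, such as "producer_of" and "parents", are hypothetical stand-ins, not the project's actual C structures):

```python
producer_of = {}  # file name -> the node that generates that file

class Node:
    def __init__(self, name, sources=(), targets=()):
        self.name = name
        self.sources = list(sources)   # files this node consumes
        self.targets = list(targets)   # files this node produces

def register(node):
    """Record this node as the producer of each of its target files."""
    for t in node.targets:
        producer_of[t] = node

def parents(node):
    """A node's parents are the producers of its source files."""
    return {producer_of[s] for s in node.sources if s in producer_of}

# Example: a codegen step produces b.c, which a compile step consumes.
gen = Node("gen", sources=[], targets=["b.c"])
cc  = Node("cc",  sources=["b.c"], targets=["b.o"])
register(gen)
register(cc)
assert parents(cc) == {gen}
assert parents(gen) == set()
```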

That is the ONLY ability I have at this point, without changing the code severely. The code has been in public use for a while, and the last thing we want to do is change its structure significantly and put out a bad release. And no, we don't have time to test rigorously (I am in an academic environment). Ideally we're hoping we can do this without doing anything more dangerous than adding fields to the node.

I'll be posting a community-wiki answer outlining my current approach and its flaws. If anyone wants to edit that, or use it as a starting point, feel free. If there's anything I can do to clarify things, I can answer questions or post code if needed.

Thanks!

EDIT: For anyone who cares, this will be in C. Yes, I know my pseudocode is in some horribly botched Python look-alike. I'm sort of hoping the language doesn't really matter.


I think the "width" you're considering here isn't really what you want - the width depends on how you assign levels to each node where you have some choice. You noticed this when you were deciding whether to assign all sources to level 0 or all sinks to the max level.

Instead, you just want to count the number of nodes and divide by the "critical path length", i.e. the longest path in the DAG. This gives the average parallelism of the graph. It depends only on the graph itself, and it still gives you an indication of how wide the graph is.

To compute the critical path length, just do what you're already doing: the critical path length (counted in nodes) is one more than the maximum level you end up assigning.
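Once levels are assigned, the computation is a single division. A minimal sketch, assuming you have a list of the per-node levels from the leveling pass:

```python
def average_parallelism(levels):
    """levels: one entry per node, as assigned by the leveling pass.
    The critical path length in nodes is max level + 1."""
    critical_path = max(levels) + 1
    return len(levels) / critical_path

# A diamond-shaped DAG has levels 0, 1, 1, 2: four nodes over a
# three-node critical path.
assert abs(average_parallelism([0, 1, 1, 2]) - 4 / 3) < 1e-9
```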


In my opinion, when you're doing this type of last-minute development, it's best to keep the new structures separate from the ones you are already using. At this point, if I were pressed for time I would go for a simpler solution.

  1. Create an adjacency matrix for the graph using the parent data (should be easy)
  2. Perform a topological sort using this matrix. (or even use tsort if pressed for time)
  3. Now that you have a topological sort, create an array level, one element for each node.
  4. For each node:
    • If the node has no parents set its level to 0
    • Otherwise set its level to one more than the maximum level among its parents.
  5. Find the widest level; its node count is the width.
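A sketch of these steps in Python, using a dict-of-sets parent map rather than an adjacency matrix for brevity (the "parents" argument is an assumed representation, and the level rule is max over parents plus one):

```python
from collections import deque

def width_by_levels(parents):
    """parents: dict mapping each node to the set of its parent nodes.
    Topologically sorts (Kahn's algorithm), assigns each node the level
    1 + max(parent levels), and returns the size of the widest level."""
    # Invert the parent map to get children, and count in-degrees.
    children = {u: set() for u in parents}
    for u, ps in parents.items():
        for p in ps:
            children[p].add(u)
    indeg = {u: len(ps) for u, ps in parents.items()}

    queue = deque(u for u, d in indeg.items() if d == 0)
    level = {}
    while queue:
        u = queue.popleft()
        # All parents of u are already leveled when u is dequeued.
        level[u] = max((level[p] for p in parents[u]), default=-1) + 1
        for c in children[u]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)

    counts = {}
    for lv in level.values():
        counts[lv] = counts.get(lv, 0) + 1
    return max(counts.values())

# Diamond: a -> b, a -> c, b -> d, c -> d  =>  levels 0,1,1,2, width 2
assert width_by_levels({"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}) == 2
```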

The question is, as Keith Randall asked: is this the right measurement for what you need?


Here's what I (Platinum Azure, the original author) have so far.

Preparations/augmentations:

  • Add "children" field to linked list ("DAG") node
  • Add "level" field to "DAG" node
  • Add "children_left" field to "DAG" node. This is used to make sure that all children are examined before a parent is examined (in a later stage of the algorithm).

Algorithm:

  1. Find the number of immediate children for each node; also, collect the leaves (nodes with children == 0) into a list.

    for l in L:
      l.children = 0
    
    
    for l in L:
      l.level = 0
      for p in l.parents:
        ++p.children
    
    Leaves = []
    for l in L:
      l.children_left = l.children
      if l.children == 0:
        Leaves.append(l)
    
  2. Assign every node a "reverse depth" level. Normally by depth, I mean topologically sort and assign depth=0 to nodes with no parents. However, I'm thinking I need to reverse this, with depth=0 corresponding to leaves. Also, we want to make sure that no node is added to the queue without all its children "looking at it" first (to determine its proper "depth level").

    max_level = 0
    while !Leaves.empty():
      l = Leaves.pop()
      for p in l.parents:
        p.level = Max(p.level, l.level + 1)
        if p.level > max_level:
          max_level = p.level
        --p.children_left
        if p.children_left == 0:
          /* a parent's level is final once all its children have been seen */
          Leaves.append(p)
    
  3. Now that every node has a level, simply create an array and then go through the list once more to count the number of nodes in each level.

    level_count = new int[max_level+1]
    for l in L:
      ++level_count[l.level]
    
    width = Max(level_count)
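
The three steps above, assembled into runnable Python ("N" is a stand-in node type for illustration; the real code would add these fields to the existing C node struct):

```python
def reverse_depth_width(nodes):
    """Computes the width of the DAG using only parent links, by the
    three steps above. Mutates nodes: adds children, children_left, level."""
    nodes = list(nodes)
    for n in nodes:                       # step 1: count immediate children
        n.children = 0
    for n in nodes:
        n.level = 0
        for p in n.parents:
            p.children += 1
    leaves = [n for n in nodes if n.children == 0]
    for n in nodes:
        n.children_left = n.children

    max_level = 0                         # step 2: assign reverse depths
    while leaves:
        n = leaves.pop()
        for p in n.parents:
            p.level = max(p.level, n.level + 1)
            max_level = max(max_level, p.level)
            p.children_left -= 1
            if p.children_left == 0:      # level final once all children seen
                leaves.append(p)

    level_count = [0] * (max_level + 1)   # step 3: histogram of the levels
    for n in nodes:
        level_count[n.level] += 1
    return max(level_count)


class N:
    """Minimal stand-in node; only the parents list is assumed to exist."""
    def __init__(self):
        self.parents = []

# Diamond: a feeds b and c, which both feed d => levels 2,1,1,0, width 2.
a, b, c, d = N(), N(), N(), N()
b.parents = [a]
c.parents = [a]
d.parents = [b, c]
assert reverse_depth_width([a, b, c, d]) == 2
```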
    

So that's what I'm thinking so far. Is there a way to improve on it? It's linear time all the way, but it's got five or six linear scans and there will probably be a lot of cache misses and the like. I have to wonder if there isn't a way to exploit some locality with a better data structure, without actually changing the underlying code beyond node augmentation.

Any thoughts?
