I'm trying to make an iterator that performs breadth-first traversal of all the files and folders inside a particular folder. I've already done this with depth-first traversal (sketched after the listing below), which returns, for example:
\A
\A\1
\A\1\x
\A\1\y
\A\2
\B
\B\1
etc.
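For reference, the depth-first version I already have is roughly equivalent to this minimal Python sketch (function and variable names are just placeholders):

    import os

    def dfs(root):
        # Depth-first traversal. Memory use is proportional to the
        # depth of the tree (one directory listing per level of the
        # current path), not to the total number of entries.
        for name in sorted(os.listdir(root)):
            path = os.path.join(root, name)
            yield path
            if os.path.isdir(path) and not os.path.islink(path):
                # Recurse into subdirectories; skip symlinks to
                # avoid cycles.
                yield from dfs(path)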
Now I'm trying to make a program that would instead return the results breadth-first (i.e., level by level):
\A
\B
\A\1
\A\2
\B\1
\A\1\x
\A\1\y
for the same hierarchy. However, I've come across a stumbling block: assuming I want these to come out in the correct order (and specifically, not in reverse order), I cannot find any way to do this without ultimately needing O(n) memory, where n is the number of files/folders on the drive, because it seems I would have to keep the entire drive hierarchy in memory at some point. With DFS, by contrast, I can entirely forget about all the entries I have already enumerated at the same level of the hierarchy.
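The straightforward queue-based BFS illustrates where the linear memory goes; a minimal sketch, again with placeholder names:

    import os
    from collections import deque

    def bfs(root):
        # Breadth-first traversal with an explicit FIFO queue. The
        # queue holds every discovered-but-not-yet-expanded directory,
        # so in the worst case it grows to O(n) in the number of
        # entries on the drive.
        queue = deque([root])
        while queue:
            current = queue.popleft()
            for name in sorted(os.listdir(current)):
                path = os.path.join(current, name)
                yield path
                if os.path.isdir(path) and not os.path.islink(path):
                    queue.append(path)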
So my question is: is there a way to traverse the hierarchy breadth-first using better-than-linear memory?
If your platform supports the notion of inode numbers, you may be able to store a single number for each directory, indicating the largest inode number you have visited in that specific directory. If you visit the inodes in numerical order, keeping track of that single number is enough to know where the 'next' entry is.
It's a small gain, as you'll still need to maintain an inode number for every single directory on the system, but you won't need to care about the contents of the directories.
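A sketch of what that per-directory bookkeeping might look like in Python, assuming POSIX-style inodes exposed through os.scandir (next_child is a hypothetical helper, not part of any standard API):

    import os

    def next_child(dir_path, last_ino):
        # Re-scan dir_path and return (path, inode) for the child with
        # the smallest inode number strictly greater than last_ino, or
        # (None, None) once the directory is exhausted. Between calls,
        # only last_ino (a single integer per directory) is retained.
        best_path, best_ino = None, None
        with os.scandir(dir_path) as it:
            for entry in it:
                ino = entry.inode()
                if ino > last_ino and (best_ino is None or ino < best_ino):
                    best_path, best_ino = entry.path, ino
        return best_path, best_ino

Note the trade-off: each step re-reads the directory, so fully enumerating a directory of k entries costs O(k^2) reads, in exchange for keeping only one integer per directory in memory.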
Of course, keep in mind that any traversal mechanism is subject to horrible race conditions: you'd need some assurance that the filesystem is quiescent, or your code would have to be resilient to directories and files being created, deleted, moved, etc., while it is underway.