On a native C++ project, linking right now can take a minute or two, yet during that time CPU usage drops from the 100% seen during compilation to virtually zero. Does this mean linking is primarily a disk-bound activity?
If so, is this the main area where an SSD would make a big difference? But why aren't all my OBJ files (or as many as possible) kept in RAM after compilation to avoid this? With 4 GB of RAM I should be able to save a lot of disk access and make it CPU-bound again, no?
Update: the obvious follow-up is, can the VC++ compiler and linker cooperate more closely to streamline things and keep the OBJ files in memory, similar to how Delphi does it?
Linking is indeed primarily a disk-based activity. Borland Pascal (back in the day) would keep the entire program in memory, which is why it would link so fast.
Your OBJ files aren't kept in RAM because the compiler and linker are separate programs. If your development environment had an integrated compiler and linker (instead of running them as separate processes), it could indeed keep everything in RAM.
But you would lose the ability to separate the development environment from the compilers and/or linkers - you would have to use the same compiler/linker, and you wouldn't be able to run the compiler outside the environment.
You can try installing one of those RAM disk utilities and keeping your obj directory (or even the whole project directory) on the RAM disk. That should speed it up considerably.
Don't forget to make it permanent afterwards :-D
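For example, a minimal sketch of what that could look like on the command line. The R: drive letter is just a hypothetical RAM disk mount, and the file names are placeholders:

```
:: Assumes a RAM disk mounted at R: (hypothetical) and that R:\obj already exists.
:: /Fo with a trailing backslash sends the .obj files to that directory.
cl /c /EHsc /FoR:\obj\ main.cpp util.cpp

:: Link the objects straight off the RAM disk, so the intermediate I/O never hits the hard drive.
link /OUT:app.exe R:\obj\main.obj R:\obj\util.obj
```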
The Visual Studio linker is largely I/O bound, but how much so depends on a few variables.
Incremental linking (common in Debug builds) generally requires a lot less I/O.
Writing a PDB file (for symbols) can consume a lot of the time. It's a specific bottleneck that Microsoft targeted in VS 2010. The PDB writing is now done asynchronously. I haven't tried it, but I've heard it can help link times quite a bit.
If you're using link-time code generation (LTCG) (common in Release builds), you have all the usual I/O initially. Then, the linker re-invokes the compiler to re-generate code for sections that can be further optimized. This portion is generally much more CPU-intensive. Offhand, I don't know if the linker actually spins up the compiler in a separate process and waits (in which case you'll still see low CPU usage for the linker process), or if the compilation is done in the linker process (in which case you'll see the linker go through phases of heavy I/O then heavy CPU).
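For reference, a rough sketch of the switches involved (file and output names are placeholders): LTCG is enabled by compiling with /GL and linking with /LTCG.

```
:: Compile with /GL so code generation is deferred to link time.
cl /c /O2 /GL main.cpp util.cpp

:: /LTCG tells the linker to run the code generator across all /GL objects;
:: this is the CPU-heavy phase described above.
link /LTCG /OUT:app.exe main.obj util.obj
```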
Using an SSD can help with the I/O bound portions. Simply having a second drive can help, too. For example, if your source and objects are all on one drive, and you write your PDB to a separate drive, the linker should spend less time waiting for the PDB writer. Having a second spinning drive has helped my current team's link times dramatically.
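As a sketch of the second-drive idea (D:\symbols is just an example path on a different physical drive), you can redirect the PDB explicitly:

```
:: /DEBUG asks for symbols; /PDB: names the file, here on a separate drive from the objects.
link /DEBUG /PDB:D:\symbols\app.pdb /OUT:app.exe main.obj util.obj
```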
In debug builds in Visual Studio you can use incremental linking, which usually lets you avoid much of the time spent on linking. Instead of linking the whole EXE (or DLL) from scratch, the linker builds upon the one you last produced, replacing only the parts that changed.
This is not recommended for release builds, however, since it adds some runtime overhead and can produce an EXE file that is several times larger than usual.
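As a rough sketch (output names are placeholders), this behavior is controlled with the /INCREMENTAL linker switch, which the default Visual Studio configurations set for you:

```
:: Debug-style link: incremental, patches the previously produced EXE where possible.
link /INCREMENTAL /DEBUG /OUT:app.exe main.obj util.obj

:: Release-style link: full link every time, smaller EXE without the incremental padding.
link /INCREMENTAL:NO /OUT:app.exe main.obj util.obj
```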
It's hard to say exactly what is taking the linker so long without knowing how it interacts with the OS. Thankfully, Microsoft provides Process Monitor, which lets you observe exactly that.
It's helped me diagnose bugs with the Visual Studio IDE and debugger without access to source.