I have a Visual Studio 2008 solution with >40 C# and C++/CLI projects that depend on each other. Working with that solution is quite slow, and usually I only need a few projects at a time. So I decided to split the solution into multiple solutions that contain 3-5 projects each. I would also like to keep the "full" solution with all projects (it's handy for automated builds or for big refactoring actions that affect all projects). (This is the main condition here. Otherwise, splitting projects into solutions is trivial, of course.)
Is there any way to do that?
My first idea was to create new empty solutions and add some of the existing project files into each of these solutions. But if I do that, VS can't find the project references any more (because they're not in the same solution). I can add the references as "normal" file references. But if I do that, my "full" solution doesn't work any more, because the dependencies are lost.
EDIT:
Thank you all for your answers. I'd like to clarify my question a bit: My solution contains 44 projects, not including tests. So splitting it into 2 parts isn't really what I had in mind, I was more thinking about 5-8 parts. That's why I would like to keep the "full" solution where VS can figure out the correct build order for a full build. Maintaining the build order for 8 separate solutions by hand (e.g. in a batch file) seems error-prone to me.
Also I would like to group the projects "logically" (i.e. I would like to have the projects that are usually modified together in one solution). But that grouping does not always match the dependencies. For example, imagine I have the dependency chain
A is referenced by B is referenced by C is referenced by D
and imagine that A and D are often modified together but B and C rarely change. (Obviously, the interface of A that is used by B must remain unchanged for that.) Then I would like to have A and D in one solution, B and C in another. But that would only work if I could have an intact "complete" solution containing A,B,C and D if I want to build all projects from scratch. Once that build is complete, I could open my A/D-solution and edit/build only those 2 projects.
But I fear there is no elegant solution for my problem. (pun not intended)
Another approach you may want to consider is to just unload the projects you are not using: right click and unload the ones you're not working on...this should result in a much snappier Visual Studio, since it keeps a lot of stuff out of VS memory.
Something else you can do is edit the build definition and remove any unmodified projects (depending on how your links are set up...you know what's needed/updated/not needed).
Solution -> Right Click -> Configuration Manager -> uncheck Build on any projects that don't change and aren't needed every build for some other reason. This speeds up your build by using the output of the last build of these projects, from wherever they dump their binaries to.
Note: You can still right click a project and build it to update its output, then rebuild the solution to get the latest...you can do this instead of changing the build configuration back and forth to include it.
Update
In the spirit of going the performance route (e.g. waiting until VS 2010 really), have you taken the measures listed in some other stack overflow questions: Very slow compile times on Visual Studio and Visual Studio Optimizations? These can make quite a difference, and may bring performance to an acceptable level while we wait on VS 2010 to ship.
A few more things that can have a good impact on performance:
- I highly recommend an SSD if it's an option. They are costly but definitely worth it; the more files in your solution, the more they pay off. When we ordered new machines at work the entire development team went with SSDs, and the difference is astounding. With a large solution on a spinning drive, the physical head has to seek to thousands of locations just to read all the files in...that's where SSDs win, since seek time is effectively 0ms.
- Along the same lines, a free alternative is a good defragmenter (only if you're on a physical drive; do NOT defrag an SSD). It makes a big difference if you use something that keeps Visual Studio in mind. I recommend MyDefrag (freeware), as its defragmenting algorithm keeps directory structure in mind, so at least a physical drive doesn't spend as much time seeking around to stream in the files your solution needs; they all end up very close together on the disk.
- With the above SSD suggestion, keep in mind there is a huge disparity in performance between a cheap SSD and a fast one, so do a little bit of research here. Don't worry about your system drive: with a free utility such as gParted you can move your entire OS drive over very easily, and your old drive will still serve as a backup...as long as the SSD can fit the data from your C: drive, you're good.
You can make multiple solutions and add any project(s) to each solution that you like.
The projects you wish to build may have dependencies on other projects. In this case, you need to change from using a "Project" reference (which references another project in the same solution) to using a File reference (where you reference the assembly .dll that the project has created).
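To illustrate the difference (the project name, GUID and paths below are invented for this example), a project reference versus a file reference in a VS2008-era .csproj looks roughly like this:

```xml
<!-- Project reference: only resolves while LibraryA is loaded in the same solution -->
<ItemGroup>
  <ProjectReference Include="..\LibraryA\LibraryA.csproj">
    <Project>{11111111-2222-3333-4444-555555555555}</Project>
    <Name>LibraryA</Name>
  </ProjectReference>
</ItemGroup>

<!-- File reference: points at an already-built assembly, so this project can be
     built in a smaller solution that does not contain LibraryA at all -->
<ItemGroup>
  <Reference Include="LibraryA">
    <HintPath>..\..\SharedLibs\Release\LibraryA.dll</HintPath>
  </Reference>
</ItemGroup>
```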
So let's think of them as "libraries" (compiled once and then used a lot), and "core" projects (that you are changing a lot). Make solution(s) to contain your "library" projects, and (preferably) add a post-build step that copies the resulting debug/release dlls into shared library folders. Add the core projects to a new solution. Change all the core solution references to refer to your libraries via the pre-built binary dlls in your shared folders (refer to the release build dll, so that your final release works properly).
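The post-build step can be a simple copy; here is a sketch written as an MSBuild AfterBuild target in each library's project file (the SharedLibs layout is just an assumption, adjust to your own structure):

```xml
<!-- Copies the freshly built library into the shared folder that the "core"
     solutions reference via HintPath. Paths are illustrative only. -->
<Target Name="AfterBuild">
  <Copy SourceFiles="$(TargetPath)"
        DestinationFolder="..\..\SharedLibs\$(Configuration)\" />
  <Copy SourceFiles="$(TargetDir)$(TargetName).pdb"
        DestinationFolder="..\..\SharedLibs\$(Configuration)\"
        Condition="Exists('$(TargetDir)$(TargetName).pdb')" />
</Target>
```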
If you change a library then you need to build the library solution to rebuild it, and then use the core solution to rebuild the application(s) that depend on the library. (Thus it is a good idea to put any frequently-changed code in the core solution rather than in a library. You can move projects about at a later date when they mature, and/or make libraries into core projects if they need to be heavily modified for a while.)
[2018 edit] These days, libraries can be distributed using local nuget or npm servers, which would be a preferred option unless there is a specific reason not to adopt this approach.
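A local feed can be as simple as a shared folder registered in nuget.config; a minimal sketch (the feed name and path below are made up):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- An internal share (or server URL) acting as the team's private package feed -->
    <add key="CompanyLibraries" value="\\buildserver\nuget-feed" />
  </packageSources>
</configuration>
```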
A last option, in cases of libraries that take a while to build and which change very infrequently, is to make them "precompiled" libraries - check the final dll files in to source control, and your team members can just get the latest version and build against the libraries without needing to get or build the source code for them at all. (To update these libraries you must remember to check out the binary .dll files before building and then check them in again after rebuilding them, so you have to weigh the advantages (faster & easier day-to-day builds) against the disadvantages (a bit more effort to make changes to the libraries, larger binary files in source control).)
There's no good way to do this. The tools aren't built with that in mind. Instead, you should combine your projects together as much as you can. Even though it's the same amount of code, builds will run many times faster.
You'll have to get over the idea that separate projects are required for having good abstractions. It seems like a very clean way to enforce separation. But it's not worth the price with this toolchain. Use the language features instead; classes, namespaces, and so on.
Three years after your question, our team is still facing the same problem - not compile time, but the fact that there is no good way to split what is logically separate into separate solutions.
We ended up building our own tool in the form of soldr (open source), an inter-solution build tool. It can optionally leverage nuget to manage your code base, or work on its own. The idea is to split your work into as many solutions (.sln's) as makes sense logically. In our case we have one for our in-house framework, then one sln for each of our back-end libraries, one for each product, etc. Each solution has a "components" directory (similar to nuget's "packages" dir) where we store the actual library files (DLLs, etc.) that are referenced by the current solution. These DLLs have to be updated from a fresh build of the dependency for the target sln to see a new version.
Instead of manual labor we then use our aforementioned build tool. Using very simple rules and conventions it automatically infers the inter-solution dependencies. It can then build the entire dependency graph correctly (or generate an MSBuild file to do so), at each step building and copying the outputs to the components directory of the next target. We also added features for filtering what will be built (e.g. build only the direct dependencies), printing dependency graphs, running unit tests, etc.
Update: To leverage nuget we've added support for generating nuspec files with automatically inferred dependencies.
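For reference, a generated nuspec is not much more than the package metadata plus the inferred dependencies; a minimal hand-written equivalent might look like this (all ids, versions and paths are invented for illustration):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.BackendLibrary</id>
    <version>1.4.0</version>
    <authors>MyCompany</authors>
    <description>One back-end library, packaged for the product solutions.</description>
    <dependencies>
      <!-- Inter-solution dependency inferred from the library's project references -->
      <dependency id="MyCompany.Framework" version="1.4.0" />
    </dependencies>
  </metadata>
  <files>
    <file src="bin\Release\MyCompany.BackendLibrary.dll" target="lib\net35" />
  </files>
</package>
```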
We have 500+ projects in a solution. There are benefits to keeping every project in a single solution, especially when a base project is changed that affects all other projects (e.g., helper classes).
We found the best way to continue was to use the vsFunnel extension for Visual Studio. It has the ability to load a solution WITHOUT the projects. The projects are still listed in the solution but not actually loaded until needed. As a project is opened via the UI it is loaded on demand.
The real trick is to enable "Load Dependencies" from the vsFunnel UI. As you open a project, it loads all the dependent projects, etc. which makes working on a target project much easier.
For example, in our 500+ projects, often we are focused on a single application, and it might have 20 dependent projects. Only those are loaded, NOT the entire 500+.
Hint: Choose Load Dependencies AND Load None - then when you open your target project in the UI, all related projects are loaded.
This has turned out to be an enormous time saver (plus an SSD drive!)
You should have separation of concerns, especially when grouping projects within a Visual Studio solution. I recently ran into this problem at work, where I had to create a bunch of unit tests and the original test created by the developer was included in the solution. At first I thought, O.K., I'll just put my tests in here b/c I know it will work and I won't have to worry about getting the dependencies right. But later on, after I had added around 20 test projects (one for each unit), I realized the solution was building incredibly slowly.
I then decided to create a solution for EACH set of tests rather than putting them all in one spot. This also helps organize your code better so it's easier to find. For example, I created the folders 'Test > Unit > MyUnitTest' and 'Test > Integration > MyIntegrationTest.' Not only does that make it easy to find things later on, it also helps keep your builds fast. Also, if there are a bunch of people working on the code at once, each developer can change project settings and configurations without messing up the others.
The general rule is to have only 7 items or less grouped together in one certain area. If you have more than 7 items then chances are there is another sub-category you could create to make it more abstract and easier for the human brain to comprehend all the complicated details at a glance (especially for people new to the system or if you are coming back to the project months or even years later).
Another option is to have a single all-encompassing solution, and create different build configurations.
At the top of the window is a combobox where you can choose Debug or Release build configuration. Drop this down and choose the "Configuration Manager..." option.
Under "Active Solution Configuration", drop down the combo box and choose <New...>. Create a new configuration (for example, called "CoreOnly") and select "Copy settings from" as "Debug". Untick the "Create new project configurations" option.
Now, your "CoreOnly" build configuration will appear in the window. It shows a list of all the projects. For any projects that you don't want to build, untick the "Build" checkbox in the right hand column. Close the dialog.
Now, to build all the projects, choose "Debug" from the configuration dropdown, and Build as normal. When all your projects have built, you can drop down to only building the "core" projects by switching to the CoreOnly configuration. As long as you remember to build the Debug (all projects) build when you edit code that is not in any of the core projects, you'll be fine, and your CoreOnly builds will be much faster.
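Behind the scenes this just changes which projects get a "Build" entry for that solution configuration in the .sln file; a rough sketch of the result (GUIDs shortened, and the platform names depend on your projects):

```
GlobalSection(ProjectConfigurationPlatforms) = postSolution
    {AAAA-...}.CoreOnly|Any CPU.ActiveCfg = Debug|Any CPU
    {AAAA-...}.CoreOnly|Any CPU.Build.0 = Debug|Any CPU
    {BBBB-...}.CoreOnly|Any CPU.ActiveCfg = Debug|Any CPU
EndGlobalSection
```

The first project (ticked) has a Build.0 line, so it builds in CoreOnly; the second has only ActiveCfg, so it is skipped and whatever is already in its output folder gets used.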
The down sides of this approach are that the solution can be very slow to open (although in Visual Studio 2008, Visual Studio 2010 and Visual Studio 2012 this is a lot better than it was in Visual Studio 2005, so you may not have a problem with it) and that if a project isn't enabled in your configuration, it won't ever be rebuilt and so your build (or the running application) can fall apart - you have to remember to switch back to the regular "build everything" configuration if you think you have made changes in (or affecting) the disabled projects.
Besides the soldr and Funnel tools mentioned here, I can recommend using Solution Load Manager. Yes, an SSD helps, but VS is buggy: it crashes, becomes unresponsive, and other problems emerge when working on 100+ projects. Keeping it all together for major refactoring, rebuilding and debugging is a must-have, but working on smaller subsets of projects on a daily basis will greatly improve performance and satisfaction.
ps. I would like to know if someone has made a comparison of Funnel vs. Solution Load Manager?
Years later...
Look into loading a subset of projects using solution filters, which also have MSBuild support and, by extension, you can probably tie this into Azure DevOps.
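For the A/D example from the question, a solution filter is just a small JSON file (.slnf) that sits next to the full .sln; a sketch with invented file and project names (solution filters require VS 2019 or later and a correspondingly recent MSBuild):

```json
{
  "solution": {
    "path": "Full.sln",
    "projects": [
      "ProjectA\\ProjectA.csproj",
      "ProjectD\\ProjectD.csproj"
    ]
  }
}
```

Opening the .slnf loads only the listed projects, the full .sln stays untouched for complete builds, and MSBuild can build the filter file directly.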