Visual Studio solutions with large numbers of projects

I see developers frequently developing against a solution containing all the projects (27) in a system. This raises problems of build duration (5 minutes) and Visual Studio performance (such as IntelliSense latency), plus it doesn't force developers to think about project dependencies (until they hit a circular reference issue).

Is it a good idea to break down a solution like this into smaller solutions that are compilable and testable independent of the "mother" solution? Are there any potential pitfalls with this approach?


Let me restate your questions:

Is it a good idea to break down a solution like this into smaller solutions?

The MSDN article you linked makes quite a clear statement:

Important: Unless you have very good reasons to use a multi-solution model, you should avoid this and adopt either a single solution model, or in larger systems, a partitioned single solution model. These are simpler to work with and offer a number of significant advantages over the multi-solution model, which are discussed in the following sections.

Moreover, the article recommends that you always have a single "master" solution file in your build process.

Are there any potential pitfalls with this approach?

You will have to deal with the following issues (which can actually be quite hard to manage; same source as the above quote):

The multi-solution model suffers from the following disadvantages:

  • You are forced to use file references when you need to reference an assembly generated by a project in a separate solution. These (unlike project references) do not automatically set up build dependencies. This means that you must address the issue of solution build order within the system build script. While this can be managed, it adds extra complexity to the build process.
  • You are also forced to reference a specific configuration build of a DLL (for example, the Release or Debug version). Project references automatically manage this and reference the currently active configuration in Visual Studio .NET.
  • When you work with single solutions, you can get the latest code (perhaps in other projects) developed by other team members to perform local integration testing. You can confirm that nothing breaks before you check your code back into VSS ready for the next system build. In a multi-solution system this is much harder to do, because you can test your solution against other solutions only by using the results of the previous system build.
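
To make the first two points concrete, here is a minimal sketch (with hypothetical project and path names) of the two reference styles inside a .csproj file. A project reference tracks build order and the active configuration automatically; a file reference pins a specific DLL on disk:

    <!-- Hypothetical fragment of Consumer.csproj -->

    <!-- Project reference: build order and the Debug/Release choice are
         handled by Visual Studio, but this only works when the referenced
         project is part of the same solution. -->
    <ItemGroup>
      <ProjectReference Include="..\Reporting\Reporting.csproj" />
    </ItemGroup>

    <!-- File reference: required across solutions. It pins the output of
         one specific configuration, and build order must now be managed
         by the system build script. -->
    <ItemGroup>
      <Reference Include="Reporting">
        <HintPath>..\Reporting\bin\Release\Reporting.dll</HintPath>
      </Reference>
    </ItemGroup>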


Visual Studio 2010 Ultimate has several tools to help you better understand and manage dependencies in existing code:

  • Dependency graphs and Architecture Explorer
  • Sequence diagrams
  • Layer diagrams and validation

For more info, see Exploring Existing Code. The Visualization and Modeling Feature Pack provides dependency graph support for C++ and C code.


We have a solution of ~250 projects.

It is workable after installing a patch for Visual Studio 2005 that lets it deal quickly with extremely large solutions [TODO add link].

We also have smaller solutions for teams, each with a selection of their favorite projects, but every project added to them must also be added to the master solution, and many people prefer to work with the master anyway.

We remapped the F7 shortcut (build) to build the startup project rather than the whole solution. That works better.

Solution folders seem to address the problem of finding things well.

Dependencies are only added to top-level projects (EXEs and DLLs). With static libraries, if A is a dependency of B and B is a dependency of C, A often does not need to be a dependency of C for things to compile and run correctly. Handled this way, circular dependencies are acceptable to the compiler (although very bad for mental health), as sketched below.
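
A minimal sketch of this arrangement (hypothetical project names, VS2010-style .vcxproj fragment): the top-level EXE project lists every static-library project it transitively needs, while the library projects themselves reference nothing, since static libraries are not actually linked until the final EXE/DLL link.

    <!-- Hypothetical fragment of App.vcxproj (the top-level EXE). -->
    <!-- LibA, LibB and LibC are static-library projects; only the EXE
         lists them, because nothing is linked until this project. -->
    <ItemGroup>
      <ProjectReference Include="..\LibA\LibA.vcxproj" />
      <ProjectReference Include="..\LibB\LibB.vcxproj" />
      <ProjectReference Include="..\LibC\LibC.vcxproj" />
    </ItemGroup>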

I support having fewer libraries, even to the extent of having one library named "library". I see no significant advantage in optimizing process memory footprint by bringing in "only what it needs"; the linker should do that anyway at the object-file level.


The only time I really see a need for multiple solutions is functional isolation. The libraries required for a Windows service may differ from those for a web site. Each solution should be optimized to produce a single executable or web site, IMO. That enhances separation of concerns and makes it easy to rebuild a functional piece of the application without building everything else along with it.
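
For example, a layout along these lines (names are hypothetical) keeps one solution per deliverable while sharing the library projects, with a master solution left for the build server:

    src\Common\Common.csproj      shared class library
    src\Service\Service.csproj    Windows service executable
    src\Web\Web.csproj            web site

    Service.sln                   contains Service + Common
    Web.sln                       contains Web + Common
    Master.sln                    contains all projects (build server)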


It certainly has its advantages and disadvantages. Breaking a solution into multiple projects helps you find what you're looking for easily: if you are looking for something about reporting, you go to the reporting project. It also allows big teams to split the work so that nobody breaks someone else's code.

This raises problems of build duration

You can avoid that by building only the projects you modified and letting the CI server do the entire build.
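
As a sketch (project and solution names are hypothetical), the split between the local loop and the CI build looks something like this:

    REM Local developer loop: build only the project you changed.
    msbuild src\Reporting\Reporting.csproj /m

    REM CI server: build the entire master solution.
    msbuild Master.sln /m /p:Configuration=Release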


IntelliSense performance should be quite a bit better in VS2010 than in VS2008. Also, why would you need to rebuild the whole solution all the time? That would only happen if you change something near the root of the dependency tree; otherwise you just build the project you're currently working on.

I've always found it helpful to have everything in one solution because I could navigate the whole code base easily.


Is it a good idea to break down a solution like this into smaller solutions that are compilable and testable independent of the "mother" solution? Are there any potential pitfalls with this approach?

Yes, it is a good idea because:

  • You don't want VS to slow down on a solution with dozens of VS projects.
  • It can be interesting to focus on only a portion of the code; this enforces the notion of code locality, which is a good thing.

But the important thing to strive for first is to have as few VS projects/assemblies as possible. My company published two free white books that explain the pros and cons of using assemblies, VS projects, and namespaces to partition a large code base:

  • Partitioning code base through .NET assemblies and Visual Studio projects (8 pages)
  • Defining .NET Components with Namespaces (7 pages)

The first white book also explains that VS is pretty slow when working with a solution containing dozens of projects, and shows tricks for remedying this slowness.
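
As a minimal C# sketch of that idea (names are hypothetical, not taken from the white books): logical components live in separate namespaces inside one physical assembly, so consumers still see the partitioning without the per-project build and IntelliSense cost:

    // Library.csproj compiles to a single assembly, Library.dll.
    // Each logical component gets its own namespace instead of its
    // own VS project.
    namespace Library.Reporting
    {
        public class ReportBuilder
        {
            public string Build() { return "report"; }
        }
    }

    namespace Library.Persistence
    {
        public class OrderRepository
        {
            public void Save(string order) { /* persist the order */ }
        }
    }

    // A consumer references one assembly but keeps the logical split:
    //     using Library.Reporting;
    //     var report = new ReportBuilder().Build();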
