Like threading issues? Bottlenecks? Memory problems?
It depends on the framework that you use, but in general I would say no. You should configure your IoC container on one thread, and then have all other threads only read from it. If you are worried about bottlenecks or memory footprint, you could roll your own, but most of the well-known solutions do not have these issues either.
EDIT: I figured I would update this to provide more information and to clarify my answer.
This is my point of view, but a DI container's main purposes are flexibility and testability. On the flexibility side, you do not have references to concrete implementations in your business logic. For example, if you needed a scheduling service in your logic without DI or some factory, you would need to do something like this:
SchedulingService schedulingService = new SchedulingService();
But with a DI container or factory, you would do something like:
ISchedulingService schedulingService = IoC.GetInstance<ISchedulingService>();
And as such, you are programming to an interface or base class instead of a concrete type (rather than newing up something predefined), which means that you can inject a modified version of that service after compile time. This leads to the second point, which is testability. What would be really nice is if the business logic class did not use either of the lines above at all; it would just call the methods or properties of its dependency as if it already knew about them. The way that this is accomplished is by injecting the dependency into the object at construction time:
public UpdateDeployment(ISchedulingService scheduler)
{
    _scheduler = scheduler;
}
which could then be mocked or faked at test time. But in your business logic you could call an overloaded default constructor for the business object that grabs the dependency from a DI container and then calls the constructor above:
public UpdateDeployment() : this (IoC.GetInstance<ISchedulingService>()) {}
Now, with all that said, from the research I have done, a DI container is a hashtable with some logic wrapped around it to aid in its configuration (actually it is a hashtable of hashtables, but that is an implementation detail). For more information and clarity on this, look at the CommonServiceLocator project on CodePlex, which defines an interface that all the major DI containers support. From that point of view, the memory footprint and cycle time should be small, but you have to think about how you are going to use it.
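To make the "hashtable with some logic wrapped around it" point concrete, here is a minimal sketch of such a container. The MiniContainer name and its Register/GetInstance methods are invented for illustration; this is not the CommonServiceLocator interface. It also shows why "configure on one thread, then only read" works: once registration stops, resolving is nothing more than a dictionary lookup.

using System;
using System.Collections.Generic;

// Minimal illustration of a DI container as a hashtable with some logic
// wrapped around it. All names here are invented for this sketch.
public class MiniContainer
{
    // The "hashtable": maps a service type to a factory that builds it.
    private readonly Dictionary<Type, Func<object>> _registrations =
        new Dictionary<Type, Func<object>>();

    // Configuration: called on the startup thread only.
    public void Register<TService>(Func<TService> factory) where TService : class
    {
        _registrations[typeof(TService)] = factory;
    }

    // Resolution: after configuration this is just a read from the dictionary.
    public TService GetInstance<TService>() where TService : class
    {
        return (TService)_registrations[typeof(TService)]();
    }
}

// Usage, mirroring the IoC.GetInstance<ISchedulingService>() call above:
// var container = new MiniContainer();
// container.Register<ISchedulingService>(() => new SchedulingService());
// ISchedulingService schedulingService = container.GetInstance<ISchedulingService>();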
In the projects that I have worked on, I have used a DI container for two main reasons. The first is configuration information. One example is connection strings; I know that you could put them in your web.config, but this limits your testing of the business logic because A) you have to have the entire web stack in play and B) you have to modify the file between unit tests and integration tests. In the projects that I have worked on, I have injected the connection string from a DI container that is configured in the Global.asax, which basically makes it a singleton, as stated in other answers. This also works for other reference types (I know a string is a reference type, but a lot of people use them, and think of them, as a souped-up value type), but you need to make sure that the business logic is not changing their state. In other words, it should only use property getters or call methods that do not have any side effects on the dependency itself.
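As a rough sketch of what that Global.asax wiring might look like, building on the MiniContainer above; the static IoC holder, the "Main" connection string name, and the choice to register the raw string are all assumptions made for this example:

using System;
using System.Configuration;

// Hypothetical static holder so business code can call IoC.GetInstance<T>()
// as in the constructor example above.
public static class IoC
{
    // Configured once in Application_Start, read-only afterwards.
    public static readonly MiniContainer Container = new MiniContainer();

    public static T GetInstance<T>() where T : class
    {
        return Container.GetInstance<T>();
    }
}

// Global.asax.cs: this runs once at application startup, so the registered
// value is effectively a singleton for the life of the application.
public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        string connectionString =
            ConfigurationManager.ConnectionStrings["Main"].ConnectionString;

        // Business logic asks the container for the string instead of reading
        // web.config directly, so tests can register a different value.
        IoC.Container.Register<string>(() => connectionString);
    }
}

A unit test can then call IoC.Container.Register<string>(() => testConnectionString) to point the business logic at a test database without loading the web stack or touching a config file.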
Which leads to the second reason that I have used DI: to provide service objects. As in the logic above, the SchedulingService holds the logic for the domain I am working on. It provides a service that will manipulate or provide data that the business logic can use, but it does not change itself. Any change to its own state is handled either at construction time or by the DI container (which would need to worry about reference counting and locking, which means it shouldn't do it either). This solves any threading issues, because the data inside the service is immutable from the business logic's point of view.
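A sketch of what such an immutable service object could look like; the interval-based members are invented purely for illustration:

using System;

// The service's state is fixed at construction time, and its methods only
// return data; they never modify the service itself. That makes it safe for
// many request threads to share one instance.
public interface ISchedulingService
{
    DateTime GetNextRunTime(DateTime lastRun);
}

public sealed class SchedulingService : ISchedulingService
{
    private readonly TimeSpan _interval; // set once, never changed

    public SchedulingService(TimeSpan interval)
    {
        _interval = interval;
    }

    // No side effects on the service itself.
    public DateTime GetNextRunTime(DateTime lastRun)
    {
        return lastRun + _interval;
    }
}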
For an ASP.NET application, or any application that has many clients accessing business logic components that in turn access other dependencies, DI could help with the memory footprint, but it could also hurt it. For example, if I have a service that needs to be instantiated on every request (even though that service is immutable), then the memory footprint is the size of the object times the number of concurrent sessions. But if I have a service that is called very infrequently and is created at application startup, the memory is allocated up front and just sits there waiting to be used. I have seen the first more than the second in the code I have worked on.
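Continuing the sketches above, the two situations correspond to two registration styles (the types and the five-minute interval are illustrative only):

using System;

public static class CompositionRoot
{
    public static void ConfigurePerRequest(MiniContainer container)
    {
        // Per-call style: every GetInstance call allocates a new instance, so
        // the footprint is roughly (instance size) x (concurrent sessions).
        container.Register<ISchedulingService>(
            () => new SchedulingService(TimeSpan.FromMinutes(5)));
    }

    public static void ConfigureSingleton(MiniContainer container)
    {
        // Singleton style: one instance allocated at startup and shared, which
        // is cheaper under load but sits in memory even if rarely used.
        SchedulingService shared = new SchedulingService(TimeSpan.FromMinutes(5));
        container.Register<ISchedulingService>(() => shared);
    }
}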
Hope that helps.
An important aspect is defining the correct object lifetime (singleton vs. prototype; I never know what the correct technical term is), or you might get some nasty threading issues. As for bottlenecks and memory problems, these aren't really specific to DI frameworks, or at least I have never had any such problem that was related to a DI framework.
All those things are concerns, but that's true regardless of whether you use DI or not.
If you're injecting singletons into objects, it's important to ensure that any of their shared state is thread safe. A project that I worked on had injected validation classes in the web tier. Someone decided to cache a value in the object instead of going back to the database each time, without thinking about thread safety. We had an issue with the wrong state being shown in the UI, which was fixed once we eliminated the cached data members.
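A reconstruction of the kind of bug being described, with invented validator and repository types:

// An injected singleton validator in the web tier. The cached field is shared
// by every request, so one user's value can leak into another user's page.
public interface IStatusRepository
{
    string GetStatus(int orderId);
}

public class OrderValidator
{
    private readonly IStatusRepository _repository;

    // PROBLEM: shared mutable state on an object with singleton lifetime.
    private string _cachedStatus;

    public OrderValidator(IStatusRepository repository)
    {
        _repository = repository;
    }

    public bool IsValid(int orderId)
    {
        if (_cachedStatus == null)
        {
            _cachedStatus = _repository.GetStatus(orderId); // racy, and wrong across orders
        }
        return _cachedStatus == "Open";
    }

    // FIX (what removing the data member amounts to): keep the validator
    // stateless and ask the repository each time, or cache per request
    // instead of per object.
    public bool IsValidStateless(int orderId)
    {
        return _repository.GetStatus(orderId) == "Open";
    }
}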