
What are the downsides to using Dependency Injection? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 11 years ago.

I'm trying to introduce DI as a pattern here at work and one of our lead developers would like to know: What - if any - are the downsides to using the Dependency Injection pattern?

Note I'm looking here for an - if possible - exhaustive list, not a subjective discussion on the topic.


Clarification: I'm talking about the Dependency Injection pattern (see this article by Martin Fowler), not a specific framework, whether XML-based (such as Spring) or code-based (such as Guice), or "self-rolled".


Edit: Some great further discussion / ranting / debate going on at /r/programming here.


A couple of points:

  • DI increases complexity, usually by increasing the number of classes since responsibilities are separated more, which is not always beneficial
  • Your code will be (somewhat) coupled to the dependency injection framework you use (or more generally how you decide to implement the DI pattern)
  • DI containers or approaches that perform type resolving generally incur a slight runtime penalty (very negligible, but it's there)

Generally, the benefit of decoupling makes each task simpler to read and understand, but increases the complexity of orchestrating the more complex tasks.
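
To make the first bullet point above concrete, here is a minimal Java sketch (all names hypothetical): what could have been a single direct call becomes an interface, an implementation, and a composition root that does the wiring.

// Without DI, ReportGenerator could simply call "new SmtpMailer().send(...)".
// With constructor injection, the same behaviour is spread over three pieces.

interface Mailer {
    void send(String message);
}

class SmtpMailer implements Mailer {
    @Override
    public void send(String message) {
        System.out.println("SMTP send: " + message); // stand-in for real SMTP work
    }
}

class ReportGenerator {
    private final Mailer mailer;

    // The dependency is injected; ReportGenerator no longer chooses its Mailer.
    ReportGenerator(Mailer mailer) {
        this.mailer = mailer;
    }

    void run() {
        mailer.send("report finished");
    }
}

public class Wiring {
    public static void main(String[] args) {
        // The orchestration complexity lives here, at the composition root.
        new ReportGenerator(new SmtpMailer()).run();
    }
}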


The same basic problem you often get with object oriented programming, style rules and just about everything else. It's possible - very common, in fact - to do too much abstraction, and to add too much indirection, and to generally apply good techniques excessively and in the wrong places.

Every pattern or other construct you apply brings complexity. Abstraction and indirection scatter information around, sometimes moving irrelevant detail out of the way, but equally sometimes making it harder to understand exactly what's happening. Every rule you apply brings inflexibility, ruling out options that might just be the best approach.

The point is to write code that does the job and is robust, readable and maintainable. You are a software developer - not an ivory tower builder.

Relevant Links

http://thedailywtf.com/Articles/The_Inner-Platform_Effect.aspx

http://www.joelonsoftware.com/articles/fog0000000018.html


Probably the simplest form of dependency injection (don't laugh) is a parameter. The dependent code is dependent on data, and that data is injected by means of passing the parameter.

Yes, it's silly and it doesn't address the object-oriented point of dependency injection, but a functional programmer will tell you that (if you have first class functions) this is the only kind of dependency injection you need. The point here is to take a trivial example, and show the potential problems.

Let's take this simple traditional function - C++ syntax isn't significant here, but I have to spell it somehow...

#include <iostream>

void Say_Hello_World ()
{
  std::cout << "Hello World" << std::endl;
}

I have a dependency I want to extract out and inject - the text "Hello World". Easy enough...

void Say_Something (const char *p_text)
{
  std::cout << p_text << std::endl;
}

How is that more inflexible than the original? Well, what if I decide that the output should be Unicode? I probably want to switch from std::cout to std::wcout. But that means my strings then have to be of wchar_t, not of char. Either every caller has to be changed, or (more reasonably) the old implementation gets replaced with an adaptor that translates the string and calls the new implementation.

That's maintenance work right there that wouldn't be needed if we'd kept the original.

And if it seems trivial, take a look at this real-world function from the Win32 API...

http://msdn.microsoft.com/en-us/library/ms632680%28v=vs.85%29.aspx

That's 12 "dependencies" to deal with. For example, if screen resolutions get really huge, maybe we'll need 64-bit co-ordinate values - and another version of CreateWindowEx. And yes, there's already an older version still hanging around, that presumably gets mapped to the newer version behind the scenes...

http://msdn.microsoft.com/en-us/library/ms632679%28v=vs.85%29.aspx

Those "dependencies" aren't just a problem for the original developer - everyone who uses that interface has to look up what the dependencies are, how they are specified, and what they mean, and work out what to do for their application. This is where the words "sensible defaults" can make life much simpler.

Object-oriented dependency injection is no different in principle. Writing a class is an overhead, both in source-code text and in developer time, and if that class is written to supply dependencies according to some dependent object's specifications, then the dependent object is locked into supporting that interface, even if there's a need to replace the implementation of that object.

None of this should be read as claiming that dependency injection is bad - far from it. But any good technique can be applied excessively and in the wrong place. Just as not every string needs to be extracted out and turned into a parameter, not every low-level behaviour needs to be extracted out from high-level objects and turned into an injectable dependency.


Here's my own initial reaction: Basically the same downsides of any pattern.

  • it takes time to learn
  • if misunderstood it can lead to more harm than good
  • if taken to an extreme it can be more work than would justify the benefit


The biggest "downside" to Inversion of Control (not quite DI, but close enough) is that it tends to remove having a single point to look at an overview of an algorithm. That's basically what happens when you have decoupled code, though - the ability to look in one place is an artifact of tight coupling.


I have been using Guice (Java DI framework) extensively for the past 6 months. While overall I think it is great (especially from a testing perspective), there are certain downsides. Most notably:

  • Code can become harder to understand. Dependency injection can be used in very... creative... ways. For example, I just came across some code that used custom annotations to inject particular IOStreams (e.g. @Server1Stream, @Server2Stream); a sketch of this style follows after this list. While this does work, and I'll admit it has a certain elegance, it makes understanding the Guice injections a prerequisite to understanding the code.
  • Higher learning curve when learning a project. This is related to point 1. In order to understand how a project that uses dependency injection works, you need to understand both the dependency injection pattern and the specific framework. When I started at my current job I spent quite a few confused hours grokking what Guice was doing behind the scenes.
  • Constructors become large, although this can be largely resolved with a default constructor or a factory.
  • Errors can be obfuscated. My most recent example of this was a collision between two flag names. Guice swallowed the error silently and one of my flags wasn't initialized.
  • Errors are pushed to run-time. If you configure your Guice module incorrectly (circular reference, bad binding, ...) most of the errors are not uncovered during compile-time. Instead, the errors are exposed when the program is actually run.
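
To give a flavour of the first point, here is a minimal sketch of the custom-annotation style, assuming Guice is on the classpath (the @Server1Stream annotation and class names here are illustrative, not the actual code from that project):

import com.google.inject.AbstractModule;
import com.google.inject.BindingAnnotation;
import com.google.inject.Inject;

import java.io.InputStream;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// A custom binding annotation in the spirit of @Server1Stream.
@BindingAnnotation
@Retention(RetentionPolicy.RUNTIME)
@interface Server1Stream {}

class StreamConsumer {
    private final InputStream stream;

    // Which stream arrives here is decided entirely by the module below,
    // so a reader must know the Guice configuration to follow the code.
    @Inject
    StreamConsumer(@Server1Stream InputStream stream) {
        this.stream = stream;
    }
}

class StreamModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(InputStream.class)
            .annotatedWith(Server1Stream.class)
            .toInstance(System.in); // stand-in for a real server connection
    }
}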

Now that I've complained, let me say that I will continue to (willingly) use Guice in my current project, and most likely my next. Dependency injection is a great and incredibly powerful pattern. But it definitely can be confusing, and you will almost certainly spend some time cursing at whatever dependency injection framework you choose.

Also, I agree with other posters that dependency injection can be overused.


I don't think such a list exists; however, try to read these articles:

  • DI can obscure the code (if you're not working with a good IDE)

  • Misusing IoC can lead to bad code according to Uncle Bob.

  • Need to look out for over-engineering and creating unnecessary versatility.


Code without any DI runs the well-known risk of getting tangled up into spaghetti code - some symptoms are that the classes and methods are too large, do too much, and cannot be easily changed, broken down, refactored, or tested.

Code with DI used a lot can be Ravioli code where each small class is like an individual ravioli nugget - it does one small thing and the single responsibility principle is adhered to, which is good. But looking at classes on their own it's hard to see what the system as a whole does, since this depends on how all these many small parts fit together, which is hard to see. It just looks like a big pile of small things.

By avoiding the spaghetti complexity of big bits of coupled code within a large class, you run the risk of another kind of complexity, where there are lots of simple little classes and the interactions between them are complex.

I don't think that this is a fatal downside - DI is still very much worthwhile. Some degree of ravioli style with small classes that do just one thing is probably good. Even in excess, I don't think it is as bad as spaghetti code. But being aware that it can be taken too far is the first step to avoiding it. Follow the links for discussion of how to avoid it.


If you have a home-grown solution, the dependencies are right in your face in the constructor. Or maybe as method parameters, which again are not too hard to spot. Framework-managed dependencies, though, if taken to extremes, can begin to appear like magic.

However, having too many dependencies in too many classes is a clear sign that your class structure is screwed up. So, in a way, dependency injection (home-grown or framework-managed) can help bring glaring design issues out that might otherwise remain hidden, lurking in the dark.
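
As a hedged illustration of that second point (all names hypothetical): an explicit constructor makes the sprawl impossible to miss, where scattered new calls inside methods would keep it hidden.

// Six collaborators in one constructor is usually a hint that this class
// has more than one responsibility and wants to be split.
class OrderProcessor {
    OrderProcessor(InventoryService inventory,
                   PaymentGateway payments,
                   TaxCalculator taxes,
                   EmailSender email,
                   AuditLog audit,
                   FeatureFlags flags) {
        // wiring omitted; the point is the size of the signature
    }
}

// Placeholder collaborator types so the sketch is self-contained.
interface InventoryService {}
interface PaymentGateway {}
interface TaxCalculator {}
interface EmailSender {}
interface AuditLog {}
interface FeatureFlags {}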


To illustrate the second point better, here's an excerpt from this article (original source), which I wholeheartedly believe is the fundamental problem in building any system, not just computer systems.

Suppose you want to design a college campus. You must delegate some of the design to the students and professors, otherwise the Physics building won't work well for the physics people. No architect knows enough about what physics people need to do it all themselves. But you can't delegate the design of every room to its occupants, because then you'll get a giant pile of rubble.

How can you distribute responsibility for design through all levels of a large hierarchy, while still maintaining consistency and harmony of overall design? This is the architectural design problem Alexander is trying to solve, but it's also a fundamental problem of computer systems development.

Does DI solve this problem? No. But it does help you see clearly if you're trying to delegate the responsibility of designing every room to its occupants.


One thing that makes me squirm a little with DI is the assumption that all injected objects are cheap to instantiate and produce no side effects -OR- the dependency is used so frequently that it outweighs any associated instantiation cost.

Where this can be significant is when a dependency is not frequently used within a consuming class, such as an IExceptionLogHandlerService. Obviously, a service like this is invoked (hopefully :)) rarely within the class - presumably only on exceptions needing to be logged; yet the canonical constructor-injection pattern...

Public Class MyService
    Private ReadOnly mExLogHandlerService As IExceptionLogHandlerService

    Public Sub New(exLogHandlerService As IExceptionLogHandlerService)
        Me.mExLogHandlerService = exLogHandlerService
    End Sub

    ...
End Class

...requires that a "live" instance of this service be provided, damned the cost/side-effects required to get it there. Not that it likely would, but what if constructing this dependency instance involved a service/database hit, or config file look-ups, or locked a resource until disposed of? If this service was instead constructed as-needed, service-located, or factory-generated (all having problems their own), then you would be taking the construction cost only when necessary.

Now, it is a generally accepted software design principle that constructing an object should be cheap and should not produce side-effects. And while that's a nice notion, it isn't always the case. Using typical constructor injection, however, basically demands that this is the case. Meaning that when you create an implementation of a dependency, you have to design it with DI in mind. Maybe you would have made object construction more costly to obtain benefits elsewhere, but if this implementation is going to be injected, it will likely force you to reconsider that design.

By the way, certain techniques can mitigate this exact issue by allowing lazy loading of injected dependencies, e.g. providing a class a Lazy<IService> instance as the dependency. This would change your dependent objects' constructors and make them even more cognizant of implementation details such as object-construction expense, which is arguably not desirable either.
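
A sketch of that mitigation in Java terms, assuming a JSR-330 style Provider<T> (as Guice and similar containers supply); the service and class names are hypothetical:

import javax.inject.Inject;
import javax.inject.Provider;

// Hypothetical service that is expensive to construct.
interface ExceptionLogHandlerService {
    void log(Exception e);
}

class Worker {
    private final Provider<ExceptionLogHandlerService> logHandler;

    // A Provider is injected instead of a live instance, so the construction
    // cost is only paid on the (hopefully rare) error path.
    @Inject
    Worker(Provider<ExceptionLogHandlerService> logHandler) {
        this.logHandler = logHandler;
    }

    void doWork() {
        try {
            // ... normal work that usually succeeds ...
        } catch (Exception e) {
            logHandler.get().log(e); // dependency obtained only now
        }
    }
}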


This is more of a nitpick. But one of the downsides of dependency injection is that it makes it a little harder for development tools to reason about and navigate code.

Specifically, if you Control-Click/Command-Click on a method invocation in code, it'll take you to the method declaration on an interface instead of the concrete implementation.

This is really more of a downside of loosely coupled code (code that's designed around interfaces), and it applies even if you don't use dependency injection (i.e., even if you simply use factories). But the advent of dependency injection is what really brought loosely coupled code to the masses, so I thought I'd mention it.

Also, the benefits of loosely coupled code far outweigh this, thus I call it a nitpick. Though I've worked long enough to know that this is the sort of push-back you may get if you attempt to introduce dependency injection.

In fact, I'd venture to guess that for every "downside" you can find for dependency injection, you'll find many upsides that far outweigh it.


Constructor-based dependency injection (without the aid of magical "frameworks") is a clean and beneficial way to structure OO code. In the best codebases I've seen, over years spent with other ex-colleagues of Martin Fowler, I started to notice that most good classes written this way end up having a single doSomething method.

The major downside, then, is that once you realise it's all just a kludgy long-handed OO way of writing closures as classes in order to get the benefits of functional programming, your motivation to write OO code can quickly evaporate.
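
A hedged Java sketch of that observation (hypothetical names): a constructor-injected, single-method class is essentially a closure written out by hand.

import java.util.function.Supplier;

// The long-hand OO form: a dependency captured by the constructor,
// one doSomething method that uses it.
class GreetingPrinter {
    private final Supplier<String> nameSource;

    GreetingPrinter(Supplier<String> nameSource) {
        this.nameSource = nameSource;
    }

    void doSomething() {
        System.out.println("Hello, " + nameSource.get());
    }
}

class ClosureComparison {
    public static void main(String[] args) {
        Supplier<String> nameSource = () -> "world";

        // OO spelling: a class captures its dependency via the constructor.
        new GreetingPrinter(nameSource).doSomething();

        // Functional spelling: a closure captures the same dependency directly.
        Runnable doSomething = () -> System.out.println("Hello, " + nameSource.get());
        doSomething.run();
    }
}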


The illusion that you've decoupled your code just by implementing dependency injection without ACTUALLY decoupling it. I think that's the most dangerous thing about DI.


I find that constructor injection can lead to big ugly constructors (and I use it throughout my codebase - perhaps my objects are too granular?). Also, sometimes with constructor injection I end up with horrible circular dependencies (although this is very rare), so you may find yourself having to have some kind of ready-state lifecycle with several rounds of dependency injection in a more complex system.

However, I favour constructor injection over setter injection, because once my object is constructed, I know without a doubt what state it is in, whether it is in a unit-test environment or loaded up in some IOC container. Which, in a roundabout sort of way, is saying what I feel is the main drawback of setter injection.
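
A minimal sketch of that trade-off (hypothetical names): constructor injection cannot leave the object half-wired, while setter injection always leaves a window where the dependency may still be missing.

interface TaxCalculator {
    double taxFor(double amount);
}

// Constructor injection: a PriceService cannot exist without its TaxCalculator,
// so once constructed its state is known.
class PriceService {
    private final TaxCalculator taxes;

    PriceService(TaxCalculator taxes) {
        this.taxes = taxes;
    }

    double total(double amount) {
        return amount + taxes.taxFor(amount);
    }
}

// Setter injection: nothing stops total() being called before the setter,
// so a NullPointerException (or a defensive check) is always lurking.
class PriceServiceWithSetter {
    private TaxCalculator taxes;

    void setTaxCalculator(TaxCalculator taxes) {
        this.taxes = taxes;
    }

    double total(double amount) {
        return amount + taxes.taxFor(amount);
    }
}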

(as a sidenote, I do find the whole topic quite "religious", but your mileage will vary with the level of technical zealotry in your dev team!)


If you're using DI without an IOC container, the biggest downside is you quickly see how many dependencies your code actually has and how tightly coupled everything really is. ("But I thought it was a good design!") The natural progression is to move towards an IOC container which can take a little bit of time to learn and implement (not nearly as bad as the WPF learning curve, but it's not free either). The final downside is some developers will begin to write honest to goodness unit tests and it will take them time to figure it out. Devs who could previously crank something out in half a day will suddenly spend two days trying to figure out how to mock all of their dependencies.
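
For the unit-testing point, a hedged sketch with a hand-rolled fake (no mocking framework assumed, all names hypothetical): constructor injection is precisely what lets the test swap the dependency.

// Production code: the clock is injected, so tests can control time.
interface Clock {
    long now();
}

class SessionChecker {
    private final Clock clock;
    private final long expiresAtMillis;

    SessionChecker(Clock clock, long expiresAtMillis) {
        this.clock = clock;
        this.expiresAtMillis = expiresAtMillis;
    }

    boolean isExpired() {
        return clock.now() > expiresAtMillis;
    }
}

// "Test": a fixed fake clock is injected instead of the real system clock.
class SessionCheckerTest {
    public static void main(String[] args) {
        Clock fixedClock = () -> 2_000L;             // fake dependency
        SessionChecker checker = new SessionChecker(fixedClock, 1_000L);
        System.out.println(checker.isExpired());     // expected: true
    }
}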

Similar to Mark Seemann's answer, the bottom line is that you spend time becoming a better developer rather than hacking bits of code together and tossing it out the door/into production. Which would your business rather have? Only you can answer that.


DI is a technique or a pattern and is not tied to any framework. You can wire up your dependencies manually. DI helps you with SRP (single responsibility principle) and SoC (separation of concerns). DI leads to a better design. From my point of view and experience there are no downsides. As with any other pattern, you can get it wrong or misuse it (though in the case of DI that is quite hard to do).

If you introduce DI as a principle to a legacy application using a framework, the single biggest mistake you can make is to misuse it as a Service Locator (a sketch of the difference follows at the end of this answer). DI plus a framework is great in itself and has just made things better everywhere I saw it! From an organizational standpoint, there are the common problems that come with every new process, technique, or pattern:

  • You have to train your team
  • You have to change your application (which includes risk)

In general you have to invest time and money; besides that, there are no downsides, really!
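
A brief sketch of the Service Locator contrast mentioned above (hypothetical names): with a locator the dependency is fetched invisibly inside the method, while with injection it is declared in the constructor signature.

interface Mailer {
    void send(String message);
}

// Service-locator style: the dependency is pulled from a global registry,
// so nothing in the class's API reveals that it needs a Mailer.
class InvoiceSenderWithLocator {
    void send(String invoice) {
        Mailer mailer = ServiceLocator.get(Mailer.class); // hidden dependency
        mailer.send(invoice);
    }
}

// Injection style: the dependency is declared up front and supplied by the
// caller or by the container at the composition root.
class InvoiceSender {
    private final Mailer mailer;

    InvoiceSender(Mailer mailer) {
        this.mailer = mailer;
    }

    void send(String invoice) {
        mailer.send(invoice);
    }
}

// Minimal stand-in registry so the sketch is self-contained.
class ServiceLocator {
    static <T> T get(Class<T> type) {
        throw new UnsupportedOperationException("lookup omitted in this sketch");
    }
}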


Code readability. You won't be able to easily figure out the code flow, since the dependencies are hidden in XML files.


It seems like the supposed benefits of a statically-typed language diminish significantly when you're constantly employing techniques to work around the static typing. One large Java shop I just interviewed at was mapping out their build dependencies with static code analysis...which had to parse all the Spring files in order to be effective.


Two things:

  • They require extra tool support to check that the configuration is valid.

For example, IntelliJ (commercial edition) has support for checking the validity of a Spring configuration, and will flag up errors such as type violations in the configuration. Without that kind of tool support, you can't check whether the configuration is valid before running tests.

This is one reason why the 'cake' pattern (as it's known to the Scala community) is a good idea: the wiring between components can be checked by the type checker. You don't have that benefit with annotations or XML.

  • It makes global static analysis of programs very difficult.

Frameworks like Spring or Guice make it difficult to determine statically what the object graph created by the container will look like. Although they create an object graph when the container is started up, they don't provide useful APIs that describe the object graph that /would/ be created.


It can increase app startup time, because the IoC container has to resolve dependencies properly, and that sometimes requires several passes.
