Adapting an existing system to use dependency injection is not a trivial task, but it is worth the effort. The architectural advantages of this approach include faster development, easier defect fixes, greatly expanded unit testing options and an overall improvement in quality.
Much of the work required to introduce dependency injection (DI) happens before the pattern itself appears: preparing the existing code often takes more than half of the total effort. Here are some tips to make this process go more smoothly.
Implement a data access layer
Regardless of how old or new the application is, data access probably makes up a large part of it. In older systems, database queries can be found throughout the code -- essentially, tightly coupling the code to the database structure. Because the database is one of the most inflexible parts of the system, this coupling can quickly negate the advantages of converting to DI. To solve this problem, introduce a data access layer (DAL).
A DAL has two goals. First, it abstracts the database engine from your application, so you can change databases at any time -- e.g., from Microsoft SQL Server to Oracle. In practice, this is rarely done and, on its own, would not justify the effort of refactoring an existing application to use a DAL.
The second goal is to abstract the data model from the database implementation. This allows the database or code to change, as necessary, while only affecting a small piece of the main application -- the DAL. This goal is worthy of the refactoring effort necessary to implement it in an existing system.
An additional benefit of a DAL is improved unit testing. Without one, tests must use real data from a database: the data for each scenario must be created in a test database, and that database must be maintained in a constant state, which is difficult and error-prone. With a DAL in place, tests can construct whatever data is necessary for each scenario. A DAL also lets you test what happens when the database is unavailable or crashes mid-query -- edge cases that are nearly impossible to reproduce on demand against a real database.
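As a sketch of the idea -- all names here (Customer, CustomerRepository) are hypothetical -- the interface below stands in for a DAL, and the in-memory implementation exists purely so tests can set up any data scenario without a live database:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Hypothetical domain object; the field names are illustrative only.
class Customer {
    final int id;
    final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

// The DAL contract: the application depends on this interface, never on SQL.
interface CustomerRepository {
    Optional<Customer> findById(int id);
    void save(Customer customer);
}

// In-memory implementation used as a test double; unit tests can create
// any data scenario -- or none at all -- without touching a database.
class InMemoryCustomerRepository implements CustomerRepository {
    private final List<Customer> store = new ArrayList<>();

    public Optional<Customer> findById(int id) {
        return store.stream().filter(c -> c.id == id).findFirst();
    }

    public void save(Customer customer) {
        store.add(customer);
    }
}
```

A production implementation of the same interface would wrap the real database queries; the rest of the application cannot tell the difference, which is exactly the point.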
Refactor to modules and interfaces
One of the core ideas behind dependency injection is the single responsibility principle. This principle states that each object should have a specific purpose and the different parts of an application that need to take advantage of that purpose should use the appropriate object. This implies that these objects are reusable anywhere in the system. In existing systems, this is frequently untrue. Therefore, the first step in introducing DI is to refactor the application to use dedicated classes or modules for specific purposes.
The mechanics of implementing DI require the use of interfaces that match the published methods and properties of the different modules to be used. While refactoring functionality into modules, the application should also be refactored to use these interfaces instead of the concrete classes.
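A minimal illustration of this refactoring step, with invented names: the caller still constructs the concrete class for now (injection comes later), but every reference after construction goes through the interface:

```java
// The contract the rest of the application codes against.
interface Notifier {
    void notify(String recipient, String message);
}

// One concrete module; a later refactoring could swap in SMS, push, etc.
class EmailNotifier implements Notifier {
    String lastSent; // recorded for demonstration; real code would call a mail client

    public void notify(String recipient, String message) {
        lastSent = "To " + recipient + ": " + message;
    }
}

// The caller still constructs the concrete class directly -- DI comes
// later -- but every subsequent reference is through the interface.
class OrderService {
    private final Notifier notifier = new EmailNotifier();

    void completeOrder(String customerEmail) {
        notifier.notify(customerEmail, "Your order has shipped.");
    }
}
```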
Note that none of this refactoring should affect the logic flow of the application. This is an exercise in moving code around, not changing how it works. To ensure defects aren't introduced, follow the normal quality assurance (QA) process. However, when done properly, the chances of creating bugs are minimal.
Add unit tests as you go
Having functionality wrapped up inside a monolithic object makes automated testing difficult or impossible. Refactoring to modules and interfaces isolates specific objects and enables more advanced unit testing. It is tempting to keep refactoring module after module with the idea of coming back to add tests later, but this is a mistake.
Introducing new defects is always a concern when refactoring code. Creating the unit tests as soon as possible can address this risk, but there is also a rarely considered project management risk. Adding unit tests immediately can detect defects that already exist in the legacy code, but have gone undiscovered. I would argue that if the current system has been running for a while, these should not be considered defects, but "undocumented features." You must then decide whether these issues need to be addressed or left as they are.
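As a sketch of the kind of test this refactoring enables (all names invented): once the data store sits behind an interface, a test can supply a stub that throws, exercising the database-failure path on demand:

```java
import java.util.Optional;

// Hypothetical DAL interface; a lambda can stand in for it in tests.
interface AccountRepository {
    Optional<Double> balanceFor(String accountId);
}

// The module under test: it must cope with missing data and with the
// data store failing outright.
class BalanceReporter {
    private final AccountRepository repo;

    BalanceReporter(AccountRepository repo) { this.repo = repo; }

    String report(String accountId) {
        try {
            return repo.balanceFor(accountId)
                       .map(b -> "Balance: " + b)
                       .orElse("No such account");
        } catch (RuntimeException e) {
            return "Data store unavailable";
        }
    }
}
```

A test can now pass `id -> { throw new RuntimeException(); }` as the repository and assert that the reporter degrades gracefully -- the crashed-database edge case reproduced on demand, with no real database involved.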
Use service location, not constructor injection
There is more than one way to implement dependency injection. The most common is constructor injection, which requires that all of an object's dependencies be provided when it is created. However, constructor injection assumes the entire system uses the pattern, which means the entire system must be refactored at once. This is difficult, risky and time-consuming.
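Constructor injection, in a minimal hypothetical form -- note that every place that currently writes `new CheckoutService()` must change at the same moment to supply the dependency, which is the source of the all-at-once risk:

```java
// Hypothetical dependency.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

class CheckoutService {
    private final PaymentGateway gateway;

    // Constructor injection: the dependency must be supplied at creation
    // time, so every existing caller has to be updated together.
    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean checkout(String account, double total) {
        return gateway.charge(account, total);
    }
}
```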
An alternative approach to constructor injection is service location. This pattern can be implemented slowly, refactoring the application one piece at a time, as is convenient. Slow adaptation of existing systems is better than massive conversion efforts. So, when adapting an existing system to DI, service location is the best pattern to use.
There are those who criticize the service locator pattern, arguing that it merely relocates the dependencies rather than eliminating the tight coupling. I agree when building an application from scratch, but when updating an existing system, the service locator is valuable during the transition. Once the entire system has been adapted to the service locator, converting to constructor injection is a trivial additional step.
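A hand-rolled locator might look like the sketch below (illustrative only; a real project would typically use a DI container). Because the consuming class keeps its no-argument constructor, existing call sites compile unchanged while the lookup migrates inside -- this is what allows the one-piece-at-a-time conversion:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal hand-rolled locator: a central registry mapping an
// interface type to the implementation to hand out.
final class ServiceLocator {
    private static final Map<Class<?>, Object> services = new HashMap<>();

    static <T> void register(Class<T> type, T implementation) {
        services.put(type, implementation);
    }

    static <T> T resolve(Class<T> type) {
        return type.cast(services.get(type));
    }
}

// Hypothetical dependency.
interface Clock {
    long now();
}

// Legacy call sites keep constructing AuditLog with no arguments; only
// the lookup inside the class changes, so the system migrates piecemeal.
class AuditLog {
    private final Clock clock = ServiceLocator.resolve(Clock.class);

    String entry(String event) {
        return clock.now() + " " + event;
    }
}
```

Once every class resolves its dependencies this way, moving to constructor injection is mechanical: each `ServiceLocator.resolve(...)` call becomes a constructor parameter.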
Dependency injection in Java