Reviving Java IoC/DI and why it matters

It wouldn’t be bold to state that Inversion of Control and Dependency Injection represent the state of the art of software development in Java. These are well-known and recognized principles at the heart of the most popular and leading application frameworks like Spring, OSGI or JEE/MicroProfile.

Most of these frameworks have existed for more than 15 years and their basic concepts have for the most part remained the same, which clearly demonstrates the value of IoC and DI principles and the quality of their implementations.

However, I believe several recent events and evolutions call for a change in the way IoC and DI should be used, and therefore implemented, in modern Java applications. I firmly believe now is the perfect time to rethink IoC and DI principles in Java in order to find a solution that matches current and future development practices and resolves some of the issues that exist with current solutions.

In this article, I will discuss the evolution of the Java platform and of the general IT landscape, as well as the issues I’ve encountered with existing IoC/DI frameworks, which together convince me that a fresh approach to IoC/DI in Java is required. The goal is to objectively analyze current practices and confront them with these evolutions in order to determine whether another approach exists that would be better suited to developing modern Java applications.

What are IoC/DI principles for?

Before getting into specifics, let’s start by quickly recalling the benefits of IoC/DI.

In a nutshell, Inversion of Control is a design principle where a generic framework is used to build up an entire application by assembling application code, as opposed to having the application code assemble itself. In that definition, application code refers to the task-specific code of the application or to other task-specific code provided in reusable libraries. Concretely, with IoC, the framework calls the application code, whereas in traditional programming it is the application code that calls the framework.

One way to achieve IoC is Dependency Injection, a technique where an object receives the objects it depends on rather than creating or retrieving them itself.

A Java application basically consists of a set of interconnected objects. An application developed using traditional procedural programming techniques would probably be highly coupled, with poor separation of concerns, making it hard to develop and maintain. Let’s see a simple example:
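As an illustration, a minimal sketch of such tightly coupled code could look like this (all class names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// A simple repository storing movie titles in memory
class MovieRepository {
    private final List<String> movies = new ArrayList<>();
    void save(String title) { movies.add(title); }
    List<String> findAll() { return movies; }
}

class MovieService {
    // The dependency is hard-wired: swapping the storage layer
    // requires editing and recompiling this class
    private final MovieRepository repository = new MovieRepository();

    void addMovie(String title) { repository.save(title); }
    List<String> listMovies() { return repository.findAll(); }
}

public class Main {
    public static void main(String[] args) {
        MovieService service = new MovieService();
        service.addMovie("The Matrix");
        System.out.println(service.listMovies());
    }
}
```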

The usage of design patterns like factories, builders, adapters or bridges provides loose coupling as well as better separation of concerns, but it also adds a lot of boilerplate code. In the end, the resulting application is probably more flexible and maintainable but also more complex to develop; besides, the extra amount of code leaves room for bugs. Let’s revise the previous example with design patterns and some layers of abstraction:
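Revisited with an interface and a factory (still a hypothetical sketch), the code becomes more flexible but also more verbose:

```java
import java.util.ArrayList;
import java.util.List;

// The service now depends on an abstraction instead of a concrete class
interface MovieRepository {
    void save(String title);
    List<String> findAll();
}

class InMemoryMovieRepository implements MovieRepository {
    private final List<String> movies = new ArrayList<>();
    public void save(String title) { movies.add(title); }
    public List<String> findAll() { return movies; }
}

// Factory hiding the concrete implementation from the service:
// pure boilerplate whose only purpose is decoupling
class MovieRepositoryFactory {
    static MovieRepository create() { return new InMemoryMovieRepository(); }
}

class MovieService {
    private final MovieRepository repository = MovieRepositoryFactory.create();
    void addMovie(String title) { repository.save(title); }
    List<String> listMovies() { return repository.findAll(); }
}

public class Main {
    public static void main(String[] args) {
        MovieService service = new MovieService();
        service.addMovie("The Matrix");
        System.out.println(service.listMovies());
    }
}
```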

One objective of IoC/DI principles is precisely to eliminate this design pattern boilerplate code. An application embracing IoC/DI principles is loosely coupled: the execution of a task is completely decoupled from its implementation, which allows proper separation of concerns since each task can be implemented separately in perfect isolation. Each part of the application is dedicated to a particular task and only relies on contracts with the parts it depends on; it doesn’t have to make assumptions about how other parts work. The IoC/DI container is responsible for creating the objects composing the application and wiring them together. Maintainability is greatly improved as it is possible to change parts of the application with very limited side effects. You can, for instance, change the implementation of a particular part by configuration with no recompilation. Using an IoC/DI framework like Spring, the previous example would look like:
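A sketch of what this could look like with Spring annotations and constructor injection (class names are hypothetical, and Spring is assumed to be on the classpath):

```java
import java.util.ArrayList;
import java.util.List;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

interface MovieRepository {
    void save(String title);
    List<String> findAll();
}

@Component
class InMemoryMovieRepository implements MovieRepository {
    private final List<String> movies = new ArrayList<>();
    public void save(String title) { movies.add(title); }
    public List<String> findAll() { return movies; }
}

@Service
class MovieService {
    private final MovieRepository repository;

    // Constructor injection: the container supplies the dependency,
    // no factory or lookup code is required
    MovieService(MovieRepository repository) { this.repository = repository; }

    void addMovie(String title) { repository.save(title); }
}

@ComponentScan
class AppConfig {}

public class Main {
    public static void main(String[] args) {
        try (var context = new AnnotationConfigApplicationContext(AppConfig.class)) {
            context.getBean(MovieService.class).addMovie("The Matrix");
        }
    }
}
```

The application code only expresses what it needs; the container scans the classpath, instantiates the components and wires them together.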

Apart from the configuration, the application code is purely task-specific; there is no glue code to provide. When considering a large application developed by many people, constantly changing to adapt to various technical and functional environments, being able to focus on a particular aspect knowing that you won’t directly impact the rest of the application is vital. This approach also makes full use of Java’s dynamic linking capability: an application can behave differently depending on what is defined on the classpath at runtime. IoC/DI principles allow us to efficiently develop modular applications with proper separation of concerns, resulting in flexible, maintainable and extensible applications.

Modularity is a fundamental software design technique which should be given priority over everything else. Modularity guarantees a proper separation of concerns, providing flexibility, maintainability, stability and ease of development regardless of the lifespan of a software project or the number of people involved in developing it. There are plenty of projects out there showing that this is the right way of doing things, starting with the JDK itself, which is now fully modular. There are also Spring, JBoss, Eclipse and Maven, just to name a few. All these projects use various forms of IoC and DI to achieve modularity, which is probably the main benefit of these principles and why they have become an industry standard.

The evolution of Java

I will now start discussing the main reasons why I think IoC/DI in Java as we know it should change, starting with how the Java platform evolved over time.

Looking back to the early 2000s when most IoC/DI solutions were incepted, Java was quite different from what it is today. At the time, the solution of choice to implement IoC/DI was to define some kind of configuration specifying how the objects of an application should be created, and pass it at runtime to a container which instantiates the classes defined in this configuration and wires the resulting objects to build the application. Such a container makes heavy use of reflection. Here is a simple configuration file for the Spring framework:
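Such a configuration could look like the following hypothetical bean definitions:

```xml
<!-- The container instantiates the classes declared here and wires
     them through the declared constructor arguments -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="movieRepository" class="com.example.InMemoryMovieRepository"/>

    <bean id="movieService" class="com.example.MovieService">
        <constructor-arg ref="movieRepository"/>
    </bean>
</beans>
```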

Some time later, at the end of 2004, Java 5 was released with support for annotations. Annotations enabled IoC/DI frameworks to evolve and move part of the configuration from files to the actual application code. For instance, the container could then introspect the classpath at runtime and look for annotated classes to build up the configuration. The previous example could then be rewritten:
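The same hypothetical beans, declared with annotations instead of XML and discovered by classpath scanning:

```java
import org.springframework.stereotype.Component;

public interface MovieRepository { }

@Component
public class InMemoryMovieRepository implements MovieRepository { }

@Component
public class MovieService {

    private final MovieRepository movieRepository;

    // A single constructor is autowired implicitly in recent Spring versions
    public MovieService(MovieRepository movieRepository) {
        this.movieRepository = movieRepository;
    }
}
```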

Annotation processing was a major evolution; almost all IoC/DI frameworks created after Java 5 rely on annotations for configuration, including Guice and CDI. Today, Spring still supports XML-based configuration but it has become very rare to see it in applications. Some could argue that using annotations is not pure IoC because part of the configuration, and therefore the framework, appears in the code, but they are actually very convenient in practice and, by design, they have no direct effect on the operation of the code, which is compliant with the IoC principle. You can still choose not to annotate application code and centralize the configuration: Spring, for instance, allows you to specify IoC/DI configuration in separate dedicated configuration classes.

The fact that annotations are mostly used and evaluated at runtime is actually an interesting point because, at the time, most existing metadata facilities in Java, which have since been replaced by Java annotations, were operating at compile time, the most remarkable example being XDoclet. Plexus, the IoC/DI framework used in Maven, was originally relying on Mojo JavaDoc tags to generate configuration files; these tags were naturally replaced by Java annotations used to generate the same configuration files at compile time. Nowadays, Plexus is deprecated and all IoC/DI frameworks use annotations with the RUNTIME retention policy. This disinterest in compile-time annotation processing can be explained by the fact that it might be harder to implement, it requires configuring the Java compiler with an annotation processor, and it is not as flexible as reflection when you want to dynamically construct the configuration at runtime. However, I think this approach might have been abandoned too early, and it has regained interest recently with the revival of stand-alone applications. There are many things you can check at compile time that you clearly don’t want to discover at runtime. Besides, using reflection to dynamically create an application is not that easy and might raise serious security concerns.

Support for generics was also added in Java 5, which introduced some new challenges for IoC/DI, especially given the way generics were implemented. In order to determine whether an object can be injected into a dependency, IoC/DI containers must take type parameters into account. All in all, IoC/DI frameworks have been able to adapt to this change, although they were designed to operate at runtime, where information about generics is usually lost due to type erasure.

More recently, in 2014, Java 8 added support for lambdas and functional programming. As with annotations, this has deeply changed the way we write Java code and opened up new implementation possibilities for IoC/DI frameworks with the capability to compose functions. IoC/DI frameworks were not affected by this change because they already had working mechanisms in place to instantiate and wire the objects composing an application. But if I look at closures, lazy evaluation and the Supplier<T> interface in particular, I can see new ways of instantiating, injecting and initializing application objects. We don’t need functional programming to implement IoC/DI principles in Java, but it is surely a path that needs exploring and that has not been explored so far.
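To illustrate the idea, here is a sketch, not taken from any existing framework, of how a memoizing Supplier<T> could be used to instantiate and wire objects lazily through plain function composition (all names are hypothetical):

```java
import java.util.function.Supplier;

// Memoizing supplier: the wrapped factory runs once, on first access
class Lazy<T> implements Supplier<T> {
    private final Supplier<T> factory;
    private T instance;

    Lazy(Supplier<T> factory) { this.factory = factory; }

    public synchronized T get() {
        if (instance == null) {
            instance = factory.get();
        }
        return instance;
    }
}

record Repository(String url) {}
record Service(Repository repository) {}

public class Main {
    public static void main(String[] args) {
        // Wiring is function composition, checked by the compiler:
        // a missing or mistyped dependency is a compilation error
        Supplier<Repository> repository = new Lazy<>(() -> new Repository("jdbc:h2:mem:movies"));
        Supplier<Service> service = new Lazy<>(() -> new Service(repository.get()));

        // Nothing is instantiated until the object graph is actually used
        System.out.println(service.get().repository().url());
    }
}
```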

Now, the biggest change in the Java platform since Java 5 was probably introduced in Java 9 in 2017, to relative indifference. I consider the Java Platform Module System to be probably the most disruptive thing that has ever been added to the platform because it fundamentally changes the way Java applications and libraries should be designed, built, packaged and distributed. The goal was to modularize the ever-growing Java runtime in order to be able to build ad hoc smaller runtimes matching the exact needs of an application. In the process, the module became a first-class citizen. Modularity was inherent to Java from the start; the purpose of IoC/DI frameworks, OSGI or tools like Maven was always, and still is, among other things, to enable modular programming. But now, modularity has been formalized and included in the Java platform itself. The JDK folks did a good job of keeping things backward compatible, so it is still possible not to use modules, but they made it clear that there should be a transition phase during which the Java ecosystem shifts to the module system. So what does this mean in practice for IoC/DI frameworks?

A Java module declares the modules it depends on and the packages it wants to expose to other modules. A module that depends on another module can only access types in packages explicitly exported by the required module. This is checked at both compile time and runtime, and it includes reflective access: there is no setAccessible() method to bypass this.
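A module descriptor making this concrete (module and package names are hypothetical):

```java
// module-info.java: only types in com.example.movie.api are visible
// to modules that require com.example.movie
module com.example.movie {
    requires java.sql;
    exports com.example.movie.api;
}
```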

This is a new visibility layer on top of package visibility which allows a module to hide its internal implementation and only expose APIs, for instance. This actually has a huge impact on everybody using reflection to access classes. Fortunately, the open keyword was created to bypass this constraint, allowing a module to give reflective access to packages otherwise inaccessible. Most Java middleware, and therefore most applications, were and still are built on top of reflection, so it would have been disastrous to alter this behaviour, and in my opinion this is the main reason such a keyword exists. If you think about it, reflection without boundaries is extremely dangerous: adding a malicious jar on the classpath of an application can have serious consequences. The module system was designed to make such reflective access explicit; a module must explicitly say it is ok with other modules accessing its types reflectively, and it can even limit this access to specific trusted modules. This has some impact on IoC/DI frameworks which use reflection to instantiate application classes: it basically means that every application module must be explicitly opened, either in the module descriptor using the open or opens keywords or on the command line using the --add-opens argument. This is clearly not intuitive, probably a bit insecure, and makes assumptions about how the module will be used. The module system clearly undermined the use of reflection to build secure applications. When Java 9 was released, illegal reflective access was permitted by default, resulting in warning messages being issued. It was originally planned to deny any illegal access with no exceptions starting from Java 10 (see the --illegal-access parameter in the java tool documentation), but five releases later illegal reflective access is still permitted, which means that the move towards the module system is slower than expected. However, this move is unavoidable, which is why I think we have to find new ways to achieve IoC/DI that fully embrace the Java module system.
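For illustration, a hypothetical module descriptor opening an internal package for reflective access, here restricted to a specific trusted module (spring.core is Spring's automatic module name, used as an example):

```java
module com.example.movie {
    requires java.sql;
    exports com.example.movie.api;
    // Reflective access to the internal package is limited
    // to the spring.core module
    opens com.example.movie.internal to spring.core;
}
```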

Another noteworthy point is the new way to package and distribute Java libraries and applications. Java 9 introduced the jmod file format which basically extends the jar file format to include native code, configuration files, resources and so on. The jmod tool is used to create jmod files which, unlike jar files, are not executable, so jmod files are not meant to replace jar files anytime soon. However, they can be used for compilation and, more importantly, they can be linked to form custom runtime images using the jlink tool, which is possible now that the Java runtime is modular. The jlink tool generates custom optimized runtime images that include application modules as well as the Java runtime modules they depend on. Although these tools are at an early stage, there is a clear trend towards static linking, and this is all the more true if we consider other experimental tools like jaotc introduced in Java 9. This is another indicator that applications should not be built dynamically at runtime using reflection as they are today with most IoC/DI frameworks.
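As a sketch (module name and paths are hypothetical), building and running such a custom runtime image could look like:

```shell
# Link the application module and the platform modules it requires
# into a self-contained runtime image
jlink --module-path mods:$JAVA_HOME/jmods \
      --add-modules com.example.movie \
      --launcher movie=com.example.movie/com.example.movie.Main \
      --output build/image

# Run the application from the resulting image, no installed JRE needed
./build/image/bin/movie
```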

The return of the stand-alone application

Deploying and running Java applications inside JEE application servers like JBoss, WebLogic or WebSphere has long been regarded as the state of the art of enterprise Java application development. However, I believe the application server era has now come to an end. The main reason behind this is the slow decline of JEE in favour of more agile frameworks like Spring. Most of today’s Java enterprise applications are built on top of frameworks which override features usually provided by application servers. This leads to an awkward situation where applications actually don’t use anything from the application server they run on; even worse, they sometimes embed conflicting dependencies leading to unpredictable runtime errors. This has several absurd consequences: many companies are paying expensive support for application servers they don’t actually use, the application server increases the application footprint for no reason, and it can also make operations more complex, again for no reason. We can also mention the greatly overrated hot deployments or multiple deployments praised by application servers. Hot deployment always leaves things behind, and as a result a restart of the application server is almost always preferred. As for deploying multiple applications on a single application server, it is just operational nonsense. To sum up, as soon as someone decides to build an application with Spring or enterprise frameworks other than standard JEE, and this actually happens a lot, the application server is of little or no interest at all.

Following this observation, Spring released Spring Boot in 2014, which allows developers to build complete enterprise applications from scratch. The idea is to assemble everything required by an application, including enterprise services like an HTTP server, a database connection pool, a transaction manager or queueing systems, around the Spring IoC/DI container. The result is a full-featured stand-alone application. This is by far more flexible and optimized than using an application server because the application embeds precisely what it needs to operate and only starts the required services at runtime.

It therefore comes as no surprise that Oracle decided to hand JEE over to the Eclipse Foundation, which already hosts MicroProfile, more or less a JEE alternative to Spring Boot. We can also mention Quarkus from Red Hat, which follows the same principle.

The revival of stand-alone applications clearly means something for the future of Java development: Java applications will be built around a generic framework rather than on an application server, and packaged in small optimized executable images. Most IoC/DI frameworks dynamically build the application at runtime, which is not perfectly in line with that idea.


The architecture of applications has also evolved from fat multitier applications providing various functionalities to lighter, more specialized applications exposing smaller services through REST APIs. Back in 2000, the functionalities of an application were usually implemented as highly coupled services in a single backend application, whereas today the same functionalities are implemented as loosely coupled services exposed by multiple applications running on multiple servers.

This change can be explained by the fact that today’s applications are more complex as they have to integrate with more and more external systems. They are also ever-changing, having to adapt to demand ever faster, and finally they can take various forms: desktop, web, mobile, TV, IoT and so on. It is very difficult to get a monolithic application to adapt to such a changing environment. Microservice architecture addresses these issues by decoupling applications from the services providing the functionalities they need. Such services provide a single well-defined, consistent feature and they can be developed, released, deployed, scaled and used independently using various technologies (not only Java). They are also exposed using technology-agnostic protocols such as HTTP, which makes them easy to integrate.

The ideal application in a microservice architecture is small, scalable, fast and has a very low footprint. In order to reduce footprint, we must be able to create applications that strictly embed and run what they need to operate. In practice, this means that a microservice application can’t afford to embed or use a complex framework to assemble its components at runtime because this has no value for the actual operation of the application.


The rise of containers is a direct consequence of the microservice architecture which advocates smaller, stateless applications that can then be colocated on the same powerful server or virtual machine that was once used to run a single greedy monolithic application. OS-level virtualization with containers is a proper way to isolate multiple microservices running on the same server.

In order to manage all these containers, container-orchestration systems like Kubernetes were created to start and stop applications on a cluster composed of multiple physical or virtual nodes. Depending on node availability or the resources allocated to specific applications, these tools can decide to move containers from node to node. As a result, applications must be able to scale, be fault tolerant, support back pressure and start very quickly because, unlike before, they can be started and stopped unexpectedly. It is therefore very important to reduce and optimize application startup time.

Current IoC/DI frameworks

Current IoC/DI frameworks are proven solutions and it would be foolish to dismiss them, especially since they have largely contributed to the evolutions described above. That being said, they are not free from issues and can certainly be improved.

The Spring framework is certainly the most representative and the one I know best, so I’ll focus my analysis on it, but it wouldn’t be fair not to mention other solutions like:

  • Google Guice which has an interesting modular approach addressing some of the issues described below.
  • CDI which is the JEE response to Spring IoC/DI from which it took the main concepts.
  • PicoContainer which is a pioneer just like Spring focusing on simplicity.
  • OSGI which describes a dynamic module system used to develop modular applications.

I insist on the fact that my objective is not to discredit what has been working for years, especially the Spring Framework, which I have successfully used for a long time and which greatly contributed to the adoption of IoC/DI principles.

The fact that all these frameworks evaluate the IoC/DI configuration at runtime presents a big advantage as it allows creating very dynamic applications; you can, for instance, completely change the nature of an application with no recompilation by modifying the configuration or by adding or removing one or more runtime dependencies. This takes full advantage of Java dynamic linking. But in return, it lowers the role of the compiler in ensuring that the code will operate properly at runtime. When performing dependency injection, many things can go wrong: there can be cycles in the object dependency graph, there can be conflicting dependencies that require particular configuration, there can be missing dependencies… This is problematic as it requires actually running the code in real conditions to verify that there are no such errors, which is never ideal and can be very complicated in some cases. From my experience, a lot of developers who are not very familiar with these frameworks, and even some more experienced ones, have trouble understanding how code which compiles just fine can result in a non-working application, and this is made worse by the fact that these runtime errors, sometimes detected very late in the development process, are not directly connected to the code. Tools exist to spot these issues but they are available for specific IDEs and not integrated into the Java platform. Furthermore, IoC/DI errors greatly depend on how you assemble modules to form an application: taken individually, modules can all be correct, but their assembly can be problematic, and errors might differ from one assembly to another. This is probably the aspect that developers have the hardest time perceiving.

I think the interest of resolving the IoC/DI configuration dynamically at runtime is greatly overrated in view of the previous issues. In practice, no production-grade application is actually assembled dynamically: it is built into a static image and duly tested before being deployed. Changing the IoC/DI configuration or adding or removing dependencies goes through a new development cycle, and a new image is usually built, tested and eventually deployed. One could argue that in a properly designed modular application, we can separate the final assembly of modules, which is a pure configuration matter, from the build of individual modules. But then why delegate the validation of the IoC/DI configuration to the runtime environment when we could do it during this assembly, where all the elements are known and final? And if we did, what would be the point in resolving that configuration again when the application starts? Dynamic linking has a true interest in applications requiring hot deployment, like pluggable applications for which OSGI is the ideal choice, but that is not what frameworks like Spring are actually made for.

We know for sure that IoC/DI frameworks are great tools to achieve modularity; however, they often require experience and a rigorous methodology to do it properly. A modular application is the result of the composition of multiple independent modules, each of them providing a consistent set of functionalities and possibly requiring some other external functionalities to operate. In practice, modules are JAR files providing application components to assemble in an application. In a Spring application, a single container is usually created to manage all the components found on the classpath. This container doesn’t understand the concept of module, making it impossible to control the visibility of its components in the container, which raises several issues.

Let’s consider a movie database application composed of the following two modules: a movie module providing a CRUD repository for movies, and a reference module providing another CRUD repository for some kind of reference data like actors, directors and so on. These two modules can be developed and used independently. In order to boost read performance, the movie module developers decided to add a cache layer in front of the repository, so they added a cache manager component in their module and a mandatory cache manager dependency on their repository. The reference module developers chose a different approach: they decided it would be better to let the enclosing application provide a cache manager when caching is desirable, so they defined an optional cache manager dependency on their repository. When the application runs with one of these modules individually, everything works fine. But when it runs with both modules, the cache manager provided in the movie module is injected in both repositories, which might lead to unexpected behaviour or even runtime errors that only appear when the reference module repository is actually used. As far as the IoC/DI container is concerned, this is the behaviour expected by the developers of the application.

Now let’s imagine that the developers of the reference module finally decided that caching should be mandatory and chose the same implementation as the movie module. The container then doesn’t know which of the two cache managers to inject in the repositories and the application exits on startup with an error. In order to fix that issue, the movie module developers decided to define their cache manager as primary so that in case of conflict their component is chosen. The application then starts without errors; however, the cache manager defined in the movie module is injected in the reference module repository as well, which is not expected. A solution is finally found by the movie and reference module developers: they will use qualifiers to select which cache manager component should be injected in their respective repositories.

The previous examples illustrate the fact that it is actually quite hard to develop fully independent modules with Spring. Strictly speaking, to create an independent module, all components must be universally and uniquely identified, not primary, and their dependencies should all be qualified to avoid any conflict. Nobody does that in practice as it would be too constraining and not very convenient. So the best practice is to make sure general-purpose components are only provided by the enclosing application. In the previous example, modules should not provide any cache manager; repositories should instead have mandatory dependencies on a cache manager. It is then up to the application to provide cache manager components. Modules must be properly documented to indicate that a cache manager component has to be provided, otherwise there is no easy way to know how to compose them in an application.

There are actually two issues here. First, it is not obvious to get the list of components that must be provided for a module to operate properly. Without proper documentation, the only way is to look into the module’s internal configuration, i.e. the code. In any case, there is no clear contract so everything might change, leading to runtime errors; no compiler will spot these for you. The second issue is that there is no easy way to limit the visibility of components to the module or to a particular set of modules, which might have unfortunate side effects. The answer to that issue is to define primary components or use qualifiers, but this adds extra constraints and clearly breaks the independence of modules, which are then aware of the application they are part of.

Let’s elaborate a bit on qualifiers and see why they are actually not an ideal solution to resolve conflicts. Let’s go back to the movie database application example and assume that movies are persisted in an Oracle database whereas reference data are persisted in a PostgreSQL database. Different data sources must then be injected in the movie module repository and the reference module repository. This is done by defining a qualifier on the data source dependency in the movie module repository pointing to the “OracleDS” component, and a qualifier on the data source dependency in the reference module repository pointing to the “PostgreSQL” component. As noted earlier, this clearly breaks the independence of modules. Now what happens to the application if the movie database is migrated to MariaDB? One option is to change the “OracleDS” component defined in the application and make it a MariaDB data source; this sloppy solution works as long as the former “OracleDS” component is not used anymore and can be removed. Otherwise, we have to define a new “MariaDS” component, change the qualifier in the module, recompile and repackage. This is far from ideal: why should the movie module be aware of the RDBMS used by the application to persist movies? The right way of using qualifiers in a module is to make them point to components that are yet to be provided by any application willing to use the module, rather than to components defined by an existing application, because as far as the module is concerned there is no application and it only needs a data source to operate. The qualifier on the data source dependency in the movie module repository should then point to a “movieDS” component, and the qualifier on the data source dependency in the reference module repository should point to a “referenceDS” component, both of which must be provided by the application. The database migration then only impacts the application, which would have been impacted no matter what.
From my experience, developers only define qualifiers when conflicts arise, which is pretty natural. But when code is shared between multiple applications, conflicts are inevitable, so qualifiers, if not primary components, are used to resolve them, and almost all the time the wrong way, because the internet is full of bad examples of their usage. The issue here is that when designing a shared module with external dependencies, the worst-case scenario where conflicts exist on each of them has to be assumed, and as a result they all have to be qualified. By doing this, autowiring is basically lost and applications using such a module are forced to provide explicit configuration, i.e. name components according to the module dependencies’ qualifiers, which basically comes down to manually wiring these dependencies. A better approach would be to resolve conflicts with explicit wiring at the application level, where a module is actually used and where conflicts actually arise, rather than at the module level, which must remain unaware of the rest of the application.
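A sketch of this convention with Spring annotations (names are hypothetical): the qualifier designates a role the application must fulfill rather than a concrete component of a given application.

```java
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Repository;

@Repository
public class MovieRepository {

    private final DataSource dataSource;

    // The module only states that it needs a "movieDS" data source;
    // any application using the module provides a bean with that name
    public MovieRepository(@Qualifier("movieDS") DataSource dataSource) {
        this.dataSource = dataSource;
    }
}
```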

Another issue I encountered using Spring is related to troubleshooting. I have seen a lot of developers blocked because of a missing required dependency. It is actually quite hard, but not impossible, to determine whether a component is actually created by the container; you can, for instance, start it in debug mode to get a list of the components created by the container, or even add an explicit log in the constructor. A missing component can be due to a configuration not scanned by the container, or to the container deciding to ignore it because some conditions are not met. Spring 4 introduced conditional beans, which actually made the problem worse: it can be really hard to find out which conditions must be fulfilled to get the container to create a particular component. But sometimes, even if the component is created, the container might not inject it as expected, leading to the same kind of error. This is usually due to some AOP instrumentation applied to a component whose actual type is used to specify the dependency. Anyway, some developers seem helpless in the face of these problems and they often complain that component instantiation and wiring is an obscure process. Although such issues tend to disappear with experience, this last statement remains valid and we are forced to admit that an IoC/DI container is indeed a black box. You can put a breakpoint in a class constructor but you probably won’t be able to understand how you got there and what will happen to the resulting instance unless you have a basic understanding of the container’s internals. The fact that there is no way to see clearly what is going on makes things somewhat magical, and this can be very frustrating in certain situations.

To put things into context, it is important to remember that when Spring was created, Java did not yet support annotations and a typical application was a monolithic JEE web application. At the time, modularity did not have the same level of importance as today and component visibility was not really a concern. IoC/DI configuration was written exclusively in XML, and although autowiring by name or by type was supported, it had to be enabled explicitly on a particular bean definition, so explicit wiring was the default. From today's perspective this was actually better than relying on qualifiers, although clearly not as convenient. Spring had to evolve taking this legacy into account, and it did a pretty good job. We can live with the above issues, and they do not call into question what is good about using Spring.

Spring makes the development of complex applications quite natural. It is accessible and does not require much knowledge to be productive, although it takes some experience to master it properly and to deal with the above limitations. The success of the Spring framework really lies in its accessibility and its rich ecosystem, in comparison to other solutions like OSGI, which is far better suited to modular application development but less accessible.


In this article I wanted to draw attention to the fact that Inversion of Control and Dependency Injection principles are extremely valuable tools for developing modern modular applications. However, I believe existing IoC/DI frameworks, most of which were created a long time ago, are not suited to today's challenges. Their heavy use of reflection makes it difficult to create the small, fast and optimized applications required by microservice architectures and containers. Besides, now that modularity is at the heart of the Java platform, it is fundamental to be able to develop independent modules relying on an IoC/DI solution consistent with the Java platform module system.

These are the main reasons why I created the Inverno framework, with the objective of supporting IoC/DI in the most natural way possible. This framework performs IoC/DI at compile time and fully embraces the Java platform module system to create fast and secure modular applications.



