Preparing Your Environment

We will be using Java 1.8 for these examples and building them with Maven. Please make sure you have the following prerequisites installed in your environment:
The Spring ecosystem has some great tools you may wish to use either at the command line or in an IDE. Most of the examples will stick to the command line to stay IDE neutral and because each IDE has its own way of working with projects. For Spring Boot, we’ll use the Spring Boot CLI 1.3.3.

Alternative IDEs and tooling for Spring:
For both Dropwizard and WildFly Swarm, we’ll use JBoss Forge CLI and some addons to create and interact with our projects:
Alternative IDEs and tooling for Spring, Dropwizard, or WildFly Swarm projects (and they work great with JBoss Forge):
Finally, when we build and deploy our microservices as Docker containers running inside of Kubernetes, we’ll want the following tools to bootstrap a container environment on our machines:
Spring historically was a nightmare to configure. Although the framework improved upon other high-ceremony component models (EJB 1.x, 2.x, etc.), it came along with its own set of heavyweight usage patterns. Namely, Spring required a lot of XML configuration and a deep understanding of the individual beans needed to construct JdbcTemplates, JmsTemplates, BeanFactory lifecycle hooks, servlet listeners, and many other components. In fact, writing a simple “hello world” with Spring MVC required an understanding of the DispatcherServlet and a whole host of Model-View-Controller classes. Spring Boot aims to eliminate all of this boilerplate configuration with some implied conventions and simplified annotations, although you can still finely tune the underlying beans if you need to.
Adding a submodule to your application brings in a curated set of transitive dependencies and versions that are known to work together, saving developers from having to sort out dependencies and versions themselves.
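To make that contrast concrete, here is a minimal sketch of a Spring Boot “hello world” REST service. It assumes only that the spring-boot-starter-web submodule is on the Maven classpath; the class name and endpoint path are illustrative and not part of any particular project:

```java
package com.example.hola;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// @SpringBootApplication enables component scanning and auto-configuration;
// no XML and no explicit DispatcherServlet or view-resolver beans are required.
@SpringBootApplication
@RestController
public class HolaSpringBootApplication {

    // Handled by Spring MVC, which is auto-configured because
    // spring-boot-starter-web is on the classpath.
    @RequestMapping("/api/hola")
    public String hola() {
        return "Hola Spring Boot!";
    }

    public static void main(String[] args) {
        // Boots an embedded servlet container and wires up the application context.
        SpringApplication.run(HolaSpringBootApplication.class, args);
    }
}
```

Everything else (the embedded servlet container, the DispatcherServlet, the JSON message converters) is auto-configured because the starter put the right libraries on the classpath; with the Spring Boot Maven plugin in the build, mvn package produces a single executable JAR.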
But what about the management capabilities we typically expect from an application server?
With Spring Boot, we can leverage the power of the Spring Framework and reduce boilerplate configuration and code to more quickly build powerful, production-ready microservices.
Dropwizard is an opinionated framework like Spring Boot; however, it’s a little more prescriptive than Spring Boot. Some components are simply part of the framework and cannot be easily changed. The sweet-spot use case is writing REST-based web applications/microservices without too many fancy frills. For example, Dropwizard has chosen the servlet container (Jetty), REST library (Jersey), and serialization/deserialization library (Jackson) for you. Swapping any of them out (e.g., changing the servlet container to Undertow) isn’t very straightforward.
Dropwizard also doesn’t come with a dependency-injection container (like Spring or CDI). You can add one, but Dropwizard favors keeping development of microservices simple, with no magic. Spring Boot hides a lot of the underlying complexity from you, since Spring under the covers is pretty complex (e.g., spinning up all the beans actually needed to make Spring run is not trivial), and it hides a lot of bean wiring behind Java annotations. Although annotations can be handy and save a lot of boilerplate in some areas, the more magic there is, the harder it becomes to debug production applications. Dropwizard prefers to keep everything out in the open and to be very explicit about what’s wired up and how things fit together. If you need to drop into a debugger, line numbers and stack traces should match up very nicely with the source code.
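To show what that explicitness looks like, here is a minimal sketch of a Dropwizard application (the class names and endpoint are illustrative, and it assumes the dropwizard-core dependency). The resource is registered by hand in run(); nothing is discovered by classpath scanning:

```java
package com.example.hola;

import io.dropwizard.Application;
import io.dropwizard.Configuration;
import io.dropwizard.setup.Environment;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

public class HolaDropwizardApplication extends Application<Configuration> {

    // A plain Jersey (JAX-RS) resource; nothing registers it automatically.
    @Path("/api/hola")
    @Produces(MediaType.TEXT_PLAIN)
    public static class HolaResource {
        @GET
        public String hola() {
            return "Hola Dropwizard!";
        }
    }

    @Override
    public void run(Configuration configuration, Environment environment) {
        // Everything the service exposes is wired up explicitly, right here.
        environment.jersey().register(new HolaResource());
    }

    public static void main(String[] args) throws Exception {
        new HolaDropwizardApplication().run(args);
    }
}
```

Launched with the server command, the shaded JAR starts Jetty, by default serving application requests on port 8080 and the admin endpoints on port 8081.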
Just like Spring Boot, Dropwizard prefers to bundle your entire project into a single executable uber JAR. This way, developers don’t have to worry about which application server the app needs to run in or how to deploy and configure that app server. Applications are not built as WARs and are not subject to complicated class loaders. The class loader in a Dropwizard application is flat, which is a stark difference from trying to run your application in an application server, where there may be many hierarchies or graphs of class loaders. Figuring out class-loading order, which can vary between servers, often leads to a complex deployment environment with dependency collisions and runtime issues (e.g., NoSuchMethodError). Running your microservices in their own process gives isolation between applications, so you can tune each JVM individually as needed and monitor them using operating system tools very familiar to operations folks. Gone are the GC pauses or OutOfMemoryErrors that let one application take down an entire set of applications just because they share the same process space.
Application servers and Java EE have been the workhorse of enterprise Java applications for more than 15 years. WildFly (formerly JBoss Application Server) emerged as an enterprise-capable, open source application server. Many enterprises have invested heavily in Java EE technology (whether open source or from proprietary vendors), from how they hire software talent to their overall training, tooling, and management. Java EE has always been very capable at helping developers build tiered applications by offering functionality like servlets/JSPs, transactions, component models, messaging, and persistence. Deployments of Java EE applications were packaged as EARs, which typically contained many WARs, JARs, and associated configuration. Once you had your Java archive file (EAR/WAR), you would need to find a server, verify it was configured the way you expect, and then install your archive. You could even take advantage of dynamic deployment and redeployment (although doing this in production is not recommended, it can be useful in development). This meant your archives could be fairly lean and include only the business code you needed. Unfortunately, this led to bloated implementations of Java EE servers that had to account for any functionality an application might need. It also led to over-optimization in terms of which dependencies to share (just put everything in the app server!) and which dependencies needed isolation because they would change at a different rate from other applications.
The application server provided a single point of surface area for managing, deploying, and configuring multiple applications within a single instance of the app server. Typically you’d cluster these for high availability by creating exact instances of the app server on different nodes. The problems start to arise when too many applications share a single deployment model, a single process, and a single JVM. The impedance arises when the multiple teams developing the applications that run inside the app server have different types of applications, velocities of change, performance or SLA needs, and so on. Where a microservices architecture enables rapid change, innovation, and autonomy, Java EE application servers that manage a collection of applications as a single, all-in-one server do not. Additionally, from the operations side of the house, it becomes very complex to accurately manage and monitor the services and applications running within a single application server. In theory a single JVM is easier to manage, since it’s just one thing, but the applications within the JVM are all independent deployments and should be treated as such. We feel this pain when we try to treat the individual applications and services within a single process as “one thing,” which is why we have very expensive and complicated tooling to attempt that introspection. One way teams get around some of these issues is by deploying a single application to an application server.
Even though the deployment and management of applications within a Java EE environment may not suit a microservices environment, the component models, APIs, and libraries that Java EE provides to application developers still offer a lot of value. We still want to be able to use persistence, transactions, security, dependency injection, etc., but we want an à la carte usage of those libraries where needed. So how do we leverage our knowledge of Java EE and the power it brings within the context of microservices? That’s where WildFly Swarm fits in.
WildFly Swarm evaluates your pom.xml (or Gradle file) and determines what Java EE dependencies your microservice actually uses (e.g., CDI, messaging, and servlet) and then builds an uber JAR (just like Spring Boot and Dropwizard) that includes the minimal Java EE APIs and implementations necessary to run your service. This approach is known as “just enough application server”; it allows you to continue to use the Java EE APIs you know and love and to deploy your services in both a microservices style and a traditional-application style. You can even start with your existing WAR projects: WildFly Swarm can introspect them automatically and include the requisite Java EE APIs/fractions without your having to specify them explicitly. This is a very powerful way to move your existing applications to a microservice-style deployment.
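As a quick sketch of what this looks like (the package, class names, and paths here are illustrative), a service can consist of nothing but standard JAX-RS classes. With the WildFly Swarm Maven plugin (wildfly-swarm-plugin) added to the build, its package goal detects the JAX-RS usage from the project's dependencies and classes, as described above, and pulls only the corresponding fraction and its dependencies into the resulting uber JAR:

```java
// HolaRestApplication.java: standard JAX-RS activation class; nothing Swarm-specific.
package com.example.hola;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("/api")
public class HolaRestApplication extends Application {
    // Leaving getClasses()/getSingletons() at their empty defaults lets the
    // container discover annotated resources in the deployment.
}
```

```java
// HolaResource.java: a plain JAX-RS resource; the javax.ws.rs usage is what
// tells WildFly Swarm which fraction(s) to include in the uber JAR.
package com.example.hola;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hola")
@Produces(MediaType.TEXT_PLAIN)
public class HolaResource {

    @GET
    public String hola() {
        return "Hola WildFly Swarm!";
    }
}
```

The build then typically produces a *-swarm.jar that can be launched with java -jar, just like the Spring Boot and Dropwizard uber JARs.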