Microservices: Request Routing with Zuul Proxy (Spring Boot + Spring Cloud + Netflix Zuul)


In this article, I am going to show how to create two microservices and expose them through a Netflix Zuul proxy. This is a step-by-step guide to setting up the Zuul proxy and routing client requests to the related microservices.


What is Zuul, and why is a reverse proxy important?

Zuul is an API gateway (or rather, a reverse proxy) that comes under the Netflix OSS stack. If you want to learn more about Zuul or the importance of a reverse proxy in a microservices architecture, please click here to refer to my article on that. It is recommended to go through that article before moving forward with this one.


Source Code

The full source code related to this article can be found at GitHub. Click here to get it.


Setting up Microservices

In order to demonstrate the capabilities of the Zuul proxy, I will set up two Spring Boot applications as microservices.

  • student-service
  • course-service

For simplicity, I will post only the important code segments of the above microservices here. If you want to go through the full application code, please get the source from GitHub.

Important: First of all, I need to emphasize that the REST endpoints are not properly implemented; they just contain a few hardcoded values. The purpose of this article is to demonstrate the routing capabilities of the Zuul proxy. In a real project, you can follow best practices and standards to implement the controller logic as you wish. Here I have just set up a few controllers and RESTful endpoints for demonstration purposes.



Let's look at the StudentController.
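The full controller is available in the GitHub repository; the following is only a rough sketch of its shape (the method names and hardcoded return values are illustrative, not the actual implementation):

import java.util.Arrays;
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StudentController {

    // returns the name of the controller (hardcoded for demonstration)
    @GetMapping("/name")
    public String getName() {
        return "student-service";
    }

    // returns a hardcoded student representation for the given student id
    @GetMapping("/students/{student_id}")
    public String getStudent(@PathVariable("student_id") String studentId) {
        return "student " + studentId;
    }

    // returns a hardcoded list of students for the given course
    @GetMapping("/courses/{course_id}/students")
    public List<String> getStudentsByCourse(@PathVariable("course_id") String courseId) {
        return Arrays.asList("student 1", "student 2");
    }

    // returns a hardcoded list of students registered for the given course in the given department
    @GetMapping("/departments/{department_id}/courses/{course_id}/students")
    public List<String> getStudentsByDepartmentAndCourse(
            @PathVariable("department_id") String departmentId,
            @PathVariable("course_id") String courseId) {
        return Arrays.asList("student 1", "student 2");
    }
}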


The REST endpoints exposed in the StudentController can be listed as below.

#getting the name of the controller
GET  http://localhost:8081/name

#getting the student by student Id
GET  http://localhost:8081/students/{student_id}

#getting a list of students by course id
GET  http://localhost:8081/courses/{course_id}/students

#getting a list of students who are registered for a particular course in the particular department. 
GET  http://localhost:8081/departments/{department_id}/courses/{course_id}/students


You can see that the REST endpoints range from a simple URI path (resource path) to a more complex URI path with a set of path variables. I have added those endpoints intentionally, to show you how to map routes for each of them in the Zuul proxy.

The student-service can be brought up with the following command. It will run on port 8081.

mvn spring-boot:run




Let's look at the CourseController.
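Again, only as a rough sketch (the method names and hardcoded return values are illustrative, not the actual implementation):

import java.util.Arrays;
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CourseController {

    // returns a hardcoded list of all courses
    @GetMapping("/courses")
    public List<String> getCourses() {
        return Arrays.asList("course 1", "course 2");
    }

    // returns a hardcoded course representation for the given course id
    @GetMapping("/courses/{course_id}")
    public String getCourse(@PathVariable("course_id") String courseId) {
        return "course " + courseId;
    }
}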


The REST endpoints exposed in the CourseController can be listed as below.

#getting all the courses
GET  http://localhost:8082/courses

#getting the course by course_id
GET  http://localhost:8082/courses/{course_id}


The course-service can be brought up with the following command. It will run on port 8082.

mvn spring-boot:run



Setting up the Zuul Proxy

I know that most people have been curiously waiting for this section of the article, so let's start it now.


How to create the Zuul Proxy?

Believe it or not, the Zuul proxy is just another Spring Boot application. It has Spring Cloud Netflix Zuul on its classpath, and its main Spring Boot application configuration class is annotated with the @EnableZuulProxy annotation.


Let's create our Zuul proxy application.

Go to https://start.spring.io/ and generate a Spring Boot application with the Zuul dependency.



Then open the generated project and annotate the main Spring Boot application configuration class with the @EnableZuulProxy annotation.
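For example (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@EnableZuulProxy
@SpringBootApplication
public class ZuulProxyApplication {

    public static void main(String[] args) {
        SpringApplication.run(ZuulProxyApplication.class, args);
    }
}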


Let's change the port of the Zuul proxy application to any port that is not in use. Here I will change it to 7070.
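This is a one-line change in the application.properties file of the Zuul application:

server.port = 7070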

The Zuul proxy can be brought up with the following command.

mvn spring-boot:run



Route mapping with Zuul

We have set up two microservices on our local server: one is running on port 8081 and the other on port 8082. We need to expose those two microservices through the Zuul proxy running on port 7070. These route mappings can be done in the application.properties file of the Zuul proxy application.


Let's look at how the REST endpoints in the student-service will be exposed through the Zuul proxy.


1.  URI resource mapping for  http://localhost:8081/name (No Path Variable)

zuul.routes.website-name.url = http://localhost:8081/name

Here you can see that it is a simple URL mapping with no path variable. According to the above route mapping, Zuul will use the route name (website-name) as the default path. Therefore any request for /website-name will be redirected to http://localhost:8081/name.

e.g. http://localhost:7070/website-name will be sent to http://localhost:8081/name

Since there is no path mapping, the path will be considered as “/website-name”.




2.  URI resource mapping for  http://localhost:8081/students/{student_id}

You can see that a path variable exists in the URI resource path. Therefore we need to do the “path” mapping too.

zuul.routes.students.path = /students/*
zuul.routes.students.url = http://localhost:8081/students

According to the above route mapping, any request for /students/* will be directed to http://localhost:8081/students. (This applies to any request that matches the above pattern.)

Therefore http://localhost:7070/students/1  will be directed to http://localhost:8081/students/1




3. URI resource mapping for http://localhost:8081/courses/{course_id}/students

zuul.routes.students-courses.path = /courses/*/students
zuul.routes.students-courses.url = http://localhost:8081/courses

According to the above mapping, any request that matches the /courses/*/students pattern will be directed to the declared URL, with the path variables preserved.

The request for http://localhost:7070/courses/1/students will be directed to the http://localhost:8081/courses/1/students.

I know that you may now be confused, wondering why the URL path does not start with /students-courses. That is just the name we have used to declare the route (zuul.routes.students-courses); since an explicit path is declared, the name is not used as the path.

Keep in mind that if a path is declared in the application.properties file, that path will be used. If no path is declared, the route name defined in the URL mapping will be used as the path. (This is the default behavior.)




4. URI resource mapping for http://localhost:8081/departments/{department_id}/courses/{course_id}/students

Here you can see that there are two path variables.

zuul.routes.department-courses-students.path = /departments/*/courses/*/students
zuul.routes.department-courses-students.url = http://localhost:8081/departments

Any request that matches the /departments/*/courses/*/students pattern will be redirected to the declared URL, with the path variables preserved.

e.g. http://localhost:7070/departments/1/courses/2/students will be directed to http://localhost:8081/departments/1/courses/2/students



Now we have completed the route mappings for all the REST services published in the student-service. Next, let's look at how the REST endpoints in the course-service will be exposed through the Zuul proxy.


In the course-service, there are only two REST endpoints. We can do just one simple mapping for both of those endpoints.

# zuul route mapping for the course-service
zuul.routes.courses.url = http://localhost:8082/courses

According to the above route mapping, any request for /courses will be directed to the http://localhost:8082/courses URL.


http://localhost:7070/courses will forward to http://localhost:8082/courses



http://localhost:7070/courses/1 will forward to http://localhost:8082/courses/1
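For reference, here is the complete set of route mappings in the Zuul proxy's application.properties file, combining everything above:

server.port = 7070

# student-service routes
zuul.routes.website-name.url = http://localhost:8081/name

zuul.routes.students.path = /students/*
zuul.routes.students.url = http://localhost:8081/students

zuul.routes.students-courses.path = /courses/*/students
zuul.routes.students-courses.url = http://localhost:8081/courses

zuul.routes.department-courses-students.path = /departments/*/courses/*/students
zuul.routes.department-courses-students.url = http://localhost:8081/departments

# zuul route mapping for the course-service
zuul.routes.courses.url = http://localhost:8082/courses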



In this article, I have guided you through exposing microservices through a Zuul proxy, from simple URI resource mappings (no path variables) to more complex URI resource mappings (with multiple path variables).

If you want to learn more about the Spring Cloud Zuul proxy, please click the following link to visit the official documentation.

Click here to visit the Spring Cloud Documentation on Routing and Filtering with Zuul

Netflix Zuul : Importance of Reverse Proxy in Microservices Architecture (Spring Cloud + Netflix Zuul)


What is Zuul?

Zuul is a proxy server (proxy service) provided under Netflix OSS. It provides a wide range of features such as dynamic routing, request filtering and server-side load balancing.

In a microservices architecture, Zuul acts as the API gateway for all the deployed microservices, sitting as the middleman between client applications and backend services. This means that all the microservices are exposed to external parties (services or applications) through the Zuul proxy: if any service or application needs to access any of the microservices deployed behind the proxy, it has to come through Zuul. Zuul hides the identities of the server applications behind it and serves client applications by exposing its own identity on their behalf. That is why Zuul is identified as a reverse proxy.


Forward Proxy and Reverse Proxy

Here we should understand the difference between a proxy (forward proxy) and a reverse proxy: one is for protecting/hiding clients, and the other is for protecting/hiding servers.

A forward proxy is a proxy for the client: it hides the identities of the clients. It receives requests from the client and sends them to the server on the client's behalf. The main purpose of a forward proxy is to act on behalf of clients by hiding their identities. Forward proxies are mainly used to access content or websites that are blocked by your ISP or blocked for your country/area.



A reverse proxy does the opposite of what the forward proxy does: it hides the identities of the servers and receives requests from clients on their behalf. Behind the reverse proxy there may be many different web services and servers. It is the responsibility of the reverse proxy to delegate each client request to the relevant service/server application and respond back to the client. Therefore the main purpose of a reverse proxy is to serve client applications on behalf of a set of backend applications deployed behind it.

Sometimes several instances of the same service or server may be running behind the reverse proxy; this is known as clustering. In this situation, the reverse proxy determines the most appropriate server instance (or cluster node) for serving the client request and delegates the request to that node. This is achieved with the load balancing capability of the reverse proxy. Clustering ensures high availability of the service (even if one node is down, the request will be served by the next available node) and proper load balancing across requests. Let's look at those in some other article.

A proxy (of either kind) provides a centralized (single) point of access for the communication between clients and servers. Therefore it is easy to enforce security policies, content filtering and other constraints with proxies. Both forward and reverse proxies sit between client and server.


(Diagram: the role of the reverse proxy between clients and backend services.)



A reverse proxy allows you to route requests for a single domain to multiple backing services behind that proxy. This is useful when you want to break up your application into several loosely coupled components (like microservices) and distribute them even across different servers, but you need to expose them to the public under a single domain. Users then get the same experience as if they were communicating with a single application. This is achieved with the dynamic routing feature of the reverse proxy.



The importance of a reverse proxy in a microservices architecture can be summarized as follows.


  • High Availability: supports high availability of microservices in a clustered environment. Even if one service node goes down, the client request will be served by the next available node.
  • Load Balancing: supports load balancing among multiple nodes in the cluster, making sure that no server or service is overloaded. It distributes requests across nodes to maximize the utilization of resources.
  • Single Point of Access with Request and Response Filtering: the reverse proxy is the single point of access, or gateway, for the microservices. If the microservices are exposed through the reverse proxy, external clients access/consume those services through it. Therefore it is possible to filter the requests coming into the microservices, and the responses going out of them, providing an extra level of request and response filtering. Authentication and authorization security policies can be enforced at this single point of access.
  • Dynamic Routing: there may be multiple microservices deployed behind the reverse proxy, possibly on different servers with different domain names, or on the same server (where the reverse proxy is deployed) but on different ports. All the services are exposed to the public (client applications) through the reverse proxy, and the proxy assigns each service its own route (URL path), with each route mapped to the original route of the related service. The client therefore gets the same experience as if it were communicating with a single application, and SSO (Single Sign-On) and CORS (Cross-Origin Resource Sharing) related issues are resolved.



The Netflix Zuul as a Reverse Proxy

We have already discussed the importance of the reverse proxy in a microservices architecture, and now it is time to select an appropriate reverse proxy to use. Netflix has introduced Zuul as the reverse proxy under its OSS (Open Source Software) stack.

As a reverse proxy, the Zuul proxy provides the following main functionalities. Let's look at each of them in detail in separate articles.

  • Dynamic Routing
  • Request and Response Filtering
  • Server Side Load Balancing


Java Application Code Coverage with Cobertura + maven

Cobertura is a free Java tool that calculates the percentage of code accessed by tests. It can be used to identify which parts of your Java program are lacking test coverage.

In this article, I am going to show you how to use Cobertura with a maven-based Java application to measure the code coverage of its test cases.


Cobertura Code Coverage Report

Go to the root of the project and run the following command. It will analyze the code coverage with Cobertura and generate an output report showing the detailed coverage analysis.

mvn cobertura:cobertura
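This assumes the cobertura-maven-plugin is available to the build. If it is not declared yet, a minimal pom.xml entry (the version shown is only an example) would be:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>cobertura-maven-plugin</artifactId>
    <version>2.7</version>
</plugin>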




Accessing the report

The generated code coverage analysis report can be accessed through  ${project}/target/site/cobertura/index.html 



Microservices : Service Registration and Discovery in Netflix Eureka


The importance of Service Registration and Discovery

In a microservices architecture, your application may consist of a large number of microservices, each of which may be deployed on a different server and port. The microservices may need to communicate with each other to execute some tasks or operations; for instance, one microservice may need to access a service endpoint (REST endpoint) of some other microservice. So how do they communicate with each other?

If I ask you this question, you will immediately tell me that one service can access another service with the IP address and port number of that service.

YES! You are correct. But keep in mind that this approach (using the IP address and port number) has the following limitations.

  • It is not practical to know the IP address and port number of every microservice when there are many of them. Assume that there are hundreds of microservices available: do you think it is practical to keep track of the IP address and port number of each service?


  • Assume a situation where a microservice is migrated to (deployed on) some other server with a different IP address and port. If we have hardcoded the IP address and port number in the source code for making REST client requests, we will have to change them, then rebuild and redeploy the affected services. Whenever the IP address or port number changes, we have to do the same. Don't you think that is a bad coding and programming practice? (The issue arises again when we have multiple environments, such as dev, QA, UAT and production.)



How do we overcome the above problems and move forward?

The solution is “Service Registration and Discovery“.

The microservices should register themselves in a centralized location with an identifier and server details. This is known as “Service Registration”. Each microservice should also be able to look up the list of registered services in that centralized location, which is known as “Service Discovery”.


What are the implementations of Service Registration and Discovery?

There are multiple implementations of the Service Registration and Discovery server:

  • Netflix Eureka
  • Consul
  • Zookeeper

In this article, we are going to discuss Netflix Eureka.



What is Eureka Server? How does it work?

Eureka Server is an implementation of the “Service Registration and Discovery” pattern. It keeps track of registered microservices and is therefore known as the registry of the microservices that make up the application.

Each microservice should register with the Eureka server by providing details such as host name, IP address, port and health indicators. The Eureka server then registers each microservice with a unique identifier known as the serviceId. Other services can use this serviceId to access the particular service.

The Eureka server maintains a list of registered microservices with their provided details. It expects continuous ping messages (known as heartbeats) from the registered microservices to verify that they are alive (up and running). If any service fails to keep sending heartbeats, it is considered dead and is removed from the registry. Therefore the Eureka server maintains a registry of only up-and-running microservices.

On the other hand, microservices can register themselves as discovery clients with the Eureka server. The Eureka server allows the discovery clients (microservices) to look up the other registered services and fetch their information. If any service needs to communicate with another service, it can register with the Eureka server as a discovery client and fetch the information of the targeted service (server address, port number and so on). In this way, microservices can communicate with each other without maintaining IP addresses and port numbers manually.


Let's do some coding.

The full source related to this article can be found at GitHub.
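As a quick preview, a minimal Eureka server (assuming the spring-cloud-starter-netflix-eureka-server dependency is on the classpath; the class name is illustrative) is just an annotated Spring Boot application:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableEurekaServer
@SpringBootApplication
public class EurekaServerApplication {

    // starts the Eureka registry server
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}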


Code Coverage and Source Quality Analysis with Spring Boot + Docker + SonarQube + JaCoCo


In this article, I am going to explain how to use SonarQube and JaCoCo as code coverage and source code quality analysis tools for a Spring Boot application.


What is Code Coverage and why is it important?

Code coverage is an important topic when it comes to Test Driven Development (TDD). Most developers are curious to know what percentage of the source code is covered by the test cases they have developed (both unit and integration tests).

Code coverage shows how much of the source code is covered and tested by the test cases (both unit and integration) developed for the application. Therefore code coverage analysis is an important part of measuring the quality of the source code: we should write test cases to achieve higher code coverage, which increases the maintainability of the source code.


Technology Stack

The following technologies will be used for this article.

  • SonarQube
  • Docker
  • JaCoCo
  • Spring Boot Application with maven


Install and Run SonarQube with Docker

Most developers know SonarQube as a code quality analysis tool. It is also capable of executing unit and integration tests with a given library/tool (such as Cobertura, JaCoCo etc.) and producing a detailed code coverage analysis of the source code. In this article, we will run SonarQube as a docker image, so we need to have docker installed in our development environment.

If you do not have SonarQube in your local development environment, you can download it with the following command.


docker pull sonarqube



Once the SonarQube docker image is retrieved, it can be run with the following command.


docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube  



This starts a docker container based on the sonarqube image and gives it the name sonarqube. The -d flag means the container will run in detached mode (in the background). The -p 9000:9000 and -p 9092:9092 options expose ports 9000 and 9092 to the host using the same port numbers.

Now you can navigate to http://localhost:9000 and you will see your local SonarQube dashboard.


JaCoCo Maven configuration

JaCoCo is one of the most popular code coverage libraries available for Java-based applications. In order to add JaCoCo to the project, you have to add the following maven plugin to the pom.xml of the project.

(This should be added under the plugins section of the pom.xml of the project)
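A typical configuration (the version shown is only an example) looks like this: the prepare-agent goal attaches the JaCoCo agent to the test run, and the report goal generates the coverage report during the test phase.

<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.2</version>
    <executions>
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <execution>
            <id>report</id>
            <phase>test</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>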


JaCoCo Test Coverage Analysis with SonarQube

First you need to run the test cases with maven before sending the report to the Sonar server. This can be done with the following command.

mvn test


SonarQube has really good integration with test code coverage. It allows you to analyze which parts of the code should be better covered, and you can correlate this with many other stats. If you want to send your report to your Sonar server, the only thing you need to do is execute the following command in the terminal. (Make sure that you have run mvn test successfully before executing it.)


mvn sonar:sonar -Dsonar.login=admin -Dsonar.password=admin



Then it will send the inspection report to SonarQube, and you can access the detailed report at http://localhost:9000 using the following login credentials.

username : admin
password : admin




Run as a Single Command

As you can see, we have used two separate commands for integrating test result analysis with Sonar.


Running test cases  with maven

mvn test

Sending the coverage report to sonar 

mvn sonar:sonar -Dsonar.login=admin -Dsonar.password=admin


Both of the above commands can be combined into a single command as follows.

mvn test sonar:sonar -Dsonar.login=admin -Dsonar.password=admin



Exclude Classes from Code Coverage Analysis


In code coverage analysis we focus only on the classes that should be covered by unit and integration tests: that means the controllers, repositories, services and domain-specific classes. There are some classes which are not covered by either unit or integration tests. In order to get an accurate code coverage figure, those unrelated classes should be excluded from the analysis.

E.g. configuration-related classes (the SpringBootApplication configuration class, Spring Security configuration classes etc.) should be excluded.

This can be done by listing the classes to be excluded under the “properties” section of the pom.xml.
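The exact property used in the original pom is in the GitHub source; a typical configuration (assuming the commonly used sonar.coverage.exclusions property) would be:

<properties>
    <sonar.coverage.exclusions>
        **/SpringBootDockerExampleApplication.java,
        **/config/**
    </sonar.coverage.exclusions>
</properties>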



You can add multiple exclusions, each separated by a comma. According to the above configuration, the SpringBootDockerExampleApplication class and any class under the config package will be excluded/ignored when performing the code coverage analysis.


Spring Boot REST Api with Docker (with docker-compose)


In this tutorial, I am going to show you how to develop a Spring Boot REST API application that runs in a docker container. This is just a brief and quick demo of setting up a Spring Boot application with docker; I have focused only on the steps for integrating docker support (building and running the image) into a Spring Boot web application.

If you want to read a detailed article about deploying a Spring Boot application with docker, please click here to visit my other article on that.


Project Structure and Source Code

The full source code of the application can be found at GitHub. Click here to download. The project file structure is as follows.



Next, let's look at the WelcomeController.java.
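The actual implementation is in the GitHub source; a minimal sketch matching the GET /api/welcome endpoint used later in this article (the returned message is illustrative) is:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class WelcomeController {

    // simple endpoint used to verify that the dockerized application is up and running
    @GetMapping("/welcome")
    public String welcome() {
        return "welcome";
    }
}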



The Dockerfile contains the commands and instructions for building the docker image of the project. The contents of the Dockerfile related to this project are as follows.


FROM java:8  

java:8 is the base image for this application: the final docker image will be built on top of the java8 docker image. (In other words, the java8 docker image is required in order to run this application.)



The working directory has been set to /app. This directory will be created in the container, and the specified commands will run from it.



The copy command copies a file from the local project environment into the docker image being built. The file target/spring-boot-docker-example-0.0.1-SNAPSHOT.jar from the local project environment will be copied as /app/spring-boot-app.jar.



The specified command will be executed once the docker image is successfully deployed and the container is booted up.
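Putting the pieces together, the Dockerfile looks roughly like this (the exact WORKDIR, COPY and ENTRYPOINT lines below are reconstructed from the descriptions above, not copied from the original file):

FROM java:8
WORKDIR /app
COPY target/spring-boot-docker-example-0.0.1-SNAPSHOT.jar /app/spring-boot-app.jar
ENTRYPOINT ["java", "-jar", "spring-boot-app.jar"]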



docker-compose is a utility/tool used to run multi-container docker applications. The docker-compose utility reads the docker-compose.yml file to set up the services related to the application. This file should contain declarations of all the services required to run the application: if you need to run any service as a separate docker container, you should declare it in the docker-compose.yml file.

The content of the docker-compose.yml file related to this project is as follows.
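Reconstructed from the descriptions below (not copied verbatim from the original file), it looks roughly like this:

version: '3'

services:
  spring-boot-rest-api-app:
    image: spring-boot-rest-docker-image
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 8087:8080
    volumes:
      - /data/spring-boot-app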


The document complies with docker-compose document version 3.

The service name is “spring-boot-rest-api-app” and the image name is “spring-boot-rest-docker-image”. The service should be deployed from the given image, and if the image does not exist, it should be built with the Dockerfile available in the current working directory.

Port 8080 of the docker container is mapped to port 8087 of the docker host, so the service can be accessed externally on port 8087.

The spring-boot-rest-api-app container will use the /data/spring-boot-app volume for managing data.


Building the project with maven

Since the Dockerfile depends on the final built artifact of the project (that is, target/spring-boot-rest-api-docker-0.0.1-SNAPSHOT.jar), we need to build the final deployable artifact before building the docker image. This can be done with the following command.

mvn clean install

Now the project is successfully built, and we can move forward with building the docker image and running it in a docker container.


Building the docker image

In the terminal, go to the directory where your docker-compose.yml file is located, then run the following command to build the docker image.

docker-compose build




This command can be used to build a new image or rebuild existing images: if there is no docker image available for the given name, it builds the image directly; otherwise the existing image for that name is removed and rebuilt.


You can get a list of docker images available in the docker platform with the following command and verify whether the image has been successfully built.

docker images


You will notice that “spring-boot-rest-docker-image” has been successfully built and appears in the list of images.


Running the application with docker-compose

This can be done with the following command.

docker-compose up

After executing the above command, docker-compose will look for the services declared in the docker-compose.yml file, then deploy and start each service in a separate docker container.


Now we should be able to access the REST API endpoint available in the WelcomeController through the mapped host port 8087.

GET  http://localhost:8087/api/welcome



Docker: Spring Boot and Spring Data JPA (MySQL) REST Api example with docker (with docker-compose)


In the previous article (click here to visit it), we created, ran and linked the docker containers manually. In this article we will explore how to use the docker-compose utility for creating, running and managing multiple docker containers.

docker-compose is a tool for defining and running multi-container docker applications. With Compose, you use a Compose file to configure your application's services; then, using a single command, you create and start all the services from your configuration. In addition, it allows you to define how the image should be built.

For this article, we are going to use and modify the same project that was created in the previous article.

In this article, I am focusing only on the docker-compose utility and related features. I am not going to describe any Spring or spring-data-jpa related features here.

First we will clone the source code of the previous article and prepare our development environment. This can be done with the following command.

git clone git@github.com:chathurangat/spring-boot-data-jpa-mysql-docker-no-composer.git


Import the project into your preferred IDE; the source code should appear as follows.



Let's create the docker-compose.yml file in the root of the project.



docker-compose is a utility/tool used to run multi-container docker applications. The docker-compose utility reads the docker-compose.yml file to set up the services related to the application. This file should contain declarations of all the services required to run the application: if you need to run any service as a separate docker container, you should declare it in the docker-compose.yml file.

If you look at the previous project (click here to visit that article), you will notice that there were two services running in two docker containers. Those services are:

  • mysql service
  • application service (spring boot application)


In this article, we are going to explore how to run and manage those two services with docker-compose. Please refer to the file below to see how those two services have been declared.


Let's look at the file structure in detail.



You can see that the docker-compose document version is 3. The syntax used in the docker-compose.yml document changes based on the document version; all the syntax declared in this document is compatible with version 3.


Setting up mysql container (service)

As you can see, we have declared two services here; each service will run in a separate docker container. Let's look at each service in detail.

mysql-docker-container:
  image: mysql:latest
  environment:
    - MYSQL_DATABASE=spring_app_db
    - MYSQL_USER=app_user
    # password variables omitted here (see the GitHub source)
  volumes:
    - /data/mysql


We have named the mysql service mysql-docker-container. (There is no rule; you can select any name for the service.)

The mysql:latest image will be used to provide the service; in other words, this image will be deployed in the targeted container.

We have declared four environment variables, which initialize the database, create the database user and set the user and root passwords.

The volume has been defined as /data/mysql. Volumes are the preferred mechanism for persisting data generated by and used by docker containers.


Setting up the application container (service)

spring-boot-jpa-app:
  image: spring-boot-jpa-image
  build:
    context: ./
    dockerfile: Dockerfile
  depends_on:
    - mysql-docker-container
  ports:
    - 8087:8080
  volumes:
    - /data/spring-boot-app


The service has been named “spring-boot-jpa-app”.

The image name is “spring-boot-jpa-image”. If the image does not exist, it will be built with the Dockerfile available in the current working directory. In the previous article, we built the docker image with a manual command; with docker-compose, we can declare the image build configuration as above.

This application service depends on the mysql-docker-container.

Port 8080 of the docker container is mapped to port 8087 of the docker host, so the service can be accessed externally on port 8087.

The spring-boot-jpa-app container will use the /data/spring-boot-app volume for managing data.


Before running the docker-compose

Now our docker-compose.yml file is ready, and it is time to bring up the containers with docker-compose. Before moving forward with the docker-compose utility, let's look at the Dockerfile related to this project. The Dockerfile contains the instructions for building the docker image from the source code.


FROM java:8
LABEL maintainer="chathuranga.t@gmail.com"
ADD target/spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar"]


Look at the ADD line above and you will notice that the docker image is built from an already-built jar file. The Dockerfile does not contain any maven (or other) command to build the jar file from the project source code; it simply builds the docker image with the jar already available at target/spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar. Therefore, before building the images or running the containers with docker-compose, we need to build the project artifact (jar, war or any other related artifact file).


Build the project with maven

The project can be built with the following maven command.

mvn clean install -DskipTests

Once the project is built, we can run the docker compose utility to build the targeted images and run the declared services in docker containers.


“docker-compose” to build the images and run services in docker containers

In the command line, go to the project directory where your docker-compose.yml file is located.  Then run the following docker compose command to run the declared services (in docker-compose.yml) in docker containers.

docker-compose up


You can see that we have declared a build configuration for a specific service (spring-boot-jpa-app) in docker-compose.yml. Therefore docker-compose will build the declared image before bringing up the docker services in the containers. You can run the following command to check whether the image was successfully built.

docker images


This will display a list of available images.


In that list, you can see that the image (spring-boot-jpa-image) has already been created.



It will take a few seconds to build the image and bring up the containers for the declared services. Once the above process is completed, you may run the following command to check whether the containers are up and running.

docker ps


It will display the list of up-and-running docker containers.




Testing the application

Now everything is up and running. You can follow the testing instructions given in the previous article (click here to go to the previous article) to test the application.


Rebuilding the docker image

Most of the time you will need to rebuild the project (due to source code and application logic changes). In such cases, you need to rebuild the docker image too: if you do not, the docker image repository will still contain the older image version, and docker-compose will use that older image to run the service container. To avoid this issue, rebuild the docker image with the following command.

docker-compose build


Is it enough to run the “docker-compose build” once the source code is changed?

No. docker-compose build builds the docker image from the project's existing final build file (jar, war etc.). So if you have modified the source code of the project, you need to rebuild the project with maven first.

Once the project is built, you can run the docker-compose build command to build the docker image.


The full source code related to this article can be found at GitHub. Click here to download the source code.