Category: Docker

Code Coverage and Source Quality Analysis with Spring Boot + Docker + SonarQube + JaCoCo


In this article, I am going to explain how to use SonarQube and JaCoCo as code coverage and source code quality analysis tools for a Spring Boot application.


What is Code Coverage and why is it important?

Code Coverage is an important topic when it comes to Test Driven Development (TDD). Most developers are curious to know what percentage of the source code is covered by the test cases they have developed (both unit and integration tests).

Code Coverage shows how much of the source code is covered and tested by the test cases (both unit and integration) developed for the application. Code coverage analysis is therefore an important part of measuring the quality of the source code. We need to write test cases to achieve higher code coverage, which will increase the maintainability of the source code.
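As a tiny illustration (the class and numbers below are hypothetical, not from this article's project), consider a method with two branches. A test suite that only ever calls it with vip = false executes just one of the two branches, so a tool like JaCoCo would report 50% branch coverage for this method:

```java
// Hypothetical example: a method with two branches.
class Discount {

    // Returns half price for VIP customers, full price otherwise.
    static int apply(int price, boolean vip) {
        if (vip) {
            return price / 2; // only "covered" if some test passes vip = true
        }
        return price;
    }
}
```

Adding a second test that exercises the vip = true branch lifts the branch coverage of this method to 100%.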


Technology Stack

The following technologies will be used for this article.

  • SonarQube
  • Docker
  • JaCoCo
  • Spring Boot Application with maven


Install and Run SonarQube with Docker

Most developers know “SonarQube” as a code quality analysis tool. It has the capability of executing unit and integration tests with a given library/tool (such as Cobertura, JaCoCo, etc.) and gives a detailed analysis of the code coverage of the source code. In this article, we will run SonarQube as a docker image. Therefore we need to have docker installed in our development environment.

If you do not have SonarQube in your local development environment, you can download it with the following command.


docker pull sonarqube



Once the SonarQube docker image is retrieved, it can be run with the following command.


docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube  



This will start a docker container based on the sonarqube image and give it the name sonarqube. Adding -d means the container will run in detached mode (in the background). The -p 9000:9000 and -p 9092:9092 flags mean that we expose ports 9000 and 9092 to the host using the same port numbers.

Now you can navigate to http://localhost:9000 and you will see your local SonarQube dashboard.


JaCoCo Maven configuration

JaCoCo is one of the most popular code coverage libraries available for Java-based applications. In order to add JaCoCo to the project, you have to add the following Maven plugin under the plugins section of the pom.xml of the project.
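A minimal jacoco-maven-plugin configuration looks like the following sketch (the version number here is an assumption; check Maven Central for the latest release):

```xml
<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.2</version>
    <executions>
        <!-- attaches the JaCoCo agent so coverage is recorded during mvn test -->
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <!-- generates the coverage report under target/site/jacoco -->
        <execution>
            <id>report</id>
            <phase>test</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```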


JaCoCo Test Coverage Analysis with SonarQube

First you need to run the test cases with Maven before sending the report to the Sonar server. This can be done with the following command.

mvn test


SonarQube has really good integration with test code coverage. It allows you to analyze which parts of the code should be better covered, and you can correlate this with many other stats. If you want to send your report to your Sonar server, the only thing you need to do is execute the following command in the terminal. (Make sure that you have run the mvn test command successfully before executing the command below.)


mvn sonar:sonar -Dsonar.login=admin -Dsonar.password=admin



Then it will send the inspection report to SonarQube, and you can access the detailed report through http://localhost:9000 using the specified login credentials.

username : admin
password : admin




Run as a Single Command

As you can see, we have used two separate commands for integrating test result analysis with Sonar.


Running test cases  with maven

mvn test

Sending the coverage report to sonar 

mvn sonar:sonar -Dsonar.login=admin -Dsonar.password=admin


Both of the above commands can be combined into one single command as follows.

mvn test sonar:sonar -Dsonar.login=admin -Dsonar.password=admin



Exclude Classes from Code Coverage Analysis


In the code coverage analysis we focus only on the classes that should be covered with unit and integration tests: that means the controllers, repositories, services and domain-specific classes. There are some classes which are not covered by either unit or integration tests. In order to get a correct figure from the code coverage analysis, it is required to exclude those unrelated classes when performing the analysis.

E.g. configuration-related classes (the SpringBootApplication configuration class, Spring Security configuration classes, etc.) should be excluded.

This can be done by adding the classes to be excluded under the “properties” section of the pom.xml.
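As a sketch (the property name and path patterns are assumptions; adjust them to your own package structure), such a properties section can look like this:

```xml
<properties>
    <!-- classes to ignore during coverage analysis, comma separated -->
    <sonar.coverage.exclusions>
        **/SpringBootDockerExampleApplication.java,
        **/config/**/*
    </sonar.coverage.exclusions>
</properties>
```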



You can add multiple exclusions, each separated by a comma. According to the above configuration, SpringBootDockerExampleApplication and any class under the config package will be excluded/ignored when performing the code coverage analysis.


Spring Boot REST Api with Docker (with docker-compose)


In this tutorial, I am going to show you how to develop a Spring Boot REST API application that runs in a docker container. This is just a brief and quick demo of setting up a spring boot application with docker. In this article, I have focused only on showing the steps for integrating docker support (for building and running the image) into the spring boot web application.

If you want to read a detailed article about deploying a spring boot application with docker, please click here to visit my other article on that.


Project Structure and Source Code

The full source code of the application can be found at GitHub. Click here to download. The project file structure is as follows.



Here is the implementation of the WelcomeController, which exposes the REST endpoint used later in this article.



The Dockerfile contains the commands and instructions for building the docker image from the project. The contents of the Dockerfile related to this project are as follows.
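Reconstructed from the instruction-by-instruction descriptions below (so treat this as a sketch rather than the exact original file), the Dockerfile would look roughly like this:

```dockerfile
FROM java:8
WORKDIR /app
COPY target/spring-boot-docker-example-0.0.1-SNAPSHOT.jar /app/spring-boot-app.jar
ENTRYPOINT ["java", "-jar", "spring-boot-app.jar"]
```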


FROM java:8  

java:8 will be used as the base image for this application. Therefore the final docker image for this application will be built based on the java:8 docker image. (In other words, in order to run this application, the java:8 docker image is required.)



The working directory has been set to /app. This directory will be created in the container, and the specified commands will be run from this directory.



The COPY command will copy the file from the local project environment into the docker image being built. The file target/spring-boot-docker-example-0.0.1-SNAPSHOT.jar in the local project environment will be copied as /app/spring-boot-app.jar.



The specified command will be executed once the docker image is successfully deployed and the container is booted up.



docker-compose is a utility/tool that is used to run multi-container docker applications. The docker-compose utility reads the docker-compose.yml file to set up the related services for the application. This file should contain the declaration of the services that are required to run the application. If you need to run any service as a separate docker container, then you should declare it in the docker-compose.yml file.

The content of the docker-compose.yml file related to this project is as follows.
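Reconstructed from the description below (a sketch, since the original file listing was not preserved here), the docker-compose.yml would look roughly like this:

```yaml
version: '3'
services:
  spring-boot-rest-api-app:
    image: spring-boot-rest-docker-image
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 8087:8080
    volumes:
      - /data/spring-boot-app
```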


The document complies with docker-compose document version 3.

The service name is “spring-boot-rest-api-app” and the image name is “spring-boot-rest-docker-image“. The service should be deployed from the given image, and if the image does not exist, it should be built with the Dockerfile available in the current working directory.

The port 8080 of the docker container should be mapped to the port 8087 of the docker host. So the service can be externally accessed with port 8087.

spring-boot-rest-api-app container will use the /data/spring-boot-app volume for managing data.


Building the project with maven

Since the Dockerfile depends on the final built artifact of the project (that is, target/spring-boot-rest-api-docker-0.0.1-SNAPSHOT.jar), we need to build the final deployable artifact before moving forward with building the docker image. This can be done with the following command.

mvn clean install

Now the project is successfully built and we can move forward with building docker image and running it in a docker container.


Building the docker image

In terminal, go to the directory where your docker-compose.yml file is available. Then run the following command for building the docker image.

docker-compose build




This command can be used to build a new image or rebuild existing images. That means if there is no docker image available for the given name, it will directly build the image. Otherwise the existing image (the one already available for the given name) will be removed and the image rebuilt.


You can get a list of docker images available in the docker platform with the following command and verify whether the image has been successfully built.

docker images


You can see that the “spring-boot-rest-docker-image” has been successfully built and is available in the list of images.


Running application with docker-compose

This can be done with the following command.

docker-compose up

After executing the above command, it will look for the services declared in the docker-compose.yml file and deploy and start each service in a separate docker container.


Now, we should be able to access the REST api endpoint available in the WelcomeController.

GET  /api/welcome



Docker: Spring Boot and Spring Data JPA (MySQL) REST Api example with docker (with docker-compose)


In the previous article (Click here to visit that article.), we created, ran and linked the docker containers manually. In this article we will explore how to use the docker-compose utility for creating, running and managing multiple docker containers.

docker-compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration. In addition, it allows you to define how the image should be built as well.

For this article, we are going to use and modify the same project that is created in the previous article.

In this article, I am focusing only on the docker-compose utility and related features. I am not going to describe any spring or spring-data-jpa related features here.

First we will clone the source code of the previous article and prepare our development environment. This can be done with the following command.

git clone


Import the project into your preferred IDE; the source code should appear as follows.



Let's create the docker-compose.yml file in the root of the project.



docker-compose is a utility/tool that is used to run multi-container docker applications. The docker-compose utility reads the docker-compose.yml file to set up the related services for the application. This file should contain the declaration of the services that are required to run the application. If you need to run any service as a separate docker container, then you should declare it in the docker-compose.yml file.

If you look at the previous project (Click here to visit that article.), you will notice that there were two services that ran in two docker containers. Those services are:

  • mysql service
  • application service (spring boot application)


In this article, we are going to explore how to run and manage those two services with docker-compose. Please refer to the below file to see how those two services have been declared.
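Reconstructed from the per-service fragments discussed below (a sketch rather than the exact original listing), the full docker-compose.yml would look roughly like this:

```yaml
version: '3'
services:
  mysql-docker-container:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root123
      - MYSQL_DATABASE=spring_app_db
      - MYSQL_USER=app_user
      - MYSQL_PASSWORD=test123
    volumes:
      - /data/mysql
  spring-boot-jpa-app:
    image: spring-boot-jpa-image
    build:
      context: ./
      dockerfile: Dockerfile
    depends_on:
      - mysql-docker-container
    ports:
      - 8087:8080
    volumes:
      - /data/spring-boot-app
```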


Let's look at the file structure in detail.



You can see that the docker-compose document version is 3. The syntax declared in the docker-compose.yml document changes based on the document version. All the syntax declared in this document is compatible with version 3.


Setting up mysql container (service)

As you can see, we have declared two services here. Each service will run in a separate docker container. Let's look at each service in detail.

 mysql-docker-container:
   image: mysql:latest
   environment:
     - MYSQL_ROOT_PASSWORD=root123
     - MYSQL_DATABASE=spring_app_db
     - MYSQL_USER=app_user
     - MYSQL_PASSWORD=test123
   volumes:
     - /data/mysql


We have named the mysql service mysql-docker-container. (There is no rule here; it is possible to select any name for the service.)

The mysql:latest image should be used for providing the service. In other words, this image (mysql:latest)  will be deployed in the targeted container.

We have declared four environment variables which help to initialize the database, create the database user, and set up the root password.

A volume has been defined as /data/mysql. Volumes are the preferred mechanism for persisting data generated by and used by docker containers.


Setting up the application container

 spring-boot-jpa-app:
   image: spring-boot-jpa-image
   build:
     context: ./
     dockerfile: Dockerfile
   depends_on:
     - mysql-docker-container
   ports:
     - 8087:8080
   volumes:
     - /data/spring-boot-app


The service has been named as “spring-boot-jpa-app“.

The image name is “spring-boot-jpa-image“. If the image does not exist, it should be built with the Dockerfile available in the current working directory. In the previous article, we built the docker image with a manual command. But with docker-compose, we can declare the docker image build command as above.

This application service depends on the mysql-docker-container.

The port 8080 of the docker container should be mapped to the port 8087 of the docker host. So the service can be externally accessed with port 8087.

spring-boot-jpa-app container will use the /data/spring-boot-app volume for managing data.


Before running the docker-compose

Now our docker-compose.yml file is ready and it is time to bring up the containers with docker-compose. Before moving forward with the docker-compose utility, we will look at our Dockerfile related to this project. The Dockerfile contains the instructions for how to build the docker image from the source code.


FROM java:8
LABEL maintainer=""
ADD target/spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar"]


Just look at the above bold (highlighted) line and you will notice that the docker image is built from an already built jar file. The Dockerfile does not contain any maven or other command to build the jar file from the project source code; it builds the docker image from the jar file already available at target/spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar. Therefore, before building the images or running the containers with docker-compose, we need to build the project artifact (jar, war or any other related artifact file).


Build the project with maven

The project can be built with the following maven command.

mvn clean install -DskipTests

Once the project is built, we can run the docker compose utility to build the targeted images and run the declared services in docker containers.


“docker-compose” to build the images and run services in docker containers

In the command line, go to the project directory where your docker-compose.yml file is located.  Then run the following docker compose command to run the declared services (in docker-compose.yml) in docker containers.

docker-compose up


You can see that we have declared a command to build the docker image for a specific service (spring-boot-jpa-app) with docker-compose. Therefore it will build the declared image before bringing up and running the docker services in the containers. You can run the following command to check whether the image has been successfully built.

docker images


This will display a list of available images as follows.


In the first line of the output, you can see that the image (spring-boot-jpa-image) has already been created.



It will take a few seconds to build the image and bring up the containers for the declared services. Once the above process is completed, you can run the following command to check whether the containers are up and running.

docker ps


It will display a list of up and running docker containers as follows.




Testing the application

Now everything is up and running. You can follow the testing instructions given in the previous article (click here to go to the previous article) to test the application.


Rebuilding the docker image

Most of the time you may need to rebuild the project (due to source code and application logic changes). In such cases, you need to rebuild the docker image too. If you do not rebuild the docker image, the docker image repository may contain an older image version, and docker-compose will use that older image to run the service container. In order to avoid this issue, we need to rebuild the docker image. This can be done with the following command.

docker-compose build


Is it enough to run the “docker-compose build” once the source code is changed?

No. docker-compose build will build the docker image using the available project final build file (jar, war, etc.). So if you have modified the source code of the project, you need to rebuild the project with maven.

Once the project is built, you can run the docker-compose build command to build the docker image.


The full source code related to this article can be found at GitHub. Click here to download the source code.


NodeJs development with Docker (Webpack + ES6 + Babel)


In this article we will look at how to use Docker for NodeJs application development and deployment. Here I will be showing you how to use the docker-compose utility to bundle a NodeJs application as a docker image and run it in a docker container. For demonstration purposes, I am going to reuse a NodeJs application that was developed in another article. I will take you through the step-by-step process of integrating Docker features and functionality into the application that we have developed.


We will be using the application developed in the following article.

Click here to go to the previous article

If you haven’t read the previous article, it is highly recommended to read it before moving forward with this article.


Let's remind ourselves and brush up our knowledge of the technologies used in the previous article.

  • Express.js :- The application has been developed using Express.js framework.
  • ES6+ :- The source code complies with the ES6+ (ES6 and higher) JavaScript.
  • Babel and babel-loader :- have been used for transpiling the ES6+ source code into ES5-style code. babel-loader has been used with webpack for compiling/transpiling purposes.
  • webpack :- This has been used as the static resource bundling tool (here especially JavaScript) and for executing the Babel transpiler with babel-loader.


Get the source code of the project related to the previous article from GitHub. You can get the source code with following command.

git clone


Once the source code is cloned, add below two empty files to the root of the project.

  • Dockerfile :- the file name should be “Dockerfile” without any extension (NO extension)
  • docker-compose.yml 

Don’t worry about the purposes of these two files for the moment. We will discuss the purpose of each file when its contents are added.


After adding the above two files, open the project with your IDE and the project structure should look like below.



NodeJs Application with Express.Js , Babel and Webpack

Since I demonstrated how to develop a NodeJs application with Express.js using ES6+ JavaScript syntax, and how to use Babel and Webpack for transpiling and bundling, in the previous article, I am not going to repeat the same content here. If you need any clarification, please refer to the previous article. I will move forward with adding Docker to the previously developed application.


Moving forward with Docker

Now it is time to modify the content of the Dockerfile and the docker-compose.yml file. Let's look at the purpose of each file in detail.

Docker is all about creating images from source code and running them in standalone environments called containers. If you are new to docker and need to get a basic idea, then click here to visit my article about Docker.



The Dockerfile contains the instructions and related commands for building the docker image from the project source code. Add the following content to the empty Dockerfile that you have created.

FROM node:alpine
WORKDIR /app
COPY . /app
RUN npm install
ENTRYPOINT ["npm","run","execute"]


FROM : This defines the base image for the image that we are building (the image will be built from this base image). All we have said is that, for this image to run, we need the node:alpine image.


WORKDIR : This creates a working directory when building the image. Here it will create the “/app” directory as the working directory. If you go to the bash mode of the container, you can verify that the “/app” directory has been created with all the copied files.

WORKDIR sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that available in the docker file.


COPY : This command is used to copy the given files from the local development environment into the docker image. Here all the files in the current working directory will be copied to the “/app” directory.


RUN : The RUN command can be used to execute shell commands while the docker image is being built.


ENTRYPOINT : This command will run when the container is created and up. Normally this contains the command that should be executed in the shell to run the application in the docker container. The command should be given in JSON array format.

According to the above Dockerfile, the command will be:

 npm run execute


Here “execute” is a custom script, and if you look at the scripts section of package.json, you will find the related command.

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "execute": "webpack && node build/app.bundle.js"
},


If you want to learn more about Dockerfile, please click here to visit official documentation about it. 



What is docker-compose? 

Compose is a tool for defining and running multi-container Docker applications. With Compose, you need to create a YAML file (docker-compose.yml) to configure your application’s services. Then, with a single command, you can create and start all the services from your configuration.

Let's add the content for the created docker-compose.yml file.



version: '3'

services:
  nodejs-webpack-es6-app:
    image: nodejs-webpack-es6-image
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 4000:2000


According to the above document, the docker-compose version is 3. Therefore this document should contain syntax that complies with version 3.

We can declare the list of services under services. Here I have declared only one service, which is built from the source code of this project. Each declared service will be deployed and run in a separate docker container.

The name of the service is “nodejs-webpack-es6-app“. The service should be deployed with the docker image “nodejs-webpack-es6-image“. If the docker image is not available, then it will be built using the Dockerfile available in the current working directory.

The service will be running on container port 2000 and exposed through docker host port 4000. Therefore the service can be accessed externally with:

ip address of the docker host + 4000 (port)



docker-compose for building and running the application


In a command shell, go to the directory where the docker-compose.yml file is located and run the below command to run the application.


docker-compose up


After running the above command, you can access the application as follows.


Testing the Application

Now let's access each HTTP route with Postman as follows.


GET   http://localhost:4000/



GET   http://localhost:4000/products/12



POST    http://localhost:4000/products



Rebuild the image when source files change

If you have modified the source code of the application, then you need to remove the old image and rebuild a new one. This can be done with the following single command.


docker-compose build


The source code of this article can be found in GitHub. Click here to get the source code.




Docker: Spring Boot and Spring Data JPA (MySQL) REST Api example with docker (without docker-compose)

What is Docker?

A basic introduction and overview of docker can be found in the following blog article. If you are new to docker, please read it before continuing with this article.

Click here to go to the article of  “What is Docker and Its Overview”


What we are going to do …. 

Here we will be migrating an existing application to use docker-based containers instead of traditional pre-installed servers. The existing application is a Spring Boot REST API application that uses Spring Data JPA for the persistence layer and MySQL as the database server. So in this article, we will use a docker-based mysql container to replace the traditional MySQL server (the MySQL server that runs as an external server).

Since the purpose of this article is to demonstrate the features and capabilities of Docker, I am not going to explain the Spring Data JPA related code here. If you want to learn about it, please refer to my blog article Simple CRUD Application with Spring Boot and Spring Data JPA.


There are two main ways that can be used to build and run applications with docker.

  1. Using docker-compose to manage and run the dependencies (servers) and link them
  2. Manually managing and running the dependencies and linking them (without docker-compose)


In this article we will be focusing on the second approach; that is, the “without docker-compose” approach.


Where to download the source code?

The initial source code for this article can be found at the article Simple CRUD Application with Spring Boot and Spring Data JPA. As I have already mentioned, we will be adding some docker related configuration to use docker for building and running this application.

The fully docker-migrated source code (the outcome of this article) can be found at GitHub. Click here to download.


Starting to migrate the application to use Docker

If you have gone through the above referenced article (Simple CRUD Application with Spring Boot and Spring Data JPA), the application has the following components and dependencies.

  • Running Spring Boot Application
  • Running MySQL Server


Therefore we need to create the following containers in the process of migrating this application to the docker platform.

  • A Container for running the Spring Boot Application (developed application docker image)
  • A Container for running the MySQL Server (mysql docker image)


Click here If you want to see a list of important and frequently required Docker Commands


Create a docker container for MySQL


The MySQL team has provided an official docker image of MySQL through Docker Hub. Therefore we can create the MySQL docker container by executing the following command. It will first check the local docker registry for the requested mysql image. If it is not available, it will pull the image from the remote repository (Docker Hub) and create the container.


docker run -d \
     -p 2012:3306 \
     --name mysql-docker-container \
     -e MYSQL_ROOT_PASSWORD=root123 \
     -e MYSQL_DATABASE=spring_app_db \
     -e MYSQL_USER=app_user \
     -e MYSQL_PASSWORD=test123 \
     mysql:latest


After running the above command, it will create a docker container with the name “mysql-docker-container”.



Let's break down and understand the above “docker run” command.


-d

We use this flag to run the container in detached mode, which means that it will run in a separate background process. If you want terminal access, simply omit this flag.


-p <host-port>:<container-port>

The -p flag is used for port binding between the host and the container. Sometimes you might need to connect to the MySQL container from the host or some other remote server. Therefore we need to bind the container port to a port of the host machine. Then it will be possible to access the mysql docker container through the IP and port of the host machine.

2012:3306 :- By default the MySQL server uses port 3306, so the container port will be 3306. We have mapped/bound container port 3306 to port 2012 of the host machine. So outsiders can access the MySQL container with the host machine's ip-address:2012.
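For example, assuming a mysql command-line client is installed on the host machine (an assumption; any MySQL client will do), you could verify the port binding by connecting through the host port:

```shell
# connect to the MySQL container through the host port binding (2012 -> 3306)
mysql -h 127.0.0.1 -P 2012 -u app_user -p
```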

A successful connection to the docker container can be verified with a MySQL client such as Sequel Pro.




--name mysql-docker-container

The name of the docker container. In this case it is “mysql-docker-container“.




mysql:latest

This describes the image name and the tag: mysql is the image name and latest represents the tag.


The rest of the parameters are used to set the root password, create the database, and create the user (with username and password) that is given access to the database.

Now we have a MySQL docker container up and running. Our next target is to create a container for running the spring boot application.


Create docker container for Spring Boot Application.

It was easy to create a container for the MySQL server as there is an already published docker image for MySQL. Where can we find the docker image for the spring boot application that we have developed? Nowhere. We have to create it for our application.

It is the responsibility of the developer to create and publish (if required) the docker image of the application that he has developed.

Let's create the docker image for this spring boot application project.

First of all we need to change the database connection details in the application.properties file to point to the mysql-docker-container that we have already created.

spring.datasource.url = jdbc:mysql://mysql-docker-container:3306/spring_app_db?useSSL=false
spring.datasource.username = app_user
spring.datasource.password = test123


  • the docker container name has been used as the host (mysql-docker-container)
  • 3306 is the port of the docker container on which the MySQL server is running on.




Dockerfile

This file contains all the instructions and commands that are required to build the docker image for the application. It should be added to the root of the source directory. Normally this file does not have any extension.

FROM java:8
LABEL maintainer=""
VOLUME /tmp
EXPOSE 8080
ADD target/spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar"]


FROM – This defines the base image for the image that we are building (the image will be built from this base image). All we have said is that, for this image to run, we need the java:8 image.


EXPOSE – This specifies the port number on which the docker container runs. The docker host will be informed about this port when the container is booting up.


We added a VOLUME pointing to “/tmp” because that is where a Spring Boot application creates working directories for Tomcat by default. The effect is to create a temporary file on your host under “/var/lib/docker” and link it to the container under “/tmp”. This step is optional for the simple app that we wrote here, but can be necessary for other Spring Boot applications if they need to actually write in the filesystem.

ADD – Adds the given files into the docker image being created. Normally this command will be used to add executable jar files into the docker image.


ENTRYPOINT – The specified command will be executed when the container boots up.




The instructions in the Dockerfile are used by the docker build command when building the docker image.

If you want to learn more about the Dockerfile, please refer to the official documentation.



Building the docker image from project

First you need to build the application. This can be done with the following command.

mvn clean install -DskipTests


Once the project is built successfully, we can build the docker image with the following command.

docker build -f Dockerfile -t spring-jpa-app .


spring-jpa-app – refers to the name (tag) of the docker image being built.


Once the above process is completed, you can verify whether the docker image was built successfully with the following command. It will show you a list of the docker images available.

docker images


Running the built docker image 

Now we need to run the built docker image of our spring boot application. Since this application needs to connect to the MySQL server, we have to make sure that the MySQL server is up and running.

You can check the currently running docker containers with the following command.

docker ps


If the MySQL container is not up and running, you need to start it now. (I have already explained how to run the mysql-docker-container.)


Link with MySQL Container. 

Once the mysql container is up and running, you can run your spring boot application image in a container with the following command. Note that you need to link your spring boot application container with the mysql container.


docker run -t --name spring-jpa-app-container --link mysql-docker-container:mysql -p 8087:8080 spring-jpa-app


--name spring-jpa-app-container

This is the name of the docker container that is going to be created. You can use any name you wish.


-p 8087:8080

The application runs on port 8080 inside the container, and that port is bound to port 8087 of the host machine. (So the hosted application can be accessed via the host IP address and port 8087; in my case, localhost:8087.)



Now, because our application container requires the MySQL container, we link the two containers to each other. To do that we use the --link flag. The syntax is to specify the container that should be linked and an alias, for example --link mysql-docker-container:mysql; here mysql-docker-container is the linked container and mysql is the alias.



spring-jpa-app

This represents the name of the docker image that is going to be run in a container.


Now we can check and verify whether both containers (mysql and spring boot application containers) are up and running.

Screen Shot 2018-01-10 at 12.00.42 AM.png


Verify containers are linked properly

To verify whether the containers are linked properly, you can get into the application container (spring-jpa-app-container) and inspect the content of the /etc/hosts file.

Log in to the container in bash mode

docker exec -it spring-jpa-app-container bash

spring-jpa-app-container is the name of the container that we need to access, and the bash parameter says that we want bash access.

See the content of /etc/hosts (cat /etc/hosts)
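As a rough textual example, the relevant entries look something like the excerpt below. The IP addresses are illustrative and will differ on your machine; the container ID on the mysql line is the one visible in the screenshot.

```
# Excerpt of /etc/hosts inside spring-jpa-app-container.
# The mysql line (alias, container ID, container name) is added by --link.
172.17.0.2   mysql 56f7f45a79c1 mysql-docker-container
172.17.0.3   <id-of-this-container>
```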

Screen Shot 2018-01-10 at 12.04.11 AM.png

Did you notice the line “mysql 56f7f45a79c1 mysql-docker-container”?

This confirms that the containers are linked properly and that the spring boot application container can reach the mysql-docker-container.


Testing the application 

The hosted application can be accessed through http://localhost:8087

You can refer to the original article for more information about accessing the endpoints with the correct request parameters.

Here is a sample Postman call. It represents the REST endpoint for creating users.

Screen Shot 2018-01-10 at 12.15.26 AM.png


You can connect to the MySQL server with your preferred MySQL UI client application or container bash mode.

Screen Shot 2018-01-10 at 12.24.18 AM.png


Hope this tutorial helps you understand how to use docker for application development.

In the next tutorial, we will modify this same application to use docker-compose to manage everything, instead of this manual approach of running and linking containers.
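As a hedged preview of that docker-compose approach: the sketch below reuses the service names, ports and credentials from this article, but other details (the MySQL image tag, the root password, and the exact file in the next tutorial) are assumptions.

```yaml
# docker-compose.yml sketch; image tag and MYSQL_ROOT_PASSWORD are assumptions
version: '3'
services:
  mysql-docker-container:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: spring_app_db
      MYSQL_USER: app_user
      MYSQL_PASSWORD: test123
      MYSQL_ROOT_PASSWORD: root123
    ports:
      - "3306:3306"
  spring-jpa-app:
    image: spring-jpa-app
    depends_on:
      - mysql-docker-container
    ports:
      - "8087:8080"
```

With compose, the service name mysql-docker-container doubles as the hostname on the compose network, so a datasource URL of the form jdbc:mysql://mysql-docker-container:3306/... keeps working without the --link flag.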



Docker: The most important and frequently used commands

In this article, I am going to give you a few commands that are frequently required when building, shipping and running docker images and containers.


Checking the docker version

docker -v     OR    docker --version


Get currently running containers (only the active containers)

docker ps


Get all containers ( all running + stopped)

docker ps -a


Get a list of images available

docker images


Removing an image

docker rmi <image-id>


Start/run a container

docker run <image-name>


Start a stopped container

docker start <container-id>


Stop a running container

docker stop <container-id>


Get the last created container

docker ps -l


Go to the bash mode of the running container

docker exec -it <running-container-name> bash


Get IP Address of the docker container

docker inspect <container-id>

(The container's IP address appears in the IPAddress field under NetworkSettings in the JSON output.)


Show docker disk usage

This will display docker's disk usage.

docker system df


See a list of volumes

docker volume ls


Remove a volume

docker volume rm <volume-name>


Prune volumes

Volumes can be used by one or more containers, and take up space on the Docker host. Volumes are never removed automatically, because to do so could destroy data.

$ docker volume prune

WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N] y

By default, you are prompted to continue. To bypass the prompt, use the -f or --force flag.

By default, all unused volumes are removed. You can limit the scope using the --filter flag. For instance, the following command only removes volumes which are not labelled with the keep label:

$ docker volume prune --filter "label!=keep"



Docker: What is Docker and Its Overview?


What is Docker?

Docker is a platform to build, ship and run applications by wrapping them in containers.

In docker, applications are packaged as images and run inside containers. So docker is all about creating images and running them inside containers.

By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

The official documentation and an overview of the docker architecture can be found on the Docker website.


What is Image and Container ?

An image is a lightweight, standalone, executable package of a software application. The image contains everything (compiled source code, runtime dependencies, executable jars, libraries, etc.) that it needs to run the application.

A container is a runtime instance of an image: what the image becomes in memory when actually executed. By default it runs completely isolated from the host environment, accessing host files and ports only if configured to do so.

The relationship between an image and a container can be described as follows: the image is the packaged, executable version of the application built from its source code, and the container is the runtime environment in which that image is executed.

For instance, assume a php application that requires php and mysql in its runtime environment. The image should then be packaged with those two dependencies, and when the application runs in the container, both dependencies also run inside the container.

Multiple containers can run on the docker platform, and they all run independently of one another.

Package software.png

As you can see, the docker platform can have multiple containers running on it. Each container runs its own set of libraries and servers that are required to run the underlying docker image.


Are Docker Containers similar to Virtual Machines (VM) ?

No. They look similar, but they are completely different. Let's look at why they are different and what their differences are.


Virtual Machine (VM)

As the name implies, a Virtual Machine is a machine running virtually on a physical machine. Each VM has its own operating system (a full copy of the OS), runtime libraries and installed apps. Therefore a VM may take GBs of space on the physical machine and consume a large share of its resources. There can be multiple VMs on a single physical machine. The following diagram illustrates a set of virtual machines created on a single physical machine.


Infrastructure represents the resources and software of the physical machine. This includes the host machine's operating system, runtime libraries and other resources.

A hypervisor is a piece of software that allows VMs to run on the host machine. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor is responsible for creating and maintaining VMs, and the virtual machines access the host machine's infrastructure and resources through the hypervisor.


Docker Containers

Docker containers run on top of the docker platform. They contain only the executable package of the application plus the libraries and dependent software/servers required to run it. They do not have a separate OS installation of their own; they utilize the host machine's OS. Therefore a container may take only MBs of space (this varies with the size of the dependent software and libraries in the container), and containers are considered lightweight compared to VMs. The diagram below shows multiple docker containers running on the docker platform installed on a single machine.



Why are Docker containers preferred over VMs?

| Virtual Machine | Docker Container |
| --- | --- |
| Heavier in size (contains a full copy of the OS). | Lightweight (contains only the required software and dependencies). |
| Because it is heavier, it takes some time to boot up (slow to boot). | Because it is lightweight, it boots up quickly compared to a VM. |
| Because resource consumption is high, running multiple VMs simultaneously may slow down the host machine's performance. | Because resource consumption is lower, running multiple containers simultaneously does not drastically slow down the host machine's performance. |
| Has its own OS and is limited to the memory and resources allocated to the VM; it cannot directly utilize the host machine's resources and infrastructure. | Utilizes the OS and underlying resources (memory and others) of the host machine. If no other containers are running, a container can fully utilize the host machine's available resources. |


Now I believe you have a clear understanding of how a docker container differs from a virtual machine.


Installing Docker

You can follow the instructions in the official documentation to install docker.

Once you have completed the installation process, you can verify the installation by checking the docker version.


Checking the docker version

There are two commands available to check the version of docker. Run either command in the terminal and it will print the installed version.

docker -v
docker --version


Docker Hub

This is one of the most important places you should be aware of. Docker Hub is a repository that contains published docker images. You can create your own docker image of your application and publish it here, for later use by yourself or by someone else. In addition, you can find official docker images through this repository.

Let's search for the “hello-world” docker image. You can see the list of matching container images. It is always advisable to go with the official image if one is available.

Screen Shot 2018-01-06 at 11.14.38 PM.png


Let's run our first docker image

We will run the “hello-world” docker container to check whether the installation is working correctly. I will take you through a list of useful docker commands in a separate article (see the article on the most important and frequently used Docker commands). At the moment, just remember the syntax of the following command for running a docker image.

docker run <image-name>


So we can directly run the following command in the terminal to run the official hello-world image.

docker run hello-world


Screen Shot 2018-01-06 at 11.36.08 PM.png


If you observe the execution log carefully, you can see that it works and prints “Hello from Docker!”. The important things to note in the execution log are the two lines below.

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world


When you try to run a docker image, docker first checks whether the requested image is already available in the local registry. If it is available locally, docker runs the local image; otherwise it retrieves the image from the remote repository (Docker Hub) and then runs it. If you want to verify this behavior, run the image one more time and check the execution log.


Screen Shot 2018-01-06 at 11.52.01 PM.png


You can notice that this time it does not pull the image from the remote repository; it reuses and runs the locally available image.

A nice article on the docker architecture can be found in the official documentation. It is worth reading.

In upcoming articles, we will explore the cool features of Docker. Keep in touch!


Docker Containers are not Virtual Machines (VM)

A natural response when first working with Docker containers is to try and frame them in terms of virtual machines. Oftentimes we hear people describe Docker containers as “lightweight VMs”. This is completely understandable, and many people have done the exact same thing when they first started working with Docker. It’s easy to connect those dots as both technologies share some characteristics. Both are designed to provide an isolated environment in which to run an application. Additionally, in both cases that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but these are the two biggest. The key is that the underlying architecture is fundamentally different between the containers and virtual machines.

The analogy we use here at Docker is comparing houses (virtual machines) to apartments (Docker containers). Houses (the VMs) are fully self-contained and possess their own infrastructure – plumbing, heating, electrical, etc. Furthermore, in the vast majority of cases houses are all going to have, at a minimum, a bedroom, living area, bathroom, and kitchen.

It’s incredibly difficult to ever find a “studio house” – even if one buys the smallest house they can find, they may end up buying more than they need because that’s just how houses are built. Apartments (Docker containers) also offer protection from unwanted guests, but they are built around shared infrastructure. The apartment building (the server running the Docker daemon, otherwise known as a Docker host) offers shared plumbing, heating, electrical, etc. to each apartment. Additionally apartments are offered in several different sizes – from studio to multi-bedroom penthouse. You’re only renting exactly what you need. Docker containers share the underlying resources of the Docker host. Furthermore, developers build a Docker image that includes exactly what they need to run their application: starting with the basics and adding in only what is needed by the application. Virtual machines are built in the opposite direction. They start with a full operating system and, depending on the application, developers may or may not be able to strip out unwanted components.


Can a Virtual Machine (VM) run docker?

The answer is YES, and the explanation is as follows.

Screen Shot 2017-12-02 at 11.36.35 AM.png