Category: Spring Boot

Upload and Delete files with Amazon S3 and Spring Boot


As a developer, you have probably come across scenarios where your application needs to store files such as images (either uploaded by users or generated by the application itself). There are several possibilities for storing files:

  • Store the file(s) somewhere in the hosting server where the application is deployed (if it is a web application).
  • Store the file(s) in the database as binary files.
  • Store the file using cloud storage services.

Here we are going to evaluate the third option given above: storing files using a cloud storage service.

Amazon Simple Storage Service (S3) is the AWS object storage platform. It stores files in the form of objects, and lets you store and retrieve any amount of data from anywhere. Each file stored in Amazon S3 (as an object) is identified by a key.



Spring Boot Application and Amazon S3 Cloud  



The AWS SDK for Java provides various APIs related to the Amazon S3 service for working with files stored in an S3 bucket.



Amazon S3 Account Configuration

Please follow the instructions given in the Amazon S3 official documentation for creating and configuring the S3 account and bucket.

Click here to visit the official documentation. 

I will list the steps below for your convenience.

Continue reading “Upload and Delete files with Amazon S3 and Spring Boot”

Spring Boot + SLF4J : Enhance the Application logging with SLF4J Mapped Diagnostic Context (MDC)



What is SLF4J?

I believe SLF4J is not a new concept for most Java developers. Here we are going to look at the MDC feature of the logging framework. If you need to brush up your knowledge of SLF4J, now is the time.

The Simple Logging Facade for Java (SLF4J) serves as a simple facade or abstraction for various logging frameworks, such as java.util.logging, Logback and Log4j.

SLF4J allows the end-user to plug in the desired logging framework at deployment time.
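Conceptually, MDC is a per-thread map of contextual values (request id, user id, etc.) that the logging framework stamps onto every log line written by that thread. A stdlib-only sketch of that idea (MiniMdc is a hypothetical class shown for illustration, not the real org.slf4j.MDC API):

```java
import java.util.HashMap;
import java.util.Map;

public class MiniMdc {

    // One context map per thread, mirroring what SLF4J's MDC maintains
    // so that concurrent requests do not mix up each other's values.
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    public static String get(String key) {
        return CONTEXT.get().get(key);
    }

    // Clearing at the end of a request prevents stale values from leaking
    // into the next request handled by the same (pooled) thread.
    public static void clear() {
        CONTEXT.get().clear();
    }
}
```

With the real MDC, a pattern token in the logging configuration (e.g. %X{requestId} in Logback) prints whatever value the current thread put under that key.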

Continue reading “Spring Boot + SLF4J : Enhance the Application logging with SLF4J Mapped Diagnostic Context (MDC)”

Code Coverage and Source Quality Analysis with Spring Boot + Docker + SonarQube + JaCoCo


In this article, I am going to explain how to use SonarQube and JaCoCo as code coverage and source code quality analysis tools for a Spring Boot application.


What is Code Coverage and why is it important?

Code Coverage is an important topic in Test Driven Development (TDD). Most developers are curious to know what percentage of the source code is covered by the test cases they have developed (both unit and integration tests).

Code coverage shows how much of the source code is covered and tested by the test cases (unit and integration) developed for the application. Code coverage analysis is therefore an important part of measuring source code quality: writing test cases that achieve higher coverage increases the maintainability of the source code.


Technology Stack

The following technologies will be used for this article.

  • SonarQube
  • Docker
  • JaCoCo
  • Spring Boot Application with maven


Install and Run SonarQube with Docker

Most developers know SonarQube as a code quality analysis tool. It is capable of executing unit and integration tests with a given coverage library (such as Cobertura, JaCoCo, etc.) and it gives a detailed analysis of the code coverage of the source code. In this article, we will run SonarQube as a Docker image, so we need Docker installed in our development environment.

If you do not have the SonarQube image in your local development environment, you can download it with the following command.


docker pull sonarqube



Once the SonarQube docker image is retrieved, it can be run with following command.


docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube  



This will start a docker container based on the sonarqube image and give it the name sonarqube. Adding -d means the container will run in detached mode (in the background). The -p 9000:9000 and -p 9092:9092 options expose ports 9000 and 9092 to the host using the same port numbers.

Now you can navigate to http://localhost:9000 and you will see your local SonarQube dashboard.


JaCoCo Maven configuration

JaCoCo is one of the most popular code coverage libraries available for Java-based applications. In order to add JaCoCo to the project, you have to add the following Maven plugin under the plugins section of the project's pom.xml.
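The plugin declaration would look roughly like this (a sketch: the plugin coordinates and goals are the standard JaCoCo ones, but the version shown is an assumption, so adjust it to a current release):

```xml
<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.0</version>
    <executions>
        <!-- attach the JaCoCo agent so coverage is recorded during mvn test -->
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <!-- generate the coverage report after the tests run -->
        <execution>
            <id>report</id>
            <phase>test</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```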


JaCoCo Test Coverage Analysis with SonarQube

First you need to run the test cases with Maven before sending the report to the Sonar server. This can be done with the following command.

mvn test


SonarQube has really good integration with test code coverage. It allows you to analyze which parts of the code should be better covered, and you can correlate this with many other stats. If you want to send your report to your Sonar server, the only thing you need to do is execute the following command in the terminal (make sure that you have run the mvn test command successfully before executing it).


mvn sonar:sonar -Dsonar.login=admin -Dsonar.password=admin



Then it will send the inspection report to SonarQube, and you can access the detailed report through http://localhost:9000 using the specified login credentials.

username : admin
password : admin




Run as a Single Command

As you can see, we have used two separate commands for integrating test result analysis with Sonar.


Running test cases  with maven

mvn test

Sending the coverage report to sonar 

mvn sonar:sonar -Dsonar.login=admin -Dsonar.password=admin


Both of the above commands can be combined into one single command as follows.

mvn test sonar:sonar -Dsonar.login=admin -Dsonar.password=admin



Exclude Classes from Code Coverage Analysis


In code coverage analysis we care only about the classes that should be covered by unit and integration tests: the controllers, repositories, services and domain-specific classes. Some classes are not covered by either unit or integration tests. In order to get an accurate code coverage figure, those unrelated classes should be excluded from the analysis.

E.g.: configuration-related classes (the Spring Boot application configuration class, Spring Security configuration classes, etc.) should be excluded.

This can be done by listing the classes to be excluded under the “properties” section of the pom.xml.
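A sketch of what that properties section might contain (the property name and patterns are assumptions based on the exclusions described in this section; sonar.coverage.exclusions can be used instead if you want to exclude classes from coverage figures only, rather than from all analysis):

```xml
<properties>
    <!-- comma-separated patterns; matching classes are ignored by the analysis -->
    <sonar.exclusions>
        **/SpringBootDockerExampleApplication.java,
        **/config/**/*
    </sonar.exclusions>
</properties>
```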



You can add multiple exclusions, each separated by a comma. According to the above configuration, the SpringBootDockerExampleApplication class and any class under the config package will be ignored when performing code coverage analysis.


Spring Boot REST Api with Docker (with docker-compose)


In this tutorial, I am going to show you how to develop a Spring Boot REST API application that runs in a Docker container. This is just a brief and quick demo of setting up a Spring Boot application with Docker. In this article, I have focused only on the steps for integrating Docker support (building and running the image) into the Spring Boot web application.

If you want to read a detailed article about deploying a Spring Boot application with Docker, please click here to visit my other article on that topic.


Project Structure and Source Code

The full source code of the application can be found at GitHub. Click here to download it. The project file structure is as follows.



Here is the implementation of the WelcomeController.
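A sketch of what the WelcomeController might look like, based on the GET /api/welcome endpoint exercised at the end of this article (the exact response body is an assumption):

```java
@RestController
@RequestMapping("/api")
public class WelcomeController {

    // GET /api/welcome
    @GetMapping("/welcome")
    public String welcome() {
        return "Welcome!";
    }
}
```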



The Dockerfile contains the commands and instructions for building the Docker image from the project. The contents of the Dockerfile for this project can be given as follows.
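Putting the instructions explained below together, the Dockerfile would look roughly like this (a reconstruction: the jar name comes from the COPY step described below, and the exact ENTRYPOINT form is an assumption):

```dockerfile
FROM java:8
WORKDIR /app
COPY target/spring-boot-docker-example-0.0.1-SNAPSHOT.jar /app/spring-boot-app.jar
ENTRYPOINT ["java", "-jar", "spring-boot-app.jar"]
```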


FROM java:8  

java:8 is declared as the base image for this application. Therefore the final Docker image for this application is built on top of the java:8 Docker image (in other words, the java:8 image is required in order to run this application).



WORKDIR /app

The working directory has been set to /app. This directory will be created in the container, and the specified commands will run from this directory.



COPY target/spring-boot-docker-example-0.0.1-SNAPSHOT.jar /app/spring-boot-app.jar

The COPY command copies the file from the local project environment into the Docker image being built. The file target/spring-boot-docker-example-0.0.1-SNAPSHOT.jar in the local project environment is copied to /app/spring-boot-app.jar.



The command specified here (with ENTRYPOINT or CMD) is executed once the image is deployed and the container boots up; here it runs the copied jar.



docker-compose is a utility used to run multi-container Docker applications. The docker-compose utility reads the docker-compose.yml file to set up the related services for the application. This file should contain the declarations of the services required to run the application: if you need to run any service as a separate Docker container, you should declare it in the docker-compose.yml file.

The content of the docker-compose.yml file related to this project can be shown as follows.
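Based on the explanation that follows, the docker-compose.yml would look roughly like this (a reconstruction; the indentation and the exact volume syntax are assumptions):

```yaml
version: '3'
services:
  spring-boot-rest-api-app:
    image: spring-boot-rest-docker-image
    build: .
    ports:
      - "8087:8080"   # host port 8087 -> container port 8080
    volumes:
      - /data/spring-boot-app
```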


The document complies with docker-compose file format version 3.

The service name is “spring-boot-rest-api-app” and the image name is “spring-boot-rest-docker-image“. The service should be deployed from the given image; if the image does not exist, it should be built with the Dockerfile available in the current working directory.

Port 8080 of the Docker container is mapped to port 8087 of the Docker host, so the service can be accessed externally on port 8087.

spring-boot-rest-api-app container will use the /data/spring-boot-app volume for managing data.


Building the project with maven

Since the Dockerfile depends on the final built artifact of the project (that is, target/spring-boot-rest-api-docker-0.0.1-SNAPSHOT.jar), we need to build the final deployable artifact before moving forward with building the Docker image. This can be done with the following command.

mvn clean install

Now the project is successfully built and we can move forward with building docker image and running it in a docker container.


Building the docker image

In the terminal, go to the directory where your docker-compose.yml file is located. Then run the following command to build the Docker image.

docker-compose build




This command can be used to build a new image or rebuild an existing one. That means if there is no Docker image available for the given name, it builds the image directly. Otherwise, the existing image for that name is removed and the image is rebuilt.


You can list the Docker images available in your Docker environment with the following command and verify whether the image has been built successfully.

docker images


You can see that “spring-boot-rest-docker-image” has been successfully built and appears in the list of images.


Running application with docker-compose

This can be done with following command.

docker-compose up

After executing the above command, docker-compose looks for the services declared in the docker-compose.yml file, then deploys and starts each service in a separate Docker container.


Now, we should be able to access the REST API endpoint exposed by the WelcomeController.

GET  /api/welcome



Spring Boot : Spring Data JPA Pagination



The purpose of this article is to demonstrate pagination and its related features with Spring Boot and Spring Data JPA. In order to keep this article simple and well focused, I will discuss only the pagination-related topics of Spring Data JPA. I assume that you have prior (basic) knowledge of Spring Boot and Spring Data JPA and know how to develop a basic application.


The source code and Project Structure 

The source code related to this article can be found at GitHub. Click here to download it.



Running the application

The following command can be executed to run the application.

mvn spring-boot:run

Now the sample application is up and running.




Let's look at each method in detail.


Creating dummy data set

You can create the dummy data set required to run this application by invoking the following REST API endpoint.

POST    http://localhost:8080/users


createUsers :- As described above, this endpoint is used to create the set of dummy data required to run this demonstration application.

Now the users table of the target database is populated with some dummy entries. Let's look at the paginated REST API endpoint implementations in detail.


Pagination: user-specified page and page size

Here, the page to be fetched and the page size (the number of items to be fetched) can be specified at runtime (when the REST endpoint is invoked).


GET  http://localhost:8080/users?page=0&size=5


The page number starts from zero, and the maximum page number depends on the number of records available and the page size.

Here you can see that the page is 0 (first page)  and size (number of items per page)  is 5. You can invoke the above REST api endpoints by changing page and size parameters.

Let's look at the code of the method responsible for handling the above REST API invocation.

public UserResponse getUsers(Pageable pageable) {
    Page&lt;User&gt; page = userService.findUsers(pageable);
    return new UserResponse(page);
}


Notice that we haven't declared any @RequestParam in our handler method. When the endpoint /users?page=0&size=5 is hit, Spring automatically resolves the page and size parameters and creates a Pageable instance with those values. We then pass this Pageable instance to the service layer, which passes it to our repository layer.
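Under the hood, a page request just selects a zero-based window over the result set. A plain-Java sketch of that slicing (PaginationDemo is a hypothetical stdlib-only class, shown to illustrate the page/size arithmetic that Spring Data translates into a LIMIT/OFFSET query):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PaginationDemo {

    // Return the requested page of items; page is zero-based, as in Spring Data.
    // Mirrors the LIMIT/OFFSET window a repository query would apply.
    public static List<Integer> page(List<Integer> items, int page, int size) {
        int from = Math.min(page * size, items.size());
        int to = Math.min(from + size, items.size());
        return items.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> users = IntStream.rangeClosed(1, 12).boxed().collect(Collectors.toList());
        System.out.println(page(users, 0, 5)); // first page: [1, 2, 3, 4, 5]
        System.out.println(page(users, 2, 5)); // last partial page: [11, 12]
    }
}
```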



Pagination: application-specified page and page size

Here, the page and page size are set by the application itself. The user does not have to provide any parameters; the values are declared in the application with pre-defined values.


GET  http://localhost:8080/users2

Here is the method responsible for handling the above REST API invocation. (Note the path: “users2“)

public UserResponse getUsers2() {
    int pageNumber = 3;
    int pageSize = 2;

    Page&lt;User&gt; page = userService.findUsers(PageRequest.of(pageNumber, pageSize));
    return new UserResponse(page);
}


Here you can see that the page is 3 and the size (number of items per page) is 2. After invoking the above endpoint, you will get the following result.




UserRepository is not directly extended from PagingAndSortingRepository

If you look at the source code of the UserRepository class, you will notice that it does not directly inherit from PagingAndSortingRepository. You might be wondering how pagination works without extending PagingAndSortingRepository.

Let me explain. UserRepository extends JpaRepository.

If you examine the source code of JpaRepository, you will notice that it extends PagingAndSortingRepository. Therefore any repository that inherits from JpaRepository gets the methods and functionality related to pagination.


Pagination related details (more)

If you go through the source code, you can find three application-specific classes developed to encapsulate the pagination-related details. They are listed below (please find some time to go through these classes).

  • PaginationDetails
  • NextPage
  • PreviousPage

Those three classes help to display the pagination-related details in a well-formatted and descriptive manner, as follows.



We have called the following method of PagingAndSortingRepository:

Page&lt;T&gt; findAll(Pageable pageable);

It returns a Page, which contains the pagination information as well as the actual data retrieved. The getContent() method of Page can be used to get the data as a List.

In addition, Page has a set of methods (inherited from Slice) that can be used to retrieve various pagination-related details.


If you have any queries related to this article, please feel free to drop a comment or contact me.


Spring Framework: Profiling with @Profile


In your software development career, you have probably worked with different application environments such as DEVELOPMENT, STAGING and PRODUCTION. Applications are normally deployed to these environments, which are most of the time set up on separate servers, known as:

  • Development Server
  • Staging Server
  • Production Server

Each of these server environments has its own configuration and connection details, which may differ from one server to another, e.g.:

MySQL or other database connection details
RabbitMQ server and connection details, etc.


Therefore we should maintain a separate configuration/properties file for each server environment, and we need to pick up the right configuration file based on the environment.

Traditionally, this is achieved by manually selecting the related configuration file when building and deploying the application. This requires a few manual steps with some human involvement, so there is a chance of deployment-related issues arising. In addition, the traditional approach has some limitations.


What should we do if there is a requirement to programmatically register a bean based on the environment?

e.g.: the staging environment should have a separate bean implementation, while the development and production environments have their own bean instances with different implementations.

The Spring Framework provides a solution to these problems and makes our life easier with an annotation called @Profile.



In Spring, the above deployment environments (development, staging and production) are treated as separate profiles. The @Profile annotation is used to separate the configuration for each profile. When running the application, we need to activate a selected profile, and based on the activated profile the relevant configuration is loaded.

The purpose of @Profile is to segregate the creation and registration of beans based on profiles. Therefore @Profile can be used with any annotation whose purpose is creating or registering a bean in the Spring IoC container. So @Profile can be used with the following annotations:

  • Any stereotype annotation (mainly used with @Component and @Service)
  • @Configuration and @Bean annotations


After reading the note above, the first question you might ask yourself is: “Why is @Profile used mainly with @Component and @Service?“ Let's figure that out before moving forward.



Why is the @Profile annotation used mainly with the @Component and @Service annotations? 

@Component designates the class as a Spring-managed component and @Service designates the class as a Spring-managed service. It makes sense for the application to create different services and managed components based on the activated profile. This is logical, and it is the expected behavior of profiling.

Do you think that creating separate controllers and repositories for different profiles makes any sense? Is it logically acceptable? One controller for the production environment and different ones for staging and development? Isn't that crazy?

On the other hand, do you think that we need separate repositories based on profiles — separate ones for development, staging and production? Wait… I agree that we need different database configurations and connection details for each of these environments. Does that mean we need separate repositories? No, right? Separate database connection details have no relation to the repository.

Now I think you can understand why @Profile is not used with @Controller and @Repository.



What will happen if it is used with other stereotype annotations such as  @Controller and @Repository?

It will work fine. I have just explained the logical reasons for not using @Profile with @Controller and @Repository.

If you can logically justify that using @Profile with the @Controller and @Repository annotations is the right choice for your use case, then you are free to go for it. But think twice before proceeding.

Ok. Now you have an idea of how @Profile helps create and register the relevant beans based on the activated profile. But I haven't explained how the relevant properties file is picked up based on the activated profile. Let's look at that now.


Picking up the correct file with spring boot

According to our discussion, the application can be deployed to several server environments, so the application should have a different properties file for each deployment profile (or server environment). When a profile is activated, the corresponding properties file should be picked up.

How are the properties files named for each profile, and how does Spring Boot pick up the correct file?

We can have a property file specific to a profile with the naming convention application-{profile}.properties. In this way we can have a separate property file for each environment. If a profile is activated, the corresponding property file is used by the Spring Boot application. We can also have a property file for the default profile.

Suppose we have the profiles dev for the development environment, prod for the production environment and staging for the staging environment. Then the property files will be as listed below.
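Following the application-{profile}.properties convention, the files for the profiles mentioned above would be:

```text
application.properties             # default profile
application-dev.properties         # dev profile
application-prod.properties        # prod profile
application-staging.properties     # staging profile
```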


Ok, let's do a coding example with Spring @Profile. We will try to cover most of the concepts discussed here.


What we are going to build.

We will build a simple REST API application that persists some data to a MySQL database with Spring Data JPA. Here I focus only on demonstrating @Profile; if you need to learn more about Spring Data JPA, please refer to my article on it.

Click here to go to Spring Data JPA article. 

This application has three different databases representing the three deployment profiles: dev, staging and prod.

app_development_db  - database for the dev profile/environment 
app_staging_db - database for the staging profile/environment
app_production_db  - database for the prod  profile/environment.

(If you want to run this application and see the output, make sure that you have created the above three databases in the MySQL server.)

The source code of this example can be found at GitHub.

Click here to download the source code. 


If you open up the project in your IDE, you can see the following files structure.



You can see that we have created separate properties files for each profile.

So lets dig into some of the important source files.


The ConfigurationManager is responsible for creating and registering the relevant configuration bean based on the activated profile.


The EnvironmentService has a different implementation for each profile. Based on the activated profile, the corresponding service bean is created and registered.
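A sketch of how such per-profile implementations might look (EnvironmentService is the name used in this project, but the method and the implementation class names here are assumptions for illustration):

```java
public interface EnvironmentService {
    String describe();
}

@Service
@Profile("dev")
class DevEnvironmentService implements EnvironmentService {
    @Override
    public String describe() {
        return "development environment";
    }
}

@Service
@Profile("prod")
class ProdEnvironmentService implements EnvironmentService {
    @Override
    public String describe() {
        return "production environment";
    }
}
```

Only the bean matching the activated profile is created, so any class auto-wiring EnvironmentService receives the implementation for the current environment.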


Finally we will look at our ApplicationLogController.


ApplicationLogController has exposed following REST endpoint.

POST  /logs

This persists the ApplicationLog entries with the aid of the ApplicationLogRepository and then returns the persisted log entry, which can be seen in the body of the HTTP response.


AppConfiguration is auto-wired with the configuration bean registered for the activated profile.

EnvironmentService is likewise auto-wired with the service bean created for the activated profile.

Ultimately, the database used for persistence is decided by the properties file selected for the activated profile.

Since everything depends on the activated profile, we need to run this application with one of the three profiles activated. Then we can see the result and understand how it works.


Running the Application by activating profiles.

A profile can be activated by setting the spring.profiles.active property to the desired profile name (<<profile-name>>).


Practical uses of the command are as follows.


Running the Spring Boot application by enabling the “prod” profile

mvn spring-boot:run -Dspring.profiles.active=prod

Running the application as a jar file by enabling the “dev” profile

java -jar target/spring-profile-example-0.0.1-SNAPSHOT.jar --spring.profiles.active=dev


Let's run the application and examine how the profiles work. In order to understand how it works, please check all three databases after each REST API call.


Run the application by activating  “prod” profile

java -jar target/spring-profile-example-0.0.1-SNAPSHOT.jar --spring.profiles.active=prod

Making the REST api call.

POST  localhost:8080/logs


The HTTP response will be as follows.




Run the application by activating  “dev” profile

java -jar target/spring-profile-example-0.0.1-SNAPSHOT.jar --spring.profiles.active=dev

Making the REST api call.

POST  localhost:8080/logs


The HTTP response will be as follows.




Run the application by activating  “staging” profile

java -jar target/spring-profile-example-0.0.1-SNAPSHOT.jar --spring.profiles.active=staging

Making the REST api call.

POST  localhost:8080/logs


The HTTP response will be as follows.



As I have already mentioned, please check all three databases after each REST API call. You will notice that only the corresponding properties file is picked up and the connection is made to the corresponding database.


What are the uses of @EntityScan and @EnableJpaRepositories annotations?


You may have noticed that some Spring application projects (especially Spring Boot applications) use the @EntityScan and @EnableJpaRepositories annotations as part of configuring Spring Data JPA support for the application.

But some Spring Boot applications manage to complete their configuration and run with Spring Data JPA WITHOUT those two annotations.

You might be a little confused about the real uses of these two annotations and when to use them. That is fine. The purpose of this article is to describe their real uses and give you a full picture of how and when to use them properly.


What is the Spring Boot main application package?

It is the package that contains the Spring Boot main configuration class that is annotated with @SpringBootApplication annotation.

@SpringBootApplication annotation

This annotation automatically provides the features of the following annotations

  • @Configuration
  • @EnableAutoConfiguration
  • @ComponentScan


Spring Boot Auto-Configuration Feature with @EnableAutoConfiguration

If you want to get the maximum advantage of Spring Boot's auto-configuration feature, you are expected to put all your classes under the Spring Boot main application package (directly in the main package, or indirectly in sub-packages).

@EnableAutoConfiguration scans the main package and its sub-packages when executing Spring Boot's auto-configuration for classpath dependencies. If any class or package outside the main application package is required for completing the auto-configuration of some dependency, it should be declared in the main configuration class with the related annotation.

Then @EnableAutoConfiguration will scan the declared packages to detect the required classes when auto-configuring the dependencies declared on the classpath. These annotations can be described as follows.



@EnableJpaRepositories enables the JPA repositories contained in the given package(s).

For instance, enabling auto-configuration support for Spring Data JPA requires knowing the path of the JPA repositories. By default, only the main application package and its sub-packages are scanned for JPA repositories. Therefore, if the JPA repositories are placed under the main application package or one of its sub-packages, they will be detected by @EnableAutoConfiguration as part of auto-configuration. If the repository classes are not placed under the main application package or its sub-packages, the relevant repository package(s) should be declared in the main application configuration class with the @EnableJpaRepositories annotation. This enables the JPA repositories contained in the declared package(s).


@EnableJpaRepositories(basePackages = "com.springbootdev.examples.jpa.repositories")




If the entity classes are not placed in the main application package or its sub-package(s), the package(s) must be declared in the main configuration class with the @EntityScan annotation. This tells Spring Boot where to scan for the application's entities; basically, @EnableAutoConfiguration scans the given package(s) to detect the entities.


@EntityScan(basePackages = "com.springbootdev.examples.entity")


Let's look at some real code examples. I am not going to explain Spring Data JPA here; it has already been discussed in my following article.

Click here to go to Spring Data JPA example


You can download the source code of this article from GitHub.

Click here to download.


If you open the project in your IDE, you will notice that the repository and entity packages are not placed under the main application package.



The main application package contains the Spring Boot main configuration class.

The JPA repository classes are in the com.springbootdev.examples.jpa.repositories package.

The entity classes are in the com.springbootdev.examples.entity package.

Therefore the locations of the entity classes and JPA repositories should be declared with the @EntityScan and @EnableJpaRepositories annotations respectively. Otherwise the application will fail to start.

Please refer to the following Spring Boot main configuration class.
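Combining the two annotations shown earlier, the main configuration class would look roughly like this (the class name is an assumption; the package values come from the @EnableJpaRepositories and @EntityScan examples above):

```java
@SpringBootApplication
@EnableJpaRepositories(basePackages = "com.springbootdev.examples.jpa.repositories")
@EntityScan(basePackages = "com.springbootdev.examples.entity")
public class SpringBootDataJpaExampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootDataJpaExampleApplication.class, args);
    }
}
```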