Author: Chathuranga Tennakoon

Spring Boot: Spring Data JPA Pagination

 

Introduction

The purpose of this article is to demonstrate pagination and its related features with Spring Boot and Spring Data JPA. To keep the article simple and focused, I will discuss only the pagination related topics of Spring Data JPA. I have assumed that you have a prior (basic) knowledge of Spring Boot and Spring Data JPA and that you know how to develop a basic application.

 

The source code and Project Structure 

The source code related to this article can be found at GitHub. Click here to download it.

[Screenshot: project structure]

 

Running the application

The following command can be executed to run the application.

mvn spring-boot:run

Now the sample application is up and running.

 

UserController

import com.springbootdev.examples.jpa.examples.dto.response.user.UserResponse;
import com.springbootdev.examples.jpa.examples.model.User;
import com.springbootdev.examples.jpa.examples.service.UserService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import java.util.HashMap;
import java.util.Map;

@RestController
public class UserController
{
    @Autowired
    private UserService userService;

    @PostMapping("/users")
    public Map<String, Object> createUsers(@RequestBody User user)
    {
        int numOfRecords = 100;
        String name = user.getName();

        //creating 100 sample users
        for (int index = 1; index <= numOfRecords; index++) {
            User userNew = new User();
            userNew.setName(name + " " + index);
            userService.create(userNew);
        }

        Map<String, Object> response = new HashMap<String, Object>();
        response.put("num_users_added", numOfRecords);
        return response;
    }

    @GetMapping("/users")
    public UserResponse getUsers(Pageable pageable)
    {
        Page page = userService.findUsers(pageable);
        return new UserResponse(page);
    }

    @GetMapping("/users2")
    public UserResponse getUsers2()
    {
        int pageNumber = 3;
        int pageSize = 2;
        Page page = userService.findUsers(PageRequest.of(pageNumber, pageSize));
        return new UserResponse(page);
    }
}


 

Let's look at each method in detail.

 

Creating dummy data set

You can create the dummy data set required to run this application by invoking the following REST API endpoint.

POST    http://localhost:8080/users

[Screenshot: POST /users request in Postman]
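If you prefer the command line, an equivalent curl call would look like this (the name value is arbitrary; the controller appends a running index to it for each generated user):

curl -X POST http://localhost:8080/users \
     -H "Content-Type: application/json" \
     -d '{"name": "user"}'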

createUsers :- As described above, this endpoint is used to create the set of dummy data required to run this demonstration application.

Now the users table of the targeted database is populated with some dummy entries. Let's look at the paginated REST API endpoint implementations in detail.

 

Pagination: user-specified page and page size

Here, the page that needs to be fetched and the page size (the number of items to fetch) can be specified at runtime (when the REST endpoint is invoked).

 

GET  http://localhost:8080/users?page=0&size=5

[Screenshot: paginated response for page=0&size=5]

Page numbers start from zero, and the highest available page depends on the total number of records and the page size. For example, with the 100 records created above and a page size of 5, the pages run from 0 to 19.

Here you can see that the page is 0 (the first page) and the size (number of items per page) is 5. You can invoke the above REST API endpoint with different page and size parameters.

Let's look at the code of the method responsible for handling the above REST API invocation.

@GetMapping("/users")
public UserResponse getUsers(Pageable pageable) 
{
   Page page = userService.findUsers(pageable);
   return new UserResponse(page);
}

 

Notice that we haven't passed any RequestParams to our handler method. When the endpoint /users?page=0&size=5 is hit, Spring automatically resolves the page and size parameters and creates a Pageable instance with those values. We then pass this Pageable instance to the service layer, which passes it to our repository layer.
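For reference, here is a minimal sketch of what the service layer might look like (the actual classes ship with the GitHub project; only findUsers is shown here):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.stereotype.Service;

@Service
public class UserService
{
    @Autowired
    private UserRepository userRepository;

    //simply forwards the Pageable to the repository, which builds the paginated query
    public Page<User> findUsers(Pageable pageable)
    {
        return userRepository.findAll(pageable);
    }
}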

 

 

Pagination: application-specified page and page size

Here, the page and page size are set by the application itself. The user does not have to provide any parameters; the values are defined in the application with some pre-defined classes.

 

GET  http://localhost:8080/users2

Here is the method responsible for handling the above REST API invocation. (Note the endpoint: “users2“)

@GetMapping("/users2")
public UserResponse getUsers2()
{
 int pageNumber = 3;
 int pageSize = 2;

 Page page = userService.findUsers(PageRequest.of(pageNumber, pageSize));
 return new UserResponse(page);
}

 

Here you can see that the page is 3 and the size (number of items per page) is 2. After invoking the above endpoint, you will get the following result.

[Screenshot: paginated response for page 3, size 2]

 

 

UserRepository does not directly extend PagingAndSortingRepository

If you look at the source code of the UserRepository class, you will notice that it does not directly inherit from PagingAndSortingRepository. You might be wondering how pagination works without extending PagingAndSortingRepository.

Let me explain. UserRepository extends JpaRepository.

If you examine the source code of JpaRepository, you will notice that it extends PagingAndSortingRepository. Therefore any repository that inherits from JpaRepository gets the pagination-related methods and functionality.
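In other words, nothing more than the following is needed for pagination to work (a sketch; the ID type Long is an assumption based on a typical entity):

import org.springframework.data.jpa.repository.JpaRepository;

public interface UserRepository extends JpaRepository<User, Long>
{
    //findAll(Pageable) comes for free: JpaRepository extends PagingAndSortingRepository
}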

 

Pagination related details (more)

If you go through the source code, you will find three application-specific classes developed to encapsulate the pagination details. They are listed below. (Please find some time to go through those classes.)

  • PaginationDetails
  • NextPage
  • PreviousPage

Those three classes help to present the pagination details in a well formatted and descriptive manner, as follows.

[Screenshot: formatted pagination details in the response]

 

We have called the following method of PagingAndSortingRepository:

Page<T> findAll(Pageable pageable);

It returns a Page. A Page contains the pagination information as well as the actual data retrieved. The getContent() method of Page can be used to get the data as a List.

In addition, Page has a set of methods (inherited from Slice) that can be used to retrieve different pagination-related details.
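For example, the following calls are all part of the standard Page/Slice API and can be used to build such a formatted response:

Page<User> page = userService.findUsers(PageRequest.of(0, 5));

List<User> users = page.getContent();          //the actual data of the current page
int totalPages = page.getTotalPages();         //total number of pages
long totalElements = page.getTotalElements();  //total number of records
boolean hasNext = page.hasNext();              //true if a next page exists (from Slice)
boolean hasPrevious = page.hasPrevious();      //true if a previous page exists (from Slice)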

 

If you have any queries related to this article, please feel free to drop a comment or contact me.

 

Spring Framework: Profiling with @Profile

 

In your software development life, you will have experienced different application environments such as DEVELOPMENT, STAGING and PRODUCTION. Applications are normally deployed in these environments. Most of the time these environments are set up on separate servers, known as:

  • Development Server
  • Staging Server
  • Production Server

Each of these server environments has its own configuration and connection details, and these details may differ from one server to another.

e.g:-
MySQL or some other database connection details
RabbitMQ server and connection details etc....

 

Therefore we should maintain a separate configuration/properties file for each server environment, and we need to pick the right configuration file based on the environment.

Traditionally, this is achieved by manually selecting the related configuration file when building and deploying the application. This requires a few manual steps with some human involvement, so there is a chance of deployment-related issues. In addition, the traditional approach has some limitations.

 

What should we do if there is a requirement to programmatically register a bean based on the environment?

e.g:- The staging environment should have a separate bean implementation, while the development and production environments have their own bean instances with different implementations.

The Spring Framework provides a solution for the above problems and makes our lives easier with an annotation called @Profile.

 

@Profile

In Spring, the above deployment environments (development, staging and production) are treated as separate profiles. The @Profile annotation is used to separate the configuration for each profile. When running the application, we need to activate a selected profile, and based on the activated profile the relevant configurations will be loaded.

The purpose of @Profile is to segregate the creation and registration of beans based on profiles. Therefore @Profile can be used with any annotation whose purpose is creating or registering a bean in the Spring IoC container. So @Profile can be used with the following annotations.

  • Any stereotype annotations (mainly used with @Component and @Service)
  • @Configuration and @Bean annotations

 

After reading the above note, the first question you might ask yourself is: “Why is @Profile used mainly with @Component and @Service?“. Let's figure it out before moving forward.

 

 

Why is @Profile mainly used with @Component and @Service annotations?

@Component designates the class as a Spring managed component, and @Service designates the class as a Spring managed service. It makes sense for the application to create different services and managed components based on the activated profile. This is very logical, and it is the expected behavior of profiling.

Do you think that creating separate controllers and repositories based on different profiles makes any sense? Is it logically acceptable? One controller for the production environment and different ones for staging and development? Isn't that crazy?

On the other hand, do you think we need separate repositories based on profiles? Separate ones for development, staging and production? Wait... I agree with you that we need different database configurations and connection details for each of these environments. Does that mean we need separate repositories? No, right? Separate database connection details have no relation to the repository.

Now I think you can understand why @Profile is not used with @Controller and @Repository.

 

 

What will happen if it is used with other stereotype annotations such as  @Controller and @Repository?

It will work fine. I have just explained the logical reasons for not using @Profile with @Controller and @Repository.

If you can logically justify that using @Profile with the @Controller and @Repository annotations does the right job for you, then you are free to go for it. But think twice before proceeding.

Ok. Now you have an idea of how @Profile helps to create and register the relevant beans based on the activated profile. But I haven't explained how the relevant application.properties file is picked up based on the activated profile. Let's look at that now.

 

Picking up the correct application.properties file with spring boot

According to our discussion, the application can be deployed in several server environments. Therefore the application should have a different application.properties file for each deployment profile (or server environment). When a profile is activated, the corresponding application.properties file should be picked up.

How are the properties files named per profile, and how does Spring Boot pick up the correct application.properties file?

We can have a property file specific to a profile with the naming convention application-{profile}.properties. In this way we can have a separate property file for each environment. If we have activated a profile, the corresponding property file will be used by the Spring Boot application. We can also have a property file for the default profile.

Suppose we have the profiles dev for the development environment, prod for the production environment and staging for the staging environment. Then the property files will be named as below.

application-prod.properties
application-dev.properties 
application-staging.properties
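For illustration, each file would carry the connection details of its own environment. A minimal sketch for the dev profile might look like this (the database name follows the sample project described below; the credentials are placeholders):

# application-dev.properties
spring.datasource.url = jdbc:mysql://localhost:3306/app_development_db
spring.datasource.username = dev_user
spring.datasource.password = dev_password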

 

Ok, let's do a coding example with Spring @Profile. We will try to cover most of the concepts we discussed here.

 

What we are going to build.

We will build a simple REST API application that persists some data to a MySQL database with Spring Data JPA. Here I am focused only on demonstrating @Profile; if you need to learn more about Spring Data JPA, please refer to my article on that.

Click here to go to Spring Data JPA article. 

This application has three different databases that represent the three deployment profiles: dev, staging and prod.

app_development_db  - database for the dev profile/environment 
app_staging_db - database for the staging profile/environment
app_production_db  - database for the prod  profile/environment.

(If you want to run this application and see the output, make sure that you have created the above three databases in the MySQL server.)

The source code of this example can be found at GitHub.

Click here to download the source code. 

 

If you open up the project in your IDE, you can see the following file structure.

[Screenshot: project structure with per-profile application.properties files]

 

You can notice that we have created separate application.properties files for each profile.

So let's dig into some of the important source files.

@Configuration
public class ConfigurationManager
{
    @Bean
    @Profile("dev")
    public AppConfiguration getDevelopmentConfiguration()
    {
        return new AppConfiguration("development_config");
    }

    @Bean
    @Profile("staging")
    public AppConfiguration getStagingConfiguration()
    {
        return new AppConfiguration("staging_config");
    }

    @Bean
    @Profile("prod")
    public AppConfiguration getProductionConfiguration()
    {
        return new AppConfiguration("production_config");
    }
}

 

ConfigurationManager is responsible for creating and registering the corresponding bean based on the activated profile.
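AppConfiguration itself is just a simple holder for a configuration name. A minimal sketch (the actual class is included in the GitHub project) could be:

public class AppConfiguration
{
    private String name;

    public AppConfiguration(String name)
    {
        this.name = name;
    }

    //used by ApplicationLogController to record which configuration was loaded
    public String getName()
    {
        return name;
    }
}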

 

EnvironmentService has different implementations for each profile. Based on the activated profile, the corresponding service bean will be created and registered.

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

@Service
@Profile("dev")
public class DevelopmentEnvironmentService implements EnvironmentService
{
    @Override
    public String getCurrentEnvironment()
    {
        return "development_environment";
    }
}

@Service
@Profile("prod")
public class ProductionEnvironmentService implements EnvironmentService
{
    @Override
    public String getCurrentEnvironment()
    {
        return "production_environment";
    }
}

@Service
@Profile("staging")
public class StagingEnvironmentService implements EnvironmentService
{
    @Override
    public String getCurrentEnvironment()
    {
        return "staging_environment";
    }
}

 

Finally we will look at our ApplicationLogController.

@RestController
public class ApplicationLogController
{
    @Autowired
    private AppConfiguration appConfiguration;

    @Autowired
    private EnvironmentService environmentService;

    @Autowired
    private ApplicationLogRepository repository;

    @PostMapping("/logs")
    public ApplicationLog createApplicationLog()
    {
        ApplicationLog applicationLog = new ApplicationLog();
        applicationLog.setConfiguration(appConfiguration.getName());
        applicationLog.setEnvironment(environmentService.getCurrentEnvironment());
        return repository.save(applicationLog);
    }
}

 

ApplicationLogController exposes the following REST endpoint.

POST  /logs

This persists an ApplicationLog entry with the aid of ApplicationLogRepository and then returns the persisted log entry, which can be seen in the body of the HTTP response.

 

AppConfiguration is auto-wired with the configuration bean registered for the activated profile.

EnvironmentService is likewise auto-wired with the service bean created for the activated profile.

Ultimately, the target database is decided by the properties file selected for the activated profile.

Since everything depends on the activated profile, we need to run this application with one of the three profiles activated. Then we can see the result and understand how it works.

 

Running the Application by activating profiles.

The profile can be activated with the following JVM argument.

-Dspring.profiles.active=<<profile-name>>

 

The practical uses of this argument are shown below.

 

Running spring boot application by enabling “prod” profile

mvn spring-boot:run -Dspring.profiles.active=prod

Running the application as a jar file by enabling the dev profile

java -jar -Dspring.profiles.active=dev target/spring-profile-example-0.0.1-SNAPSHOT.jar
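As a side note, the JVM argument is not the only option. Spring Boot also honours the same property when it is set in application.properties or as an environment variable:

# in application.properties
spring.profiles.active=dev

# or as an environment variable before starting the application
export SPRING_PROFILES_ACTIVE=dev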

 

Let's run the application and examine how the profiles work. To see what is happening, please check all three databases after each REST API call.

 

Run the application by activating  “prod” profile

java -jar -Dspring.profiles.active=prod target/spring-profile-example-0.0.1-SNAPSHOT.jar

Making the REST API call.

POST  localhost:8080/logs

 

The HTTP response will be as follows.

[Screenshot: HTTP response for the prod profile]
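Based on the controller and the prod beans shown above, the response body should carry values along these lines (the id field is assumed to be generated by the database):

{
    "id": 1,
    "configuration": "production_config",
    "environment": "production_environment"
}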

 

 

Run the application by activating  “dev” profile

java -jar -Dspring.profiles.active=dev target/spring-profile-example-0.0.1-SNAPSHOT.jar

Making the REST API call.

POST  localhost:8080/logs

 

The HTTP response will be as follows.

[Screenshot: HTTP response for the dev profile]

 

 

Run the application by activating  “staging” profile

java -jar -Dspring.profiles.active=staging target/spring-profile-example-0.0.1-SNAPSHOT.jar

Making the REST API call.

POST  localhost:8080/logs

 

The HTTP response will be as follows.

[Screenshot: HTTP response for the staging profile]

 

As I have already mentioned, please check all three databases after each REST API call. You will notice that only the corresponding application.properties file is picked up and the connection is made to the matching database.

 

SonarQube: Exclude classes from Code Coverage Analysis

In code coverage analysis we care only about the classes that should be covered with unit and integration tests: the controllers, repositories, services and domain-specific classes. Some classes are not covered by either unit or integration tests, so to get a correct code coverage figure, those unrelated classes must be excluded from the analysis.

E.g:- configuration-related classes (the SpringBootApplication configuration class, Spring Security configuration classes etc.) should be excluded.

This can be done by adding the required classes to an exclusion list under the “properties” section of the pom.xml.

<properties>
    <sonar.exclusions>
      **/SpringBootDockerExampleApplication.java,
      **/config/*.java
    </sonar.exclusions>
 </properties>

 

You can add multiple exclusions, each separated by a comma. According to the above configuration, SpringBootDockerExampleApplication and any class under the config package will be excluded/ignored when the analysis is performed.
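Note that sonar.exclusions removes the matched files from the analysis entirely. If you want the classes analyzed for issues but ignored only in the coverage figure, SonarQube offers the sonar.coverage.exclusions property, which takes the same pattern syntax:

<properties>
    <sonar.coverage.exclusions>
      **/SpringBootDockerExampleApplication.java,
      **/config/*.java
    </sonar.coverage.exclusions>
</properties>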

 

NodeJs development with Docker (Webpack + ES6 + Babel)

 

In this article we will look at how to use Docker for NodeJs application development and deployment. I will show you how to use the docker-compose utility to bundle a NodeJs application as a docker image and run it in a docker container. For demonstration purposes, I am going to reuse a NodeJs application developed in another article, and I will take you through the step-by-step process of integrating Docker features and functionality into it.

 

We will be using the application developed in the following article.

Click here to go to the previous article

If you haven’t read the previous article, it is highly recommended to read it before moving forward with this article.

 

Let's brush up on the technologies used in the previous article.

  • Express.js :- The application has been developed using the Express.js framework.
  • ES6+ :- The source code complies with ES6+ (ES6 and higher) JavaScript.
  • Babel and babel-loader :- used for transpiling the ES6+ source code into ES5 style code. babel-loader has been used with webpack for the compiling/transpiling step.
  • webpack :- used as the static resource bundling tool (here specifically JavaScript) and for executing the Babel transpiler via babel-loader.

 

Get the source code of the previous article's project from GitHub with the following command.

git clone git@github.com:chathurangat/nodejs-webpack-es6.git

 

Once the source code is cloned, add the two empty files below to the root of the project.

  • Dockerfile :- the file name should be “Dockerfile” without any extension (NO extension)
  • docker-compose.yml 

Don't worry about the purposes of these two files right now. We will discuss the purpose of each file when its contents are added.

 

After adding the above two files, open the project with your IDE; the project structure should look like below.

[Screenshot: project structure with Dockerfile and docker-compose.yml]

 

NodeJs Application with Express.Js , Babel and Webpack

Since I demonstrated how to develop a NodeJs application with Express.js using ES6+ JavaScript syntax, and how to use Babel and webpack for transpiling and bundling, in the previous article, I am not going to repeat that content here. If you need any clarification, please refer to the previous article. I will move forward with adding Docker to the previously developed application.

 

Moving forward with Docker

Now it is time to add the content of the Dockerfile and the docker-compose.yml file. Let's look at the purpose of each file in detail.

Docker is all about creating images from source code and running them in standalone environments called containers. If you are new to docker and need a basic idea, click here to visit my article about Docker.

 

Dockerfile

The Dockerfile contains the instructions and related commands for building the docker image from the project source code. Add the following content to the empty Dockerfile that you created.

FROM node:alpine
WORKDIR /app
COPY . /app
RUN npm install
ENTRYPOINT ["npm","run","execute"]

 

FROM : This defines the base image for the image that we are building (the image is built on top of this base image). All we have said is that, for this image to run, we need the node:alpine image.

 

WORKDIR : This creates the work directory when building the image. Here it creates the “/app” directory as the work directory. If you go into the container's bash shell, you can verify that the “/app” directory has been created with all the copied files.

WORKDIR sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.

 

COPY : copies the given files from the local development environment into the docker image. Here the current working directory (all files in it) is copied to the “/app” directory.

 

RUN : executes shell commands while the docker image is being built.

 

ENTRYPOINT : This command runs when the container is created and brought up. Normally it contains the shell command that runs the application inside the docker container. The command should be given in JSON array format.

According to the above Dockerfile, the command will be:

 npm run execute

 

Here “execute” is a custom command; if you look at the scripts section of the package.json, you will find the related entry.

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "execute": "webpack && node build/app.bundle.js"
},

 

If you want to learn more about Dockerfile, please click here to visit the official documentation about it.

 

 

What is docker-compose? 

Compose is a tool for defining and running multi-container Docker applications. With Compose, you need to create a YAML file (docker-compose.yml) to configure your application’s services. Then, with a single command, you can create and start all the services from your configuration.

Let's add the contents of the docker-compose.yml file we created.

 

docker-compose.yml 

version: '3'

services:
  nodejs-webpack-es6-app:
    image: nodejs-webpack-es6-image
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 4000:2000

 

According to the above document, the docker-compose file format version is 3, so the document should use syntax that complies with version 3.

We declare the list of services under services. Here I have declared only one service, which is built from the source code of this project. Each declared service will be deployed and run in a separate docker container.

The name of the service is “nodejs-webpack-es6-app“. The service should be deployed with the docker image “nodejs-webpack-es6-image“. If the docker image is not available, it is built using the Dockerfile available in the current working directory.

The service runs on container port 2000 and is exposed through docker host port 4000. Therefore the service can be accessed externally with:

ip address of the docker host + 4000 (port)

 

 

docker-compose for building and running the application

 

In a command shell, go to the directory where the docker-compose.yml file is located and run the command below to start the application.

 

docker-compose up

 

After running the above command, you can access the application as follows.

 

Testing the Application

Now let's access each HTTP route with Postman.

 

GET   http://localhost:4000/

[Screenshot: Postman response for GET /]

 

GET   http://localhost:4000/products/12

[Screenshot: Postman response for GET /products/12]

 

POST    http://localhost:4000/products

[Screenshot: Postman response for POST /products]

 

Rebuilding the image after source changes

If you have modified the source code of the application, you need to remove the old image and rebuild a new one. This can be done with the following single command.

 

docker-compose build

 

The source code of this article can be found on GitHub. Click here to get the source code.

 

 

 

What is babel-polyfill and why is it important?

 

babel-polyfill will emulate a full ES6 environment. For example, without the polyfill, the following code:

function allAdd() {
    return Array.from(arguments).map((a) => a + 2);
}

will be transpiled to:

function allAdd() {
    return Array.from(arguments).map(function (a) {
        return a + 2;
    });
}

This code will not work everywhere, because Array.from is not supported by every browser:

Uncaught TypeError: Array.from is not a function

To solve this problem we need to use a polyfill. A polyfill is a piece of code that replicates a native API that does not exist in the current runtime.

To include the Babel polyfill, we need to install it:

npm install babel-polyfill --save-dev

 

Use the babel-polyfill in your source code

To include the polyfill you need to require it at the top of the entry point of your application. In order to use babel-polyfill with your application, you can use one of the following three methods.

 

method 01

With require

require("babel-polyfill");

 

method 02

If you are using ES6’s import syntax in your application’s entry point, you should instead import the polyfill at the top of the entry point to ensure the polyfills are loaded first.

With ES6 import.

import "babel-polyfill";

 

method 03

With webpack.config.js, add babel-polyfill to your entry array:

module.exports = {
  entry: ["babel-polyfill", "./app/js"]
};

NodeJS Simple Application with ECMAScript6 (Webpack + Babel + ES6)

 

In this article, I am going to develop a simple NodeJs application that uses the following technologies.

  • ES6 (ECMAScript6) based JavaScript Syntaxes
  • Babel for transpiling/compiling ES6 syntaxes into ES5.
  • Webpack for executing babel transpiler and bundling JavaScript  files into a single file
  • ExpressJs as the web application framework for NodeJs

Let's move forward and build our NodeJs application with ExpressJS using ES6+ JavaScript syntax.

 

Initializing the project and creating the package.json file.

A NodeJs project should be an NPM based (managed) project; that is, the project's dependent packages should be managed with NPM. This is achieved by creating a package.json file for the project.

The following command can be used to create the package.json file with the required initial configuration.

npm init

Once the npm init command is executed, you need to provide the project related details for the package.json. Please refer to the screenshot below.

 

[Screenshot: npm init prompts]

 

Now the package.json is created and we can start the development of our NodeJs application.

 

Creating the project directory structure

The final project structure should look like below. Don't worry at the moment; we will add each of the files as we move forward with the article.

[Screenshot: final project structure]

 

Adding the external libraries and dependencies for the project.

As I have already pointed out, our NodeJs application depends on a few external dependencies, listed below.

  • Express.js
  • Babel
  • Webpack

We will now install each of these dependencies.

 

Installing Express.js dependency

Express.js is a web application framework for NodeJs. It has a set of utility methods that simplify the development of NodeJs applications. Express.js is not bundled with the NodeJs installation, so we need to install it separately. This can be done with the following command.

npm install express --save

After executing the above command, you can see that ExpressJs has been added to the dependencies list of the package.json file.

 

Installing Babel dependencies

Babel is used to transpile ES6+ based source code into ES5. It should be installed as a development dependency, since the transpiling happens in the development environment. The following command installs the Babel related dependencies.

npm install --save-dev babel-core babel-loader babel-preset-env babel-polyfill
  • babel-core :- includes the Babel core modules
  • babel-loader :- loader for running ES6+ and JSX through webpack. This will be configured in the webpack.config.js
  • babel-preset-env :- (or any other preferred preset). The preset will also be configured in the webpack.config.js, or can be defined in a .babelrc file
  • babel-polyfill :- a polyfill is a piece of code that replicates the full ES6+ environment in the targeted runtime environment, so some of the transpiled ES6+ methods and objects can run in an ES5 environment without runtime errors. If you want to learn more about babel-polyfill, please click here to go to my article on that.

If you want to learn more about Babel or webpack, please click here to visit the article I have written on Babel with webpack.

Once the above command has executed successfully, you can see the Babel dependencies installed under the devDependencies of the package.json.

 

Installing the webpack dependencies

Now we will set up webpack. First we need to install the webpack dependencies for the project; two of them are required.

  • webpack
  • webpack-dev-server

These dependencies can be installed with the following command.

npm install webpack webpack-dev-server --save-dev

 

So once the dependencies are installed, we need to create a webpack.config.js file and do the necessary configuration for the webpack.

 

--save : This will create a dependency entry under the dependencies of the package.json

--save-dev : This will create a dependency entry under the devDependencies of the package.json

 

Once all the required dependencies are installed, the package.json of the project should look like below. (This is just a sample snapshot of package.json; the versions of the dependencies may differ.)

"dependencies": {
  "babel-polyfill": "^6.26.0",
  "express": "^4.16.2"
},
"devDependencies": {
  "babel-core": "^6.26.0",
  "babel-loader": "^7.1.2",
  "babel-preset-env": "^1.6.1",
  "webpack": "^3.11.0",
  "webpack-dev-server": "^2.11.1"
}

 

Let's look at each source file in detail.

 

index.js 

Create an index.js file in the root of your project. The file should contain the following source.

import express from "express";
import routes from "./src/routes";

const app = express();
let port = 2000;

app.listen(port, () => {
    console.log(" server is started and listen on port [" + port + "]");
});

//all the requests will be handled by routes middleware
app.use("/", routes);


This is the entry point of your NodeJs application. It contains the logic for spinning up the web server and handling the application routes (HTTP routes).

It creates a web server listening on port 2000. All requests coming to the web server are handled by the routes module developed in this application. We will go through the routes module in a while.

As you can see, we are using the ES6 based import here instead of the ES5 based require. If you look through the source code of this project, you will notice that ES6+ syntax is used in most places.

 

RequestHandlerService.js

This class has been added to demonstrate the robust features of ES6+ (just to show the beauty of ES6+). It contains a simple method for handling an HTTP request.

export class RequestHandlerService
{
    static async handleHttpRequest(requestMethod, urlPattern)
    {
        return "HTTP " + requestMethod + " " + urlPattern + " received";
    }
}

The method is declared with the async keyword, which guarantees that it returns a promise.

 

routes.js

This file contains all the HTTP route handlers related to this project. I am not going to explain every route handler here; I will just describe a few of them for your understanding.

import {RequestHandlerService} from "./service/RequestHandlerService";

const express = require('express');
const router = express.Router();

router.all('/*', function (request, response, next) {
    console.log(" this will be applied to all routes ");
    next();
});

router.get("/", async (request, response) => {
    let message = await RequestHandlerService.handleHttpRequest(request.method, request.path);
    response.status(200).json({
        "message": message
    });
});

router.get("/products", (request, response) => {
    response.status(200).json({
        "message": "HTTP " + request.method + " Request with URL Pattern " + request.path
    });
});

router.get("/products/:id", (request, response) => {
    response.status(200).json({
        "message": "HTTP " + request.method + " Request with URL Pattern " + request.path
    });
});

router.post("/products", (request, response) => {
    response.status(200).json({
        "message": "HTTP " + request.method + " Request with URL Pattern " + request.path
    });
});

module.exports = router;


 

Let's look at some of the selected routes in detail. If you want to learn more about ExpressJS routing, it is recommended to read http://expressjs.com/en/guide/routing.html

router.all('/*', function (request, response, next) {
    console.log(" this will be applied to all routes ");
    next();
});

 

There is a special routing method, app.all(), used to load middleware functions at a path for all HTTP request methods. For example, the above handler is executed for requests to the route “/*” (for any route) with HTTP GET, POST, PUT, DELETE, or any other HTTP request method supported.

next() will delegate the request to the next available middleware or route.

 

router.get("/", async (request, response)=> {
    
    let message = await RequestHandlerService.handleHttpRequest(request.method, request.path);

    response.status(200).json({
        "message": message
    });
});

 

The handleHttpRequest (async) method of the RequestHandlerService is invoked in this route handler. Since the route handler (arrow function) is marked with the async keyword, it is possible to use the await keyword to wait for the result of the method invocation.

If you observe the last line of the file, you can see that we have exported the router module, so outside scripts can use it.

module.exports = router;

 

webpack.config.js

Add the webpack.config.js file in the root of your project. The webpack.config.js related to this project is shown below.

const path = require('path');

module.exports = {
    target: 'node',
    entry: {
        app: [
            'babel-polyfill',
            './index.js'
        ]
    },
    output: {
        path: path.resolve(__dirname, 'build'),
        filename: 'app.bundle.js'
    },
    module: {
        loaders: [{
            test: /\.js?$/,
            exclude: /node_modules/,
            loader: 'babel-loader',
            query: {
                presets: ['env']
            }
        }]
    }
};


With webpack, running/transpiling your JavaScript and JSX through Babel is as simple as adding babel-loader to your webpack.config.js. It has been added as a loader that goes through all js and JSX files. It uses babel-preset-env as defined, and excludes the files in the node_modules directory.

babel-polyfill will be added and managed by webpack. In addition, index.js has been designated as the entry point js file for the project.

The output will be stored in the build directory as the file app.bundle.js.

 

Testing and Running the application.

Now we have come to the final section of the article. Here we will run the NodeJs application and test each application route. In order to run the application, we need to transpile the project's ES6+ syntax into ES5.

 

Transpiling/Compiling ES6+ syntaxes into ES5 with Babel and Webpack

Now we need to transpile/convert the ES6+ based source code into ES5 based source. This is done with Babel; in this project we use webpack to execute Babel via babel-loader for the source code transpiling. To do this, go to the root of the project where your webpack.config.js is located, then run the following command to build the project.

webpack

 

Running the NodeJs application

You can then see that the project is built and the final file is created at build/app.bundle.js. This is the file that should be run with node.

node build/app.bundle.js

 

Once the above command is executed, it creates a web server and brings up the NodeJs project. Please refer to the screenshot below.

[Screenshot: server started and listening on port 2000]

You can see that the server is up and the application listens on port 2000 for incoming HTTP requests.

 

Executing both transpiling and running commands with NPM

We can combine both commands and add a new command under the scripts section of the package.json.

"execute": "webpack && node build/app.bundle.js"

 

Now you can run the following command to build and run the project. It will first build the project with webpack and then run the built file with node.

npm run execute

 

 

Now let's access each HTTP route with Postman.

 

GET   http://localhost:2000/

[Screenshot: Postman response for GET /]

 

GET   http://localhost:2000/products/12

[Screenshot: Postman response for GET /products/12]

 

POST    http://localhost:2000/products

[Screenshot: Postman response for POST /products]

 

The source code of this article can be found on GitHub. Click here to get the source code.

 

How to set up the project cloned from GitHub

The source code has been pushed without the node_modules directory, so run the following command to install the required dependencies for the project. This installs the dependencies locally for your project.

npm install

 

After that you can run the project with the following command.

npm run execute

 

This is the end of the article. If you have any concern or need any help, feel free to contact me.

 

 

Docker: Spring Boot and Spring Data JPA (MySQL) REST Api example with docker (without docker-compose)

What is Docker?

A basic introduction and overview of Docker can be found in the following blog article. If you are new to Docker, please read it before continuing with this article.

Click here to go to the article of  “What is Docker and Its Overview”

 

What we are going to do …. 

Here we will migrate an existing application to use docker based containers instead of traditional pre-installed servers. The existing application is a Spring Boot REST API application that uses Spring Data JPA for the persistence layer and MySQL as the database server. In this article, we will use a docker based MySQL container to replace the traditional MySQL server (a MySQL server that runs as an external server).

Since the purpose of this article is to demonstrate the features and capabilities of Docker, I am not going to explain the Spring Data JPA related code here. If you want to learn about it, please refer to my blog article Simple CRUD Application with Spring Boot and Spring Data JPA.

 

There are two main ways that can be used to build and run applications with docker.

  1. Using docker-compose for managing and running the dependencies (servers) and linking them
  2. Manually managing and running the dependencies and linking them (without docker-compose)

 

In this article we will focus on the second approach, that is, the “without docker-compose” approach.

 

Where to download the source code?

The initial source code of this article can be found in the article Simple CRUD Application with Spring Boot and Spring Data JPA. As I have already mentioned, we will add some docker related configuration so that the application can be built and run with Docker.

The fully docker-migrated source code (the outcome of this article) can be found at GitHub. Click here to download.

 

Starting to migrate the application to use Docker

If you have gone through the reference article above (Simple CRUD Application with Spring Boot and Spring Data JPA), the application has the following components and dependencies.

  • Running Spring Boot Application
  • Running MySQL Server

 

Therefore we need to create the following containers in the process of migrating this application to the docker platform.

  • A Container for running the Spring Boot Application (developed application docker image)
  • A Container for running the MySQL Server (mysql docker image)

 

Click here if you want to see a list of important and frequently required Docker commands.

 

Create a docker container for MySQL

 

The MySQL team provides an official docker image of MySQL through Docker Hub (https://hub.docker.com/_/mysql/). Therefore we can create the MySQL docker container by executing the following command. It first checks the local docker registry for the requested mysql image; if it is not available, it pulls the image from the remote repository (Docker Hub) and creates the container.

 

docker run -d \
    -p 2012:3306 \
    --name mysql-docker-container \
    -e MYSQL_ROOT_PASSWORD=root123 \
    -e MYSQL_DATABASE=spring_app_db \
    -e MYSQL_USER=app_user \
    -e MYSQL_PASSWORD=test123 \
    mysql:latest

 

After running the above command, a docker container named “mysql-docker-container“ is created.

Please refer to the screenshot below.

[Screenshot: mysql-docker-container created]

 

Let's break down the above “docker run” command.

-d

We use this flag to run the container in detached mode, which means that it will run in a separate background process. If you want the terminal access, simply avoid this flag.

 

-p <host-port>:<container-port>

The -p flag is used for port binding between the host and the container. Sometimes you might need to connect to the MySQL container from the host or from some other remote server, so we bind the container port to a port of the host machine. Then it is possible to access the mysql docker container through the IP and port of the host machine.

2012:3306 :- By default the MySQL server uses port 3306, so the container port is 3306. We have mapped/bound port 3306 of the container to port 2012 of the host machine, so outsiders can access the MySQL container via the host machine's ip-address and port 2012.
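For example, assuming the mysql command line client is installed on the host machine, you could verify the port binding with the credentials from the docker run command above:

mysql -h 127.0.0.1 -P 2012 -u app_user -p
# enter test123 when prompted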

Here is a screenshot of a successful connection to the docker container with Sequel Pro.

[Screenshot: Sequel Pro connected to the MySQL container]

 

--name

The name of the docker container. In this case it is “mysql-docker-container“.

 

mysql:latest

 

This describes the image name and the tag.

mysql is the image name and latest represents the tag

 

The rest of the parameters are used to set the root password, create the database and create the user (with username and password) that is given access to the database.

Now we have a MySQL docker container up and running. Our next target is to create a container for running the Spring Boot application.

 

Create docker container for Spring Boot Application.

It was easy to create a container for the MySQL server because there is an already published docker image for MySQL. Where can we find the docker image for the Spring Boot application that we have developed? Nowhere. We have to create it for our application.

It is the responsibility of the developers to create and publish (if required) the docker image of the application they have developed.

Let's create the docker image for this Spring Boot application project.

First of all we need to change the database connection details of the application.properties file to point to the mysql-docker-container that we have already created.

spring.datasource.url = jdbc:mysql://mysql-docker-container:3306/spring_app_db?useSSL=false
spring.datasource.username = app_user
spring.datasource.password = test123

 

  • the docker container name has been used as the host (mysql-docker-container)
  • 3306 is the port of the docker container on which the MySQL server is running

 

 

Dockerfile

This file contains all the instructions and commands required to build the docker image for the application. It should be added to the root of the source directory, and normally it has no file extension.

FROM java:8
LABEL maintainer="chathuranga.t@gmail.com"
VOLUME /tmp
EXPOSE 8080
ADD target/spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar"]

 

FROM – This defines the base image for the image that we are building (the image is built on top of this base image). All we have said is that, for this image to run, we need the java:8 image.

 

EXPOSE – This specifies the port number on which the container listens. The docker host is informed about this port when the container boots up.

 

VOLUME
We added a VOLUME pointing to “/tmp” because that is where a Spring Boot application creates working directories for Tomcat by default. The effect is to create a temporary file on your host under “/var/lib/docker” and link it to the container under “/tmp”. This step is optional for the simple app that we wrote here, but can be necessary for other Spring Boot applications if they need to actually write in the filesystem.

ADD – Adds files into the docker image being created. Normally this command is used to add the executable jar file into the docker image.

 

ENTRYPOINT – The specified command gets executed when the container boots up.

 


 

The instructions in the Dockerfile are used by the docker build command when building the docker image.

If you want to learn more about Dockerfile, please click here to visit the official documentation about it.

 

 

Building the docker image from project

First you need to build the application. This can be done with the following command.

mvn clean install -DskipTests

 

Once the project is built successfully, we can build the docker image with the following command.

docker build -f Dockerfile -t spring-jpa-app .

 

spring-jpa-app – the name of the docker image being built.

 

Once the above process is complete, you can verify that the docker image was built successfully with the following command, which shows a list of the available docker images.

docker images

 

Running the built docker image 

Now we need to run the built docker image of our Spring Boot application. Since this application needs to connect to the MySQL server, we must make sure that the MySQL server is up and running.

You can check the currently running docker containers with the following command.

docker ps

 

If the MySQL container is not up and running, start it now. (I have already explained how to run the mysql-docker-container.)

 

Link with MySQL Container. 

Once the mysql container is up and running, you can run your Spring Boot application image in a container with the following command. You need to link your Spring Boot application with the mysql container.

 

docker run -t --name spring-jpa-app-container --link mysql-docker-container:mysql -p 8087:8080 spring-jpa-app

 

--name spring-jpa-app-container

This represents the name of the docker container that is going to be created. You can use any name you wish.

 

-p 8087:8080

The application runs on port 8080 of the container, which is bound to port 8087 of the host machine. (So the hosted application can be accessed with the host ip address and port 8087.) In my case it is localhost:8087

 

--link

Now, because our application Docker container requires a MySQL container, we will link both containers to each other. To do that we use the --link flag. The syntax of this command is to add the container that should be linked and an alias, for example --link mysql-docker-container:mysql, in this case mysql-docker-container is the linked container and mysql is the alias.

 

spring-jpa-app

This represents the name of the docker image that is going to be run on a container.

 

Now we can check and verify whether both containers (mysql and spring boot application containers) are up and running.

[Screenshot: both containers up and running in docker ps]

 

Verify containers are linked properly

To verify whether the containers are linked properly, you can get into the application container (spring-jpa-app-container) and see the content of the /etc/hosts file.

Log in to the container in bash mode

docker exec -it spring-jpa-app-container bash

spring-jpa-app-container is the name of the container that we need to access. The bash param says that we need bash access.

see the content of /etc/hosts ( cat /etc/hosts )

[Screenshot: /etc/hosts content of spring-jpa-app-container]

Did you  notice the “172.17.0.2 mysql 56f7f45a79c1 mysql-docker-container” ?

This confirms that the containers are linked properly and that the spring boot application container can reach the mysql-docker-container.

 

Testing the application 

The hosted application can be accessed through http://localhost:8087

You can refer to the original article for more information about accessing the endpoints with the correct request parameters.

Here I have made a sample Postman call for you. It shows the REST endpoint for creating users.

[Screenshot: Postman call to the user creation endpoint]

 

You can connect to the MySQL server with your preferred MySQL UI client application or container bash mode.

[Screenshot: MySQL client connected to the containerized database]

 

I hope this tutorial helps you understand how to use Docker for application development.

In the next tutorial we will modify this same application to use docker-compose to manage everything, instead of the manual container running and linking approach.

 

 

Spring RestTemplate – exchange() method with GET and POST Requests

 

The exchange() method

Execute the HTTP method to the given URI template, writing the given HttpEntity to the request, and returns the response as ResponseEntity.
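For reference, a simplified form of the most commonly used exchange() overload looks like this (RestTemplate also offers overloads that accept a java.net.URI or a ParameterizedTypeReference):

public <T> ResponseEntity<T> exchange(String url,
                                      HttpMethod method,
                                      HttpEntity<?> requestEntity,
                                      Class<T> responseType,
                                      Object... uriVariables)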

Below, I am going to show you some sample RestTemplate exchange requests with the GET and POST HTTP methods.

 

 

GET request with No Request Parameters (With Headers)

Here the HTTP GET request is made without any query params (request params) but with a Basic Authentication header. By observing the example below, you can get an idea of how the exchange method is used to send an HTTP GET request with headers and no request parameters.

public void findUserById()
{
    String username = "chathuranga";
    String password = "123";
    Integer userId = 1;
    String url = "http://localhost:8080/users/" + userId;

    //setting up the HTTP Basic Authentication header value
    String authorizationHeader = "Basic " + DatatypeConverter.printBase64Binary((username + ":" + password).getBytes());
    HttpHeaders requestHeaders = new HttpHeaders();

    //set up HTTP Basic Authentication Header
    requestHeaders.add("Authorization", authorizationHeader);
    requestHeaders.add("Accept", MediaType.APPLICATION_JSON_VALUE);

    //request entity is created with request headers
    HttpEntity<AddUserRequest> requestEntity = new HttpEntity<>(requestHeaders);

    ResponseEntity<FindUserResponse> responseEntity = restTemplate.exchange(
            url,
            HttpMethod.GET,
            requestEntity,
            FindUserResponse.class
    );

    if (responseEntity.getStatusCode() == HttpStatus.OK) {
        System.out.println("response received");
        System.out.println(responseEntity.getBody());
    } else {
        System.out.println("error occurred");
        System.out.println(responseEntity.getStatusCode());
    }
}


 

 

GET request with Request Parameters (Query Params) and Headers

Here, the HTTP GET request is made with query parameters (request parameters) and a Basic Authentication header. By observing the example below, you can get an idea of how the exchange method is used to send an HTTP GET request with request params and headers.

The generated request URL will be something like below. You can see that it includes the query params.

http://localhost:53793/users/1?name=chathuranga&email=chathuranga.t@gmail.com

 

public void findUserById()
{
    String username = "chathuranga";
    String password = "123";
    Integer userId = 1;
    String url = "http://localhost:" + port + "/users/" + userId;

    //setting up the HTTP Basic Authentication header value
    String authorizationHeader = "Basic " + DatatypeConverter.printBase64Binary((username + ":" + password).getBytes());
    HttpHeaders requestHeaders = new HttpHeaders();

    //set up HTTP Basic Authentication Header
    requestHeaders.add("Authorization", authorizationHeader);
    requestHeaders.add("Accept", MediaType.APPLICATION_JSON_VALUE);

    //request entity is created with request headers
    HttpEntity<AddUserRequest> requestEntity = new HttpEntity<>(requestHeaders);

    //adding the query params to the URL
    UriComponentsBuilder uriBuilder = UriComponentsBuilder.fromHttpUrl(url)
            .queryParam("name", "chathuranga")
            .queryParam("email", "chathuranga.t@gmail.com");

    ResponseEntity<FindUserResponse> responseEntity = restTemplate.exchange(
            uriBuilder.toUriString(),
            HttpMethod.GET,
            requestEntity,
            FindUserResponse.class
    );

    if (responseEntity.getStatusCode() == HttpStatus.OK) {
        System.out.println("response received");
        System.out.println(responseEntity.getBody());
    } else {
        System.out.println("error occurred");
        System.out.println(responseEntity.getStatusCode());
    }
}


 

 

POST request with Request Body and Headers

Here the HTTP POST request is made with a valid request body and a Basic Authentication header. By observing the example below, you can get an idea of how the exchange method is used to send an HTTP POST request with a request body and headers.

public void httpPostRequestWithHeadersAndBody()
{
    String url = "http://localhost:8080/users";
    String username = "chathuranga";
    String password = "123";

    //set up the basic authentication header
    String authorizationHeader = "Basic " + DatatypeConverter.printBase64Binary((username + ":" + password).getBytes());

    //setting up the request headers
    HttpHeaders requestHeaders = new HttpHeaders();
    requestHeaders.setContentType(MediaType.APPLICATION_JSON);
    requestHeaders.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));
    requestHeaders.add("Authorization", authorizationHeader);

    //setting up the request body
    User user = new User();
    user.setName("Sample User");
    user.setUsername("user1");
    user.setPassword("pass123");

    //request entity is created with request body and headers
    HttpEntity<User> requestEntity = new HttpEntity<>(user, requestHeaders);

    ResponseEntity<UserResponse> responseEntity = restTemplate.exchange(
            url,
            HttpMethod.POST,
            requestEntity,
            UserResponse.class
    );

    if (responseEntity.getStatusCode() == HttpStatus.OK) {
        //renamed to avoid clashing with the "user" variable declared above
        UserResponse userResponse = responseEntity.getBody();
        System.out.println("user response retrieved ");
    }
}


 

Runtime compile ES6+ in NodeJs with babel-register

babel-register is a library, or rather a plugin, that transpiles ES6+ based JavaScript source files into ES5 at runtime (on the fly).

babel-register is a require hook that binds node's require method and automatically transpiles files on the fly. This is not meant for production! It is considered bad practice to compile the code this way; it is far better to compile the code before deploying.

However this works quite well for development purposes.

Let’s install babel-register first:

npm install babel-register --save-dev

 

Create a simple index.js file:

console.log('Hello World');

 

Now create a sample.js file that requires babel-register and then index.js (note the relative ./ prefix, which Node needs to resolve a local file):

require('babel-register');
require('./index.js');

 

When you run the code using node sample.js you will see the output of index.js: “Hello World”.

node sample.js

 

Note: you can't require babel-register in the same file that you want to compile, because Node executes the file before Babel transpiles it.

 

Is it safe to use babel-register in production?

No, it should be used only for development purposes. If babel-register is used in a production environment, there may be performance and latency issues, because the source files are transpiled on the fly and the transpilation can take a considerable amount of time. To avoid such issues, use a compiled (or rather transpiled) JavaScript file in the production environment.

 

What are the uses of @EntityScan and @EnableJpaRepositories annotations?

 

You may have noticed that some Spring application projects (especially Spring Boot applications) use the @EntityScan and @EnableJpaRepositories annotations as part of configuring Spring Data JPA support for the application.

But some Spring Boot applications manage to complete their configuration and run with Spring Data JPA WITHOUT those two annotations.

You might be a little confused about the real usage of those two annotations and when to use them. That is fine. The purpose of this article is to describe their real usage and give a full picture of how and when to use them properly.

 

What is the Spring Boot main application package?

It is the package that contains the Spring Boot main configuration class that is annotated with @SpringBootApplication annotation.

@SpringBootApplication annotation

This annotation automatically provides the features of the following annotations

  • @Configuration
  • @EnableAutoConfiguration
  • @ComponentScan

 

Spring Boot Auto-Configuration Feature with @EnableAutoConfiguration

If you want to get the maximum advantage of Spring Boot's auto-configuration feature, you are expected to put all your class packages under the Spring Boot main application package (directly in the main package or indirectly as sub packages).

@EnableAutoConfiguration scans the main package and its sub packages when executing Spring Boot's auto-configuration for classpath dependencies. If a class or package that is required for completing the auto-configuration of some dependency lies outside the main application package, it should be declared in the main configuration class properly (with the related annotation).

Then @EnableAutoConfiguration scans those declared packages to detect the required classes in the process of completing the auto-configuration for the dependencies declared on the classpath. These can be described as follows.

 

@EnableJpaRepositories

This enables the JPA repositories contained in the given package(s).

For instance, enabling auto-configuration support for Spring Data JPA requires knowing the path of the JPA repositories. By default, only the main application package and its sub packages are scanned for JPA repositories. Therefore, if the JPA repositories are placed under the main application package or one of its sub packages, they will be detected by @EnableAutoConfiguration as part of the auto-configuration. If the repository classes are not placed under the main application package or its sub packages, then the relevant repository package(s) should be declared in the main application configuration class with the @EnableJpaRepositories annotation. This enables the JPA repositories contained in the declared package(s).

e.g:-

@EnableJpaRepositories(basePackages = "com.springbootdev.examples.jpa.repositories")

 

 

@EntityScan 

If the entity classes are not placed in the main application package or its sub package(s), the package(s) must be declared in the main configuration class with the @EntityScan annotation. This tells Spring Boot where to scan for the application's entities. Basically, @EnableAutoConfiguration scans the given package(s) to detect the entities.

e.g:-

@EntityScan(basePackages = "com.springbootdev.examples.entity")

 

Let's look at some real code examples. I am not going to explain Spring Data JPA here; it has already been discussed in the following article.

Click here to go to Spring Data JPA example

 

You can download the source code of this article from GitHub.

Click here to download.

 

If you open the project in your IDE, you will notice that repository and entity packages are not placed in the main application package.

[Screenshot: project structure with repository and entity packages outside the main application package]

 

main application package

com.springbootdev.examples

 

JPA repository classes are  in package

com.springbootdev.domain.repository

 

entity classes are in package 

com.springbootdev.domain.entity

 

Therefore the locations of the entity classes and the JPA repositories should be declared and enabled with the @EntityScan and @EnableJpaRepositories annotations respectively. Otherwise the application will fail to start.

Please refer to the following Spring Boot main configuration class.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@SpringBootApplication
@EntityScan(basePackages = {"com.springbootdev.domain.entity"})
@EnableJpaRepositories(basePackages = {"com.springbootdev.domain.repository"})
public class SpringBootDataJpaExampleApplication
{
    public static void main(String[] args)
    {
        SpringApplication.run(SpringBootDataJpaExampleApplication.class, args);
    }
}