Spring Boot REST API with Docker (with docker-compose)

 

In this tutorial, I am going to show you how to develop a Spring Boot REST API application that runs in a Docker container. This is just a brief, quick demo of setting up a Spring Boot application with Docker. In this article, I focus only on the steps for integrating Docker support (building and running the image) into the Spring Boot web application.

If you want to read a detailed article about deploying a Spring Boot application with Docker, please click here to visit my other article on that.

 

Project Structure and Source Code

The full source code of the application can be found at GitHub. Click here to download. The project file structure is as follows.

(Screenshot: project file structure)

 

Here is the implementation of WelcomeController.java.
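The controller listing itself was lost in extraction. As a hedged sketch (the class body and message here are my assumptions; the actual code is in the linked repository), a controller exposing the GET /api/welcome endpoint used later in this article could look like:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical reconstruction of WelcomeController; the repository
// version may differ in details such as the returned message.
@RestController
public class WelcomeController {

    // Handles GET /api/welcome
    @GetMapping("/api/welcome")
    public String welcome() {
        return "welcome to the spring boot REST API";
    }
}
```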

 

Dockerfile

The Dockerfile contains the commands and instructions for building the Docker image from the project. The contents of the Dockerfile for this project are as follows.
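The Dockerfile listing did not survive here. Based on the FROM, WORKDIR, COPY, and ENTRYPOINT lines explained below, it presumably looks like the following (a reconstruction, not the verbatim file):

```dockerfile
FROM java:8
WORKDIR /app
COPY target/spring-boot-docker-example-0.0.1-SNAPSHOT.jar /app/spring-boot-app.jar
ENTRYPOINT ["java", "-jar", "/app/spring-boot-app.jar"]
```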

 

FROM java:8  

java:8 is identified as the base image for this application; the final Docker image for this application is therefore built on top of the java:8 Docker image. (In other words, the java:8 image is required to run this application.)

 

WORKDIR /app

The working directory has been set to /app. This directory will be created in the container, and the specified commands will be run from it.

 

COPY

The COPY command copies a file from the local project environment into the Docker image being built. The file target/spring-boot-docker-example-0.0.1-SNAPSHOT.jar from the local project is copied into the image as /app/spring-boot-app.jar.

 

ENTRYPOINT

The specified command will be executed when a container is started from the built image.

 

docker-compose.yml

docker-compose is a utility for running multi-container Docker applications. It reads the docker-compose.yml file to set up the services required by the application. This file should contain the declarations of those services; if a service needs to run as a separate Docker container, declare it in docker-compose.yml.

The content of the docker-compose.yml file for this project is shown below.
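The file content itself is missing from the extracted article. A docker-compose.yml consistent with the description that follows (format version 3, service spring-boot-rest-api-app, image spring-boot-rest-docker-image, port mapping 8087:8080, volume /data/spring-boot-app) would be:

```yaml
version: '3'

services:
  spring-boot-rest-api-app:
    image: spring-boot-rest-docker-image
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 8087:8080
    volumes:
      - /data/spring-boot-app
```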

 

The document complies with docker-compose file format version 3.

The service name is "spring-boot-rest-api-app" and the image name is "spring-boot-rest-docker-image". The service is deployed from the given image; if the image does not exist, it is built with the Dockerfile available in the current working directory.

Port 8080 of the Docker container is mapped to port 8087 of the Docker host, so the service can be accessed externally on port 8087.

The spring-boot-rest-api-app container will use the /data/spring-boot-app volume for managing data.

 

Building the project with maven

Since the Dockerfile depends on the final built artifact of the project (target/spring-boot-rest-api-docker-0.0.1-SNAPSHOT.jar), we need to build the final deployable artifact before building the Docker image. This can be done with the following command.

mvn clean install

Now the project is built, and we can move on to building the Docker image and running it in a Docker container.

 

Building the docker image

In a terminal, go to the directory where your docker-compose.yml file is located, then run the following command to build the Docker image.

docker-compose build

 

(Screenshot: docker-compose build output)

 

This command can be used to build new images or rebuild existing ones. If there is no Docker image for the given name, the image is built directly; otherwise, the existing image for that name is removed and the image is rebuilt.

 

You can list the Docker images available on the Docker host with the following command and verify whether the image has been built successfully.

docker images

(Screenshot: output of docker images)

You can see that "spring-boot-rest-docker-image" has been built successfully and appears in the list of images.

 

Running application with docker-compose

This can be done with the following command.

docker-compose up

After executing the above command, docker-compose looks for the services declared in the docker-compose.yml file, then deploys and starts each service in a separate Docker container.

 

Now we should be able to access the REST API endpoint available in the WelcomeController.

GET  /api/welcome

(Screenshot: response of GET /api/welcome)

 

Docker: Spring Boot and Spring Data JPA (MySQL) REST API example with Docker (with docker-compose)

 

In the previous article (click here to visit it), we created, ran, and linked the Docker containers manually. In this article, we will explore how to use the docker-compose utility to create, run, and manage multiple Docker containers.

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services; then, with a single command, you create and start all the services from your configuration. In addition, it allows you to define how the image should be built.

For this article, we are going to use and modify the same project that was created in the previous article.

Here I focus only on the docker-compose utility and its related features; I am not going to describe any Spring or Spring Data JPA features.

First, we will clone the source code of the previous article and prepare our development environment. This can be done with the following command.

git clone git@github.com:chathurangat/spring-boot-data-jpa-mysql-docker-no-composer.git

 

Import the project into your preferred IDE; the source code should appear as follows.

(Screenshot: imported project structure)

 

Let's create the docker-compose.yml file in the root of the project.

 

docker-compose.yml

As noted in the first part of this tutorial, docker-compose reads the docker-compose.yml file to set up the services required by the application; every service that should run as a separate Docker container must be declared there.

If you look back at the previous project (click here to visit that article), you will notice that there were two services running in two Docker containers. Those services are:

  • mysql service
  • application service (spring boot application)

 

In this article, we are going to explore how to run and manage those two services with Docker Compose. Please refer to the file below to see how the two services have been declared.

 

Let's look at the file structure in detail.

 

version

You can see that the Compose file format version is 3. The syntax accepted in the docker-compose.yml document changes based on the document version; everything declared in this document is compatible with version 3.

 

Setting up mysql container (service)

As you can see, we have declared two services here; each service will run in a separate Docker container. Let's look at each service in detail.

 mysql-docker-container:
   image: mysql:latest
   environment:
     - MYSQL_ROOT_PASSWORD=root123
     - MYSQL_DATABASE=spring_app_db
     - MYSQL_USER=app_user
     - MYSQL_PASSWORD=test123
   volumes:
     - /data/mysql

 

We have named the MySQL service mysql-docker-container. (There is no rule here; you can choose any name for the service.)

The mysql:latest image will be used to provide the service; in other words, this image (mysql:latest) will be deployed in the targeted container.

We have declared four environment variables, which initialize the database, create the database user, and set the root password.

The volume has been defined as /data/mysql. Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.

 

Setting up the application container

 spring-boot-jpa-app:
   image: spring-boot-jpa-image
   build:
     context: ./
     dockerfile: Dockerfile
   depends_on:
     - mysql-docker-container
   ports:
     - 8087:8080
   volumes:
     - /data/spring-boot-app

 

The service has been named "spring-boot-jpa-app".

The image name is "spring-boot-jpa-image". If the image does not exist, it will be built with the Dockerfile available in the current working directory. In the previous article, we built the Docker image with a manual command; with Docker Compose, we can declare the image build as above.

This application service depends on the mysql-docker-container.

Port 8080 of the Docker container is mapped to port 8087 of the Docker host, so the service can be accessed externally on port 8087.

The spring-boot-jpa-app container will use the /data/spring-boot-app volume for managing data.

 

Before running the docker-compose

Now our docker-compose.yml file is ready, and it is time to bring up the containers with docker-compose. Before moving forward with the docker-compose utility, let's look at our Dockerfile for this project. The Dockerfile contains the instructions for building the Docker image from the source code.

 

FROM java:8
LABEL maintainer="chathuranga.t@gmail.com"
EXPOSE 8080
ADD target/spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar"]

 

Look at the ADD line above, and you will notice that the Docker image is built from an already-built jar file. The Dockerfile does not contain any Maven (or other) command to build the jar from the project source code; it builds the Docker image with the jar already available at target/spring-boot-data-jpa-example-0.0.1-SNAPSHOT.jar. Therefore, before building the images or running the containers with docker-compose, we need to build the project artifact (jar, war, or any other artifact file).

 

Build the project with maven

The project can be built with following maven command.

mvn clean install -DskipTests

Once the project is built, we can run the docker-compose utility to build the targeted images and run the declared services in Docker containers.

 

"docker-compose" to build the images and run the services in Docker containers

On the command line, go to the project directory where your docker-compose.yml file is located. Then run the following docker-compose command to run the services declared in docker-compose.yml in Docker containers.

docker-compose up

 

You can see that we have declared a build configuration for a specific service (spring-boot-jpa-app) in the docker-compose.yml. Therefore, docker-compose will build the declared image before bringing the services up in their containers. You can run the following command to check whether the image was built successfully.

docker images

 

This will display a list of available images as follows.

(Screenshot: output of docker images)

If you observe the first line of the output, you can see that the image (spring-boot-jpa-image) has been created.

 

 

It will take a few seconds to build the image and bring up the containers for the declared services. Once the above process is complete, you can run the following command to check whether the containers are up and running.

docker ps

 

It will display the list of running Docker containers as follows.

(Screenshot: output of docker ps)

 

 

Testing the application

Now everything is up and running. You can follow the testing instructions given in the previous article (click here to go to it) to test the application.

 

Rebuilding the docker image

Most of the time you will need to rebuild the project (due to source code and application logic changes), and in such cases you need to rebuild the Docker image too. If you do not rebuild the image, the image repository may still contain an older image version, and docker-compose will use that older image to run the service container. To avoid this issue, rebuild the Docker image with the following command.

docker-compose build

 

Is it enough to run "docker-compose build" once the source code is changed?

No. docker-compose build builds the Docker image from the project's existing final build file (jar, war, etc.). So if you have modified the project's source code, you first need to rebuild the project with Maven.

Once the project is built, you can run the docker-compose build command to build the Docker image.
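Putting the two steps together, the full rebuild cycle after a source code change can be sketched as:

```shell
# 1. rebuild the project artifact (jar) with maven
mvn clean install

# 2. rebuild the docker image from the new jar
docker-compose build

# 3. recreate and start the service containers
docker-compose up
```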

 

The full source code related to this article can be found at GitHub. Click here to download the source code.

 

Babel and Webpack for compiling and bundling JavaScript (ES6/ES7/ES8)

In this article I am going to explain how to use the Babel compiler to compile JavaScript from ES6 (or a higher ES version) down to an ES5-compliant version. In addition, webpack will be used to run the Babel compiler and bundle multiple JavaScript dependencies into a single file.

The majority of internet browsers support running ES5-compliant JavaScript. Therefore, if you develop your application with ES6 or a later ECMAScript version, you may need to compile (transform) it into an ES5-compliant version so that it runs in most of the available browsers.

 

What is Babel?

Babel is a JavaScript transpiler (also known as a JavaScript compiler). Babel enables us to write modern JavaScript that will be "transpiled" into widely supported ES5 JavaScript. This process is called "transpiling".

Want to know more about Babel? Click here 

 

Start the project with NPM 

NPM stands for Node Package Manager. As the name implies, it is the package manager for JavaScript, and it is installed when you install Node.js in your development environment. NPM hosts a large number of reusable JavaScript packages and modules.

To check and verify the NPM installation in your development environment, run the following command in the terminal. It will display the NPM version if NPM is already installed.

npm -v

 

If you do not have npm installed on your machine, refer to the official website (www.npmjs.com) for installation instructions.

Once the installation is verified, we can create the JavaScript project with NPM using the following command. This will create a standard package.json file for the project.

npm init

 

(Screenshot: npm init prompts)

What is package.json file?

This is the file that contains all the important information about your project (package): the project name, version, author, license, main script, list of dependencies, how to build the project, how to start it, and so on.

Please refer to the official documentation for more information.

The generated package.json file for this project is shown below.

 

Here you can see that the main script file is main.js, so we need to create it. In addition, I have created two other JavaScript files with some ES6-based code to demonstrate the beauty of ES6 and the webpack bundler.

 

calculator.js

Here is the beauty of ES6: we can have a class with both static and non-static methods.
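The calculator.js listing was lost in extraction. Here is a sketch in that spirit (the class and method names are my assumptions, and the `export default` statement used in the real file is omitted so the snippet runs standalone):

```javascript
// calculator.js (sketch) - an ES6 class with both static and
// non-static methods. The real file would add `export default`.
class Calculator {

    // non-static method: requires an instance
    add(a, b) {
        return a + b;
    }

    // static method: called on the class itself
    static multiply(a, b) {
        return a * b;
    }
}
```

new Calculator().add(2, 3) uses the instance method, while Calculator.multiply(2, 3) is called directly on the class.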

 

welcome.js

This returns a welcome message by concatenating the user-given name.
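welcome.js is likewise missing; a sketch of such a function (the exact message text is my assumption) using an ES6 template literal:

```javascript
// welcome.js (sketch) - builds the welcome message for a given name
function getWelcomeMessage(name) {
    return `Hello ${name}, welcome to the application!`;
}
```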

 

main.js

 

The window object represents the browser window. If any property (variable, function, or class) is assigned to window, it can be accessed in the browser. That is why we have assigned our properties (classes and methods) to the window object here.
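main.js is also missing from the extracted article. Here is a sketch that attaches properties to window as described (with a globalThis fallback so the snippet also runs outside a browser; the names are my assumptions):

```javascript
// main.js (sketch) - expose a class and a function globally so they
// can be reached from the browser console and inline scripts.
class Calculator {
    static multiply(a, b) {
        return a * b;
    }
}

function getWelcomeMessage(name) {
    return `Hello ${name}, welcome to the application!`;
}

// In a browser this is the window object; globalThis keeps the
// snippet runnable in Node as well.
const globalScope = typeof window !== "undefined" ? window : globalThis;
globalScope.Calculator = Calculator;
globalScope.getWelcomeMessage = getWelcomeMessage;
```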

 

index.html 

You can see that we have imported the main.js file here. That file contains ES6-based JavaScript that has not been compiled down to ES5.
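The index.html listing did not survive either; a minimal page importing main.js (the paths and title are assumptions) would be:

```html
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Babel and Webpack demo</title>
</head>
<body>
    <!-- ES6 source, not yet compiled down to ES5 -->
    <script src="main.js"></script>
</body>
</html>
```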

When loading the page in a browser, you may get the following error in the browser console.

main.js  Uncaught SyntaxError: Unexpected token export

 

This means that your browser does not support ES6 and cannot understand syntax written in the ES6 style. Therefore, we need to transform the ES6-style code into browser-understandable ES5-style code. This is where Babel plays its role of transpiling the source code.

 

Setting up Babel 

We need to install the following dependencies to get the Babel compiler working for the project.

  • babel-core: includes the Babel core modules
  • babel-loader: the loader for running ES6+ and JSX through webpack; this will be configured in webpack.config.js
  • babel-preset-env (or any other preferred preset): the preset will also be configured in webpack.config.js, or can be defined in a .babelrc file
  • babel-polyfill: a polyfill is a piece of code that replicates the full ES6+ environment in the targeted runtime environment, so that some transpiled ES6+ methods and objects can be executed in an ES5 environment without runtime errors. If you want to learn more about babel-polyfill, please click here to go to my article on that.

 

Here is a good article to learn more about babel.

https://kleopetrov.me/2016/03/18/everything-about-babel/

 

What is the purpose of a Babel preset?

Presets are plugins that enable the features and support for a particular language version. For example, if the JavaScript code is based on ES6, we need to tell the Babel transpiler that the source code is ES2015-based, so that it enables the features required to transform the ES6 code into ES5. If the source code is based on React, we need to enable the React preset.
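For illustration, a minimal .babelrc enabling the env preset would look like this (in this project the preset ends up configured in webpack.config.js instead, so this file is not required):

```json
{
  "presets": ["env"]
}
```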

For more information about “preset”, please visit the official website.

https://www.npmjs.com/package/babel-preset-env

https://babeljs.io/docs/plugins/#presets 

 

Why do we enable "babel-preset-env" instead of "babel-preset-es2015"?

This is a good question. babel-preset-env automatically detects the ECMAScript version of your source code and enables the features required to compile it down to ES5.

babel-preset-env is the latest Babel preset; it compiles ES2015+ (ES6, ES7, ES8) down to ES5 by automatically determining the Babel plugins needed for your targeted browsers or runtime environments.

Without any configuration options, babel-preset-env behaves exactly the same as babel-preset-latest (or babel-preset-es2015, babel-preset-es2016, and babel-preset-es2017 together).

In our case, we are trying to compile ES6 code down to ES5, so babel-preset-env can be used here.

We will install all the required Babel dependencies with the following command, run from your project directory.

npm install --save-dev babel-core babel-loader babel-preset-env

 

--save:

This installs the given dependencies locally in the project; that is, the dependencies are installed in the node_modules directory inside your project directory.

If you want to install them globally, use the -g flag. The dependencies will then be installed in the /usr/local/lib/node_modules directory as global dependencies.

 

--save-dev:

This installs the dependencies locally as dev dependencies. If you open package.json, you will notice that the installed dependencies are listed under devDependencies. Dev dependencies are used only during development and should be omitted when the package is released for production; otherwise, the final distributed package will contain unnecessary dependencies that are not required at runtime (in the production environment). dependencies are included in the final distribution, while devDependencies are left out.

Please refer the following snapshot of package.json.

"devDependencies": {
  "babel-loader": "^7.1.2",
  "babel-core": "^6.26.0",
  "babel-preset-env": "^1.6.1"
}

 

 

Setting up Webpack

Here I assume that you have a basic knowledge and understanding of webpack. If you don't, please go and refresh your knowledge of webpack.

I have found that below articles are useful for beginners who want to learn webpack.

https://tutorialzine.com/2017/04/learn-webpack-in-15-minutes

https://blog.andrewray.me/webpack-when-to-use-and-why/

 

So we will move forward with setting up webpack. First we need to install the webpack dependencies for the project; there are two:

  • webpack
  • webpack-dev-server

These dependencies can be installed with following command.

npm install webpack webpack-dev-server --save-dev

 

After installing the webpack dependencies, the devDependencies section of package.json should look like this:

"devDependencies": {
  "babel-core": "^6.26.0",
  "babel-loader": "^7.1.2",
  "babel-preset-env": "^1.6.1",
  "webpack": "^3.10.0",
  "webpack-dev-server": "^2.11.1"
}

 

Once the dependencies are installed, we need to create a webpack.config.js file and add the necessary webpack configuration.

 

Adding webpack.config.js

Add the webpack.config.js file in the root of your project. A sample webpack.config.js configuration is given below.

With webpack, running your JavaScript and JSX through Babel is as simple as adding babel-loader to your webpack.config.js.

 

webpack.config.js

 

Here are the important things to note from webpack.config.js:

  • The entry point of the app has been named app.js.
  • The output file will be app.bundle.js, stored in the build directory.
  • babel-loader has been added as the loader for .js and .jsx files (matched by the regular expression).
  • The node_modules directory is excluded by babel-loader when webpack runs.
  • babel-preset-env (presets: [env]) has been added in the webpack configuration, so a .babelrc file is not required for this setup.

 

Hint:
You can omit the preset configuration from webpack.config.js and
replace it with a .babelrc file.

 

Adding the build command to package.json

In package.json we need to specify the build command as "build": "webpack -p" under "scripts".

"scripts": {
  "build": "webpack -p",
  "test": "echo \"Error: no test specified\" && exit 1"
},

 

webpack -p: runs webpack in production mode, which minifies the JavaScript code in the output file. If you do not use the -p option, the generated file will not be minified.

This is the final, modified package.json file of our application.

 

All the related project files and configuration files have been created. Finally, our project structure should look like this:

(Screenshot: project structure before the build)

 

Building the project with webpack

Now it is time to build our project with webpack. We need to make sure that our ES6-syntax JavaScript files are transformed into ES5 with Babel (via babel-loader) and that the set of JavaScript dependencies is bundled into one single file (the purpose of webpack). Run the following command on the command line.

npm run build

 

When you run the build command, it executes the command given in package.json under the build script.

"scripts": {
  "build": "webpack -p"
}

 

Since we have used webpack -p, webpack runs in production mode; the output file will contain a minified version of the JavaScript source.

 

You can refer to the following screenshot for further understanding.

(Screenshot: npm run build output)

The related JavaScript source files are compiled and bundled into the build/app.bundle.js file. You can use that build file in your HTML to import the JavaScript methods and functionality.

 

This is the final project structure (after executing the webpack build), given here for further understanding.

(Screenshot: project structure after the build)

 

Test app.bundle.js with the index.html file

 

Now we can add app.bundle.js to the index.html file as a script and run it in a browser.

 

Running it in a browser will give you the following output. That means our ES6+ syntax has been compiled down to ES5 and works as expected.

(Screenshot: browser output and Chrome console)

 

You can see that the console.log outputs are printed as expected and the result is rendered in the browser. You can play with the Chrome console to check whether the classes and methods have been assigned/attached to window. (If you observe the Chrome console output in the screenshot, you can see that I have done so.)

 

How to get the Source Code and Run it?

Now we have come to the end of the article.  The source code of this article can be found at GitHub.

click here to download the source code. 

 

Step-by-step guide for building and running the project

Get the source files from the GitHub.

git clone git@github.com:chathurangat/webpack-barbel-for-es6.git

 

Go inside the project directory.

cd webpack-barbel-for-es6

 

Install the dependencies for the first time (locally).

npm install --save

 

Run the webpack build task.

npm run build

 

If you have any further inquiry related to this article, please feel free to contact me.

Thanks

Chathuranga Tennakoon

 

Spring Framework: Profiling with @Profile

 

In your software development life, you might have experience with different application environments such as DEVELOPMENT, STAGING, and PRODUCTION. Applications are normally deployed in these environments. Most of the time these environments are set up on separate servers, known as:

  • Development Server
  • Staging Server
  • Production Server

Each of these server environments has its own configuration and connection details, which might differ from one server to another.

e.g.:
MySQL or other database connection details
RabbitMQ server and connection details, etc.

 

Therefore, we should maintain separate configuration/properties files for each server environment and pick up the right configuration file based on the environment.

Traditionally, this is achieved by manually selecting the related configuration file when building and deploying the application. This requires a few manual steps with some human involvement, so there is a chance of deployment-related issues arising. In addition, the traditional approach has some limitations.

 

What should we do if there is a requirement to programmatically register a bean based on the environment?

e.g., the staging environment should have a separate bean implementation, while the development and production environments have their own bean instances with different implementations.

The Spring Framework has come up with a solution to the above problems and made our lives easier with an annotation called @Profile.

 

@Profile

In Spring, the above deployment environments (development, staging, and production) are treated as separate profiles. The @Profile annotation is used to separate the configuration for each profile. When running the application, we activate a selected profile, and based on the activated profile the relevant configurations are loaded.

The purpose of @Profile is to segregate the creation and registration of beans based on profiles. Therefore, @Profile can be used with any annotation whose purpose is creating or registering a bean in the Spring IoC container, including the following:

  • any stereotype annotation (mainly used with @Component and @Service)
  • the @Configuration and @Bean annotations
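As an illustrative sketch (the interface, class, and profile names here are my own, not necessarily those in the project), profile-specific service beans can be declared like this:

```java
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

interface EnvironmentService {
    String environmentName();
}

// created and registered only when the "dev" profile is active
@Service
@Profile("dev")
class DevEnvironmentService implements EnvironmentService {
    public String environmentName() {
        return "development";
    }
}

// created and registered only when the "prod" profile is active
@Service
@Profile("prod")
class ProdEnvironmentService implements EnvironmentService {
    public String environmentName() {
        return "production";
    }
}
```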

 

After reading the above note, the first question you might ask yourself is: "Why is @Profile used mainly with @Component and @Service?" Let's figure it out before moving forward.

 

 

Why is the @Profile annotation used mainly with the @Component and @Service annotations?

@Component designates the class as a Spring-managed component, and @Service designates the class as a Spring-managed service. It makes sense for the application to create different services and managed components based on the activated profile; this is logical, and it is the expected behavior of profiling.

Do you think that creating separate controllers and repositories for different profiles makes any sense? Is it logically acceptable? One controller for the production environment and different ones for staging and development? Isn't that crazy?

On the other hand, do you think we need separate repositories per profile, one each for development, staging, and production? Wait... I agree with you that we need different database configurations and connection details for each of these environments. Does that mean we need separate repositories? No, right? Separate database connection details have no relation to the repository.

Now I think you can understand why @Profile is not used with @Controller and @Repository.

 

 

What will happen if it is used with other stereotype annotations such as @Controller and @Repository?

It will work fine. I have just explained the logical reasons for not using @Profile with @Controller and @Repository.

If you can logically justify that using @Profile with the @Controller and @Repository annotations does the right job for you, then you are free to go for it. But think twice before proceeding.

OK. Now you have an idea of how @Profile helps create and register the relevant beans based on the activated profile. But I haven't yet explained how the relevant application.properties file is picked up based on the activated profile. Let's look at that now.

 

Picking up the correct application.properties file with spring boot

According to our discussion, the application can be deployed in several server environments. Therefore, the application should have a different application.properties file for each deployment profile (or server environment). When a profile is activated, the corresponding application.properties file should be picked up.

How are the properties files named per profile, and how does Spring Boot pick up the correct application.properties file?

We can have a property file specific to a profile with the naming convention application-{profile}.properties. In this way we can have separate property files for different environments. If we have activated a profile, the corresponding property file will be used by the Spring Boot application. We can also have a property file for the default profile.

Suppose we have the profiles dev for the development environment, prod for production, and staging for staging. Then the property files will be listed as below.

application-prod.properties
application-dev.properties 
application-staging.properties
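The convention can be captured in a small, purely illustrative helper (this is my own sketch of the naming rule, not Spring Boot's actual resolution code):

```java
public class ProfilePropertiesName {

    // Maps a profile name to the properties file Spring Boot looks for,
    // following the application-{profile}.properties convention.
    public static String propertiesFileFor(String profile) {
        if (profile == null || profile.isEmpty()) {
            return "application.properties"; // default profile
        }
        return "application-" + profile + ".properties";
    }

    public static void main(String[] args) {
        System.out.println(propertiesFileFor("prod")); // application-prod.properties
        System.out.println(propertiesFileFor("dev"));  // application-dev.properties
    }
}
```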

 

OK, let's do a coding example with Spring @Profile. We will try to cover most of the concepts discussed here.

 

What we are going to build

We will build a simple REST API application that persists some data to a MySQL database with Spring Data JPA. Here I am focused only on demonstrating @Profile; if you need to learn more about Spring Data JPA, please refer to my article on that.

Click here to go to the Spring Data JPA article.

This application has three different databases representing the three deployment profiles: dev, staging, and prod.

app_development_db  - database for the dev profile/environment
app_staging_db - database for the staging profile/environment
app_production_db  - database for the prod profile/environment

(If you want to run this application and see the output, make sure that you have created the above three databases in the MySQL server.)

The source code of this example can be found at GitHub.

Click here to download the source code. 

 

If you open up the project in your IDE, you can see the following file structure.

Screen Shot 2018-01-03 at 12.58.22 AM.png

 

You can notice that we have created separate application.properties files for each profile.

So let's dig into some of the important source files.

 

ConfigurationManager is responsible for creating and registering the corresponding configuration bean based on the activated profile.

 

EnvironmentService has a different implementation for each profile. Based on the activated profile, the corresponding service bean will be created and registered.
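The real ConfigurationManager and EnvironmentService sources are in the GitHub repository. As a plain-Java illustration of the idea (the class and method names here are hypothetical, and in the actual application Spring's @Profile annotation performs this selection automatically), per-profile implementations can be pictured like this:

```java
// Hypothetical sketch of per-profile service selection.
// In the real application, Spring's @Profile annotation does this wiring.
interface EnvironmentService {
    String environmentName();
}

class DevEnvironmentService implements EnvironmentService {
    public String environmentName() { return "development environment"; }
}

class ProdEnvironmentService implements EnvironmentService {
    public String environmentName() { return "production environment"; }
}

class StagingEnvironmentService implements EnvironmentService {
    public String environmentName() { return "staging environment"; }
}

public class ProfileSelection {

    // Picks the implementation matching the active profile, as
    // @Profile("dev") / @Profile("prod") / @Profile("staging") would.
    static EnvironmentService serviceFor(String activeProfile) {
        switch (activeProfile) {
            case "dev":     return new DevEnvironmentService();
            case "prod":    return new ProdEnvironmentService();
            case "staging": return new StagingEnvironmentService();
            default:
                throw new IllegalArgumentException("Unknown profile: " + activeProfile);
        }
    }

    public static void main(String[] args) {
        System.out.println(serviceFor("dev").environmentName());
    }
}
```

With Spring, each implementation would simply carry the matching @Profile annotation and the switch disappears.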

 

Finally we will look at our ApplicationLogController.

 

ApplicationLogController exposes the following REST endpoint.

POST  /logs

This persists ApplicationLog entries with the aid of ApplicationLogRepository, and then returns the persisted log entry. This can be seen in the body of the HTTP response.

 

AppConfiguration is auto-wired with the configuration bean registered for the activated profile.

EnvironmentService is likewise auto-wired with the service bean created for the activated profile.

Ultimately, the database used for persistence is determined by the properties file selected for the activated profile.

Since everything depends on the activated profile, we need to run this application with one of these three profiles activated. Then we can see the result and understand how it works.

 

Running the Application by activating profiles.

A profile can be activated with the following JVM argument.

-Dspring.profiles.active=<<profile-name>>
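Besides the -D JVM argument, Spring Boot also lets you set the active profile in the default application.properties file (or via the SPRING_PROFILES_ACTIVE environment variable); the command-line argument takes precedence when both are present. For example:

```properties
# In the default application.properties
spring.profiles.active=dev
```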

 

Therefore the practical uses of the command are as follows.

 

Running the Spring Boot application with the “prod” profile enabled

mvn spring-boot:run -Dspring.profiles.active=prod

Running the application as a jar file with the “dev” profile enabled

java -jar -Dspring.profiles.active=dev target/spring-profile-example-0.0.1-SNAPSHOT.jar

 

Let's run the application and examine how profiles work. In order to see what happens, please check all three databases after each REST API call.

 

Run the application by activating the “prod” profile

java -jar -Dspring.profiles.active=prod target/spring-profile-example-0.0.1-SNAPSHOT.jar

Making the REST API call:

POST  http://localhost:8080/logs

 

The HTTP response will be as follows.

Screen Shot 2018-01-03 at 10.31.01 PM.png

 

 

Run the application by activating the “dev” profile

java -jar -Dspring.profiles.active=dev target/spring-profile-example-0.0.1-SNAPSHOT.jar

Making the REST API call:

POST  http://localhost:8080/logs

 

The HTTP response will be as follows.

Screen Shot 2018-01-03 at 10.34.12 PM.png

 

 

Run the application by activating the “staging” profile

java -jar -Dspring.profiles.active=staging target/spring-profile-example-0.0.1-SNAPSHOT.jar

Making the REST API call:

POST  http://localhost:8080/logs

 

The HTTP response will be as follows.

Screen Shot 2018-01-03 at 10.35.51 PM.png

 

As I have already mentioned, please check all three databases after each REST API call. You will notice that only the corresponding application.properties file is picked up and the connection is made to the matching database.

 

SonarQube: Exclude classes from Code Coverage Analysis

In code coverage analysis we care only about the classes that should be covered by unit and integration tests: controllers, repositories, services and domain-specific classes. Some classes are not covered by either unit or integration tests.  In order to get an accurate code coverage figure, those unrelated classes should be excluded from the analysis.

E.g:- configuration-related classes (the SpringBootApplication configuration class, Spring Security configuration classes, etc.) should be excluded.

This can be done by adding the classes to an exclusion list under the “properties” section of pom.xml.

<properties>
    <sonar.exclusions>
        **/SpringBootDockerExampleApplication.java,
        **/config/*.java
    </sonar.exclusions>
</properties>

 

You can add multiple exclusions, each separated by a comma. According to the above configuration, SpringBootDockerExampleApplication and any class under the config package will be excluded/ignored when performing the analysis. (Note that sonar.exclusions removes the files from all SonarQube analysis; if you want to exclude them from coverage calculation only, the sonar.coverage.exclusions property can be used instead.)

 

NodeJs development with Docker (Webpack + ES6 + Babel)

 

In this article we will look at how to use Docker for NodeJs application development and deployment. I will show you how to use the docker-compose utility to bundle a NodeJs application as a docker image and run it in a docker container.  For demonstration purposes, I am going to reuse a NodeJs application that was developed in a previous article, and take you through the step-by-step process of integrating Docker features and functionality into it.

 

We will be using the application developed in the following article.

Click here to go the previous article

If you haven’t read the previous article, it is highly recommended to read it before moving forward with this article.

 

Let's brush up on our knowledge of the technologies used in the previous article.

  • Express.js :- The application has been developed using the Express.js framework.
  • ES6+ :- The source code is written in ES6+ (ES6 and higher) JavaScript.
  • Babel and babel-loader :- used for transpiling the ES6+ source code into ES5-style code. babel-loader is used with webpack for the compiling/transpiling step.
  • webpack :- used as the static resource bundling tool (here specifically for JavaScript) and for executing the Babel transpiler via babel-loader.

 

Get the source code of the previous article's project from GitHub with the following command.

git clone git@github.com:chathurangat/nodejs-webpack-es6.git

 

Once the source code is cloned, add the following two empty files to the root of the project.

  • Dockerfile :- the file must be named “Dockerfile” with no extension
  • docker-compose.yml

Don’t worry about the purpose of these two files right now; we will discuss each one when we add its contents.

 

After adding the two files, open the project in your IDE and the project structure should look like this.

Screen Shot 2018-02-25 at 12.03.13 AM.png

 

NodeJs Application with Express.Js , Babel and Webpack

Since the previous article already demonstrates how to develop a NodeJs application with Express.js and ES6+ JavaScript syntax, and how to use Babel and Webpack for transpiling and bundling, I am not going to repeat that content here. If you need any clarification, please refer to the previous article.  Here I will move forward with adding Docker to the previously developed application.

 

Moving forward with Docker

Now it is time to add the contents of the Dockerfile and the docker-compose.yml file. Let's look at the purpose of each file in detail.

Docker is all about creating images from source code and running them in standalone environments called containers.  If you are new to Docker and need the basics, click here to visit my article about Docker.

 

Dockerfile

Dockerfile contains the instructions and commands for building the docker image from the project source code.  Add the following content to the empty Dockerfile you created.

FROM node:alpine
WORKDIR /app
COPY . /app
RUN npm install
ENTRYPOINT ["npm","run","execute"]

 

FROM : defines the base image for the image we are building (our image is built on top of it). All we are saying is that, for this image to run, the node:alpine image is required.

 

WORKDIR : creates (if necessary) and sets the working directory used while building the image; here it is “/app”.  If you open a shell in the container, you can verify that the “/app” directory was created with all the copied files.

WORKDIR sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.

 

COPY : copies the given files from the local development environment into the docker image. Here the local current working directory (all files in it) is copied to the “/app” directory.
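One caveat with COPY . /app (this file is not part of the original project, just a common addition): if a node_modules directory exists locally, it will be copied into the image too. A .dockerignore file in the project root keeps it out of the build context, so the RUN npm install step produces a clean install inside the image:

```
node_modules
npm-debug.log
```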

 

RUN : executes shell commands while the docker image is being built. Here it installs the application's npm dependencies into the image.

 

ENTRYPOINT : specifies the command that runs when the container is created and started. Normally this is the command that launches the application inside the container. The command should be given in JSON array (exec) form.

According to the above Dockerfile, the command will be:

 npm run execute

 

Here “execute” is a custom script; if you look at the scripts section of package.json, you will find the related command.

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "execute": "webpack && node build/app.bundle.js"
},

 

If you want to learn more about Dockerfile, please click here to visit official documentation about it. 

 

 

What is docker-compose? 

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file (docker-compose.yml) to configure your application’s services. Then, with a single command, you can create and start all the services from your configuration.

Lets add the contents for the created docker-compose.yml file.

 

docker-compose.yml 

version: '3'

services:
  nodejs-webpack-es6-app:
    image: nodejs-webpack-es6-image
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 4000:2000

 

According to the above file, the docker-compose file format version is 3. Therefore the file must use syntax that complies with version 3.

We declare the list of services under services. Here I have declared only one service, which is built from the source code of this project.  Each declared service will be deployed and run in a separate docker container.

The name of the service is “nodejs-webpack-es6-app“. The service should be deployed with the docker image “nodejs-webpack-es6-image“. If the docker image is not available, it is built using the Dockerfile in the current working directory (the build context ./).

The service runs on container port 2000 and is exposed through docker host port 4000. Therefore the service can be accessed externally at:

<ip address of the docker host>:4000

 

 

docker-compose for building and running the application

 

In a command shell, go to the directory where the docker-compose.yml file is located and run the command below to start the application.

 

docker-compose up

 

After running the above command, you can access the application as follows.

 

Testing the Application

Now let's access each HTTP route with Postman.

 

GET   http://localhost:4000/

Screen Shot 2018-02-21 at 7.30.27 PM.png

 

GET   http://localhost:4000/products/12

Screen Shot 2018-02-21 at 7.31.26 PM.png

 

POST    http://localhost:4000/products

Screen Shot 2018-02-21 at 7.32.30 PM.png

 

Rebuild the image on source file changes

If you have modified the source code of the application, you need to rebuild the image. This can be done with the following single command (alternatively, docker-compose up --build rebuilds the image and starts the containers in one step).

 

docker-compose build

 

The source code of this article can be found in GitHub. Click here to get the source code.

 

 

 

What is babel-polyfill and why is it important?

 

babel-polyfill will emulate a full ES6 environment. For example, without the polyfill, the following code:

function allAdd() {
    return Array.from(arguments).map((a) => a + 2);
}

will be transpiled to:

function allAdd() {
    return Array.from(arguments).map(function (a) {
        return a + 2;
    });
}

This code will not work everywhere, because Array.from is not supported by every browser:

Uncaught TypeError: Array.from is not a function

To solve this problem we need to use a polyfill. A polyfill is a piece of code that replicates a native API that does not exist in the current runtime.

To include the Babel polyfill, we need to install it:

npm install babel-polyfill --save-dev

 

Use the babel-polyfill in your source code

To include the polyfill, you need to require it at the top of your application's entry point. In order to use babel-polyfill with your application, you can use one of the following three methods.

 

method 01

With require

require("babel-polyfill");

 

method 02

If you are using ES6’s import syntax in your application’s entry point, you should instead import the polyfill at the top of the entry point to ensure the polyfills are loaded first.

With ES6 import.

import "babel-polyfill";

 

method 03

With webpack.config.js, add babel-polyfill to your entry array:

module.exports = {
  entry: ["babel-polyfill", "./app/js"]
};