Spring Boot + SLF4J : Enhance the Application logging with SLF4J Mapped Diagnostic Context (MDC)

 

 

What is SLF4J?

I believe that SLF4J is not a new concept for most Java developers. Here we are going to look at the MDC feature of the logging framework. If you need to brush up your knowledge of SLF4J, now is the time.

The Simple Logging Facade for Java (SLF4J) serves as a simple facade or abstraction for various logging frameworks, such as java.util.logging, logback and log4j.

SLF4J allows the end-user to plug in the desired logging framework at deployment time.

 

The problem with indistinguishable log entries

It’s a simple yet useful concept. Before I explain what MDC is, let’s assume that we are going to develop a simple web application with one servlet, MyServlet, that serves requests from multiple clients. This servlet uses the log4j framework for logging, and a file appender has been defined for it, so all log messages are written to a text file.

With the above configuration, all the log messages from MyServlet go into a single log file. When this servlet serves more than one client at the same time, the log statements are interleaved and there is no way to tell which log statement belongs to which client’s processing. This makes it difficult to trace and debug any processing error that occurs in the MyServlet life cycle.

 

How to differentiate log statements with respect to each client?

To avoid mixing up the log statements, we can add a client identifier (anything that uniquely identifies the client) to the log statements. Therefore we need to make sure that the client identifier is added to each and every log statement of the application.

Assume that the application contains thousands of log entries (or even more) across hundreds of source files. Adding the identifier manually would be tedious, repetitive work that consumes considerable time and resources.

Don’t worry! This can be done within a few seconds with the MDC (Mapped Diagnostic Context) support of the logging framework that you are using.

 

What is Mapped Diagnostic Context (MDC)?

Mapped Diagnostic Context (MDC) is used to enhance application logging by adding meaningful information to log entries.

The Mapped Diagnostic Context is essentially a map maintained by the logging framework, where the application code provides key-value pairs that the logging framework can then insert into log messages. MDC data can also be highly helpful for filtering messages or triggering certain actions.

SLF4J supports MDC, or mapped diagnostic context. If the underlying logging framework offers MDC functionality, then SLF4J will delegate to the underlying framework’s MDC.

 

Let’s look at some code examples with Spring Boot. Please follow the instructions given below.

 

 

Create the Spring Boot Project.

Here I have added the web dependency.

 


 

Then create a simple controller as follows.
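A minimal sketch of such a controller, assuming SLF4J for logging (the package is omitted, the return value is illustrative, and the log message matches the entry shown later):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class WelcomeController {

    private static final Logger logger = LoggerFactory.getLogger(WelcomeController.class);

    @GetMapping("/welcome")
    public String welcomeMessage() {
        // this entry will later be enriched with MDC data via the logging pattern
        logger.info("inside the welcomeMessage");
        return "welcome";
    }
}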

 

Now run the Spring Boot application and access the /welcome endpoint.

You can use the following command to run the application.

mvn spring-boot:run 

 

Then access the /welcome endpoint (by default at http://localhost:8080/welcome).


 

If you look at the application log, you will notice the log entry below.


2018-04-17 22:24:34.171  INFO 42251 --- [nio-8080-exec-2] c.s.e.s.m.s.c.WelcomeController   : inside the welcomeMessage

 

If the /welcome endpoint is accessed by multiple clients, you will see a set of similar log entries in the log file. Since all the entries are identical and carry no distinguishing information, it is impossible to say which log entry belongs to which client request; they are all mixed up.

As we discussed above, the solution is to enhance the log entries by adding meaningful information with MDC.

 

 

Enhance the logging with MDC (Mapped Diagnostic Context)

MDC is a key-value store (similar to a Java Map) whose stored values are used to enhance the log entries (that is, to add meaningful information to them). The logging framework generates the log entries using the data stored in the MDC.

Therefore we need to make sure that the MDC (Mapped Diagnostic Context) is populated with the related entries before the log entries are generated.

If you look at the source code of this article, you may notice that this application is a RESTful web API. All application-related log entries are produced inside each REST endpoint. Therefore we need to make sure that the MDC is properly initialized and populated with the related data before a request reaches the targeted endpoint. In order to achieve this, we can use servlet filters.

A servlet filter is an excellent centralized place to configure the MDC data store. It guarantees that the MDC is properly populated with relevant data before any request reaches its designated endpoint.
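A minimal sketch of such a filter, assuming it is registered as a Spring @Component (the class name is illustrative):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Component
public class MDCLoggingFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // no initialization required
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            // populate the MDC before the request reaches the targeted endpoint
            MDC.put("userId", "www.SpringBootDev.com");
            chain.doFilter(request, response);
        } finally {
            // clear the MDC so the entries do not leak into the next request served by this thread
            MDC.clear();
        }
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}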

 

 

This is just a sample demonstration of how to add values to the MDC (Mapped Diagnostic Context) store, so I have added a hard-coded value as below.

MDC.put("userId", "www.SpringBootDev.com");

In a real implementation, you can write whatever logic is most appropriate for deriving the client identifier. The MDC store can hold multiple key-value pairs.

 

After adding all the above-mentioned source files, your project should contain the controller and the servlet filter alongside the main application class.


 

 

Did we miss something?

 

Yes. Adding the key-value pairs to the MDC store is not enough to get those values into the log entries. We need to tell Spring Boot which MDC data should be added to the log entries and where it should appear. This can be achieved by declaring the logging pattern.

Add the logging pattern to application.properties as follows.

logging.pattern.level = %X{userId}%5p

 

In the properties file, when defining the conversion pattern, add a %X{key} pattern to retrieve the values present in the MDC. The key is userId in our example.

More pattern conversion characters can be found at https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html

This conversion pattern tells Spring Boot to render the MDC variable userId just before the priority field in the log output. The priority field is the logging level (DEBUG, INFO, etc.) that you are used to seeing.

 

Now run the Spring Boot application again and access the /welcome endpoint.


 

Then you can see that the MDC data (www.SpringBootDev.com, our userId value) is added before the priority field (INFO) in the following log entry.


2018-04-17 23:27:29.162 www.SpringBootDev.com INFO 42535 --- [nio-8080-exec-3] c.s.e.s.m.s.c.WelcomeController          : inside the welcomeMessage

 

Here we have enhanced the application logging with the MDC (Mapped Diagnostic Context) feature of the underlying logging framework.

 

The full source code related to this article can be found at GitHub. Click here to download it.

 

 

Microservices: Declare Zuul routes with Eureka serviceId (Spring Cloud + Zuul + Eureka)

 

In a previous article, we declared the Zuul routes by providing the service details (URLs) manually (click here to visit that article). That means we provided the domain name or IP address and the port number of each service. Just think of a situation where the application contains a large number of microservices. Do you think it is practical to find the server details (IP address/domain) and port of every service manually? If not, how do we declare the Zuul route mappings to expose each service through the Zuul proxy?

The solution is to perform the Zuul route mapping with the serviceId registered in the Eureka Server.

 

Here I am not going to discuss the importance of the Netflix Zuul proxy or the Netflix Eureka server. I have already written two separate articles on both of those areas; if you need to refresh your knowledge, please refer to the relevant articles.

 

What are we going to do here?

In order to demonstrate serviceId-based Zuul route mapping, we will create the following set of applications.

  • Eureka Server :- a Spring Boot application acting as the Eureka Server. All the microservices will be registered here.
  • Zuul Proxy :- a Spring Boot application acting as the Zuul reverse proxy. This is the centralized gateway directing all requests to the microservices. The Zuul proxy communicates with the Eureka server to get the details (IP address and port) of the relevant microservice when delegating a client request.
  • student-service :- a dummy microservice representing a backend business service.

 

Let’s create them one by one. The full source code of this application can be found at GitHub.

 

Eureka Server

Eureka Server is just another Spring Boot application with the Spring Cloud Netflix Eureka dependency, whose main configuration class is annotated with the @EnableEurekaServer annotation.

Therefore, create a Spring Boot application with the Eureka dependency.


 

Then add the @EnableEurekaServer annotation to the main Spring Boot application configuration class (that is, the class annotated with @SpringBootApplication).
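A minimal sketch (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}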

 

 

application.properties (Eureka Server)
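The original file is not shown here; a typical standalone Eureka Server configuration looks like the sketch below (port 8761 is the conventional Eureka port, and the two client flags stop the server from trying to register with itself; treat the exact values as assumptions):

server.port = 8761
eureka.client.register-with-eureka = false
eureka.client.fetch-registry = false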


Microservices: Request Routing with Zuul Proxy (Spring Boot + Spring Cloud + Netflix Zuul)

 

In this article, I am going to show you how to create two microservices and expose them through the Netflix Zuul proxy. I will give you a step-by-step guide on setting up the Zuul proxy and routing client requests to the related microservices.

 

What is Zuul, and why is a reverse proxy important?

Zuul is an API gateway (or rather a reverse proxy) that comes under the Netflix OSS stack. If you want to know the importance of a reverse proxy in a microservices architecture, or to learn more about Zuul, please click here to refer to my article on that topic. It is recommended to go through that article before moving forward with this one.

 

Source Code

The full source code related to this article can be found at GitHub. Click here to get it.

 

Setting up Microservices

In order to demonstrate the capabilities of the Zuul proxy, I will set up two Spring Boot applications as microservices.

  • student-service
  • course-service

For the simplicity of this article, I will post only the important code segments of the above microservices here. If you want to go through the full application code, please get the source from GitHub.

Important: First of all, I need to emphasize that the REST endpoints are not fully implemented and just contain a few hardcoded values. The purpose of this article is to demonstrate the routing capabilities of the Zuul proxy. In your real project implementation, you can follow best practices and standards to implement the controller logic as you wish. Here I have just set up a few controllers and RESTful endpoints for demonstration purposes.

 

student-service

Let’s look at the StudentController.
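The full controller is in the GitHub source; a minimal sketch with hardcoded values (as noted above, the return values are purely illustrative):

import java.util.Arrays;
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StudentController {

    // getting the name of the controller
    @GetMapping("/name")
    public String getName() {
        return "student-service";
    }

    // getting the student by student id
    @GetMapping("/students/{student_id}")
    public String getStudent(@PathVariable("student_id") String studentId) {
        return "student-" + studentId;
    }

    // getting a list of students by course id
    @GetMapping("/courses/{course_id}/students")
    public List<String> getStudentsOfCourse(@PathVariable("course_id") String courseId) {
        return Arrays.asList("student-1", "student-2");
    }

    // getting a list of students registered for a particular course in a particular department
    @GetMapping("/departments/{department_id}/courses/{course_id}/students")
    public List<String> getStudentsOfDepartmentCourse(
            @PathVariable("department_id") String departmentId,
            @PathVariable("course_id") String courseId) {
        return Arrays.asList("student-1", "student-2");
    }
}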

 

The REST endpoints exposed in the StudentController can be listed as below.

#getting the name of the controller
GET  http://localhost:8081/name

#getting the student by student Id
GET  http://localhost:8081/students/{student_id}

#getting a list of students by course id
GET  http://localhost:8081/courses/{course_id}/students

#getting a list of students who are registered for a particular course in the particular department. 
GET  http://localhost:8081/departments/{department_id}/courses/{course_id}/students

 

You can see that the REST endpoints range from a simple URI path (resource path) to a complex URI path with a set of path variables. I have added those endpoints intentionally, to show you how to map routes for each of them in the Zuul proxy.

The student-service can be brought up with the following command. It will run on port 8081.

mvn spring-boot:run

 

 

course-service

Let’s look at the CourseController.
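Again, a minimal sketch with hardcoded values (illustrative only):

import java.util.Arrays;
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CourseController {

    // getting all the courses
    @GetMapping("/courses")
    public List<String> getAllCourses() {
        return Arrays.asList("course-1", "course-2");
    }

    // getting the course by course id
    @GetMapping("/courses/{course_id}")
    public String getCourse(@PathVariable("course_id") String courseId) {
        return "course-" + courseId;
    }
}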

 

The REST endpoints exposed in the CourseController can be listed as below.

#getting the all courses
GET  http://localhost:8082/courses

#getting the course by course_id
GET  http://localhost:8082/courses/{course_id}

 

The course-service can be brought up with the following command. It will run on port 8082.

mvn spring-boot:run

 

 

Setting up the Zuul Proxy

I know that most people have been waiting curiously for this section of the article, so let’s start it now.

 

How to create the Zuul Proxy?

Believe me: the Zuul proxy is just another Spring Boot application. It has Spring Cloud Netflix Zuul on its classpath, and its main Spring Boot application configuration class is annotated with the @EnableZuulProxy annotation.

 

Lets create our Zuul Proxy application.

Go to https://start.spring.io/ and generate a Spring Boot application with the Zuul dependency.


 

Then open the generated project and annotate the main Spring Boot application configuration class with the @EnableZuulProxy annotation.
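A minimal sketch (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy
public class ZuulProxyApplication {

    public static void main(String[] args) {
        SpringApplication.run(ZuulProxyApplication.class, args);
    }
}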

 

Let’s change the port of the Zuul proxy application to any port that is not in use. Here I will change it to 7070, as sketched below.
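In the Zuul application’s application.properties:

server.port = 7070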

The Zuul Proxy can be up and run with following command.

mvn spring-boot:run

 

 

Route mapping with Zuul.

We have set up two microservices on our local machine: one running on port 8081 and the other on port 8082. We need to expose both through the Zuul proxy running on port 7070. These route mappings can be declared in the application.properties file of the Zuul proxy application.

 

Let’s look at how the REST endpoints in the student-service will be exposed through the Zuul proxy.

 

1.  URI resource mapping for  http://localhost:8081/name (No Path Variable)

zuul.routes.website-name.url = http://localhost:8081/name

Here you can see that it is a simple URI mapping; there is no path variable associated with it. According to the above route mapping, Zuul uses the route name website-name as the default path. Therefore any request for /website-name will be redirected to http://localhost:8081/name.

e.g. http://localhost:7070/website-name is sent to http://localhost:8081/name

Since there is no path mapping, the path is taken to be “website-name”.


 

 

2.  URI resource mapping for  http://localhost:8081/students/{student_id}

You can see that there is a path variable in the URI resource path. Therefore we need to declare the “path” mapping too.

zuul.routes.students.path = /students/*
zuul.routes.students.url = http://localhost:8081/students

According to the above route mapping, any request for the /students/* will be directed to http://localhost:8081/students. (This is applicable for any request that matches the above pattern)

Therefore http://localhost:7070/students/1  will be directed to http://localhost:8081/students/1


 

 

3. URI resource mapping for http://localhost:8081/courses/{course_id}/students

zuul.routes.students-courses.path = /courses/*/students
zuul.routes.students-courses.url = http://localhost:8081/courses

According to the above mapping, any request that matches the /courses/*/students pattern will be directed to the declared URL, along with the path variables.

The request for http://localhost:7070/courses/1/students will be directed to the http://localhost:8081/courses/1/students.

I know that you may now be confused, wondering why the URL path does not start with /students-courses. students-courses is only the name that we used to declare the URL (zuul.routes.students-courses.url).

Keep in mind that if a path is declared in the application.properties file, that path is used. If no path is declared, the route name used in the URL mapping is taken as the path. (This is the default behavior.)


 

 

4. URI resource mapping for http://localhost:8081/departments/{department_id}/courses/{course_id}/students

Here you can see that there are two path variables.

zuul.routes.department-courses-students.path = /departments/*/courses/*/students
zuul.routes.department-courses-students.url = http://localhost:8081/departments

Any request that matches the /departments/*/courses/*/students pattern will be redirected to the http://localhost:8081/departments/*/courses/*/students URL, along with the path variables.

e.g. http://localhost:7070/departments/1/courses/2/students will be directed to http://localhost:8081/departments/1/courses/2/students


 

Now we have completed the route mappings for all the REST services published in the student-service. Let’s look at how the REST endpoints in the course-service will be exposed through the Zuul proxy.

 

In the course-service there are only two REST endpoints, so we can use just one simple mapping for both of them.

# zuul route mapping for the course-service
zuul.routes.courses.url = http://localhost:8082/courses

According to the above route mapping, any request for /courses will be directed to the http://localhost:8082/courses URL.

 

http://localhost:7070/courses will be forwarded to http://localhost:8082/courses


 

http://localhost:7070/courses/1 will be forwarded to http://localhost:8082/courses/1


 

In this article, I have guided you through exposing microservices through the Zuul proxy, from simple URI resource mappings (no path variables) to complex URI resource mappings (with multiple path variables).

If you want to learn more about the Spring Cloud Zuul proxy, please click the following link to visit the official documentation.

Click here to visit the Spring Cloud Documentation on Routing and Filtering with Zuul

Netflix Zuul : Importance of Reverse Proxy in Microservices Architecture (Spring Cloud + Netflix Zuul)

 

What is Zuul?

Zuul is a proxy server (proxy service) provided by Netflix OSS. It provides a wide range of features such as dynamic routing, request filtering and server-side load balancing.

In a microservices architecture, Zuul acts as the API gateway for all the deployed microservices and sits as the middleman between client applications and backend services. This means that all the microservices are exposed to external parties (services or applications) through the Zuul proxy. If any service or application needs to access any of the microservices deployed behind the reverse proxy, it has to come through Zuul. Zuul hides the identities of the server applications behind the proxy and serves client applications by exposing its own identity (the identity of the reverse proxy) on behalf of the backend servers and server applications. Therefore Zuul is identified as a reverse proxy.

 

Forward Proxy and Reverse Proxy

Here we should understand the difference between a proxy (forward proxy) and a reverse proxy: one protects/hides clients, and the other protects/hides servers.

A forward proxy is a proxy for the client, and it hides the identities of the clients. It receives requests from clients and sends them to the server on the clients’ behalf. The main purpose of a forward proxy is to act on behalf of clients by hiding their identities. Forward proxies are mainly used to access content or websites blocked by your ISP or blocked for your country/area.


 

A reverse proxy does the opposite of what a forward proxy does: it hides the identities of the servers and receives requests from clients on the servers’ behalf. Behind the reverse proxy there may be various web services and servers. It is the responsibility of the reverse proxy to delegate each client request to the relevant service or server application and to respond back to the client. Therefore the main purpose of a reverse proxy is to serve client applications on behalf of the set of backend applications deployed behind it.

Sometimes several instances of the same service or server run behind the reverse proxy; this is known as clustering. In this situation, the reverse proxy determines the most appropriate server instance (or cluster node) to serve the client request and delegates the request to that node. This is achieved with the load-balancing capability available in the reverse proxy. Clustering ensures high availability of the service (even if one node is down, the request will be served by the next available node) and proper load balancing among multiple requests. Let’s look at those topics in another article.

A proxy (of either kind) provides a centralized (single) point of access for the communication between clients and servers. Therefore it is easy to enforce security policies, content filtering and other constraints with proxies. Both forward and reverse proxies sit (should be placed) between client and server.

 

Please refer to the following diagram to see the role of the reverse proxy.

[Diagram: a reverse proxy receiving client requests and delegating them to the backend services behind it]

 

A reverse proxy allows you to route requests for a single domain to multiple backing services behind that proxy. This can be useful when you want to break your application into several loosely coupled components (like microservices) and distribute them even across different servers, but need to expose them to the public under a single domain. Users then get the same experience as if they were communicating with a single application. This is achieved with the dynamic routing feature available in the reverse proxy.

 

 

The importance of a reverse proxy in a microservices architecture can be summarized as below.

 

  • High Availability: supports the high availability of the microservices in a clustered environment. Even if one service (node) goes down, the client request will be served by the next available node.
  • Load Balancing: supports load balancing among multiple nodes in the cluster, making sure that no server or service is overloaded with requests. It distributes the requests among the nodes to maximize the utilization of resources.
  • Single Point of Access with Request and Response Filtering: this is the single point of access, or the gateway, for the microservices. If the microservices are exposed through the reverse proxy, external clients can access/consume those services only through it. Therefore it is possible to filter the requests that come in to the microservices, and the responses that go out from them too. This provides an extra level of request and response filtering for the microservices, and authentication and authorization security policies can be enforced at this single point of access.
  • Dynamic Routing: there may be multiple microservices deployed behind the reverse proxy, on different servers with different domain names, or sometimes on the same server (where the reverse proxy is deployed) but on different ports. All the services are exposed to the public (client applications) through the reverse proxy, and the proxy assigns its own route (URL path) to each service; each route is mapped to the original route of the related service. Therefore the client gets the same experience as if it were communicating with a single application, and SSO (Single Sign-On) and CORS (Cross-Origin Resource Sharing) related issues are sorted out.

 

 

The Netflix Zuul as a Reverse Proxy

We have already discussed the importance of the reverse proxy in a microservices architecture, and now it is time to select an appropriate reverse proxy to use. Netflix has introduced Zuul as the reverse proxy in its OSS (Open Source Software) stack.

The Zuul proxy provides the following main functionalities as a reverse proxy. Let’s look at each of them in detail in separate articles.

  • Dynamic Routing
  • Request and Response Filtering
  • Server Side Load Balancing

 

Java Application Code Coverage with Cobertura + maven

Cobertura is a free Java tool that calculates the percentage of code accessed by tests. It can be used to identify which parts of your Java program are lacking test coverage.

In this article, I am going to show you how to use Cobertura with a maven-based Java application to measure the code coverage of its test cases.
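Cobertura is wired into the build through its maven plugin; a typical pom.xml entry (under the build/plugins section) is sketched below (the version shown is an assumption):

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>cobertura-maven-plugin</artifactId>
    <version>2.7</version>
</plugin>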

 

Cobertura Code Coverage Report

Go to the root of the project and run the following command, which will analyze the code coverage with Cobertura and generate the output report (showing a detailed analysis of the coverage).

mvn cobertura:cobertura

 


 

Accessing the report

The generated code coverage analysis report can be accessed at ${project}/target/site/cobertura/index.html


 

Microservices : Service Registration and Discovery in Netflix Eureka

 

The importance of Service Registration and Discovery

In a microservices architecture, your application may consist of a large number of microservices, each of which may be deployed on a different server and port. The microservices may need to communicate with each other to execute certain tasks or operations. For instance, one microservice may need to access a service endpoint (REST endpoint) of another microservice for an application-related operation. So how do they communicate with each other?

If I ask you this question, you will directly tell me that one service can access another service using its IP address and port number.

YES! You are correct. But keep in mind that this approach (using the IP address and port number) has the following limitations.

  • It is not practical to know the IP address and port number of each microservice when there are many of them. Assume that there are hundreds of microservices available; do you really think it is practical to keep track of the IP address and port number of each service?

 

  • Assume a situation where a microservice is migrated to (deployed on) another server with a different IP address and port. If we have used the IP address and port number in the source code for making REST client requests, we will have to change them in the source code and rebuild and redeploy the affected services; whenever the IP address or port number changes, we have to do the same. Don’t you think that is a bad coding and programming practice? Absolutely it is. (The issue arises again when we have multiple environments, such as dev, QA, UAT and production.)

 

 

How do we overcome the above problems and move forward?

The solution is “Service Registration and Discovery”.

The microservices should register themselves in a centralized location with an identifier and their server details; this is known as “Service Registration”. Each microservice should be able to look up the list of registered services in that centralized location; this is known as “Service Discovery”.

 

What are the implementations of Service Registration and Discovery?

There are multiple implementations of the Service Registration and Discovery server:

  • Netflix Eureka
  • Consul
  • Zookeeper

In this article, we are going to discuss Netflix Eureka.

 

 

What is Eureka Server? How does it work?

Eureka Server is an implementation of the “Service Registration and Discovery” pattern. The Eureka server keeps track of the registered microservices, and is therefore known as the registry of the application’s microservices.

Each microservice registers with the Eureka server by providing its details, such as host name, IP address, port and health indicators. The Eureka server then registers each microservice with a unique identifier known as the serviceId. Other services can use this serviceId to access that particular service.

The Eureka server maintains a list of the registered microservices with their provided details. It expects continuous ping messages (known as heartbeats) from the registered microservices to verify that they are alive (up and running). If any service fails to keep sending heartbeats, it is considered dead and is removed from the registry. Therefore the Eureka server maintains a registry of only up-and-running microservices.

On the other hand, the microservices can register themselves as service discovery clients with the Eureka server. The Eureka server allows the discovery clients (microservices) to look up the other registered services and fetch their information. If any service needs to communicate with another service, it can register with the Eureka server as a discovery client and fetch the information of the targeted service (server address, port number, etc.). In this way, microservices can communicate with each other without maintaining IP addresses and port numbers manually.

 

Let’s do some coding.

The full source related to this article can be found at GitHub.
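As a preview, here is a minimal discovery-client sketch (the class name and property values are illustrative; it assumes the Spring Cloud Netflix Eureka client dependency is on the classpath):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient
public class StudentServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(StudentServiceApplication.class, args);
    }
}

In application.properties, the service names itself and points at the Eureka server:

spring.application.name = student-service
eureka.client.serviceUrl.defaultZone = http://localhost:8761/eureka/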


Code Coverage and Source Quality Analysis with Spring Boot + Docker + SonarQube + JaCoCo

 

In this article, I am going to explain how to use SonarQube and JaCoCo as code coverage and source code quality analysis tools for a Spring Boot application.

 

What is Code Coverage, and why is it important?

Code coverage is an important topic when it comes to Test-Driven Development (TDD). Most developers are curious to know what percentage of the source code is covered by the test cases developed (both unit and integration tests).

Code coverage shows how much of the source code is covered and tested by the test cases (both unit and integration) developed for the application. Code coverage analysis is therefore an important part of measuring the quality of the source code: we should write test cases to achieve higher code coverage, which in turn increases the maintainability of the source code.

 

Technology Stack

The following technologies will be used for this article.

  • SonarQube
  • Docker
  • JaCoCo
  • Spring Boot Application with maven

 

Install and Run SonarQube with Docker

Most developers know SonarQube as a code quality analysis tool. It is also capable of executing unit and integration tests with a given library/tool (such as Cobertura, JaCoCo etc.) and giving a detailed analysis of the code coverage of the source code. In this article, we will run SonarQube as a Docker image; therefore we need Docker installed in our development environment.

If you do not have the SonarQube image in your local development environment, you can download it with the following command.

 

docker pull sonarqube


 

Once the SonarQube Docker image is retrieved, it can be run with the following command.

 

docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube  


 

This will start a Docker container based on the sonarqube image and give it the name sonarqube. The -d flag means the container runs in detached mode (in the background). The -p 9000:9000 and -p 9092:9092 options expose ports 9000 and 9092 to the host using the same port numbers.

Now you can navigate to http://localhost:9000 and you will see your local SonarQube dashboard.

 

JaCoCo Maven configuration

JaCoCo is one of the most popular code coverage libraries available for Java-based applications. In order to add JaCoCo to the project, add the following maven plugin under the plugins section of the project’s pom.xml.
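A typical plugin configuration is sketched below (the version is an assumption; the prepare-agent goal attaches the JaCoCo agent to the test run, and the report goal generates the coverage report):

<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.2</version>
    <executions>
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <execution>
            <id>report</id>
            <phase>test</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>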

 

JaCoCo Test Coverage Analysis with SonarQube

First you need to run the test cases with maven before sending the report to the Sonar server. This can be done with the following command.

mvn test

 

SonarQube has really good integration with test code coverage. It allows you to analyze which parts of the code should be better covered, and you can correlate this with many other stats. If you want to send your report to your Sonar server, the only thing you need to do is execute the following command in the terminal. (Make sure that you have run the mvn test command successfully before executing it.)

 

mvn sonar:sonar -Dsonar.login=admin -Dsonar.password=admin


 

Then it will send the inspection report to SonarQube, and you can access the detailed report at http://localhost:9000 using the specified login credentials.

username : admin
password : admin


 

 

Run as a Single Command

As you can see, we have used two separate commands for integrating the test result analysis with Sonar.

 

Running test cases  with maven

mvn test

Sending the coverage report to sonar 

mvn sonar:sonar -Dsonar.login=admin -Dsonar.password=admin

 

Both of the above commands can be combined into one single command as follows.

mvn test sonar:sonar -Dsonar.login=admin -Dsonar.password=admin

 

 

Exclude Classes from Code Coverage Analysis

 

In the code coverage analysis we care only about the classes that should be covered by unit and integration tests: that means the controllers, repositories, services and domain-specific classes. There are some classes which are not covered by either unit or integration tests. In order to get a correct code coverage figure, those unrelated classes must be excluded when performing the code coverage analysis.

E.g. configuration-related classes (the SpringBootApplication configuration class, Spring Security configuration classes etc.) should be excluded.

This can be done by listing the classes to be excluded under the “properties” section of pom.xml.

<properties>
    <sonar.exclusions>
      **/SpringBootDockerExampleApplication.java,
      **/config/*.java
    </sonar.exclusions>
 </properties>

 

You can add multiple exclusions, each separated by a comma. According to the above configuration, SpringBootDockerExampleApplication and any class under the config package will be excluded/ignored when performing the code coverage analysis.