
Do you really need a sledgehammer to crack a nut?






Intended audience: Developers
Read time: 30 min


Requirement: A reusable component to record and retrieve consents for minors


In a nutshell, we needed an independently deployable service to record and query consents for minors in a database, which would later be consumed by registration and other app-access requests. The service also needs to implement authentication and RBAC security (per the configured scopes) with JWT, as all the other integrations are secured and implement SSO using the OpenID Connect protocol.

We received this requirement in December 2020 and, fortunately, had the option to choose from the available frameworks. The following parameters were set to pick the most suitable one:
  1. The size and scope of the application
  2. The nature of complexity in the application
  3. Whether the objective is achieved through shipped dependencies or through capabilities provided by the runtime
  4. Testability
  5. Development effort
The requirement at hand was then evaluated against these parameters, and we came up with the following:
  • The scope was limited to just one concern: recording and retrieving the consents for a minor within a secured context.
  • It was also going to be a very simple application with no complex business rules around it, like a normal CRUD application.
  • Besides that, we wanted to fully utilize what the underlying runtime provides instead of turning our artifact into a shopping trolley carrying all kinds of different stuff.
  • The last two points were also important: the chosen framework should make the application easily testable and developer-friendly with no fuss, keeping the code clean and shifting responsibility to the runtime wherever possible.
Based on the above, MicroProfile looked like a promising option for what it provides, but the most important driving factor was the comparison between need and overall cost. Given the nature of the application (a simple microservice), the end goal, the enriched testability, and the minimal development effort, MicroProfile was found to be the best-suited option. In January 2021, this capability was rolled out to production successfully after going through multiple rounds of intensive testing along with the other integrations.

By the end of this series, we will have covered all the different aspects of this application, building it right from a clean slate. To get all our ducks in a row, I divided this into the following sections:
  1. Getting the development environment ready, by having all components from the different layers in place with the help of a sample starter project
  2. Adding the core functionality to the project
  3. Improving the testing aspect by adding integration tests using Testcontainers
Yes, before you raise it, I admit that point 3 should ideally come before point 2, but here this was done intentionally to avoid touching another practice which, in reality, requires a mindset reboot; we will not focus on that in this series, but yes, you're absolutely correct. In this post, we will cover point 1.

What is MicroProfile?

It was the time when the whole industry started appreciating microservices in one voice, and a trend started of limiting concerns, applying the twelve factors (see https://12factor.net/), creating cloud-ready applications, and migrating to the cloud. During this time a lot of technologies started changing at a really high pace to accommodate the needs and necessities born out of that. Some industry experts observed that the Java EE space was a bit behind in this race, and this was an alarming sign.

On 27th June 2016, in the presence of some independent Java Champions and some companies (IBM, Red Hat, Payara, Tomitribe, and the LJC), a collaborative decision was made to introduce a new subgroup of specifications (and their implementations) optimized for microservice development, and it was named MicroProfile. The first version, MicroProfile 1.0, consisted of JAX-RS 2.0, CDI 1.2, and JSON-P 1.0, because these were considered essential for a microservice; all of them were Java EE 7 specifications.

Motivation

Though it cannot be denied that, back in December 2009, Java EE 6 had already introduced the concept of profiles. A profile is a collection of Java EE technologies and APIs that address specific developer communities and application types. The following profiles were implemented through the distributions of Sun GlassFish Enterprise Server v3:

Full Platform Profile 
This profile is designed for developers who require the full set of Java EE APIs for enterprise application development. The Full Platform Profile is installed when you install Sun GlassFish Enterprise Server v3. This profile is also installed as part of the Java EE 6 SDK installation.

Web Profile
This profile contains web technologies that are part of the full platform and is designed for developers who do not require the full set of Java EE APIs. The Web Profile is installed when you install Sun GlassFish Enterprise Server v3 Web Profile. This profile is also installed with Java EE 6 Web Profile SDK.

Back then, the classification was as below:


By the time MicroProfile was introduced in 2016, this picture had changed a little, as Java EE 7 had already been released and some specs were added and upgraded, as below:


And then a new specification (or, you could say, a newly standardized sub-set) named MicroProfile was born:


One thing to note here: at this point no new specifications were created; they were taken from the already existing Java EE 7 specifications, or more precisely from the Web Profile sub-set. Later, with newer versions, new MicroProfile-specific specifications were created and added to the umbrella specification (MicroProfile); for more information, please refer here. At the time of writing this post, MicroProfile 4.0 is about to be officially released.


Exciting, isn't it? 😍

Note: For some readers, if it's confusing to choose between MicroProfile and Spring Boot, I would recommend going through a nice blog here and then rethinking what is most suited for your needs. To be honest, both have a lot to offer; the important part is what your application needs.


Enough talking; let's move to the developer's favorite part... coding :)

OK, so we have answers to what we eventually want to achieve with the system, which framework we want to use, and which tools and specs we have to help us achieve the end goal. In this application, we will make use of 4 MicroProfile specifications (mpHealth, mpMetrics, mpOpenAPI & mpJwt) and others from Jakarta EE 8. Let's start with the tech stack we will be using:

Technologies & Specifications

Language: Java 1.8
Runtime spec: Jakarta EE 8
Runtime: IBM WebSphere Liberty 20.0.0.6
Build tool: Maven 3+
Containerization: Docker (latest)
MicroProfile: 3.3 (the current latest, but we will cherry-pick the specific features we need)
Keycloak: in containerized form
PostgreSQL: in containerized form

What, are you serious? Are you for real? If this is your reaction at this point (to the Java version used :)), I know and can understand, but I can explain: we had environmental limitations and had to go with this version. It will be addressed soon and we will come out of the old era, hopefully 😊.

Getting the environment ready

We will do the following:
  1. Database: Start a PostgreSQL container and execute the DDL scripts
  2. Keycloak: Start a Keycloak container and create the realm as per the shared configs
  3. Starter project: Download a MicroProfile starter project from here and clean it up a bit
  4. Verification: Test token verification through the sample secured resource that comes with the starter project

1. Database

Run the following command from the terminal; it will create a PostgreSQL database. Use any database manager of your choice to access it. Once it's ready and you can access it, execute the shared script to create the required database objects.

docker run --rm \
--name pg \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=MRT_CONSENT \
-d -p 5432:5432 \
postgres
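
To quickly verify that the database is up, you can, for example, run a trivial query through the container (the psql client ships with the postgres image):

docker exec -it pg psql -U postgres -d MRT_CONSENT -c "SELECT 1;"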

2. Keycloak

Save the config file locally and run the following command from the terminal; it will start the Keycloak container. Once the command completes successfully, access the URL and log in with the credentials admin:admin.
docker run --rm \
-p 8080:8080 \
-p 8443:8443 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=admin \
quay.io/keycloak/keycloak:12.0.4
Once logged in, hover over "Master" and click on "Add realm".



Select the locally saved config file and click on "Create"; it will create a realm (test) and a client (test) with 2 optional scopes (consent and consent-admin).

Now select the test realm, click on "Clients" in the left pane, click on the client ID "test", select the "Credentials" tab, and click on "Regenerate Secret". Keep the secret safe; we will need it to get an access token for this client.

In these steps, we set up a client with a service account and the access type "confidential" (using the client_credentials grant), as we will only make use of client authentication, not end users. If these terms sound unfamiliar, please refer to some resources about OAuth 2.0 on the internet or check the Keycloak documentation here.

Verify that you can get an access token by executing the following curl (the --insecure flag is needed because the container uses a self-signed certificate); if yes, we are done with this step:
curl --location --insecure --request POST 'https://localhost:8443/auth/realms/test/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'client_id=test' \
--data-urlencode 'grant_type=client_credentials' \
--data-urlencode 'client_secret=<secret-value-copied-in-previous-step>' \
--data-urlencode 'scope=consent'
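
If everything is wired correctly, the response should look roughly like this (the values are illustrative):

{"access_token":"eyJhbGciOiJSUzI1NiIs...","expires_in":300,"refresh_expires_in":0,"token_type":"Bearer","not-before-policy":0,"scope":"consent"}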

3. Starter project

Go to the URL, choose the options as displayed in the image below, and download the compressed file; unzip it in a location of your choice.



The starter project contains 2 parts, service-a and service-b. For simplicity's sake, we will clean this up and start from a clean state: take the folder (minor-consent), remove service-a from it, move the content of the service-b folder to the parent folder, and delete the now-empty folder (service-b). We should now have a top-level folder minor-consent containing a src folder and two files (pom.xml & readme.md). As our requirement is slightly different and we have to implement RBAC security based on scopes (a claim in the token), not on roles, we will handle this explicitly but will still let MicroProfile authenticate the token and verify the roles.

As the first step, copy the PostgreSQL driver from here to src/main/liberty/config/resources/; we will need it to connect to the database later.

pom.xml

Let's make the following changes 
  1. Remove the jaeger-client and slf4j-jdk14 dependencies 
  2. Add commons-lang3 as a compile dependency
  3. Update the bootstrapProperties in liberty-maven-plugin    
<bootstrapProperties>
    <server.httpPort>9082</server.httpPort>
    <server.httpsPort>9445</server.httpsPort>
    <contextRoot>${project.artifactId}</contextRoot>
    <appLocation>${project.build.directory}/${project.build.finalName}.${project.packaging}</appLocation>
    <project.name>${final.name}</project.name>
    <jwt.jwksUri>https://localhost:8443/auth/realms/test/protocol/openid-connect/certs</jwt.jwksUri>
    <jwt.issuer>https://localhost:8443/auth/realms/test</jwt.issuer>
    <jwt.audiences>account</jwt.audiences>
    <jwt.userNameAttribute>sub</jwt.userNameAttribute>
</bootstrapProperties>
The above properties are used when the server is created at runtime; server.xml can reference them as ${property.name} variables.

server.xml



Required features, needs, and motivation (lines 4 - 14)

  1. jaxrs-2.1: required to expose this as a REST service so that other services can consume it
  2. mpMetrics-2.3: required to expose metrics for monitoring purposes
  3. mpHealth-2.2: required to expose the health endpoints for a quick health check
  4. mpJwt-1.1: required to handle security aspects like RBAC & token verification
  5. beanValidation-2.0: required for input validation
  6. jdbc-4.2: required for communication with the EIS layer
  7. concurrent-1.0: required to let the runtime manage threads and provide an executor
  8. jsonb-1.0: required for some customization during the serialization and deserialization phases
  9. mpOpenAPI-1.0: required for API contract generation/hosting
Note: we didn't mention some features (cdi, jndi, mpConfig, and jsonp) explicitly because, given the transitive nature of runtime features, they will be installed as dependencies.

Line no 16: Defines the host and ports for the application
Line no (18 - 20): Define the location and context root for the artifact
Line no 22: Defines an injectable executor for thread management
Line no 24: Disables authentication for the metrics endpoints
Line no 26: Defines the keystore for the runtime
Line no (28 - 30): Define the location of the database driver
Line no (31 - 35): Define the database connection properties
Line no (37 - 38): Define the JWT configurations for token verification
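
Since the file itself appears above as a screenshot, here is a minimal sketch of what such a configuration could look like, assembled from the features and line descriptions above. The element layout (and hence the line numbers) will not match the screenshot exactly, and the ids, JNDI names, and connection values are illustrative:

<server description="minor-consent">

    <featureManager>
        <feature>jaxrs-2.1</feature>
        <feature>mpMetrics-2.3</feature>
        <feature>mpHealth-2.2</feature>
        <feature>mpJwt-1.1</feature>
        <feature>beanValidation-2.0</feature>
        <feature>jdbc-4.2</feature>
        <feature>concurrent-1.0</feature>
        <feature>jsonb-1.0</feature>
        <feature>mpOpenAPI-1.0</feature>
    </featureManager>

    <httpEndpoint id="defaultHttpEndpoint" host="*"
                  httpPort="${server.httpPort}" httpsPort="${server.httpsPort}"/>

    <webApplication location="${appLocation}" contextRoot="${contextRoot}"/>

    <managedExecutorService jndiName="concurrent/executor"/>

    <mpMetrics authentication="false"/>

    <keyStore id="defaultKeyStore" password="atbash"/>

    <library id="postgresLib">
        <fileset dir="${server.config.dir}/resources" includes="postgresql-*.jar"/>
    </library>

    <dataSource id="pg" jndiName="jdbc/pg">
        <jdbcDriver libraryRef="postgresLib"/>
        <properties serverName="localhost" portNumber="5432"
                    databaseName="MRT_CONSENT" user="postgres" password="password"/>
    </dataSource>

    <mpJwt id="jwtConsumer" jwksUri="${jwt.jwksUri}" issuer="${jwt.issuer}"
           audiences="${jwt.audiences}" userNameAttribute="${jwt.userNameAttribute}"/>

</server>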

MinorconsentRestApplication


Here we updated the realmName as per our Keycloak configuration.
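
The class itself is shown above as a screenshot; as a minimal sketch, the starter-generated application class looks roughly like this, with the realmName updated to our realm (the @ApplicationPath value is inferred from the /data/protected endpoint used later):

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

import org.eclipse.microprofile.auth.LoginConfig;

// @LoginConfig is the standard mpJwt-1.1 way to activate MP-JWT
// authentication for the whole JAX-RS application
@LoginConfig(authMethod = "MP-JWT", realmName = "test")
@ApplicationPath("/data")
public class MinorconsentRestApplication extends Application {
}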

ProtectedController

We made some small changes here: added @Produces at the class level to get responses in the JSON media type, and added our custom annotation @ScopesAllowed, based on which scope verification is applied in the ScopeVerifier filter. These changes restrict this method to clients presenting a valid token that carries the consent-admin scope.
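
A minimal sketch of the adjusted controller; the class name, @ScopesAllowed, and the consent-admin scope come from the post, while the method name and response body are illustrative assumptions:

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.json.Json;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.eclipse.microprofile.jwt.JsonWebToken;

@Path("/protected")
@Produces(MediaType.APPLICATION_JSON) // class-level, so every method returns JSON
@RequestScoped
public class ProtectedController {

    // The verified token, injected by mpJwt once authentication succeeds
    @Inject
    private JsonWebToken jwt;

    @GET
    @ScopesAllowed("consent-admin") // enforced by the ScopeVerifier filter shown below
    public Response getProtectedValue() {
        // Illustrative body: echo the caller so the integration can be verified
        return Response.ok(Json.createObjectBuilder()
                .add("caller", jwt.getName())
                .build()).build();
    }
}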




Scope verification

For our explicit scope verification, we added a filter (ScopeVerifier) of type ContainerRequestFilter; requests are intercepted conditionally, based on the presence of the ScopesAllowed annotation.
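
A minimal sketch of the annotation and filter described here; the class names come from the post, while the claim-handling details are assumptions (Keycloak issues a single space-separated "scope" claim, and MicroProfile JWT implementations may surface it as a String or a JsonString):

// ScopesAllowed.java - the custom marker annotation
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface ScopesAllowed {
    String[] value();
}

// ScopeVerifier.java - enforces the scope claim for annotated resources
import java.security.Principal;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import javax.annotation.Priority;
import javax.json.JsonString;
import javax.ws.rs.Priorities;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.ResourceInfo;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

import org.eclipse.microprofile.jwt.JsonWebToken;

@Provider
@Priority(Priorities.AUTHORIZATION) // run after mpJwt has authenticated the request
public class ScopeVerifier implements ContainerRequestFilter {

    @Context
    private ResourceInfo resourceInfo;

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // Intercept only resources carrying the annotation (method wins over class)
        ScopesAllowed annotation = resourceInfo.getResourceMethod().getAnnotation(ScopesAllowed.class);
        if (annotation == null) {
            annotation = resourceInfo.getResourceClass().getAnnotation(ScopesAllowed.class);
        }
        if (annotation == null) {
            return;
        }
        // mpJwt exposes the verified token as the user principal
        Principal principal = requestContext.getSecurityContext().getUserPrincipal();
        if (!(principal instanceof JsonWebToken)) {
            requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
            return;
        }
        // The "scope" claim may come back as a String or a JsonString
        Object raw = ((JsonWebToken) principal).getClaim("scope");
        String scopeClaim = raw instanceof JsonString ? ((JsonString) raw).getString()
                : raw != null ? raw.toString() : null;
        Set<String> tokenScopes = scopeClaim == null ? Collections.emptySet()
                : new HashSet<>(Arrays.asList(scopeClaim.split(" ")));
        for (String allowed : annotation.value()) {
            if (tokenScopes.contains(allowed)) {
                return; // any one allowed scope is enough
            }
        }
        requestContext.abortWith(Response.status(Response.Status.FORBIDDEN).build());
    }
}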


We also reshuffled things a bit, as below:



Before we test this with our Keycloak token, one more step needs to be done: import the Keycloak certificate into the Liberty keystore so that the Liberty server can access the jwksUri for token validation.

You can check in server.xml (line no 26) that the default password for the Liberty keystore is atbash. Download and save the Keycloak certificate from the URL and add it to this store using keytool or KeyStore Explorer; I found the latter really handy, check it out here.
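
If you go the keytool route, a command along these lines should work; the keystore path and certificate file name are assumptions based on a typical starter layout, so adjust them to your setup:

keytool -importcert \
  -alias keycloak \
  -file keycloak.crt \
  -keystore src/main/liberty/config/resources/security/key.jks \
  -storepass atbash \
  -noprompt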

Run the application with one of the commands below, and notice the terminal output listing all the endpoints exposed by the Liberty server:

mvn clean liberty:run OR mvn clean liberty:dev

The difference between the two commands is that the first starts the server in normal mode, whereas the second starts it in dev mode, facilitating hot code replacement while the server is running so you can see your changes take effect immediately.

You're now all set to test the first integration: use the curl mentioned above to generate an access token, prefix the token with "Bearer ", and use that as the Authorization header value while accessing the endpoint http://localhost:9082/data/protected.
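
For example, with curl (replace <access-token> with the token obtained earlier):

curl --header "Authorization: Bearer <access-token>" http://localhost:9082/data/protected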

It's time to end this post here; we have made use of only one MicroProfile specification (mpJwt-1.1) so far. In the next posts, we will start adding the core functionality, make use of the other MicroProfile specifications, and go through all the other aspects like containerization, docker-compose, Testcontainers, etc.


The final working code discussed in this post can be found on GitHub on the branch post-1; check it out using the command below:

git clone -b post-1 git@github.com:manish2aug/minor-consent.git

