THIS IS NOT A README FILE: This is a collection of hints, knowledge, and considerations gathered while this project continues to be developed.
- Red: Write a failing test for the desired functionality.
- Green: Implement the simplest thing that can work to make the test pass.
- Refactor: Look for opportunities to simplify, reduce duplication, or otherwise improve the code without changing any behavior.
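A minimal sketch of the Red/Green steps in JUnit 5 with AssertJ (both come with the Spring Boot test starter). The `Greeting` class and its method are hypothetical, purely for illustration:

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;

class GreetingTest {

    // Red: this test fails until Greeting.forName(...) is implemented.
    @Test
    void shouldGreetByName() {
        Greeting greeting = new Greeting();
        assertThat(greeting.forName("Alice")).isEqualTo("Hello, Alice!");
    }
}

// Green: the simplest thing that can work to make the test pass.
class Greeting {
    String forName(String name) {
        return "Hello, " + name + "!";
    }
}
```

Refactor would then clean up any duplication without changing the behavior the test locks in.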
HTTP is a stateless protocol, so each request must contain data that proves it's from an authenticated Principal. Authenticated sessions can be implemented with a Session Token that is generated and placed in a Cookie (a set of data stored in the web client and associated with a specific URI). Cookies:
- Are automatically sent to the server with every request.
- Persist for a certain amount of time even if the web page is closed and later re-visited.
- Authorization happens after authentication.
- The Same-Origin Policy (SOP) states that only scripts contained in a web page are allowed to send requests to the origin (URI) of that page.
- CORS is a way that browsers and servers can cooperate to relax the SOP (e.g. for microservices). A server can explicitly allow a list of "allowed origins" of requests (@CrossOrigin); see the sketch after this list.
- CSRF, aka "Sea-Surf" or Session Riding, is actually enabled by Cookies. CSRF attacks happen when a malicious piece of code sends a request to a server where a user is authenticated.
- Use a CSRF Token: it's generated on each request. This makes it harder for an outside actor to insert itself into the “conversation” between the client and the server.
- XSS (Cross-Site Scripting): arbitrary malicious code executes on the client or on the server. It does not depend on Authentication but on security "holes" caused by poor programming practices.
- The main way to guard against XSS attacks is to properly process all data from external sources (web forms and URI query strings), e.g. by properly escaping the special HTML characters (such as <script>) when a string is rendered.
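A minimal sketch of an explicit CORS allow-list on a controller; the origin, path, and controller name are assumptions for illustration:

```java
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
// Explicit allow-list of origins; @CrossOrigin with no arguments would allow ALL origins.
@CrossOrigin(origins = "https://trusted.example.com")
class ExampleController {

    @GetMapping("/entity")
    String listEntities() {
        return "[]"; // placeholder body
    }
}
```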
Use CSRF protection for any request that could be processed by a browser used by normal users. But if you are only creating a service that is used by non-browser clients, you will likely want to disable CSRF protection (see the sketch after the links below). More:
- https://docs.spring.io/spring-security/reference/servlet/test/mockmvc/csrf.html
- https://docs.spring.io/spring-security/site/docs/5.2.0.RELEASE/reference/html/test-webflux.html#csrf-support
- https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#double-submit-cookie
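A hedged sketch of disabling CSRF protection for a non-browser-only service using the Spring Security 6 lambda DSL; the use of HTTP Basic here is just a placeholder for whatever authentication the service actually uses:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
class SecurityConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            // Safe only if no browser-based clients ever use this API.
            .csrf(AbstractHttpConfigurer::disable)
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}
```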
Operation | API Endpoint | HTTP Method | Response Status |
---|---|---|---|
Create | /entity | POST | 201 (CREATED) |
Read | /entity/{id} | GET | 200 (OK) |
Update | /entity/{id} | PUT | 204 (NO CONTENT) |
Delete | /entity/{id} | DELETE | 204 (NO CONTENT) |
Request | Response |
---|---|
Method (aka Verb) | |
URI (aka Endpoint) | Status Code |
Body | Body |
HTTP Method | Operation | Definition of Resource URI | What does it do? | Response Status Code | Response Body |
---|---|---|---|---|---|
POST | Create | Server generates and returns the URI | Creates a sub-resource ("under" or "within" the passed URI) | 201 CREATED | The created resource |
PUT | Create | Client supplies the URI | Creates a resource (at the Request URI) | 201 CREATED | The created resource |
PUT | Update | Client supplies the URI | Replaces the resource: the entire record is replaced by the object in the Request | 204 NO CONTENT | (empty) |
PATCH | Update | Client supplies the URI | Partial Update: modify only the fields included in the request on the existing record | 200 OK | The updated resource |
The difference is whether the URI (which includes the ID of the resource) needs to be generated by the server or not:
- POST: If you need the server to return the URI of the created resource.
- PUT: When the resource URI is known at creation time.
PUT creates or replaces (updates) a resource at a specific request URI.
Related to deciding whether to allow PUT to create objects, you need to decide what the response status code and body should be:
Response Code | Use Case |
---|---|
200 OK | If you replaced an existing object (it's recommended to return the object in the response body). |
201 CREATED | If you created the object. |
204 NO CONTENT | Empty response body (since the object at the URI in the request has been saved, verbatim, on the server). |
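Tying the table above together, a minimal sketch of a POST create endpoint that returns 201 CREATED plus the server-generated URI in the Location header. The `Entity` type and `EntityRepository` are hypothetical Spring Data collaborators:

```java
import java.net.URI;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.util.UriComponentsBuilder;

@RestController
class EntityController {

    private final EntityRepository repository; // hypothetical Spring Data repository

    EntityController(EntityRepository repository) {
        this.repository = repository;
    }

    // POST: the server generates the ID and returns 201 CREATED with the new resource's URI.
    @PostMapping("/entity")
    ResponseEntity<Entity> create(@RequestBody Entity newEntity, UriComponentsBuilder ucb) {
        Entity saved = repository.save(newEntity);
        URI location = ucb.path("/entity/{id}").buildAndExpand(saved.getId()).toUri();
        return ResponseEntity.created(location).body(saved);
    }
}
```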
Add an IS_DELETED boolean or a DELETED_DATE timestamp column. With a soft delete, we also need to change how Repositories interact with the database.
We'd need to store additional data to be able to recover deleted data.
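A minimal sketch of how a repository might change for soft delete, assuming a hypothetical JPA entity named `Entity` with a boolean `isDeleted` field (callers of the modifying query would also need a transaction):

```java
import java.util.Optional;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

interface EntityRepository extends JpaRepository<Entity, Long> {

    // Reads now ignore soft-deleted records (derived query on the IS_DELETED column).
    Optional<Entity> findByIdAndIsDeletedFalse(Long id);

    // "Delete" becomes an update that flips the flag instead of removing the row.
    @Modifying
    @Query("update Entity e set e.isDeleted = true where e.id = :id")
    void softDeleteById(@Param("id") Long id);
}
```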
A response to a DELETE request will have no body. The client shouldn't care what the response code is unless it's an error, in which case, it'll throw an exception.
Response Code | Use Case |
---|---|
204 NO CONTENT | The record exists, the Principal is authorized, and the record was successfully deleted. |
404 NOT FOUND | The record does not exist (a non-existent ID was sent). |
404 NOT FOUND | The record does exist but the Principal is not the owner. |
- Archive (move) the deleted data into a different location.
- Add audit fields to the record itself (for example, a DELETED_DATE or DELETED_BY_USER column).
- An audit trail is a record of all important operations performed on a record. It can contain not only Delete operations, but Create and Update as well.
Implement soft delete, then have a separate process that hard-deletes or archives soft-deleted records after a certain time period, such as once per year. Alternatively, we could implement hard delete and archive the deleted records. In either case, we could keep an audit log of which operations happened when.
Even though @Autowired is a form of Spring dependency injection, it’s best used only in tests.
Supplying an id to Repository.save is supported when performing an update on an existing resource.
The API requires that you DO NOT supply an Entity.id when creating a new Entity.
- Minimize cognitive overhead: Other developers (not to mention users) will probably appreciate a thoughtful, explicit ordering.
- Minimize future errors: What happens when a new version of Spring, or Java, or the database, suddenly causes the “random” order to change overnight?
- Provides Authorization via Role-Based Access Control (RBAC). This means that a Principal has a number of Roles, and each Role determines which operations the Principal is allowed to perform.
- Configures Spring Web to return a generic 403 FORBIDDEN in most error conditions in order to avoid "leaking" information about our application.
- Spring Web provides the @CrossOrigin annotation, allowing you to specify a list of allowed CORS origins. Be careful! If you use the annotation without any arguments, it will allow all origins.
If you have an @ExceptionHandler handling the same Exception in both a Controller and a GlobalExceptionHandler, the Controller-level @ExceptionHandler method takes priority.
Spring Framework 6 implements the Problem Details for HTTP APIs specification (RFC 7807).
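A minimal sketch of returning a ProblemDetail body from an exception handler; the handled exception here (jakarta.persistence.EntityNotFoundException) is just an example choice:

```java
import jakarta.persistence.EntityNotFoundException;

import org.springframework.http.HttpStatus;
import org.springframework.http.ProblemDetail;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
class GlobalExceptionHandler {

    // Returns an RFC 7807 "problem detail" body instead of a bare error status.
    @ExceptionHandler(EntityNotFoundException.class)
    ProblemDetail handleNotFound(EntityNotFoundException ex) {
        return ProblemDetail.forStatusAndDetail(HttpStatus.NOT_FOUND, ex.getMessage());
    }
}
```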
If you're using Java 13 or later, triple quotes (""") delimit a text block (a preview feature in Java 13/14, standard since Java 15). Key points about text blocks:
- Indentation: incidental leading whitespace is stripped; the content's left margin is determined by where the closing """ is aligned.
- No need for escape sequences: you don't need to escape special characters like \n for new lines or " for double quotes within the text block.
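A small self-contained example of a text block:

```java
class TextBlockExample {
    public static void main(String[] args) {
        // No escaped quotes or \n needed; indentation is set by the closing """ position.
        String json = """
                {
                  "name": "Alice",
                  "role": "admin"
                }
                """;
        System.out.println(json);
    }
}
```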
If one of the other tests interferes with our new test by creating a new Entity, @DirtiesContext fixes this problem by causing Spring to start with a clean slate, as if those other tests hadn't been run.
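A minimal sketch of how @DirtiesContext might be applied to a test class (the class name is hypothetical); note that rebuilding the context after every test method slows the suite down:

```java
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.annotation.DirtiesContext;

@SpringBootTest
// Rebuild the Spring context after each test method so data created by one test cannot leak into the next.
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
class EntityApplicationTests {
    // ... tests that create Entities go here
}
```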
Aspect | @RunWith(SpringRunner.class) | @SpringBootTest |
---|---|---|
Purpose | To run tests with Spring's support in JUnit 4. | To load the full Spring Boot application context for integration testing. |
Level of Testing | Focuses on enabling Spring-specific features (e.g., dependency injection). | Focuses on testing the whole Spring Boot application context. |
Usage | Used with @ContextConfiguration, @MockBean, or any Spring component testing. | Used for integration testing to load the application context. |
Speed | Faster because it doesn’t load the entire application context unless specified. | Slower because it loads the full Spring application context. |
JUnit Version | JUnit 4. | Works with both JUnit 4 and JUnit 5. |
Typical Usage | Unit tests with Spring components. | Integration tests or end-to-end tests. |
How they are used together: In many cases, @SpringBootTest is used together with @RunWith(SpringRunner.class) in JUnit 4-based Spring Boot tests. @RunWith(SpringRunner.class) ensures that Spring's test context is loaded, and @SpringBootTest tells Spring to load the full application context.
In Part 3 of this Spring Boot Microservices Tutorial series, we will implement the API Gateway pattern using the Spring Cloud Gateway MVC library.
An API Gateway, also called an Edge Server, acts as an entry point for our microservices so that external clients can access the services easily. It also helps us handle cross-cutting concerns like Monitoring, Security, etc. In some instances, the API Gateway also acts as a load balancer. We use an API Gateway as the facade that provides an abstraction over the internal microservices.
Spring Cloud Gateway MVC is a library under the Spring Cloud project that provides API Gateway functionality. It forwards requests from the client to the relevant microservices. To implement this feature, Spring Cloud Gateway uses the following building blocks:
- Routes: A Route is the basic building block of the gateway. It can be defined using a unique ID, a destination URI, and a collection of predicates and filters.
- Predicates: A Predicate is a criteria or condition that you define to match against the incoming HTTP request. For example, you can create a routing rule where requests with a specific header and request parameter are routed to Service A; the headers and request parameters you want to match against are the predicates.
- Filters: Filters are components that allow you to modify the requests and responses before they are sent to the destination.
Note that we will be using Spring Cloud Gateway MVC, not Spring Cloud Gateway, which is based on the reactive stack backed by Spring WebFlux.
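A hedged sketch of a route definition using the Spring Cloud Gateway MVC functional API; the route ID, path pattern, and downstream URI/port are assumptions for illustration:

```java
import static org.springframework.cloud.gateway.server.mvc.handler.GatewayRouterFunctions.route;
import static org.springframework.cloud.gateway.server.mvc.handler.HandlerFunctions.http;
import static org.springframework.web.servlet.function.RequestPredicates.path;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.function.RouterFunction;
import org.springframework.web.servlet.function.ServerResponse;

@Configuration
class Routes {

    // Route = unique ID + predicate (path match) + handler that forwards to the downstream service.
    @Bean
    RouterFunction<ServerResponse> productServiceRoute() {
        return route("product_service")
                .route(path("/api/product/**"), http("http://localhost:8080"))
                .build();
    }
}
```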
Feign is a declarative web service client: it makes writing web service clients easier. In Feign, "declarative" means you define HTTP clients by declaring an interface with annotations, rather than writing imperative code to handle the HTTP requests and responses manually; this reduces boilerplate code and simplifies integration with REST APIs. To use Feign, create an interface and annotate it. It has pluggable annotation support, including Feign annotations and JAX-RS annotations, as well as pluggable encoders and decoders. Spring Cloud adds support for Spring MVC annotations and for using the same HttpMessageConverters used by default in Spring Web. Note: as announced in the Spring Cloud 2022.0.0 release blog entry, the Spring Cloud OpenFeign project is now treated as feature-complete; only bugfixes and possibly some small community feature PRs will be merged, and migrating to Spring Interface Clients is suggested instead.
The Spring Framework provides the following choices for making calls to REST endpoints:
RestClient - synchronous client with a fluent API.
WebClient - non-blocking, reactive client with fluent API.
RestTemplate - synchronous client with template method API.
HTTP Interface - annotated interface with generated, dynamic proxy implementation.
RestClient: The RestClient is a synchronous HTTP client that offers a modern, fluent API. It offers an abstraction over HTTP libraries that allows for convenient conversion from a Java object to an HTTP request, and the creation of objects from an HTTP response.
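A minimal sketch of the RestClient fluent API (available since Spring Framework 6.1); the target URL is a placeholder:

```java
import org.springframework.web.client.RestClient;

class RestClientExample {
    public static void main(String[] args) {
        // Build the request fluently, execute it synchronously, and convert the body.
        RestClient restClient = RestClient.create();
        String body = restClient.get()
                .uri("https://httpbin.org/get") // placeholder endpoint for illustration
                .retrieve()
                .body(String.class);
        System.out.println(body);
    }
}
```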
Synchronous Communication between our Order Service and Inventory Service using Spring Cloud OpenFeign Library.
The Spring Cloud OpenFeign library provides OpenFeign integrations with Spring Boot and Spring Cloud. It provides a declarative REST client that makes consuming REST endpoints in our code easy.
Inter Process Communication
We will implement Synchronous Communication between Order Service and Inventory Service using the Spring Cloud OpenFeign library.
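A hedged sketch of what such a declarative client might look like (the client name, URL property, path, and parameters are assumptions; the main application class would also need @EnableFeignClients):

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Declarative client: the interface plus annotations replace hand-written HTTP calls.
@FeignClient(name = "inventory", url = "${inventory.service.url}")
public interface InventoryClient {

    @GetMapping("/api/inventory")
    boolean isInStock(@RequestParam String skuCode, @RequestParam Integer quantity);
}
```

The Order Service can then simply inject InventoryClient and call isInStock(...) as if it were a local method.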
What is Keycloak? Keycloak is an open-source Authorization Server that can be used to outsource the authentication and authorization from our application. Keycloak supports various authentication and authorization protocols like OAuth2, OpenID Connect, SAML, etc. It also offers features like Single Sign On (SSO), and Multi-Factor Authentication (MFA) out of the box.
If you want to learn more about OAuth2 and OIDC, refer to https://oauth.net/2/ and https://openid.net/developers/how-connect-works/
What is OpenAPI? OpenAPI (don't mistake it for OpenAI :D) is a specification that defines a standard way to document APIs. No matter which programming language or framework you use, OpenAPI provides a standard way of defining and documenting your API so that it's easy to read and use.
In the Java world, it's similar to the Java Persistence API (JPA), which defines a specification for how to persist data in our applications. Hibernate is a library that implements JPA; similarly, we have a tool called Swagger, which helps us implement the OpenAPI specification.
Springdoc OpenAPI: Swagger does not provide out-of-the-box support for Spring Boot; that's where the Springdoc OpenAPI library comes in. It integrates well with Spring Boot and helps us generate the API documentation automatically in JSON/YAML and HTML formats.
If you want to view the documentation in HTML format, we should add the below dependency in all our services:
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.5.0</version>
</dependency>
Make sure to check the documentation to get the latest version of the dependency: https://springdoc.org/#getting-started
Next, let's customize the URL where we want to serve the REST API documentation. By default, Springdoc OpenAPI exposes the documentation at the URL path /swagger-ui/index.html; if we want to customize the URL path, add the below property to the application.properties file:
springdoc.swagger-ui.path=/swagger-ui.html
Next, we have to create a configuration class to define some metadata about our API: create a class called OpenAPIConfig in a package called config.
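A minimal sketch of what OpenAPIConfig might look like; the title, description, and version values are placeholders:

```java
import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.info.Info;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OpenAPIConfig {

    // Metadata shown at the top of the generated Swagger UI page.
    @Bean
    public OpenAPI serviceOpenAPI() {
        return new OpenAPI()
                .info(new Info()
                        .title("Order Service API")                 // placeholder title
                        .description("REST API for the Order Service") // placeholder description
                        .version("v1.0"));
    }
}
```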
What is the Circuit Breaker Pattern? The Circuit Breaker is one of the most widely used best practices in real-world distributed systems.
Consider a scenario where your application A makes synchronous calls to a remote service R. If service R becomes unavailable or responds very slowly due to performance issues, this situation will negatively impact application A as well.
If application A receives a large number of requests, there will be a lot of threads in the waiting state, waiting for the response from R, ultimately crashing application A. To avoid this issue, we can use the Circuit Breaker Pattern, which works much like the circuit breaker used in our homes to protect electrical devices from power spikes: if there is a power spike, the circuit breaker trips and stops the flow of electricity. Similarly, when the remote service R is unavailable or responding very slowly, we can introduce a Circuit Breaker that stops calls to the service for a certain amount of time. After this timeout, the Circuit Breaker gradually starts allowing calls to service R again.
In our Microservices Project, we can introduce this Circuit Breaker mechanism in the API Gateway and the Order Service.
The API Gateway is the main service that calls the three other services, so it is the best place to use a Circuit Breaker. Similarly, we can also implement this feature in the Order Service, since it calls the Inventory Service to fetch inventory information.
Different States in the Circuit Breaker Pattern: At any given point in time, a circuit breaker will be in one of the following states:
Open: This state indicates that the Circuit Breaker is open, and all traffic going through the Circuit Breaker will be blocked.
Half-Open: In this state, the Circuit Breaker will gradually start allowing traffic to the remote service R.
Closed: In this state, the Circuit Breaker allows all requests to the service, which means that service R is working well without any problems.
Implement Circuit Breaker in the API Gateway: Now let's implement this pattern in our API Gateway project. For that, I am going to add the following dependencies to the pom.xml of the API Gateway project:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-circuitbreaker-resilience4j</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
The first dependency adds the Resilience4J library to our project, and the second adds Spring Boot Actuator, which provides endpoints that expose information about our application, such as metrics; we can use these endpoints to check the state of the Resilience4J Circuit Breaker.
After adding the above dependencies, we need to add the circuitBreaker() method to our Route Configuration for all the routes.
You can see that the circuitBreaker() method takes an ID (a string) and a URI parameter that points to a fallback endpoint, which is used when requests are blocked because the Circuit Breaker is OPEN.
We have the fallbackRoute() bean, defined as a fallback route at the path /fallbackRoute, that sends an HTTP 503 SERVICE UNAVAILABLE response back to the client.
After adding this configuration for our routes, we now have to configure Resilience4J in our project. For that, open the application.properties file:
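A hedged sketch of what such a route plus fallback might look like with the Spring Cloud Gateway MVC functional API; the route IDs, downstream port, and fallback message are assumptions:

```java
import static org.springframework.cloud.gateway.server.mvc.filter.CircuitBreakerFilterFunctions.circuitBreaker;
import static org.springframework.cloud.gateway.server.mvc.handler.GatewayRouterFunctions.route;
import static org.springframework.cloud.gateway.server.mvc.handler.HandlerFunctions.http;
import static org.springframework.web.servlet.function.RequestPredicates.path;

import java.net.URI;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpStatus;
import org.springframework.web.servlet.function.RouterFunction;
import org.springframework.web.servlet.function.ServerResponse;

@Configuration
class GatewayRoutes {

    // Route wrapped in a circuit breaker; when it is OPEN, requests are forwarded to /fallbackRoute.
    @Bean
    RouterFunction<ServerResponse> orderServiceRoute() {
        return route("order_service")
                .route(path("/api/order/**"), http("http://localhost:8081")) // port is an assumption
                .filter(circuitBreaker("orderServiceCircuitBreaker", URI.create("forward:/fallbackRoute")))
                .build();
    }

    // Fallback endpoint that returns 503 SERVICE UNAVAILABLE to the client.
    @Bean
    RouterFunction<ServerResponse> fallbackRoute() {
        return route("fallbackRoute")
                .GET("/fallbackRoute", request -> ServerResponse.status(HttpStatus.SERVICE_UNAVAILABLE)
                        .body("Service Unavailable, please try again later"))
                .build();
    }
}
```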
Enable Circuit Breaker for Timeouts: We can configure the Circuit Breaker to enforce a timeout when the remote service takes a very long time to respond. All we have to do is add the following property:
resilience4j.timelimiter.configs.default.timeout-duration=3s
With this configuration, a call is timed out when the remote service takes more than 3 seconds to send back a response; such timeouts count as failures and can trip the Circuit Breaker to OPEN.
Implement Retries: Sometimes the service can be unavailable due to a small network issue or some other minor problem. In those cases, it's better to retry the call instead of directly tripping the Circuit Breaker. For this reason, the Resilience4J library allows us to implement retries by adding the following configuration:
# Resilience4J Retry Properties
resilience4j.retry.configs.default.max-attempts=3
resilience4j.retry.configs.default.wait-duration=2s
The above configuration will retry a maximum of 3 times, with a wait of 2 seconds between retries.
What are Event-Driven Microservices? Event-driven microservices architecture is a way of building applications where systems communicate by publishing and consuming events; the events remain available so that other consumers can read them whenever they need to.
Apache Kafka is a distributed messaging and streaming platform used frequently in the industry to implement Event-Driven Architecture.
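As a hedged illustration of the publish/consume model with Spring Kafka (assuming the spring-kafka dependency is on the classpath; the topic name, group ID, and String payload are placeholders):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Producer side: publish an event to a topic.
@Component
class OrderPlacedPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;

    OrderPlacedPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    void publish(String orderNumber) {
        kafkaTemplate.send("order-placed", orderNumber);
    }
}

// Consumer side: any interested service can read the event whenever it is ready.
@Component
class OrderPlacedListener {

    @KafkaListener(topics = "order-placed", groupId = "notification-service")
    void onOrderPlaced(String orderNumber) {
        System.out.println("Received order-placed event for order " + orderNumber);
    }
}
```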
Installing Apache Kafka through Docker: We will use Docker to install Apache Kafka together with Zookeeper. We will also use a Kafka UI to see the topics and messages in our Kafka cluster, using the Kafka UI project. Here is what the Docker Compose file looks like in the order-service docker-compose.yaml file. The main services we use are:
- cp-zookeeper: a Zookeeper service that Kafka uses to coordinate its brokers.
- cp-kafka: the Kafka broker itself.
- cp-schema-registry: the service we use to define the schemas of the messages that are sent between producers and consumers.
- kafka-ui: provides a nice UI that lets us view the Kafka topics that are created, as well as view messages from and send messages to a Kafka topic.
After updating the docker-compose file, just run docker compose up -d to start all the services.