There are two ways to implement a complete distributed microservices application with Light in a Kubernetes cluster: Embedded and Sidecar.
If you use Java 8 or Java 11 to implement your service, you can leverage one of the Light frameworks, which embed the cross-cutting concerns in the request/response chain.
The other option is to use the HTTP Sidecar container in the same pod to address the cross-cutting concerns at the network level.
The sidecar is packaged and deployed as a separate module that handles the cross-cutting concerns for the main container/service in the same pod. In this case, the main service only needs to care about the HTTP request/response and the business logic.
Ingress traffic: Client API requests first reach the sidecar service. The sidecar acts as a proxy that applies light client features, including OpenAPI schema validation, observability, monitoring, logging, JWT verification, etc., and then forwards the request to the main service.
Egress traffic: For outbound calls, the main service calls the sidecar service first. In this case, the sidecar acts as a router that applies light client features, including service discovery, SSL handshake, JWT token management, etc., and then forwards the request to the target server API.
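The egress flow above can be sketched from the main service's point of view. This is a minimal illustration, assuming Java 11's `java.net.http` client, a hypothetical local sidecar port of 9445, and a `service_id` header naming the target service; check your http-sidecar configuration for the actual port and routing convention.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class EgressExample {
    // Hypothetical local sidecar address; the real port comes from your
    // http-sidecar configuration.
    static final String SIDECAR = "https://localhost:9445";

    // Build an egress request addressed to the local sidecar. The target
    // service is named in a header; the sidecar resolves the real address
    // via service discovery and handles the SSL handshake and JWT token
    // management before forwarding the call.
    static HttpRequest egressRequest(String path, String serviceId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(SIDECAR + path))
                .header("service_id", serviceId)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // The main service never contacts the downstream API directly.
        System.out.println(egressRequest("/v1/pets", "com.networknt.petstore-1.0.0").uri());
    }
}
```

Because the sidecar owns discovery and security, the main service's outbound code stays this simple regardless of where the downstream API actually runs.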
If an organization has decided to standardize on the sidecar approach, we highly recommend the light-4j frameworks for backend API implementation, as they integrate smoothly with the HTTP Sidecar when Java 8 or Java 11 is the target language. However, because the sidecar is deployed independently, the backend API can be built with any language and framework.
You can deploy the HTTP Sidecar in the same container as the backend API or in a separate container. We recommend the separate-container approach based on the analysis in the deployment patterns.
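The separate-container pattern can be sketched as a two-container pod. This is a hedged example: the image tags, ports, and names below are placeholders, not the official manifest.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: petstore                             # hypothetical pod name
  labels:
    app: petstore
spec:
  containers:
    # The sidecar owns the pod's exposed port; all ingress traffic lands here.
    - name: http-sidecar
      image: networknt/http-sidecar:latest   # hypothetical image tag
      ports:
        - containerPort: 8443
    # The backend API listens only inside the pod and is reached by the
    # sidecar over localhost; it exposes no pod port of its own.
    - name: petstore-api
      image: example/petstore-api:1.0.0      # hypothetical backend image
```

Because both containers share the pod's network namespace, the sidecar and the backend API talk over localhost with no extra service wiring.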
When the http-sidecar is used, all traffic to and from the pod should go through the sidecar, and a network policy should be defined to control access.
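A NetworkPolicy enforcing that rule might look like the following sketch, assuming the hypothetical `app: petstore` label and sidecar port 8443 from your deployment; only the sidecar port is admitted, so the backend API cannot be reached from outside the pod.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: petstore-sidecar-only      # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: petstore                # hypothetical pod label
  policyTypes:
    - Ingress
  ingress:
    - ports:
        # Admit traffic only on the sidecar's port; the backend API's
        # localhost port is never exposed to the cluster network.
        - protocol: TCP
          port: 8443
```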
There are some special considerations for configuring the http-sidecar when deploying it alongside a backend API in the same pod in a Kubernetes cluster.