For an organization that has established infrastructure to support services built on top of the Light Platform, chances are it also has existing services built with other platforms or even other languages. To bring security, metrics, logging, tracing, auditing, and client-side service discovery to these existing services, it is a good idea to put light-proxy in front of them to provide cross-cutting concerns and gateway features. Although light-proxy adds an extra network hop, it might even be faster, given that it supports HTTP/2 by default.
A proxy server is a go‑between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.
Light-proxy can be used as a reverse proxy; however, it is considered a smart proxy or gateway because users can enable embedded handlers to perform gateway functions with minimal configuration.
Common use cases for a light proxy server include:
Support Swagger 2.0, OpenAPI 3.0, GraphQL and RPC
With the same code base, light-proxy supports different styles of API interaction by changing only the configuration. In light-4j, the different frameworks share the same set of standard middleware handlers and have their own framework-specific handlers. All of them can be wired in as plugins in the handler.yml or service.yml config file to form different types of gateway service. By wiring in framework-specific handlers, light-proxy can be used with RESTful Swagger 2.0 and OpenAPI 3.0, GraphQL, or RPC backends.
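The wiring described above can be sketched in a handler.yml fragment. This is a hypothetical example following light-4j's handler.yml conventions: the handler class names, aliases, and chain layout here are assumptions for illustration, so consult the light-4j documentation for the handlers shipped with your version.

```yaml
# Hypothetical handler.yml sketch: register middleware handlers and
# chain them in front of the proxy handler. Names are illustrative.
handlers:
  - com.networknt.exception.ExceptionHandler@exception
  - com.networknt.openapi.OpenApiHandler@specification
  - com.networknt.openapi.ValidatorHandler@validator
  - com.networknt.proxy.LightProxyHandler@proxy

chains:
  default:
    - exception      # translate exceptions into error responses
    - specification  # attach the OpenAPI operation to the exchange
    - validator      # validate the request against the spec
    - proxy          # forward the request to the backend

paths:
  - path: '/'
    method: 'GET'
    exec:
      - default
```

Swapping the OpenAPI handlers for their Swagger 2.0, GraphQL, or RPC counterparts is what turns the same code base into a different type of gateway.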
A reverse proxy server can act as a “traffic cop” sitting in front of your backend servers and distributing client requests across a group of servers in a manner that maximizes speed and capacity utilization while ensuring no one server is overloaded, which can degrade performance. If a server
goes down, the load balancer redirects traffic to the remaining online servers.
Reverse proxies can compress inbound and outbound data, which speeds up the flow of traffic between clients and servers. They can also perform additional tasks such as TLS termination to take the load off your web/API servers, thereby boosting their performance. With HTTP/2, data between the client and the proxy server travels in binary frames and headers are compressed, which saves bandwidth and improves performance even when the backend service still speaks HTTP/1.1.
By intercepting requests headed for your backend servers, a reverse proxy server protects their identities and acts as an additional defense against security attacks. It also ensures that multiple servers can be accessed from a single record locator or URL regardless of the structure of your local area network. In the API world, this means the proxy server is responsible for authorizing the client and verifying its scopes. If the backend service has a security implementation other than OAuth 2.0, the proxy can inject a handler to connect to the backend securely.
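Enabling OAuth 2.0 verification at the proxy is typically a matter of a few flags in the security config. This is a hedged sketch in the shape of light-4j's security.yml; the key names follow light-4j conventions but may differ between versions, so treat them as assumptions.

```yaml
# Hypothetical security.yml sketch: verify JWTs and scopes at the
# proxy so the backend never sees an unauthenticated request.
# Key names are assumptions; verify against your light-4j version.
enableVerifyJwt: true     # reject requests without a valid JWT
enableVerifyScope: true   # match token scopes against the endpoint
logJwtToken: false        # avoid writing bearer tokens to the log
```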
The light-4j metrics middleware handler can be placed into the request/response chain to collect successful requests, failed requests, and response times from both the client_id and the service_id perspective. From a Grafana dashboard, a service owner can see how many clients are accessing the service, along with the response-code distribution, volume, and response time. A client owner can see how many APIs the client is calling and the corresponding response-code distribution, volume, and response time.
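To feed those dashboards, the metrics handler is pointed at a time-series database. The sketch below follows the shape of light-4j's metrics.yml with an InfluxDB backend; the host, port, and database values are placeholders, and the key names are assumptions based on light-4j conventions.

```yaml
# Hypothetical metrics.yml sketch: push request/response metrics to
# InfluxDB so Grafana can chart them per client_id and service_id.
enabled: true
influxdbProtocol: http
influxdbHost: localhost   # placeholder; point at your InfluxDB host
influxdbPort: 8086
influxdbName: metrics     # target database name
reportInMinutes: 1        # how often aggregated metrics are flushed
```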
Centralized Logging and auditing
The audit handler can be enabled in the request/response chain to intercept the request and response and log them to audit.log or a database, depending on the audit handler implementation. It gives insight into how the existing API is accessed and provides auditing information in the same format as other services built on top of the light-4j frameworks.
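Which fields end up in audit.log is driven by the audit config. This is a hedged sketch in the shape of light-4j's audit.yml; the header and field names listed are assumptions chosen for illustration.

```yaml
# Hypothetical audit.yml sketch: select which request headers and
# audit fields are written to audit.log. Names are assumptions.
headers:
  - X-Correlation-Id
  - X-Traceability-Id
audit:
  - endpoint
  - clientId
statusCode: true      # include the response status code
responseTime: true    # include the measured response time
```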
Distributed Tracing with OpenTracing API
You can wire in the Jaeger tracer startup hook provider and the OpenTracing handler in the proxy to collect tracing information across multiple distributed services.
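Registering the tracer usually happens through a startup hook in the service config. The fragment below is a hypothetical sketch in the shape of light-4j's service.yml; the provider class name and structure are assumptions, so check the light-4j tracing module for the exact entries.

```yaml
# Hypothetical service.yml excerpt: register a Jaeger tracer via a
# startup hook so the OpenTracing handler can record spans.
# Class names are assumptions.
singletons:
  - com.networknt.server.StartupHookProvider:
      - com.networknt.tracing.JaegerStartupHookProvider
```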
Each API style performs schema validation against the request, handled by that framework's validator handler. For example, if the existing backend service is a RESTful API, you can create a Swagger specification and enable the swagger handler and validator handler to validate each request before it reaches the backend service. This can dramatically reduce the validation load on the backend service and avoids exposing the backend business logic.
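To make the mechanism concrete, here is a minimal OpenAPI 3.0 fragment of the kind the validator handler would enforce. The path and parameter are hypothetical; with a spec like this in place, a request with a missing or out-of-range `limit` is rejected at the proxy and never reaches the backend.

```yaml
# Hypothetical OpenAPI 3.0 fragment enforced by the validator handler.
openapi: 3.0.0
info:
  title: Example backend
  version: 1.0.0
paths:
  /v1/pets:
    get:
      parameters:
        - name: limit
          in: query
          required: true          # request without limit is rejected
          schema:
            type: integer
            maximum: 100          # limit > 100 is rejected
      responses:
        '200':
          description: a paged list of pets
```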
You can enable the rate limiting handler on the proxy server to ensure that high volumes of requests are throttled. It is normally only enabled if your service is exposed to the Internet through the proxy server.
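Throttling is configured in the limit config. This is a hedged sketch following the shape of light-4j's limit.yml; the key names match older light-4j versions and may differ in yours, so treat them as assumptions.

```yaml
# Hypothetical limit.yml sketch: cap concurrent requests at the proxy.
enabled: true
concurrentRequest: 1000   # max requests processed at the same time
queueSize: -1             # -1 means an unbounded overflow queue
```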
Static IP service
As most light services are Dockerized and deployed in the cloud, there are no static IP addresses or port numbers available. For native mobile applications or single-page applications, it is easier to address a reverse proxy server that has a static IP address and provides service discovery to forward the request to the right IP address and port number. (work in progress)
Serve static content
Like the static IP service, the proxy server can also act as a web server, providing static content to serve a single-page application and its associated assets. For a single site, the path resource handler can be used; for multiple sites, the virtual host handler should be used. Normally, light-router is more suitable for this use case, as it is designed to be placed closer to the consumer, and the SPA is the consumer of the backend services.
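For the single-site case, the path resource handler can be wired into handler.yml like any other handler. The excerpt below is a hypothetical sketch; the class name, alias, and path mapping are assumptions, so check the light-4j resource module for the exact names and configuration.

```yaml
# Hypothetical handler.yml excerpt: serve static files for a single
# site through a path resource handler. Names are assumptions.
handlers:
  - com.networknt.resource.PathResourceHandler@resource

paths:
  - path: '/'
    method: 'GET'
    exec:
      - resource
```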
In summary, light-proxy provides the features of generic reverse proxy servers like Nginx or HAProxy while offering better performance and many cross-cutting concerns that other proxies don't have.
To learn how to configure each feature in the configuration files, please refer to Configuration and Tutorial. To find out all the deployment options and choose the right one, please refer to Artifact.