When it comes to REST API specification formats, two important approaches have emerged: the “Design First” and the “Code First” approach to REST API development.
The Code First approach is a more traditional approach to building APIs, with the development of code happening after the business requirements are laid out, eventually generating the documentation from the code. The Design First approach advocates for designing the API’s contract first before writing any code.
Read More »
These days, more and more organizations are moving their applications from private data centers to the public cloud. In this process, they often need to convert an existing monolithic application to microservices. Microservices architecture solves certain problems, but it also has several drawbacks, and organizations adopting it must address a number of issues. Over the years, we have learned many lessons and gained a great deal of experience while working with our customers.
Read More »
Change is happening all around us: new technologies, new methodologies… But how are these changes affecting the ways in which systems are architected, and how do recent developments like patterns and refactoring cause us to think differently about architecture?
When microservices were introduced, one of the promised benefits was that you could make changes to a service easily. Is this still the case when many consumers depend on the service, even though consumers and services are loosely coupled?
Read More »
When discussing service evolution, we have mentioned consumer contracts and recommended building consumer-side regression test cases to ensure that service updates won’t break any consumers. The consumer contract is not a new concept; it was introduced in SOA to address service evolution with XML schemas. The same concept still applies to microservices architecture, which is normally based on JSON and some RPC schemas.
Provider Contract
A provider contract expresses a service provider’s business function capabilities in terms of the set of exportable elements necessary to support that functionality.
Read More »
Introduction
In a distributed system there is the ever-present risk of partial failure. Since clients and services are separate processes, or even reside on different physical servers, a service might not be able to respond in a timely way to a client’s request. A service might be down because of a failure or for maintenance, or it might be overloaded and responding extremely slowly to requests. Distributing services across networks or even data centers further increases the risk of partial failures, especially if many small services interact with each other to form a big application.
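As a rough sketch of one common mitigation, a client can retry a failing call with capped exponential backoff, giving a partially failed service time to recover before the next attempt. The class names and parameters below are illustrative, not part of any real library:

```java
import java.util.concurrent.Callable;

// Illustrative sketch: retry a remote call with capped exponential backoff
// to cope with partial failures in a distributed system.
public class RetryBackoff {

    // Delay before retry attempt n (0-based): base * 2^n, capped at capMillis.
    public static long backoffMillis(int attempt, long baseMillis, long capMillis) {
        long delay = baseMillis * (1L << attempt);
        return Math.min(delay, capMillis);
    }

    // Invoke `call`, retrying up to maxAttempts times on any exception.
    public static <T> T callWithRetry(Callable<T> call, int maxAttempts) throws Exception {
        if (maxAttempts < 1) throw new IllegalArgumentException("maxAttempts must be >= 1");
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts - 1) {
                    Thread.sleep(backoffMillis(attempt, 100, 1000));
                }
            }
        }
        throw last;
    }
}
```

A real deployment would pair this with timeouts and a circuit breaker so that retries do not pile more load onto an already overloaded service.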
Read More »
This page describes the lifecycle of a light-4j API. The light-4j API follows a defined lifecycle, starting in the API Server Start phase, moving through the API Running phase, and then, when the API server shuts down, moving to the API Server Shutdown phase.
API Server Start
API Running
API Server Shutdown
The defined lifecycle applies to all ways of running an API: starting it from the command line or an IDE, running it in a Docker container, or running it in a Kubernetes/OpenShift container environment.
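The three phases can be sketched as a tiny state machine, with a JVM shutdown hook driving the final transition; the class and method names here are illustrative, not light-4j APIs:

```java
// Illustrative sketch of the three lifecycle phases as a state machine,
// with a JVM shutdown hook moving the server into the shutdown phase.
public class LifecycleDemo {

    public enum Phase {
        SERVER_START, RUNNING, SERVER_SHUTDOWN;

        // The lifecycle only ever moves forward: start -> running -> shutdown.
        public Phase next() {
            switch (this) {
                case SERVER_START: return RUNNING;
                default:           return SERVER_SHUTDOWN;
            }
        }
    }

    private static volatile Phase phase = Phase.SERVER_START;

    public static void main(String[] args) {
        // Register cleanup that runs on normal JVM exit; command line, Docker stop,
        // and Kubernetes pod termination all end up on this same signal path.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            phase = Phase.SERVER_SHUTDOWN;
            System.out.println("phase: " + phase);
        }));
        phase = phase.next(); // SERVER_START -> RUNNING
        System.out.println("phase: " + phase);
    }
}
```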
Read More »
Networks are unreliable. The networks connecting our clients and servers are, on average, more reliable than consumer-level last miles like cellular or home ISPs, but with enough information moving across the wire, they are still going to fail given enough time. Outages, routing problems, and other intermittent failures may be statistically unusual on the whole, but they are bound to be happening all the time at some ambient background rate.
In microservices architecture, the number of network connections grows roughly quadratically with the number of services, so the risk of network issues is much higher than with monolithic applications.
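The growth in connections can be quantified: with n services, there are at most n(n-1)/2 distinct point-to-point links. A quick sanity check (class name illustrative):

```java
// With n services, the number of possible point-to-point links is
// n * (n - 1) / 2 -- each pair of services can hold one connection.
public class Connections {

    public static int maxLinks(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(maxLinks(5));   // 10
        System.out.println(maxLinks(50));  // 1225
    }
}
```

So splitting one monolith into 50 services raises the worst-case link count from 0 internal network hops to over a thousand, which is why network resilience becomes a first-class concern.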
Read More »
Environment segregation in APIs has two primary goals:
Prevent cross fire
In the vast majority of cases, applications maintain segregation between test and production environments to ensure privacy, data integrity, separation of concerns, etc. To this end, an API framework needs to offer the ability to prevent access across this divide.
For example, test environments should not be able to access production services or resources, nor should production applications accidentally access test services or resources such as persistent stores.
Read More »
When adopting microservices architecture, a traditional relational database might not meet every requirement. For different types of services, different databases are more suitable.
The following points need to be considered.
SQL Database
When an organization moves to microservices architecture from an existing monolithic application built on a SQL database, the existing database running in a data center may be reused with some process updates. For a mission-critical application, it is still recommended to use a commercial database like Oracle or Microsoft SQL Server.
Read More »
Decomposition Patterns
We are constantly asked by our customers how to decompose legacy monolithic applications into microservices.
Decompose by Business Capability
Problem: Microservices architecture is all about making services loosely coupled and applying the single responsibility principle. However, breaking an application into smaller pieces has to be done logically. How do we decompose an application into small services?
Solution: One strategy is to decompose by business capability.
Read More »
There are several benefits to using HTTP/2 instead of HTTP/1.x in microservices architecture.
HTTP/2 is Binary, Instead of Textual
HTTP/2 is Fully Multiplexed, Instead of Ordered and Blocking
HTTP/2 Can Use One Connection for Parallelism
HTTP/2 Uses Header Compression to Reduce Overhead
HTTP/2 Allows Servers to Push Responses Proactively into Client Caches
Please read this article that explains HTTP/2 benefits for microservices.
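As a minimal illustration, the JDK 11+ HttpClient can be configured to prefer HTTP/2, negotiating it where the server supports it and falling back to HTTP/1.1 otherwise:

```java
import java.net.http.HttpClient;

// Minimal sketch: build a JDK 11+ HttpClient that prefers HTTP/2.
// The client negotiates HTTP/2 with servers that support it and
// falls back to HTTP/1.1 otherwise.
public class Http2Demo {

    public static HttpClient buildClient() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
    }

    public static void main(String[] args) {
        // Reports the preferred protocol version configured on the client.
        System.out.println(buildClient().version()); // HTTP_2
    }
}
```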
Read More »
When an organization adopts microservices architecture, a CI/CD pipeline is essential to ensure the quality of the delivery. Unlike common practices for monolithic applications, which involve applying the integration test after the entire application is completed, we need continuous integration from day one.
When using light-platform to build microservices, two use cases need continuous integration:
Often a team of developers focuses on middleware handlers that are shared by numerous microservices.
Read More »
Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers. Each customer is called a tenant. Tenants may be given the ability to customize some parts of the application, such as the color of the user interface (UI) or business rules, but they cannot customize the application’s code.
Multi-tenancy can be economical because software development and maintenance costs are shared. It can be contrasted with single-tenancy, an architecture in which each customer has their own software instance and may be given access to code.
Read More »
In web service architecture, people normally handle JWT token expiration reactively. Here is the flow.
The client sends a request with a JWT token in the header.
The service receives the request and verifies whether the JWT token has expired.
If the token has expired, the service returns 401 - token expired.
When the client receives this error, it goes to the OAuth 2.0 provider to obtain a new token.
The client resends the request with the new token.
If the token is not expired, the service simply proceeds to the next step.
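The reactive flow above can be sketched as follows, with the token provider and request sender as illustrative stand-ins for a real OAuth 2.0 client and HTTP call:

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Illustrative sketch of reactive token renewal: send the request, and only
// when the service answers 401 fetch a fresh token and resend once.
public class ReactiveRenewal {

    public static boolean needsRenewal(int statusCode) {
        return statusCode == 401;
    }

    // tokenProvider stands in for the OAuth 2.0 provider; sendRequest stands
    // in for the HTTP call. Returns the final status after at most one renewal.
    public static int invoke(Supplier<String> tokenProvider,
                             Function<String, Integer> sendRequest) {
        String token = tokenProvider.get();
        int status = sendRequest.apply(token);
        if (needsRenewal(status)) {
            token = tokenProvider.get();   // renew from the OAuth 2.0 provider
            status = sendRequest.apply(token);
        }
        return status;
    }
}
```

The drawback of this reactive approach, and the motivation for proactive renewal, is that every expiry costs an extra round trip before the real request succeeds.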
Read More »
When you are talking about microservices, chances are your existing application is built as web services. These days a lot of people and vendors call these web services microservices, and that is not right.
The following diagram shows the difference between web service and microservices.
As you can see, the traditional web servers are flattened behind an API gateway, and they are normally built on top of the Java EE platform with JAX-RS 1.
Read More »
Most APIs built with light-4j or protected by http-sidecar or light-gateway deal with two types of OAuth 2.0 tokens: Client Credentials or Authorization Code.
To verify a client credentials token, the JwtVerifyHandler should be enough: it verifies the token signature, expiration, and endpoint scope against the specification.
However, when we deal with an authorization code token, we might need to do a little more than the normal JWT token verification, as the authorization code token contains user-related claims, for example, userId, roles, AD groups, etc.
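As a hedged sketch of where those claims live: a JWT payload is just Base64URL-encoded JSON, so once the signature has been verified separately, the user-related claims can be read from the middle segment. The class, sample claims, and token below are illustrative, not light-4j internals:

```java
import java.util.Base64;

// Illustrative sketch: decode the payload segment of a JWT to reach
// user-related claims. This does NOT verify the signature -- signature
// verification is assumed to happen elsewhere before trusting the claims.
public class JwtClaims {

    public static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) throw new IllegalArgumentException("not a JWT");
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    // Build an unsigned sample token carrying hypothetical user claims.
    public static String sampleToken() {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString("{\"alg\":\"none\"}".getBytes());
        String payload = enc.encodeToString(
                "{\"userId\":\"alice\",\"roles\":\"admin\"}".getBytes());
        return header + "." + payload + ".";
    }
}
```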
Read More »
Layered Security
When we are working on API security design, multiple layers need to be carefully considered. Just like an onion, you can peel back layer after layer until you reach the core.
Technical Concerns
First, the JWT verifier handler will ensure that the authorization header has a valid, unexpired token with the correct signature. This allows the consumer to invoke the API at the API level.
Second, the JWT verifier also validates the scope in the JWT token against the scopes defined in the OpenAPI specification to ensure the JWT token has access to the current endpoint.
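A minimal sketch of that scope check, assuming the token's scope claim is a space-separated string; the names are illustrative, not the verifier's actual internals:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: the space-separated scope claim in the token must
// contain the scope the OpenAPI specification declares for the endpoint.
public class ScopeCheck {

    public static Set<String> parseScopes(String scopeClaim) {
        return new HashSet<>(Arrays.asList(scopeClaim.trim().split("\\s+")));
    }

    public static boolean hasScope(String scopeClaim, String requiredScope) {
        return parseScopes(scopeClaim).contains(requiredScope);
    }
}
```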
Read More »
Project teams often ask if we can add a codegen feature to generate client SDK to help consumers invoke the API based on the specification. The SwaggerHub codegen does it, and many users are getting used to the client SDK approach provided by the API.
In my opinion, it is not right to use a client SDK to invoke APIs unless the API is public and a lot of clients are trying to integrate with it.
Read More »
For any enterprise implementing an API platform in the cloud, legacy clients and legacy services need to be brought into the ecosystem. Most organizations have many existing applications that need to consume APIs deployed to the Kubernetes cluster, and some of those applications also provide REST APIs for other applications to leverage.
To help backend APIs address cross-cutting concerns, we have developed the http-sidecar for services deployed to the Kubernetes cluster.
Read More »