A service mesh is a dedicated infrastructure layer that provides network connectivity, security, and observability for microservices within a distributed system. It works by abstracting away service-to-service communication complexities, such as load balancing, circuit breaking, traffic shifting, and retry mechanisms.
Defining a service mesh requires first defining microservices. If you don’t know how microservices work, then an explanation of a service mesh is like comparing flying first class vs. coach when you’ve never heard of an airplane. The two go together. A service mesh is a technology created to make microservices-based applications run better, though some argue it often complicates things instead.
Microservices are a modern approach to software application architecture where an application is split into loosely coupled smaller components, known as “services.” Collectively, these microservices provide the overall application functionality. This approach stands in contrast to the traditional “monolithic” application architecture that combines all functionality into a single piece of software.
Netflix is a well-known example of microservices. A decade ago, Netflix was a single, unified, gigantic software application. Every feature of Netflix resided inside one massive codebase. The problem with this was that modifying one part of the app meant redeploying the entire thing—not a desirable situation for a busy and commercially significant piece of software.
After migrating to a microservices architecture, each area of Netflix, from content management to account management, players, and so forth, exists as its own microservice. Actually, if we want to get really granular here, each one of these areas consists of multiple microservices. Developers can work on each microservice in isolation. They can change them, scale them, or reconfigure them without concern for their impact on other microservices. In theory, if one microservice fails, it does not bring the rest of the application down with it (though there are outlier scenarios).
With a sense of microservices architecture in mind, consider some of the challenges that could emerge when trying to make a microservices-based application function reliably. The architecture, while revolutionary in its ability to separate applications into independent services, brings with it a number of difficulties.
Communication between microservices, in particular, can be problematic without some mechanism to ensure that services can find one another, know how to communicate, and give admins a sense of what’s happening inside the app. For example, how does the streaming microservice in Netflix know where to look to find information about a subscriber’s account? That’s where a service mesh comes into the picture. However, in fairness to the folks who aren’t fans of a service mesh, it’s critical to point out that you don’t strictly need one for this.
A service mesh is a layer of infrastructure that manages communications between microservices over a network. It controls requests for services inside the app. A service mesh also typically provides service discovery, failover, and load balancing, along with security features like encryption.
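The service discovery just mentioned can be illustrated with a toy in-memory registry. This is a minimal sketch, not any real mesh’s API; the class and service names are invented for the example, and the random pick stands in for real load balancing across healthy instances.

```python
import random

class ServiceRegistry:
    """Toy registry: maps a service name to its live instance addresses."""

    def __init__(self):
        self._instances = {}  # service name -> list of "host:port" strings

    def register(self, name, address):
        self._instances.setdefault(name, []).append(address)

    def deregister(self, name, address):
        # In a real mesh this happens automatically when health checks fail.
        self._instances.get(name, []).remove(address)

    def resolve(self, name):
        """Pick one registered instance at random (naive load balancing)."""
        instances = self._instances.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        return random.choice(instances)

# Two replicas of a hypothetical "accounts" service register themselves;
# a caller resolves the name instead of hard-coding an address.
registry = ServiceRegistry()
registry.register("accounts", "10.0.0.5:8080")
registry.register("accounts", "10.0.0.6:8080")
addr = registry.resolve("accounts")
```

Callers never need to know where “accounts” lives; if an instance fails, deregistering it is all that’s needed for traffic to fail over to the survivors.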
A service mesh’s job is to add security, observability, and reliability to a microservices-based system. It achieves this through the use of proxies known as “sidecars,” which attach to each microservice (there are “sidecarless” service meshes based on eBPF as well). Sidecars operate at layer 7 of the OSI stack. If the application is container-based, a sidecar runs alongside each container or virtual machine (VM). The proxies are then organized into a “data plane” and a “control plane.”
The data plane comprises services that run next to their sidecar proxies. For each service/sidecar pair, the service deals with the application’s business logic, while the proxy sits between the service and other services in the system. The sidecar proxy handles all traffic going to, and away from, the service. It also provides connection functionality such as Mutual Transport Layer Security (mTLS), which lets each service in the request/response message flow validate the other’s certificate.
The control plane is where admins interact with the service mesh. It deals with proxy configuration and control, handling the administration of the service mesh, and providing a way to set up and coordinate the proxies. Admins work through the control plane to enforce access control policies and define routing rules for messages traveling between microservices. The control plane may also enable the export of logs and other data related to microservice observability.
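That relationship between control plane and proxies can be sketched as a simple publish-and-apply loop. This is a hedged illustration, not any real mesh’s interface: the classes, the route/allow vocabulary, and the service names are all invented for the example.

```python
class Proxy:
    """Toy data-plane proxy that holds whatever config it was last pushed."""

    def __init__(self, service):
        self.service = service
        self.config = {}

    def apply(self, config):
        self.config = dict(config)

class ControlPlane:
    """Holds the desired state and pushes it to every attached proxy."""

    def __init__(self):
        self.proxies = []
        self.desired = {"routes": {}, "allow": set()}

    def attach(self, proxy):
        self.proxies.append(proxy)
        proxy.apply(self.desired)

    def allow(self, source, destination):
        # Access-control policy: source may call destination.
        self.desired["allow"].add((source, destination))
        self._push()

    def set_route(self, source, destination):
        # Routing rule: requests for `source` go to `destination`.
        self.desired["routes"][source] = destination
        self._push()

    def _push(self):
        for proxy in self.proxies:
            proxy.apply(self.desired)

cp = ControlPlane()
streaming, accounts = Proxy("streaming"), Proxy("accounts")
cp.attach(streaming)
cp.attach(accounts)
cp.allow("streaming", "accounts")        # admin defines an access policy
cp.set_route("accounts", "accounts-v2")  # admin defines a routing rule
```

The admin talks only to the control plane; every proxy in the data plane receives the updated policy without being configured individually.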
Working together, the service mesh’s data plane and control plane are what make this combination of security, observability, and reliability possible.
It is easy to get confused between microservices and a service mesh. The microservices are the component parts of a microservices-based system. The service mesh can connect them. As the name suggests, the service mesh lays over the microservices like a connective fabric. Put another way, a service mesh is a “pattern” that can be implemented to manage the interconnections and relevant logic that drives a microservices-based application.
Monolithic applications tend not to fare well as they grow larger and more complex, so at some point it starts to make sense to divide the application into microservices. This approach aligns well with modern application development methodologies like agile, DevOps, and continuous integration/continuous delivery (CI/CD). Software development teams and their partners in testing and operations can focus on new code for individual, independent microservices, which is usually better for both the software and the overall business.
As the number of services in the microservices-based app expands, however, it can become challenging to keep up with all the connections. Stakeholders may struggle to track how each service needs to connect and interact with others. Monitoring service health gets difficult. With dozens, or even hundreds of microservices to connect and oversee, reliability may become an issue, as well.
Service mesh addresses these problems by allowing developers to handle service-to-service communications in a dedicated layer of the infrastructure. Instead of dealing with hundreds of connections, one at a time, developers can manage the entire application through proxies in the control plane. The service mesh provides efficient management and monitoring functionality.
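One concrete example of managing traffic in that dedicated layer is weighted traffic shifting, such as sending a small share of requests to a canary release. The sketch below is illustrative only; the version names and weights are assumptions for the example.

```python
import random

def make_router(weights):
    """Build a router that picks a version in proportion to its weight.

    weights: mapping of version name -> relative share of traffic.
    """
    versions = list(weights)
    shares = [weights[v] for v in versions]
    def route(request):
        return random.choices(versions, weights=shares)[0]
    return route

# Shift roughly 10% of traffic to a hypothetical canary build.
route = make_router({"app-v1": 0.9, "app-v2-canary": 0.1})
hits = {"app-v1": 0, "app-v2-canary": 0}
for i in range(1000):
    hits[route(f"req-{i}")] += 1
```

In a real mesh an operator would change only the weights in the control plane; every sidecar would then shift traffic without any application code changing.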
In a nutshell, the core benefits of a service mesh are security, reliability, and observability for service-to-service traffic, with service discovery, load balancing, and failover handled in one dedicated layer.
A service mesh can present its own difficulties, however. The mesh’s layers become another element of the system that requires infrastructure, maintenance, support, and so forth. They can be a resource drain, affecting overall network and hardware performance. Adding sidecar proxies can inject more complexity into an already complex environment, and routing every service call through a sidecar adds a hop that can slow the application down. Integrating multiple microservices architectures can also be problematic. And the network itself still needs management; the service mesh only handles the messages between services.
A service mesh can help you achieve success with microservices. The technology provides a much-needed layer of infrastructure to handle communications between services and related administrative functions. Implemented the right way, a service mesh provides security, reliability and observability for microservices-based applications and systems.