
DEV Community

Vladimir Romanov

Posted on • Originally published at kerno.io
Kubernetes Services

Introduction to Kubernetes Services

A Kubernetes Service ties pods and nodes together by providing the “plumbing” within a cluster. Consider a set of nodes that contain different pods, with an application running within every pod. The nature of a pod is such that it can be deleted at any time; the term commonly used to describe this behavior is ephemeral. In traditional systems, you’d typically assign an IP address to every machine that is part of a larger system in order to send inputs and collect outputs. The pod is a virtual system with similar capabilities: it is assigned an IP address through which other hosts can exchange information. As you may have already guessed, the challenge is that a pod can “die” at any time. The solution to this problem is a Service that sits in front of a pod, or a set of pods, and exposes a single stable IP address, redirecting traffic to the pods that are working and away from the ones that have been shut down. In other words, Kubernetes Services provide stable IP assignment and load balancing for applications that are scaled horizontally across the cluster.
Figure 1 — Kubernetes Services | K8S Services Fundamentals

As briefly stated above, the second challenge is balancing the traffic between redundant pods. In Kubernetes, it is common to run multiple pods with the same application in order to ensure high availability of a service. Depending on the application’s needs, you may choose to run it on ten different pods; in that case, you’ll have a load-balancing service that splits the traffic between those pods as you specify. The simplest policy is to send each request to the next available pod. However, that’s not the extent of a load balancer. In many scenarios, it’s important to keep a client or service connected to the same endpoint; the load balancer will then ensure that traffic from the same source goes to the same pod. Furthermore, a load balancer also tracks the health status of the underlying pods and allocates traffic accordingly; if a pod is terminated, the load balancer redirects its traffic to the healthy ones. Note that these concepts are similar to the load balancers we’ve covered on the AWS side, e.g., the AWS Application Load Balancer.

Figure 2 — Kubernetes Services | K8S Services Load Balancer Concepts

Kubernetes Services — ClusterIP

When you don’t specify a service type within your YAML file in Kubernetes, ClusterIP is the default type used. Here’s a walkthrough of how the ClusterIP service works:
You’re deploying two pods within a node: one pod runs an application with port 4000 open, while the other exposes port 8000. You’re also going to need a second node with a replica of those pods. Note that the second node is going to be assigned a different IP address, as we had previously discussed.
On a side note, you can find the IP address of your pods by using the following command:

kubectl get pod -o wide

If you’re looking to capture all namespaces, you can add the following modifier to the command above:

kubectl get pod -o wide --all-namespaces

Figure 3 — Kubernetes Services | ClusterIP pod IP addresses

Getting back to ClusterIP… Our microservice is deployed across two nodes, with each node containing 4 pods that expose 2 ports, 4000 and 8000. When a client sends a request to the K8S cluster, it’s first handled by what’s called an Ingress. The Ingress forwards the request to the appropriate Service, which then distributes the packet to the right node / pod. Here’s a simplified diagram of how this is handled in this particular example:
Figure 4 — Kubernetes Services | ClusterIP K8S Service
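To make the entry point concrete, here is a hedged sketch of an Ingress resource that routes external requests to a Service inside the cluster. The hostname and service name are illustrative, and an ingress controller must be installed in the cluster for this to take effect:

```yaml
# Illustrative Ingress: routes requests for app.example.com to a
# ClusterIP Service inside the cluster. Names here are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kerno-app-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kerno-app-service   # an existing Service in the cluster
            port:
              number: 80
```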

It’s important to note that a service is not a pod; it’s not something you need to run an application on to handle the requests between the Ingress and the nodes. Kubernetes has built-in service types, including ClusterIP, that handle these requests out of the box as outlined above. Note that you’ll specify the services that need to be accessed via the YAML file configuration by giving them a unique name and port.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kerno-app-prod        # resource names must be lowercase (DNS-1123)
spec:
  replicas: 4
  selector:
    matchLabels:
      app: kernoApp
  template:
    metadata:
      labels:
        app: kernoApp
    spec:
      containers:
      - name: kerno-app-container
        image: kerno-app-image
        ports:
        - containerPort: 8080

How Does the ClusterIP Kubernetes Service Know Where to Forward the Request?

You’ll notice in the figure above a set of key-value pairs specified for our application and pods. The specific line we’re looking for falls under the “selector” specifier. In this case, we’ve set the “app” value to “kernoApp.” On the service side, we’ll need to specify a matching key-value pair that identifies which pods this service is tied to. In other words, in this example, we’ll need to create a service that calls out the same key-value pair, “app: kernoApp.” As mentioned above, if you don’t specify a type for your service, the Kubernetes engine will default it to ClusterIP.

The second, and last, item involved in forwarding a request is the port number. When a request comes into the ClusterIP Service, it arrives on the service’s port; the service then forwards it to the matching target port on one of the pods connected to the service.
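Putting the selector and the port together, a minimal sketch of a matching ClusterIP Service could look like this (the Service name is hypothetical; the selector and target port mirror the Deployment above):

```yaml
# Illustrative ClusterIP Service for the Deployment shown earlier.
apiVersion: v1
kind: Service
metadata:
  name: kerno-app-service
spec:
  type: ClusterIP        # the default type; shown here for clarity
  selector:
    app: kernoApp        # matches the pods' "app: kernoApp" label
  ports:
  - port: 80             # port the Service listens on
    targetPort: 8080     # containerPort on the matched pods
```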

Other Nodes & the ClusterIP Kubernetes Service

Kubernetes clusters can become quite complex with a variety of services. Data storage is a service that is required in most modern software applications. When it comes to ClusterIP, the same idea applies to a database. For example, your node / pod may need to access a database to store or retrieve information. In this case, a ClusterIP service will be created in front of the MongoDB pods. When a request to store or retrieve data is made via the unique key-value pairs of the database service, the ClusterIP service will direct the traffic accordingly. It’s important to note that databases operate differently than applications; there’s an extra layer of complexity that ensures the read and write operations don’t happen concurrently, the data is stored in a single instance, the backup services create images that are reliable and accessible, etc. In other words, the image below is an oversimplification of what happens to the data versus an application.
Figure 5 — Kubernetes Services | ClusterIP MongoDB Interaction

ClusterIP MultiPort Services

We’ve discussed an example in which ClusterIP receives a single stream of requests that is distributed across multiple nodes and pods. A multi-port service instead accepts several streams of traffic, each addressed to a different port. Let’s take a look at an example!
Figure 6 — Kubernetes Services | ClusterIP MultiPort Services

Notice that a different service sends information to the MongoDB service nodes. Although we’ve illustrated a single pod within each node, examples of services that would access these nodes are a data transformation application or a metrics service.
The important point here is that you’ll need to specify two ports on the ClusterIP service, hence the name MultiPort. Each port will service requests coming in on that specific port; the service will redirect the packets accordingly.
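A hedged sketch of such a multi-port Service is shown below. The names and port numbers are illustrative; note that Kubernetes requires each port entry to be named when a Service exposes more than one port:

```yaml
# Illustrative multi-port ClusterIP Service in front of MongoDB pods.
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - name: mongodb         # client traffic to the database
    port: 27017
    targetPort: 27017
  - name: metrics         # e.g., traffic for a metrics exporter
    port: 9216
    targetPort: 9216
```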

Kubernetes Services — Headless Services

A ClusterIP service can be considered the “head” of a cluster of nodes. A headless service is thus one where the request is made directly to a specific pod.
Why Would an Application Need to Communicate with a Specific Pod?
We’ve used a database service as an example above, explaining that there’s “more complexity than discussed here.” The reality of databases is that their clusters of services are structured differently. The basic premise is that a database is going to have a master replica that the users (applications) can write to. For availability purposes, other database nodes will be “copies” that can be read from but not written to. This is done to provide access to the database without compromising the integrity of the data within. This schema allows applications to write to a single, specific pod and ignore the others. Internally, the master node will propagate copies of the database to the worker nodes, from which other services and applications can read the data.
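A headless Service is declared by setting clusterIP to None. Here is a minimal sketch, with illustrative names; because no virtual IP is assigned, a DNS lookup of the service name returns the individual pod IPs, letting a client address a specific pod directly:

```yaml
# Illustrative headless Service: clusterIP: None is what makes it headless.
apiVersion: v1
kind: Service
metadata:
  name: mongodb-headless
spec:
  clusterIP: None         # no virtual IP; DNS resolves to the pod IPs
  selector:
    app: mongodb
  ports:
  - port: 27017
    targetPort: 27017
```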
Figure 7 — Kubernetes Services | Headless Services

Kubernetes Services — NodePort

You’ll often see a nodePort specification within a YAML file defined for Kubernetes applications. The NodePort Service is a wrapper around ClusterIP that opens up a port on every node. It’s important to note that, by default, this value must fall between 30000 and 32767; any nodePort value outside of that range won’t be accepted.
The NodePort service spans across the nodes and thus opens them up to access via a specified IP and port. It’s important to note that this isn’t the most secure way of specifying your infrastructure, as you’re opening up direct ports to the nodes of the underlying applications / services.
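A hedged sketch of a NodePort Service follows; the value 30080 is illustrative and simply falls within the default 30000–32767 range:

```yaml
# Illustrative NodePort Service: exposes the pods on every node's IP.
apiVersion: v1
kind: Service
metadata:
  name: kerno-app-nodeport
spec:
  type: NodePort
  selector:
    app: kernoApp
  ports:
  - port: 80              # cluster-internal port
    targetPort: 8080      # containerPort on the pods
    nodePort: 30080       # static port opened on every node
```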

Kubernetes Services — LoadBalancer

Every cloud provider (e.g., AWS, GCP, Azure) has its own load balancer service. Depending on which provider you’re deploying your K8S application to, the LoadBalancer Service will access the native implementation of that cloud service. In other words, the K8S engine will utilize the APIs of the load balancer of the provider upon which it is deployed.
The LoadBalancer service is an extension of the NodePort service, which is itself an extension of the ClusterIP service.
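In the manifest, the only required change is the service type; a minimal sketch, with illustrative names, might look like this. On a managed cloud, applying it prompts the provider to provision its native load balancer (e.g., an ELB on AWS) in front of the nodes:

```yaml
# Illustrative LoadBalancer Service: the cloud provider provisions
# its native load balancer and routes traffic to the selected pods.
apiVersion: v1
kind: Service
metadata:
  name: kerno-app-lb
spec:
  type: LoadBalancer
  selector:
    app: kernoApp
  ports:
  - port: 80
    targetPort: 8080
```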

Conclusion on Kubernetes Services

There are four types of Kubernetes Services you should be familiar with: ClusterIP, Headless, NodePort, and LoadBalancer. They aren’t mutually exclusive; you’ll likely have load-balancing functionality in every application. The headless service is a specific type of service used in edge cases, such as databases, where you need to reach a single, specific pod within a cluster. The last note is that you shouldn’t use NodePort outside of basic testing scenarios, as it exposes your application to vulnerabilities.
