
Consuming a Kubernetes-Deployed Microservice Using Services

Mourad KHAIRANE
Nov 16, 2020 · 5 min read


Hello Folks,

In the previous article, we discussed how to deploy our simple application into a K8S cluster. Feel free to give it a quick look, because it sets the stage for this article, in which we will explain how to consume the application we deployed there.

We left the last article with this challenge in mind:

How can we consume our /hello endpoint from two running instances of the same application with different IP addresses?

Obviously, the automatic answer is to place a load balancer, and it's the right answer. But it's not the only way to access a deployed application in K8S. In fact, a LoadBalancer is just one of the four types of services available in K8S. These four types are:

NodePort: here we expose our application on a port of the node in which our PODs are running.

ClusterIP: here our application is only accessible in the K8S private network and can’t be accessed from the outside world.

LoadBalancer: the type most used in production, because we can have multiple instances running on different nodes inside the K8S cluster. To use this type, we have to route the traffic from the external IP address to the cluster network in order to deliver packets.

No service type: used to statically expose a service. An example of that would be a database or a third-party service provider.

One important thing to keep in mind here is that the first three types of K8S service play the role of a load balancer. In fact, the first two types use an internal load-balancing mechanism, whereas the third type relies on an external load balancer.

For more in-depth details about these service types, I strongly recommend reading the section about publishing services in the official documentation.

So enough theory, let’s expose our application.

Creating a service for the simple-app-deployment

First, we will expose our simple app using the ad-hoc command below:

kubectl expose deployment simple-app-deployment --port 8090

ad-hoc service exposition

After exposing our deployment simple-app-deployment on port 8090, a service object has been created and our application is therefore exposed. To check the creation of the service, we listed the available services in the current namespace, and it shows that our deployed application has indeed been exposed using a ClusterIP service type, which is the default type. We can access it inside the K8S private network using the IP address 10.98.75.253 and port 8090.
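
If you want to reproduce that check yourself, it boils down to the commands below (the ClusterIP assigned in your cluster will of course differ):

kubectl get svc
kubectl describe svc simple-app-deployment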

Consume the /hello endpoint

First, we got inside the K8S cluster, because we can't reach the service network from the outside world when using the ClusterIP service type. Next, we consumed the newly created service, which works perfectly.
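
Since we're running on Minikube, getting inside the cluster and calling the service looks roughly like this (a sketch; 10.98.75.253 is the ClusterIP from the listing above, so replace it with yours):

minikube ssh
curl http://10.98.75.253:8090/hello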

Edit the running service type to NodePort

Now let's change the type of our exposed service to NodePort. One way to do that is to edit the previously created service using the command:

kubectl edit service simple-app-deployment

Edit the service type
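
The edit itself happens in your default editor; the portion of the spec that gets changed probably looks something like this (a sketch, not the exact manifest from the screenshot):

spec:
  type: NodePort
  ports:
  - port: 8090
    nodePort: 32000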

Here we changed the service type to NodePort and set the node access port to 32000. Let's check the new service type by listing the available services.

kubectl get svc

listing available services

As you can see, the type of the service is NodePort and the exposed node port is 32000. Now let's consume the /hello endpoint using the updated service.

consume the /hello endpoint

Since we're using Minikube, we can use minikube ip to extract the IP address of the node hosting the PODs. As you can see, we successfully consumed the /hello endpoint directly using the node IP address.
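
In practice the call looks something like this (a sketch; 32000 is the node port we set earlier):

curl http://$(minikube ip):32000/hello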

Remember that you can always SSH into the node and consume the endpoint using the cluster IP address.

Edit the running service type to LoadBalancer

Now let’s change the running service type to LoadBalancer.

Update the service type and list services

As with the NodePort update, we changed the type of the service to LoadBalancer, and we deleted the nodePort entry on purpose to see whether K8S would assign a node port automatically. As you can see, it does.
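
Again, the edit is only shown as a screenshot; the relevant part of the spec probably ends up looking like this (a sketch, with the nodePort line removed on purpose so K8S assigns one):

spec:
  type: LoadBalancer
  ports:
  - port: 8090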

Now, how can we access our application using this type of service?

The answer is that we can still consume the application from the node directly, or SSH into the node and consume it using the cluster IP address. However, that's not the purpose of the LoadBalancer type of service. This type uses an external load balancer, generally provided by a cloud provider: the traffic coming to the load balancer is directed straight to the PODs, which is the ultimate purpose of this kind of service.

Up to this point, we created the 3 types of simple-app service either by using the ad-hoc expose command or by editing the service object in the ETCD database, which is a key-value database that stores all the objects created in K8S.

Let's see how we can do what we did previously using the recommended way, which is a YAML specification file that describes the desired state for our service.

K8S service specification

Let's break it down to see what's happening in this YAML file (a reconstruction of the full file follows the breakdown).

apiVersion: the version of the API that we can use to create the service object

kind: indicates the kind of object we are creating, here a Service

metadata.name: obviously indicates the name of the service

metadata.namespace: indicates that the service will be created in the development namespace

spec.selector: you remember when I mentioned the label attribute in the previous article? That label is used by the service selector. In fact, any POD with the label app=simple-app will be selected and will therefore receive the traffic routed by my-simple-app-service.

spec.ports.protocol: the network protocol used for transporting packets

spec.ports.port: the port that a client can use to consume the service

spec.ports.targetPort: the target port on the PODs; the default value is the same as spec.ports.port.

spec.type: the type of the service, which is either ClusterIP (the default), NodePort, or LoadBalancer.
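
The spec itself is only shown as an image above, so here is what it likely looks like when we put the fields just described together (a sketch; the port values and the type are taken from the earlier examples and may differ from the original file):

apiVersion: v1
kind: Service
metadata:
  name: my-simple-app-service
  namespace: development
spec:
  selector:
    app: simple-app        # matches the label set on our PODs
  ports:
  - protocol: TCP
    port: 8090             # port exposed by the service
    targetPort: 8090       # port the PODs listen on (defaults to port)
  type: NodePort           # or ClusterIP / LoadBalancer

You would then create or update the service with kubectl apply -f service.yaml and check the result with kubectl get svc -n development.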

That's it for this article. You can find the examples used in this article in the GitHub repository: https://github.com/khairaneMurad/demo-simple-api

I hope you enjoyed it and see you for the next one.

Mourad KHAIRANE

Backend & cloud Engineer @SFΞIR | 2x Kubernetes certified