Nowadays more and more enterprises use and orchestrate more than one cloud platform to deliver application services. The Liqo project was born in this context: it enables the creation of a multi-cluster environment. In addition, the service mesh is an emerging technology that provides features like observability, reliability, security, and even better load balancing than the one natively offered by Kubernetes. This project aims at simulating a real scenario where a micro-services application such as Online Boutique, provided by Google and composed of multiple cooperating services, needs to scale based on the workload both within a single cluster and across multiple clusters. During the project, you'll design, set up, implement, and compare different scenarios and service mesh solutions, such as a service mesh on top of Liqo or Linkerd.
Before running the demos, you should install some software on your system.
For my tests, I'm going to set up the clusters on virtual machines hosted on CrownLabs.
VMs: Ubuntu Desktop 20.04 with 32 cores, 32 GB of RAM, and 50 GB of disk. NOTE: by default, not all of the disk space is available. You can use these commands to extend the partition:
sudo growpart /dev/vda 2
sudo growpart /dev/vda 5
sudo resize2fs /dev/vda5
Disable swap:
free -h
sudo swapoff -a
# Check
free -h
First things first, I'm going to install some extra tools and all the necessary dependencies on all the VMs:
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release xclip git jq
# Go
wget https://go.dev/dl/go1.18.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.18.linux-amd64.tar.gz
# Step
wget https://dl.step.sm/gh-release/cli/docs-cli-install/v0.19.0/step-cli_0.19.0_amd64.deb
sudo dpkg -i step-cli_0.19.0_amd64.deb
Now, you can check for and install the remaining required software (Docker, kind, kubectl):
# Check
command -v docker
# Install
sudo apt update
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
# Check
command -v kind
# Install
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin/kind
# Check
command -v kubectl
# Install
sudo apt update
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubectl
sudo apt-mark hold kubectl
Finally, install Helm, since liqoctl uses it to configure and install Liqo by means of its Helm chart.
# Check
command -v helm
# Install
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
For these tests, you'll play with a micro-services application provided by Google, which includes multiple cooperating services. Five different scenarios are provided:
- Online Boutique on a basic cluster.
- Online Boutique on a multi-cluster with Liqo.
- Online Boutique on a multi-cluster with Liqo, and Linkerd as Service Mesh provider.
- Online Boutique on a multi-cluster with Linkerd.
- Online Boutique on a cluster with Linkerd.
Before starting the test, you should create the cluster where you'll operate.
# Test 1
sudo kind create cluster --name cluster1 --kubeconfig $HOME/.kube/configC1 --image kindest/node:v1.23.5
sudo chmod 644 $HOME/.kube/configC1
echo "alias lc1=\"export KUBECONFIG=$HOME/.kube/configC1\"" >> $HOME/.bashrc
source $HOME/.bashrc
lc1
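# NOTE: in the following commands, k is used as a shorthand for kubectl (e.g. alias k=kubectl)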
k create ns online-boutique
k apply -f ./kubernetes-manifests/online-boutique/boutique-manifests.yaml -n online-boutique
Once the demo application manifest is applied, you can observe the creation of the different pods:
k get pods -n online-boutique -o wide
And with the command kubectl port-forward
you can forward the requests from your local machine (i.e. http://localhost:8080
) to the frontend service:
kubectl port-forward -n online-boutique service/frontend-external 8080:80
Before deploying the kube-prometheus stack you must start the loadgenerator.
kubectl port-forward -n online-boutique service/loadgenerator 8089
For my test, I'm using 200 users with a spawn rate of 1 user per second.
Now, you can check that locust-exporter is monitoring the loadgenerator resource.
kubectl port-forward -n online-boutique service/locust-exporter 9646
To set up a monitoring system with Grafana and Prometheus, you'll deploy the kube-prometheus stack Helm chart. The stack contains Prometheus, Grafana, Alertmanager, the Prometheus Operator, and other monitoring resources.
kubectl create namespace monitoring
# Add prometheus-community repo and update
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && helm repo update
# Install
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring
Finally, run the following command to confirm your kube-prometheus stack deployment.
kubectl get pods -n monitoring
You'll use the Prometheus service to set up port-forwarding so your Prometheus instance is accessible from outside your cluster. But before that, you should create the ServiceMonitor resource so that Prometheus can scrape the metrics exposed by the locust-exporter.
k -n monitoring apply -f ./kubernetes-manifests/metrics/locust-servicemonitor.yaml
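For reference, the ServiceMonitor tells the Prometheus Operator which Service endpoints to scrape. A minimal sketch of what such a manifest might look like (the names, labels, and port are assumptions; the actual definition is in ./kubernetes-manifests/metrics/locust-servicemonitor.yaml):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: locust-exporter          # assumed name
  labels:
    release: prometheus          # so the kube-prometheus-stack instance selects it
spec:
  namespaceSelector:
    matchNames:
    - online-boutique
  selector:
    matchLabels:
      app: locust-exporter       # assumed Service label
  endpoints:
  - port: metrics                # assumed port name on the locust-exporter Service
    interval: 15s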
kubectl get svc -n monitoring
# wait a few minutes
kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n monitoring 9090
Now, you can import the Grafana dashboard that shows the application status.
# Admin Password
k -n monitoring get secret prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
# Admin User
k -n monitoring get secret prometheus-grafana -o jsonpath="{.data.admin-user}" | base64 --decode ; echo
kubectl port-forward svc/prometheus-grafana -n monitoring 8080:80
xclip -sel clip < ./kubernetes-manifests/grafana-dashboard.json
# Import the json dashboard ./kubernetes-manifests/grafana-dashboard.json
You can deploy the Kubernetes Metrics Server on the cluster you created with the following steps:
- In a terminal window, deploy the Kubernetes Metrics Server by entering:
kubectl apply -f ./kubernetes-manifests/metrics/ms-components.yaml
- Confirm that the Kubernetes Metrics Server has been deployed successfully and is available by entering:
kubectl get deployment metrics-server -n kube-system
Now, you can create the horizontal pod autoscaling resources.
k -n online-boutique apply -f ./kubernetes-manifests/hpa/hpa-manifest-cpu.yaml
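As a reference for what hpa-manifest-cpu.yaml might contain, here is a sketch of a CPU-based HorizontalPodAutoscaler for one of the deployments (the target deployment, replica bounds, and threshold are assumptions):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
  namespace: online-boutique
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # assumed threshold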
Before starting the test, you should create the clusters where you'll operate.
# Test 2
sudo kind create cluster --name cluster2 --kubeconfig $HOME/.kube/configC2 --image kindest/node:v1.23.5
sudo chmod 644 $HOME/.kube/configC2
echo "alias lc2=\"export KUBECONFIG=$HOME/.kube/configC2\"" >> $HOME/.bashrc
sudo kind create cluster --name cluster3 --kubeconfig $HOME/.kube/configC3 --image kindest/node:v1.23.5
sudo chmod 644 $HOME/.kube/configC3
echo "alias lc3=\"export KUBECONFIG=$HOME/.kube/configC3\"" >> $HOME/.bashrc
source $HOME/.bashrc
lc2
chmod +x ./scripts/liqoInstaller.sh
./scripts/liqoInstaller.sh
echo "source <(liqoctl completion bash)" >> $HOME/.bashrc
liqoctl install kind --cluster-name cluster2
lc3
liqoctl install kind --cluster-name cluster3
liqoctl generate peer-command
lc2
# Run the peering command generated above in the cluster2 context to establish the peering with cluster3
Using kubectl, you can also manually obtain the list of discovered foreign clusters:
kubectl get foreignclusters
If the peering has succeeded, you should see a virtual node (named liqo-*) in addition to your physical nodes:
kubectl get nodes
k create ns online-boutique
kubectl label namespace online-boutique liqo.io/enabled=true
k apply -f ./kubernetes-manifests/online-boutique/boutique-manifests-affinity.yaml -n online-boutique
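The -affinity variant of the manifest adds scheduling hints so that some pods can also land on the Liqo virtual node. A hedged sketch of what such an affinity might look like (Liqo labels its virtual nodes with liqo.io/type=virtual-node; the exact weights and placement rules used in the repository's manifest may differ):
# fragment under spec.template.spec of a Deployment
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: liqo.io/type
          operator: In
          values:
          - virtual-node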
Once the demo application manifest is applied, you can observe the creation of the different pods. In the NODE column you can see whether the pods are hosted by the local or the remote cluster:
k get pods -n online-boutique -o wide
When all pods are running you can start the loadgenerator:
kubectl port-forward -n online-boutique service/loadgenerator 8089
For my test, I'm using 200 users with a spawn rate of 1 user per second.
Now, you can check that locust-exporter is monitoring the loadgenerator resource.
kubectl port-forward -n online-boutique service/locust-exporter 9646
Now, you can deploy the kube-prometheus stack Helm chart.
kubectl create namespace monitoring
# Add prometheus-community repo and update
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && helm repo update
# Install
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring
Finally, run the following command to confirm your kube-prometheus stack deployment.
kubectl get pods -n monitoring
In order to scrape metrics exposed by the locust-exporter you should create the service monitor resource.
k -n monitoring apply -f ./kubernetes-manifests/metrics/locust-servicemonitor.yaml
You can see the metrics by importing the Grafana dashboard.
# Admin Password
k -n monitoring get secret prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
# Admin User
k -n monitoring get secret prometheus-grafana -o jsonpath="{.data.admin-user}" | base64 --decode ; echo
kubectl port-forward svc/prometheus-grafana -n monitoring 8080:80
xclip -sel clip < ./kubernetes-manifests/grafana-dashboard.json
# Import the json dashboard ./kubernetes-manifests/grafana-dashboard.json
You can deploy the Kubernetes Metrics Server on the cluster you created with the following commands:
lc2
kubectl apply -f ./kubernetes-manifests/metrics/ms-components.yaml
lc3
kubectl apply -f ./kubernetes-manifests/metrics/ms-components.yaml
lc2
Confirm that the Kubernetes Metrics Server has been deployed successfully and is available by entering:
kubectl get deployment metrics-server -n kube-system
Now, you can create the horizontal pod autoscaling resources.
k -n online-boutique apply -f ./kubernetes-manifests/hpa/hpa-manifest-cpu.yaml
Before starting the test, you should create the clusters where you'll operate.
# Test 3
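These clusters are created from kind config files. A minimal sketch of what kind-manifestC4.yaml might look like, assuming two nodes and a dedicated pod CIDR (10.10.0.0/16 for cluster4; cluster5 would use 10.20.0.0/16, matching the iptables rules below):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
networking:
  podSubnet: "10.10.0.0/16"   # assumed; must match the CIDRs referenced in the iptables rules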
sudo kind create cluster --name cluster4 --kubeconfig $HOME/.kube/configC4 --config ./kubernetes-manifests/kind/kind-manifestC4.yaml
sudo chmod 644 $HOME/.kube/configC4
echo "alias lc4=\"export KUBECONFIG=$HOME/.kube/configC4\"" >> $HOME/.bashrc
sudo kind create cluster --name cluster5 --kubeconfig $HOME/.kube/configC5 --config ./kubernetes-manifests/kind/kind-manifestC5.yaml
sudo chmod 644 $HOME/.kube/configC5
echo "alias lc5=\"export KUBECONFIG=$HOME/.kube/configC5\"" >> $HOME/.bashrc
source $HOME/.bashrc
lc4
chmod +x ./scripts/liqoInstaller.sh
./scripts/liqoInstaller.sh
echo "source <(liqoctl completion bash)" >> $HOME/.bashrc
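# The following rules prevent kind's masquerade agent from NATing traffic destined to the
# remote cluster's pod CIDR, so cross-cluster pod-to-pod traffic keeps its original source IP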
export NODE1=$(sudo docker ps | grep cluster4 | head -n1 | cut -d " " -f1)
sudo docker exec -it $NODE1 iptables -t nat -I KIND-MASQ-AGENT 2 --dst 10.20.0.0/16 -j RETURN
export NODE2=$(sudo docker ps | grep cluster4 | tail -n1 | cut -d " " -f1)
sudo docker exec -it $NODE2 iptables -t nat -I KIND-MASQ-AGENT 2 --dst 10.20.0.0/16 -j RETURN
export NODE1=$(sudo docker ps | grep cluster5 | head -n1 | cut -d " " -f1)
sudo docker exec -it $NODE1 iptables -t nat -I KIND-MASQ-AGENT 2 --dst 10.10.0.0/16 -j RETURN
export NODE2=$(sudo docker ps | grep cluster5 | tail -n1 | cut -d " " -f1)
sudo docker exec -it $NODE2 iptables -t nat -I KIND-MASQ-AGENT 2 --dst 10.10.0.0/16 -j RETURN
lc4
liqoctl install kind --cluster-name cluster4 --version=416839b0915a8a0a7d78331b5efb76bde5444910
lc5
liqoctl install kind --cluster-name cluster5 --version=416839b0915a8a0a7d78331b5efb76bde5444910
After peering the two clusters (generate the peer command with liqoctl generate peer-command on one cluster and run it on the other, as in Test 2), you can manually check the list of discovered foreign clusters using kubectl:
kubectl get foreignclusters
If the peering has succeeded, you should see a virtual node (named liqo-*) in addition to your physical nodes:
kubectl get nodes
If this is your first time running Linkerd, you will need to download the linkerd CLI onto your local machine. The CLI will allow you to interact with your Linkerd deployment.
To install the CLI manually, run:
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
linkerd check --pre
lc4
linkerd install | kubectl apply -f -
linkerd check
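# Offload the linkerd namespace to the peered cluster: the namespace (and the Linkerd configuration
# it contains) is replicated remotely with the same name, while --pod-offloading-strategy=Local keeps
# the control-plane pods on the local cluster. The online-boutique namespace is then offloaded as well.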
liqoctl offload namespace linkerd --pod-offloading-strategy=Local --namespace-mapping-strategy=EnforceSameName
kubectl create ns online-boutique
liqoctl offload namespace online-boutique --namespace-mapping-strategy=EnforceSameName
cat ./kubernetes-manifests/online-boutique/boutique-manifests-affinity.yaml | linkerd inject - | kubectl -n online-boutique apply -f -
linkerd -n online-boutique check --proxy
Once the demo application manifest is applied, you can observe the creation of the different pods. In the NODE column you can see whether the pods are hosted by the local or the remote cluster:
k get pods -n online-boutique -o wide
When all pods are running you can start the loadgenerator:
kubectl port-forward -n online-boutique service/loadgenerator 8089
For my test, I'm using 200 users with a spawn rate of 1 user per second.
Now, you can check that locust-exporter is monitoring the loadgenerator resource.
kubectl port-forward -n online-boutique service/locust-exporter 9646
Now, you can deploy the kube-prometheus stack Helm chart.
kubectl create namespace monitoring
# Add prometheus-community repo and update
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && helm repo update
# Install
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring
Finally, run the following command to confirm your kube-prometheus stack deployment.
kubectl get pods -n monitoring
In order to scrape metrics exposed by the locust-exporter you should create the service monitor resource.
k -n monitoring apply -f ./kubernetes-manifests/metrics/locust-servicemonitor.yaml
You can see the metrics by importing the Grafana dashboard.
# Admin Password
k -n monitoring get secret prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
# Admin User
k -n monitoring get secret prometheus-grafana -o jsonpath="{.data.admin-user}" | base64 --decode ; echo
kubectl port-forward svc/prometheus-grafana -n monitoring 8080:80
xclip -sel clip < ./kubernetes-manifests/grafana-dashboard.json
# Import the json dashboard ./kubernetes-manifests/grafana-dashboard.json
You can deploy the Kubernetes Metrics Server on the cluster you created with the following commands:
lc4
kubectl apply -f ./kubernetes-manifests/metrics/ms-components.yaml
lc5
kubectl apply -f ./kubernetes-manifests/metrics/ms-components.yaml
Confirm that the Kubernetes Metrics Server has been deployed successfully and is available by entering:
kubectl get deployment metrics-server -n kube-system
Now, you can create the horizontal pod autoscaling resources.
lc4
k -n online-boutique apply -f ./kubernetes-manifests/hpa/hpa-manifest-cpu.yaml
Before starting the test, you should create the clusters where you'll operate.
# Test 4
sudo kind create cluster --name cluster6 --kubeconfig $HOME/.kube/configC6
sudo chmod 644 $HOME/.kube/configC6
echo "alias lc6=\"export KUBECONFIG=$HOME/.kube/configC6\"" >> $HOME/.bashrc
sudo kind create cluster --name cluster7 --kubeconfig $HOME/.kube/configC7
sudo chmod 644 $HOME/.kube/configC7
echo "alias lc7=\"export KUBECONFIG=$HOME/.kube/configC7\"" >> $HOME/.bashrc
sudo cp ./kubernetes-manifests/linkerd/config-multicluster.yaml $HOME/.kube/config-multicluster
sudo chmod 644 $HOME/.kube/config-multicluster
echo "alias lmc=\"export KUBECONFIG=$HOME/.kube/config-multicluster\"" >> $HOME/.bashrc
# NOTE: Now, you must replace the values in `$HOME/.kube/config-multicluster` file with the correct values from the files: `$HOME/.kube/configC6` and `$HOME/.kube/configC7`
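A sketch of the structure that config-multicluster should end up with (the server addresses and credentials are placeholders to be copied from configC6 and configC7):
apiVersion: v1
kind: Config
clusters:
- name: kind-cluster6
  cluster:
    server: <server from configC6>
    certificate-authority-data: <from configC6>
- name: kind-cluster7
  cluster:
    server: <server from configC7>
    certificate-authority-data: <from configC7>
users:
- name: kind-cluster6
  user:
    client-certificate-data: <from configC6>
    client-key-data: <from configC6>
- name: kind-cluster7
  user:
    client-certificate-data: <from configC7>
    client-key-data: <from configC7>
contexts:
- name: kind-cluster6
  context:
    cluster: kind-cluster6
    user: kind-cluster6
- name: kind-cluster7
  context:
    cluster: kind-cluster7
    user: kind-cluster7
current-context: kind-cluster6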
source $HOME/.bashrc
sudo kubectl config --kubeconfig=$HOME/.kube/config-multicluster rename-context kind-cluster6 west
sudo kubectl config --kubeconfig=$HOME/.kube/config-multicluster rename-context kind-cluster7 east
lmc
For Linkerd to work in a multi-cluster scenario, a Service of type LoadBalancer is required, but Kubernetes does not offer an implementation of network load balancers for bare-metal clusters. If you're not running on a supported IaaS platform (GCP, AWS, Azure, ...), LoadBalancer services will remain in the "pending" state indefinitely when created. MetalLB addresses this gap by offering a network load balancer implementation for bare-metal clusters.
You can deploy the MetalLB in your cluster by running the following commands:
kubectl --context=west apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl --context=west apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
To complete the layer2 configuration, we need to provide MetalLB with a range of IP addresses it controls. We want this range to be on the Docker kind network.
You can see the kind network range with this command:
sudo docker network inspect -f '{{.IPAM.Config}}' kind
Now, you must create the ConfigMap resource containing the desired range:
k --context=west apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.1.1-172.18.1.255 # Change with the correct range
EOF
You must repeat these steps for the east cluster:
kubectl --context=east apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl --context=east apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
k --context=east apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.2.1-172.18.2.255 # Change with the correct range
EOF
Next, you can check the communication between the two clusters by deploying two Nginx services and performing a curl from each side:
k --context=west apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/contrib/nginx-sample-deployment.yaml
kubectl --context=west expose deployment nginx-1 --port=80 --type=LoadBalancer
k --context=east apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/contrib/nginx-sample-deployment.yaml
kubectl --context=east expose deployment nginx-1 --port=80 --type=LoadBalancer
export NGINX_NAME=$(k --context=west get po -l app=nginx --no-headers -o custom-columns=:.metadata.name)
export NGINX_ADDR=$(k --context=east get svc -l app=nginx -o jsonpath={'.items[0].status.loadBalancer.ingress[0].ip'})
kubectl --context=west exec --stdin --tty $NGINX_NAME -- /bin/bash -c "apt-get update && apt install -y curl && curl ${NGINX_ADDR}"
export NGINX_NAME=$(k --context=east get po -l app=nginx --no-headers -o custom-columns=:.metadata.name)
export NGINX_ADDR=$(k --context=west get svc -l app=nginx -o jsonpath={'.items[0].status.loadBalancer.ingress[0].ip'})
kubectl --context=east exec --stdin --tty $NGINX_NAME -- /bin/bash -c "apt-get update && apt install -y curl && curl ${NGINX_ADDR}"
Linkerd requires a shared trust anchor to exist between the installations in all clusters that communicate with each other. This is used to encrypt the traffic between clusters and authorize requests that reach the gateway so that your cluster is not open to the public internet.
NOTE: For more information see the official documentation.
mkdir trust && cd ./trust
# Root CA
step certificate create root.linkerd.cluster.local root.crt root.key --profile root-ca --no-password --insecure
# Issuer credentials
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key --profile intermediate-ca --not-after 8760h --no-password --insecure --ca root.crt --ca-key root.key
To install Linkerd, run:
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
# Linkerd
linkerd install \
--identity-trust-anchors-file root.crt \
--identity-issuer-certificate-file issuer.crt \
--identity-issuer-key-file issuer.key \
| tee \
>(kubectl --context=west apply -f -) \
>(kubectl --context=east apply -f -)
cd ..
You can verify that everything has come up successfully with a check:
for ctx in west east; do
echo "Checking cluster: ${ctx} ........."
linkerd --context=${ctx} check || break
echo "-------------"
done
To install the multicluster components on both west and east, you can run:
# Multi-Cluster components:
# - Gateway
# - Service Mirror
# - Service Account
for ctx in west east; do
echo "Installing on cluster: ${ctx} ........."
linkerd --context=${ctx} multicluster install | \
kubectl --context=${ctx} apply -f - || break
echo "-------------"
done
Make sure the gateway comes up successfully by running:
for ctx in west east; do
echo "Checking gateway on cluster: ${ctx} ........."
kubectl --context=${ctx} -n linkerd-multicluster \
rollout status deploy/linkerd-gateway || break
echo "-------------"
done
Double check that the load balancer was able to allocate a public IP address by running:
for ctx in west east; do
printf "Checking cluster: ${ctx} ........."
while [ "$(kubectl --context=${ctx} -n linkerd-multicluster get service -o 'custom-columns=:.status.loadBalancer.ingress[0].ip' --no-headers)" = "<none>" ]; do
printf '.'
sleep 1
done
printf "\n"
done
The next step is to link west to east. This will create a credentials secret, a Link resource, and a service-mirror controller. The credentials secret contains a kubeconfig which can be used to access the target (east) cluster’s Kubernetes API.
# Retrieve the API Server address for the east cluster
kubectl --context=east proxy --port=8080 &
export API_SERVER_ADDR=$(curl http://localhost:8080/api/ | jq '.serverAddressByClientCIDRs[0].serverAddress' | sed 's/\"//g')
kill -9 %%
# Perform the link
linkerd --context=east multicluster link --cluster-name east --api-server-address="https://${API_SERVER_ADDR}" | kubectl --context=west apply -f -
# Retrieve the API Server address for the west cluster
kubectl --context=west proxy --port=8080 &
export API_SERVER_ADDR=$(curl http://localhost:8080/api/ | jq '.serverAddressByClientCIDRs[0].serverAddress' | sed 's/\"//g')
kill -9 %%
# Perform the link
linkerd --context=west multicluster link --cluster-name west --api-server-address="https://${API_SERVER_ADDR}" | kubectl --context=east apply -f -
# Some check
linkerd --context=west multicluster check
linkerd --context=east multicluster check
# with linkerd viz install
# linkerd --context=west multicluster gateways
# linkerd --context=east multicluster gateways
k --context=west create ns online-boutique
k --context=east create ns online-boutique
cat ./kubernetes-manifests/online-boutique/boutique-west-manifest.yaml | linkerd inject - | k --context=west -n online-boutique apply -f -
cat ./kubernetes-manifests/online-boutique/boutique-east-manifest.yaml | linkerd inject - | k --context=east -n online-boutique apply -f -
# Traffic Split
cat ./kubernetes-manifests/linkerd/trafficsplit-west.yaml | k --context=west -n online-boutique apply -f -
cat ./kubernetes-manifests/linkerd/trafficsplit-east.yaml | k --context=east -n online-boutique apply -f -
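The TrafficSplit resources tell Linkerd how to split traffic between a local service and the mirrored service exported from the other cluster. A sketch of what trafficsplit-west.yaml might contain (the service names and weights are assumptions; Linkerd's service mirror exposes remote services with the target cluster name as a suffix, e.g. cartservice-east):
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: cartservice
  namespace: online-boutique
spec:
  service: cartservice            # apex service receiving the traffic
  backends:
  - service: cartservice          # local backend
    weight: 50
  - service: cartservice-east     # mirrored backend from the east cluster
    weight: 50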
Once the demo application manifest is applied, you can observe the creation of the different pods.
k --context=west get pods -n online-boutique -o wide
When all pods are running you can start the loadgenerator:
kubectl --context=west port-forward -n online-boutique service/loadgenerator 8089
For my test, I'm using 200 users with a spawn rate of 1 user per second.
Now, you can check that locust-exporter is monitoring the loadgenerator resource.
kubectl --context=west port-forward -n online-boutique service/locust-exporter 9646
Now, you can deploy the kube-prometheus stack Helm chart.
kubectl --context=west create namespace monitoring
# Add prometheus-community repo and update
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && helm repo update
# Install
helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring
Finally, run the following command to confirm your kube-prometheus stack deployment.
kubectl --context=west get pods -n monitoring
In order to scrape metrics exposed by the locust-exporter you should create the service monitor resource.
k --context=west -n monitoring apply -f ./kubernetes-manifests/metrics/locust-servicemonitor.yaml
You can see the metrics by importing the Grafana dashboard.
# Admin Password
k --context=west -n monitoring get secret prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
# Admin User
k --context=west -n monitoring get secret prometheus-grafana -o jsonpath="{.data.admin-user}" | base64 --decode ; echo
kubectl --context=west port-forward svc/prometheus-grafana -n monitoring 8080:80
xclip -sel clip < ./kubernetes-manifests/grafana-dashboard.json
# Import the json dashboard ./kubernetes-manifests/grafana-dashboard.json
You can deploy the Kubernetes Metrics Server on the cluster you created with the following commands:
kubectl --context=west apply -f ./kubernetes-manifests/metrics/ms-components.yaml
kubectl --context=east apply -f ./kubernetes-manifests/metrics/ms-components.yaml
Confirm that the Kubernetes Metrics Server has been deployed successfully and is available by entering:
kubectl --context=west get deployment metrics-server -n kube-system
Now, you can create the horizontal pod autoscaling resources.
k --context=west -n online-boutique apply -f ./kubernetes-manifests/hpa/hpa-manifest-cpu-west.yaml
k --context=east -n online-boutique apply -f ./kubernetes-manifests/hpa/hpa-manifest-cpu-east.yaml
Before starting the test, you should create the cluster where you'll operate.
# Test 5
sudo kind create cluster --name cluster8 --kubeconfig $HOME/.kube/configC8
sudo chmod 644 $HOME/.kube/configC8
echo "alias lc8=\"export KUBECONFIG=$HOME/.kube/configC8\"" >> $HOME/.bashrc
source $HOME/.bashrc
lc8
To install Linkerd, run:
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
# Linkerd
linkerd install | kubectl apply -f -
linkerd check
kubectl create ns online-boutique
cat ./kubernetes-manifests/online-boutique/boutique-manifests.yaml | linkerd inject - | kubectl -n online-boutique apply -f -
linkerd -n online-boutique check --proxy
Once the demo application manifest is applied, you can observe the creation of the different pods:
k get pods -n online-boutique -o wide
When all pods are running you can start the loadgenerator:
kubectl port-forward -n online-boutique service/loadgenerator 8089
For my test, I'm using 200 users with a spawn rate of 1 user per second.
Now, you can check that locust-exporter is monitoring the loadgenerator resource.
kubectl port-forward -n online-boutique service/locust-exporter 9646
You can deploy the Kubernetes Metrics Server on the cluster you created with the following commands:
kubectl apply -f ./kubernetes-manifests/metrics/ms-components.yaml
Confirm that the Kubernetes Metrics Server has been deployed successfully and is available by entering:
kubectl get deployment metrics-server -n kube-system
Now, you can create the horizontal pod autoscaling resources.
k -n online-boutique apply -f ./kubernetes-manifests/hpa/hpa-manifest-cpu.yaml