Multi-Cluster Slice: iPerf Deployment
Introduction
iPerf is a widely used tool for measuring network performance and tuning networks. The iPerf application in this tutorial consists of two services: iperf-sleep (the client) and iperf-server (the server).
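To illustrate the basic client/server pattern that this tutorial reproduces across clusters, the following is a minimal sketch of a standalone iperf run outside Kubernetes; <server-host> is a placeholder for any address the client can reach.
# On the machine acting as the server: listen on TCP port 5201
iperf -s -p 5201
# On the machine acting as the client: connect, report once per second, limit bandwidth to 10 Mbit/s
iperf -c <server-host> -p 5201 -i 1 -b 10Mb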
This tutorial provides the steps to:
- Install the iperf-sleep and iperf-server services on two clusters within a KubeSlice configuration.
- Verify inter-cluster communication over KubeSlice.
Objective
This tutorial helps you understand how to deploy an application on a KubeSlice slice and verify that it communicates across worker clusters.
Prerequisites
Before you begin, ensure the following prerequisites are met:
- You have a KubeSlice configuration with two or more clusters registered to the KubeSlice Controller. For more information, see Install KubeSlice.
- Before creating a slice, create the iperf namespace in all the participating worker clusters (a loop for creating it across multiple clusters is sketched after this list). Use the following command to create the iperf namespace:
  kubectl create ns iperf
- You have the slice created across the worker clusters. For more information, see Creating a Slice.
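If you are working with several worker clusters, a small shell loop can create the iperf namespace in each of them. This is a minimal sketch; worker-1 and worker-2 are hypothetical kubectx context names, so substitute the names of your own worker cluster contexts.
# Create the iperf namespace in each participating worker cluster (context names are examples)
for ctx in worker-1 worker-2; do
  kubectx "$ctx"
  kubectl create ns iperf
done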
Deploy iPerf
In this tutorial, iperf-sleep and iperf-server are deployed in two different worker clusters. The worker cluster that runs iperf-sleep is referred to as the sleep cluster, and the worker cluster that runs iperf-server is referred to as the server cluster.
Create the iPerf Sleep YAML File
Create the iperf-sleep.yaml file using the following template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf-sleep
  namespace: iperf
  labels:
    app: iperf-sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf-sleep
  template:
    metadata:
      labels:
        app: iperf-sleep
    spec:
      containers:
      - name: iperf
        image: mlabbe/iperf
        imagePullPolicy: Always
        command: ["/bin/sleep", "3650d"]
      - name: sidecar
        image: nicolaka/netshoot
        imagePullPolicy: IfNotPresent
        command: ["/bin/sleep", "3650d"]
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
          allowPrivilegeEscalation: true
          privileged: true
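Optionally, you can validate the manifest on the client side before applying it; this dry run only checks the file and does not create any resources.
kubectl apply --dry-run=client -f iperf-sleep.yaml -n iperf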
Apply the iPerf Sleep YAML File
To apply the iperf-sleep YAML file:
- Switch the context to the worker cluster where you want to install iperf-sleep:
  kubectx <cluster name>
- Create the iperf namespace using the following command:
  kubectl create ns iperf
- Apply the iperf-sleep.yaml deployment using the following command (an optional rollout check follows this list):
  kubectl apply -f iperf-sleep.yaml -n iperf
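Optionally, you can wait for the deployment to finish rolling out before continuing; the deployment name comes from the manifest above.
kubectl rollout status deployment/iperf-sleep -n iperf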
Create the iPerf Server YAML File
Create the iperf-server.yaml file using the following template. All the fields in the template remain the same except for the <slice name> placeholder; replace <slice name> with the name of your slice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iperf-server
  namespace: iperf
  labels:
    app: iperf-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iperf-server
  template:
    metadata:
      labels:
        app: iperf-server
    spec:
      containers:
      - name: iperf
        image: mlabbe/iperf
        imagePullPolicy: Always
        args:
          - '-s'
          - '-p'
          - '5201'
        ports:
        - containerPort: 5201
          name: server
      - name: sidecar
        image: nicolaka/netshoot
        imagePullPolicy: IfNotPresent
        command: ["/bin/sleep", "3650d"]
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
          allowPrivilegeEscalation: true
          privileged: true
---
apiVersion: networking.kubeslice.io/v1beta1
kind: ServiceExport
metadata:
  name: iperf-server
  namespace: iperf
spec:
  slice: <slice name>
  selector:
    matchLabels:
      app: iperf-server
  ingressEnabled: false
  ports:
  - name: tcp
    containerPort: 5201
    protocol: TCP
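If you prefer not to edit the file by hand, you can substitute the placeholder with sed. This is an example only: it assumes GNU sed and a slice named lion, the slice name that appears in the example outputs later in this tutorial; replace lion with your own slice name.
sed -i 's/<slice name>/lion/' iperf-server.yaml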
Apply the iPerf Server YAML File
To apply the iperf-server YAML file:
- Switch the context to the worker cluster where you want to install iperf-server:
  kubectx <cluster name>
- Create the iperf namespace using the following command:
  kubectl create ns iperf
- Apply the iperf-server.yaml deployment using the following command (an optional availability check follows this list):
  kubectl apply -f iperf-server.yaml -n iperf
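Optionally, you can wait until the deployment reports itself as available before validating it; the two-minute timeout is an arbitrary example value.
kubectl wait --for=condition=Available deployment/iperf-server -n iperf --timeout=120s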
Validate Your iPerf Installation
Validate the installation of iperf-sleep and iperf-server by checking the status of the application pods.
Validate the iPerf Sleep Installation
To validate the iperf-sleep installation:
- Switch the context to the cluster where you installed iperf-sleep:
  kubectx <cluster name>
- Validate the iperf-sleep pod in the iperf namespace using the following command (if the pod is not Running, see the troubleshooting check after this list):
  kubectl get pods -n iperf
  Example Output
  NAME                           READY   STATUS    RESTARTS   AGE
  iperf-sleep-5477bf94cb-vmmtd   3/3     Running   0          10s
- Validate the serviceimport using the following command:
  kubectl get serviceimport -n iperf
  Example Output
  NAME           SLICE   PORT(S)   ENDPOINTS   STATUS
  iperf-server   lion              1           READY
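If the pod is not in the Running state, the recent events in the iperf namespace usually indicate why; this generic troubleshooting check lists them in chronological order.
kubectl get events -n iperf --sort-by=.lastTimestamp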
Validate the iPerf Server Installation
To validate the iperf-server installation:
- Switch the context to the worker cluster where you installed iperf-server:
  kubectx <cluster name>
- Validate the iperf-server pod in the iperf namespace using the following command:
  kubectl get pods -n iperf
  Example Output
  NAME                            READY   STATUS    RESTARTS   AGE
  iperf-server-5958958795-fld2p   3/3     Running   0          20s
- Validate the serviceimport using the following command:
  kubectl get serviceimport -n iperf
  Example Output
  NAME           SLICE   PORT(S)   ENDPOINTS   STATUS
  iperf-server   lion              1           READY
- Validate the serviceexport using the following command (an optional describe command follows this list):
  kubectl get serviceexport -n iperf
  Example Output
  NAME           SLICE   INGRESS   SERVICEPORT(S)   PORT(S)    ENDPOINTS   STATUS
  iperf-server   lion                               5201/TCP   1           READY
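For more detail on the exported service, such as its ports and endpoints, you can optionally describe the ServiceExport object.
kubectl describe serviceexport iperf-server -n iperf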
Validate ServiceExport and ServiceImport
Perform these steps in the cluster where the KubeSlice Controller is installed:
- Switch the context to the controller cluster:
  kubectx <cluster name>
- Validate the serviceexportconfig using the following command (an optional describe command follows this list):
  kubectl get serviceexportconfigs -A
  Example Output
  NAMESPACE          NAME           AGE
  kubeslice-devops   iperf-server   5m12s
- Validate the workerserviceimports using the following command:
  kubectl get workerserviceimports -A
  Example Output
  NAMESPACE          NAME                                       AGE
  kubeslice-devops   iperf-server-iperf-lion-worker-cluster-1   5m59s
  kubeslice-devops   iperf-server-iperf-lion-worker-cluster-2   5m59s
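To inspect the controller-side objects in more detail, you can optionally describe them in the project namespace. kubeslice-devops is the project namespace shown in the example output above; substitute your own project namespace.
kubectl describe serviceexportconfigs iperf-server -n kubeslice-devops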
Get the DNS Name
Use the following command to describe the iperf-server service import and retrieve its short and full DNS names. The short DNS name is used later to verify inter-cluster communication.
kubectl describe serviceimport iperf-server -n iperf | grep "Dns Name:"
Expected Output
Dns Name: iperf-server.iperf.svc.slice.local   # Short DNS name; use this to verify inter-cluster communication below.
Dns Name: <iperf server service>.<cluster identifier>.iperf-server.iperf.svc.slice.local   # Full DNS name
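Before running the traffic test, you can optionally confirm that the short DNS name resolves from the sleep cluster. This sketch uses the netshoot sidecar container from the iperf-sleep deployment; replace the pod name placeholder with the name returned by kubectl get pods -n iperf.
kubectl exec -it <iperf-sleep pod name> -c sidecar -n iperf -- nslookup iperf-server.iperf.svc.slice.local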
Verify the Inter-Cluster Communication
To verify the inter-cluster communication:
- Switch the context to the sleep cluster:
  kubectx <cluster name>
- List the pods in the iperf namespace to get the full name of the iperf-sleep pod:
  kubectl get pods -n iperf
- Using the pod name you just retrieved, open a shell in the iperf-sleep pod with the following command:
  kubectl exec -it <iperf-sleep pod name> -c iperf -n iperf -- sh
- From the sleep pod, use the short DNS name retrieved above to connect to the server:
  iperf -c <short iperf-server DNS Name> -p 5201 -i 1 -b 10Mb;
Expected Output
If the iperf-sleep pod is able to reach the iperf-server pod across clusters, you should see output similar to the following.
kubectl exec -it iperf-sleep-5477bf94cb-vmmtd -c iperf -n iperf -- sh
/ $ iperf -c iperf-server.iperf.svc.slice.local -p 5201 -i 1 -b 10Mb;
------------------------------------------------------------
Client connecting to iperf-server.iperf.svc.slice.local, TCP port 5201
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[ 1] local 10.1.1.89 port 38400 connected with 10.1.2.25 port 5201
[ ID] Interval Transfer Bandwidth
[ 1] 0.00-1.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 1.00-2.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 2.00-3.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 3.00-4.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 4.00-5.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 7.00-8.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 8.00-9.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 9.00-10.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 1] 0.00-10.00 sec 12.8 MBytes 10.7 Mbits/sec
/ $
This completes the verification of inter-cluster communication.
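The same test can also be run non-interactively in a single command, which is convenient for scripting. This sketch reuses the example pod name and the short DNS name from the output above; substitute your own pod name.
kubectl exec iperf-sleep-5477bf94cb-vmmtd -c iperf -n iperf -- iperf -c iperf-server.iperf.svc.slice.local -p 5201 -i 1 -b 10Mb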
Uninstall iPerf
To uninstall iPerf from your KubeSlice configuration, follow the instructions in Uninstall KubeSlice.