Version: 1.14.0

Install KubeSlice using the kubeslice-cli Tool

This topic describes the steps to install KubeSlice on cloud clusters using the kubeslice-cli tool.

Objective

This tutorial gives you hands-on experience with the kubeslice-cli tool. Using kubeslice-cli, you will install KubeSlice, deploy an application, and export its service.

Prerequisites

Before you begin, ensure the following prerequisites are met:

  • Cluster administrator privileges to install the KubeSlice Controller on the controller cluster and the Slice Operator on the worker clusters.
  • Install the command line tools. For more information, see Command Line Tools.

Install KubeSlice

In this demonstration, we install KubeSlice on cloud clusters using a custom topology configuration YAML file. To install the KubeSlice Controller and the KubeSlice Worker, you need access to the enterprise Helm repository. Register to get a trial license and repository access; upon registration, the credentials for the enterprise Helm repository are sent to your registered email address. For more information, see KubeSlice Trial License.

Install the KubeSlice Controller and Register Worker Clusters

Create a topology configuration file that includes the names of the clusters and the cluster contexts that host the KubeSlice Controller, the worker clusters, and a project name. Add the list of users to be created with read-write access under project_users. For more information, see the topology configuration parameters file. This example assumes that the kubeconfig file at /home/runner/.kube/mainconfig contains all the cluster configurations.

The following is an example custom topology file for installing KubeSlice in an existing setup.

configuration:
  cluster_configuration:
    kube_config_path: /home/runner/.kube/mainconfig
    controller:
      name: controller
      context_name: arn:aws:eks:us-east-1:***:cluster/dev-kubeslice-cli-eks-cluster
    workers:
      - name: worker-1
        context_name: gke_avesha-dev_us-east1-c_cli-gke
      - name: worker-2
        context_name: kubeslice-cli-aks-cluster
  kubeslice_configuration:
    project_name: avesha
    project_users: # Add the list of users to be created with read-write access.
      - user1
      - user2
  helm_chart_configuration:
    repo_alias: kubeslice
    repo_url: https://kubeslice.aveshalabs.io/repository/kubeslice-helm-ent-prod/
    controller_chart:
      chart_name: kubeslice-controller
    worker_chart:
      chart_name: kubeslice-worker
    ui_chart:
      chart_name: kubeslice-ui
    cert_manager_chart:
      chart_name: cert-manager
    image_pull_secret:
      registry: https://index.docker.io/v1/
      username: ***
      password: ***
      email: ***@aveshasystems.com

Use the following command to install the controller and the worker clusters:

kubeslice-cli install --config=<topology-configuration-file>

The above command installs the KubeSlice Controller, creates a project, and registers the worker clusters with the project by installing the Slice Operator on the worker clusters.
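For instance, if the example topology above is saved as topology.yaml (an illustrative filename), the invocation is:

```shell
# topology.yaml is an assumed filename; use the path to your own
# topology configuration file.
kubeslice-cli install --config=topology.yaml
```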

Register a New Worker Cluster

To register a new worker cluster with an existing KubeSlice configuration (or KubeSlice Controller):

  1. Add new worker cluster information under workers in the custom topology file that was used to install KubeSlice earlier.
  2. Use the install command to apply the updated custom topology file.

The following is an example custom topology file for registering a new worker cluster. Under workers, add a new worker with the name worker-3 and the cluster context cloud-gke-cluster.

configuration:
  cluster_configuration:
    kube_config_path: /home/runner/.kube/mainconfig
    controller:
      name: controller
      context_name: arn:aws:eks:us-east-1:***:cluster/dev-kubeslice-cli-eks-cluster
    workers:
      - name: worker-1
        context_name: gke_avesha-dev_us-east1-c_cli-gke
      - name: worker-2
        context_name: cloud-aks-cluster
      - name: worker-3 # This is the new worker cluster that will be registered with the controller.
        context_name: cloud-gke-cluster
  kubeslice_configuration:
    project_name: avesha
    project_users:
      - user1
      - user2
  helm_chart_configuration:
    repo_alias: kubeslice
    repo_url: https://kubeslice.aveshalabs.io/repository/kubeslice-helm-ent-prod/
    controller_chart:
      chart_name: kubeslice-controller
    worker_chart:
      chart_name: kubeslice-worker
    ui_chart:
      chart_name: kubeslice-ui
    cert_manager_chart:
      chart_name: cert-manager
    image_pull_secret:
      registry: https://index.docker.io/v1/
      username: ***
      password: ***
      email: ***@aveshasystems.com

Use the following command to register the new worker cluster with the KubeSlice Controller. The -s flags skip reinstalling components that are already in place (here, the controller and the UI):

kubeslice-cli install --config=<new-worker-topology-yaml> -s controller -s ui

Create a Slice

To onboard your existing namespaces (and their applications) onto a slice:

  1. Create a slice configuration YAML file (choose the namespaces, clusters, and so on to be part of the slice).
  2. Use the kubeslice-cli create command to apply the slice configuration YAML file.

Create a Slice Configuration YAML File

Use the following template to create the slice configuration YAML file.

info

To understand more about the configuration parameters, see Slice Configuration Parameters.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: <slice-name> # The name of the slice
spec:
  sliceSubnet: <slice-subnet> # The slice subnet
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - <worker-cluster-name1> # The name of your worker cluster 1
    - <worker-cluster-name2> # The name of your worker cluster 2
  qosProfileDetails:
    queueType: HTB
    priority: 0
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 30000
    bandwidthGuaranteedKbps: 20000
    dscpClass: AF11
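As a concrete illustration, a filled-in configuration might look like the following. The slice name slice-red and the subnet 10.1.0.0/16 are illustrative values; the worker names match the example topology file in this topic.

```yaml
apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: slice-red            # illustrative slice name
spec:
  sliceSubnet: 10.1.0.0/16   # illustrative slice subnet
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - worker-1
    - worker-2
  qosProfileDetails:
    queueType: HTB
    priority: 0
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 30000
    bandwidthGuaranteedKbps: 20000
    dscpClass: AF11
```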

Apply the Slice Configuration YAML file

caution

The kubeslice-cli create sliceConfig -n <project-namespace> -f <slice-configuration-yaml> command returns as soon as the slice configuration is applied. However, in each cluster, the pods that control and manage the slice may still be starting. Wait for the slice to finish initializing before deploying services on it.

To apply the slice configuration YAML, use the following command:

kubeslice-cli create sliceConfig -n <project-namespace> -f <slice-configuration-yaml> --config=<path-to-the-custom-topology-file>

Example

kubeslice-cli create sliceConfig -n kubeslice-avesha -f slice-config.yaml 

Example output

🏃 Running command: /usr/local/bin/kubectl apply -f slice-config.yaml -n kubeslice-avesha
sliceconfig.controller.kubeslice.io/slice-red created

Successfully Applied Slice Configuration.
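Before deploying services, you can check that the slice components have finished starting on each worker cluster. This sketch assumes the default kubeslice-system namespace used by the Slice Operator; substitute a worker context name from your own kubeconfig.

```shell
# List KubeSlice pods on a worker cluster; wait until all are Running.
# The context name below is illustrative.
kubectl get pods -n kubeslice-system --context=gke_avesha-dev_us-east1-c_cli-gke
```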

Edit the Slice Configuration

You can add additional parameters to an existing slice. Use the following command to edit the slice configuration:

kubeslice-cli edit sliceConfig <slice-name> -n <project-namespace>

Example

kubeslice-cli edit sliceConfig blue -n kubeslice-demo

The following YAML file contains a list of parameters you can add to an existing slice configuration:

namespaceIsolationProfile:
  applicationNamespaces:
    - namespace: bookinfo
      clusters:
        - '*'
    - namespace: teamb
      clusters:
        - '*'
  isolationEnabled: true # Set to true to enable namespace isolation.
  allowedNamespaces:
    - namespace: kube-system
      clusters:
        - '*'
    - namespace: istio-system
      clusters:
        - '*'
    - namespace: projectcontour
      clusters:
        - 'worker-1'
externalGatewayConfig:
  - ingress:
      enabled: false
    egress:
      enabled: true
    nsIngress:
      enabled: false
    gatewayType: istio
    clusters:
      - worker-1
  - ingress:
      enabled: true
    egress:
      enabled: false
    nsIngress:
      enabled: false
    gatewayType: istio
    clusters:
      - worker-2

Deploy the Application

info

If the application is already deployed in a namespace that is onboarded onto a slice, redeploy the application so that its pods are recreated and attached to the slice.
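One way to redeploy, assuming a Deployment-based application (the deployment and namespace names are placeholders):

```shell
# Restart the deployment so its pods are recreated and picked up by the slice.
kubectl rollout restart deployment/<deployment-name> -n <namespace>
```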

Create a Service Export

To create a service export, use the following command:

kubeslice-cli create serviceExportConfig -f <service-export-yaml> -n <project-namespace>  --config=<path-to-the-custom-topology-file>
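This topic does not include a sample service export file; the following sketch is assembled from the fields shown by the describe command later in this section. All names and values are illustrative assumptions.

```yaml
apiVersion: controller.kubeslice.io/v1alpha1
kind: ServiceExportConfig
metadata:
  name: iperf-server-export   # illustrative resource name
spec:
  serviceName: iperf-server        # the exported service
  serviceNamespace: iperf          # namespace the service runs in
  sliceName: slice-red             # slice the service is exported on
  sourceCluster: worker-1          # worker cluster hosting the service
  serviceDiscoveryPorts:
    - name: tcp
      port: 5201
      protocol: TCP
```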

Validate the Service Export

When an application service runs on one of the worker clusters that are onboarded onto a slice, the worker generates a ServiceExport for the application and propagates it to the KubeSlice Controller.

To verify the service export on the controller cluster, use the following command:

kubeslice-cli get serviceExportConfig -n <project-namespace>

Example

kubeslice-cli get serviceExportConfig -n kubeslice-avesha

Example Output

Fetching KubeSlice serviceExportConfig...
🏃 Running command: /usr/local/bin/kubectl get serviceexportconfigs.controller.kubeslice.io -n kubeslice-avesha
NAME                                AGE
iperf-server-iperf-cloud-worker-1   43s

To view the details of the service export configuration, use the following command:

kubeslice-cli describe serviceExportConfig <resource-name> -n <project-namespace>

Example

kubeslice-cli describe serviceExportConfig iperf-server-iperf-cloud-worker-1 -n kubeslice-avesha

The following output shows that the ServiceExportConfig for the iperf-server application is present on the controller cluster.

Describe KubeSlice serviceExportConfig...
🏃 Running command: /usr/local/bin/kubectl describe serviceexportconfigs.controller.kubeslice.io iperf-server-iperf-cloud-worker-1 -n kubeslice-avesha
Name:         iperf-server-iperf-cloud-worker-1
Namespace:    kubeslice-avesha
Labels:       original-slice-name=slice-red
              service-name=iperf-server
              service-namespace=iperf
              worker-cluster=cloud-worker-1
Annotations:  <none>
API Version:  controller.kubeslice.io/v1alpha1
Kind:         ServiceExportConfig
Spec:
  Service Discovery Ports:
    Name:      tcp
    Port:      5201
    Protocol:  TCP
  Service Name:       iperf-server
  Service Namespace:  iperf
  Slice Name:         slice-red
  Source Cluster:     cloud-worker-1

Modify the Service Discovery Configuration

kubeslice-cli enables you to modify the service discovery parameters. For example, to change the port on which the service runs, edit the value and save. This updates the ServiceExportConfig, which is then propagated again to all the worker clusters.

To edit the service export configuration, use the following command:

kubeslice-cli edit serviceExportConfig <resource-name> -n <project-namespace> --config=<path-to-the-custom-topology-file>

Example

kubeslice-cli edit serviceExportConfig iperf-server-iperf-cloud-worker-1 -n kubeslice-avesha

Example Output

Editing KubeSlice serviceExportConfig...
🏃 Running command: /usr/local/bin/kubectl edit serviceexportconfigs.controller.kubeslice.io iperf-server-iperf-cloud-worker-1 -n kubeslice-avesha
...

Uninstall KubeSlice

Use the following command to uninstall KubeSlice:

kubeslice-cli uninstall --config=<file-path-of-topology> --all

For more information, see Uninstall KubeSlice.