Version: 1.15.0

Quick Start Guide

This topic serves as a quick-start guide that assists you in installing KubeSlice with the bare minimum configuration required to start using KubeSlice.

KubeSlice Registration

Go to https://avesha.io/kubeslice-registration/ and register. You will receive an email with the login credentials to use in the topology.yaml file during installation.

Cluster Authentication

To register your worker clusters with the KubeSlice Controller, you must authenticate with each cloud provider used in the installation.

Microsoft AKS

az aks get-credentials --resource-group <resource group name> --name <cluster name>

Amazon EKS

aws eks update-kubeconfig --name <cluster-name> --region <cluster-region>

Google GKE

gcloud container clusters get-credentials <cluster name> --region <region> --project <project id>

Cluster Network Ports

To ensure that the KubeSlice gateway nodes function properly in both public and private clusters, open the required UDP ports.

Protocol: UDP
Ports: 30000-33000
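How you open this range depends on your cloud provider. For example (the firewall rule name, network name, and security group ID below are placeholders, not KubeSlice defaults):

```shell
# GKE: allow the KubeSlice gateway UDP range on the cluster's VPC network.
gcloud compute firewall-rules create kubeslice-gateway-udp \
  --network <network name> --allow udp:30000-33000

# EKS: add an inbound rule to the worker node security group.
aws ec2 authorize-security-group-ingress --group-id <security group id> \
  --protocol udp --port 30000-33000 --cidr 0.0.0.0/0
```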

Label KubeSlice Gateway Nodes

Labeling the gateway nodes on each worker cluster is required so that scheduling rules are applied correctly and gateway-to-gateway network communication is enabled.

Set this label on your gateway nodes: kubeslice.io/node-type=gateway.

You can set the label at cluster creation time (required for AKS), or after cluster creation by using the following commands:

EKS

kubectl label node <node name> kubeslice.io/node-type=gateway

GKE

gcloud container node-pools update <nodepool name> \
--node-labels=kubeslice.io/node-type=gateway \
--cluster=<cluster name> \
[--region=<region> | --zone=<zone>]

Verify Labeled Nodes

Verify that at least one node in the node pool is labeled.

Change to each cluster context:

kubectx <cluster name>

Validate:

kubectl get no -l kubeslice.io/node-type=gateway

Configure the Helm Repository

Add the helm repo using the following command:

helm repo add kubeslice https://kubeslice.aveshalabs.io/repository/kubeslice-helm-ent-prod/

Update the repo using the following command:

helm repo update

Verify the repo using the following command:

helm search repo kubeslice

Install Prometheus

If Prometheus is already part of your configuration, you can skip this step. Otherwise:

Create a monitoring namespace using the following command:

kubectl create ns monitoring

Install Prometheus using the following command:

helm install prometheus kubeslice/prometheus -n monitoring

Validate the Prometheus installation using the following command:

kubectl get pods -n monitoring

Get the Prometheus URL using the following command:

kubectl get svc -n monitoring | grep prometheus-server

The Prometheus URL is in the http://<ExternalNodeIp>:<port> format. For example, http://34.145.38.195:32700 is the Prometheus URL.
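The two URL components are the external IP of a node and the NodePort of the prometheus-server service. A minimal sketch of assembling them (the prometheus_url helper and the jsonpath queries in the comments are illustrative, not part of KubeSlice):

```shell
# Hypothetical helper: compose the Prometheus URL from an external node IP
# and a NodePort. The values passed below stand in for real kubectl output.
prometheus_url() {
  echo "http://$1:$2"
}

# On a live cluster you could fetch the pieces with kubectl, for example:
#   node_ip=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
#   node_port=$(kubectl get svc prometheus-server -n monitoring -o jsonpath='{.spec.ports[0].nodePort}')
prometheus_url 34.145.38.195 32700
# → http://34.145.38.195:32700
```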

Install Istio

You can skip these steps if the recommended Istio version is already installed on the cluster. Install Istio on all worker clusters participating in the configuration:

kubectx <cluster name>
kubectl create ns istio-system
helm install istio-base kubeslice/istio-base -n istio-system
helm install istiod kubeslice/istio-discovery -n istio-system

Validate the Istio Installation

Validate the Istio installation by checking that the pods are running:

kubectl get pods -n istio-system

KubeSlice Installation through kubeslice-cli

Create the topology configuration file using the following template to install KubeSlice Enterprise on clusters:

configuration:
  cluster_configuration:
    kube_config_path: <path to kubeconfig file>
    controller:
      name: <cluster name acting as controller>
      context_name: <controller cluster context name>
    workers:
      - name: <cluster name>
        context_name: <cluster context name>
      - name: <cluster name>
        context_name: <cluster context name>
  kubeslice_configuration:
    project_name: <create project name>
  helm_chart_configuration:
    repo_alias: kubeslice
    repo_url: https://kubeslice.aveshalabs.io/repository/kubeslice-helm-ent-prod/
    cert_manager_chart:
      chart_name: cert-manager
    controller_chart:
      chart_name: kubeslice-controller
    worker_chart:
      chart_name: kubeslice-worker
      values:
        "metrics.insecure": true
    ui_chart:
      chart_name: kubeslice-ui
    image_pull_secret: # The image pull secrets; optional for OpenSource, required for Enterprise
      registry: https://index.docker.io/v1/
      username: '<username from registration email>'
      password: '<password from registration email>'
      email: '<email provided during registration>'
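For concreteness, a filled-in sketch of the cluster and project portion for one controller and two workers (all cluster names, context names, paths, and the project name here are hypothetical; the remaining sections are completed the same way from the template above):

```yaml
configuration:
  cluster_configuration:
    kube_config_path: /home/user/.kube/config
    controller:
      name: controller-cluster
      context_name: gke_demo_us-east1_controller-cluster
    workers:
      - name: worker-1
        context_name: gke_demo_us-east1_worker-1
      - name: worker-2
        context_name: gke_demo_us-west1_worker-2
  kubeslice_configuration:
    project_name: demo-project
```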

Apply the Topology YAML File

Use the following command to apply the topology YAML file:

kubeslice-cli --config <path-to-the-topology.yaml> install

Retrieve the KubeSlice Manager Endpoint

Use the following command to retrieve the KubeSlice Manager endpoint:

kubeslice-cli get ui-endpoint -c <path-to-custom-topology-file>

Output format:

https://<Node-IP>:<Node-Port>

Enterprise Manager UI Login

To log in to the KubeSlice Manager (Enterprise Manager UI), you need a service account token-based kubeconfig file. The default admin user account can be used for the initial login.

Create a Service Account Token-based Kubeconfig File

Follow the instructions below, using the provided script, to create the service account token-based kubeconfig file for logging in to the KubeSlice Manager.

Kubeconfig Generation Script

Copy and paste the following script into a file named kube-configs.sh.

# The script returns a kubeconfig for the service account.
# kubectl must be on PATH, with the context set to the cluster for which
# you want to create the config file.

# Cosmetics for the created config
clusterName=$1
# Your server address; get it via `kubectl cluster-info`
server=$2
# The namespace and ServiceAccount name used for the config
namespace=$3
serviceAccount=$4

######################
# actual script starts
set -o errexit

secretName=$(kubectl --namespace "$namespace" get serviceAccount "$serviceAccount" -o jsonpath='{.secrets[0].name}')
ca=$(kubectl --namespace "$namespace" get secret "$secretName" -o jsonpath='{.data.ca\.crt}')
token=$(kubectl --namespace "$namespace" get secret "$secretName" -o jsonpath='{.data.token}' | base64 --decode)

echo "
---
apiVersion: v1
kind: Config
clusters:
- name: ${clusterName}
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: ${serviceAccount}@${clusterName}
  context:
    cluster: ${clusterName}
    namespace: ${namespace}
    user: ${serviceAccount}
users:
- name: ${serviceAccount}
  user:
    token: ${token}
current-context: ${serviceAccount}@${clusterName}
"

Get the Secrets

Use the following command to get the secrets for the Service Account and redirect to an output file called kubeconfig:

sh kube-configs.sh <controller-cluster-name> <controller-endpoint> kubeslice-<projectname> <serviceaccount-name> > kubeconfig

Access the KubeSlice Manager using the Kubeconfig File

Go to the URL that you retrieved as described in Retrieve the KubeSlice Manager Endpoint.

On the login page, under Enter Service Account Token, paste the service account token copied from the kubeconfig file that you generated with the script.
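Rather than locating the token by hand, it can be pulled out of the generated file with a short sketch (the extract_token helper is illustrative, not part of KubeSlice; it assumes the file layout produced by kube-configs.sh):

```shell
# Hypothetical helper: print only the service account token from a
# kubeconfig file so it can be pasted into the login form.
extract_token() {
  # The token line in the generated file looks like "    token: <value>".
  grep 'token:' "$1" | awk '{print $2}'
}

# Usage against the file produced by kube-configs.sh:
#   extract_token kubeconfig
```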

Alternatively, drop or upload the kubeconfig file that you created above into the box labeled Drop your KubeConfig file in the box or Click here to upload.

Click Sign in. After successful authentication, the KubeSlice Manager dashboard appears as the landing page.