
Rancher Deployments

The prerequisites in this section apply specifically to Rancher deployments of KubeSlice. If you are not using Rancher for your KubeSlice installation, you can safely skip this section, as it covers only Rancher-specific configurations and dependencies. If you are using Rancher, review these prerequisites carefully to ensure seamless integration of KubeSlice with Rancher and smooth functioning of your cluster management.

Infrastructure Requirements

KubeSlice Controller

Cluster Requirements: 1 Kubernetes Cluster
Minimum Nodes Required: 1, with a minimum of 2 vCPUs and 1.25 Gi of RAM
Supported Kubernetes Versions: 1.23 and 1.24
Supported Kubernetes Services: Azure Kubernetes Service (AKS), Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Rancher Kubernetes Engine (RKE)
Required Helm Version: 3.7.0

KubeSlice Worker Clusters

Minimum Clusters Required: 1 Kubernetes Cluster
Minimum Nodes or NodePools Required: 2, each with a minimum of 4 vCPUs and 16 GB of RAM
NodePools Reserved for KubeSlice Components: 1 NodePool
Supported Kubernetes Versions: 1.23 and 1.24
Supported Kubernetes Services: Azure Kubernetes Service (AKS), Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Rancher Kubernetes Engine (RKE)
Required Helm Version: 3.7.0
Required Istio Version: 1.13.3
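
Before proceeding, you can confirm that your local tooling matches these requirements. The following commands are a minimal sanity check, assuming helm, kubectl, and istioctl are already installed on your workstation:

    # Print the installed client versions and compare them with the requirements above
    helm version --short
    kubectl version
    istioctl version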

Open the TCP and UDP Ports on the KubeSlice Worker Clusters

The TCP and UDP ports listed below must be open on the nodes of the Rancher-managed worker clusters for inter-cluster communication.

info

On all the Rancher-managed worker clusters, ensure that TCP port 6443 and UDP ports 30000-33000 are open for inter-cluster communication.
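
How you open these ports depends on your cloud provider or firewall setup. As a rough reachability check, you can probe a node in another worker cluster with a tool such as netcat; the IP address below is only a placeholder:

    # Check TCP port 6443 on a remote worker node (placeholder IP)
    nc -vz 203.0.113.10 6443
    # Spot-check one UDP port in the 30000-33000 NodePort range
    # (UDP results from nc are indicative only; a firewall may silently drop packets)
    nc -vzu 203.0.113.10 30000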

Label the KubeSlice Gateway Nodes on the KubeSlice Worker Clusters

If you have multiple node pools on your worker cluster, you can add a label to each node pool. Labels are useful for managing scheduling rules for nodes.

info

Labeling gateway nodes applies only to worker clusters. You must perform these steps on all the participating worker clusters.

Label Node Pool

To label your node pool from the Rancher UI:

  1. Log in to the Rancher UI.
  2. Navigate to the top-left menu, and select the worker cluster.
  3. Under the Cluster tab, click Nodes.
  4. On the Nodes page, select the node you want to label and click the vertical ellipsis.
  5. Click Edit Config.
  6. Click the Labels & Annotations tab.
  7. Under the Labels section, click Add Label and enter these details:
    • For Key, enter kubeslice.io/node-type.
    • For Value, enter gateway.
  8. Click Save.
success

You have completed labeling the node pools of your worker cluster.
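
If you prefer working from the command line, the same label can also be applied with kubectl. This is an equivalent sketch rather than a Rancher-specific step; replace the node name with one of your gateway nodes:

    # Label a gateway node directly with kubectl
    kubectl label node <node-name> kubeslice.io/node-type=gateway
    # Verify that the label was applied
    kubectl get nodes -l kubeslice.io/node-type=gateway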

KubeSlice Controller Cluster

If you are a Rancher Marketplace user, do not follow external documentation for creating a Rancher workload cluster. Instead, follow the instructions below to create a Rancher workload cluster using Rancher Kubernetes Engine (RKE) specifically for installing the KubeSlice Controller + Manager (referred to henceforth as the KubeSlice Controller Cluster) through the Rancher user interface:

  1. Log into your Rancher account and navigate to the Cluster Manager by clicking the Global option on the left-hand side of the screen.

  2. Click the Add Cluster button to create a new cluster.

  3. Select RKE Cluster from the options provided.

  4. Provide a name for your cluster and select the desired options for your cluster nodes.

  5. Click on the Create button to create your new cluster.

  6. Wait for the cluster to be created and then proceed with the installation of the KubeSlice Controller + Manager.

warning
  • Do not install the KubeSlice Controller on the Rancher server cluster, to avoid authentication conflicts with Rancher's authentication proxy.
  • Make sure ACE is enabled on the Rancher workload cluster where you want to deploy the KubeSlice Controller.
  • Make sure the API server endpoint you provide while installing the KubeSlice Controller does not use the Rancher authentication proxy. Instead, use the downstream workload cluster's API server endpoint. When you enable ACE, the downstream workload cluster's API server endpoint is included in the kubeconfig file.
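
When ACE is enabled, the kubeconfig file downloaded from Rancher contains an additional context that targets the downstream cluster's API server directly instead of the Rancher proxy. One way to see which server address each entry uses is to inspect the file with kubectl; the path below is a placeholder:

    # List the contexts defined in the downloaded kubeconfig file
    kubectl config get-contexts --kubeconfig <path-to-downloaded-kubeconfig>
    # Print each cluster name with its API server address
    kubectl config view --kubeconfig <path-to-downloaded-kubeconfig> \
      -o jsonpath='{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'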

Enable ACE on the KubeSlice Controller Cluster

The KubeSlice Controller must be installed only on a cluster provisioned by Rancher Kubernetes Engine (RKE) with the authorized cluster endpoint (ACE) enabled.

To enable ACE on the Rancher user interface:

  1. Go to Cluster Management.

  2. Select the workload cluster and click its vertical ellipsis.

  3. Select Edit Config from the menu.

  4. Enable the authorized endpoint.

  5. To verify if ACE is enabled, click the workload cluster's vertical ellipsis.

  6. From the menu, click View YAML.

  7. In the YAML file, look for localClusterAuthEndpoint, which must be enabled.
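
When ACE is enabled, the relevant section of the cluster YAML looks roughly like the following excerpt (other fields omitted):

    # Excerpt from the cluster YAML shown by View YAML
    localClusterAuthEndpoint:
      enabled: true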

Get the Controller Cluster Endpoint

The Controller Cluster endpoint is required to install the KubeSlice Controller. To get the controller cluster endpoint:

  1. Download the kubeconfig file for the KubeSlice Controller cluster from the Rancher user interface, and point kubectl to it using this command:

    export KUBECONFIG=<fully qualified path of the kubeconfig file>

  2. Set up kubectl proxy using the following command:

    kubectl proxy --append-server-path
  3. In a web browser, enter http://localhost:8001/api/ in the address bar.

  4. You see the serverAddress in the response, as shown in the example after these steps.

  5. To install the KubeSlice Controller, you need to copy the serverAddress, which is the endpoint of the controller cluster. You will then use this endpoint value to fill in the appropriate field in the values.yaml file during installation.
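
For reference, the response from http://localhost:8001/api/ is a small JSON document in which the serverAddress appears under serverAddressByClientCIDRs; the address below is only a placeholder:

    {
      "kind": "APIVersions",
      "versions": ["v1"],
      "serverAddressByClientCIDRs": [
        {
          "clientCIDR": "0.0.0.0/0",
          "serverAddress": "203.0.113.20:6443"
        }
      ]
    }

In most KubeSlice Controller chart versions, this endpoint is set as the kubeslice.controller.endpoint value, but confirm the exact field against the values.yaml shipped with your chart.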