Version: 1.17.0

Create Your First Slice

In this guide you will learn how to install KubeSlice and create your first slice.

Install the KubeSlice Controller

The KubeSlice Controller is a component of KubeSlice that manages the lifecycle of slices and their associated resources. To install the KubeSlice Controller, follow these steps:

Add and Verify the KubeSlice Helm Repository

To add the KubeSlice Helm repository, use the following command:

helm repo add kubeslice https://charts.kubeslice.io

After adding the repository, verify that it has been added successfully using the following command:

helm repo list

You should see the KubeSlice repository in the list of Helm repositories.

Create a Namespace for KubeSlice

Before installing KubeSlice, you need to create a namespace on the controller cluster. You can create a namespace using the following command:

kubectl create namespace kubeslice-controller

Apply the License File

Before installing KubeSlice, you need to apply the license file to the cluster. You can apply the license file using the following command:

kubectl apply -f <path-to-license-file> -n kubeslice-controller

After the license is applied, it is stored securely as a secret in the kubeslice-controller namespace, and you can manage it through the CLI. View the license secret using the following command:

kubectl get secret <license-secret-name> -n kubeslice-controller -o yaml

Install KubeSlice Controller

Get the PostgreSQL secret details, the Kubernetes control plane endpoint, and the Prometheus endpoint URL. You will need these details to configure the values-controller.yaml file, which is used to install the KubeSlice Controller.

  1. Get the PostgreSQL secret details using the following command:

    info

    The PostgreSQL secret is created in the kubeslice-controller namespace as part of the prerequisites installation. Use these secrets, not the ones in the kt-postgresql namespace. The secret name is kubetally-db-credentials. The values must be base64 decoded.

    Example

    kubectl get secret kubetally-db-credentials -n kubeslice-controller -o json | jq -r '.data | to_entries[] | "\(.key)=\(.value|@base64d)"'
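
    The decoding step that the jq filter above performs can be sketched with plain base64 (the value below is a sample, not a real credential):

    ```shell
    # Sketch: Secret .data values are base64-encoded; @base64d (or base64 --decode) reverses that.
    encoded=$(printf 'postgres' | base64)    # 'postgres' is a sample value, not a real credential
    echo "postgresUser=${encoded}"                                   # as stored in the Secret
    echo "postgresUser=$(printf '%s' "$encoded" | base64 --decode)"  # as needed in plain text
    ```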
  2. Get the Kubernetes control plane endpoint using the following command:

    kubectl cluster-info

    Example Output

    Kubernetes control plane is running at https://pu.mk8scluster-e00w111mv8rn8em35z.mk8s.eu-north1.nebius.cloud:443
    CoreDNS is running at https://pu.mk8scluster-e00w111mv8rn8em35z.mk8s.eu-north1.nebius.cloud:443/api/v1/namespaces/kube-system/services/coredns:udp-53/proxy
  3. Get the Prometheus endpoint URL using the following command:

    kubectl get svc -n kubeslice-monitoring

    Example Output

    NAME                                    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)         AGE
    prometheus-operated                     ClusterIP      None            <none>           9090/TCP        10m
    prometheus-kube-prometheus-prometheus   LoadBalancer   10.43.240.123   129.1XX.116.71   443:32000/TCP   10m

    For example, the PrometheusUrl value is "http://prometheus-kube-prometheus-prometheus.kubeslice-monitoring.svc.cluster.local:9090".
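
    The PrometheusUrl value above follows the standard in-cluster service DNS form, which can be sketched as:

    ```shell
    # Sketch: build the in-cluster Prometheus URL as http://<service>.<namespace>.svc.cluster.local:<port>
    SVC=prometheus-kube-prometheus-prometheus   # service name from `kubectl get svc -n kubeslice-monitoring`
    NS=kubeslice-monitoring
    PORT=9090                                   # Prometheus port from the PORT(S) column
    echo "http://${SVC}.${NS}.svc.cluster.local:${PORT}"
    ```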

  4. Create a values-controller.yaml file with the following properties:

    # Default values for k-native.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.

    global:
      imageRegistry: docker.io/aveshasystems
      # Profile settings (e.g., for OpenShift)
      profile:
        # if you are installing in an OpenShift cluster, set this variable to true
        openshift: false
      kubeTally:
        # Enable or disable KubeTally
        enabled: true

        postgresSecretName: kubetally-db-credentials # Default value, secret name can be overridden
        # Ensure to configure the mandatory PostgreSQL database settings when 'kubeTally.enabled' is true.
        postgresAddr: "" # Optional, can be specified here or retrieved from the secret
        postgresPort: # Optional, can be specified here or retrieved from the secret
        postgresUser: "" # Optional, can be specified here or retrieved from the secret
        postgresPassword: "" # Optional, can be specified here or retrieved from the secret
        postgresDB: "" # Optional, can be specified here or retrieved from the secret
        postgresSslmode: require

    kubeslice:
      # Configuration for the KubeSlice controller
      # user can configure labels or annotations that KubeSlice Controller namespaces should have
      namespaceConfig:
        labels: {}
        annotations: {}
      controller:
        # Log level for the controller
        logLevel: info
        # Endpoint for the controller (should be specified if needed)
        endpoint: <control-plane endpoint>
        # Image pull policy for the KubeSlice controller
        pullPolicy: IfNotPresent

      # license details: by default, mode is set to auto and license to trial
      license:
        # possible license type values ["kubeslice-trial-license"]
        type: kubeslice-trial-license
        # possible license mode - ["auto", "manual"]
        mode: auto
        # please give company-name or user-name as customerName
        customerName: <customer name>

    imagePullSecretsName: "kubeslice-image-pull-secret"
    # leave the below fields empty if secrets are managed externally.
    imagePullSecrets:
      repository: https://index.docker.io/v1/
      username: <user-name>
      password: <password>
      email: <email-address>
      dockerconfigjson: ## Value to be used if using external secret managers
    note

    In a multi-cluster deployment, the controller cluster must be able to reach the Prometheus endpoints running on the worker clusters.

    warning

    If the Prometheus endpoints are not configured, you may experience issues with the dashboards (for example, missing or incomplete metric displays).
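
    As an illustration, the key placeholders might be filled in as follows. The endpoint is the example control-plane URL from step 2; the customer name is hypothetical:

    ```yaml
    kubeslice:
      controller:
        # example control-plane endpoint from `kubectl cluster-info` (step 2)
        endpoint: https://pu.mk8scluster-e00w111mv8rn8em35z.mk8s.eu-north1.nebius.cloud:443
      license:
        type: kubeslice-trial-license
        mode: auto
        customerName: ExampleCorp   # hypothetical company name
    ```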

  5. Use the values-controller.yaml file in the following command to install the KubeSlice Controller:

    helm install kubeslice-controller kubeslice/kubeslice-controller -f values-controller.yaml -n kubeslice-controller --create-namespace

    Example Output

    NAME: kubeslice-controller
    LAST DEPLOYED: Thu Nov 11 13:12:49 2024
    NAMESPACE: kubeslice-controller
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    kubeslice-controller installation successful!
  6. Verify the installation by checking the status of the KubeSlice Controller pod using the following command:

    kubectl get pods -n kubeslice-controller

    Example Output

    NAME                                                             READY   STATUS      RESTARTS   AGE
    kubeslice-controller-kubetally-pricing-service-b67cb59cc-h72bl   1/1     Running     0          23h
    kubeslice-controller-kubetally-pricing-updater-job-vdmv5         0/1     Completed   0          23h
    kubeslice-controller-kubetally-report-756dff5fb4-ztjnb           1/1     Running     0          4h22m
    kubeslice-controller-manager-7dd5b4c7fd-kf9th                    2/2     Running     0          23h
    kubeslice-controller-prometheus-service-7bdc699b5-5cmv5          2/2     Running     0          47h
    license-job-9f5fb056-6jzz2                                       0/1     Completed   0          23h
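
A quick way to confirm that nothing is stuck is to check that every pod reports Running or Completed. A minimal sketch (the listing is inlined as sample data; in practice, pipe the `kubectl get pods -n kubeslice-controller` output in):

```shell
# Sketch: flag any pod whose STATUS column is neither Running nor Completed.
check_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { bad = 1; print "Not ready: " $1 }
       END { exit bad }'
}

# Sample listing standing in for: kubectl get pods -n kubeslice-controller | check_pods
printf '%s\n' \
  'NAME READY STATUS RESTARTS AGE' \
  'kubeslice-controller-manager-7dd5b4c7fd-kf9th 2/2 Running 0 23h' \
  'license-job-9f5fb056-6jzz2 0/1 Completed 0 23h' | check_pods && echo "all pods healthy"
```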

Install the KubeSlice Manager

KubeSlice Manager is the user interface for KubeSlice that allows you to manage your clusters, slices, and applications. To install the KubeSlice Manager, follow these steps:

  1. Create a file called values-ui.yaml with the following properties:

    # Default values for k-native.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
    global:
      imageRegistry: docker.io/aveshasystems
      profile:
        openshift: false
      kubeTally:
        costApiUrl: http://kubetally-pricing-service:30001
        enabled: false
    kubeslice:
      productName: kubeslice
      dashboard:
        enabled: true
      uiproxy:
        service:
          type: ClusterIP # Service type for the UI proxy
          ## if the type is set to NodePort, set the nodePort value if required
          # nodePort:
          # port: 443
          # targetPort: 8443
        labels:
          app: kubeslice-ui-proxy
        annotations: {}

        ingress:
          ## If true, a ui-proxy Ingress will be created
          enabled: false
          ## Port on the Service to route to
          servicePort: 443
          ## Ingress class name (e.g. "nginx"), if you are using a custom ingress controller
          className: ""
          hosts:
            - host: ui.kubeslice.com # replace with your FQDN
              paths:
                - path: / # base path
                  pathType: Prefix # Prefix | Exact
          ## TLS configuration (you must create these Secrets ahead of time)
          tls: []
          # - hosts:
          #   - ui.kubeslice.com
          #   secretName: uitlssecret
          annotations: []
          ## Extra labels to add onto the Ingress object
          extraLabels: {}

    imagePullSecretsName: "kubeslice-ui-image-pull-secret"
    # leave the below fields empty if secrets are managed externally.
    imagePullSecrets:
      repository: https://index.docker.io/v1/
      username: "<username>"
      password: "<password>"
      email: "<email address>"
      dockerconfigjson: ## Value to be used if using external secret managers
  2. Use the values-ui.yaml in the following command to install the KubeSlice Manager:

    helm install kubeslice-ui kubeslice/kubeslice-ui -f values-ui.yaml -n kubeslice-controller

    Verify the installation by checking the status of the pods using the following command:

    kubectl get pods -n kubeslice-controller

    Example Output

    NAME                                                             READY   STATUS      RESTARTS   AGE
    kubeslice-api-gw-b9c7b7f7d-f4png                                 1/1     Running     0          23h
    kubeslice-controller-kubetally-pricing-service-b67cb59cc-h72bl   1/1     Running     0          23h
    kubeslice-controller-kubetally-pricing-updater-job-vdmv5         0/1     Completed   0          23h
    kubeslice-controller-kubetally-report-756dff5fb4-ztjnb           1/1     Running     0          4h22m
    kubeslice-controller-manager-7dd5b4c7fd-kf9th                    2/2     Running     0          23h
    kubeslice-controller-prometheus-service-7bdc699b5-5cmv5          2/2     Running     0          47h
    kubeslice-ui-66f7f686d5-77x78                                    1/1     Running     0          23h
    kubeslice-ui-proxy-685d47f756-f57wk                              1/1     Running     0          23h
    kubeslice-ui-v2-856c74c4f7-2vg2t                                 1/1     Running     0          23h
    license-job-9f5fb056-6jzz2                                       0/1     Completed   0          23h

Create a Project

  1. Create a file called project.yaml with the following properties:

    apiVersion: controller.kubeslice.io/v1alpha1
    kind: Project
    metadata:
      name: avesha
      namespace: kubeslice-controller
    spec:
      serviceAccount:
        readWrite:
          - admin
  2. Apply the project.yaml file by using it in the following command in the controller cluster:

    kubectl apply -f project.yaml -n kubeslice-controller
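
Applying the Project creates a dedicated namespace for it on the controller cluster, named kubeslice-<project-name>; this naming is why later commands use the kubeslice-avesha namespace. A minimal sketch:

```shell
# Sketch: the project namespace is derived as kubeslice-<project-name>
PROJECT=avesha   # metadata.name from project.yaml
echo "kubeslice-${PROJECT}"
```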

Log in to the KubeSlice Manager

To access the KubeSlice Manager, you need to retrieve the Manager URL and the admin access token.

  1. To get the KubeSlice Manager URL, use the following command:

    # Check KubeSlice Manager pod status
    kubectl get pods -n kubeslice-controller | grep kubeslice-ui-proxy

    Ensure that the kubeslice-ui-proxy pod is in the Running state before proceeding.

    Depending on the service type configured in the values-ui.yaml file, use one of the following methods to get the Admin Portal URL:

    1. For LoadBalancer service type, use the following command to get the external IP or hostname of the LoadBalancer:

      kubectl get svc kubeslice-ui-proxy -n kubeslice-controller

      Then, use the following command to get the external IP or hostname directly:

      kubectl get svc kubeslice-ui-proxy -n kubeslice-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' 2>/dev/null || \
      kubectl get svc kubeslice-ui-proxy -n kubeslice-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null

      Example

      kubectl get svc kubeslice-ui-proxy -n kubeslice-controller

      Example Output

      NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)         AGE
      kubeslice-ui-proxy   LoadBalancer   10.128.144.231   139.144.167.243   443:32185/TCP   9m23s

      Example

      kubectl get svc kubeslice-ui-proxy -n kubeslice-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' 2>/dev/null || \
      kubectl get svc kubeslice-ui-proxy -n kubeslice-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null

      Example Output

      139-144-167-243.ip.linodeusercontent.com
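
      The `||` fallback in the command above prefers the LoadBalancer hostname and falls back to the IP. The resulting URL can be sketched as (sample values from the output above):

      ```shell
      # Sketch: prefer the LoadBalancer hostname; fall back to the IP if no hostname is set
      LB_HOSTNAME=139-144-167-243.ip.linodeusercontent.com   # from the hostname jsonpath query
      LB_IP=139.144.167.243                                  # from the ip jsonpath query
      ADDR=${LB_HOSTNAME:-$LB_IP}
      echo "https://${ADDR}"
      ```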
    2. For NodePort service type, use the following command to get the Node IP and Node Port:

      kubectl get svc kubeslice-ui-proxy -n kubeslice-controller

      Then, use the following command to get the Node IP and Node Port directly:

      # Get the Node IP (ExternalIP, falling back to InternalIP)
      NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}' 2>/dev/null | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n1)
      if [ -z "$NODE_IP" ]; then
        NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}' 2>/dev/null | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n1)
      fi

      # Get the Node Port
      NODE_PORT=$(kubectl get svc kubeslice-ui-proxy -n kubeslice-controller -o jsonpath='{.spec.ports[0].nodePort}' 2>/dev/null)

      echo "https://$NODE_IP:$NODE_PORT"

      Example Output

      echo $NODE_IP
      139.177.207.126

      echo "https://$NODE_IP:$NODE_PORT"
      https://139.177.207.126:32185
    3. For ClusterIP service type, use the following command to port-forward the service to your local machine and access it through localhost:

      kubectl port-forward -n kubeslice-controller svc/kubeslice-ui-proxy 8080:443
      echo "https://localhost:8080"

      While the port-forward is running, the Admin Portal URL is https://localhost:8080.

      Example Output

      Forwarding from 127.0.0.1:8080 -> 8443
      Forwarding from [::1]:8080 -> 8443
  2. To get the admin access token, use the following command, where kubeslice-avesha is the project namespace (kubeslice-<project-name>) created when you applied project.yaml:

    kubectl get secret kubeslice-rbac-rw-admin -o jsonpath="{.data.token}" -n kubeslice-avesha
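
    The value returned by the jsonpath query is base64-encoded, as all Secret .data values are. Decoding can be sketched as follows (with a stand-in token, not a real one):

    ```shell
    # Sketch: decode the base64 token before using it to log in to the Manager.
    TOKEN_B64=$(printf 'sample-token' | base64)   # 'sample-token' stands in for the real kubectl output
    printf '%s' "$TOKEN_B64" | base64 --decode && echo
    ```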

Register a Worker Cluster

After a successful login, you can start using the KubeSlice Manager to manage your Kubernetes clusters and applications.

To register a worker cluster, you can use either the KubeSlice Manager or the CLI.

Create a Slice

A slice is a logical representation of a group of resources across one or more clusters. It allows you to manage and monitor these resources as a single entity.

To create a slice, you can use the KubeSlice Manager or the CLI. For more information on how to create a slice, see Create a Slice.