Install the Kubeslice Controller
The KubeSlice Controller orchestrates the creation and management of slices across the worker clusters. The KubeSlice Controller components and the worker cluster components can coexist on a cluster. Hence, the cluster running the KubeSlice Controller can also be used as a worker cluster.
However, we recommend running the KubeSlice Controller on a separate cluster.
KubeSlice Controller Components
KubeSlice Controller installs the following:
- KubeSlice Controller-specific CustomResourceDefinitions (CRDs)
- ClusterRole, ServiceAccount, and ClusterRoleBinding for the KubeSlice Controller
- A Role and RoleBinding for KubeSlice Controller leader election
- KubeSlice Controller workload
- KubeSlice Controller API Gateway
Create KubeSlice Controller YAML
To install the KubeSlice Controller on one of the clusters, create a controller.yaml file that contains the endpoint of the controller cluster. The endpoint is the location of the cluster on which you install the KubeSlice Controller. Installing the KubeSlice Controller also installs Prometheus with default settings: a Persistent Volume of 5 GB and a retention period of 30 days. You can change the Prometheus default configuration.
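For example, a sketch of overriding the Prometheus defaults in controller.yaml is shown below. The key names are assumptions for illustration only; run helm show values kubeslice/kubeslice-controller to confirm the exact Prometheus keys exposed by the chart.
# Sketch only: the key names below are assumed, not confirmed against the chart.
kubeslice:
  prometheus:
    storageSize: 10Gi     # assumed key; would override the default 5 GB Persistent Volume
    retentionPeriod: 15d  # assumed key; would override the default 30-day retention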
Get the Cluster Endpoint
Use the following command to get the cluster endpoint:
kubectl cluster-info
Example output
Kubernetes control plane is running at https://aks-controller-cluster-dns-06a5f5da.hcp.westus2.azmk8s.io:443
addon-http-application-routing-default-http-backend is running at https://aks-controller-cluster-dns-06a5f5da.hcp.westus2.azmk8s.io:443/api/v1/namespaces/kube-system/services/addon-http-application-routing-default-http-backend/proxy
addon-http-application-routing-nginx-ingress is running at http://40.125.122.238:80 http://40.125.122.238:443
healthmodel-replicaset-service is running at https://aks-controller-cluster-dns-06a5f5da.hcp.westus2.azmk8s.io:443/api/v1/namespaces/kube-system/services/healthmodel-replicaset-service/proxy
CoreDNS is running at https://aks-controller-cluster-dns-06a5f5da.hcp.westus2.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://aks-controller-cluster-dns-06a5f5da.hcp.westus2.azmk8s.io:443/ap
From the output above, copy the Kubernetes control plane URL and add it as the cluster endpoint in the controller.yaml file. For example:
https://aks-controller-cluster-dns-06a5f5da.hcp.westus2.azmk8s.io:443
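Alternatively, if you only need the control plane URL, the following command prints it directly from the current kubeconfig context (make sure the context points to the controller cluster):
kubectl config view --minify --output jsonpath='{.clusters[0].cluster.server}'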
View Cost Allocation
You can view the aggregated chargeback amounts for multi-cluster Kubernetes environments through the KubeSlice Manager.
To enable the KubeTally feature in the KubeSlice Manager, you must enable it in the YAML file. The default value of kubeTally.enabled is false. Set it to true and configure the PostgreSQL database as a prerequisite in the YAML file.
The current version of KubeTally supports only AWS clusters, so you can view cost allocation only for AWS clusters.
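The controller.yaml template below references a secret named kubetally-db-credentials for the PostgreSQL connection details. A sketch of creating that secret is shown below; the key names mirror the postgres* values in the template and are illustrative assumptions, so verify the keys the chart expects before use. Create the kubeslice-controller namespace first if it does not exist yet.
# Sketch only: key names are assumed to mirror the postgres* values in controller.yaml.
kubectl create namespace kubeslice-controller
kubectl create secret generic kubetally-db-credentials \
  --namespace kubeslice-controller \
  --from-literal=postgresAddr=<db-host> \
  --from-literal=postgresPort=5432 \
  --from-literal=postgresUser=<db-user> \
  --from-literal=postgresPassword=<db-password> \
  --from-literal=postgresDB=<db-name>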
Create the Controller YAML
Create the controller.yaml
file using the following template.
To know more about the configuration details, see KubeSlice Controller parameters.
# Default values for k-native.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
global:
  imageRegistry: docker.io/aveshasystems
  # Profile settings (e.g., for OpenShift).
  # If you are installing in an OpenShift cluster, set this variable to true.
  profile:
    openshift: false
  kubeTally:
    # Enable or disable KubeTally
    enabled: false
    postgresSecretName: kubetally-db-credentials # Default value, secret name can be overridden
    # Be sure to configure the mandatory PostgreSQL database settings when 'kubeTally.enabled' is true.
    postgresAddr: ""     # Optional, can be specified here or retrieved from the secret
    postgresPort:        # Optional, can be specified here or retrieved from the secret
    postgresUser: ""     # Optional, can be specified here or retrieved from the secret
    postgresPassword: "" # Optional, can be specified here or retrieved from the secret
    postgresDB: ""       # Optional, can be specified here or retrieved from the secret
    postgresSslmode: require
kubeslice:
  # Configuration for the KubeSlice Controller
  controller:
    # Log level for the controller
    logLevel: info
    # Endpoint for the controller (should be specified if needed)
    endpoint: <control-plane endpoint>
    # Image pull policy for the KubeSlice Controller
    pullPolicy: IfNotPresent
    # License details: by default, mode is set to auto and type to trial.
    # Provide a company name or user name as customerName.
    license:
      # possible license type values ["kubeslice-trial-license"]
      type: kubeslice-trial-license
      # possible license mode - ["auto", "manual"]
      mode: auto
      # provide a company name or user name as customerName
      customerName: <customer name>
imagePullSecretsName: "kubeslice-image-pull-secret"
# Leave the fields below empty if secrets are managed externally.
imagePullSecrets:
  repository: https://index.docker.io/v1/
  username: <user-name>
  password: <password>
  email: <email-address>
  dockerconfigjson: ## Value to be used if using external secret managers
Apply Controller YAML
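The command below assumes that the kubeslice Helm chart repository has already been added to your local Helm configuration. If it has not, add it first; the repository URL is shown as a placeholder here, so use the URL provided for your KubeSlice distribution:
helm repo add kubeslice <kubeslice-helm-repository-url>
helm repo update
Then install the controller chart, passing the controller.yaml file you created: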
helm install kubeslice-controller kubeslice/kubeslice-controller -f <full path of the controller>.yaml --namespace kubeslice-controller --create-namespace
Expected Output
NAME: kubeslice-controller
LAST DEPLOYED: Thu Nov 11 13:12:49 2024
NAMESPACE: kubeslice-controller
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
kubeslice-controller installation successful!
Validate Controller Installation
Validate the installation of the KubeSlice Controller by checking the status of the pods
that belong to the kubeslice-controller
namespace using the following command:
kubectl get pods -n kubeslice-controller
Expected Output
NAME READY STATUS RESTARTS AGE
kubeslice-controller-manager-5bf66447b7-nmxr7 2/2 Running 0 35h
kubeslice-controller-prometheus-service-7bdc699b5-jrgwj 2/2 Running 0 35h
license-job-f38c6fb9-fg4dm 0/1 Completed 0 35h
Expected Output when KubeTally is enabled
The KubeTally pricing updater job is a one-time job that fetches the latest resource prices and updates them in the database.
NAME READY STATUS RESTARTS AGE
kubeslice-controller-kubetally-pricing-service-b67cb59cc-h72bl 1/1 Running 0 23h
kubeslice-controller-kubetally-pricing-updater-job-vdmv5 0/1 Completed 0 23h
kubeslice-controller-kubetally-report-756dff5fb4-ztjnb 1/1 Running 0 4h22m
kubeslice-controller-manager-7dd5b4c7fd-kf9th 2/2 Running 0 23h
kubeslice-controller-prometheus-service-7bdc699b5-5cmv5 2/2 Running 0 47h
license-job-9f5fb056-6jzz2 0/1 Completed 0 23h
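You can also confirm that the KubeSlice Controller CRDs listed at the beginning of this topic were installed:
kubectl get crds | grep kubeslice.io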
Validate the MinIO Backup Storage
Validate the MinIO backup storage on the controller cluster using the following command:
kubectl get pods -n minio
Expected Output
NAMESPACE NAME READY STATUS RESTARTS AGE
minio minio-7459dd6949-hw55w 1/1 Running 0 40s
Install the KubeSlice Manager
KubeSlice Manager is a web-based user interface that allows you to register your worker clusters, create a slice on the registered worker cluster(s), and onboard your application namespaces with or without enabling namespace isolation. KubeSlice Manager also enables you to access the Kubernetes dashboard to see the workload status of your worker clusters. You must install the KubeSlice Manager on the controller cluster.
Create KubeSlice Manager YAML
Create the kubeslice-manager.yaml
file for the KubeSlice Manager using the following template.
To know more about the configuration details, see KubeSlice Manager parameters.
# Default values for k-native.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
global:
  imageRegistry: docker.io/aveshasystems
  profile:
    openshift: false
  kubeTally:
    costApiUrl: http://kubetally-pricing-service:30001
    enabled: false
kubeslice:
  productName: kubeslice
  dashboard:
    enabled: true
  uiproxy:
    service:
      type: LoadBalancer
imagePullSecretsName: "kubeslice-ui-image-pull-secret"
# Leave the fields below empty if secrets are managed externally.
imagePullSecrets:
  repository: https://index.docker.io/v1/
  username: "<username>"
  password: "<password>"
  email: "<email address>"
  dockerconfigjson: ## Value to be used if using external secret managers
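In this template, the uiproxy service type is LoadBalancer. If your cluster cannot provision an external load balancer, the service can be exposed as a NodePort instead (the service listing later in this topic shows kubeslice-ui-proxy running as a NodePort). A minimal sketch of that override:
kubeslice:
  uiproxy:
    service:
      type: NodePort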
Apply the KubeSlice Manager YAML File
Apply the kubeslice-manager.yaml
file:
helm install kubeslice-ui kubeslice/kubeslice-ui -f kubeslice-manager.yaml -n kubeslice-controller
Validate the KubeSlice Manager Installation
To validate the installation, check the status of pods that belong to
the kubeslice-controller
namespace using the following command:
kubectl get pods -n kubeslice-controller
Expected Output
NAME READY STATUS RESTARTS AGE
kubeslice-api-gw-6bdd86c574-cgtfv 1/1 Running 0 35h
kubeslice-controller-manager-5bf66447b7-nmxr7 2/2 Running 0 35h
kubeslice-controller-prometheus-service-7bdc699b5-jrgwj 2/2 Running 0 35h
kubeslice-ui-576f94544-5nb8w 1/1 Running 0 35h
kubeslice-ui-proxy-6645d66cb5-h779v 1/1 Running 0 35h
kubeslice-ui-v2-57bdb69797-kk8rd 1/1 Running 0 8h
license-job-f38c6fb9-fg4dm 0/1 Completed 0 35h
Expected Output when KubeTally (Cost Management) is enabled
The KubeTally pricing updater job is a one-time job that fetches the latest resource prices and updates them in the database.
NAME READY STATUS RESTARTS AGE
kubeslice-api-gw-b9c7b7f7d-f4png 1/1 Running 0 23h
kubeslice-controller-kubetally-pricing-service-b67cb59cc-h72bl 1/1 Running 0 23h
kubeslice-controller-kubetally-pricing-updater-job-vdmv5 0/1 Completed 0 23h
kubeslice-controller-kubetally-report-756dff5fb4-ztjnb 1/1 Running 0 4h22m
kubeslice-controller-manager-7dd5b4c7fd-kf9th 2/2 Running 0 23h
kubeslice-controller-prometheus-service-7bdc699b5-5cmv5 2/2 Running 0 47h
kubeslice-ui-66f7f686d5-77x78 1/1 Running 0 23h
kubeslice-ui-proxy-685d47f756-f57wk 1/1 Running 0 23h
kubeslice-ui-v2-856c74c4f7-2vg2t 1/1 Running 0 23h
license-job-9f5fb056-6jzz2 0/1 Completed 0 23h
Validate Kubernetes Dashboard
To validate the installation of the Kubernetes dashboard, check the status of pods that belong to
the kubernetes-dashboard
namespace using the following command:
kubectl get pods -n kubernetes-dashboard
Expected Output
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-api-6f89c48d74-rfs46 1/1 Running 0 2d10h
kubernetes-dashboard-auth-767d5d7864-xqxpd 1/1 Running 0 12d
kubernetes-dashboard-kong-7b7c75db8d-tspql 1/1 Running 0 2d10h
kubernetes-dashboard-metrics-scraper-fb7df48f5-xml67 1/1 Running 0 12d
kubernetes-dashboard-web-c74cddfcb-9bwtq 1/1 Running 0 2d10h
Access KubeSlice Manager URL
To access the KubeSlice Manager URL, retrieve the external IP address and port of the kubeslice-ui-proxy service. Use the following command to list the services in the kubeslice-controller namespace:
kubectl get svc -n kubeslice-controller
Expected Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubeslice-api-gw ClusterIP 10.96.33.222 <none> 8080/TCP 44m
kubeslice-controller-controller-manager-metrics-service ClusterIP 10.96.14.102 <none> 8443/TCP 46m
kubeslice-controller-kubetally-pricing-service ClusterIP 10.96.38.210 <none> 30001/TCP 3m54s
kubeslice-controller-prometheus-service ClusterIP 10.96.216.219 <none> 9090/TCP 46m
kubeslice-controller-webhook-service ClusterIP 10.96.168.219 <none> 443/TCP 46m
kubeslice-ui ClusterIP 10.96.131.238 <none> 80/TCP 44m
kubeslice-ui-proxy NodePort 10.96.169.126 <none> 443:31000/TCP 44m
kubeslice-ui-v2 ClusterIP 10.96.192.35 <none> 80/TCP 44m
URL Example
Using the external IP address and port from the service output, the KubeSlice Manager URL takes the following form:
https://34.159.124.159:30257
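When kubeslice-ui-proxy is exposed through a load balancer, one way to print only its external address is shown below (a sketch; some cloud providers populate a hostname rather than an IP, in which case use .hostname):
kubectl get svc kubeslice-ui-proxy -n kubeslice-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'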
You have successfully installed the KubeSlice Manager on a controller cluster.
Integrate an Identity Provider with KubeSlice
You must integrate a supported Identity Provider (IdP) with KubeSlice to enable Slice RBAC functionality.
For more information, see Configure Identity Provider.
Create Project Namespace
A project may represent an individual customer, an organization, or a department within an organization. Each project has a dedicated, auto-generated namespace, which ensures that the resources of one project do not clash with the resources of another project.
For example, a slice with the same name can exist across multiple projects but with different configurations. Changes to the slice in one project will not affect the slice in another project. For more information, see the KubeSlice Architecture.
Create Project YAML
Create a project namespace by creating a <project_name>.yaml
file using the following template.
To know more about the configuration details, see project namespace parameters.
apiVersion: controller.kubeslice.io/v1alpha1
kind: Project
metadata:
  name: <project name>
  namespace: kubeslice-controller
spec:
  serviceAccount:
    readOnly:
      - <readonly user1>
      - <readonly user2>
      - <readonly user3>
    readWrite:
      - <readwrite user1>
      - <readwrite user2>
      - <readwrite user3>
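For example, a project named avesha with one read-only user and one read-write user (matching the service accounts shown in the validation output later in this topic) might look like the following; the user names are illustrative:
apiVersion: controller.kubeslice.io/v1alpha1
kind: Project
metadata:
  name: avesha
  namespace: kubeslice-controller
spec:
  serviceAccount:
    readOnly:
      - user1
    readWrite:
      - user2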
Apply Project YAML
Apply the <project_name>.yaml file that you created to create the project:
kubectl apply -f <full path of the project name>.yaml -n kubeslice-controller
Project Validation
After applying the YAML file, you can validate that the project and its service accounts were created successfully.
Validate the Project
Use the following command on the kubeslice-controller namespace to list the projects:
kubectl get project -n kubeslice-controller
Expected Output
NAME AGE
avesha 30s
Validate the Service Accounts
To validate the account creation, check the service accounts that belong to the project namespace using the following command:
kubectl get sa -n kubeslice-<project name>
Example:
kubectl get sa -n kubeslice-avesha
Example Output
NAME SECRETS AGE
default 1 30s
kubeslice-rbac-ro-user1 1 30s
kubeslice-rbac-rw-user2 1 30s
You have successfully installed the KubeSlice Controller and created the project with a dedicated namespace.