Configure Karpenter with Rancher on the Linode Cluster
This topic describes the steps to deploy Karpenter with Rancher on the Linode cluster.
Procure the Karpenter License
The following are the steps to procure a Karpenter license.
- Contact Avesha Sales at sales@avesha.io to obtain a Karpenter license.
- License options:
  - A trial license is available for evaluation purposes and is valid for a limited time.
  - For production deployments, a commercial license must be purchased.
- Retrieve the cluster ID (for Rancher on Linode) using the following command (sample output follows this list):
  kubectl get namespace kube-system -o jsonpath='{.metadata.uid}'
- To set resource limits, specify the maximum number of vCPUs that Karpenter is allowed to provision in the cluster.

You must provide the cluster ID for each cluster you intend to license. Avesha issues licenses on a per-cluster basis. After processing your request, Avesha shares the license details with you.
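The cluster ID is the UID of the kube-system namespace, so the command's output looks similar to the following (the UID shown is illustrative):

8d3f5c2a-1b4e-4f6a-9c7d-2e8b0a1d3f5c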
Deploy Karpenter
Several of the chart values used later in this procedure must be supplied base64-encoded. Use the following command to encode a value:
echo -n '<value>' | base64 -w0
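For example, the commands below encode each of the required values; the IP address is a placeholder and the bracketed items stand in for your actual credentials:

echo -n 'https://192.0.2.10:9345' | base64 -w0   # server URL -> aHR0cHM6Ly8xOTIuMC4yLjEwOjkzNDU=
echo -n '<rke2-token>' | base64 -w0              # RKE2 token
echo -n '<node-root-password>' | base64 -w0      # node root password
echo -n '<linode-api-token>' | base64 -w0        # Linode API token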
The following are the steps to deploy Karpenter:
- SSH to a Rancher worker node.
- Use the following command to get the RKE2 server URL and the token:
  # cat /etc/rancher/rke2/config.yaml.d/50-rancher.yaml
  {
    "node-label": [
      "rke.cattle.io/machine=d459c182-ba22-46e4-a1a9-97400437eace"
    ],
    "private-registry": "/etc/rancher/rke2/registries.yaml",
    "protect-kernel-defaults": false,
    "server": "https://[IP_ADDR]:9345",
    "token": "TOKEN"
  }
- Update the charts.yaml or values.yaml with the RKE2 server URL and the token:
  rancherLinode:
    cluster:
      server: "" # base64 of "https://[IP_ADDR]:9345"
      token: "" # base64 of the RKE2 token. See /etc/rancher/rke2/config.yaml.d/50-rancher.yaml on an existing worker node.
      rootPass: "" # base64 of the root password for the nodes. Make sure the password meets Linode requirements.
      linodeToken: "" # base64 of the Linode API token. Make sure the token has sufficient rights.
- Add the repository using the following commands:
  helm repo add smartscaler https://smartscaler.nexus.aveshalabs.io/repository/smartscaler-helm-ent-prod
  helm repo update
- Use the following command to deploy Karpenter:
  helm install karpenter smartscaler/avesha-karpenter -f values.yaml --namespace smart-scaler --create-namespace
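After the chart installs, you can check that the Karpenter pods are running. The label selector below assumes the chart follows the common app.kubernetes.io/name labeling convention:

kubectl get pods -n smart-scaler
kubectl logs -n smart-scaler -l app.kubernetes.io/name=karpenter --tail=20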
Create a NodeClass
Use the following command to create a NodeClass:
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.multicould.sh/v1alpha1
kind: LinodeNodeClass
metadata:
  name: default
spec:
  imageName: linode/ubuntu20.04
  rke2Version: "v1.30.7+rke2r1"
  privateIP: true
  #scriptUrl: "https://get.rke2.io" # optional
  #subnetID: <subnetID> # optional
  #firewallID: <firewallID> # optional
EOF
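To confirm that the NodeClass was accepted, query it back. Assuming the CRD registers the usual singular resource name, the following works:

kubectl get linodenodeclass default -o yaml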
Create a NodePool
Use the following command to create a NodePool:
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: topology.kubernetes.io/zone
          operator: In
          #values: ["eu-west", "eu-central", "se-sto", "de-fra-2"]
          values: ["de-fra-2"]
      nodeClassRef:
        name: default
        kind: LinodeNodeClass
        group: karpenter.multicould.sh
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 2m
EOF
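To verify that Karpenter provisions nodes from this pool, you can apply a throwaway workload whose resource requests exceed the cluster's current capacity. This smoke test is not part of the procedure above; the deployment name, image, and replica count are illustrative:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate          # illustrative name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: registry.k8s.io/pause:3.9   # minimal placeholder workload
          resources:
            requests:
              cpu: "1"                       # forces demand beyond existing nodes
EOF

kubectl get nodes -w   # watch for new Karpenter-provisioned nodes, then delete the deployment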
Update the System Agent Upgrader
The Rancher System Agent Upgrader continuously deploys short-lived pods on Karpenter-provisioned nodes, so Karpenter cannot disrupt those nodes even when no user pods are running: to Karpenter, the nodes never appear empty.
To stop the Rancher System Agent Upgrader from deploying short-lived pods on Karpenter-managed nodes, follow these steps:
- Use the following command to edit the plan:
  kubectl -n cattle-system edit plan system-agent-upgrader
- Add the node.kubernetes.io/instance-type node selector:
  spec:
    concurrency: 10
    nodeSelector:
      matchExpressions:
        - key: kubernetes.io/os
          operator: In
          values:
            - linux
        - key: node.kubernetes.io/instance-type
          operator: In
          values:
            - rke2
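After saving the plan, you can confirm that the selector took effect; for example:

kubectl -n cattle-system get plan system-agent-upgrader -o jsonpath='{.spec.nodeSelector}'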