FAQs
General
How can I disable disruption on a NodePool?
On a NodePool definition, replace the following disruption object parameters:
disruption:
  consolidationPolicy: WhenEmptyOrUnderutilized
With the following disruption object parameters:
disruption:
  consolidationPolicy: WhenEmpty
  consolidateAfter: 1m
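For context, the following is a minimal NodePool sketch showing where the disruption block sits. The apiVersion, NodePool name, and nodeClassRef values are illustrative assumptions and may differ in your installation:
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: example-nodepool        # placeholder name
spec:
  template:
    spec:
      nodeClassRef:             # illustrative reference; match your own OciNodeClass
        group: karpenter.multicloud.sh
        kind: OciNodeClass
        name: ocinodeclass1
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: 1m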
How do I make sure a Pod will not be disrupted by Smart Karpenter?
If you are using Smart Karpenter and want to ensure a pod is not disrupted during scale-down or node consolidation, set the karpenter.sh/do-not-disrupt: "true" annotation on the pod. Add it under the metadata.annotations field in your Pod spec.
The following is an example YAML file:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        karpenter.sh/do-not-disrupt: "true"
This annotation tells Smart Karpenter not to consider the pod for disruption during node termination, consolidation, or deprovisioning operations.
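The annotation can also be set directly on a standalone Pod rather than on a workload controller's pod template. The following is a minimal sketch; the pod name, container, and image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: protected-pod              # placeholder name
  annotations:
    karpenter.sh/do-not-disrupt: "true"
spec:
  containers:
    - name: worker                 # placeholder container
      image: busybox:1.36          # placeholder image
      command: ["sleep", "3600"]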
OCI
How does Smart Karpenter select OCI images?
Smart Karpenter launches self-managed nodes.
OL7 or OL8 images are required for self-managed nodes. Smart Karpenter selects the image as described in the following steps:
- Smart Karpenter uses the first image found on the existing OKE node pools.
- Override this behavior by setting the IMAGE_OCID environment variable to the required image (see the sketch after this list). The image that you configure must be OL7 or OL8 with Kubernetes installed.
- Override the ImageOCID property in the OciNodeClass definition as shown in the following example:
  apiVersion: karpenter.multicloud.sh/v1alpha1
  kind: OciNodeClass
  metadata:
    name: ocinodeclass1
  spec:
    #ImageOCID: <image.ocid>
    #BootVolumeSizeGB: <size in GB>
    #SubnetOCID: <subnet OCID>
    #NetworkSgOCID: <nsgOCid1,nsgOCid2>
    #PodsSubnetOCID: <PODs subnet OCID> #For OCI VCN-Native Pod Networking
    #PodsNetworkSgOCIDs: <PODs nsgOCid1,PODs nsgOCid2> #For OCI VCN-Native Pod Networking
    #SSHKeys: "ssh-rsa ********"
- Updating ImageOCID in the OciNodeClass definition marks all nodes as drifted and replaces them with new nodes.
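As a rough illustration of the IMAGE_OCID override, the environment variable is set on the Smart Karpenter controller workload. The fragment below is a sketch only: the Deployment name, namespace, container name, and OCID value are placeholders, and your installation (for example, a Helm values file) may expose this setting differently:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smart-karpenter-controller   # placeholder name
  namespace: karpenter               # placeholder namespace
spec:
  template:
    spec:
      containers:
        - name: controller           # placeholder container name
          env:
            - name: IMAGE_OCID
              value: "ocid1.image.oc1..exampleuniqueID"   # placeholder image OCID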
We receive additional discounts on E5 instance types, which could help with cost savings. Is there a way to prioritize E5 (or any specific instance type) when Smart Karpenter scales up new nodes?
The following example values file represents the prices per OCPU and per GB of memory per hour, taken from the OCI Price List.
oci:
  prices:
    vm:
      - name: "VM.Standard.E3.Flex"
        ocpu: 0.025
        mem: 0.0015
      - name: "VM.Standard.E4.Flex"
        ocpu: 0.025
        mem: 0.0015
      - name: "VM.Standard.E5.Flex"
        ocpu: 0.03
        mem: 0.002
      - name: "VM.GPU.A10"
        gpu: 2
      - name: "VM.GPU2"
        gpu: 1.275
      - name: "VM.GPU3"
        gpu: 2.95
If you assign the same pricing values to E3, E4, and E5 instance types, Smart Karpenter will treat them as equally cost-effective and may provision any of them interchangeably. However, if E5 is configured with a lower price, Smart Karpenter will prioritize it to optimize for cost savings.
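For example, to reflect a discounted E5 rate and have Smart Karpenter favor E5, you could configure a lower E5 price than E3 and E4. The numbers below are illustrative only, not actual or discounted OCI list prices:
oci:
  prices:
    vm:
      - name: "VM.Standard.E3.Flex"
        ocpu: 0.025
        mem: 0.0015
      - name: "VM.Standard.E4.Flex"
        ocpu: 0.025
        mem: 0.0015
      - name: "VM.Standard.E5.Flex"
        ocpu: 0.02       # illustrative discounted rate, lower than E3/E4
        mem: 0.0012      # illustrative discounted rate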
Another way to control which VM shapes Smart Karpenter uses is by setting the karpenter.oci.sh/instance-family requirement in the node pool definition:
- key: "karpenter.oci.sh/instance-family"
  operator: In
  values: ["E3","E4"]
This acts as a restrictive filter. For example:
- If you specify ["E3","E4","E5"] and the pricing for all is equal, Smart Karpenter will likely favor E3 and E4 due to internal preferences or availability.
- If you set it to only ["E5"], then only E5 instances will be provisioned regardless of price, because you have restricted the selection explicitly.
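For context, the following sketch shows one way such a requirement can sit inside a NodePool definition, assuming the upstream Karpenter layout where requirements live under spec.template.spec.requirements; the apiVersion, names, and nodeClassRef below are illustrative assumptions:
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: e5-only                    # placeholder name
spec:
  template:
    spec:
      requirements:
        - key: "karpenter.oci.sh/instance-family"
          operator: In
          values: ["E5"]
      nodeClassRef:                # illustrative reference; match your own OciNodeClass
        group: karpenter.multicloud.sh
        kind: OciNodeClass
        name: ocinodeclass1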