Assign Node Labels
Assigning node labels to namespaces, using the YAML configuration described below, prevents the nodes they run on from being shared with other namespaces on the worker clusters.
A critical application running on a slice can be configured to have its deployments placed on predefined nodes of a cluster, without sharing those nodes with other applications. This is required to avoid security and multi-tenancy issues on the cluster, such as:
- Deployments in the application namespaces onboarded onto a slice are placed on cluster nodes arbitrarily. The nodes on which these deployments run can be shared with other applications, making them vulnerable to issues such as denial of service and resource starvation.
- This can also surface as a security issue, where an application holding sensitive data shares a node with other applications, or where an application is starved of CPU for extended periods.
- Setting resource limits and requests does not prevent other workloads from consuming shared node resources such as network interfaces, SSD drives, and GPUs, because these resources fall outside the scope of Kubernetes resource quotas.
Benefits
- Assigning node labels creates node affinity, which restricts the pods of a namespace to a node or group of nodes carrying the matching labels.
- Node affinity helps in the effective management of common node resources such as network interfaces, SSD drives, and GPUs.
- It helps eliminate security issues by isolating the labeled nodes, on which the slice's applications run, from the other nodes on the worker clusters.
Create Assign Node Label YAML
You can selectively assign node labels so that the applications of a slice and its namespaces run on dedicated nodes that are not shared with other applications.
Create the following assign-nodes.yaml file to assign node labels to the slice and its namespaces.
To know more about the configuration details, see slice parameters.
apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceNodeAffinity
metadata:
  name: red
spec:
  nodeAffinityProfiles:
    - cluster: worker-1
      nodeAffinityRules:
        - namespace: iperf
          nodeSelectorLabels:
            - key: beta.kubernetes.io/os
              operator: In
              values:
                - linux
            - key: cloud.google.com/gke-boot-disk
              operator: In
              values:
                - pd-standard
    - cluster: worker-2
      nodeAffinityRules:
        - namespace: "*"
          nodeSelectorLabels:
            - key: beta.kubernetes.io/os
              operator: In
              values:
                - linux
            - key: cloud.google.com/gke-boot-disk
              operator: In
              values:
                - pd-standard
            - key: beta.kubernetes.io/arch
              operator: In
              values:
                - amd64
In the above configuration, application pods belonging to the iperf namespace on the worker-1 cluster are placed on nodes that have the beta.kubernetes.io/os=linux and cloud.google.com/gke-boot-disk=pd-standard labels assigned to them.
To assign node labels to a namespace on all the worker clusters, add an asterisk (*) as the value of the cluster property. Similarly, to assign node labels to all namespaces of a worker cluster, add an asterisk (*) as the value of the namespace property, as shown in the sketch below.
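For instance, a minimal profile that applies a single rule to every namespace on every worker cluster could look like the following sketch. It reuses the SliceNodeAffinity structure shown above; the label key and value are only examples and should match labels that actually exist on your nodes.
spec:
  nodeAffinityProfiles:
    - cluster: "*"          # wildcard: all worker clusters attached to the slice
      nodeAffinityRules:
        - namespace: "*"    # wildcard: all namespaces onboarded onto the slice
          nodeSelectorLabels:
            - key: beta.kubernetes.io/os
              operator: In
              values:
                - linux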
Ensure that the nodes are correctly labeled on the worker clusters. If no nodes matching the labels configured under the node affinity rules are found, the Kubernetes scheduler cannot schedule the application pods and they remain in the Pending state.
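If you are unsure which labels are present on a worker cluster, you can inspect and label the nodes with kubectl while your context points at that worker cluster. This is only a sketch; <node-name> is a placeholder, and labels such as cloud.google.com/gke-boot-disk are usually set automatically by the cloud provider, so manual labeling is typically needed only for custom keys.
# List the labels currently assigned to the nodes on this worker cluster
kubectl get nodes --show-labels
# Add a label to a specific node (replace the key and value with the ones
# referenced by your node affinity rules)
kubectl label node <node-name> cloud.google.com/gke-boot-disk=pd-standard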
Apply Assign Node Labels YAML
Apply the node label assignment configuration YAML file using the following command:
kubectl apply -f assign-nodes.yaml -n <project namespace>
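For example, if the project namespace is kubeslice-avesha, as in the validation example below, the command is:
kubectl apply -f assign-nodes.yaml -n kubeslice-avesha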
Validate Assignment of Node Labels
Validate the assignment of node labels using the following command:
kubectl get slicenodeaffinity.controller.kubeslice.io -n kubeslice-<project-name>
Example
kubectl get slicenodeaffinity.controller.kubeslice.io -n kubeslice-avesha
**Expected Output**
NAME   AGE
red    4s
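You can also confirm, on the worker cluster itself, that the application pods were scheduled onto the labeled nodes. This sketch assumes the iperf namespace from the configuration above and that your kubeconfig contains a context for the worker cluster; the NODE column of the output shows where each pod landed.
kubectl get pods -n iperf -o wide --context=<worker-cluster-context>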
Edit Assignment of Node Labels
To edit the node label assignment of namespaces, update the configuration in the YAML file and reapply it.
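For example, after editing assign-nodes.yaml, reapply it to the same project namespace (running kubectl edit on the SliceNodeAffinity object is an alternative):
kubectl apply -f assign-nodes.yaml -n <project namespace>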
Remove Assignment of Node Labels
To remove all assigned node labels from a slice, use the following command:
kubectl delete slicenodeaffinity.controller.kubeslice.io <name-of-the-slice> -n kubeslice-<project-name>
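For example, to remove the node label assignment of the red slice used above from the kubeslice-avesha project:
kubectl delete slicenodeaffinity.controller.kubeslice.io red -n kubeslice-avesha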