Version: 1.15.0

Manage Resource Quotas

Resource quotas enable cluster admins to enforce limits on CPU, memory, ephemeral storage, and the number of application pods per namespace on a slice. Using quotas requires setting and monitoring the threshold limits and requests of the resources at the slice level.

The cluster admin can manage the usage of compute resources on a slice, ensuring that the namespaces get a fair share of the resources. This prevents some namespaces from overusing resources and leaving little or none for other namespaces on the same slice.

Requests and limits are the mechanisms Kubernetes uses to manage resources such as CPU and memory. A request is the amount a container is guaranteed to get, while a limit caps a container's usage at a particular value. You can also set quotas for requests and limits to manage local ephemeral storage.
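For reference, this is how a standard Kubernetes container spec declares requests and limits (the pod name and namespace below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo        # illustrative name
  namespace: namespace-1  # a namespace onboarded to the slice
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:                  # amount the container is guaranteed
          cpu: 50m
          memory: 64Mi
          ephemeral-storage: 30Mi
        limits:                    # hard cap enforced by the kubelet
          cpu: 100m
          memory: 128Mi
          ephemeral-storage: 60Mi
```

A container that omits the `resources` stanza entirely picks up the default request and limit per container described below.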

A default limit and request can be configured per container and their purpose is described below:

  • Default limit per container: This is used for a container in the namespace that does not have resource limits configured. The default limit per container is set at the namespace level.
  • Default request per container: This is used for a container in the namespace that does not have resource requests configured. A default request per container can also be set in the slice quota, where it applies to all namespaces on the slice. The default request per container set at the slice level is overridden when one is set at the namespace level.

Enforce Resource Quota

The resource quota is enforced only at the namespace level, but you can set quotas for limits and requests at the slice level. The total CPU, memory, pod count, and local ephemeral storage of all namespaces in a slice must be less than or equal to the corresponding limits and requests set for the slice. The admin can check for quota breaches by tracking violations in usage metrics through PromQL queries and the KubeSlice Manager.

While setting the limits and requests for resources, consider the application requirements when defining the limit and request values. For more information, see quotas.

Configure Quotas

clusterQuota is an object that contains the names of the worker clusters and the namespaces on them. Limits can be enforced on the namespaces within a cluster; they are not set at the cluster level.

Create Resource Quota YAML

Copy and save the following slice-resource-configuration.yaml template:

info

To know more about the configuration details, see slice parameters.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceResourceQuotaConfig
metadata:
  name: red
spec:
  sliceQuota:
    resources:
      limit:
        cpu: 1800m
        memory: 1800Mi
        podCount: 44
        ephemeralStorage: 1300Mi
      request:
        cpu: 500m
        memory: 500Mi
        ephemeralStorage: 300Mi
      defaultRequestPerContainer:
        cpu: 14m
        memory: 17Mi
        ephemeralStorage: 12Mi
  clusterQuota:
    - clusterName: cluster-1
      namespaceQuota:
        - enforceQuota: true
          namespace: namespace-1
          resources:
            limit:
              cpu: 500m
              memory: 500Mi
              podCount: 5
              ephemeralStorage: 300Mi
            request:
              cpu: 50m
              memory: 18Mi
              ephemeralStorage: 30Mi
            defaultRequestPerContainer:
              cpu: 4m
              memory: 10Mi
              ephemeralStorage: 10Mi
            defaultLimitPerContainer:
              cpu: 10m
              memory: 20Mi
              ephemeralStorage: 20Mi
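The values in a configuration like this must be arithmetically consistent: the summed namespace quotas may not exceed the slice quota for any resource. The following standalone Python sketch (not a KubeSlice tool) checks the example values above, using a simplified parser for Kubernetes-style quantities:

```python
# Sanity-check that per-namespace quota limits fit within the slice quota limit.
# Quantities use Kubernetes conventions: m = milli-CPU, Mi = mebibytes.

def parse_qty(s):
    """Parse a simplified quantity string such as '1800m', '500Mi', or '44'."""
    if s.endswith("Mi"):
        return int(s[:-2])  # mebibytes
    if s.endswith("m"):
        return int(s[:-1])  # milli-CPU
    return int(s)           # plain count, e.g. podCount

# Slice-level limits from the example manifest.
slice_limit = {"cpu": "1800m", "memory": "1800Mi",
               "podCount": "44", "ephemeralStorage": "1300Mi"}

# Limits of every namespace on the slice (only namespace-1 in the example).
namespace_limits = [
    {"cpu": "500m", "memory": "500Mi",
     "podCount": "5", "ephemeralStorage": "300Mi"},
]

def fits(slice_quota, ns_quotas):
    """True if the summed namespace quotas stay within the slice quota for every resource."""
    for key, cap in slice_quota.items():
        total = sum(parse_qty(ns[key]) for ns in ns_quotas)
        if total > parse_qty(cap):
            return False
    return True

print(fits(slice_limit, namespace_limits))  # True: 500m <= 1800m, 5 <= 44, etc.
```

The same check applies to the request values; extending the sketch to cover them is a matter of passing the request dictionaries instead.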

Apply the Slice Quota Configuration

After creating the YAML file, apply it to the project namespace using the following command:

kubectl apply -f <slice-resource-configuration>.yaml -n <project namespace>

Validate the Slice Quota Configuration

Validate the slice quota configuration using the following command:

 kubectl get SliceResourceQuotaConfig -n <project namespace>

Example

 kubectl get SliceResourceQuotaConfig -n kubeslice-avesha

Expected Output

NAME  AGE
red 10s

Edit Slice Quotas

You can edit quotas for limits and requests by:

  • Editing the slice-resource-configuration.yaml file and reapplying the YAML file to refresh the configuration.

Delete Slice Quotas

You can delete the slice quotas for limits and requests by running the following command:

kubectl delete SliceResourceQuotaConfig <name of the slice resource quota> -n <project namespace>