Version: 1.14.0

GPR Templates

This topic describes the REST APIs to manage GPR Templates.

Create a GPR Template

Use this API to create a GPR Template.

Method and URL

POST /api/v1/gpr-template

Parameters

| Parameter | Parameter Type | Description | Required |
|-----------|----------------|-------------|----------|
| name | String | The name of the GPR template. | Mandatory |
| clusterName | String | The name of the cluster that this template is created for. | Mandatory |
| numberOfGPUs | Integer | The number of GPUs to provision. | Mandatory |
| instanceType | String (as available on the inventory list) | The GPU node instance type, which you can get from the Inventory list details. | Mandatory |
| numberOfGPUNodes | Integer | The number of GPU nodes that you want to request. | Mandatory |
| priority | Integer | The priority of the request. You can change the priority of a GPR in the queue by increasing the priority number (low: 1-100, medium: 101-200, high: 201-300) to move it higher in the queue. See the helper sketch after this table. | Mandatory |
| memoryPerGPU | Integer | The memory per GPU. If you do not pass this parameter, the default value is used. | Mandatory |
| gpuShape | String (as available on the inventory list) | The name of the GPU type, which you can get from the List Inventory details. | Mandatory |
| exitDuration | Timestamp | The duration for which you want the GPU allocation, in the DdHhMm format (for example, 0d0h1m). | Mandatory |
| idleTimeOutDuration | Timestamp | The duration for which a GPU node can remain idle before it can be used by another GPR. | Optional |
| enforceIdleTimeOut | Boolean | If you set idleTimeOutDuration, this parameter is enabled by default. Set it to false if you do not want to enforce the idle timeout. | Optional |
| enableEviction | Boolean | Set to true to enable auto-eviction of lower-priority GPRs. | Optional |
| requeueOnFailure | Boolean | Set to true to requeue the GPR if it fails. | Optional |
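
The priority bands and the exitDuration format above are easy to get wrong when building payloads by hand. The following is a small, hypothetical Python helper (not part of the product API) that picks a representative priority value for a band and formats a duration string in the DdHhMm form shown in the examples.

```python
# Hypothetical helpers for building GPR template payloads. The priority bands
# (low: 1-100, medium: 101-200, high: 201-300) and the DdHhMm duration format
# come from the parameter table above.

PRIORITY_BANDS = {"low": 100, "medium": 200, "high": 300}  # top value of each band


def priority_for(band: str) -> int:
    """Return the highest priority value in the named band."""
    try:
        return PRIORITY_BANDS[band]
    except KeyError:
        raise ValueError(f"unknown priority band: {band!r}") from None


def duration(days: int = 0, hours: int = 0, minutes: int = 0) -> str:
    """Format a duration as the DdHhMm string used by exitDuration, e.g. '0d0h1m'."""
    return f"{days}d{hours}h{minutes}m"


print(priority_for("high"), duration(minutes=1))  # 300 0d0h1m
```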

Example Request

curl -X POST --location --globoff '{{host}}/api/v1/gpr-template' \
--header 'Content-Type: application/json' \
--data '{
    "name": "final-tes1",
    "clusterName": "worker-1",
    "numberOfGPUs": 1,
    "instanceType": "n1-highcpu-2",
    "exitDuration": "0d0h1m",
    "numberOfGPUNodes": 1,
    "priority": 201,
    "memoryPerGPU": 15,
    "gpuShape": "Tesla-T4",
    "enableEviction": true,
    "requeueOnFailure": true,
    "idleTimeOutDuration": "0d0h1m",
    "enforceIdleTimeOut": true
}'

Example Responses

Successful Response

{
  "statusCode": 200,
  "status": "OK",
  "message": "Success",
  "data": {
    "gprTemplateName": "api-test1"
  }
}

Error Response

{
  "statusCode": 409,
  "status": "UPSTREAM_REQUEST_ERROR",
  "message": "Error while creating a GPR Template",
  "data": null,
  "error": {
    "errorKey": "UPSTREAM_REQUEST_ERROR",
    "message": "Error while creating a GPR Template",
    "data": "Error: gprtemplates.gpr.kubeslice.io \"api-test1\" already exists; Reason: AlreadyExists"
  }
}
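
If you are scripting against this endpoint, the same create call can be made from Python. The following is a minimal sketch using the third-party requests library; the HOST placeholder stands in for {{host}}, the payload mirrors the curl example above, and no authentication headers are shown, so add whatever auth your deployment requires.

```python
import requests

HOST = "https://<controller-endpoint>"  # replace with your {{host}} value

payload = {
    "name": "final-tes1",
    "clusterName": "worker-1",
    "numberOfGPUs": 1,
    "instanceType": "n1-highcpu-2",
    "exitDuration": "0d0h1m",
    "numberOfGPUNodes": 1,
    "priority": 201,
    "memoryPerGPU": 15,
    "gpuShape": "Tesla-T4",
    "enableEviction": True,
    "requeueOnFailure": True,
    "idleTimeOutDuration": "0d0h1m",
    "enforceIdleTimeOut": True,
}

# POST /api/v1/gpr-template creates the template; an error payload like the one
# above is returned when a template with the same name already exists.
response = requests.post(f"{HOST}/api/v1/gpr-template", json=payload)
body = response.json()
if body.get("statusCode") == 200:
    print("created:", body["data"]["gprTemplateName"])
else:
    print("failed:", body.get("error", {}).get("message"))
```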

Get a GPR Template

Use this API to get a specific GPR template by specifying its name.

Method and URL

GET /api/v1/gpr-template?gprTemplateName=<template-name>

Parameters

| Parameter | Parameter Type | Description | Required |
|-----------|----------------|-------------|----------|
| gprTemplateName | String | The name of the GPR template that you want to retrieve. | Mandatory |

Example Request

curl -X GET --location --globoff '{{host}}/api/v1/gpr-template?gprTemplateName=vc-test1-template'

Example Response

{
  "statusCode": 200,
  "status": "OK",
  "message": "Success",
  "data": {
    "name": "vc-test1-template",
    "clusterName": "worker-1",
    "numberOfGPUs": 1,
    "instanceType": "n1-highcpu-2",
    "exitDuration": "0h12m0s",
    "numberOfGPUNodes": 1,
    "gpuShape": "Tesla-T4",
    "memoryPerGpu": 15,
    "priority": 201,
    "enableEviction": true,
    "requeueOnFailure": true,
    "idleTimeOutDuration": "",
    "enforceIdleTimeOut": false
  }
}
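
The same lookup can be scripted. This is a minimal Python sketch using the requests library, with the {{host}} placeholder and the template name taken from the example above; adjust authentication to your deployment.

```python
import requests

HOST = "https://<controller-endpoint>"  # replace with your {{host}} value

# GET /api/v1/gpr-template with the gprTemplateName query parameter
# returns a single template, as shown in the example response above.
response = requests.get(
    f"{HOST}/api/v1/gpr-template",
    params={"gprTemplateName": "vc-test1-template"},
)
template = response.json()["data"]
print(template["name"], template["gpuShape"], template["priority"])
```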

List GPR Templates

Use this API to get the list of all GPR templates.

Method and URL

GET /api/v1/gpr-template/list

Parameters

None

Example Request

curl -X GET --location --globoff '{{host}}/api/v1/gpr-template/list'

Example Response

{
  "statusCode": 200,
  "status": "OK",
  "message": "Success",
  "data": {
    "items": [
      {
        "name": "9999",
        "clusterName": "worker-2",
        "numberOfGPUs": 1,
        "instanceType": "a2-highgpu-2g",
        "exitDuration": "75h3m0s",
        "numberOfGPUNodes": 1,
        "gpuShape": "NVIDIA-A100-SXM4-40GB",
        "memoryPerGpu": 40,
        "priority": 201,
        "enableEviction": false,
        "requeueOnFailure": false,
        "idleTimeOutDuration": "",
        "enforceIdleTimeOut": false
      },
      {
        "name": "abc",
        "clusterName": "worker-1",
        "numberOfGPUs": 1,
        "instanceType": "a2-highgpu-2g",
        "exitDuration": "96h0m0s",
        "numberOfGPUNodes": 1,
        "gpuShape": "NVIDIA-A100-SXM4-40GB",
        "memoryPerGpu": 40,
        "priority": 101,
        "enableEviction": true,
        "requeueOnFailure": true,
        "idleTimeOutDuration": "72h0m0s",
        "enforceIdleTimeOut": true
      },
      {
        "name": "auto-gpr",
        "clusterName": "worker-1",
        "numberOfGPUs": 2,
        "instanceType": "n1-highcpu-2",
        "exitDuration": "1h0m0s",
        "numberOfGPUNodes": 1,
        "gpuShape": "n1-highcpu-2",
        "memoryPerGpu": 15,
        "priority": 101,
        "enableEviction": true,
        "requeueOnFailure": true,
        "idleTimeOutDuration": "",
        "enforceIdleTimeOut": false
      }
    ]
  }
}
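
A minimal Python sketch of the same list call, assuming the response shape shown above; it prints a one-line summary for each template. The HOST placeholder stands in for {{host}}.

```python
import requests

HOST = "https://<controller-endpoint>"  # replace with your {{host}} value

# GET /api/v1/gpr-template/list returns all templates under data.items.
response = requests.get(f"{HOST}/api/v1/gpr-template/list")
for item in response.json()["data"]["items"]:
    print(f'{item["name"]}: {item["numberOfGPUs"]} x {item["gpuShape"]} '
          f'on {item["clusterName"]} (priority {item["priority"]})')
```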

Update a GPR Template

Use this API to update a specific GPR template.

Method and URL

PUT /api/v1/gpr-template

Parameters

| Parameter | Parameter Type | Description | Required |
|-----------|----------------|-------------|----------|
| name | String | The name of the GPR template. | Mandatory |
| clusterName | String | The name of the cluster that this template is updated for. | Mandatory |
| numberOfGPUs | Integer | The number of GPUs to provision. | Mandatory |
| instanceType | String (as available on the inventory list) | The GPU node instance type, which you can get from the Inventory list details. | Mandatory |
| numberOfGPUNodes | Integer | The number of GPU nodes that you want to request. | Mandatory |
| priority | Integer | The priority of the request. You can change the priority of a GPR in the queue by increasing the priority number (low: 1-100, medium: 101-200, high: 201-300) to move it higher in the queue. | Mandatory |
| memoryPerGPU | Integer | The memory per GPU. If you do not pass this parameter, the default value is used. | Mandatory |
| gpuShape | String (as available on the inventory list) | The name of the GPU type, which you can get from the List Inventory details. | Mandatory |
| exitDuration | Timestamp | The duration for which you want the GPU allocation, in the DdHhMm format (for example, 0d0h1m). | Mandatory |
| idleTimeOutDuration | Timestamp | The duration for which a GPU node can remain idle before it can be used by another GPR. | Optional |
| enforceIdleTimeOut | Boolean | If you set idleTimeOutDuration, this parameter is enabled by default. Set it to false if you do not want to enforce the idle timeout. | Optional |
| enableEviction | Boolean | Set to true to enable auto-eviction of lower-priority GPRs. | Optional |
| requeueOnFailure | Boolean | Set to true to requeue the GPR if it fails. | Optional |

Example Request

curl -X PUT --location --globoff '{{host}}/api/v1/gpr-template' \
--header 'Content-Type: application/json' \
--data '{
    "name": "312",
    "clusterName": "worker-1",
    "numberOfGPUs": 1,
    "instanceType": "n1-highcpu-2",
    "exitDuration": "0d0h1m",
    "numberOfGPUNodes": 1,
    "priority": 201,
    "memoryPerGPU": 15,
    "gpuShape": "Tesla-T4",
    "enableEviction": true,
    "requeueOnFailure": true,
    "idleTimeOutDuration": "0d0h1m",
    "enforceIdleTimeOut": true
}'

Example Response

{
  "statusCode": 200,
  "status": "OK",
  "message": "Success",
  "data": {}
}
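
A minimal Python sketch of the same update, assuming a template named 312 already exists; as in the curl example, the payload carries all mandatory fields, not just the ones you change. The HOST placeholder stands in for {{host}}.

```python
import requests

HOST = "https://<controller-endpoint>"  # replace with your {{host}} value

payload = {
    "name": "312",
    "clusterName": "worker-1",
    "numberOfGPUs": 1,
    "instanceType": "n1-highcpu-2",
    "exitDuration": "0d0h1m",
    "numberOfGPUNodes": 1,
    "priority": 201,
    "memoryPerGPU": 15,
    "gpuShape": "Tesla-T4",
    "enableEviction": True,
    "requeueOnFailure": True,
    "idleTimeOutDuration": "0d0h1m",
    "enforceIdleTimeOut": True,
}

# PUT /api/v1/gpr-template updates the named template with this payload.
response = requests.put(f"{HOST}/api/v1/gpr-template", json=payload)
print(response.json()["statusCode"])
```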

Delete a GPR Template

Use this API to delete a specific GPR template by specifying its name.

Method and URL

DELETE /api/v1/gpr-template

Parameters

| Parameter | Parameter Type | Description | Required |
|-----------|----------------|-------------|----------|
| gprTemplateName | String | The name of the GPR template that you want to delete. | Mandatory |

Example Request

curl -X DELETE --location --globoff '{{host}}/api/v1/gpr-template' \
--header 'Content-Type: application/json' \
--data '{
    "gprTemplateName": "api-test1"
}'

Example Responses

Successful Response

{
  "statusCode": 200,
  "status": "OK",
  "message": "Success",
  "data": {}
}

Error Response

{
  "statusCode": 404,
  "status": "UPSTREAM_REQUEST_ERROR",
  "message": "Error while deleting GPR template",
  "data": null,
  "error": {
    "errorKey": "UPSTREAM_REQUEST_ERROR",
    "message": "Error while deleting GPR template",
    "data": "Error: gprtemplates.gpr.kubeslice.io \"api-test1\" not found; Reason: NotFound"
  }
}
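
A minimal Python sketch of the delete call; note that, as in the curl example, the template name travels in the request body rather than in the URL. The HOST placeholder stands in for {{host}}.

```python
import requests

HOST = "https://<controller-endpoint>"  # replace with your {{host}} value

# DELETE /api/v1/gpr-template takes the template name in the JSON body.
response = requests.delete(
    f"{HOST}/api/v1/gpr-template",
    json={"gprTemplateName": "api-test1"},
)
body = response.json()
if body.get("statusCode") == 200:
    print("deleted")
else:
    print("failed:", body.get("error", {}).get("message"))
```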