Version: 1.16.0

Workspace NS Gateway

This topic describes the configuration of the workspace (slice) NS (North-South) Gateway.

info

Across our documentation, we refer to the Workspace as the Slice. The two terms are used interchangeably.

Overview

The NS Gateway is an ingress management component within the EGS architecture. It manages external traffic entry (North-South) into a logical application slice, enabling unified access and global load balancing across multi-cluster environments.

While EGS handles the traffic between your services (East-West traffic), the NS Gateway handles the traffic coming from the outside world (North-South traffic). The NS Gateway bridges the external world and this internal Slice network. It allows external clients (internet users, third-party APIs) to access a service without knowing which specific cluster hosts the workload. It abstracts the underlying multi-cluster topology, providing a single entry point that can route to local or remote endpoints dynamically.

Workspace NS Gateway using Envoy

EGS supports deploying an NS Gateway for a workspace (slice). The NS Gateway can be deployed using Envoy Gateway, a cloud-native ingress gateway built on the Envoy proxy.

In this example, the Envoy Gateway is used as the NS Gateway. The Workspace NS Gateway is configured using the SliceNSGateway custom resource. This resource defines the complete configuration for the NS Gateway, including:

  • Backend services
  • Routing rules
  • Gateway references

The example configuration in this section uses the autoCreate option, which automatically creates the required Gateway resources.

In this example, aveshaone is the name of the workspace (slice). Replace it with your actual workspace (slice) name wherever applicable.

Prerequisites

Before you begin, ensure that you have the following prerequisites:

  1. Ensure that you have a workspace created in the EGS environment. For more information, see Create a Workspace.

  2. Verify that you have at least two clusters connected to the workspace. For more information on how to register a worker cluster, see Register Clusters to a Workspace.

  3. Onboard the application namespace onto the workspace. For more information, see Onboard Namespace to Workspace.

Step 1: Install Envoy Gateway

Ensure that Envoy Gateway is installed in the cluster where you want to deploy the Workspace NS Gateway. Envoy Gateway must be installed with the GatewayNamespace deployment type so that Gateway deployments are created in the same namespace as the Gateway CR (for example, the application namespace). Without this configuration, SliceNSGateway does not work correctly, because the Gateway deployment must be in the same namespace as the SliceNSGateway, which is installed in the application namespace.

Syntax

helm upgrade --install <envoyproxy helm release name> oci://docker.io/envoyproxy/gateway-helm -n <release-namespace> --create-namespace --set config.envoyGateway.provider.kubernetes.deploy.type=GatewayNamespace

Example

In the following example, the Envoy Gateway is installed with the release name eg in the envoy-gateway-system namespace.

helm upgrade --install eg oci://docker.io/envoyproxy/gateway-helm -n envoy-gateway-system --create-namespace --set config.envoyGateway.provider.kubernetes.deploy.type=GatewayNamespace
info

Repeat the above step to install Envoy Gateway in all the clusters that are part of the workspace where you want to deploy the Workspace NS Gateway.
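
For example, if your worker clusters are reachable through kubeconfig contexts named worker-1 and worker-2 (placeholder names; substitute your own), a minimal loop such as the following installs Envoy Gateway in each of them:

# worker-1 and worker-2 are placeholder kubeconfig context names
for ctx in worker-1 worker-2; do
  kubectl config use-context "$ctx"
  helm upgrade --install eg oci://docker.io/envoyproxy/gateway-helm \
    -n envoy-gateway-system --create-namespace \
    --set config.envoyGateway.provider.kubernetes.deploy.type=GatewayNamespace
done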

Verify Envoy Gateway Installation

  1. Use the following command to verify the Envoy Gateway namespace:

    Example

    kubectl get namespace envoy-gateway-system

    Example Output

    NAME                   STATUS   AGE
    envoy-gateway-system   Active   3h15m
  2. Use the following command to get the Envoy Gateway pods:

    kubectl get pods -n envoy-gateway-system

    Example Output

    NAME                             READY   STATUS    RESTARTS   AGE
    envoy-gateway-66cf548dfb-9mnbs   1/1     Running   0          131m
  3. Use the following command to verify the Gateway API CRDs:

    kubectl get crd | grep -i gateway.envoyproxy.io 

    Example Output

    backends.gateway.envoyproxy.io                 2025-12-18T13:31:47Z
    backendtrafficpolicies.gateway.envoyproxy.io   2025-12-18T13:31:48Z
    clienttrafficpolicies.gateway.envoyproxy.io    2025-12-18T13:31:49Z
    envoyextensionpolicies.gateway.envoyproxy.io   2025-12-18T13:31:49Z
    envoypatchpolicies.gateway.envoyproxy.io       2025-12-18T13:31:50Z
    envoyproxies.gateway.envoyproxy.io             2025-12-18T13:31:52Z
    httproutefilters.gateway.envoyproxy.io         2025-12-18T13:31:53Z
    securitypolicies.gateway.envoyproxy.io         2025-12-18T13:31:54Z

Step 2: Create a Namespace and Deploy a Workload

Create a namespace and a workload in one of the clusters that is part of the workspace.

  1. Switch to one of the worker clusters that is part of the workspace.

  2. Use the following command to create your application namespace:

    kubectl create namespace vllm-demo
  3. Onboard the namespace onto the workspace. For more information, see Onboard Namespace to Workspace.

  4. Create a sample application my-app in the vllm-demo namespace:

    # Create a test application (for example, nginx)
    kubectl create deployment my-app --image=nginx:latest -n vllm-demo

    # Expose it as a service
    kubectl expose deployment my-app --port=8080 --target-port=80 -n vllm-demo
  5. Use the following command to verify the deployment:

    Example

    kubectl get deployment my-app -n vllm-demo

    Example Output

    NAME     READY   UP-TO-DATE   AVAILABLE   AGE
    my-app   0/1     0            0           16h
  6. Use the following command to verify the service:

    Example

    kubectl get svc my-app -n vllm-demo

    Example Output

    NAME     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    my-app   ClusterIP   10.7.46.7    <none>        8080/TCP   16h
  7. Repeat the above steps to create the same namespace and deploy the same application in another cluster that is part of the workspace.
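
For example, assuming the second cluster's kubeconfig context is named worker-2 (a placeholder), the same workload can be created there with:

kubectl config use-context worker-2
kubectl create namespace vllm-demo
kubectl create deployment my-app --image=nginx:latest -n vllm-demo
kubectl expose deployment my-app --port=8080 --target-port=80 -n vllm-demo

Remember to onboard the vllm-demo namespace onto the workspace in this cluster as well.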

Step 3: Create a ServiceExport and ServiceImport

Service discovery across clusters in EGS is facilitated through the ServiceExport and ServiceImport custom resources.

ServiceExports

ServiceExport allows you to expose a service running in one cluster to other clusters within the same slice. This enables workloads in different clusters to discover and communicate with the exported service seamlessly.

Create a ServiceExport YAML File

info

For more information on the slice configuration parameters, see Workspace (Slice) Configuration Parameters.

To export a service, create a service-export.yaml file using the following template.

apiVersion: networking.kubeslice.io/v1beta1
kind: ServiceExport
metadata:
  name: <serviceexport name>
  namespace: <application namespace>
spec:
  slice: <slice name>
  aliases:
    - <alias name>
    - <alias name>
  selector:
    matchLabels:
      <key>: <value>
  ports:
    - name: <protocol name>
      containerPort: <port>
      protocol: <protocol>
      serviceProtocol: <protocol> # HTTPS or HTTP; only relevant to a multi-network slice
      servicePort: <port_number> # only mandatory for a multi-network slice

The following is an example of a ServiceExport YAML file for exporting the my-app service in the vllm-demo namespace.

apiVersion: networking.kubeslice.io/v1beta1
kind: ServiceExport
metadata:
  name: my-app # service export name, referenced by the SliceNSGateway CR
  namespace: vllm-demo # application namespace where the app is deployed
spec:
  slice: aveshaone # slice name
  selector:
    matchLabels:
      app: my-app # label selector of the service to be exported
  ports:
    - containerPort: 8080 # port number of the service to be exposed
      name: http # port name
      protocol: TCP # protocol of the service
      servicePort: 8080 # port number of the service to be exposed

Apply the ServiceExport YAML File

To apply the service export YAML file, use the following command:

kubectl apply -f service-export.yaml -n vllm-demo

Verify ServiceExport

Verify that the service is exported successfully using the following command:

Example

kubectl get serviceexport -n vllm-demo

Example Output

NAME     SLICE       INGRESS   SERVICEPORT(S)   PORT(S)   ENDPOINTS   STATUS   ALIAS
my-app   aveshaone             8080/TCP                   1           READY    ["my-app.vllm-demo.slice.local"]

ServiceExport DNS

The service is exported and reachable through KubeSlice DNS at:

<serviceexport name>.<namespace>.svc.slice.local

For example, in this case, the service can be accessed using the following DNS name:

my-app.vllm-demo.svc.slice.local
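
To sanity-check this DNS name from inside the slice, you can run a temporary curl pod in the onboarded namespace (a minimal sketch; it assumes the vllm-demo namespace is onboarded so the test pod joins the slice network):

kubectl run slice-dns-test --rm -it --restart=Never \
  --image=curlimages/curl -n vllm-demo -- \
  curl -s http://my-app.vllm-demo.svc.slice.local:8080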

ServiceImports

When a ServiceExport is deployed, the corresponding ServiceImport is automatically created on each of the worker clusters that are part of the workspace. This populates the necessary DNS entries and ensures your traffic always reaches the correct cluster and endpoint.

To verify that the service is imported on other worker clusters, use the following command:

Example

kubectl get serviceimport -n vllm-demo

Example Output

NAME     SLICE       PORT(S)    ENDPOINTS   STATUS   ALIAS
my-app   aveshaone   8080/TCP   1           READY    ["my-app.vllm-demo.slice.local"]

Step 4: Create a SliceNSGateway Resource

The SliceNSGateway custom resource provides flexible configuration options for setting up your NS Gateway. You can configure it in multiple ways depending on your requirements.

info

For more information on the slice configuration parameters, see Workspace (Slice) Configuration Parameters.

The simplest way to configure SliceNSGateway is using serviceExportName. The controller will automatically discover backends from the ServiceImport and create routes for all ports defined in the ServiceImport.

  1. Use the following example to create a file called slice-nsgateway.yaml:

    Example: Basic Configuration with ServiceExportName

    apiVersion: networking.kubeslice.io/v1alpha1
    kind: SliceNSGateway
    metadata:
      name: nginx-gateway
      namespace: vllm-demo
    spec:
      sliceName: aveshaone
      serviceExportName: my-app
      gatewayRef:
        name: demo-gw
        autoCreate: true
        serviceType: LoadBalancer
        loggingLevel: info
        externalTrafficPolicy: Cluster
      fqdn:
        - nginx.example.com
  2. Use the following command to apply the configuration:

    kubectl apply -f slice-nsgateway.yaml

SliceNSGateway Configuration Examples

Example for HTTP Gateway with Path-based Routing

The following is an example configuration for a SliceNSGateway with path-based routing:

apiVersion: networking.kubeslice.io/v1alpha1
kind: SliceNSGateway
metadata:
  name: nginx-gateway
  namespace: vllm-demo
spec:
  sliceName: aveshaone # Use your existing slice name
  gatewayRef:
    name: demo-gw
    autoCreate: true
    gatewayClassName: envoy
  backends:
    - name: v1-service
      type: local
      service:
        name: api-v1
        port: 8080
  routingRules:
    - priority: 100
      match:
        path:
          type: prefix
          value: /api/v2
      backends:
        - name: v2-service
          type: local
          service:
            name: api-v2
            port: 8080

Advanced Configuration with Explicit Backends

For more control over backend configuration, you can explicitly define backends using the backends array.

The following is an example configuration for a SliceNSGateway with explicit backend definitions:

apiVersion: networking.kubeslice.io/v1alpha1
kind: SliceNSGateway
metadata:
  name: nginx-gateway
  namespace: vllm-demo
spec:
  sliceName: aveshaone
  gatewayRef:
    name: demo-gw
    autoCreate: true
    serviceType: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: HTTP
  backends:
    # Local backend (same cluster)
    - name: local-app
      type: local
      loadBalancingMode: Weighted
      service:
        name: my-app
        namespace: vllm-demo
        port: 8080
    # Remote backend (different cluster via ServiceImport)
    - name: remote-app
      type: remote
      loadBalancingMode: RR
      serviceImport:
        name: my-app
        namespace: vllm-demo
        port: 8080
    # Remote backend (direct NSM IP)
    - name: remote-app-direct
      type: remote
      loadBalancingMode: Weighted
      nsmIP: 192.168.1.100
      service:
        name: my-app
        namespace: vllm-demo
        port: 8080

Timeout Configuration

SliceNSGateway supports timeout settings for improved reliability.

The following is an example configuration for a SliceNSGateway with timeout settings:

apiVersion: networking.kubeslice.io/v1alpha1
kind: SliceNSGateway
metadata:
  name: nginx-gateway
  namespace: vllm-demo
spec:
  sliceName: aveshaone
  gatewayRef:
    name: demo-gw
    autoCreate: true
  serviceExportName: my-app
  timeoutConfig:
    requestTimeout: 60s
    backendRequestTimeout: 30s

Step 5: Verify the Installation

When autoCreate is set to true, the reconciler automatically creates the following resources:

  1. An EnvoyProxy custom resource (named {sliceName}-envoy-proxy)
  2. A GatewayClass (named {sliceName}-gatewayclass)
  3. The Gateway resource
  4. An HTTPRoute (or GRPCRoute for gRPC)
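
Using the names from this example (slice aveshaone, namespace vllm-demo), you can list these resources in one pass; note that GatewayClass is cluster-scoped and is queried separately:

kubectl get envoyproxy,gateway,httproute -n vllm-demo
kubectl get gatewayclass aveshaone-gatewayclass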

Verify Resources

  1. Watch the Workspace NS Gateway status using the following command; the STATUS field should read Ready:

    Example

    kubectl get slicensgateway nginx-gateway -n vllm-demo -w

    Example Output

    NAME            GATEWAY   FQDN   STATUS   AGE
    nginx-gateway   demo-gw          Ready    3h52m
  2. Verify the gateway using the following command:

    kubectl get gateway demo-gw -n vllm-demo

    Example Output

    NAME      CLASS                    ADDRESS          PROGRAMMED   AGE
    demo-gw   aveshaone-gatewayclass   35.226.211.239   True         3h57m
  3. Verify the HTTPRoute using the following command:

    Example

    kubectl get httproute -n vllm-demo

    Example Output

    NAME                  HOSTNAMES   AGE
    nginx-gateway-route               3h58m
  4. Verify the GatewayClass using the following command:

    Example

    kubectl get gatewayclass aveshaone-gatewayclass

    Example Output

    NAME                     CONTROLLER                                       ACCEPTED   AGE
    aveshaone-gatewayclass   gateway.envoyproxy.io/gatewayclass-controller    True       6h36m
  5. Verify the Envoy Proxy using the following command:

    Example

    # Check EnvoyProxy (if autoCreate: true)
    kubectl get envoyproxy -n vllm-demo

    Example Output

    NAME                    AGE
    aveshaone-envoy-proxy   4h1m

Verify the Gateway Status

Verify the Gateway status using the following command:

Example

kubectl get gateway demo-gw -n vllm-demo -o yaml

Example Output

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  annotations:
    networking.kubeslice.io/gateway-created-by: slicensgateway-reconciler
    networking.kubeslice.io/gateway-managed-by: slicensgateway
    networking.kubeslice.io/gateway-reference-count: "1"
    networking.kubeslice.io/gateway-references: '[{"name":"nginx-gateway","namespace":"vllm-demo","uid":"d542463f-6549-4769-b6d6-ad41a3260c27"}]'
  creationTimestamp: "2025-12-10T10:51:40Z"
  finalizers:
  - networking.kubeslice.io/gateway-finalizer
  generation: 1
  labels:
    app.kubernetes.io/managed-by: slicensgateway
  name: demo-gw
  namespace: vllm-demo
  resourceVersion: "1765363934336495010"
  uid: 526347c0-ed0f-405a-9cf8-b6a5f30692bc
spec:
  gatewayClassName: aveshaone-gatewayclass
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    name: http
    port: 80
    protocol: HTTP
status:
  addresses:
  - type: IPAddress
    value: 35.226.211.239
  conditions:
  - lastTransitionTime: "2025-12-10T10:52:14Z"
    message: The Gateway has been scheduled by Envoy Gateway
    observedGeneration: 1
    reason: Accepted
    status: "True"
    type: Accepted
  - lastTransitionTime: "2025-12-10T10:52:14Z"
    message: Address assigned to the Gateway, 1/1 envoy replicas available
    observedGeneration: 1
    reason: Programmed
    status: "True"
    type: Programmed
  listeners:
  - attachedRoutes: 1
    conditions:
    - lastTransitionTime: "2025-12-10T10:52:14Z"
      message: Sending translated listener configuration to the data plane
      observedGeneration: 1
      reason: Programmed
      status: "True"
      type: Programmed
    - lastTransitionTime: "2025-12-10T10:52:14Z"
      message: Listener has been successfully translated
      observedGeneration: 1
      reason: Accepted
      status: "True"
      type: Accepted
    - lastTransitionTime: "2025-12-10T10:52:14Z"
      message: Listener references have been resolved
      observedGeneration: 1
      reason: ResolvedRefs
      status: "True"
      type: ResolvedRefs
    name: http
    supportedKinds:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
    - group: gateway.networking.k8s.io
      kind: GRPCRoute

Verify the LoadBalancer Service

The Envoy Gateway creates a LoadBalancer service for each Gateway.

  1. Use the following command to verify the LoadBalancer service:

    Example

    kubectl get svc -n vllm-demo

    Example Output

    NAME      TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
    demo-gw   LoadBalancer   10.7.41.67   35.226.211.239   80:30373/TCP   47m

Step 6: Test the Gateway Service

After the Gateway is ready and has an external IP assigned, test it using curl.

  1. Get the Gateway External IP using the following command:

    Example

    kubectl get gateway demo-gw -n vllm-demo -o jsonpath='{.status.addresses[0].value}'

    Example Output

    35.226.211.239
  2. Test the service using the following curl command:

    Example

    curl -v http://35.226.211.239

    Example Output

    VERBOSE: GET with 0-byte payload
    VERBOSE: received 615-byte response of content type text/html


    StatusCode : 200
    StatusDescription : OK
    Content : <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style...
    RawContent : HTTP/1.1 200 OK
    Accept-Ranges: bytes
    Content-Length: 615
    Content-Type: text/html
    Date: Wed, 10 Dec 2025 15:27:42 GMT
    ETag: "69386a3a-267"
    Last-Modified: Tue, 09 Dec 2025 18:28:10 GMT
    Server: ng...
    Forms : {}
    Headers : {[Accept-Ranges, bytes], [Content-Length, 615], [Content-Type, text/html], [Date, Wed, 10 Dec 2025 15:27:42 GMT]...}
    Images : {}
    InputFields : {}
    Links : {@{innerHTML=nginx.org; innerText=nginx.org; outerHTML=<A href="http://nginx.org/">nginx.org</A>; outerText=nginx.org;
    tagName=A; href=http://nginx.org/}, @{innerHTML=nginx.com; innerText=nginx.com; outerHTML=<A
    href="http://nginx.com/">nginx.com</A>; outerText=nginx.com; tagName=A; href=http://nginx.com/}}
    ParsedHtml : mshtml.HTMLDocumentClass
    RawContentLength : 615
    info

    You should see the default Nginx welcome page HTML in the response, indicating that the request was successfully routed to the Nginx application through the Slice NS Gateway.
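
As a convenience, you can capture the gateway address in a shell variable instead of copying it manually:

GATEWAY_IP=$(kubectl get gateway demo-gw -n vllm-demo -o jsonpath='{.status.addresses[0].value}')
curl -v "http://$GATEWAY_IP"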

Step 7: Verify the Workspace NS Gateway Setup

To verify the complete setup of the Workspace NS Gateway along with all associated resources, follow these steps:

  1. Verify the Gateway API resources using the following command:

    Example

    kubectl get gateway,httproute -n vllm-demo

    Example Output

    NAME                                        CLASS                    ADDRESS          PROGRAMMED   AGE
    gateway.gateway.networking.k8s.io/demo-gw   aveshaone-gatewayclass   35.226.211.239   True         4h24m

    NAME                                                      HOSTNAMES   AGE
    httproute.gateway.networking.k8s.io/nginx-gateway-route               4h24m
  2. Verify the SliceNSGateway status using the following command:

    Example

    kubectl describe slicensgateway nginx-gateway -n vllm-demo

    Example Output

    Name:         nginx-gateway
    Namespace:    vllm-demo
    Labels:       <none>
    Annotations:  <none>
    API Version:  networking.kubeslice.io/v1alpha1
    Kind:         SliceNSGateway
    Metadata:
      Creation Timestamp:  2025-12-10T10:51:40Z
      Finalizers:
        networking.kubeslice.io/slicensgateway-finalizer
      Generation:        1
      Resource Version:  1765365808056543015
      UID:               d542463f-6549-4769-b6d6-ad41a3260c27
    Spec:
      Backends:
        Load Balancing Mode:  Weighted
        Name:                 local-app
        Service:
          Name:  my-app
          Port:  8080
        Type:    local
        Weight:  100
      Gateway Ref:
        Auto Create:  true
        Name:         demo-gw
      Slice Name:     aveshaone
    Status:
      Conditions:
        Last Transition Time:  2025-12-10T11:23:28Z
        Message:               All resources created successfully
        Reason:                ReconcileSuccess
        Status:                True
        Type:                  Ready
      Gateway Ref:
        Name:       demo-gw
        Namespace:  vllm-demo
        Ready:      true
      Http Route:           nginx-gateway-route
      Observed Generation:  1
      Phase:                Ready
    Events:                 <none>

Multi-Cluster gRPC Routing Example

This example demonstrates how to route a gRPC application across multiple clusters using SliceNSGateway.

Example Scenario

You have a gRPC service (echo-service) running in multiple clusters within the same slice. You want to:

  • Expose the gRPC service through SliceNSGateway
  • Route traffic to both local and remote gRPC services
  • Load balance requests across all available endpoints

Step 1: Deploy the gRPC Application

First, deploy a sample gRPC echo service in your clusters. In the local cluster (cluster 1), create a deployment and service for the gRPC echo server.

Use the following example to create a file called echo-service.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-service
  namespace: my-app
  labels:
    app: echo-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo-service
  template:
    metadata:
      labels:
        app: echo-service
    spec:
      containers:
        - name: echo-server
          image: moul/grpcbin:latest
          ports:
            - containerPort: 9000
              name: grpc
              protocol: TCP
          env:
            - name: GRPCBIN_PORT
              value: "9000"
          readinessProbe:
            exec:
              command: ["/bin/sh", "-c", "nc -z localhost 9000"]
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            exec:
              command: ["/bin/sh", "-c", "nc -z localhost 9000"]
            initialDelaySeconds: 10
            periodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: my-app
  labels:
    app: echo-service
spec:
  type: ClusterIP
  ports:
    - port: 50051
      targetPort: 9000
      protocol: TCP
      name: grpc
  selector:
    app: echo-service

In the local cluster, apply the above manifest:

kubectl apply -f echo-service.yaml

In the remote cluster (cluster 2), deploy the same gRPC echo service using the same manifest. The service will be exported through ServiceExport in the next step.
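
For example, assuming the remote cluster's kubeconfig context is named worker-2 (a placeholder):

kubectl config use-context worker-2
kubectl apply -f echo-service.yaml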

Step 2: Export the Service in the Remote Cluster

In the remote cluster (cluster 2), create a ServiceExport to make the gRPC service available across the slice. Create a file called service-export.yaml with the following content:

apiVersion: networking.kubeslice.io/v1beta1
kind: ServiceExport
metadata:
  name: echo-service
  namespace: my-app
spec:
  slice: my-slice
  ports:
    - name: grpc
      port: 50051
      protocol: GRPC
      targetPort: 9000

Apply the ServiceExport in the remote cluster:

kubectl apply -f service-export.yaml

Applying the ServiceExport automatically creates a corresponding ServiceImport in the local cluster.
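
You can confirm this from the local cluster (cluster 1):

kubectl get serviceimport echo-service -n my-app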

Step 3: Verify Namespace Onboarding

Verify that your namespace is onboarded to the slice in both clusters using the following command:

kubectl get namespace my-app -o yaml | grep kubeslice.io

Step 4: Create the SliceNSGateway Resource

Create a SliceNSGateway resource that routes gRPC traffic to both local and remote backends. Create a file called grpc-multicluster-gateway.yaml with the following content:

apiVersion: networking.kubeslice.io/v1alpha1
kind: SliceNSGateway
metadata:
  name: grpc-echo-gateway
  namespace: my-app
spec:
  # Slice name (workspace name)
  sliceName: my-slice

  # Gateway reference - auto-create the Gateway resource
  gatewayRef:
    name: grpc-echo-gateway
    namespace: my-app
    autoCreate: true
    gatewayClassName: envoy-gateway-class
    serviceType: LoadBalancer
    loggingLevel: info

  # FQDN for external access
  fqdn:
    - echo-service.my-app.svc.cluster.local

  # Port configuration - specify GRPC protocol
  ports:
    - name: grpc
      port: 50051
      protocol: GRPC

  # ServiceExport configuration - automatically discovers remote backends from ServiceImport
  # The remote gRPC service is exposed via ServiceExport in another cluster
  serviceExportName: echo-service
  serviceExportNamespace: my-app

  # Backend configuration - local backend
  backends:
    - name: echo-service-local
      type: local
      loadBalancingMode: Weighted
      service:
        name: echo-service
        namespace: my-app
        port: 50051

  # Optional: Timeout configuration
  timeoutConfig:
    requestTimeout: 60s
    backendRequestTimeout: 30s

Step 5: Apply the Configuration

# Apply in local cluster
kubectl apply -f grpc-multicluster-gateway.yaml

Step 6: Verify the Gateway Status

Verify the status of your SliceNSGateway using the following commands to ensure it is correctly set up and ready to route gRPC traffic:

# Check SliceNSGateway status
kubectl get slicensgateway grpc-echo-gateway -n my-app -o yaml

# Check the Gateway resource
kubectl get gateway grpc-echo-gateway -n my-app

# Check the GRPCRoute created automatically
kubectl get grpcroute -n my-app

# Get detailed GRPCRoute information
kubectl get grpcroute -n my-app -o yaml

Expected Output

  • status.phase: Ready
  • status.grpcRoute with the created GRPCRoute name
  • status.backends showing both local and remote backends (if ServiceImport has endpoints)
  • Gateway listener on port 50051 with HTTP protocol (gRPC uses HTTP/2)

Verify the remote backends through the ServiceImport using the following commands:

# Check ServiceImport status
kubectl get serviceimport echo-service -n my-app -o yaml

# Check if ServiceImport has endpoints
kubectl get serviceimport echo-service -n my-app -o jsonpath='{.status.endpoints}'

# Check the headless service created for remote backends
kubectl get svc grpc-echo-gateway-echo-service -n my-app

# Check endpoint slices for remote backends
kubectl get endpointslice -n my-app -l kubeslice.io/managed-by=slicensgateway

Step 7: Test the gRPC Service

After the SliceNSGateway is set up and ready, you can test the gRPC service using a gRPC client like grpcurl.

# Get the LoadBalancer IP or NodePort
kubectl get svc -n my-app | grep grpc-echo-gateway

# Test with grpcurl (if installed)
# List available services
grpcurl -plaintext <GATEWAY_IP>:50051 list

# Test echo service (grpcbin example)
grpcurl -plaintext <GATEWAY_IP>:50051 list grpcbin.GRPCBin
grpcurl -plaintext <GATEWAY_IP>:50051 grpcbin.GRPCBin/Index

The expected output should show the gRPC services and methods available, and you should be able to successfully call the echo service. The gateway will automatically load balance requests across:

  • Local cluster endpoints (2 replicas in this example)
  • Remote cluster endpoints (from ServiceImport)
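
To see the load balancing in action, you can issue repeated calls and confirm they all succeed (a sketch; replace <GATEWAY_IP> with the address obtained above):

for i in $(seq 1 10); do
  grpcurl -plaintext <GATEWAY_IP>:50051 grpcbin.GRPCBin/Index > /dev/null \
    && echo "request $i ok"
done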

Limitations

  • The SliceNSGateway currently supports only HTTP and gRPC protocols.
  • The SliceNSGateway supports only Envoy Gateway as the gateway provider.
  • The SliceNSGateway requires the Envoy Gateway to be installed with GatewayNamespace deployment type.
  • gRPC routes created by SliceNSGateway do not support method or service matching.
  • GRPCRoute rules created by SliceNSGateway do not include explicit match configuration.
  • Weighted traffic distribution for gRPC routes is a known limitation.
  • Older versions of Envoy Gateway may have compatibility issues with SliceNSGateway.

For more information on these limitations, see Troubleshooting Guide.