Version: 1.14.0

Create No-Network Slices

This topic describes the steps to create a slice without an overlay network (no-network). The worker clusters must be registered with the KubeSlice Controller before you can create a no-network slice. For more information, see how to register worker clusters.

To leverage KubeSlice's capabilities for services that are distributed across multiple clusters, select one of the following connectivity types (a minimal configuration sketch follows this list):

  • Service Mapped Connectivity if your applications are distributed and need to communicate, but do not need a dedicated overlay network. The overlay network deployment mode associated with this type is multi network.
  • Overlay Connectivity if your applications benefit from communication over a dedicated overlay network. The overlay network deployment mode associated with this type is single network.
  • No Connectivity if your distributed applications do not require inter-cluster communication. The overlay network deployment mode associated with this type is no network.
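
As a quick reference, the minimal SliceConfig fragment below shows the overlayNetworkDeploymentMode value that corresponds to each connectivity type. This is only an illustrative sketch, not a complete slice configuration; the full set of parameters is described in the tables that follow.

spec:
  # Pick exactly one value per slice:
  overlayNetworkDeploymentMode: "multi-network"     # Service Mapped Connectivity
  # overlayNetworkDeploymentMode: "single-network"  # Overlay Connectivity
  # overlayNetworkDeploymentMode: "no-network"      # No Connectivity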

Slice Configuration Parameters

The following tables describe the configuration parameters used to create a slice with registered worker cluster(s).

| Parameter | Parameter Type | Description | Required |
| --- | --- | --- | --- |
| apiVersion | String | The KubeSlice Controller API version. A set of resources that are exposed together, along with the version. The value must be controller.kubeslice.io/v1alpha1. | Mandatory |
| kind | String | The name of a particular object schema. The value must be SliceConfig. | Mandatory |
| metadata | Object | The metadata describes parameters (names and types) and attributes that have been applied. | Mandatory |
| spec | Object | The specification of the desired state of an object. | Mandatory |

Slice Metadata Parameters

These parameters are related to the metadata configured in the slice configuration YAML file.

| Parameter | Parameter Type | Description | Required |
| --- | --- | --- | --- |
| name | String | The name of the slice you create. Each slice must have a unique name within a project namespace. | Mandatory |
| namespace | String | The project namespace on which you apply the slice configuration file. | Mandatory |

Slice Spec Parameters

These parameters are related to the spec configured in the slice configuration YAML file.

| Parameter | Parameter Type | Description | Required |
| --- | --- | --- | --- |
| sliceSubnet | String (IP/16 Subnet, RFC 1918 addresses) | This subnet is used to assign IP addresses to pods that connect to the slice overlay network. The CIDR range can be re-used for each slice or modified as required. Example: 192.168.0.0/16 | Mandatory |
| maxClusters | Integer | The maximum number of clusters that are allowed to connect to a slice. The value of maxClusters can only be set during slice creation and is immutable afterwards. The minimum value is 2, the maximum value is 32, and the default value is 16. Example: 5. The maxClusters value affects the subnetting across the clusters. For example, if the slice subnet is 10.1.0.0/16 and maxClusters=16, then each cluster gets a subnet of 10.1.x.0/20, x=0,16,32, and so on (illustrated after this table). | Optional |
| sliceType | String | Denotes the type of the slice. The value must be set to Application. | Mandatory |
| clusters | List of Strings | The list of worker clusters that are part of the slice. | Mandatory |
| overlayNetworkDeploymentMode | String | Sets the overlay network deployment mode for a slice to single-network, multi-network, or no-network. If this parameter is not passed, a single-network slice is created. The value is no-network for a slice without inter-cluster connectivity. A single-network slice contains a flat overlay network, and pod-to-pod connectivity is at L3. In a multi-network slice, pod-to-pod connectivity across clusters is set up through a network of L7 ingress and egress gateways. A multi-network slice only supports the HTTP and HTTPS protocols, whereas a single-network slice supports HTTP, HTTPS, TCP, and UDP. A no-network slice does not provide inter-cluster connectivity. To know more, refer to the slice overlay network deployment mode. | Optional |
| namespaceIsolationProfile | Object | The configuration to onboard namespaces and/or isolate namespaces with the network policy. | Mandatory |
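
To make the maxClusters subnetting concrete, the fragment below reuses the example values from the table above (illustrative only): with a /16 slice subnet and maxClusters set to 16, each participating cluster is carved a /20 out of the slice subnet.

spec:
  sliceSubnet: 10.1.0.0/16
  maxClusters: 16   # each cluster is assigned a /20: 10.1.0.0/20, 10.1.16.0/20, 10.1.32.0/20, and so on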

Namespace Isolation Profile Parameters

These parameters are related to onboarding namespaces, isolating the slice, and allowing external namespaces to communicate with the slice. They are configured in the slice configuration YAML file.

| Parameter | Parameter Type | Description | Required |
| --- | --- | --- | --- |
| applicationNamespaces | Array object | Defines the namespaces that will be onboarded to the slice and their corresponding worker clusters. | Mandatory |
| allowedNamespaces | Array object | Contains the list of namespaces from which the traffic flow is allowed to the slice. By default, native Kubernetes namespaces such as kube-system are allowed. If isolationEnabled is set to true, then you must include the namespaces that you want to allow traffic from. | Optional |
| isolationEnabled | Boolean | Defines if namespace isolation is enabled. By default, it is set to false. The isolation policy only applies to the traffic from the application and allowed namespaces to the same slice. | Optional |

Application Namespaces Parameters

These parameters are related to onboarding namespaces onto a slice, which are configured in the slice configuration YAML file.

| Parameter | Parameter Type | Description | Required |
| --- | --- | --- | --- |
| namespace | String | The namespace that you want to onboard to the slice. These namespaces can be isolated using the namespace isolation feature. | Mandatory |
| clusters | List of Strings | Corresponding cluster names for the namespaces listed above. To onboard the namespace on all clusters, specify the asterisk * as this parameter's value. | Mandatory |

Allowed Namespaces Parameters

These parameters are related to allowing external namespaces to communicate with the slice, and they are configured in the slice configuration YAML file.

| Parameter | Parameter Type | Description | Required |
| --- | --- | --- | --- |
| namespace | Strings | The list of external namespaces that are not a part of the slice from which traffic is allowed into the slice. | Optional |
| clusters | List of Strings | Corresponding cluster names for the namespaces listed above. To onboard the namespace on all clusters, specify the asterisk * as this parameter's value. | Optional |

Slice Creation with No-Network

In KubeSlice, clusters are connected by an overlay network, that is, single-network or multi-network, managed by KubeSlice networking components such as slice routers, slice gateways, Envoy gateways, and Istio gateways. Alternatively, you can create a slice without inter-cluster connectivity by setting the overlayNetworkDeploymentMode parameter value to no-network in the slice configuration YAML file.

The following is an example slice configuration file.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: beta
spec:
  overlayNetworkDeploymentMode: "no-network"
  sliceType: Application
  clusters:
    - cluster2
    - cluster3
  namespaceIsolationProfile:
    isolationEnabled: true
    applicationNamespaces:
      - namespace: iperf
        clusters:
          - "*"
    allowedNamespaces:
      - namespace: test
        clusters:
          - "cluster2"

Apply Slice Configuration

The following information is required.

| Variable | Description |
| --- | --- |
| <cluster name> | The name of the cluster. |
| <slice configuration> | The name of the slice configuration file. |
| <project namespace> | The project namespace on which you apply the slice configuration file. |

Perform these steps:

  1. Switch the context to the KubeSlice Controller using the following command:

     kubectx <cluster name>

  2. Apply the YAML file on the project namespace using the following command:

     kubectl apply -f <slice configuration>.yaml -n <project namespace>
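
For example, with a controller cluster context named controller, the slice configuration above saved as no-network-slice.yaml, and a project namespace kubeslice-avesha (hypothetical values used only for illustration), the commands would be:

kubectx controller
kubectl apply -f no-network-slice.yaml -n kubeslice-avesha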

Switch the Slice Overlay Network Deployment Mode

info

You can switch the slice overlay network deployment mode from no-network to single-network or multi-network, but you cannot switch from single-network or multi-network to no-network.

Upgrade Slice Operator on the Worker Cluster

To switch the slice overlay network deployment mode from no-network to single-network or multi-network at run time, ensure that the kubesliceNetworking.enabled parameter value is true for all participating worker clusters.

To verify whether KubeSlice networking is enabled for a participating worker cluster, run the following command on the KubeSlice Controller:

kubectl get cluster <cluster_name> -n <project_namespace> -o jsonpath='{.status.networkPresent}'

Example

kubectl get cluster aws -n kubeslice-avesha -o jsonpath='{.status.networkPresent}'

Example Output

true

If the KubeSlice networking parameter value is false, update the kubesliceNetworking.enabled parameter value to true in slice-operator.yaml (a minimal values fragment is shown after the command) and run the following helm upgrade command on the participating worker clusters:

helm upgrade -i kubeslice-worker <path-to-the-charts>/kubeslice-worker -f <slice-operator.yaml> -n kubeslice-system --create-namespace --debug
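
For reference, assuming the dotted parameter name maps to nested keys in the Helm values file as usual, the relevant section of slice-operator.yaml would look like the following minimal fragment (any other keys in your values file are omitted here):

kubesliceNetworking:
  enabled: true   # must be true on every participating worker cluster before switching modes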
warning

If network connectivity is enabled on an existing slice (either single-network or multi-network), the slice becomes unhealthy if KubeSlice networking is disabled on any of the participating clusters.

Update the Slice Configuration YAML

In the slice configuration YAML, update the overlayNetworkDeploymentMode parameter value to single-network or multi-network, add the networking parameters, and apply the slice configuration YAML on the controller cluster.

The following is an example slice configuration YAML.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: beta
spec:
  overlayNetworkDeploymentMode: "single-network"
  sliceType: Application
  clusters:
    - cluster2
    - cluster3
  namespaceIsolationProfile:
    isolationEnabled: true
    applicationNamespaces:
      - namespace: iperf
        clusters:
          - "*"
    allowedNamespaces:
      - namespace: test
        clusters:
          - "cluster2"
  # Network fields to be configured while switching the overlay type from no-network to single-network or multi-network.
  sliceSubnet: 10.170.0.0/16
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  qosProfileDetails:
    queueType: HTB
    priority: 0
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 30000
    bandwidthGuaranteedKbps: 20000
    dscpClass: AF11
  sliceIpamType: Local
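
To complete the switch, reapply the updated configuration on the controller cluster in the project namespace, using the same command as in the Apply Slice Configuration step above:

kubectl apply -f <slice configuration>.yaml -n <project namespace>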