Frequently Asked Questions
This topic lists frequently asked questions categorized by KubeSlice features, along with a separate category for general questions about KubeSlice.
General
Why and when should I use KubeSlice?
KubeSlice offers a simpler solution to the complex challenges of running multi-cluster applications at scale through a Kubernetes construct called a Slice. A Slice creates a virtual cluster across a fleet of clusters that serves as a logical application boundary, enabling pods and services to communicate with each other seamlessly.
As enterprises expand application architectures to span multiple clusters located in data centers or cloud provider regions, or across cloud providers, Kubernetes clusters need the ability to fully integrate connectivity and pod-to-pod communications with namespace propagation across clusters. KubeSlice enables creating multiple logical slices in a single cluster or group of clusters regardless of their physical location. Existing intra-cluster communication remains local to the cluster utilizing each pod's CNI interface. KubeSlice provides isolation of network traffic between clusters by creating an overlay network for inter-cluster communication.
What is the difference between KubeSlice Open Source and Enterprise Editions?
The list below includes KubeSlice open source features, followed by a table that outlines both open source and enterprise-only features.
KubeSlice Open Source Features
- Multi-cluster connectivity
  - On-prem, EKS, GKE, AKS, OCP, LKE, and so on
  - Slice-specific L3 overlay network, CNI agnostic, slice subnet, IPAM, and QoS/priority
  - Full mesh connectivity
  - Micro-segmentation and isolation
  - Automation of gateways, OpenVPN tunnels, redundant tunnels, and key rotation
- NSM overlay network in clusters
  - Pod-to-pod connectivity over the overlay network
  - On-prem, EKS, GKE, AKS, OCP, LKE, and so on
- Namespace association
  - Application namespaces: one or more namespaces can be associated with a slice
  - Allowed namespaces
- Application onboarding
  - Application services/pods are onboarded onto a slice
- Network policy, isolation, and monitoring
  - Network policies are applied to all application namespaces associated with a slice, across all of its clusters
  - The Worker Operator monitors for drifts, alerts, and remediation
- Service discovery across a slice
  - Service export and import
  - Service-to-service connectivity over the slice overlay network
- Istio service mesh integration
  - Control plane per cluster
  - Service imports as virtual services
  - E/W ingress/egress gateways
  - mTLS across clusters
- KubeSlice Controller
  - KubeSlice control plane and GitOps
  - Extends the Kubernetes control plane
  - Multiple projects/tenants
  - Configuration/state managed using CRDs
  - Slice management, operations, and policies
KubeSlice Enterprise Features
| Open Source Core Features | Enterprise-only Features |
|---|---|
| Secure network/service connectivity, service discovery, observability, multi-tenancy, isolation, and OpenVPN | Additional cloud support: GCP, AWS, Azure, Akamai, OCI, Rancher, OpenShift, and so on |
| All the open source features listed above | Slice Resource Quota |
| | Slice Node Affinity |
| | Slice RBAC |
| | Replicate (replication of namespace objects/data across clusters) |
| | External IdP/OIDC integrations |
| | Slice single overlay network, slice multi-network, and no-network slice |
| | Multiple deployment options |
| | Fleet management |
| | KubeSlice Manager, a user interface for multi-cluster and multi-slice management, including cost management |
Is KubeSlice available through any deployment partner's marketplace?
Yes. Currently, the KubeSlice Controller and KubeSlice Worker charts are available on the Rancher Marketplace. To know more, see Rancher.
Does KubeSlice support Identity Providers?
Yes. KubeSlice supports identity providers (IdPs) that can be integrated to log in to the KubeSlice Manager. To know more about the supported IdPs, see IdP Integration.
How can we access the enterprise repo helm charts?
You can access the enterprise charts by registering on https://avesha.io/kubeslice-registration/. To know more, see KubeSlice registration.
Which cloud providers are supported?
All cloud providers are supported. The gateway nodes must be labeled with the `kubeslice.io/node-type=gateway` label. To know more, see labeling nodes and cluster authentication.
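For example, a node can be labeled as a gateway node with a standard kubectl command; replace the placeholder node name with a node from your worker cluster:

```
# Label a worker node so KubeSlice can place the slice gateway on it
kubectl label node <node-name> kubeslice.io/node-type=gateway
```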
How does KubeSlice help me in cloud cost optimization?
KubeSlice helps manage cloud costs efficiently by placing compute and storage workloads on the right cloud or on-premises infrastructure, based on policies and application needs, in real time. Cloud usage is optimized through a framework for efficient resource allocation.
Usage and Benefits
How is KubeSlice useful to me?
KubeSlice creates a flat, secure virtual network overlay for streamlined data distribution and communication between distributed workloads. It enables multi-tenancy and reduces deployment time, complexity, and costs for multi-cloud, hybrid cloud, and edge environments. KubeSlice seamlessly integrates with the Kubernetes ecosystem.
How do I use KubeSlice to onboard applications?
KubeSlice comes with a centralized user interface called the KubeSlice Manager, which you can use to register your clusters, create a slice, and onboard your namespaces onto the slice. To know more, see onboarding applications.
How do I distribute my applications across clusters?
You can onboard applications from your cluster onto a slice by onboarding their namespaces, and then create a service export to distribute the applications across clusters. The other clusters must also be part of that slice.
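As a sketch, a service is typically advertised to the other clusters on the slice with a ServiceExport object applied on the exporting cluster. The names, namespace, labels, and ports below are placeholders; verify the field layout against the ServiceExport CRD shipped with your KubeSlice version.

```yaml
apiVersion: networking.kubeslice.io/v1beta1
kind: ServiceExport
metadata:
  name: iperf-server            # hypothetical service name
  namespace: iperf              # a namespace onboarded onto the slice
spec:
  slice: demo-slice             # the slice the namespace is onboarded to
  selector:
    matchLabels:
      app: iperf-server         # must match the labels of the service's pods
  ports:
    - name: tcp
      containerPort: 5201
      protocol: TCP
```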
Can KubeSlice help to manage and maintain applications from multiple tenants in the same cluster?
Yes, KubeSlice provides multi-tenancy management with resource isolation and access control for different teams.
Can KubeSlice help to migrate my cluster from one location to another?
KubeSlice provides a seamless application migration solution that enables customers to connect their clusters and move their workloads without worrying about IP conflicts. With KubeSlice, customers can redefine their workload environments across clusters with ease and save valuable time in the migration process.
How does KubeSlice help me in saving costs?
KubeSlice enables customers to consolidate clusters and achieve multi-tenancy by providing advanced resource management and isolation capabilities. KubeSlice optimizes resource allocation and allows for efficient utilization of cluster resources, reducing the need for maintaining multiple clusters.
If I have many clusters spanning multiple cloud providers, is there a centralized way that I can observe those clusters?
KubeSlice provides a centralized KubeSlice Manager for registration, monitoring, and management of multiple clusters.
License
What are the two types of licenses supported?
KubeSlice supports a trial and an enterprise license.
How can I obtain a trial license?
Write to sales@avesha.io to obtain a trial license.
How can I upgrade to enterprise license from a trial license?
You can click the upgrade license label on the KubeSlice Manager or write to sales@avesha.io to upgrade to the enterprise license from an existing trial license. To know more, see upgrading to enterprise license.
What is the basis for enterprise license?
The enterprise license is based on infrastructure, measured in terms of vCPUs (virtual CPUs).
What is the frequency of the license fee payment?
Currently, we accept annual license fee payments.
Prerequisites
How do I prepare my clusters that are on different clouds?
You must prepare your clusters for registration with the KubeSlice Controller. Authenticate the clusters with the respective cloud providers. To accomplish this, run the commands in Cluster Authentication to retrieve the relevant kubeconfig file and add it to your default kubeconfig path.
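For example, the kubeconfig entries for managed clusters can be retrieved with the respective cloud CLIs; the cluster names, region, zone, and resource group below are placeholders:

```
# AWS EKS
aws eks update-kubeconfig --name worker-cluster-1 --region us-east-1
# Google GKE
gcloud container clusters get-credentials worker-cluster-2 --zone us-central1-a
# Azure AKS
az aks get-credentials --resource-group my-resource-group --name worker-cluster-3
```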
What are the command-line tools required to install KubeSlice?
To install KubeSlice, you must install the command-line tools listed below:
- Helm
- kubectl
- kubectx and kubens
- kubeslice-cli

To know more, see command-line tools.
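A quick sanity check (a sketch; output formats vary by version) that these tools are available on your PATH:

```
helm version --short
kubectl version --client
kubectx -h
kubens -h
kubeslice-cli --help
```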
What are the supported firewall ports?
The supported UDP firewall ports for cluster networking are in the range 30000-33000.
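As an illustration on GCP (the rule and network names are placeholders; use the equivalent security group or firewall configuration on other clouds):

```
# Allow the UDP port range used for inter-cluster gateway traffic
gcloud compute firewall-rules create kubeslice-gateway-udp \
  --network my-vpc \
  --direction INGRESS \
  --action ALLOW \
  --rules udp:30000-33000
```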
Installation
Can I install the KubeSlice Controller on more than one cluster?
No. You should use only one controller cluster to install the KubeSlice Controller. The KubeSlice Controller cannot be installed on other worker clusters.
How do I know that the KubeSlice Controller is successfully installed?
You can check the status of the running pods by running the command described in validating the controller installation.
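For example, assuming the controller was installed into the kubeslice-controller namespace used in the documentation:

```
# All controller pods should be in the Running state
kubectl get pods -n kubeslice-controller
```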
Do we have any utility to speed up the installation process?
We provide the kubeslice-cli utility, which quickly installs the KubeSlice Controller and the KubeSlice Manager, and registers worker clusters by installing the worker charts. To know more, see kubeslice-cli.
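For instance, a demo topology can be brought up with a single command; the profile name follows the kubeslice-cli documentation, so verify the flags against your kubeslice-cli version:

```
kubeslice-cli install --profile=minimal-demo
```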
How do I access the KubeSlice Manager?
After installing the controller, install the KubeSlice Manager through a YAML file. To know more, see the KubeSlice Manager installation. You can also install the KubeSlice Manager using the kubeslice-cli utility. After installing the KubeSlice Manager, access it by retrieving the URL as described in the endpoint documentation and log in.
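As a sketch, assuming a default enterprise installation where the UI is exposed through a `kubeslice-ui-proxy` LoadBalancer service in the `kubeslice-controller` namespace (adjust both names to your setup):

```
# Retrieve the externally reachable address of the KubeSlice Manager UI
kubectl get service kubeslice-ui-proxy -n kubeslice-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```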
Monitoring
Does KubeSlice support integration with Slack?
Yes, KubeSlice supports integration with Slack for monitoring events and alerting on metrics. To know more, see Slack event monitoring and Slack metric alerting.
How do I monitor the slice and working clusters' health?
KubeSlice offers a comprehensive dashboard with real-time monitoring of metrics, logs, and events. The Slice Operator creates events that indicate the slice and cluster health along with other operations. You can also use the Health tab of the KubeSlice Manager Dashboard to visually monitor slice and cluster health. To know more, see cluster events and slice events.
Cluster Registration
Can I install a Slice Operator on a worker cluster before installing the KubeSlice Controller?
No. You must install the KubeSlice Controller first, then register the worker cluster with the controller by installing the Slice Operator.
Can I install the Slice Operator using the KubeSlice Manager?
You can register a cluster in manual or automated mode. In automated mode, the KubeSlice Manager installs the Slice Operator at the time of registration. In manual mode, you must install the Slice Operator on the worker cluster for the registration to be successful. To know more, see Cluster Operations.
Slice Management
How does the KubeSlice Controller communicate with the other worker clusters on a given slice?
The Slice Operator installed on each worker cluster interacts with the KubeSlice Controller to receive slice configuration updates and to:
- Facilitate network policy and service discovery across the slice.
- Import/export Istio services to/from the other clusters attached to the slice.
- Implement Role-Based Access Control (RBAC) for managing the slice components.
Can I connect a worker cluster to more than one slice?
Yes. A worker cluster can be connected to more than one slice by editing the YAML configuration or through the KubeSlice Manager.
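For reference, a worker cluster is attached to a slice by listing it under `clusters` in the SliceConfig; the same worker cluster name can appear in the SliceConfig of another slice in the same project. The field names below follow the SliceConfig CRD, but the cluster names, project namespace, subnet, and QoS values are placeholders:

```yaml
apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: demo-slice
  namespace: kubeslice-demo       # the project namespace on the controller cluster
spec:
  sliceSubnet: 10.1.0.0/16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - worker-cluster-1            # may also appear in another slice's SliceConfig
    - worker-cluster-2
  qosProfileDetails:
    queueType: HTB
    priority: 1
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 5120
    bandwidthGuaranteedKbps: 2560
    dscpClass: AF11
```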
Should all the clusters on a slice be under the same project?
Yes. You can connect clusters to a slice when all of them are under the same project.
Can a worker cluster connected to a slice communicate with clusters on another slice?
Yes. A worker cluster connected to a slice can communicate with clusters that are connected to another slice within the same project.
How do I handle the namespace resource quota for a slice?
Resource quotas enable the cluster admins to enforce limits for CPU, memory, ephemeral storage, and the number of application pods per namespace on a slice. It requires setting and monitoring the threshold limit and requests of the resources at the slice level. A default limit and request can be configured per container. To know more, see Resource Quota.
How do I handle RBAC for a given slice?
RBAC manages access to the resources based on the roles. KubeSlice is shipped with two role templates, reader-role-template and deployment-role-template. To know more, see Manage RBAC.
How do I apply node affinity to the namespaces?
Assigning node labels to a slice creates node affinity that allows namespaces to be placed on a node or group of nodes with the same node label. Node affinity allows restricting the namespaces to only specific nodes with the same labels. The node affinity can be applied through the YAML file or using the KubeSlice Manager. To know more, see Assign Node Labels.
How do I onboard namespaces onto a slice?
Onboard application namespaces onto a slice and manage them using the KubeSlice Manager. To know more, see Onboard applications.
How do I advertise service from a cluster when applications are onboarded onto a slice from all the worker clusters?
The service is advertised from a cluster using service export. To know more, see Service Export.
Can we have a longer slice name?
No. Limit cluster and slice names to 15 characters or fewer; a longer slice name causes an invalid label error during service export.
What happens when you offboard a namespace from a slice?
When a namespace is offboarded from a slice, the NSM interface is removed from all workloads within that namespace.
Upon deleting a slice, the slice gateway pods (depending on the deployment model) and related services are removed from the `kubeslice-system` namespace. Ensure that all application namespaces are offboarded before deleting a slice. If you attempt to delete a slice without offboarding all namespaces, an error occurs:

```
kubectl delete -f slice.yaml -n kubeslice-cisco
The SliceConfig "slicedemo" is invalid: Field: ApplicationNamespaces: Forbidden: Please deboard the namespaces before deletion of slice.
```

You must also remove all service exports before deleting a slice. Otherwise, the following error occurs when you try to delete the slice:

```
kubectl delete -f slice_deboard.yaml -n kubeslice-cisco
The SliceConfig "slicedemo" is invalid: ServiceExportConfig: Forbidden: The SliceConfig can only be deleted after all the service export configs are deleted for the slice.
```
How do I exclude all the pods of a deployment?
Add the `kubeslice.io/exclude` label to the template section of your deployment to exclude all pods in that deployment.
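A minimal sketch of a deployment whose pods are excluded from the slice overlay; the deployment and namespace names are hypothetical, and the label value `"true"` is an assumption, so confirm the expected value in your KubeSlice version's documentation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: excluded-app              # hypothetical deployment name
  namespace: demo-namespace       # a namespace onboarded onto the slice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: excluded-app
  template:
    metadata:
      labels:
        app: excluded-app
        kubeslice.io/exclude: "true"   # assumption: a truthy value excludes these pods
    spec:
      containers:
        - name: app
          image: nginx:1.25
```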
Uninstallation
What are the components that can be uninstalled using the KubeSlice Manager?
You can delete a slice, detach a worker cluster from a slice, and deregister a worker cluster from the KubeSlice Controller.
Can I delete a worker cluster when its onboarded application is still running?
You must detach a worker cluster from a slice before deleting it. Detaching the worker cluster offboards the onboarded applications.
Can I offboard namespaces when the applications are running?
No. You cannot offboard namespaces when the applications are running. If a ServiceExport is created in the application namespace, it must be deleted first. Deleting the ServiceExport removes the corresponding ServiceImport automatically on all the clusters of the slice. To know more, see Offboarding Namespaces.
Can I delete a slice without deregistering or detaching worker clusters?
No. You must detach and deregister a worker cluster before deleting a slice.