
Release Notes for KubeSlice EE 0.5.0

Release Date: 23rd January 2023

KubeSlice is a cloud-independent platform that combines network, application, Kubernetes, and deployment services in a framework to accelerate application deployment in a multi-cluster and multi-tenant environment. KubeSlice achieves this by creating logical application slice boundaries which allow pods and services to communicate seamlessly across clusters, clouds, edges, and data centers.

We continue to add new features and enhancements to KubeSlice.

What's New

These release notes describe the new changes and enhancements in this version.

Resource Quota for Requests and Ephemeral Storage

You can now configure resource quotas for ephemeral storage along with CPU, memory, and pod count. You can also configure resource quotas for requests for all resources (except pod count) in addition to limits. The usage of the resource quota can also be tracked on the KubeSlice Manager's dashboard. To know more, see:
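For reference, the new quota dimensions correspond to the resource names used by a standard Kubernetes ResourceQuota. The sketch below is illustrative only — KubeSlice slice and namespace quotas are configured through KubeSlice's own configuration (not this object), and the namespace name and values are made-up examples:

```shell
# Illustrative only: a plain Kubernetes ResourceQuota showing requests,
# limits, ephemeral storage, and pod count -- the resource dimensions
# that KubeSlice quotas now cover. Namespace and values are examples.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
  namespace: example-namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    requests.ephemeral-storage: 10Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    limits.ephemeral-storage: 20Gi
    pods: "10"
EOF
```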

IdP Authentication

Okta and GitHub can be configured as identity providers to access KubeSlice. To know more, see:

Network Service Mesh Upgrade

The Network Service Mesh (NSM) component has been upgraded to the stable GA version 1.5.0 that provides upstream networking fixes.

Breaking Change

If you are using an NSM version older than the 1.5.0 GA release, you cannot upgrade the Worker Operator helm chart directly. To upgrade the Worker Operator:

  1. Uninstall the Worker Operator helm chart and delete all the NSM-related CRDs.
  2. Reinstall the Worker Operator.
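The two steps above can be sketched as follows. This is illustrative only: the release name (`kubeslice-worker`), namespace (`kubeslice-system`), chart reference, and CRD filter pattern are assumptions — substitute the values from your own installation and verify the CRD list before deleting anything.

```shell
# Step 1: uninstall the Worker Operator helm chart.
# Release name and namespace are assumed; adjust to your installation.
helm uninstall kubeslice-worker -n kubeslice-system

# Delete the NSM-related CRDs left behind by the old chart.
# The grep pattern is an assumption; review the matched list before deleting.
kubectl get crds -o name | grep networkservicemesh.io | xargs kubectl delete

# Step 2: reinstall the Worker Operator (chart reference is illustrative).
helm install kubeslice-worker kubeslice/kubeslice-worker -n kubeslice-system
```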

Latest Supported Kubernetes Version

Starting with this release, the latest supported Kubernetes version is 1.24.

Enhancements

  • When namespace sameness is applied to a namespace on a slice, it applies to all worker clusters that are part of the slice. If a worker cluster does not already have that namespace, it is now created there, ensuring that every worker cluster in the slice contains the namespace for which sameness is applied. The created namespace remains on the worker cluster even after the cluster is detached from the slice, and even after the slice is deleted.

  • Prometheus is now installed in the KubeSlice system namespace by default (if the worker cluster does not already have Prometheus installed). From this release, the default location is no longer the istio-system namespace. You can still choose any other namespace over the default KubeSlice system namespace to install Prometheus.

KubeSlice Manager

There are enhancements in the following operations of the KubeSlice Manager:

  • You can now delete a slice without detaching worker clusters and offboarding namespaces. The delete operation handles offboarding namespaces and detaching worker clusters from the slice.
  • The slice creation operation is simplified and now requires only a few parameters.

Known Issues

  • KubeSlice Manager dashboard:

    • On the Resource Quota tab, selecting a namespace and a worker cluster of a slice (that has slice and namespace quotas configured) shows 0 quotas on the corresponding charts.

    • When a Worker Operator is upgraded or restarted, the pod count on the dashboard's Slice tab shows 0.

  • Deleting the service exports does not delete the service entries. After deleting the service exports, delete the corresponding service entries from all the worker clusters.

    To delete the service entries of a worker cluster:

    1. Get the service entries object by using the following command:

      kubectl get serviceentries -n kubeslice-system
    2. Make a note of the name of the service entries object.

    3. Delete the service entries object by running the following command:

      kubectl delete serviceentries <service-entries object name> -n kubeslice-system
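If a worker cluster has accumulated several stale service entries, they can be removed in one pass instead of one at a time. This is a sketch: list the objects first and confirm they are all stale, since the second command deletes every service entries object in the namespace.

```shell
# Review the service entries first, then delete them all in one command.
kubectl get serviceentries -n kubeslice-system
kubectl delete serviceentries --all -n kubeslice-system
```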