
Installation and IdP Issues

This topic helps you diagnose and solve common problems that can occur with KubeSlice without having to call Avesha Support.

If these steps do not resolve your issue, we encourage you to contact Avesha Support.

Installation Issues

Troubleshooting Helm Upgrade Error for KubeSlice Controller

Introduction

This scenario addresses a specific error that may occur during a Helm upgrade of the KubeSlice Controller. The error is related to a mutating webhook and can be resolved by following the steps outlined below.

Issue Description

Currently, you can only upgrade to a patch version that does not contain schema changes; upgrading to any patch or full version that contains schema changes is not supported. When attempting to upgrade the KubeSlice Controller using the helm upgrade command, the following error message may be displayed:

Patch Deployment "kubeslice-controller-manager" in namespace kubeslice-controller
error updating the resource "kubeslice-controller-manager":
cannot patch "kubeslice-controller-manager" with kind Deployment: Internal error occurred: failed calling webhook "mdeploy.avesha.io": failed to cal
Looks like there are no changes for Deployment "kubernetes-dashboard"
Looks like there are no changes for Deployment "dashboard-metrics-scraper"
Patch Certificate "kubeslice-controller-serving-cert" in namespace kubeslice-controller
Patch Issuer "kubeslice-controller-selfsigned-issuer" in namespace kubeslice-controller
Patch MutatingWebhookConfiguration "kubeslice-controller-mutating-webhook-configuration" in namespace
Patch ValidatingWebhookConfiguration "kubeslice-controller-validating-webhook-configuration" in namespace
Error: UPGRADE FAILED: cannot patch "kubeslice-controller-manager" with kind Deployment: Internal error occurred: failed calling webhook "mdeploy.av

This error indicates a failure to call the mutating webhook due to no available endpoints for the webhook service.

Solution

To resolve the error and proceed with the helm upgrade:

  1. Identify the MutatingWebhookConfiguration

    Open a terminal and run the following command to retrieve the list of MutatingWebhookConfigurations:

    kubectl get mutatingwebhookconfiguration

    Expected Output:

    NAME                                                  WEBHOOKS   AGE
    cdi-api-datavolume-mutate                             1          16d
    cert-manager-webhook                                  1          31d
    istio-sidecar-injector                                4          15d
    kubeslice-controller-mutating-webhook-configuration   7          30d
    kubeslice-mutating-webhook-configuration              1          29d
    longhorn-webhook-mutator                              1          17d
    nsm-admission-webhook-cfg                             1          29d
    virt-api-mutator                                      4          18d

    Locate the MutatingWebhookConfiguration with a similar name to kubeslice-mutating-webhook-configuration in the output. Make note of the name for the next step.

  2. Delete the MutatingWebhookConfiguration

    Using the name obtained in the previous step, execute the following command to delete the MutatingWebhookConfiguration:

    kubectl delete mutatingwebhookconfiguration kubeslice-mutating-webhook-configuration

    Wait for the deletion to complete.

  3. Retry the Helm Upgrade

    After the MutatingWebhookConfiguration has been successfully deleted, you can proceed with the Helm upgrade command to upgrade the KubeSlice Controller.
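
    The exact command depends on how the controller was originally installed. As a minimal sketch, assuming a release named kubeslice-controller installed from the kubeslice chart repository into the kubeslice-controller namespace with a custom values file (substitute your own release name, chart, and values):

    helm upgrade kubeslice-controller kubeslice/kubeslice-controller \
      --namespace kubeslice-controller \
      --values <path-to-values.yaml>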

Additional Considerations

If you continue to encounter issues during the Helm upgrade or experience any other errors, it is recommended to review the error messages, consult the official documentation, or seek assistance from the Avesha Systems support team or community forums.

Conclusion

By manually deleting the problematic MutatingWebhookConfiguration, you can resolve the error encountered during the Helm upgrade of the KubeSlice Controller. Following the steps provided in this scenario will allow you to proceed with the upgrade and ensure the successful operation of your KubeSlice installation.

Troubleshooting KubeSlice Reinstallation After Uninstall

Introduction

This troubleshooting document addresses issues encountered during the reinstallation of KubeSlice after an initial uninstallation using the kubeslice-cli. The document provides detailed steps to diagnose and resolve the problems that arose due to incomplete cleanup during uninstallation. The scenario involves a multi-cluster setup with on-premises OpenShift and AWS Kubernetes clusters. The objective is to achieve a successful reinstallation of KubeSlice on the AWS cluster with the controller also functioning as a worker.

Issue Description

When objects such as clusters are deleted, the controller waits for a signal from the worker operator to confirm that cleanup has completed. If the worker is already uninstalled, no confirmation of the de-registration process ever arrives, so the controller waits approximately 10 minutes before removing the finalizers on its own.

A further complication arises if the controller is also removed during this window: the object is left orphaned, and the associated project namespace becomes trapped in the Terminating state. To prevent this, remove the finalizers manually, and uninstall the product by following the documented uninstallation steps.

Errors

Initial Failed Installation error:

Error: Error: UPGRADE FAILED: cannot patch "kubeslice-operator" with kind Deployment: Deployment.apps "kubeslice-operator" is invalid: spec.selector exit status 1

Failed cluster update due to Webhookconfigurations:

ubuntu@linux-host-2:~$ kubectl edit cluster.controller.kubeslice.io/aws-powerflex -n kubeslice-dell
error: clusters.controller.kubeslice.io "aws-pow
Error from server (InternalError): Internal error occurred: failed calling webhook "vcluster.kb.io": failed to call webhook: Post "https://kubeslice

Solutions

Remove any mutating/validating webhooks

  1. Get the mutatingwebhookconfiguration using the following command:

    kubectl get mutatingwebhookconfiguration

    Expected Output

    NAME                                                  WEBHOOKS   AGE
    authorization-cert-manager-webhook                    1          34d
    cert-manager-webhook                                  1          34d
    istio-sidecar-injector                                4          14d
    karavi-observability-cert-manager-webhook             1          34d
    kubeslice-controller-mutating-webhook-configuration   9          7d20h
  2. Delete the stale webhookconfiguration using the following command:

    kubectl delete mutatingwebhookconfiguration kubeslice-controller-mutating-webhook-configuration
  3. Verify the deletion of webhookconfiguration using the following command:

    kubectl get mutatingwebhookconfiguration

    Expected Output

    NAME                                        WEBHOOKS   AGE
    authorization-cert-manager-webhook          1          34d
    cert-manager-webhook                        1          34d
    istio-sidecar-injector                      4          14d
    karavi-observability-cert-manager-webhook   1          34d
  4. Get the validating webhookconfigurations using the following command (these configurations are cluster-scoped, so no namespace flag is needed):

    kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io
  5. Delete the validating webhookconfiguration using the following command:

    kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io kubeslice-controller-validating-webhook-configuration
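  6. Optionally, verify the deletion in the same way as in step 3:

    kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io

    The kubeslice-controller-validating-webhook-configuration entry should no longer appear in the output.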

Identify Dangling Resources and Remove Finalizers (Iterate over Namespaces)

  1. Check whether any namespaces are stuck in the Terminating state using the following command:

    kubectl get namespace

    Expected Output

    NAME                      STATUS        AGE
    kubeslice-controller      Terminating   7d23h
    kubeslice-<projectname>   Terminating   7d20h
  2. Identify lingering cluster objects that block the reinstallation process.

    Run the following commands to inspect resources in all suspect namespaces:

    kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n kubeslice-<projectname>
    kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n kubeslice-controller
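
If the inspection surfaces objects that refuse to delete, one bulk-cleanup approach is to strip the finalizers from everything that remains in the namespace. The following is a rough, destructive sketch rather than an official cleanup tool; it assumes the same placeholder namespace, so review the inspection output before running it:

NS=kubeslice-<projectname>
# For every namespaced resource type, clear the finalizers on any surviving objects
for res in $(kubectl api-resources --verbs=list --namespaced -o name); do
  for obj in $(kubectl get "$res" -n "$NS" -o name --ignore-not-found); do
    kubectl patch "$obj" -n "$NS" -p '{"metadata":{"finalizers":[]}}' --type=merge
  done
done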

Clean-up and Verify all Dangling Resources are Removed

Identification of Problematic Resources

Found "cluster.controller.kubeslice.io/aws-powerflex" in the "kubeslice-<projectname>" namespace.

Still after cleanup of resources, if Namespaces are in terminating state, patch and remove it's finalizers

Edit the Cluster Object YAML to Remove the Finalizer

Use the following command to edit the cluster object, aws-powerflex:

kubectl edit cluster -n kubeslice-controller aws-powerflex

Update the metadata section to remove the finalizers:

metadata:
  name: cluster-name
  finalizers:
    - kubernetes

Replace it with:

metadata:
  name: cluster-name
  finalizers: []

Alternatively, patch the object to remove its finalizers directly. For example, to patch the dell project:

kubectl patch project/dell -n kubeslice-controller -p '{"metadata":{"finalizers":[]}}' --type=merge
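
If a namespace itself remains stuck in Terminating even after its contents and finalizers are cleared, a commonly used last resort is to empty the namespace's spec.finalizers through the finalize subresource. A sketch, assuming jq is installed and using kubeslice-controller as the stuck namespace:

NS=kubeslice-controller
# Drop the "kubernetes" finalizer from the namespace spec and submit the result
# directly to the namespace finalize API
kubectl get namespace "$NS" -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -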

Verify all namespaces are deleted using the following command:

kubectl get namespace

Reinstall using kubeslice-cli

Use the following command to reinstall:

kubeslice-cli --config <path-to-the-topology.yaml> install

Installation Issues on Ubuntu OS with KubeSlice on kind Clusters

Introduction

When installing KubeSlice on kind clusters running on Ubuntu OS, you may encounter installation issues related to too many open files. This scenario provides guidance on resolving this issue and successfully installing KubeSlice.

Issue Description

On Ubuntu OS, the default limit for the number of open files is set to a relatively low value. This limit can cause installation problems when deploying KubeSlice on kind clusters.

Solution

To resolve the installation issues:

  1. Verify ulimit Settings

    1. Open a terminal on the Ubuntu machine where the kind cluster is running.
    2. Run the following command to check the current ulimit settings:
      ulimit -n
      If the output is lower than 2048, proceed to the next step. Otherwise, you may not be encountering the open file limit issue.
  2. Increase ulimit

    1. Open the /etc/security/limits.conf file using a text editor with root privileges.

    2. Add the following lines to the end of the file:

      * hard nofile 2048
      * soft nofile 2048
    3. Save the changes and exit the text editor.

  3. Update PAM Configuration

    1. Open the /etc/pam.d/common-session file using a text editor with root privileges.

    2. Add the following line at the end of the file:

      session required pam_limits.so
    3. Save the changes and exit the text editor.

  4. Reboot the Machine

    Reboot the Ubuntu machine to apply the new ulimit settings.

  5. Retry Installation

    After the machine restarts, attempt to install KubeSlice on the kind cluster again. Verify that the installation proceeds without encountering the previous open file limit issue.
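
    Before retrying, you can confirm that the new limit took effect:

    ulimit -n

    The command should now report 2048 instead of the earlier low value.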

Additional Considerations

If you continue to experience installation issues or encounter errors related to too many open files after following the above steps, it is recommended to review the error messages and consult the relevant documentation or community resources for further troubleshooting.

Conclusion

By increasing the ulimit to 2048 or unlimited and updating the PAM configuration, you can overcome installation issues related to too many open files when installing KubeSlice on kind clusters running on Ubuntu OS. Following these steps will help ensure a successful installation and enable you to leverage the capabilities of KubeSlice effectively.

IdP Integration Issues

Troubleshooting Error: IdP Account Not Found on KubeSlice Manager Login Page

Introduction

This scenario addresses an error that may occur when accessing the KubeSlice Manager login page, specifically an error stating that the IdP (Identity Provider) account does not exist. This error is related to the access control configuration within KubeSlice and can be resolved by following the steps outlined below.

Issue Description

When attempting to log in to the KubeSlice Manager, users may encounter an error message indicating that their IdP account does not exist. This error occurs when the access to the KubeSlice project is controlled by KubernetesRoleBinding, and the necessary RoleBinding with the required RBAC (Role-Based Access Control) for the IdP account is not created.

Solution

To resolve the error and gain access to the KubeSlice Manager:

  1. Verify RoleBinding

    Ensure that a RoleBinding is created for your IdP account on the KubeSlice project namespace. This RoleBinding associates the IdP account with the appropriate RBAC rules and permissions within KubeSlice.

    You can use one of the pre-defined roles such as kubeslice-read-only or kubeslice-read-write for the RoleBinding. These roles provide predefined access levels, or you can create a custom role to meet your specific requirements.

  2. Create or Update RoleBinding

    If a RoleBinding for your IdP account does not exist, you need to create one using the appropriate RBAC rules. Execute the necessary Kubernetes command, such as kubectl create rolebinding, to create the RoleBinding with the desired RBAC configuration. Alternatively, if a RoleBinding already exists but lacks the necessary permissions, you can update the RoleBinding to include the required RBAC rules.

  3. Verify Access

    After creating or updating the RoleBinding, verify that your IdP account has the necessary access to the KubeSlice project. Attempt to log in to the KubeSlice Manager again using your IdP account, ensuring that you provide the correct credentials and authentication details.

    If successful, you should now be able to access the KubeSlice Manager without encountering the error.
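
For steps 1 and 2, a minimal sketch of the commands involved; the binding name, user email, and project namespace below are placeholders, and kubeslice-read-only is one of the pre-defined roles mentioned above:

# List the RoleBindings that already exist in the project namespace (step 1)
kubectl get rolebinding -n kubeslice-<projectname>

# Create a RoleBinding granting the IdP account read-only access (step 2)
kubectl create rolebinding idp-user-read-only \
  --role=kubeslice-read-only \
  --user=<idp-user@example.com> \
  -n kubeslice-<projectname>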

Additional Considerations

If you continue to experience issues with the IdP account not being found or encounter any other authentication-related errors, it is recommended to review the KubeSlice documentation or check the RBAC configuration. You can also consult with your system administrator or the Avesha Systems support team for further assistance.

Conclusion

By creating or updating the RoleBinding with the appropriate RBAC configuration for your IdP account, you can resolve the error that states the IdP account does not exist on the KubeSlice Manager login page. Following the steps outlined in this scenario will enable you to establish the necessary access controls and successfully log in to the KubeSlice Manager to manage your KubeSlice projects effectively.

Missing IdP Login Button on KubeSlice Manager Login Page

Introduction

This scenario addresses an issue where the IdP (Identity Provider) login button is not visible on the KubeSlice Manager login page. The cause of this issue is related to the persistence of IdP configuration in the kubeslice-ui-oidc Kubernetes Secret within the kubeslice-controller namespace. To resolve this issue and display the IdP login button correctly, follow the steps provided below.

Issue Description

When accessing the KubeSlice Manager login page, users may find that the IdP login button is missing or not visible. This issue occurs when there are problems with the configuration stored in the kubeslice-ui-oidc Kubernetes Secret, specifically within the kubeslice-controller namespace.

Solution

To resolve the missing IdP login button issue on the KubeSlice Manager login page:

  1. Verify IdP Configuration

    1. Ensure that the IdP configuration stored in the kubeslice-ui-oidc Kubernetes secret is correct.
    2. Access the Kubernetes cluster and navigate to the kubeslice-controller namespace.
    3. Locate the kubeslice-ui-oidc secret and verify that the configuration values are accurate and match your IdP settings.
  2. Propagate Configuration Changes

    After modifying the values in the kubeslice-ui-oidc secret, allow some time for the changes to propagate. It may take up to 90 seconds for the updated Secret to be synchronized and reflected in the volume mounts.

  3. Restart the kubeslice-api-gw Deployment

    If the updated IdP configuration is still not visible after the propagation time, you need to restart the kubeslice-api-gw deployment. Execute the following command to restart the deployment:

    kubectl rollout restart deploy/kubeslice-api-gw -n kubeslice-controller
  4. Verify the IdP Login Button

    1. Wait for the kubeslice-api-gw deployment to restart and stabilize.
    2. Access the KubeSlice Manager login page again and check if the IdP login button is now visible. If the IdP configuration was correct and the deployment restart was successful, the IdP login button should be displayed as expected.
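
For step 1, a quick way to inspect the Secret is with kubectl; the data values are base64-encoded, so decode them before comparing against your IdP settings:

# Show the kubeslice-ui-oidc Secret, including its encoded data fields
kubectl get secret kubeslice-ui-oidc -n kubeslice-controller -o yaml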

Additional Considerations

If you still encounter issues with the IdP login button not being visible or face any other authentication-related problems, review the KubeSlice documentation. Alternatively, double-check the IdP configuration, and seek assistance from your system administrator or the Avesha Systems support team.

Conclusion

By ensuring the correctness of the IdP configuration stored in the kubeslice-ui-oidc Kubernetes secret and restarting the kubeslice-api-gw deployment, you can resolve the issue of the missing IdP login button. Following the provided steps will help you display the IdP login button on the KubeSlice Manager login page.

kubeslice-cli Issues

Resolving the Unverified Developer Error Message When Installing kubeslice-cli on macOS

Introduction

This scenario addresses the Unverified Developer error message that occurs when trying to install kubeslice-cli on macOS. The error message indicates that the application is from an unregistered developer and cannot be installed directly. The solution below provides step-by-step instructions to enable the application for macOS installation.

Issue Description

When attempting to install kubeslice-cli on macOS, users encounter an Unverified Developer error message. macOS displays this message when trying to install applications from developers who are not registered with Apple.

Solution

To resolve the Unverified Developer error message and proceed with installing kubeslice-cli on macOS, follow these steps to enable the application:

  1. Find the kubeslice-cli application on your macOS system.

  2. Control-click (or right-click) on the kubeslice-cli application icon. A context menu will appear.

  3. From the context menu, select Open. This action will trigger a dialog box with the Unidentified Developer warning message.

  4. The warning dialog box will have an Open button. Click Open to proceed with the installation. By doing so, you are confirming that you trust the application and want to run it despite the unverified developer warning.

  5. macOS may require you to enter your administrator password to proceed with the installation. This step is to ensure that you have the necessary permissions to install the application.

  6. After you've bypassed the Unverified Developer warning and provided the required permissions, the kubeslice-cli installation will be completed. The application is now installed and ready to be used.

Additional Resources

For more information and visual instructions on enabling applications from unverified developers on macOS, refer to enabling the application for macOS.

Conclusion

By following the steps to enable the kubeslice-cli application on macOS and bypassing the Unverified Developer warning, users can successfully install and use the CLI tool. The provided instructions help users to proceed with the installation while exercising caution and maintaining macOS security.

KubeSlice CLI Command Execution Issues

Introduction

This scenario addresses the issue of being unable to run KubeSlice CLI commands after a successful installation. Specifically, when attempting to use commands like kubeslice-cli get sliceConfig -n kubeslice-demo, an error is encountered. The solution below provides a step-by-step resolution to troubleshoot and fix the problem.

Issue Description

After installing KubeSlice using kubeslice-cli, running KubeSlice CLI commands, such as kubeslice-cli get sliceConfig -n kubeslice-demo, results in the following error message:

Fetching KubeSlice sliceConfig...
? Running command: /usr/local/bin/kubectl get sliceconfigs.controller.kubeslice.io -n demo
error: the server doesn't have a resource type "sliceconfigs"
2022/10/04 08:26:40 Process failed exit status 1

Solution

To resolve the issue and successfully run KubeSlice CLI commands:

  1. Switch to the Controller Cluster

    Ensure that you are operating on the correct cluster by switching to the controller cluster context. You can use the kubectx command to switch contexts, and the kubectx -c command to confirm the current context. This ensures that the KubeSlice CLI commands are executed on the intended cluster.

    kubectx -c
  2. Export Configuration File

    Export the configuration file to set the KUBECONFIG environment variable. This is crucial for the KubeSlice CLI to access the correct cluster information and resources. Replace <path-to-the-kubeconfig-file> with the actual path to the configuration file:

    export KUBECONFIG=kubeslice/<path-to-the-kubeconfig-file>

    This step allows the KubeSlice CLI to interact with the Kubernetes API server using the specified configuration, ensuring the necessary resource types and endpoints are available.
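
    With the context and kubeconfig set, the resource type should now resolve. As a quick check, you can query the same resource that failed earlier (the project namespace below is a placeholder):

    kubectl get sliceconfigs.controller.kubeslice.io -n kubeslice-<projectname>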

Conclusion

By ensuring that you are on the controller cluster and exporting the configuration file using the export KUBECONFIG command, you can resolve the issue of being unable to run KubeSlice CLI commands. These steps set the context and configuration for the KubeSlice CLI, enabling it to interact with the intended cluster and retrieve KubeSlice resources without encountering errors.

Licensing Issues

Dealing with the Invalid License Warning in KubeSlice Manager

Issue Description

When using the KubeSlice Manager, you encounter an Invalid License warning, indicating that the license being used is not valid or has expired.

Solution

  1. Check License Details

    Verify the license details entered in the KubeSlice Manager. Ensure that the license key is correct and has not expired.

  2. Contact KubeSlice Support

    If you believe the license key is valid and should be working, or if you encounter any issues related to licensing, contact KubeSlice support for assistance. You can reach out to KubeSlice support by sending an email to support@avesha.io.

  3. Provide Relevant Information

    When contacting support, provide relevant details such as the license key, the warning message you received, and any other error messages or symptoms related to the license issue. This helps the support team diagnose the problem and provide appropriate solutions.

  4. Wait for Support Response

    After contacting support, wait for their response. The support team investigates the license issue and provides guidance or a resolution.

  5. Check License Status

    After the license issue is resolved, verify the license status in the KubeSlice Manager. Ensure that the Invalid License warning is no longer displayed.

  6. Renew or Update License if Required

    If the license has expired, make sure to renew it promptly to avoid any service interruptions. If necessary, update the license key with the new valid license provided by the KubeSlice team.

note

It's essential to maintain a valid and active license to ensure uninterrupted access to KubeSlice features and support services.

For further assistance or information regarding licensing and KubeSlice support, refer to the Licensing documentation or reach out to the support team at support@avesha.io.

Handling the Warning: Automatic License Creation Failed. Unable to Reach the License Server

Issue Description

Upon attempting to create a license secret in KubeSlice, you encounter a warning message stating Automatic License Creation Failed. Unable to Reach the License Server. This indicates that KubeSlice is unable to establish a connection with the license server to automatically create the license secret.

Solution

  1. Verify Internet Connectivity

    Ensure that the worker cluster on which KubeSlice is deployed has internet connectivity. The cluster must be able to reach the license server over the internet for the license secret to be created automatically.

  2. Check Firewall Settings

    If you have any firewall settings in place, make sure they do not block outbound connections to the license server. Allow necessary network access to the license server.

  3. Confirm License Server URL

    Double-check the license server URL provided in the KubeSlice configuration or settings. Ensure that the URL is correct and accessible.

  4. Check Kubernetes API Connectivity

    Verify that the Kubernetes API server can be accessed from the worker cluster where KubeSlice is running. The license creation process requires communication with the Kubernetes API server.

  5. Contact KubeSlice Support

    If you have performed the above steps and are still encountering the Automatic License Creation Failed warning, it's best to contact KubeSlice support for assistance. Reach out to the support team by sending an email to support@avesha.io.

  6. Provide Relevant Information

    When contacting support, provide details about the warning message, the license server URL, any error messages, and any other relevant information. This helps the support team diagnose the issue accurately.

  7. Wait for Support Response

    After contacting support, wait for their response. The support team investigates the issue and provides guidance or a solution to resolve the problem.
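
For steps 1 and 2, a quick way to test outbound connectivity from the worker cluster is to run a short-lived pod; the image and URL below are illustrative placeholders, not an official check:

# Launch a throwaway curl pod and print the HTTP status code returned by the endpoint
kubectl run connectivity-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sS -o /dev/null -w '%{http_code}\n' https://<license-server-url>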

note

The automatic license creation process is essential for KubeSlice to function correctly. If the license secret could not be created automatically, you might not have access to certain KubeSlice features. It is crucial to address this issue promptly to ensure smooth operations.

For further assistance or information related to the license creation process and KubeSlice support, refer to the Licensing documentation or reach out to the support team at support@avesha.io.