
Application Replication

This topic describes how to use a replication slice to replicate applications from one cluster to another.

Overview

The replication slice type is a new type of slice within the KubeSlice platform, designed specifically for efficiently transferring applications and resources from one cloud or data center cluster to another. After the replication of namespaces is complete, the namespaces can be onboarded onto an application slice that manages them. This is useful when the replicated namespaces need to communicate with remote applications in the source cluster or in any other cluster.

A replication slice can also be used to back up or replicate namespaces by specifying the same cluster as the source and the destination during slice creation. The replicated namespaces can be restored later.

Replicate Applications Between Two Worker Clusters

Let us install the KubeSlice Controller, register two worker clusters, create a replication slice, and connect the worker clusters to it. We then deploy the Boutique application microservices on the worker-1 cluster and replicate them to the worker-2 cluster.

  1. You must clone the examples repo, as it contains all the required configuration files in the examples/replication-demo directory.

    Use the following command to clone the examples repo:

    git clone https://github.com/kubeslice/examples.git

    After cloning the repo, use the files from the examples/replication-demo directory.

    The following tree structure describes all the folders and files of the examples/replication-demo directory.

    ├── boutique-app-manifests            # Contains the Boutique application's frontend and backend services
    │   └── boutique-app.yaml             # Contains the Boutique application manifest
    ├── kubeslice-cli-topology-template   # Contains the topology template for installing the KubeSlice Controller and registering two worker clusters
    │   └── kubeslice-cli-topology-template.yaml
    ├── replication-job-config            # Contains the replication job YAML configuration
    │   └── replication-job-config.yaml
    └── replication-slice-config          # Contains the replication slice YAML configuration
        └── replication-slice-config.yaml
  2. You must install the KubeSlice Controller. Identify a cluster on which to install the KubeSlice Controller, and identify two worker clusters, worker-1 and worker-2, to register with it. Use the following template to install the KubeSlice Controller and register the two worker clusters:

    examples/replication-demo/kubeslice-cli-topology-template/kubeslice-cli-topology-template.yaml
    1. Modify the values as per your requirements. Refer to topology parameters for more details. Add kubeslice-avesha as the project name; this is the project in which you would create the replication slice. If you want to use a different project name, be sure to use that name when creating the replication slice. A minimal sketch of the template's shape follows the notes below.

      info

      For more information, see kubeslice-cli. Refer to installing the KubeSlice Controller to install the KubeSlice Controller and KubeSlice Manager using YAML.

      note

      KubeSlice Manager simplifies cluster registration and slice operations. To know more, see cluster operations.
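
      The snippet below is a minimal sketch of the topology template's general shape, not the authoritative file. Consult the template file in the repo for the exact fields; the kubeconfig path and context names here are placeholders.

      configuration:
        cluster_configuration:
          kube_config_path: /path/to/kubeconfig    # placeholder path
          controller:
            name: controller
            context_name: <controller-context>     # placeholder context
          workers:
            - name: worker-1
              context_name: <worker-1-context>     # placeholder context
            - name: worker-2
              context_name: <worker-2-context>     # placeholder context
        kubeslice_configuration:
          project_name: kubeslice-avesha           # the project used for the replication slice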

    2. Install KubeSlice using the following command on the controller cluster:

      kubeslice-cli --config examples/replication-demo/kubeslice-cli-topology-template/kubeslice-cli-topology-template.yaml install

      Running this command installs the KubeSlice Controller and registers worker-1 and worker-2 clusters with it.

    3. Create a namespace called boutique on the worker-1 cluster (where the Boutique application will be deployed) using the following command:

      kubectl create ns boutique
    4. (Optional) To retrieve the endpoint/URL for accessing the KubeSlice Manager:

      1. Run the following command on the KubeSlice Controller:

        kubectl get service kubeslice-ui-proxy -n kubeslice-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
      2. The output should display the endpoint/URL in the following format:

        https://<LoadBalancer-IP>
      3. Copy the endpoint/URL from the output and paste it into your browser window to access the KubeSlice Manager.

      4. You must create a service-account token to log in to the KubeSlice Manager. Create one using the following command:

        note

        This command works for an admin with admin-ui as the username.

        kubectl get secret kubeslice-rbac-rw-admin-ui -o jsonpath="{.data.token}" -n kubeslice-avesha | base64 --decode

        Running this command returns a secret token. Use this token to log in to the KubeSlice Manager.

  3. Deploy the Boutique application on the worker-1 cluster using the following command:

    kubectl apply -f examples/replication-demo/boutique-app-manifests/boutique-app.yaml -n boutique
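
    The manifest bundles a Deployment and a Service for each of the microservices shown in the validation output below. As a representative fragment (the repo file is authoritative; the image tag is a placeholder), the adservice pair looks roughly like this:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: adservice
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: adservice
      template:
        metadata:
          labels:
            app: adservice
        spec:
          containers:
            - name: server
              image: gcr.io/google-samples/microservices-demo/adservice:v0.8.0   # placeholder image tag
              ports:
                - containerPort: 9555
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: adservice
    spec:
      type: ClusterIP
      selector:
        app: adservice
      ports:
        - port: 9555          # matches the adservice entry in the service listing below
          targetPort: 9555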
    1. Validate the Boutique application pods on the worker-1 cluster using the following command:

      kubectl get pods -n boutique

      Expected Output

      NAME                                     READY   STATUS    RESTARTS   AGE
      adservice-5c6f8b77fb-7t6tf               1/1     Running   0          12s
      cartservice-5445649469-fzc6l             1/1     Running   0          19s
      checkoutservice-5cc44f775c-kkgn2         1/1     Running   0          8s
      currencyservice-66f5c9f698-dgw2v         1/1     Running   0          17s
      emailservice-59c44668d7-jrk5k            1/1     Running   0          9s
      frontend-77464b4775-mzj5z                1/1     Running   0          5s
      paymentservice-6c5465c467-q2fhj          1/1     Running   0          2s
      productcatalogservice-c57fbfdf-vxh7g     1/1     Running   0          1s
      recommendationservice-5677d665d6-9tgs8   1/1     Running   0          6s
      redis-cart-bbf4477f4-gs6cn               1/1     Running   0          13s
      shippingservice-5d94d4496f-njbdj         1/1     Running   0          16s
    2. Validate the Boutique application in a browser, using the URL that corresponds to the frontend service type:

      • For NodePort, use http://<external-ip>:<node-port>/.
      • For LoadBalancer, use https://<external-ip-of-LoadBalancer>/.
    3. Validate the services using the following command:

      kubectl get svc -n boutique

      Expected Output

      NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)          AGE
      adservice               ClusterIP      172.20.112.204   <none>            9555/TCP         1s
      cartservice             ClusterIP      172.20.249.101   <none>            7070/TCP         8s
      checkoutservice         ClusterIP      172.20.90.84     <none>            5050/TCP         17s
      currencyservice         ClusterIP      172.20.96.78     <none>            7000/TCP         7s
      emailservice            ClusterIP      172.20.117.63    <none>            5000/TCP         19s
      frontend                ClusterIP      172.20.28.197    <none>            80/TCP           14s
      frontend-external       LoadBalancer   172.20.190.246   <Load-Balancer>   80:31305/TCP     13s
      paymentservice          ClusterIP      172.20.60.15     <none>            50051/TCP        11s
      productcatalogservice   ClusterIP      172.20.114.176   <none>            3550/TCP         10s
      recommendationservice   NodePort       172.20.23.96     <none>            8080:32200/TCP   15s
      redis-cart              ClusterIP      172.20.227.236   <none>            6379/TCP         3s
      shippingservice         ClusterIP      172.20.213.101   <none>            50051/TCP        5s
  4. On the controller cluster, create a replication slice called boutique-migration using the following command:

    note

    In the command below, kubeslice-avesha is the project name passed in the kubeslice-cli topology template. If you passed a different project name, use that name in this command.

    kubectl apply -f examples/replication-demo/replication-slice-config/replication-slice-config.yaml -n kubeslice-avesha
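
    The applied file defines the replication slice. The sketch below only illustrates its general shape: the spec field names are assumptions, and the authoritative schema is in examples/replication-demo/replication-slice-config/replication-slice-config.yaml.

    apiVersion: controller.kubeslice.io/v1alpha1    # assumed API group/version
    kind: MigrationSlice                            # kind inferred from `kubectl get migrationslice` below
    metadata:
      name: boutique-migration
    spec:
      sourceCluster: worker-1                       # assumed field: cluster to replicate from
      destinationCluster: worker-2                  # assumed field: cluster to replicate to; per the
                                                    # Overview, pointing both at the same cluster turns
                                                    # the slice into an in-cluster backup
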
    1. Use the following command to validate the slice configuration:

      kubectl get migrationslice -n kubeslice-avesha

      Expected Output

      NAME                 AGE
      boutique-migration   14s
    2. Use the following command to validate the source and destination worker clusters connected to the replication slice:

      kubectl get workermigrationslice -n kubeslice-avesha

      Expected Output

      NAME                          AGE
      boutique-migration-worker-1   20s
      boutique-migration-worker-2   19s
    3. Switch to the worker-1 and worker-2 clusters, and use the following commands to verify the replication slice from the source or the destination worker cluster:

      kubectl get deploy -n migration  

      Expected Output

      NAME     READY   UP-TO-DATE   AVAILABLE   AGE
      velero   1/1     1            1           35s

      kubectl get bsl -n migration

      Expected Output

      NAME                           PHASE       LAST VALIDATED   AGE   DEFAULT
      kubeslice-boutique-migration   Available   9s               83s
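
      The replication slice appears to drive Velero under the hood: the bsl resource above is a standard Velero BackupStorageLocation. A representative object is sketched below; the provider and bucket depend on your environment and are placeholders here.

      apiVersion: velero.io/v1
      kind: BackupStorageLocation
      metadata:
        name: kubeslice-boutique-migration
        namespace: migration
      spec:
        provider: aws                    # placeholder: whichever object-store plugin is configured
        objectStorage:
          bucket: replication-bucket     # placeholder bucket name
      status:
        phase: Available                 # matches the PHASE column in the output above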
  5. After validating the replication slice, switch to the controller cluster and deploy the replication job using the following command:

    kubectl apply -f examples/replication-demo/replication-job-config/replication-job-config.yaml -n kubeslice-avesha
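
    The replication job configuration names the namespaces to copy across the slice. The sketch below is only illustrative: the spec field names are assumptions, and the authoritative schema is in examples/replication-demo/replication-job-config/replication-job-config.yaml.

    apiVersion: controller.kubeslice.io/v1alpha1    # assumed API group/version
    kind: MigrationJobConfig                        # kind inferred from `kubectl get migrationjobconfig` below
    metadata:
      name: boutique-migration
    spec:
      namespaces:
        - boutique                                  # assumed field: namespaces to back up and restore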

    To validate the replication job on the controller cluster:

    1. Use the following command to validate the replication job configuration:

      kubectl get migrationjobconfig -n kubeslice-avesha

      Expected Output

      NAME                 AGE
      boutique-migration   43s
    2. Use the following command to validate the worker clusters connected to the replication slice:

      kubectl get workermigrationjobconfig -n kubeslice-avesha

      Expected Output

      NAME                          AGE
      boutique-migration-worker-1   49s
      boutique-migration-worker-2   49s
    3. Switch to the worker-1 cluster to validate the replication job on it (the source worker cluster) using the following command:

      kubectl get backup -n migration

      Expected Output

      NAME                           AGE
      kubeslice-boutique-migration   9s
    4. On the worker-1 cluster, use the following command to check whether the backup was successful:

      kubectl describe backup kubeslice-boutique-migration -n migration
    5. Switch to the worker-2 cluster and validate the replication job using the following command:

      kubectl get restore -n migration

      Expected Output

      NAME                 AGE
      kubeslice-boutique   9s
    6. On the worker-2 cluster, use the following command to check whether the restore was successful:

      kubectl describe restore kubeslice-boutique -n migration
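
      The backup and restore resources are standard Velero objects, so a successful run reports Phase: Completed in the describe output. Representative shapes are sketched below; fields beyond these are environment-dependent.

      apiVersion: velero.io/v1
      kind: Backup
      metadata:
        name: kubeslice-boutique-migration
        namespace: migration
      spec:
        includedNamespaces:
          - boutique                               # the namespace replicated in this demo
      status:
        phase: Completed                           # indicates the backup succeeded
      ---
      apiVersion: velero.io/v1
      kind: Restore
      metadata:
        name: kubeslice-boutique
        namespace: migration
      spec:
        backupName: kubeslice-boutique-migration   # the backup being restored
      status:
        phase: Completed                           # indicates the restore succeeded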
  6. After applying the replication job, switch to the worker-2 cluster and check whether the Boutique pods are running using the following command:

    kubectl get pods -n boutique

    Expected Output

    NAME                                     READY   STATUS    RESTARTS   AGE
    adservice-5c6f8b77fb-7t6tf               1/1     Running   0          12s
    cartservice-5445649469-fzc6l             1/1     Running   0          19s
    checkoutservice-5cc44f775c-kkgn2         1/1     Running   0          8s
    currencyservice-66f5c9f698-dgw2v         1/1     Running   0          17s
    emailservice-59c44668d7-jrk5k            1/1     Running   0          9s
    frontend-77464b4775-mzj5z                1/1     Running   0          5s
    paymentservice-6c5465c467-q2fhj          1/1     Running   0          2s
    productcatalogservice-c57fbfdf-vxh7g     1/1     Running   0          1s
    recommendationservice-5677d665d6-9tgs8   1/1     Running   0          6s
    redis-cart-bbf4477f4-gs6cn               1/1     Running   0          13s
    shippingservice-5d94d4496f-njbdj         1/1     Running   0          16s
  7. Similarly, check whether the Boutique services are running on the worker-2 cluster using the following command:

    kubectl get svc -n boutique

    Expected Output

    NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)          AGE
    adservice               ClusterIP      172.20.112.204   <none>            9555/TCP         1s
    cartservice             ClusterIP      172.20.249.101   <none>            7070/TCP         8s
    checkoutservice         ClusterIP      172.20.90.84     <none>            5050/TCP         17s
    currencyservice         ClusterIP      172.20.96.78     <none>            7000/TCP         7s
    emailservice            ClusterIP      172.20.117.63    <none>            5000/TCP         19s
    frontend                ClusterIP      172.20.28.197    <none>            80/TCP           14s
    frontend-external       LoadBalancer   172.20.190.246   <Load-Balancer>   80:31305/TCP     13s
    paymentservice          ClusterIP      172.20.60.15     <none>            50051/TCP        11s
    productcatalogservice   ClusterIP      172.20.114.176   <none>            3550/TCP         10s
    recommendationservice   NodePort       172.20.23.96     <none>            8080:32200/TCP   15s
    redis-cart              ClusterIP      172.20.227.236   <none>            6379/TCP         3s
    shippingservice         ClusterIP      172.20.213.101   <none>            50051/TCP        5s