Multi-Cloud YugabyteDB with Kubernetes

David Roberts

Our recent blog post, ‘Multi-Cloud YugabyteDB in Practice,’ explored the impact of multi-cloud architectures on resilience. This blog explains how the latest YugabyteDB Anywhere features simplify the deployment of YugabyteDB universes across multiple clouds with Kubernetes.

Kubernetes Support

YugabyteDB Anywhere (YBA) is the self-managed control plane for YugabyteDB. It offers several interfaces for deploying and managing universes (database clusters) interactively or as code. In addition to an extensive API, it provides an operator to support Kubernetes-native, declarative infrastructure management.

A YBUniverse Kubernetes resource can be declared for a universe. YugabyteDB Anywhere’s operator will maintain the universe in line with the declared state. It will transparently create a provider to connect the operator to the current Kubernetes cluster.

To deploy a universe across multiple Kubernetes clusters, you need to create a provider in advance through the user interface or API.

YugabyteDB Anywhere 2025.2 – New Features

YugabyteDB Anywhere 2025.2 introduced the YBProvider Custom Resource Definition in Early Access. This allows multi-cluster topologies to be fully managed as Kubernetes resources.

It also supports Kubernetes resources that configure encryption-in-transit certificates, disaster recovery, and point-in-time restore.

YugabyteDB Anywhere allows the operator functionality to be deployed with namespace-scoped, rather than cluster-scoped, permissions using YBA Helm chart overrides:

rbac:
    create: true
    namespaced: true

When rbac.namespaced is true, only Roles rather than ClusterRoles are deployed. Providers must therefore be created explicitly, as the operator will not have sufficient permissions to generate them automatically.
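As a sketch, assuming the YBA Helm chart is published as yugabytedb/yugaware and installed into a yba namespace (release and chart names here are illustrative, not prescriptive), these overrides could be passed at install time:

```shell
# Hypothetical release/chart/namespace names; adjust to your installation.
helm upgrade --install yba yugabytedb/yugaware \
  --namespace yba \
  --set rbac.create=true \
  --set rbac.namespaced=true
```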

Prerequisites

A multi-cloud Kubernetes architecture depends upon a multi-cluster service mesh. YugabyteDB requires pod-level discovery and connectivity across all Kubernetes clusters to allow its nodes to find and talk to each other. This can be facilitated through several popular service meshes.

Kubernetes Multi-Cluster Services (MCS)

MCS is an official Kubernetes multi-cluster standard that leverages ServiceExport / ServiceImport Custom Resource Definitions to provide cross-cluster access to services. It uses the clusterset.local domain, served by the CoreDNS multicluster extension, to offer cross-cluster service discovery.
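As an illustration of the MCS primitives, exporting a Service makes it resolvable across the clusterset; the Service name yb-tservers and namespace yba below are placeholders:

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: yb-tservers   # must match an existing Service in this namespace
  namespace: yba
```

Importing clusters then observe a corresponding ServiceImport and can resolve the service at yb-tservers.yba.svc.clusterset.local.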

Although MCS is still in beta, YugabyteDB Anywhere supports service meshes that implement it, such as Cilium Cluster Mesh.

When creating universes across an MCS-enabled multi-cluster topology, the following provider-level adjustments must be made per availability zone.

The universe Helm chart overrides must include:

    multicluster:
        createServiceExports: true
        kubernetesClusterId: <cluster name>

The domain name must be clusterset.local, and the pod address template must be

    {pod_name}.<cluster name>.{service_name}.{namespace}.svc.{cluster_domain}.

<cluster name> is usually, but not always, the Kubernetes cluster name. Cilium accepts an ad-hoc cluster name when it’s deployed.
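As a worked example of the template, assuming a cluster named eks, a pod yb-tserver-0 backing the service yb-tservers in the namespace yba (standard YugabyteDB chart names, used here for illustration), the resulting pod address would be:

```
yb-tserver-0.eks.yb-tservers.yba.svc.clusterset.local
```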

Istio

Istio’s multi-cluster support does not use MCS, but is supported by YugabyteDB Anywhere when transparent DNS proxying is enabled. This requires the following options to be set in Istio for its sidecars:

ISTIO_META_DNS_CAPTURE: "true"
ISTIO_META_DNS_AUTO_ALLOCATE: "true"
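One way to set these options mesh-wide is through the proxyMetadata of the mesh's default proxy config; a minimal IstioOperator sketch, assuming Istio is installed via the operator API:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable transparent DNS proxying in the sidecars.
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
```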

When creating universes across an Istio-enabled multi-cluster topology, the following provider- or universe-level adjustments must be made to the universe Helm chart overrides:

istioCompatibility:
    enabled: true
multicluster:
    createServicePerPod: true

Providers

To support a multi-cluster topology, a single provider must be defined with multiple regions and zones. Each zone must point to a Kubernetes cluster and have its own credential defined to allow YugabyteDB Anywhere and the operator to manage the cluster.

The following example uses three London-based Kubernetes clusters:

Cloud     Kubernetes service            Cluster   Availability Zone
AWS       Elastic Kubernetes Service    eks       eu-west-2a
Azure     Azure Kubernetes Service      aks       uksouth-1
Google    Google Kubernetes Engine      gke       europe-west2-a

Each cluster must have a suitable storage class defined; in this case, it's yugabyte.
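The storage class definition is cloud-specific; as one hedged example, on EKS with the EBS CSI driver it might look like the following (provisioner and parameters will differ on AKS and GKE):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: yugabyte
provisioner: ebs.csi.aws.com   # EKS example; use the CSI driver for your cloud
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```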

Kubeconfig

The provider requires a kubeconfig authentication context for each Kubernetes cluster that it will manage. Each cluster must have a Kubernetes service account with bound cluster or namespace permissions. Each cluster’s context uses the service account to authenticate and must be exported to a file.
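A sketch of this preparation for the eks cluster follows; the service account and binding names are placeholders, and a namespace-scoped RoleBinding can be substituted for the ClusterRoleBinding where namespace permissions suffice:

```shell
# Create a service account for YBA/the operator to use.
kubectl --context eks -n yba create serviceaccount yba-provider
kubectl --context eks create clusterrolebinding yba-provider-admin \
  --clusterrole=cluster-admin --serviceaccount=yba:yba-provider
# Mint a token for the account (Kubernetes 1.24+), then embed it, the cluster
# endpoint, and the cluster CA in a kubeconfig file such as /tmp/kubeconfig-eks.conf.
kubectl --context eks -n yba create token yba-provider --duration=8760h
```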

A Kubernetes secret for each cluster must be created in the cluster hosting YBA. For example, if YugabyteDB Anywhere is in the yba namespace, a secret to authenticate with the eks cluster would be created with:

kubectl create secret generic eks-kubeconfig -n yba --from-file=kubeconfig=/tmp/kubeconfig-eks.conf

YBProvider

A single provider will be created in YugabyteDB Anywhere with three regions, each with one zone directing to the relevant Kubernetes cluster using the staged kubeconfig contexts.

The provider will be created with a YBProvider resource. The example below includes the universe Helm chart overrides and settings for MCS-compatible multi-cluster service meshes (kubeDomain, kubePodAddressTemplate, and the multicluster overrides), and should be adjusted for Istio.

apiVersion: operator.yugabyte.io/v1alpha1
kind: YBProvider
metadata:
  name: multicluster-provider
spec:
  cloudInfo:
    kubernetesProvider: custom
    kubernetesImageRegistry: quay.io/yugabyte/yugabyte
  regions:
    - code: europe-west2
      zones:
        - code: europe-west2-a
          cloudInfo:
            kubernetesStorageClass: yugabyte
            kubeNamespace: yba
            kubeConfigSecret:
              name: gke-kubeconfig
              namespace: yba
            kubeDomain: clusterset.local
            kubePodAddressTemplate: "{pod_name}.gke.{service_name}.{namespace}.svc.{cluster_domain}"
            overrides:
              multicluster:
                createServiceExports: true
                kubernetesClusterId: gke
    - code: eu-west-2
      zones:
        - code: eu-west-2a
          cloudInfo:
            kubernetesStorageClass: yugabyte
            kubeNamespace: yba
            kubeConfigSecret:
              name: eks-kubeconfig
              namespace: yba
            kubeDomain: clusterset.local
            kubePodAddressTemplate: "{pod_name}.eks.{service_name}.{namespace}.svc.{cluster_domain}"
            overrides:
              multicluster:
                createServiceExports: true
                kubernetesClusterId: eks
    - code: uksouth
      zones:
        - code: uksouth-1
          cloudInfo:
            kubernetesStorageClass: yugabyte
            kubeNamespace: yba
            kubeConfigSecret:
              name: aks-kubeconfig
              namespace: yba
            kubeDomain: clusterset.local
            kubePodAddressTemplate: "{pod_name}.aks.{service_name}.{namespace}.svc.{cluster_domain}"
            overrides:
              multicluster:
                createServiceExports: true
                kubernetesClusterId: aks

Universes

Universe Super User

A Kubernetes secret must be created to hold the default administrative passwords for a universe.

If the operator is watching all namespaces, this secret must be in the default namespace; otherwise, it must be in the watched namespace.

The YCQL password is for the cassandra user, and the YSQL / PostgreSQL password is for the yugabyte user.

For example, if watching the namespace yba and using Password_123! as the default password for both the YCQL and YSQL database APIs:

kubectl create secret generic ysqlpassword --namespace=yba --from-literal=ysqlPassword=Password_123!
kubectl create secret generic ycqlpassword --namespace=yba --from-literal=ycqlPassword=Password_123!

YBUniverse

A universe will be created in YugabyteDB Anywhere, leveraging the multi-cloud provider to span all three Kubernetes clusters/clouds.

The universe will be created with a YBUniverse resource:

apiVersion: operator.yugabyte.io/v1alpha1
kind: YBUniverse
metadata:
  name: multicluster-universe
  namespace: yba
spec:
  universeName: global
  providerName: multicluster-provider
  replicationFactor: 3
  numNodes: 3
  enableNodeToNodeEncrypt: true
  enableClientToNodeEncrypt: true
  enableLoadBalancer: false
  ybSoftwareVersion: "2025.2.0.0-b131"
  enableYSQL: true
  enableYSQLAuth: true
  ysqlPassword:
    secretName: ysqlpassword
  enableYCQL: true
  enableYCQLAuth: true
  ycqlPassword:
    secretName: ycqlpassword
  gFlags:
    tserverGFlags:
      yb_enable_read_committed_isolation: "true"
    masterGFlags: {}
  deviceInfo:
    volumeSize: 5
    numVolumes: 1
    storageClass: yugabyte
  kubernetesOverrides:
    resource:
      master:
        requests:
          cpu: 1
          memory: 1.5Gi
        limits:
          cpu: 1
          memory: 2Gi
      tserver:
        requests:
          cpu: 1
          memory: 1.5Gi
        limits:
          cpu: 1
          memory: 2Gi
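Once the manifests are staged (the filenames below are hypothetical), the resources can be applied and the operator's reconciliation observed:

```shell
kubectl apply -n yba -f ybprovider.yaml
kubectl apply -n yba -f ybuniverse.yaml
# Watch the operator drive the universe toward the declared state.
kubectl get ybuniverse -n yba -w
```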

Conclusion

YugabyteDB Anywhere’s embedded Kubernetes operator allows declarative, Kubernetes-native infrastructure-as-code management of multi-cloud topologies leveraging popular service meshes.

It also provides a range of interfaces to work with other management patterns, including a user interface, REST API, and CLI. This gives you the flexibility to manage your universes with different tooling, or even manually.

Read more about setting up YugabyteDB with Istio in the YugabyteDB documentation.
