This talk is about orchestrating Percona XtraDB Cluster (PXC) nodes atop Google Container Engine (GKE) with Kubernetes. PXC provides synchronous replication among MySQL nodes through the WSREP (write-set replication) API and the Galera plugin that implements it, using extended virtual synchrony (EVS) for group communication and membership. While PXC can be run in isolation, GKE provides other architectural elements, such as fluentd for logging, etcd for coordination, and SkyDNS for DNS, which are vital in this design.
Key elements of the talk will be:
a) Details of PXC and the synchronous replication it provides while ensuring ACID compliance with MVCC. Extended virtual synchrony (EVS) will also be described, as will its CAP limitations. Finally, existing deployment strategies for PXC will be mentioned.
b) The Docker image built for PXC. The design is intended to be flexible and extensible, supporting builds either from git or from release packages.
c) Initial docker-compose, then the design and port to Kubernetes. Docker-compose has been used for a while to bring up an "N"-node cluster with minimal configuration. Some elements of that design cannot be used with Kubernetes as is, so the details of porting will be discussed as follows:
i) Each PXC node goes into a Pod. The same Pod may also contain other optional services such as xinetd or haproxy. While Pods alone may be sufficient, they would not make full use of Kubernetes; hence, Replication Controllers (RCs) are used to control Pod placement and lifetimes. The Pod and RC configuration will be discussed here.
ii) The nature of the architecture and the bootstrapping of the cluster. PXC is a master-less cluster that requires a bootstrapped node which the others connect to in order to form the cluster. Kubernetes, while not allowing direct linking among containers, allows for service endpoints. A "cluster" service endpoint is created for cluster group communication and client connections. The service endpoint also provides load balancing and high availability as desirable side effects. This keeps each node agnostic of the others' addresses and allows for simpler, more elegant bootstrapping of PXC (a strong benefit of deployment with Kubernetes over a conventional PXC deployment).
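As a sketch of how this bootstrapping can work (the helper name and peer-discovery details are illustrative, not taken from the repository): each node derives its wsrep_cluster_address from the peers currently reachable behind the "cluster" service, bootstrapping a new cluster when none exist:

```go
package main

import (
	"fmt"
	"strings"
)

// buildClusterAddress returns the wsrep_cluster_address URL for a PXC node.
// With no known peers the node bootstraps a new cluster ("gcomm://");
// otherwise it joins the peers discovered behind the "cluster" service.
// The function name is illustrative, not part of pxc-kubernetes itself.
func buildClusterAddress(peers []string) string {
	if len(peers) == 0 {
		return "gcomm://" // bootstrap node: start a new cluster
	}
	return "gcomm://" + strings.Join(peers, ",")
}

func main() {
	// A joining node would typically obtain peer IPs by resolving the
	// "cluster" service endpoint (e.g. via net.LookupHost).
	fmt.Println(buildClusterAddress(nil))
	fmt.Println(buildClusterAddress([]string{"10.0.0.2", "10.0.0.3"}))
}
```

Because every node resolves the same service name, no node needs to know any other node's address ahead of time, which is what makes the bootstrap simpler than a conventional PXC deployment.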
iii) The database itself is mounted through volumes, which both Docker and Kubernetes provide. This also allows for persistence and the separation of data from design.
iv) Dynamic generation of the JSON Pod configuration is required so that certain runtime elements can be injected into it.
Finally, future work will be discussed: benchmarking, application Pods, CAP testing (akin to Jepsen), integration with Apache Mesos, and so on.
The Go code for this is already up and running at https://github.com/ronin13/pxc-kubernetes.