Modular Control Plane for Databases

A simple and modular Kubernetes Operator to manage the lifecycle of your databases: resource provisioning, routine maintenance, monitoring, and encryption, among others.

Full lifecycle

Handle the full operational complexity of running production databases on Kubernetes.

Resource provisioning and dynamic scaling.

Maintenance and failover tasks.

Monitoring and insights.

Security best practices for data and traffic encryption.

Single interface

A consistent YAML specification to provision any type of cluster and its internal resources.

Each cluster supports fine-grained configuration of its settings and includes native integrations with other Kubernetes services (e.g. Secrets).

Ensemble ensures that the configuration is always up to date. If necessary, it performs a rolling update or scales the cluster, transparently to the user, to reach the desired configuration.
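As an illustration, a cluster definition might look like the sketch below. This is a hypothetical example: the apiVersion, kind, and field names are assumptions for illustration, not Ensemble's actual CRD schema.

```yaml
# Hypothetical sketch of an Ensemble cluster spec.
# apiVersion, kind, and all field names are assumed, not the real schema.
apiVersion: ensembleoss.io/v1
kind: Cluster
metadata:
  name: postgres-main
spec:
  backend: Postgresql     # which database type to provision
  replicas: 3             # desired cluster size; changing it triggers scaling
  config:
    maxConnections: "200" # backend-specific setting
```

Applying an updated manifest (for example, raising `replicas`) would be reconciled by the operator into a rolling change, with no manual intervention.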

One operator to rule them all

A common interface based on the Operator pattern to provision, operate and manage a variety of databases on Kubernetes.

Use a single service to model and automate a complete data pipeline solution: databases, queues, schedulers, or OLAP warehouses.

Reduce the complexity of running databases on Kubernetes and ensure high availability and security compliance across your data layer.


High availability

Automated configuration, provisioning, and recovery. Reliable deployments with failure recovery and zero downtime.


Monitoring

Export and analyze metrics and workload insights from any database in real time.


Simple deployment

Deploy the application as a single Kubernetes service. Simple to operate with minimal operational overhead.

Kubernetes native

Define any database deployment using declarative YAML and integrate with other Kubernetes services.


Secure by default

Use a consistent and secure workflow to protect your data. Enabled by default, data is encrypted both in flight and at rest.


Seamless upgrades

Move between minor versions without downtime and stay up to date with security patches and improvements.

Use cases

Data-driven applications

Run a long-lived database (e.g. PostgreSQL or Redis) alongside your application.

Data pipelines

Define a complete data layer (e.g. Kafka and ClickHouse) to support data processing at scale.
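A pipeline's data layer could be declared as multiple cluster resources managed by the same operator. Again a hypothetical sketch; the apiVersion, kind, and field names are assumptions, not the project's actual schema:

```yaml
# Hypothetical sketch: two clusters in one manifest, reconciled together.
apiVersion: ensembleoss.io/v1
kind: Cluster
metadata:
  name: events-queue
spec:
  backend: Kafka        # message queue for ingestion
  replicas: 3
---
apiVersion: ensembleoss.io/v1
kind: Cluster
metadata:
  name: analytics-store
spec:
  backend: Clickhouse   # OLAP warehouse for analysis
  replicas: 2
```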

Ad-hoc clusters

Create ephemeral deployments (e.g. Spark) for specific analytical jobs.

Ready to dive in?

Use Kubernetes to deploy databases on any cloud or on-premises.