Two modes. One portable stack.
Zenokube supports two deployment topologies depending on whether your clusters can reach each other. Both use the same components and the same YAML—the difference is how data stays in sync.
Connected Clusters
When clusters can see each other over the network, Cilium's cluster mesh provides a flat Layer 3/4 fabric across sites. This enables native PostgreSQL streaming replication managed by CloudNativePG—giving you real-time, byte-level consistency of all identity, secrets, and application data.
- Sub-second replication lag via WAL streaming
- Automatic failover with CloudNativePG replica promotion
- ZenoAuth, ZenoVault, ZenoLMS, ZenoMail all replicate via the same mechanism
- Cilium encrypts cross-cluster traffic with WireGuard (transparent to apps)
- Works across clouds, on-prem, and hybrid topologies
Air-Gapped / Disconnected Sites
When clusters are isolated—different jurisdictions, classified networks, or simply no direct connectivity—each site runs a fully independent Zenokube stack. ZenoAuth's bidirectional SCIM keeps users and hierarchical groups consistent across sites via scheduled sync.
- Each cluster is 100% self-contained—no shared failure mode
- SCIM sync runs over HTTPS—works through firewalls and proxies
- Full nested group hierarchy preserved across sites
- Meets data residency requirements per jurisdiction
- Ideal for government, defense, healthcare, and regulated industries
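As a rough sketch of what a scheduled sync could look like, the resource below pushes users and groups to a peer site every 15 minutes. The kind, API group, and field names are illustrative assumptions, not documented ZenoAuth configuration.

```yaml
# Hypothetical sketch: outbound SCIM sync from one air-gapped site to a peer.
# Kind, API group, and all field names are assumptions for illustration only.
apiVersion: auth.zenokube.io/v1alpha1   # assumed API group
kind: ScimSyncTarget
metadata:
  name: site-b-directory
spec:
  endpoint: https://auth.site-b.example.internal/scim/v2
  schedule: "*/15 * * * *"          # push changes every 15 minutes
  authSecretRef:
    name: site-b-scim-token          # bearer token stored as a K8s Secret
  sync:
    users: true
    groups: true
    preserveNestedGroups: true       # keep the group hierarchy intact
```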
You can mix both modes. A primary region pair might use connected mode with PG streaming replication for instant failover, while edge sites or regulated jurisdictions use air-gapped mode with SCIM sync. The application layer is identical in both cases.
What runs inside every cluster
Every component uses PostgreSQL as its sole stateful dependency. No Redis, no Kafka, no S3, no cloud-specific managed services. This is what makes the stack portable.
| Component | Language | Database | Purpose |
|---|---|---|---|
| ZenoAuth | Rust + Next.js | PostgreSQL | OAuth 2.0/OIDC, SAML, SCIM, MFA, Passkeys |
| ZenoVault | Go + Next.js | PostgreSQL | Zero-knowledge secrets, Shamir unsealing, K8s OIDC auth |
| ZenoIngress | Rust | None | Gateway API controller, tunnel mode, 30K+ req/s, ~5 MB RAM |
| ZenoLMS | Rust + Next.js | PostgreSQL | License management, EdDSA-signed JWTs, offline validation |
| ZenoMail | Go + Next.js | PostgreSQL | Transactional email, SMTP + Graph API, GDPR crypto-shredding |
| ZenoScope | Go | None | K8s operator: one CRD provisions DB + Vault + Ingress + Auth |
ZenoAuth: self-hosted IAM with cross-site SCIM federation
ZenoAuth is a production-grade identity platform written in Rust. It runs inside every Zenokube cluster as a fully independent identity provider—no external dependency on Entra ID, Auth0, or Okta. If a cloud region goes down, every other cluster's authentication continues to function normally.
Bidirectional SCIM with Hierarchical Groups
This is the mechanism that makes multi-site identity work. ZenoAuth implements inbound and outbound SCIM 2.0 (RFC 7643/7644) with a capability that competitors like Keycloak and FusionAuth lack: nested group hierarchies.
- Groups can contain other groups, mirroring real organizational structures
- Transitive membership—users inherit permissions through the hierarchy
- Bidirectional sync accepts changes from upstream directories and pushes them to downstream systems (Slack, ServiceNow, etc.)
- Circular reference prevention enforced at the database level
- Keep your existing directory structure intact while distributing it across sites
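For reference, a nested group expressed as a SCIM 2.0 Group resource (RFC 7643) looks like the example below, rendered in YAML for readability; on the wire the payload is JSON, and the member IDs are placeholders.

```yaml
# A SCIM 2.0 Group whose members include both a user and another group.
# (YAML rendering for readability; SCIM payloads are JSON over HTTPS.)
schemas:
  - urn:ietf:params:scim:schemas:core:2.0:Group
displayName: engineering
members:
  - value: 2819c223-7f76-453a-919d-413861904646   # a user member
    type: User
  - value: 902c246b-6245-4190-8e05-00816be7344a   # a nested group member
    type: Group
```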
In connected mode, SCIM is unnecessary—PostgreSQL streaming replication keeps identity data in sync at the byte level with sub-second lag. In air-gapped mode, SCIM provides eventual consistency of users and groups over HTTPS, working through firewalls and proxies.
Protocol Support
- OAuth 2.0 with PKCE, PAR, RAR, and Dynamic Client Registration
- OpenID Connect with Discovery, UserInfo, and ID tokens
- SAML 2.0 for legacy enterprise integration
- MFA with TOTP, WebAuthn/Passkeys, and device-aware sessions
Resource Efficiency
ZenoAuth compiles to a single ~11 MB Rust binary using ~50 MB of RAM at baseline. Compare that to Keycloak (512 MB–2 GB, 30–60s startup) or Auth0 (cloud-only, per-user pricing). The only dependency is PostgreSQL. No Redis, no Java runtime, no Node.js process.
ZenoVault: zero-knowledge encryption with Shamir unsealing
ZenoVault implements a true zero-knowledge architecture. The Root Key never touches persistent storage—it exists only in encrypted, locked RAM (via memguard with mlock) during active operations. On shutdown or on receipt of a termination signal, the key is wiped and the vault auto-seals.
Four-Layer Envelope Encryption
- Layer 1 – Root Key: 256-bit AES, RAM-only, reconstructed from Shamir shards
- Layer 2 – KEK: Per-vault key, encrypted by Root Key
- Layer 3 – DEK: Per-secret-version key, encrypted by KEK
- Layer 4 – Ciphertext: Your data, encrypted with AES-256-GCM
Even a full database compromise yields only ciphertext encrypted by keys that don't exist on disk.
Shamir's Secret Sharing
During initialization, the Root Key is split into N shards with a threshold of T. For example, 5 shards where any 3 can reconstruct the key. This distributes trust across your team—no single person can unseal the vault alone. Shards are displayed once and never stored.
Kubernetes-Native Authentication
Pods authenticate to ZenoVault using projected service account tokens (native K8s OIDC). No credentials in environment variables. The ZenoVault Operator watches RemoteSecret custom resources and syncs vault secrets to native K8s Secrets with configurable refresh intervals.
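A RemoteSecret might look roughly like the sketch below; only the RemoteSecret kind comes from this description, while the API group and field names are assumptions for illustration.

```yaml
# Hypothetical sketch of a RemoteSecret. The kind is described above;
# the API group and field names are assumptions, not a documented schema.
apiVersion: vault.zenokube.io/v1alpha1   # assumed API group
kind: RemoteSecret
metadata:
  name: payment-api-db
  namespace: payments
spec:
  path: apps/payments/database           # vault path to read from
  refreshInterval: 5m                    # configurable re-sync interval
  target:
    name: payment-api-db                 # native K8s Secret to create
```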
Auto-Unseal Options
For automated deployments, ZenoVault supports auto-unsealing via AWS KMS, Azure Key Vault, Google Cloud KMS, or another Vault instance—without compromising the zero-knowledge guarantee for the encrypted data itself.
ZenoIngress: Rust-native Gateway API controller
ZenoIngress is a high-performance Kubernetes Gateway API ingress controller written in pure Rust with #![forbid(unsafe_code)]. It implements Gateway API v1.2.0 as its sole routing interface—no legacy Ingress annotations.
Performance
- 30,000+ req/s per instance (benchmark-verified)
- P99 latency under 5ms, proxy overhead <0.05ms
- ~5 MB RAM baseline, stable at 2,000 concurrent connections
- Zero-copy body streaming, connection pooling, trie-based routing
Tunnel Mode for Hybrid/Edge Deployments
This is unique to ZenoIngress. In tunnel mode, the proxy initiates outbound connections to edge servers using Yamux stream multiplexing. This means you can deploy behind restrictive NATs and firewalls without opening inbound ports. The same HTTP processing applies regardless of mode.
Use case: Deploy Zenokube in a private data center with no inbound internet access. ZenoIngress tunnels out to an edge node in a public cloud, registering hostnames for routing. Users hit the edge, traffic flows through the tunnel. No VPN, no port forwarding, no firewall rules.
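A tunnel-mode registration could be expressed along these lines; the kind, API group, and every field shown are hypothetical, since the actual ZenoIngress configuration format is not documented here.

```yaml
# Hypothetical sketch of a tunnel-mode registration. Kind, API group, and
# all fields are illustrative assumptions, not documented ZenoIngress API.
apiVersion: ingress.zenokube.io/v1alpha1   # assumed API group
kind: Tunnel
metadata:
  name: edge-eu-west
spec:
  edgeEndpoint: edge.example.com:8443      # outbound, Yamux-multiplexed connection
  hostnames:
    - api.payments.zenokube.local          # registered at the edge for routing
```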
Full Gateway API Support
- Path, header, method, and query parameter matching
- Weighted traffic splitting across backends
- Request/response header modification, URL rewrite, redirects
- Request mirroring with fractional sampling
- TLS termination with SNI, cross-namespace ReferenceGrant
- External auth filter (ExtAuthFilter CRD) with L1/L2 cache
- Circuit breakers, retries, session affinity
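Because routing is plain Gateway API, a canary split is just an HTTPRoute with weighted backendRefs; the gateway and service names below are placeholders.

```yaml
# Standard Gateway API HTTPRoute with weighted traffic splitting.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: payment-api
  namespace: payments
spec:
  parentRefs:
    - name: zeno-gateway              # placeholder Gateway name
  hostnames:
    - api.payments.zenokube.local
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: payment-api-v1
          port: 8080
          weight: 90                  # 90% of traffic to the stable version
        - name: payment-api-v2
          port: 8080
          weight: 10                  # 10% canary
```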
Defense in depth, from memory to wire
Memory Safety
ZenoAuth, ZenoIngress, and ZenoLMS are written in Rust. ZenoVault and ZenoMail are written in Go. Both are memory-safe languages that rule out the buffer overflows and use-after-free bugs behind roughly 70% of CVEs in C/C++ infrastructure software. ZenoIngress additionally enforces #![forbid(unsafe_code)].
Encryption Everywhere
- ZenoVault: AES-256-GCM with four-layer envelope encryption
- ZenoAuth: Argon2id password hashing, Ed25519 JWT signatures, per-tenant signing keys
- ZenoMail: AES-256-GCM per-user encryption, GDPR crypto-shredding (mathematically irreversible deletion)
- ZenoLMS: EdDSA-signed license JWTs for tamper-proof offline validation
- Cilium: WireGuard-based transparent encryption for all cross-node and cross-cluster traffic
Zero-Trust Networking
Cilium provides eBPF-based network policies that operate at the kernel level—no iptables chains, no userspace proxies. Pod-to-pod traffic is enforced by identity rather than IP address, surviving pod restarts and IP reassignment. Cross-cluster traffic is encrypted with WireGuard automatically.
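As an illustration, a standard CiliumNetworkPolicy expressing identity-based access to the database might look like this; the labels and namespace are placeholders.

```yaml
# Only pods labeled app=payment-api may reach the PostgreSQL pods on 5432,
# enforced by pod identity rather than IP address. Labels are placeholders.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-payment-api-to-db
  namespace: payments
spec:
  endpointSelector:
    matchLabels:
      cnpg.io/cluster: shared-db-prod   # selects the database pods
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: payment-api
      toPorts:
        - ports:
            - port: "5432"
              protocol: TCP
```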
Audit & Compliance
Every component produces structured audit logs with correlation IDs, source IP tracking, and operation metadata. Combined with Prometheus metrics and Grafana dashboards pre-configured for each service, you have full visibility for SOC 2, FedRAMP, HIPAA, and GDPR compliance frameworks.
Cilium: eBPF-powered networking that makes multi-cluster work
Cilium is the networking layer that makes connected-mode multi-cloud possible. Its Cluster Mesh feature provides a flat network across multiple Kubernetes clusters—pods in Cluster A can reach pods in Cluster B as if they were local. This is what enables PostgreSQL streaming replication across sites without VPNs or custom tunnels.
What Cilium Provides
- Cluster Mesh: Cross-cluster service discovery and pod-to-pod connectivity
- WireGuard encryption: All cross-cluster traffic encrypted transparently
- eBPF network policies: Kernel-level enforcement, no iptables
- Identity-based security: Policies follow pod identity, not IP addresses
- Bandwidth management: eBPF-based rate limiting at the kernel level
- Hubble observability: Flow logs and service maps without sidecar proxies
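As a sketch, the Helm values for one member of the mesh might look like the following; exact keys can vary by Cilium version, so treat this as illustrative rather than a drop-in configuration.

```yaml
# Illustrative Cilium Helm values for one cluster in a mesh with
# transparent WireGuard encryption (verify keys against your Cilium version).
cluster:
  name: cluster-a          # must be unique across the mesh
  id: 1                    # numeric ID, also unique per cluster
clustermesh:
  useAPIServer: true       # expose the clustermesh-apiserver for peering
encryption:
  enabled: true
  type: wireguard          # encrypt cross-node and cross-cluster traffic
```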
How It Enables PostgreSQL Replication
CloudNativePG manages PostgreSQL clusters natively in Kubernetes. When Cilium's cluster mesh connects two sites, CNPG can establish streaming replication between a primary in Cluster A and a replica in Cluster B. WAL (Write-Ahead Log) records stream continuously, keeping the replica within sub-second lag of the primary.
If the primary site fails, CNPG promotes the replica to primary. Because every Zeno component connects to PostgreSQL via the -rw service endpoint, the failover is transparent to ZenoAuth, ZenoVault, and all other services. They reconnect and continue operating.
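An abridged sketch of the secondary site's CNPG manifest, using the upstream replica-cluster pattern; the host is a placeholder and credentials are omitted for brevity.

```yaml
# Abridged CloudNativePG replica cluster at the secondary site, streaming
# from the primary over the cluster mesh. Host is a placeholder; TLS and
# credential settings are omitted here.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: shared-db-dr
spec:
  instances: 3
  bootstrap:
    pg_basebackup:
      source: shared-db-prod           # clone from the primary site first
  replica:
    enabled: true                      # run as a replica cluster
    source: shared-db-prod
  externalClusters:
    - name: shared-db-prod
      connectionParameters:
        host: shared-db-prod-rw.db.svc # placeholder, reachable via the mesh
        user: streaming_replica
        dbname: postgres
```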
Why not just use a cloud-managed database? Because it locks you into that cloud. CloudNativePG + Cilium gives you the same replication guarantees while remaining portable. Deploy the same setup on AWS today, migrate to bare metal tomorrow. The YAML doesn't change.
PostgreSQL + CloudNativePG: the only stateful dependency
Every data-bearing component in Zenokube—ZenoAuth, ZenoVault, ZenoLMS, ZenoMail—uses PostgreSQL and nothing else. This is a deliberate architectural decision: one stateful dependency means one replication strategy, one backup strategy, one failure mode to reason about.
What CloudNativePG Manages
- Automated failover: Detects primary failure, promotes replica, updates service endpoints
- Continuous archiving: WAL archiving to object storage (S3, GCS, Azure Blob, MinIO)
- Point-in-time recovery: Restore to any second within your retention window
- Rolling updates: Zero-downtime PostgreSQL version upgrades
- Connection pooling: Built-in PgBouncer integration
- Encrypted backups: AES-256 encryption at rest for all archived WAL and base backups
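For example, continuous archiving is configured as a backup stanza on the CNPG Cluster; the bucket and Secret names below are placeholders.

```yaml
# Illustrative CNPG backup stanza: continuous WAL archiving to S3-compatible
# object storage with a 30-day retention window.
spec:
  backup:
    retentionPolicy: "30d"
    barmanObjectStore:
      destinationPath: s3://zenokube-backups/shared-db-prod   # placeholder bucket
      s3Credentials:
        accessKeyId:
          name: backup-creds            # placeholder K8s Secret
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: SECRET_ACCESS_KEY
      wal:
        compression: gzip
```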
Shared Cluster Model
ZenoScope provisions per-application databases within a shared CNPG cluster. Each application gets its own database and dedicated PostgreSQL user with strict privilege isolation. This is more resource-efficient than running a separate PostgreSQL cluster per service while maintaining strong isolation at the database level.
Disaster Recovery Tiers
- Tier 1 – In-cluster: CNPG manages synchronous replicas within the same cluster for pod/node failure
- Tier 2 – Cross-cluster (connected): Streaming replication via Cilium mesh for site failure
- Tier 3 – Air-gapped backup: WAL archiving to object storage in a separate region for catastrophic failure
ZenoScope Operator: from YAML to running environment
The ZenoScope Operator is the orchestration layer that ties everything together. It watches two custom resources—ZenoScope (cluster-scoped, for teams) and ZenoApp (namespace-scoped, for individual applications)—and provisions all dependent infrastructure in strict dependency order.
What One ZenoScope Creates
- Kubernetes namespace with labels and RBAC
- PostgreSQL database and user (in shared CNPG cluster)
- Vault secrets path, access policies, and K8s auth role
- RemoteSecret to sync credentials into K8s Secrets
- Gateway API HTTPRoute for ingress
- ZenoAuth OAuth/OIDC client with redirect URIs
- Prometheus ServiceMonitor for metrics
- Scoped kubeconfig stored in Vault
```yaml
# This single resource provisions everything above
apiVersion: scope.zenokube.io/v1alpha1
kind: ZenoScope
metadata:
  name: payment-team
spec:
  database:
    enabled: true
    clusterRef:
      name: shared-db-prod
    extensions: [uuid-ossp, pgcrypto]
  vault:
    enabled: true
  ingress:
    enabled: true
    hosts:
      - hostname: api.payments.zenokube.local
        paths:
          - path: /
            service: {name: payment-api, port: 8080}
  oauth:
    enabled: true
    redirectUris:
      - https://api.payments.zenokube.local/callback
  monitoring:
    enabled: true
```
Deletion & Cleanup
ZenoScope uses Kubernetes finalizers to guarantee clean teardown. When a ZenoScope is deleted, resources are removed in reverse dependency order: routes, secrets, vault policies, OAuth clients, database user, database, and finally the namespace. For ZenoApps that share a database, the database is only dropped when the last referencing app is removed.