For a long time, our infrastructure felt “correct”.
Hanko Cloud was built as a Kubernetes-native, single-tenant SaaS. One project meant one Hanko backend, one database, one isolated runtime. Everything was declarative, automated, and cleanly separated. We used CRDs, operators, auto-suspension, and auto-wake mechanisms. On paper, it looked like a textbook cloud setup.
And to be clear: it worked.
But over time, it also became obvious that we had optimized for the wrong thing.
This post is about why we’re deliberately walking away from that architecture, what didn’t age well, and why multi-tenancy is the better long-term choice for Hanko, our users, and our team.
Hanko Cloud runs on AWS EKS across three availability zones. We operate separate clusters for production and staging, each with multiple node pools for free and paid plans.
When a user creates a project in the Hanko Console, the following happens:
1. A custom resource is created in the cluster.
2. Our operator reacts to it.
3. A new pod is started, running a dedicated Hanko backend for that project.
4. That backend gets its own database.
Configuration changes are applied by updating the custom resource and restarting the pod.
Inactive free projects are suspended after some time to reduce resource usage. When a user tries to log in again, the project is automatically woken up.
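For illustration, here is a simplified sketch of what such a project custom resource could look like as kubebuilder-style Go types. The HankoProject kind and its fields are hypothetical stand-ins, not our actual API; they only mirror the flow described above.

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// HankoProjectSpec describes the desired state of a single-tenant project
// runtime: one dedicated backend pod and one dedicated database.
type HankoProjectSpec struct {
	// Plan selects the node pool and resource limits (e.g. "free", "paid").
	Plan string `json:"plan"`
	// Config holds the project-level backend configuration; changing it
	// triggers the operator to restart the backend pod.
	Config string `json:"config,omitempty"`
	// Suspended is set for inactive free projects; the operator scales the
	// backend down in response and wakes it up on the next login attempt.
	Suspended bool `json:"suspended,omitempty"`
}

// HankoProjectStatus reflects what the operator has reconciled.
type HankoProjectStatus struct {
	Ready        bool   `json:"ready"`
	DatabaseName string `json:"databaseName,omitempty"`
}

// HankoProject is the custom resource the operator turns into a dedicated
// backend pod and database per project.
type HankoProject struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   HankoProjectSpec   `json:"spec,omitempty"`
	Status HankoProjectStatus `json:"status,omitempty"`
}
```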
This design gave us very strong isolation guarantees. Each project was cleanly separated at the runtime and database level. Configuration lived in YAML. Infrastructure was fully declarative. From a purity standpoint, it was hard to argue against.
The main issue is simple: our infrastructure scaled with the number of projects, not with real usage.
Most projects, especially on the free plan, are idle most of the time. Yet each of them still meant a pod, memory allocation, scheduling overhead, and a database sitting around. Thousands of them.
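To make the cost of that long tail tangible, here is a rough back-of-envelope calculation. The project count and per-pod memory request are assumptions for illustration, not our real figures.

```go
package main

import "fmt"

func main() {
	// Illustrative numbers only: the post does not state exact project
	// counts or resource requests, so these are assumptions.
	const (
		idleProjects = 3000 // "thousands" of mostly idle free projects
		memPerPodMiB = 128  // a modest memory request per backend pod
	)
	reservedGiB := float64(idleProjects*memPerPodMiB) / 1024
	fmt.Printf("Memory reserved for idle tenants: ~%.0f GiB\n", reservedGiB)
	// ~375 GiB of requests held by pods serving almost no traffic, before
	// counting databases, scheduling overhead, and node headroom.
}
```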
As a result, our footprint and costs grew with the project count rather than with actual usage, and a large share of what we ran and paid for sat idle.
On top of that, the complexity of the setup meant we needed external DevOps support, which further increased operating costs.
None of this is unusual or “wrong”. But for a small team, it’s a heavy operational tax.
At some point we had to admit something uncomfortable:
We had built a Kubernetes-native single-tenant SaaS, not “cloud native” in some abstract sense.
Cloud native does not require one tenant per pod.
Kubernetes does not force you into single-tenant runtimes.
Those were architectural choices we made to maximize isolation.
Isolation is valuable. But isolation at the runtime level is not free.
For our size and use case, it turned out to be an optimization for theoretical cleanliness, not for operating efficiency.
We are making Hanko properly multi-tenant.
Concretely, this means projects share Hanko backend instances instead of each getting a dedicated pod, and isolation moves from the runtime level into the application and data layer, where every record is scoped to its project.
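As a minimal sketch of what data-layer isolation can look like, assuming the project is resolved from the request host and every query is filtered by a project ID: the package, subdomain scheme, and schema below are illustrative, not the actual implementation.

```go
package tenant

import (
	"context"
	"database/sql"
	"net/http"
	"strings"
)

type ctxKey struct{}

// Middleware resolves the project (tenant) from the request host,
// e.g. "<project-id>.example.com", and stores it in the request context.
// The subdomain scheme is an assumption for illustration.
func Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		projectID := strings.SplitN(r.Host, ".", 2)[0]
		if projectID == "" {
			http.Error(w, "unknown project", http.StatusNotFound)
			return
		}
		ctx := context.WithValue(r.Context(), ctxKey{}, projectID)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// ProjectID returns the tenant resolved by Middleware.
func ProjectID(ctx context.Context) string {
	id, _ := ctx.Value(ctxKey{}).(string)
	return id
}

// UsersForProject shows the data-layer side of isolation: every query is
// scoped by the project ID instead of hitting a per-project database.
func UsersForProject(ctx context.Context, db *sql.DB) (*sql.Rows, error) {
	return db.QueryContext(ctx,
		"SELECT id, email FROM users WHERE project_id = $1",
		ProjectID(ctx))
}
```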
From a user perspective, this changes almost nothing. From an operational perspective, it changes everything.
The benefits are straightforward: infrastructure that scales with actual usage instead of with the number of projects, far fewer moving parts to run and debug, and lower operating costs.
We still run on Kubernetes. We still use managed cloud services. From an infrastructure point of view, Hanko remains cloud native.
What we’re leaving behind is single-tenant runtimes as the default.
Looking back, our mistake wasn’t using Kubernetes.
It was assuming that maximum isolation was the right default for a small team running a SaaS with a long tail of mostly idle tenants.
Cloud native architectures are excellent at scaling systems.
They are far less effective at scaling teams.
Sometimes the most senior engineering decision is not adding another layer, another operator, or another abstraction. Sometimes it’s deleting an entire class of complexity and accepting a more pragmatic trade-off.
That’s what we’re doing now.
And we’re confident it’s the right step forward for Hanko.