
Bye bye Cloud Native

For a long time, our infrastructure felt “correct”.

Hanko Cloud was built as a Kubernetes-native, single-tenant SaaS. One project meant one Hanko backend, one database, one isolated runtime. Everything was declarative, automated, and cleanly separated. We used CRDs, operators, auto-suspension, and auto-wake mechanisms. On paper, it looked like a textbook cloud setup.

And to be clear: it worked.

But over time, it also became obvious that we had optimized for the wrong thing.

This post is about why we’re deliberately walking away from that architecture, what didn’t age well, and why multi-tenancy is the better long-term choice for Hanko, our users, and our team.

What we built initially

Hanko Cloud runs on AWS EKS across three availability zones. We operate separate clusters for production and staging, each with multiple node pools for free and paid plans.

When a user creates a project in the Hanko Console, the following happens:

  • A custom resource is created in the cluster.
  • Our operator reacts to it.
  • A new pod is started, running a dedicated Hanko backend for that project.
  • That backend gets its own database.
  • Configuration changes are applied by updating the custom resource and restarting the pod.
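
To make that flow concrete, you can picture the custom resource as a pair of Go types in a kubebuilder-style operator. This is a minimal sketch for illustration only; the type and field names are assumptions for this post, not our actual CRD:

```go
// Package v1alpha1 sketches what a per-project custom resource could
// look like in a kubebuilder-style operator. All names are illustrative.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// HankoProjectSpec describes the desired state of one tenant's backend.
type HankoProjectSpec struct {
	// Plan selects the node pool and resource limits, e.g. "free" or "paid".
	Plan string `json:"plan"`
	// Image pins the Hanko backend version running for this project.
	Image string `json:"image"`
	// Suspended is set for idle free projects and cleared on wake-up.
	Suspended bool `json:"suspended"`
}

// HankoProject is the object the Console creates once per project. The
// operator watches it and reconciles one pod plus one database for it.
type HankoProject struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              HankoProjectSpec `json:"spec,omitempty"`
}
```

One pod and one database per object of this kind, which is exactly why our pod count tracked the project count rather than actual usage.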

Free projects are suspended after a period of inactivity to reduce resource usage. When a user tries to log in again, the project is automatically woken up.
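
The wake-up path can be sketched as a small piece of middleware in front of suspended projects: the first incoming request triggers a wake and is forwarded once the backend is up. Everything here is illustrative, from the host-based project lookup to the Wake signature; it is a sketch of the idea, not our production code:

```go
package autowake

import (
	"context"
	"net/http"
	"strings"
	"time"
)

// Wake marks a suspended project as active again. In a setup like the
// one described above, this would clear the suspension flag on the
// custom resource and block until the operator has the pod serving
// traffic again. The signature is an illustrative assumption.
type Wake func(ctx context.Context, projectID string) error

// Middleware sketches the auto-wake path: the first request to a
// suspended project triggers a wake before the request is forwarded.
func Middleware(wake Wake, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Illustrative: derive the project from the subdomain,
		// e.g. "<project-id>.hanko.io".
		projectID := strings.SplitN(r.Host, ".", 2)[0]

		// Give the backend a bounded amount of time to come up.
		ctx, cancel := context.WithTimeout(r.Context(), 30*time.Second)
		defer cancel()

		if err := wake(ctx, projectID); err != nil {
			http.Error(w, "project is waking up, please retry",
				http.StatusServiceUnavailable)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```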

This design gave us very strong isolation guarantees. Each project was cleanly separated at the runtime and database level. Configuration lived in YAML. Infrastructure was fully declarative. From a purity standpoint, it was hard to argue against.

Where this started to break down

The main issue is simple: our infrastructure scaled with the number of projects, not with real usage.

Most projects, especially on the free plan, are idle most of the time. Yet each of them still meant a pod, memory allocation, scheduling overhead, and a database sitting around. Thousands of them.

As a result:

  • Our clusters ended up with thousands of mostly idle pods.
  • RAM was the primary bottleneck, not CPU or traffic.
  • Blue/green deployments stressed the database layer, since every rollout briefly ran two backends, and two sets of database connections, per project.
  • Config changes required restarts and took seconds or minutes to apply.
  • Operating the system required deep Kubernetes knowledge.
  • AWS costs scaled in ways that had little to do with actual end-user traffic.

On top of that, the complexity of the setup meant we needed external DevOps support, which further increased operating costs.

None of this is unusual or “wrong”. But for a small team, it’s a heavy operational tax.

A subtle but important realization

At some point we had to admit something uncomfortable:

We had built a Kubernetes-native single-tenant SaaS, not “cloud native” in some abstract sense.

Cloud native does not require one tenant per pod.
Kubernetes does not force you into single-tenant runtimes.
Those were architectural choices we made to maximize isolation.

Isolation is valuable. But isolation at the runtime level is not free.

For our size and use case, it turned out to be an optimization for theoretical cleanliness, not for operating efficiency.

The direction we’re taking now

We are making Hanko properly multi-tenant.

Concretely, this means:

  • A single Hanko backend can manage many projects.
  • Each project still has a strictly isolated user pool.
  • Configuration moves into the database instead of being managed via CRDs.
  • Hanko Cloud will run only a small number of redundant backend instances instead of thousands.
  • The Hanko Console will talk directly to these backends.
  • The Kubernetes operator will no longer be needed.
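
To make "configuration moves into the database" concrete, here is a minimal Go sketch of a project-scoped config lookup in the shared backend. The table and column names are invented for illustration:

```go
// Package tenant sketches per-project configuration in a multi-tenant
// backend: config is a row in the shared database, not a CRD.
package tenant

import (
	"context"
	"database/sql"
)

// ProjectConfig holds the kind of settings that used to live in a
// custom resource. The fields are illustrative.
type ProjectConfig struct {
	ProjectID     string
	AllowedOrigin string
	PasskeysOnly  bool
}

// LoadConfig reads one project's configuration from the shared database.
func LoadConfig(ctx context.Context, db *sql.DB, projectID string) (*ProjectConfig, error) {
	var cfg ProjectConfig
	err := db.QueryRowContext(ctx,
		`SELECT project_id, allowed_origin, passkeys_only
		   FROM project_configs
		  WHERE project_id = $1`,
		projectID,
	).Scan(&cfg.ProjectID, &cfg.AllowedOrigin, &cfg.PasskeysOnly)
	if err != nil {
		return nil, err
	}
	return &cfg, nil
}
```

Because a config change is now just an updated row, it takes effect on the next read instead of requiring a pod restart. And scoping every query by project ID, as sketched here, is one way to keep each project's user pool strictly isolated inside a shared backend.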

From a user perspective, this changes almost nothing. From an operational perspective, it changes everything.

Why this is the right move for us

The benefits are straightforward:

  • Far simpler and more robust infrastructure.
  • Costs scale with real usage, not with project count.
  • No need to suspend inactive free projects anymore.
  • Faster configuration changes without restarts.
  • Less Kubernetes-specific knowledge required to operate the system.

We still run on Kubernetes. We still use managed cloud services. From an infrastructure point of view, Hanko remains cloud native.

What we’re leaving behind is single-tenant runtimes as the default.

What we learned

Looking back, our mistake wasn’t using Kubernetes.

It was assuming that maximum isolation was the right default for a small team running a SaaS with a long tail of mostly idle tenants.

Cloud native architectures are excellent at scaling systems.
They are far less effective at scaling teams.

Sometimes the most senior engineering decision is not adding another layer, another operator, or another abstraction. Sometimes it’s deleting an entire class of complexity and accepting a more pragmatic trade-off.

That’s what we’re doing now.

And we’re confident it’s the right step forward for Hanko.
