(And why your Linux kernel is working harder than ever)
Kubernetes is now well into its teenage years. The early chaos has mostly settled, and for many teams, it’s become less of a science experiment and more of a reliable platform: like a power grid, or a nervous system.
But let’s be honest: the word “exciting” doesn’t usually apply to something that’s stable. And yet, in 2026, Kubernetes is quietly doing some very exciting things. Not flashy-new-feature exciting – more like finally-we-can-sleep-at-night exciting.
What’s changed? Mostly, it’s what’s happening underneath the orchestrator.
1. The AI Elephant in the Cluster
A few years ago, Kubernetes was still primarily about web apps and microservices. Today, it’s the go-to platform for deploying AI/ML workloads, and that has real consequences for infrastructure.
According to the CNCF’s 2026 State of Cloud Native report, 82% of organisations now use Kubernetes as their primary AI/ML platform. Which is great. Until your LLM training run eats your entire node pool and starts eyeing up your staging cluster.
We’ve spent the last year helping clients deal with this exact problem. It’s not about scaling the orchestrator; it’s about managing the Linux underneath.
Things like:
- Pinning workloads to specific CPU sets
- Prioritising memory access for GPU-bound tasks
- Tuning I/O behaviour to stop runaway logging from throttling everything else
Kubernetes might be the control plane, but Linux still does the heavy lifting. And when AI workloads hit, that lifting gets a lot heavier.
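CPU pinning is a good example of how much of this lives in configuration rather than code. With the kubelet’s static CPU manager policy enabled, a pod that requests whole CPUs with requests equal to limits lands in the Guaranteed QoS class and gets exclusive cores. A minimal sketch (the pod name and image are illustrative placeholders):

```yaml
# Requires the kubelet to run with the static CPU manager policy,
# i.e. cpuManagerPolicy: static in its KubeletConfiguration.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference        # illustrative name
spec:
  containers:
  - name: worker
    image: registry.example.com/inference:latest   # placeholder image
    resources:
      # Integer CPU values with requests == limits put the pod in the
      # Guaranteed QoS class, so the static policy pins it to dedicated cores.
      requests:
        cpu: "4"
        memory: 8Gi
      limits:
        cpu: "4"
        memory: 8Gi
```

The point isn’t the YAML; it’s that the kubelet is just driving Linux cpusets and cgroups underneath, and that’s where the tuning actually happens.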
2. In-Place Pod Restarts (v1.35) – Finally!
For years, changing anything in a container meant killing the pod and hoping for the best.
Changed an env var? Delete the pod.
Tweaked a mount path? Delete the pod.
Did literally anything? You guessed it.
Kubernetes v1.35 introduced in-place pod restarts, and for once, the hype is justified. You can now restart containers without triggering a full teardown.
Which means:
- Faster recovery from transient errors
- Less load on your container registry
- No more re-pulling 50GB images because someone updated a config line
For production environments (especially those with bandwidth constraints or complex storage layers), this is a game-changer. It’s not glamorous, but it’s deeply satisfying: like fixing a creaky floorboard that’s annoyed you for years.
3. Security Is a Kernel Job Now
Remember when Kubernetes security meant configuring RBAC and hoping for the best?
That ship has sailed. In 2026, the focus has shifted to what happens inside the node, not just between services.
Some key changes:
- The exec plugin allowList (v1.35) now prevents local scripts from sneaking into kubeconfig files.
- eBPF-based policy control is becoming standard, with signed eBPF programs replacing traditional firewalls for in-cluster enforcement.
- Kernel hardening is no longer optional. If you’re not running a properly secured Linux build, your cluster’s a sitting duck.
Security used to be a “nice to have” until someone used your nodes to mine enough Monero to buy a small island. We’ve seen it. It’s not pretty.
This is where a Linux-first support model really earns its keep. If your provider isn’t talking about kernel modules, syscall filtering, or cgroup restrictions, they’re not securing your cluster.
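To make the syscall-filtering point concrete: a low-effort starting point is applying the container runtime’s default seccomp profile and dropping Linux capabilities via the pod’s securityContext. A hedged sketch (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app          # illustrative name
spec:
  securityContext:
    # Filter syscalls using the runtime's default seccomp profile
    seccompProfile:
      type: RuntimeDefault
    runAsNonRoot: true
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]         # drop all Linux capabilities by default
      readOnlyRootFilesystem: true
```

None of this replaces proper kernel hardening, but it’s the kind of baseline that makes the Monero scenario above considerably harder to pull off.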
4. Kubernetes at the Edge (And the Joy of Thin Linux)
The other trend we’re seeing is the spread of Kubernetes into places it was never really designed for. Warehouses, smart vehicles, factory floors.
In these environments, you’re dealing with:
- Lightweight Kubernetes distributions like K3s or MicroK8s
- Minimal Linux installs with just enough to boot and schedule pods
- Spotty connectivity (think “4G every third Tuesday”)
- Long uptime expectations with minimal maintenance windows
Oh, and WebAssembly (Wasm) is making moves here too, with early production deployments using Wasm workloads as a lightweight alternative to full containers.
Supporting Linux in these environments isn’t about scale; it’s about resilience, smart patching, and knowing what not to touch.
We’ve worked with clients who only get SSH access once every few days. In that kind of setup, automation is nice. But what you really want is confidence that your system won’t fall over before you get another chance to fix it.
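One concrete knob for spotty connectivity: by default, pods on a node that goes unreachable are evicted after roughly five minutes. For edge nodes where “offline” usually means “the 4G link dropped”, a per-pod toleration can stretch that window. A sketch, with illustrative names and timings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: edge-agent            # illustrative name
spec:
  # Keep this pod bound to an unreachable node for an hour before the
  # control plane evicts it, instead of the default ~5 minutes --
  # useful when the node is fine and only the uplink is flaky.
  tolerations:
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 3600
  containers:
  - name: agent
    image: registry.example.com/edge-agent:latest   # placeholder image
```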
5. Kubernetes Is Boring (In a Good Way). Linux Isn’t.
At this point, most teams have got a decent handle on how to deploy workloads, configure ingress, and manage namespaces. Kubernetes itself is relatively predictable.
But that’s the point.
Now that Kubernetes is maturing, the real value is in the boring stuff. The stuff that nobody writes Medium posts about:
- Tuning VM host parameters so your K8s nodes don’t grind to a halt under AI inference load
- Understanding how NUMA memory allocation affects GPU performance
- Keeping container logs from silently filling up /var
This is the stuff we love. And it’s the stuff that keeps your beautifully orchestrated workloads from falling over at 2am.
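The log problem in particular has a fix many teams miss: the kubelet will rotate container logs for you if you tell it to. A sketch of the relevant KubeletConfiguration fields (the values are illustrative, not recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate each container's log file once it reaches this size...
containerLogMaxSize: 50Mi
# ...and keep at most this many rotated files per container,
# so logs can't silently fill /var on the node.
containerLogMaxFiles: 3
# As a backstop, evict pods before disk pressure takes the node down
evictionHard:
  nodefs.available: "10%"
  imagefs.available: "15%"
```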
Kubernetes might be the face of your infrastructure in 2026. But it still runs on Linux. And Linux still needs looking after.
Exciting? Maybe. Important? Definitely.
There’s plenty of excitement in Kubernetes right now: AI, Wasm, eBPF, edge deployments.
But what’s actually exciting to us is the fact that we can finally stop firefighting and start optimising. It’s exciting that tools like in-place restarts and signed eBPF programs mean we can run production environments that don’t feel like they’re held together with duct tape.
And that Linux is still doing the hard work in the background, making sure all this shiny new tech actually stays up.
Want to Talk About Your 2026 Kubernetes Plans?
If you’re looking at your roadmap and wondering how your Linux estate is going to handle the move to AI-native K8s, let’s have a chat.
We’ve probably already solved the problem you’re about to have.



