
I can only speak for myself, but I considered a few options, including "simple k8s" like [Skate](https://skateco.github.io/), and ultimately decided to build on uncloud.

It was as much personal "taste" as anything, and I would describe the choice as similar to preferring JSON over XML.

For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.





> For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.

I feel the same. I feel like it's a me problem. I was able to build and run massive systems at scale and never used kubernetes. Then, all of a sudden, around 2020, any time I wanted to build or run or do anything at scale, everywhere said I should just use kubernetes. And then when I wanted to do anything with docker in production, not even at scale, everywhere said I should just use kubernetes.

Then there was a brief period around 2021 where everyone - even kubernetes fans - realised it was being used everywhere, even when it didn't need to be. "You don't need k8s" became a meme.

And now, here we are, again, lots of people saying "just use k8s for everything".

I've learned it enough to know how to use it and what I can do with it. I still prefer to use literally anything else apart from k8s when building, and the only time I've ever felt k8s has been really needed to solve a problem is when the business has said "we're using k8s, deal with it".

It's like the JavaScript or WordPress of the infrastructure engineering world - it became the lazy answer, IMO. Or, to take the "me problem" angle: I'm just an ageing engineer moaning about having to learn new solutions to old problems.


It’s a nice portable target, with very well defined interfaces. It’s easy to start with and pretty easy to manage if you don’t try to abuse it.

I mean, the real answer is that it got easy to deploy k8s, so the justification for not using it kinda vanished.

How many flawless, painless major version upgrades have you had with literally any flavor of k8s? Because in my experience, that’s always a science experiment that results in such pain people end up just sticking at their original deployed version while praying they don’t hit any critical bugs or security vulnerabilities.

I’ve run Kubernetes since 2018 and I can count on one hand the times there were major issues with an upgrade. Have sensible change management and read the release notes for breaking changes. The amount of breaking changes has also gone way down in recent years.

Same. I think maybe twice in that time frame we've had a breaking change, and those did warn us for several versions. Typically the only "fix" we need to apply is changing the API version on objects that have matured beyond beta.
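To make that concrete with one well-known case (the names here are hypothetical, but the API group change is real): Deployments were served from the beta `extensions/v1beta1` group until Kubernetes 1.16 removed it, after which the manifest needs the GA `apps/v1` group. The "fix" is usually just the `apiVersion` line:

```yaml
# Before Kubernetes 1.16 this object could be written as:
#   apiVersion: extensions/v1beta1
# After the beta API groups were removed, only the apiVersion changes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 2
  selector:               # apps/v1 requires an explicit selector,
    matchLabels:          # which the beta versions would default for you
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # hypothetical image
```

One wrinkle worth noting: `apps/v1` made `spec.selector` mandatory where the beta versions defaulted it - exactly the kind of breaking change the release notes flag several versions in advance.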

I applaud you for having a specific complaint. "You might not need it", "it's complex", and "for some reason it bothers me" are all these vibes-based whinges that are so abundant. But with nothing specific, nothing contestable.

My home lab has grown over the years, now consisting of a physical Proxmox cluster, and a handful of servers (RaspPi and micro hosts). A couple years back I got tired of failures related to host-level Docker issues, so I got a NAS and started using NAS storage for everything I could.

I also re-investigated containerization - weighing Docker Swarm vs K3s - and settled on Docker Swarm.

I’ve hated it ever since. Swarm is a PITA to use and has all kinds of failure modes that are different than regular old Docker Compose.

I’ve considered migrating again - either to Kubernetes, or just back to plain Docker - but haven’t done it. Maybe I should look at Uncloud?



