
How we cut Kubernetes costs by half at Wildflower
I’ve been using Kubernetes for almost a decade, and it never really gets ‘easy.’ On paper, my job is simple: keep things stable and keep costs reasonable. In practice, Kubernetes will test that every day. The part that’s always bugged me is the basics: how do you actually know what resources your workloads need?
At Wildflower, that challenge carries even more weight. We provide a mobile app that helps connect pregnant mothers with their clinics and payers. The app combines education, counseling, and remote patient monitoring, so high-risk pregnancies can be spotted earlier. The systems underneath have to be rock solid.
On the infrastructure side, we’re entirely in AWS. We run separate Kubernetes clusters for test and production, with about 800 pods at any given time. Traffic follows fairly predictable daily and weekly patterns: heavier in the mornings, lighter in the evenings, and pretty quiet around Christmas.
When things started getting messy
Kubernetes has always had this one big missing piece for me: how do I know what resources my workloads actually need? Every guide says, “set your CPU and memory requests and limits,” but nobody tells you how to figure those numbers out. For years, it was just guessing.
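To make that concrete for anyone who hasn’t lived it, this is the kind of declaration every guide is talking about, sketched here with the Kubernetes Python client. The container name, image, and numbers are purely illustrative; picking those numbers is exactly the part nobody explains.

```python
from kubernetes import client

# The declaration every guide tells you to make. Whether 250m / 256Mi is
# actually right for this workload is the question nobody answers for you.
container = client.V1Container(
    name="api",                # illustrative name
    image="example/api:1.0",   # illustrative image
    resources=client.V1ResourceRequirements(
        # requests: what the scheduler reserves for the container
        requests={"cpu": "250m", "memory": "256Mi"},
        # limits: the ceiling before CPU throttling or an OOM kill
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)
```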
Without visibility, we handled things reactively. If pods ran out of memory, we added more nodes. If something big was coming, we scaled up manually and then scaled back down later. Otherwise, we just waited for something to break. That meant overprovisioning to play it safe, and still running into jobs that would silently stop for hours or even days without us noticing.
The worst part was time. Some weeks I’d spend hours tuning configs; other weeks it swallowed days. And because I had no historical visibility, it felt unpredictable. I could have set something up in CloudWatch, but it would have taken more time than I had.
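For context, the manual alternative looks something like the sketch below: pull live usage from the Kubernetes metrics API (assuming metrics-server is installed) and line it up against what each container requests. This is an illustrative one-off script, not how Zesty works under the hood, and it only captures a single point in time, which is exactly why it never replaced real historical visibility for us.

```python
from kubernetes import client, config

# Rough quantity parsing; handles only the common suffixes.
def cpu_millicores(q: str) -> float:
    if q.endswith("n"):          # nanocores, as reported by metrics-server
        return float(q[:-1]) / 1e6
    if q.endswith("m"):          # millicores
        return float(q[:-1])
    return float(q) * 1000       # whole cores

MEM_UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "K": 1e3, "M": 1e6, "G": 1e9}

def mem_mib(q: str) -> float:
    for suffix, factor in MEM_UNITS.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor / 2**20
    return float(q) / 2**20      # plain bytes

config.load_kube_config()
core = client.CoreV1Api()
usage = client.CustomObjectsApi().list_cluster_custom_object(
    "metrics.k8s.io", "v1beta1", "pods"
)

# Index declared requests by (namespace, pod, container).
requested = {}
for pod in core.list_pod_for_all_namespaces().items:
    for c in pod.spec.containers:
        req = (c.resources.requests or {}) if c.resources else {}
        requested[(pod.metadata.namespace, pod.metadata.name, c.name)] = req

# Compare live usage to requests -- a single snapshot, no history.
for item in usage["items"]:
    ns, name = item["metadata"]["namespace"], item["metadata"]["name"]
    for c in item["containers"]:
        req = requested.get((ns, name, c["name"]), {})
        if not req:
            continue
        print(f"{ns}/{name}/{c['name']}: "
              f"cpu {cpu_millicores(c['usage']['cpu']):.0f}m "
              f"of {cpu_millicores(req.get('cpu', '0')):.0f}m requested, "
              f"mem {mem_mib(c['usage']['memory']):.0f}Mi "
              f"of {mem_mib(req.get('memory', '0')):.0f}Mi requested")
```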
Meanwhile, the costs were going up much faster than our client growth justified. It wasn’t sustainable.
Finding a smarter way
Around that time, we were already using Zesty’s Commitment Manager. Honestly, it’s by far the best software product I’ve ever dealt with. You just turn it on, and your bill goes down. That gave me confidence to try Pod Rightsizing when it came along.
The idea immediately clicked. One of the things you’re supposed to do in Kubernetes is tell it what resources you need. I’d never been able to do that until now. Pod Rightsizing was the first time I could see both real-time and historical resource usage and let the system set the requests automatically. No more guessing.
Onboarding was high-touch, which I appreciated. I got on a call with Zesty engineers, ran the install commands, and we set it up together live. Within hours, I could finally see how workloads behaved over days and weeks. For the first time, I wasn’t blind.
Life after automation
Once Pod Rightsizing was in place, the impact was immediate. CPU and memory requests adjusted on their own. Workloads scheduled properly. And the problems that used to happen when Kubernetes packed too many pods onto a single node? They just disappeared.
Jobs that used to silently stall for hours or days now run reliably. And I don’t spend any time managing resources. It’s all done for me.
The savings came faster than expected. I was hoping for a 30% reduction. Instead, we cut node count in half in our test cluster, which translated to about 50% savings right away. I expect production will drop even further once I turn on full automation, probably closer to 65%.
What it meant for me
The dollars saved are significant, but for me, the bigger win is time. I used to spend hours every week monitoring and tuning, sometimes whole days. Now it’s completely off my plate.
Even more than that, Pod Rightsizing gives me new capabilities I didn’t have before. Some optimizations were so time-consuming that I just didn’t do them. Now they happen automatically. I can finally focus on things like upgrading our infrastructure and improving security instead of babysitting Kubernetes.
Finance picked up on the results quickly, too. Just like with Commitment Manager, Pod Rightsizing is now one of the clearest wins in our AWS cost reviews.
Looking back
I’ve been working with Kubernetes for almost a decade, and for me, Pod Rightsizing fills the gap that always frustrated me: knowing what resources workloads really need, and adjusting them automatically.
Now our costs are lower, our systems are stable, and I’ve got hours of my week back. More importantly, I have confidence that our infrastructure can scale with the business instead of holding it back.
That’s not just a technical improvement. It’s solving a problem I’ve lived with for years. And I honestly don’t know how I’d go back to running Kubernetes without it.