I don't understand this comment. How else are you going to deploy pieces of k8s infra into k8s if not with Helm and Helm Charts? Sure, you can use Argo to deploy and sync Helm charts into k8s, but you're still going to be using Helm (if only indirectly, via Argo) and you will inevitably need to template things that have to be dynamically configured at render time.
I don't use templates for manifests and avoid them like the plague.
I use my preferred language to emit manifests and built tooling around it. It does not template; instead it generates the manifest by transforming a data structure (a hash) into JSON. I can then use whatever language feature or library I need to build that data structure. This is much easier to work with than trying to edit template files.
I don't need to save the output, because the tooling generates it and feeds it directly into kubectl. There's also a diff tool (which works most of the time) that lets me see what I'm about to change before applying it.
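Roughly the shape of it, as a sketch in TypeScript (the names here are made up for illustration; this isn't the actual tool):

```typescript
// Build the manifest as plain data; no templates anywhere.
const deployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "web", labels: { app: "web" } },
  spec: {
    replicas: 3,
    selector: { matchLabels: { app: "web" } },
    template: {
      metadata: { labels: { app: "web" } },
      spec: { containers: [{ name: "web", image: "web:1.2.3" }] },
    },
  },
};

// kubectl accepts JSON as well as YAML, so serializing is trivial:
//   npx ts-node emit.ts | kubectl apply -f -
console.log(JSON.stringify(deployment, null, 2));
```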
In fact, I ended up adding a wrapper for Helm so that the various --set flags, values files, source chart, chart repo, and chart version pins are all versioned in git, and I can use the same tooling to apply them, with the idempotent-install flags turned on by default. It sets all that up and calls helm instead of kubectl.
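Under the hood that amounts to something like the following, with every flag sourced from files in git (the chart, version, and values here are illustrative):

```sh
helm repo add jetstack https://charts.jetstack.io
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.14.4 \
  -f values/cert-manager.yaml
```

`helm upgrade --install` is the idempotent bit: it installs the release if it's missing and upgrades it otherwise.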
That tooling I wrote is open source. You've never heard of it because (1) I don't have the time and energy to document or promote it, and (2) it only makes sense for teams that use that particular language. Helm is language-agnostic.
EDIT: reading this thread, someone mentioned Kustomize. If that had been around in '16, I might not have written my own tool. It looks like it also treats YAML manifests as something to be transformed rather than as templates to be rendered.
Just kubectl apply the manifests. You can even use `kubectl apply -k` for the Kustomize configuration engine, which can more or less replace most of what Helm does today.
So what, I'm going to have a big Makefile or something with a bunch of kubectl applies? For each environment, too? What if one of my dependencies (cert-manager, for example) doesn't support applying directly via kubectl but has to be rendered with Helm? And how do I manage versions of these dependencies?
For better or for worse, Helm is the de facto standard for deploying into k8s. Kustomizing toy apps or simple things may work, but I have yet to see a large production stack use anything but Helm.
You can prerender Helm charts into plain old manifests and apply those. How you handle applying them is up to you; even Helm doesn't recommend running it as a service that auto-applies charts anymore. Most folks check manifests into a git repo and set up automation to apply the state of the repo to their clusters. There are tons of tools to do this for you if you want.
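Prerendering is a one-liner with `helm template` (release, chart, and version here are illustrative):

```sh
helm template cert-manager jetstack/cert-manager \
  --namespace cert-manager --version v1.14.4 \
  -f values/cert-manager.yaml > manifests/cert-manager.yaml

# Check the rendered file into git, then apply it like any other manifest:
kubectl apply -f manifests/cert-manager.yaml
```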
Definitely check out Kustomize. It's not a toy, and it can easily handle one main manifest with specializations for each unique deployment environment. It's a very nice model of overriding and patching config instead of the insane monster templates and YAML generation you get with Helm.
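A minimal sketch of the overlay model (paths, names, and patch contents are illustrative):

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base           # the shared manifests
patches:
  - path: replicas.yaml  # small per-environment override
    target:
      kind: Deployment
      name: web
images:
  - name: web
    newTag: "1.2.3"      # pin the image per environment
```

Apply with `kubectl apply -k overlays/prod`; each environment gets its own overlay directory against the same base.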
I worked 4 years for a client where everything was either plain K8s YAML or Kustomizations, mostly the latter.
Large clusters, about 80 service teams and many hundreds of apps.
We (the platform team) managed roughly 30-40 services or controllers, including cert-manager, our own CRD for app deployments, ElasticSearch, Victoria Metrics, Grafana and others.
It was (still is, only I'm sadly not there anymore!) a high-performing team and organisation with a lot going on.
We reviewed the decision to not use Helm many times and every time it was pretty much unanimous that we had a better solution than the opaque and awkward templating.
I'm not using Helm for deploying applications, though I use it for vendor manifests. It's not a small production stack, nor is it a toy app.
I'm not using Makefile either.
Helm is a kind of least-common-denominator software that's better than doing nothing, but the template paradigm leaves a lot to be desired. Its main advantage is being language-agnostic.
Why would you need a Makefile? You have to run helm to apply Helm charts; how is `kubectl apply -f .` any more complicated than that?
The entire existence of helm is superfluous. The features it provides are already part of Kubernetes. It was created for people who understand the package-manager metaphor but don't understand how Kubernetes fundamentally works.
The metaphor is wrong! You are not installing applications on some new kind of OS. Using helm is like injecting untracked application code into a running application.
At best, helm just adds unnecessary complexity by reframing existing features as features that helm adds.
In reality, helm's obfuscation leads to an impenetrable mess of black boxes that explodes the cost and complexity of managing a k8s cluster.
First off, if you are packaging and/or publishing apps using Helm charts, stop it!
There is a purpose to the standardization of the Kubernetes configuration language. Just publish the damn configuration with a bit of documentation... you know, just like every other open source library! You're building the configuration for the Helm chart anyway, so just publish that. It's a lot less work than creating stupid Helm charts that serve no purpose but to obfuscate.
Here are your new no-helm instructions:
We've stopped using helm to deploy our app. To use our recommended deployment, clone this repo of yaml configs. Copy these files into your kubernetes config repo, change any options you want (see inline comments). Apply with `kubectl apply -f .`, or let your continuous deployment system deploy it on commit.
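Spelled out, that flow is nothing more than (the repo URL is a placeholder):

```sh
git clone https://example.com/yourapp-k8s-config.git
cp yourapp-k8s-config/*.yaml my-cluster-config/yourapp/
# edit the options called out in the inline comments, then:
kubectl apply -f my-cluster-config/yourapp/
```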
The only thing helm provides is awkward templating, IME. Ideally you'd never use a text template library to manipulate YAML or JSON structured objects. Instead you'd have scripts that generate and commit whole YAML files, or you'd just update the YAML manually (less optimal), and then you'd write those to the k8s API directly or through a purpose-built tool.
(Or, hell, helm but with no templating whatsoever).
Yeah I think it's awfully unfair to say "you can't post the name of a relevant tool unless you want to go on record as LOVING TEXT TEMPLATING". I thought your reply was useful.
> How else are you going to deploy pieces of k8s infra into k8s if not with Helm and Helm Charts?
kubectl apply -f $MANIFESTS
> you're still going to be using Helm (if only indirectly, via Argo) and you will inevitably need to template things that have to be dynamically configured at render time.
Use Kustomize for dynamic vars and keep it to a minimum. Templating is the root of all evil.
Helm mostly adds unnecessary complexity and obscurity. Sure, it's faster to deploy supporting services with it, but how often do you actually need to do that anyway?
The time you initially gain by using Helm can cost you an order of magnitude more in maintenance later on, because you've created a situation where the underlying mechanics are both hidden from you and unknown to you.
How do you configure it? Say you're installing a new version: do you go over the manifests and edit them by hand with every update? Do you maintain some sed scripts?
helm is awesome because it separates configuration.
I actually used to sed configurations; now I use yq[0] whenever I need to programmatically edit YAML/JSON. It has far fewer side effects.
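For example, bumping an image tag in place (yq v4 syntax; the file and path are illustrative):

```sh
yq -i '.spec.template.spec.containers[0].image = "web:1.2.4"' deployment.yaml
```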
But for Kubernetes manifests specifically, the right tool for the job is Kustomize[1] (which in that case does what Helm does for you, keeping dynamic variables separate). It ships with kubectl and I'm a big believer in using default tools when possible.
> Say you're installing a new version: do you go over the manifests and edit them by hand with every update?
I check the patch notes, diff the configuration files to see if anything new popped up, make the required changes if necessary, bump the version number, and deploy.
It sounds laborious, but it's really not that much work most of the time, and more importantly it forces you to develop a good sense of what everything is and how it works. Plus, it gives you a completely transparent, readable environment. Both are important for keeping things running in production. Otherwise you might find yourself debugging incomprehensible systems you've never paid attention to in the middle of the night, with 2 hours left before traffic starts coming in.
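kubectl can also show you the delta against the live cluster before you commit to anything, which makes this workflow less scary than it sounds:

```sh
# Server-side dry run: shows what would change without changing anything.
kubectl diff -f manifests/
kubectl apply -f manifests/
```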
Well, that makes sense, but it sounds like I would spend much more time than necessary on many packages. If that were my full-time job, I guess it would work, but I just need to install something and move on to other things. Some packages install thousands of lines of YAML; I'd need more than a day to grok it all. Installing with Helm is easy and usually works.
Something "simple" like GitLab would probably be impossible for an ordinary person to understand.
Yeah, I might be thinking of it as simpler than it really is just out of habit. Though it has to be said that most Kubernetes resources are pretty pedestrian; with habit you know where to look for the meaningful bits (much like any other form of source code).
So instead of Go templates, I'm going to use Dhall? Why? I'd be losing interop with the entire Helm ecosystem too, so there go the dependencies and such I rely on to maintain my IaC.
That blog post doesn't alleviate any issues one might have using the traditional Go Templates + Helm.
Gives you a TypeScript API for generating resources. The difference is that these templates are semantic (via the type system), not just syntactic as in text templates. Relatedly, they also factor and compose better, because you get real functions and imports.
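Roughly the difference in practice, with hand-rolled (and heavily simplified) types standing in for whatever the library actually ships:

```typescript
interface Container {
  name: string;
  image: string;
}

interface Deployment {
  apiVersion: "apps/v1";
  kind: "Deployment";
  metadata: { name: string };
  spec: { replicas: number; template: { spec: { containers: Container[] } } };
}

const web: Deployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "web" },
  // Writing "replica" instead of "replicas" below would fail to compile;
  // a text template happily renders the typo and k8s rejects it at apply time.
  spec: { replicas: 1, template: { spec: { containers: [{ name: "web", image: "web:1" }] } } },
};
```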
It’s probably best you avoid using Kubernetes in production for a long while. At the very least until you understand why your comment is being so heavily downvoted.
I've been using Kubernetes in production for over 4 years. I think I fully understand what's going on; I just have a different opinion than those downvoting me.
There are a few dozen ways to work with k8s that don't involve helm. I understand that hasn't been your experience, but to deny they exist shows a lack of awareness.