What is funding all those Flock reps jetting around BFE to dazzle and kickback the boomer city managers and county commissioners of deep red littleville America? Is it the 2 cameras in Big Rapids MI or the 2425[1] cameras in Detroit metro?
The "roll over" that mattered has already been secured.
Do flock reps even need to fly out? They have massive contracts with the Walmarts of the world and the underlying commercial property owners. You don’t need to have a rep when it’s already in your area.
> Does it apply to completely novel tasks? No, that would be magic.
Are there novel tasks? Inside the limits of physics, tasks are finite, and most of them are pointless. One can certainly entertain tasks that transcend physics, but that isn't necessary if one merely wants an immortal and indomitable electronic god.
Perl was effectively "dead" before Perl 6 existed. I was there. I bought the books, wrote the code, hung out in #perl and followed the progress. I remember when Perl 6 was announced. I remember barely caring by that time, and I perceived that I was hardly alone. Everyone had moved on by then. At best, Perl 6 was seen as maybe Perl making a "comeback."
Java, and (by extension) Windows, killed Perl.
Java promised portability. Java had a workable cross-platform GUI story (Swing). Java had a web story with JSP, Tomcat, Java applets, etc. Java had a plausible embedded and mobile story. Java wasn't wedded to the UNIX model, and at the time, Java's Windows implementation was at least as good as its non-Windows implementations, if not better. Java also had a development budget, a marketing budget, and the explicit blessing of several big tech giants of the time.
In the late 90's and early 2000's, Java just sucked the life out of almost everything else that wasn't a "systems" or legacy big-iron language. Perl was just another casualty of Java. Many of the things that mattered back then either seem silly today or have been solved with things other than Java, but at the time they were very compelling.
Could Perl have been saved? Maybe. The claims that Perl is difficult to learn or "write only" aren't true: Perl isn't the least bit difficult. Nearly every Perl programmer on Earth is self-taught, the documentation is excellent, and Google has been able to answer any basic Perl question one might have for decades now. If Perl had somehow bent itself enough to make Windows a first-class platform, it would have helped a lot. If Perl had provided a low-friction, batteries-included de facto standard web template and server integration solution, it would have helped a lot as well. If Perl had a serious cross-platform GUI story, that would have helped a lot.
To the extent that the Perl "community" was somehow incapable of these things, we can call the death of Perl a phenomenon of "culture." I, however, attribute the fall of Perl to the more mundane reason that Perl had no business model and no business advocates.
Excellent point in the last paragraph. Python, JavaScript, Rust, Swift, and C# all have/had business models and business advocates in a way that Perl never did.
Do you not think O'Reilly Associates fits some of that role? If anything, Perl seemed to have more commercial backing than the other scripting languages at that point. Python and JavaScript were picked up by Google, but later. Amazon was originally built out of Perl. Perl never converted its industry footprint into that kind of advocacy; I think some of that is also culture-driven.
Maybe until the 2001 O'Reilly layoffs. Tim hired Larry for about 5 years, but that was mostly working on the third edition of the Camel. A handful of other Perl luminaries worked there at the same time (Jon Orwant, Nat Torkington).
When I joined in 2002, there were only a couple of developers in general, and no one sponsored to work on or evangelize any specific technology full time. Sometimes I wonder if Sun had more paid people working on Tcl.
I don't mean to malign or sideline the work anyone at ORA or ActiveState did in those days. Certainly the latter did more work to make Perl a first-class language on Windows than anyone. Yet that's very different from a funded Python Software Foundation or Sun supporting Java or the entire web browser industry funding JavaScript or....
Thanks for the detailed reply. Yes, the marketing budget for Java was unmatched, but to my eye they were in retreat towards the Enterprise datacentre by 2001. I don't think the Python foundation had launched until 2001. Amazon was migrating off Perl and Oracle. JavaScript only got interesting after Google Maps/Wave, I think; arguably the second browser war starts when Apple launches Safari, late 2002.
So, I guess the counterfactual line of enquiry ought to be why Perl didn't, couldn't, or didn't want to pivot towards stronger commercial backing sooner.
"I keep seeing teams reach for K8s when they really just need to run a bunch of containers across a few machines"
Since k8s is very effective at running a bunch of containers across a few machines, it would appear to be exactly the correct thing to reach for. At this point, running a small k8s operation, with k3s or similar, has become so easy that I can't find a rational reason to look elsewhere for container "orchestration".
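For what it's worth, the upstream k3s quick start is still, as far as I know, a single command; the install script and the bundled kubectl both come from the k3s project itself:

```sh
# install and start a single-node k3s server (quick-start script from https://get.k3s.io)
curl -sfL https://get.k3s.io | sh -
# verify with the bundled kubectl
sudo k3s kubectl get nodes
```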
I can only speak for myself, but I considered a few options, including "simple k8s" like [Skate](https://skateco.github.io/), and ultimately decided to build on uncloud.
It was as much personal "taste" as anything, and I would describe the choice as similar to preferring JSON over XML.
For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.
> For whatever reason, kubernetes just irritates me. I find it unpleasant to use. And I don't think I'm unique in that regard.
I feel the same. I feel like it's a me problem. I was able to build and run massive systems at scale and never used kubernetes. Then, all of a sudden, around 2020, any time I wanted to build or run or do anything at scale, everyone said I should just use kubernetes. And then when I wanted to do anything with docker in production, not even at scale, everyone said I should just use kubernetes.
Then there was a brief period around 2021 where everyone - even kubernetes fans - realised it was being used everywhere, even when it didn't need to be. "You don't need k8s" became a meme.
And now, here we are, again, lots of people saying "just use k8s for everything".
I've learned it enough to know how to use it and what I can do with it. I still prefer to use literally anything else apart from k8s when building, and the only time I've ever felt k8s has been really needed to solve a problem is when the business has said "we're using k8s, deal with it".
It's like the JavaScript or WordPress of the infrastructure engineering world - it became the lazy answer, IMO. Or, to take the me-problem angle: I'm just an aged engineer moaning at having to learn new solutions to old problems.
How many flawless, painless major version upgrades have you had with literally any flavor of k8s? Because in my experience, that’s always a science experiment that results in such pain people end up just sticking at their original deployed version while praying they don’t hit any critical bugs or security vulnerabilities.
I’ve run Kubernetes since 2018 and I can count on one hand the times there were major issues with an upgrade. Have sensible change management and read the release notes for breaking changes. The amount of breaking changes has also gone way down in recent years.
Same. I think maybe twice in that time frame we've had a breaking change, and those did warn us for several versions. Typically the only "fix" we need to apply is changing the API version on objects that have matured beyond beta.
I applaud you for having a specific complaint. 'You might not need it', 'it's complex', and 'for some reason it bothers me' are the vibes-based whinges that are so abundant - nothing specific, nothing contestable.
My home lab has grown over the years, now consisting of a physical Proxmox cluster, and a handful of servers (RaspPi and micro hosts).
A couple years back I got tired of failures related to host-level Docker issues, so I got a NAS and started using NAS storage for everything I could.
I also re-investigated containerization - weighing Docker Swarm vs K3s - and settled on Docker Swarm.
I’ve hated it ever since. Swarm is a PITA to use and has all kinds of failure modes that are different than regular old Docker Compose.
I’ve considered migrating again - either to Kubernetes, or just back to plain Docker - but haven’t done it. Maybe I should look at Uncloud?
100%. I’m really not sure why K8S has become the complexity boogeyman. I’ve seen CDK apps or docker compose files that are way more difficult to understand than the equivalent K8S manifests.
Docker Compose is simple: You have a Compose file that just needs Docker (or Podman).
With k8s you write a bunch of manifests that are 70% repetitive boilerplate. But actually, there is something you need that cannot be achieved with pure manifests, so you reach for Kustomize. But Kustomize actually doesn't do what you want, so you need to convert the entire thing to Helm.
You also still need to spin up your k8s cluster, which itself consists of half a dozen pods just so you have something where you can run your service. Oh, you wanted your service to be accessible from outside the cluster? Well, you need to install an ingress controller in your cluster. Oh BTW, the nginx ingress controller is now deprecated, so you have to choose from a handful of alternatives, all of which have certain advantages and disadvantages, and none of which are ideal for all situations. Have fun choosing.
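To put rough numbers on the boilerplate claim, here is a sketch with illustrative names and images: a one-service Compose file next to the plain-manifest equivalent (Deployment plus a default ClusterIP Service, before any ingress, Kustomize, or Helm enters the picture).

```yaml
# docker-compose.yml: one service, one published port
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
```

```yaml
# roughly the same thing as plain Kubernetes manifests
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

Six lines versus roughly thirty, and much of the extra is the same `app: web` label plumbing repeated three times.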
Literally got it in one, here. I’m not knocking Kubernetes, mind, and I don’t think anyone here is, not even the project author. Rather, we’re saying that the excess of K8s can sometimes get in the way of simpler deployments. Even streamlined Kubernetes (microk8s, k3s, etc) still ultimately bring all of Kubernetes to the table, and that invites complexity when the goal is simplicity.
That’s not bad, but I want to spend more time trying new things or enjoying the results of my efforts than maintaining the underlying substrates. For that purpose, K8s is consistently too complicated for my own ends - and Uncloud looks to do exactly what I want.
> Docker Compose is simple: You have a Compose file that just needs Docker (or Podman).
And if you want to use more than one machine then you run `docker swarm init`, and you can keep using the Compose file you already have, almost unchanged.
It's not a K8s replacement, but I'm guessing for some people it would be enough and less effort than a full migration to Kubernetes (e.g. hobby projects).
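A sketch of what that looks like, with an illustrative service: the optional `deploy:` block is the main Swarm-specific addition, and the two commands in the trailing comment are the whole migration.

```yaml
# docker-compose.yml, reused as a Swarm stack file
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2

# bring up a single-node swarm and deploy the stack:
#   docker swarm init
#   docker stack deploy -c docker-compose.yml web
```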
This is some serious rose colored glasses happening here.
If you have a service with a simple compose file, you can have a simple k8s manifest to do the same thing. Plenty of tools convert right between the two (incl kompose, which k8s literally hands you: https://kubernetes.io/docs/tasks/configure-pod-container/tra...)
Frankly, you're messing up by including kustomize or helm at all in 80% of cases. Just write the (admittedly tedious boilerplate - the manifest format is not my cup of tea) yaml and be done with the problem.
And no - you don't need an ingress. Just spin up a nodeport service, and you have the literal identical experience to exposing ports with compose - it's just a port on the machines running the cluster (any of them - magic!).
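For anyone who hasn't seen one, a NodePort service is about as small as Kubernetes objects get (the name, label, and port below are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # must sit inside the cluster's NodePort range (30000-32767 by default)
```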
You don't need to touch an ingress until you actually want external traffic using a specific hostname (and optionally tls), which is... the same as compose. And frankly - at that point you probably SHOULD be thinking about the actual tooling you're using to expose that, in the same way you would if you ran it manually in compose. And sure - arguably you could move to gateways now, but in no way is the ingress api deprecated. They very clearly state...
> "The Ingress API is generally available, and is subject to the stability guarantees for generally available APIs. The Kubernetes project has no plans to remove Ingress from Kubernetes."
Plenty of valid complaints for K8s (yaml config boilerplate being a solid pick) but most of the rest of your comment is basically just FUD. The complexity scale for K8s CAN get a lot higher than docker. Some organizations convince themselves it should and make it very complex (debatably for sane reasons). For personal needs... Just run k3s (or minikube, or microk8s, or k3d, or etc...) and write some yaml. It's at exactly the same complexity as docker compose, with a slightly more verbose syntax.
Honestly, it's not even as complex as configuring VMs in vsphere or citrix.
> And no - you don't need an ingress. Just spin up a nodeport service, and you have the literal identical experience to exposing ports with compose - it's just a port on the machines running the cluster (any of them - magic!).
You might need to redefine the default NodePort range (30000-32767). Actually, if you want to avoid the ingress abstraction and maybe want to run a regular web server container of your choice to act as one (maybe you just prefer a config file, maybe that's what your legacy software is built around, maybe you need/prefer Apache2, go figure), you'd probably want to be able to run it on 80 and 443. Or 3000 or 8080 for some other software, out of convenience and simplicity.
Depending on what kind of K8s distro you use, thankfully not insanely hard to change though: https://docs.k3s.io/cli/server#networking But again, that's kind of going against the grain.
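If the linked docs are what I remember, widening that range is a single flag on the k3s server, the same `--service-node-port-range` flag kube-apiserver takes on a stock cluster; the range below is just an example:

```sh
# widen the allowed NodePort range so low ports like 80/443 can be used as nodePorts
k3s server --service-node-port-range=80-32767
```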
If you just want to do development, honestly it's probably better to just use kubectl port-forward (ex - map 3000, or 8080, on your machine to any service/pod you'd like).
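For example (the service name is illustrative):

```sh
# map local port 8080 to port 80 of a Service in the cluster
kubectl port-forward svc/web 8080:80
```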
As for grabbing 443 or 80, most distros support specifying the port in the service spec directly, and I don't think it needs to be in the range of the reserved nodeports (I've done this on k3s, worked fine last I checked, which is admittedly a few years ago now).
As you grow to more than a small number of exposed services, I think an ingress generally does make sense, just because you want to be able to give things persistent names. But you can run a LONG way on just nodeports.
And even after going with an ingress - the tooling here is pretty straightforward. MetalLB (load balancer) and nginx (ingress, reverse proxy) don't take a ton of time or configuration.
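To give a sense of how much configuration that actually is, a sketch with illustrative names, hostnames, and addresses: one Ingress object per hostname (assuming ingress-nginx is installed with its class named "nginx"), plus, if I'm remembering MetalLB's current CRD-based config right, an address pool and an L2 advertisement.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: web.lab.example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
---
# MetalLB in L2 mode: a pool of addresses to hand out, and an advertisement for it
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec: {}
```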
As someone who was around when something like a LAMP stack wasn't "legacy", I think it's genuinely less complicated to set up than those old configurations. Especially because once you get it right in the yaml once, recreating it is very, very easy.
It's not the manifests so much as the mountain of infra underlying it. k8s is an amazing abstraction over dynamic infra resources, but if your infra is fairly static then you're introducing a lot of infra complexity for not a ton of gain.
The network is complicated by the overlay network, so "normal" troubleshooting tools aren't super helpful. Storage is complicated by k8s wanting to fling pods around so you need networked storage (or to pin the pods, which removes almost all of k8s' value). Databases are annoying on k8s without networked storage, so you usually run them outside the cluster and now you have to manage bare metal and k8s resources.
The manifests are largely fine, outside of some of the more abnormal resources like setting up the nginx ingress with certs.
But that's not what anyone is arguing here, nor what (to me it seems at least) uncloud is about. It's about a simpler HA multinode setup with a single-digit or low-double-digit number of containers.
If you already know k8s, this is probably true. If you don't it's hard to know what bits you need, and need to learn about, to get something simple set up.
I don't understand the point? You can say that about anything, and that's the whole reason why it's good that alternatives exist.
The clear target of this project is a k8s-like experience for people who are already familiar with Docker and docker compose but don't want to spend the energy to learn a whole new thing for low stakes deployments.
k3s makes it easy to deploy, not to debug any problems with it. It's still essentially adding a few hundred thousand lines of code to your infrastructure, and if it's a small app you need to deploy, it also wastes a bit of RAM.
K3s is just a repackaged, simplified k8s distro. You get the same behavior and the same tools as you have any time you operate an on-premises k8s cluster, and these, in my experience, are somewhere between good and excellent. So I can't imagine what you have in mind here.
"It's still essentially adding few hundred thousand lines of code into your infrastructure"
Sure. And they're all there for a reason: it's what one needs to orchestrate containers via an API, as revealed by a vast horde of users and years of refinement.
> K3s is just a repackaged, simplified k8s distro. You get the same behavior and the same tools as you have any time you operate an on-premises k8s cluster, and these, in my experience, are somewhere between good and excellent. So I can't imagine what you have in mind here.
...the fact that it's still k8s, which is a mountain of complexity compared to nearly anything else out there?
Except it isn't just "a way to run a bunch of containers across a few machines".
It seems that way, but in reality "resource" is a generic concept in k8s. K8s is a management/collaboration platform for "resources" and everything is a resource. You can define your own resource types too. And who knows, maybe in the future these won't be containers or even Linux processes? Well, it would still work given this model.
But now, what if you really just want to run a bunch of containers across a few machines?
My point is, it's overcomplicated and abstracts too heavily. Too smart, even... I don't want my co-workers to define our own resource types; we're not a Google-scale company.
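For concreteness, "define your own resource types" means registering a CustomResourceDefinition, roughly like the sketch below; the Widget type and its schema are invented purely for illustration.

```yaml
# a minimal CustomResourceDefinition; the "Widget" type is made up for this example
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: string
```

Once applied, kubectl, RBAC, and watches treat Widgets like any built-in resource, which is exactly the generality in question.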
Merely an anecdote: I had one female house cat that clearly understood a number of words. She could easily and consistently pick out "catnip" in a sentence. "Cow", "get up", "tuna" and several other words and phrases were all understood.
This is unique in my personal experience. I haven't seen this in other cats.
That's what you did before AWS had the "NAT Gateway" managed service. It's literally called "NAT Instance" in current AWS documentation, and you can implement it in any way you wish. Of course, you don't have to limit yourself to iptables/nftables etc. OPNsense is a great way to do a NAT instance.
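For anyone curious what "implement it in any way you wish" amounts to on a plain Linux instance, a minimal sketch (the interface name and instance ID are placeholders, and this ignores persisting the rules across reboots):

```sh
# let the instance forward packets
sysctl -w net.ipv4.ip_forward=1

# nftables source NAT (masquerade) out of the public-facing interface
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting oifname "ens5" masquerade

# AWS also has to be told the instance may forward traffic it doesn't own
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
```

Plus a route in the private subnets' route table pointing 0.0.0.0/0 at the instance (or its ENI).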
I believe the NAT instances also use super old and end-of-life Amazon Linux. I prefer Debian Trixie with Packer and EC2 instances and no EIP. Most secure, performant, and cost effective setup possible.
> NAT AMI is built on the last version of the Amazon Linux AMI, 2018.03, which reached the end of standard support on December 31, 2020 and end of maintenance support on December 31, 2023.
If the differential in pre-war shell stockpiles and ongoing shell production mattered as much as some people appear to think, Putin would be in Lviv by now, making deals with obsequious European leaders.
That hasn't happened. So what can be said? Perhaps that tube ammo isn't all that relevant a metric for military power any longer.
This Rolls Royce design isn't all that "small." The RR SMR design is a 470 MWe PWR, about half the size of a typical PWR. Fukushima Daiichi Unit 1 was 460 MWe. Calling this an "SMR" is a stretch, likely for PR purposes.
It's a rather conventional design, low-enriched fuel, no exotic coolants. There is a paper on it at NRC[1]. And they've never built one, so if they get it running by the 2030s they'll be doing pretty well for a Western company.
I downplayed there, because occasionally I try to moderate my extreme cynicism.
It would be miraculous, in the biblical sense of the word. Not only because it would be a technical and regulatory triumph for RR and the UK, but because it would mean this is something other than what it appears to be to me.
None of this will get built. It's all fake, and after the benefits are taken, and the subsidy budgets are drained, and the various political and academic and regulatory folks have populated the requisite non-profit no-show jobs, and the professional opposition leaders have collected all the anti-nook bucks, and RR et al. have wiggled out of whatever obligations they're pretending to pursue via the holes they've already carefully arranged for themselves, these papers and headlines will be forgotten.
Closer to a third for recent models (the French P4 reactors from the 80s were 1300 MWe, the later N4 1450-1500 MWe, the EPR is 1650 MWe). 500-ish MWe is a relatively typical rating for reactors from the mid to late 60s.
> And you think the same problem wouldn't exist with 6ghz?
Wi-Fi 6E and later standards that unlock 6 GHz are designed to mitigate contention through several dynamic power management and multiplexing capabilities: TWT, MLO, OFDMA, improved TPC, etc. While these things aren't somehow inherent to 6 GHz, the 6 GHz band isn't crowded with legacy devices mindlessly blasting the spectrum at max power, so it is plausible that 6 GHz Wi-Fi will perform better in dense urban environments. The higher frequency also contributes because attenuation is substantially greater, although in really dense, thin-walled warrens that attenuation won't solve every problem.
I know if I had noisy Wi-Fi neighbors interfering with me, the few important Wi-Fi-only devices I have would all be on at least 6E 6 GHz by now, not only because 6 GHz has fewer users, but also in the hope that ultimately, when the users do appear, their devices will be better neighbors by design. I don't actually have that problem, however. The nearest 5 GHz AP I can actually see (that isn't mine) in Kismet (using rather high gain antennas) is -96 dBm, and my actual APs hardly ever see those at all. I've yet to actually detect a 6 GHz device that isn't mine. I know there are a few, because the manufacturers and model numbers of many APs are visible, but between the inherent attenuation and the power level controls, I don't see them.