I can't really agree. I think it's a sad indictment of this field that we're so willing to throw away significant efficiencies - even if it's only a few milliseconds here or there.
Software today is shit. It's overly expensive, wastes tons of time and energy, and it's buggy as hell. The reason that it's cheaper to do things this way is because, unlike other engineering fields, software has no liability - we externalize 99.99% of our fuckups as a field.
Software's slow? The bar is so low users won't notice.
Software crashes a lot? Again, the bar is so low users will put up with almost anything.
Software's unsafe garbage and there's a breach? Well it's the users whose data was stolen, the company isn't liable for a thing.
That's all to say: if we're going to look at this situation in the abstract, which I think you're doing (my initial interpretation was in the concrete), then this field has set a depressingly low bar for itself when we throw away the kind of quality that we do.
But... this is precisely not a significant efficiency. At best, you can contrive an architecture where it is one. Those architectures are almost certainly aspirational, and they come with their own host of problems that are harder to reason about, since we'd be throwing out all of the gains that got us here.
I agree with you, largely, in the abstract. But I'm failing to see how these fall into that? By definition, small ms optimizations of system startup are... small ms optimizations of system startup. Laudable when and where you can do them. But this is like trying to save your way to a larger bank account. By and large, you do that by making more, not saving more.
A 7% improvement from a trivial change is an insane thing to question, honestly. It is absolutely significant. Whether it is valuable is a judgment, but I believe that software is of higher value (and that as a field we should strive to produce higher quality software) when it is designed to be efficient.
> At best, you can contrive an architecture
FaaS is hardly contrived, and people have been shaving milliseconds off of AWS Lambda cold starts since it was released.
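To give that some texture, here's a minimal sketch of where those milliseconds tend to live, assuming the AWS Lambda Python runtime (the handler name and the returned field are my own, hypothetical): everything at module scope runs once per cold start, so that's the code people tune.

```python
import time

# Module-scope work runs once per cold start. Heavy imports or client
# construction here are paid on every cold start; deferring or slimming
# them is the usual way those milliseconds get shaved.
_init_start = time.perf_counter()
_init_ms = (time.perf_counter() - _init_start) * 1000.0

def handler(event, context):
    # A warm invocation skips module initialization and runs only this body.
    return {"cold_start_init_ms": round(_init_ms, 3)}
```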
> But I'm failing to see how these fall into that?
> The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by pennywise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs.
> In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering.
You have two framings here that I realize are not mine.
First, I am not arguing against making the change. It was identified; make it. As you say, it is a trivial one to do. Do it.
Second, percentage improvements have to be reckoned against the total time. Otherwise, why is the code not written in much more tightly tuned assembly? After all, I am willing to wager that they still don't have it running within 7% of the absolute optimum speed it could achieve. Heck, this is a program that post-processes a list; they already have an approach that could make the 40us disappear entirely. Why stop there?
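Here's a back-of-the-envelope sketch of that framing, with made-up baselines (28 ms is just the denominator that makes a 2 ms saving land near the 7% quoted earlier):

```python
saved = 2e-3  # the 2 ms saving discussed in this thread

# The same fixed saving against a routine-local baseline vs. progressively
# larger whole-startup baselines. Only the denominator changes.
for total in (28e-3, 1.0, 10.0):
    print(f"2 ms is {saved / total:.3%} of a {total * 1e3:,.0f} ms baseline")
```

The absolute saving never changes; only what you divide it by does, which is exactly the wallet-versus-wealth point below.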
If I am to argue that anything here would have been a "waste" and shouldn't have been done, it would be the search for a 2ms improvement in startup. But, again, that isn't what they were doing. They were shaving off seconds from startup and happened to find this 2ms improvement. It made headlines because people love pointing out poor algorithmic choice. And it is fun to muse on.
This would be like framing your wealth as just the money you have in your wallet as you go to the store. On the walk, you happen to see a $5 bill on the ground. You only have $20 with you, so picking it up is a huge % increase in your wealth. Of course you should pick it up; I'd argue that, considered absolutely, there is really no reason to ever not pick it up. My "framing" is that going around looking for $5 bills to pick up from the ground is a waste of most people's time. (If you'd rather, you can use gold mining. There was a great story on the radio recently about people who still pan for gold. It isn't a completely barren endeavor, but it is pretty close.)