I was mostly joking, but worst cases include sorted and reverse-sorted lists. I've seen those happen accidentally in practice, resulting in accidentally quadratic runtime. This is why, when I taught that class, I made students compare sorting algorithms on real-world data sets.
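To make "accidentally quadratic" concrete, here's a toy sketch of the failure mode (my own illustration, nothing to do with the code being discussed): a quicksort that always uses the last element as its pivot. On an already-sorted array every partition puts all the remaining elements on one side, so the comparison count grows as roughly n^2/2; reverse-sorted input degenerates the same way.

    /* Toy sketch: quicksort with a fixed last-element pivot, instrumented
     * to count comparisons.  Sorted (and reverse-sorted) input makes every
     * partition maximally lopsided. */
    #include <stdio.h>
    #include <stddef.h>

    static unsigned long ncmp;

    /* Lomuto partition of a[lo..hi) around the pivot a[hi-1]. */
    static size_t
    partition(int *a, size_t lo, size_t hi)
    {
        int pivot = a[hi - 1];
        size_t i = lo;
        for (size_t j = lo; j + 1 < hi; j++) {
            ncmp++;
            if (a[j] <= pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi - 1]; a[hi - 1] = t;
        return i;
    }

    static void
    naive_quicksort(int *a, size_t lo, size_t hi)
    {
        if (hi - lo < 2)
            return;
        size_t p = partition(a, lo, hi);
        naive_quicksort(a, lo, p);
        naive_quicksort(a, p + 1, hi);
    }

    int
    main(void)
    {
        enum { N = 1000 };
        static int a[N];
        for (int i = 0; i < N; i++)
            a[i] = i;                       /* already sorted: worst case */
        naive_quicksort(a, 0, N);
        printf("n = %d, comparisons = %lu\n", N, ncmp);   /* ~ N*N/2 */
        return 0;
    }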
Sure. In this case though the list doesn't change from boot to boot, or for that matter from system to system; if we hit a worst case we would notice before the release shipped.
Also, the worst case would be "systems take 2 ms extra time to boot" so it's not exactly a disaster.
> though the list doesn't change from boot to boot, or for that matter from system to system; if we hit a worst case we would notice before the release shipped.
Wait, what? Why don't you just sort it at build time then? I'm probably misunderstanding what you're saying, or maybe it's just a premature optimization.
Edit: Never mind, I read further in the thread and saw you answered it [0]
We might end up doing that. A quicksort takes it from 2 ms down to 40 µs though, and doing the sorting at build time would take a lot of linker magic.
(The SYSINITs are records in separate object files which are marked as going into a particular ELF section, so we don't actually have a complete list of them until we finish linking the kernel.)
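For anyone who hasn't run into the pattern, here's roughly what that looks like as a stripped-down userland sketch (my own illustration with made-up names, not the actual SYSINIT macros): each compilation unit drops a record into a named ELF section, the linker concatenates them, and GNU ld's automatic __start_/__stop_ symbols for C-identifier section names let you walk the result. The catch is that the records come out in whatever order the linker emitted them, which is why they still need sorting.

    /* Sketch of the general "linker set" pattern (made-up names, not the
     * real SYSINIT macros): records accumulate in a named ELF section and
     * are only visible as a complete array after the final link. */
    #include <stdio.h>

    struct init_record {
        int order;                  /* order key to sort by */
        void (*func)(void);        /* initialization routine */
    };

    #define DECLARE_INIT(name, ord, fn)                                 \
        static const struct init_record name                            \
            __attribute__((used, section("my_init_set"))) = { ord, fn }

    /* Provided automatically by GNU ld for section "my_init_set". */
    extern const struct init_record __start_my_init_set[];
    extern const struct init_record __stop_my_init_set[];

    static void hello(void) { puts("hello"); }
    static void world(void) { puts("world"); }

    DECLARE_INIT(init_world, 2, world);
    DECLARE_INIT(init_hello, 1, hello);

    int
    main(void)
    {
        /* Walk the records in link order; a real kernel would sort them
         * by ->order first, since link order is essentially arbitrary. */
        for (const struct init_record *r = __start_my_init_set;
            r < __stop_my_init_set; r++)
            r->func();
        return 0;
    }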
Isn't the point of quicksort that randomizing the pivot selection makes you extremely unlikely to run into a pathological case on any given input?
And perhaps even more unlikely if you permute the input sequence before sorting it.
I'm basing this on my understanding of Skiena's discussion of quicksort in The Algorithm Design Manual; a rough sketch of what I mean is below.
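(A generic textbook sketch, not the kernel code: pick the pivot uniformly at random each time, so no fixed input ordering is bad for you; a quadratic run requires getting unlucky on the random draws themselves.)

    /* Textbook-style randomized quicksort: expected O(n log n) comparisons
     * on every input, regardless of its initial order. */
    #include <stdio.h>
    #include <stdlib.h>

    static void
    swap(int *a, int *b)
    {
        int t = *a; *a = *b; *b = t;
    }

    static void
    randomized_quicksort(int *a, size_t lo, size_t hi)    /* sorts a[lo..hi) */
    {
        if (hi - lo < 2)
            return;

        /* Pick a pivot uniformly at random and move it to the end.
         * (Modulo bias ignored; fine for a sketch with modest n.) */
        size_t p = lo + (size_t)(rand() % (int)(hi - lo));
        swap(&a[p], &a[hi - 1]);

        /* Lomuto partition around a[hi-1]. */
        int pivot = a[hi - 1];
        size_t i = lo;
        for (size_t j = lo; j + 1 < hi; j++)
            if (a[j] <= pivot)
                swap(&a[i++], &a[j]);
        swap(&a[i], &a[hi - 1]);

        randomized_quicksort(a, lo, i);
        randomized_quicksort(a, i + 1, hi);
    }

    int
    main(void)
    {
        int a[] = { 5, 3, 8, 1, 9, 2, 7, 0, 6, 4 };
        size_t n = sizeof(a) / sizeof(a[0]);
        srand(12345);                /* any seed; fixed here for repeatability */
        randomized_quicksort(a, 0, n);
        for (size_t i = 0; i < n; i++)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }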
Not to mention that quicksort isn't a single algorithm but rather a class of algorithms, since the choice of pivot is left unspecified. There are pivot-selection methods that give an O(n log n) worst case (e.g. median of medians, or soft heaps) and methods that give an O(n log n) expected running time (e.g. choosing the pivot at random).
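For concreteness, here's a plain textbook sketch of the median-of-medians idea (my own illustration, not from this thread or the kernel): a deterministic, linear-time selection routine. Pivoting quicksort on select_kth(a, n, n/2), i.e. the true median, splits every level roughly in half and gives an O(n log n) worst case, though the constant factor is large enough that it's rarely used in practice.

    /* Textbook sketch of deterministic (BFPRT / median-of-medians)
     * selection: select_kth() returns the k-th smallest element (0-based)
     * in O(n) worst case.  Error checks and efficiency niceties omitted. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int
    cmp_int(const void *pa, const void *pb)
    {
        int a = *(const int *)pa, b = *(const int *)pb;
        return (a > b) - (a < b);
    }

    static int select_kth(int *a, size_t n, size_t k);

    /* Exact median of the group-of-5 medians, found by recursive selection. */
    static int
    pick_pivot(int *a, size_t n)
    {
        size_t ngroups = (n + 4) / 5;
        int *medians = malloc(ngroups * sizeof(int));
        for (size_t g = 0; g < ngroups; g++) {
            size_t len = (g * 5 + 5 <= n) ? 5 : n - g * 5;
            qsort(a + g * 5, len, sizeof(int), cmp_int);
            medians[g] = a[g * 5 + len / 2];
        }
        int pivot = select_kth(medians, ngroups, ngroups / 2);
        free(medians);
        return pivot;
    }

    /* k-th smallest of a[0..n); rearranges a as a side effect. */
    static int
    select_kth(int *a, size_t n, size_t k)
    {
        if (n <= 5) {
            qsort(a, n, sizeof(int), cmp_int);
            return a[k];
        }
        int pivot = pick_pivot(a, n);

        /* Three-way partition into < pivot, == pivot, > pivot. */
        int *tmp = malloc(n * sizeof(int));
        size_t lt = 0, eq;
        for (size_t i = 0; i < n; i++)
            if (a[i] < pivot)
                tmp[lt++] = a[i];
        eq = lt;
        for (size_t i = 0; i < n; i++)
            if (a[i] == pivot)
                tmp[eq++] = a[i];
        for (size_t i = 0, j = eq; i < n; i++)
            if (a[i] > pivot)
                tmp[j++] = a[i];
        memcpy(a, tmp, n * sizeof(int));
        free(tmp);

        if (k < lt)
            return select_kth(a, lt, k);           /* side is <= ~7n/10 */
        if (k < eq)
            return pivot;
        return select_kth(a + eq, n - eq, k - eq); /* side is <= ~7n/10 */
    }

    int
    main(void)
    {
        int a[] = { 9, 3, 7, 1, 8, 2, 6, 0, 5, 4 };
        size_t n = sizeof(a) / sizeof(a[0]);
        printf("median = %d\n", select_kth(a, n, n / 2));   /* prints 5 */
        return 0;
    }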
Most of them? If the instructor doesn't call out the difference and point out that all terms of an O() expression have invisible but important constants tacked on, it's easy to accidentally conflate scaling complexity with actual total speed.
They're not intentionally avoiding it; they just get caught up in the mathematical angle and only mention its (dis)connection to real execution time in a passing comment, if at all. Or they might even mention that there are constants in play... and then say that of course, since they're constants, we can ignore them for the purposes of O() notation. It's the difference between learning from PhDs and engineers, I suppose.
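To put made-up numbers on it: an algorithm costing 2n^2 steps beats one costing 100 * n * log2(n) steps all the way up to n in the low hundreds (at n = 350 that's about 245,000 versus 296,000), even though the second one wins asymptotically. For something the size of a boot-time init table, the hidden constants matter at least as much as the exponent.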
... downthread ...
> I suspect quicksort will be enough though.
Somebody hasn't taken a freshman complexity theory class in a while...
(Note: this is tongue-in-cheek. Worst case is not average case and all that. I'm sure quicksort will do fine.)