In my first computer science class in college, after finishing the unit on Red-Black Trees, the professor said something like "Now you should never implement a Red-Black Tree again. If you need a balanced binary tree, use std::map or whatever your standard library provides. Same goes for all the other sorting algorithms and data structures we've done in this class."
Relatedly:
Q: When is Python faster than C?
A: When the C programmer used a linked list or some O(n^2) algorithm because they didn't have a standard library full of easy-to-use, well-implemented data structures and algorithms.
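To make the joke concrete, here's a small sketch (my own illustration, not from the thread) of the asymptotic gap it's pointing at: both functions below count duplicated values, but the first mimics a hand-rolled O(n^2) scan, the kind you get when walking a linked list per element, while the second leans on the standard library's hash table.

```python
from collections import Counter

def count_dupes_quadratic(items):
    """O(n^2): for each element, rescan everything before it."""
    count = 0
    for i, x in enumerate(items):
        if x in items[:i]:  # linear scan, like walking a linked list
            count += 1
    return count

def count_dupes_linear(items):
    """O(n): one pass with a standard-library hash table."""
    return sum(c - 1 for c in Counter(items).values())

data = list(range(1000)) + [1, 2, 3]
assert count_dupes_quadratic(data) == count_dupes_linear(data) == 3
```

On a thousand elements the difference is already measurable; on a few million, the "slow" interpreted version with the right data structure wins comfortably.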
That's actually something I've been saying for quite a while when people bring up microbenchmarks complaining about how a language like Haskell is slow.
Like, yes, no question, if you try to implement a tight loop in Haskell vs C, C is going to come out a lot faster.
However, no one writes Haskell like that! People rely pretty heavily on standard libraries (or even third-party libraries), and on the optimizations afforded to them by the "rich" language of Haskell.
C might be capable of being faster, but that doesn't imply that a C program will inherently be faster than a similar Haskell program, especially if the programs are relatively large.
Even though I mostly agree that you should almost never implement your own rb tree, this advice is also kind of disempowering. I worry it's part of a mindset that accepts the status quo, when so often we could have better things if we just got our hands a little dirty.
> A: When the C programmer used a linked list or some O(n^2) algorithm because they didn't have a standard library full of easy-to-use, well-implemented data structures and algorithms.
I don't think so, regarding the linked list. In almost every case you'll end up with at least as much pointer indirection in Python. But yeah, if you have enough data that complexity starts mattering, a worse algorithm will cause more problems.
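The indirection point can be seen from CPython itself: a sketch (my own, under the assumption of 64-bit CPython) showing that a Python list is an array of pointers to heap-allocated boxes, so even "flat" Python data pays at least a linked-list level of indirection per element.

```python
import sys

# Every Python int is a separate heap object ("boxed"), not a raw
# machine word. A list stores pointers to those boxes, so reading
# xs[i] always chases at least one pointer.
xs = [1, 2, 3]

# A boxed small int is far bigger than a C long's 8 bytes
# (typically ~28 bytes on 64-bit CPython).
assert sys.getsizeof(xs[0]) > 8

# Bigger integers get bigger boxes; the object grows with the value.
assert sys.getsizeof(1 << 70) > sys.getsizeof(1)
```

This is why a naive C linked list isn't obviously worse than a Python list on indirection alone; the wins in the joke come from the algorithm and data-structure choice, not from Python's memory layout.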