You might sometimes hear someone express a sentiment along the lines of
"it's not worth trying to optimize this. What does it matter if it
takes 10 seconds unoptimized while an optimized version could take
5 seconds? Who cares? I can wait for the extra seconds. It's not such
a big deal. A quick-and-dirty implementation is just fine."
But what about an unoptimized quick-and-dirty implementation
taking *one month* for a task that can be computed in *less
than a second*?
Recently Matt Parker wondered about groups of words with no
shared letters, and he wrote a Python program that found every
group of five five-letter words in a dictionary that have no
letters in common. His program took a month to run.
https://www.youtube.com/watch?v=c33AZBnRHks
When he ran that program, it apparently didn't even occur to
him that something was horribly wrong with that runtime. Any
experienced programmer has at least a gut instinct that such a
task should finish in well under a minute, probably in a few
seconds, perhaps even in less than a second.
It didn't take long for actual programmers to submit their own
versions of the program, with the record holder, written in
C++, running in less than a tenth of a second.
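To give a feel for the kind of algorithmic thinking involved,
here is a minimal sketch of one standard trick for this
problem: encode each word as a 26-bit mask, one bit per letter,
so that "no shared letters" becomes a single bitwise AND. This
is my own illustration, not Matt Parker's code or the record
holder's; it assumes a word list in "words.txt" (a hypothetical
file name, one lowercase word per line), and this bare-bones
recursion would still need more pruning to be truly fast.

```python
# A minimal sketch, not the record-holding solution. Assumes a
# word list in "words.txt" (hypothetical), one lowercase word
# per line.

def word_mask(word):
    """Encode a word as a 26-bit mask; None if a letter repeats."""
    mask = 0
    for ch in word:
        bit = 1 << (ord(ch) - ord('a'))
        if mask & bit:
            return None  # repeated letter: can't supply 5 distinct ones
        mask |= bit
    return mask

def find_groups(path="words.txt"):
    # Map mask -> one representative word; anagrams collapse
    # to a single entry, shrinking the search space.
    masks = {}
    with open(path) as f:
        for word in (line.strip() for line in f):
            if len(word) == 5:
                m = word_mask(word)
                if m is not None:
                    masks.setdefault(m, word)

    items = sorted(masks.items())
    groups = []

    def search(start, used, group):
        if len(group) == 5:
            groups.append(list(group))
            return
        for i in range(start, len(items)):
            m, w = items[i]
            if used & m == 0:  # disjoint letters: one AND, no set math
                group.append(w)
                search(i + 1, used | m, group)
                group.pop()

    search(0, 0, [])
    return groups
```

Deduplicating anagrams alone shrinks the search space a great
deal, and comparing 26-bit integers instead of sets of
characters is the kind of representation the fast versions
reportedly build on.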
Sometimes optimization *does* matter. Sometimes expertise in
programming and algorithms makes quite a difference. It can be
the difference between a program running for a month and one
running in less than a second.