That's one of the things that drives me nuts about the public discourse on AI and our future. The vast majority of words written or spoken on the subject come from generic "thought leaders" who have no deeper understanding of AI than anyone else who uses it regularly.
This has been a characteristic of the field since the beginning. Reading *What Computers Can't Do* in college (early 2000s) was an important counterpoint for me.
> A great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.
> Dreyfus' last paper detailed the ongoing history of the "first step fallacy", where AI researchers tend to wildly extrapolate initial success as promising, perhaps even guaranteeing, wild future successes.
The article agrees with you, and it's pretty scathing about all the books except Narayanan's (which is also the only one with a balanced, anti-hype perspective):
> A puzzling characteristic of many AI prophets is their unfamiliarity with the technology itself
> After reading these books, I began to question whether “hype” is a sufficient term for describing an uncoordinated yet global campaign of obfuscation and manipulation advanced by many Silicon Valley leaders, researchers, and journalists