The article explains the general problem using software development as an example. But given that AI models are heavily promoted by billion-dollar US companies, and that important actors in that space are not at all friendly to the European Union, I think its relevance is far broader.
Generally, the article explains that judging the usefulness of AI models, specifically LLMs, by trying them out is very prone to the same psychological traps as astrology, tarot cards, or psychics: the so-called Barnum effect. This is precisely because these models are carefully engineered to produce plausible-sounding answers! And even very intelligent but unaware people can easily fall prey to it.
This is a mischaracterization of how AI is used for coding and how it leads to job loss. The use case is not “have the AI develop apps entirely on its own”; it’s “allow one programmer to do the work of 3 programmers by using AI to write or review portions of code” and “allow people with technical knowledge who are not skilled programmers to write code that’s good enough without the need for dedicated programmers.” Some companies are trying to do the first one, but almost everyone is doing the second one, and it actually works. That’s how AI leads to job loss: a team of 3 programmers can do what used to take a team of 10 or so.