The article explains the general problem using software development as an example. But given that AI models are heavily promoted by billion-dollar US companies, and that important actors in that space are not at all friendly to the European Union, I think the relevance may be far broader.

Generally, the article explains that judging the usefulness of AI models, specifically LLMs, by trying them out is prone to the same psychological traps as astrology, tarot cards, or psychics - the so-called Barnum effect. This is precisely because these models are carefully engineered to produce plausible-sounding answers! And even very intelligent but unaware people can easily fall prey to it.

  • HaraldvonBlauzahn@feddit.org (OP) · 5 days ago

    What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

    Some people firmly believe LLMs are helpful. But tasks like programming are logical tasks, and LLMs absolutely can’t think - they only generate statistically plausible patterns.

    The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - traps that psychics have exploited for centuries - and that even very intelligent people can fall prey to them.

    Finally, what should cause alarm is this: on top of the fact that LLMs can’t think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help create working software any faster. Given the multi-billion-dollar investments, and that there has been more than enough time to carry out controlled experiments, this should set off loud alarm bells.