Can’t be bothered to read the whole news article? Get a quick AI summary in the post itself. Uses a specialised summariser (not just asking an LLM “summarise this”). Summaries are 60% identical to human summaries and >95% accurate in keeping original meaning.
News Summary has moved to [email protected]. The bot has been updated to use a better web scraping method and improved summarisation (a rough sketch of this kind of pipeline is below).
If you don't like this, please just block the community; there's no need to complain or downvote.
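For illustration, here is a minimal sketch of the kind of pipeline described above: fetch the article, strip the page boilerplate, and run a model fine-tuned for summarisation instead of prompting a general chat LLM. The post names no specific tools, so trafilatura and the facebook/bart-large-cnn model here are assumptions, not the bot's actual stack.

```python
# Hypothetical sketch only -- the bot's real scraper and model are not public.
import trafilatura                 # article text extraction (assumed choice)
from transformers import pipeline  # Hugging Face summarisation pipeline

def summarise_article(url: str) -> str | None:
    # Download the page and strip navigation, ads, and comment sections,
    # keeping just the article body.
    downloaded = trafilatura.fetch_url(url)
    if downloaded is None:
        return None
    text = trafilatura.extract(downloaded)
    if not text:
        return None

    # A seq2seq model fine-tuned on news summarisation (CNN/DailyMail data),
    # trained to compress its input rather than to continue it.
    summariser = pipeline("summarization", model="facebook/bart-large-cnn")

    # BART accepts roughly 1024 tokens, so long articles are truncated.
    result = summariser(text, max_length=150, min_length=40,
                        do_sample=False, truncation=True)
    return result[0]["summary_text"]

if __name__ == "__main__":
    print(summarise_article("https://example.com/some-news-article"))
```

The advantage a fine-tuned summariser has over a "summarise this" prompt is that it is trained specifically to compress the given input, which constrains, though does not eliminate, invented content.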
This is genuinely harmful. LLMs will hallucinate, which means that using them as a substitute for reading the news will result in the spread of misinformation. And in an era where we see just how dangerous misinformation can be, I beg you to please not do this.
“95% accurate” means 5% lies, which is 5% too many.
I'm using an LLM architecture that's better suited to summarisation, meaning it won't invent false facts the way general-purpose GPTs do. The worst errors it has made are a couple of cases of misattributing actions, which are easily spotted from the context of the whole summary.
The AI produces no more misinformation than a human journalist. It is not biased in its summaries, and it does not assert falsehoods out of malice. I have been accused of spreading misinformation over many articles, and yes, the bot does sometimes repeat misinformation, but only the misinformation stated in the original human-authored article. It is not my bot's job to pass judgement, but simply to make it easier for you to do so.
What architecture is that? If you have an LLM that doesn't hallucinate, surely papers would have been written about the breakthrough.
And that, dear reader, was when the work of foolishness became something much more sinister.
Humans, and trust in humans, are important. The internet divorced the news from the human face and the accumulation of trust behind it, which has allowed engineered alternative facts to enter the mainstream consciousness; that might be the single most harmful development in a modern age that has no shortage of them. I am not trying to tell you that your summarizer project is automatically responsible for that. But be cautious about what future you're constructing.
“My LLM will simply never make a mistake ever”
I don’t believe you.
You just made up a quote and attributed it to me. I would say that's worse than anything my AI has ever done.
I’m just summarizing.
But that’s what you are saying. You either admit that this is a horrible idea, or you are confident that your AI never makes a mistake. It’s black and white, really.
Wow, that's one hell of a false choice. I fully admit my AI makes mistakes, but I believe it makes no more of them than a human would.
The person who made up that quote made two mistakes (inventing a false quote and falsely attributing it). That's 2 mistakes in 4 messages, an error rate of 50%; my bot has a measured error rate of <5%.
It makes errors; it's just that the rate at which it makes them is far lower than a human's.
The tiny difference is that this person doesn't publish impactful articles the way a news outlet does. 😅
Neither does my bot. It just summarises existing human-created articles.