Discussion about this post

John Paluska:

At my publication (The Washington Gazette), we found a way around LLM hallucinations: we use Factiverse to fact-check the content. It searches for credible sources, such as legacy news outlets and academic websites from across the ideological spectrum, then categorizes which agree and which oppose. When there is no clear consensus, we use Findsight to scour books, academic publications, and other vetted texts. We plan to add I Doubt News for bias analysis once we finish working with the developer. We use the same programs to fact-check human-written content too.

Of course, we also go out of our way to link to the primary source whenever possible, but not everything has one (a breaking story from the AP, for example, or an exclusive article on undisclosed documents). In those cases we still fact-check with the programs above and find the best sources available.

In short, Factiverse.ai, Findsight.ai, and Idoubt.News let you fact-check LLM content quickly and easily.
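
For the curious, here is a minimal sketch of the consensus-then-fallback logic described above. The function names and the consensus threshold are hypothetical stand-ins, not the real Factiverse or Findsight APIs; you would swap in whatever client each service actually provides.

```python
# Sketch of a consensus-then-fallback fact-checking pipeline.
# factiverse_search and findsight_search are HYPOTHETICAL stand-ins,
# not the real APIs of Factiverse.ai or Findsight.ai.

CONSENSUS_MARGIN = 2  # assumed threshold for calling a consensus "clear"

def factiverse_search(claim: str) -> list[dict]:
    """Stand-in: return sources as {'url': ..., 'stance': 'agree' | 'oppose'}."""
    raise NotImplementedError("replace with the real Factiverse client")

def findsight_search(claim: str) -> list[dict]:
    """Stand-in: return matches from books and vetted academic texts."""
    raise NotImplementedError("replace with the real Findsight client")

def verdict(claim: str) -> str:
    sources = factiverse_search(claim)
    agree = sum(1 for s in sources if s["stance"] == "agree")
    oppose = sum(1 for s in sources if s["stance"] == "oppose")
    # Clear consensus among credible sources: report it directly.
    if abs(agree - oppose) >= CONSENSUS_MARGIN:
        return "supported" if agree > oppose else "disputed"
    # No clear consensus: fall back to books and academic publications.
    return "supported" if findsight_search(claim) else "unresolved"
```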

Dennis Murphy:

This is one of the clearest and most useful articles about LLM chatbots I have ever read. I wonder, though, about the advice to "Verify first and only then trust." Judging from Dr. Dembski's examples, the more appropriate maxim would be simply "Verify first." There is no trusting, because the very next query might return bogus results.

I fear for the messed-up world that is coming due to the proliferation of unverified AI-generated stories.

