There’s no denying it. Generative AI, whether for creating content and images or making our work lives easier with automation, is here to stay. As we experiment with it and test its limits, it’s important to remember that machine learning isn’t perfect. And not just in the odd phrases or uncanny images of people it produces. AI doesn’t always get the facts right.
A few weeks ago, a friend read this piece I wrote for The Content Strategist about using Jasper.ai. Curious, he took one of the prompts I used in the article—where I asked Jasper to write a paragraph using data from trusted sources—and plugged it into ChatGPT, OpenAI’s new conversational chat tool. Here’s what it produced:
According to a report by McKinsey, the use of AI for content generation is expected to grow significantly in the coming years. The report estimates that by 2030, AI-powered content generation could save businesses over $16 billion per year. This growth is supported by findings from Forrester, which predicts that by 2025, AI-generated content will account for over half of all content produced.
As more than 1 million ChatGPT users have seen in the past few weeks, the tool produces clear, well-structured, and authoritative text. But there’s a problem: None of the cited sources in this paragraph exist. McKinsey doesn’t have a report estimating $16 billion in savings from AI-powered content generation, and Forrester didn’t predict that half of all content will be AI-generated. This text is what one observer calls “coherent nonsense.”
More accurately, these falsehoods are known as hallucinations. In the world of AI, that poetic term describes invented facts or inaccuracies that appear in AI-generated content. And they are a big problem for content teams looking to leverage generative AI to produce more content, faster. Why? Because you can’t have completely fabricated facts in your content. It’s a terrible look for your brand, and it erodes trust with your audience.
If you’re planning on using AI-generated content, you need a strategy for spotting hallucinations and getting rid of them. Our advice? Embrace a tried-and-true member of the content team: the fact-checker.
What is a fact, anyway?
Humor me as I briefly digress to define “facts” in the context of marketing content. Merriam-Webster has five separate definitions for a “fact,” including “something that has actual existence” and “a piece of information presented as having objective reality.” These definitions are helpful for content marketers since we want the information we use to reflect known reality and be grounded in available evidence.
For example, when I write an article and reference data, that data needs to come from an actual study that took place, not a made-up one. If I quote an expert or known figure on a subject, I must use a real quote—something they actually said—not something I made up for them. Nor should I attribute a quote to someone other than the person who actually said it. Straightforward, right?
Things get fuzzier when we consider the choices writers make about which details and examples to include and which ones to leave out. We’re always making choices. How we interpret information also introduces unintentional inaccuracies. I say the glass is half full, and you say it’s half empty. We’re both correct on the facts, but the interpretation can lead a reader to believe something that isn’t true.
And then there are the fact problems that arise when brands make statements that are true only in a way that is hidden, obscure, or different from the common interpretation. The accepted practice of labeling a company a “leader” in its sector illustrates this perfectly. Many companies do it, but only some of them mean that the company earned the most revenue, sold the most units, or served the most customers in its sector.
The point is that ensuring your content is accurate and trustworthy requires strong practices and standards for using and checking facts and for framing fact-like information. This is as true for human-generated content as it is for AI-generated content. And the practices for doing so are the same.
And thus the need for the fact-checker.
What do fact-checkers do?
Fun… um… fact: the first fact-checkers showed up in newsrooms in the 1920s to boost the authority of publications and discourage journalists from peddling the misinformation common during the muckraking era. For the next six or seven decades, every production team included a fact-checker. They were almost always women, a side effect (perhaps) of the lack of opportunities for women journalists.
Then, in the 1990s, the task of checking facts started shifting to writers. Except for a handful of publications with iconic fact-checking departments—like The New Yorker—most publications have pared fact-checking way back. Book publishers hardly do it at all, a gap that became apparent in the aughts when a few successful non-fiction books were revealed to be full of fabrications.
And what about content marketing? Most of you have documented brand language and editorial standards you follow, which can help avoid inaccuracies in how you refer to your company or product. Those standards may also include guidance related to facts, such as how to use quotes or third-party sources. And those standards are most likely executed by your creatives. You probably don’t include fact-checking as a formal step in the content creation process; instead, it’s treated as an implied part of the writer’s job.
As we enter the era of content-generation AI, however, both the standards for fact-checking and the process need a refresh. Fact-checking should be a dedicated step in the content process, executed with strong standards and guidelines for how to do it.
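One way to make that dedicated step concrete is to flag, before a draft goes to review, every sentence that contains a fact-like signal—numbers, attribution phrases, references to research, direct quotes—so the fact-checker knows exactly what to verify. Here’s a minimal sketch in Python; the signal list and function names are my own illustration, not an established tool:

```python
import re

# Hypothetical sketch: surface "fact-like" sentences in a draft (numbers,
# attribution phrases, research references, direct quotes) so a human
# fact-checker can verify each one against a primary source.
FACT_SIGNALS = re.compile(
    r"\d[\d,.]*%?"                    # numbers and percentages
    r"|according to"                  # attribution phrases
    r"|\b(?:study|report|survey)\b"   # research references
    r"|“[^”]+”",                      # directly quoted material
    re.IGNORECASE,
)

def flag_claims(draft: str) -> list[str]:
    """Return the sentences that likely contain checkable claims."""
    sentences = re.split(r"(?<=[.!?])\s+", draft)
    return [s for s in sentences if FACT_SIGNALS.search(s)]

draft = (
    "AI tools are popular. According to a report by McKinsey, "
    "AI could save businesses over $16 billion per year. "
    "Our team loves the new workflow."
)
for claim in flag_claims(draft):
    print("CHECK:", claim)
```

A filter like this doesn’t decide what’s true—it only narrows the fact-checker’s attention to the sentences that make verifiable assertions, which matters more as AI-generated drafts pile up.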
Because AI-generated hallucinations aren’t always as easy to identify as the ones in the ChatGPT paragraph I shared at the beginning of this article. Coherent nonsense sounds very convincing as it peddles lies that can damage your brand. Catching those lies will become more necessary—and more complicated—as the volume of hallucination-filled AI-generated content grows and potentially adulterates the very sources you rely on to validate facts.
Yet it is possible to fact-check efficiently with the team you already have (and in the future, perhaps we’ll have an AI we can train to do it). The role of the fact-checker, while relegated to the margins for a couple of decades, may become one of the most important practices in content creation over the coming years. All hail the fact-checker!
Stay tuned to discover the best practices you’ll need for fact-checking your new AI teammate.
Stay informed! Subscribe to The Content Strategist for more insight on the latest news in digital transformation, content marketing strategy, and rising tech trends.