The mess we’re currently in
Let’s get it out of the way – at present, AI-generated written content is largely disposable.
You’ve read the emails, the newsletters, the platitudes, the content that just isn’t quite convincing enough.
Just as the uncanny valley tells us when an android’s face isn’t quite realistic enough, the human brain recognizes an AI-written article.
The words don’t sit right.
It’s just off.
One place that AI is over-utilized is SEO content, particularly in online industries like food blogs, product reviews, or travel writing.
Digital nomads eager to maximize their affiliate-link revenue have taken to vomiting forth AI-spun content.
This is digital inbreeding at its worst. Everything becomes a more sterilized, more optimized version of itself, designed to only appeal to the great algorithm god.
Every idea is whittled down and refined until it loses all authenticity.
Personal opinions and unique takes aren’t ranking?
Sorry kid, enjoy your robotic travel and restaurant recommendations.
What about technical solutions to problems? AI should be able to solve math and programming issues, right? After all, it’s just numbers solving numbers.
Except LLMs like ChatGPT are a little too eager to help.
In a recent Twitter thread, a programmer highlighted this issue: ChatGPT had recommended a specific programming procedure as the solution to his problem.
The trouble was, it didn’t exist.
ChatGPT had hallucinated it, based on a GitHub post of someone asking about it, which was itself based on another LLM hallucinating it…
From one ‘innocent’ hallucinatory attempt by ChatGPT to make up an answer, an entire tree of errors had sprouted. “Repeat a lie often enough and it becomes the truth”, is a law of propaganda often attributed to the infamous Nazi Joseph Goebbels.
The idea of ‘fake it til you make it’ isn’t exactly something you want from an AI.
Sports Illustrated came under fire recently when it was caught using a fully AI-generated writer to produce some of its articles.
Okay, you may think, what’s wrong with that? Sports reporting can be largely objective if all you need to know is the final score and any notable incidents.
The problem was that this was objectively bad.
The AI made glaring errors in its text, including describing volleyball as ‘difficult to get into if you don’t have a ball’.
Hardly Skynet levels of awareness. Clearly, the editor had the day off that day.
Generating, not noticing
To quote celebrated music producer Rick Rubin, “AI can generate, but it cannot notice”.
There will still be a need for a human touch somewhere in the content-generation workflow:
content writers, editors, real human beings with ideas, thoughts, and the skill set to produce something with true human connection.
That is what is lacking from AI content as it stands.
Your customers aren’t stupid. They will notice. (Hopefully)
They deserve to be treated with a human touch.
Question for your business…
Are your business automations compromising your outcomes?