“Before we work on artificial intelligence, why don’t we do something about natural stupidity?” – Steve Polyak

As we all know all too well, artificial intelligence — or AI — is everywhere. People are using it for everything from planning vacations to creating very realistic deep fakes and everything in between.

For better or worse, AI is also widely used (maybe “misused” is a better word) in all types of writing, ranging from college writing assignments to newspaper and magazine articles, which is the aspect on which I’ll focus today.

The original meaning of “artificial” was a positive one: “Produced by humans, especially in imitation of something natural.” Gradually the word underwent a semantic shift to the point where something artificial is often considered a fake. So, is artificial intelligence good or bad? Let’s take a look.

Recently the Associated Press issued guidelines on the use of artificial intelligence, saying that it cannot be used to create publishable content and images for the news service. Other news organizations have also begun to set rules on how to use new tech tools such as ChatGPT in their work.

The creation of these new rules was prompted by the Poynter Institute, a journalism think tank, which recently urged news organizations to create standards for the use of generative AI. These rules are necessary, says the institute, because while AI has the ability to create text, images and audio on command, it isn’t yet able to distinguish between fact and fiction.

The AP says that material created by AI must be vetted carefully, just like material from any other news source. That’s because AI suffers from the anthropomorphic malady of “hallucination,” which one website defines as “a confident response from any AI that seems unjustified by the training data.” In other words, AI simply makes things up about 15 to 20 percent of the time.

For example, technology and electronics website CNET was recently forced to issue corrections after an article generated by an AI tool gave wildly inaccurate personal finance advice when asked to explain how compound interest works.

And if it’s not making up stuff, it’s conflating the facts, such as the time it said that it takes nine women one month to make a baby.

Why do ChatGPT and its ilk “hallucinate”? Because, says a Microsoft document, new AI systems are “built to be persuasive, not truthful. This means that outputs can look very realistic but contain statements that aren’t true.”

“Why do these outputs contain untrue statements?” you ask. Because AI learns by analyzing massive amounts of digital text from the internet. The problem is that the internet is filled with untruthful information.

Since large language models are simply trained to “produce a plausible-sounding answer” to user prompts, it’s easy to see where the problem lies, says Suresh Venkatasubramanian, a computer science professor at Brown University.

Somewhat surprisingly, AI didn’t just come crashing down on us recently – it was first mentioned in The New York Times in 1963. So why hasn’t the hallucination problem been solved yet? Some experts say it could take another couple of years to work out. Others say it may never be fixed.

“This isn’t fixable,” says Emily Bender, a linguistics professor at the University of Washington. “It’s inherent in the mismatch between the technology and the proposed use cases.”

And then there’s the double-edged sword that awaits if AI’s accuracy ever is fixed. If chatbots become more truthful, users will come to trust their information even more, making the remaining hallucinations even more dangerous.

So what’s the bottom line to all this chatbot-generated stuff? I suggest trusting only the stuff that’s written by an actual person. You know, such as some word guy hunting and pecking every week at his kitchen table.

Jim Witherell of Lewiston is a writer and lover of words whose work includes “L.L. Bean: The Man and His Company” and “Ed Muskie: Made in Maine.” He can be reached at jlwitherell19@gmail.com.
