Stop worrying about whether content is AI-generated

Outcomes matter more than inputs.

Log onto LinkedIn on any given day, and you’re bound to see impassioned debates about how to tell if something is AI-generated.

If it has an em dash, it was AI! If it uses the phrase “in today’s challenging environment,” a robot did it! On and on, tips for sniffing out AI that begin to sound more like articles of faith than helpful advice, a search for human connection in a time of technological uncertainty.

As a writer and editor who reads submissions every day, I understand these concerns. In the early days of AI, I used to try to read the tea leaves to determine whether something was AI-generated. It worried me! I want to give my readers the best. Could a robot really do that? Of course not.

I was so confident that a few times, I asked people, as kindly as I could: Did you use AI to write this?

And every time, the answer was no.

Eventually, I realized it doesn't matter whether AI wrote the content I was reading, just as it didn't matter whether the writer used Microsoft Word or Google Docs, or did their research with Bing or Google. What ultimately mattered was: did the piece do what it needed to do?

If it did, then did it matter if there was an AI assist?

Now, I believe that at this moment, in the second quarter of 2025, humans will succeed at that goal more often than AI will. Generative AI, in its current state, is an aggregation of massive amounts of human-written data. It can only rearrange those pieces like a giant Mad Lib. It isn't capable of creating anything truly new.

But in many cases, neither are humans. I read rehashed submissions long before AI came onto the scene, just like people used em dashes before ChatGPT was invented.

Ultimately, whether a piece was whipped up by a robot with a great prompt or painstakingly written letter by letter by a human doesn’t matter. Here’s what does:

  • Is the piece accurate? These are table stakes. If the content isn’t trustworthy, nothing else matters. And both AI and people have their struggles in this regard.
  • Is the piece interesting or useful? Not every piece of content is going to have you on the edge of your seat — nor should it. But it should, generally, either entice you with great storytelling or give you the information that you need. Otherwise, why does it exist?
  • Is the piece ethical? If AI is writing about some human emotion it can't experience, that's a problem. If its use isn't transparent, that's an issue. If it's stealing content, that's an issue. But humans lying about facts is an issue, too. Keep it all above board.
  • Does the piece have some form of originality? Not every item reinvents the wheel — nor should it. But whether we’re talking about an anecdote, a flash of humor or personalization, something about the content should stand out.
  • Does the piece achieve its goal? Content can be designed to inform, persuade, move to action, entertain and on and on. If it’s an educational piece that doesn’t teach the audience anything, it isn’t successful.

In other words, communicators should focus more on how content is received than how it’s created. You can achieve this through all the usual methods: analyzing page views, read time, email open rates, pulse surveys or tracking when journalists respond to your pitches. Or heck, you could just show it to another human and ask for their opinion the old-fashioned way.

AI isn’t the enemy. Bad content is — no matter who the author is.

Allison Carter is editorial director of PR Daily and Ragan.com. Follow her on LinkedIn.
