The cautionary tale of Sports Illustrated’s alleged AI blunders
Learn from their mistakes.
Sports Illustrated is under fire for its reported use of AI, and the episode has ended badly.
Futurism called out the legendary sports magazine for allegedly publishing AI-written articles under the bylines of AI-generated authors.
The brand posted photos and bios of these apparently AI-generated writers, who don't seem to exist.
In addition to the questionable author bios, the articles had bizarrely worded phrases that no human would write, such as declaring that playing volleyball “can be a little tricky to get into, especially without an actual ball to practice with.”
The weird story got even weirder when Sports Illustrated deleted the suspected AI authors' photos, bios and articles after Futurism asked for comment.
Here are some of the lessons you can learn from their mistakes.
Be transparent about your use of AI
After Futurism’s article came out, Sports Illustrated’s parent company said the content came from a third-party vendor, AdVon, and insisted the articles were written by real people. Still, AdVon had allowed its writers to use pen names in some articles to “protect their privacy,” a practice the magazine said it condemned.
“We are removing the content while our internal investigation continues and have since ended the partnership,” Sports Illustrated said in a statement.
What can we draw from this? Be truthful first, and don’t insult your readers’ intelligence. Sports Illustrated denied the AI claims, but the proof is in the pudding: the authors’ headshots turned up on a stock image site, and a source close to the matter told Futurism that some of the articles were AI-generated. That goes well beyond AdVon protecting its writers’ privacy.
It’s critical to be open with your stakeholders and clear about how your brand uses AI. Sports Illustrated embarrassingly failed to do so, and that failure is a breeding ground for audience mistrust. The magazine is hurting its reputation as a purveyor of high-quality, original content.

It also appears to be facing further fallout amid a reorganization, though the company says the two aren’t connected. According to a recent Futurism article, Sports Illustrated’s publisher, The Arena Group, fired President Rob Barrett and COO Andrew Kraft on Dec. 6, roughly a week after Futurism’s story came out. The article notes that the cuts were part of an “overall reorganization plan.” The reorganization may well be the real reason for the firings, but the timing raises eyebrows.
And while not every brand is a publisher, if you create content with AI, don’t leave your stakeholders in the dark about it. Speak up sooner rather than later. That lets your audience know they can trust what they’re reading, whether it came from a bot or a person.
Bentley University Professor Christie Lindor shared sample language for disclosing AI use in an HR Brew article, including:
- “No generative AI was used to create this product.”
- “Generative AI produced this content.”
- “This content was created with the assistance of generative AI.”
Humans must edit AI content
AI is a powerful tool, but humans need to be in the mix from beginning to end, guiding the tools and editing what they produce.
The line about how hard it is to play volleyball without a ball would have stuck out to any human editor who saw the copy before it was published. It’s a ridiculous line that has no place in any story, least of all one from a brand as storied and respected as Sports Illustrated. Even cursory human oversight should have caught the hallmarks of both AI and bad writing before any reader saw them.
Have a system in place where all content, especially AI-generated submissions, is vetted and approved. Check for errors and awkward phrasing. Everyone needs an editor, and that goes double for content produced by an emerging technology like AI.
Sports Illustrated isn’t the only publication to fall into the trap of publishing unedited, likely AI-generated content. The Columbus Dispatch and other Gannett-owned newspapers published AI-generated articles with placeholder text still in them, CNN reported.
One example CNN posted reads:
“The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.”
The issues are glaring and cringeworthy. If any human had read that story before publication, they, too, would have caught the mistakes. They’re blatant, unlike the more subtle weirdness of the Sports Illustrated pieces, and both cases underscore the importance of not trusting AI tools to do it all.
But that doesn’t mean AI can’t be used responsibly to help create great content. Just don’t leave anything to chance.
Learn more about AI’s risks and benefits by joining us at Ragan’s Writing & Content Strategy Virtual Conference on Dec. 13.
Sherri Kolade is a writer and conference producer at Ragan Communications. She enjoys watching old films, reading and building an authentically curated life. Follow her on LinkedIn. Have a great PR/comms speaker in mind for one of Ragan’s events? Email her at sherrik@ragan.com.