AI for communicators: What’s new and what matters

Tons of lawsuits and a tumultuous economy.


The last few weeks have been filled with lawsuits, major changes in the AI economy and new insights into how everyday users are using AI – and want to use it in the future.

Here’s what communicators should know.

The AI economy

As your 401(k) might have noticed, it’s been a tumultuous week for global markets. And AI had a role to play in some of the drama.

Big Tech companies are investing billions and billions of dollars into generative AI technologies, but few are yet reaping significant financial benefits. Amazon’s lackluster earnings were attributed, in part, to its profligate spending on AI without much to show for it, CNN reported. Intel is facing a disaster after spending big on AI only to find those investments haven’t borne fruit – in fact, the company needs to shed 15,000 jobs and cut $10 billion in costs.

Microsoft, Meta and Google all seemed undeterred, each expecting to spend tens of billions in 2024 and in fiscal 2025. It seems the most important ingredient for making money is time – think 10-15 years.

But that’s an incredibly long horizon.

“For public companies, we expect to get return on investment in much shorter time frames,” D.A. Davidson analyst Gil Luria told CNN. “So that’s causing discomfort, because we’re not seeing the types of applications and revenue from applications that we would need to justify anywhere near these investments right now.”

Some of those investments are going into AI startups, the Wall Street Journal reported. They, too, are struggling to find fast returns and need funding to stay afloat. Rather than pursue traditional acquisitions, which are drawing a wary eye from regulators, many are turning to a different funding model. Such is the case between Amazon and Adept AI, where the e-commerce giant agreed to hire most of Adept’s employees and pay a $330 million licensing fee for the technology.

Despite the attempt to skirt regulators, the Federal Trade Commission is still taking a close look at the deal. But the broader infusions of cash from major players into startups suggest that there may be a bubble as these companies struggle to earn money in the short term.

However, in that same short term, AI might reshape the kinds of jobs white-collar workers hold. The New York Times speculates that the tech might help kill “meaningless jobs.” Here’s how the Times describes those:

Robots are adept at pattern recognition, which means they excel at applying the same solution to a problem over and over: churning out copy, reviewing legal documents, translating between languages. When humans do something ad nauseam, their eyes might glaze over, they slip up; chatbots don’t experience ennui.

Jobs ripe for AI-ification include the already endangered species of the executive assistant, telemarketers and some forms of software engineering.

The reduction or elimination of those roles could begin to reshape the global economy.


Tools and use cases

Meta hopes to bring in celebrities to voice its generative AI assistants. Celebs who are reportedly in conversations with the social media giant include Dame Judi Dench, Awkwafina and Keegan-Michael Key. While none has yet inked a deal, the prospect of greeting users with a familiar, distinctive voice offers an interesting preview of what Meta’s overall generative AI voice strategy could be.

It also follows on the heels of OpenAI’s disastrous attempt to license Scarlett Johansson’s voice, a nod to her iconic voice acting as an AI assistant in the film “Her.” She declined, yet OpenAI released a voice eerily similar to hers, while denying it trained on her work. Legal messiness ensued. We’ll see if Meta can avoid the same mistakes.

The Washington Post opened a portal this week into how real people are using AI. And the answers are, perhaps unsurprisingly, sex and homework. The number one use of AI bot WildChat was creative writing and roleplay, including spicy scenes. Eighteen percent used it for homework help, 17% for search queries, and 15% for work and business.

While the exact percentages will vary based on the platform used, these numbers show just how versatile AI tools can be. They’re also a good reminder to have a filter in place that keeps the more salacious requests away from your work-related chatbots.
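If you’re wiring a chatbot into your own workflows, one practical approach is to screen prompts before they ever reach the model. Here’s a minimal sketch using OpenAI’s moderation endpoint – it assumes the openai Python SDK and an API key in your environment, and the screen_prompt helper is purely illustrative, not part of any standard setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is safe to forward to a work
    chatbot, False if the moderation model flags it."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    return not result.results[0].flagged


if screen_prompt("Draft a press release about our Q3 product launch."):
    pass  # safe: hand the prompt off to the chatbot
```

The same endpoint also returns per-category scores (sexual content, harassment and so on), so a team could block only the categories that matter for its use case rather than rejecting everything flagged.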

That emphasis on using AI to help with homework is spilling over into the classroom, where colleges are preparing the workers of tomorrow for an environment where AI must be used – but cannot replace human thinking.

The University of Southern California already has an AI for Business major, a co-venture between the business and engineering schools, the Wall Street Journal reported. Cornell University has an AI and Society minor. But even those outside of academia are looking for instruction on AI. The Wall Street Journal article compared AI to the necessity of knowing how to type or use Microsoft Office: table stakes for any office worker, including communicators.

But even as those skills become expected, schools (and employers) must grapple with ways to identify the use of AI in order to differentiate true knowledge and skill from the mere ability to work a prompt.

OpenAI apparently has a tool that can identify whether content was created by ChatGPT with 99.9% accuracy using text watermarks, but the company has hesitated to release it. An OpenAI spokesperson cited concerns that the tool could unfairly flag non-native English speakers, while an internal survey found that 30% of current users would be put off using ChatGPT by detection technology.
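OpenAI hasn’t published how its watermark works, but the general approach in the research literature is statistical: at generation time the model is nudged toward a pseudo-random “green list” of tokens, and a detector later checks whether a suspect text uses green tokens far more often than chance would predict. Here’s a rough, purely illustrative sketch of that detection idea – every detail below is an assumption drawn from published research, not OpenAI’s actual method:

```python
import hashlib

GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step


def is_green(prev_token: int, token: int) -> bool:
    """Hash the previous token with the current one to decide whether
    `token` lands in that step's pseudo-random green list."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION


def green_rate(token_ids: list[int]) -> float:
    """Fraction of tokens drawn from the green list. Watermarked text,
    nudged toward green tokens during generation, scores well above
    the ~0.5 expected of ordinary human writing."""
    if len(token_ids) < 2:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(token_ids, token_ids[1:]))
    return hits / (len(token_ids) - 1)
```

Because the signal is statistical, detection grows more confident as the text gets longer – which is part of why short snippets are so hard to attribute.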

At some point, all AI platforms will have to grapple with this issue. But apparently, not yet.

Risks and regulation

Elon Musk features in two major regulatory stories this week. First, his own lawsuit against OpenAI head Sam Altman: refiling a suit he had previously withdrawn, Musk claims that OpenAI breached its founding principles by pursuing profit. OpenAI has called the suit meritless.

Musk is also the subject of pushback from five state attorneys general over bad information spread by his Grok AI tool, which said it was too late for Kamala Harris to take Joe Biden’s place on the presidential ballot in nine states. “This is false,” the letter from the top lawyers reads. “In all nine states the opposite is true.” They have asked that the chatbot refer election-related questions to impartial sources offering information on voter registration.

The stakes are high in this election, and misinformation can spread quickly. AI companies will have to figure out their duty to the truth – and fast.

In other lawsuit news, a YouTube creator wants to start a class action suit against OpenAI, claiming that the company trained its model on transcriptions of his videos. It’s the latest in a series of suits against OpenAI and other LLM developers over the content their models are trained on – but it’ll be far from the last as data-hungry models constantly seek new inputs.

This week’s antitrust ruling against Google doesn’t seem to have much to do with AI at first glance – it’s ostensibly about search engines. But scratch the surface and it’s apparent that the ruling could change the future of AI.

As the New York Times reports:

During the trial, Microsoft’s chief executive, Satya Nadella, testified that he was concerned that his competitor’s dominance had created a “Google web” and that its relationship with Apple was “oligopolistic.” If Google continued undeterred, it was likely to become dominant in the race to develop artificial intelligence, he said.

Google’s chief executive, Sundar Pichai, countered in his testimony that Google created a better service for consumers.

In other words, if Google’s power isn’t checked now, it could go on to create an additional monopoly in the AI space. It’s still far from certain how the antitrust ruling will play out – or even if it’ll stand up on appeal – but expect it to echo far beyond your search bar.

Finally, in a less tangible risk, Google found that there are limits for what people want to use AI for. Its Olympics commercial, “Dear Sydney,” saw a father using AI to help his daughter write a letter to her favorite U.S. hurdling Olympian. But there was outcry over outsourcing this timeless task to a robot instead of helping the child write from the heart.

Google eventually pulled the ad from rotation. “We believe that AI can be a great tool for enhancing human creativity, but can never replace it,” Google said in a statement. “Our goal was to create an authentic story celebrating Team USA.”

Whatever the intent, it’s clear that people felt Google was trying to use tech to take away from humans rather than add to their lives. This misunderstanding of what people want from AI could prove as disastrous as any lawsuit.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.
