AI for communicators: What’s new and what’s next

Apple’s major expansion into AI dominates the headlines.

ChatGPT is coming to Apple

This week, Apple took a huge step forward in its own AI journey – and likely toward democratizing AI and expanding its use by everyday people. California is also tired of waiting for the feds to regulate AI and is stepping up to the plate.

Read on to find out what communicators need to be aware of this week in the chaotic, promising world of AI. 

Tools and Advancements

Apple this week tried to catch up in the great AI race of 2024 – and in the process took some of the biggest steps yet toward integrating AI into daily life.

The next iteration of the iOS operating system will be stuffed full of AI features, including:

  • Siri, Apple’s assistant, will get smarter thanks to AI, gaining the ability to carry out more tasks using natural language, carry commands across apps (for instance, “text that picture Gary emailed me to Juan”) and perform all the expected generative AI tasks like rewriting your emails, summarizing your notifications and more. Siri will understand both voice and typed commands.
  • When Siri doesn’t know an answer, she’ll turn to a partnership with OpenAI’s ChatGPT.

Privacy was a major focus for both OpenAI and Apple. OpenAI won’t train on content from Apple users, the company said, while Apple also pledges it will never store requests made of its own AI, which it’s dubbed … Apple Intelligence.

Groan.

In an interview with the Washington Post, Apple CEO Tim Cook was most bullish on the technology’s ability to save people time by collapsing what used to be multiple requests into one fluid action.

But he’s also realistic about the technology’s limitations. While he said it has been deeply tested, he couldn’t guarantee “100 percent” that Apple’s AI won’t suffer the same hallucinations that have plagued other models.

The markets certainly liked what they heard from Apple. The stock price jumped 7% by the end of trading the day after the announcement, reaching a new record high of $207.15.

But someone did come to rain on Apple’s parade: Elon Musk.  

“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” Musk wrote on X.

Musk, who helped found OpenAI, has since turned against the company for allegedly abandoning its founding mission to chase profit. However, on Tuesday he unexpectedly dropped a lawsuit against OpenAI that alleged just that.

But even if Musk locks every iPhone that comes into Tesla HQ in a Faraday cage, the integration of AI into the foundation of that most ubiquitous of modern conveniences is likely to be a major jump forward for the average person. It may even change how we perceive AI, reframing its common understanding from a separate aspect of UX to an under-the-hood given. After all, it doesn’t feel like we’re using a novel AI technology. We’re simply talking to Siri, something we’ve been able to do for 14 years now.

Apple is far from alone in putting AI deep into smartphones. Samsung was well ahead, in fact, rolling out a similar suite of features back in January. But Apple gets the most attention given its marketplace dominance.

Elsewhere in Silicon Valley, Meta also wants its slice of the AI pie. The company is rolling out customer service chatbots in WhatsApp, hoping to gain revenue from businesses. This could be a boon for social media managers, but it’s unclear how much customers will love the sometimes-frustrating chatbot experience.

Meta is also facing backlash from visual artists as it uses imagery posted to Instagram to train its ravenous AI models. Some artists are now fleeing Instagram for a portfolio app known as Cara, which promises to protect their artwork. But it’s hardly a perfect solution, as many artists rely on Instagram and its massive user base to make sales. Expect user revolt against having their work deployed as AI training fodder to continue.

And finally, consulting giant McKinsey shared its lessons from building its own in-house AI tool, Lilli. Its tips include assembling a multidisciplinary team, anchoring decisions in user needs, investing in training and iteration, and committing to ongoing measurement and maintenance. Learn how the firm made it happen, and perhaps be inspired to build your own custom tool.

Risks and regulation

While the glut of questionable AI-generated content has caused headaches for Google Gemini and given us some pretty good laughs, it also highlights the tremendous risk that comes from publishing content without questioning and vetting it.

These heaps of sus AI content now have a name, The New York Times reports: slop.

Naming this low-quality content is but one way to normalize its detection in our day-to-day lives, especially as some domestic policy experts worry the U.S. is downplaying the existential risks this tech creates. 

That concern drove the proposal of a framework this past April, drafted by a bipartisan group of lawmakers including Senators Mitt Romney, Jack Reed, Angus King and Jerry Moran, that seeks to codify federal oversight of AI models to guard against chemical, biological, cyber and nuclear threats. It’s unclear how this framework fits into, or diverges from, the comprehensive AI task force update shared by the White House this past spring.

Not content with the pace of federal regulation, California advanced 30 new measures in May that amount to some of the toughest restrictions on AI in the nation, the New York Times reports. These measures focus on preventing AI tools from discriminating in housing and healthcare services, protecting intellectual property and safeguarding jobs.

“As California has seen with privacy, the federal government isn’t going to act, so we feel that it is critical that we step up in California and protect our own citizens,” said Rebecca Bauer-Kahan, a Democratic assembly member who chairs the State Assembly’s Privacy and Consumer Protection Committee.

This shouldn’t suggest that the feds don’t share California’s concerns, however. The FTC is currently investigating Microsoft’s partnership with AI startup Inflection as part of a larger effort to ramp up antitrust investigations and minimize the likelihood of one organization having a monopoly on enterprise AI software. 

Central to this probe is whether the partnership is actually an acquisition by another name that Microsoft failed to disclose, reports CNN. The FTC is currently finalizing details with the Justice Department on how they can jointly oversee the work of AI tech giants like Microsoft, Google, Nvidia, OpenAI and more.

According to CNN:

The agreement shows enforcers are poised for a broad crackdown on some of the most well-known players in the AI sector, said Sarah Myers West, managing director of the AI Now Institute and a former AI advisor to the FTC.

“Clearance processes like this are usually a key step before advancing an investigation,” West said. “This is a clear sign they’re moving quickly here.”

Microsoft declined to comment on the DOJ-FTC agreement but, in a statement, defended its partnership with Inflection.

“Our agreements with Inflection gave us the opportunity to recruit individuals at Inflection AI and build a team capable of accelerating Microsoft Copilot, while enabling Inflection to continue pursuing its independent business and ambition as an AI studio,” a Microsoft spokesperson said, adding that the company is “confident” it has complied with its reporting obligations.

But whether concerns are existential or logistical, it’s clear that fresh threats are coming fast.

Earlier this week, Human Rights Watch reported that photos and identifying information of Brazilian kids have been used without their consent to train AI image tools like Stable Diffusion.

HRW warns that these photos contain personal metadata and can be used to create deepfakes.

HRW reports:

Analysis by Human Rights Watch found that LAION-5B, a data set used to train popular AI tools and built by scraping most of the internet, contains links to identifiable photos of Brazilian children. Some children’s names are listed in the accompanying caption or the URL where the image is stored. In many cases, their identities are easily traceable, including information on when and where the child was at the time their photo was taken.

One such photo features a 2-year-old girl, her lips parted in wonder as she touches the tiny fingers of her newborn sister. The caption and information embedded in the photo reveals not only both children’s names but also the name and precise location of the hospital in Santa Catarina where the baby was born nine years ago on a winter afternoon.

While the privacy of children is paramount, discussion of deepfakes also resurfaces concerns about how digitally manipulated images and voices will continue to influence global elections this year.

But the emerging discipline of ‘responsible AI’ may mitigate the spread, as it includes designing tools that can detect deepfake audio and video much as a spam filter flags junk email.

PwC, which is a member of Ragan Communications Leadership Council, is working on defining boundaries around responsible AI use, and developing tools that help communicators operate within those ethical frameworks. U.S. and Mexico Communications Lead Megan DiSciullo says this and similar efforts present an opportunity to train employees, inform end users and reduce risk.

“Whether it’s big sessions with thought leaders, teaching people how to prompt, curriculum on responsible AI or even just teaching people about what AI does and doesn’t do, a very important element remains the role of the human,” she told Ragan last month.

The scaling of responsible AI tools will only become more important, as a new study from Epoch AI found that the supply of training data for AI models is close to running out.

AP reports:

In the short term, tech companies like ChatGPT-maker OpenAI and Google are racing to secure and sometimes pay for high-quality data sources to train their AI large language models – for instance, by signing deals to tap into the steady flow of sentences coming out of Reddit forums and news media outlets.

In the longer term, there won’t be enough new blogs, news articles and social media commentary to sustain the current trajectory of AI development, putting pressure on companies to tap into sensitive data now considered private — such as emails or text messages — or relying on less-reliable “synthetic data” spit out by the chatbots themselves.

The depletion of existing data sources for language models and the increase in misinformation both make a strong case for having custom, proprietary GPTs that keep your data out of the slop pile. 

Ensuring your communications expertise and judgment are present during any internal AI council meetings, and that these risks are shared with leaders across your organization, will position your organization to embrace the foundations of responsible AI while validating the worth of your role and function.

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.

Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.
