AI for communicators: What’s new and what matters
From risks to regulation, what you need to know this week.
AI continues to shape our world in ways big and small. From misleading imagery to new attempts at regulation and sweeping changes in how newsrooms use AI, there’s no shortage of major stories.
Here’s what communicators need to know.
AI risks and regulation
As always, new and recurring risks continue to emerge around the implementation of AI. Hence, the push for global regulation continues.
Consumers overwhelmingly support federal AI regulation, too, according to a new survey from HarrisX. “Strong majorities of respondents believed the U.S. government should enact regulation requiring that AI-generated content be labeled as such,” reads the exclusive feature in Variety.
But is the U.S. government best equipped to lead on regulation? On Wednesday, the European Parliament approved a landmark law that its announcement claims “ensures safety and compliance with fundamental rights, while boosting innovation.” It is expected to take effect this May.
The law includes new rules banning applications that threaten citizens’ rights, such as biometric systems collecting sensitive data to create facial recognition databases (with some exceptions for law enforcement). It also imposes clear obligations on high-risk AI systems, which include those used in “critical infrastructure, education and vocational training, employment, essential private and public services, certain systems in law enforcement, migration and border management,” and “justice and democratic processes,” according to the EU Parliament.
The law will also require general-purpose AI systems and the models they are based on to meet transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. Manipulated images, audio and video will need to be labeled.
Dragos Tudorache, a lawmaker who oversaw EU negotiations on the agreement, hailed the deal but noted that the biggest hurdle remains implementation.
“The AI Act has pushed the development of AI in a direction where humans are in control of the technology, and where the technology will help us leverage new discoveries for economic growth, societal progress, and to unlock human potential,” Tudorache said on social media on Tuesday.
“The AI Act is not the end of the journey, but, rather, the starting point for a new model of governance built around technology. We must now focus our political energy in turning it from the law in the books to the reality on the ground,” he added.
Legal professionals described the act as a major milestone for international artificial intelligence regulation, noting it could pave the path for other countries to follow suit.
Last week, the bloc brought into force landmark competition legislation set to rein in U.S. giants. Under the Digital Markets Act, the EU can crack down on anti-competitive practices from major tech companies and force them to open up their services in sectors where their dominant position has stifled smaller players and choked freedom of choice for users. Six firms — U.S. titans Alphabet, Amazon, Apple, Meta, Microsoft and China’s ByteDance — have been put on notice as so-called gatekeepers.
Communicators should pay close attention to U.S. compliance with the law in the coming months, as diplomats reportedly worked behind the scenes to water down the legislation.
“European Union negotiators fear giving in to U.S. demands would fundamentally weaken the initiative,” reported Politico.
“For the treaty to have an effect worldwide, countries ‘have to accept that other countries have different standards and we have to agree on a common shared baseline — not just European but global,’” said Thomas Schneider, the Swiss chairman of the committee.
If this global regulation dance sounds familiar, that’s because something similar happened when the EU adopted the General Data Protection Regulation (GDPR) in 2016, an unprecedented consumer privacy law that required cooperation from any company operating in a European market. That law influenced the creation of the California Consumer Privacy Act two years later.
As we saw last week when the SEC approved new rules for emissions reporting, the U.S. can water down regulations below a global standard. It doesn’t mean, however, that communicators with global stakeholders aren’t beholden to global laws.
Expect more developments on this landmark regulation in the coming weeks.
As news of regulation dominates, we are reminded that risk still abounds. While AI chip manufacturer Nvidia rides all-time market highs and earns coverage for its competitive employer brand, the company also finds itself in the crosshairs of a proposed class-action copyright infringement lawsuit, just as OpenAI did nearly a year ago.
Authors Brian Keene, Abdi Nazemian and Stewart O’Nan allege that their works were part of a dataset Nvidia used to train its NeMo AI platform.
Part of the collection of works NeMo was trained on included a dataset of books from Bibliotik, a so-called “shadow library” that hosts and distributes unlicensed copyrighted material. That dataset was available until October 2023, when it was listed as defunct and “no longer accessible due to reported copyright infringement.”
The authors claim that the takedown is essentially Nvidia’s concession that it trained its NeMo models on the dataset, thereby infringing on their copyrights. They are seeking unspecified damages for people in the U.S. whose copyrighted works have been used to train NeMo’s large language models within the past three years.
“We respect the rights of all content creators and believe we created NeMo in full compliance with copyright law,” a Nvidia spokesperson said.
This lawsuit is a timely reminder that course corrections can be framed as admissions of guilt in the larger public narrative. But the stakes are even higher.
A new report from Gladstone AI, commissioned by the State Department and informed by consultations with experts at several AI labs including OpenAI, Google DeepMind and Meta, offers substantial recommendations for addressing the national security risks posed by the technology. Chief among its concerns is what it characterizes as a “lax approach to safety” in the interest of not slowing down progress, along with cybersecurity vulnerabilities and more.
The finished document, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry.

Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power. That threshold should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI’s GPT-4 and Google’s Gemini. The new AI agency should also require AI companies on the “frontier” of the industry to obtain government permission to train and deploy new models above a certain lower threshold.

Authorities should “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward “alignment” research that seeks to make advanced AI safer.
On the ground level, Microsoft moved to block Copilot prompts that generated violent and sexual imagery after an engineer raised concerns with the FTC.
Prompts such as “pro choice,” “pro choce” [sic] and “four twenty,” which were each mentioned in CNBC’s investigation Wednesday, are now blocked, as is the term “pro life.” There is also a warning that multiple policy violations may lead to suspension from the tool, which CNBC had not encountered before Friday.
“This prompt has been blocked,” the Copilot warning alert states. “Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve.”
This development is a reminder that AI platforms will increasingly put the onus on end users to follow evolving guidelines when publishing automated content. Whether you work within the capabilities of consumer-optimized GenAI tools or run your own custom GPT, sweeping regulation of the AI industry is not a question of “if” but “when.”
Tools and use cases
Walmart is seeking to cash in on the AI craze, with pretty decent results, CNBC reports. Its current experiments center on becoming a one-stop destination for event planning. Rather than going to Walmart.com and typing in “paper cups,” “paper plates,” “fruit platter” and so on, the AI will generate a full shopping list based on your needs – and, of course, allow you to purchase it all from Walmart. Some experts say this could be a threat to Google’s dominance; others won’t go quite that far but are still optimistic about its potential. Either way, it’s something for other retailers to watch.
Apple has been lagging behind other major tech players in the AI space. Its biggest AI project to date is a laptop that touts its power for running other companies’ AI applications rather than anything developed in-house. But Fast Company says that could change this summer when Apple rolls out its next operating systems, which are all but certain to include AI features of their own.
Fast Company speculates that a project internally dubbed “AppleGPT” could revolutionize how the Siri voice assistant works. The rollout may also include an AI that lives on your device rather than in the cloud, which would be a major departure from other services. Apple will certainly make a splash if it can pull it off.
Meanwhile, Google’s Gemini rollout has been anything but smooth. Recently the company restricted queries related to upcoming global elections, The Guardian reported.
A statement from Google’s India team reads: “Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses.” The Guardian says that even basic questions like “Who is Donald Trump?” or queries about when to vote return answers that point users back to Google searches. It’s another black eye for the Gemini rollout, which has consistently mishandled controversial questions or simply sent people back to familiar, safe technology.
But then, venturing into the unknown has big risks. Nature reports that AI is already being used in a variety of research applications, including generating images to illustrate scientific papers. The problems arise when close oversight isn’t applied, as in the case of a truly bizarre image of rat genitalia with garbled, nonsense text overlaid on it. Worst of all, this was peer reviewed and published. It’s yet another reminder that these tools cannot be trusted on their own. They need close oversight to avoid big embarrassment.
AI is also threatening another field, completely divorced from scientific research: YouTube creators. Business Insider notes that there is an exodus of YouTubers from the platform this year. Their reasons are varied: Some face backlash, some are seeing declining views and others are focusing on other areas, like stand-up comedy. But Business Insider says that AI-generated content swamping the video platform is at least partly to blame:
Experts believe that if the trend continues, it may usher in a future where the relatable, authentic creators people used to turn to the platform for are few and far between, replaced by a mixture of exceedingly high-end videos that only the MrBeasts of the internet can produce and subpar AI junk thrown together by bots and designed to meet our consumption habits with the least effort possible.
That sounds like a bleak future indeed – and one that could also shrink the pool of influencers available to partner with on the platform.
But we are beginning to see some backlash against AI use, especially in creative fields. At SXSW, two filmmakers behind “Everything Everywhere All at Once” decried the technology. Daniel Scheinert warned against AI, saying: “And if someone tells you, there’s no side effect. (AI’s) totally great, ‘get on board’ — I just want to go on the record and say that’s terrifying bullshit. That’s not true. And we should be talking really deeply about how to carefully, carefully deploy this stuff.”
Thinking carefully about responsible AI use is something we can all get behind.
AI at work
As the aforementioned tools promise new innovations that will shape the future of work, businesses continue to adjust their strategies in kind.
Thomson Reuters CEO Steve Hasker told the Financial Times that the company has “tremendous financial firepower” to expand the business into AI-driven professional services and information as it sells down the remainder of its stake in the London Stock Exchange Group (LSEG).
“We have dry powder of around $8 billion as a result of the cash-generative ability of our existing business, a very lightly levered balance sheet and the sell down of [our stake in] LSEG,” said Hasker.
Thomson Reuters has been on a two-year reorganization journey to shift from content provider to “content-driven” tech company. It’s a timely reminder that now is the time to consider how AI fits not only into your internal use cases, but into your business model. Testing tech and custom GPTs internally as “customer zero” can train your workforce and prepare a potentially exciting new product for market in one fell swoop.
A recent WSJ feature explores the cost-saving implications of using GenAI to integrate new corporate software systems, highlighting concerns that the contractors hired to implement these systems will pocket the bottom-line savings from automation while charging companies the same rates.
How generative AI efficiencies will affect pricing will continue to be hotly debated, said Bret Greenstein, data and AI leader at consulting firm PricewaterhouseCoopers. It could increase the cost, since projects done with AI are higher quality and faster to deliver. Or it could lead to lower costs as AI-enabled integrators compete to offer customers a better price.
Jim Fowler, chief technology officer at insurance and financial services company Nationwide, said the company is leaning on its own developers, who are now using GitHub Copilot, for more specialized tasks. The company’s contractor count is down 20% since mid-2023, in part because its own developers can now be more productive. Fowler said he is also finding that contractors are now more willing to negotiate on price.
Remember, profits and productivity are not necessarily one and the same. Fresh Axios research found that workers in Western countries are embracing AI’s potential for productivity less than workers elsewhere – only 17% of U.S. respondents and 20% of EU respondents said that AI improved their productivity. That’s a huge gap from the countries reporting higher productivity, including 67% of Indian respondents, 65% in Indonesia and 62% in the UAE.
Keeping up and staying productive will also require staying competitive in the global marketplace. No wonder the war for AI talent rages on in Europe.
“Riding the investment wave, a crop of foreign AI firms – including Canada’s Cohere and U.S.-based Anthropic and OpenAI – opened offices in Europe last year, adding to pressure on tech companies already trying to attract and retain talent in the region,” Reuters reported.
AI is also creating new job opportunities. Adweek says that marketing roles involving AI are exploding, from the C-suite on down. Among other new uses:
Gen AI entails a new layer of complexity for brands, prompting people within both brands and agencies to grasp the benefits of technologies such as Sora while assessing their risks and ethical implications.
Navigating this balance could give rise to various new roles within the next year, including ethicists, conversational marketing specialists with expertise in sophisticated chatbots, and data-informed strategists on the brand side, according to Jason Snyder, CTO of IPG agency Momentum Worldwide.
Additionally, Snyder anticipates the emergence of an agency integration specialist role within brands at the corporate level.
“If you’re running a big brand marketing program, you need someone who’s responsible for integrating AI into all aspects of the marketing program,” said Snyder. “[Now] I see this role in bits and pieces all over the place. [Eventually], whoever owns the budget for the work that’s being done will be closely aligned with that agency integration specialist.”
As companies like DeepMind offer incentives such as restricted stock, domestic startups will continue to struggle to hire top talent if their AI tech stacks aren’t up to the standard of big players like Nvidia.
“People don’t want to leave, because when they have peers to work with, and when they already have a great experimentation stack and existing models to bootstrap from, for somebody to leave, it’s a lot of work,” Aravind Srinivas, the founder and CEO of Perplexity, told Business Insider.
“You have to offer such amazing incentives and immediate availability of compute. And we’re not talking of small compute clusters here.”
Another reminder that building a competitive, attractive employer brand around your organization’s AI integrations should be on every communicator’s mind.
What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!
Allison Carter is editor-in-chief of PR Daily. Follow her on Twitter or LinkedIn.
Justin Joffe is the editorial director and editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.