A PR pro’s guide to AI privacy: Protecting client data in the age of AI
Know the right questions to ask.

Pete Pachal is the founder and CEO of The Media Copilot.
As someone who often teaches PR teams how to use AI, the No. 1 thing that complicates my sessions is privacy. Specifically, data privacy — everyone is concerned about what these chatbots and tools are doing with the information we give them. There’s a baseline assumption that everything you feed into AI software becomes fodder for the AI to train on.
That’s not a terrible assumption, but it’s also often not accurate. Without a sophisticated understanding of AI and how its various flavors handle privacy, you’re either going to risk data leaks or be forced to implement draconian “do not use” policies with respect to AI. And considering Muck Rack’s 2025 report on the State of AI in PR found a 75% adoption rate in the industry, the leak potential is enormous.
Understanding AI privacy in PR
Much of the power of AI lies in applying it to specific data. Simply asking an AI to tell you about some information in its knowledge base (“What social and political trends led to the French Revolution?”) isn’t really a compelling use case, and the answers can often be riddled with hallucinations — i.e., when an AI gets things wrong. Better to aim AI’s incredible power of natural language processing (NLP) at specific data, which both reduces the chance of hallucination and makes the exercise truly useful, since now the AI is applying its abilities to your work.
The way AI typically works means the information you give it goes into the cloud, gets processed into a format the AI can use to compare it with the information in its database, and gets stored for later — both so you can reference the data again in the future and so it can help retrain the model, giving it more data to work with.
It’s that last part that users are often the most concerned about, and with good reason. If the data becomes part of the AI’s training set, that means the information within it will inform future answers. If a user asks the AI to cite its sources, there’s a chance it might even call up the exact text it was fed in the first place, a phenomenon known as “regurgitation.”
You can see pretty quickly how careless sharing with an AI system can lead to a leak. The leak isn’t instantaneous; training takes time. And to be fair, AI companies have reduced the chance of regurgitation considerably over the past two years — it rarely happens anymore. Nonetheless, the chance is not zero.
Essential rules for AI privacy
Preventing your data and prompts from being harvested to retrain the model is actually fairly straightforward. Most public-facing chatbots include a setting you can simply turn off. For ChatGPT, all you need to do is go into your settings and switch off “Improve the model for everyone.” That’s it — now ChatGPT won’t train on your data. And if you only want to opt out for a single session, you can fire up a Temporary Chat, which won’t retain the conversation once you close that browser tab. Other AI tools have similar features.
Of course, changing a setting is easily missed or forgotten, and it doesn’t help an organization, since individual settings are impossible to manage at scale. Luckily, most software vendors that use AI do so via APIs, and the major AI providers don’t use API traffic for general model training by default. And if you build your own tools with the API (easier than you might think), the same rule applies. That said, the vendor will have its own privacy policy, separate from the AI provider’s, so be sure to vet that properly as well.
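To make that concrete, here’s a minimal sketch of what calling a model through a provider’s API looks like, using OpenAI’s Python SDK. The model name, environment variable and prompts are illustrative assumptions, not a recommendation; the point is simply that the request goes through the API rather than the consumer chatbot, and per the provider’s published policy, API traffic isn’t used for general training by default.

```python
# Minimal sketch: calling a model via the OpenAI API (Python SDK).
# Model name and env var are illustrative; check your own provider's
# data-usage and retention policy before sending anything sensitive.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever your plan includes
    messages=[
        {"role": "system", "content": "You are a PR assistant. Keep answers concise."},
        {"role": "user", "content": "Draft a two-sentence summary of this briefing: ..."},
    ],
)

print(response.choices[0].message.content)
```

The same pattern applies whether the call lives inside a vendor’s product or a small internal script your own team builds — either way, the data flows through the API rather than a personal chatbot account.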
For most applications of AI, that level of privacy should provide a good deal of comfort. However, the data technically still needs to sit in the AI provider’s cloud so its large language model can process it quickly enough to give you an answer in seconds. That means the data still has to leave your premises, which some clients may consider too high-risk.
In those cases, there’s little choice but to use AI locally. This involves downloading and running an AI model on a computer or server that’s securely under your control. That will ensure the data never goes to the AI provider at all. Of course, that also means you usually won’t have any of the features of the commercial software, you’ll have to pay for your own computing costs, it’s technically cumbersome, and your AI will probably run much slower than public-facing systems do. But the interactions will be 100% private.
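For illustration, here’s a minimal sketch of what a fully local setup can look like, assuming Ollama is installed on your own machine and an open-weight model has already been downloaded (the model name and prompt are placeholders). Because the request goes to localhost, the text never leaves hardware you control.

```python
# Minimal sketch: querying a locally hosted model via Ollama's HTTP API.
# Assumes Ollama is running locally and a model (here "llama3") was already
# pulled with `ollama pull llama3`; the request never leaves this machine.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # placeholder model name
        "messages": [
            {"role": "user", "content": "Summarize this embargoed press release in three bullets: ..."}
        ],
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```

The trade-off is exactly the one described above: you carry the hardware and maintenance burden yourself, but nothing is transmitted to an outside provider.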
Putting AI privacy first
With an understanding of the different levels of privacy protection available, you’ll be equipped to create your own AI privacy plan, which includes three essential elements:
- Create a privacy policy: Your workforce needs to know what they can and can’t put into an AI system. Simple rules are generally better, but be careful: If you’re too restrictive, you risk some employees using “shadow AI” — performing tasks with AI tools on their personal devices or accounts. Too liberal, and leak risk increases. Restricting client data to a single system with enhanced privacy, for example, is a good first step toward a balanced approach.
- Select the right tools: Personal ChatGPT accounts are great for experimentation, but not the most privacy-focused tool for real work. Take the time to get to know business or enterprise tools that are private by default while also making it easy for teams to collaborate on projects.
- Have an emergency response plan: Mistakes happen. When they happen with data, it’s important to respond quickly and decisively, without blame. Ensure everyone on your team has access to the plan — especially to whom they should escalate concerns at both the software vendor and the AI company.
‘Should you trust AI?’ is the wrong question
Over the past two years, AI has advanced from a mind-blowing curiosity to a rapidly growing part of knowledge work. And the way it speaks back to you — sometimes literally — makes it feel like a real human collaborator.
Except that it’s not human. It’s a complex system of computing tasks, built by multibillion-dollar corporations and governed by specific programming, laws, and company policies. Everything you “tell” AI goes into this system, so it’s important to choose your words — and your data — carefully. With the right tools and approach, you’ll be able to handle AI interactions with confidence.
For more AI insights, join us for the first AI Horizons Conference in Miami, Feb. 24-26.