The ethics of AI in public relations (are the robots good?)
The robots are here to stay, and we need to talk about the ethical implications.
Since OpenAI shook the world in November 2022 by launching ChatGPT, nothing has been quite the same. And by that, I mean everyone won't stop talking about artificial intelligence. Whether you're excited about this massive disruption and ready to see the world change, or you harbor deep resistance to the inevitable, the reality is that the world as we know it is probably not going to stay the same, and that includes public relations.
So, what does all of this mean for the communications industry? Let's discuss the ethics of AI, PR ethics, avoiding plagiarism, and keeping our jobs in the Golden Age of Robot Takeover.
People are understandably stoked about how quick and effortless chatbots like ChatGPT make things like meal planning, trip itineraries, or even writing entire college essays. And, for personal use, AI chatbots are great. They still need to be fact-checked, but they are pretty great.
But what about using chatbots and other generative AI for work?
Many, many people are quick to call AI an industry killer. Who needs teachers when your child can learn everything they need to know from an iPad?
We're not saying AI tools can't be helpful for work; in fact, we've created a list of our favorite AI tools for PR. But there are serious ethical issues around the use of AI (and chatbots in particular) in the workplace. Let's take a look at just a few.
While AI can be a big help for mundane or repetitive tasks, at the end of the day, it's just a text prediction tool. A brilliant text prediction tool. It cannot parse right from wrong, good from evil, *NSYNC from Backstreet Boys. It just recognizes patterns within large sets of data.
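To make the "text prediction" point concrete, here's a deliberately tiny sketch in Python (nothing beyond the standard library, and not how modern large language models actually work under the hood): it counts which word tends to follow which in a scrap of text, then "predicts" the most common follower. Real models are vastly more sophisticated, but the core idea of pattern-matching over data, with no built-in notion of truth, is the same.

```python
from collections import Counter, defaultdict

training_text = (
    "the robots are here the robots are learning "
    "the robots are here to stay"
).split()

# Count which word follows each word in the training data.
bigrams = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("robots"))  # "are" - the only pattern ever seen
print(predict_next("are"))     # "here" - seen twice vs. "learning" once
```

Notice that the predictor has no idea whether the robots really are here to stay; it only knows what the data said most often. Scale that up by a few trillion words and you have, roughly, the trust problem described above.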
This, combined with the fact that AI chatbots typically don't provide sourcing for their answers without prompting, can mean that many users receive and pass along completely made-up information as fact. It doesn't take a Ph.D. in computer science to understand why this could be a pretty terrible thing, especially if done on a massive scale with millions of users and particularly in the context of professional settings.
In a world of eroded trust, Fake News™️, and an internet where an estimated 62% of data is already unreliable information, ChatGPT misuse may inadvertently exacerbate this problem. For those in public relations, there's an increased need for due diligence to provide clients and customers with accurate, reliable, sourced, and fact-checked information, which can make over-reliance on AI a liability.
Do you know how artificial intelligence works? You may have a functional knowledge of the concept behind it, but the big AI players are not out here doing things transparently. This issue of opacity in the growing AI industry is not new and has been an ongoing conversation for years. And because of their sheer complexity, AI systems are extremely difficult to understand, even when a company chooses to be transparent about them (which most of them do not).
Where are they getting the data that informs their answers? What sources are being used, and how might those sources influence the process? It's always important to be critical on the internet, but doubly so with AI, which continues to be mired in ethical issues.
Many articles and think-pieces have been published regarding the controversial biases of AI software. While companies like OpenAI claim to be working on correcting this (stating biases are "bugs, not features"), the issue remains that AI companies can pretty much program their software to do or say whatever they want.
Google's Bard is another AI chatbot recently under fire for supposedly providing overtly politically biased answers. While many AI companies purport to be doing their best to avoid bias in their software, what if they… don't? What can be done to stop them from programming their software however they want, claiming it lacks bias, and distributing propaganda for one political ideology or another?
So, what happens to the data you ask the chatbots to process? How is your company's IP (intellectual property) being managed? We discussed previously the issues around transparency and how data is being used by the various AI software, but what about your data (and, subsequently, your employers' and clients' data)? Snapchat recently added an AI chatbot to its app and has been slammed in article after article raising concerns about privacy and inappropriate responses (particularly given its often underage user base).
Generally, one would hope that private company and trade secrets aren't being mishandled. But how much information do we realistically have about how data is being used, stored, or distributed after being handed over to these various AI services?
As previously mentioned, by and large, AI is not creating new and original content. AI is simply regurgitating information scraped from data sources. This is true for chatbots, and it also applies to things like text-to-image and text-to-video generators.
There are huge and still unresolved ethical, moral, and legal implications of using artificial intelligence to create content, tweaking it slightly, and passing it off as novel intellectual property. The U.S. Copyright Office is currently investigating these issues, and the future of copyright and AI is still largely unknown.
What are the ethical implications for the future of AI in PR?
Ethics, morals, etc. Yeah, yeah, yeah. What does this mean for the future of public relations?
There are still a lot of unknowns about the future of AI when it comes to the public relations industry. In fact, there are a lot of unknowns when it comes to the future of many industries and how AI will impact, disrupt, and shape those industries moving forward.
Many industry professionals are practicing cautious optimism, and many businesses and agencies are drafting SOPs (standard operating procedures) around how their IP can be used with AI tools. But it's hard to give a definitive answer about the future of PR or, really, anything right now, since the software is only now becoming ubiquitous and is changing seemingly every day.
But it is important to have these conversations as an industry. Some professionals see these tools and jump in feet first, not considering the murky ethical waters that can cause serious problems down the line if these tools aren't used critically and carefully. And as companies seemingly jam AI into every piece of their software, these issues will only multiply.
Here are our best tips for how to navigate AI in PR moving forward for the foreseeable future:
Talk to your manager about your agency's SOPs, and talk to your clients about how they feel about the use of AI. Some businesses are welcoming AI with open arms, even replacing staff and freelancers with AI. Others are putting strict company guidelines in place around the usage of such software.
It's best not to assume that everyone is fine and dandy with their IP being fed into Mystery Machines just to save a few minutes on a task.
Nor is it wise to assume all employers or clients are okay with generative AI producing a sizable chunk of their brand copy. Having open conversations around these topics will be incredibly important moving forward.
The best way to avoid legal or ethical issues regarding any AI tool or technological advancement is to simply not over-rely on it for the creative process. Sure, get that first draft out of the way with a chatbot or start a logo on Canva using AI, but don't let that be the end product. Your employer or clients could do that themselves for much less.
Even if AI starts the process, make sure you put your own unique and human spin on anything it produces. Let your uniquely mortal creativity shine through, and use machines for the boring bits.
Because of the aforementioned ethical and moral issues surrounding the constantly changing artificial intelligence landscape, it's on us to be constantly researching and fact-checking every single thing we use from AI software. "I got it from ChatGPT" will not hold up as a reasonable excuse if you accidentally publish or produce something erroneous.
There are also myriad tools out there to make sure you're not regurgitating copyrighted or protected information (but really, this shouldn't be an issue if you're following Step 2).
We can't necessarily stop various AI tools from having bias, but we can use our human judgment to prevent bias and discrimination from leaking into our work. As with all websites and news sources, watching for bias and requesting references from chatbots can help us observe and filter out "facts" that aren't actually facts.
The conversation around the ethics and legalities of artificial intelligence has only just begun. Some countries, like Italy, have temporarily banned AI tools like ChatGPT over privacy concerns. Even technological cowboy and prolific shitposter Elon Musk is side-eyeing the technology (claiming it could cause "civilization destruction").
While ChatGPT is only explicitly banned in seven countries, the legalities of all AI services and chatbots are constantly evolving. The best thing you can do to stay compliant is to be vigilant and follow the latest news around artificial intelligence.
There are fears that we're heading into a new age of pure, unbridled laziness. Who needs to try anymore when we can outsource a good chunk of our jobs to the machines? We've already seen this particular brand of laziness spread through the PR world via cluttered inboxes full of generic spam pitches, HARO pitches written entirely by AI, and press releases pulled straight from budget copywriting software.
What will set you apart as a PR professional in this new age is simply not being lazy. Think strategically, get creative, and be human-centered in your approach to any and all communications. In a world where everything is written by robots, be a person.
The future of artificial intelligence in the public relations industry is still an evolving and ongoing conversation. Thought leaders and industry powerhouses are still debating the technology and its role in the PR and comms space.
Interested in learning more about how AI will *checks notes* cause civilization destruction? Or, want a few cheeky marketing tips written by humans sent straight to your inbox? Why not sign up for our irregular emails? We promise we won't spam (because we often forget to send them).