OpenAI, ChatGPT, and Safeguarding Your Content Marketing

‘It’s as if the AI system were going into your factory and stealing your machine’


[Image: Two gray robots sitting back to back, one holding a pistol in front of its face with both hands, the other typing on a laptop]

Elon Musk says “we are not far from dangerously strong AI” because “ChatGPT is scary good.” I see what he means. Five days after laying off 10,000 people, and less than two months after the release of the large language model (LLM) ChatGPT, Microsoft is making a “multiyear, multibillion dollar investment” in its creator, OpenAI. But wait, it gets scarier.

OpenAI started out as a nonprofit with a stated mission of advancing AI in ways that would be “most likely to benefit humanity as a whole” but incorporated a for-profit business three years later and reassigned nearly its entire staff. Then — perhaps trying to avoid what journalist Robert Evans highlighted at Google, but creating its own controversy in the process — the company paid Kenyan workers less than $2 per hour to make its free natural language processing tool “less toxic.” 

Pretty dangerous, particularly in an era of ever-increasing online privacy concerns, corporate responsibility, and sensitivity to human and workers’ rights. And particularly when the system’s creators call it “a median human” (whatever tf that means) and assert that the “very large businesses” of the future “will get built with this as the interface.” 

“Computers have never been instruments of reason that can solve matters of human concern; they’re just apparatuses that structure human experience through a very particular, extremely powerful method of symbol manipulation. That makes them aesthetic objects as much as functional ones. GPT and its cousins offer an opportunity to take them up on the offer — to use computers not to carry out tasks but to mess around with the world they have created. Or better: to destroy it.” – Ian Bogost, “ChatGPT Is Dumber Than You Think,” The Atlantic, December 7, 2022

Can’t get much more scary or dangerous than that. Or can it?

[Image: A yellow neon sign in all caps reading DO NOT TRUST ROBOTS, hanging in a store window with rows of Spam cans visible through the glass]

That Didn’t Take Long: How They’re Using ChatGPT, and What It Means for Businesses and Content Creators

Less than two months into the popular AI chatbot’s release, the examples were already piling up.

Worse still, as podcast host Leila Charles Leigh warned, “[when] the service is ‘free,’ you’re the service.” When we upload a photo, answer an online question, or write a review, we’re doing “free work” for tech companies, because this is how their AI systems learn.

“This exploitative dynamic is particularly damaging when it comes to the new wave of generative AI programs like Dall-E and ChatGPT. Without your content, ChatGPT and all of its ilk simply would not exist. Many AI researchers think that your content is actually more important than what computer scientists are doing. Yet these intelligent technologies that exploit your labor are the very same technologies that are threatening to put you out of a job. It’s as if the AI system were going into your factory and stealing your machine.” – Nick Vincent and Hanlin Li, “ChatGPT Stole Your Work. So What Are You Going to Do?,” WIRED, January 20, 2023

In other words, even if AI weren’t regularly racist, classist, and gender-biased, even if the new tech had been fine-tuned by fairly paid American workers, and even if industry ethicists weren’t getting unjustly fired left and right (just ask Ariel Koren):

  • ChatGPT would still be problematic for your employees, who’ve spent their careers creating and communicating (and, apparently, feeding what would become ChatGPT)
  • ChatGPT would still be problematic for you, because if your organization chose to replace your digital marketing, PR, communications, sales, customer experience, and/or internal comms experts, your messaging, design, brand reputation, and ROI would suffer

[Image: A worker with shoulder-length brown hair sits at a laptop, hands on top of their head, looking worried or frustrated]

AI expert Gary Marcus doesn’t think ChatGPT is really that smart. Nor does Ezra Klein, Vox co-founder and New York Times columnist and podcast host, who introduced a recent conversation with Marcus this way:

“ChatGPT and systems like it, what they’re going to do right now is they’re going to drive the cost of producing text and images and code and, soon enough, video and audio to basically zero. It’s all going to look and sound and read very, very convincing. That is what these systems are learning how to do. They are learning how to be convincing. They are learning how to sound and seem human. But they have no actual idea what they are saying or doing. It is bullshit. And I don’t mean bullshit as slang. I mean it in the classic philosophical definition by Harry Frankfurt. It is content that has no real relationship to the truth.”

Across the numerous tests he conducted, game designer and author Ian Bogost found that ChatGPT does have knowledge and can express it, but “does not have the ability to truly comprehend the meaning” of what we feed it or what it spits out — and, “when pressed,” will almost always admit that it is “just making things up.”

Of all the examples Bogost shared, this is my favorite: 

“When I asked ChatGPT to generate a cover letter for a university job, it obliged in a competent but uninspired way. It also produced exactly the same letter for a job as a magazine editor as it did for a job as a cannabis innovator in the Web3 space.”

I mean, seriously, this isn’t something we’d accept from our human employees (or applicants), so why should it be OK for robotic ones? And if we’d never replace a high-performing employee with a less effective human being, why would we replace our longstanding content creators with lower-quality, less experienced machines?

As I’ve argued, content creation and digital marketing are creative endeavors — and the output from ChatGPT and similar LLMs, “while fluent and persuasive as text, is consistently uninteresting as prose.” The team behind AI Breakfast, meanwhile, told me ChatGPT is “the calculator for communication,” a definition that sounds anything but creative.

[Screenshot: A Twitter DM exchange between CEI lead content analyst Philip Mandelbaum and the AI Breakfast newsletter. Phil: “need a quote on chatgpt!” AI Breakfast: “ChatGPT is the calculator for communication.” Phil: “and is that a good thing? should we communicate like calculators?” AI Breakfast: “It is the calculator for writing*”]

(Good, my job’s safe.)

How You Should Deal with ChatGPT and AI at Your Organization

I asked Dr. Desmond Upton Patton, a University of Pennsylvania professor and AI expert, for his thoughts; he told me ChatGPT “has the potential to be a valuable tool,” but, he added, “it should be used with care and caution, and its limitations and potential biases should be considered.”

For AI researchers Vincent and Li, content creators “need to pressure the courts, the market, and regulators,” and change the conversation, before people like me lose our livelihoods.

“Discussions over the use of sophisticated AI technologies often come from a place of powerlessness and the stance that AI companies will do what they want, and there’s little the public can do to shift the technology in a different direction,” they write. But that’s not true: “The public has a tremendous amount of ‘data leverage’ that can be used to create an AI ecosystem that both generates amazing new technologies and shares the benefits of those technologies fairly with the people who created them.”

According to the WIRED report, there are “at least” four ways to take action:

  • Direct action (for instance, individuals banding together to withhold, “poison,” or redirect data)
  • Regulatory action (for instance, pushing for data protection policy and legal recognition of “data coalitions”) 
  • Legal action (for instance, communities adopting new data-licensing regimes or pursuing a lawsuit)
  • Market action (for instance, demanding large language models be trained only with data from consenting creators)

[Image: A CMO or CEO in a dark pantsuit addresses her employees from a wooden podium in front of big glass windows]

As I see it, however, there’s only one action that you need to take: 

State in writing, for your entire team, that you will not replace your workers with AI without documented proof that doing so would improve not only business returns but also employee engagement and company culture. Otherwise — and even if they were to become better writers than I am — robots and other LLMs wouldn’t be a smart investment, because there’d be no human workers to train, monitor, or fix them.

Then, feel free to play with ChatGPT and any other AI you’d like; you can even encourage your team to do so. As Bogost promises: 

"ChatGPT isn’t a step along the path to an artificial general intelligence that understands all human knowledge and texts; it’s merely an instrument for playing with all that knowledge and all those texts… LLMs are surely not going to replace college or magazines or middle managers. But they do offer those and other domains a new instrument — that’s really the right word for it— with which to play with an unfathomable quantity of textual material."

Don’t Trust AI, But Need Help With Content Marketing?

Perfect! That’s what we at Customer Engagement Insider do best. Want to learn more?

Download our media kit >>>

 


Image Credits (in order of appearance)

  1. Photo by Brett Jordan on Unsplash: https://unsplash.com/photos/5L0R8ZqPZHk
  2. Photo by Nick Fewings on Unsplash: https://unsplash.com/photos/C2J92BO3qTw
  3. Photo by Elisa Ventur on Unsplash: https://unsplash.com/photos/bmJAXAz6ads
  4. Photo by Unsplash+ in collaboration with Getty Images on Unsplash: https://unsplash.com/photos/ddkKoSLPsLI
