Mission Forward

Better Bots?

This article is part of Finding the Words, a newsletter that delivers practical insights on the day’s issues.

In the past five years, I’ve learned more than I ever imagined I would about the effects of technology on society, on democracy, and on the human condition. I’ve learned the wonderful benefits of technology in education, in our health, and in our ability to stay connected through a global pandemic. I’ve also learned much more about the unintended effects of technology when the companies behind it take their power for granted.
 
So it may come as no surprise that this week, I’m focused on Artificial Intelligence (AI), and its increasing role in professional communications. For those of you following the headlines, it’s time to chat about ChatGPT.
 
ChatGPT was launched by OpenAI in November 2022 and apparently reached 1M users within its first five days. With an ability to create conversational language, ChatGPT has been making the rounds as a tool for writing short stories, newsletters, legal documents, and even term papers.  
 
Here’s how OpenAI describes it on their homepage:
“We’ve trained a model which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
 
Using AI for personal or professional communications is far from a new concept. You’ve likely been using spellcheck for years and may also be intimately familiar with the AI-powered writing assistant in Microsoft Editor. But what is new is how advanced the conversational features are becoming in AI platforms: Chatbots and artificial intelligence tools like ChatGPT can almost instantly produce well-written and sophisticated content to inform a variety of documents, from college essays to real estate contracts and even draft legislation.
 
Given the hype, I went looking to see how some of you—our readers—have been using it. One day last week, I found out that:

  • One of you used ChatGPT to come up with new headline ideas for a press release.

  • One of you used it to write a fundraising letter.

  • One of you used it to create a more descriptive real estate listing.

  • And one of you used it to craft a basic contract for your small business.


That’s a lot of interesting applications. Just as it’s worth understanding how people are using the tool, it’s also important to understand how people are reacting to this new form of AI.

Here’s what one of you said about ChatGPT:

 “It’s not perfect but it was a great starting point. It gave me fresh ideas I hadn’t thought of, and spurred me to come up with a press release title I wouldn’t have thought of on my own.”
 
Harmless, perhaps… or is it? I respect those who advocate for the use of AI tools in small, under-resourced organizations, and I see the potential of what it can do for differently-abled communicators, but we must have our eyes wide open to the well-documented limitations and negative effects of this technology as well:

Fionna Agomuoh is a technology journalist who has been writing about ChatGPT. Here are a few of the insights she raised in her recent article for Digital Trends:

  • It may spit out some great content, but it’s a breeding ground for plagiarism. Students are being caught using ChatGPT to plagiarize schoolwork in high schools and universities. College professor Darren Hick shared his story with the New York Post about catching a student who used the chatbot to produce a 500-word essay. Hick detailed that not only was the submission flagged for AI usage, but the text read like it was written by a “very smart 12th-grader” or someone learning to write who hasn’t developed their own style.

  • It may be creative, but it can spread misinformation and bias. No technology program, as impressive as it seems, can remove misinformation or bias from what it delivers to you. At the end of the day, AI platforms are only as good as the data fed into them. UC Berkeley psychology and neuroscience professor Steven Piantadosi shared on Twitter one example of the worrisome results he uncovered when inputting specific text into the chatbot.

  • It may make some tasks easier, but it has a direct effect on human connection. If cheating and the spread of misinformation aren’t enough, we also need to consider the platform’s effect on writing and analytical skills. As philosopher Evan Selinger shared in a fascinating BBC article, “By encouraging us not to think too deeply about our words, predictive technology may subtly change how we interact with each other.”


Bottom line: At best, AI tools like ChatGPT can spark fresh ideas or inspiration for your writing. At worst, they spread misinformation and reinforce bias while reducing our human abilities to connect, learn, and process information. It’s not a case for avoiding the technology, but a reminder to use it wisely.

Learn more about this issue, in human form. Check out this great read on ChatGPT from our friends at The Markup, a nonprofit news publication focused on the impact of technology on society.


This post is part of the Finding The Words column, a series published every Wednesday that delivers a dose of communication insights direct to your inbox. If you like what you read, we hope you’ll subscribe to ensure you receive this each week.
