Can AI Make Us More Human?

This article is part of Finding the Words, a newsletter that delivers practical insights on the day’s issues.


In December 2017, MIT launched a research project called Deep Empathy in collaboration with the Scalable Cooperation group and the UNICEF Innovation Office. The research team had been tracking the Syrian war, which by 2017 had affected more than 13.5 million people in Syria, including 80% of the country’s children. The researchers wanted to examine whether Artificial Intelligence—by definition, the least human of tools—could be used to increase empathy and drive human responses among people who were far away from the war-torn country.
 
So, the team set out to learn the characteristics of war-torn Syrian neighborhoods. They applied those characteristics to transform images of cities all over the world, simulating how cities like Boston or L.A. would look if they suffered a similar disaster. The team then showed these simulated images in a controlled experiment to elicit empathetic responses from viewers, all while training the AI to become more empathetic in the process.

While the project has since been archived, Deep Empathy is a terrific example of AI activated through the lens of humanity, and a reminder that technology can help people see things differently and make us more human as a result. That’s of course not always the case. While the bots are learning, they’re still often spitting out information that is inaccurate, misleading, or biased.

So, how and when should you use AI in your workplace? You can start by seeing AI as a necessary and supportive tool for your future.

As my colleague and Mission Partners' Creative Director Anne Kerns passed along to me last week, "Designers won’t be replaced by AI. They'll be replaced by designers using AI." The same sentiment holds true for most roles. Yes, we should be engaging with AI: as a tool to augment our work, not replace our work or us as humans.

Consider AI as Augmented Intelligence rather than Artificial Intelligence.

This is not a new concept, but one that feels especially important to reinforce now. Here's what IDEO founder Tim Brown said of augmented intelligence back in 2017 in his blog, Design for Augmented Intelligence:

The choice of “augmented” rather than “artificial” isn’t just fancy wordplay — augmented intelligence extends the capabilities of humans in a way that feels natural. If the intelligence of our devices, systems, and relationships feels artificial, it will never stick.
 
As I shared in my previous column about AI, I see great potential in what AI can do for the future of work and society. However, we must take an eyes-wide-open approach to the well-documented limitations and unintended consequences of AI as well.

Here are a few considerations:

  • AI Without Humanity Is Dumb. Literally, dumb. As Tim Brown of IDEO said in that same 2017 blog, "Data by itself is inert: dumb raw material. Making things smart means making them dynamic, flexible, and evolutionary…existing in relationships." If you think you can use AI as a tool on auto-pilot, without oversight or human engagement, you've been warned: that's just plain dumb.

  • Humanity Reliant on AI Is Dumb, Too. As the philosopher Evan Selinger shared in a fascinating BBC article, “By encouraging us not to think too deeply about our words, predictive technology may subtly change how we interact with each other.” As I shared in a related blog post, it is becoming ever more common for humans to suffer from digital amnesia, also known as the Google effect, which sets in when we rely too heavily on technology to do our thinking for us.

  • Just Because AI Can Do Something Doesn't Mean It Should. There are all sorts of applications for AI, but that doesn't mean all of them should be adopted. Consider carefully how you will and will not use AI in your work. Set criteria for your workplace if they don't already exist, and be aware of those unintended outcomes that tend to do more harm than good.

  • Don't Take AI Tools, Information, or Recordings Lightly. AI note-taking tools can save time, and they're very impressive. But who owns the data when you're using an AI notetaker? Where is meeting data being stored? And who has access to it? Every AI tool is different, so know the answers to these questions (and take them seriously) before inviting a virtual notetaker to join your next meeting. As for consent, keep in mind that several states now require all parties to give consent before a conversation can be recorded. While some AI note-taking tools can be added to calendar invites, that doesn’t pass as a clear request for consent in these states.

 
I'll bring this topic into the new season of the Mission Forward podcast, which starts again just a week from tomorrow. So, if you've got pressing AI questions related to communications, or you're stewing on something I've shared in this week's post, send me an email and let's keep the conversation going.
 
Bottom line: AI is an important tool and an enduring part of our future. But we don't need to lose our humanity in the process of using it. Consider how your team's intelligence can be augmented by technology rather than replaced by it, and you'll be future-proofing your work in the process.

Ready for more? Listen to this article on the Mission Forward podcast.


This post is part of the Finding The Words column, a series published every Wednesday that delivers a dose of communication insights direct to your inbox. If you like what you read, we hope you’ll subscribe to ensure you receive this each week.

 