Editor’s Note: Who’s Afraid of AI?

The global conversation about artificial intelligence (AI) flits at the edges of my peripheral vision like some dark Tinkerbell humming a white-noise tune.

It’s persistently there, in other words, but it has elicited only a swat response and failed to capture my full attention.

I know I interact with AI daily in ways both mundane and complex, from using my toaster to being gamed on Royale Match. There are also numerous ways I don’t know about, or ways I know about but hadn’t considered an AI thing.

We were having a conversation in the newsroom about the ethics of photo manipulation for newspapers, for example. Those ethics predate current technology – it’s not OK to present a manipulated photo as real – and yet today’s software, like Photoshop, has tools designed solely to help you create as deep a fake as you want.

That didn’t seem like an AI issue at the time, but of course it is. 

To me, that’s the danger AI presents. We don’t know what we don’t know about it. While that can be said about many things, AI can cause us to distrust or doubt everything we see, read or hear. Maybe that would not be such a bad thing if it required us to become more discerning individuals. However, I doubt that will happen. We’re often too busy to prioritize self-education. It’s also something of a habit for us to believe, or want to believe, the deep fake that aligns with our perspectives, opinions or political views.

I recently read a Wired story about a Pew survey that finds “a majority of Americans are more concerned than excited about the impact of artificial intelligence – adding weight to calls for more regulation.”

That seems to be coming.

President Joe Biden issued an executive order Oct. 30 designed to advance the safe and responsible use of AI by harnessing its benefits while mitigating its risks. The benefits listed include our ability to solve challenges “while making our world more prosperous, productive, innovative and secure.”

Irresponsible AI use, the order warns, “could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”

The executive order and a fact sheet about it can be found at whitehouse.gov if you’re reading this in print, or by using the hyperlink above if you’re reading this digitally. It’s quite long, and feels like any of the terms and conditions we automatically agree to without ever reading – despite their ability to protect us by illuminating what we don’t know.