Are You Free Thursday, and Would You Do Me a Favor?

Jul 12, 2022 | Editorial, Town Crier

Part One in an Occasional Series About How You “Do” Town Hall

Hi friends,

For the last 15 years, I’ve visited Munich annually with Barbara and the girls to see their grandparents. This rhythm means I miss a bunch of stuff at Town Hall every summer — and this time, that includes an event that I’m really drawn to, personally.

First, a little backstory. Questions about technology and its social implications have been woven throughout our calendar for years, as far back as 2006, when Ray Kurzweil gave me the chills at a packed Great Hall conversation about his book The Singularity Is Near: When Humans Transcend Biology. Who wants or needs anything like “a singularity”? I asked. (Note: I have since asked the same thing about NFTs and blockchain, but I remain optimistic that I need not fret about the answers to those questions.)

Artificial Intelligence, or AI, is different. In its implementation, we’re asking and answering questions about what (and eventually, who) constitutes humanity, at lightning speed. And when I say “we” I mean the comparatively small group of scientists, researchers, and schemers motivated by altruism, curiosity, or commercial opportunity — or maybe a little of each.

And so this Thursday, July 14, we have the latest stitch in the weave, a powerhouse discussion at what feels like AI’s Rubicon moment. Honestly, that moment was probably years ago, maybe even when Ray released that scary book. But I’m talking about last month’s reporting on a Google engineer’s contention that Google’s AI chatbot LaMDA has been “consistent in its communications about what it wants and what it believes its rights are as a person.”

I think this issue would feel like a bigger deal to all of us were it not for the continually chaotic news cycle permeating our lives. But I have to ask myself what problem, exactly, we’re addressing here, and whether any of this is what the rest of humankind signed up for. And if we did sign up, was it buried somewhere in the small print, like a Terms of Use agreement for some digital appliance, bundled into the next upgrade foisted on our lives?

Fortunately, some of the brilliant, well-meaning folks building the building are coming over on July 14 to talk me down off the ledge. Blaise Aguera y Arcas is a VP and Fellow at Google Research and an active participant in big-picture considerations around AI ethics, fairness, bias, and risk. Melanie Mitchell is a professor at the Santa Fe Institute, a student of Douglas Hofstadter, and a leading forecaster and translator of the implications of AI for the general public. And, just announced: the event will be moderated by Lili Cheng, Corporate VP for Microsoft AI and Research. It will be a spirited discussion featuring three visionaries staring into the world of AI. We couldn’t have asked for more perceptive observers or more respectfully divergent perspectives, and you should come if you can (you can get tickets here)!

And now, here’s the favor — I’ve got a question I’m hoping you can ask during Q&A:

As we invite technology to support/supplant human decision-making across so many fronts, I’ve started to believe that making mistakes and valuing imperfection may be essential to humanity, and that we should resist efforts to use technology to eliminate the fallibility that comes with our incomplete knowledge or poor judgment, or to avoid choices that might fail to fulfill expectations and lead to disappointment. As we begin to invite AI into so many dimensions of life, how can we protect space for fallibility?

It’s time to level with you. For some, Q&A has long been a controversial part of our programs, a place where people make windy speeches (like the one you just read from me, sorry) before a question that feels like an afterthought. But to me, Q&A is central to what makes Town Hall special: bring an open mind, and you get the chance to grapple with a deeply informed perspective on a topic, followed by the honor of interrogating a presenter’s conclusions with a direct question.

Being present for Q&A means you showed up, no matter the weather; you stayed engaged to the end and actively collaborated in making meaning of the event. Great questions help us pick up pieces we missed and help our speakers see new things in their own arguments; great questioners help make meaning for all of us. It’s one of the best ways to “do” Town Hall.

We believe that our programs can be more than infotainment; they help us understand issues and decide how we want to live our lives, not to mention how we express our desires, especially as our daily lives are yoked to intuitions formulated, data gathered, and decisions made by computers that think.

In the end, AI is poised to change a lot about the society we share, and we all have a right to form our own perspectives on it. This program will undoubtedly offer a rich conversation, and I hope you will be there to join it (and to tell me how it went!).

Wier

P.S. We’ve also just announced that the event will feature an appearance by another unintended inevitability of AI: a chatbot evangelist called The Word of the Future. The interactive exhibit by Jacob Peter Fennell and Reilly Donovan was first presented at the Museum of Museums on First Hill last summer. Come early/stay late to be moved by the spirit of a full-on Digital Deity.

