Emerging Intelligence: An Interview with Richard Yonck

Humans occupy a unique place in nature. Our evolution has been intertwined with pattern recognition and the development of technology in a way unlike that of any other animal on the planet. We often consider our primary evolutionary advantage to be our intelligence—yet we are still limited in our ability to define what “intelligence” actually means.

Futurist Richard Yonck speaks to audiences and writes about artificial intelligence and other emerging trends and technologies. Yonck sat down with Town Hall’s Alexander Eby to discuss how we define intelligence today and to introduce technologies and ideas that are redefining our understanding of intelligence, both within and beyond ourselves.

AE: Your new book, Future Minds, is about the many ways our world is becoming more and differently intelligent. Can you expand on that?

RY: Absolutely. Future Minds takes a very broad view of the purpose of intelligence in the universe in order to better understand the ways it is likely to develop here on Earth over the next century and beyond. Using what’s known as a Big History framework that spans the life of the cosmos, it explores the possible fundamental drivers behind the development of complexity, life and intelligence. From there, it dives into the work and research into intelligence-related technologies that are rapidly transforming our world today. From advanced AI to brain-computer interfaces to biotechnological augmentation and more, it looks at the possible paths that lie ahead for humankind.

AE: Broadly speaking, how do we currently define “intelligence”? Do you agree with that definition—and if not, how should we be defining intelligence?

RY: There is no single unifying definition of intelligence. In fact, based on the literature, there are literally hundreds of such definitions. This is in part because it’s one of those words that AI pioneer Marvin Minsky used to call a “suitcase word.” Not rigorously defined, it carries many different meanings and associations, which creates all sorts of challenges when we talk about it. In the opening of the book, I explore a range of these in order to try to get to the kernel, the core meaning for the purpose of this discussion. This isn’t to say it’s the only way to define intelligence. I’m just trying to establish a starting point.

In writing Future Minds, I wanted to understand why intelligence exists at all in this cold, impersonal universe of ours. How do the basic underlying principles of nature drive the incremental ascent of complexity, life and intelligence? Eventually, this came down to a variation of a definition that comes to us via the field of physics: Intelligence is anything that seeks to maximize its own future freedom of action, whether that effort involves cognitive choice or is merely deterministic in nature. In other words, anything that promotes and perpetuates its own existence or that of its descendants, including its ability to act on and influence the environment to its benefit, can be said to be a form of intelligence. Taken as a relative statement, this can apply to different levels of living, and perhaps even nonliving processes, up and down the scale. Single-celled organisms, plants, animals, including humans, and even technology can all fit along a spectrum when viewed in this way.

AE: What are some of the technologies that are causing our world to become differently intelligent?

RY: Certainly, a lot of people would point to artificial intelligence and deep learning as examples, though many in and outside the field take issue with calling these systems intelligent. But we tend to forget just how young this field and its advances are. The 75 years of AI is barely one human lifetime, and we’ve been wielding technology for some 150,000 generations. Beyond that, our species has existed for an eyeblink of time relative to all life on this planet. So far as I’m concerned, these technologies are only barely getting started.

Beyond this, we have augmentation technologies like brain-computer interfaces, otherwise known as BCIs. Designed to allow us to connect to and operate devices using our thoughts, BCIs strike many people as belonging to the world of science fiction, but they have already come a long way. They’ve been used to send messages on Twitter, to operate wheelchairs, to type on virtual keyboards, and even to play video games. While much of the early research and its applications have focused on restoring lost functions for the physically and cognitively impaired, in time, uses of this technology will extend to much more elective and discretionary purposes.

We are probably only a few decades from BCIs being very commonplace in society. This will continue a long-term trend that has allowed us to access and interact with our technologies by ever more natural and immediate means. The most recent and evident form of this has been the progression of computer user interfaces. Eight decades ago, computers were programmed by physically rewiring them. This was followed by punch cards and punch tape, then using keyboards to type on a command line, followed by the graphic user interfaces of the 1980s and 1990s. Today, our devices are increasingly operated using natural user interfaces including touch, voice, and gesture. Each step of the way, these interfaces have allowed us to interact in more natural ways with our devices, until today, even a young child can easily make use of the amazing processing power of a smartphone. BCIs and other interface technologies will only continue this trend, eventually giving each of us the ability to access immense informational and processing resources using only our thoughts.

Numerous other fields are contributing to a world in which we and our technologies will become increasingly and differently intelligent. Biotechnological neural augmentation, neuro-enhancing drugs and various forms of directed brain stimulation are all taking us in this direction. In the end, we shouldn’t be surprised to see each playing a role according to different needs and circumstances.

AE: Your books and other writing look at many different aspects of the future. What does it mean to be a futurist? 

RY: Again, here’s another word that’s often not well-defined. There are plenty of people who attach the word “futurist” to their resume because they think it makes them sound forward-thinking, but a futurist’s work is much more specific than that. It’s a field made up of people who actively study the future in structured ways, applying rigorous methodologies to try to anticipate and prepare for different potential futures. Many of these people receive formal education in master’s and doctoral foresight programs, while others have transitioned from different fields, many of which often have a strong analytic component. In my case, that’s computer science. Still others are trained and mentored within large corporate environments, such as Intel, Microsoft, and Ford Motor Company.

Though human beings have always had the ability to anticipate what lies ahead, the field really got its formal start following World War II, during the Cold War era. There was a need to better understand weapons development feasibility and timelines, which led to the early think tanks, such as the RAND Corporation. Over time, the need for futurist thinking and foresight extended to and became essential to other parts of government, as well as to corporations and other organizations. Today, most major companies utilize in-house and consulting futurists, though these forecasters and analysts often go under many different titles.

AE: What technologies or ideas are helping us understand intelligence today?

RY: It might be easier to say what fields are not involved. The study of intelligence in its many forms draws from so many different branches of science. Scanning technologies, artificial intelligence, philosophy, cognitive neuroscience, linguistics, and developmental psychology are but a few of the fields that play a role in our developing understanding of the brain’s structure, function and language. Some of this knowledge is being used in the course of trying to model and emulate the brain through technologies like neuromorphic computing, while other areas have inspired neural networks and developmental learning models that are being applied to deep learning.

AE: How likely do you think it is that the refinement of technologies like brain-computer interfaces or deep learning neural networks will result in a new form of intelligence? 

RY: That really depends on the time frame being discussed. If you’re talking about five or even ten years, then I don’t feel it’s very likely. But if you’re asking about 50 or 100 years from now, then it’s much closer to being an inevitability.

I’d also suggest that developments in intelligence will probably remain incremental. In the past, new technologies like written language and literacy, mass communication, the Internet, and smartphones have taken time to develop and be adopted. Each of these has changed what we know, as well as how and when we can know it. Not only has this augmented our intelligence and access to knowledge, but it’s contributed to the rewiring of our neuroplastic brains in order to make better use of these technologies. While many aspects of technological adoption are speeding up, it still takes time to make that happen.

As for computers, neural networks, deep learning, and the like, these are all becoming more capable, but it’s a mistake to think of them specifically in terms of human intelligence. Besides, we already have plenty of human intelligence all around us. The value of artificial intelligence lies in the many ways it differs from our own thinking, allowing us to leverage it to our advantage. This isn’t to say that we can’t or shouldn’t draw on our understanding of the human brain as an inspiration for AI. But biology and technology are such radically different starting points that it would be a mistake to assume we’ll ever develop a machine that can think exactly like a person, at least not for a long time. This isn’t to say AI won’t eventually be able to emulate human thought and behavior quite well. It almost inevitably will, and that raises many of its own issues and concerns. But AI will achieve this through fundamentally different means than the human brain does. That, in part, is what I mean about our world becoming “differently intelligent.”

AE: If humanity is going to leave Earth one day, we may need to be prepared to encounter intelligences beyond our planet. How can we bridge the gap when approaching intelligences that may be very different from our own? 

RY: That’s a really big question and there are several different elements to address. First, it’s not a question of if, but when we leave Earth. From a purely statistical standpoint we have to expand beyond this one planet if we are to survive as a species in the long run. As the famous physicist Stephen Hawking said, “if we want to continue beyond the next 100 years, our future is in space.”

While my book attempts to expand what we identify as intelligence, in part to better recognize it when we do encounter it, both here on Earth and elsewhere, I’m not optimistic that we will meet anything nearly so advanced as ourselves beyond this planet for a very long time. Of course, I could be pleasantly surprised, but I think it may be many millennia before we find ourselves directly interacting with an extraterrestrial species, if in fact any exist within our tiny region of the universe.

AE: We share a world with many animals we often consider to be intelligent (chimpanzees, dolphins, elephants, ravens, octopuses, etc.) How do we apply contemporary discussions of intelligence to these creatures? Could studying these animals prepare us for interactions with artificial or alien intelligences?

RY: There’s a ton of terrific research being done on all sorts of animal intelligence. Many of us tend to think of human intelligence as superior to most, if not all of these, but in fact we’re all just differently intelligent. After all, intelligence is much more than just the neural processes that take place in our brains, of which even we humans are aware of only a tiny fraction. It’s also our metabolism, our emotions, our cells, our genetics, and much more. Whether we’re talking about an ant, a bat, or a whale, that creature’s intelligence has evolved specifically so it can successfully occupy a particular ecological niche. Human beings evolved to fill what’s been referred to as a cognitive niche, which has worked out very well for us. But we’re extremely poorly evolved for many of those other niches, so from those perspectives, we’re really not all that intelligent at all.

AE: What steps could we take as a society to live in an era alongside an emerging intelligence we can’t easily comprehend? How can we foster that comprehension?

RY: The question implies or assumes that the emerging intelligence is separate from ourselves, which is only one scenario. Certainly, if AI ever reaches a certain level or threshold, there are many ways we’ll have to adjust, but it very likely won’t be in the sort of adversarial terms we’ve come to know from the fictions of The Terminator or The Matrix. As I discuss in my book, the biggest issues are probably around what are known as value alignment problems. It could even be argued that we’re already starting to see this. Since people begin from such different origins than whatever technological intelligence may eventually be developed, what is important to us isn’t likely to align with what is important to that advanced AI. As a result, we wouldn’t need to be in direct conflict with the AI in order to experience it as a threat. In fact, the AI wouldn’t even need to be conscious or have what we would recognize as self-awareness. Simply the inability to care about us or it seeing us as an obstacle to what’s known as its “utility function” would be sufficient to get us in a great deal of trouble that could ultimately result in our extinction. While that’s an extreme extrapolation of existing trends, we’d be wise to consider the possibility and seriously explore how best to avoid such an outcome.

But that’s only one possibility. What if we ourselves are the emerging intelligences, given the potential for augmenting our own minds? After all, we are by far the most advanced and adaptable intelligence we know of, so that’s a pretty significant jumping off point to start from. What challenges would this present for different forms of inequality between populations, for how we interact as a society, for maintaining the many features we value in our humanity? As we continue to enhance human intelligence, we’ll have a lot of different things to consider and plan for.

Yet another emerging intelligence we need to think about is our global civilization itself. By many measures, this may be the true emergent intelligence we need to pay attention to. Just as individual cells don’t comprehend the larger body they’re part of, it’s challenging for us to fully appreciate society as its own form of evolving independent intelligence. To be sure, these are still early days and human civilization is still developing. But I wonder how things will appear in a few thousand years, taken from a different, less anthropocentric perspective?

AE: What is your favorite product of human intelligence? Which advancement in the modern era do you appreciate most?

RY: My favorite product of human intelligence would certainly have to be language. Spoken language has only been around for a few hundred thousand years—and the written word, a bare fraction of that. With the development of language, it became possible to assemble and manipulate concepts, to exchange and build ideas, to communicate knowledge between otherwise isolated minds, to share our inner worlds, and to appreciate that others have subjective worlds of their own, all as deep and rich as ours. In the absence of true language, none of this would really exist.

As for a favorite advancement in the modern era, I’d have to say it’s the scientific method. While there are those who see this as its own type of belief system, it is in many ways the antithesis of that. More than just a single area of knowledge or understanding, the scientific method is itself a technology, a series of processes by which real, objective insights about nearly everything can be acquired and built upon over time. All along the way, the capacity for reevaluation and correction of that information exists, leading to a deeply structured body of knowledge based on objective reality that all of humanity can draw on and benefit from. Nearly every aspect of our world has been enhanced, improved, or made possible by the products of the scientific method and probably more than 99% of all existing technology has resulted from its use over the past four hundred years. Now certainly, we’ve reached a stage when we can say that there are things we will never be able to know and answers we can never attain, but even this insight comes to us courtesy of this profoundly beneficial process. As a result, this will remain one of the key drivers in the continued advancement of intelligence and knowledge for the foreseeable future.


Richard Yonck will be joining us at Town Hall next season on 9/15/2020 to discuss his book Future Minds: The Rise of Intelligence from the Big Bang to the End of the Universe. 
