By Tom Nunlist

Man and Machine: an Interview with Brian Christian

December 18, 2017

Brian Christian, author of The Most Human Human and Algorithms to Live By, discusses the gaps and overlaps between humans and machines.

Decades before Siri and Alexa began battling it out for best virtual assistant, computer scientist Alan Turing proposed the eponymous Turing Test of machine intelligence. The test goes: if a human judge cannot, after a text conversation, determine whether they are talking to a human or a computer, then the computer is “intelligent” as far as that judge is concerned.

Today, the Loebner Prize, an annual competition in artificial intelligence, sets a panel of judges the task of finding via the Turing Test the “Most Human Computer” and also the “Most Human Human,” or the person the judges least often mistake for a computer. In 2009, author Brian Christian entered the competition and later produced the best-selling book The Most Human Human, which investigates the nature of intelligence. His second book, Algorithms to Live By, co-authored with cognitive scientist Tom Griffiths, was published last year.

In this interview with CKGSB Knowledge, Christian dives into the ideas underlying both books.

You hold degrees in philosophy and computer science. Why did you start with such a dual-track approach and how has that influenced your career?

I have always been motivated by curiosity and by the big questions: “What does it mean to have a mind?”, “What is the nature of intelligence?”, “What is the nature of reality?” Philosophy gives us a way of framing these questions, but the rigor available in computer science offers a set of tools and insights that, for me, are also strikingly applicable to that set of questions. There are fertile intersections between the two areas, which I have explored in my books.

The first, The Most Human Human, investigates the question of intelligence. What are the hallmarks of intelligent behavior? What is the nature of interpersonal communication? And, at the broadest level, what have we learned about what it means to be human by attempting to build machines in our own image? In large part, that’s the story of what we have learned about ourselves from our failures to replicate certain aspects of our own intelligence.

The second book, Algorithms to Live By, in a way takes the question from the flip side—what do minds and machines have in common? And what are the things that we can learn from the sometimes unexpected parallels between problems in computer science, and problems in our everyday lives?

There is a dialogue between the two books, where they almost ask the same question from two different sides: what do we learn about the differences between humans and machines, and what do we learn from the similarities?

The first book is a critique of the perception that a certain dehumanization results from our constant interactions with and through machines. But given the pervasiveness of digital culture, how do you begin to fight back?

There is a paradox that as communication tools become more powerful, we are communicating with one another in ever lower-bandwidth forms. In the last century, we went from meeting in person to talking on the phone. We went from talking on the phone to writing emails. Then we went from writing emails to texting. And now from the text message to the emoji, or to the single-button “Like.” We have almost reduced human conversation to its logical minimum, literally in some cases to a single bit of information. I think this has a homogenizing effect.

Another example is the Gmail Smart Reply, which includes automatic suggested replies to messages. If someone proposes a meeting, it might offer “sounds good” and “sorry, I can’t make that.” But we should be mindful of what we are trading off in that equation of efficiency. I think the Turing Test gives us the perfect illustration. In a Turing Test you have nothing except the idiosyncrasies of your word choice to assert your identity.

Before we get into talking about the second book, can you demystify the term “algorithm”?

The concept of algorithms far predates the computer, and arguably predates mathematics, and so one of the goals of the project was in fact to re-humanize them. You can think of an algorithm as just a discrete series of steps, a process that you follow to get something done. Any process that you can break down into steps is an algorithm, including a cooking recipe.

Computer science gives us a way of recognizing some really fundamental things in everyday life. One of the examples the book gives is if you are hosting a party, or if you are at a large dinner, there is a moment where everyone shakes hands with one another in greeting. You might have noticed that when there are more than a few people there, it takes a noticeably long time for everyone to make sure that they have shaken everyone else’s hand.

Computer science gives us a language for identifying what’s going on here. And so, for example, the number of handshakes that need to happen grows on the order of n squared, the square of the number of guests at the party; computer scientists would call this a quadratic algorithm. It doesn’t scale well! Part of the real value of computer science is that it gives us a vocabulary and a rigorous set of tools for identifying even these everyday things that are around us in life.
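To make the quadratic growth concrete, here is a minimal Python sketch (an illustration of the point, not code from the book). Every distinct pair of guests shakes hands once, giving n(n-1)/2 handshakes:

```python
from itertools import combinations

def handshake_count(n_guests: int) -> int:
    """Every distinct pair of guests shakes hands once: n * (n - 1) / 2 pairs."""
    return len(list(combinations(range(n_guests), 2)))

for n in (5, 10, 20, 40):
    print(f"{n} guests -> {handshake_count(n)} handshakes")
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780: doubling the guest list
# roughly quadruples the handshakes, the signature of quadratic growth.
```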

You make the point in the book that despite computers being extremely powerful, there are problems that cannot be solved by brute force of calculation. To arrive at solutions, you need to introduce an element of randomness or simplification. How has developing algorithms to tackle these types of tasks helped change our understanding of handling difficult tasks more broadly?

One of the most valuable contributions of theoretical computer science has been complexity theory, a way of understanding and ranking how difficult problems are. In broad terms, you could say mathematics is about finding the correct answer to a problem, while computer science is about deciding how hard the problem is.

Computer scientists deal with what are known as “intractable” or “NP-hard” problems, for which there is simply no known scalable way to get the exact correct answer every time. To address them, computer scientists turn to a toolkit of strategies: settling for approximate solutions, for example, or for algorithms that are correct only most of the time.

One of my favorite examples comes from the world of encryption. If you want secure banking or commerce, the starting point is usually generating an enormous random prime number, and that requires an efficient way of determining whether a large random number is in fact prime. One of the best ways to do this is the Miller-Rabin test, which can wrongly declare a composite number prime as much as 25% of the time in any single run.

We asked the developers of OpenSSL, an open-source library for secure communications that uses this test, what they do about that, and the answer was that they just run the test 40 times and accept that an error rate of 25% raised to the 40th power is good enough. And this is in banking and even in military applications.
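For readers curious what that looks like, here is a compact Python sketch of the Miller-Rabin idea (a textbook version for illustration, not OpenSSL’s actual implementation). The key property is one-sided error: a prime always passes, while a composite slips through any single round with probability at most 1/4, so 40 independent rounds push the error below (1/4)^40:

```python
import random

def miller_rabin_round(n: int) -> bool:
    """One randomized round: False means n is definitely composite;
    True means n passed this round (probably prime)."""
    # Write n - 1 as 2^r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    a = random.randrange(2, n - 1)  # random witness
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(r - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def is_probably_prime(n: int, rounds: int = 40) -> bool:
    """A composite n passes each round with probability <= 1/4,
    so 40 rounds bound the overall error by (1/4) ** 40."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return all(miller_rabin_round(n) for _ in range(rounds))

print(is_probably_prime(2**61 - 1))  # True: a known Mersenne prime
```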

The deeper point is that computer science really gives us a way of thinking in new terms about what it means to be rational. Behavioral economics has highlighted the idea that people are fallible: they make mistakes, they have cognitive biases, they behave “irrationally” and so forth. Computer science, I think, offers a bit of a different story: many of the problems that we face in life are simply hard, that is, computationally intractable. In many real-life situations we trade off the quality of the answer or decision that we ultimately get against the pain or cost of actually thinking about it.

Tell us about one such tradeoff situation.

A classic one is the explore/exploit tradeoff—how much time do you spend gathering information, and how much time do you allocate for using the information you’ve got? Computer scientists refer to this as the “multi-armed bandit” problem, which references the “one-armed bandit,” a nickname for casino slot machines.

It goes like this: in a casino, each slot machine is set to pay out with some probability, and it is different for each machine. If you go to play for the afternoon you will want to maximize your return. This involves some combination of trying different machines out and some amount of time cranking away on the machine that seems the best.

For much of the 20th century, the question of what exactly constitutes the best strategy was considered unsolvable, but a series of breakthroughs on the problem over the last several decades yielded some exact solutions and broader insights. The details of the optimal algorithms are difficult to explain concisely, but the key consideration is how much time you have. If it is your final moment in the casino, you should pull the handle of the best machine you know about. But if you are going to be in the casino for 80 years, then you should spend almost all your time initially just trying things out at random.
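The exact index policies are hard to summarize, but a simple epsilon-greedy strategy (a common baseline in the field, not the optimal algorithms Christian refers to) captures the explore/exploit tension in a few lines of Python: with small probability you try a random machine; otherwise you pull the one with the best observed payout rate.

```python
import random

def epsilon_greedy(payout_probs, pulls=10_000, epsilon=0.1):
    """Explore a random machine with probability epsilon;
    otherwise exploit the best-looking machine so far."""
    n = len(payout_probs)
    wins, plays = [0] * n, [0] * n
    total = 0
    for _ in range(pulls):
        if random.random() < epsilon:
            arm = random.randrange(n)      # explore
        else:
            rates = [wins[i] / plays[i] if plays[i] else 1.0
                     for i in range(n)]
            arm = rates.index(max(rates))  # exploit
        reward = 1 if random.random() < payout_probs[arm] else 0
        plays[arm] += 1
        wins[arm] += reward
        total += reward
    return total

# Three machines with hidden payout rates; play gravitates toward
# the 0.7 machine while the others are still sampled occasionally.
print(epsilon_greedy([0.2, 0.5, 0.7]))
```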

These algorithms now power huge parts of the digital economy. Google, for example, has an enormous pool of ads that it could serve for any particular search query. It could always serve the ad that got the most clicks historically, but it also has many ads it has never served and needs more information about. Bandit algorithms balance exploiting the proven performers against exploring the untested ones.

In more personal terms, I also feel like this is an idea that helps us make sense of the arc of a human lifespan—why children seem so random and older people seem so set in their ways. Well, in fact they are both behaving optimally, with respect to how long they have in life’s casino.

Might thinking about life in terms of algorithms take some of the magic out of it? For example, there seems to be a qualitative difference between “trying to find the optimum romantic partner” and “falling in love.”

In many areas of life there is a mixture of an intuitive, emotional, ineffable process and a more deliberate, intentional, rational process. Buying a house is one example of the two working together.

Sometimes you walk into a house and something doesn’t feel right, and you may not ever be able to articulate why. Or on the other hand, you might feel good as soon as you set eyes on it. Nobody can tell you what is good, or what isn’t—but there is an algorithm that can help you with the more rational part of the equation, which is whether to settle for something good or hold out for something even better. This is called an “optimal stopping problem,” and the answer is surprisingly specific: 37%. The optimal way to pick a candidate is to get through 37% of the available options or of the time allotted, and then commit to the next option that is better than all previous ones.
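A quick simulation (illustrative Python, with numeric ranks standing in for house quality) shows where the 37% figure comes from: skipping the first 37% of options and then committing to the first one that beats them all lands the single best option about 37% of the time.

```python
import random

def simulate_37_rule(n=100, trials=20_000):
    """Secretary-problem simulation: look at the first 37% without
    committing, then take the first option better than all of them."""
    cutoff = int(n * 0.37)
    best = n - 1                    # the highest rank is the best option
    successes = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)  # random arrival order
        best_seen = max(ranks[:cutoff])
        chosen = ranks[-1]          # forced to take the last if none wins
        for r in ranks[cutoff:]:
            if r > best_seen:
                chosen = r
                break
        successes += (chosen == best)
    return successes / trials

print(simulate_37_rule())  # ~0.37: the chance of landing the very best
```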

When it comes to something like romance, most people, including me, are resistant to the idea of a methodical approach—it’s just not romantic. But in practice, we are more logical about it than we realize. If you’re the parent of a teenager and the teenager says, “You know, I met this amazing person who is going to a totally different college, so I’m just going to put my life trajectory on hold and follow them across the world…” you would say, “No way! You think this is the relationship you should stake the direction of your life on, but maybe if you just go to your college you will meet someone else.” But if someone at 35 says the same thing, “I met this incredible person, and I am going to move across the world,” one is more inclined to say, “Go for it! You know what you’re looking for at this point.”

This is anecdotal, but I think it is interesting that 37% of the average life expectancy in the first world, roughly 78 years, comes to about age 28-29, and the average age at which people marry is also 28-29. There is a funny sense in which these principles may offer us a macro-level understanding of societal norms and patterns, even if we are reluctant to apply them at the individual level.

What will you work on next?

I am working on a book about the intersection of computer science and ethics. I think that’s the next big thing. As we were discussing, philosophy and computer science are very much in dialogue with one another, and this seems to me like the next wave that’s breaking. We are increasingly deploying automated systems to make consequential moral judgments, like who gets parole. The question is how we ensure that the systems we entrust with such decisions actually uphold our sense of human and civic values. There is a fascinating conversation just beginning to happen, and that is what I am researching right now.
