Terrence J. Sejnowski, a pioneer in computational neuroscience, explains how machine learning has already fundamentally transformed the nature of human life.
AI and the “deep learning” revolution have brought us autonomous vehicles, greatly improved online translation, fluent conversations with bots such as Siri and Alexa, and enormous profits from automated trading on global stock exchanges. Deep learning networks can even play poker better than the world’s best professional players.
Terrence J. Sejnowski is a professor at the Salk Institute for Biological Studies in San Diego, California, where he directs the Computational Neurobiology Laboratory, and is director of the Crick-Jacobs Center for Theoretical and Computational Biology. His research in neural networks and computational neuroscience played an important role in the founding of deep learning.
Sejnowski is the author of The Deep Learning Revolution, and with Barbara Oakley, he also created and taught Learning How to Learn: Powerful mental tools to help you master tough subjects, the world’s most popular online course on the subject.
Your book discusses applications of deep learning, from self-driving cars to trading, but which parts of the economy will be most impacted and in what kind of timescale?
Every sector of the economy is going to be affected in much the same way that the Industrial Revolution enhanced physical power through the invention of the steam engine, which led to the creation of factories and electricity, and eventually transformed all of civilization. We’re now living in a world where every aspect of commerce, entertainment and social interactions has been affected by those developments. AI is a similar transformative technology.
Which impacts are going to be the most important, no one knows. It’s too difficult to predict. This can be illustrated by the development of the internet. When it was introduced in 1995, nobody could have imagined how it would affect every aspect of our life. The internet has transformed entertainment, shopping, social media and even politics. These technologies have unimagined consequences.
To what extent is it going to fundamentally change the nature of human life?
Any new technology can be used for good and bad. It’s something that can be seen with the internet. When it first came out, everybody was enthusiastic about how it was going to allow information to be freely available and how we could now talk to people in different countries. That was the good part, but the bad has also become clear with people now able to spread misinformation quickly. It can be very hard for people to know what’s true. This reflects on human beings, as it’s human beings doing the damage, and not the technology.
All these technologies take decades to go from proof of principle to something widely disseminated and scaled up. Scaling up is the most difficult part because it requires a tremendous amount of infrastructure. When automobiles were invented, horses were still being used in the streets. Streets were made of dirt and mud, but once cars were common, they had to be paved. Decades of work went into creating roads that cars could drive on. Building a car was just the first step; creating the infrastructure a car can use, ensuring its safety and making enough cars for everyone was a hundred years' worth of work. It was a process that couldn't be sped up, as scaling is constrained by the physical reality of how much must be done and created.
Part of the reason why it feels as if AI has expanded so quickly is that a lot of the infrastructure was already in place, which in this case would be cloud computing. Edge devices like cell phones take advantage of that because you’re communicating directly to the cloud. AI is a computation-intensive technology. In order to be able to take what we have now and expand it so that it has more capabilities means that we have to expand the cloud. That’s the equivalent of making superhighways in the air that have much more bandwidth and a new class of chips. Again, that’s a long process.
How complex is the human brain compared to the deep learning machines?
If you look at the biggest deep learning networks, they have hundreds of millions of weights and people are beginning to design ones that have a billion. Deep learning is a simple model of the cerebral cortex, which is a thin layer that’s around five millimeters thick on the surface of your brain. If you look at just a single cubic millimeter of cortex, it contains a billion synapses. So the biggest deep learning network is just a tiny piece of your brain.
That shows you how far away we are from scaling up. We still have another factor of a million before we get up to the computational complexity of the human brain, and it may be even worse because we are using simple processing units while nature uses complex neurons. There is additional processing going on within each neuron that is adding computational power to the whole network. Neuroscience has made a lot of progress in understanding what those extra potentially useful computational principles are.
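The "factor of a million" gap can be checked with a rough back-of-envelope calculation using the figures in the passage: roughly a billion weights in the largest networks and roughly a billion synapses per cubic millimeter of cortex. The total cortex volume used below (about 5 × 10^5 mm³) is an outside order-of-magnitude estimate, not a figure from the interview:

```python
# Back-of-envelope comparison of a large deep network to the human cortex,
# using the rough figures from the passage.
network_weights = 1e9       # largest deep networks: about a billion weights
synapses_per_mm3 = 1e9      # about a billion synapses per cubic mm of cortex

# Cortex volume of ~5 * 10^5 mm^3 is an assumed order-of-magnitude estimate
# (not stated in the interview).
cortex_volume_mm3 = 5e5

total_synapses = synapses_per_mm3 * cortex_volume_mm3
gap = total_synapses / network_weights

print(f"estimated cortical synapses: {total_synapses:.0e}")
print(f"network is smaller by a factor of ~{gap:.0e}")
```

Under these assumptions the network comes out smaller by roughly half a million times, consistent with the "another factor of a million" the interview describes, before even accounting for the extra computation inside each neuron.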
How quickly and in what way do you see the combining of technology and biology? And what needs to be done to make it happen?
It’s something that’s already happening, but we’re limited by two things: first, by what we know about the brain, and second, by the computational power available. As I mentioned, the infrastructure still needs to be put in place, which is something that is improving quickly. There are about a hundred startup companies currently designing new machine learning chips. Google already has its TPU (tensor processing unit), which has improved efficiency by a factor of 10 to 50. An entirely new generation of machine learning chips is being developed, and it’s going to make things even more efficient. That’s billions of dollars of infrastructure being built. Everybody’s come to realize that this is a new type of chip worth investing in. It’s going to be a huge market.
What do you see as the appropriate role of government oversight and involvement in deep learning?
Traditionally, companies are shortsighted in terms of projects. They’ll invest in a project where they can see the payoff in just a few years. That’s where governments come in, when it’s necessary to have a long-term goal that may take decades. A lot of government support that goes into basic research may not pay off for 50 years.
An example is cancer research. Back in 1971, Richard Nixon declared a war on cancer and pumped a lot of money into the NIH. What came out was a much deeper understanding of the problem. Researchers were able to show that cancer is a genetic and heterogeneous disease, and that a whole lot of different pathways lead to cancer in different parts of the body.
Fast-forward and here we are. There are several cancers that have been cured, such as non-Hodgkin lymphoma. It took 50 years to go from proof of principle of what the underlying problem was to the point where we can actually design cures for these incredibly devastating diseases. The same is true for technology.
If you look at computers, they were invented in 1956 and it’s taken 60 years to get to the point now where they are powerful enough to actually solve difficult problems. You have more computing power in your cellphone than supercomputers did back when I got my first job, which cost $100 million. That was the era where we did the pioneering work. We had these very slow computers, but they were fast enough so that we could do simulations and test algorithms and we managed to prove a principle. But it took 40 years to go from proof of principle to the point where it becomes practical.
To what extent is deep learning being used for defense and security purposes?
I don’t think anybody knows what the government is using it for. But I can assure you that they’re very interested in the technology. Military applications would be ones that have to do with guided missiles and making smart bombs smarter. One of the interesting problems is that, when there is a war, humans are the ones on the front line. Now autonomous vehicles and airplanes are being created to collect data, but the problem is that they’re controlled by humans on the ground. So, what they’re probably working on is putting deep learning into airplanes so that they can start making their own decisions.
Are there any differences in how companies in China and the US are thinking?
I don’t believe there to be significant differences. All of the knowledge is out there in public because it’s all been generated by academics. Even my friends at Google, like Geoff Hinton, are allowed to publish their new work. In fact, none of the early algorithms were patented. And we did that purposely because we felt that this was not something we wanted to profit from personally. We thought that it was more important for other researchers to have access to the same insights and the results of the experiments that we did. It eventually paid off, because we created a community that shared knowledge and grew. I’m sure there are companies who are working on applications that are proprietary, but all the basic research is out in the open. And I think that’s true in China too.
Do you think there is a difference from a commercial perspective in how companies in China and the US think about deep learning or AI technology?
It is possible. Let’s use the example of facial recognition. Apple has facial recognition on their phones, but it’s not being used in public to track people like it is here in China. That’s an example of a difference in the application of existing technology along with the issue of access to data. However, I think that both countries have large data sets. I was at a meeting where I gave a talk and someone from the audience was saying that they were having trouble getting financial data. It’s having access to the data that makes a big difference. Some companies have more data than others and that gives them an advantage.
Big Data can provide huge value in terms of social benefits, but to be successful there needs to be a significant gathering of personal data. What is your view on this moral dilemma?
I think privacy is a luxury; it’s great if you can get it, but most people can’t afford it. There was an interesting study that posed the question, ‘How much would you be willing to pay Google to be able to use their search engine?’ It was interesting because most people aren’t willing to pay anything, because they’re so used to getting it for free. They’re basically saying that their privacy has no value whatsoever, and I think that most people feel that way.
There are others that feel strongly about privacy and are willing to spend hundreds or thousands of dollars on protecting it because they can afford it, but a lot of people don’t have the money to do so. It may be different in other cultures, as some might feel vulnerable and want to avoid giving away information, but for other cultures it’s not a problem as everyone feels that it’s not something that will hurt them. That’s where the problem lies. Because this is all so recent, we don’t know how it can hurt people. But we can already see a few problems, for example, with medical data. If it becomes widely available for deep learning, you can imagine that insurance companies will use the data to avoid insuring people who are predicted to have serious health problems.
There could be some new data set out there that nobody has yet thought of, and when it’s accessed and applied to AI, it is going to have consequences for the future that we can’t imagine. Look at how the internet created these incredible opportunities and problems that nobody thought of. What I’m more concerned about is the unknown unknowns.
The question is a cliché, but it’s an important one: Will AI and robots end up controlling everything?
We have to put it into perspective. Humans created AI and robots, so we set the rules. If the robots get loose, it’s our fault, and so I’m pretty sure we’re going to be careful. It’s going to take a lot longer to create robots than AI because the infrastructure for creating a body similar to ours is much more complex than the software that people are using to create deep learning networks. Even creating something that has the dexterity of a hand is so incredibly complex that we’re nowhere near that.
I can’t make a good prediction for when this would happen, but I’m pretty sure it’s possible and I don’t see any reason why it wouldn’t happen in our lifetime. It’s going to depend on computer power and we’re still a million miles away from having the power of the human brain, so maybe it’ll take another 40 years, who knows?
How do people significantly younger than you approach and view AI compared to those in your generation?
They don’t have to struggle the way we did. Everybody believes in what is going on in AI right now. When you’re working on something that nobody believes in except you, it’s a lot more difficult to make progress, because you’re on your own.
They’re also living in a time when there’s a consensus about value in research and where money should be invested. That having been said, we still need visionaries. We still need young people who are willing to take risks. Most entrepreneurs fail, but the ones who make it have a massive influence. Young people are willing to put their careers on the line and that’s what’s going to propel the future.