8 Quotes from Ilya Sutskever (from the iHuman Documentary)

The documentary iHuman (2019) interviewed dozens of thought leaders on AI.

But it caught one of them, Ilya Sutskever, co-founder and then Chief Scientist of OpenAI, at a pivotal time in AI history (a year before GPT-3 was released).

He has since left OpenAI, but in my opinion he remains a critical voice for our future AI-centric world.

His thoughts are so important I wanted to document them in one place here.

I rank iHuman #13 in my list of “The Best AI Documentaries” (out of 15).

Ilya Sutskever Quotes

Here are the 8 quotes I found most useful (with timestamps):

“Now AI is a great thing, because AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty. But it will also create new problems.”

— Ilya Sutskever on problems that AI will solve (14:45 – 15:06)

“…it’s going to be important to have these beings’ goals be aligned with our goals. That’s what we’re trying to do at OpenAI—be at the forefront of research and steer the research, steer the initial conditions to maximize the chance that the future will be good for humans.”

— Ilya Sutskever on OpenAI’s responsibility with AGI (13:22 – 14:45)

“Artificial general intelligence (AGI)—imagine your smartest friend with a thousand friends just as smart, and then run them a thousand times faster than real time.

So it means that in every day of our time, they will do three years of thinking. Can you imagine how much you could do if for every day you could do three years’ worth of work?”

— Ilya Sutskever on the power of AGI (26:34 – 27:02)

“AGI is going to be, like, without question the most important technology in the history of the planet by a huge margin. It’s going to be bigger than electricity, nuclear, the internet combined.

In fact, you could say that the whole purpose of all human science, the purpose of computer science—the endgame—this is the endgame: to build this. And it’s going to be built. It’s going to be a new life form. It’s going to make us obsolete.”

— Ilya Sutskever on the historical importance of AGI (30:03 – 30:32)

“We haven’t created the human-level thinking machine yet, but we get closer and closer. Maybe we’ll get to human-level AI in 5 years from now, or maybe it’ll take 50 or 100 years from now—it almost doesn’t matter.”

— Ilya Sutskever predicting timing of AGI (59:07 – 59:14)

“The beliefs and desires of the first AGI will be extremely important, and so it’s important to program them correctly. I think that if this is not done, then the nature of evolution, of natural selection, will favor those systems that prioritize their own survival above all else.

It’s not that it’s going to actively hate humans and want to harm them, but it’s just going to be too powerful. And I think a good analogy would be the way humans treat animals. It’s not that we hate animals—I think humans love animals and have a lot of affection for them.

But when the time comes to build a highway between two cities, we’re not asking the animals for permission. We just do it because it’s important for us. And I think by default, that’s the kind of relationship that’s going to be between us and AGIs, which are truly autonomous and operating on their own behalf.”

— Ilya Sutskever on programming AI and human survival (1:26:00 – 1:27:20)

“If you have an arms race dynamic between multiple teams trying to build the AGI first, they will have less time to make sure that the AGI that they will build will care deeply for humans. Because the way I imagine it is that there is an avalanche—like there is an avalanche of AGI development.

Imagine you have this huge unstoppable force. And I think it’s pretty likely the entire surface of the Earth will be covered with solar panels and data centers. Given these kinds of concerns, it will be important that the AGI is somehow built as a cooperation between multiple countries. The future is going to be good for the AIs regardless. It would be nice if it were good for humans as well.”

— Ilya Sutskever on the arms race, solar panels, data centers (1:27:20 – 1:28:21)

“I think that the problem of fake news is going to be a thousand—a million—times worse. Cyber attacks will become much more extreme.

We will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships.”

— Ilya Sutskever on fake news, cyber attacks and AI weaponry (15:13 – 15:39)

Other AI thought leaders interviewed in iHuman include:

  • Stuart J. Russell – A UC Berkeley professor who co-wrote the top AI textbook. He wants AI to match human values.
  • Max Tegmark – A physicist from MIT who works to keep AI safe through the Future of Life Institute.
  • Jürgen Schmidhuber – He co-invented the LSTM, a neural network architecture used in speech and video applications. He’s been called the “father of modern AI.”
  • Ben Goertzel – He started SingularityNET and has been talking about super-smart AI for years.
  • Rumman Chowdhury – She led Twitter’s team on algorithm bias and helps make AI systems fair.
  • Zeynep Tüfekçi – A Columbia professor who writes about how tech affects democracy.
  • Robert Work – A former U.S. Deputy Secretary of Defense, who helped push the U.S. military to use AI.

You can watch the iHuman documentary for free by clicking the video embed above.

You can also check here for other streaming options.

Thanks for reading!

-Rob Kelly, Chief Maniac, Daily Doc