Dwarkesh Patel (Host) 00:00.000
Yeah.
Ilya Sutskever (Co-founder and Chief Scientist) 00:00.360
What should they aspire to build? There has been one big idea that everyone has been locked into, which is the self-improving AI. And why did that happen? Because there are fewer ideas than companies. But I maintain that there is something
Ilya Sutskever (Co-founder and Chief Scientist) 00:18.760
that's better to build, and I think that everyone will actually want it: the AI that's robustly aligned to care about sentient life specifically. In particular, there's a case to be made that it will be easier to build an AI that cares about
Ilya Sutskever (Co-founder and Chief Scientist) 00:38.400
sentient life than an AI that cares about human life alone, because the AI itself will be sentient. And if you think about things like mirror neurons and human empathy for animals, which you might argue is not big enough, but it exists, I think it's an emergent
Ilya Sutskever (Co-founder and Chief Scientist) 00:56.120
property of the fact that we model others with the same circuit that we use to model ourselves, because that's the most efficient thing to do.
Dwarkesh Patel (Host) 01:05.520
So even if you got an AI to care about sentient beings, and it's not actually clear to me that that's what you should try to do if you solve alignment, it would still be the case that most sentient beings will be AIs. There will eventually be trillions,
Dwarkesh Patel (Host) 01:20.040
quadrillions of AIs. Humans will be a very small fraction of sentient beings. So if the goal is some kind of human control over this future civilization, it's not clear to me that this is the best criterion.
Ilya Sutskever (Co-founder and Chief Scientist) 01:37.220
It's true. I think it's possible it's not the best criterion. I'll say a few things. Number one, I think there is merit to care for sentient life. I think it should be considered.
Ilya Sutskever (Co-founder and Chief Scientist) 01:58.180
I think it would be helpful if there were some kind of short list of ideas that the companies could use when they are in that situation. That's number two. Number three, I think it would be really materially helpful if the power of the most powerful superintelligence
Ilya Sutskever (Co-founder and Chief Scientist) 02:19.860
was somehow capped, because it would address a lot of these concerns. How to do it, I'm not sure, but I think that would be materially helpful when you're talking about really, really powerful systems.
Dwarkesh Patel (Host) 02:34.500
Yeah. Before we continue the alignment discussion, I want to double-click on that. How much room is there at the top? How do you think about superintelligence? Using this learning-efficiency idea, is it maybe just extremely fast at learning new skills or
Dwarkesh Patel (Host) 02:49.540
new knowledge? Does it just have a bigger pool of strategies? Is there a single cohesive it in the center that's more powerful or bigger? And if so, do you imagine that this will be sort of god-like in comparison to the rest of human civilization, or does it just feel
Dwarkesh Patel (Host) 03:06.460
like another agent or another cluster of agents?
Ilya Sutskever (Co-founder and Chief Scientist) 03:10.460
So, this is an area where different people have different intuitions.
Dwarkesh Patel (Host) 03:13.340
Yeah.
Ilya Sutskever (Co-founder and Chief Scientist) 03:13.900
I think it will be very powerful, for sure. What I think is most likely to happen is that there will be multiple such AIs being created at roughly the same time. I think that if the cluster is big enough, like if the cluster is literally continent-sized, that thing
Ilya Sutskever (Co-founder and Chief Scientist) 03:38.860
could be really powerful indeed. If you literally have a continent-sized cluster, those AIs can be very powerful. All I can tell you is that if you're talking about extremely powerful AIs, truly dramatically powerful, then yeah, it would be
Ilya Sutskever (Co-founder and Chief Scientist) 03:55.980
nice if they could be restrained in some way, or if there were some kind of agreement. Because what is the concern with superintelligence? What is one way to explain the concern? If you
Ilya Sutskever (Co-founder and Chief Scientist) 04:15.740
imagine a system that is sufficiently powerful, really sufficiently powerful, and you tell it to do something sensible, like care for sentient life, but it pursues that in a very single-minded way, we might not like the results. That's really what it is. And so maybe, by
Ilya Sutskever (Co-founder and Chief Scientist) 04:32.740
the way, the answer is that you do not build an RL agent in the usual sense. And actually, I'll point several things out. I think human beings are a semi-RL agent. We pursue a reward, and then the emotions or whatever make us tire
Ilya Sutskever (Co-founder and Chief Scientist) 04:49.820
of the reward, and we pursue a different reward. The market is like a very short-sighted kind of agent. Evolution is the same: evolution is very intelligent in some ways but very dumb in other ways. The government has been designed to be a never-ending fight between
Ilya Sutskever (Co-founder and Chief Scientist) 05:07.140
three parts, which has an effect. So, things like this. Another thing that makes this discussion difficult is that we are talking about systems that don't exist, that we don't know how to build. And that's actually my belief: I think what
Ilya Sutskever (Co-founder and Chief Scientist) 05:24.060
people are doing right now will go some distance and then peter out. It will continue to improve, but it will also not be it. So