Dwarkesh Patel (Host) 00:00.100
quadrillions of AIs. Humans will be a very small fraction of sentient beings. So it's not clear to me that, if the goal is some kind of human control over this future civilization, this is the best criterion.
Ilya Sutskever (Co-founder and Chief Scientist) 00:22.920
It's true. I think it's possible it's not the best criterion. I'll say two things. Thing number one: I think that care for sentient life has merit to it. I think it should be considered.
Ilya Sutskever (Co-founder and Chief Scientist) 00:38.240
I think it would be helpful if there was some kind of a short list of ideas that the companies, when they are in that situation, could use. That's number two. Number three, I think it would be really materially helpful if the power of the most powerful superintelligence
Ilya Sutskever (Co-founder and Chief Scientist) 00:59.920
was somehow capped, because it would address a lot of these concerns. The question of how to do it, I'm not sure, but I think it would be materially helpful when you're talking about really, really powerful systems.
Dwarkesh Patel (Host) 01:17.480
Yeah. Before we continue the alignment discussion, I want to double-click on that. How much room is there at the top? How do you think about superintelligence? Using this learning-efficiency idea, is it maybe just extremely fast at learning new skills or new knowledge? Does it just have a bigger pool of
Dwarkesh Patel (Host) 01:32.440
strategies? Is there a single cohesive "it" in the center that's more powerful or bigger? And if so, do you imagine that this will be sort of god-like in comparison to the rest of human civilization, or does it just feel like another agent, or another cluster of agents?
Ilya Sutskever (Co-founder and Chief Scientist) 01:50.640
So, this is an area where different people have different intuitions. I think it will be very powerful, for sure. I think what is most likely to happen is that there will be multiple such AIs being created at roughly the same time. I think that if the cluster is big
Ilya Sutskever (Co-founder and Chief Scientist) 02:13.600
enough, if the cluster is literally continent-sized, that thing could be really powerful indeed. Right? If you literally have a continent-sized cluster, those AIs can be very powerful. All I can tell you is that if you're talking about extremely
Ilya Sutskever (Co-founder and Chief Scientist) 02:32.560
powerful AIs, truly dramatically powerful, then yeah, it would be nice if they could be restrained in some way, or if there was some kind of an agreement or something. Because, really, what is the concern of
Ilya Sutskever (Co-founder and Chief Scientist) 02:52.160
superintelligence? What is one way to explain the concern? If you imagine a system that is sufficiently powerful, really sufficiently powerful, and you say, okay, you need to do something sensible, like care for sentient life, and it does so in a very single-minded way, we
Ilya Sutskever (Co-founder and Chief Scientist) 03:09.560
might not like the results. That's really what it is. So maybe, by the way, the answer is that you do not build an RL agent in the usual sense. Actually, I'll point several things out. I think human beings are a semi-RL agent. You know,
Ilya Sutskever (Co-founder and Chief Scientist) 03:25.560
we pursue a reward and then
Ilya Sutskever (Co-founder and Chief Scientist) 03:27.160
the emotions, or whatever, make us tire of the reward, and we pursue a different reward. The market is a very short-sighted kind of agent. Evolution is the same: very intelligent in some ways, but very dumb in other ways. The government has been
Ilya Sutskever (Co-founder and Chief Scientist) 03:44.360
designed to be a never-ending fight between three branches, which has an effect. So, I think things like this. Another thing that makes this discussion difficult is that we are talking about systems that don't exist, that we don't know how to build. That's the other thing. And
Ilya Sutskever (Co-founder and Chief Scientist) 04:02.880
that's actually my belief. I think what people are doing right now will go some distance and then peter out. It will continue to improve, but it will also not be "it."
Ilya Sutskever (Co-founder and Chief Scientist) 04:12.040
So, the "it," we don't know how to build. And I think a lot hinges on understanding reliable generalization. I'll say another thing. One of the things you could say is: what causes alignment to be difficult? It's that human values
Ilya Sutskever (Co-founder and Chief Scientist) 04:32.080
are fragile: your ability to learn human values is fragile, then your ability to optimize them is fragile, and will you actually learn to optimize them? And then can't you say, "Are these not all instances of unreliable generalization?" Why is it that human beings appear to
Ilya Sutskever (Co-founder and Chief Scientist) 04:50.320
generalize so much better? What if generalization was much better? What would happen in this case? What would be the effect? But those questions are right now still unanswerable.
Dwarkesh Patel (Host) 05:06.400
How does one think about what AI going well looks like? Because I think you've scoped out how AI might evolve: we'll have these sort of continual-learning agents, AI will be very powerful, maybe there will be many different AIs. How do you think about lots of intelligences with continent-sized compute going around? How dangerous is that? How do we
Dwarkesh Patel (Host) 05:24.280
make that less dangerous? And how do we do that in a way that protects an equilibrium where there might be misaligned AIs out there and bad actors out there?
Ilya Sutskever (Co-founder and Chief Scientist) 05:45.220
One reason why I like the AI that cares for sentient life, and we can debate whether that's good or bad, is this: if the first N of these dramatic systems actually do care for, you know, love humanity or something, care for sentient life... Obviously, this also needs to be achieved. So, if this is achieved by the first