Dwarkesh Patel (Host) 00:00.680
It seems to me that this is a very precarious situation to be in, where, looking at the limit, we know that this should be possible. Because if you have something that is as good as a human at learning, but which can merge its brains, merge their different instances, in a way that
Dwarkesh Patel (Host) 00:17.240
humans can't merge, then already this seems like a thing that should physically be possible. Humans are possible. Digital computers are possible. You just need both of those combined to produce this thing. And it also seems like this kind of thing is extremely powerful. Economic
Dwarkesh Patel (Host) 00:35.160
growth is one way to put it. I mean, a Dyson sphere is a lot of economic growth. But another way to put it is that you will have a potentially very short period of time, because a human on the job learns fast. You know, you've hired people at SSI, and in six months they're net
Dwarkesh Patel (Host) 00:47.480
productive, probably, right? A human learns really fast, and so this thing would be becoming smarter and smarter very fast. How do you think about making that go well? And why is SSI positioned to do that? What is the SSI plan there, basically, is what I'm trying to ask.
Ilya Sutskever (Co-founder and Chief Scientist) 01:01.160
Yeah. So one of the ways in which my thinking has been changing is that I now place more importance on AI being deployed incrementally and in advance. One very difficult thing about AI is that we are talking about systems that don't yet exist. And it's hard to
Ilya Sutskever (Co-founder and Chief Scientist) 01:34.520
imagine them. I think that one of the things that's happening is that in practice, it's very hard to feel the AGI. We can talk about it, but it's like imagining having a conversation about what
Ilya Sutskever (Co-founder and Chief Scientist) 01:55.880
it is like to be old and frail. You can have that conversation, you can try to imagine it, but it's just hard, and you come back to reality, where that's not the case. And I think that a lot of the issues around AGI and its future power stem from the fact
Ilya Sutskever (Co-founder and Chief Scientist) 02:19.940
that it's very difficult to imagine. Future AI is going to be different. It's going to be powerful.
Ilya Sutskever (Co-founder and Chief Scientist) 02:28.620
Indeed, what is the problem of AI and AGI? The whole problem is the power. When the power is really big, what's going to happen? One of the ways in which I've changed my mind over the past year or so, and that
Ilya Sutskever (Co-founder and Chief Scientist) 02:50.380
change of mind, I'll hedge a little bit, may backpropagate into the plans of our company, is this: if it's hard to imagine, what do you do? You've got to be showing the thing. And I maintain that I
Ilya Sutskever (Co-founder and Chief Scientist) 03:10.620
think most people who work on AI also can't imagine it, because it's too different from what people see on a day-to-day basis. Here is something which I predict will happen. That's a prediction. I maintain that as AI becomes more powerful, people will
Ilya Sutskever (Co-founder and Chief Scientist) 03:35.020
change their behaviors, and we will see all kinds of unprecedented things which are not happening right now. I'll give some examples. I think, for better or worse, the frontier companies will play a very important role in what happens, as will the
Ilya Sutskever (Co-founder and Chief Scientist) 03:54.620
government. And one of the kinds of things that I think we'll see, which you can see the beginnings of, is
Ilya Sutskever (Co-founder and Chief Scientist) 04:01.660
companies that are fierce competitors starting to collaborate on AI safety. You may have seen OpenAI and Anthropic doing a first small step, but that did not exist before. That's actually something which I predicted in one of my talks about three years ago, that such a
Ilya Sutskever (Co-founder and Chief Scientist) 04:20.780
thing would happen. I also maintain that as AI continues to become more powerful, more visibly powerful, there will also be a desire from governments and the public to do something. And I think that this is a very important force: showing the AI. That's number one. Number two,
Ilya Sutskever (Co-founder and Chief Scientist) 04:43.180
okay, so then the AI is being built. What needs to be done? One thing that I maintain is that right now, for people who are working on AI, the AI doesn't feel powerful because of its mistakes. I do think that at some point the AI
Ilya Sutskever (Co-founder and Chief Scientist) 05:01.100
will actually start to feel powerful. And I think when that happens, we will see a big change in the way all AI companies approach safety. They'll become much more paranoid. I say this as a prediction that we will see happen. We'll see if I'm
Ilya Sutskever (Co-founder and Chief Scientist) 05:19.860
right. But I think this is something that will happen because they will see the AI becoming more powerful. Everything that's happening right now, I maintain, is because people look at today's AI and it's hard to imagine the future AI. And there is a third thing which needs to
Ilya Sutskever (Co-founder and Chief Scientist) 05:36.940
happen. And I'm talking about it in broader terms, not just from the perspective of SSI, because you asked me about our company. But the question is, okay, so then what should the companies aspire to build?