Ilya Sutskever (Co-founder and Chief Scientist) 00:00.160
Yeah. So one of the ways in which my thinking has been changing is that I now place more importance on AI being deployed incrementally and in advance. One very difficult thing about AI is that we are talking about systems that don't yet exist. And it's hard to
Ilya Sutskever (Co-founder and Chief Scientist) 00:33.520
imagine them. I think one of the things that's happening in practice is that it's very hard to feel the AGI. We can talk about it, but it's like
Ilya Sutskever (Co-founder and Chief Scientist) 00:53.160
imagining having a conversation about what it's like to be old, when you will be old and frail. You can have a conversation, you can try to imagine it, but it's just hard, and you come back to the reality where that's not the case. And I think that a lot of the issues around AGI
Ilya Sutskever (Co-founder and Chief Scientist) 01:14.380
and its future power stem from the fact that it's very difficult to imagine. Future AI is going to be different. It's going to be powerful.
Ilya Sutskever (Co-founder and Chief Scientist) 01:27.620
Indeed. What is the problem of AI and AGI? The whole problem is the power. When the power is really big, what's going to happen? And one of the ways in which I've changed my mind over the past year or so, and that
Ilya Sutskever (Co-founder and Chief Scientist) 01:49.380
change of mind may, and I'll hedge a little bit here, back-propagate into the plans of our company, is this: if it's hard to imagine, what do you do? You've got to show the thing. And I maintain that I
Ilya Sutskever (Co-founder and Chief Scientist) 02:09.620
think most people who work on AI also can't imagine it, because it's too different from what people see on a day-to-day basis. Here is something which I predict will happen.
Ilya Sutskever (Co-founder and Chief Scientist) 02:24.660
That's a prediction. I maintain that as AI becomes more powerful, people will change their behaviors, and we will see all kinds of unprecedented things which are not happening right now. And I'll give some examples. I think, for better or worse, the
Ilya Sutskever (Co-founder and Chief Scientist) 02:49.740
frontier companies will play a very important role in what happens, as will the government. And the kind of things that I think we'll see, which you can see the beginnings of, are
Ilya Sutskever (Co-founder and Chief Scientist) 03:00.660
companies that are fierce competitors starting to collaborate on AI safety. You may have seen OpenAI and Anthropic doing a first small step, but that did not exist before. That's actually something which I predicted in one of my talks about three years ago: that
Ilya Sutskever (Co-founder and Chief Scientist) 03:19.500
such a thing would happen. I also maintain that as AI continues to become more powerful, more visibly powerful, there will also be a desire from governments and the public to do something. And I think that this is a very important force: showing the AI. That's number one. Number
Ilya Sutskever (Co-founder and Chief Scientist) 03:41.540
two: okay, so then the AI is being built. What needs to be done? One thing that I maintain is that right now, to people who are working on AI, the AI doesn't feel powerful because of its mistakes. I do think that at some point the
Ilya Sutskever (Co-founder and Chief Scientist) 03:59.900
AI will actually start to feel powerful. And I think when that happens, we will see a big change in the way all AI companies approach safety. They'll become much more paranoid. I say this as a prediction that we will see happen. We'll see if I'm
Ilya Sutskever (Co-founder and Chief Scientist) 04:18.860
right. But I think this is something that will happen, because they will see the AI becoming more powerful. Everything that's happening right now, I maintain, is because people look at today's AI and find it hard to imagine the future AI. And there is a third thing which needs to
Ilya Sutskever (Co-founder and Chief Scientist) 04:35.940
happen. And I'm talking about it in broader terms, not just from the perspective of SSI, because you asked me about our company. But the question is: what should the companies aspire to build? What should
Ilya Sutskever (Co-founder and Chief Scientist) 04:50.700
they aspire to build? There has been one big idea that everyone has been locked into, which is the self-improving AI. And why did that happen? Because there are fewer ideas than companies. But I maintain that there is something that's better to
Ilya Sutskever (Co-founder and Chief Scientist) 05:09.540
build, and I think that everyone will actually want it: the AI that's robustly aligned to care about sentient life specifically. In particular, there's a case to be made that it will be easier to build an AI that cares about sentient life than an AI
Ilya Sutskever (Co-founder and Chief Scientist) 05:30.380
that cares about human life alone, because the AI itself will be sentient. And if you think about things like mirror neurons and human empathy for animals, which, you might argue, is not big enough, but it exists, I think it's an emergent property of the fact that
Ilya Sutskever (Co-founder and Chief Scientist) 05:47.660
we model others with the same circuit that we use to model ourselves, because that's the most efficient thing to do. So even if you got an AI to care about sentient beings, and it's not actually clear to me that that's what you should try to do if you solve alignment, it
Ilya Sutskever (Co-founder and Chief Scientist) 06:04.140
would still be the case that most sentient beings will be AIs. There will be trillions of them eventually.