Yann LeCun (Chief AI Scientist) 00:00.540
You know, politics. I won't cite names. So, I think what is required is systems that are intelligent, in other words, that can solve problems for us, but that will solve the problems we give them.
Yann LeCun (Chief AI Scientist) 00:20.020
Okay? And again, that would require a different design than LLMs. LLMs are not designed to fulfill a goal. They're designed to predict the next word.
Yann LeCun (Chief AI Scientist) 00:33.760
And we fine-tune them so that they behave, you know, so that for particular questions they answer in a particular way. But there's always what's called a generalization gap, which means you can never train them for every possible question,
Yann LeCun (Chief AI Scientist) 00:47.560
and there's a very long tail. And so they're not controllable. And again, that doesn't mean it's very dangerous, because they're not that smart.
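(A minimal sketch of what "designed to predict the next word" means: the generic autoregressive next-token objective, written here in PyTorch. The tiny model and random tokens are placeholders, not any real LLM.)

```python
# A generic autoregressive language-model objective: given tokens t_1..t_k,
# maximize the probability of t_{k+1}. Everything below is a toy sketch;
# the vocabulary size, model, and data are placeholders.
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64
model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                      nn.Linear(d_model, vocab_size))  # stand-in for a transformer

tokens = torch.randint(0, vocab_size, (1, 12))       # one toy token sequence
logits = model(tokens[:, :-1])                       # predict from each prefix
loss = nn.functional.cross_entropy(                  # next-token cross-entropy
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward()  # training only ever optimizes this next-word objective
```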
Yann LeCun (Chief AI Scientist) 00:57.280
Now, if we build systems that are smart, we want them to be controllable, and we want them to be driven by objectives. We give them an objective, and the only thing they can do is fulfill this objective according to their internal model of the world, if you
Yann LeCun (Chief AI Scientist) 01:12.040
want. So they plan a sequence of actions that will fulfill that objective. We design them this way, and we also put guardrails in them so that in the process of fulfilling the objective they don't do anything bad for humans.
Yann LeCun (Chief AI Scientist) 01:28.360
The usual joke is: if you have a domestic robot and you ask it to fetch you coffee, and someone is standing in front of the coffee machine, you don't want your robot to just kill that person to get access to the coffee machine, right? So, we want to put some guardrails into the behavior of that robot. And we do have those guardrails in our head. Evolution built them into us, right? So, we don't kill each other all the time. I mean, we do kill each other all the time, but not, you know, all the
Yann LeCun (Chief AI Scientist) 01:55.360
time, all the time.
Yann LeCun (Chief AI Scientist) 01:59.600
And, you know, we feel empathy and things like that, and that's just built into us by evolution. That's the way evolution sort of hardwired guardrails into us. So, we should build our AI systems the same
Yann LeCun (Chief AI Scientist) 02:11.080
way: have objectives and goals, drives, but also, you know, guardrails, inhibitions basically. And then they will solve problems for us. They will amplify our intelligence. They will do what we ask them to do.
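(A minimal sketch of the objective-plus-guardrails idea described above: the agent searches over action sequences using an internal world model, minimizing a task cost, while a guardrail term makes any plan that harms a person infinitely costly. Every name, action, and cost below is illustrative, not an actual system.)

```python
# Toy sketch of objective-driven planning with guardrails: choose the action
# sequence whose predicted outcome best satisfies the task objective while
# never violating a guardrail. All names and costs here are illustrative.
from itertools import product

ACTIONS = ["wait", "go_around", "push_person", "grab_coffee"]

def world_model(state, action):
    """Crude stand-in for the agent's internal model: predict the next state."""
    state = dict(state)
    if action == "push_person":
        state["person_harmed"] = True
        state["path_clear"] = True
    if action in ("wait", "go_around"):
        state["path_clear"] = True          # the person moves or is bypassed
    if action == "grab_coffee" and state["path_clear"]:
        state["has_coffee"] = True
    return state

def task_cost(state):
    return 0.0 if state["has_coffee"] else 1.0   # objective: fetch the coffee

def guardrail_cost(state):
    return float("inf") if state["person_harmed"] else 0.0  # hard constraint

def plan(initial_state, horizon=2):
    best_plan, best_cost = None, float("inf")
    for seq in product(ACTIONS, repeat=horizon):  # exhaustive search over plans
        state, cost = initial_state, 0.0
        for a in seq:
            state = world_model(state, a)
            cost += guardrail_cost(state)         # guardrails accrue at every step
        cost += task_cost(state)
        if cost < best_cost:
            best_plan, best_cost = seq, cost
    return best_plan

start = {"path_clear": False, "person_harmed": False, "has_coffee": False}
print(plan(start))  # -> ('wait', 'grab_coffee'); no plan through 'push_person' is ever chosen
```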
Yann LeCun (Chief AI Scientist) 02:29.280
And our relationship to those intelligent systems will be like the relationship of, let's say, a professor with graduate students who are smarter than they are, right?
Yann LeCun (Chief AI Scientist) 02:40.480
I mean, I don't know about you, but I have students who are smarter than me. It's the best thing that can happen to you, right? It is. It's the best thing
Yann LeCun (Chief AI Scientist) 02:49.080
that can happen, right? So, we'll be walking around with AI assistants. They will help us in our daily lives. They'll be smarter than us, they will work for us. They'll be like our staff.
Yann LeCun (Chief AI Scientist) 03:00.400
Again, there is a political analogy here, right? A politician, you know, is a figurehead, and they have a staff of people, all of whom are smarter than them, right? So it's going to be the same thing with AI systems,
Yann LeCun (Chief AI Scientist) 03:12.120
which is why, to the earlier question about a Renaissance, I said: Renaissance.
Janna Levin (Professor of Physics and Astronomy) 03:16.280
So you have no concerns about the safety of the current models, but the question is, maybe we should stop there. I mean, why is it necessary for us to scale up so widely that every single person has superintelligence in their pocket,
Janna Levin (Professor of Physics and Astronomy) 03:34.040
on their iPhone? Is that really necessary? A friend of mine was saying it's like bringing a ballistic missile to a knife fight. I mean, is it necessary that every person has ballistic missile capability? Or should we stop here, where we have these controllable systems?
Yann LeCun (Chief AI Scientist) 03:49.360
You can say exactly the same thing about teaching people to read, or giving them a chemistry textbook about volatile chemicals with which they can make explosives, or a nuclear physics book, right? I mean,
Yann LeCun (Chief AI Scientist) 04:07.720
we do not question the idea that knowledge and more intelligence are good, intrinsically good, right? We do not question anymore the fact that the invention of the printing press was a good thing,
Yann LeCun (Chief AI Scientist) 04:22.200
right? It made everybody smarter. It gave access to knowledge to everyone, which was not possible before. It incited people to learn to read. It caused the Enlightenment. It also caused 200 years of, you know, religious wars in Europe, but okay, but we
Yann LeCun (Chief AI Scientist) 04:41.080
got over it, yeah.
Yann LeCun (Chief AI Scientist) 04:42.560
But it caused the Enlightenment. It caused, you know, the emergence of philosophy, science, democracy, the American Revolution, the French Revolution. All of that would not have been possible without the printing press.
Yann LeCun (Chief AI Scientist) 04:56.160
So, you know, every technology, particularly communication technology, any technology that amplifies human intelligence, I think is intrinsically good.