Adam Brown (Research Scientist) 00:00.120
Are you describing understanding as a behavioral trait here, where it gives the right answers to problems, or whether it deeply, at the neural level, understands?
Janna Levin (Professor of Physics and Astronomy) 00:08.200
Yeah, I'm completely at the whims of the philosophers here. I don't know if I understand that at the human level, right? I can't tell you what process I'm executing at the moment either. But I'd have some intuitive, subjective experience that I understand
Janna Levin (Professor of Physics and Astronomy) 00:23.860
the conversation. Obviously, not that well.
Janna Levin (Professor of Physics and Astronomy) 00:27.540
But when I'm talking to you, I feel you are understanding, and when I'm talking to ChatGPT, I do not. And you're telling me I'm mistaken, that it's understanding as well as I am, or
Adam Brown (Research Scientist) 00:42.060
you are. In my opinion, it is understanding, yes. And I think there are two different pieces of evidence for that. One is that if you talk to them and ask them about difficult concepts, I'm frequently surprised, and with every passing month and every
Adam Brown (Research Scientist) 01:00.340
new model that comes out, I am more and more surprised at the level of sophistication with which they're able to discuss things.
Adam Brown (Research Scientist) 01:08.140
And so just at that level, it's super impressive. I would really encourage everybody here to talk to these large language models if you've not already. You know, when the science fiction writers imagined that we built some sort of Turing-test-
Adam Brown (Research Scientist) 01:23.820
passing machine, some new alien intelligence that we'd have in a box, they all imagined that we'd sort of hide it in a basement, you know, in a castle surrounded by a moat with armed guards, and we'd only have a priestly class who would be able
Adam Brown (Research Scientist) 01:39.660
to go and talk to it.
Adam Brown (Research Scientist) 01:41.420
That is not the way it worked out. The way it worked out is that the first thing we did was immediately hook it up to the internet, and now anybody can go talk to it. I would highly encourage you to talk to these things and explore in areas that you know,
Adam Brown (Research Scientist) 01:55.140
to see both their limitations but also their strengths and their depth of understanding.
Adam Brown (Research Scientist) 01:59.060
So I'd say that's the first piece of evidence. The second piece of evidence: you said they're a black box, but they're not exactly a black box. We do have access to their neurons. In fact, we have much better access to the neurons of these things than we do with a human.
Adam Brown (Research Scientist) 02:11.340
It's very hard to get IRB approval to slice up a human while they're doing a math test to see how their neurons are firing. And if you do do that, you can only do it once per human. Whereas with these neural networks, we can freeze them, replay them, write down everything
Adam Brown (Research Scientist) 02:26.700
that happened.
Adam Brown (Research Scientist) 02:27.940
If we're curious, we can go and prod their neurons in certain ways and see what happens. It's still rudimentary, but this is the field of interpretability, mechanistic interpretability: trying to understand not just what they say, but why they say it, how they
Adam Brown (Research Scientist) 02:41.980
think
Adam Brown (Research Scientist) 02:42.220
about it. And when you do that, we see that when you feed them a math problem, there's a little bit of a circuit there that computes the answer, and we didn't program it to have that. It learnt how to do that. While trying to predict the next token on all of this text,
Adam Brown (Research Scientist) 02:58.300
it learnt that in order to most accurately predict the next word, it needed to figure out how to do maths, and it needed to build a sort of little proto-circuit inside itself to do the mathematical
Adam Brown (Research Scientist) 03:11.460
computations.
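[Editor's note: as a rough illustration of the "freeze, replay, and prod the neurons" workflow Adam describes, here is a minimal, hypothetical sketch in PyTorch. The toy model, layer index, and unit number are illustrative stand-ins, not any real language model: a forward hook records a layer's activations, then a single unit is ablated and the exact same input is replayed to measure that unit's causal effect on the output.]

```python
# Sketch of "prodding the neurons": record activations with a forward hook,
# then zero out one unit and replay the same input to see how the output shifts.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained model; in practice this would be a language model.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)
model.eval()

activations = {}

def save_activation(name):
    # Forward hook that records the layer's output on every pass.
    def hook(module, inputs, output):
        activations[name] = output.detach().clone()
    return hook

model[1].register_forward_hook(save_activation("hidden"))

x = torch.randn(1, 16)        # a frozen input we can replay at will
baseline = model(x)

def ablate_unit(unit):
    # Forward hook that zeroes one hidden unit; returning a tensor
    # replaces the layer's output for the rest of the forward pass.
    def hook(module, inputs, output):
        patched = output.clone()
        patched[:, unit] = 0.0
        return patched
    return hook

handle = model[1].register_forward_hook(ablate_unit(unit=7))
ablated = model(x)            # replay the identical input, one neuron prodded
handle.remove()

print("baseline:", baseline)
print("ablated :", ablated)
print("effect of unit 7:", (baseline - ablated).abs().sum().item())
```

[Real mechanistic-interpretability work does this at scale on transformer activations, but the record-replay-ablate loop is the same basic idea.]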
Janna Levin (Professor of Physics and Astronomy) 03:13.260
Now, Yann, you famously threw a slide up at one of your keynote lectures that was very provocative. Very scholarly. It said, "Machine learning sucks," I believe. And that kind of went wild: Yann LeCun says, "Machine learning sucks." Why are you
Janna Levin (Professor of Physics and Astronomy) 03:31.740
saying machine learning sucks? Adam has just told us how phenomenal it is. He talks to them and wants us to do the same. Why do you think it sucks? What's the problem?
Yann LeCun (Chief AI Scientist) 03:43.500
Well, that statement has been wildly misinterpreted, but the point I was making is the point that we both made, which is: why is it that a teenager can learn to drive a car in 20 hours of practice? A 10-year-old can clean up the dinner table and fill up
Yann LeCun (Chief AI Scientist) 04:05.580
the dishwasher the first time you ask the child to do it. Whether the 10-year-old will want to do it is a different story, but they certainly can.
Yann LeCun (Chief AI Scientist) 04:15.580
We don't have robots that are anywhere near this, and we don't have robots that are even anywhere near the physical understanding of reality of a cat or a dog. And so in that sense, machine learning sucks. It doesn't mean that the deep learning method, the back-
Yann LeCun (Chief AI Scientist) 04:32.700
propagation algorithm, or the neural nets suck.
Yann LeCun (Chief AI Scientist) 04:35.660
Those are obviously excellent. Obviously, that's great, and we don't have any alternative to them. I certainly believe that neural nets, deep learning, and back-propagation are with us for a long time and will be the basis of future AI
Yann LeCun (Chief AI Scientist) 04:54.300
systems.
Yann LeCun (Chief AI Scientist) 04:55.340
But how is it that young humans can learn how the world works in the first few months of life? It takes nine months for human babies to learn intuitive physics, like gravity, inertia, and things like this. Baby animals learn this much faster. They have smaller
Yann LeCun (Chief AI Scientist) 05:12.660
brains, so it's easier for them to learn.