Janna Levin (Professor of Physics and Astronomy) 00:14.640
It's a pleasure to have Adam, my colleague and friend, and Yann, who has been with us before. Yann, you really are all over the news right now.
Janna Levin (Professor of Physics and Astronomy) 00:26.240
I've gotten so many people forwarding articles about you this week. It all kicked off on Wednesday. Do you want to discuss it? I can just say the headline. The headline was, in essence, "Yann LeCun, chief AI scientist, leaves Meta." Do you care to comment?
Yann LeCun (Chief AI Scientist) 00:45.880
I can neither confirm nor deny.
Janna Levin (Professor of Physics and Astronomy) 00:48.480
Okay. So all of the press corps that's here to get the scoop cannot get the scoop tonight. All right, well, you can come up afterwards, buy Yann a drink, and see how far you get.
Yann LeCun (Chief AI Scientist) 01:03.880
I already had one, but that was a...
Janna Levin (Professor of Physics and Astronomy) 01:07.480
The Frenchman had some wine upstairs. So we're in this era where every time any of us turns on the news, looks at the computer, or reads the paper, we're confronted with conversations about the societal implications of AI, whether it's economic upheaval or the potential for
Janna Levin (Professor of Physics and Astronomy) 01:25.880
political manipulation or AI psychosis. There are a lot of pundits out there discussing this. And
Janna Levin (Professor of Physics and Astronomy) 01:31.120
it is a very important issue. I kind of want to push that towards the end of our conversation, because what a lot of the people discussing this don't have is the technical expertise that's on this stage. And so I really want to begin by grounding this in that
Janna Levin (Professor of Physics and Astronomy) 01:47.520
technical scientific conversation.
Janna Levin (Professor of Physics and Astronomy) 01:50.560
And so I want to begin with you, Yann, about neural nets. Here's this instance of a kind of biomimicry, where you have these computational neural networks that are emulating human networks. Can you describe to us what it means for a machine to emulate human neural networks?
Yann LeCun (Chief AI Scientist) 02:09.640
Well, it's not really mimicry. It's more inspiration, the same way, I don't know, airplanes are inspired by birds, right?
Janna Levin (Professor of Physics and Astronomy) 02:19.200
But I thought that didn't work, copying birds with airplanes.
Yann LeCun (Chief AI Scientist) 02:25.240
Well, in the sense that, you know, airplanes have wings like birds, and they generate lift by propelling themselves through the air, but then the analogy stops there. The wing of an airplane is much simpler than the wing of a bird, and yet the underlying principle is the
Yann LeCun (Chief AI Scientist) 02:40.920
same.
Yann LeCun (Chief AI Scientist) 02:41.480
So, neural networks are a bit like that: they are to real brains as airplanes are to birds. They're much simplified in many ways, but perhaps some of the underlying principles are the same. We don't actually know, because we don't really know the
Yann LeCun (Chief AI Scientist) 03:00.400
underlying algorithm of the cortex, if you want, or the method by which the brain organizes itself and learns. So
Yann LeCun (Chief AI Scientist) 03:11.920
we invented substitutes. Sort of like birds flap their wings and airplanes don't, right? They have propellers or turbojets. In neural nets we have learning algorithms, and they allow artificial nets to learn in a way that we think is similar to how brains learn.
Yann LeCun (Chief AI Scientist) 03:35.200
So, the brain is a network of neurons, the neurons are interconnected with each other, and the way the brain learns is by modifying the efficacy of the connections between the neurons. And the way a neural net is trained is by modifying the efficacy of the connections between
Yann LeCun (Chief AI Scientist) 03:51.000
those simulated neurons.
Yann LeCun (Chief AI Scientist) 03:52.720
Each of those is what we call a parameter. You see this in the press, the number of parameters of a neural net, right? So the biggest neural nets at the moment have, you know, hundreds of billions of parameters, if not more, and those are the individual
Yann LeCun (Chief AI Scientist) 04:09.320
coefficients that are modified by training.
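To make that concrete, here is a minimal sketch, in Python, of what "training" means in this picture: a tiny two-layer net whose parameters, the connection strengths, get nudged repeatedly to reduce an error. The task, layer sizes, and learning rate are illustrative choices for this sketch, not anyone's production system.

```python
# A minimal sketch of training as weight adjustment: a tiny two-layer net
# learns y = sin(x) by repeatedly nudging its connection strengths.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn y = sin(x) from 64 sample points.
x = np.linspace(-3, 3, 64).reshape(-1, 1)
y = np.sin(x)

# The connections. Each entry is one "parameter" (49 in total here;
# the nets mentioned above have hundreds of billions).
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)   # input -> hidden
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)    # hidden -> output

lr = 0.01
for step in range(5000):
    # Forward pass: signals flow through the weighted connections.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2

    # Backward pass: estimate how each connection contributed to the
    # error, then nudge its "efficacy" slightly in the helpful direction.
    g_pred = 2 * (pred - y) / len(x)
    g_W2, g_b2 = h.T @ g_pred, g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_pre = g_h * (1 - h ** 2)
    g_W1, g_b1 = x.T @ g_pre, g_pre.sum(0)

    for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
        p -= lr * g                   # the learning step: modify the weights

# Mean squared error should now be far below where it started.
print(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Scaled up to billions of parameters and run on specialized hardware, this same loop is, in essence, what training a large model means.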
Janna Levin (Professor of Physics and Astronomy) 04:14.440
And how did deep learning emerge in this discussion? Because deep learning came along the path after thinking about neural nets, and this has been going on since the '80s, or even earlier.
Yann LeCun (Chief AI Scientist) 04:25.280
Yeah, the '80s roughly. So early neural nets, the first ones capable of learning, or at least of learning something useful, in the '50s, were shallow. You could basically train a single layer of neurons, right? So you would feed in the input and train the system to
Yann LeCun (Chief AI Scientist) 04:45.800
produce a particular output, and you could use those things to recognize or classify relatively simple patterns, but not really complex things. And
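As a concrete aside, the shallow, single-layer learner described here can be sketched as a classic perceptron. It picks up linearly separable patterns such as OR, but no single layer can represent XOR, which illustrates the "relatively simple patterns" ceiling. The data and epoch count below are illustrative.

```python
# A sketch of a 1950s-style shallow learner: one trainable layer of
# weights (a perceptron), updated with the classic perceptron rule.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

def train_single_layer(X, y, epochs=50):
    w, b = np.zeros(X.shape[1]), 0.0   # the only parameters that learn
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ w + b > 0 else 0.0
            w += (yi - pred) * xi      # classic perceptron update
            b += (yi - pred)
    return w, b

def accuracy(w, b, X, y):
    return np.mean((X @ w + b > 0).astype(float) == y)

y_or = np.array([0, 1, 1, 1], dtype=float)    # linearly separable
y_xor = np.array([0, 1, 1, 0], dtype=float)   # not linearly separable

print(accuracy(*train_single_layer(X, y_or), X, y_or))    # reaches 1.0
print(accuracy(*train_single_layer(X, y_xor), X, y_xor))  # can never reach 1.0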
Yann LeCun (Chief AI Scientist) 04:57.000
people at the time, even in the '60s, realized that the way to make progress was going to be training neural nets with multiple layers. They built neural nets with multiple layers, but they couldn't train all the layers, so they only trained the last layer, for example. And
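That workaround, a multi-layer net in which only the last layer learns, can be sketched like this: a frozen, random first layer re-represents the input, and a single trainable layer sits on top of it. The sizes and the toy task are illustrative.

```python
# A sketch of the 1960s workaround: a multi-layer net whose first layer
# is fixed at random, with only the last layer actually trained.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem with a curved (non-linear) boundary.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] > 0, 1.0, -1.0)

# First layer: random and frozen. It only re-represents the input.
W_fixed = rng.normal(size=(2, 50))
H = np.sign(X @ W_fixed)              # fixed random features

# Last layer: the only part that learns, via the perceptron rule.
w = np.zeros(50)
for _ in range(20):                   # a few passes over the data
    for i in range(len(X)):
        if y[i] * (H[i] @ w) <= 0:    # misclassified?
            w += y[i] * H[i]          # adjust only the last layer

# Typically well above chance on this toy data, despite the frozen layer.
print("training accuracy:", np.mean(np.sign(H @ w) == y))
```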