Do LLMs Understand? AI Pioneer Yann LeCun Spars with DeepMind’s Adam Brown. - part 10/15
2025-12-12_17-05 • 1h 15m 39s
Yann LeCun (Chief AI Scientist)
00:00.160
work. So you have to, you know, come up with those new architectures, like JEPA and so on. And those kinds of things work; we have models that actually understand video.
Janna Levin (Professor of Physics and Astronomy)
00:08.680
And Adam, are people exploring other ways of building an architecture, or imagining a computer mind? The actual fundamental structure of a computer mind: how it would learn, how it would acquire information.
Janna Levin (Professor of Physics and Astronomy)
00:23.040
One of the criticisms, as I understand it, is that a lot of the LLMs are trained for this one specific task of discrete prediction of these tokens. But something that is more unpredictable, like how the audience is distributed in this room, or what might happen with the
Janna Levin (Professor of Physics and Astronomy)
00:36.500
weather next: unpredictable, more human-experience-based phenomena.
Adam Brown (Research Scientist)
00:42.620
Um, certainly all kinds of explorations are being made in all kinds of directions, including Yann's; let a thousand flowers bloom. Um, but all of the resources, or rather the bulk of the resources, right now are going into large language models, and large language models have
Adam Brown (Research Scientist)
01:00.860
applications including taking in text.
Adam Brown (Research Scientist)
01:03.660
To say that it's a specialized task, predicting the next token: I think that's not a helpful way to think about it. It is true that the thing you train them on is, given this corpus of text... I mean, there are other things we do as well, but the bulk of the
Adam Brown (Research Scientist)
01:19.340
compute goes to: given this corpus of text, please predict the next word. Please predict the next word. Please predict the next word.
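Adam's description of the objective, "given this corpus of text, please predict the next word," can be made concrete with a toy sketch. This is my own minimal illustration, not code from the talk or any lab: the "model" here is just a bigram counter, but the quantity it scores, the average negative log-probability assigned to each actual next word, is the same shape as the LLM pre-training objective.

```python
import math
from collections import defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# "Training": count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(prev):
    """P(next word | previous word), estimated from the counts."""
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

def avg_log_loss(tokens):
    """The next-token objective: average negative log-probability
    the model assigns to each actual next word in the sequence."""
    losses = []
    for prev, nxt in zip(tokens, tokens[1:]):
        p = next_word_probs(prev).get(nxt, 1e-9)  # tiny floor for unseen pairs
        losses.append(-math.log(p))
    return sum(losses) / len(losses)

print(next_word_probs("the"))   # roughly {'cat': 2/3, 'mat': 1/3}
print(avg_log_loss(corpus))
```

A real LLM replaces the bigram counter with a deep network and minimizes this loss over trillions of tokens, but the repeated question put to the model is the same.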
Adam Brown (Research Scientist)
01:26.500
but we have discovered something truly extraordinary by doing it, which is that given a large enough body of text, to be able to reliably predict the next word, or, you know, to do it well enough, you really need to understand the universe,
Adam Brown (Research Scientist)
01:42.340
and we have seen the emergence of understanding of the universe as we've done that. So I would lean into that a little bit. I mean, in physics we're very used to systems where you just take a very simple rule, and by the repeated application of that very simple rule you get
Adam Brown (Research Scientist)
01:58.620
extremely impressive behavior.
Adam Brown (Research Scientist)
02:01.540
We see the same with these LLMs. And another example of that would maybe be biological evolution. At each stage of evolution, you just say: maximize the number of offspring, maximize the number of offspring, maximize the number
Adam Brown (Research Scientist)
02:14.540
of offspring. A very unsophisticated learning objective.
Adam Brown (Research Scientist)
02:18.540
But out of this simple learning objective repeated many, many times, you eventually get all of the, you know, splendor of biology that we see around us, and indeed in this room. So
Adam Brown (Research Scientist)
02:30.700
the evidence is that predicting the next token, while a very simple task, is, because it's so simple, something we can do at massive scale, with huge amounts of compute. And once you do it with a huge amount of compute, you get emergent complexity.
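Adam's evolution analogy, one crude objective ("maximize the number of offspring") applied over and over until structure emerges, can be sketched as a toy selection loop. This is my own illustration, not code from the talk; fitness here is just a count of 1-bits in a bit-string genome, standing in for offspring count, and selection plus mutation repeated over generations drives the population toward the all-ones genome.

```python
import random

random.seed(0)
GENOME_LEN, POP, GENERATIONS = 20, 30, 200

def fitness(genome):
    # Stand-in for "number of offspring": the count of 1-bits.
    return sum(genome)

def mutate(genome):
    # Copy the genome and flip one random bit.
    child = list(genome)
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

# Random starting population of bit-string "genomes".
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]  # the fit survive...
    # ...and reproduce with mutation, restoring the population size.
    pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(pop, key=fitness)
print(fitness(best))  # close to GENOME_LEN: near-maximal fitness
```

The per-generation rule is trivial, yet its repetition produces a population far more ordered than the random one it started from; the analogy to a simple loss repeated over huge amounts of compute is only suggestive, but that is the shape of the argument.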
Janna Levin (Professor of Physics and Astronomy)
02:44.180
So I guess the next question could be related to evolution. However this intelligence emerges, which you both imagine is certainly possible, you don't think there's anything special about this wetware,
Janna Levin (Professor of Physics and Astronomy)
02:55.780
that there will be machines, we just have to figure out how to launch them, that will have capacities we would recognize as a kind of intelligence, or maybe consciousness. That's
Janna Levin (Professor of Physics and Astronomy)
03:06.100
almost a different question: will consciousness be a crutch machines don't need? I don't know; we can talk about that. But is there a point in the evolution of these machines where they're going to say, "Oh, how quaint, mom and dad, you made me in your image with
Janna Levin (Professor of Physics and Astronomy)
03:20.620
these human neural nets." But I know a way, a much better way, having scanned 10,000 years of human output, to make a machine intelligence, and I'm going to evolve and leave us in the dust. I mean,
Janna Levin (Professor of Physics and Astronomy)
03:34.740
yeah, why are we imagining that they would be limited in that capacity to the way we designed them?
Adam Brown (Research Scientist)
03:41.220
Absolutely. This is the idea of recursive self-improvement: when they're bad, they're useless, but when they get good enough and strong enough, you can start using them to augment human intelligence, and perhaps eventually they'll just be fully autonomous and replace us and make
Adam Brown (Research Scientist)
03:58.940
future versions of themselves.
Adam Brown (Research Scientist)
04:00.380
Once we do that... I mean, I think what we should do is just take this large language model paradigm that's currently working so well and see how far we can push it. You know, every time someone says there's a barrier, it has pushed through the barrier over the last five
Adam Brown (Research Scientist)
04:12.100
years. But eventually these things will get smart enough, and then they can read Yann's papers, read all the other papers that have been written, and try to figure out new ideas that none of us have thought of.
Yann LeCun (Chief AI Scientist)
04:24.460
Yeah. So, I completely disagree with this. Um, so, LLMs are not controllable. It's not dangerous, because they're not that smart, as I explained previously.
Yann LeCun (Chief AI Scientist)
04:38.900
And they're certainly not autonomous in the way that we understand autonomy. We have to distinguish between autonomy and intelligence. You can be very intelligent without being autonomous, and you can be autonomous without being intelligent.
Yann LeCun (Chief AI Scientist)
04:53.220
Um, and you can be dangerous without being particularly intelligent. Um, and you can want to be dominant without being intelligent. In fact, those are kind of inversely correlated in the human species. Um...