Do LLMs Understand? AI Pioneer Yann LeCun Spars with DeepMind’s Adam Brown. - part 5/15
2025-12-12_17-05 • 1h 15m 39s
Adam Brown (Research Scientist)
00:00.780
But they were not confined to playing the same number of games that a human grandmaster could play. Because silicon chips are so fast, and because we can build them with such parallel processing, they're able to play many more
Adam Brown (Research Scientist)
00:17.700
games than any human could ever play in their lifetime.
Adam Brown (Research Scientist)
00:20.400
And what we found is that when they did that, they reached and then far surpassed the level of human chess players. They're less sample efficient, but that doesn't mean they're worse at chess. It is clear that they're much better at chess. So too with understanding.
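To make that distinction concrete, here is a toy sketch; the curve shapes, Elo-like numbers, and game counts are invented for illustration, not taken from the conversation. It shows how a less sample-efficient learner can still end up far stronger once it can consume vastly more games:

```python
# Toy illustration: sample efficiency vs. final capability.
# Assumed saturating learning curves, skill(n) = cap * (1 - exp(-n / rate));
# caps and rates are hypothetical values chosen only to show the crossover.
import math

def skill(n_games: int, cap: float, rate: float) -> float:
    """Hypothetical skill level after n_games of practice."""
    return cap * (1.0 - math.exp(-n_games / rate))

human_like = dict(cap=2800.0, rate=5e3)    # sample efficient, lower ceiling
engine_like = dict(cap=3600.0, rate=5e6)   # sample hungry, higher ceiling

for n in (10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} games: human-like {skill(n, **human_like):5.0f}, "
          f"engine-like {skill(n, **engine_like):5.0f}")
```

With few games the sample-efficient learner looks far stronger; at silicon-scale game counts the ordering flips, which is the shape of the argument being made here.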
Adam Brown (Research Scientist)
00:34.280
It is true that it is harder with these things: you need more samples to get them up to the same level of proficiency. But the question is what happens once they've reached that level, and we use the fact that they are so much more general and so much
Adam Brown (Research Scientist)
00:53.000
faster to push beyond it. Another example, perhaps, with the cat: a cat is in fact even more sample efficient than a human. A human takes a year to learn to walk; a cat learns to walk in a week or so. It's much, much faster. That
Adam Brown (Research Scientist)
01:10.360
does not mean that a cat is smarter than a human, and it does not mean that a cat is smarter than a large language model. The final question should be: what are the capabilities of these things? How far can we push the capabilities? And, except for the
Adam Brown (Research Scientist)
01:26.760
somewhat impoverished metric of sample efficiency, on every metric that counts, we have pushed these large language models far beyond the frontier of cat intelligence.
Janna Levin (Professor of Physics and Astronomy)
01:36.120
So, yes. I don't understand why we're not making cats. Sorry, what was
Yann LeCun (Chief AI Scientist)
01:45.080
that again? I mean, certainly the LLMs in question have much more accumulated knowledge than cats, or even humans for that matter. And we do have many examples of computers being far superior to humans in a number of different tasks, like playing chess, for
Yann LeCun (Chief AI Scientist)
02:02.040
example. That's
Yann LeCun (Chief AI Scientist)
02:04.040
humbling. I mean, it just means that humans just suck at chess. That's all it means. We really suck at chess, and at Go, by the way, even more. And there are many other tasks that computers are much better than us at solving. So certainly LLMs can accumulate
Yann LeCun (Chief AI Scientist)
02:23.160
a huge amount of knowledge, and some form of them
Yann LeCun (Chief AI Scientist)
02:27.400
can be trained to translate languages, to understand spoken language and translate it into another one, from a thousand languages to another thousand languages in any direction. No human can do this. So they do have superhuman capabilities.
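A quick back-of-the-envelope count of that claim, assuming roughly a thousand languages as stated: the number of directed translation pairs is

$$1000 \times 999 = 999{,}000 \approx 10^6,$$

on the order of a million directions, far beyond what any individual human translator could cover.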
Yann LeCun (Chief AI Scientist)
02:43.240
But the ability to learn quickly and efficiently, to apprehend a new problem that we've never been trained to solve and be able to come up with a solution, and to really understand a lot about how the world behaves: that is still out of reach of AI systems at the
Yann LeCun (Chief AI Scientist)
03:05.320
moment.
Adam Brown (Research Scientist)
03:06.600
I mean, we've had recent successes with this, where it is not the case that they're just taking problems that they've seen before, letter for letter, and looking up the answer in a lookup table, or even that they are in some sense doing pattern
Adam Brown (Research Scientist)
03:22.160
matching; rather, they're doing pattern matching at a sufficiently elevated level of abstraction that they're able to do things that they've never seen before and that no human can do.
Adam Brown (Research Scientist)
03:31.200
So there's a competition each year called the International Math Olympiad. It is for the very smartest math teenagers finishing high school in the entire world. They're all given six problems each year; it's the pinnacle of human intelligence. I have some mathematical
Adam Brown (Research Scientist)
03:48.720
ability, but when I look at these problems, I don't even know where to start.
Adam Brown (Research Scientist)
03:53.000
This year we fed them into our machine, as did a number of other LLM companies, and they took these problems, which they'd never seen before, which were completely fresh, didn't appear anywhere in the training data, were completely made up, took a whole bunch of
Adam Brown (Research Scientist)
04:08.000
different ideas, combined those different ideas, and got a score on these tests that was better than all but the top dozen humans on the planet.
Adam Brown (Research Scientist)
04:17.200
I think that's pretty impressive intelligence.
Janna Levin (Professor of Physics and Astronomy)
04:20.680
I guess the question is, back to this idea: do they understand? We can look at the mathematics of the model: there's some input data, and we understand what it's doing. Yet it is a black box, which is kind of fascinating; it's just so complex. It's not as though we can do
Janna Levin (Professor of Physics and Astronomy)
04:39.880
that with the human mind either.
Janna Levin (Professor of Physics and Astronomy)
04:41.360
It's not as though you can look at the inner workings and see exactly what they're doing; to some extent it is a black box. But we presume it's just doing these calculations: it's moving these matrices, it's working in some vector space, it's doing some higher dimensional
Janna Levin (Professor of Physics and Astronomy)
04:52.400
thing. I have some experience of understanding. I
Janna Levin (Professor of Physics and Astronomy)
04:56.040
guess people are still grasping at that. Is it having some experience of understanding? Is it important whether or not they experience understanding? Is that sufficient to call it comprehension of meaning?
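A minimal sketch of the "moving matrices in a vector space" picture Levin describes; all shapes and values here are made up, and this is one generic attention-style step rather than the architecture of any particular model:

```python
# Illustrative only: tokens as vectors, "moved" by matrix multiplications
# in a (toy) high-dimensional space. Every quantity below is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d_model = 8                                # toy hidden dimension
tokens = rng.normal(size=(4, d_model))     # 4 token vectors

# One attention-like step: project, score, mix.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
scores = Q @ K.T / np.sqrt(d_model)        # pairwise similarities
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
mixed = weights @ V                        # each token becomes a weighted mix

print(mixed.shape)  # (4, 8): still just vectors in the same space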