Do LLMs Understand? AI Pioneer Yann LeCun Spars with DeepMind’s Adam Brown. - part 13/15
2025-12-12_17-05 • 1h 15m 39s
Janna Levin (Professor of Physics and Astronomy)
00:00.620
…when they really are self-motivated agents, if that ever actually happens, that they won't collude, fight amongst themselves, or want to wrestle for power, that we won't be sitting back watching conflicts that we simply couldn't have imagined before.
Yann LeCun (Chief AI Scientist)
00:19.340
We give them clear objectives, and we build them in such a way that the only thing they can do is fulfill those objectives. Now, this doesn't mean it's going to be perfect, but the question of AI safety in the future, I'm worried about it in the same way
Yann LeCun (Chief AI Scientist)
00:33.900
that I'm worried about the reliability of turbojets,
Yann LeCun (Chief AI Scientist)
00:38.340
okay? I mean, turbojets, it's amazing. I don't know about you, and my dad was an aeronautical engineer, but I'm totally amazed by the fact that you can fly halfway around the world in complete safety on a two-engine airplane. It's amazing, right? And
Yann LeCun (Chief AI Scientist)
00:55.440
we feel completely safe doing this. It's a magical product of the science and engineering of the modern world. AI safety is a problem of this type. It's an engineering problem.
Yann LeCun (Chief AI Scientist)
01:10.400
I think the fears are caused by people who think about science-fiction scenarios where somewhere, someone invents the secret to superintelligence, turns on the machine, and the next second it takes over the world. That is complete BS. The world doesn't work this
Yann LeCun (Chief AI Scientist)
01:28.880
way. Certainly the world, and technology and science, don't work this
Yann LeCun (Chief AI Scientist)
01:32.160
way. The emergence of superintelligence is not going to be an event where suddenly we see we have superintelligent systems that can do superintelligent tasks; there is going to be a kind of continuous progress, one step at a time.
Yann LeCun (Chief AI Scientist)
01:48.400
But, you know, we're going to find some better recipe to build AI systems that may have a more general intelligence than what we currently have, and we'll have systems, there's no question, that are smarter than humans. But we'll build them so that they fulfill the
Yann LeCun (Chief AI Scientist)
02:04.800
goals we give them, subject to guardrails.
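As a rough illustration of the "objectives subject to guardrails" idea LeCun sketches above (a toy sketch, not his actual architecture; every name in it is hypothetical), a planner can score candidate actions by a task objective evaluated on a world model's predicted outcome, while treating guardrails as hard constraints that are never traded off:

```python
# Illustrative sketch only: objective-driven action selection with guardrails.
# Not LeCun's architecture; all types and functions here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List, Optional

State = dict    # stand-in for whatever state representation the system uses
Action = str

@dataclass
class Guardrail:
    name: str
    violated: Callable[[State], bool]  # True if a predicted state breaks the rule

def choose_action(state: State,
                  candidates: List[Action],
                  world_model: Callable[[State, Action], State],
                  task_cost: Callable[[State], float],
                  guardrails: List[Guardrail]) -> Optional[Action]:
    """Pick the candidate whose predicted outcome minimizes the task objective,
    subject to never violating any guardrail."""
    best_action, best_cost = None, float("inf")
    for action in candidates:
        predicted = world_model(state, action)           # anticipate the outcome
        if any(g.violated(predicted) for g in guardrails):
            continue                                     # hard constraint, never traded off
        cost = task_cost(predicted)
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action  # None means every candidate violated some guardrail
```

The point of the sketch is only that the objective and the guardrails enter the loop differently: the objective is optimized, while the guardrails act as a filter the optimizer cannot override.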
Janna Levin (Professor of Physics and Astronomy)
02:08.400
Um, I was going to again question this idea: we know that if we can code them in a certain way, somebody could re-code them, and there's the concept of bad actors. But before we fall into that hole, I have a plant in the audience. Does my plant have a mic?
Janna Levin (Professor of Physics and Astronomy)
02:28.080
Does my plant know who he is? Meredith, Isaac, does my plant have a mic? Yes? Oh, but he doesn't have the mic. Okay, David, can you shout?
Janna Levin (Professor of Physics and Astronomy)
02:41.480
Okay, so I want to introduce the philosopher of mind David Chalmers. I'm going to give you a very brief introduction. David, I can't see you, but I said that you could be my plant and ask a question. Do you want to throw something down here?
David Chalmers (Professor of Philosophy and Neural Science)
03:01.000
Okay. Janna asked me to ask a question about AI and consciousness. Okay, so you both said, I think, roughly, that current AI systems are probably not conscious,
David Chalmers (Professor of Philosophy and Neural Science)
03:21.120
and that some future AI systems, possibly descendants of the ones we have today, probably will be conscious. So I guess I want to know: first, what requirements for consciousness do you think current systems are lacking;
David Chalmers (Professor of Philosophy and Neural Science)
03:38.640
and then, on the positive side of that: second, what steps do you think we need to take in order to build AI systems which are conscious; and third, when is that going to happen?
Yann LeCun (Chief AI Scientist)
03:55.000
Okay, I'll give a quick stab at this, and David already knows my answer. First of all, I don't really know how to define consciousness, and I don't attribute much importance to it.
Yann LeCun (Chief AI Scientist)
04:09.360
And this is an insult to David, I'm sorry, because he has devoted his entire career to it.
Yann LeCun (Chief AI Scientist)
04:17.280
Okay, that's a different thing: subjective experience. So clearly we are going to have systems that have subjective experience, that have emotions. Emotions, to some extent, are an anticipation of outcome.
Yann LeCun (Chief AI Scientist)
04:31.560
If we have systems with world models that are capable of anticipating the outcome of a situation, perhaps resulting from their actions, they're going to have emotions, because they can predict whether something is going to end up good or bad on
Yann LeCun (Chief AI Scientist)
04:48.120
the way to fulfilling the objectives, right?
Yann LeCun (Chief AI Scientist)
04:50.080
So they're going to have all of those characteristics. Now, I don't know how to define consciousness in these terms, but perhaps consciousness would be the ability of the system to observe itself and configure itself to solve a particular subproblem that it's facing.
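LeCun's "emotions as anticipation of outcome" remark can be made concrete with another small sketch, again purely illustrative and hypothetical rather than a description of any real system: a world model is rolled forward along a candidate plan, and the predicted discounted cost plays the role of the anticipated good-or-bad signal.

```python
# Illustrative sketch only: "emotion" as an anticipated-outcome signal computed
# by rolling a world model forward along a plan. All names are hypothetical.
from typing import Callable, List

State = dict
Action = str

def anticipated_cost(state: State,
                     plan: List[Action],
                     world_model: Callable[[State, Action], State],
                     objective_cost: Callable[[State], float],
                     discount: float = 0.95) -> float:
    """Predicted discounted cost of following `plan` from `state`.
    A high value means the system anticipates a bad outcome on the way to
    fulfilling its objective; a low value means it anticipates a good one."""
    total, weight = 0.0, 1.0
    for action in plan:
        state = world_model(state, action)      # anticipate the next situation
        total += weight * objective_cost(state)
        weight *= discount
    return total
```

Read this way, the "emotion" is just the scalar the planner already computes when it anticipates how a course of action will turn out.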