Ilya Sutskever – We're moving from the age of scaling to the age of research - part 8/17
2025-11-25_17-29 • 1h 36m 3s
Dwarkesh Patel (Host)
00:00.840
There have been public estimates that companies like OpenAI spend on the order of $5–6 billion a year just on experiments so far. This is separate from the amount of money they're spending on inference and so forth. So it seems like they're spending more per year running research experiments than you guys have in total funding.
Ilya Sutskever (Co-founder and Chief Scientist)
00:21.000
I think it's a question of what you do with it. In their case, and in the case of others, I think there is a lot more demand on the training compute. There are a lot more different work streams, there are different modalities, there is just more stuff. And so it becomes fragmented.
Dwarkesh Patel (Host)
00:42.440
How will SSI make money?
Ilya Sutskever (Co-founder and Chief Scientist)
00:44.280
You know, my answer to this question is something like: right now we just focus on the research, and then the answer to this question will reveal itself. I think there will be lots of possible answers.
Dwarkesh Patel (Host)
00:58.480
Hm. Is SSI's plan still to straight-shot superintelligence?
Ilya Sutskever (Co-founder and Chief Scientist)
01:02.880
Maybe. I think there is merit to it, a lot of merit, because it's very nice to not be affected by the day-to-day market competition. But I think there are two reasons that may cause us to change the plan. One is pragmatic: if timelines turn out to be long, which they might. And second, I think there is a lot of value in the best and most powerful AI being out there impacting the world. Yeah, I think this is a meaningfully valuable thing.
Dwarkesh Patel (Host)
01:44.200
But then why is your default plan to straight-shot superintelligence? Because it sounds like OpenAI, Anthropic, and all these other companies, their explicit thinking is: look, we have weaker intelligences that the public can get used to and prepare for. Why is it potentially better to build the superintelligence directly?
Ilya Sutskever (Co-founder and Chief Scientist)
02:04.360
So I'll make the case for and against.
Dwarkesh Patel (Host)
02:06.280
Yeah.
Ilya Sutskever (Co-founder and Chief Scientist)
02:06.840
The case for is this: one of the challenges people face when they're in the market is that they have to participate in the rat race. And the rat race is quite difficult, in that it exposes you to difficult trade-offs which you need to make. It is nice to say: we'll insulate ourselves from all this, just focus on the research, and come out only when we are ready and not before. But the counterpoint is valid too, and those are opposing forces. The counterpoint is: hey, it is useful for the world to see powerful AI. It is useful for the world to see powerful AI because that's the only way you can communicate it.
Dwarkesh Patel (Host)
02:53.320
Well, I guess not even just that you can communicate the idea, but...
Ilya Sutskever (Co-founder and Chief Scientist)
02:56.160
Communicate the AI. Not the idea. Communicate the AI.
Dwarkesh Patel (Host)
03:00.760
What do you mean, communicate the AI?
Ilya Sutskever (Co-founder and Chief Scientist)
03:02.000
Okay, so let's suppose you read an essay about AI. And the essay says AI is going to be this, and AI is going to be that. And you read it and you say, "Okay, this is an interesting essay."
Dwarkesh Patel (Host)
03:13.040
Right.
Ilya Sutskever (Co-founder and Chief Scientist)
03:13.760
Now suppose you see an AI doing this and an AI doing that. It is incomparable. Basically, I think that there is a big benefit from AI being in the public, and that would be a reason for us to not be quite straight-shot.
Dwarkesh Patel (Host)
03:33.440
Yeah. Well, I guess it's not even that, though I do think that is an important part of it. The other big thing is, I can't think of another discipline in human engineering and research where the end artifact was made safer mostly through just thinking about how to make it safe. Why are airplane crashes per mile so much lower today than they were decades ago? Why is it so much harder to find a bug in Linux than it would have been decades ago? I think it's mostly because these systems were deployed to the world, you noticed failures, those failures were corrected, and the systems became more robust. Now, I'm not sure why AGI and superhuman intelligence would be any different, especially given that (and I hope we're going to get to this) the harms of superintelligence are not just about having some malevolent paperclipper out there. It's that this is a really powerful thing, and we don't even know how to conceptualize how people will interact with it or what people will do with it. Having gradual access to it seems like a better way to spread out its impact and to help people prepare for it.
Ilya Sutskever (Co-founder and Chief Scientist)
04:43.320
Well, I think on this point, even in the straight-shot scenario, you would still do a gradual release of it. That's how I would imagine it. The gradualism would be an inherent component of any plan; it's just a question of what is the first thing that you get out of the door. That's number one.

Number two, I also think, you know, I believe you have advocated for continual learning more than other people. And I actually think that this is an important and correct thing, and here is why. I'll give you another example of how language affects thinking. In this case, it will be two words, two words that have shaped everyone's thinking, I maintain. First word: AGI. Second word: pre-training. Let me explain.

The term AGI, why does this term exist? It's a very particular term. Why does it exist? There's a reason. The reason the term AGI exists is, in my opinion, not so much because it's a very important, essential descriptor of some end state of intelligence, but because it is a reaction to a different term that existed, and that term is narrow AI. If you go back to the ancient history of game-playing AI, of checkers AI, chess AI, computer-games AI, everyone would say, "Look at this narrow intelligence. Sure, the chess AI can beat Kasparov, but it can't do anything else. It is so narrow: artificial narrow intelligence." So as a reaction to this, some people said, "Well, this is not good. It is so narrow. What we need is general AI." General AI, an AI that can just do all the things. And that term just got a lot of traction.