WEBVTT

00:00:05.504 --> 00:00:07.720 align:center
I'm going to talk
about a failure of intuition

00:00:07.744 --> 00:00:09.344 align:center
that many of us suffer from.

00:00:09.984 --> 00:00:13.024 align:center
It's really a failure
to detect a certain kind of danger.

00:00:13.864 --> 00:00:15.600 align:center
I'm going to describe a scenario

00:00:15.624 --> 00:00:18.880 align:center
that I think is both terrifying

00:00:18.904 --> 00:00:20.664 align:center
and likely to occur,

00:00:21.344 --> 00:00:23.000 align:center
and that's not a good combination,

00:00:23.024 --> 00:00:24.560 align:center
as it turns out.

00:00:24.584 --> 00:00:27.040 align:center
And yet rather than be scared,
most of you will feel

00:00:27.064 --> 00:00:29.144 align:center
that what I'm talking about
is kind of cool.

00:00:29.704 --> 00:00:32.680 align:center
I'm going to describe
how the gains we make

00:00:32.704 --> 00:00:34.480 align:center
in artificial intelligence

00:00:34.504 --> 00:00:36.280 align:center
could ultimately destroy us.

00:00:36.304 --> 00:00:39.760 align:center
And in fact, I think it's very difficult
to see how they won't destroy us

00:00:39.784 --> 00:00:41.464 align:center
or inspire us to destroy ourselves.

00:00:41.904 --> 00:00:43.760 align:center
And yet if you're anything like me,

00:00:43.784 --> 00:00:46.440 align:center
you'll find that it's fun
to think about these things.

00:00:46.464 --> 00:00:49.840 align:center
And that response is part of the problem.

00:00:49.864 --> 00:00:51.584 align:center
OK? That response should worry you.

00:00:52.424 --> 00:00:55.080 align:center
And if I were to convince you in this talk

00:00:55.104 --> 00:00:58.520 align:center
that we were likely
to suffer a global famine,

00:00:58.544 --> 00:01:01.600 align:center
either because of climate change
or some other catastrophe,

00:01:01.624 --> 00:01:05.040 align:center
and that your grandchildren,
or their grandchildren,

00:01:05.064 --> 00:01:06.864 align:center
are very likely to live like this,

00:01:07.704 --> 00:01:08.904 align:center
you wouldn't think,

00:01:09.944 --> 00:01:11.280 align:center
"Interesting.

00:01:11.304 --> 00:01:12.504 align:center
I like this TED Talk."

00:01:13.704 --> 00:01:15.224 align:center
Famine isn't fun.

00:01:16.304 --> 00:01:19.680 align:center
Death by science fiction,
on the other hand, is fun,

00:01:19.704 --> 00:01:23.680 align:center
and one of the things that worries me most
about the development of AI at this point

00:01:23.704 --> 00:01:27.800 align:center
is that we seem unable to marshal
an appropriate emotional response

00:01:27.824 --> 00:01:29.640 align:center
to the dangers that lie ahead.

00:01:29.664 --> 00:01:32.864 align:center
I am unable to marshal this response,
and I'm giving this talk.

00:01:34.624 --> 00:01:37.320 align:center
It's as though we stand before two doors.

00:01:37.344 --> 00:01:38.600 align:center
Behind door number one,

00:01:38.624 --> 00:01:41.920 align:center
we stop making progress
in building intelligent machines.

00:01:41.944 --> 00:01:45.960 align:center
Our computer hardware and software
just stops getting better for some reason.

00:01:45.984 --> 00:01:48.984 align:center
Now take a moment
to consider why this might happen.

00:01:49.584 --> 00:01:53.240 align:center
I mean, given how valuable
intelligence and automation are,

00:01:53.264 --> 00:01:56.784 align:center
we will continue to improve our technology
if we are at all able to.

00:01:57.704 --> 00:01:59.371 align:center
What could stop us from doing this?

00:02:00.304 --> 00:02:02.104 align:center
A full-scale nuclear war?

00:02:03.504 --> 00:02:05.064 align:center
A global pandemic?

00:02:06.824 --> 00:02:08.144 align:center
An asteroid impact?

00:02:10.144 --> 00:02:12.720 align:center
Justin Bieber becoming
president of the United States?

00:02:12.744 --> 00:02:15.024 align:center
(Laughter)

00:02:17.264 --> 00:02:21.184 align:center
The point is, something would have to
destroy civilization as we know it.

00:02:21.864 --> 00:02:26.160 align:center
You have to imagine
how bad it would have to be

00:02:26.184 --> 00:02:29.520 align:center
to prevent us from making
improvements in our technology

00:02:29.544 --> 00:02:30.760 align:center
permanently,

00:02:30.784 --> 00:02:32.800 align:center
generation after generation.

00:02:32.824 --> 00:02:34.960 align:center
Almost by definition,
this is the worst thing

00:02:34.984 --> 00:02:37.000 align:center
that's ever happened in human history.

00:02:37.024 --> 00:02:38.320 align:center
So the only alternative,

00:02:38.344 --> 00:02:40.680 align:center
and this is what lies
behind door number two,

00:02:40.704 --> 00:02:43.840 align:center
is that we continue
to improve our intelligent machines

00:02:43.864 --> 00:02:45.464 align:center
year after year after year.

00:02:46.224 --> 00:02:49.864 align:center
At a certain point, we will build
machines that are smarter than we are,

00:02:50.584 --> 00:02:53.200 align:center
and once we have machines
that are smarter than we are,

00:02:53.224 --> 00:02:55.200 align:center
they will begin to improve themselves.

00:02:55.224 --> 00:02:57.960 align:center
And then we risk what
the mathematician I.J. Good called

00:02:57.984 --> 00:02:59.760 align:center
an "intelligence explosion,"

00:02:59.784 --> 00:03:01.784 align:center
that the process could get away from us.

00:03:02.624 --> 00:03:05.440 align:center
Now, this is often caricatured,
as I have here,

00:03:05.464 --> 00:03:08.680 align:center
as a fear that armies of malicious robots

00:03:08.704 --> 00:03:09.960 align:center
will attack us.

00:03:09.984 --> 00:03:12.680 align:center
But that isn't the most likely scenario.

00:03:12.704 --> 00:03:17.560 align:center
It's not that our machines
will become spontaneously malevolent.

00:03:17.584 --> 00:03:20.200 align:center
The concern is really
that we will build machines

00:03:20.224 --> 00:03:22.280 align:center
that are so much
more competent than we are

00:03:22.304 --> 00:03:26.080 align:center
that the slightest divergence
between their goals and our own

00:03:26.104 --> 00:03:27.304 align:center
could destroy us.

00:03:28.464 --> 00:03:30.544 align:center
Just think about how we relate to ants.

00:03:31.104 --> 00:03:32.760 align:center
We don't hate them.

00:03:32.784 --> 00:03:34.840 align:center
We don't go out of our way to harm them.

00:03:34.864 --> 00:03:37.240 align:center
In fact, sometimes
we take pains not to harm them.

00:03:37.264 --> 00:03:39.280 align:center
We step over them on the sidewalk.

00:03:39.304 --> 00:03:41.440 align:center
But whenever their presence

00:03:41.464 --> 00:03:43.960 align:center
seriously conflicts with one of our goals,

00:03:43.984 --> 00:03:46.461 align:center
let's say when constructing
a building like this one,

00:03:46.485 --> 00:03:48.445 align:center
we annihilate them without a qualm.

00:03:48.984 --> 00:03:51.920 align:center
The concern is that we will
one day build machines

00:03:51.944 --> 00:03:54.680 align:center
that, whether they're conscious or not,

00:03:54.704 --> 00:03:56.704 align:center
could treat us with similar disregard.

00:03:58.264 --> 00:04:01.024 align:center
Now, I suspect this seems
far-fetched to many of you.

00:04:01.864 --> 00:04:08.200 align:center
I bet there are those of you who doubt
that superintelligent AI is possible,

00:04:08.224 --> 00:04:09.880 align:center
much less inevitable.

00:04:09.904 --> 00:04:13.524 align:center
But then you must find something wrong
with one of the following assumptions.

00:04:13.548 --> 00:04:15.120 align:center
And there are only three of them.

00:04:16.304 --> 00:04:21.023 align:center
Intelligence is a matter of information
processing in physical systems.

00:04:21.824 --> 00:04:24.439 align:center
Actually, this is a little bit more
than an assumption.

00:04:24.463 --> 00:04:27.920 align:center
We have already built
narrow intelligence into our machines,

00:04:27.944 --> 00:04:29.960 align:center
and many of these machines perform

00:04:29.984 --> 00:04:32.624 align:center
at a level of superhuman
intelligence already.

00:04:33.344 --> 00:04:35.920 align:center
And we know that mere matter

00:04:35.944 --> 00:04:38.560 align:center
can give rise to what is called
"general intelligence,"

00:04:38.584 --> 00:04:42.240 align:center
an ability to think flexibly
across multiple domains,

00:04:42.264 --> 00:04:45.400 align:center
because our brains have managed it. Right?

00:04:45.424 --> 00:04:49.360 align:center
I mean, there's just atoms in here,

00:04:49.384 --> 00:04:53.880 align:center
and as long as we continue
to build systems of atoms

00:04:53.904 --> 00:04:56.600 align:center
that display more and more
intelligent behavior,

00:04:56.624 --> 00:04:59.160 align:center
we will eventually,
unless we are interrupted,

00:04:59.184 --> 00:05:02.560 align:center
we will eventually
build general intelligence

00:05:02.584 --> 00:05:03.880 align:center
into our machines.

00:05:03.904 --> 00:05:07.560 align:center
It's crucial to realize
that the rate of progress doesn't matter,

00:05:07.584 --> 00:05:10.760 align:center
because any progress
is enough to get us into the end zone.

00:05:10.784 --> 00:05:14.560 align:center
We don't need Moore's law to continue.
We don't need exponential progress.

00:05:14.584 --> 00:05:16.184 align:center
We just need to keep going.

00:05:17.984 --> 00:05:20.904 align:center
The second assumption
is that we will keep going.

00:05:21.504 --> 00:05:24.264 align:center
We will continue to improve
our intelligent machines.

00:05:25.504 --> 00:05:29.880 align:center
And given the value of intelligence --

00:05:29.904 --> 00:05:33.440 align:center
I mean, intelligence is either
the source of everything we value

00:05:33.464 --> 00:05:36.240 align:center
or we need it to safeguard
everything we value.

00:05:36.264 --> 00:05:38.520 align:center
It is our most valuable resource.

00:05:38.544 --> 00:05:40.080 align:center
So we want to do this.

00:05:40.104 --> 00:05:43.440 align:center
We have problems
that we desperately need to solve.

00:05:43.464 --> 00:05:46.664 align:center
We want to cure diseases
like Alzheimer's and cancer.

00:05:47.464 --> 00:05:51.400 align:center
We want to understand economic systems.
We want to improve our climate science.

00:05:51.424 --> 00:05:53.680 align:center
So we will do this, if we can.

00:05:53.704 --> 00:05:56.990 align:center
The train is already out of the station,
and there's no brake to pull.

00:05:58.384 --> 00:06:03.840 align:center
Finally, we don't stand
on a peak of intelligence,

00:06:03.864 --> 00:06:05.664 align:center
or anywhere near it, likely.

00:06:06.144 --> 00:06:08.040 align:center
And this really is the crucial insight.

00:06:08.064 --> 00:06:10.480 align:center
This is what makes
our situation so precarious,

00:06:10.504 --> 00:06:14.544 align:center
and this is what makes our intuitions
about risk so unreliable.

00:06:15.624 --> 00:06:18.344 align:center
Now, just consider the smartest person
who has ever lived.

00:06:19.144 --> 00:06:22.560 align:center
On almost everyone's shortlist here
is John von Neumann.

00:06:22.584 --> 00:06:25.920 align:center
I mean, the impression that von Neumann
made on the people around him,

00:06:25.944 --> 00:06:30.000 align:center
and this included the greatest
mathematicians and physicists of his time,

00:06:30.024 --> 00:06:31.960 align:center
is fairly well-documented.

00:06:31.984 --> 00:06:35.760 align:center
If only half the stories
about him are half true,

00:06:35.784 --> 00:06:37.000 align:center
there's no question

00:06:37.024 --> 00:06:39.480 align:center
he's one of the smartest people
who has ever lived.

00:06:39.504 --> 00:06:42.024 align:center
So consider the spectrum of intelligence.

00:06:42.824 --> 00:06:44.253 align:center
Here we have John von Neumann.

00:06:46.064 --> 00:06:47.398 align:center
And then we have you and me.

00:06:48.624 --> 00:06:49.920 align:center
And then we have a chicken.

00:06:49.944 --> 00:06:51.880 align:center
(Laughter)

00:06:51.904 --> 00:06:53.120 align:center
Sorry, a chicken.

00:06:53.144 --> 00:06:54.400 align:center
(Laughter)

00:06:54.424 --> 00:06:58.160 align:center
There's no reason for me to make this talk
more depressing than it needs to be.

00:06:58.184 --> 00:06:59.784 align:center
(Laughter)

00:07:00.843 --> 00:07:04.320 align:center
It seems overwhelmingly likely, however,
that the spectrum of intelligence

00:07:04.344 --> 00:07:07.464 align:center
extends much further
than we currently conceive,

00:07:08.384 --> 00:07:11.600 align:center
and if we build machines
that are more intelligent than we are,

00:07:11.624 --> 00:07:13.920 align:center
they will very likely
explore this spectrum

00:07:13.944 --> 00:07:15.800 align:center
in ways that we can't imagine,

00:07:15.824 --> 00:07:18.344 align:center
and exceed us in ways
that we can't imagine.

00:07:19.504 --> 00:07:23.840 align:center
And it's important to recognize that
this is true by virtue of speed alone.

00:07:23.864 --> 00:07:28.920 align:center
Right? So imagine if we just built
a superintelligent AI

00:07:28.944 --> 00:07:32.400 align:center
that was no smarter
than your average team of researchers

00:07:32.424 --> 00:07:34.720 align:center
at Stanford or MIT.

00:07:34.744 --> 00:07:37.720 align:center
Well, electronic circuits
function about a million times faster

00:07:37.744 --> 00:07:39.000 align:center
than biochemical ones,

00:07:39.024 --> 00:07:42.160 align:center
so this machine should think
about a million times faster

00:07:42.184 --> 00:07:44.000 align:center
than the minds that built it.

00:07:44.024 --> 00:07:45.680 align:center
So you set it running for a week,

00:07:45.704 --> 00:07:50.264 align:center
and it will perform 20,000 years
of human-level intellectual work,

00:07:50.904 --> 00:07:52.864 align:center
week after week after week.

00:07:54.144 --> 00:07:57.240 align:center
How could we even understand,
much less constrain,

00:07:57.264 --> 00:07:59.544 align:center
a mind making this sort of progress?

00:08:01.344 --> 00:08:03.480 align:center
The other thing that's worrying, frankly,

00:08:03.504 --> 00:08:08.480 align:center
is that, imagine the best case scenario.

00:08:08.504 --> 00:08:12.680 align:center
So imagine we hit upon a design
of superintelligent AI

00:08:12.704 --> 00:08:14.080 align:center
that has no safety concerns.

00:08:14.104 --> 00:08:17.360 align:center
We have the perfect design
the first time around.

00:08:17.384 --> 00:08:19.600 align:center
It's as though we've been handed an oracle

00:08:19.624 --> 00:08:21.640 align:center
that behaves exactly as intended.

00:08:21.664 --> 00:08:25.384 align:center
Well, this machine would be
the perfect labor-saving device.

00:08:26.184 --> 00:08:28.613 align:center
It can design the machine
that can build the machine

00:08:28.637 --> 00:08:30.400 align:center
that can do any physical work,

00:08:30.424 --> 00:08:31.880 align:center
powered by sunlight,

00:08:31.904 --> 00:08:34.600 align:center
more or less for the cost
of raw materials.

00:08:34.624 --> 00:08:37.880 align:center
So we're talking about
the end of human drudgery.

00:08:37.904 --> 00:08:40.704 align:center
We're also talking about the end
of most intellectual work.

00:08:41.704 --> 00:08:44.760 align:center
So what would apes like ourselves
do in this circumstance?

00:08:44.784 --> 00:08:48.864 align:center
Well, we'd be free to play Frisbee
and give each other massages.

00:08:50.344 --> 00:08:53.200 align:center
Add some LSD and some
questionable wardrobe choices,

00:08:53.224 --> 00:08:55.400 align:center
and the whole world
could be like Burning Man.

00:08:55.424 --> 00:08:57.064 align:center
(Laughter)

00:08:58.824 --> 00:09:00.824 align:center
Now, that might sound pretty good,

00:09:01.784 --> 00:09:04.160 align:center
but ask yourself what would happen

00:09:04.184 --> 00:09:06.920 align:center
under our current economic
and political order?

00:09:06.944 --> 00:09:09.360 align:center
It seems likely that we would witness

00:09:09.384 --> 00:09:13.520 align:center
a level of wealth inequality
and unemployment

00:09:13.544 --> 00:09:15.040 align:center
that we have never seen before.

00:09:15.064 --> 00:09:17.680 align:center
Absent a willingness
to immediately put this new wealth

00:09:17.704 --> 00:09:19.184 align:center
to the service of all humanity,

00:09:20.144 --> 00:09:23.760 align:center
a few trillionaires could grace
the covers of our business magazines

00:09:23.784 --> 00:09:26.224 align:center
while the rest of the world
would be free to starve.

00:09:26.824 --> 00:09:29.120 align:center
And what would the Russians
or the Chinese do

00:09:29.144 --> 00:09:31.760 align:center
if they heard that some company
in Silicon Valley

00:09:31.784 --> 00:09:34.520 align:center
was about to deploy a superintelligent AI?

00:09:34.544 --> 00:09:37.400 align:center
This machine would be capable
of waging war,

00:09:37.424 --> 00:09:39.640 align:center
whether terrestrial or cyber,

00:09:39.664 --> 00:09:41.344 align:center
with unprecedented power.

00:09:42.624 --> 00:09:44.480 align:center
This is a winner-take-all scenario.

00:09:44.504 --> 00:09:47.640 align:center
To be six months ahead
of the competition here

00:09:47.664 --> 00:09:50.440 align:center
is to be 500,000 years ahead,

00:09:50.464 --> 00:09:51.960 align:center
at a minimum.

00:09:51.984 --> 00:09:56.720 align:center
So it seems that even mere rumors
of this kind of breakthrough

00:09:56.744 --> 00:09:59.120 align:center
could cause our species to go berserk.

00:09:59.144 --> 00:10:02.040 align:center
Now, one of the most frightening things,

00:10:02.064 --> 00:10:04.840 align:center
in my view, at this moment,

00:10:04.864 --> 00:10:09.160 align:center
are the kinds of things
that AI researchers say

00:10:09.184 --> 00:10:10.744 align:center
when they want to be reassuring.

00:10:11.504 --> 00:10:14.960 align:center
And the most common reason
we're told not to worry is time.

00:10:14.984 --> 00:10:17.040 align:center
This is all a long way off,
don't you know.

00:10:17.064 --> 00:10:19.504 align:center
This is probably 50 or 100 years away.

00:10:20.224 --> 00:10:21.480 align:center
One researcher has said,

00:10:21.504 --> 00:10:23.080 align:center
"Worrying about AI safety

00:10:23.104 --> 00:10:25.384 align:center
is like worrying
about overpopulation on Mars."

00:10:26.620 --> 00:10:28.240 align:center
This is the Silicon Valley version

00:10:28.264 --> 00:10:30.640 align:center
of "don't worry your
pretty little head about it."

00:10:30.664 --> 00:10:32.000 align:center
(Laughter)

00:10:32.024 --> 00:10:33.920 align:center
No one seems to notice

00:10:33.944 --> 00:10:36.560 align:center
that referencing the time horizon

00:10:36.584 --> 00:10:39.160 align:center
is a total non sequitur.

00:10:39.184 --> 00:10:42.440 align:center
If intelligence is just a matter
of information processing,

00:10:42.464 --> 00:10:45.120 align:center
and we continue to improve our machines,

00:10:45.144 --> 00:10:48.024 align:center
we will produce
some form of superintelligence.

00:10:48.824 --> 00:10:52.480 align:center
And we have no idea
how long it will take us

00:10:52.504 --> 00:10:54.904 align:center
to create the conditions
to do that safely.

00:10:56.704 --> 00:10:58.000 align:center
Let me say that again.

00:10:58.024 --> 00:11:01.840 align:center
We have no idea how long it will take us

00:11:01.864 --> 00:11:04.104 align:center
to create the conditions
to do that safely.

00:11:05.424 --> 00:11:08.880 align:center
And if you haven't noticed,
50 years is not what it used to be.

00:11:08.904 --> 00:11:11.360 align:center
This is 50 years in months.

00:11:11.384 --> 00:11:13.224 align:center
This is how long we've had the iPhone.

00:11:13.944 --> 00:11:16.544 align:center
This is how long "The Simpsons"
has been on television.

00:11:17.184 --> 00:11:19.560 align:center
Fifty years is not that much time

00:11:19.584 --> 00:11:22.744 align:center
to meet one of the greatest challenges
our species will ever face.

00:11:24.144 --> 00:11:28.160 align:center
Once again, we seem to be failing
to have an appropriate emotional response

00:11:28.184 --> 00:11:30.880 align:center
to what we have every reason
to believe is coming.

00:11:30.904 --> 00:11:34.880 align:center
The computer scientist Stuart Russell
has a nice analogy here.

00:11:34.904 --> 00:11:39.800 align:center
He said, imagine that we received
a message from an alien civilization,

00:11:39.824 --> 00:11:41.520 align:center
which read:

00:11:41.544 --> 00:11:43.080 align:center
"People of Earth,

00:11:43.104 --> 00:11:45.464 align:center
we will arrive on your planet in 50 years.

00:11:46.304 --> 00:11:47.880 align:center
Get ready."

00:11:47.904 --> 00:11:52.160 align:center
And now we're just counting down
the months until the mothership lands?

00:11:52.184 --> 00:11:55.184 align:center
We would feel a little
more urgency than we do.

00:11:57.184 --> 00:11:59.040 align:center
Another reason we're told not to worry

00:11:59.064 --> 00:12:02.080 align:center
is that these machines
can't help but share our values

00:12:02.104 --> 00:12:04.720 align:center
because they will be literally
extensions of ourselves.

00:12:04.744 --> 00:12:06.560 align:center
They'll be grafted onto our brains,

00:12:06.584 --> 00:12:08.944 align:center
and we'll essentially
become their limbic systems.

00:12:09.624 --> 00:12:11.040 align:center
Now take a moment to consider

00:12:11.064 --> 00:12:14.240 align:center
that the safest
and only prudent path forward,

00:12:14.264 --> 00:12:15.600 align:center
recommended,

00:12:15.624 --> 00:12:18.424 align:center
is to implant this technology
directly into our brains.

00:12:19.104 --> 00:12:22.480 align:center
Now, this may in fact be the safest
and only prudent path forward,

00:12:22.504 --> 00:12:25.560 align:center
but usually one's safety concerns
about a technology

00:12:25.584 --> 00:12:29.240 align:center
have to be pretty much worked out
before you stick it inside your head.

00:12:29.264 --> 00:12:31.280 align:center
(Laughter)

00:12:31.304 --> 00:12:36.640 align:center
The deeper problem is that
building superintelligent AI on its own

00:12:36.664 --> 00:12:38.400 align:center
seems likely to be easier

00:12:38.424 --> 00:12:40.280 align:center
than building superintelligent AI

00:12:40.304 --> 00:12:42.080 align:center
and having the completed neuroscience

00:12:42.104 --> 00:12:44.784 align:center
that allows us to seamlessly
integrate our minds with it.

00:12:45.304 --> 00:12:48.480 align:center
And given that the companies
and governments doing this work

00:12:48.504 --> 00:12:52.160 align:center
are likely to perceive themselves
as being in a race against all others,

00:12:52.184 --> 00:12:55.440 align:center
given that to win this race
is to win the world,

00:12:55.464 --> 00:12:57.920 align:center
provided you don't destroy it
in the next moment,

00:12:57.944 --> 00:13:00.560 align:center
then it seems likely
that whatever is easier to do

00:13:00.584 --> 00:13:01.784 align:center
will get done first.

00:13:03.064 --> 00:13:05.920 align:center
Now, unfortunately,
I don't have a solution to this problem,

00:13:05.944 --> 00:13:08.560 align:center
apart from recommending
that more of us think about it.

00:13:08.584 --> 00:13:10.960 align:center
I think we need something
like a Manhattan Project

00:13:10.984 --> 00:13:13.000 align:center
on the topic of artificial intelligence.

00:13:13.024 --> 00:13:15.760 align:center
Not to build it, because I think
we'll inevitably do that,

00:13:15.784 --> 00:13:19.120 align:center
but to understand
how to avoid an arms race

00:13:19.144 --> 00:13:22.640 align:center
and to build it in a way
that is aligned with our interests.

00:13:22.664 --> 00:13:24.800 align:center
When you're talking
about superintelligent AI

00:13:24.824 --> 00:13:27.080 align:center
that can make changes to itself,

00:13:27.104 --> 00:13:31.720 align:center
it seems that we only have one chance
to get the initial conditions right,

00:13:31.744 --> 00:13:33.800 align:center
and even then we will need to absorb

00:13:33.824 --> 00:13:36.864 align:center
the economic and political
consequences of getting them right.

00:13:38.264 --> 00:13:40.320 align:center
But the moment we admit

00:13:40.344 --> 00:13:44.344 align:center
that information processing
is the source of intelligence,

00:13:45.224 --> 00:13:50.024 align:center
that some appropriate computational system
is what the basis of intelligence is,

00:13:50.864 --> 00:13:54.624 align:center
and we admit that we will improve
these systems continuously,

00:13:55.784 --> 00:14:00.240 align:center
and we admit that the horizon
of cognition very likely far exceeds

00:14:00.264 --> 00:14:01.464 align:center
what we currently know,

00:14:02.624 --> 00:14:03.840 align:center
then we have to admit

00:14:03.864 --> 00:14:06.504 align:center
that we are in the process
of building some sort of god.

00:14:07.904 --> 00:14:09.480 align:center
Now would be a good time

00:14:09.504 --> 00:14:11.457 align:center
to make sure it's a god we can live with.

00:14:12.624 --> 00:14:14.160 align:center
Thank you very much.

00:14:14.184 --> 00:14:19.277 align:center
(Applause)

