Will AI Enhance or Hack Humanity? – Fei-Fei Li & Yuval Noah Harari in Conversation with Nicholas Thompson
In a discussion that covers ethics in technology, hacking humans, free will, and how to avoid potential dystopian scenarios, historian and philosopher Yuval Noah Harari speaks with Fei-Fei Li, renowned computer scientist and Co-Director of Stanford University’s Human-Centered AI Institute — in a conversation moderated by Nicholas Thompson, WIRED’s Editor-in-Chief.
Transcript
My name is Rob Reich, I’m delighted to welcome you here to Stanford University for an evening of conversation with Yuval Harari, Fei-Fei Li, and Nick Thompson.
I’m a professor of political science here
and the Faculty Director of
the Stanford Center for Ethics and Society,
which is a co-sponsor of tonight’s event,
along with the Stanford Institute
for Human Centered Artificial Intelligence
and the Stanford Humanities Center.
Our topic tonight is a big one.
We’re going to be thinking together
about the promises and perils of artificial intelligence.
The technology quickly reshaping our economic,
social, and political worlds, for better or for worse.
The questions raised by the emergence of AI
are by now familiar, at least to many people
here in Silicon Valley but, I think it’s fair
to say that their importance is only growing.
What will the future of work look like
when millions of jobs can be automated?
Are we doomed or perhaps blessed to live in a world
where algorithms make decisions instead of humans.
And these are smaller questions in the big scheme of things.
What, you might ask, are the large ones?
Well, here are three.
What will become of the human species
if machine intelligence approaches
or exceeds that of an ordinary human being?
As a technology that currently relies
on massive centralized pools of data,
does AI favor authoritarian centralized governments
over more decentralized democratic governance?
And are we at the start now of an AI arms race?
And what will happen if powerful systems of AI,
especially when deployed for purposes
like facial recognition, are in the hands
of authoritarian rulers?
These challenges only scratch the surface when it comes
to fully wrestling with the implications of AI,
as the technology continues to improve
and its use cases continue to multiply.
I want to mention the format of the evening event.
First, given the vast areas of expertise
that Yuval and Fei-Fei have,
when you ask questions via Slido,
those questions should pertain
or be limited to the topics under discussion tonight.
So, this web interface that we’re using,
Slido allows people to upvote and downvote questions.
So, you can see them now if you have
an internet communication device.
If you don’t have one, you can take one of these postcards,
which hopefully you got outside
and on the back you can fill in a question you might have
about the evening event, and we will collect them at the end,
and the Stanford Humanities Center
will try to foster some type of conversation
on the basis of those questions.
Couple housekeeping things,
if you didn’t purchase one already,
Yuval’s books are available for sale
outside in the lobby after the event.
A reminder to please turn your cell phone ringers off.
And we will have 90 minutes
for our moderated conversation here
and will end sharp after 90 minutes.
Now, I’m going to leave the stage in just a minute
and allow a really amazing undergraduate student
here at Stanford to introduce our guests.
Her name is Anna-Sofia Lesiv,
let me just tell you a bit about her.
She’s a junior here at Stanford majoring in Economics
with a minor in Computer Science
and outside the classroom, Anna-Sofia is a journalist
whose work has been featured in The Globe and Mail,
Al Jazeera, The Mercury News, The Seattle Times,
and this campus's paper of record, The Stanford Daily.
She’s currently the Executive Editor of The Daily
and her Daily Magazine article
from earlier in the year called CS Plus Ethics,
examined the history of computer science
and ethics education at Stanford
and it won the student prize for best journalism of 2018.
She continues to publish probing examinations
of the ethical challenges faced by technologists here
and elsewhere so, ladies and gentlemen
I invite you to remember this name
for you’ll be reading about her
or reading her articles, or likely both,
please welcome Stanford junior, Anna-Sofia Lesiv.
[audience clapping]
Thank you very much for the introduction, Rob.
Well it’s my great honor now,
to introduce our three guests tonight,
Yuval Noah Harari, Fei-Fei Li, and Nicholas Thompson.
Professor Yuval Noah Harari is a historian,
futurist, philosopher, and professor at Hebrew University.
The world also knows him for authoring some of
the most ambitious and influential books of our decade.
Professor Harari’s internationally best-selling books,
which have sold millions of copies worldwide,
have covered a dizzying array of subject matter
from narrativizing the entire history
of the human race in Sapiens,
to predicting the future awaiting humanity,
and even coining a new faith called Dataism, in Homo Deus.
Professor Harari has become a beloved figure
in Silicon Valley, whose readings are assigned
in Stanford’s classrooms and whose name
is whispered through the hallways
of the comparative literature
and computer science departments, alike.
His most recent book is 21 Lessons for the 21st Century,
which focuses on the technological,
social, political, and ecological challenges
of the present moment.
In this work, Harari cautions
that as technological breakthroughs
continue to accelerate, we will have less
and less time to reflect upon the meaning
and consequences of the changes they bring.
And this urgency, is what charges
Professor Fei-Fei Li’s work everyday,
in her role as the Co-Director of Stanford’s
Human-Centered AI Institute.
This institute is one of the first
to insist that AI is not merely the domain of technologists
but a fundamentally interdisciplinary
and ultimately human issue.
Her fascination with the fundamental questions
of human intelligence is what piqued her interest
in neuroscience, as she eventually became
one of the world’s greatest experts
in the fields of computer vision, machine learning,
and cognitive and computational neuroscience.
She’s published over a hundred scientific articles
in leading journals and has had research supported
by the National Science Foundation, Microsoft,
and the Sloan Foundation.
From 2013 to 2018, Professor Fei-Fei Li served as
the Director of Stanford’s AI lab
and between January, 2017 and September, 2018,
Professor Fei-Fei Li served as Vice President at Google
and Chief Scientist of AI and Machine Learning
at Google Cloud.
Nicholas Thompson is the Editor-In-Chief of Wired magazine,
a position he’s held since January, 2017.
Under Mr. Thompson’s leadership,
the topic of artificial intelligence
has come to hold a special place at the magazine.
Not only has Wired assigned more feature stories
on AI than on any other subject,
but it is the only specific topic
with a full-time reporter assigned to it.
It’s no wonder then, that Professors Harari
and Li are no strangers to its pages.
Mr. Thompson has led discussions
with the world’s leaders in technology and AI,
including Mark Zuckerberg on Facebook and Privacy,
French President, Emmanuel Macron on France’s AI strategy,
and Ray Kurzweil on the ethics and limits of AI.
Mr. Thompson is a Stanford University graduate
who earned his BA, double majoring
in earth systems and political science
and impressively even completed a third degree in economics.
Of course, I would be remiss if I did not mention
that Mr. Thompson cut his journalistic teeth
in the opinions section of the Stanford Daily so,
Nick, that makes both of us.
Like all our guests today, I’m at once fascinated
and worried by the challenges
that artificial intelligence poses for our society.
One of my goals at Stanford has been
to write about and document the challenge
of educating a generation of students whose lives
and workplaces, will eventually be transformed by AI.
Most recently, I published an article
called Complacent Valley in The Stanford Daily.
In it I critiqued our propensity
to become overly comfortable with the technological
and financial achievements that Silicon Valley
has already reached, warning that we risk becoming complacent
and losing our ambition and momentum
to tackle the greater challenges the world has in store.
Answering the fundamental questions
of what we should spend our time on,
how we should live our lives,
has become much more difficult,
particularly on the doorstep of the AI revolution.
I believe that the kind of crisis of agency
that Author JD Vance wrote of in Hillbilly Elegy,
for example, is not confined to Appalachia
or the de-industrialized Midwest
but is emerging even at elite institutions like Stanford.
So conversations like ours this evening,
hosting speakers that aim to re-center
the individual at the heart of AI,
will show us how to take responsibility
in a moment when most decisions
can seemingly be made for us, by algorithms.
There are no narratives to guide us through a future
with AI, no ancient myths or stories
that we may rely on to tell us what to do.
At a time when Humanity is facing
its greatest challenge yet,
somehow we could not be more at a loss for ideas or direction.
It’s this momentous crossroads in human history
that pulls me towards journalism and writing in the future.
And it’s why I’m so eager to hear
our three guests discuss exactly such a future, tonight.
So, please join me in giving them
a very warm welcome this evening.
[audience applause]
Wow, thank you so much Anna-Sofia, thank you, Rob.
Thank you, Stanford for inviting us all here.
I’m having a flashback to the last time
I was on a stage at Stanford,
which was playing guitar at the CoHo
and I didn’t have either Yuval or Fei-Fei with me
so, there were about six people in the audience,
one of whom had her headphones on but, I did meet my wife.
[audience croons] Isn’t that sweet?
All right so, a reminder, housekeeping,
questions are going to come in, in Slido.
You can put them in, you can vote up questions,
we’ve already got several thousand
so please vote up the ones you really like.
If someone can program an AI that can get
a really devastating question in
and stump Yuval, I will get you
a free subscription to Wired.
[audience laughs]
I want this conversation to kind of have three parts.
First, lay out where we are,
then talk about some of the choices
we have to make now, and last talk about some advice
for all the wonderful people in the halls.
So, those are the three general areas,
I’ll feed in questions as we go.
We may have a specific period for questions
at the end but, let’s get cracking.
Yuval.
[Yuval] Yeah.
So, the last time we talked you said many,
many brilliant things but one that stuck out,
it was a line where you said,
We are not just in a technological crisis,
we are in a philosophical crisis.
So, explain what you meant, explain how it ties to AI,
and let’s get going with a note of existential angst.
[all laughing]
Yes, I think what’s happening now
is that the philosophical framework of the modern world
that was established in the 17th and 18th centuries,
around ideas like human agency and individual free will,
is being challenged like never before.
Not by philosophical ideas but by practical technologies.
And we see more and more questions,
which used to be, you know, the bread and butter
of the philosophy department, being moved
to the engineering department.
And that’s scary, partly because, unlike philosophers,
who are extremely patient people,
they can discuss something for thousands of years
without reaching any agreement and they are fine with that,
[light audience laughter] the engineers won’t wait
and even if the engineers are willing to wait,
the investors behind the engineers, won’t wait.
So, it means that we don’t have a lot of time
and in order to encapsulate what the crisis is,
I know that, you know engineers,
especially in a place like Silicon Valley,
they like equations so, maybe I
can try to formulate an equation [laughing]
to explain what’s happening.
And the equation is: B times C times D equals AHH.
Which means, biological knowledge
multiplied by computing power multiplied by data
equals the ability to hack humans.
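For reference, a minimal rendering of the formula as stated, in notation; the letters are just shorthand for the quantities named above, and this is a rhetorical equation, not a quantitative model:

\[ B \times C \times D = \mathrm{AHH} \]

where B is biological knowledge, C is computing power, D is data, and AHH is the ability to hack humans.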
And the AI revolution, or crisis, is not just AI,
it’s also biology, it’s biotech.
We haven’t seen anything yet
because the link is not complete.
There is a lot of hype now around AI in computers
but that is just half the story.
The other half is the biological knowledge coming from brain science
and biology and once you link that to AI,
what you get is the ability to hack humans.
And maybe I’ll explain what it means,
the ability to hack humans: to create an algorithm
that understands me better than I understand myself
and can therefore manipulate me, enhance me, or replace me.
And this is something that our philosophical baggage
and all our belief in, you know, human agency,
and free will, and the customer is always right,
and the voter knows best, this just falls apart
once you have this kind of ability.
Once you have this kind of ability
and it’s used to manipulate or replace you,
not if it’s used to enhance you?
Also when it’s used to enhance you,
the question is, who decides what is a good enhancement
and what is a bad enhancement.
So, our immediate fallback position
is to fall back on the traditional humanist ideas
that the customer is always right,
the customers will choose the enhancement,
or the voter is always right.
The voters will vote.
There will be a political decision about enhancement,
or if it feels good, do it.
We’ll just follow our heart, we’ll just listen to ourselves.
None of this works when there is a technology
to hack humans on a large scale.
You can’t trust your feelings,
or the voters, or the customers on that.
The easiest people to manipulate
are the people who believe in free will
because they think they cannot be manipulated.
So, how do you decide what to enhance?
And this is a very deep ethical and philosophical question.
Again, philosophers have been debating it
for thousands of years.
What is good?
What are the good qualities we need to enhance?
So, if you can’t trust the customer,
if you can’t trust the voter,
if you can’t trust your feelings, who do you trust?
What do you go by?
All right Fei-Fei, you have a PhD,
you have a CS degree, you're a Professor at Stanford.
Does B times C times D equal AHH? [laughing]
Is Yuval's theory the right way
to look at where we're headed?
Wow, what a beginning, thank you, Yuval.
Well, one of the things, I’ve been reading Yuval’s book
for the past couple of years, and talking to you,
and I’m very envious of philosophers now,
because they can propose questions
and crisis but they don’t have to answer them.
[laughing loudly]
Now, as an engineer and scientist,
I feel like we have to now solve the crisis.
So, honestly I think I’m very thankful.
I mean, personally I’ve been reading your book
for two years and I’m very thankful
that Yuval, among other people,
have opened up this really important question
for us and it’s also quite a…
When you said the AI crisis
and I was sitting there thinking,
this is a field I loved, and felt passionate about,
and researched for 20 years,
and that was just a scientific curiosity
of a young scientist entering a PhD in AI.
What happened, that 20 years later, it has become a crisis?
And it actually speaks to the evolution of AI
that got me where I am today
and got my colleagues at Stanford where we are today
with Human-Centered AI,
that this is a transformative technology.
It’s a nascent technology, it’s still a budding science
compared to physics, chemistry, biology but,
with the power of data, computing,
and the kind of diverse impact AI is making,
it is, like you said, touching human lives
and business in broad and deep ways.
And in responding to the kind of questions
and crises facing humanity,
I think one of the proposed solutions,
or if not a solution at least an attempt
that Stanford is making,
is can we reframe the education,
the research, and the dialogue of AI
and technology in general, in a human centered way.
We’re not necessarily gonna find solution today but,
can we involve the humanists, the philosophers,
the historians, the political scientists,
the economists, the ethicists, the legal scholars,
the neuroscientists, the psychologists,
and many more other disciplines,
into the study and development of AI
in the next chapter, in the next phase.
Don’t be so certain we’re not gonna get an answer today.
I’ve got two of the smartest people in the world
glued to their chairs and I’ve got Slido
for 72 minutes so, let’s give it a shot.
But he said we have thousands of years.
[all laughing]
But let me go a little bit further into Yuval's questions.
So, in Yuval's opening statement,
there are a lot of crises about AI
that people talk about, right?
They talk about AI becoming conscious
and what will that mean,
they talk about job displacement,
They talk about biases, but Yuval has very clearly laid out
what he thinks is the most important one,
which is the combination of biology plus
computing plus data leading to hacking.
He’s laid out a very specific concern.
Is that specific concern, what people
who were thinking about AI should be focused on?
So, absolutely.
So, every technology humanity has created,
starting with fire, is a double-edged sword.
So, it can bring improvements to life and to work
and to society but it can bring perils,
and AI has its perils, you know?
I wake up every day worried
about the diversity inclusion issue in AI.
We worry about fairness or the lack of fairness,
privacy, the labor market so,
absolutely we need to be concerned
and because of that we need to expand the study,
the research, and the development of policies,
and the dialogue of AI beyond just the codes
and the products into these human realms,
into these societal issues.
So, I absolutely agree with you on that,
that this is the moment to open the dialogue,
to open the research on those issues.
Okay. I would just say that again,
part of my fear is that the dialogue,
I don’t fear AI experts talking with philosophers,
I’m fine with that, historians good,
literary critics wonderful, I fear the moment
you start talking with biologists.
[crowd chatter]
That’s my biggest fear.
When you and the biologists realize,
Hey, we actually have a common language
and we can do things together.
And that's when the really scary things begin, I think.
Can you elaborate on what is scaring you
about us talking to biologists?
That’s the moment when you can really hack human beings,
not by collecting data about our search words,
or our purchasing habits, or where do we go about town,
but you can actually start peering inside
and collect data directly
from our hearts and from our brains.
Okay, can I be specific?
First of all, the birth of AI is AI scientists
talking to biologists, specifically neuroscientists.
Right, the birth of AI is very much inspired
by what the brain does.
Fast-forward sixty years later,
today's AI is making great improvements in healthcare.
There’s a lot of data from our physiology
and pathology being collected
and using machine learning to help us but,
I feel like you’re talking about something else.
That’s part of it, I mean,
if there wasn’t a great promise in the technology,
there would also be no danger
because nobody would go along that path.
I mean, obviously, there are enormously beneficial things
that AI can do for us, especially
when it is linked with biology.
We are about to get the best health care in the world,
in history, and the cheapest,
and available for billions of people via smartphones,
who today have almost nothing.
And this is why it is almost impossible to resist
the temptation. And with all the issues now around privacy,
If you have a big battle between privacy and health,
health is likely to win hands down.
So, I fully agree with that and, you know,
my job as a historian, as a philosopher,
as a social critic, is to point out the dangers in that
because especially in Silicon Valley,
people are very much familiar with the advantages
but they don’t like to think so much
about the dangers and the big danger
is what happens when you can hack the brain
and that can serve not just your healthcare provider,
that can serve so many things from a crazy dictator, to–
Let’s focus on that, what it means to hack the brain.
Like what, right now in some ways,
my brain is hacked, right?
There’s an allure of this device,
it wants me to check it constantly.
Like, my brain has been a little bit hacked.
Yours hasn’t because you meditate two hours a day
but mine has and probably [laughter]
most of these people have.
But what exactly is the future brain hacking
going to be, that it isn’t today?
Much more of the same, but on a much larger scale.
I mean, the point when for example,
more and more of your personal decisions in life
are being outsourced to an algorithm
that is just so much better than you.
So, you know we have two distinct dystopias
that kind of mesh together.
We have the dystopia of surveillance capitalism
in which there is no like, Big Brother dictator
but more and more of your decisions
are being made by an algorithm
and it’s not just decisions about what to eat,
or what to shop, but decisions like,
where to work, and where to study, and whom to date,
and whom to marry, and whom to vote for.
It’s the same logic and I would be curious to hear
if you think that there is anything in humans,
which is by definition un-hackable,
that we can’t reach a point when the algorithm
can make that decision better than me.
So, that’s one line of dystopia
which is a bit more familiar in this part of the world
and then you have the full-fledged dystopia
of a totalitarian regime
based on a total surveillance system.
Something like the totalitarian regimes
that we have seen in the twentieth century
but augmented with biometric sensors
and the ability to basically track
each and every individual, 24 hours a day.
And you know, which in the days of,
I don’t know, Stalin or Hitler, was absolutely impossible
because it didn’t have the technology
but maybe, might be possible in 20 years or 30 years.
So, we can choose which dystopia to discuss
but they are very close in–
Let’s choose the liberal democracy dystopia.
Fei-Fei, do you want to answer Yuval's specific question,
which is, in the liberal democracy dystopia,
is there something endemic
to humans that cannot be hacked?
So, when you asked me that question just two minutes ago,
the first word that came to my mind is love.
Is love hackable?
Ask Tinder, I don’t know.
[crowd and panel laughing]
Dating–
It depends–
Dating is not the entirety of love, I hope.
The question is which kind of love are you referring to?
If you are referring to this, you know I don’t know,
Greek philosophical love or the loving kindness of Buddhism,
that’s one question,
which I think it’s much more complicated.
If you are referring to the
biological mammalian courtship rituals,
then I think yes.
I mean, why not?
But humans– Why is it different
from anything else that is happening in the body?
But humans are humans because there’s some part of us
that are beyond the mammalian courtship, right?
So, is that part hackable?
That’s the question?
I mean, you know in most science fiction books
and movies, they give you the answer.
When the extra-terrestrial evil robots
are about to conquer planet Earth
and nothing can resist them, resistance is futile,
at the very last moment,
Humans win It’s just one thing,
Because the robots don’t understand love.
Last moment there’s one heroic white dude that saves us.
[audience cheering and applause] [laughter]
Why do we do this?
No, no, it was a joke, don’t worry.
[audience and panel laughter]
But, okay so, the two dystopias,
I do not have answers to the two dystopias
but what I want to keep saying is
this is precisely why this is the moment
that we need to seek solutions.
This is precisely why this is the moment
that we believe the new chapter of AI needs to be written
by cross-pollinating efforts from humanists,
social scientists, to business leaders,
to civil society, to governments to come at the same table
to have that multilateral and cooperative conversation.
I think you really bring out the urgency
and the importance and the scale of this potential crisis
but I think in the face of that, we need to act.
Yeah, and I agree that we need cooperation,
that we need much closer cooperation
between engineers and philosophers
or engineers and historians
and also from a philosophical perspective,
I think there is something wonderful
about engineers, philosophically.
Thank you. [laughing]
That you really cut the bullshit.
I mean, philosophers can talk and talk you know,
in cloudy and flowery metaphors
and then the engineers can really focus the question.
Like, I just had a discussion the other day
with an engineer from Google about this
and he said, Okay, I know how to maximize
people’s time on the website.
If somebody comes to me and tells me,
Look, your job is to maximize time on this application.
I know how to do it because I know how to measure it.
But if somebody comes along and tells me,
Well you need to maximize human flourishing
or You need to maximize universal love,
I don’t know what it means.
So, the engineers go back to the philosophers
and ask them, what do you actually mean.
Which, you know, a lot of philosophical theories
collapse around that because they can't really explain
what they mean, and we need this kind of collaboration.
Yeah.
We need an equation for that, in order to move forward.
But then Yuval, is Fei-Fei right?
If we can’t explain and we can’t code love,
can artificial intelligence ever recreate it
or is it something intrinsic to humans
that the machines will never emulate.
I don’t think that machines will feel love
but you don’t necessarily need to feel it
in order to be able to hack it,
to monitor it, to predict it, to manipulate it.
I mean, machines don’t like to play candy crush.
But you think they can– But they can still–
This device, in some future
where it’s infinitely more powerful
than it is right now, could make me fall in love
with somebody in the audience?
Hmm, that goes to the question of consciousness
and mind.
Let’s go there. I don’t think that we have
the understanding of what consciousness is
to answer the question, whether a non-organic consciousness
is possible or is not possible.
I think we just don’t know but again
the bar for hacking humans is much lower.
The machines don’t need to have consciousness of their own
in order to predict our choices
and manipulate our choices, they just need…
If you accept that something like love is,
in the end, a biological process in the body.
If you think that AI can provide us
with wonderful health care
by being able to monitor and predict
something like the flu or something like cancer,
what’s the essential difference between flu and love?
[audience applause]
In the sense of, is this biological,
or is this something else, which is so separated
from the biological reality of the body
that even if we have a machine
that is capable of monitoring and predicting flu,
it still lacks something essential
in order to do the same thing with love?
Fei-Fei.
So, I want to make two comments
and this is where my engineering,
you know, personality is speaking.
We’re making two very important assumptions
in this part of the conversation.
One is that AI is so omnipotent
that it has achieved a state
where it's beyond predicting anything physical,
it's getting to the consciousness level
and getting to even the ultimate,
the love level of capability
and I do want to make sure that we recognize
that we’re very, very, very far from that.
This technology is still very nascent.
Part of the concern I have about today’s AI
is the super-hyping of its capability so,
I’m not saying that, that’s not a valid question
but I think that part of this conversation
is built upon that assumption that this technology
has become that powerful and there’s,
I don’t even know how many decades we are from that.
The second, related assumption, I feel, is that
our conversation is based on the premise
that we're talking about a world, or a state of the world,
in which that powerful AI exists,
or in which the small group of people
who have produced the powerful AI
and intend to hack humans, exist.
But in fact our human society is so complex
there’s so many of us, right?
I mean, humanity in its history
has faced so many technologies;
if we had left them in the hands of a bad player,
alone without any regulation, multinational collaboration,
rules, laws, moral codes, those technologies could have,
maybe not hacked humans, but destroyed humans
or hurt humans in massive ways.
It has happened but by and large,
our society in a historical view
is moving to a more civilized and controlled state.
So, I think it’s important to look at that greater society
and bringing other players and people into this dialogue
so we don’t talk like there is only this omnipotent AI,
you know, deciding it’s gonna hack everything to the end.
And that brings me to your topic, that in addition
to hacking humans at the level that you're talking about,
there are some very immediate concerns already.
Diversity, privacy, labor, legal changes,
you know, international geopolitics
and I think it’s critical to tackle those now.
I love talking to AI researchers
because five years ago, all the AI researchers were like,
It’s much more powerful than you think and now
they’re all like, It’s not as powerful as you think.
[audience and panel laughter]
All right so,
Let me ask, It’s because five years ago
you have no idea what AI is,
I’m not saying it’s wrong Now, you’re extrapolating
too much. [laughs]
I didn’t say it was wrong, I just said it was a thing.
I want to go into what you just said
but before I do that I want to take one question here
from the audience because once we move
into the second section, we won’t be able to answer it.
So, the question is, it’s for you Yuval,
this from Mara and Lucini, How can we avoid
the formation of AI power digital dictatorships?
So, how do we avoid dystopia number two?
Let’s answer that and then let’s go Fei-Fei,
into what we can do right now,
not what we can do in the future.
The key issue is how to regulate the ownership of data
because we won’t stop research in biology
and we won’t stop research in computer science and AI.
So, for the three components of biological knowledge,
computing power, and data, I think data is the easiest
and it’s also very difficult but still the easiest,
kind of, to regulate or to protect.
Place some protections there and there are efforts
now being made and they are not just political efforts but,
also philosophical efforts to really conceptualize,
what does it mean to own data
or to regulate the ownership of data
because we have a fairly good understanding
what it means to own land,
we had thousands of years of experience with that,
we have a very poor understanding
of what it actually means to own data
and how to regulate it.
But this is the very important front
that we need to focus on in order to prevent
the worst dystopian outcomes
and I agree that AI is not nearly as powerful
as some people imagine, but this is why,
and again, I think we need to place the bar low;
to reach a critical threshold,
we don’t need the AI to know us perfectly,
which will never happen, we just need the AI
to know us better than we know ourselves.
Which is not so difficult because most people
don’t know themselves very well
and often make [laughter and audience applause]
huge mistakes in critical decisions.
So, whether it’s finance, or career, or love life,
to have this shift in authority
from humans to algorithms, they can still be terrible
but as long as they are a bit less terrible
than us, the authority will shift to them.
Yuval, in your book you tell a very illuminating story
about yourself and your own coming to terms
with who you are and how you could be manipulated.
Will you tell that story here,
about coming to terms with your sexuality
and the story you told about Coca-Cola
in your book, because I think that will make
what you mean here very clear.
Yes so, I said that I only realized
that I was gay when I was 21.
And I look back at the time when I was,
I don’t know, 15, 17 and it should’ve been so obvious.
And it’s not like a stranger like,
I’m with myself 24 hours a day [laughter]
and I just don’t notice any, of like,
the screaming signs that saying,
There, you were gay and I don’t know how
but the fact is, I missed it.
Now, an AI, even a very stupid AI,
today, will not miss it.
[audience and panel laughing] I’m not so sure.
So imagine, this is not like, you know,
a science fiction scenario of a century from now,
this can happen today, that you can write
all kinds of algorithms that, you know,
are not perfect but they are still better,
say, than the average teenager
and what does it mean to live in a world
in which you learn about something so important
about yourself, from an algorithm.
What does it mean?
What happens if the algorithm doesn’t
share the information with you
but it shares the information
with advertisers or with governments?
So, if you want to, and I think we should,
go down from the cloudy heights of,
you know, the extreme scenarios
to the practicalities of day-to-day life,
this is a good example because this is already happening.
Yeah, all right well let’s take the elevator
down to the more practical level
of this particular shopping mall
that we’re shopping in today
and Fei-Fei, let’s talk about what we can do today
as we think about the risks of AI, the benefits of AI,
and tell us you know, sort of your punch list,
of what you think the most important things
we should be thinking about with AI are.
Wow, boy there are so many things we could do today
and I cannot agree more with Yuval,
that this is such an important topic.
Again I’m gonna try to speak about all the efforts
that’s being made at Stanford
because I think this a good representation
of what we believe, there are so many efforts we can do.
So, in human-centered AI in which,
this the overall theme we believe,
that the next chapter of AI should be, is human-centered.
We believe in three major principles.
One principle is to invest in the next generation
of AI technology that reflects more
of the kind of human intelligence we would like.
I was just thinking about your comment
about AI’s dependence on data and how that the policy
and governance of data should emerge
in order to regulate and govern the AI impact.
Well, we should be developing technology
that can explain AI; in the technical field
we call it explainable AI or AI interpretability studies.
We should be focusing on technology that has
a more nuanced understanding of human intelligence.
We should be investing in the development
of less data-dependent AI technology
that takes into consideration intuition, knowledge,
creativity, and other forms of human intelligence.
So, that kind of human intelligence inspired AI
is one of our principles.
The second principle is to, again welcome in
the kind of multidisciplinary study
of AI cross-pollinating with economics,
with ethics, with law, with philosophy,
with history, cognitive science, and so on
because there is so much more we need to understand
in terms of AI’s social, human,
anthropological, ethical impact
and we cannot possibly do this alone as technologists.
Some of us shouldn’t even be doing this,
it’s the ethicist, philosophers should participate
and work with us on these issues.
So, that’s the second principle and the third principle…
Oh, and within this we work with policymakers,
we convene the kind of dialogues
of multilateral stakeholders.
Then the third, the last but not the least,
I think Nick, you said that at the very beginning
of this conversation that we need to promote
that the human enhancing and collaborative
and augmentative aspect of this technology.
You have a point, even there it can become manipulative
but we need to start with that sense of alertness,
understanding, but still promote
that kind of benevolent applications
and design of this technology.
At least these are the three principles
the Stanford’s Human-Centered AI Institute is based on
and I just feel very proud, within a short few months
of the birth of this Institute,
there are more than 200 faculty involved on this campus
in this kind of research, dialogue, you know,
study and education, and that number is still growing.
Wow.
Of those three principles let’s start digging into them.
So, let’s go to number one, explainability,
'cause this is a really interesting debate
in artificial intelligence so,
there are some practitioners who say
you should have algorithms that can explain
what they did and the choices they made.
It sounds eminently sensible but how do you do that?
I make all kinds of decisions that I can’t entirely explain
like, why did I hire this person over that person?
I can tell a story about why I did it
but I don’t know for sure.
Like, we don’t know ourselves well enough
to always be able to truthfully
and fully explain what we did.
How can we expect a computer using AI, to do that?
And, if we demand that here in the West
then there are other parts of the world
that don’t demand that, who may be able to move faster.
So, why don’t we start, why don’t I ask you
the first part of that question,
Yuval the second part of that question.
So, the first part is, can we actually get explainability
if it’s super hard even within ourselves?
Well, it’s pretty hard for me to multiply two digits
but you know, computers can do that.
Yeah.
So, the fact that something is hard for humans
doesn’t mean we shouldn’t try to get the machines to do it,
especially, after all, these algorithms
are based on very simple mathematical logic.
Granted, we’re dealing with newer networks these days
of millions of nodes and billions of connections so,
explainability is actually tough, it’s an ongoing research.
But I think this is such a fertile ground
and it’s so critical when it comes to health care decisions,
financial decisions, legal decisions,
there’s so many scenarios where this technology
can be potentially, positively useful
but with that kind of explainable capabilities so,
we’ve gotta try and I’m pretty confident
with a lot of smart minds out there,
this is a crackable thing
and on top of that– Got 200 professors on it.
Right, not all of them doing AI algorithms.
On top of that, I think you have a point that
if we have technology that can explain
the decision making process of algorithms,
it makes it harder for it to manipulate and cheat, right?
It’s a technical solution, not the entirety of the solution,
that will contribute to the clarification
of what this technology is doing.
But because the AI, presumably,
makes decisions in a radically different way than humans,
then even if the AI explains its logic
the fear is it will make absolutely no sense to most humans.
Most humans, when they are asked to explain a decision
they tell a story in a narrative form,
which may or may not reflect
what is actually happening within them,
in many cases it doesn’t reflect.
It’s just a made-up rationalization and not the real thing.
Now, an AI could be much better than a human
in telling me. Like, I applied to the bank for a loan
and the bank says no and I ask why not
and the bank says, Okay, we'll ask our AI
and the AI gives this extremely long,
statistical analysis based,
not on one or two salient features of my life
but on 2,517 different data points
which it took into account and gave different weights.
And why did you give this this weight,
and why did you give that... oh, there is another book about that
and most of the data points would seem,
to a human, completely irrelevant.
You applied for a loan on Monday
and not on Wednesday and the AI discovered that
for whatever reason, it’s after the weekend, whatever,
people who apply for loans on a Monday
are 0.075 percent less likely to repay the loan.
So, it goes into the equation
and I get this book of the real explanation,
finally I get a real explanation.
It’s not like sitting with a human banker
that just bullshit’s me [audience laughing]
What do I do with a book? Are you rooting for AI?
Are you saying AI’s good in this case?
In many cases, yes I mean, I think in many cas…
I mean, it’s two sides of the coin.
I think that in many ways the AI in this scenario
will be an improvement over the human banker
because for example, you can really know
what the decision is based on presumably,
but it’s based on something that I,
as a human being, just cannot grasp.
I know how to deal with simple narrative stories.
I didn’t give you a loan because you’re gay,
that’s not good or because you didn’t repay
any of your previous loans.
Okay, I can understand that.
But my mind doesn’t know what to do
with the real explanation that the AI will give,
which is just this crazy statistical thing, which–
Okay so, there are two layers to your comment.
One, is how do you trust
and be able to comprehend AI’s explanation?
Second is, actually, can AI be used
to make humans more trustable,
or be more trustable than the humans?
On the first point, I agree with you.
If AI gives you two thousand dimensions
of potential features with probabilities,
it's not human understandable
but the entire history of science in human civilization
is to be able to communicate the result of science
in better and better ways, right?
Like, I just had my annual physical
and a whole bunch of numbers came to my cell phone
and well, first of all, my doctors,
the experts, can help me to explain these numbers.
Now, even Wikipedia can help me
to explain some of these numbers.
But the technological improvements
of explaining these will improve.
It’s our failure as AI technologists
if we just throw two hundred or two thousand dimensions
of probability numbers at you.
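As a concrete illustration of the kind of explainable summary being discussed here, below is a minimal sketch of a toy loan model whose prediction is reduced to its few largest per-feature contributions. All feature names, weights, and values are invented for illustration only; this is not any bank's actual system, and real models would involve far more data points.

import math

# Toy logistic-regression loan model. Feature names and weights are invented
# purely for illustration; a real model could weigh thousands of data points.
WEIGHTS = {
    "income_to_debt_ratio": 1.8,
    "years_at_current_job": 0.6,
    "missed_payments_last_year": -2.3,
    "applied_on_monday": -0.00075,  # the kind of opaque signal joked about above
}
BIAS = -0.5

def predict(applicant):
    # Linear score passed through a sigmoid gives a probability of repayment.
    score = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

def explain(applicant, top_k=3):
    # Rank features by the absolute size of their contribution to the score
    # and keep only the few that matter most: a short human-readable summary
    # instead of a dump of every weighted data point.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_k]

applicant = {
    "income_to_debt_ratio": 0.4,
    "years_at_current_job": 2,
    "missed_payments_last_year": 3,
    "applied_on_monday": 1,
}

print("estimated probability of repayment:", round(predict(applicant), 3))
for feature, contribution in explain(applicant):
    print(f"  {feature}: {contribution:+.3f}")

Running this prints the predicted repayment probability followed by the three features that pushed the decision hardest, which is one simple way a long statistical analysis can be condensed into a story a person can follow.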
But I mean, this is the explanation, and I think
that the point you raise
is very important but, I see it differently.
I think science is getting worse and worse
in explaining its theories and findings to general public.
Which is the reason for things like,
doubting climate change and so forth
and it’s not really even the fault of the scientists
because the science is just getting more
and more complicated and reality is extremely complicated
and the human mind wasn’t adapted
to understanding the dynamics of climate change
or the real reasons for refusing to give somebody a loan.
That’s the point when you have…
Again, let’s put aside the whole question of manipulation
and how can I trust.
Let’s assume the AI is benign
and let’s assume that there are no hidden biases,
everything is okay but, still I can’t understand,
the decision of the AI. That’s why Nick,
people like Nick, the storyteller, says to expla…
What I’m saying, you’re right it’s very complex
but there are people like–
I’m gonna lose my job to computer like, next week
but I’m happy to have your confidence with me.
But that’s the job of the society collectively
to explain the complex science.
I’m not saying we’re doing a great job, at all but,
I’m saying there is hope if we try.
But my fear is that we just really can’t do it
because the human mind is not built
for dealing with these kinds of explanations
and technologies and it’s true for,
I mean, it’s true for the individual customer
who goes to the bank
and the bank refused to give them a loan
and it can even be on the level, I mean,
how many people today on earth
understand the financial system?
[silence followed by light laughter]
How many presidents and prime ministers
understand the financial system?
In this country, zero. [audience laughter and applause]
So, what does it mean to live in a society
where the people who are supposed
to be running the business, and again,
it’s not the fault of a particular politician
it’s just the financial system has become so complicated
and I don’t think that economies
are trying on purpose to hide something for general public,
it’s just extremely complicated.
You had some of the wisest people in the world
go into the finance industry
and create these enormously complex models
and tools which, objectively, you just can't explain
to most people unless, first of all,
they study economics and mathematics
for 10 years or whatever so, I think this is a real crisis.
And this, again, is part of
the philosophical crisis we started with
and the undermining of human agency.
That’s part of what’s happening,
that we have these extremely intelligent tools
that are able to make, perhaps better decisions
about our health care, about our financial system,
but we can’t understand what they are doing
and why they are doing it and this undermines our autonomy
and our authority and we don’t know
as a society, how to deal with that.
Well, ideally, Fei-Fei’s Institute will help that.
Before we leave this topic though,
I want to move to a very closely related question,
which I think is one of the most interesting,
which is the question of bias in algorithms,
which is something you’ve spoken eloquently about
and let’s stay with the financial systems.
So, you can imagine an algorithm used by a bank
to determine whether somebody should get a loan
and you can imagine training it on historical data
and historical data is racist and we don’t want that,
so let’s figure out how to make sure the data isn’t racist
and that it gives loans to people regardless of race.
And we probably all, everybody in this room agrees that,
that is a good outcome but let’s say that
analyzing the historical data suggests
that women are more likely to repay their loans than men.
Do we strip that out or do we allow that to stay in?
If you allow it to stay in,
you get a slightly more efficient financial system.
If you strip it out,
you have a little more equality between men and women.
How do you make decisions about
what biases you want to strip
and which ones are okay to keep?
That’s a excellent question Nick, I mean,
I’m not gonna have the answers personally
but I think you touched on the really important question.
It’s, first of all, a machine learning system bias
is a real thing you know, like you said.
It starts with data, it probably starts
with the very moment we’re collecting data
and the type of data we're collecting
all the way through the whole pipeline
and then all the way to the application
but biases come in very complex ways.
At Stanford, we have machine learning scientists
studying the technical solutions of bias like,
you know de-biasing data
and normalizing certain decision-making
but we also have humanists debating about what is bias,
what is fairness, when is bias good,
when is bias bad so, I think you
just opened up a perfect topic for research
and debate and conversation in this topic
and I also want to point out that Yuval,
you already used a very closely related example,
machine learning algorithms have the potential
to actually expose bias, right?
Like, one of my favorite studies was a paper
a couple of years ago analyzing Hollywood movies
and using machine learning face recognition algorithm,
which is a very controversial technology these days,
to recognize that Hollywood systematically gives more screen time
to male actors than female actors.
No human being can sit there
and count all the frames of faces
and gender bias and this is a perfect example
of using machine learning to expose bias.
So, in general there’s a rich set of issues
we should study and again, bring the humanists,
bring the ethicists, bring the legal scholars,
bring the gender study experts.
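As a small illustration of the kind of bias auditing described above, here is a minimal sketch that checks whether a decision system approves one group at a noticeably different rate than another. The records, group labels, and the 0.8 threshold are invented for illustration and represent only one of many possible fairness measures.

# Toy fairness audit: compare approval rates across two groups.
# The records and the 0.8 "four-fifths" threshold are illustrative only.
decisions = [
    # (group, approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group A approval rate: {rate_a:.2f}")
print(f"group B approval rate: {rate_b:.2f}")
# A common rule of thumb flags disparate impact when this ratio falls below 0.8.
print("possible disparate impact" if ratio < 0.8 else "rates roughly comparable")

A check like this does not decide which disparities are acceptable, which is exactly the judgment the speakers say requires ethicists, legal scholars, and other disciplines at the table.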
Agree though, standing up for humans,
I knew Hollywood was sexist
even before that paper but yes, agreed.
You are a smart human. [light laughter]
Yuval, on that question of the loans,
do you strip out the racist data,
do you strip out the gender data,
what biases do you get rid of,
what biases do you not?
I don’t think there is a one-size-fits-all.
I mean, it’s a question…
we need this day-to-day collaboration
between engineers, and ethicists,
and psychologists, and political scientists–
But not biologists, right?
[laughter] But not biologists? and increasing– [laughter]
And increasingly, also biologists.
It goes back to the question, what should we do?
So, we should teach ethics
to coders as part of their curriculum.
The people today in the world
that most need a background in ethics
are the people in the computer science departments,
so it should be an integral part of the curriculum
and it’s also in the big corporations,
which are designing these tools,
they should be embedded within the teams,
people with background in things like ethics,
like politics, that they always think
in terms of what biases might we inadvertently
be building into our system.
What could be the cultural or political implications
of what we are building?
It shouldn’t be a kind of afterthought
that you create this neat technical gadget,
it goes into the world, something bad happens,
and then you start thinking,
Oh, we didn’t see this one coming. What do we do now?
From the very beginning, it should be clear
that this is part of the process.
Yep, I do want to give a shout out to Rob Reich
who just introduced this whole event.
He and my colleagues, Mehran Sahami
and a few other Stanford professors, have opened this course
called Ethics and Computation and, sorry Rob,
I'm abusing the title of your course,
but this is exactly the kind of class…
I think this quarter, the offering
has more than 300 students signed up.
Fantastic.
I wish the course had existed when I was a student here.
Let me ask an excellent question
from the audience, it ties into this.
This is From Yu Jin Lee;
how do you reconcile the inherent trade-offs
between explainability and efficacy
and accuracy of algorithms?
Great question.
This question seems to be assuming if you can explain it,
you’re less good or less accurate.
Well, you can imagine that if you require explainability
you lose some level of efficiency,
you’re adding a little bit of complexity to the algorithm.
So okay, first of all,
I don’t necessarily believe in that,
there’s no mathematical logic to this assumption.
Second let’s assume there is a possibility
that an explainable algorithm suffers efficiency.
I think this a societal decision we have to make.
You know, when we put the seatbelt on in our car
while driving, that's a little bit of an efficiency loss
'cause I have to do that seatbelt movement
instead of just hopping in and driving
but as a society we decided
we can afford that loss of efficiency
because we care more about human safety.
So, I think AI is the same kind of technology,
and as we make these kinds of decisions going forward
in our solutions, in our products,
we have to balance human wellbeing
and societal well-being with efficiency.
So Yuval, let me ask you,
the global consequences of this is something
that a number of people have asked about
in different ways and we’ve touched on
but we haven’t hit head-on.
There are two countries: imagine country A,
and you have country B.
Country A says all of you AI engineers,
you have to make it explainable,
you have to take ethics classes,
you have to really think about
the consequences of what you’re doing,
you got to have dinner with biologists,
you have to think about love,
and you have to like, read you know, John Locke.
That’s group A.
Country B says just go build some stuff, right?
These two countries, at some point,
are gonna come in conflict and I’m gonna guess
that country B’s technology might be ahead of country A’s.
Is that a concern?
Yeah, that’s always the concern with arms races,
which become a race to the bottom
in the name of efficiency and domination
and we are in, I mean…
What is extremely problematic or dangerous
about the situation now is, with AI,
is that more and more countries are waking up
to the realization that this could be
the technology of domination in the 21st century.
So, you’re not talking about just any economic competition
between the different textile industries
or even between different oil industries,
like one country decides, we don’t care
about environment at all, we’ll just go full gas ahead
and the other countries is much more environmentally aware.
The situation with AI is potentially much worse
because it could be really, the technology of domination
in the 21st century and those left behind
could be dominated, exploited,
conquered by those who forge ahead.
So, nobody wants to stay behind
and I think the only way to prevent
this kind of catastrophic arms race to the bottom
is greater global cooperation around AI.
Now this sounds utopian because we are now moving
in exactly the opposite direction,
of more and more rivalry and competition
but this is part of, I think, of our job
like with the nuclear arms race,
to make people in different countries realize that
this is an arms race, that whoever wins, humanity loses.
And it’s the same with AI, if AI becomes an arms race
then this is extremely bad news for all the humans
and it’s easy for say, people in the US,
to say we are the good guys in this race,
you should be cheering for us
but this is becoming more and more difficult
in a situation when the motto of the day is, America first.
I mean, how can we trust the USA
to be the leader in AI technology
if ultimately it will serve only American interests
and American economic and political domination?
So it’s really, I think most people
when they think arms race in AI,
they think USA versus China
but there are almost 200 other countries in the world
and most of them are far, far behind
and when they look at what is happening
they are increasingly terrified and for a very good reason.
The historical example you’ve made is a little unsettling.
If I heard your answer correctly,
it’s that we need global cooperation
and if we don’t we’re gonna lead to an arms race.
In the actual nuclear arms race
we tried for global cooperation from,
I don’t know, roughly 1945 to 1950
and then we gave up and then we said
we’re going full-throttle the United States
and then why did the Cold War end the way it did?
Who knows, but one argument would be that the United States,
you know, its buildup, its relentless buildup
of nuclear weapons, helped to keep the peace
until the Soviet Union collapsed.
So, if that is the parallel, then what might happen here
is we’ll try for global cooperation in 2019,
2020, 2021, and then we’ll be off in an arms race.
A, is that likely and, B, if it is,
would you say, well then the US
needs to really move full-throttle on AI
because it would be better for the liberal democracies
to have artificial intelligence than the totalitarian states?
Well, I’m afraid it is very likely
that cooperation will break down
and we will find ourselves in an extreme version
of an arms race and in a way,
it’s worse than the nuclear arms race
because with nukes, at least until today,
countries develop them but never use them.
AI will be used all the time.
It’s not something you have on the shelf
for some doomsday war.
It will be used all the time to create
potentially, total surveillance regimes
in extreme totalitarian systems,
in one way or the other.
From this perspective, I think the danger is far greater.
You could say that the nuclear arms race
actually saved democracy, and the free market,
and you know, rock and roll,
and Woodstock, and then the hippies.
They all owe a huge debt to nuclear weapons [smirking]
because if nuclear weapons weren’t invented,
there would have been a conventional arms race
and conventional military buildup
between the Soviet bloc and the American bloc
and that would have meant total mobilization of society.
If the Soviets are having total mobilization
the only way the Americans can compete is to do the same.
Now, what actually happened
was that you had an extreme totalitarian mobilized Society
in the communist bloc but thanks to nuclear weapons
you didn’t have to do it in the United States,
or in western Germany, or in France
because you relied on nukes.
You don’t need millions of conscripts in the army
and with AI it is going to be just the opposite
that the technology will not only be developed,
it will be used all the time
and that’s a very scary scenario.
[Nick] So–
Wait, can I just add one thing?
I don’t know history like you do
but you said AI is different from nuclear technology.
I do want to point out, it is very different
because at the same time as you are talking
about these scarier situations,
this technology has a wide
international scientific collaboration basis
that is being used to make transportation better,
is to improve healthcare, to improve education and,
so it’s a very interesting, new time
that we haven’t seen before because while we have this,
kind of, competition we also have
massive international scientific community collaboration
on these benevolent uses
and democratization of this technology.
I just think it’s important to see both sides of this.
You’re absolutely right and,
as I said, there are also enormous benefits
to this technology.
And in a global collaborative way,
especially among the scientists.
The global aspect is more complicated
because the question is, what happens
if there is a huge gap in abilities
between some countries and most of the world?
Would we have a re-run of the 19th century
Industrial Revolution, when the few industrial powers
conquer, and dominate, and exploit the entire world,
both economically and politically?
What’s to prevent that from repeating?
So, even in terms of, you know,
without this scary war scenario
we might still find ourselves
with a global exploitation regime
in which the benefits, most of the benefits,
go to a small number of countries
at the expense of everybody else.
Have you heard of arXiv.org?
arXiv.org? [light laughs]
So, students in the audience might laugh at this
but we are in a very different scientific research climate,
in that the globalization of technology
and technique happens in a way
that the 19th century and even the 20th century never saw before.
Any basic science research paper
in AI today, or any technique that is produced,
let’s say, this week at Stanford,
easily gets globally distributed
through this thing called arXiv, or GitHub, or other repositories.
The information is out there, yeah.
Globalization of this scientific technology
travels in a very different way
from the 19th and 20th century.
I mean, I don’t doubt there is,
you know, confined development of this technology,
maybe by regimes, but we do have to recognize
that this global difference is pretty sharp now
and we might need to take into consideration
that the scenario you’re describing is harder.
I’m not saying impossible, but harder to happen.
So, you think that the way–
I’d just say that it’s not just the scientific papers.
Yes, the scientific papers are out there
but if I live in Yemen, or in Nicaragua,
or in Indonesia, or in Gaza,
yes I can connect to the internet and download the paper.
What will I do with that?
I don’t have the data.
I don’t have the infrastructure.
I mean, you look at
where the big corporations are coming from
that hold all the data of the world,
they are basically coming from just two places.
I mean even Europe is not really in the competition.
There is no European Google,
or European Amazon, or European Baidu,
or European Tencent and if you look beyond Europe,
you think about Central America,
you think about most of Africa,
the Middle East, much of Southeast Asia,
yes, the basic scientific knowledge is out there
but this is just one of the components
that go to creating something that can compete
with Amazon or with Tencent or with the abilities
of governments like the US government
or like the Chinese government.
So, I agree that the dissemination of information
and basic scientific knowledge,
we’re at a completely different place
than in the 19th century.
Let me ask you about that
’cause it’s something three or four people
have asked in the questions which is,
it seems like there could be a centralizing force
of artificial intelligence, that it will make
whoever has the data and the best compute,
more powerful and that it could then accentuate
income inequality both within countries
and across the world, right?
You can imagine the countries you’ve just mentioned:
The United States, China, Europe lagging behind,
Canada somewhere behind, way ahead of Central America.
It could accentuate global income inequality.
A, do you think that’s likely
and B, how much does it worry you?
We have about four people who’ve asked a variation on that.
As I said, it’s very, very likely.
It’s already happening and it’s extremely dangerous
because the economic and political consequences
could be catastrophic.
We are talking about the potential collapse
of entire economies and countries.
Countries that depend say, on cheap manual labor
and they just don’t have the educational capital
to compete in a world of AI,
so what are these countries going to do?
I mean if, say, you shift back
most production from, say, Honduras or Bangladesh,
to the USA and to Germany because
the human salaries are no longer part of the equation
and it’s cheaper to produce the shirt in California
than in Honduras, so what will the people there do?
And you can say, okay but there will be many more jobs
for software engineers but we are not teaching
the kids in Honduras to be software engineers so,
maybe a few of them could somehow immigrate to the US
but most of them won’t and what will they do?
And we at present, we don’t have the economic answers
and the political answers to these questions.
Fei-Fei, you wanna jump in here?
I think that’s fair enough.
I think Yuval definitely has laid out
some of the critical pitfalls of this
and that’s why we need more people to be studying
and thinking about this.
One of the things we have noticed over and over,
even in this process of building a community
of human-centered AI and also talking to people,
both internally and externally,
is that there are opportunities
for businesses around the world
and governments around the world
to think about their data and AI strategy.
There are still many opportunities
for, you know, outside of the big players
in terms of companies and countries,
to really come to the realization
it’s an important moment for their country,
for their region, for their business,
to transform into this digital age
and I think when you talk about these potential dangers
and lack of data in parts of the world
that haven’t really caught up
with this digital transformation,
the moment is now and we hope to,
you know, raise that kind of awareness
and then encourage that kind of transformation.
Yeah, I think it’s very urgent.
I mean, what we are seeing at the moment
is on the one hand, what you could call
some kind of data colonization,
that the same model that we saw in the 19th century
that you have the Imperial hub
where they have the advanced technology,
they grow the cotton in India or Egypt,
they send the raw materials to Britain,
they produce the shirts,
the high-tech industry of the 19th century in Manchester,
and they send the shirts back, to sell them in India
and out-compete the local producers.
And we, in a way, might be beginning to see the same thing now,
with the data economy, that they harvest the data
in places also like Brazil and Indonesia
but they don’t process the data there.
The data from Brazil and Indonesia
goes to California or goes to Eastern China,
gets processed there, later produces
the wonderful new gadgets and technologies,
which are sold back as finished products
to the provinces or to the colonies.
Now, it’s not a one-to-one,
it’s not the same, there are differences
but I think we need to keep this analogy in mind
and another thing that maybe we need to keep in mind
in this respect, I think, is the re-emergence of stone walls
that I’m kind of, you know…
Originally my specialty was medieval military history.
This is how I began my academic career
with the Crusades and castles and knights
and so forth and now I’m doing all these cyborgs
and AI stuff but suddenly there is something
that I know from back then, the walls are coming back.
And I try to kind of figure out, what’s happening here?
I mean, we have virtual realities, we have 3G, AI,
and suddenly the hottest political issue
is building a stone wall.
Like, the most low-tech thing you can imagine [applause]
and what is the significance of a stone wall
in a world of interconnectivity and all that?
And it really frightens me that
there is something very sinister there,
in this combination: data is flowing around everywhere
so easily, but more and more countries are building walls,
and my home country of Israel, it’s the same thing.
You have the, you know, the startup nation
and then the wall and what does it mean, this combination?
Fei-Fei, you wanna answer that?
[audience and panel laughing]
Maybe you can look at the next question.
[loud laughing]
You know what, let’s go to the next question
which is tied to that and the next question is,
you have the people there at Stanford
who will help build these companies,
who will either be furthering the process
of data colonization or reversing it,
or who will be building you know,
the efforts to create a virtual wall.
Worlds based on artificial intelligence
are being created, or funded at least,
by Stanford graduates so,
you have all these students here, in the room,
how do you want them to be thinking
about artificial intelligence
and what do you want them to learn?
Let’s spend the last 10 minutes of this conversation
talking about what everybody here should be doing.
So, if you’re a computer science or engineering student,
take Rob’s class.
If you’re humanists, take my class.
And all of you read Yuval’s books.
Are his books on your syllabus?
Not on mine, sorry.
I teach hard-core, deep learning.
His book doesn’t have equations.
I don’t know, B plus C plus D equals H.
But seriously, you know what I meant to say
is that Stanford students, you have a great opportunity.
We have a proud history of bringing this technology to life.
Stanford was at the forefront of the birth of AI,
in fact our very Professor John McCarthy
coined the term artificial intelligence
and came to Stanford in 1963 and started
one of the two oldest AI labs in this country
and since then, Stanford’s AI research
has been at the forefront of every wave of AI changes
and in 2019, we’re also at the forefront
of starting the human-centered AI revolution
or the writing of the new AI chapter
and we did all this for the past 60 years, for you guys.
For the people who come through the door
and who will graduate and become practitioners,
leaders, and part of the civil society,
and that’s really what the bottom line is about.
Human-centered AI needs to be written
by the next generation of technologists
who have taken classes like Rob’s class,
to think about the ethical implications,
the human well-being, and it’s also gonna be written
by those potential future policymakers
who came out of Stanford’s humanities studies
and Business School, who are versed
in the details of the technology,
who understand the implications of this technology,
and who have the capability to communicate
with the technologists.
No matter how we agree and disagree,
that’s the bottom line, is that we need
these kinds of multilingual leaders
and thinkers and practitioners and that is
what Stanford’s Human-Centered AI Institute is about.
Yuval, how do you wanna answer that question?
Well, on the individual level,
I think it’s important for every individual,
whether in Stanford, whether an engineer or not,
to get to know yourself better
because you are now in a competition.
You know, it’s the oldest advice in the book,
in philosophy, is know yourself.
We’ve heard it from Socrates,
from Confucius, from Buddha, get to know yourself.
But there is a difference,
which is that now, you have competition.
In the day of Socrates or Buddha,
if you didn’t make the effort, so okay,
so you missed out on enlightenment but
still the king wasn’t competing with you.
They didn’t have the technology.
Now you have competition, you’re competing
against these giant corporations and governments.
If they get to know you better than you know yourself,
the game is over.
So you need to buy yourself some time
and the first way to buy yourself some time
is to get to know yourself better
and then they have more ground to cover.
For engineers and students I would say,
I’ll focus on engineers maybe,
the two things that I would like
to see coming out from the laboratories
and the engineering departments is first,
tools that inherently work better
in a decentralized system than in a centralized system.
I don’t know how to do it but if you…
I hope this is something that engineers can work with.
I heard that blockchain is like the big promise
in that area, I don’t know.
But whatever it is, part of when you start designing a tool,
part of the specification of what this tool should be like,
I would say, this tool should work better
in a decentralized system than in a centralized system.
That’s the best defense of democracy.
The second thing that I would like to see coming out–
I don’t want to cut you off
’cause I want you to get to this second thing,
how do you make a tool work better in a democracy than–
I’m not an engineer, I don’t know. [laughter]
Okay.
All right, well then go to part two.
Take that, someone in this room, figure that out
’cause it’s very important, whatever it means.
I can think about it and then…
I can give you a historical examples
of tools that work better in this way
or in that way but I don’t know how to translate it
into present-day technological terms.
Go to part two ’cause I got a few more questions
to ask from the audience.
Okay so, the other thing that I would like to see coming
is an AI sidekick that serves me
and not some corporation or government.
We can’t stop the progress of this kind of technology
but I would like to see it serving me.
So yes, it can hack me but it hacks me
in order to protect me.
Like, my computer has an anti-virus
but my brain hasn’t, it has a biological antivirus
against the flu or whatever
but not against hackers and fraud and so forth.
So, one project to work on is to create an AI sidekick
which I paid for, maybe a lot of money,
and it belongs to me, and it follows me,
and it monitors me, and what I do,
and my interactions, but everything it learns,
it learns in order to protect me from manipulation
by other AIs, by other outside influencers.
This is something that, I think,
with the present day technology,
I would like to see more effort in that direction.
Not to get into too-technical terms,
I think you would feel comforted to know that
budding efforts in this kind of research are happening,
you know, trustworthy AI, explainable AI,
and security-motivated AI,
so I’m not saying we have the solution
but a lot of technologists around the world
are thinking along that line
and trying to make that happen.
It’s not that I want an AI that belongs to Google
or to the government, that I can trust,
I want an AI where I’m its master, it’s serving me,
And it’s powerful, it’s more powerful than my AI
because otherwise your AI could manipulate my AI.
[audience and panel laughter]
It will have the inherent advantage
of knowing me very well, so it might not be able to hack you
but because it follows me around
and it has access to everything I do and so forth,
it gives it an edge in the specific realm of just me.
So, this is a kind of counterbalance
to the danger that the people–
But even that would have a lot of challenges
in our society.
Who is accountable? Are you accountable
for your actions, or is your sidekick?
Oh, good question. This is going to be
a more and more difficult question
that we will have to deal with.
The sidekick defense. [light laughter]
All right, Fei-Fei,
let’s go through a couple questions quickly.
We often talk of, this is from Regan Pollock,
we often talk about top-down AI from the big companies,
how should we design personal AI
to help accelerate our lives and careers?
The way I interpret that question is
so much of AI is being done at the big companies.
If you want to have AI at a small company
or personally, can you do that?
So, well first of all, one solution
is what Yuval just said [laughing]
But probably, those things will be built by Facebook.
So, first of all, it’s true
there’s a lot of investment, effort,
and resources that big companies are putting into AI research
and development but it’s not that
all the AI is happening there.
I want to say that academia continues to play a huge role
in AI’s research and development,
especially in the long term exploration of AI
and what is academia?
Academia is a worldwide network
of individual students and professors
thinking very independently and creatively
about different ideas.
So, from that point of view,
it’s a very grassroots kind of effort in AI research
that continues to happen and small businesses
and independent research institutes,
also have a role to play, right?
There are a lot of publicly available data sets,
it’s a global community that is very open about sharing
and disseminating knowledge and technology,
so yes, please, by all means,
we want global participation in this.
All right here’s my favorite question.
This is from anonymous, unfortunately.
If I am in eighth grade, do I still need to study?
[loud laughter and applause]
As a mom, I will tell you yes.
Go back to your homework.
All right Fei-Fei, what do you want
Yuval’s next book to be about?
Wow, I didn’t know this, I need to think about that.
All right well, while you think about that,
Yuval, what area of machine learning
do you want Fei-Fei to pursue next?
The sidekick project. [laughing]
Yeah, I mean, just what I said, an AI,
can we create a kind of AI which can serve individual people
and not some kind of big network?
I mean, is that even possible
or is there something about the nature of AI
which inevitably will always lead back
to some kind of network effect
and winner-takes-all and so forth?
All right, we’re gonna wrap with Fei-Fei,
Okay, his next book is gonna be a science fiction book
between you and your sidekick. [all laughing]
All right, one last question for Yuval
’cause we’ve got two of the top voted questions are this,
without the belief in free will,
what gets you up in the morning?
Without the belief in free will…
I don’t think that the question of free will, I mean, is very
interesting, or very central.
It has been central in Western civilization
because of some kind of basically,
theological mistake made thousands of years ago [laughing]
but really it’s a misunderstanding of the human condition.
The real question is,
how do you liberate yourself from suffering?
And one of the most important steps in that direction
is to get to know yourself better
and for that, you need to just push aside
this whole, I mean, for me the biggest problem
with the belief in free will is that
it makes people incurious about themselves
and about what is really happening inside themselves
because they basically say, I know everything
I know why I make decisions, this is my free will.
And they identify with whatever thought
or emotion pops up in their mind
because hey, this is my free will
and this makes them very incurious
about what is really happening inside
and what is also the deep sources
of the misery in their lives.
And so, this is what makes me wake up in the morning
to try and understand myself better,
to try and understand the human condition better,
and free will is, it’s just irrelevant for that.
And if we lose it, your sidekick can get you up
in the morning. [light laughter]
Fei-Fei, 75 minutes ago
you said we weren’t gonna reach any conclusions.
Do you think we got somewhere?
Well, we opened a dialogue between the humanist
and the technologists and I want to see more of that.
Great, all right, thank you so much.
Thank you Fei-Fei, thank you Yuval Noah Harari.
It was wonderful to be here, thank you to the audience.
Interesting quote from “21 Lessons for the 21st Century”
“The Political Challenge The merger of infotech and biotech threatens the core modern values of liberty and equality. Any solution to the technological challenge has to involve global cooperation. But nationalism, religion, and culture divide humankind into hostile camps and make it very difficult to cooperate on a global level. California is used to earthquakes, but the political tremor of the 2016 U.S. elections still came as a rude shock to Silicon Valley. Realizing that they might be part of the problem, the computer wizards reacted by doing what engineers do best: they searched for a technical solution. Nowhere was the reaction more forceful than in Facebook’s headquarters in Menlo Park. This is understandable. Since Facebook’s business is social networking, it is most attuned to social disturbances. After three months of soul-searching, on February 16, 2017, Mark Zuckerberg published an audacious manifesto on the need to build a global community, and on Facebook’s role in that project.1 In a follow-up speech at the inaugural Communities Summit on June 22, 2017, Zuckerberg explained that the sociopolitical upheavals of our time—from rampant drug addiction to murderous totalitarian regimes—result to a large extent from the disintegration of human communities. He lamented the fact that “for decades, membership in all kinds of groups”
Start reading this book for free: https://a.co/12jrFSb
How Humans Get Hacked: Yuval Noah Harari & Tristan Harris Talk with WIRED
https://www.wired.com/video/watch/yuval-harari-tristan-harris-humans-get-hacked
Yuval Noah Harari, historian and best-selling author of Sapiens, Homo Deus and 21 Lessons for the 21st Century, and Tristan Harris, co-founder and executive director of the Center for Humane Technology, speak with WIRED Editor in Chief Nicholas Thompson.
Hello I’m Nicholas Thompson,
I’m the editor-in-chief of Wired magazine.
I’m here with two of my favorite thinkers.
Yuval Noah Harari.
He’s the author of three number one best-selling books
including 21 Lessons for the 21st Century
which came out this week
and which I just finished this morning
which is extraordinary.
And Tristan Harris,
who’s the man who got the whole world
to talk about time well spent
and has just founded the Center for Humane Technology.
I like to think of Yuval as the man
who explains the past and the future
and Tristan as the man who explains the present.
We’re gonna be here for a little while talking
and it’s a conversation inspired
by one of the things that matters most to me
which is that Wired magazine
is about to celebrate its 25th anniversary.
And when the magazine was founded,
the whole idea was that it was a magazine
about optimism and change,
and technology was good and change is good.
25 years later you look at the world today,
you can’t really hold the entirety of that philosophy.
So I’m here to talk with Yuval and Tristan.
Hello!
[Yuval] Hello. Thank you.
Good to be here.
Tristan why don’t you tell me a little bit about
what you do and then Yuval you tell me too.
I am a director of the Center for Humane Technology
where we focus on realigning technology
with a clear-eyed model of human nature.
And I was before that a design ethicist at Google,
where I studied the ethics of human persuasion.
I’m a historian and I try to understand
where humanity’s coming from and where we are heading.
Let’s start by hearing about how you guys met each other
’cause I know it goes back a little while,
so when did the two of you first meet?
Funnily enough on an expedition to Antarctica.
(laughing)
Not with Scott and Amundsen,
just we were invited by the Chilean government
to the congress of the future,
to talk about the future of humankind
and one part of the congress
was an expedition to the Chilean base in Antarctica
to see global warming with our own eyes.
It was still very cold and it was interesting
and so many interesting people on this expedition.
A lot of philosophers,
Nobel Laureates and I think we particularly connected
with Michael Sandel.
He’s a really amazing philosopher in moral philosophy.
It’s almost like a reality show.
I would have loved to be able to see the whole thing.
Let’s get started with one of the things
that I think is one of the most interesting continuities
between both of your work.
You write about different things
you talk about different things
but there are a lot of similarities.
And one of the key themes is the notion
that our minds don’t work the way
that we sometimes think they do.
We don’t have as much agency over our minds
as perhaps we believed.
Or we believed until now.
So Tristan why don’t you start talking about that
and then Yuval,
jump in and we’ll go from here.
[Tristan] I actually learned a lot of this
from one of Yuval’s early talks
where he talks about democracy as the question,
where should we put authority in a society?
And we should put it in the opinions and feelings of people.
But my whole background,
I actually spent the last 10 years studying persuasion.
Starting when I was a magician as a kid where you learned
that there are things that work on all human minds.
It doesn’t matter whether they have a PhD
or what education level they have,
whether they’re nuclear physicists,
what age they are.
It’s not like if you speak Japanese
I can’t do this trick on you,
it’s not going to work.
Or if you have a PhD.
It works on everybody.
So somehow there’s this discipline
which is about universal exploits on all human minds.
And then I was at a lab called the Persuasive Technology Lab
at Stanford that teaches engineering students
how to apply the principles
of persuasion to technology.
Could technology be hacking human feelings,
attitudes, beliefs,
behaviors to keep people engaged with products?
And I think that’s the thing we both share
is that the human mind is not the total secure enclave
root of authority that we think it is.
And if we want to treat it that way
we’re gonna have to understand
what needs to be protected first,
is my perspective.
Yeah I think that we are now facing
not just a technological crisis
but a philosophical crisis
because we have built our society,
certainly liberal democracy with elections
and the free market and so forth,
on philosophical ideas from the 18th Century
which are simply incompatible
not just with the scientific findings of the 21st Century
but above all with the technology
we now have at our disposal.
Our society’s built on the ideas that the voter knows best,
that the customer is always right,
that ultimate authority as Tristan said
is the feelings of human beings.
And this assumes that human feelings and human choices
are this sacred arena which cannot be hacked,
which cannot be manipulated.
Ultimately my choices,
my desires reflect my free will
and nobody can access that or touch that.
And this was never true
but we didn’t pay a very high cost
for believing in this myth in the 19th or 20th Century
because nobody had the technology to actually do it.
Now some people,
corporations,
governments,
they are gaining the technology to hack human beings.
Maybe the most important fact
about living in the 21st Century
is that we are now hackable animals.
Explain what it means to hack a human being
and why what can be done now is different
from what could be done a hundred years ago
with religion or with the book
or with anything else that influences what we see
and changes the way we think about the world.
To hack a human being
is to understand what’s happening inside you
on the level of the body,
of the brain,
of the mind so that you can predict
what people will do.
You can understand how they feel.
And once you understand and predict
you can usually also manipulate
and control and even replace.
Of course it can’t be done perfectly,
and it was possible to do it to some extent a century ago.
But the difference in the level is significant.
I would say the real key
is whether somebody can understand you
better than you understand yourself.
The algorithms that are trying to hack us,
they will never be perfect.
There is no such thing
as understanding perfectly everything
or predicting everything.
You don’t need perfect.
You just need to be better than the average human being.
And are we there now?
Or are you worried that we’re about to get there?
I think Tristan might be able to answer
where we are right now better than me
but I guess that if we are not there now
we are approaching very very fast.
I think a good example of this is YouTube.
Relatable example.
You open up that YouTube video your friend sends you
after your lunch break.
You come back to your computer.
And you think okay I know those other times
I end up watching two or three videos
and I end up getting sucked into it.
But this time it’s gonna be really different.
I’m just gonna watch this one video
and then somehow that’s not what happens.
You wake up from a trance three hours later
and you say what the hell just happened
and it’s because you didn’t realize
you had a supercomputer pointed at your brain.
So when you open up that video
you’re activating Google Alphabet’s
billions of dollars of computing power.
And they’ve looked at what has ever gotten
two billion human animals to click on another next video.
And it knows way more about
what’s gonna be the perfect chess move
to play against your mind.
If you think of your mind as a chessboard
and you think you know the perfect move to play,
I’ll just watch this one video.
But you can only see so many moves ahead on the chessboard.
But the computer sees your mind and it says no no no,
I’ve played a billion simulations of this chess game before
on these other human animals watching YouTube
and it’s gonna win.
Think about when Garry Kasparov loses against Deep Blue.
Garry Kasparov can see so many moves ahead on the chessboard
but he can’t see beyond a certain point.
Like a mouse can see so many moves ahead in a maze,
but a human can see way more moves ahead
and then Garry can see even more moves ahead.
But when Garry loses against IBM Deep Blue,
that’s checkmate against humanity for all time
because he was the best human chess player.
So it’s not that we’re completely losing human agency.
You walk into YouTube and it always addicts you
for the rest of your life
and you never leave the screen.
But everywhere you turn on the internet
there’s basically a supercomputer pointed at your brain
playing chess against your mind
and it’s gonna win a lot more often than not.
[Nicholas] Let’s talk about that metaphor
because chess is a game with a winner and a loser.
So you set up the technology fully as an opponent.
But YouTube is also gonna,
I hope,
please gods of YouTube,
recommend this particular video to people
which I hope will be elucidating and illuminating.
So is chess really the right metaphor?
A game with a winner and a loser?
The question is what is the game that’s being played?
If the game being played was,
Hey Nick go meditate in a room for two hours
then come back to me and tell me
what do you really want right now in your life?
And if YouTube is using two billion human animals
to calculate, based on everybody who’s ever wanted
to learn how to play ukulele,
they can say here’s the perfect video I have
to teach you how to play ukulele.
That could be great.
The problem is it doesn’t actually care about what you want.
It just cares about what will keep you next on the screen.
And we’ve actually found,
we have an ex-YouTube engineer who works with us,
who’s shown that there’s a systemic bias in YouTube.
So if you airdrop a human animal and they land on,
let’s say a teenage girl and she watches a dieting video,
the thing that works best at keeping that girl
who’s watching a dieting video on YouTube the longest
is to say here’s an anorexia video.
Because that’s between,
here’s more calm stuff and true stuff
and here’s the more insane divisive
outrageous conspiracy intense stuff.
YouTube, if they want to keep your time,
always wants to steer you down that road.
And so if you airdrop a person on a 9/11 video
about the 9/11 news event,
just a fact-based news video,
the autoplaying video is the Alex Jones Infowars video.
So what happens to this conversation?
What follows us?
Ray Kurzweil?
(laughing)
Yeah I guess it’s gonna really depend.
(laughing)
And the problem is you can also kind of hack these things.
There’s governments who actually can manipulate
the way that the recommendation system works
by throwing thousands of headless browsers,
versions of Firefox to watch one video
and then get it to search for another video
so that we search for Yuval Harari,
we’ve watched that one video,
then we get thousands of computers
to simulate people going from Yuval Harari
to watching The Power of Putin or something like that.
And then that’ll be the top recommended video.
And so as Yuval says,
these systems are kind of out of control
and algorithms are running
where two billion people spend their time.
70% of what people watch on YouTube
is driven by recommendations from the algorithm.
People think what you’re watching on YouTube is a choice.
People are sitting there,
they sit there,
they think and then they choose.
But that’s not true.
70% of what people are watching
is the recommended videos on the right hand side.
Which means 70% of what 1.9 billion users,
that’s more than the number of followers of Islam,
about the number of followers of Christianity,
are looking at on YouTube for 60 minutes a day.
That’s the average time people spend on YouTube.
60 minutes and 70% is populated by a computer.
So now the machine is out of control.
Because if you thought 9/11 conspiracy theories
were bad in English, try:
what are 9/11 conspiracies
in Burmese and Sri Lankan and Arabic?
No-one’s looking at that.
And so it’s kind of a digital Frankenstein.
It’s pulling on all these levers
and steering people in all these different directions.
And Yuval we got into this point
by you saying that this scares you for democracy.
It makes you worry whether democracy can survive
or I believe you say,
the phrase you use in your book
is democracy will become a puppet show.
[Yuval] Explain that. Yeah.
If it doesn’t adapt to these new realities
it will become just an emotional puppet show.
If you go on with this illusion
that human choice cannot be hacked,
cannot be manipulated
and we can just trust it completely
and this is the source of all authority
then very soon you end up with an emotional puppet show.
This is one of the greatest dangers that we are facing
and it really is the result of philosophical impoverishment.
Of taking for granted philosophical ideas
from the 18th Century and not updating them
with the findings of science.
It’s very difficult because you go to people,
people don’t want to hear this message
that they are hackable animals.
That their choices,
their desires,
their understanding of who am I?
What is my most authentic aspirations?
This can actually be hacked and manipulated.
To put it briefly,
my amygdala may be working for Putin.
I don’t want to know this.
I don’t want to believe that.
No I am a free agent.
If I am afraid of something this is because of me.
Not because somebody implanted this fear in my mind.
If I choose something this is my free will
and who are you to tell me anything else?
I’m hoping that Putin will soon be working for my amygdala
but that’s a side project I have going.
It seems inevitable from what you wrote in your first book
that we would reach this point
where human minds would be hackable
and where computers and machines and AI
would have a better understanding of us.
But it’s certainly not inevitable
that it would lead us to negative outcomes,
to 9/11 conspiracy theories and to a broken democracy.
Have we reached the point of no return?
How do we avoid the point of no return
if we haven’t reached there?
What are the key decision points along the way?
Nothing is inevitable in that.
The technology itself is going to develop.
You can’t just stop all research in AI
and you can’t stop all research in biotech.
And the two go together.
I think that AI gets too much attention now
and we should put equal emphasis
on what’s happening on the biotech front.
Because in order to hack human beings you need biology.
Some of the most important tools and insights,
they are not coming from computer science.
They’re coming from brain science.
And many of the people who design
all these amazing algorithms,
they have a background in psychology and in brain science.
This is what you’re trying to hack.
But what we should realize is
we can use the technology in many different ways.
For example we’re now using AI
mainly in order to surveil individuals
in the service of corporations and governments
but it can be flipped to the opposite direction.
We can use the same surveillance systems
to monitor the government in the service of individuals.
To monitor for example government officials,
that they are not corrupt.
The technology is willing to do that,
the question is whether we’re willing
to develop the necessary tools to do it.
One of Yuval’s major points here
is that the biotech lets you understand,
by hooking up a sensor to someone,
features about that person
that they won’t know about themselves.
And we’re increasingly reverse-engineering the human animal.
One of the interesting things that I’ve been following
is also the ways you can ascertain those signals
without an invasive sensor.
We were talking about this a second ago.
There’s something called Eulerian Video Magnification
where you point a computer camera at a person’s face
and a human being can’t,
I can’t look at your face and see your heart rate.
My intelligence doesn’t let me see that.
You can see my eyes dilating right?
[Tristan] I can see your eyes dilating–
‘Cause I’m terrified of you.
(laughing)
But if I put a supercomputer behind the camera
I can actually run a mathematical equation
and I can find the micropulses of blood to your face
that I as a human can’t see but the computer can see.
So I can pick up your heart rate.
What does that let me do?
I can pick up your stress level
because heart rate variability gives you your stress level.
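A minimal sketch of the signal being described here, the tiny pulse-locked color changes in a face, can be written in a few lines of Python. This is only a toy: it fakes the per-frame green-channel average of a face region (real systems such as Eulerian Video Magnification do far more spatial and temporal filtering of actual video) and recovers the pulse rate with a Fourier transform; the frame rate and signal values are assumptions, not measurements.

import numpy as np

# Toy sketch of camera-based pulse detection: the mean green-channel
# brightness of a face region fluctuates slightly with each heartbeat.
# The signal here is synthetic; a real pipeline would compute it from video.
fps = 30.0                         # assumed camera frame rate
seconds = 20
t = np.arange(0, seconds, 1.0 / fps)

true_bpm = 72.0                    # synthetic "ground truth" pulse
pulse = 0.02 * np.sin(2 * np.pi * (true_bpm / 60.0) * t)
noise = 0.05 * np.random.randn(t.size)
green_mean = 0.6 + pulse + noise   # stand-in for the per-frame face average

# Remove the baseline and find the dominant frequency in the heart-rate band.
signal = green_mean - green_mean.mean()
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
band = (freqs > 0.7) & (freqs < 4.0)   # roughly 42 to 240 beats per minute
estimated_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated pulse: {estimated_bpm:.1f} bpm")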
There’s a woman named Poppy Crum
who gave a TED Talk this year
about the end of the poker face.
We have this idea that there can be a poker face.
We can actually hide our emotions from other people.
But this talk is about the erosion of that.
That we can point a camera at your eyes
and see when your eyes dilate
which actually detects cognitive strains,
when you’re having a hard time understanding something
or an easy time understanding something.
We can continually adjust this based on your heart rate,
your eye dilation.
One of the things with Cambridge Analytica,
which is all about the hacking of Brexit
and Russia and the US elections,
is that it was based on the idea that,
if I know your big five personality traits,
if I know Nick Thompson’s personality
through his OCEAN,
openness,
conscientiousness,
extraversion,
agreeableness and neuroticism.
That gives me your personality
and based on your personality
I can tune a political message to be perfect for you.
Now the whole scandal there was that Facebook
let go of this data to be stolen by a researcher
who used to have people fill in questions to figure out
what are Nick’s big five personality traits.
But now there’s a woman named Gloria Mark at UC Irvine
who has done research showing
you can actually get people’s big five personality traits
just by their click patterns alone with 80% accuracy.
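The general shape of that kind of result, behavioral features in, trait estimates out, can be sketched roughly as follows. This is not Gloria Mark’s actual method: the click-behavior features, the synthetic data, and the plain least-squares fit are illustrative placeholders for the idea of predicting an OCEAN-style score from click patterns.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical click-behavior features per person, e.g. clicks per minute,
# average dwell time, share of late-night activity, novelty of sites visited.
n_people, n_features = 500, 4
X = rng.normal(size=(n_people, n_features))

# Pretend survey-measured "openness" scores, loosely tied to the features.
true_weights = np.array([0.8, -0.3, 0.1, 0.5])
openness = X @ true_weights + 0.5 * rng.normal(size=n_people)

# Fit a plain least-squares model: trait score ~ X @ w.
w, *_ = np.linalg.lstsq(X, openness, rcond=None)

# Estimate the trait for a new person from click features alone.
new_person = rng.normal(size=n_features)
print("predicted openness score:", float(new_person @ w))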
So again the end of the poker face,
the end of the hidden parts of your personality.
We’re gonna be able to point AIs at human animals
and figure out more and more signals from them
including their microexpressions,
when you smirk and all these things.
We’ve got face ID cameras on all of these phones.
So now if you have a tight loop
where I can adjust the political messages
in real time to your heart rate and to your eye dilation
and to your political personality,
that’s not a world we want to live in.
It’s a kind of dystopia.
There are many contexts you can use that.
It can be used in class to figure out
that the student isn’t getting the message,
that the student is bored which can be a very good thing.
It can be used by lawyers like you negotiate a deal
and if I can read what’s behind
your poker face and you can’t
that’s a tremendous advantage for me.
So it can be done in a diplomatic setting
like two prime ministers are meeting to,
I don’t know,
resolve the Israeli-Palestinian conflict
and one of them has an earbud
and the computer is whispering in his ear
what is the true emotional state.
What’s happening in the brain,
in the mind of the person on the other side of the table.
And what happens when two sides have this?
And you have kind of an arms race
and we have absolutely no idea how to handle these things.
I’ll give a personal example
when I talked about this in Davos.
For me maybe my entire approach to these issues
is shaped by my experience of coming out.
That I realized that I was gay when I was 21
and ever since then I’m haunted by this thought,
what was I doing for the previous five or six years?
How is it possible,
I’m not talking about something small
that you don’t know about yourself.
Everybody there is something you don’t know about yourself.
But how can you possibly not know this about yourself?
And then the next thought is,
a computer,
an algorithm could have told me that when I was 14
so easily just by something as simple
as following the focus of my eyes.
I don’t know,
I walk on the beach or I even watch television and there is,
what was in the 1980s?
Baywatch or something,
and there is a guy in a swimsuit
and there is a girl in a swimsuit
and which way my eyes are going,
it’s as simple as that.
And then I think,
what would my life have been like,
first if I knew when I was 14,
secondly if I got this information from an algorithm.
There is something incredibly deflating for the ego
that this is the source of this deep wisdom about myself?
An algorithm that followed my eye movement?
[Nicholas] And there’s an even creepier element
which you write about in your book,
what if Coca-Cola had figured it out first
and was selling you Coke with shirtless men
when you didn’t even know you were gay.
[Yuval] Exactly! Although, Coca-Cola versus Pepsi:
Coca-Cola knows this about me
and shows me a commercial with a shirtless man,
Pepsi doesn’t know this about me
because they are not using these sophisticated algorithms.
They go with the normal commercials
with the girl in the bikini.
And naturally enough I buy Coca-Cola
and I don’t even know why.
Next morning when I go to the supermarket
I buy Coca-Cola and I think,
this is my free choice!
I chose Coke!
But no I was hacked.
[Nicholas] And so this is inevitable.
[Tristan] This is the crux of the whole issue.
This is everything we’re talking about.
And how do you trust something
that can pull these signals off of you?
If the system is asymmetric,
if you know more about me than I know about myself,
we usually have a name for that in law.
For example when you deal with a lawyer,
you hand over your very personal details to a lawyer
so they can help you.
But then they have this knowledge of the law
and they know about your vulnerable information
so they could exploit you with that.
Imagine a lawyer who took all the personal information
and then sold it to somebody else.
But they’re governed by a different relationship
which is the fiduciary relationship.
They can lose their license
if they don’t actually serve your interests.
It’s similar to a doctor or psychotherapist.
There’s this big question of
how do we hand over information about us
and say I want you to use that to help me.
On whose authority can I guarantee
that you’re going to help me?
There is no moment when we are handing over information.
With the lawyer there is this formal setting like,
okay I hire you to be my lawyer,
this is my information and we know this.
But I’m just walking down the street,
there is a camera looking at me,
I don’t even know that,
and they are hacking me through that.
So I don’t even know it’s happening.
That’s the most duplicitous part.
We often say it’s like imagine a priest,
if you want to know what Facebook is,
imagine a priest in a confession booth
and they listen to two billion people’s confessions
but they also watch you round your whole day,
what you click on,
which ads,
Coca-Cola or Pepsi,
the shirtless men and the shirtless women,
and all your conversations that you have
with everybody else in your life
’cause they have Facebook Messenger,
they have that data too.
But imagine that this priest in a confession booth,
their entire business model is to sell access
to the confession booth to another party.
So someone else can manipulate you.
Because that’s the one way
this priest makes money in this case.
They don’t make money any other way.
There are two giant entities that will have,
I mean there are a million entities that will have this data
but there’s large corporations,
you mentioned Facebook,
and there will be governments.
Which do you worry about more?
It’s the same.
Once you reach beyond a certain point
it doesn’t matter how you call it.
This is the entity that actually rules.
Whoever has the data.
Whoever has this kind of data.
Even in a setting where you still have a formal government
but this data is in the hands of some corporation
then the corporation if it wants
can decide who wins the next elections.
So it’s not really a matter of choice.
There is choice.
We can design a different political and economic system
in order to prevent this immense concentration
of data and power in the hands
of either government or corporations that use it
without being accountable
and without being transparent about what they are doing.
The message is not okay it’s over,
humankind is in the dustbin of history.
That’s not the message. No that’s not the message.
Eyes have stopped dilating,
let’s keep this going.
(laughing)
The real question is,
we need to get people to understand this is real,
this is happening,
there are things we can do.
And you have midterm elections in a couple of months
so in every debate,
every time a candidate goes to meet the potential voters
in person or on television,
ask them this question.
What is your plan,
what is your take on this issue?
What are you going to do if we are going to elect you?
If they say I don’t know what you’re talking about,
that’s a big problem.
I think the problem is most of them
have no idea what we’re talking about
and one of the issues is
I think policy makers as we’ve seen
are not very educated on these issues.
They’re doing better.
They’re doing so much better this year than last year.
Watching the Senate hearings,
the last hearings with Jack Dorsey and Sheryl Sandberg
versus watching the Zuckerberg hearings
or watching the Colin Stretch hearings,
there’s been improvement.
[Tristan] It’s true.
There’s much more, though.
I think these issues open up a whole space of possibility.
We don’t even know yet the kinds of things
we’re gonna be able to predict.
We’ve mentioned a few examples that we know about
but if you have a secret way of knowing something
about a person by pointing a camera at them and AI,
why would you publish that?
So there’s lots of things that can be known about us
to manipulate us right now that we don’t even know about.
How do we start to regulate that?
I think the relationship we want to govern is,
when a supercomputer is pointed at you
that relationship needs to be protected
[Nicholas] and governed by some terms. Okay.
So there’s three elements in that relationship.
There’s the supercomputer.
What does it do,
what does it not do.
There’s the dynamic of how it’s pointed.
What are the rules over what I can collect?
What are the rules over what I can’t collect
and what I can store?
And there’s you.
How do you train yourself to act?
How do you train yourself to have self-awareness?
Let’s talk about all three of those areas
maybe starting with the person.
What should the person do in the future
to survive better in this dynamic?
One thing I would say about that
is I think self-awareness is important.
It’s important that people know the thing
we’re talking about and they realize
that we can be hacked but it’s not a solution.
You have millions of years of evolution
that guide your mind to make
certain judgments and conclusions.
A good example of this is if I put on a VR helmet
and now suddenly I’m in a space where there’s a ledge.
I’m at the edge of a cliff.
I consciously know I’m sitting here
in a room with Yuval and Nick.
I know that consciously.
I’ve got this self-awareness.
I know I’m being manipulated.
But if you push me I’m gonna not want to fall right?
‘Cause I have millions of years of evolution that tell me
you are pushing me off of a ledge.
So in the same way you can say,
Dan Ariely makes this joke actually,
the behavioral economist,
that flattery works on us
even if I tell you I’m making it up.
Like Nick I love your jacket right now.
[Nicholas] It’s a great jacket on you. It is.
It’s a really amazing jacket.
I actually picked it out ’cause I knew
from studying your carbon dioxide exhalation yesterday
that you would like this.
Exactly.
(laughing)
We’re manipulating each other now.
The point is that even if you know
that I’m just making that up,
it still actually feels good.
The flattery feels good.
And so it’s important,
I think of this as a new era,
kind of a new Enlightenment
where we have to see ourselves in a very different way.
And that doesn’t mean that’s the whole answer.
It’s just the first step.
We have to all walk around–
So the first step is recognizing
that we’re all vulnerable.
[Tristan] Hackable.
Vulnerable.
But there are differences.
Yuval is way less hackable than I am
because he meditates two hours a day
and doesn’t use a smartphone.
(laughing)
I’m super hackable.
The last one’s probably key.
(laughing)
What are the other things
that a human can do to be less hackable?
You need to get to know yourself as best as you can.
It’s not a perfect solution,
but if somebody’s running after you,
you run as fast as you can.
It’s a competition.
Who knows you best in the world?
So when you’re two years old it’s your mother.
Eventually you hope to reach a stage in life
when you know yourself even better than your mother.
And then suddenly you have this corporation
or government running after you,
and they are way past your mother and they are at your back.
This is the critical moment.
They know you better than you know yourself.
So run a little.
Run a little faster.
There are many ways you can run faster,
meaning getting to know yourself a bit better.
Meditation is one way,
there are hundreds of techniques of meditation,
different works for different people.
You can go to therapy,
you can use art,
you can use sport,
whatever works for you.
But it’s now becoming much more important than ever before.
It’s the oldest advice in the book.
Know yourself. Yeah.
But in the past you did not have competition.
If you lived in Ancient Athens
and Socrates came along and said know yourself,
it’s good for you,
and you say nah I’m too busy,
I have this olive grove,
I don’t have time.
Okay you didn’t get to know yourself better
but there was nobody else who was competing with you.
Now you have serious competition.
So you need to get to know yourself better.
This is the first maxim.
Secondly as an individual,
if we talk about what’s happening to society,
you should realize you can’t do much by yourself.
Join an organization.
If you are really concerned about this,
this week join some organization.
50 people who work together are a far more powerful force
than 50 individuals who each of them is an activist.
It’s good to be an activist,
it’s much better to be a member of an organization.
Then there are other tested and tried methods of politics.
We need to go back to this messy thing
of making political regulations and choices.
Politics is about power
and this is where power is right now.
[Tristan] Out of that,
I think there’s a temptation to say,
okay how can we protect ourselves.
And when this conversation shifts into,
with my smartphone not hacking me,
you get things like,
oh I’ll set my phone to grayscale,
oh I’ll turn off notifications.
But what that misses is that
you live inside of a social fabric.
When we walk outside my life depends
on the quality of other people’s thoughts,
beliefs and lives.
So if everyone around me believes a conspiracy theory
because YouTube is taking 1.9 billion human animals
and tilting the playing field so everyone watches Infowars,
by the way YouTube has driven 15 billion recommendations
of Alex Jones’ Infowars and that’s recommendations
and then two billion views.
If only one in a thousand people
believed those 2 billion views,
[Yuval] that’s still two million? Two million.
Mathematics is not as strong as…
(laughing)
We’re philosophers.
And so if that’s two million people
that’s still two million new conspiracy theorists.
So if everyone else is walking around in the world
you don’t get to do that.
If you say hey, I’m a kid,
I’m a teenager and I don’t wanna care
about the number of likes I get,
I don’t want to be hacked
for my self-worth in terms of likes,
I can say I don’t wanna use Snapchat or Instagram
but I still live in a social fabric
where all my other sexual opportunities,
social opportunities,
homework transmission, where people talk about that stuff, happen.
If they only use Instagram
I have to participate in that social fabric.
So I think we have to elevate the conversation from
how do I make sure I’m not hacked,
it’s not just an individual conversation.
We want society to not be hacked.
Which goes to the political point
in how do we politically mobilize
as a group to change the whole industry.
For me I think about the tech industry.
Alright so that’s step one in this three step question.
What can individuals do,
know yourself,
make society more resilient,
make society less able to be hacked.
What about the transmission
between the supercomputer and the human?
What are the rules and how should we think about
how to limit the ability of the supercomputer to hack you?
That’s a big one. That’s a big question.
That’s why we’re here!
In essence I think that we need to come to terms
with the fact that we can’t prevent it completely.
It’s not because of the AI, it’s because of the biology.
It’s just the type of animals that we are
and the type of knowledge that now we have
about the human body,
about the human brain.
We have reached a point when this is really inevitable.
You don’t even need a biometric sensor,
you can just use a camera in order to tell
what is my blood pressure,
what’s happening now,
and through that what’s happening to me emotionally.
I would say that we need to
reconceptualize completely our world
and this is why I began by saying
that we suffer from philosophical impoverishment.
That we are still running on the ideas of the 18th Century.
Which were good for two or three centuries,
which were very good but which are simply not adequate
to understanding what’s happening right now.
Which is why I also think that
with all the talk about the job market
and what should I study today that will be relevant
to the job market in twenty,
thirty years.
I think philosophy is one of the best bets maybe.
I sometimes joke,
my wife studied philosophy and dance in college.
Which at the time seemed like the two worst professions
’cause you can’t really get a job in either.
But now they’re like the last two things
that will get replaced by robots.
I think Yuval is right and I think
this conversation usually makes people conclude
that there’s nothing about human choice
or the human mind’s feelings that’s worth respecting.
And I don’t think that is the point.
I think the point is we need a new kind of philosophy
that acknowledges a certain kind of thinking
or cognitive process or conceptual process
or social process,
that we want that.
For example James Fishkin is a professor at Stanford
who’s done work on deliberative democracy
and shown that if you get a random sample of people
in a hotel room for two days
and you have experts come in
and brief them about a bunch of things
they change their minds about issues,
they go from being polarized to less polarized,
they can come to more agreement.
And there’s a process there that you can put in a bin
and say that’s a social cognitive sense-making process
that we might want to be sampling from,
as opposed to an alienated lonely individual
who’s been shown photos of their friends
having fun without them all day
and then we’re hitting them with Russian ads.
We probably don’t want to be
sampling a signal from that person to be thinking about,
not that we don’t want it from that person,
but we don’t want that process
to be the basis of how we make collective decisions.
So I think we’re still stuck in a mind-body meat suit.
We’re not getting out of it.
So we better learn how do we use it in a way
that brings out the higher angels of our nature.
And the more reflective parts of ourselves.
So I think what technology designers need to do
is ask that question.
A good example just to make it practical,
let’s take YouTube again.
What’s the difference between these cases?
Let’s take an example: you watch a ukulele video.
It’s a very common thing on YouTube.
There’s lots of ukulele videos.
How to play ukulele.
What’s going on in that moment
when it recommends other ukulele videos?
There’s actually a value if someone wants to learn
how to play the ukulele.
But the computer doesn’t know that.
It’s just recommending more ukulele videos.
But if it really knew that about you,
instead of just saying
here’s infinite more ukulele videos to watch,
it might say here’s your ten friends
who know how to play ukulele that you didn’t know
know how to play ukulele
and you can go and hang out with them.
It could put those choices at the top of life’s menu.
The problem is when you watch,
like a teenager watches that dieting video,
the computer doesn’t know that the thing you’re really after
in that moment isn’t that you want to be anorexic.
It just knows that people who watch those
tend to fall for these anorexia videos.
It can’t get at this underlying value,
this thing that people want.
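A minimal, illustrative sketch of the contrast described here, assuming a toy setup: an engagement-driven recommender that only knows "people who watched this kept watching," versus one that tries to act on an inferred goal such as "learn the ukulele." Every name in it (Video, predicted_watch_time, intent_aware_suggestions) is a hypothetical placeholder, not any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Video:
    topic: str
    predicted_watch_time: float  # the proxy an engagement-driven system optimizes

def engagement_driven_next(candidates: list[Video]) -> Video:
    # Today's default: pick whatever keeps the viewer on the screen longest,
    # with no model of what the viewer is actually trying to accomplish.
    return max(candidates, key=lambda v: v.predicted_watch_time)

def intent_aware_suggestions(watched_topic: str,
                             friends_who_share_goal: list[str]) -> list[str]:
    # The alternative in the example: if the inferred goal is "learn ukulele,"
    # surface friends who already play, putting those choices
    # "at the top of life's menu" instead of an infinite feed of similar videos.
    inferred_goal = f"learn the {watched_topic}"  # hypothetical intent inference
    return [f"{friend} could help you {inferred_goal}"
            for friend in friends_who_share_goal]
```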
You can even think about it this way:
the system in itself can do amazing things for us,
we just need to turn it around
so that it serves our interests, whatever those are,
and not the interests of the corporation or the government.
Actually to build,
okay now that we realize that our brains can be hacked,
we need an antivirus for the brain.
Just as we have one for the computer.
And it can work on the basis of the same technology.
Let’s say you have an AI sidekick
who monitors you all the time,
24 hours a day,
what you write,
what you’ve seen,
everything.
But this AI is serving you.
It has this fiduciary responsibility.
And it gets to know your weaknesses
and by knowing your weaknesses it can protect you
against other agents trying to hack you
and to exploit your weaknesses.
So if you have a weakness for funny cat videos
and you spend an enormous amount of time,
an inordinate amount of time, just watching,
you know it’s not very good for you
but you just can’t stop yourself clicking,
then the AI will intervene,
and whenever this funny cat video tries to pop up, the AI says,
no no no no.
And it will maybe just show a message
that somebody just tried to hack you,
like the messages you get that
somebody just tried to infect your computer with a virus.
The hardest thing for us is to admit
our own weaknesses and biases, and it can go all ways.
If you have a bias against Trump or against Trump supporters,
then you very easily believe any story,
however farfetched and ridiculous.
So, I don’t know,
Trump thinks that the world is flat,
Trump is in favor of killing all the Muslims,
and you would click on that.
This is your bias.
And the AI will know that so it’s completely neutral,
it doesn’t serve any entity out there.
It just gets to know your weaknesses and biases
and tries to protect you against them.
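A minimal sketch of this "antivirus for the brain" idea, under stated assumptions: a user-side agent with a fiduciary orientation that knows the weaknesses the user has declared and warns about items built to exploit them. The class name, the keyword-matching rule, and the warning text are all hypothetical illustrations, not an existing product or API.

```python
from dataclasses import dataclass, field

@dataclass
class SidekickFilter:
    # Weaknesses the user (not an advertiser or platform) asks the agent to guard,
    # e.g. compulsive cat videos, or outrage stories that confirm a bias.
    guarded_patterns: set[str] = field(default_factory=set)

    def learn_weakness(self, pattern: str) -> None:
        self.guarded_patterns.add(pattern.lower())

    def screen(self, item_description: str) -> str:
        # Serve the user's stated interests: warn instead of auto-playing.
        text = item_description.lower()
        if any(p in text for p in self.guarded_patterns):
            return "Blocked: somebody just tried to hack you with this item."
        return "OK to show."

# Usage: the agent works for the user, so the user supplies the weaknesses.
sidekick = SidekickFilter()
sidekick.learn_weakness("funny cat video")
print(sidekick.screen("Compilation: funny cat videos, part 47"))
```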
[Nicholas] But how does it learn
that it’s a weakness and a bias and not something you like?
How come it knows when you click the ukulele video,
that’s good,
and when you click the Trump–
[Tristan] This is where I think we need
a richer philosophical framework because if you have that
then you can make that understanding.
So if a teenager’s sitting there in that moment,
watching the dieting video
then they’re shown the anorexia video,
imagine instead of a 22 year old male engineer
who went to Stanford,
computer scientist thinking about
what can I show them that’s the perfect thing?
You had an 80 year old child developmental psychologist
who studied under the best child developmental psychologists
and thought about the fact that in those kinds of moments
the thing that’s usually going on for a teenager aged 13
is a feeling of insecurity,
identity development,
and experimentation, and asked: what would be best for them?
So the way we think about this,
the whole framework of humane technology,
is that we have to hold up the mirror to ourselves
to understand our vulnerabilities first,
and you design starting from a view
of what we’re vulnerable to.
I think from a practical perspective,
I totally agree with this idea of an AI sidekick.
But if we’re imagining,
we live in the scary reality
that we’re talking about right now.
It’s not like this is some sci-fi future,
this is the actual state.
So if we’re actually thinking about how do we navigate
to an actual state of affairs that we want,
we probably don’t want an AI sidekick
to be this kind of optional thing
that some people who are rich can afford
and other people can’t.
We probably want it to be baked in
to the way technology works in the first place
so that it does have a fiduciary responsibility
to our best subtle compassionate vulnerable interests.
So we will have government-sponsored AI sidekicks?
We will have corporations that sell us AI sidekicks
but subsidize them so it’s not just the affluent
that have really good AI sidekicks?
This is a business model conversation but…
One thing is to change things so that
if you go to university or college
and learn computer science,
then an integral part of the course
is to learn about ethics,
about the ethics of coding.
I think it’s extremely irresponsible
that you can finish,
you can have a degree in computer science,
in coding and you can design all these algorithms
that now shape people’s lives
and you just don’t have any background
in thinking ethically and philosophically
about what you’re doing.
You’re just thinking in terms of pure technicality
or in economic terms.
So this is one thing which kind of bakes it
into the cake in the first place.
Let me ask you something that has come up a couple of times
and that I’ve been wondering about.
So when you were giving the ukulele example,
you talked about well maybe you should
go see ten friends who play ukulele,
you should visit them offline.
And in your book you say that one of the crucial moments
for Facebook will come when an engineer realizes
that the thing that is better for the person
and for community is for them to leave their computer.
And then what will Facebook do with that?
So it does seem, from a moral perspective, that a platform,
if it realizes it would be better for you to go offline,
should encourage you to do that.
But then they will lose their money
and they will be out-competed.
[Yuval] Mm-hmm. Yep.
So how do you actually get to the point where the algorithm,
the platform, pushes somebody in that direction?
This is where this business model conversation comes in.
It’s so important.
And also why Apple and Google’s role is so important.
Because they are before the business model of all these apps
that want to steal your time and maximize attention.
Apple doesn’t need to–
Google’s before and after and during
but it is also before. But anyway.
Specifically the Android case.
So Android and iOS,
not to make this too technical
or an industries-focused conversation,
but think about it theoretically:
at that layer,
you have just the device.
Who should that be serving?
Whose best interest are they serving?
Do they want to make the apps as successful as possible?
And make them addictive, maximizing loneliness
and alienation and social comparison,
all that stuff?
Or should that layer be a fiduciary,
as the AI sidekick, to our deepest interests,
to our physical embodied lives,
to our physical embodied communities?
We can’t escape this instrument.
It turns out that being inside of community
and having face-to-face contact is,
there’s a reason why solitary confinement
is the worst punishment we give human beings.
And we have technology that’s basically maximizing isolation
because it needs to maximize
that time we spend on the screen.
So I think one question is,
how can Apple and Google move their entire businesses
to be about embodied, local, fiduciary
responsibility to society?
And this is what we think of as humane technology.
That’s the direction that it can go.
Facebook could also change its business model
to be more about payments and people transacting
based on exchanging things,
which is something they’re looking into
with the blockchain stuff
that they’re theoretically working on.
And also Messenger payments.
If they move from an advertising-based
business model to micropayments,
they could actually shift the design of some of those things
and there could be whole teams of engineers working on News Feed
that are just thinking about what’s best for society
and then people would still ask these questions of,
well who’s Facebook to say what’s good for society?
But you can’t get out of that situation
because they do shape what two billion human animals
will think and feel every day.
So this gets me to one of the things
I most want to hear your thoughts on which is,
Apple and Google have both done this
to some degree in the last year
and Facebook has,
I believe every executive at every tech company has said
“time well spent” at some point in the last year.
We’ve had a huge conversation about it
and people have bought 26 trillion of these books.
Do you actually think that we are
heading in the right direction at this moment
because change is happening and people are thinking?
Or do you feel like we’re still
going in the wrong direction?
[Yuval] I think that in the tech world
we are going in the right direction in the sense
that people are realizing the stakes.
People are realizing the immense power
that they have in their hands.
I’m talking about the people in the tech world.
They are realizing the influence they have on politics,
on society,
and most of them react, I think not in the best way possible,
but certainly they react in a responsible way,
understanding that, yes, we have this huge impact on the world.
We didn’t plan that maybe but this is happening
and we need to think very carefully what we do with that.
They still don’t know what to do with that.
Nobody really knows.
But at least the first step has been accomplished
of realizing what is happening
and taking some responsibility.
The place where we see a very negative development
is on the global level because all this talk so far
has really been internal,
Silicon Valley,
California USA talk.
But things are happening in other countries.
All the talk we’ve had so far
relied on what’s happening in
liberal democracies and in free markets.
In some countries you may have no choice whatsoever.
You just have to share all your information and have to do
what the government-sponsored algorithm tells you to do.
So it’s a completely different conversation.
And another complication
is the AI arms race
that five years,
even two years ago,
there was no such thing.
And now it’s maybe the number one priority
in many places around the world,
that there is an arms race going on in AI
and our country needs to win this arms race.
And when you enter an arms race situation,
then it becomes very quickly a race to the bottom.
Because you very often hear this,
okay it’s a bad idea to do this,
to develop that but they’re doing it
and it gives them some advantage
and we can’t stay behind.
We’re the good guys!
We don’t want to do it!
But we can’t allow the bad guys to be ahead of us
so we must do it first.
And you ask the other people,
they will say exactly the same thing.
They don’t want to do it but they have to.
Yeah and this is an extremely dangerous development
in the last two years.
It’s a multipolar trap.
No-one wants to build slaughterbot drones,
but if I think you might be doing it,
even though I don’t want to, I have to build it,
and you build it, and we both hold them.
Even at a deeper level,
if you want to build some ethics
into your slaughterbot drones,
it’ll slow you down by one week,
and in one week you double the intelligence.
This is actually one of the things I think
we talked about when we first met
was the ethics of speed,
of clockrate.
We’re in essence competing on
who can go faster to make this stuff
but faster means more likely to be dangerous,
less likely to be safe so it’s basically
we’re racing as fast as possible
to create the things we should probably be going
as slow as possible to create.
And I think that, much like
high-frequency trading in the financial markets,
we have this open-ended thing of
who can beat whom by trading a microsecond faster.
What that turns into,
this has been well documented,
is people blowing up whole mountains
so they can lay these copper cables
so they can trade a microsecond faster.
You’re not even competing based on
an Adam Smith version of what we value or something.
We’re competing based on who can blow up mountains
and make transactions faster.
When you add high-frequency trading to
who can trade hackable programming human beings faster
and who’s more effective at manipulating
culture wars across the world,
that just becomes this race to the bottom
of the brain stem of total chaos.
I think we have to say how do we slow this down
and create a sensible pace
and I think that’s also about a humane technology.
Instead of a child developmental psychologist,
ask someone like a psychologist,
what are the clockrates of human decision making
where we actually tend to make good thoughtful choices?
We probably don’t want a whole society revved-up
to making a hundred choices per hour
about something that really matters.
So what is the right clockrate?
I think we have to actually have technology
steer us towards those kinds of decision-making processes.
[Nicholas] So back to the original question,
you’re somewhat optimistic about some of the small things
that are happening in this very small place
but deeply pessimistic about
the complete obliteration of humanity?
I think Yuval’s point is right.
There’s a question about US tech companies,
which are bigger than many governments.
Facebook controls 2.2 billion people’s thoughts.
Mark Zuckerberg is the editor-in-chief
of 2.2 billion people’s thoughts.
But then there’s also world governments
or national governments
that are governed by a different set of rules.
I think the tech companies are
very very slowly waking up to this.
And so far with the time well spent stuff for example,
it’s let’s help people,
because they’re vulnerable to how much time they spend,
set a limit on how much time they spend.
But that doesn’t tackle any of these bigger issues
about how you can program the thoughts of a democracy,
how mental health and alienation
can be rampant among teenagers, leading to
a doubling of the rate of teen suicide
among girls in the last eight years.
We’re going to have to have a much more comprehensive view
and restructuring of the tech industry
to think about what’s good for people.
There’s gonna be an uncomfortable transition.
I use this metaphor it’s like climate change when…
There’s certain moments in history
when an economy is propped up by something we don’t want.
The biggest example of this is slavery in the 1800s.
There was a point at which slavery
was propping up the entire world economy.
You couldn’t just say we don’t wanna do this anymore,
let’s just suck it out of the economy.
The whole economy would collapse if you did that.
But when the British Empire decided to abolish slavery,
they had to give up 2% of their GDP every year for 60 years.
And they were able to make that transition
over a transition period.
I’m not equating advertising
or programming human beings to slavery.
I’m not.
But there’s a similar structure of the entire economy now,
if you look at the stock market,
a huge chunk of the value is driven by
these advertising-based systems for programming human animals.
If we wanted to suck out that model,
the advertising model,
we actually can’t afford that transition.
But there could be some awkward years
where you’re basically on that long transition path.
I think in this moment we have to do it much faster
than we’ve done it in other situations
because the threats are more urgent.
Yuval do you agree that that’s one of the things
we have to think about as we think about trying to
fix the world system over the next decades?
It’s one of the things but again
the problem of the world,
of humanity is not just the advertising model.
The basic tools were designed,
you had the brightest people in the world
10 or 20 years ago cracking this problem
of how do I get people to click on ads.
Some of the smartest people ever,
this was their job.
To solve this problem.
And they solved it.
And then the methods that they initially used
to sell us underwear and sunglasses and vacations
in the Caribbean and things like that
were hijacked and weaponized,
and are now used to sell us all kinds of things,
including political opinions and entire ideologies.
It’s now no longer under the control
of the tech giants in Silicon Valley
that pioneered these methods.
These methods are now out there.
So even if you get Google and Facebook to
completely give it up, the cat is out of the bag.
People already know how to do it.
There is an arms race in this arena.
So yes we need to figure out this advertising business,
it’s very important but it won’t solve the human problem.
Now the only really effective way to do it
is on the global level, and for that
we need global cooperation on regulating AI,
regulating the development of AI and of biotechnology,
and we are of course heading in the opposite direction,
away from global cooperation.
I agree, in that there’s this game theory dynamic.
Sure Facebook and Google could do it
but that doesn’t matter because the cat’s out of the bag
and governments are gonna do it
and other tech companies are gonna do it
and Russia’s tech infrastructure’s gonna do it.
So how do you stop it from happening?
Not to equate this to slavery, but in a similar way,
when the British Empire decided to abolish slavery
and subtract their dependence on that for their economy,
they actually were concerned that if we do this
France’s economy is still gonna be powered by slavery
and they’re gonna soar way past us.
So from a competition perspective we can’t do this.
But the way they got there was by turning it into
a universal, global human rights issue.
That took a longer time, but I think, like Yuval says,
this is a global conversation
about human nature and human freedom,
if there is such a thing,
or at least the kinds of human freedom
that we want to preserve.
That, I think, is something that is actually
in everyone’s interest, and there’s not necessarily
equal capacity to achieve that end,
because governments are very powerful,
but we’re gonna move in that direction
by having a global conversation about it.
Let’s end this with giving some advice
to someone who is watching this video.
They’ve just watched an Alex Jones video
and the YouTube algorithm has changed
and sent them here, and they somehow got to this point.
They’re 18 years old,
they want to devote their life to making sure
that the dynamic between machines and humans
does not become exploitative and becomes one in which
we continue to live our rich fulfilled lives.
What should they do or what advice could you give them?
I would say get to know yourself much better
and have as little illusions about yourself as possible.
If a desire pops up in your mind, don’t just say,
well this is my free will,
I chose this therefore it’s good,
I should do it.
Explore much deeper.
Secondly as I said join an organization.
There is very little you can do
just as an individual by yourself.
These are the two most important pieces of advice I could give
to an individual who is watching us now.
[Tristan] And I think your earlier suggestion:
understand that the philosophy of
simple rational human choice is no longer adequate.
We have to move from an 18th Century model
of how human beings work
to a 21st Century model of how human beings work.
Speaking personally our work is trying to coordinate
a global movement towards fixing some of these issues
around humane technology and I think like Yuval says
you can’t do it alone.
It’s not a let me turn my phone grayscale
or let me petition my Congress member by myself.
This is a global movement.
The good news is no-one kind of wants the dystopic end point
of the stuff that we’re talking about.
It’s not like someone says no I’m really excited
about this dystopia.
I just wanna keep doing what we’re doing!
No-one wants that so it’s really a matter of,
can we all unify in the thing that we do want
and it’s somewhere in this vicinity
of what we’re talking about
and no-one has to capture the flag but we have to move away
from the direction that we’re going.
And I think everyone should be on the same page on that.
We started this conversation by talking about
whether we’re optimistic and I am certainly optimistic
that we have covered some of the hardest questions
facing humanity and that you have offered brilliant insights
into them so thank you for talking
and thank you for being here.
Thank you Tristan,
thank you Yuval.
Thank you. Thanks.
NYTimes: Can America Restore the Rule of Law Without Prosecuting Trump?
NYTimes.com: Inside the Chaotic, Cutthroat Gray Market for N95 Masks
As the country heads into a dangerous new phase of the pandemic, the government’s management of the P.P.E. crisis has left the private sector still straining to meet anticipated demand.
https://www.nytimes.com/2020/11/17/magazine/n95-masks-market-covid.html?smid=em-share
In Georgia, private equity is investing in a divided government.
Interesting quote from “21 Lessons for the 21st Century”
“It’s very dangerous to be redundant. The future of the masses will then depend on the goodwill of a small elite. Maybe there is goodwill for a few decades. But in a time of crisis—like climate catastrophe—it would be very tempting and easy to toss the superfluous people overboard. In countries such as France and New Zealand, with a long tradition of liberal beliefs and welfare-state practices, perhaps the elite will go on taking care of the masses even when”
Start reading this book for free: https://a.co/a6Hik2P
Interesting quote from “21 Lessons for the 21st Century”
“Nevertheless, ancient hunter-gatherer bands were still more egalitarian than any subsequent human society, because they had very little property. Property is a prerequisite for long-term inequality. Following the Agricultural Revolution, property multiplied and with it inequality. As humans gained ownership of land, animals, plants, and tools, rigid hierarchical societies emerged, in which small elites monopolized most wealth and power for generation after generation. Humans came to accept this arrangement as natural and even divinely ordained. Hierarchy was not just the norm but also the ideal. How could there be order without a clear hierarchy between aristocrats and commoners, between men and women, or between parents and children? Priests, philosophers, and poets all over the world patiently explained that just as in the human body not all members are equal—the feet must obey the head—so also in human society equality would bring nothing but chaos. In the late modern era, however, equality became an ideal in almost all human societies. This was partly due to the rise of the new ideologies of communism and liberalism. But it was also due to the Industrial Revolution, which made the masses more important than ever before. Industrial economies relied on masses of common workers, while industrial armies relied on masses of common soldiers.”
Start reading this book for free: https://a.co/c87wPrK
Interesting quote from “21 Lessons for the 21st Century”
“If we are not careful, we will end up with downgraded humans misusing upgraded computers to wreak havoc on themselves and on the world. Digital dictatorships are not the only danger awaiting us.