Delighted to be able to have
Larry Lessig with us today. I hope you remember, Larry,
a little over 20 years ago, you invited me to come
speak at your class with-- Of course. --Jonathan Zittrain
which, at the time, I considered a great
favor and honor to me. And so now I'm
inviting you back, which I also consider a favor
and honor to me that you're joining me here at Stanford. So thank you. Larry, as you may know, is
the Roy Furman professor of law and leadership
at Harvard Law School. He's also been a professor
here at Stanford-- what was it, until about 2010? '09, yeah. 2009 where, among other
things, he founded Stanford's Center for Internet and
Society, still going strong. And he's a founding member of
so many other organizations like Equal Citizens
and founding board member of Creative Commons. I won't mention all of them. He was a former
candidate for president. Will you be making an announcement later? No, not today, no. OK. Just had to check. And he's written more
books than I can count. Lots of accolades. This is one of the ones I like: he's been called by The New Yorker the most important thinker on intellectual property in the internet era. So Larry's going to talk to
us today a little bit about AI and democracy. And the plan is for
him-- first, he's going to do one of his
patented presentations that, if you
haven't seen before, I think you'll enjoy a lot. And then we'll have
a fireside chat and then open up to
questions from all of you. So welcome, Larry,
and take it away. Thanks a lot. So you'll all remember the
kind of democratic stroke that afflicted our nation
on January 6, 2021. The important thing to
recognize about this event is that those people believed
that the election was stolen. And tell me what you
would do if you believed the election was stolen. What is the appropriate
thing to do? They believed it: 70%
of Republicans believed it. And it wasn't like the smart
Republicans didn't believe it and the not so smart
Republicans did believe it. A majority of college
educated Republicans believed the
election was stolen. This was their perception. And that perception is reflected here. This is a little bit more encouraging because, obviously, Republicans are just a portion of America. So this is saying 32% of Americans believed the election was stolen as of January 2021. That graph is scary. This graph is even more scary. In the time since
January 6, 2021, there's been no
change in the number of people who will say they
believe the election was stolen. We've had four years almost
of debate and evidence and every single
analysis you can make. Those debates and analyses
show overwhelmingly there's no evidence the
election was stolen. None of the contests could have
flipped the ultimate results. And yet, the persistence of this
view is the reality of our time. I want to argue this
is something new. Think about Richard Nixon. Nixon was a president as popular among Republicans as Donald Trump was for the first chunk of his time in office. He was hated by Democrats, though not as much as Donald Trump was. Independents are
somewhere in the middle. But then, starting about six
months before he ultimately resigned, this is the pattern
of support for Richard Nixon. And what's striking about
that pattern, of course, is that he is facing a decline in all three categories, Democrats, Republicans, and independents, at about the same rate. The whole country was
watching the same story. The story of the outbreak
of Watergate, and they reacted in the same way. By contrast, this is Donald
Trump's approval rating over the whole of
his administration. No change in his support,
effectively no change in support regardless of what happens
because people in this world are in bubbles of reality
and having their views affirmed and reinforced
by media that feeds them exactly what
they want to believe. This is the new normal. And this truth should bother us. Indeed, I think we should
develop a kind of paranoia about it. I'm going to describe it as a
particular kind of paranoia. It's the paranoia of the hunted. So like the birds in Alfred
Hitchcock's characterization or a modern version
of this, we should think about the hunting in
the sense of an intelligence in a particular
sense out to get us because our perceptions,
our collective perceptions, our collective misperceptions
are not accidental. Not necessarily intended,
but they are expected, maybe intended the product of what
I'm going to call the thing. By the thing, I'm
going to talk about AI. But you can't talk about
AI without genuflecting a little bit to our
future overlords. So I'm not going to say AI
is necessarily terrible. Obviously, it's the best
technology man has ever created. It's also possibly the
technology that ends mankind. So we don't have to pick
between those two right now. But I want to think of AI
in a little bit of a broader perspective. I want to recognize
it as an intelligence. And as the term
suggests, we distinguish between artificial and
natural intelligence. And we humans claim the kingdom
of natural intelligence, presuming us to be above
everybody else or everything else, whether
that's true or not. And by artificial
intelligence, we refer to the
intelligence we make. So here's the point-- we have already, and for
a very long time now, made artificial intelligence a
central part of our existence. This is the great point
that David Runciman makes in his book The Handover. I don't mean digital
artificial intelligence. I mean analog
artificial intelligence. So think of any
entity or institution that has a purpose and that
acts in the world instrumentally in light of that purpose. I'm going to call
that analog AI. Instrumentally rational in
light of the facts in the world. So in this sense,
democracy is an analog AI. Has institutions,
elections, parliaments, constitutions for the purpose
of some collective end. Here's our aspiration
for we the people to form a more perfect union. The democracy is, therefore, an
analog artificial intelligence devoted to a common good. Or corporations are analog artificial intelligences: institutions, boards, management, finance, for the purpose of making money, at least as viewed today, under the kind of absurd, Friedman-esque view that the only purpose of the corporation is to maximize shareholder value. Regardless, that is its objective function, and it acts in the world to advance that objective function. These are AIs. They have purposes
and objectives. Sometimes complementing: so the purposes of a school bus company and of school boards are complementary. Sometimes competing: so the idea of a clean, healthy park and smokestacks from a power plant are conflicting. And when they
conflict, we play out in our mind a natural
relationship in that conflict. So we imagine power plants
next to a park spewing smoke into the park. Democrats then say--
small d democrats then say we need a
referendum to clean up the park, to clean up the air. And imagine they vote, that referendum wins, and the smog is taken away. This is AIs competing
and democracy winning. It's a happy story. It's also a fantasy
in the United States right now because right now,
corporations are actually more effective analog
AIs than our government is in achieving
their objectives when their objectives conflict
with the objectives of the government. So think about it like this: time as the x-axis, instrumental rationality as the y-axis. Humans: pretty good, instrumentally rational, better than cows, maybe not as good as ants. But the point is we are capable instrumentally rational entities. Democracy is a necessary instrumentally rational entity, more instrumentally rational for certain purposes. Runciman's point is, if you want
long-term stable environments, you need a government
to facilitate that. Humans on their own without
that entity can't do it. Corporations, I want to
suggest, are even more instrumentally rational
than democracy, and democracy than humans. Now, each of these layers aspires to control the layer above it. So humans, through elections,
try to control their democracy and democracies,
through regulations, try to control
their corporations. But that aspiration is
different from reality. The reality of control
in the United States is corporations effectively
control their democracy. That's the consequence of things like super PACs and the way in which money affects and drives results in politics. And democracy structures itself to control the humans effectively, through corrupted representation systems like gerrymandered districts. So these relate to each
other in not the way the lower level hopes. And the observation of the godfather of AI, Geoffrey Hinton, that there are very few examples of a more intelligent thing being controlled by a less intelligent thing, suggests a corollary. Very few examples of a more
instrumentally rational thing being controlled by a less
instrumentally rational thing. So these AIs, I want to
say, are the analog AIs. Then we think about
digital AI on top of that. And once again, you
have an aspiration of control, the corporations
trying to control the AI. But increasingly, the
reality of the control is not quite as effective as
the corporations might want. My favorite example of this
was in September of 2017, when it was revealed by
ProPublica that Facebook had a category of ads that you
could buy targeting Jew haters. They were very
embarrassed by this. And of course, there's not a
single human inside of Facebook that created the
category "Jew hater." The AI had determined Jew haters
would be a pretty good category to begin to offer ads for. They could be quite
profitable if you offered ads to Jew haters. So it just developed
the category and started selling
ads on the basis of it. Facebook said, we didn't do
it, but that's the point. It was the AI that did it. They don't control the AI, even
if they aspire for the control. But the real
difference here is just the magnitude of this massively
more instrumentally rational thing, the magnitude
of the difference, because this digital AI will
be more efficient at achieving its objective than are we. And here we cue the paranoia
because our perceptions, our collective perceptions, our collective misimpressions, are not accidental. They are expected,
they are intended, they are the product of AI. You can think of this as
the AI perceptions machine. We are its targets. So, Tristan Harris described
the first contact with AI, with digital AI,
being the contact we had through social media. Tristan Harris was a student here, then went to Google, and then left Google to start the Center for Humane Technology. He was the driving force behind
The Social Dilemma, which turns out to be the
documentary that's been viewed by more humans than
any documentary in the history of documentaries. But at Google, he was focused
on the science of attention, cued up by people like Professor Fogg from here: the use of AI to engineer attention, to overcome resistance, to increase engagement with
these digital platforms because engagement is
the business model. It is a kind of, as Tristan
put it, brain hacking. We can think of brain hacking
relative to body hacking. So body hacking would be the
exploiting of food science, engineering food to
exploit evolution, to overcome a
natural resistance. So you can't stop eating
so-called food so as to sell food or sell, quote,
"food" more effectively. Processed food companies
do that as a business. Brain hacking is the
same with attention. Exploiting evolution. The fact that we can't
resist random rewards or we can't stop consuming
bottomless pits of content with the aim to increase
engagement to sell more ads. Now it just so happens we engage more the more extreme, the more polarizing, the more hate-filled the content. So that is what we are
fed with the consequence that we, the people, become
polarized, ignorant, and angry, and democracy is
weakened in the process. They give us what we want. Here's the critical point--
they're not forcing something on us we don't want. We're getting what we want. But the point is what we want
produces reactions like this. The key, I want to say, is to recognize this is not because AI is so strong. It's because we are so weak. So it's not AGI that we
should be worried about today. Maybe tomorrow, but not today. It's the fact that
way before we get AGI, the AIs are capable of overcoming what we, on reflection, otherwise would have wanted. Tristan was focused on the
individual human weakness. I want to suggest the way it overwhelms collective human weakness, overwhelms the collective ability of us to decide what might be in the interest of the nation, because it turns out we're pretty weak in exercising that muscle. So not just the individual alone, but it is all of us, surrounded by these metal heads. Long before AGI,
it overwhelms us. So the AI gets what
it seeks, engagement, and we get what I want to say
is a kind of democracy hacked. And that was first contact. What will second
contact produce? What's the nature of how AI engages with elections, in the current version of AI, in the era that we're about to enter into? When AI is not just targeting
insanely effectively, but creating and targeting
insanely effectively, how much more
effective will it be in either suppressing
the vote or radicalizing a portion of the vote
or convincing people that, in fact, their
interest is not what they otherwise would have
thought their interest to be? And it raises the fundamental
question: what can we do? What is to be done? Well, when you face a
flood, the first thing to do is to turn around and run away. Run away. You want to move democracy
to higher ground, to protected ground,
to insulate democracy, to shelter democracy from
the manipulative force of AI, from AI's harmful force. You want to find a way to tap
into a democratic will that is not so easily
perverted or distorted either as the unintended
consequence of a business model of engagement or as
the intended consequence of the Chinese or the Russians. The law does this in
the context of a jury. The deliberations of a jury
are protected deliberations. Not anything can be
presented to a jury. The judge decides what is within
the realm of what evidence allows, and they can't listen
to anything beyond the evidence. And then they
deliberate together. That's what a jury is. Democracy reformers
across the world are trying to do this
increasingly with democracy. And the core of this movement
is something called the citizen assemblies movement. So citizen assemblies
are random, representative, informed, and deliberative bodies
that are protected in the work that they do from outside
or manipulative influences that might undermine our
confidence in the outcome. Here at Stanford, you have
a well-developed institution of this practice through
Jim Fishkin's Center for Deliberative
Democracy and what he calls deliberative polling. Citizen assemblies are more
democratically connected in the sense that they're
producing outcomes. They're not just
producing attitudes. So Iceland used something like a citizen assembly to craft a new Constitution, starting with 1,000 randomly selected Icelanders who joined together to identify the values that a new Constitution was to reflect. And then they had an
election to select-- 26 people who would sit
on a drafting committee. 500 people ran
for that election. It's a country the
size of Buffalo. 500 people ran
for that election. 24 were selected. They drafted a Constitution. It was sent out to the public. The public overwhelmingly supported it. More than 2/3 supported
every single element of that Constitution. Then the parliament
just ignored it because the parliament thought
it was the sovereign, not the people. A more successful
story was Ireland. Ireland started a process of citizen assemblies around the time of the 2008 crisis, the same time Iceland did what it did. Ireland selects
randomly 99 citizens and then has one politician
type who presides. They've addressed
a series of issues that the Ireland
parliament could never have addressed effectively. For example, abortion,
same sex marriage. Those are two issues that would
be like the Texas legislature on those two issues. But both of them
were overwhelmingly supported by the
citizen assembly in a progressive direction. So same sex marriage
was approved, abortion was deregulated. They then set those
results out to the public, and the public supported them
at an even higher percentage than the citizen
assembly has done. France has begun to make this a
central part of what presidents run on. So Macron ran promising he would have one on climate, and there was one on climate. Then he said he would have one on end of life, and they had one on end of life. These have 150 randomly
selected people who serve for over 9 months,
seven sessions, basically once a month in Paris. Of course, it's
pretty easy to get people to agree to come to
Paris for something like that. But the point is,
it produced results that then drove the decisions that the French government made. That worked on end of life, not so successfully with climate. It's happening across Germany. The point here is that
it's happening everywhere around the world and not so
much here, which is weird because our
Constitution basically builds into it a commitment
to a very imperfect citizen assembly-- the juries. For example, you can't be
prosecuted for a federal crime unless a grand jury decides
you should be prosecuted. And a grand jury is
just a random selection of people who sit together and
decide whether you have likely violated the law. And then you can't be
convicted unless a petit jury, the regular jury, a 12 person
jury, decides to convict you. That's a commitment of
governmental power to a randomly selected group of people. Doesn't seem to
touch our lives much. I mean, I think you were commenting that you've sat on a jury three times in your life. In Philadelphia,
in the beginning of the Republic, the
average juror was, of course, a white male property owner-- so a smaller set of the population-- but the average juror served
on three juries a year. Three times a year
they made a decision about somebody's property
or somebody's life or somebody's liberty. And so they rotated
government power in a process that forced people
to deliberate with people they didn't know, and they
practiced democracy in that way. Now, I think this movement
is extraordinarily hopeful and exciting. And I want to insist it's
not just a good idea. I think it's kind of
existential for democracy as a kind of security
for democracy, a way of protecting us from
a certain kind of hacking, a hacking that would steer
us against a public will. So it's a change not just
to make democracy better. I think as we think about the
evolution of this technology, it's a change to let
democracy survive, recognizing both the terrifying
and exhilarating moment that we are in that long
before superintelligence, long before AGI, AI threatens
a democracy like this. But that there's
something we could do. And while we still can, we
should do that something. We should know that we can't
trust democracy just now. We should see that we
still have time right now to build
something different, and we should act to make
that difference happen. Now, not because we
know it will succeed. Quite frankly, I don't
think it will succeed. I kind of think it's hopeless. But there's an
attitude that we have to embrace with respect
to anything we love. I once gave a lecture and
a woman stood up at the end and she said, professor, you've
convinced me, it's hopeless, there's nothing I can do. And I thought, OK,
that's a failure. And at that point,
I, for some reason, had an image of my
then six-year-old son who's now your age, basically. And I thought, what if a
doctor came to me and said, your son has
terminal brain cancer and there's nothing you can do. Would you do nothing? Would you just give up? And I realized that's
what love means. The odds don't matter. It's you do whatever you need
to do to save the thing that you love. And that's what
you need to think when you think about the threats
that we have to this democracy, if indeed you can feel that
love for this democracy while there's still time, while
our robot overlord is still just a Sci-fi fantasy. Thanks very much. [APPLAUSE] Thank you, Larry. That was amazing. I feel sorry for the one last
guest speaker we have after you. You've set a sort of unattainable standard for people. But I'm glad you guys
all got to see Larry. There's some really
powerful ideas and an amazing presentation. So earlier today, you
and I were at a workshop for what we call the Digitalist Papers, which is audaciously inspired
by the Federalist Papers. But the premise is that
just as John Jay and Madison and Hamilton were trying to come
up with some of the principles for governing society, as
technology was rapidly changing and democracy was
pretty precarious, we need to rethink those
things for the 21st century. You wrote the essay
that folks here have read on protected democracy. And it was good that you touched
on a lot of the themes there. One that you didn't talk
that much about just now, but I want to give you a chance
to talk a little bit about is the idea of
vetocracy and that there are these moneyed interests
and polarization that are making it harder
to make any changes or get anything accomplished. Could you flesh out
that part of it? Yeah, so Francis Fukuyama,
about 20 years ago, started talking about America as
a vetocracy, by which he meant vetocracy is any system where
a small number of actors have effective power to block
the capacity of the entity to make a decision or to act. Now, of course, the framers of
our Constitution crafted America to be something
like a vetocracy, because there are many points
in the process of getting a bill passed where entities can block
that bill from being passed. So if the House doesn't pass a bill, it just doesn't become law, period. If the Senate doesn't pass a bill, it just doesn't become law. If the president vetoes a bill, it can be overridden by 2/3 of Congress. So it can become law, but
it requires a supermajority. If the Supreme Court
strikes a law down-- this wasn't compelled
at the beginning, but it's the way the courts
have interpreted it-- that's it, doesn't become law. So there were veto points. But what Fukuyama was
arguing is that we've added to the institutions
the framers gave us many more veto points. So parties, committees inside of Congress, they are veto points. The filibuster, as it has evolved-- don't believe the BS that it's the traditional filibuster we have right now. It's a brand-new filibuster
given to us by Mitch McConnell. The filibuster basically means
as little as 20% of America can block any bill
from passing Congress. Just total veto, not
even overrideable. And the one that's
most salient to me is the effect of super
PACs on money in politics. So in 2010, the
Koch brothers made it known that if a Republican
candidate acknowledged the truth of climate
change, he or she would be primaried
in the Republican primary. And since 2010, you've
seen a dramatic drop off among Republicans
willing to even entertain the idea that climate
change could be true because two brothers decided
that they were going to spend their money to stop the
possibility of climate change as a cross-partisan issue. I mean, you don't remember
this, but in 2008, there was a fierce debate
between McCain and Obama about who had the better
climate change plan. And both were pretty good. I don't know whether McCain's
was better or not, but the point is that was possible in 2008. Two brothers made it impossible
from 2010 on even to this day. And so that's a characteristic
of the vetocracy. And the point about
the connection to AI and the way the media
works is that polarization exacerbates the capacity for the vetocracy to have an effect. If you can link an issue to the
identity of a political party, if you can make this a liberal
issue or a Democratic issue or you make it a Republican
issue or a conservative issue, you make it impossible
for the other side to embrace it without becoming
a traitor to their identity. So the extent to which politics
becomes identity focused, it makes it easier to find
ways to block the capacity to get anything done. And I was really astonished
as I was doing work to try to find the origin
of the word vetocracy. It had only appeared once before Fukuyama started using it. But the most prominent
deployer of the term today is the Chinese government,
because the Chinese government has a very compelling critique
of the American democracy, and it is all about the
vetocracy of American democracy. America can't do anything. And so they will point out, for example, that in 20 years, they've built 20,000 miles of high-speed rail. The United States government has built zero miles of high-speed rail. Any number of problems-- I mean, there's lots to
complain about with China. So don't get me wrong. I'm not saying, let's become Chinese-- though they regulate social media pretty effectively, so maybe that's pretty good. But my point is that their
critique of us is true. We've built a government that
just can't tackle big issues to the extent they
become polarized, and every issue
becomes polarized. I mean, think about COVID. I remember in March of 2020,
there was this amazing moment when everybody was
willing to say, OK, let's just
pause for a second. Let's just deal with the crisis. And there was no partisanship
about what it was. And very quickly,
[INAUDIBLE] in particular decided that there was a
huge return to beginning to trigger this kind of
question of whether it's true, whether it's exaggerated,
whether-- and you began to see the whole system
become driven by a need to create the [INAUDIBLE]
versus them there, too, even when the consequence
is not just whether a bill gets passed, it's whether
people live or not. So the pathology of it is deep. And I'm not sure how
we get out of that. Suppose Donald Trump
wins in November-- No, I'm not going
to suppose that. Suppose the election
doesn't go the way you want, would you be worried that the US
was too much of a vetocracy then or would you be
happy about that? Yeah. Well, Donald Trump
would do a lot to decrease the vetocracy
of the American government. He's already signaled that. Project 2025, by Heritage, is filled with astonishing
innovations, hacks to get around the vetocracy
of the American government. And he would claim
authorities that, so far, no president has
thought to claim, or they probably have thought to claim but, since the Civil War, have not claimed effectively. And then the question is
whether the courts would resist. And this court has not
demonstrated its commitment to principles that seem to
be principles in this space. So there would be
less vetocracy. So we had this, I think,
really interesting conversation earlier today. I'm not against vetocracy
completely because I think there was some wisdom in
structuring the republic as we originally did, to avoid passions running away. So it's fine to have some. But there's such a thing as
too much of a good thing. So we've gotten to the point where we are right now, and we can't address
any serious issue-- climate change, inequality,
health care, investment. I mean, pick the issue. We can't deal with any issue effectively. And just because you would break
it to make authoritarianism easier doesn't negate
the horror that it has for ordinary democracy. Fair enough. And to address all those issues,
you need a common understanding of facts. You wrote in your paper that democracy requires a common understanding and a common set of facts to resolve questions rationally. And that was possible in the
age of broadcast technology. It's unimaginable today. Say more about that. And that's how you started
your talk a little bit about the way that
different groups have different perceptions
of what truth is and the fragmentation of that. I think there are
two technologies we have to keep track
of at the same time. So one technology is
broadcast technology. So at the birth of the
nation, broadcasting was basically pamphlets. But pamphlets are not really what we mean by broadcasting. You could print and distribute, and everybody in the country could get access. But it took four months
to get information from one corner of
the United States to the other part of
the United States. So it's not like everybody was ever listening to the same story at the same time. Broadcasting changed that. And especially during
the Second World War, it became a central organizing
technique for both the fascists and those fighting fascists. Both Hitler and
Goebbels, but FDR too, using broadcast fireside chats to unite the nation. Yeah, that's what
we're doing here. Tens of millions of
people are obviously listening right at
the second to this. It's quality, not quantity. Yeah, OK. We tell ourselves
that, we academics. We sell 500 books. You've never sold
just 500 books. But the point is, there's
this period of time-- Markus Prior at Princeton--
if we're allowed to mention Princeton here-- describes a broadcast
democracy, which he says is kind of the beginning of the 1960s
to the middle of the 1980s. And in that period
of time, everybody's basically watching video
presentation of the news on one of three news stations that
show the news at the same time. So you don't have a choice
to watch home shopping network while the news is on. If you want to watch TV,
you're going to watch the news. And TV's pretty compelling. So you're watching the news. And that regular diet
of down-the-middle news created a certain republic. I don't mean to
say it was unbiased or the understanding was
complete or it was a golden age. I'm not saying it's
a golden age at all. I'm just saying it had a
certain characteristic which was the agenda was presented
to the American public and the American public
responded to the agenda. And we accomplished an
extraordinary amount in those 40, 35 years. I mean, you think of how civil rights rose and became a huge issue that
then was resolved with massive
legislative changes. The environment, Richard
Nixon, the Vietnam War-- these were really big issues the nation struggled with and worked through in ways that were progressive. So one technology
is broadcasting. And so you can say that we
went from a non-broadcast era to a broadcast era, and now we're going back to a non-broadcast era
the same three shows. We're all watching our
own little channels. They might be polarized into
coherent political spaces, but it's not like anybody is
consuming the same content at the same time
in the same way. So that's the first dimension. The second dimension is the
legibility of the public. So the weird coincidence is
that broadcasting and polling, modern scientific polling, are
born basically at the same time. The first dramatic appearance
of the modern polling technique is the 1936 election,
when everybody was convinced that Alf
Landon was going to beat FDR overwhelmingly. The fact that you've
probably never heard of the word Alf Landon means
you know that didn't happen. The reason people thought that
the then prevalent technique for polling was
basically straw polling. Literary Digest
basically asked people to send in coupons that said
who they were voting for. They collected
millions of ballots. The millions of ballots went
overwhelmingly for Landon. So they said, Landon is
clearly going to sweep. But the population
of people that were asked to respond
to the Literary Digest were registered
owners of automobiles who, in 1936, were not a
random selection of Americans. So Gallup, who was a graduate of
Northwestern journalism school or something, said, I'm going
to do a different technique, random representative sampling. And he said, FDR is going to
overwhelmingly trounce Landon. Everybody said it was a joke. It was a joke. That was impossible. And of course,
that's what happened. So with that event, the
world all of a sudden realized there was a
technique for understanding what the people thought
at any particular time. But what's striking
about learning how to read the people in the
middle of broadcast democracy is that we learn how to read
the people just at the moment that the people have something
interesting to say because they are all being educated by the
same basic set of information, and we can track their
views on the basis of this not comprehensive but coherent set of information. And it convinced people-- like a book by Ben Page and Robert Shapiro called The Rational Public. It's kind of a bizarre title for a book today. Who could think the
public is rational? But what they did is they
looked at the period basically of broadcast democracy,
and they could show how the American
public responded rationally to the information they were
given about policy issues. And they concluded this is
the nature of democracy. They were not sensitive
to the contingency of the technological
environment within which this reality was created. They just thought this was
the nature of democracy. But what we've seen since
the birth of cable news, cable television and,
therefore, cable news and then the internet, is that
the multiplication of the number of outlets
means that people watch whatever they want to watch. And the only people who watch
the news are the news junkies. And the news junkies are the
most partisan, most politically engaged of the public. And so the news plays
to those people, and that playing to the people drives polarization in the context of the news. And at the same time, we are able to poll them, poll the public. And we increasingly see a crazy
public through the polling. So the scariest statistic I
come across here is that in 1998, Pew started asking the American people, do you have faith in Americans' political judgment? And 2/3 of Americans said, yes, we have faith in Americans' political judgment. Today, those numbers are reversed. 2/3 say they don't have faith in Americans' political judgment. And part of the reason is
we can read the people, and we read the
most extreme people because they're
the ones that are most visible and most engaged. And we look at
the crazies and we say, oh, my gosh, why
would we trust government to these crazy people? And the consequence of
that is this erosion in confidence in the democracy. So in one sense,
we are like-- today is very much like 1870 or 1880, where there's plenty of polarized
and partisan press. But the people were
invisible in 1870 and 1880. They didn't matter to
what policymakers did. Today, they're visible. They're legible. So on January 6th, there were
many Republicans in Congress who thought oh, my gosh,
thank god that's over. That guy is gone. We never have to
worry about that guy. Lindsey Graham, on the
floor of the Senate, said, I'm off the train now. It's finished. No more Donald Trump. And then they got
the overnight polls that showed that the base
of the Republican Party was still deeply
committed to Donald Trump. And they're like, what choice do we have? We've got to follow our people. So these two things
together, I think, produce this
particular place where we don't have an easy
capacity to imagine moving out of the consequences. Well, say more. It sounds like
you're saying it's a bad thing that the
politicians are following what their voters want. Well, if they're following
what their voters want and their voters are being
hijacked to consume and believe the most polarizing content-- So to connect the
dots on that part-- so how is AI and the
internet polarizing? Right. So if you are running a
processed food company, and you're deciding what kind
of food are we going to make, you have your food
scientists figure out what's the mix of salt,
fat, and sugar that's going to be most addictive. And you produce that and
people consume your food. You might notice they
become less healthy. And you're like,
I don't like that. Kraft for a period of time
decided, no more of this. We're going to produce
healthy foods, healthy snacks. And of course, the public
said, we don't want that. We want the Cheetos or we
want something unhealthy. So the market
turned against them. That executive was kicked out. They went back to their
old ways of producing food. So the point is
the business model of selling fast food
or processed food drove them to do things they
knew were harmful for America, but that was their business. In the media context,
when you've got engagement as the business model
of social media, its objective is to figure out
what's going to get you hooked and keep you glued
to the screen. Is there a way to have media
be fun and engaging, but also healthy? I mean, can you have healthy
food that tastes good? Can you have healthy
media that-- one way to think about it is System 1 and System 2, Kahneman and Tversky. So I go to Twitter. It's fun to get those little hits of something exciting happening. But there's also Community Notes. There's econ Twitter, for those of you who go to it, which has links to papers from NBER. And I don't know if I have a balanced diet, but there may be a way of
getting it to be interesting, but also enlightening. Would be wonderful,
would be amazing. I haven't seen it yet. So a lot of experimentation, a
lot of people-- it's not like-- You got to give
us some hope here. It's not my job
to give you hope. That's not my business here. Are there things we can do? Yes. Start with that. So one thing that you or your
kids will do, I think, I hope, is to move a significant chunk of democratic decisions
manipulated decision-makers. So that's the protected
democracy move. But in the immediate
term, what we can do is to try to dismantle
the most poisonous of these vetocracy triggers. And the most poisonous
vetocracy trigger is the role of money
in American politics. And you pick your
issue, and I'll tell you exactly why it's so
ridiculous because of money. And some of them
it's not politically appropriate to talk about. But the reality is money is at the bottom of every one of these most significant ones. And if we could find a
way to deal with that, we could begin to move
governments away from a place where the politicians
are responding to the perverse incentives
that money creates. So money creates two different
kinds of perverse incentives. First, the Super
PAC money, as already described with climate change-- the Super PAC money is the most
polarizing poisonous money in American politics. It has its effect not
necessarily just by being spent, but by being threatened. There's a great paper by-- I'm blanking on their names-- It's called the Iceberg Theory
of Political Contributions. It basically says that credibly threatening to spend money can be just as effective as actually spending it. So all you have to do is be able to threaten that you're going to spend against somebody, and that has a disciplining effect on them. Yeah, so that's the iceberg: underneath the water is this incredible effect. So Super PAC money is
perverse in that way. But then there's even
the small dollar money. So if you are somebody like
Marjorie Taylor Greene or Matt Gaetz, you raise
most of your money through small dollar
contributions. And the way you do
that is you behave in a way that
makes sure that you get to the top of everybody's
Twitter feed or everybody's Instagram feed or you
get on cable news. How do you do that? You perfect clown show behavior. That's how you do it in
American politics today. The most sensible, balanced,
serious members of Congress-- you can't tell me who they are
because they are invisible. I want to focus on some of
the technology platforms and some of the ways that
LLMs and others are changing the way people get information. So Google search,
earlier today, we were with Eugene Volokh, one
of your colleagues in law, my colleague at
Stanford, who argued that the business model of
search engines is speech. And now that business
model is changing as LLMs are delivering a lot
of summarized information, not even pointing people to
the sources on the internet. How do you see those-- as people start consuming more
and more of their information, as they likely will, if they aren't already, from LLMs and chatbots, and maybe the search engines summarizing the answers in some form-- how do you see the nature
of that information being affected by economic incentives
versus what other incentives? Yeah, so I think it's
serious, but we first have to recognize the
fundamental difference between push and pull content. So one huge difference in
the world today versus 1970 is people will flip
on Fox News and be consuming Fox News all
day long in their house in the background. Or their news feed
from Facebook-- probably nobody here uses
Facebook-- but news feed from Facebook or
Instagram constantly pushing stuff at them. That content is enormously
consequential for developing people's attitudes about what-- Yeah, we just saw Matt
Gentzkow and [INAUDIBLE] have a paper that just came
out about how they got people to turn off Facebook,
a random set of people to turn off Facebook for six
weeks before the election, and it made them-- I may get the number a little bit wrong-- 2.6% less likely to vote
for Trump if they turned it off, which implies that there's a
big effect of watching Facebook or there was. This was during
the 2020 election. Yeah, right. So that's a percentage
that Biden won by. So it would have been
very significant. Right. So the point is that the deployment in this context is about driving engagement, and the nature of that engagement becomes different if you've got pull media. So LLMs are pull media. You want to sit down, you
want to ask a question, you're going to get an answer. I think we have to
worry about what the incentives of
the platform are. So we talked about
this earlier today. The early days of
Google was very pure. Google was just
giving you whatever website happened to have
the most links given the nature of your query. And so in that
sense, it was just reporting the nature of
the internet at the time. It was extremely clean
and extremely valuable. Now what gets fed to you is
a function both of what you want and what the advertisers who
are buying your attention have succeeded in convincing
the algorithm to feed you. And so the incentive of
advertising inside of the Google search engine changes
the nature of what it's going to feed
you to drive you more to the incentive of
whatever the advertisers are. Well, they separate out the paid
ads and the organic content. But then the ordering-- even
the ordering in the organic can be affected by the content
of what the advertising pitching is. I mean, the ads are
certainly at the top. Yeah. OK. But the point is
the same concern should exist in the
context of LLMs. Yeah. Well, in the LLMs, I think
the concern is potentially a lot bigger because it all
gets kind of mixed in there. And then they have to figure
out what they present and it is a challenge to their revenue
model how they're going to-- So we talked about
an example today. So imagine Waze is giving you a choice, and it can see there are two paths you can take and they're roughly equivalent in length, but one of them passes by an advertiser, and it sends you via the advertiser. We need to know what the underlying incentives of the AI are to be able to evaluate
what the effect of that's going to be. So one more question, and
then I'll open it up to-- this is a really big question. So in my office just before, we
were talking about a world-- we don't know how
far away it is where AI can do most of the
things that humans can do. And that's going to have
some big economic effects. We're going to talk
about some of them next week with Daniel Susskind. But it also has some
political implications because if people are no
longer economically essential and they lose their
economic bargaining power, it's likely to have effect
on their political bargaining power. And it'd be interesting to
hear some of your thoughts on what kind of
challenges that creates for democracy if economic power
is no longer dependent on labor. Yeah, so the first
consequence is, as you observed earlier today,
labor is, by its nature, decentralized. It's wherever people are. If you eliminate
the need for labor, then capital can be
extremely centralized. And so the power of
centralized capital becomes enormous relative
to what it is today. And the way I think about
this-- we talked about this book earlier-- Daron Acemoglu and
Simon Johnson's book, Power and Progress. And the thesis of
Power and Progress is that there's this really
weird period in history in America from
roughly 1955 to 1975 where there's almost
perfect correlation between the rise in productivity
and the rise of equality. So productivity goes
up, equality goes up. And for all of
history before that and all of history after that,
there is no such connection. And so they ask, why is it
during this weird period of time there was this connection
between the rise of productivity and the rise of equality. And the claim was because
during that period of time, we had effective countervailing
power, government power through antitrust enforcement
and redistribution of income and economic power
through labor unions. And so this
countervailing power was able to make sure that this
extreme growth and wealth was distributed equally. But the lesson of this is if you
don't have that countervailing power, and when you see an
explosion of productivity, you're not going to see an
explosion of equality or wealth for everybody. You're going to see a
concentration of wealth in the very few. And so we're right now
at one of these moments, we're going to see an
explosion in this productivity. And yet we have a government
incapable of exercising these countervailing powers. And more importantly,
the ideology of people like Marc
Andreessen that kind of dominates the debate about this is that it would be a
disaster for the government to step in and do anything
in the context of this growth in innovation and wealth
because that would just destroy it or kill it or
it would be the end of it. So at the moment where
we most dramatically need the capacity of
government to do something to deal with the radical
consequences of this explosion in wealth, we don't have
a government to do it. Now if we did have a government,
it could be pretty good. You have these technologies
that do all of our work. Maybe we don't have
to work as much. Maybe we can have a
UBI and a capacity to have lives that are much
more meaningful and much more balanced. When did humanity ever decide
that working 60 hours a week was a good thing,
something we needed to do to be real as humans? But that's the basic
world you live in now. We could live in a
radically different world if we could get
that right, but I don't see the capacity for
getting it right right now, given-- Well, shame on us
if we mess that up. But let's open up to
questions and comments. How about right over here? Hello. Thank you so much for your talk. You can say your name. And I have a question
about, I guess, the role of centralization
and decentralization. Yes. What I found striking
about your analysis of news, between the broadcast era and the current era, is the role of decentralization-- or rather, centralization back then, in the consolidation of news sources for people to engage with, versus now, when everyone can consume
what they want to hear. Do you think that there's
an inherent tension between decentralization and,
I guess, the ability of society to reach consensus on truth? The critical part of
the question that I just want to emphasize
is that you get to hear what you want to hear. Yes. If you get to hear
what you want to hear, the unintended consequence
of that freedom is a growing gap between you
and the rest of the world, between your type and the other types. I have an app on my
phone called Read Across. I haven't opened it in a while. I don't know if it still works. But it says it
monitors what I read. And then it suggests
to me that things I should read to balance
my reading so that I get a fair view of the world. I hate that app. I hate it because I don't
want to waste my time reading this other junk. I think I know
what the truth is. And so I want to read
stuff that reinforces my views, that makes me feel good, that convinces me I'm right. That's what I want to do. And so does everybody else. If you're not like
that, then good for you. But you are 0.001% of the world. And so if that's the
nature of humanity, the consequence of that
ability to perfectly pick what you get to
consume means we're going to have these
bubbles, and those bubbles are not going to be able
to understand each other. I mean, I find it
just astonishing. I just don't understand
people and their views. I feel like when I was your
age, I understood them. When I was really young, I
was a conservative Republican. I'm not at all now, but
I understood liberals. I understood why they
thought what they did. But I look at
Republican MAGA now. I just don't understand
what they're talking about. I just can't even get it. And I think part
of it is that we live in these universes
of constant separation. And the most striking
fact about that is, in the latest poll that asked the American public which political party is more protective of democracy, the majority of Americans will say the Republicans, because the Fox spin on what has happened to the former president is that this is banana-republic behavior. This is basically the party in power using
their government power to persecute their
political opponents. It's the thing that
third world nations do, not the United States. They've so effectively sowed
that view that the net of it is that we just
think, wow, Democrats don't care about democracy. It's only Republicans that do. So I like what you said, though. You said not having
a centralized source would be more likely
to lead you to truth, but less likely to lead you
to a consensus about what the truth is. You said that intentionally. Yes, what I noticed was
I guess there was always this argument that
decentralization would somehow lead to, I guess, a more
accurate depiction of the truth, less vulnerability to bias. So having a consensus
about the truth is different than
having the truth. And you may have
more divergence, but that might be closer
to the truth in some sense. Is that part of
what you're saying? Or maybe neither of them are. I don't know, unfortunately. Yeah. I mean, more likely that
somebody knows the truth. It's just that not many
people or not everybody knows the truth. There's no consensus about that. That's a great distinction. All right. Let's go over for a question
in that corner in the back. Hi, professor. Thanks for coming
to class today. I was wondering, I
am personally very frustrated by the lack
of sensible social media regulation in the US. So I was pleasantly surprised
to see some action over TikTok, despite the motivations being
over national security and not just the education and well-being of people. Do you think in a less
American dominant world, there will be incentives for more
social media regulation, for example, maybe
more media regulation, more barriers to entry in
terms of information quality and perhaps a more
functional democracy? Yeah. I mean, I think that
we like to criticize-- I was praising China before. I'm going to praise them again
in the context of social media regulations-- kind of striking
to recognize-- first of all, we're threatening to
get rid of TikTok. TikTok doesn't exist in China. The version of TikTok
that exists in China is a very different platform. And that platform effectively
regulates access of young people to the content, like the amount of
time that they can consume it. Heavy regulation of gaming. So the number of hours that you
can play games online as a kid is restricted. It shuts down after, I think, 10
o'clock or 10:30 or something like that at night. And the whole
society is organized around making sure that the
online environment for kids is safe and productive. And the content on Douyin, which is the equivalent of TikTok, convinces kids
they should become astronauts or entrepreneurs. 13-year-olds in America
want to become social-- whatever that word is. Media influencers. Media influencers,
social influencers. This is terrifying. But it's a product of this. And what's striking
to me is that it's almost unthinkable in the United
States to say something like, OK, let's just make it
so you can't have access to the platform after 10
o'clock or that, as a kid, you can't be online
for more than-- playing games for
more than three hours a week or something like that
even though we increasingly recognize-- I think Jonathan Haight's
work here is really powerful-- just how destructive it is,
especially for people in the 12 to 15-year-old range,
especially girls. And so this is
another consequence of the incapacity
of us to govern. And I felt that
most dramatically in the early debate about
getting rid of TikTok. And after Frances Haugen-- I had the honor of
being her lawyer when she first became the
Facebook whistleblower. And when she testified in
Congress in October '21, I guess-- I can't remember now-- but-- Earlier, I think. It was after the election. So it must have been '21, yeah. She testified and there was this broad consensus on Capitol Hill that Congress had to do something. And Republican Marsha Blackburn was raving about the need
to do something about it. And then AOC gave
her first TikTok. And her first TikTok
was criticizing the idea that you should regulate TikTok. And her argument was, it's
not fair to regulate TikTok until we have a privacy bill in
the United States that's passed. And that was depressing both
because I thought, oh, my god, she can't really think
the issue is privacy. She can't really think that's
what the problem of TikTok is. And number two,
it signaled money had come into the
Democratic Party to begin to split the Democratic
Party about the question of whether to regulate here. Now, I want to move this
a little off politics and into some of the
technology issues. And so is your
perception that it's more important for regulation
now in an era of machine learning and social
media than it would have been in the era of
print newspapers or broadcast television? And explain would you have
had these views 50 years ago? And sharpen what's different now
that requires more government intervention. The difference is
just the dose effect. You don't have to
regulate newspapers because you read newspapers
for 30 minutes a day and that's the end of it. It's not going to
create a worldview. You're not constantly being
inundated with the content to create a certain way of
thinking about your body or thinking about
politics or identity. But when you have these feeds
whose business model is how to keep you
focused, just how to get you to eat as
many Buffalo wings as you can, how to get
you to spend constantly your time online, that dose
effect is hugely significant. And so we keep referring
to some of our conversation we had earlier today. But you mentioned
government failure. Clearly, there can
be market failure where it's not working in
response to social interests, or breakdowns in the public sphere, et cetera. But governments aren't
necessarily doing that either. And you brought that up in
our conversation earlier. So help me understand
why you have confidence that the government
control of social media wouldn't be just as pernicious. Well, I don't like the
idea of the government control of social
media, but I think the government can change the
incentives of social media. So, for example, imagine
an engagement tax, a quadratic engagement tax. One unit is one unit of tax, two units is four, and three units is nine. If the units are right and the
numbers are right, very quickly, you make it so that
the engagement business model no longer is profitable. Engagement per
person? OK, gotcha. So at a certain point-- Romer had something
a little similar. Yeah. So at a certain point, they
say, Eric, get a life, stop going through your TikTok feed. You've done it for two hours
and we don't need you anymore. In fact, if you were
on here, your value is less than your cost. And so that would be a change. It doesn't require
the government deciding some content
is good or bad. But I think the punch
line for the TikTok thing is TikTok would
not have happened but for the Gaza conflict. You mean the regulation of data. Yes, TikTok regulation. Let's get a few more
questions in, quick. We have a little
over 10 minutes. So why don't we just go a couple
right here next to each other? First in the back
and then-- yeah. Thank you. I wonder how much this
conversation about engagement in social media is
divorced from the actual AI that we're talking
about in this class. Do you think that-- I mean, there's been a
completely new business model for how AI is
deployed to our lives. I never paid Google
for its services. I've never paid Facebook. Engagement mattered. I do pay for Gemini, I pay
for Claude, I pay for ChatGPT, and they provide a
lot of use for me. So I wonder, in this
dynamic, I feel like you're conflating AI and social media. Social media, it clearly has
bad ramifications for democracy. But I wonder if there's a new
business model that actually is aligning incentives. Can you speak about that? Well, social media only
works because of AI. So you're right, there's
a new flavor of AI that's producing a
different business model. And the business model of GPT
or ChatGPT or any of these is not advertising
driven right now. We don't know how it evolves. Google wasn't advertising driven
when it was first born either. But right now, it's
not advertising driven. So there's not an
engagement component to it. Absolutely. But you can't forget the
fact that there still is legacy AI in your lives. What it decides to feed you is
not a decision by some intern. But what about-- but
specifically to this question of-- I mean, we talked about
the revenue model as well. And Michael Spence and
Owen had a nice paper where they argued that
different revenue models lead to different kinds
of content being produced. And to the extent, is
that a way to do this? If the customer is the customer
instead of the product, the user being the
product, is that something that's more likely
to have the AI work in the interest
of the consumer? Yeah, absolutely. And if people naturally
chose the subscription model for all of their content,
and that's what they wanted, I would be less concerned
about the AI in that context because it would be delivering
something different. But if it's not
that, if it continues to be as social
media is, engagement based driven by four-year-old
AI, the AI that figures out how to target you and
to engage you like that, it still has a consequence,
even though it's not the sexy AI that
you might be studying or want to be deploying. Got a question up here. Hi, thank you for coming out. And my question is related to
the concept of instrumental rationality, and
it's somewhat related to the question
that asked earlier. You kind of discussed this
idea of incapacity to govern. And the question I have
is how much of this is a consequence of just
polarization versus how our government was designed. We're designed to have a
decentralized state where we need consensus
and we need a bunch of different people working
together to get things done. And so I'm just
unclear about how do we confront the challenges
that you've described in a way that's consistent with
the Constitution or the systems that we've built
around ourselves. Yeah. Thank you. So you're absolutely right. As I said, at the
founding, the design was a vetocracy
of a certain kind. But it wasn't as debilitating
as the one we've evolved. And the capacity of
the United States to do things, whether you
like the New Deal or not-- forget the New Deal. What happened from 1950
through 1980 or even like '87, the last Tax Act of Reagan, was
hugely significant government steps to deal with
problems in society, whether it's the Voting Rights
Act or any number of these very significant pieces
of legislation that the system was
able to get over and actually do something about. And whether you like
it or not, the point is they could do
something about it. I look at Europe, and I
see all of the regulation they've achieved with
the GDPR or DSA or DMA. I don't like any of them,
but I admire the fact that they can do it. They have a capacity to do it. And my only point is
you could get to a point where the democracy
is so great, it disables you from
being able to act even when there's an overwhelming
compelling need to be acting. Climate change, I think, is
the easiest example of that. But there are many. Every significant
issue that I think we would identify as something the vast majority of Americans care about is blocked because of exactly this. And it's not a partisan point. I mean, the point is
it's just structural. You can invoke vetoes and block in ways that are valuable to both sides
in this political debate. We don't have much time. I want to say two things. Let's have our
remaining questions and answers to be short. And two, they all have
to have the letters A and I in the answer. [LAUGHTER] So let's go right over here. Hi, professor. Thank you so much. I think you've said a lot
of really interesting points about the role of money in tech
and tech influencing government. And I think I'm personally very
skeptical that the US democracy has any real influence
over governing these massive corporations
that are modern day Goliath institutions. What do you think
democracy means when it comes to the way
corporations work right now? Because the tech leaders
that are ruling the world or making the choices of what
our consciousness is, how intelligence is going to
look, how the labor market is going to look in 10 years, they
were not democratically elected. So what does democracy
even mean with the way that our economy is
set up right now? Yeah, so the graph I gave you
of the corporations sitting above the democracy makes exactly that point. You're exactly right. And it's because I think
they've figured out how to hack democratic
controls, accountability. And it's not so hard. A little bit of money
spread in the right way, a little kind of
lobbying, revolving door. I mean, look at the military. The military is filled
with people who serve in the military-- and I have
enormous respect for them-- and then spin out into
defense contractors where they're paid
an order of magnitude more, and then they spin
back to the military. Well, you tell me how when
they get back to the military, they're going to be
able to be independent and judge, do we need this new
tank system or this new weapon system. That's just a structural
way in which they figure out how to hack us or hack the
democracy so that it's not actually capable of
doing things that-- If Congress wanted to
regulate AI right now, do you think it
would be feasible? And then what about
10 years from now? I mean, if our AI tech
leaders and Congress had a disagreement about the
future of the country, who would you bet on? Yeah. Well, right now, I think it's
feasible, but it's just barely. Depends on the way you frame it. But already-- you
probably didn't see-- but Ted Cruz has the
classic anti-regulation op-ed in the New York
Times that set up the Republican Party's
position, which is this is like the internet. And we all learned that you
shouldn't regulate the internet because that will kill the internet. So therefore, you
shouldn't be regulating AI. So we've already created
a partisan valence around the question
of regulation here. But it's certainly going to
be easier to do something now than it would be in 10 years. That's not to say it's possible; it's just that it would be easier. And it's exactly for the
reasons that you've said, these people, Sam Altman-- not just trying
to be Steve Jobs-- Sam Altman thinks of himself
as Winston Churchill. He thinks of himself at that kind of level of significance to the history of humanity. And that's a real threat to the capacity of democratic governance. Let's get some more questions
right here in the back. Hey there. Thanks for your talk. My question is on AI
tools that purport to increase deliberation. I'd say a takeaway from your talk is that we should talk to each other more. Yep.
like [INAUDIBLE]. OpenAI had this grant
application last year. There were some; the Taiwan system, for example, came out of that. I'm wondering your
thoughts on AI mediating this kind of deliberation,
whether this is appropriate, whether it faces the same kinds
of issues you've described previously, and so on. Yeah, I think there's
enormous potential to use AI to lower the
costs of real deliberation. We have a project
where we've just purchased a really fantastic
deliberative platform called Chasm, which
facilitates small group deliberation, similar
to what Jim Fishkin has in his Center for
deliberative democracy. We're going to open source
it and invite developers from anywhere to take it and
begin to integrate it into more of our lives because
we think deliberation is the essential cure. We've got to exercise the
muscle of deliberation to get us back towards a
democracy where people feel responsibility and connection to
what their government is doing. So I think it's
an essential part. And if we could make sure it
happened, it would be curative, I think. And the citizen assembly
movement that I'm describing is born out of the
same sense that you don't have to just imagine
citizen assemblies of 500 people meeting in one place. Imagine you could have
these virtual deliberations of millions of people
meeting at the same time. With the platform, we could have a million people deliberating at the same time in small groups. So I certainly think
that's part of the hope. That's part of the strategy. How do we multiply and build
that up, build that muscle up, because the more people do it-- when you see citizen
assemblies, you see people sitting at small tables talking in small groups. And they see the other
side as not a lizard. They have kids,
they have dreams, they have hopes like they do. It is the most
curative technique for the kind of
polarization that we've got. And so I'm absolutely on board. That's what we should be doing. There is some very creative stuff being done with AI-enabled and AI-mediated deliberation. Polis is a good example of this. There are these digital papers, I guess 12 of them, and several of them talk specifically about Polis and other mediated ones. So I'll see if I can make
them available before the end of class. They're all in
process right now. But Sandy Pentland, who just
joined the Stanford faculty at the Digital Economy Lab, is leading some of that related work. Pentland left MIT? Yes, yes, we got
Sandy to come on over. So that just-- you
didn't know that? And he's been doing work
with Jose Ramon and others to explore how this-- and
also Audrey Tang from Taiwan is writing one of
the papers as well. So she'll talk
about that as well. So we have time for one
more real quick question. Let's go right up here up front. You've been patiently
holding your hand up. I'm a [INAUDIBLE] student here. I have a podcast, too. I'm going to ask the one
Devil's Advocate question here. Is it possible that we're kind
of making much ado about nothing in terms of AI hacking people? I mean, we saw this
boardroom coup [INAUDIBLE]. I don't think that
it would necessarily be possible to even
do it intelligently. Also, I'm just curious
about the system, too. I agree, January 6 was not a great moment; it was a bad moment. But the system works. The people got arrested. The court-- What's your question? The whole AI hacking
people, it's possible that we're overstating. I'm coming away from this talk,
just feeling like, I don't know, the end is near, we're in this
terrible period in history. And I fully agree
that things are getting very crazy politically. But regardless of AI, things have been crazy politically for a while. 2016 was crazy, where Clinton denied the election. [INAUDIBLE] I just don't quite see
that we're there yet, that we've lost our
free will and that AI is manipulating us to do things
and provide us with information. Sometimes it's wrong. But I'm just curious
if maybe we're overstating some of these things
that we're not quite there yet in terms of getting to
that first [INAUDIBLE]. Is it possible we're
overstating it? Sure. Are we? No, we're not. We're not. It's just not true to say
Al Gore challenged the 2000 results. Al Gore gave in after
the Supreme Court said what the Supreme Court did. Donald Trump still doesn't give
in to what the Supreme Court-- Let's talk about AI. Yeah, but the point
is it's related because certain
kinds of candidates are possible because of
certain kinds of technologies. A candidate like
Donald Trump-- you don't want me to use his name. I get it. [INTERPOSING VOICES] The point is, it is connected. --could not have been a
candidate in broadcast media, period, because nobody would have taken him seriously or put him on television. He wouldn't have
been interviewed. It wouldn't have been possible. It's only when you begin to have
these machine-driven platforms of content that are
rewarding crazy that you begin to see crazy rewarded. We are optimizing for, we are selecting for, crazy. Look at Congress, selecting for crazy. And so I think that's a function, not explained wholly, but in part a function of the incentives the platforms create. We ought to be concerned
about the incentives. And those platforms create
those incentives because of AI. It's an important point. Not LLMs. It's not the AI that you're cool
with today, but it's still AI. It's still as much an
AI issue as anything. And it is having a significant, dramatic, and measurable effect, and the consequences are real. Now we could shut
our eyes to it. And I think many
people do because they don't see what we could do. You characterized these two board members whom I know very well. It's not a true characterization of either Helen Toner or Tasha McCauley. The issues at OpenAI are real. There are serious issues at not just OpenAI. I have a friend who's very close
to one of the senior people at Anthropic. That guy says, my children are
not going to see high school, not because they're going
to eliminate high school, but because he's
convinced they're not going to be able to
control what they're doing. So when the people
inside are jumping out, a whole bunch of the safety team that just jumped from OpenAI, and said they
jumped from OpenAI because they just don't think that
they have safety in place to protect against the
catastrophic risks or the risks of the technology, how do you
have the confidence to doubt, based on what? Based on the fact that
it hasn't blown up in the last two million years? Yeah, well, that's true. Technology-- I mean, except
for nuclear weapons-- has not blown up the world in
the last two million years. But there are a lot of people who are pretty close to it who are pretty terrified about what it's doing, what its potential is, and its lack of governance, and I think we should take that seriously, and you should take it seriously. I mean, I'm going
to be long gone by the time it's a real problem. All right. Well, on that optimistic note-- [APPLAUSE]