Applause Lights

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-11T18:31:48.000Z · LW · GW · Legacy · 93 comments

At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of artificial intelligence. So I stepped up to the microphone and asked:

Suppose that a group of democratic republics form a consortium to develop AI, and there’s a lot of politicking during the process—some interest groups have unusually large influence, others get shafted—in other words, the result looks just like the products of modern democracies. Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world—dropping cellphones to anyone who doesn’t have them—and do whatever the majority says. Which of these do you think is more “democratic,” and would you feel safe with either?

I wanted to find out whether he believed in the pragmatic adequacy of the democratic political process, or if he believed in the moral rightness of voting. But the speaker replied:

The first scenario sounds like an editorial in Reason magazine, and the second sounds like a Hollywood movie plot.

Confused, I asked:

Then what kind of democratic process did you have in mind?

The speaker replied:

Something like the Human Genome Project—that was an internationally sponsored research project.

I asked:

How would different interest groups resolve their conflicts in a structure like the Human Genome Project?

And the speaker said:

I don’t know.

This exchange puts me in mind of a quote from some dictator or other, who was asked if he had any intentions to move his pet state toward democracy:

We believe we are already within a democratic system. Some factors are still missing, like the expression of the people’s will.

The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an artificial intelligence, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?

I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement or belief, as the equivalent of the “Applause” light that tells a studio audience when to clap.

This case is remarkable only in that I mistook the applause light for a policy suggestion, with subsequent embarrassment for all. Most applause lights are much more blatant, and can be detected by a simple reversal test. For example, suppose someone says:

We need to balance the risks and opportunities of AI.

If you reverse this statement, you get:

We shouldn’t balance the risks and opportunities of AI.

Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.

There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation. “We need to balance the risks and opportunities of AI” can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal. Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious. But if no specifics follow, the sentence is probably an applause light.

I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:

I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals. We should think through these issues before, not after, it is too late to do anything about them . . .


Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Ray · 2007-09-11T18:45:08.000Z · LW(p) · GW(p)

You have, I think, come upon the essence of modern political speeches.

Replies from: theguyfromoverthere, Kevin92
comment by theguyfromoverthere · 2014-06-23T17:16:37.922Z · LW(p) · GW(p)

I was going to say this as well. Your last paragraph here is like every presidential speech that I've ever watched.

comment by Kevin92 · 2017-10-18T20:35:07.101Z · LW(p) · GW(p)

I voted for Justin Trudeau but DAMN! Listen to his speeches! They're terrible!

comment by David_J._Balan · 2007-09-11T18:53:06.000Z · LW(p) · GW(p)

The democracy booster probably meant that people with little political power should not be ignored. And that's not an empty statement; people with little political power are ignored all the time.

Replies from: Gray
comment by Gray · 2011-03-15T19:54:44.625Z · LW(p) · GW(p)

Actually, that seems to be an extremely empty statement. "Having little political power" seems to imply, and is implied by, "being ignored". I wouldn't doubt that the two predicates are coextensive. Since people with little political power are, by definition, ignored, saying that people with little political power should not be ignored makes as much sense as saying that squares should be circular.

But maybe I'm not being very charitable here. You can make the shape that was once square more circular, only as long as you note that the shape isn't a square anymore. Similarly, people with little political power can, over time, gain more political power, which is a positive thing. But even if everyone has an equal amount of political power, the proposition that "people with little political power are ignored" would still be true, even if the predicates contain the null set.

Replies from: CuSithBell
comment by CuSithBell · 2011-03-15T20:16:13.160Z · LW(p) · GW(p)

I disagree.

Even if your interpretation of these terms were accurate, "the elements of this set should (in the future) not be elements of this set" isn't an empty statement.

Second, a benevolent dictator (or, say, an FAI) could certainly advance the interests of a group with absolutely no say in what said dictator does.

Replies from: Gray
comment by Gray · 2011-03-18T03:13:26.884Z · LW(p) · GW(p)

Eeek, I think the differences in interpretations are due to the de re / de dicto distinction.

Compare the following translations of the statement "people without political power should not be ignored."

De dicto: "It should not be the case that any person without political power is also a person who is ignored."

De re: "If there is a person without political power, then that person should not be ignored."

If the two predicates in the de re interpretation ("person without political power" and "person who is ignored") are coextensive, and thus equivalent, we should be able to substitute like terms and derive "If there is a person without political power, then that person should not be without political power." Given that I wanted to use the more charitable interpretation, this is the interpretation I should use, and so you're correct :)

But look what happens to the de dicto interpretation when you substitute like terms. It turns into "It should not be the case that a person without political power is a person without political power." This is the sort of thing I was objecting to, to begin with. But it was the wrong interpretation, and thus my error.
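The scope difference can be written out in deontic notation (a rough sketch of the above, with O read as "it ought to be that", P(x) as "x has political power", and I(x) as "x is ignored"):

```latex
% De dicto: the obligation governs the whole generalization
O\big(\forall x\,(\neg P(x) \rightarrow \neg I(x))\big)

% De re: each powerless person carries an obligation of their own
\forall x\,\big(\neg P(x) \rightarrow O(\neg I(x))\big)
```

Substituting the coextensive predicate \(\neg P(x)\) for \(I(x)\) turns the de re reading into \(\forall x\,(\neg P(x) \rightarrow O(P(x)))\), a demand to empower each powerless person, while the de dicto reading collapses into \(O(\forall x\, P(x))\), the near-vacuous form objected to above.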

(Yeah, I decided to go into an extensive analysis here mainly to refine my logic skills and in case anyone else is interested. Mathematicians, I suppose, would probably not have studied the de re / de dicto distinction, mainly because I don't see much relevance to mathematics.)

Replies from: CuSithBell
comment by CuSithBell · 2011-03-18T04:03:33.894Z · LW(p) · GW(p)

Huh! Thanks for the thorough analysis :) I'd say the most likely intent behind the statement is that people with direct political power should use it for the benefit of those without direct political power - i.e. elected officials and so forth should provide support for minority groups without much voting power. In which case your initial thought that they intended a "de dicto" reading could be right!

Did I tip my hand about being a mathematician by mentioning set theory? ;)

comment by Robin_Hanson2 · 2007-09-11T19:35:22.000Z · LW(p) · GW(p)

Alas, for most audiences I think you would find no one laughing even after an entire applause light speech.

Replies from: patrissimo, Matt_Simpson, Chrysophylax
comment by patrissimo · 2010-12-07T04:55:53.760Z · LW(p) · GW(p)

Yeah, but you'd get lots of applause!

comment by Matt_Simpson · 2013-01-28T21:51:17.217Z · LW(p) · GW(p)

Evidence: any graduation speech I've ever been subject to.

comment by Chrysophylax · 2013-01-31T17:30:14.663Z · LW(p) · GW(p)

I tried this for my valedictory speech and I gave up after about 15 seconds due to the laughter.

My preferred method is to use long sentences, to speak slowly and seriously, with great emphasis, and to wave my hands in small circles as I speak. If you don't speak to this audience regularly, it is also a good idea to emphasise how grateful you are to be asked to speak on such an important occasion (and it is a very important occasion...). You get bonus points for using the phrase "just so chuffed", especially if you use it repeatedly (a technique I learned from my old headmaster, who never expressed satisfaction in any other way while giving speeches).

I also recommend this technique, this way of speaking, to anyone who wishes to wind up, by which I mean annoy or irritate, a family member. It's quite effective when used consistently, even if you only do it for a minute or two. Don't you agree?

comment by Peter_de_Blanc · 2007-09-11T21:12:26.000Z · LW(p) · GW(p)

I remember at the AGIRI workshop in DC last year, Alexei Samsonovich talked about sorting a list of English words along two dimensions - "valence" and "arousal," indicating some component of the emotional response which words evoke.

Maybe audiences respond to speeches by summing the emotion vectors of each word in the speech, rather than parsing sentences.

Quick test: who here is excited by the prospects of anthropic quantum computing?

Replies from: Arthur1981, JohnWittle, shminux
comment by Arthur1981 · 2010-03-21T08:28:18.847Z · LW(p) · GW(p)

What I find interesting is that there are some obvious parallels between applause lights and Barnum statements - so named after P.T. Barnum.

Barnum statements are essentially statements which anyone can apply to themselves as true, which essentially say nothing, and which feel unique to each individual hearing themselves described that way.

Barnum statements are a stock-in-trade of cold readers such as mentalists and psychics. It seems to me that applause lights are nothing more than the abstract, impersonal version of the same phenomenon; or perhaps the same phenomenon used in a rhetorical and ideological application.

comment by JohnWittle · 2012-09-05T16:27:14.323Z · LW(p) · GW(p)

Anthropic quantum computing? If I were flipping through the channels and heard that phrase uttered by someone who looked like he was giving a speech, I would be immediately interested in learning more and would definitely stay on the channel. I have no idea what the phrase means, but my immediate guesses are indeed exciting.

comment by shminux · 2012-09-05T16:55:11.890Z · LW(p) · GW(p)

anthropic quantum computing

I'd think that it came out of a random abstract generator like snarxiv.

comment by Vladimir_Nesov2 · 2007-09-11T21:28:10.000Z · LW(p) · GW(p)

Such a speech could theoretically perform a "bringing to attention" function. Chunks of "bringing to attention" are equivalent to any kind of knowledge; they are just an inefficient form, and the abnormality of that speech lies in its utter inefficiency, not in a lack of content. People can bear such talk because similar inefficiency can be present in other talks in different forms. Inefficiency also makes it much simpler to obfuscate the evasion of certain topics.

comment by michael_vassar3 · 2007-09-11T22:20:43.000Z · LW(p) · GW(p)

I'm pretty sure that many people and organizations routinely DO argue that "we shouldn't balance the risks and opportunities of X". In ethics, deontological systems claim this. In policy, environmentalists are the first example that springs to mind, though they have been getting substantially better in the last few years. Radical pacifists like Gandhi have often been praised for asserting that people should not balance the risks and opportunities of war. More broadly, displaying this attitude seems to me to be necessary for anyone attempting to portray themselves as extraordinarily "virtuous," as virtue is normally understood, at least in our broadly Christian-derived civilization. I actually think it would be a good idea to try presenting a speech of nothing but applause lights, but I think it has been done: "The Gentle Art of Verbal Self Defense" claims in the appendix that such a speech has been written and presented, to applause, on a variety of topics. It seems to me, though, that the speech you were proposing above was actually an endorsement of a reasonable set of meta-policies which are in fact generally not engaged in, and was thus substantive, not empty, so I'm not sure it counts.

comment by Anders_Sandberg · 2007-09-11T22:44:50.000Z · LW(p) · GW(p)

David's comment that we shouldn't ignore people with little political power is a bit problematic. People who are not ignored in a political process have by definition some political power; whoever is ignored lacks power. So the meaning becomes "people who are ignored are ignored all the time". The only way to handle it is to never ignore anybody on anything. So please tell me your views on whether Solna municipality in Sweden should spend more money on the stairs above the station, or a traffic light - otherwise the decision will not be fully democratic.

I wonder if the sensitivity for applause lights is different in different cultures. When I lectured in Madrid I found that my speeches and those of several friends fell relatively flat, despite being our normally successful "standard speeches". But a few others got roaring responses at the applause lights - we were simply not turning them on brightly enough. The reward of a roaring applause is of course enough to bias a speaker to start pouring on more applause lights.

Hmm, was my use of "bias" above just an applause light for Overcoming Bias?

Replies from: bigjeff5
comment by bigjeff5 · 2011-01-31T16:20:35.655Z · LW(p) · GW(p)

The reward of a roaring applause is of course enough to bias a speaker to start pouring on more applause lights. Hmm, was my use of "bias" above just an applause light for Overcoming Bias?

Perhaps a better word would be "train".

comment by Luke_G. · 2007-09-11T23:01:51.000Z · LW(p) · GW(p)

Eliezer's nothing-but-applause-lights speech sounds strangely like every State of the Union address I've ever heard...

comment by Arnold_Kling · 2007-09-11T23:18:37.000Z · LW(p) · GW(p)

See also Trust Cues.

Replies from: NickRetallack
comment by NickRetallack · 2013-06-27T04:46:17.200Z · LW(p) · GW(p)

When I click that link, my browser downloads a file called redirect.php.

comment by Michael_Rooney · 2007-09-12T00:04:13.000Z · LW(p) · GW(p)

Rather than just "applause lights", sloganeering often is a cue to group-identification. Cf. postmodern text generators.

comment by J_Thomas · 2007-09-12T01:15:13.000Z · LW(p) · GW(p)

"The democracy booster probably meant that people with little political power should not be ignored. And that's not an empty statement; people with little political power are ignored all the time."

But isn't it precisely the people with little political power who can most safely be ignored?

Replies from: bigjeff5, omeganaut
comment by bigjeff5 · 2011-01-31T16:26:12.319Z · LW(p) · GW(p)

In standard democracy, yes, that is the case.

Perfect democracy is pure majority rule. Through history we have learned that this is probably the worst possible idea for a form of government. The mob has no concern for those who are not in the mob, and the apathy of the crowd can lead to some horrific consequences for those in the minority.

This is why most democracies are not really democracies, but have strong constraints that boost the power of the weakest members to prevent them from being overruled on every decision, while still giving the majority the larger share of the power.

For example, in the US the democratic process is split between two houses: the House of Representatives, which is population based and represents majority rule, and the Senate, for which each state gets only two representatives regardless of population. That balances the power while still giving the majority the larger share of it.

It's constraints similar to this (everyone does it differently, the point is that you always need to do it) that allow democratically based systems to work. In the US we also put in a president to make sure things get done, and then went as far outside the democratic system as the founders were comfortable with to install the third constraint on the system - the courts.

It could work just fine if there were plenty of well thought out constraints on it, but "democracy" by itself probably would not work at all; it rarely ever does. Therefore, saying "democracy" without any intention of discussing it is clearly just an applause word. Either that, or the man was totally ignorant. Leave it to someone like that to require the absolute destruction of a major effort like AGI just to learn the pitfalls of democracy that have been learned over and over and over again.

comment by omeganaut · 2011-05-11T20:08:18.400Z · LW(p) · GW(p)

But that in now way implies that they should be ignored.

Replies from: thomblake
comment by thomblake · 2011-05-11T22:05:32.631Z · LW(p) · GW(p)

But that in now way implies that they should be ignored.

It at least to some extent implies that they should be ignored. To illustrate:

Someone who is has great political power should not be ignored. This statement is not vacuous; it is instead making a worthwhile statement of fact. Given that, we know that people who do not have great political power should be ignored to a greater extent than people who do have great political power. Thus, that one does not have great political power (at least weakly) implies that one should be ignored (ceteris paribus). This contradicts the claim "That in no way implies that they should be ignored" (emphasis added).

As a side note, the comment you're responding to was left in 2007, and even on a different website. As a general rule, unless you're making a significant contribution, it's not worth responding to comments that were left before 2009.

If you do believe the parent comment is a worthwhile contribution, I'd suggest correcting "now" to "no" (assuming that's what you meant).

comment by James_Bach · 2007-09-12T03:50:24.000Z · LW(p) · GW(p)

Curiously Eliezer, I feel like applauding. Good post.

comment by Shakespeare's_Fool · 2007-09-12T04:12:13.000Z · LW(p) · GW(p)

Thank you for the quotation:

"We believe that we are already living in a democracy, although some factors are still missing, such as the expression of the people's will"

I hope someone can tell us who said it.

comment by Patri_Friedman · 2007-09-12T06:12:21.000Z · LW(p) · GW(p)

It might not convey information, but I bet you could get thunderous applause. Often, the latter outweighs the former when it comes to the goals of a speech.

comment by jeff_gray2 · 2007-09-12T07:29:11.000Z · LW(p) · GW(p)

Link to the 1981 Time magazine interview with the president of Argentina, the source of Eliezer's quote about democracy absent the people's will: ,9171,954853,00.html?promoid=googlep

comment by Leonard · 2007-09-12T19:14:42.000Z · LW(p) · GW(p)

The substance of a democracy is the specific mechanism that resolves policy conflicts. ... What does it mean to call for a "democratic" solution if you don't have a conflict-resolution mechanism in mind?

I think that for many people the "substance" of democracy is not the specific mechanism, but rather the general mechanism, and the nature of the output. The mechanism must include at least some formal representation of every member. The details of this don't matter so much: it might be direct voting (strictly equal power), or it might be a representative system (so long as the reps for each voter are more or less equal in power). And the general nature of the output is that it should be fair. Exactly what fair is, is a good question, and probably varies a lot. But at least this: conflicts should not always be resolved in favor of the same person or group or class.

This is not a particularly well-defined notion; clearly it does not resonate with you, who want a stricter definition. But it is hardly a meaningless notion, either. It is not an applause sign.

It is also, I think, a much more useful concept than you seem to have in mind. You are hung up on specifics: "the resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an AI, but it has to be something." Yes, in any actual project for developing AI, it would have to be something, and something specific. But specifically which of these methods (or an infinity of other specific implementations of "democracy") did not matter to the speaker you refer to.

Replies from: pnrjulius
comment by pnrjulius · 2012-05-19T04:32:50.275Z · LW(p) · GW(p)

But was it really that it didn't MATTER, or simply that he didn't KNOW?

I think it was the latter---what's more, it didn't even occur to him to ask the question. He seemed to think that saying "democratic" was enough.

comment by Miguel · 2007-09-13T00:47:21.000Z · LW(p) · GW(p)

I know where your quote came from: ,9171,954853,00.html?promoid

It's from "President Roberto Eduardo Viola, formerly Argentina's army commander in chief".

It's an answer to the first question in the interview:

"Q. How soon do you expect Argentina to be returned to democratic government?

A. We believe we are already within a democratic system. Some factors are still missing, like the expression of the people's will, but nevertheless we still think we are within a democracy. We say so because we believe these two fundamental values of democracy, freedom and justice, are in force in our country. There are, it is true, several conditioning aspects as regards political or union activity, but individual freedom is nowhere infringed in an outstanding manner."

BTW, I googled it. Apparently my Google-fu is better than yours ;) (But I do applaud your excellent memory, or else I wouldn't be able to find it.)

And keep up with the great posts. I'm a daily reader of this blog.

  • Miguel
comment by Alan_Crowe · 2007-09-15T19:35:47.000Z · LW(p) · GW(p)

[rhetorical pose] We shouldn't balance the risks and opportunities of AI. Enthusiasts for AI are biased. They underestimate the difficulties. They would not be so enthusiastic if they grasped how disappointing progress is likely to be. Detractors of AI are also biased. They underestimate the difficulties too. You will have a hard time convincing them of the difficulties, because you would be trying to persuade them that they had been frightened of shadows.

So there are few opportunities which are likely to be altogether lost if we hang back through unnecessary fear. [/rhetorical]

Well, I happen to believe the two paragraphs above, but distinct from the question of whether I am right or not is the question of whether the phrase "We need to balance the risks and opportunities of AI." means something or whether it is merely an applause light.

I think it is trivially true that we need to balance the actual risks and actual opportunities of AI. There is room for disagreement about whether we need to balance the perceived risks and perceived opportunities. If perceptions are accurate we should, but there is scope to say, for example, that the common perception is wrong and a rogue AI will in fact be quite stupid and easily unplugged. This opens the way to a decoding of language in which

o We need to balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities correctly and

o We shouldn't balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities incorrectly and should follow a different path from that indicated by our inaccurate assessments. Such a position needs fleshing out with a rival account of the risks and opportunities.

One question that I dwell on is "how do intelligent and well-intentioned persons fall to quarrelling?". The idea of an Applause Light is illuminating, but I think it is also quite tangled. There is the ambiguity between whether a phrase is an Applause Light or a Policy Proposal. I suspect that the core problem is that it is awfully tempting to exploit this ambiguity rhetorically, deliberately coding one's policy proposals in language that also functions as an Applause Light so that they come across as obviously correct.

The fun starts when one does this subconsciously and someone else thinks it is deliberate and takes offence. Once this happens there is little chance of discovering the actual disagreement (which might be about the accuracy of risk assessments), for the conversation will be derailed into meta-conversations about empty phrases and rhetoric.

Replies from: bigjeff5
comment by bigjeff5 · 2011-01-31T16:45:36.589Z · LW(p) · GW(p)

o We need to balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities correctly and

o We shouldn't balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities incorrectly and should follow a different path from that indicated by our inaccurate assessments. Such a position needs fleshing out with a rival account of the risks and opportunities.

I don't get that at all. If "We shouldn't balance the risks and opportunities of AI" means they are being assessed incorrectly, isn't that a part of balancing the risks and opportunities of AI? I don't see how you can get that out of the statement. If they are being done incorrectly, then in the discussion of the risks and opportunities you say "No, you're doing it wrong, you need to look at it like this blah blah blah".

When you say "We shouldn't balance the risks and opportunities of AI" it means to stop making an assessment altogether. It says nothing about continuing to go forward with the project or not. It doesn't say "Stop the project! This is all wrong!" That would fall under balancing the risks and opportunities - an assessment that came against AI.

That's foolishness, which is why no one would ever utter the phrase in the first place. That makes the prior phrase an applause phrase, because it is obvious to anyone involved that such an assessment is necessary. You're only saying it because you know people will nod their head in agreement and possibly clap.

Replies from: Polymeron
comment by Polymeron · 2011-02-23T15:38:48.157Z · LW(p) · GW(p)

It would make sense in the context of a strong bias toward a specific outcome, e.g. religious indignation toward an idea.

A person believing that thinking machines are an abomination would tell you to stop assessing and forget the whole idea. A person believing that AI is the only thing that could possibly rescue us from imminent catastrophe might well tell you to stop analyzing the risks and get on with building the AI before it's too late.

Either position would have a substantive position that you don't need to balance the risks and opportunities any further, without claiming that you have some error in your assessment.

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-23T18:06:35.440Z · LW(p) · GW(p)

Yet building an AI that eventually destroys all mankind, even after it averts this particular looming catastrophe, could easily be the worse choice. Does the catastrophe we need AI for outweigh the potential dangers of a poorly built AI?

It must still be considered. You may not have time to consider it thoroughly (as time is now a factor to consider), and that must be part of your assessment, but you still have to weigh the new risks against the potential reward.

Same with the abomination. Upon what basis is it an abomination? What are the consequences if we create the abomination? Do we spend a few extra years in purgatory, or do we burn in hell for all eternity?

It still must be considered. A few years in purgatory for a creation that saves mankind from the invading squid monsters may very much be worth doing.

Consider the atomic bomb before the first live tests. There were real concerns that splitting the atom could create an unstoppable chain of events which would set the very air on fire, destroying the whole world in that single moment. I can't really imagine a scenario that is more dire, and more strongly argues for the ceasing of all argument.

Yet they did the math anyway, considered the risks (tiny chance of blowing up the world) vs the reward (ending the war that is guaranteed to kill millions more people), and decided it was worth it to continue.

I still see no rational case for ever halting argument, except in the case of time for assessment simply running out (if you don't act before X, the world blows up - obviously you must finish your assessment before X or it was all pointless). You may weigh the risks vs the opportunities and decide the risks are too great, and decide not to continue. However, you can not rationally cease all argument without consideration because of a particularly strong or dire argument. To do so is irrational.

Replies from: Polymeron
comment by Polymeron · 2011-02-23T18:39:38.256Z · LW(p) · GW(p)

Of course you can cease argument without consideration - if you deem the risks of continuing consideration to outweigh the benefits of weighing them. For instance, if you have 1 minute to try something that would save your life, and you require at least 5 minutes to properly assess anything further, you generally can't afford to weigh whether the idea would result in a worse situation somehow - beyond whatever assessment you have already made. At that point, the time for assessment is over.

For the most part, however, I agree with your point. I did not argue that one can rationally disagree with the statement "We need to balance the risks and opportunities of AI"; just that they can sincerely say it, and even argue for it. This was a response to you saying that "no one would ever utter the phrase in the first place". This just strikes me as false.

Never underestimate the power of human stupidity ;)

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-23T22:25:25.231Z · LW(p) · GW(p)

You're right, in that regard I was certainly mistaken.

Replies from: viktor-riabtsev-1
comment by Viktor Riabtsev (viktor-riabtsev-1) · 2018-10-14T14:40:57.964Z · LW(p) · GW(p)

Upvoted for the "oops" moment.

comment by Venkat · 2007-09-24T23:59:40.000Z · LW(p) · GW(p)

That was kinda hilarious. I like your reversal test to detect content-free tautologies. Since I am working right now on a piece of AI-political-fiction (involving voting rights for artificial agents and questions that raises), I was thrown for a moment, but then tuned in to what YOU were talking about.

The 'Yes, Minister' and 'Yes, Prime Minister' series is full of extended pieces of such content-free dialog.

More seriously though, this is a bit of a strawman attack on the use of the word 'democracy' as decoration/group-dynamics cueing. You kinda blindsided this guy, and I suspect he'd have had a better answer if he'd had time to think. There is SOME content even to such a woolly-headed sentiment. Any large group (including large research teams) has conflict, and there is a spectrum of conflict resolution ranging from dictatorial imposition through democracy to consensus.

Whether or not the formal scaffolding is present, an activity as complex as research CANNOT work unless the conflict resolution mechanisms are closer to the democracy/consensus end of the spectrum. Dictators can whip people's muscles into obedience, maybe even their lower-end skills ("do this arithmetic or DIE!"), but when you want to engage the creativity of a gang of PhDs, it is not going to work until there is a mechanism for their dissent to be heard and addressed. This means that making the group itself representative (the 'multinational' part) automatically brings in the spirit, if not the form, of democratic discourse. So yes, if there are autocentric cultural biases today's AI researchers bring to the game, making the funding and execution multinational would help. Having worked on AI research as an intern in India 12 years ago, and working today in related fields here in the US, I can't say I see any such biases in this particular field, but in other fields, perhaps, multinational, internationally funded research teams would actually help.

On the flip side, you can have all the mechanisms and still allow dictatorial intent to prevail. My modest take on ruining democratic meetings run on Robert's Rules:

The 15 laws of Meeting Power

comment by Jamais_Cascio · 2007-10-10T16:35:51.000Z · LW(p) · GW(p)

C'mon, Eliezer, be fair: identify who the speaker was that you "probed" in this way, so that people can find the recordings of the talk and exchange to decide for themselves how it went.

As you have it above, aside from the paraphrasing, you omit a couple of important parts of my replies. With regards to the Reason/Hollywood comparison, I go on to say:

"That is, they're both caricatures, and neither one is terribly plausible or complete. There would be some critical benefits to the messy process of the first scenario, and some important drawbacks to the second."

With regards to the "I don't know," I then say:

"This is a point I've tried to make a couple of times here: this is not a solved problem, but it's an important problem, and we need to figure out how to address it."

I certainly did not talk about democracy with any intent of it serving as "applause lights" for my talk -- in fact, I expected a semi-hostile response from this audience, given my argument against the kind of "rebel nerd" heroism self-image a lot of the AGI community seems to have.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-10T17:15:16.000Z · LW(p) · GW(p)

BTW, if anyone wants to go and download the audio, you'll note that the actual event did not occur exactly the way I remembered it, which should surprise no one here who knows anything about human memory. In particular, Cascio spontaneously provided the Genome Project example, rather than needing to be asked for it.

Generally, the reason I avoid identifying the characters in my examples is that it feels to me like I'm dumping all the sins of humankind upon their undeserving heads - I'm presenting one error, out of context, as exemplar for all the errors of this kind that have ever been committed, and showing none of the good qualities of the speaker - it would be like caricaturing them, if I called them by name.

That said, the reason why I picked this example is that, in fact, I was thinking of Orwell's "Politics and the English Language" while writing this post. And as Orwell said:

In the case of a word like democracy, not only is there no agreed definition, but the attempt to make one is resisted from all sides. It is almost universally felt that when we call a country democratic we are praising it: consequently the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using that word if it were tied down to any one meaning.

If you simply issue a call for "democracy", why, no one can disagree with that - it would be like disagreeing with a call for apple pie. As soon as you propose a specific mechanism of democracy, whether it is Congress passing a law, or an AI polling people by phone, or government funding of a large research project whose final authority belongs to an appointed committee of eminent scientists, et cetera, people can disagree with that, because they can actually visualize the probable consequences.

So there is a tremendous motive to avoid criticism, to keep to the safely vague areas where people will applaud you, and not to make the concrete proposals where people might - gasp! - disagree.

Now I do not accuse you too much of this, because you did say "Genome Project" when challenged instead of squirting out an immense cloud of ink. But it is why I challenged you to define "democracy". I think that the real value in these discussions comes from people willing to make concrete proposals and expose themselves to criticism.

Replies from: capybaralet
comment by capybaralet · 2015-09-30T00:50:32.940Z · LW(p) · GW(p)

Really bad example...

My impression is that democracy is seeing a sharp uptick in attacks from elites and intellectuals. There are many who now believe, e.g., that the US should be more like China (see: the success of Trump).

As the speaker noted, he expected his speech to be controversial in that crowd, and in a way, it was, as evidenced by this blog post :)

comment by Fyrius · 2010-04-22T06:49:22.582Z · LW(p) · GW(p)


When I hear the sort of thing you would call "applause lights", I don't always think of that as an obvious fact that everyone in their right mind would agree on. Rather, I get the impression the speaker is implying that someone they strongly disagree with does believe this obvious fact is not true, or that this ridiculous notion is.

If for example I hear someone say "we shouldn't be hugging criminals, we should be locking them up", I interpret that as a very one-sided opposition to a grossly misrepresented opponent who goes a bit easier on convicts. Of course this person wouldn't literally believe the reverse that "we should be hugging criminals instead of locking them up", but she might believe something that a bigot could paraphrase as such with a straight face.

I think this is also the reason why the speaker's supporters applaud statements like that - they imply the issue is very simple and clear-cut, that only one side (ours) is remotely sensible, and that you'd have to be insane to disagree. One-sidedness feels good. Very blatant one-sidedness feels even better.

(Excuse me if this has been said already.)

Replies from: NancyLebovitz, pnrjulius
comment by NancyLebovitz · 2010-04-22T09:59:12.586Z · LW(p) · GW(p)

I haven't seen it laid out so clearly anywhere.

The only thing I'd add is that it's very easy to fall into that error reflexively. It isn't generally a matter of conscious strategy.

comment by pnrjulius · 2012-05-19T05:43:54.955Z · LW(p) · GW(p)

Hence, an applause light is a form of strawman argumentation?

That sounds about right actually.

Replies from: Fyrius
comment by Fyrius · 2012-05-31T12:45:56.519Z · LW(p) · GW(p)

It can be used for that, at least.

comment by DSimon · 2010-06-14T15:42:49.280Z · LW(p) · GW(p)

(Hi everyone; this is my first time posting here.)

If someone delivered that 100%-applause-light paragraph to me in a speech, my first impulse would be to interpret it as an honest attempt to remind the audience of obvious but not necessarily currently-in-context ideas. For example, this statement from the middle:

"To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm."

Taken literally, as a set of assertions, this really is quite empty of novel or unexpected content. However, directed at an audience of humans, aware of but still vulnerable to cognitive bias, the statement above implies another statement which is more useful: "We should be careful not to act like those who, despite intending not to, panicked rather than thinking productively. We should also be careful not to act like those whose enthusiasm overwhelmed their necessary sense of caution, even though they knew the value of that caution."

People who agree with the part of the 1st virtue that says "A burning itch to know is higher than a solemn vow to pursue truth" may still sometimes need to be reminded to check themselves and make sure they're doing the former rather than the latter.

comment by Document · 2010-12-07T07:28:33.286Z · LW(p) · GW(p)

This sounds similar to the idea of a "motherhood statement" as defined here.

Replies from: pnrjulius
comment by pnrjulius · 2012-05-19T05:45:32.393Z · LW(p) · GW(p)

That second definition applies to most depictions of transhumanism in fiction. It's the rare author who is bold enough to say, "The implants that we put in our brains? Yeah, they actually make us better."

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-19T15:00:51.006Z · LW(p) · GW(p)

Pretty much all the fiction I read in which brain implants are mentioned at all treat them as improvements.

Replies from: pnrjulius
comment by pnrjulius · 2012-05-23T03:57:22.522Z · LW(p) · GW(p)

Really? Got any examples?

I've read some in which the transhuman technologies were ambiguous (had upsides and downsides), but I can't think of any where it was just better, the way that actual technologies often are---would any of us willingly go back to the days before electricity and running water?

Replies from: Swimmer963, TheOtherDave, Nornagest, Hul-Gil
comment by Swimmer963 · 2012-05-23T04:05:02.030Z · LW(p) · GW(p)

I've read some in which the transhuman technologies were ambiguous (had upsides and downsides), but I can't think of any where it was just better, the way that actual technologies often are---would any of us willingly go back to the days before electricity and running water?

Having upsides and downsides isn't the same thing as being ambiguous. Running water and electricity do have downsides–namely, depletion of water tables due to overuse, and pollution, resource depletion, and possibly global warming due in part to the efforts required to make electricity...But I wouldn't say that either technology is ambiguous. The advantages pretty clearly outweigh the disadvantages, which are avoidable with some thought and creativity.

comment by TheOtherDave · 2012-05-23T04:32:57.207Z · LW(p) · GW(p)

Most of Peter Hamilton's stuff comes to mind, for example. Implants are just another technology, treated no differently than guns or cars. The Greg Mandel books have a few characters who do end up with implants that they would prefer not to have, but they're the exceptions.

comment by Nornagest · 2012-05-23T07:04:31.649Z · LW(p) · GW(p)

would any of us willingly go back to the days before electricity and running water?

Well, they're hardly common, but anarcho-primitivists do exist.

comment by Hul-Gil · 2012-05-23T07:07:32.757Z · LW(p) · GW(p)

but I can't think of any where it was just better, the way that actual technologies often are

I find that a little irritating - for people supposedly open to new ideas, science fiction authors sure seem fearful and/or disapproving of future technology.

Replies from: Nornagest
comment by Nornagest · 2012-05-23T07:23:17.646Z · LW(p) · GW(p)

Part of me thinks that that's encoded into the metaphorical DNA of the SF genre (or one branch of it) at a very basic level. It's been conventional for a while to think of SF as Enlightenment and the rest of spec-fic as Romantic, but the history of the genre's actually more complicated than that; Mary Shelley, for example, definitely fell on the Romantic side of the fence, and later writers haven't exactly been shy about following her lead. The treading-in-God's-domain motif is a powerful one, and it's the bedrock that an awful lot of SF is built on.

Replies from: taelor
comment by bigjeff5 · 2011-01-31T06:11:31.042Z · LW(p) · GW(p)

Oy, now that you've said it, I hear speeches like the one at the end all the time. Whole discussions between opposing sides, even. Perhaps that's why I haven't been able to stand cable news for a while now?

comment by Polymeron · 2011-02-23T15:30:24.178Z · LW(p) · GW(p)

When I first read this, I imagined a favorite politician (I won't mention who) giving this mock speech.

To my embarrassment, I found myself nodding in completely genuine enthusiasm. This guy clearly knows what he's talking about!

(This in turn made me consider just how much of this politician's speeches was similarly composed. I came to the conclusion that quite a significant amount of it was.)

...Nobody ever told me cognitive bias would be this annoying!

Replies from: TheOtherDave
comment by TheOtherDave · 2011-02-23T17:57:52.180Z · LW(p) · GW(p)

Upvoted because I endorse the willingness to notice one's own biases.

So, next question, if you're willing: what are three things you could do to reduce the degree to which this sort of empty rhetoric leads you to endorse the speaker?

Replies from: Polymeron
comment by Polymeron · 2011-02-23T19:02:48.178Z · LW(p) · GW(p)

TheOtherDave, that is a very constructive approach :)

I am already prone to requiring policy specifics from politicians and being dissatisfied with vague points. But one thing I (and many others) do have is a tendency, when hearing a few specifics in a sea of "general direction" applause cues, to note that my own preference for solutions is compatible with the speech; and from compatibility, I get hope that they would implement it - despite a lack of evidence that they're even aware of such a solution, much less want to implement it. So this is something to be cautious of and to note mid-speech.

I could go further and try to strike from the mental record anything that isn't specifics, making a point-by-point list of substantive statements. An easy way to do this is to ask "is anyone really considering doing otherwise? No? Then it doesn't count. Yes? Then why are they?" This method might not always be wise - motivations and beliefs are also important in trying to predict a politician's future choices on questions they have not yet addressed, and the speech can reveal those. However, it would be a good mental exercise when trying to evaluate positions on a specific policy question.

Lastly, try to separate emotional jargon from actual policy. If your politician says we "need to be prepared for the 21st century", recognize the fuzzy excitement that this statement gives you and squash it - it's caused by the phrase "21st century" being linked in your mind with progress and technology. Wait until that politician says they're going to specifically invest in technological literacy of 8th graders before you give it any significance, and treat it as suspect until then. (This is very similar to the first thing I suggested, except it focuses on recognizing an immediately triggered emotion in response to a phrase, rather than your own mind building scenarios which then in turn excite you).

I'll try to remember all that for the next speech I hear :P

Replies from: TheOtherDave
comment by TheOtherDave · 2011-02-23T20:12:33.241Z · LW(p) · GW(p)

I definitely endorse tracking specific proposals/substantive assertions, and explicitly labeling vague or empty assertions that nevertheless elicit positive feelings or invite you to project your own preferences onto the speaker.

I definitely endorse asking the "is anyone really considering doing otherwise, and why?" question.

Something I also find useful is explicitly labeling implied affiliations.

E.g., consider the difference between "we need to prepare our children with the tools they need to be leaders in the 21st century," versus "we need to instill our children with the values they need to make the right choices in the 21st century." They are both empty statements -- I mean, who would ever claim otherwise? -- but in the U.S. today the former signals affiliation with teachers and thereby implies support for public schools, education funding, etc., while the latter signals something I understand less clearly.

And those in turn signal alliances with major political parties, because it's understood by most U.S. voters that party A is more closely tied to education and party B to values.

In fact, even if the statement includes a specific proposal, it is often worth labeling the implied affiliation.

Replies from: pnrjulius
comment by pnrjulius · 2012-05-19T05:48:42.827Z · LW(p) · GW(p)

It's interesting; with the connotations and associations in our discourse, I can actually make some predictions about planned policies from those two supposedly "empty" statements.

The former is probably going to spend more money on math and science education.

The latter is probably going to fund "faith-based initiatives" or something similarly silly and religious (but I repeat myself), because "values" in American politics is almost always code for "conservative Evangelical Christianity".

So does this mean that they really aren't empty at all?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-19T14:47:11.683Z · LW(p) · GW(p)

Well, yes, I chose those statements precisely because of their connotative affiliations.

As for whether they're really empty... (shrug).

In ordinary conversation I would consider "I like likable things!" an empty statement, but of course it conveys an enormous amount of information: that I am capable of constructing a grammatical English sentence, for example, which the Vast majority of equivalent-mass aggregations of particles in the universe are not. I can use a different term to describe that category of statement if this one is too ambiguous.

comment by Ronny (potato) · 2011-08-02T20:36:55.279Z · LW(p) · GW(p)

"Applause Light" is a wonderful name for that tactic; it's funny, catchy, and makes the problem with that tactic intuitively obvious. That term should be further proliferated throughout the internet if it hasn't already been. Adding that meme to the average internet goer's repertoire could have wonderful side-effects on the support decisions of people in meat space everywhere.

comment by MarkusRamikin · 2011-08-11T09:50:01.208Z · LW(p) · GW(p)

That applause-light speech at the end just needs some variation, and I'm pretty sure it would fly. I'd replace about half of the "we should" with something else, like "it is important that we", and "it would be dangerous to neglect" and so on, because right now it's so repetitive that surely a lot of people would notice and realize what's being done.

Or maybe I'm yet again overestimating my fellow human beings, as past experience says I am prone to do...

comment by Jeremygbyrne · 2011-08-21T04:08:16.732Z · LW(p) · GW(p)

I am tempted to give a talk sometime that consists of nothing but applause lights (appropriately titled "Unthink").

comment by jeromeapura · 2011-08-25T11:37:59.229Z · LW(p) · GW(p)

You got him with a nice Socratic question. A good question seeks out good ideas and eliminates inane ones. Nice.

comment by Matvey_Ezhov · 2011-09-03T10:27:22.948Z · LW(p) · GW(p)

There might be another case: in casual conversation, something that looks like an applause light could just be the expression of a particular person's recent insights on the subject. Like, maybe he just yesterday deduced (from some fragments of rational texts on the web) that we should balance the risks and opportunities of this. Or maybe the audience's level on the subject is so low that even applause-like statements convey some information.

comment by buybuydandavis · 2011-09-22T11:31:24.725Z · LW(p) · GW(p)

"Let's do everything right."

Yep. Standard political speech.

comment by Maha · 2011-12-14T00:32:56.426Z · LW(p) · GW(p)

I don't think these statements are entirely vacuous. Even when their content is little more than a tautology, their actual meaning is something else entirely, at least in politics; they represent that the speaker is aware of the jargon, willing to use it, essentially moderate/"pragmatic" and prone to maintaining the status quo.

comment by macronencer · 2012-01-16T23:37:26.660Z · LW(p) · GW(p)

I couldn't resist adding another link as an example of a speech that seems to consist almost entirely of applause lights. This one is vintage Peter Sellers.

comment by taelor · 2012-02-24T23:03:51.796Z · LW(p) · GW(p)

Applause Lights also have a more sinister, dark-artsy application: they can be used to bait people into agreeing with seemingly trivial propositions, which nevertheless cause the target to modify their self-image, rendering them more likely to agree with less trivial propositions in the future. For example, Cialdini's Influence reports on a study that found that households that had been visited by a volunteer collecting signatures in favor of the vague statement "keep California beautiful" (without ever specifying how this was to be accomplished) were much more likely to agree to prominently display a large, ugly "Drive Carefully" sign in their yards than households that hadn't been so visited.

comment by olalonde · 2012-04-27T17:09:57.840Z · LW(p) · GW(p)

can convey new information to a bounded rationalist

Why limit it to bounded rationalists?

comment by PhilGoetz · 2014-06-29T16:54:03.492Z · LW(p) · GW(p)

How would different interest groups resolve their conflicts in a structure like the Human Genome Project?

Oh, my, the unintentional humor of that speaker's comment. There's an entire book written on how groups resolved their conflicts in the Human Genome Project, The Genome War: They didn't. The outcome was a horrific case study of how science really "works" today.

comment by poi99 · 2015-01-07T06:49:51.000Z · LW(p) · GW(p)

You'll often see feminists on the Internet pointing out how just about everyone seems to be in favor of "gender equality" and yet hardly anyone of either gender self-identifies as feminist anymore, even though gender equality is what feminism claims to be all about.

This article explains that disconnect. "Equality" is an applause light. It's something we can all agree is great in the abstract, but as soon as someone starts talking specifics, the applause thins and we all go back to being polarized again because everyone's idea of what equality, especially social and economic equality, actually entails is different.

comment by Nisan · 2015-04-14T05:28:22.065Z · LW(p) · GW(p)

"I am here to propose to you today that we need to balance the risks and opportunities of advanced Artificial Intelligence..."

Seven years later, this open letter was signed by leaders of the field. It's amusing how similar it is to the above speech, especially considering how it actually marked a major milestone in the advancement of the field of AI safety.

comment by Jiro · 2017-02-09T16:45:55.401Z · LW(p) · GW(p)

Then what kind of democratic process did you have in mind?

The speaker replied:

Something like the Human Genome Project—that was an internationally sponsored research project.

I asked:

How would different interest groups resolve their conflicts in a structure like the Human Genome Project?

And the speaker said:

I don't know.

In this old post, Eliezer is being insufficiently charitable and not steelmanning.

It is possible that I know that X can do A, but I don't know how X can do A. "Look at X and do A similarly to that" may be a reasonable response when I am asked how to do A (or when I am told that A is just impossible). It may fail in some cases (such as when I am incorrect in my assumption that X and the current situation are similar enough), but it isn't devoid of content to say that, and is more than an applause light.

comment by zby · 2018-09-28T20:53:38.159Z · LW(p) · GW(p)

Sounds like

comment by Robin Caesar (robin-caesar) · 2018-11-05T21:16:16.052Z · LW(p) · GW(p)

Hi, i made this german translation if anyone is interested:



Von Eliezer_Yudkowsky

11. September 2007

Auf dem Singularity Summit 2007 rief einer der Redner zu einer demokratischen, multinationalen Entwicklung der KI auf. Also trat ich ans Mikrofon und fragte:

Angenommen, eine Gruppe demokratischer Republiken bildet ein Konsortium zur Entwicklung der KI, und während des Prozesses gibt es viel politisches Taktieren - einige Interessengruppen haben einen ungewöhnlich großen Einfluss, andere werden über den Tisch gezogen - mit anderen Worten, das Ergebnis sieht aus wie die Produkte moderner Demokratien. Alternatives Szenario: Angenommen, eine Gruppe von Rebellen-Nerds entwickelt eine KI in ihrem Keller und weist die KI an, mit jedem auf der Welt eine Umfrage zu machen (dafür bekommt jeder, der noch keines hat, ein Mobiltelefon) - und dann das zu machen, was die Mehrheit wünscht. Welche von diesen Optionen ist Ihrer Meinung nach "demokratischer", und würden Sie sich bei beiden sicher fühlen?

Ich wollte herausfinden, ob er an die pragmatische Angemessenheit des demokratischen politischen Prozesses glaubte oder ob er an die moralische Rechtmäßigkeit der Stimmabgabe glaubte. Der Sprecher antwortete:

Das erste Szenario klingt wie ein Leitartikel im Reason-Magazin und das zweite klingt wie ein Hollywood-Plot.

Verwirrt fragte ich:

An was für einen demokratischen Prozess haben Sie gedacht?

Der Sprecher antwortete:

So etwas wie das Human Genome Project - das war ein international gesponsertes Forschungsprojekt.

Ich habe gefragt:

Wie würden verschiedene Interessengruppen ihre Konflikte in einer Struktur wie dem Human Genome Project lösen?

Und der Sprecher sagte:

Ich weiß es nicht.

Dieser Austausch erinnert mich an ein Zitat (das ich bei Google nicht gefunden habe, aber das von Jeff Gray und Miguel gefunden wurde) von irgendeinem Diktator, der gefragt wurde, ob er beabsichtige, seine Bananenrepublik in Richtung Demokratie zu bewegen:

Wir glauben, dass wir uns bereits in einem demokratischen System befinden. Einige Faktoren fehlen noch, wie der Ausdruck des Volkswillens.

Der Kern einer Demokratie ist der spezifische Mechanismus, der politische Konflikte löst. Wenn alle Gruppen die gleiche bevorzugte Politik hätten, wäre keine Demokratie erforderlich - wir würden automatisch zusammenarbeiten. Der Konfliktlösungsprozess kann eine direkte Stimmenmehrheit oder eine gewählte Legislative oder sogar ein wählersensitives Verhalten einer KI sein, aber es muss jedoch etwas sein. Was bedeutet es, eine "demokratische" Lösung zu fordern, wenn Sie keinen Konfliktlösungsmechanismus im Sinn haben?

Ich denke, es bedeutet, dass das Wort "Demokratie" gesagt wurde, und dass das Publikum Beifall klatschen soll. Es ist nicht so sehr eine aussagekräftige Aussage, als ein Äquivalent des "Applause" -Lichts, das einem Studiopublikum sagt, wann es klatschen soll.

Dieser Fall ist nur insofern bemerkenswert, als dass ich das Applaus-Licht mit einem politischen Vorschlag verwechselt habe, mit anschließender allgemeiner Peinlichkeit. Die meisten Applaus-Lichter sind offensichtlicher und können durch einen einfachen Umkehrtest erkannt werden. Angenommen, jemand sagt:

Wir müssen die Risiken und Chancen der KI ausbalancieren.

Wenn Sie diese Aussage umkehren, erhalten Sie:

Wir sollten die Risiken und Chancen der KI nicht ausbalancieren.

Da die Umkehrung abnormal klingt, ist die nicht umgekehrte Aussage wahrscheinlich normal, was bedeutet, dass keine neuen Informationen übermittelt werden. Es gibt viele legitime Gründe, um einen Satz auszusprechen, der isoliert nicht aussagekräftig wäre. "Wir müssen die Risiken und Chancen der KI ausbalancieren" kann ein Diskussionsthema einführen; es kann die Bedeutung eines spezifischen Vorschlags für die Abwägung betonen; es kann einen unausgewogenen Vorschlag kritisieren. Das Verknüpfen mit einer normalen Behauptung kann einem eingeschränkten Rationalisten neue Informationen vermitteln - die Verbindung selbst ist möglicherweise nicht offensichtlich. Wenn aber keine Besonderheiten folgen, ist der Satz wahrscheinlich ein Applaus-Licht.

Ich bin versucht, einmal einen Vortrag zu halten, der nur aus Applaus-Lichtern besteht, und zu sehen, wie lange es dauert, bis das Publikum anfängt zu lachen:

Ich möchte Ihnen heute vorschlagen, die Risiken und Chancen fortschrittlicher künstlicher Intelligenz in Einklang zu bringen. Wir sollten die Risiken vermeiden und, soweit möglich, die Chancen wahrnehmen. Wir sollten uns nicht unnötig mit unnötigen Gefahren auseinandersetzen. Um diese Ziele zu erreichen, müssen wir vernünftig und rational planen. Wir sollten nicht in Furcht und Panik handeln oder der Technophobie nachgeben; aber wir sollten auch nicht mit blinder Begeisterung handeln. Wir sollten die Interessen aller Parteien respektieren, die an der Singularität beteiligt sind. Wir müssen sicherstellen, dass die Vorteile fortschrittlicher Technologien so vielen Menschen wie möglich zur Verfügung stehen, anstatt auf einige beschränkt zu sein. Wir müssen versuchen, gewalttätige Konflikte mit diesen Technologien so weit wie möglich zu vermeiden; und wir müssen verhindern, dass massive zerstörerische Fähigkeiten in die Hände von Individuen fallen. Wir sollten diese Fragen vorher durchdenken, nicht danach, wenn es zu spät ist, um etwas dagegen zu unternehmen ...

comment by xSciFix · 2019-04-12T20:19:27.984Z · LW(p) · GW(p)
I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:

I'm reminded of your Tom Riddle a bit heh.

I think "a speech that consists of nothing but applause lights" pretty much applies to 99% of political discourse these days, and instead of being amused at how long it takes the audience to realize, you'd be embittered at how seriously everyone took the whole exercise. Maybe I have some bias to sort out, but I think the actual content of what is being said often matters very little to most people, as long as you hit the right buzzwords and look convincing/confident.

comment by Omeg · 2019-05-28T20:26:09.339Z · LW(p) · GW(p)
I am here to propose to you today that we should not balance the risks and opportunities of advanced artificial intelligence. We should welcome the risks and remain blind to opportunities. We should needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan stupidly and irrationally. We should act in fear and panic, and give in to technophobia; alternatively, we should act in blind enthusiasm. We should only respect the interests of some parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies remain restricted to a small number of people, rather than accrue to as many individuals as possible. We must encourage, even if it's impossible, violent conflicts using these technologies; and we must see that the massive destructive capability falls into the hands of individuals. We should think through these issues later, when it is too late to do anything about them . . .

I like those reversal tests. They are not only useful, but also quite hilarious.

comment by astralbrane · 2019-10-22T23:18:05.742Z · LW(p) · GW(p)

"do whatever the majority says" is not democracy.

True democracy finds the solution that maximizes the utility of all the voters, rather than maximizing the utility of half while completely ignoring the other half.

Replies from: jeronimo196
comment by jeronimo196 · 2020-02-09T13:07:23.596Z · LW(p) · GW(p)

As does true communism. Has there ever been such a democracy? How did it find out the utility of all its voters? This sounds to me like a "no true Scotsman" encompassing all known systems of government. I think you should modify your statement to something like "A democracy should try to protect the interests of its minorities, as well as those of the majority."

comment by Art Vandele · 2020-07-11T16:40:48.010Z · LW(p) · GW(p)

I think it's possible that there's another purpose to these kinds of statements. When someone says, "We must acknowledge the potential risks and benefits of AGI," they're signalling - at least in principle - that they're aware that there *are* both risks and benefits to AGI.

So in some cases its purpose is to signal to listeners that the speaker has avoided ideological possession, at least long enough to acknowledge the existence of factors on more than one side of an argument.

comment by toothpaste · 2021-01-30T03:11:25.490Z · LW(p) · GW(p)

It's hard to judge this particular case without context, but such sentences can be valid if they convey the general direction a person wants something to move in, in situations where they can't or shouldn't be overly specific: for example, if they don't know much about the specific subject, or if they want to stay on topic during a talk about a particular issue.

For example, I could say "it's time someone developed a machine that is able to fetch things around the house and bring them to us". It doesn't mean I know anything about engineering or about how this machine would operate, just that I think it would be a good thing.

In the same way, the speaker might just have wanted to say that they believe it would be good if AI development went in two directions: (1) multinational and (2) democratic. That did not involve them claiming to be an expert in developing democratic frameworks for international decision making. They were just expressing that development should move in the direction of those ideals, maybe because they liked the outcome of other projects that shared them, like the Human Genome Project.

comment by outside_path · 2021-02-23T19:01:04.514Z · LW(p) · GW(p)

Back in the day I would've agreed and thought that the last paragraph was indeed a prime example of political speech with nothing inside it. After recent years in politics, I wouldn't be surprised to see leaders in certain countries make a very different speech about AI. So perhaps it is indeed useful to have these kinds of speeches, just to signal that there is still reason in this world.

comment by Azgerod · 2021-04-09T19:04:41.841Z · LW(p) · GW(p)

When you get down to it, all politics is about conflict resolution. That's not particular to democracy.

Democracy can be viewed as a government in which policy decisions are intended to reflect the will of the people, as opposed to, for example, the will of the nobility, or a single ruler. When people say that a set of decisions should be made democratically, they mean that the conflict resolution mechanism should be such that the decisions made are reflective of the will of the people.

I think the speaker advocating for a democratic, multinational push for AGI was saying A. that we need to push for AGI and B. that if we do so, our decisions about it should reflect the will of the people. This leaves the particular conflict resolution mechanism an open question but constrains it to the set of mechanisms that are intended to reflect the will of the people.

It's not particularly surprising that this speaker wouldn't have an opinion on which democratic conflict resolution mechanism to use. I imagine that sort of thing is usually left to political scientists.

The speaker's statement is also non-obvious. It may be clear that such a push should be democratic, but it's not at all clear that it would be. The speaker is advocating for making sure that the push happens, and that it's democratic, as opposed to pushing for the AGI movement whilst leaving the conflict resolution entirely up to others who may not have the interests of the people at heart.

There are many different implementations of democracy, but they are all very different from oligarchies. It is meaningful to say that a set of decisions should be made democratically. Your criticism of this speaker is analogous to someone saying, "We should believe true things, not things that make us feel good," and you responding, "You'd better know how to find true beliefs or you're just virtue signaling." It is both poor manners and invalid criticism.

That is not to say that I disagree with your overall point; I just don't think this person's statement was an example of it. Perhaps, if I'd been there, there would have been some noncommunicable social cues that would lead me to your conclusion, but the text alone does not.