[SEQ RERUN] Blue or Green on Regulation?

post by Tyrrell_McAllister · 2011-05-10T18:31:46.512Z · LW · GW · Legacy · 6 comments

Today's post, Blue or Green on Regulation?, was originally published on 15 March 2007. A summary (taken from the LW wiki):

Both sides are often right in describing the terrible things that will happen if we take the other side's advice; the universe is "unfair", terrible things are going to happen regardless of what we do, and it's our job to trade off for the least bad outcome.

(alternate summary:)

In a rationalist community, it should not be necessary to talk in the usual circumlocutions when talking about empirical predictions. We should know that people think of arguments as soldiers and recognize the behavior in ourselves. When you think about all the truth values involved, you come to see that much of what the Greens said about the downside of the Blue policy was true: that, left to the mercy of the free market, many people would be crushed by powers far beyond their understanding, nor would they deserve it. And imagine that most of what the Blues said about the downside of the Green policy was also true: that regulators were fallible humans with poor incentives, whacking on delicately balanced forces with a sledgehammer.

(alternate summary:)

Burch's law isn't a soldier-argument for regulation; estimating the appropriate level of regulation in each particular case is a superior third option.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Scales of Justice, the Notebook of Rationality, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

6 comments

Comments sorted by top scores.

comment by beriukay · 2011-05-11T09:40:54.511Z · LW(p) · GW(p)

Close your eyes and imagine it. Extrapolate the result. If that were true, then... then you'd have a big problem and no easy way to fix it, that's what you'd have. Does this universe look familiar?

I can't stress this bit enough. If we could truly get over this Blue/Green dance, maybe we could get to finding out what works. Instead, it seems we only keep finding out which rhetorical techniques work (Spoiler: they are the same ones that worked 2500 years ago).

Replies from: nazgulnarsil
comment by nazgulnarsil · 2011-05-11T22:23:26.998Z · LW(p) · GW(p)

A consequentialist high IQ community is likely to converge on a solution that maximizes living standards rather than ability to smash others and take their stuff (and rationalize it). Guess which one has won historically?

Every successful social organization meme is about taking something away from others, even if it's just a moral high ground.

That's why I always advocate rationalists shutting up and getting rich (unless you're directly working on existential threats). Just a few of us getting rich will have a larger impact than all the proselytizing in the world.

Replies from: David_Gerard
comment by David_Gerard · 2011-05-12T09:59:27.683Z · LW(p) · GW(p)

That's why I always advocate rationalists shutting up and getting rich (unless you're directly working on existential threats). Just a few of us getting rich will have a larger impact than all the proselytizing in the world.

Making people less stupid greatly reduces my personal irritation at stupidity, and I find that a really strong motivation to continue. YMMV, of course.

comment by Alex_Altair · 2011-05-13T22:01:56.716Z · LW(p) · GW(p)

LW has definitely made me more rational. My first thought while reading this article was: "Ah, but any attempt to regulate necessarily makes things worse!" But then I caught myself: "Wait, that's not why you're against it. You're against regulation because you believe the initiation of the use of force is immoral." Then I evaluated it from that standpoint.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2011-06-28T11:06:33.748Z · LW(p) · GW(p)

Being rational does not mean that you "improve" your arguments but never change the bottom line.

(Just saying, I'm not sure if you meant it that way.)

Replies from: Alex_Altair
comment by Alex_Altair · 2011-06-28T22:50:59.906Z · LW(p) · GW(p)

Completely understood. This was about internal honesty.