Overcoming the mind-killer

post by woozle · 2010-03-17T00:56:01.710Z · LW · GW · Legacy · 128 comments

Contents

  Rationality and Politics
  Methodology for Determination of Fact
    flaws in the existing methodologies
    Issuepedia's methodology
      criticism of this methodology
  

Notes

I've been asked to start a thread in order to continue a debate I started in the comments of an otherwise-unrelated post. I started to write a post on that topic, found myself introducing my work by way of explanation, and then realized that this was a sub-topic all its own which is of substantial relevance to at least one of the replies to my comments in that post -- and a much better topic for a first-ever post/thread.

So I'm going to write that introductory post first, and then start another thread specifically on the topic under debate.

I run issuepedia.org, a wiki site largely dedicated to the rational analysis of politics.

As part of that analysis, it covers areas such as logical fallacies (and the larger domain of what I call "rhetorical deceptions" and which LessWrong calls "the dark arts"), history, people, organizations, and any other topics necessary to understand an issue. Coverage of each area generally includes a collection of sources (such as news articles, editorials, and blog posts), essential details providing a quick overview, and usually an attempt to draw some kind of conclusion1 about the topic's ethical significance based, as much as possible, on the sources collected. (Readers are, of course, free to use the wiki format to offer counterarguments and additional facts/sources.)

I started Issuepedia in 2005, largely in an attempt to understand how Bush could possibly have been re-elected (am I deluded, or is half the country deluded? if the latter, how did this happen?). Most of the content is my writing, as I am better at writing than at community-building, but it is all freely copyable under a CC license. I do not receive any money for my work on the site; it does accept donations, but this fact is not heavily advertised and so far there have been no donors. It does not display advertisements, nor have I advertised it (other than linking to it in contexts where its content seems relevant, such as comments on blog entries). I am considering advertising it at some point when I have sufficient time to give the project the focus it will need in order to grow successfully.

Rationality and Politics

My main hypothesis2 in starting Issuepedia is that it is, in fact, possible to be rational about politics, to overcome its "mind-killing" qualities -- if given sufficient "thinking room" in which to record and work through all the relevant (and often mind-numbing) details involved in most political issues in a public venue where you can "show your work" and others may point out any errors and omissions. I'm trying to use wiki technology as an intelligence-enhancing, bias-overcoming device.

Politics contains issues within issues within issues. Arriving at a rational conclusion about any given issue will often depend on being able to draw reasonable conclusions about a handful of other issues, each of which may have other sub-issues affecting it, and so on.

Keeping track of all of these dependencies, however, is somewhat beyond most people's native emotional intuition and working memory capacity (including mine). Even when we consciously try to overcome built-in biases (such as allegiance to our perceived "tribes", unexamined beliefs acquired in childhood, etc.), our hind-brains want to take the fine, complex grain of reality and turn it into a simple good-vs.-bad or us-vs.-them moral map drawn with a blunt magic marker -- something we can easily remember and apply.

On the other hand, many issues really do seem to boil down to such a simple narrative, something best stated in quite stark terms. Individuals who are making an effort to be measured and rational often seem to reject out of hand the possibility that such simple, clearcut conclusions could possibly be valid, leading to the opposite bias -- a sort of systemic "fallacy of moderation". This can cause popular acquiescence to beliefs that are essentially wrong, such as the claim that "the Democrats do it too", offered in response to evils committed by the latest generation of Republicans. (Yes, they do it too -- but much less often, and much less egregiously overall.)

I propose that there must exist some set of factual information upon which each question ultimately rests, if followed far enough "down". In other words, if you exhaustively and recursively map out the sub-issues for each issue, you must eventually arrive at an issue which can be resolved by reference to facts known or knowable. If no such point can be reached, then the issue cannot possibly have any real-world significance -- because if anyone is in any way affected by the issue, then there is the fact of that dependency which must somehow tie in; the trick is figuring out the actual nature of that dependency.

My approach in Issuepedia is to break each major issue down into sub-issues, each of which has its own page for collecting information and analysis on that particular issue, then do the same to each of those issues until each sub-branch (or "rootlet", if you prefer to stay in-metaphor) has encountered the "bedrock" of questions which can be determined factually. Once the "bedrock" questions have been answered, the issues which rest upon those questions can be resolved, and so on.

Documenting these connections, and the facts upon which they ultimately rest, ideally allows each reader to reconstruct the line of reasoning behind a given conclusion. If they disagree with that conclusion, then the facts and reasoning are available for them to figure out where the error lies -- and the wiki format makes it easy for them to post corrections; eventually, all rational parties should be able to reach agreement.
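
As a very rough illustration of this recursive structure -- a sketch only, not Issuepedia's actual data model, and with a placeholder rule for how sub-issue conclusions combine -- an issue tree might look something like this:

```python
# Hypothetical sketch of the "issues resting on sub-issues" idea above.
# An issue either rests directly on a "bedrock" factual question or depends
# on sub-issues; conclusions propagate upward once the bedrock is answered.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Issue:
    title: str
    bedrock_answer: Optional[bool] = None   # set once a factual question is settled
    sub_issues: List["Issue"] = field(default_factory=list)

    def resolve(self) -> Optional[bool]:
        # Bedrock issues are resolved directly by their factual answer.
        if self.bedrock_answer is not None:
            return self.bedrock_answer
        # A composite issue waits until every sub-issue is resolved.
        results = [sub.resolve() for sub in self.sub_issues]
        if any(r is None for r in results):
            return None
        # Placeholder combining rule: the issue holds only if all sub-issues hold.
        return all(results)
```

The interesting work, of course, is in deciding how sub-issue conclusions actually combine; "all of them must hold" is only the simplest stand-in.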

I won't go so far as to claim that Issuepedia carries out this methodology with any degree of rigor, but it's what I'm working towards.

I'm also aware that recent studies have shown that many people aren't influenced by facts once they've made up their minds (e.g. here). Since I have many times observed myself change my own opinion3 in response to facts, I am working with the hypothesis that this ability may be a cognitive attribute that some people have and others lack -- in much the same way that (apparently) only 32% of the adult population can reason abstractly. If it turns out that I do not, in fact, possess this ability to a satisfactory degree, then finding some way to improve it will become a priority.

Methodology for Determination of Fact

The question of how to methodically go about determining fact -- i.e. which assertions may be provisionally treated as true and which should be subjected to further inquiry -- came up in the earlier discussion, and is something which I think is ripe for formalization.

flaws in the existing methodologies

Up until now, society has depended upon a sort of organic, slow, and inefficient but reasonably thorough vetting of new ideas by a number of processes. Those who are more familiar with this area of study should feel free to note any errors or omissions in my understanding, but here is my list of processes (which I'll call "epistemic arenas"4) by which we have traditionally arrived at societal truths:

  • the scientific process
  • government (legislation and court decisions)
  • social processing (word of mouth within communities)
  • the mainstream media

The flaws in each of these methodologies have become much clearer due to the ease and speed with which they may now be researched because of the Internet. A brief summary:

The scientific process is clearly the best of the lot, but it can be gamed and exploited: fake papers with sciencey-looking graphs and formulas (e.g. this) -- sometimes published in fake journals with sciencey-sounding names (e.g. JP&S) or backed by sciencey-sounding institutions (SEPP, SSRC, SPPI, OISM) -- are circulated in order to promote ideas which have been soundly defeated by the real scientific process. Lists of hundreds of scientists dissenting from the prevailing view may not, in fact, contain any scientists actually qualified to make an authoritative statement (i.e. one deserving credence without having to hear the actual argument) on the subject, and only gain popular credibility because of the use of the word "scientist".

On the other hand, legitimate ideas which for some reason are considered taboo sometimes cannot gain entry to this process, and their proponents must publish their findings by other means -- means which can look very similar to those used to promote illegitimate ideas. How can we tell the difference? We can, but it takes time -- thus "a lie can travel around the world while the truth is still putting on its boots" by exploiting these weaknesses.

Bringing the machinery of the scientific process to bear on any given issue is also quite expensive and time-consuming; it can be many years (or decades, in the case of significant new ideas) before enough evidence is gathered to overturn prior assumptions. This fact can be exploited in both directions: important but "inconvenient" new facts can be drowned in a sea of publicity arguing against them, and well-established facts can be taken out politically by denialist "sniping" (repeating well-refuted claims over and over again until more people are familiar with those claims than with the refutations thereof, leading to popular belief that the claims must be true).

Also, because the public is generally unaware of how the scientific process functions, they are likely to give it far less weight than it deserves (when they correctly identify that a given conclusion truly is scientifically supported, anyway). For example, an attack commonly used by creationists against the theory of evolution by natural selection is that it is "only a theory". Such an argument is only convincing to someone lacking an understanding of the degree to which a hypothesis must withstand interrogation before it starts to be cited as a "theory" in scientific circles.

It should be pretty obvious that government's epistemic process is flawed; nonetheless, many bad or outright false ideas come to be treated as "facts" once they are enshrined in law or upheld by court decisions. (I could discuss this at length if needed.)

Social processing seems to do much better at spotting ethical transgressions (harm and fairness violations), but isolated social groups and communities are vulnerable to memetic infection by ideas which become self-reinforcing in the absence of good communication outside the group. Such ideas tend to survive by discouraging such communication and encouraging distrust of outside ideas (e.g. by labeling those outside the community as untrustworthy or tainted in some way), perpetuating the cycle.

The mainstream media was, for many decades, the antidote to the problems in the other arenas. Independent newspapers would risk establishment disfavor in exchange for increased circulation -- and although publishing politically inconvenient truths was not the only way to do that, it was certainly one of them.

Whether deliberately and conspiratorially, or simply through many different interests arriving at the same solutions to their problems (while the people with the power to stop it looked the other way as the industry's lobbyists rewrote the laws to encourage and promote those common solutions), media consolidation has effectively taken the mainstream media out of the game as a voice of dissent.

Issuepedia's methodology

The basic idea behind Issuepedia's informal epistemic methodology is that truth -- at least on issues where there is no clear experiment which can be performed to resolve the matter -- is best determined by successive approximation from an initial guess, combined with encouragement of dissenting arguments.

Someone makes a statement -- a "claim". Others respond, giving evidence and reasoning either supporting or contradicting the claim (counterpoint). Those who still agree with the initial statement then defend it from the counterpoints with further evidence and/or reasoning. If there are any counterpoints nobody has been able to reasonably contradict, then the claim fails; otherwise it stands.

By keeping a record of the objections offered -- in a publicly-editable space, via previously unavailable technology (the internet) -- it becomes unnecessary to rehash the ensuing debate-branch if someone raises the same objection again. They may add new twigs, but once an argument has been answered, the answers will be there every time the same argument is raised. This is an essential tool for defeating denialism, which I define as the repeated re-use of already-defeated but otherwise persuasive arguments; to argue with a denialist, one need simply refer to the catalogue of arguments against their position, and reuse those arguments until the denialist comes up with a new one. This puts the burden on the denialists (finally!) and takes it off those who are sincerely trying to determine the nature of reality.

This also makes it possible for large decisions involving many complex factors to be more accurately updated if knowledge of those factors changes significantly. One would never end up in a situation where one is asking "why do we do things this way?", much less "why do we still do things this way, even though it hasn't made sense since X happened 20 years ago?" because the chain of reasoning would be thoroughly documented.

At present, the methodology has to be implemented by hand; I am working on software to automate the process.
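
For concreteness, here is a minimal sketch of the evaluation rule described above (hypothetical code, not the software I'm actually working on): a claim stands only if every counterpoint raised against it has itself been answered by the same rule.

```python
# Hypothetical sketch of the claim/counterpoint rule described above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Point:
    text: str
    counterpoints: List["Point"] = field(default_factory=list)

    def stands(self) -> bool:
        # A point stands if no counterpoint against it currently stands.
        return not any(cp.stands() for cp in self.counterpoints)

# Usage: a claim, one objection, and a rebuttal that answers the objection.
claim = Point("Claim X is true.")
objection = Point("Source A contradicts X.")
rebuttal = Point("Source A was later corrected.")
objection.counterpoints.append(rebuttal)
claim.counterpoints.append(objection)

print(claim.stands())  # True -- the objection is answered by a standing rebuttal
```

Because answered objections stay on record, re-raising one changes nothing: the stored rebuttal simply applies again.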

criticism of this methodology

[dripgrind] Your standard of verification seems to be the Wikipedia standard - if you can find a "mainstream" source saying something, then you are happy to take it as fact (provided it fits your case).

[woozle] I am "happy to take it as fact" until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions. .. The "Wikipedia standard" seems to work pretty well, though -- didn't someone do a study comparing Wikipedia's accuracy with Encyclopedia Britannica's, and they came out about even?

[dripgrind] So your standard of accepting something as evidence is "a 'mainstream source' asserted it and I haven't seen someone contradict it". That seems like you are setting the bar quite low. Especially because we have seen that [a specific claim woozle made] was quickly debunked (or at least, contradicted, which is what prompts you to abandon your belief and look for more authoritative information) by simple googling. Maybe you should, at minimum, try googling all your beliefs and seeing if there is some contradictory information out there.

"Setting the bar quite low": yes, the initial bar for accepting a statement is low. This is by design, based on the idea of successive approximation of truth (as outlined above) and my secondary hypothesis "that it is important for people to share their opinions on things, regardless of how much thought has been put into those opinions." (See note 2 below.)

Certainly this methodology can lead to error if the observing group is insufficiently large and active -- but it only takes one person saying "wait, that's nonsense!" to start the corrective process. I don't see that degree of responsiveness in any of the other epistemic arenas, and I don't believe it adds any new weaknesses -- except that there is no easy/quick way to gauge the reliability of a given assertion. That is a weakness which I plan to address via the structured debate tool (although I had not until now consciously realized that it was needed).

If this explanation of the process still seems problematic, I'm quite happy to discuss it further; getting the process right is obviously critical.

 


I will be posting next on the specific claims we were discussing, i.e. 9/11 "conspiracy theories". It will probably take several more days at least. Will update this post with a link when the second one is ready.

 


Notes

1. For example, the article about Intelligent Design concludes that "As with creationism in its other forms, ID's main purpose was (and remains) to insinuate religion into public school education in the United States. It has no real arguments to offer, its support derives exclusively from Christian ideological protectionism and evangelism, and its proponents have no interest in revising their own beliefs in the light of evidence new to them. It is a form of denialism."

The idea here is to "call a spade a spade": if something is morally wrong (or right), say so -- rather than giving the appearance of impartiality priority over reaching sound conclusions (e.g. "he-said/she-said journalism" in the media, or the "NPOV" requirement on Wikipedia). You may start out with a lot of wrong statements, but they will be statements which someone believed firmly enough to write -- and when they are refuted, everyone who believed them will have access to the refutation, doing much more towards reducing overall error than if you only recorded known truths.

2. A secondary hypothesis is that it is important for people to share their opinions on things, regardless of how much thought has been put into those opinions. I have two reasons for this: (1) it helps overcome individual bias by pooling the opinions of many (in an arena where hopefully all priors and reasoning may eventually be discussed and resolved), and (2) there are many terrible things that happen but which we lack the immediate power to change; if we neither do nor say anything about them, others may reasonably assume that we consent to these things. Saying something at least helps prevent the belief that there is no dissent, which otherwise might be used to justify the status quo.

3. I am hoping that this observation is not itself a self-delusion or some form of akrasia. Toward the end of confirming or ruling out akrasia, I have made a point of posting my positions on many topics, with an open offer to defend any of those positions against rational criticism. If anyone believes, after observing any of the debates I have been involved in, that I am refusing to change my position in response to facts which clearly indicate such a change should take place, then I will add a note to that effect under the position in question.

4. These have a lot in common with what David Brin calls "disputation arenas", but they don't seem to be exactly the same thing.

128 comments

Comments sorted by top scores.

comment by Jack · 2010-03-17T14:29:59.210Z · LW(p) · GW(p)

In other words, if you exhaustively and recursively map out the sub-issues for each issue, you must eventually arrive at an issue which can be resolved by reference to facts known or knowable. If no such point can be reached, then the issue cannot possibly have any real-world significance

I think someone could have the same factual beliefs I have but disagree with me on a myriad of policy issues because she has different terminal values.

Replies from: woozle
comment by woozle · 2010-03-17T22:18:38.741Z · LW(p) · GW(p)

Can you come up with an example?

Wouldn't that violate Aumann? -- and if so, are you saying you disagree with that?

Replies from: mattnewport, Jack
comment by mattnewport · 2010-03-17T22:34:45.144Z · LW(p) · GW(p)

No, it doesn't violate Aumann. Two perfectly rational agents should agree on their probability estimates for the outcomes of certain policies but if they disagree on terminal values, for example whether they view equality as about egalitarianism or equity (one of the distinctions Jonathan Haidt makes) then they could prefer different policies to be implemented.

Replies from: woozle
comment by woozle · 2010-03-17T23:41:05.212Z · LW(p) · GW(p)

I think we're still talking in vagaries, here. "Egalitarianism" and "equity" are two ideals which are themselves simplified maps (kind of like "morals") of two possible ways of achieving optimal outcomes -- where by "optimal" I mean "harm-minimizing".

The only thing we should be arguing about is the exact algorithm by which individual levels of harm are summed to produce a single number -- e.g. which is worse: taking $10 from someone who earns $10k/year or $1 each from 1000 millionaires? If society can easily afford to properly care for every single person while still allowing a great deal of economic diversity above the poverty line (i.e. we can still have people who are ridiculously rich), what possible justification is there for not doing so?

Replies from: mattnewport
comment by mattnewport · 2010-03-17T23:47:09.673Z · LW(p) · GW(p)

You appear to be taking a utilitarian view of ethics for granted. I'm not a utilitarian so it makes more sense to me to reverse your question and ask what possible justification is there for taking $1 or $10 from anyone without their consent. Terminal values and ethical foundations do not appear to be universal even among humans.

Replies from: torekp, woozle
comment by torekp · 2010-03-18T01:15:35.915Z · LW(p) · GW(p)

Bypassing the question of terminal values, it would still be very useful to have a good argument map of factual issues which are hotly disputed.

comment by woozle · 2010-03-18T00:52:43.011Z · LW(p) · GW(p)

Sure, that's a good question to be asking.

First: would you agree that the benefits we receive from society obligate us to return something to society or else to refrain from receiving those benefits? Or not?

Second, I wasn't arguing that either action would be right, but asking which would be worse, if you had to choose between actions which would result in one of those two outcomes?

Replies from: mattnewport
comment by mattnewport · 2010-03-18T01:11:07.328Z · LW(p) · GW(p)

First: would you agree that the benefits we receive from society obligate us to return something to society or else to refrain from receiving those benefits? Or not?

I get the impression that our conceptions of ethics and morality differ significantly enough that we probably wouldn't even agree on a valid interpretation of this question. It doesn't really make sense to me to talk of obligations to society. I may have obligations to individuals (or particular groups of individuals) as a result of explicit or implicit contracts I've entered into with them but I don't believe I am morally obligated to reciprocate 'benefits' conferred on me through an arrangement that I have not consented to.

Second, I wasn't arguing that either action would be right, but asking which would be worse, if you had to choose between actions which would result in one of those two outcomes?

Again, I think the gulf between our ethical foundations prevents a simple answer to this question that would be informative. I'm basically a classical liberal / libertarian. If you understand the ethical framework that implies then you should be able to see why this question doesn't make much sense to me.

Replies from: woozle
comment by woozle · 2010-03-18T01:13:06.142Z · LW(p) · GW(p)

Are you saying we should stop trying to bridge that gulf, or should I try to explain myself a different way?

Replies from: mattnewport
comment by mattnewport · 2010-03-18T01:16:16.721Z · LW(p) · GW(p)

No, I'm in favour of attempts to bridge the gulf, and the fact that you are posting here is a promising sign that it might be possible. I'm reluctant to engage further based on what I've seen of your writing on your site so far, however -- time is a limited resource and I fear that the value I would gain from engaging with you is not worth the time investment. Your comments in this thread have not exhibited the level of partisan blindness that has worried me on your site, however, so there may be hope.

Replies from: woozle, woozle
comment by woozle · 2010-03-18T01:34:28.491Z · LW(p) · GW(p)

Also: I think what you're misunderstanding about the POV on the site is that I am prepared to rationally defend everything I have said there, and I am prepared to retract or alter it if I cannot do so. (Note that there are a few articles posted by others, and I don't necessarily agree with what they have said -- but if I have not responded, it means I also don't disagree strongly enough to bother. Maybe you do, and maybe I will too once the flaws are pointed out.)

comment by woozle · 2010-03-18T01:31:16.212Z · LW(p) · GW(p)

Okay, then, let me try again.

If someone loans you a car, and it runs out of gas, do you (a) only put in enough gas to get you where you need to go (including returning it to the owner), or do you (b) fill the tank up?

I would argue that it is foolish to do (a), because if you become known as someone who pulls crap like that, people aren't likely to loan you their cars in the future.

Libertarianism seems to be arguing, however, that (a) is the correct and proper action.

Next question:

Let's say there's some kind of widespread natural disaster where you live. Maybe outside help is coming but it will be a couple of weeks before it arrives in force. A group forms to work out what resources are available and who needs them. Let's say you know all the people in that group, and have no reason to be suspicious of their motives. They decide that you have some supplies that are more urgently needed by others -- people you don't specifically know -- and ask you to donate those supplies, even knowing that you may or may not ever be compensated for them given the extent of the disaster.

Would you say you have any... [let's not say "obligation" or "compulsion"...] ...rephrase: Would you feel like a jerk if you didn't comply, or do you think it would be perfectly ok?

Replies from: thomblake, wedrifid, mattnewport
comment by thomblake · 2010-03-18T01:37:22.393Z · LW(p) · GW(p)

Libertarianism seems to be arguing, however, that (a) is the correct and proper action.

You don't seem like someone well-acquainted with the relevant literature. If a policy seems obviously correct, and doesn't involve coercing someone else into doing things against their will, then Libertarianism (at least, read as roughly equivalent to Lockean classical liberalism) won't tell you not to do it.

A lot of libertarians are very enthusiastic about charity and philanthropy; they are less enthusiastic about being forced into it at gunpoint.

Is there any point to having this conversation here?

comment by wedrifid · 2010-03-18T01:52:56.491Z · LW(p) · GW(p)

Libertarianism seems to be arguing, however, that (a) is the correct and proper action.

No, it really doesn't. "Naively optimised self interest" suggests (a), and libertarianism is almost irrelevant to the question. Maybe if the question were "should people be coerced into (b) independently of any contract (formal or implicit) with the owner?"

comment by mattnewport · 2010-03-18T01:48:36.784Z · LW(p) · GW(p)

If you think libertarianism argues that a) is the correct and proper action then you don't understand libertarianism. I'm not even sure how you'd arrive at the idea that it does. I'm guessing that you are trying to make some kind of analogy between libertarian attitudes to government and libertarian attitudes to individual interactions but that you are assuming ideas about government that libertarians do not share.

As for the natural disaster scenario, the basis of libertarian ethics is that people should not be compelled to do anything by force. Voluntary charity is perfectly compatible with libertarianism and indeed libertarians often believe that voluntary charity is a much more satisfactory solution to most of the social problems that governments currently take it upon themselves to address.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-03-18T03:01:09.056Z · LW(p) · GW(p)

I think woozle believes that libertarians are parody versions of objectivists.

Replies from: ata
comment by ata · 2010-03-18T03:36:09.324Z · LW(p) · GW(p)

I thought objectivists were a parody version of libertarians.

comment by Jack · 2010-03-17T22:40:55.053Z · LW(p) · GW(p)

What I have in mind is Jonathan Haidt's discussion of the different moral foundations liberals and conservatives have. Conservatives tend to value respect for authority, in-group loyalty and sexual purity while liberals do not (or do so only minimally). A google search should pull up some papers, a website and a TED Talk.

Haidt talks about what are in fact the values humans have but we can certainly imagine hypothetical persons with even more divergent values. I doubt, for example, that you and Clippy could ever come to significant agreement on policy issues even if you were in perfect Aumann agreement about every single fact about the world.

I don't take terminal values to be the sorts of things you can run Bayesian updates for and so I don't believe Aumann applies here.

Replies from: woozle, woozle
comment by woozle · 2010-03-18T11:13:53.527Z · LW(p) · GW(p)

If I'm understanding correctly, "terminal values" are end-goals.

If we have different end-goals, we need to understand what they are. (Actually, we should understand what they are anyway -- but if they're different, it becomes particularly important to examine them and identify the differences.)

This seems related to a question that David Brin once suggested as a good one to bring up in political debate: describe the sort of world you hope your preferred policies will create. ...or, in other words, describe your large-scale goals for our society -- your terminal values.

My terminal values are to minimize suffering and maximize individual freedom and ability to create, explore, and grow in wisdom via learning about the universe.

If anyone has different terminal values, I'd like to hear more about that.

Replies from: Jack, mattnewport
comment by Jack · 2010-03-24T02:55:33.900Z · LW(p) · GW(p)

I don't think any bumper sticker successfully encapsulates my terminal values. I'm highly sympathetic to ethical pluralism and particularism. I value fairness and happiness (politically I'm a cosmopolitan Rawlsian liberal) with additional values of freedom and honesty which under certain conditions can trump fairness and happiness. I also value the existence of what I would recognize as humanity and limiting the possibility of the destruction of humanity can sometimes trump all of the above. Values weighted toward myself, my family and friends. It's possible all of these things could be reduced to more fundamental values, I'm not sure. There are cases where I have no good procedure for evaluating which outcome is more desirable.

My terminal values are to minimize suffering and maximize individual freedom and ability to create, explore, and grow in wisdom via learning about the universe.

It is worth noting, if you think these are rationally justifiable somehow, that maximizing two different values is going to leave you with an incomplete function in some circumstances. Some options will minimize suffering but fail to maximize freedom, and vice versa.

If anyone has different terminal values, I'd like to hear more about that.

If you were looking for people here with different values, see above (though I don't know how much we differ). But note that the people here are going to have heavy overlap on values for semi-obvious reasons. But there are people out there who assign intrinsic moral relevance to national borders, race, religion, sexual purity, tradition etc. Do you still deny that?

Replies from: woozle
comment by woozle · 2010-03-25T21:13:35.699Z · LW(p) · GW(p)

It seems to me that the terminal values you list are really just means to an end, and that the end in question is similar to my own -- i.e. some combination of minimizing harm and maximizing freedom (to put it in terms which are a bit of an oversimplification).

For example: I also favor ethical pluralism (I'm not sure what "particularism" is), for the reasons that it leads to a more vibrant and creative society, whilst the opposite (which I guess would be suppressing or discouraging any but some "dominant" culture) leads to completely unnecessary suffering.

You are right that maximizing two values is not necessarily solvable. The apparent duality of the goal as stated has more to do with the shortcomings of natural language than it does with the goals being contradictory. If you could assign numbers to "suffering" (S) and "individual freedom" (F), I would think that the goal would be to maximize aS + bF for some values of a and b which have yet to be worked out.

[Addendum: this function may be oversimplifying things as well; there may be one or more nonlinear functions applied to S and/or F before they are added. What I said below about the possible values of a and b applies also to these functions. A better statement of the overall function would probably be fa(S) + fb(F), where fa() and fb() are both - I would think - positively-sloped for all input values.]

[Edit: ACK! Got confused here; the function for S would be negative, i.e. we want less suffering.]

[Another edit in case anyone is still reading this comment for the first time: I don't necessarily count "death" as non-suffering; I suppose this means "suffering" isn't quite the right word, but I don't have another one handy]
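
Pulling those corrections together into a single expression (just a sketch of what the above seems to describe, with fa and fb still the unspecified positively-sloped functions from the addendum):

```latex
\text{maximize} \quad U = f_b(F) - f_a(S)
```

where S is suffering (in the sense I've been using the word) and F is individual freedom.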

The exact values of a and b may vary from person to person -- perhaps they even are the primary attributes which account for one's political predispositions -- but I would like to see an argument that there is some other desirable end goal for society, some other term which belongs in this equation.

...there are people out there who assign intrinsic moral relevance to national borders, race, religion, sexual purity, tradition etc. Do you still deny that?

I do not deny this, but I also do not believe they are being rational in those assignments. Why should the "morality" of a particular act matter in the slightest if it has been shown to be completely harmless?

Replies from: Jack
comment by Jack · 2010-03-25T23:48:15.380Z · LW(p) · GW(p)

For example: I also favor ethical pluralism (I'm not sure what "particularism" is), for the reasons that it leads to a more vibrant and creative society, whilst the opposite (which I guess would be suppressing or discouraging any but some "dominant" culture) leads to completely unnecessary suffering.

This is my fault. I don't mean multiculturalism or political pluralism. I really do mean pluralism about terminal values. By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent. Note that I'm not actually a particularist since I did give you moral principles. I would say that I am a value pluralist.

It seems to me that the terminal values you list are really just means to an end, and that the end in question is similar to my own -- i.e. some combination of minimizing harm and maximizing freedom (to put it in terms which are a bit of an oversimplification).

But I'm explicitly denying this. For example, I am a cosmopolitan. In your discussion with Matt you've said that for now you care about helping poor Americans, not the rest of the world. But this is totally antithetical to my terminal values. I would vastly prefer to spend political and economic capital to get rid of agricultural subsides in the developed world, liberalize as many immigration and trade laws as I can and test strategies for economic development. Whether or not the American working class has cheap health care really is quite insignificant to me by comparison.

Now, when I say I have a terminal value of fairness I really do mean it. I mean I would sacrifice utility or increase overall suffering in some circumstances in order to make the world more fair. I would do the same to make the world more free and the same to make the world more honest in some situations. I would do things that furthered the happiness of my friends and family but increased your suffering (nothing personal). I don't know what gives you reason to deny any of this.

I do not deny this, but I also do not believe they are being rational in those assignments. Why should the "morality" of a particular act matter in the slightest if it has been shown to be completely harmless?

Now you're just begging the question. My whole point this entire time is that there is no reason for morality to always be about harm. Indeed, there is no reason for morality to ever be about harm except that we make it so. I frankly don't even understand the application of the word "rationality" as we use it here to values. Unless you have a third meaning for the word your usage here is just a category error!

Replies from: woozle
comment by woozle · 2010-03-26T11:30:03.710Z · LW(p) · GW(p)

By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent.

So how do you rationally decide if an action is right or wrong? -- or are you saying you can't do this?

Also, just to be clear: you are saying that you do not believe rightness or wrongness of an action ultimately derives from whether or not it does harm? ("Harm" being the more common term; I tried to refine it a bit as "personally-defined suffering", but I think you're disagreeing with the larger idea -- not my refinement of it.)

In your discussion with Matt you've said that for now you care about helping poor Americans, not the rest of the world.

Matt (I believe) misinterpreted me that way too. No, that is not what I said.

What I was trying to convey was that I thought I had a workable and practical principle by which poor Americans could be helped (redistribution of American wealth via mechanisms and rules yet to be worked out), while I don't have such a solution for the rest of the world [yet].

I tried to make it quite clear that I do care about the rest of the world; the fact that I don't yet have a solution for them (and am therefore not offering one) does not negate this.

I also tried to make it quite clear that my solution for Americans must not come at the price of harming others in the world, and that (further) I believe that as long as it avoids this, it may be of some benefit to the rest of the world as we will not be allowing unused resources to languish in the hands of the very richest people (who really don't need them) -- leaving the philanthropists among us free to focus on poverty worldwide rather than domestically.

(At a glance, I agree with your global policy position. I don't think it contradicts my own. I'm not talking about reallocation of existing expenditures -- foreign aid, tax revenues, etc. -- I'm talking about reallocating unused -- one might even use the word "hoarded" -- resources, via means socialistic, capitalistic, or whatever means seems best*.)

(*the definition of this slippery term comes back ultimately to what we're discussing here: "what is good?")

Now you're just begging the question. My whole point this entire time is that there is no reason for morality to always be about harm. Indeed, there is no reason for morality to ever be about harm except that we make it so. I frankly don't even understand the application of the word "rationality" as we use it here to values. Unless you have a third meaning for the word your usage here is just a category error!

First of all, when I say "harm" or "suffering", I'm not talking about something like "punishing someone for bad behavior"; the idea behind doing that (whether correct or not) is that this ultimately benefits them somehow, and any argument over such punishment will be based on whether harm or good is being done overall. "Hitting a masochist" would not necessarily qualify as harm, especially if you will stop when the masochist asks you to.

Second... when we look at harm or benefit, we have to look at the system of people affected. This isn't to say that if {one person in the system benefits more than another is harmed} then it's ok, because then we get into the complexity of what I'll call the "benefit aggregation function" -- which involves values that probably are individual.

It's also reasonable (and often necessary) to look at a decision's effects on society (if you let one starving person get away with stealing a cookie under a particular circumstance, then other hungry people may think it's always okay to steal cookies) in the present and in the long term. This is the basis of many arguments against gay marriage, for example -- the idea that society will somehow be harmed -- and hence individuals will be harmed as society crumbles around them -- by "changing the definition of marriage". (The evidence is firmly against those arguments, but that's not the point.)

Third: I'm arguing that "[avoiding] harm" is the ultimate basis for all empathetic-human arguments about morality, and I suggest that this would be true for any successful social species (not just humans). (by which I mean "humans with empathy" -- specifically excluding psychopaths and other people whose primary motive is self-gratification)

I suggest that If you can't argue that an action causes harm of some kind, you have absolutely no basis for claiming the action is wrong (within the context of discussions with other humans or social sophonts).

You seem to be arguing, however, that actions can be wrong without causing any demonstrable harm. Can you give an example?

Replies from: Jack
comment by Jack · 2010-03-26T19:13:12.075Z · LW(p) · GW(p)

So how do you rationally decide if an action is right or wrong? -- or are you saying you can't do this?

There is no such thing as "rationally deciding if an action is right or wrong". This has nothing to do with particularism. It's just a metaethical position. I don't know what can be rational or irrational about morality.

Again though, I'm not a particularist, I do have principles I can apply if I don't have strong intuitions. A particularist only has her intuitions.

Also, just to be clear: you are saying that you do not believe rightness or wrongness of an action ultimately derives from whether or not it does harm? ("Harm" being the more common term; I tried to refine it a bit as "personally-defined suffering", but I think you're disagreeing with the larger idea -- not my refinement of it.)

I don't believe my own morality can be reduced to language about harm. I'm not sure what "ultimately derives" means but I suspect my answer is no. My morality happens to have a lot to do with harm (again, I'm a Haidtian liberal). But I don't think that makes my morality more rational than a morality that is less about harm. There is no such thing a "rational" or "irrational" morality only moralities I find silly or abhorrent.

I tried to make it quite clear that I do care about the rest of the world; the fact that I don't yet have a solution for them (and am therefore not offering one) does not negate this.

If it's the case that you care about the rest of the world then I don't think you realize how non-ideal your prescriptions are. You're basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice.

I also tried to make it quite clear that my solution for Americans must not come at the price of harming others in the world, and that (further) I believe that as long as it avoids this, it may be of some benefit to the rest of the world as we will not be allowing unused resources to languish in the hands of the very richest people (who really don't need them) -- leaving the philanthropists among us free to focus on poverty worldwide rather than domestically.

But of course it comes at the price of harming the rest of the world. You're advocating sacrificing political resources to pass legislation. Those resources are to some extent limited which means you're decreasing the chances of or at least delaying changes in policy which would actually benefit the poorest. Moreover, social entitlements are notoriously impossible to overturn which means you're putting all this capital in a place we can't take it from to give to the people who really need it. Shoot, at least the mega-rich are sometimes using their money to invest in developing countries.

This doesn't even get us into preventing existential risk. When ever you have a utility-like morality using resources inefficiently is about as bad as actively doing harm.

You seem to be arguing, however, that actions can be wrong without causing any demonstrable harm. Can you give an example?

None you'll agree with! You've already said your morality is about preventing harm! But like it or not there are people who really don't care about suffering outside their own country. There are people who think gay marriage is wrong no matter what effects it has on society (just as there are those, like me, who think it should be legal even if it damages society). There are those who do not believe we should criticize our leader under certain circumstances. There are those who believe our elders deserve respect above and beyond what they deserve as humans. There are those who believe sex outside of marriage is wrong. There are those who believe eating cow is immoral; there are others who believe eating cow is delicious. None of these people are necessarily rational or irrational.

I'll reiterate one question: What do you mean by rational in "rational morality"?

Replies from: woozle, thomblake
comment by woozle · 2010-03-27T02:31:03.047Z · LW(p) · GW(p)

You're basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice.

I've explained repeatedly -- perhaps not in this subthread, so I'll reiterate -- that I'm only proposing reallocating domestic resources within the US, not resources which would otherwise be spent on foreign aid of any kind. I don't see how that can be harmful to anyone except (possibly) the extremely rich people from whom the resources are being reallocated.

(Will respond to your other points in separate comments, to maximize topic-focus of any subsequent discussion.)

comment by thomblake · 2010-03-26T20:36:57.502Z · LW(p) · GW(p)

So how do you rationally decide if an action is right or wrong? -- or are you saying you can't do this?

I don't know what can be rational or irrational about morality.

This is taken out of context, but I must take issue with it. If you can decide whether an action is right or wrong, then that decision can be made rationally, for any decent definition of 'rationality' that is about decisions.

So if you want to claim, "One cannot rationally decide whether an action is right or wrong", that reduces to "One cannot decide whether an action is right or wrong". In that case, would it be because your decisions can't affect your beliefs, or because there is objective morality, or some other reason?

Replies from: Jack
comment by Jack · 2010-03-26T21:16:28.910Z · LW(p) · GW(p)

I'm not sure I understand your issue. If this response doesn't work you may have to reexplain.

If you have some values-- say happiness-- then there can be irrational ways of evaluating actions in terms of those values. So if I'm concerned with happiness but only look at the effects of the action on my sneakers, and not the emotions of people, well that seems irrational if happiness is really what I care about. Certainly there are actions which can either be consistent or inconsistent with some set of values and taking actions that are inconsistent with your values is irrational. What I don't see is what it could mean for those values to be rational or irrational in the first place. I don't think people "decide" on terminal values in the way they decide on breakfast or to give to some charity over another.

Does that address your concern?

Replies from: woozle
comment by woozle · 2010-03-27T02:42:01.729Z · LW(p) · GW(p)

See my comment about "internal" and "external" terminal values -- I think possibly that's where we're failing to communicate.

Internal terminal values don't have to be rational -- but external ones (goals for society) do, and need to take individual ones into account. Violating an individual internal TV causes suffering, which violates my proposed universal external TV.

For instance... if I'm a heterosexual male, then one of my terminal values might be to form a pair-bond with a female of my species. That's an internal terminal value. This doesn't mean that I think everyone should do this; I can still support gay rights. "Supporting gay rights" is an external value, but not a terminal one for me. For a gay person, it probably would be a terminal value -- so prohibiting gays from marrying would be violating their internal terminal values, which causes suffering, which violates my proposed universal external terminal value of "minimizing suffering / maximizing happiness" -- and THAT is why it is wrong to prohibit gays from marrying, not because I personally happen to think it is wrong (i.e. not because of my external intermediate value of supporting gay rights).

Replies from: Jack
comment by Jack · 2010-03-27T19:32:16.459Z · LW(p) · GW(p)

I'm fine with that distinction but it doesn't change my point. Why do external terminal values have to be rational? What does it mean for a value to be rational?

Can you just answer those two questions?

Replies from: woozle
comment by woozle · 2010-08-26T15:35:06.728Z · LW(p) · GW(p)

Here's my answer, finally... or a more complete answer, anyway.

Replies from: Emile
comment by Emile · 2010-08-26T15:38:57.055Z · LW(p) · GW(p)

It's not visible, I think you have to publish it.

Replies from: woozle
comment by woozle · 2010-08-26T18:30:43.848Z · LW(p) · GW(p)

I finally figured out what was going on, and fixed it. For some reason it got posted in "drafts" instead of on the site, and looking at the post while logged in gave no clue that this was the case.

Sorry about that!

comment by mattnewport · 2010-03-18T18:02:34.481Z · LW(p) · GW(p)

My terminal values are to minimize suffering and maximize individual freedom and ability to create, explore, and grow in wisdom via learning about the universe.

By 'minimize suffering' I assume you mean some kind of utilitarian conception of minimizing aggregate suffering equally weighted across all humans (and perhaps extended to include animals in some way). If so this would be one area we differ. Like most humans, I don't apply equal weighting to all other individuals' utilities. I don't expect other people's weightings to match my own, nor do I think it would be better if we all aimed to agree on a unique set of weightings. I care more about minimizing the suffering of my family and friends than I do about some random stranger, an animal, a serial killer, a child molester or a politician. I do not think this is a problem.

Replies from: woozle, woozle
comment by woozle · 2010-03-25T22:42:21.900Z · LW(p) · GW(p)

Much discussion about "minimization of suffering" etc. ensued from my first response to this comment, but I thought I should reiterate the point I was trying to make:

I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.

(Tentative definition: "suffering" is any kind of discomfort over which the subject has no control.)

All other values (from any part of the political continuum) -- "human rights", "justice", "fairness", "morality", "faith", "loyalty", "honor", "patriotism", etc. -- are not rational terminal values.

This isn't to say that they are useless. They serve as a kind of ethical shorthand, guidelines, rules-of-thumb, "philosophical first-aid": somewhat-reliable predictors of which actions are likely to cause harm (and which are not) -- memes which are effective at reducing harm when people are infected by them. (Hence society often works hard to "sugar coat" them with simplistic, easily-comprehended -- but essentially irrelevant -- justifications, and otherwise encourage their spread.)

Nonetheless, they are not rational terminal values; they are stand-ins.

They also have a price:

  • they do not adapt well to changes in our evolving rational understanding of what causes harm/suffering, so that rules which we now know cause more suffering than benefit are still happily propagating out in the memetic wilderness...
  • any rigid rule (like any tool) can be abused.

...

I seem to have taken this line of thought a bit further than I meant to originally -- so to summarize: I'd really like to hear if anyone believes there are rational terminal values other than (or which cannot ultimately be reduced to) "minimizing suffering".

Replies from: Strange7, Morendil, mattnewport
comment by Strange7 · 2010-03-25T22:48:11.912Z · LW(p) · GW(p)

I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.

I disagree. I'll take suffering rather than death any day, thank-you-very-much.

Furthermore, I have reason to believe that, if I were offered the opportunity to instantaneously and painlessly wipe out all life in the universe, many compassionate humans would support my decision not to do so, despite all the suffering which is thereby allowed to continue.

Replies from: woozle
comment by woozle · 2010-03-26T01:07:22.569Z · LW(p) · GW(p)

You think you're disagreeing with me, but you're not; I would say that for you, death would be a kind of suffering -- the very worst kind, even.

I would also count the "wipe out all life" scenario as an extreme form of suffering. Anyone with any compassion would suffer in the mere knowledge that it was going to happen.

Replies from: Strange7
comment by Strange7 · 2010-03-26T01:14:01.556Z · LW(p) · GW(p)

If you're going to define suffering as 'whatever we don't like,' including the possibility that it's different for everyone, then I agree with your assertion but question its usefulness.

Replies from: woozle
comment by woozle · 2010-03-26T12:02:05.718Z · LW(p) · GW(p)

It's not what "we" -- the people making the decision or taking the action -- don't like; it's what those affected by the action don't like.

comment by Morendil · 2010-03-26T07:47:39.567Z · LW(p) · GW(p)

Learning is a terminal value for me, which I hold irreducible to its instrumental advantages in contributing to my well-being.

Replies from: woozle
comment by woozle · 2010-03-27T02:07:58.596Z · LW(p) · GW(p)

That seems related to what I was trying to get at with the placeholder-word "freedom" -- I was thinking of things like "freedom to explore" and "freedom to create new things" -- both of which seem highly related to "learning".

It looks like we're talking about two subtly different types of "terminal value", though: for society and for one's self. (Shall we call them "external" and "internal" TVs?)

I'm inclined to agree with your internal TV for "learning", but that doesn't mean that I would insist that a decision which prevented others from learning was necessarily wrong -- perhaps some people have no interest in learning (though I'm not going to be inviting them to my birthday party).

If a decision prevented learnophiles from learning, though, I would count that as "harm" or "suffering" --- and thus it would be against my external TVs.

Taking the thought a little further: I would be inclined to argue that unless an individual is clearly learnophobic, or it can be shown that too much learning could somehow damage them, then preventing learning in even neutral cases would also be harm -- because learning is part of what makes us human. I realize, though, that this argument is on rather thinner rational ground than my main argument, and I'm mainly presenting it as a means of establishing common emotional ground. Please ignore it if this bothers you.

Take-away point: My proposed universal external TV (prevention of suffering) defines {involuntary violation of internal TVs} as harm/suffering.

Hope that makes sense.

comment by mattnewport · 2010-03-25T22:51:43.128Z · LW(p) · GW(p)

I think you are wrong but I don't think you've even defined the goal clearly enough to point to exactly where. Some questions:

  • How do we weight individual contributions to suffering? Are all humans weighted equally? Do we consider animal suffering?
  • How do we measure suffering? Should we prefer to transfer suffering from those with a lower pain threshold to those with a greater tolerance?
  • How do you avoid the classic unfriendly AI problem of deciding to wipe out humanity to eliminate suffering?
  • Do you think that people actually generally act in accordance with this principle or only that they should? If the latter to what extent do you think people currently do act in accordance with this value?

There are plenty of other problems with the idea of minimizing suffering as the one true terminal value but I'd like to know your answers to these questions first.

Replies from: woozle
comment by woozle · 2010-03-26T01:35:29.101Z · LW(p) · GW(p)

Points 1 and 2:

I don't know. I admitted that this was an area where there might be individual disagreement; I don't know the exact nature of the fa() and fb() functions -- just that we want to minimize [my definition of] suffering and maximize freedom.

Actually, on reflection, I think "freedom" is another one of those "shorthand" values, not a terminal value; I may personally want freedom, but other sentients might not. A golem, for example, would have no use for it (no comments from Pratchett readers, thank you). Nor would a Republican. [rimshot]

The point is not that we can all agree on a quantitative assessment of which actions are better than others, but that we can all agree that the goal of all these supposedly-terminal values (which are not in fact terminal) is to minimize suffering*.

(*Should I call it "subjective suffering"? "woozalian suffering"?)

Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.

Point 4: Yes, with some major caveats...

First, I think this principle is at the heart of human wiring. Some people may not have it (about 5% of the population lacks any empathy), but we're not inviting those folks to the discussion table at this level.

Second... many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values -- but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one's self. ...and (b) only supersedes (a) for people whose self-interest outweighs their integrity.

Replies from: Strange7, mattnewport
comment by Strange7 · 2010-03-26T01:45:33.876Z · LW(p) · GW(p)

Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.

So, how much suffering would you say an unoccupied volume of space is subject to? A lump of nonliving matter? A self-consistent but non-instantiated hypothetical person?

Replies from: woozle
comment by woozle · 2010-03-26T12:54:24.201Z · LW(p) · GW(p)

It's true that there would be no further suffering once the destruction was complete.

This is a bit of an abstract point to argue over, but I'll give it a go...

I started out earlier arguing that the basis of all ethics was {minimizing suffering} and {maximizing freedom}; I later dropped the second term because it seemed like it might be more of a personal preference than a universal principle -- but perhaps it, or something like it, needs to be included in order to avoid the "destroy everything instantly and painlessly" solution.

That said, I think it's more of a glitch in the algorithm than a serious exception to the principle. Can you think of any real-world examples, or classes of problems, where anyone would seriously argue for such a solution?

Replies from: Strange7
comment by Strange7 · 2010-03-26T13:06:56.211Z · LW(p) · GW(p)

The classic one is euthanasia.

Replies from: woozle
comment by woozle · 2010-03-27T02:22:16.261Z · LW(p) · GW(p)

Your example exposes the flaw in the "destroy everything instantly and painlessly" pseudo-solution: it assumes that life is more suffering than pleasure. (Euthanasia is only performed -- or argued for, anyway -- when the gain from continuing to live is believed to be outweighed by the suffering.)

I think this shows that there needs to be a term for pleasure/enjoyment in the formula...

...or perhaps a concept or word which equates to either suffering or pleasure depending on sign (+/-), and then we can simply say that we're trying to maximize that term -- where the exact aggregation function has yet to be determined, but we know it has a positive slope.
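One minimal way to write that down (my own sketch, not any standard notation): let w_i be the signed net well-being of person i, negative for suffering and positive for pleasure. Then the goal is something like

```latex
\[
  \max \; F(w_1, w_2, \dots, w_n)
  \qquad \text{where} \qquad
  \frac{\partial F}{\partial w_i} > 0 \ \text{for every } i .
\]
```

The exact shape of F -- how it weights and aggregates the individual terms -- is deliberately left open; the only commitment is that improving anyone's term, holding the others fixed, never makes the total worse.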

comment by mattnewport · 2010-03-26T04:30:58.887Z · LW(p) · GW(p)

I don't know. I admitted that this was an area where there might be individual disagreement; I don't know the exact nature of the fa() and fb() functions -- just that we want to minimize [my definition of] suffering and maximize freedom.

So you want to modify your original statement:

I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.

To something like: "I propose that the ultimate terminal value of every rational, compassionate human is to minimize [woozle's definition of] suffering (which woozle can't actually define but knows it when he sees it)"?

Your proposal seems to be phrased as a descriptive rather than normative statement ('the ultimate terminal value of every rational, compassionate human is' rather than 'should be'). As a descriptive statement this seems factually false unless you define 'rational, compassionate human' as 'human who aims to minimize woozle's definition of suffering'. As a normative statement it is merely an opinion and one which I disagree with.

So I don't agree that minimizing suffering by any reasonable definition I can think of (I'm having to guess since you can't provide one) is or should be the terminal value of human beings in general or this human being in particular. Perhaps that means I am not rational or compassionate by your definition, but I am not entirely lacking in empathy -- I've been known to shed a tear when watching a movie and to feel compassion for other human beings.

again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.

Well, you need to make some effort to clarify your definition then. If killing someone to save them from an eternity of torture is an increase in suffering by your definition, what about preventing a potential someone from ever coming into existence? Death represents the cessation of suffering and the cessation of life, and is extreme suffering by your definition. Is abortion or contraception also a cause of great suffering due to the denial of a potential life? If not, why not?

Second... many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values -- but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one's self. ...and (b) only supersedes (a) for people whose self-interest outweighs their integrity.

So everyone shares your self-declared terminal value of minimizing suffering, but many of them don't know it because they are confused, brainwashed or evil? Is there any point in me debating with you, since you appear to have defined my disagreement to be confusion or a form of psychopathy?

Replies from: woozle
comment by woozle · 2010-03-27T01:44:04.817Z · LW(p) · GW(p)

Are you saying that I have to be able to provide you an equation which produces a numeric value as an answer before I can argue that ethical decisions should be based on it?

But ok, a rephrase and expansion:

I propose that (a) the ultimate terminal value of every rational, compassionate human is to minimize aggregate involuntary discomfort as defined by the subjects of that discomfort, and (b) that no action or decision can reasonably be declared "wrong" unless it can at least be shown to cause significant amounts of such discomfort. (Can we at least acknowledge that it's ok to use qualitative words like "significant" without defining them exactly?)

I intend it as a descriptive statement ("is"), and I have been asking for counterexamples: show me a situation in which the "right" decision increases the overall harm/suffering/discomfort of those affected.

I am confident that I can show how any supposed counterexamples are in fact depending implicitly on the rationale I am proposing, i.e. minimizing involuntary discomfort.

comment by woozle · 2010-03-24T00:51:22.546Z · LW(p) · GW(p)

Actually, no, that's not quite my definition of suffering-minimization. This is an important issue to discuss, too, since different aggregation functions will produce significantly different final sums.

This is the issue I was getting at when I asked (in some other comment on another subthread) "is it better to take $1 each from 1000 poor people, or $1000 from one millionaire?" (That thread apparently got sucked into an attractor-conversation about libertarian values.)

First, I'm inclined to think that suffering should be weighted more heavily than benefit. More specifically: it's more important to make sure nobody falls below a certain universal level of comfort than it is to allow people who are already at that level (or higher) to do better. (Free-marketeers will probably spit out their cigars at this statement -- and I would ask them how they can possibly justify any other point of view.)

Second... I don't apply equal weighting to all humans either. I think those who are perceived as "valuable" (an attribute which may have more than one measure) should be able to do better than the "basic level of comfort" (BLoC) -- but that this ability can reasonably be limited if it impairs society's ability to provide the BLoC.

(Question to ponder: does it really make sense for one person to make ten thousand times as much money as another person? Yes, some people are more valuable than others -- but 10,000 times as valuable? Yet we have people living on $10k a year, while others earn hundreds of millions. This suggests to me that the economic system severely overvalues certain types of activity, while severely undervaluing others.)

(Counter: it could be argued that some people have zero, or even negative, value. It gets complicated at this point, and we can have that conversation if it seems relevant.)

Third... it makes sense to me for each individual to have their own set of "most valuable people" to protect, but then most of us don't (yet) participate directly in policy decisions affecting millions of people. I think if we're going to do that, we need to be able to see the broader perspective enough to compromise with it -- while still looking out for the interests of those we personally know to be valuable. Thus we should be able to reach a consensus decision which does not pay for apparent overall social progress using a currency of scattered personal agonies and individual sacrifices.

A lot of random strangers are very nice people, and I wouldn't want some random person to suffer just for my benefit.

And finally... I think the evidence is in that we as a society are sufficiently wealthy to provide for everyone at a BLoC while still allowing plenty of room for a large chunk of the population to be everything from "slightly well-off" to "extremely rich".
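Coming back to the aggregation question above (the $1-each-from-1000-people vs. $1000-from-one-millionaire example), here is a toy sketch -- my own illustration, with made-up dollar figures, an arbitrary comfort threshold, and an arbitrary 10x weight, not a worked-out proposal -- of how much the choice of aggregation function changes the answer:

```python
# Toy comparison of two aggregation functions (illustrative only; the
# comfort threshold, incomes, and 10x weight are made-up parameters).

COMFORT = 15_000  # hypothetical basic-level-of-comfort income, dollars/year

def linear_total(incomes):
    """Aggregation 1: plain sum -- every dollar counts the same for everyone."""
    return sum(incomes)

def comfort_weighted_total(incomes):
    """Aggregation 2: dollars of shortfall below the comfort threshold
    (suffering) count ten times as much as dollars of surplus (benefit)."""
    total = 0.0
    for income in incomes:
        shortfall = max(COMFORT - income, 0)
        surplus = max(income - COMFORT, 0)
        total += surplus - 10 * shortfall
    return total

poor = [10_000] * 1000        # a thousand people living on $10k/year
millionaire = [1_000_000]     # one person earning $1M/year

scenario_a = [x - 1 for x in poor] + millionaire      # take $1 from each poor person
scenario_b = poor + [millionaire[0] - 1_000]          # take $1000 from the millionaire

for name, agg in [("linear sum", linear_total),
                  ("comfort-weighted", comfort_weighted_total)]:
    print(f"{name}: scenario A = {agg(scenario_a):,.0f}, scenario B = {agg(scenario_b):,.0f}")
```

Under the plain sum the two scenarios come out identical (the same $1000 moves either way); under the comfort-weighted version, taking the $1000 from the millionaire scores clearly better, because every dollar taken from someone below the threshold counts ten times over. The specific numbers are placeholders; the point is only that the ranking of options depends entirely on which aggregation function you pick.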

Replies from: mattnewport
comment by mattnewport · 2010-03-24T01:07:30.904Z · LW(p) · GW(p)

First, I'm inclined to think that suffering should be weighted more heavily than benefit. More specifically: it's more important to make sure nobody falls below a certain universal level of comfort than it is to allow people who are already at that level (or higher) to do better. (Free-marketeers will probably spit out their cigars at this statement -- and I would ask them how they can possibly justify any other point of view.)

Part of the justification is in the discussion with Hayek I linked here.

But we are up against this very strong, and in a sense justified resistance of our instincts and that's our whole problem. A society which is efficient cannot be just. And unfortunately a society which is not efficient cannot maintain the present population of the world. So I think our instincts will have to learn. We shall perhaps for generations still be fighting the problem and fluctuating from one position to the other.

Replies from: woozle
comment by woozle · 2010-03-24T01:23:06.445Z · LW(p) · GW(p)

Stop me if I'm misunderstanding the argument -- I won't have time to watch the video tonight, and have only read the quotes you excerpted -- but you seem to be posing "markets" against "universal BLoC" as mutually exclusive choices.

I am suggesting that this is a false dilemma; we have more than adequate resources to support socialism at the low end of the economic scale while allowing quite free markets at the upper end. If nobody can suffer horribly -- losing their house, their family, their ability to live adequately -- the risks of greater market freedom can be borne much more reasonably.

To put this in more concrete terms: I wouldn't be as bothered by a few people being absurdly stinkin' rich if I wasn't constantly worried about whether I could pay my next credit-card bill, or whether I could afford to go to the hospital if I got really sick, or whether we are going to have to declare bankruptcy and possibly lose our home.

Am I missing your point?

On a related note: totally unregulated markets at any level lead to dangerous accumulations of power by largely unaccountable individuals. Does Hayek (or his school of thought) have an answer for that problem?

Addendum: I also disagree with his statement that an efficient society cannot be just. Does he offer a supporting argument for that claim in the video?

Replies from: mattnewport
comment by mattnewport · 2010-03-24T06:33:54.284Z · LW(p) · GW(p)

Stop me if I'm misunderstanding the argument -- I won't have time to watch the video tonight, and have only read the quotes you excerpted -- but you seem to be posing "markets" against "universal BLoC" as mutually exclusive choices.

Your 'universal basic level of comfort' seems an awfully slippery concept to me. I imagine the average American's idea of what it is differs rather markedly from that of someone living in rural Africa. Both would differ from that of a medieval peasant.

That's somewhat beside the point, though. The reason we can support an unprecedented human population with, on average, a level of health, comfort and material well-being that is historically high is that markets are extremely good at allocating resources efficiently and encouraging and spreading innovations. This efficiency stems in large part from the way that a market economy rewards success and not good intentions. Profits tend to flow to those who can most effectively produce goods or services valued by other market participants. Hayek's point is that this can lead to a distribution of wealth that offends many people's natural sense of justice but that attempts to enforce a more 'just' distribution tend to backfire in all kinds of ways, not least of which is through a reduction in the very efficiency we rely on to maintain our standard of living.

I am suggesting that this is a false dilemma; we have more than adequate resources to support socialism at the low end of the economic scale while allowing quite free markets at the upper end. If nobody can suffer horribly -- losing their house, their family, their ability to live adequately -- the risks of greater market freedom can be borne much more reasonably.

Part of the problem is that I believe this reflects an overly static view of the way the economy functions and neglects the effects of changes in incentives on individual behaviour and, in time, on societal norms. The idea of a 'culture of dependency' reflects these types of concern. Moral hazard doesn't only affect too-big-to-fail banks.

This ties in with my earlier point about defining a 'basic level of comfort'. I believe Hayek was actually supportive of some level of unemployment insurance. The tremendous inequalities between nations complicate the politics of this issue - many people in the developed world feel they are entitled to a basic level of comfort when unemployed that exceeds the level of comfort of productive workers in the developing world and this has consequences for the politics of free trade, immigration and foreign aid.

On a related note: totally unregulated markets at any level lead to dangerous accumulations of power by largely unaccountable individuals. Does Hayek (or his school of thought) have an answer for that problem?

I'm not sure exactly what Hayek's position on this issue is but the standard Austrian/libertarian view is that such problems are generally caused by government intervention and would not exist in a true free market. There's actually quite a bit of common ground between the Chomsky-esque 'left' and the libertarian/anarcho-capitalist 'right' regarding critiques of the Corporatist nature of the US and other western democracies and the special interest control of government. The diagnosis is pretty similar but the proposed solutions tend to differ.

Replies from: woozle, RobinZ
comment by woozle · 2010-03-24T10:58:16.213Z · LW(p) · GW(p)

I don't think BLoC has to be slippery, though of course in reality (with the current political system, anyway) it would become politicized. This is not a rational reason to oppose it, however.

I don't know if we can do it for everyone on Earth at the moment, though that is worth looking at and getting some numbers so we know where we are. I was proposing it for the US, since we are the huge outlier in this area; most other "developed" societies (including some less wealthy than the US) already have such a thing.

I would suggest a starting definition for BLoC as living conditions meeting minimum sanitary standards, access to medical care priced affordably for each person (which will mean "free" for some), one or two decent meals a day, and access to laundry facilities.

(sanitary standards: working indoor toilet, no leaks in the roof, reasonably well-insulated walls and roof, a phone (prepaid cellular is cheap now), some kind of heat in the winter, a decent bed, and possibly a few other things.)

The reason we can support an unprecedented human population with, on average, a level of health, comfort and material well-being that is historically high is that markets are extremely good at allocating resources efficiently and encouraging and spreading innovations. ... Hayek's point is that this can lead to a distribution of wealth that offends many people's natural sense of justice but that attempts to enforce a more 'just' distribution tend to backfire in all kinds of ways, not least of which is through a reduction in the very efficiency we rely on to maintain our standard of living.

This still treats markets and a universal BLoC as mutually exclusive. Why can't we have markets for everyone over a certain income level, and universal welfare for those who fall below it? As I said, other countries do this already, and it doesn't seem to hurt their overall wealth at all.

I would argue that our current system is greatly harmful to overall wealth, as people and businesses often spend valuable resources helping others when they could be spending those resources developing new businesses or being creative. This burden is also highly selective, actually punishing those who help (they are not compensated for their expenditures), rather than being equitably distributed across all those who benefit from society.

(I'm a perfect example of this: because of the lack of universal health care, I've spent most of the last 3 years fighting bureaucracy to get help for an autistic child -- and helping to provide care for him, which I am eminently unqualified to do -- instead of working on my online store or my computer consulting. Because of that situation, I can't afford medical insurance, which means I now owe $7k for a life-threatening incident late last year -- which I won't be able to pay anytime soon, and which is therefore driving up costs for everyone else in the Duke Health system, while employees of at least one collection agency are wasting their resources trying to get it from me. This is not "market efficiency".)

The idea of a 'culture of dependency' reflects these types of concern.

Is there any data suggesting that this is actually a problem? As far as I can tell, it's a conservative myth -- like Reagan's "welfare queens".

...the standard Austrian/libertarian view is that such problems are generally caused by government intervention and would not exist in a true free market.

This is WAY wrong. Totally unregulated markets are essentially anarchy; this is unstable, as some of those who gain early advantage will inevitably use their power to take down other players (rather than contributing positively) until the board consists of a few very powerful players (and their helpers) with everyone else being essentially powerless. (I suspect that the stable end-state is essentially feudalism, but with the enhanced concentrations of power made possible by modern technology, I can only think it would be far worse than any past feudal system.)

I put it to you that you can't have markets without regulation -- just as you can't have a game without rules, or an internal combustion engine without valves.

Replies from: mattnewport
comment by mattnewport · 2010-03-24T16:26:01.339Z · LW(p) · GW(p)

I don't think BLoC has to be slippery, though of course in reality (with the current political system, anyway) it would become politicized. This is not a rational reason to oppose it, however.

I think it is a rational reason to oppose a role for government in providing it. Governments are bad enough at providing well-defined services; a poorly defined goal exacerbates the problem. Just as the vague threat of terrorism provides a cover for ever-increasing government encroachment on civil liberties, the vague promise of an ever-rising 'basic level of comfort' provides a cover for ever-increasing government encroachment on economic liberties.

I don't know if we can do it for everyone on Earth at the moment, though that is worth looking at and getting some numbers so we know where we are. I was proposing it for the US, since we are the huge outlier in this area; most other "developed" societies (including some less wealthy than the US) already have such a thing.

As I lack much of a nationalist instinct I am endlessly puzzled by the idea that we should draw arbitrary boundaries of care along national borders. If your concern is helping others you can do so with much greater efficiency by targeting that help where it is most needed, which is largely outside the US.

Incidentally I don't think it is a coincidence that many developed countries with advanced welfare states are less wealthy than the US. The difficulties of making economic comparisons across countries with different cultures and histories make it hard to draw definitive conclusions from the data on these differences, but some of it is highly suggestive.

Why can't we have markets for everyone over a certain income level, and universal welfare for those who fall below it? As I said, other countries do this already, and it doesn't seem to hurt their overall wealth at all.

The claim that it doesn't seem to hurt their overall wealth at all is highly controversial. Due to the difficulties of controlling for other factors it is always possible to explain away the wealth differences in the data but there are suggestive trends. I don't really want to get into throwing studies back and forth but saying 'it doesn't seem to hurt their overall wealth at all' suggests either ignorance of the relevant data or unjustified confidence in interpretation of it.

This is WAY wrong. Totally unregulated markets are essentially anarchy; this is unstable, as some of those who gain early advantage will inevitably use their power to take down other players (rather than contributing positively) until the board consists of a few very powerful players (and their helpers) with everyone else being essentially powerless.

Those who gain early advantage inevitably using their power to take down other players sounds like a description of the current corporatist system in the US to me, where incumbents use their political influence to buy state protection from competition. You appear to be ignorant of the kinds of problems highlighted by public choice theory and unaware of the libertarian analysis of how genuinely free markets tend to work against such power concentrations.

A productive market economy requires mechanisms to discourage the use of force as a negotiating chip and a framework to resolve contractual disputes through the rule of law rather than through political patronage, arbitrary decisions or violence, but that looks rather different from what I suspect you mean by 'regulation'.

Replies from: woozle
comment by woozle · 2010-03-24T21:34:05.992Z · LW(p) · GW(p)

I think it is a rational reason to oppose a role for government in providing it.

If the government doesn't provide it, just who is going to?

As I lack much of a nationalist instinct I am endlessly puzzled by the idea that we should draw arbitrary boundaries of care along national borders. If your concern is helping others you can do so with much greater efficiency by targeting that help where it is most needed, which is largely outside the US.

It's not a matter of loyalty, but of having the knowledge and resources to work with to make something possible. I would certainly like to see such a level of care provided to everyone worldwide, but knowing what little I know about average wealth worldwide, this seems an unrealistic goal for the immediate future. It seems entirely realistic for the US, however.

(Plus... as little influence as I have over US politics, I am at least a citizen and resident; I'm much less likely to be able to have an effect on how things are handled in China... or Uganda, where there are more serious worries like the current campaign to make homosexuality a capital offense.)

Incidentally I don't think it is a coincidence that many developed countries with advanced welfare states are less wealthy than the US.

There are also countries wealthier than us that have universal welfare. My understanding is that if you look at the correlation between social welfare and overall wealth, it is positive -- not negative, as you seem to imply. However, we do need some numbers for this so we're not arguing subjective impressions.

I don't really want to get into throwing studies back and forth...

I think this is exactly what we should be doing, until we have a sufficient sampling of studies that it might actually average out to something approaching reality.

Those who gain early advantage inevitably using their power to take down other players sounds like a description of the current corporatist system in the US to me...

Yes, the current corporatist system is what you get from partial deregulation. I think it is clear that we do not want to go further in that direction.

You appear to be ignorant of the kinds of problems highlighted by public choice theory and unaware of the libertarian analysis of how genuinely free markets tend to work against such power concentrations.

I'm aware of the broad outlines of the theory and some of the particulars, but I'm not aware of how it supports your conclusions.

There is the argument that deregulation of Germany's currency led to greater prosperity -- to which I say (a) we are agreed that too much regulation is a bad thing, but I still maintain that too little is just as bad or worse; (b) I would really like to see some numbers on this; were people really that badly off trading cigarettes? Yes, it was an inconvenience -- but how much did it actually affect material wealth?

As you pointed out, recent US history seems to argue for a return to increased regulation.

If you believe there is data which contradicts my conclusions about the dangers of total deregulation and the economic benefits of social welfare for at least the bottom economic tier of society, please do share it.

Replies from: mattnewport
comment by mattnewport · 2010-03-24T23:23:05.016Z · LW(p) · GW(p)

If the government doesn't provide it, just who is going to?

Charities, family, friends, well meaning strangers... The desire to help others does not exist because of government. There might be more or fewer resources devoted to charity in the absence of government intervention; I haven't seen much evidence either way. Libertarians commonly argue that private charity is more effective than government welfare and that it makes for a healthier society. A typical example of this case is here -- the first such argument I found on Google. Now you can certainly dispute these claims, but you talk as if you are not even aware that such alternative arguments exist.

It's not a matter of loyalty, but of having the knowledge and resources to work with to make something possible. I would certainly like to see such a level of care provided to everyone worldwide, but knowing what little I know about average wealth worldwide, this seems an unrealistic goal for the immediate future. It seems entirely realistic for the US, however.

You appear to be shifting the goalposts. You started out arguing that your main concern is to minimize suffering:

First, I'm inclined to think that suffering should be weighted more heavily than benefit. More specifically: it's more important to make sure nobody falls below a certain universal level of comfort than it is to allow people who are already at that level (or higher) to do better.

Now you are saying that because that is an unrealistic goal you instead think it is more important to make people who are already relatively well off by global standards (poor Americans) better off than it is to minimize the suffering of the global poor. If your goal is really to minimize human suffering, I don't see how you can argue that guaranteeing housing and healthcare for Americans is a more effective approach than anti-malarial medications, vaccines or antibiotics for African children.

My understanding is that if you look at the correlation between social welfare and overall wealth, it is positive -- not negative, as you seem to imply. However, we do need some numbers for this so we're not arguing subjective impressions.

Subtly different question. It is true that many wealthy countries also have relatively generous welfare systems (particularly in Europe) but they have been able to afford these systems because they were already relatively wealthy. Studies that find negative effects are generally looking at relative growth rates but the difficulty of properly controlling such studies makes them somewhat inconclusive.

I think this is exactly what we should be doing, until we have a sufficient sampling of studies that it might actually average out to something approaching reality.

I've been down this road before in discussions of this nature and they usually degenerate into people throwing links to studies back and forth that neither side really has taken the time to read in detail. The discussion usually just derails into arguing about why this or that study is not adequately controlled. I think it is fair to say that existing studies are rarely definitive enough to overcome pre-existing ideological biases.

I'm aware of the broad outlines of the theory and some of the particulars, but I'm not aware of how it supports your conclusions.

The two main points to note are, first, that governments in reality are not in the business of implementing 'enlightened' policies that address imbalances of power, but rather are in the business of creating imbalances of power for the benefit of special interests and the politicians they own; and second, that many government regulations sold as protecting the interests of the general electorate are in actual fact designed to protect particular special interest groups.

Genuine deregulation has a good track record. It should not be confused with false deregulation like that seen in the financial industry, which is really just 're-regulation': adjusting an existing distorted playing field to favour the incumbents even more heavily.

Replies from: woozle
comment by woozle · 2010-03-25T01:17:26.359Z · LW(p) · GW(p)

[woozle] If the government doesn't provide it, just who is going to?

[mattnewport] Charities, family, friends, well meaning strangers...

So, why aren't they? How can we make this happen -- what process are you proposing by which we can achieve universal welfare supported entirely by such means?

You appear to be shifting the goalposts. You started out arguing that your main concern is to minimize suffering...

I didn't state the scope; you just assumed it was global. My goal remains as stated -- minimizing suffering -- but I am not arguing for any global changes (yet).

Now you are saying that because that is an unrealistic goal you instead think it is more important to make people who are already relatively well off by global standards (poor Americans) better off than it is to minimize the suffering of the global poor.

Umm... no, I said no such thing. My suggestion does not have any clear effects on the rest of the world. If anything, it would allow Americans to be more charitable to the world at large, as we would not have to be worrying about taking care of our own first.

I am not proposing taking away any existing American aid to other countries. I'm arguing we are wealthy enough to provide for ourselves solely through reallocation of resources internally, not by diverting them away from external causes (or worse, by stealing resources from other countries as we have a long history of doing; indeed, if it turns out that we cannot achieve universal welfare without robbing others, then I would say that we need to stop robbing others first, then re-assess the situation).

It is true that many wealthy countries also have relatively generous welfare systems (particularly in Europe) but they have been able to afford these systems because they were already relatively wealthy. Studies that find negative effects are generally looking at relative growth rates but the difficulty of properly controlling such studies makes them somewhat inconclusive.

Then I would say that your claim that welfare hurts a country's overall wealth is based on weak data. What is the basis for your belief in this theory, since the studies which might resolve that question have returned inconclusive results?

I've been down this road before in discussions of this nature and they usually degenerate into people throwing links to studies back and forth that neither side really has taken the time to read in detail.

If nobody has time to at least compile the results, then there's really no point in even having the discussion -- since looking at existing data is really the only rational way to attempt to resolve the question.

If you actually have some links, I will at least file them on Issuepedia and give you a link to the page where they -- and any other related information which may have been gathered (or may be gathered in the future) -- can be found.

I will also try to summarize any further debating we do on the subject, so that neither of us will need to rehash the same ground in future debates (with each other or with other people).

I hope that this addresses your apparent concern (i.e. that exchanging actual data on this topic would be a waste of time).

The two main points to note are, first, that governments in reality are not in the business of implementing 'enlightened' policies that address imbalances of power, but rather are in the business of creating imbalances of power for the benefit of special interests and the politicians they own; and second, that many government regulations sold as protecting the interests of the general electorate are in actual fact designed to protect particular special interest groups.

That is the reason I think we need to re-invent government. I don't think government is automatically evil, though it is certainly vulnerable to just the sorts of mechanisms you identify.

This is certainly not the purpose for which government was invented, and saying that government is the problem just because it becomes a problem is like saying guns are always evil because they kill people -- or dynamite is always evil because it is used in warfare. Any tool can be misused.

I propose that government is necessary, and that rather than declaring it evil and trying to render it as small and weak as possible, we should be learning how to build it better -- and how to regain control of it when it goes astray.

Genuine deregulation has a good track record. It should not be confused with false deregulation...

Can you give me some examples (mainly of genuine deregulation -- I got the financial industry non-deregulation; will have to ponder that example)?

Replies from: mattnewport
comment by mattnewport · 2010-03-25T01:36:26.988Z · LW(p) · GW(p)

Can you give me some examples (mainly of genuine deregulation -- I got the financial industry non-deregulation; will have to ponder that example)?

I don't have time to reply to your whole post right now (I'll try to give a fuller response later) but telecom deregulation is the first example that springs to mind of (imperfect but) largely successful deregulation.

Replies from: woozle
comment by woozle · 2010-03-25T11:40:38.922Z · LW(p) · GW(p)

Amen to that... I remember when it was illegal to connect your own equipment to Phone Company wires, and telephones were hard-wired by Phone Company technicians.

The obvious flaw in the current situation, of course, is the regional monopolies -- slowly being undercut by competition from VoIP, but still: as it is, if I want wired phone service in this area, I have to deal with Verizon, and Verizon is evil.

This suggests to me that a little more regulation might be helpful -- but you seem to be suggesting that the lack of competition in the local phone market is actually due to some vestiges of government regulation of the industry -- or am I misunderstanding?

(No rush; I shouldn't be spending so much time on this either... but I think it's important to pursue these lines of thought to some kind of conclusion.)

Replies from: woozle, mattnewport
comment by woozle · 2010-03-25T11:54:19.001Z · LW(p) · GW(p)

A little follow-up... it looks like the major deregulatory change was the Telecommunications Act of 1996; the "freeing of the phone jack" took place in the late 1970s or early 1980s, and modular connectors (RJ11) were widespread by 1985, so either that was a result of earlier, less sweeping deregulation or else it was simply an industry response to advances in technology.

comment by mattnewport · 2010-03-26T04:52:13.403Z · LW(p) · GW(p)

This suggests to me that a little more regulation might be helpful -- but you seem to be suggesting that the lack of competition in the local phone market is actually due to some vestiges of government regulation of the industry -- or am I misunderstanding?

An example of the type of special-interest driven regulation presented as consumer protection that I'm talking about is the established phone companies trying to use the E911 regulations to hamper VOIP companies that threaten their monopolies. This type of regulatory capture is very common.

comment by RobinZ · 2010-03-24T10:41:11.199Z · LW(p) · GW(p)

What about Works Progress Administration-style programs?

Replies from: mattnewport
comment by mattnewport · 2010-03-24T15:55:39.563Z · LW(p) · GW(p)

I think they are a terrible idea. I'm not sure what Hayek's position was on them, but I imagine he would have thought so too. They result in the government making decisions about how to invest resources, with all the problems that entails.

comment by woozle · 2010-03-17T23:24:47.760Z · LW(p) · GW(p)

Thanks for bringing up Haidt; I've taken a close look at some of his writing (e.g. this) and concluded that he is full of it... or at least not being rational.

The Five Pillars of Morality theory in particular falls apart if you scratch the surface too hard.

I'm not sure what you mean by "terminal values", however.

Replies from: Jack, Jack, Nick_Tarleton, mattnewport, mattnewport
comment by Jack · 2010-03-18T01:01:23.750Z · LW(p) · GW(p)

"Terminal" is just to distinguish them from "instrumental. If you value freedom because it makes people happy then it is an instrumental value. If you value freedom for it's own sake then it is terminal.

comment by Jack · 2010-03-18T01:00:12.802Z · LW(p) · GW(p)

I think Matt is right. This is a rationalist intervention.

Take it from another liberal/leftist (I can give damn good bona fides if need be): Politics has killed your mind. At least on this Haidt stuff but possibly elsewhere. The guy isn't a neoconservative. But more importantly, he isn't a pundit or a hack. He's a scientist and he's got bucketloads of data to back up his hypothesis. Your response to Haidt is written like someone trying to win a fight, not like someone trying to understand the world.

Edit: Oh, these things always end with "I love you. Please get help today."

Replies from: woozle, woozle
comment by woozle · 2010-03-18T01:09:15.989Z · LW(p) · GW(p)

I also think you're misunderstanding my criticism of Haidt. Yes, he has lots of data to support his claims -- but he rigged the experiments in the way he asked his questions, and he hasn't responded to the obvious flaws in his analysis.

Nor have you.

Replies from: Jack, mattnewport
comment by Jack · 2010-03-18T01:36:15.625Z · LW(p) · GW(p)

but he rigged the experiments in the way he asked his questions

!?!?!??! What evidence have you for this? Note that the theory wasn't designed to say anything about politics. It was designed to describe cross-cultural moral differences in different parts of the world; only later was it applied to the American culture wars.

comment by mattnewport · 2010-03-18T01:12:59.422Z · LW(p) · GW(p)

He's been criticized by some libertarians for neglecting them as a political group and they have raised similar concerns. His reply is here.

Replies from: woozle
comment by woozle · 2010-03-18T01:38:46.034Z · LW(p) · GW(p)

So yes, liberals would consider voting for a republican as a kind of treason.

Does he have data for this? I would vote for whoever seemed the most sensible, regardless of party. If Ron Paul had run against Obama, I would have had a much harder time deciding.

comment by woozle · 2010-03-18T01:07:22.355Z · LW(p) · GW(p)

Does everyone else agree with Jack and Matt? I don't think either of them has rationally justified their criticisms, but that may be my personal akrasia.

comment by Nick_Tarleton · 2010-03-18T01:57:09.609Z · LW(p) · GW(p)

I'm not sure what you mean by "terminal values", however.

"Terminal Values and Instrumental Values". You might also want to check out the metaethics sequence, if you haven't already.

Replies from: woozle
comment by woozle · 2010-03-18T11:08:21.732Z · LW(p) · GW(p)

"Terminal Values" are goals, then, and "Instrumental Values" are the methods used in an attempt to reach those goals. Does that sound right? So now I need to go reply to Jack again...

comment by mattnewport · 2010-03-17T23:57:05.983Z · LW(p) · GW(p)

I've been wary of posting this because I have had trouble finding a way to phrase it in a non-antagonistic manner but I think at this point it needs saying.

Your personal political leanings are transparent in your original post. I took a quick look around issuepedia after reading the post to see if my impression of your distinctly partisan position was confirmed by what I found there and it quickly became clear that it was. The gulf between your stated claim of aiming for un-biased political discussion and truth seeking and the actual content of your site is vast.

On the other hand, it hasn't been clear to me, when reading his work, exactly what Jonathan Haidt's party politics are. I guessed somewhat liberal but wouldn't have put money on it. He certainly does a much better job of appearing unbiased and non-partisan to me than you do. Your linked post on What Makes People Vote Republican suggests you think he's some kind of neo-con ("As a piece of neoconservative propaganda, it is splendid"), which didn't seem to fit the facts to me. I couldn't find anything definitive on Haidt's personal politics but this interview suggests he's basically a liberal:

JH: We would become much more tolerant, and some compromise might be possible, for example, on gay marriage. Even though personally I would like to see it legalized everywhere, I think it would be a nice compromise if each state could decide whether to legalize it, and nobody was forced one way or the other by the Supreme Court.

...

JH: Well, for one thing, I am more tolerant of others. I was much more tolerant of Republicans and conservatives until the last two years. George Bush and his administration have got me so angry that I find my hard-won tolerance fast disappearing. I am now full of anger. And I find my press secretary drawing up the brief against Bush and his administration. So I can say that doing this work, coming up with this theory, has given me insight into what I’m doing. When I fulminate, my press secretary writes a brief against Bush. Once passions come into play, reason follows along. At least now I know that I’m doing it.

...

JH: It’s fine with me. Doesn’t bother me in the least. Remember: I’m a liberal. So if it doesn’t involve harm to someone, it’s not a big deal to me.

Replies from: woozle
comment by woozle · 2010-03-18T00:49:34.076Z · LW(p) · GW(p)

These are understandable concerns... but I make no claim of being unbiased in content. (Did I actually say this anywhere? If so, I need to revise that.)

The ultimate resolution of a debate should not be partisan (i.e. adhering to any particular party's viewpoint just because that party holds it), but that doesn't mean the initial claims have to be neutral. The point of the exercise is to make claims one is prepared to defend -- partisan or not -- and then invite others to attack them.

Either my mind ends up being changed, the minds of the attackers are changed, some combination, or some kind of impasse is reached. Regardless of which scenario ensues, I should think that we all learn something about the process and how to avoid impasse.

Also, just because something happens to align with a particular party's views does not make it wrong. I would argue (and have done so frequently) that the Democrats are right far more often than the Republicans; the latter are often egregiously wrong, in extremely harmful ways.

Reality is not defined by averaging all viewpoints.

I don't know if Haidt even realizes his arguments are basically a cover for neocon propaganda; he comes across as honestly believing he is a liberal. His arguments, however, are clearly irrational (which is something we can't "agree to disagree" about; if you can counter my explanations of how they are irrational, then go for it) -- and they have certainly been used by conservatives as a put-down for liberalism. In any case, I don't dispute your claim that Haidt's expressed political views seem to be primarily liberal -- for now. I would like, however, to see his answers (or anyone's answers) to the charges I have raised on those pages.

Replies from: mattnewport
comment by mattnewport · 2010-03-18T01:00:13.238Z · LW(p) · GW(p)

I think we've looped back to the whole 'politics is the mind killer' issue again. From what I've read of your writing on issuepedia you strike me as firmly in 'mind killer' territory on the subject of politics, and I'm neither inclined to read more of your writing nor to engage in discussion with you, because I expect it to be unproductive. If you honestly want to turn your site into a mechanism for political truth seeking then in my opinion you need to adjust your mindset and writing style. While I don't object to political discussion here as a matter of principle -- I think some discussions here have demonstrated the ability to discuss politics rationally -- I would object to political discussion that takes the tone and approach you do on your site.

comment by mattnewport · 2010-03-17T23:28:50.226Z · LW(p) · GW(p)

You apparently haven't kept up with developments of the theory. You say:

Liberals are generally far more concerned about purity of environmental conditions than are conservatives. Food is a good example: filtered water, organic foods, avoidance of over-processing, and avoidance of synthetic ingredients in food are all very much liberal causes, ignored or even disparaged by conservatives.

Which Haidt has recently addressed specifically in a blog post entitled "In Search of Liberal Purity".

Replies from: woozle
comment by woozle · 2010-03-18T00:59:57.591Z · LW(p) · GW(p)

You're right, I hadn't encountered any new items from Haidt since the "why do people vote Republican" piece on Edge.

(Lest there be any misunderstanding: the number of follow-ups I would like to investigate seems to grow exponentially with each bit of investigating I actually do. Time is obviously not available to keep up with more than a tiny percentage of what I would like to keep up with.)

I'm glad to see he is at least acknowledging the existence of "liberal purity" -- and even seems to realize that it exposes a weakness in his "Five Pillars" argument -- but, as far as I can tell, he does absolutely nothing to address that weakness.

comment by Morendil · 2010-03-17T13:04:11.766Z · LW(p) · GW(p)

The post title (and a few section titles) would be improved by a capital O.

Replies from: woozle
comment by woozle · 2010-03-17T22:58:25.985Z · LW(p) · GW(p)

Is there a style guide somewhere? I didn't want to be pretentious...

comment by thomblake · 2010-03-17T13:20:57.420Z · LW(p) · GW(p)

am I deluded, or is half the country deluded?

Do you have to ask that in every Presidential election? Should everyone in the country be asking that? Or is there a simpler, alternative hypothesis? (for example, half the country liked a different guy than you did)

Replies from: woozle
comment by woozle · 2010-03-17T22:57:23.169Z · LW(p) · GW(p)

No, there has never before been a presidential election where I thought the other side had to be absolutely bat-guano blind stinkin' stark raving Zombo-watchin' hypnotoad-lovin' frelling, frakking, and other copulatory synonyms INSANE.

Understanding how anyone could possibly like "a different guy", in that particular election, is the whole point. ...or, rather, that particular different guy. I wouldn't have balked at anyone who voted for a third-party or indy candidate; Kerry was never terribly inspiring. But Bush? Given everything he did between 2001 and 2004?

Replies from: Unnamed
comment by Unnamed · 2010-03-18T02:08:23.313Z · LW(p) · GW(p)

If you're trying to understand US Presidential election results, you can explain a lot of variance if you just assume that about 40% of voters will favor the Republican, 40% favor the Democrat, and the rest tend towards the incumbent but will shift their vote depending on how things are going in the country. Which is what you might expect in a country with a two-party system and voters who don't follow politics very closely. Mapping out political arguments over specific issues in detail can have some value, but it's probably not going to do much to explain election results.

Replies from: woozle, ata
comment by woozle · 2010-03-18T10:58:29.537Z · LW(p) · GW(p)

I'm trying to understand them in a rational context. Most years, that pattern made some kind of sense -- the major candidates were both lizards, neither one obviously better or worse than the other.

Continuing to vote along party lines after 3 years of experience with Bush, however, is a different beast. Either people were simply unaware of many of the key points (or possibly were aware of positive points unknown to anyone that I talked to), or else they were using an entirely different process for evaluating presidential fitness. In the former case, we have a problem; in the latter, something worthy of study.

Replies from: Unnamed
comment by Unnamed · 2010-03-18T22:45:30.226Z · LW(p) · GW(p)

I think your method works better as an attempt to engage in politics without having your mind killed (avoiding the mistakes that are typical of the political world) than as a way to explain real-world political outcomes. If you want a more detailed explanation of a particular election than the structural account in my last comment, I'd offer something like this:

In 2004, the campaign involved a lot of noise and harsh criticisms of both candidates, and it wasn't easy to filter out the accurate, damning criticisms of Bush from the rest. This would be especially hard for voters who were inclined to trust Bush over Kerry, and the post-9/11 rally-around-the-flag effect (along with the tendency for Republicans to be more trusted on national defense and patriotism) meant that a lot of voters at least started out with an inclination to trust Bush, especially on the salient issue of national defense. Plus, many of the bad things about Bush also cast the country in a bad light, which meant that voters' natural defensiveness would kick in.

The focus is on voters' perceptions, trying to analyze them like a social scientist, rather than more rigorously evaluating the content of political arguments.

Replies from: woozle
comment by woozle · 2010-03-23T23:46:01.787Z · LW(p) · GW(p)

That's actually my main goal, at least now -- to be able to make rational decisions about political issues. This necessarily involves achieving some understanding of the methods by which voter perceptions are manipulated, but that is a means to an end.

In 2004, I thought it entirely possible that I was simply highly biased in some hitherto unnoticed way, and I wanted to come to some understanding of why half the country apparently thought Bush worthy of being in office at all, never mind thinking that he was a better choice than Kerry.

I was prepared to find that there were certain values which I did not share but could at least respect. What I found was... quite incredible: a large part of the population seems to implicitly believe that it's more important for someone (or some small group) to have power than it is for that person or group to have the first clue about how to use that power.

Outright lies are apparently an acceptable method for reinforcing that power, as long as they work. Claiming the Constitution as the basis for your actions, while subtly working to undermine every protection it provides, is also acceptable. Displays of "strength" and power are more important than displays of intelligence or judgment.

While a large segment of the population does not follow these values, the very existence of these values unfortunately warps the whole dialogue to the point where they are seen as a reasonable position -- even though they obviously are not -- and the best "compromise" is therefore perceived as being somewhere about halfway between rationality and utter insanity.

This is obviously a problem.

What I am trying to do now is find a process by which non-privileged citizens can make the "best" possible policy decisions -- where I propose "best possible policy decisions" should mean something like "decisions which cause the least individual harm while maximizing society's overall progress towards whatever goals we can all agree are acceptable" -- and be able to do so even in the face of the disinformation presented by the insane viewpoint described above (and to avoid the temptation of the "fallacy of moderation" in negotiating with it).

This process is presumed to be rational in nature -- perhaps by definition, since rationality is essentially "best practices in the area of thinking".

comment by ata · 2010-03-18T02:17:05.311Z · LW(p) · GW(p)

but it's probably not going to do much to explain election results.

If I understand correctly, that's not the goal in the first place. The goal is to map out issues, evidence, arguments, values, etc. in a way that is much more rigorous than most voters' thought processes, so as to help participants improve their own clarity on such issues. Clearly that won't be a good way to model the mind of a typical voter or average voter behaviour, but I doubt anyone would suggest using it for that.

comment by Morendil · 2010-03-17T13:51:19.436Z · LW(p) · GW(p)

Most of the content is my writing, as I am better at writing than at community-building

I'd be a lot more worried about that than you seem to be. There is some reason to believe that collective efforts at truth-finding are more likely to be fruitful than individual ones.

Replies from: woozle
comment by woozle · 2010-03-17T22:32:57.741Z · LW(p) · GW(p)

Even if my community size never grows beyond n=1 (which arguably it already has, since I've had several lengthy debates with people who aren't me), the level of error still can't be any worse than it would be had I not started the site (or posted my opinions anywhere) in the first place, right?

That said, yes, community is important. (I'm not sure "worried" is what you really meant, but I am certainly concerned about it.) Unfortunately, I don't see any options at present for improving the situation, given my time availability.

(This also strikes me as a further reason for "being bold" when making assertions -- or "setting the bar quite low" for verification, as dripgrind put it -- an extreme assertion is more likely to draw in a debate than a mild one. I don't go out of my way to make a given statement more extreme, but I try to avoid tempering what I say any more than is necessary to ensure defensibility based on the facts as I understand them.)

Replies from: mattnewport, Kevin
comment by mattnewport · 2010-03-17T22:53:59.697Z · LW(p) · GW(p)

the level of error still can't be any worse than it would be had I not started the site (or posted my opinions anywhere) in the first place, right?

It could be -- you could interpret a lack of response as evidence that there are no good arguments against your position, rather than as a lack of interest in engaging with you on the issues, or a lack of discovery/knowledge of the site, for example. I'm not saying you do that, but it is one way in which creating this site could increase your confidence in incorrect opinions, if the community is small or significantly self-selecting.

comment by Kevin · 2010-03-17T22:38:13.439Z · LW(p) · GW(p)

Community building is hard, but some things you can try, like submitting to reddit, are easy.

comment by PlaidX · 2010-03-17T12:13:16.751Z · LW(p) · GW(p)

On the other hand, many issues really do seem to boil down to such a simple narrative, something best stated in quite stark terms. Individuals who are making an effort to be measured and rational often seem to reject out of hand the possibility that such simple, clearcut conclusions could possibly be valid, leading to the opposite bias -- a sort of systemic "fallacy of moderation". This can cause popular acquiescence to beliefs that are essentially wrong, such as the claim that "the Democrats do it too" when pointing out evils committed by the latest generation of Republicans.

For example...

Replies from: woozle, Kevin
comment by woozle · 2010-03-17T23:08:31.569Z · LW(p) · GW(p)

The arguability of the claim that "Republicans are slightly better" was significantly higher in 2000, especially given the absence of much information which only became widely available later on. Still, I would be interested in hearing a defense of that statement, if Eliezer still believes it.

(Belated worry: what is the site policy on political discussion? Is it still discouraged?)

Replies from: Unnamed, Kevin
comment by Unnamed · 2010-03-18T07:24:54.329Z · LW(p) · GW(p)

Eliezer wrote about his 2000 misjudgment a couple years ago, let's see ... here:

In 2000, the comic Melonpool showed a character pondering, "Bush or Gore... Bush or Gore... it's like flipping a two-headed coin." Well, how were they supposed to know? In 2000, based on history, it seemed to me that the Republicans were generally less interventionist and therefore less harmful than the Democrats, so I pondered whether to vote for Bush to prevent Gore from getting in. Yet it seemed to me that the barriers to keep out third parties were a raw power grab, and that I was therefore obliged to vote for third parties wherever possible, to penalize the Republicrats for getting grabby. And so I voted Libertarian, though I don't consider myself one (at least not with a big "L"). I'm glad I didn't do the "sensible" thing. Less blood on my hands.

Replies from: mattnewport
comment by mattnewport · 2010-03-18T07:51:53.274Z · LW(p) · GW(p)

It's interesting to consider an alternative universe where Gore won that election and there was a win-win scenario: no invasion of Iraq and no extra publicity for global warming alarmists due to An Inconvenient Truth never being made.

It's also quite possible, however, that Gore would have jumped on the bandwagon even if he'd been elected, in which case he might have done far more damage than Bush did by enacting some kind of cap-and-trade legislation.

Despite the many and varied catastrophic policy choices of the Bush administration it's still far from obvious that Gore would have been a better choice.

Replies from: Unnamed, thomblake
comment by Unnamed · 2010-03-18T08:38:19.534Z · LW(p) · GW(p)

Gore has a long history of concern about global warming, and it's pretty clear that he would've at least tried to enact restrictions on carbon emissions if he'd been President. But let's not turn this thread into a debate over whether that would've been a good or bad policy, or over global warming or Al Gore more generally.

comment by thomblake · 2010-03-18T13:54:08.634Z · LW(p) · GW(p)

no extra publicity for global warming alarmists due to An Inconvenient Truth never being made.

I would not be surprised if in that universe, it became the first full-length movie made by a president in office.

comment by Kevin · 2010-03-18T01:54:28.120Z · LW(p) · GW(p)

I think we can safely assume that Eliezer disagrees with Eliezer_2000 with regards to that statement.

Political discussion was discouraged, but I think we've probably all practiced rationality enough to talk politics now without degenerating into shouting matches. Thanks for starting the discussion.

Replies from: wedrifid, thomblake, Document
comment by wedrifid · 2010-03-18T02:12:06.390Z · LW(p) · GW(p)

Political discussion was discouraged, but I think we've probably all practiced rationality enough to talk politics now without degenerating into shouting matches. Thanks for starting the discussion.

Maybe not shouting matches (it becomes too easy for opponents to get away with mass downvotes). But political discussions here degrade the quality of discussion and the quality of thinking drastically. This applies to some of the conversations on mainstream political issues. It was frustratingly obvious when it came to any conversations about Knox after the people who didn't care finished using it as a case study. But politics really becomes the mind-killer when it comes to actual LessWrong social politics, explicit and otherwise.

Mind killing isn't about shouting matches. It's about bullshit.

Replies from: komponisto
comment by komponisto · 2010-03-18T07:33:14.554Z · LW(p) · GW(p)

But political discussions here degrade the quality of discussion and the quality of thinking drastically ... It was frustratingly obvious when it came to any conversations about Knox after the people who didn't care finished using it as a case study

Thinking about why I disagree with the latter sentence has led me to discover another reason why I agree with the former.

There is really nothing political about the Knox case; it's simply a question of what did or did not happen at Via della Pergola 7 in Perugia between November 1 and 2 of 2007. And yet, almost everywhere it was discussed, people were unable to avoid turning it into a political issue: it was always about the Italian legal system, anti-Americanism, American arrogance, sexual mores, white-middle-class privilege, or what have you. (When Senator Maria Cantwell reacted to the verdict, did she dare express outrage that the life of one of her constituents had been ruined by the failure of eight people to understand probability theory? No, she spoke of "anti-Americanism".)

Everywhere, that is, except on Less Wrong -- where there was little or no discussion of these perhaps-interesting but strictly tangential matters. Here, it was pretty much exclusively about the facts of the case and the epistemic issues involved. (Contrast the discussion in the Richard Dawkins Forum, where people could not resist the temptation to lapse into ad-hominem attacks on the nationality -- stated or supposed -- of their opponents; there was nothing like that here at all.)

Now, I don't know for sure that our informal policy of discouraging political discussions was causally decisive in keeping the quality high in this instance. But I can't escape the conclusion that people have a natural tendency to see tribal politics in everything -- so that Less Wrong's "taboo" against politics not only prevents standard political flamewars but also, through learned cognitive habit, helps us avoid turning our ordinary discussions into political disputes.

What we're wanting to avoid, in other words, is not just political talk but also the political mindset. Our unusually positive experience with the Knox case suggests to me that restricting the former may actually help fight the latter.

Replies from: woozle
comment by woozle · 2010-03-22T14:48:30.759Z · LW(p) · GW(p)

This is a good example of why we need a formalized process for debate -- so that irrelevant politicizations can be easily spotted before they grow into partisan rhetoric.

Part of the problem also may be that people often seem to have a hard time recognizing and responding to the actual content of an argument, rather than [what they perceive as] its implications.

For example (loosely based on the types of arguments you mention regarding Knox, but using a topic I'm more familiar with):

  • [me] Bush was really awful.
  • [fictional commenter] You're just saying that because you're a liberal, and liberals hate Bush.

The reply might be true, but it doesn't address the claim that "Bush was awful"; it is an ad hominem based on an assumption about me (that I am a liberal), my intellectual honesty (that I would make an assertion just to be agreeing with a group to which I belong), and the further presumption that there aren't any good reasons for loathing Bush.

As a rational argument, it is plainly terrible -- it doesn't address the content to which it is responding. I suspect this was also the problem with the politicalism that happened regarding the Knox issue -- if respondents had focused on the actual content of the arguments rather than on their perceived implications, the politicization might never have gotten started.

It should be easier to identify arguments of that nature, and "take them down" before they spawn the kind of discussion we all want to avoid.

"Vote down" is presumably one way to do that -- if enough people vote down such comments, then they get automatically "folded" by the comment system, and are more likely to be ignored (hopefully preventing further politicalism) -- but apparently that mechanism hasn't been having the desired effect.

Another problem with "Vote down" is that many people seem to be using it as a way of indicating their disagreement with a comment, rather than to indicate that the comment was inappropriate or invalid.

Are there any ongoing discussions about improving/redesigning/altering the comment-voting/karma system here at LessWrong?

(I was going to type more, but there were interruptions and I've lost the thread... will come back later if there's more.)

Replies from: mattnewport, komponisto, NancyLebovitz
comment by mattnewport · 2010-03-22T18:01:17.681Z · LW(p) · GW(p)

Another problem with "Vote down" is that many people seem to be using it as a way of indicating their disagreement with a comment, rather than to indicate that the comment was inappropriate or invalid.

I've always felt that a valid use of the karma system is to vote up things that you believe are less wrong and vote down things that you believe to be more wrong.

Replies from: kpreid, RobinZ, woozle
comment by kpreid · 2010-03-22T19:43:47.235Z · LW(p) · GW(p)

I have voted this comment up because I think this idea should be discussed.

comment by RobinZ · 2010-03-22T18:59:04.322Z · LW(p) · GW(p)

Agreed - I often downvote because I believe a comment contains wrong data, such that believing the comment would be harmful to the reader.

comment by woozle · 2010-03-23T22:11:34.840Z · LW(p) · GW(p)

This seems a valid interpretation to me -- but is "wrongness" a one-dimensional concept?

A comment can be wrong in the sense of having incorrect information (as RobinZ points out) but right in the sense of arriving at correct conclusions based on that data -- in which case I would still count it as a valuable contribution by offering the chance to correct that data, and by extension anyone who arrived at that same conclusion by believing that same incorrect data.

By the same token, a comment might include only true factual statements but arrive at a wrong conclusion by faulty logic.

I think I would be inclined, in any ambiguous case such as that (or its opposite), to base an up-or-down vote on the question of whether I thought the commenter was honestly trying to seek truth, however poorly s/he might be doing so.

Should commenters be afraid to repeat false information which they currently believe to be true, for fear of being voted down? (This may sound like a rhetorical question, but it isn't.)

Replies from: mattnewport
comment by mattnewport · 2010-03-23T23:00:00.926Z · LW(p) · GW(p)

I think I would be inclined, in any ambiguous case such as that (or its opposite), to base an up-or-down vote on the question of whether I thought the commenter was honestly trying to seek truth, however poorly s/he might be doing so.

I don't think that is in keeping with the overall goals of this site. You should get points for winning (making true statements) not for effort. "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."

This doesn't necessarily mean instantly downvoting anyone who is confused but it does mean that I'm not inclined to award upvotes for well meaning but wrong comments.

Should commenters be afraid to repeat false information which they currently believe to be true, for fear of being voted down? (This may sound like a rhetorical question, but it isn't.)

Yes. Commenters should assume their comments will be read by multiple people, and so should make a reasonable effort to check their facts before posting. A few minutes spent fact-checking any uncertain claims to avoid wasted time on the part of readers is something I expect of commenters here, and punishing factual inaccuracies with a downvote signals that expectation.

'Reasonable effort' is obviously somewhat open to interpretation, but if one's readers can find evidence of factual inaccuracy in a minute or two of googling, then one has failed to clear the bar.

Replies from: woozle
comment by woozle · 2010-03-24T01:12:09.480Z · LW(p) · GW(p)

I would suggest that it makes no sense to reward getting the right answer without documenting the process you used, because then nobody benefits from your discovery that this process leads (in at least that one case) to the right answer.

Similarly, I don't see the benefit of punishing someone for getting the wrong answer while sincerely trying to follow the right process. Perhaps a neutral response is appropriate, but we are still seeing a benefit from such failed attempts: we learn how the process can be misunderstood (because if the process is right, and followed correctly, then by definition it will arrive at the right answer), and thus how we need to refine the process (e.g. by re-wording its instructions) to prevent such errors.

Perhaps "Rationality is the art of winning the truth."?

Actually, I really don't like the connotations of the word "winning" (it reminds me too much of "arguments are soldiers"); I'd much rather say something like "Rationality is the art of gradually teasing the truth from the jaws of chaos." Karma points should reflect whether the commenter has pulled out more truth -- including truth about flaws in our teasing-process -- or (the opposite) has helped feed the chaos-beast.

comment by komponisto · 2010-03-22T17:53:53.568Z · LW(p) · GW(p)

This is a good example of why we need a formalized process for debate

At the risk of harping on what is after all a major theme of this site, we do in fact have one -- it's called Bayesianism.

How should a debate look? Well, here is how I think it should begin, at least. (Still waiting to see how this will work, if Rolf ever does decide to go through with it.)

In fact, let's try to consider your example from a Bayesian perspective:

(A) Bush was really awful.

(B) You're just saying that because you're a liberal, and liberals hate Bush.

Now, of course, you're right that (A) "doesn't address" (B) -- in the sense that (A) and (B) could both be true. But suppose instead that the conversation proceeded in the following way:

(A) Bush was really awful.

(B') No he wasn't.

In this case (B') directly contradicts (A); which is about the most extreme form of "addressing" there is. Yet, this hardly seems an improvement.

The reason is that, at least for Bayesians, the purpose of such a conversation is not to arrive at logical contradictions; it's to arrive at accurate beliefs.

You'll notice, in this example, that (A) itself isn't much of an argument; it just consists of a statement of the speaker's belief. The actual implied argument is something like this:

(A1) I say that Bush was really awful.

(A2) Something I say is likely to be true.

(A3) Therefore, it is likely that Bush was really awful.

The response,

(B) You're just saying that because you're a liberal, and liberals hate Bush.

should in turn be analyzed like this:

(B1) You belong to a set of people ("liberals") whose emotions tend to get in the way of their forming accurate beliefs.

(B2) As a consequence, (A2) is likely to be false.

(B3) You have therefore failed to convince me of (A3).

So, why are political arguments dangerous? Basically, because people tend to say (A) and (B) (or (A) and (B')) -- which are widely-recognized tribal-affiliation-signals -- rather than (A1)-(A3) and (B1)-(B3), at which point the exchange of words becomes merely a means of acting out standard patterns of hostile social interaction. It's true that (A) and (B) have the Bayesian interpretations (A1)-(A3) and (B1)-(B3), but the habit of interpreting them that way is something that must be learned (indeed, here I am explaining the interpretation to you!).
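To make that reading concrete, here is a minimal numeric sketch of the update involved. The probabilities are invented purely for illustration, not taken from anything in the thread; the point is only that (B1)-(B2) work as a discount on the evidence rather than a rebuttal of (A3) -- the assertion still shifts belief toward "Bush was really awful", just much less than it would coming from a speaker whose assertions track truth.

    # H = "Bush was really awful"; the evidence is "the speaker asserts H".
    # Illustrative numbers only.

    def posterior(prior, p_assert_if_true, p_assert_if_false):
        """P(H | the speaker asserted H), by Bayes' rule."""
        joint_true = prior * p_assert_if_true
        joint_false = (1 - prior) * p_assert_if_false
        return joint_true / (joint_true + joint_false)

    prior = 0.5  # before hearing anything from anyone

    # (A2): the speaker's assertions track truth fairly well.
    reliable_speaker = posterior(prior, 0.8, 0.2)   # -> 0.80

    # (B1)-(B2): a liberal would be likely to assert H whether or not it is
    # true, so the assertion carries much less information about H itself.
    partisan_speaker = posterior(prior, 0.9, 0.7)   # -> ~0.56

    print(reliable_speaker, partisan_speaker)

In other words, a well-founded (B) lowers how far the listener should move, which is a different thing from showing that (A) is false.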

Replies from: woozle
comment by woozle · 2010-03-23T22:46:41.252Z · LW(p) · GW(p)

I probably should have inserted the word "practical" in that sentence. Bayesianism would seem to be formalized, but how practical is it for daily use? Is it possible to meaningfully (and with reasonable levels of observable objectivity) assign the necessary values needed by the Bayesian algorithm(s)?

More importantly, perhaps, would it be at least theoretically possible to write software to mediate the process of Bayesian discussion and analysis? If so, then I'm interested in trying to figure out how that might work. (I got pretty hopelessly lost trying to do explicit Bayesian analysis on one of my own beliefs.)

The process I'm proposing is one that is designed specifically to be manageable via software, with as few "special admin powers" as possible.

...

"Bush was really awful" was intended more as {an arbitrary "starter claim" for me to use in showing how {rational debate on political topics} becomes "politicized"} than {an argument I would expect to be persuasive}.

If a real debate had started that way, I would have expected the very first counterargument to be something like "you provide no evidence for this claim", which would then defeat it until I provided some evidence... which itself might then become the subject of further counterarguments, and so on.

In this structure, "No he wasn't." would not be a valid counterargument -- but it does highlight the fact that the system will need some way to distinguish valid counterarguments from invalid ones; otherwise it has the potential to degenerate into posting "fgfgfgfgf" as an argument, and the system wouldn't know any better than to accept it.

I'm thinking that the solution might be some kind of voting system (like karma points, but more specific) where a supermajority can rule that an argument is invalid, with some sort of consequence to the arguer's ability to participate further if they post too many arguments ruled as invalid.
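As a rough illustration of what such a mechanism might look like in software, here is a minimal sketch. The supermajority threshold, the strike limit, and all of the names are hypothetical choices of my own -- woozle doesn't specify any of them -- so treat this as one possible shape for the idea rather than a design.

    # Hypothetical sketch: supermajority validity-voting on arguments, with a
    # participation consequence for repeat offenders. Thresholds are invented.
    from dataclasses import dataclass, field

    SUPERMAJORITY = 2 / 3   # fraction of votes needed to rule an argument invalid
    STRIKE_LIMIT = 3        # invalid rulings before participation is restricted

    @dataclass
    class Argument:
        author: str
        text: str
        valid_votes: int = 0
        invalid_votes: int = 0

        def ruled_invalid(self):
            total = self.valid_votes + self.invalid_votes
            return total > 0 and self.invalid_votes / total >= SUPERMAJORITY

    @dataclass
    class Debate:
        arguments: list = field(default_factory=list)
        strikes: dict = field(default_factory=dict)  # author -> invalid rulings

        def submit(self, author, text):
            if self.strikes.get(author, 0) >= STRIKE_LIMIT:
                return None  # too many arguments ruled invalid; posting restricted
            argument = Argument(author, text)
            self.arguments.append(argument)
            return argument

        def close_voting(self, argument):
            if argument.ruled_invalid():
                self.strikes[argument.author] = self.strikes.get(argument.author, 0) + 1

Whether the consequence should be a posting restriction, a karma penalty, or something milder is exactly the open question being discussed above.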

comment by NancyLebovitz · 2010-03-22T15:45:35.689Z · LW(p) · GW(p)

How Google translation works: "In practice, languages are used to say the same things over and over again."

How potentially informative conversations go redundant

There's a dynamic in conversations I'm noticing here, which is probably obvious to everyone else. I think for any given conversation, there are some "attractors"--directions the conversation could go which would be easy for many of the participants, but which would ultimately end all the interesting and useful parts of the conversation. And good moderation/guidance/curation involves steering the conversation away from those attractors.

For example, the talking heads shows I saw when the NYT ran the big story about massive, warrantless wiretapping by the NSA tended to quickly go from a potentially informative discussion about the specifics of the case, to a much easier-to-have discussion[1] about whether the NYT should have published the story, perhaps even about whether publishing it amounted to treason or should have gotten someone arrested.

These attractors happen both because they make for easy conversation and because they're useful for propagandists to set up.

I'm not sure that the karma system needs to be redesigned -- there's a limit to how much you can say with a number. It might help to have a "that was fun" category, but I think part of the point of karma is that it's easy to do, and having a bunch of karma categories might mean that people won't use it at all or will spend a lot of time fiddling with the categories.

We may have reached the point in this group where enough of us can recognize and defuse those conversations which merely wander around the usual flowchart and encourage people to add information.

Replies from: Morendil, woozle
comment by Morendil · 2010-03-22T16:49:52.795Z · LW(p) · GW(p)

enough of us can recognize and defuse those conversations which merely wander around the usual flowchart

Ahem.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-03-22T17:00:46.739Z · LW(p) · GW(p)

A fair example.

I may have overestimated the skill level of the group. Or maybe bringing up redundancy as a problem is the first move in developing that skill.

Replies from: RobinZ
comment by RobinZ · 2010-03-22T18:44:09.539Z · LW(p) · GW(p)

One method for dealing with such would be to have designated posts for threads on observed attractors, indexed on the wiki, and fork tangents into those threads.

In keeping with General Order Six: other methods include, as suggested, downvoting any derail into a recognized attractor, with explanation; adding known attractors to a list of banned subjects ... it might be best to combine some of these, actually.

Replies from: mattnewport, NancyLebovitz
comment by mattnewport · 2010-03-22T18:48:15.449Z · LW(p) · GW(p)

Before we start planning solutions should we perhaps establish whether there is a consensus that we even have a problem? One vote for 'no problem' from me.

Replies from: RobinZ
comment by RobinZ · 2010-03-22T18:50:56.390Z · LW(p) · GW(p)

Good question - let's watch for attractors for a month, and pay attention to how many turn up.

comment by NancyLebovitz · 2010-03-22T18:57:54.330Z · LW(p) · GW(p)

Attractors aren't just subjects; they're subjects which are commonly discussed in a way that couldn't pass a Turing test.

If we can manage to bring out new material on one of those subjects, so much the better for us.

Replies from: RobinZ
comment by RobinZ · 2010-03-22T19:05:38.256Z · LW(p) · GW(p)

That's a good reason to continue permitting such discussions, but given the continuing influx of new posters, I suspect there will still be repetition.

comment by woozle · 2010-03-23T23:10:18.881Z · LW(p) · GW(p)

The existence of conversational attractors is why I think any discussion tool needs to be hierarchical -- so any new topic can instantly be "quarantined" in its own space.

The LW comment system does this in theory -- every new comment can be the root of a new discussion -- but apparently in practice some of the same "problem behaviors" (as we say here in the High Energy Children Research Laboratory) still take place.

Moreover, I don't understand why it still happens. If you see the conversation going off in directions that aren't interesting (however popular they may be), can't you just press the little [-] icon to make that subthread disappear? I haven't encountered this problem here myself, so I don't know if there might be some reason that this doesn't work for that purpose.

Just now I tried using that icon -- not because I didn't like the thread, but just to see what happened -- and it very nicely collapsed the whole thing into a single line showing the commenter's name, timestamp, karma points, and how many "children" the comment has. What would be nice, perhaps, is if it showed the first line of content -- or even a summary which I could add to remind myself why I closed the branch. That doesn't seem crucial, however.
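For what it's worth, the "summary on collapse" idea is easy to sketch. The following is purely illustrative -- it is not how LessWrong's comment system actually works -- and shows a collapsed node rendering either a reader-supplied note or the comment's first line:

    # Toy hierarchical comment tree with collapse-to-summary rendering.
    # Purely illustrative; not LessWrong's actual implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        author: str
        body: str
        children: list = field(default_factory=list)
        collapsed: bool = False
        note: str = ""  # optional reminder for why the branch was closed

        def render(self, depth=0):
            indent = "  " * depth
            if self.collapsed:
                summary = self.note or (self.body.splitlines()[0] if self.body else "")
                return "%s[+] %s (%d children): %s" % (
                    indent, self.author, len(self.children), summary)
            lines = [indent + self.author + ": " + self.body]
            lines += [child.render(depth + 1) for child in self.children]
            return "\n".join(lines)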

comment by thomblake · 2010-03-18T01:57:17.221Z · LW(p) · GW(p)

I think we've probably all practiced rationality enough to talk politics now without degenerating into shouting matches

I disagree.

Politics is something we're wired to care about waaay too much, and "talking politics" is just not a good idea.

ETA: I avoid political discussions for my own sanity.

Replies from: Kevin
comment by Kevin · 2010-03-18T02:09:30.009Z · LW(p) · GW(p)

By talking politics, I meant talking meta-politics.

Replies from: thomblake
comment by thomblake · 2010-03-18T02:12:04.242Z · LW(p) · GW(p)

Oh, I think that's probably fine. If by meta-politics you mean something like political philosophy. Like, "Fascists believe X" "No they don't, they clearly believe Y, which is inconsistent with X"

Replies from: wedrifid
comment by wedrifid · 2010-03-18T02:22:37.410Z · LW(p) · GW(p)

"No they don't, they clearly believe Y, which is inconsistent with X"

Then... so? They are ists. ists believe inconsistent things all the time! That's how they signal how truly ist they are. @#%$ing ists.

Replies from: thomblake
comment by thomblake · 2010-03-18T02:29:29.662Z · LW(p) · GW(p)

I attempted to type up a humorous response and found it to be not very humorous, and then my touchpad ate it. Feel free to imagine a funnier response.

comment by Document · 2010-03-18T02:02:19.125Z · LW(p) · GW(p)

I think we've probably all practiced rationality enough to talk politics now without degenerating into shouting matches.

For the record, I started sporadically checking this site around mid January.

Replies from: Kevin
comment by Kevin · 2010-03-18T02:09:04.692Z · LW(p) · GW(p)

I meant that the community as a whole has practiced enough and reached a certain standard of discourse.

comment by Kevin · 2010-03-17T22:37:10.852Z · LW(p) · GW(p)

Oh, Eliezer_2000...

I think I am done with voting in national elections, though I have a while to be persuaded otherwise.

comment by bogus · 2010-03-17T23:00:09.428Z · LW(p) · GW(p)

If possible*, I recommend that you get in touch with the developers of the website www.openpolitics.ca: they were the first to actually apply 'open politics' (which is just about what you're describing here) in a real-world environment with the aim of influencing actual party policy.

[*] The disclaimer is there because I'm not sure that the site is maintained anymore -- they ran into a serious issue with Canadian libel law (which is unusually strict), although the offending content has since been removed. But a lot of useful theoretical stuff will be there, licensed as free and open content.

ETA: forgot to mention, most of the people who contributed to that wiki used to hang out at the openpolitics yahoogroup. Again, you'll find a lot of useful theory in the mailing list archives.

Replies from: woozle
comment by woozle · 2010-03-17T23:33:55.736Z · LW(p) · GW(p)

That looks very useful -- thank you. Their deliberative democracy page looks like it is talking about much the same thing I'm aiming for with the InstaGov project.