Posts

Criteria for Rational Political Conversation 2010-08-26T15:53:19.223Z
Overcoming the mind-killer 2010-03-17T00:56:01.710Z

Comments

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T22:45:15.874Z · LW · GW

Exposition... disinformative?... contradiction... illogical, illogical... Norman, coordinate!

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T22:11:16.121Z · LW · GW

I'm not sure it's important that my conclusions be "interesting". The point was that we needed a guideline (or set thereof), and as far as I know this need has not been previously met.

Once we agree on a set of guidelines, then I can go on to show examples of rational moral decisions -- or possibly not, in which case I update my understanding of reality.

Re ethical vs. other kinds: I'm inclined to agree. I was answering an argument that there is no such thing as a rational moral decision. Jack drew this distinction, not me. Yes, I took way too long coming around to the conclusion that there is no distinction, and I left too much of the detritus of my thinking process lying around in the final essay...

...but on the other hand, it seemed perhaps a little necessary to show a bit of my work, since I was basically coming around to saying "no, you're wrong".

If what you're saying is that there should have been no point of contention, then I agree with that too.

"How can a terminal value be rational?": As far as this argument goes, I assert no such thing. I'm not clear on how that question is important for supporting the point I was trying to make in that argument, much less this one.

I have another argument for the idea that it's not rational to argue on the basis of a terminal value which is not at least partly shared by your audience -- and that if your audience is potentially "all humanity", then your terminal value should probably be something approaching "the common good of all humanity". But that's not a part of this argument.

I could write a post on that too, but I think I need to establish the validity of this point (i.e. how to spot the loonie) first, because that point (rationality of terminal values) builds on this one.

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T21:47:50.699Z · LW · GW

Yes, I agree, it's a balancing act.

My take on references I don't get is either to ignore them, to ask someone ("hey, is this a reference to something? I don't get why they said that."), or possibly to Google them if they look Googleable.

I don't think it should be a cause for penalty unless the references are so heavy that they interrupt the flow of the argument. It's possible that I did that, but I don't think I did.

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T21:40:38.815Z · LW · GW

Yes, that is quite true. However, as you can see, I was indeed discussing how to spot irrationality, potentially from quite a long way away.

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T21:35:57.683Z · LW · GW

Nobody likes me, everybody hates me, I'm gonna go eat worms...

I suppose it would be asking too much to just suggest that if a sentence or phrase seems out of place or perhaps even surreal, readers could just assume it's a reference they don't get, and skip it?

If the resulting argument doesn't make sense, then there's a legit criticism to be made.

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T21:31:02.330Z · LW · GW

For what it's worth, here are the references. I'll add a link here from the main post.

  • "Spot the Loonie!" was a Monty Python satire of a game show. I'm using it here to refer to the idea of being able to tell when someone's argument doesn't make sense.
  • "How to Identify the Essential Elements of Rationality from Quite a Long Way Away" refers to the title of a Monty Python episode whose title was, I think, "How to Identify Different Types of Trees from Quite a Long Way Away".
  • "Seven and a Half Million Years Later" refers to the length of time it took the computer Deep Thought, The Second-Greatest Computer in All Time and Space, to calculate The Answer to The Ultimate Question of Life, The Universe, And Everything (in The Hitch-Hiker's Guide to the Galaxy, aka H2G2).
  • "I really have no idea if you're going to like it" refers to Deep Thought's reluctance, seven and a half million years later, to divulge The Answer: "You're really not going to like it." "is... is..." refers to this same dialogue, where Deep Thought holds off actually giving The Answer as long as possible.
  • "The Question to the Ultimate Answer" refers to the fact that, having divulged The Answer, it pretty quickly became clear that it was necessary to know what the Ultimate Question of Life, The Universe, And Everything actually was, in order for the answer to make any sense.
  • "So, have we worked out any kind of coherent picture... Well no..." refers to a scene in H2G2 where usage of The Infinite Improbability Drive gives rise (quite improbably) to the existence of "a bowl of petunias and a rather surprised-looking sperm whale", the latter of which immediately begins trying to make cognitive sense of his surroundings. After assigning names (amazingly, they are the correct English words) to several collections of perceptions in his immediate environment, he pauses to ask "So, have we built up any coherent picture of things? Well... no, not really"... or something like that.
  • "you know what I am saying, darleengs?" is a catchphrase used by Billy Crystal's SNL parody of Fernando Lamas. (Note: comedian Billy Crystal should not be confused with evil neoconservative pundit Bill Krystol.)
  • "A Theory About the Brontosaurus" refers to a Monty Python sketch in which a talk show interviewee has a theory (about the brontosaurus) which she introduces many times ("This is my theory. (cough cough) It goes like this. (cough) Here is the theory that I have (cough cough cough) My Theory About the Brontosaurus, and what it is too. Here it goes.") before finally revealing her utterly trivial and non-enlightening conclusion.
  • "ahem ahem" refers to the interviewee's repeated coughing-delays in the above sketch.

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T18:35:17.082Z · LW · GW

I can certainly attempt that. I considered doing so originally, but thought it would be too much like "explaining the joke" (a process notorious for efficient removal of humor). I also had this idea that the references were so ubiquitous by now that they were borderline cliche. I'm glad to discover that this is not the case... I think.

Comment by woozle on Overcoming the mind-killer · 2010-08-26T18:30:43.848Z · LW · GW

I finally figured out what was going on, and fixed it. For some reason it got posted in "drafts" instead of on the site, and looking at the post while logged in gave no clue that this was the case.

Sorry about that!

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T18:27:56.889Z · LW · GW

The subjective part probably could have been shortened, but I thought it was at least partly necessary in order to give proper context, as in "why are you trying to define rationality when this whole web site is supposed to be about that?" or similar.

The question is, was it informative? If not, then how did it fail in that goal?

Maybe I should have started with the conclusions and then explained how I got there.

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T18:21:53.835Z · LW · GW

They were references -- Hitchhiker's Guide to the Galaxy and Monty Python, respectively. I didn't expect everyone to get them, and perhaps I should have taken them out, but the alternative seemed too damn serious and I thought it worth entertaining some people at the cost of leaving others (hopefully not many, in this crowd of geeks) scratching their heads.

I hope that clarifies. In general, if it seems surrealistic and out of place, it's probably a reference.

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T18:12:47.178Z · LW · GW

My main conclusions are, oddly enough, in the final section:

[paste]

I propose that the key elements of a rational conversation are (where "you" refers collectively to all participants):

  • 1) you must use only documented reasoning processes: [1.1] using the best known process(es) for a given class of problem, [1.2] stating clearly which particular process(es) you use, and [1.3] documenting any new processes you use
  • 2) you must make every reasonable effort to verify that: [2.1] your inputs are reasonably accurate, [2.2] there are no other reasoning processes which might be better suited to this class of problem, [2.3] there are no significant flaws in your application of the reasoning processes you are using, and [2.4] there are no significant inputs you are ignoring

So... can we agree on this? [/paste]

P.S. The list refuses to format nicely in comment mode; I did what I could.

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T18:09:46.513Z · LW · GW

Umm... why not?

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T18:07:45.099Z · LW · GW

I'm not sure I follow. Are you using "values" in the sense of "terminal values"? Or "instrumental values"? Or perhaps something else?

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T18:03:49.568Z · LW · GW

I don't think I have anything to add to your non-length-related points. Maybe that's just because you seem to be agreeing with me. You've spun my points out a little further, though, and I find myself in agreement with where you ended up, so that's a good sign that my argument is at least coherent enough to be understandable and possibly in accordance with reality. Yay. Now I have to go read the rest of the comments and find out why at least seven people thought it sucked...

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T17:55:28.157Z · LW · GW

Yes, it could have been shorter, and that would probably have been clearer.

It also could have been a lot longer; I was somewhat torn by the apparent inconsistency of demanding documentation of thought-processes while not documenting my own -- but I did manage to convince myself that if anyone actually questioned the conclusions, I could go into more detail. I cut out large chunks of it after deciding that this was a better strategy than trying to Explain All The Things.

It could probably have been shorter still, though -- I ended up arriving at some fairly simple conclusions after a very roundabout process, and perhaps I didn't need to leave as much of the scaffolding and detritus in place as I did. I was already on the 4th major revision, though, having used up several days of available-focus-time on it, and after a couple of peer-reviews I figured it was time to publish, imperfections or no... especially when a major piece of my argument is about the process of error-correction through rational dialogue.

Will comment on your content-related points separately.

Comment by woozle on Criteria for Rational Political Conversation · 2010-08-26T17:45:21.340Z · LW · GW

"Immense" wouldn't be "reasonable" unless the problem was of such magnitude as to call for an immense amount of research. That's why I qualify pretty much every requirement with that word.

Comment by woozle on Overcoming the mind-killer · 2010-08-26T15:35:06.728Z · LW · GW

Here's my answer, finally... or a more complete answer, anyway.

Comment by woozle on Overcoming the mind-killer · 2010-03-27T02:42:01.729Z · LW · GW

See my comment about "internal" and "external" terminal values -- I think possibly that's where we're failing to communicate.

Internal terminal values don't have to be rational -- but external ones (goals for society) do, and need to take individual ones into account. Violating an individual internal TV causes suffering, which violates my proposed universal external TV.

For instance... if I'm a heterosexual male, then one of my terminal values might be to form a pair-bond with a female of my species. That's an internal terminal value. This doesn't mean that I think everyone should do this; I can still support gay rights. "Supporting gay rights" is an external value, but not a terminal one for me. For a gay person, it probably would be a terminal value -- so prohibiting gays from marrying would be violating their internal terminal values, which causes suffering, which violates my proposed universal external terminal value of "minimizing suffering / maximizing happiness" -- and THAT is why it is wrong to prohibit gays from marrying, not because I personally happen to think it is wrong (i.e. not because of my external intermediate value of supporting gay rights).

Comment by woozle on Overcoming the mind-killer · 2010-03-27T02:31:03.047Z · LW · GW

You're basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice.

I've explained repeatedly -- perhaps not in this subthread, so I'll reiterate -- that I'm only proposing reallocating domestic resources within the US, not resources which would otherwise be spent on foreign aid of any kind. I don't see how that can be harmful to anyone except (possibly) the extremely rich people from whom the resources are being reallocated.

(Will respond to your other points in separate comments, to maximize topic-focus of any subsequent discussion.)

Comment by woozle on Overcoming the mind-killer · 2010-03-27T02:22:16.261Z · LW · GW

Your example exposes the flaw in the "destroy everything instantly and painlessly" pseudo-solution: it assumes that life is more suffering than pleasure. (Euthanasia is only performed -- or argued for, anyway -- when the gain from continuing to live is believed to be outweighed by the suffering.)

I think this shows that there needs to be a term for pleasure/enjoyment in the formula...

...or perhaps a concept or word which equates to either suffering or pleasure depending on sign (+/-), and then we can simply say that we're trying to maximize that term -- where the exact aggregation function has yet to be determined, but we know it has a positive slope.

Comment by woozle on Overcoming the mind-killer · 2010-03-27T02:07:58.596Z · LW · GW

That seems related to what I was trying to get at with the placeholder-word "freedom" -- I was thinking of things like "freedom to explore" and "freedom to create new things" -- both of which seem highly related to "learning".

It looks like we're talking about two subtly different types of "terminal value", though: for society and for one's self. (Shall we call them "external" and "internal" TVs?)

I'm inclined to agree with your internal TV for "learning", but that doesn't mean that I would insist that a decision which prevented others from learning was necessarily wrong -- perhaps some people have no interest in learning (though I'm not going to be inviting them to my birthday party).

If a decision prevented learnophiles from learning, though, I would count that as "harm" or "suffering" -- and thus it would be against my external TVs.

Taking the thought a little further: I would be inclined to argue that unless an individual is clearly learnophobic, or it can be shown that too much learning could somehow damage them, then preventing learning in even neutral cases would also be harm -- because learning is part of what makes us human. I realize, though, that this argument is on rather thinner rational ground than my main argument, and I'm mainly presenting it as a means of establishing common emotional ground. Please ignore it if this bothers you.

Take-away point: My proposed universal external TV (prevention of suffering) defines {involuntary violation of internal TVs} as harm/suffering.

Hope that makes sense.

Comment by woozle on Overcoming the mind-killer · 2010-03-27T01:44:04.817Z · LW · GW

Are you saying that I have to be able to provide you an equation which produces a numeric value as an answer before I can argue that ethical decisions should be based on it?

But ok, a rephrase and expansion:

I propose that (a) the ultimate terminal value of every rational, compassionate human is to minimize aggregate involuntary discomfort as defined by the subjects of that discomfort, and (b) that no action or decision can be reasonably declared to be "wrong" unless it can at least be shown to cause significant amounts of such discomfort. (Can we at least acknowledge that it's ok to use qualitative words like "significant" without defining them exactly?)

I intend it as a descriptive statement ("is"), and I have been asking for counterexamples: show me a situation in which the "right" decision increases the overall harm/suffering/discomfort of those affected.

I am confident that I can show how any supposed counterexamples are in fact depending implicitly on the rationale I am proposing, i.e. minimizing involuntary discomfort.

Comment by woozle on Overcoming the mind-killer · 2010-03-26T12:54:24.201Z · LW · GW

It's true that there would be no further suffering once the destruction was complete.

This is a bit of an abstract point to argue over, but I'll give it a go...

I started out earlier arguing that the basis of all ethics was {minimizing suffering} and {maximizing freedom}; I later dropped the second term because it seemed like it might be more of a personal preference than a universal principle -- but perhaps it, or something like it, needs to be included in order to avoid the "destroy everything instantly and painlessly" solution.

That said, I think it's more of a glitch in the algorithm than a serious exception to the principle. Can you think of any real-world examples, or class of problems, where anyone would seriously argue for such a solution?

Comment by woozle on Overcoming the mind-killer · 2010-03-26T12:02:05.718Z · LW · GW

It's not what "we" -- the people making the decision or taking the action -- don't like; it's what those affected by the action don't like.

Comment by woozle on Overcoming the mind-killer · 2010-03-26T11:30:03.710Z · LW · GW

By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent.

So how do you rationally decide if an action is right or wrong? -- or are you saying you can't do this?

Also, just to be clear: you are saying that you do not believe rightness or wrongness of an action ultimately derives from whether or not it does harm? ("Harm" being the more common term; I tried to refine it a bit as "personally-defined suffering", but I think you're disagreeing with the larger idea -- not my refinement of it.)

In your discussion with Matt you've said that for now you care about helping poor Americans, not the rest of the world.

Matt (I believe) misinterpreted me that way too. No, that is not what I said.

What I was trying to convey was that I thought I had a workable and practical principle by which poor Americans could be helped (redistribution of American wealth via mechanisms and rules yet to be worked out), while I don't have such a solution for the rest of the world [yet].

I tried to make it quite clear that I do care about the rest of the world; the fact that I don't yet have a solution for them (and am therefore not offering one) does not negate this.

I also tried to make it quite clear that my solution for Americans must not come at the price of harming others in the world, and that (further) I believe that as long as it avoids this, it may be of some benefit to the rest of the world as we will not be allowing unused resources to languish in the hands of the very richest people (who really don't need them) -- leaving the philanthropists among us free to focus on poverty worldwide rather than domestically.

(At a glance, I agree with your global policy position. I don't think it contradicts my own. I'm not talking about reallocation of existing expenditures -- foreign aid, tax revenues, etc. -- I'm talking about reallocating unused -- one might even use the word "hoarded" -- resources, via socialistic, capitalistic, or whatever other means seem best*.)

(*the definition of this slippery term comes back ultimately to what we're discussing here: "what is good?")

Now you're just begging the question. My whole point this entire time is that there is no reason for morality to always be about harm. Indeed, there is no reason for morality to ever be about harm except that we make it so. I frankly don't even understand the application of the word "rationality" as we use it here to values. Unless you have a third meaning for the word your usage here is just a category error!

First of all, when I say "harm" or "suffering", I'm not talking about something like "punishing someone for bad behavior"; the idea behind doing that (whether correct or not) is that this ultimately benefits them somehow, and any argument over such punishment will be based on whether harm or good is being done overall. "Hitting a masochist" would not necessarily qualify as harm, especially if you will stop when the masochist asks you to.

Second... when we look at harm or benefit, we have to look at the system of people affected. This isn't to say that if {one person in the system benefits more than another is harmed} then it's ok, because then we get into the complexity of what I'll call the "benefit aggregation function" -- which involves values that probably are individual.

It's also reasonable (and often necessary) to look at a decision's effects on society (if you let one starving person get away with stealing a cookie under a particular circumstance, then other hungry people may think it's always okay to steal cookies) in the present and in the long term. This is the basis of many arguments against gay marriage, for example -- the idea that society will somehow be harmed -- and hence individuals will be harmed as society crumbles around them -- by "changing the definition of marriage". (The evidence is firmly against those arguments, but that's not the point.)

Third: I'm arguing that "[avoiding] harm" is the ultimate basis for all empathetic-human arguments about morality (by "empathetic-human" I mean "humans with empathy" -- specifically excluding psychopaths and other people whose primary motive is self-gratification), and I suggest that this would be true for any successful social species (not just humans).

I suggest that if you can't argue that an action causes harm of some kind, you have absolutely no basis for claiming the action is wrong (within the context of discussions with other humans or social sophonts).

You seem to be arguing, however, that actions can be wrong without causing any demonstrable harm. Can you give an example?

Comment by woozle on Overcoming the mind-killer · 2010-03-26T01:35:29.101Z · LW · GW

Points 1 and 2:

I don't know. I admitted that this was an area where there might be individual disagreement; I don't know the exact nature of the fa() and fb() functions -- just that we want to minimize [my definition of] suffering and maximize freedom.

Actually, on reflection, I'm thinking "freedom" is another one of those "shorthand" values, not a terminal value; I may personally want freedom, but other sentients might not. A golem, for example, would have no use for it (no comments from Pratchett readers, thank you). Nor would a Republican. [rimshot]

The point is not that we can all agree on a quantitative assessment of which actions are better than others, but that we can all agree that the goal of all these supposedly-terminal values (which are not in fact terminal) is to minimize suffering*.

(*Should I call it "subjective suffering"? "woozalian suffering"?)

Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.

Point 4: Yes, with some major caveats...

First, I think this principle is at the heart of human wiring. Some people may not have it (about 5% of the population lacks any empathy), but we're not inviting those folks to the discussion table at this level.

Second... many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values -- but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one's self. ...and (b) only supersedes (a) for people whose self-interest outweighs their integrity.

Comment by woozle on Overcoming the mind-killer · 2010-03-26T01:07:22.569Z · LW · GW

You think you're disagreeing with me, but you're not; I would say that for you, death would be a kind of suffering -- the very worst kind, even.

I would also count the "wipe out all life" scenario as an extreme form of suffering. Anyone with any compassion would suffer in the mere knowledge that it was going to happen.

Comment by woozle on Overcoming the mind-killer · 2010-03-25T22:42:21.900Z · LW · GW

Much discussion about "minimization of suffering" etc. ensued from my first response to this comment, but I thought I should reiterate the point I was trying to make:

I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.

(Tentative definition: "suffering" is any kind of discomfort over which the subject has no control.)

All other values (from any part of the political continuum) -- "human rights", "justice", "fairness", "morality", "faith", "loyalty", "honor", "patriotism", etc. -- are not rational terminal values.

This isn't to say that they are useless. They serve as a kind of ethical shorthand, guidelines, rules-of-thumb, "philosophical first-aid": somewhat-reliable predictors of which actions are likely to cause harm (and which are not) -- memes which are effective at reducing harm when people are infected by them. (Hence society often works hard to "sugar coat" them with simplistic, easily-comprehended -- but essentially irrelevant -- justifications, and otherwise encourage their spread.)

Nonetheless, they are not rational terminal values; they are stand-ins.

They also have a price:

  • they do not adapt well to changes in our evolving rational understanding of what causes harm/suffering, so that rules which we now know cause more suffering than benefit are still happily propagating out in the memetic wilderness...
  • any rigid rule (like any tool) can be abused.

...

I seem to have taken this line of thought a bit further than I meant to originally -- so to summarize: I'd really like to hear if anyone believes there are rational terminal values other than (or which cannot ultimately be reduced to) "minimizing suffering".

Comment by woozle on Overcoming the mind-killer · 2010-03-25T21:13:35.699Z · LW · GW

It seems to me that the terminal values you list are really just means to an end, and that the end in question is similar to my own -- i.e. some combination of minimizing harm and maximizing freedom (to put it in terms which are a bit of an oversimplification).

For example: I also favor ethical pluralism (I'm not sure what "particularism" is), for the reasons that it leads to a more vibrant and creative society, whilst the opposite (which I guess would be suppressing or discouraging any but some "dominant" culture) leads to completely unnecessary suffering.

You are right that maximizing two values is not necessarily solvable. The apparent duality of the goal as stated has more to do with the shortcomings of natural language than it does with the goals being contradictory. If you could assign numbers to "suffering" (S) and "individual freedom" (F), I would think that the goal would be to maximize aS + bF for some values of a and b which have yet to be worked out.

[Addendum: this function may be oversimplifying things as well; there may be one or more nonlinear functions applied to S and/or F before they are added. What I said below about the possible values of a and b applies also to these functions. A better statement of the overall function would probably be fa(S) + fb(F), where fa() and fb() are both - I would think - positively-sloped for all input values.]

[Edit: ACK! Got confused here; the function for S would be negative, i.e. we want less suffering.]

[Another edit in case anyone is still reading this comment for the first time: I don't necessarily count "death" as non-suffering; I suppose this means "suffering" isn't quite the right word, but I don't have another one handy]

The exact values of a and b may vary from person to person -- perhaps they even are the primary attributes which account for one's political predispositions -- but I would like to see an argument that there is some other desirable end goal for society, some other term which belongs in this equation.
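To make the shape of this concrete, here is a minimal sketch in Python, assuming linear placeholder forms for fa() and fb() and a plain sum over the people affected; every specific choice in it (the functional forms, the sample numbers, the aggregation by summation) is invented purely for illustration and is not part of the claim:

```python
# Illustrative sketch only: the real fa() and fb() are undetermined.
# These placeholders just respect the constraints discussed above:
# suffering counts against the total, freedom counts toward it.

def fa(suffering: float) -> float:
    # Negative contribution that grows in magnitude with more suffering.
    return -suffering

def fb(freedom: float) -> float:
    # Positive contribution, positively sloped for all input values.
    return freedom

def societal_score(people: list[dict]) -> float:
    """Aggregate fa(S) + fb(F) over everyone affected.

    Each person is a dict like {"suffering": S, "freedom": F}.
    A plain sum is itself an assumption; the aggregation function
    (sum, weighted sum, min, ...) is exactly what remains to be worked out.
    """
    return sum(fa(p["suffering"]) + fb(p["freedom"]) for p in people)

# Example: comparing two hypothetical policies by their aggregate score.
policy_a = [{"suffering": 2.0, "freedom": 5.0}, {"suffering": 8.0, "freedom": 1.0}]
policy_b = [{"suffering": 4.0, "freedom": 4.0}, {"suffering": 4.0, "freedom": 4.0}]
print(societal_score(policy_a), societal_score(policy_b))  # -4.0 0.0
```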

...there are people out there who assign intrinsic moral relevance to national borders, race, religion, sexual purity, tradition etc. Do you still deny that?

I do not deny this, but I also do not believe they are being rational in those assignments. Why should the "morality" of a particular act matter in the slightest if it has been shown to be completely harmless?

Comment by woozle on Overcoming the mind-killer · 2010-03-25T11:54:19.001Z · LW · GW

A little follow-up... it looks like the major deregulatory change was the Telecommunications Act of 1996; the "freeing of the phone jack" took place in the early 1980s or late 1970s, and modular connectors (RJ11) were widespread by 1985, so either that was a result of earlier, less sweeping deregulation or else it was simply an industry response to advances in technology.

Comment by woozle on Overcoming the mind-killer · 2010-03-25T11:40:38.922Z · LW · GW

Amen to that... I remember when it was illegal to connect your own equipment to Phone Company wires, and telephones were hard-wired by Phone Company technicians.

The obvious flaw in the current situation, of course, is the regional monopolies -- slowly being undercut by competition from VoIP, but still: as it is, if I want wired phone service in this area, I have to deal with Verizon, and Verizon is evil.

This suggests to me that a little more regulation might be helpful -- but you seem to be suggesting that the lack of competition in the local phone market is actually due to some vestiges of government regulation of the industry -- or am I misunderstanding?

(No rush; I shouldn't be spending so much time on this either... but I think it's important to pursue these lines of thought to some kind of conclusion.)

Comment by woozle on Overcoming the mind-killer · 2010-03-25T01:17:26.359Z · LW · GW

[woozle] If the government doesn't provide it, just who is going to?

[mattnewport] Charities, family, friends, well meaning strangers...

So, why aren't they? How can we make this happen -- what process are you proposing by which we can achieve universal welfare supported entirely by such means?

You appear to be shifting the goalposts. You started out arguing that your main concern is to minimize suffering...

I didn't state the scope; you just assumed it was global. My goal remains as stated -- minimizing suffering -- but I am not arguing for any global changes (yet).

Now you are saying that because that is an unrealistic goal you instead think it is important to make people who are already relatively well off by global standards (poor Americans) better off than it is to minimize suffering of the global poor.

Umm... no, I said no such thing. My suggestion does not have any clear effects on the rest of the world. If anything, it would allow Americans to be more charitable to the world at large, as we would not have to be worrying about taking care of our own first.

I am not proposing taking away any existing American aid to other countries. I'm arguing we are wealthy enough to provide for ourselves solely through reallocation of resources internally, not by diverting them away from external causes (or worse, by stealing resources from other countries as we have a long history of doing; indeed, if it turns out that we cannot achieve universal welfare without robbing others, then I would say that we need to stop robbing others first, then re-assess the situation).

It is true that many wealthy countries also have relatively generous welfare systems (particularly in Europe) but they have been able to afford these systems because they were already relatively wealthy. Studies that find negative effects are generally looking at relative growth rates but the difficulty of properly controlling such studies makes them somewhat inconclusive.

Then I would say that your claim that welfare hurts a country's overall wealth is based on weak data. What is the basis for your belief in this theory, since the studies which might resolve that question have returned inconclusive results?

I've been down this road before in discussions of this nature and they usually degenerate into people throwing links to studies back and forth that neither side really has taken the time to read in detail.

If nobody has time to at least compile the results, then there's really no point in even having the discussion -- since looking at existing data is really the only rational way to attempt to resolve the question.

If you actually have some links, I will at least file them on Issuepedia and give you a link to the page where they -- and any other related information which may have been gathered (or may be gathered in the future) -- can be found.

I will also try to summarize any further debating we do on the subject, so that neither of us will need to rehash the same ground in future debates (with each other or with other people).

I hope that this addresses your apparent concern (i.e. that exchanging actual data on this topic would be a waste of time).

The two main points to note are that governments in reality are not in the business of implementing 'enlightened' policies that address imbalances of power but rather are in the business of creating imbalances of power for the benefit of special interests and the politicians they own and that many government regulations sold as protecting the interests of the general electorate are in actual fact designed to protect particular special interest groups.

That is the reason I think we need to re-invent government. I don't think government is automatically evil, though it is certainly vulnerable to just the sorts of mechanisms you identify.

This is certainly not the purpose for which government was invented, and saying that government is the problem just because it becomes a problem is like saying guns are always evil because they kill people -- or dynamite is always evil because it is used in warfare. Any tool can be misused.

I propose that government is necessary, and that rather than declaring it evil and trying to render it as small and weak as possible, we should be learning how to build it better -- and how to regain control of it when it goes astray.

Genuine deregulation has a good track record. It should not be confused with false deregulation...

Can you give me some examples (mainly of genuine deregulation -- I got the financial industry non-deregulation; will have to ponder that example)?

Comment by woozle on Overcoming the mind-killer · 2010-03-24T21:34:05.992Z · LW · GW

I think it is a rational reason to oppose a role for government in providing it.

If the government doesn't provide it, just who is going to?

As I lack much of a nationalist instinct I am endlessly puzzled by the idea that we should draw arbitrary boundaries of care along national borders. If your concern is helping others you can do so with much greater efficiency by targeting that help where it is most needed, which is largely outside the US.

It's not a matter of loyalty, but of having the knowledge and resources to work with to make something possible. I would certainly like to see such a level of care provided to everyone worldwide, but knowing what little I know about average wealth worldwide, this seems an unrealistic goal for the immediate future. It seems entirely realistic for the US, however.

(Plus... as little influence as I have over US politics, I am at least a citizen and resident; I'm much less likely to be able to have an effect on how things are handled in China... or Uganda, where there are more serious worries like the current campaign to make homosexuality a capital offense.)

Incidentally I don't think it is a coincidence that many developed countries with advanced welfare states are less wealthy than the US.

There are also countries more wealthy than us who have universal welfare. My understanding is that if you look at the correlation between social welfare and overall wealth, it is positive -- not negative, as you seem to imply. However, we do need some numbers for this so we're not arguing subjective impressions.

I don't really want to get into throwing studies back and forth...

I think this is exactly what we should be doing, until we have a sufficient sampling of studies that it might actually average out to something approaching reality.

Those who gain early advantage inevitably using their power to take down other players sounds like a description of the current corporatist system in the US to me...

Yes, the current corporatist system is what you get from partial deregulation. I think it is clear that we do not want to go further in that direction.

You appear to be ignorant of the kinds of problems highlighted by public choice theory and unaware of the libertarian analysis of how genuinely free markets tend to work against such power concentrations.

I'm aware of the broad outlines of the theory and some of the particulars, but I'm not aware of how it supports your conclusions.

There is the argument that deregulation of Germany's currency led to greater prosperity -- to which I say (a) we are agreed that too much regulation is a bad thing, but I still maintain that too little is just as bad or worse; (b) I would really like to see some numbers on this; were people really that badly off trading cigarettes? Yes, it was an inconvenience -- but how much did it actually affect material wealth?

As you pointed out, recent US history seems to argue for a return to increased regulation.

If you believe there is data which contradicts my conclusions about the dangers of total deregulation and the economic benefits of social welfare for at least the bottom economic tier of society, please do share it.

Comment by woozle on Overcoming the mind-killer · 2010-03-24T10:58:16.213Z · LW · GW

I don't think BLoC (a basic level of comfort) has to be slippery, though of course in reality (with the current political system, anyway) it would become politicized. This is not a rational reason to oppose it, however.

I don't know if we can do it for everyone on Earth at the moment, though that is worth looking at and getting some numbers so we know where we are. I was proposing it for the US, since we are the huge outlier in this area; most other "developed" societies (including some less wealthy than the US) already have such a thing.

I would suggest a starting definition for BLoC as living conditions meeting minimum sanitary standards, access to medical care priced affordably for each person (which will mean "free" for some), one or two decent meals a day, access to laundry facilities.

(sanitary standards: working indoor toilet, no leaks in the roof, reasonably well-insulated walls and roof, a phone (prepaid cellular is cheap now), some kind of heat in the winter, a decent bed, and possibly a few other things.)

The reason we can support an unprecedented human population with, on average, a level of health, comfort and material well-being that is historically high is that markets are extremely good at allocating resources efficiently and encouraging and spreading innovations. ... Hayek's point is that this can lead to a distribution of wealth that offends many people's natural sense of justice but that attempts to enforce a more 'just' distribution tend to backfire in all kinds of ways, not least of which is through a reduction in the very efficiency we rely on to maintain our standard of living.

This is still juxtaposing markets and BLoC as being mutually exclusive. Why can't we have markets for everyone over a certain income level, and universal welfare for those who fall below it? As I said, other countries do this already, and it doesn't seem to hurt their overall wealth at all.

I would argue that our current system is greatly harmful to overall wealth, as people and businesses often spend valuable resources helping others when they could be spending those resources developing new businesses or being creative. This is also highly selective, actually punishing those who help (they are not compensated for their expenditures), rather than being equitably distributed across all those who benefit from society.

(I'm a perfect example of this: because of the lack of universal health care, I've spent most of the last 3 years fighting bureaucracy to get help for an autistic child -- and helping to provide care for him, which I am eminently unqualified to do -- instead of working on my online store or my computer consulting. Because of that situation, I can't afford medical insurance, which means I now owe $7k for a life-threatening incident late last year -- which I won't be able to pay anytime soon, and which is therefore driving up costs for everyone else in the Duke Health system, while employees of at least one collection agency are wasting their resources trying to get it from me. This is not "market efficiency".)

The idea of a 'culture of dependency' reflects these types of concern.

Is there any data suggesting that this is actually a problem? As far as I can tell, it's a conservative myth -- like Reagan's "welfare queens".

...the standard Austrian/libertarian view is that such problems are generally caused by government intervention and would not exist in a true free market.

This is WAY wrong. Totally unregulated markets are essentially anarchy; this is unstable, as some of those who gain early advantage will inevitably use their power to take down other players (rather than contributing positively) until the board consists of a few very powerful players (and their helpers) with everyone else being essentially powerless. (I suspect that the stable end-state is essentially feudalism, but with the enhanced concentrations of power made possible by modern technology, I can only think it would be far worse than any past feudal system.)

I put it to you that you can't have markets without regulation -- just as you can't have a game without rules, or an internal combustion engine without valves.

Comment by woozle on Overcoming the mind-killer · 2010-03-24T01:23:06.445Z · LW · GW

Stop me if I'm misunderstanding the argument -- I won't have time to watch the video tonight, and have only read the quotes you excerpted -- but you seem to be posing "markets" against "universal BLoC" as mutually exclusive choices.

I am suggesting that this is a false dilemma; we have more than adequate resources to support socialism at the low end of the economic scale while allowing quite free markets at the upper end. If nobody can suffer horribly -- losing their house, their family, their ability to live adequately -- the risks of greater market freedom can be borne much more reasonably.

To put this in more concrete terms: I wouldn't be as bothered by a few people being absurdly stinkin' rich if I wasn't constantly worried about whether I could pay my next credit-card bill, or whether I could afford to go to the hospital if I got really sick, or whether we are going to have to declare bankruptcy and possibly lose our home.

Am I missing your point?

On a related note: totally unregulated markets at any level lead to dangerous accumulations of power by largely unaccountable individuals. Does Hayek (or his school of thought) have an answer for that problem?

Addendum: I also disagree with his statement that an efficient society cannot be just. Does he offer a supporting argument for that claim in the video?

Comment by woozle on Overcoming the mind-killer · 2010-03-24T01:12:09.480Z · LW · GW

I would suggest that it makes no sense to reward getting the right answer without documenting the process you used, because then nobody benefits from your discovery that this process leads (in at least that one case) to the right answer.

Similarly, I don't see the benefit of punishing someone for getting the wrong answer while sincerely trying to follow the right process. Perhaps a neutral response is appropriate, but we are still seeing a benefit from such failed attempts: we learn how the process can be misunderstood (because if the process is right, and followed correctly, then by definition it will arrive at the right answer), and thus how we need to refine the process (e.g. by re-wording its instructions) to prevent such errors.

Perhaps "Rationality is the art of winning the truth."?

Actually, I really don't like the connotations of the word "winning" (it reminds me too much of "arguments are soldiers"); I'd much rather say something like "Rationality is the art of gradually teasing the truth from the jaws of chaos." Karma points should reflect whether the commenter has pulled out more truth -- including truth about flaws in our teasing-process -- or (the opposite) has helped feed the chaos-beast.

Comment by woozle on Overcoming the mind-killer · 2010-03-24T00:51:22.546Z · LW · GW

Actually, no, that's not quite my definition of suffering-minimization. This is an important issue to discuss, too, since different aggregation functions will produce significantly different final sums.

This is the issue I was getting at when I asked (in some other comment on another subthread) "is it better to take $1 each from 1000 poor people, or $1000 from one millionaire?" (That thread apparently got sucked into an attractor-conversation about libertarian values.)

First, I'm inclined to think that suffering should be weighted more heavily than benefit. More specifically: it's more important to make sure nobody falls below a certain universal level of comfort than it is to allow people who are already at that level (or higher) to do better. (Free-marketeers will probably spit out their cigars at this statement -- and I would ask them how they can possibly justify any other point of view.)

Second... I don't apply equal weighting to all humans either. I think those who are perceived as "valuable" (an attribute which may have more than one measure) should be able to do better than the "basic level of comfort" (BLoC) -- but that this ability can reasonably be limited if it impairs society's ability to provide the BLoC.

(Question to ponder: does it really make sense for one person to make ten thousand times as much money as another person? Yes, some people are more valuable than others -- but 10^4 times as valuable? Yet we have people living on $10k a year, while others earn hundreds of millions. This suggests to me that the economic system severely overvalues certain types of activity, while severely undervaluing others.)

(Counter: it could be argued that some people have zero, or even negative, value. It gets complicated at this point, and we can have that conversation if it seems relevant.)

Third... it makes sense to me for each individual to have their own set of "most valuable people" to protect, but then most of us don't (yet) participate directly in policy decisions affecting millions of people. I think if we're going to do that, we need to be able to see the broader perspective enough to compromise with it -- while still looking out for the interests of those we personally know to be valuable. Thus we should be able to reach a consensus decision which does not pay for apparent overall social progress using a currency of scattered personal agonies and individual sacrifices.

A lot of random strangers are very nice people, and I wouldn't want some random person to suffer just for my benefit.

And finally... I think the evidence is in that we as a society are sufficiently wealthy to provide for everyone at a BLoC while still allowing plenty of room for a large chunk of the population to be everything from "slightly well-off" to "extremely rich".

Comment by woozle on Overcoming the mind-killer · 2010-03-23T23:46:01.787Z · LW · GW

That's actually my main goal, at least now -- to be able to make rational decisions about political issues. This necessarily involves achieving some understanding of the methods by which voter perceptions are manipulated, but that is a means to an end.

In 2004, I thought it entirely possible that I was simply highly biased in some hitherto unnoticed way, and I wanted to come to some understanding of why half the country apparently thought Bush worthy of being in office at all, never mind thinking that he was a better choice than Kerry.

I was prepared to find that there were certain values which I did not share but could at least respect. What I found was... quite incredible: a large part of the population seems to implicitly believe that it's more important for someone (or some small group) to have power than it is for that person or group to have the first clue about how to use that power.

Outright lies are apparently an acceptable method for reinforcing that power, as long as they work. Claiming the Constitution as the basis for your actions, while subtly working to undermine every protection it provides, is also acceptable. Displays of "strength" and power are more important than displays of intelligence or judgment.

While a large segment of the population does not follow these values, the very existence of these values unfortunately warps the whole dialogue to the point where they are seen as a reasonable position -- even though they obviously are not -- and the best "compromise" is therefore perceived as being somewhere about halfway between rationality and utter insanity.

This is obviously a problem.

What I am trying to do now is find a process by which non-privileged citizens can make the "best" possible policy decisions -- where I propose "best possible policy decisions" should mean something like "decisions which cause the least individual harm while maximizing society's overall progress towards whatever goals we can all agree are acceptable" -- and be able to do so even in the face of the disinformation presented by the insane viewpoint described above (and to avoid the temptation of the "fallacy of moderation" in negotiating with it).

This process is presumed to be rational in nature -- perhaps by definition, since rationality is essentially "best practices in the area of thinking".

Comment by woozle on Overcoming the mind-killer · 2010-03-23T23:10:18.881Z · LW · GW

The existence of conversational attractors is why I think any discussion tool needs to be hierarchical -- so any new topic can instantly be "quarantined" in its own space.

The LW comment system does this in theory -- every new comment can be the root of a new discussion -- but apparently in practice some of the same "problem behaviors" (as we say here in the High Energy Children Research Laboratory) still take place.

Moreover, I don't understand why it still happens. If you see the conversation going off in directions that aren't interesting (however popular they may be), can't you just press the little [-] icon to make that subthread disappear? I haven't encountered this problem here myself, so I don't know if there might be some reason that this doesn't work for that purpose.

Just now I tried using that icon -- not because I didn't like the thread, but just to see what happened -- and it very nicely collapsed the whole thing into a single line showing the commenter's name, timestamp, karma points, and how many "children" the comment has. What would be nice, perhaps, is if it showed the first line of content -- or even a summary which I could add to remind myself why I closed the branch. That doesn't seem crucial, however.

Comment by woozle on Overcoming the mind-killer · 2010-03-23T22:46:41.252Z · LW · GW

I probably should have inserted the word "practical" in that sentence. Bayesianism would seem to be formalized, but how practical is it for daily use? Is it possible to meaningfully (and with reasonable levels of observable objectivity) assign the necessary values needed by the Bayesian algorithm(s)?

More importantly, perhaps, would it be at least theoretically possible to write software to mediate the process of Bayesian discussion and analysis? If so, then I'm interested in trying to figure out how that might work. (I got pretty hopelessly lost trying to do explicit Bayesian analysis on one of my own beliefs.)
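To give a concrete sense of the kind of bookkeeping such software might handle, here is a minimal sketch; the single binary hypothesis, the independence assumption across evidence items, and all the numbers are invented purely for illustration:

```python
# Minimal sketch: the user supplies a prior and, for each piece of evidence,
# the likelihood ratio P(E|H) / P(E|~H); the tool just keeps the arithmetic honest.
# Treating the evidence items as independent is a strong (and assumed) simplification.

def posterior(prior: float, likelihood_ratios: list[float]) -> float:
    """Update P(H) given a list of likelihood ratios for independent evidence."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Example: prior of 0.3, two pieces of evidence favoring H (ratios 4 and 2),
# one mildly against it (ratio 0.5).
print(posterior(0.3, [4.0, 2.0, 0.5]))  # ~0.63
```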

The process I'm proposing is one that is designed specifically to be manageable via software, with as few "special admin powers" as possible.

...

"Bush was really awful" was intended more as {an arbitrary "starter claim" for me to use in showing how {rational debate on political topics} becomes "politicized"} than {an argument I would expect to be persuasive}.

If a real debate had started that way, I would have expected the very first counterargument to be something like "you provide no evidence for this claim", which would then defeat it until I provided some evidence... which itself might then become the subject of further counterarguments, and so on.

In this structure, "No he isn't." would not be a valid counterargument -- but it does highlight the fact that the system will need some way to distinguish valid counterarguments from invalid ones, otherwise it has the potential to degenerate into posting "fgfgfgfgf" as an argument, and the system wouldn't know any better than to accept it.

I'm thinking that the solution might be some kind of voting system (like karma points, but more specific) where a supermajority can rule that an argument is invalid, with some sort of consequence to the arguer's ability to participate further if they post too many arguments ruled as invalid.
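Something like the following minimal sketch is what I have in mind, where the two-thirds threshold and the three-strikes consequence are arbitrary placeholders rather than worked-out proposals:

```python
# Placeholder sketch of the "supermajority can rule an argument invalid" idea.
# The 2/3 threshold and the three-strikes penalty are arbitrary illustrative choices.

SUPERMAJORITY = 2 / 3
MAX_INVALID_RULINGS = 3

def is_ruled_invalid(votes_invalid: int, votes_valid: int) -> bool:
    """An argument is ruled invalid if a supermajority of voters flag it as such."""
    total = votes_invalid + votes_valid
    return total > 0 and votes_invalid / total >= SUPERMAJORITY

def may_still_participate(invalid_rulings_against_user: int) -> bool:
    """A user who accumulates too many invalid-argument rulings loses the
    ability to post further arguments (one possible consequence)."""
    return invalid_rulings_against_user < MAX_INVALID_RULINGS

# Example: 8 of 10 voters flag an argument as invalid -> ruled invalid.
print(is_ruled_invalid(votes_invalid=8, votes_valid=2))       # True
print(may_still_participate(invalid_rulings_against_user=2))  # True
```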

Comment by woozle on Overcoming the mind-killer · 2010-03-23T22:11:34.840Z · LW · GW

This seems a valid interpretation to me -- but is "wrongness" a one-dimensional concept?

A comment can be wrong in the sense of having incorrect information (as RobinZ points out) but right in the sense of arriving at correct conclusions based on that data -- in which case I would still count it as a valuable contribution by offering the chance to correct that data, and by extension anyone who arrived at that same conclusion by believing that same incorrect data.

By the same token, a comment might include only true factual statements but arrive at a wrong conclusion by faulty logic.

I think I would be inclined, in any ambiguous case such as that (or its opposite), to base an up-or-down vote on the question of whether I thought the commenter was honestly trying to seek truth, however poorly s/he might be doing so.

Should commenters be afraid to repeat false information which they currently believe to be true, for fear of being voted down? (This may sound like a rhetorical question, but it isn't.)

Comment by woozle on Overcoming the mind-killer · 2010-03-22T14:48:30.759Z · LW · GW

This is a good example of why we need a formalized process for debate -- so that irrelevant politicizations can be easily spotted before they grow into partisan rhetoric.

Part of the problem also may be that people often seem to have a hard time recognizing and responding to the actual content of an argument, rather than [what they perceive as] its implications.

For example (loosely based on the types of arguments you mention regarding Knox, but using a topic I'm more familiar with):

  • [me] Bush was really awful.
  • [fictional commenter] You're just saying that because you're a liberal, and liberals hate Bush.

The reply might be true, but it doesn't address the claim that "Bush was awful"; it is an ad hominem based on an assumption about me (that I am a liberal), my intellectual honesty (that I would make an assertion just to be agreeing with a group to which I belong), and the further presumption that there aren't any good reasons for loathing Bush.

As a rational argument, it is plainly terrible -- it doesn't address the content to which it is responding. I suspect this was also the problem with the politicalism that happened regarding the Knox issue -- if respondents had addressed the actual content of the arguments rather than their perceived implications, the discussion might not have derailed the way it did.

It should be easier to identify arguments of that nature, and "take them down" before they spawn the kind of discussion we all want to avoid.

"Vote down" is presumably one way to do that -- if enough people vote down such comments, then they get automatically "folded" by the comment system, and are more likely to be ignored (hopefully preventing further politicalism) -- but apparently that mechanism hasn't been having the desired effect.

Another problem with "Vote down" is that many people seem to be using it as a way of indicating their disagreement with a comment, rather than to indicate that the comment was inappropriate or invalid.

Are there any ongoing discussions about improving/redesigning/altering the comment-voting/karma system here at LessWrong?

(I was going to type more, but there were interruptions and I've lost the thread... will come back later if there's more.)

Comment by woozle on Overcoming the mind-killer · 2010-03-18T11:13:53.527Z · LW · GW

If I'm understanding correctly, "terminal values" are end-goals.

If we have different end-goals, we need to understand what they are. (Actually, we should understand what they are anyway -- but if they're different, it becomes particularly important to examine them and identify the differences.)

This seems related to a question that David Brin once suggested as a good one to bring up in political debate: Describe the sort of world you hope your preferred policies will create. ...or, in other words, describe your large-scale goals for our society -- your terminal values.

My terminal values are to minimize suffering and maximize individual freedom and ability to create, explore, and grow in wisdom via learning about the universe.

If anyone has different terminal values, I'd like to hear more about that.

Comment by woozle on Overcoming the mind-killer · 2010-03-18T11:08:21.732Z · LW · GW

"Terminal Values" are goals, then, and "Instrumental Values" are the methods used in an attempt to reach those goals. Does that sound right? So now I need to go reply to Jack again...

Comment by woozle on Overcoming the mind-killer · 2010-03-18T10:58:29.537Z · LW · GW

I'm trying to understand them in a rational context. Most years, that pattern made some kind of sense -- the major candidates were both lizards, neither one obviously better or worse than the other.

Continuing to vote along party lines after 3 years of experience with Bush, however, is a different beast. Either people were simply unaware of many of the key points (or possibly were aware of positive points unknown to anyone that I talked to), or else they were using an entirely different process for evaluating presidential fitness. In the former case, we have a problem; in the latter, something worthy of study.

Comment by woozle on Overcoming the mind-killer · 2010-03-18T01:38:46.034Z · LW · GW

So yes, liberals would consider voting for a republican as a kind of treason.

Does he have data for this? I would vote for whoever seemed the most sensible, regardless of party. If Ron Paul had run against Obama, I would have had a much harder time deciding.

Comment by woozle on Overcoming the mind-killer · 2010-03-18T01:34:28.491Z · LW · GW

Also: I think what you're misunderstanding about the POV on the site is that I am prepared to rationally defend everything I have said there, and I am prepared to retract or alter it if I cannot do so. (Note that there are a few articles posted by others, and I don't necessarily agree with what they have said -- but if I have not responded, it means I also don't disagree strongly enough to bother. Maybe you do, and maybe I will too once the flaws are pointed out.)

Comment by woozle on Overcoming the mind-killer · 2010-03-18T01:31:16.212Z · LW · GW

Okay, then, let me try again.

If someone loans you a car, and it runs out of gas, do you (a) only put in enough gas to get you where you need to go (including returning it to the owner), or do you (b) fill the tank up?

I would argue that it is foolish to do (a), because if you become known as someone who pulls crap like that, people aren't likely to loan you their cars in the future.

Libertarianism seems to be arguing, however, that (a) is the correct and proper action.

Next question:

Let's say there's some kind of widespread natural disaster where you live. Maybe outside help is coming but it will be a couple of weeks before it arrives in force. A group forms to work out what resources are available and who needs them. Let's say you know all the people in that group, and have no reason to be suspicious of their motives. They decide that you have some supplies that are more urgently needed by others -- people you don't specifically know -- and ask you to donate those supplies, even knowing that you may or may not ever be compensated for them given the extent of the disaster.

Would you say you have any... [let's not say "obligation" or "compulsion"...] ...rephrase: Would you feel like a jerk if you didn't comply, or do you think it would be perfectly ok?

Comment by woozle on Overcoming the mind-killer · 2010-03-18T01:13:06.142Z · LW · GW

Are you saying we should stop trying to bridge that gulf, or should I try to explain myself a different way?

Comment by woozle on Overcoming the mind-killer · 2010-03-18T01:09:15.989Z · LW · GW

I also think you're misunderstanding my criticism of Haidt. Yes, he has lots of data to support his claims -- but he rigged the experiments in the way he asked his questions, and he hasn't responded to the obvious flaws in his analysis.

Nor have you.