What would a good article on Bayesian liberty look like?
post by DataPacRat · 2011-07-08T15:01:57.992Z · LW · GW · Legacy
Politics is the mind-killer; but rationality is the science of /winning/, even when dealing with political issues.
I've been trying to apply LessWrong and Bayesian methods to the premises and favored issues of a particular political group. (Their most basic premise is roughly equivalent to declaring that Iterated Prisoner's Dilemma programs should be 'nice'.) But, given how quickly my previous thread trying to explore this issue was downvoted into disappearing, and many of the comments I've received on similar threads, I may have a rather large blind spot preventing me from being able /to/ properly apply LW methods in this area.
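(By 'nice' I mean it in roughly Axelrod's sense: a strategy that never defects first. A minimal illustrative sketch of that distinction, purely hypothetical and not anyone's actual program, might look like this in Python:)

```python
# Illustrative sketch only: in Axelrod's terminology, a "nice" iterated
# prisoner's dilemma strategy is one that never defects first.
COOPERATE, DEFECT = "C", "D"

def tit_for_tat(my_history, opponent_history):
    """A classic 'nice' strategy: cooperate first, then mirror the opponent."""
    if not opponent_history:
        return COOPERATE
    return opponent_history[-1]

def always_defect(my_history, opponent_history):
    """A 'non-nice' strategy, included for contrast."""
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return both players' move histories."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

if __name__ == "__main__":
    print(play(tit_for_tat, always_defect, rounds=5))
```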
So I'll try a different approach - instead of giving it a go myself again, I'll simply ask, what do /you/ think a good LW post about liberty, freedom, and fundamental human rights would look like?
27 comments
Comments sorted by top scores.
comment by [deleted] · 2011-07-08T16:59:40.205Z · LW(p) · GW(p)
what do /you/ think a good LW post about liberty, freedom, and fundamental human rights would look like?
As other commenters in this thread have pointed out, these topics are politically charged and are little more than applause lights in most contexts. The LW community is really good at spotting applause lights and meaningless statements, so a post that discusses liberty in conventional terms would be downvoted into oblivion--and rightfully so, I think, because a post that treats abstract concepts like liberty as black boxes would probably be pretty confused. However, if you wrote a reductionistic post that dissolved our intuitions about freedom and liberty, that would be far more interesting and useful.
Replies from: endoself
comment by jimrandomh · 2011-07-08T16:19:17.931Z · LW(p) · GW(p)
So I'll try a different approach - instead of giving it a go myself again, I'll simply ask, what do /you/ think a good LW post about liberty, freedom, and fundamental human rights would look like?
The problem with writing about these concepts directly is that they're (a) very hard to define, and (b) applause lights. So while everyone agrees that they're good and important, few people agree on what they are. In order to write a meaningful post about "freedom", you have to get specific and talk about "freedom to do X" - and in that case you're usually better off talking about X without the applause light. When people try to talk about freedom, liberty and/or human rights from a thousand-mile-high perspective, without zooming in on a specific and concrete example, they end up missing all the important distinctions and getting hopelessly confused.
Replies from: DataPacRat
↑ comment by DataPacRat · 2011-07-08T16:28:57.327Z · LW(p) · GW(p)
Thank you for that reply - it was cogent, descriptive, and helps me figure out what I can try doing next.
(E.g., maybe something along the lines of "Man is a rational animal - he doesn't use claws or poison to survive, he uses his brain. In order /to/ use his brain most effectively, he has to be able to do certain things - most fundamentally, he has to stay alive, and in order to do that, he has to X, Y, and Z; in order to come up with new ideas to know better how to stay alive, he has to be able to discuss ideas freely; etc, etc, etc.")
Replies from: Desrtopa, endoself, ata
↑ comment by Desrtopa · 2011-07-09T01:55:49.030Z · LW(p) · GW(p)
Is that really your analysis of human society from the ground up though, or did you try to figure out how to create a rational argument for liberty?
It's not at all clear to me that if people are primarily concerned with staying alive, we should be preserving their liberty to discuss ideas freely; reasonably competent authorities passing restrictions can keep people quite safe without providing them with many liberties at all. In fact, if I really wanted to design a society optimized for keeping people alive, it would probably look rather like a prison system.
The question you should be asking yourself is not "what justifies my package of political beliefs," but "what do I think people really want out of society, and how do I optimize for that?"
Replies from: DataPacRat
↑ comment by DataPacRat · 2011-07-09T03:03:10.702Z · LW(p) · GW(p)
Is that really your analysis of human society from the ground up
Not quite from the ground up; the version that /does/ start from the ground up is summarized in http://www.datapacrat.com/sketches/Rational01ink.jpg and http://www.datapacrat.com/sketches/Rational02ink.jpg .
The question you should be asking yourself is not "what justifies my package of political beliefs," but "what do I think people really want out of society, and how do I optimize for that?"
How about, "What do I think /I/ want out of society, and how do I optimize for that?"?
Replies from: Desrtopa
↑ comment by Desrtopa · 2011-07-09T03:08:36.232Z · LW(p) · GW(p)
How about, "What do I think /I/ want out of society, and how do I optimize for that?"?
In theory that might be the best way of going about things, but if it doesn't generalize well to other people, you're unlikely to get others on board with it, which limits the usefulness of framing the question that way.
Replies from: TobyBartels, DataPacRat
↑ comment by TobyBartels · 2011-07-10T01:15:10.849Z · LW(p) · GW(p)
But surely DataPacRat's is the correct question. (Of course, if what DataPacRat really desires is that other people get what they want, then it's hard to distinguish the questions.) Once answered, and in the optimisation phase, then we consider how best to frame discussions of the issues to convince other people that they want it too (or whatever is most effective).
↑ comment by DataPacRat · 2011-07-09T03:37:08.934Z · LW(p) · GW(p)
I wonder if it's possible to try to resolve the difference between the two. (I remember reading about something called 'desire utilitarianism' which, IIRC, was focused on reconciling such matters.)
↑ comment by endoself · 2011-07-09T04:33:46.446Z · LW(p) · GW(p)
Man is a rational animal - he doesn't use claws or poison to survive
Why is survival one of your goals ("I want it." is an acceptable answer, but you have to accept that you might only want it due to being misinformed; even if it is probably correct, it is extremely unlikely that all your desires would be retained in your reflective equilibrium.)? Is it your only goal? Why?
he uses his brain.
Intelligence may be our comparative advantage over other animals, but we're not trading with them. Brains are useful because they solve problems, not because they happen to be our species' distinguishing skill.
In order /to/ use his brain most effectively, he has to be able to do certain things - most fundamentally, he has to stay alive
If survival infringes on your other desires it becomes counterproductive. Beware lost purposes. Even if this doesn't hold, maximizing your probability of survival is not the same as maximizing whatever you actually prefer to maximize. If you only focus on survival, you risk giving up everything (or everything else if you value survival in itself - I don't think I do, but I'm very uncertain) for a slightly increased lifespan.
Replies from: DataPacRat
↑ comment by DataPacRat · 2011-07-09T04:51:09.511Z · LW(p) · GW(p)
Why is survival one of your goals ("I want it." is an acceptable answer, but you have to accept that you might only want it due to being misinformed; even if it is probably correct, it is extremely unlikely that all your desires would be retained in your reflective equilibrium.)? Is it your only goal? Why?
At the moment, my primary goal is the continued existence of sapience. Partly that's because purpose and meaning aren't inherent qualities of anything, but are projected onto things by sapient minds; since I want my existence to have had some meaning, sapients have to continue to exist. Or, put another way, for just about /any/ goal I can seriously imagine myself wanting, the continued existence of sapience is a necessary prerequisite.
If survival infringes on your other desires it becomes counterproductive. Beware lost purposes. Even if this doesn't hold, maximizing your probability of survival is not the same as maximizing whatever you actually prefer to maximize. If you only focus on survival, you risk giving up everything (or everything else if you value survival in itself - I don't think I do, but I'm very uncertain) for a slightly increased lifespan.
If I seriously come to the conclusion that my continued life has a measurable impact that /reduces/ the probability that sapience will continue to exist in the universe... then I honestly don't know whether I'd choose personal death. For example, one of the goals I've imagined myself working for is "Live forever or die trying", which, as usual, requires at least some sapience in the universe (if only myself), but... well, it's a problem I hope never to have to encounter... and, fortunately, at present, I'm trying to use my existence to /increase/ the probability that sapience will continue to exist, so it's unlikely I'll ever encounter that particular problem.
Replies from: endoself
↑ comment by endoself · 2011-07-09T05:10:46.329Z · LW(p) · GW(p)
Partly that's because purpose and meaning aren't inherent qualities of anything, but are projected onto things by sapient minds; since I want my existence to have had some meaning, sapients have to continue to exist. Or, put another way, for just about /any/ goal I can seriously imagine myself wanting, the continued existence of sapience is a necessary prerequisite.
The two ways of putting it are not equivalent; it is possible for a sapient mind to decide that its purpose is to maximize the number of paperclips in the universe, which can be achieved without its continued existence. You probably realize this already though; the last quoted sentence makes sense.
I'm trying to use my existence to /increase/ the probability that sapience will continue to exist, so it's unlikely I'll ever encounter that particular problem.
If you had a chance to perform an action that led to a slight risk to your life but increased the chance of sapience continuing to exist (in such a way as to lower your overall chance of living forever), would you do so? It is usually impossible to perfectly optimize for two different things at once; even if they are mostly unopposed, near the maxima there will be tradeoffs.
Replies from: DataPacRat
↑ comment by DataPacRat · 2011-07-09T05:26:36.484Z · LW(p) · GW(p)
If you had a chance to perform an action that led to a slight risk to your life but increased the chance of sapience continuing to exist (in such a way as to lower your overall chance of living forever), would you do so?
A good question.
I have at least one datum suggesting that the answer, for me in particular, is 'yes'. I currently believe that what's generally called 'free speech' is a strong supporting factor, if not a necessary prerequisite, for developing the science we need to ensure sapience's survival. Last year, there was an event, 'Draw Muhammad Day', to promote free speech; before it actually happened, there was a non-zero probability that anyone participating in it would receive threats and potentially even violence from certain extremists. While that was still the calculation, I joined in. (I did get my very first death threats in response, but nothing came of them.)
Replies from: endoself
↑ comment by endoself · 2011-07-11T00:00:08.954Z · LW(p) · GW(p)
You have evidence that you do, in fact, take such risks, but, unless you considered the issue very carefully, you don't know whether you really want to do so. Section 1 of Yvain's consequentialism FAQ covers the concept of not knowing, and then determining, what you really want. (The rest of the FAQ is also good but not directly relevant to this discussion, and I think you might disagree with much of it.)
↑ comment by ata · 2011-07-09T15:03:46.029Z · LW(p) · GW(p)
I know this is tangential, but what is it with libertarians and unnecessarily gendered language? I truly don't mean that as a rhetorical question or an attack on you personally or any kind of specific political point, it's something I've been sincerely curious about before and maybe you know the answer; why do so many (obviously not all) libertarian and Randian types seem to be so attached to the whole everyone-is-"man"/"he" schema, including the ones who are way too young to have lived in times before people started realizing why that was a bad idea? Proportionally, even social conservatives don't seem to do that nearly as much anymore.
Replies from: Emile, JoshuaZ, DataPacRat, None
↑ comment by Emile · 2011-07-11T16:55:08.024Z · LW(p) · GW(p)
"Use gender-neutral language" is motivated by an egalitarian instinct, and is said by (moral) authorities - both are things libertarians don't seem very fond of.
(I don't identify very strongly as a libertarian, but can relate to the kneejerk reflex against being told what to do)
Also, some people might not phrase it as "people started realizing why that was a bad idea" but rather as "sanctimonious politically correct busybodies started telling everybody how to speak, resulting in some horrible eyesores like he/she or ey all over the place". I don't really buy the second version, but I don't think the first one is a fair description either (though it's hard to judge from a French point of view; gender and grammar work a bit differently in French).
↑ comment by JoshuaZ · 2011-07-09T15:45:55.487Z · LW(p) · GW(p)
I suspect that it is due to emotional reactions against feeling like one is being told what to do. I don't know what the correlation v. causation is in what comes first (the philosophical attitude leading to such emotional reactions or the emotional attitude making one more likely to accept a libertarian philosophical viewpoint). But given such an emotional reaction, one can easily see people going out of their way to avoid using the terminology that they might feel like they are being told to use.
↑ comment by DataPacRat · 2011-07-09T15:35:48.983Z · LW(p) · GW(p)
That's a good question - though I'm not sure I can think of a good answer.
I know that, in most of my writing, I tend to use 'they' as a gender-neutral third-person singular pronoun... when I wrote 'Man is a rational animal, etc', I was aware that I could have rephrased the whole thing to be gender-neutral... but when writing, I felt that it wouldn't have provided the same feeling - short, sharp, direct, to-the-point. The capitalized term 'Man' is, for good or ill, shorter than the word 'humanity', and "Man is a rational animal" has a different sense about it than (I wanted to insert 'the mealy-mouthed' here, which isn't a term I remember actually having used) "humans are rational creatures".
There's probably something Dark-Artish in there somewhere, though it wasn't a conscious invocation thereof.
comment by rwallace · 2011-07-08T17:14:27.194Z · LW(p) · GW(p)
Here's my shot at what it would look like:
"Hey guys, I understand contemporary politics is not considered appropriate for this site, so I've started writing a series of posts on my own blog; here's the link if anyone wants to check it out."
It so happens that, as a libertarian, I sympathize with your agenda (and would happily follow the link if you did the above) but at the same time I don't think you can write a political post while leaving out the politics. (Eliezer managed to write some good posts about politics while leaving out the politics, which was a non-trivial feat in itself; I think that's about the best you can do.) And there are good reasons why we try to avoid politics on Less Wrong. So the best I can suggest is to write what you want to write on a blog where it's appropriate.
Replies from: jsalvatier
↑ comment by jsalvatier · 2011-07-08T17:55:57.498Z · LW(p) · GW(p)
I agree with this.
I have libertarian intuitions (I take Keep Your Identity Small to heart). I have spent quite a while thinking about policy and economics and I think I can discuss it fairly rationally.
Even if you can discuss policy rationally, lots of people have not invested the time into developing that skill. Moreover, it's a much less useful skill than it seems like; coming up with correct answers in policy isn't very useful because it's very difficult to implement the implications of those correct answers (see here and here).
Replies from: jsalvatier
↑ comment by jsalvatier · 2011-07-08T18:12:32.377Z · LW(p) · GW(p)
Also, I used to believe in libertarian Fundamental Rights. However, I came to the conclusion that 'rights' talk is either purely rhetorical (used to move people's emotions) or descriptive (describing options that people generally have).
People can benefit psychologically or materially from themselves or others having certain kinds of options (e.g. 'right to free speech'), but these are not 'fundamental' in any important sense.
The way I came to this view is by thinking about 'where do the definitions of rights come from?'.
comment by Jayson_Virissimo · 2011-07-08T16:08:10.541Z · LW(p) · GW(p)
So I'll try a different approach - instead of giving it a go myself again, I'll simply ask, what do /you/ think a good LW post about liberty, freedom, and fundamental human rights would look like?
The set of posts I would like to see on Less Wrong about that conjunction of concepts is null.
comment by Eugine_Nier · 2011-07-09T04:47:26.002Z · LW(p) · GW(p)
I'd recommend you start by reading the sequence on ethical injunctions, i.e., norms you shouldn't violate even when you've convinced yourself it's a good idea, and use that concept as the basis for your analysis.
Replies from: DataPacRat
↑ comment by DataPacRat · 2011-07-09T05:01:10.829Z · LW(p) · GW(p)
"For the good of the tribe, do not murder even for the good of the tribe." is one of my favoured LW cached thoughts.
Replies from: Leonhart