Open Thread: September 2011
post by Pavitra · 2011-09-03T19:50:59.689Z · LW · GW · Legacy · 447 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional.
447 comments
Comments sorted by top scores.
comment by [deleted] · 2011-09-06T06:15:06.890Z · LW(p) · GW(p)
Wondering vaguely if I'm the only person here who has attempted to sign up for cryonics coverage and been summarily rejected for a basic life insurance plan (I'm transgendered, which automatically makes it very difficult, and have a history of depression, which apparently makes it impossible to get insurance according to the broker I spoke with).
I see a lot of people make arguments (some of them suggesting a hidden true rejection) about why they don't want it, or why it would be bad. I see a lot of people here make arguments for its widespread adoption, and befuddlement at its rejection (the "Life sucks, but at least you die" post) and the difficulties this poses for spreading the message. And I see a few people argue (somewhat mendaciously in my opinion) for its exclusivity or scarcity, arguing that it's otherwise of little to no value if just anyone can get signed up.
What I don't see is a lot of people who'd like to and can't, particularly for reasons of discrimination. For me, the biggest reason I rejected it for a long time was the perception that it was just out of reach of anyone who wasn't very wealthy, and once I learned otherwise, that obstacle dissipated. Now I'm kind of back to feeling like it's that way in practice -- if you're not one of the comparatively small number of people who can pay for it outright, and you belong to a group that's already statistically screwed by the status quo, then it may as well be out of reach for you.
I doubt the average person who has heard of, and rejected cryonics has gone through this specifically, but it certainly suggests some reasons why it might be a tough sell outside the "core communities" who're already well-represented in cryonics. Even if we want it, we can't get it, and the more widely-known that is, the more difficult PR's going to be among people who've already had their opportunities and futures scuppered by the system as it stands.
I'm not saying it's rational, but from where I stand it's very hard to blame someone for cynically dismissing the prospect out of hand, or actively opposing it. IMO, the cryonics boosters either need to acknowledge the role that stuff like this plays in people's relationship to Shiny New Ideas Proposed By Well Educated Financially-Comfortable White Guys From The Bay Area, or just concede that, barring massive systematic reforms in other sectors of society, this will not be an egalitarian technology.
Replies from: lsparrish, handoflixue, Dennis, Jayson_Virissimo↑ comment by lsparrish · 2011-09-07T05:24:15.397Z · LW(p) · GW(p)
I hope you don't mind, I've copied your message to the New Cryonet mailing list. This is an important issue for the cryonics community to discuss. I think there needs to be a system in place for collecting donations and/or interest to pay for equal access for those who can't get life insurance. There are a couple of cases I'm aware of where the community raised enough donations to cover uninsurable individuals for CI suspensions.
Replies from: None↑ comment by [deleted] · 2011-09-07T05:31:28.053Z · LW(p) · GW(p)
I don't mind.
While my personal case is obviously important to me (it is my life after all), it's important to me in a more general sense -- a lot of people are talking on this site about various ways to fix the world or make it better, yet they're often not members of the groups who've had to pay the costs (through exploitation, marginalization or just by being subject to some society-wide bias against them) to get it to where it is now.
↑ comment by handoflixue · 2011-09-07T05:59:51.293Z · LW(p) · GW(p)
I'm both transgendered and diagnosed with depression, and I've had good luck getting insured via Rudi Hoffman. I don't recall the name of the insurance company, and I haven't yet heard the final OK following the medical examination, but I don't foresee any difficulties. I was warned they'll most likely put me down at male rates (feh) despite my being legally female, but I can deal with that even if I don't like it.
Replies from: None↑ comment by [deleted] · 2011-09-07T06:01:29.520Z · LW(p) · GW(p)
Same broker. Did you mention the depression to him explicitly?
Replies from: handoflixue↑ comment by handoflixue · 2011-09-07T19:03:48.024Z · LW(p) · GW(p)
Yes. I'm not taking any medication for it, which might have affected the outcome.
Replies from: None↑ comment by Dennis · 2011-09-07T10:04:57.860Z · LW(p) · GW(p)
If you don't mind me asking - how old are you and how much money do you typically save a year?
Replies from: None↑ comment by [deleted] · 2011-09-07T15:37:32.459Z · LW(p) · GW(p)
Bad assumption, but I'll answer.
I am 28, long-term unemployed, unable to get a bank account due to issues years ago, living on disability payments and now with the support of my domestic partner (which is the main reason my situation isn't actually desperate any longer). We have to keep our finances pretty separate or my income (~7k a year, wholly inadequate to live on by myself anyplace where I could actually do so) goes away.
I keep a budget, I'm pragmatic and savvy enough to make sure our separate finances on paper don't unduly restrict us from living our lives as necessary, but I can't remember the last time I made it to the end of the month with money left over from my benefits check. Sometimes if I'm having a very good month, I'll not need to use my food stamps balance for that cycle, meaning it's there when I need extra later.
Replies from: None↑ comment by [deleted] · 2011-09-07T15:41:22.077Z · LW(p) · GW(p)
And to stave off questions about how I could afford cryonics on this level of income: Life insurance can fall within a nice little window of 50 dollars or less, which could plausibly be taken out of my leisure and clothing budgets (it doesn't consume all of them, but those are the only places in the budget with much wiggle room). Maintaining a membership with the Cryonics Institute that depends on a beneficiary payout of that insurance is something like 120 dollars a year -- even I can find a way to set that aside.
↑ comment by Jayson_Virissimo · 2011-09-07T04:17:14.056Z · LW(p) · GW(p)
What I don't see is a lot of people who'd like to and can't, particularly for reasons of discrimination.
Are you saying you disagree with the probability estimates of insurance companies regarding the risk of death of transgendered people with a history of depression in a given year (or did you mean something else)? I'm willing to consider any arguments you have for that proposition, but, as far as I know, the SPRs used by insurance companies are the gold standard of instrumental rationality, so there is a strong presumption that they are (more) correct and that you (or any human expert, for that matter) are (more) wrong.
Replies from: None↑ comment by [deleted] · 2011-09-07T05:22:32.537Z · LW(p) · GW(p)
I think they don't have any deep understanding of it at all -- the statistics tell the story the insurance adjusters need in order to decide on an investment (well, sort of -- there actually is no really good data about our long-term health outcomes apart from our rates of violent murder, and it's hard to tell what would even constitute a reasonable null hypothesis to default to when so many complicated variables are churned up by the medical procedures we often seek), but that decision and those statistics are not truly value-neutral.
Show me a trans person who hasn't dealt with depression. I'm sure they exist, but it does not appear to be common. Depression is such a common symptom for us because we're a mostly-despised minority in the wider world, and just being coerced into our birth-assigned gender roles is often painful and stressful for us (and it only gets worse as we grow up).
Transgendered people in the US face one-in-eight to one-in-twelve murder rates depending on race and geographic location [edit: this claim is unsourced and should be considered retracted; investigation recorded further downthread attempts to pin down the rate more precisely-Jandila]; we're also something like four times more likely than the national average to be unemployed. From an actuarial perspective, this is clear-cut: bad investment prospect, and that is the purpose of insurance after all.
It's not neutral to the person affected by it though, because those conditions stem from discrimination against trans people -- we aren't murdered at such high rates because of some evopsychological predisposition in cis people to murder us, or because we're inherently less capable of fitting into society and/or being value-creating agents in some hypothetical free market. We aren't unemployed at such vastly high rates because we tend not to have skills or education as a population -- and for many of us that don't, it's not because we couldn't cut it in school or the work we were doing before we transitioned.
But instead of, say, considering me on the basis of my actual health (which according to my practicing physician is excellent), it's a look at the tables. Context is irrelevant in the decision.
Because I'm trans and have a medical history of depression, I am rendered unable to acquire the otherwise-affordable means of obtaining at least some chance of ensuring my future existence past the limits of my body as it stands.
It may be legal, it may be justifiable with recourse to a profit motive, it may not be willfully directed at my person in order to cause me ill -- but it is discrimination. Our heightened rates of murder and unemployment aren't typically personally-directed either (we're targeted for being what we are, not who we are).
It's also still legal to fire me from a job in most jurisdictions for being transgendered, without even having to hide the fact. Does that tacit authorization in any way cast doubt on whether or not such behavior is discriminatory?
I want to live as much as anybody does. I even want to live an arbitrarily long time, and see the world grow into a better place, as much as any other cryonics booster on this site. I don't take comfort in beliefs of a spiritual afterlife when faced with the seeming inevitability of death, I don't consider the fact that dying would hardly make me unique or rarely-disadvantaged among humanity to be any negative influence on seeking to avoid it by whatever plausible means. I don't think immortality will inherently lead to stagnation or regression in society.
And I don't get the choice. There is a choice available, but not to me, because the only available means (like most people in the world, I am not arbitrarily able to afford setting aside 30k or so) is denied out of hand, no further fact-finding necessary. That shiny future we cryo-types are hoping to see, but that will likely take longer than our natural lifespans to reach? Is closed off to me.
There's a whole lot of people like me in the world who, for whatever reason, don't have the financial and social access to the kinds of things that make one rationally able to choose cryo in the first place. I daresay most of them would also reject cryonics because they don't have a rationalist's understanding of death, its implications and what they could do about it -- but rationality training will only solve one of those problems.
Replies from: JoshuaZ, Jayson_Virissimo↑ comment by JoshuaZ · 2011-09-07T05:45:48.344Z · LW(p) · GW(p)
Transgendered people in the US face one-in-eight to one-in-twelve murder rates depending on race and geographic location;
I've seen this claim before, but I've never seen it attached to a reliable source. Do you have a citation for it? The HRC estimates that there are about 15-30 murders of transgendered people each year. If we use the HRC's murder data and deliberately underestimate the percentage of the American population that is trans, taking a lower-bound estimate that 1 in every 3000 people is transgendered (here I'm using the cited Conway study, which gives a lower bound of 1 in 2500, and rounding down a bit more both to make the math easier and to make sure we're very definitely not overcounting; note that Conway's upper bound is 1 in 500), then with a US population of around three hundred million we get a total of about 100,000 trans people in the US. Now assume that all those murders are evenly distributed (which seems really unlikely) and that each person has around 60 years of time in which to get murdered, with a 30/100,000 chance each year; that gives a 1-(1-(30/100,000))^60 chance of getting murdered over a lifetime (60 comes from assuming that they know they are transgendered around age 12 and then have 60 years in which to get murdered). That's around a 2.8% chance. That's really high, but nowhere near 1 in 12, which is more than twice that (8.3%). And this is with 1/12 being the claimed lower bound, with a generously large number of murders yearly, and with a generously small transgender population -- we are still off by a factor of 2.
Note that if one uses, for example, a population estimate based on the middle of Conway's range (1 in 1000 being transgendered), then one gets a result of around 0.6%, which is about 50 percent more likely than for the entire US population but is even farther from the claimed numbers.
Edit: OK. The HRC, on the same webpage but with minimal arithmetic, also claims that 1 in every 1000 murders might be of a transgendered person. If we use this estimate and assume that there are then around 140 transgendered murders yearly, and use the reasonable estimate of 1 in every 1000 people being transgendered, for a total population of around 300,000, then one gets (1-(1-140/300,000)^60), which is around a 3% chance.
Edit: If you use the most generous estimate for the murder total (140) and the smallest population estimate for the transgendered population, then you can get 8%, which is a little under 1/12. Here I'm using my underestimate of Conway's estimate; if one uses Conway's actual lower bound, one gets around 7%. I don't think I need to discuss in detail why this combination of estimates is unlikely to be accurate. It seems clear from these estimates that the murder rate of transgendered individuals is much higher than that of the general population (especially when considered as a relative rate), but it is not likely to be anywhere near 1/12.
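For anyone who wants to check or vary these figures, here is a minimal sketch (in Python) of the lifetime-risk arithmetic being used above, i.e. 1 - (1 - annual_rate)^years. The murder counts and population sizes plugged in below are just the assumptions quoted in the edits, not independently sourced numbers, and the function name is illustrative.
```python
def lifetime_risk(murders_per_year, population, years_at_risk=60):
    """Chance of being murdered at least once over years_at_risk years,
    assuming a constant annual rate of murders_per_year / population."""
    annual_rate = murders_per_year / population
    return 1 - (1 - annual_rate) ** years_at_risk

# ~140 murders/year over a population of ~300,000 (1 in 1000 of 300M)
print(f"{lifetime_risk(140, 300_000):.1%}")  # about 2.8%, i.e. "around 3%"

# ~140 murders/year over a population of ~100,000 (1 in 3000 of 300M)
print(f"{lifetime_risk(140, 100_000):.1%}")  # about 8%
```
Re-running it with other murder counts or population estimates shows how sensitive the final percentage is to those two assumptions.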
Replies from: None↑ comment by [deleted] · 2011-09-07T06:19:35.065Z · LW(p) · GW(p)
You know, I can't find a good source for it now, and it appears to be an apocryphal claim. Wouldn't be the first time I've picked up an oft-quoted but exaggerated statistic about this issue. I'm a bit of a newb, but I'll try to strikethrough that claim. ETA: The Help guide doesn't list that particular markup. Someone throw me a bone?
Carsten Balzer's 2009 study reports that a recent attempt to monitor the rate of reported murders worldwide (the criteria were basically "can be accessed via a newspaper website or some other online source during a Google search, after filtering for duplicates") gave a rate of about one reported murder every three days. Source is here:
http://www.liminalis.de/2009_03/TMM/tmm-englisch/Liminalis-2009-TMM-report2008-2009-en.pdf
Replies from: shokwave↑ comment by shokwave · 2011-09-07T06:54:47.707Z · LW(p) · GW(p)
As far as I'm aware, strikethrough is not available through markdown as it is implemented on this site; to get the strikethrough effect you have to retract your entire post.
Replies from: None, shokwave↑ comment by Jayson_Virissimo · 2011-09-07T06:17:22.723Z · LW(p) · GW(p)
The deleted comment was mine. It was deleted before anyone responded or up/down voted it.
I feared that I had completely misunderstood what Jandila had said and didn't think anyone would miss it. Now that I see that I didn't misunderstand the original comment, I regret having deleted it. Is there any way to recover a deleted comment?
Replies from: JoshuaZ, wedrifid↑ comment by wedrifid · 2011-09-07T06:18:59.742Z · LW(p) · GW(p)
No.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2011-09-07T06:22:50.880Z · LW(p) · GW(p)
Is it even missing from Jandila's inbox?
comment by wnoise · 2011-09-18T18:11:34.791Z · LW(p) · GW(p)
European Philosophers Become Magical Anime Girls
Author Junji Hotta has blessed the world with “Tsundere, Heidegger, and Me”, a tour de force of European philosophy… in a world where all the philosophers are self-conscious anime girls. The books went on sale September 14.
http://aya.shii.org/2011/09/17/european-philosophers-become-magical-anime-girls/
Replies from: pedanterrific, gwern, Bugmaster, NihilCredo, Vaniver↑ comment by pedanterrific · 2011-09-24T03:07:22.773Z · LW(p) · GW(p)
I... that's... I don't...
...
I'll be in my bunk.
↑ comment by NihilCredo · 2011-09-23T02:47:23.062Z · LW(p) · GW(p)
At first I thought "Oh, nice, I'll finally know what Christians felt when that horrible Manga Gospel got published", but then I clicked the link and I just couldn't help having a good laugh. It seems I can only simulate the more chill Christians.
On further reflection, I got my start on literature through multiple shelves full of comic book adaptations of the classics, so I really shouldn't feel superior. Although to be fair those were a little more faithful to the source material - except for Taras Bulba, which quite shocked me later when I got my hands on the non-bowdlerised version.
↑ comment by Vaniver · 2011-09-23T02:01:41.111Z · LW(p) · GW(p)
That picture of Spinoza displeases me on so many levels.
Replies from: pedanterrific↑ comment by pedanterrific · 2011-09-24T02:56:18.662Z · LW(p) · GW(p)
"Desire is the essence of a man." - Baruch Spinoza
comment by Kaj_Sotala · 2011-09-04T09:59:09.450Z · LW(p) · GW(p)
I'm getting increasingly pessimistic about technology.
If we don't get an AI wiping us out or some form of unpleasant brain upload evolution, we'll get hooked by superstimuli and stuff. We don't optimize for well-being, we optimize for what we (think we) want, which are two very different things. (And often, even calling it "optimization" is a stretch.)
Replies from: None, None, wedrifid, Thomas↑ comment by [deleted] · 2011-09-04T10:22:20.808Z · LW(p) · GW(p)
We don't optimize for well-being, we optimize for what we (think we) want, which are two very different things.
Natural selection does not cease operation. Say, for example, that someone invents a box that fully reproduces in every respect the subjective experience of eating and of having eaten by directly stimulating the brain. Dieters would love this device. Here's a device that implements in extreme form the very danger that you fear. In this case, the specific danger is that you will stop eating and die.
So the question is, will the device wipe out the human race? Almost certainly it will not wipe out the entire human race, simply because there are enough people around who would nevertheless choose to eat despite the availability of the device, possibly because they make a conscious decision to do so. These people will be the survivors, and they will reproduce, and their children will have both their values (transmitted culturally) and their genes, and so will probably be particularly resistant to the device.
That's an extreme case. In the actual case, there are doubtless many people who are not adapting well to technological change. They will tend to die out disproportionately and to reproduce disproportionately less.
We have a model of this future in today's addictive drugs. Some people are more resistant to the lure of addictive drugs than others. Some people's lives are destroyed as they pursue the unnatural bliss of drugs, but many people manage to avoid their fate.
Many people have so far managed the trick of pursuing super stimuli without destroying their lives in the process.
Replies from: Eugine_Nier, None, nerzhin, Kaj_Sotala, Iabalka, MBlume↑ comment by Eugine_Nier · 2011-09-07T07:46:24.693Z · LW(p) · GW(p)
Keep in mind, it's possible to evolve to extinction.
Replies from: smk↑ comment by smk · 2011-09-07T14:36:02.272Z · LW(p) · GW(p)
I wish I could upvote that more than once.
Replies from: wedrifid↑ comment by wedrifid · 2011-09-07T14:47:49.867Z · LW(p) · GW(p)
The post or the comment? If the former then you just prompted me to vote it up for you. :)
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-10T17:09:42.413Z · LW(p) · GW(p)
Me too. smk, your wish has been granted.
↑ comment by [deleted] · 2011-09-12T22:46:43.907Z · LW(p) · GW(p)
What struck me about the example in this post is that it's basically genetically equivalent to reliable, easy-to-use contraception.
And now that I think about it, humanity basically is like a giant petri dish where someone dumped some antibiotics. The demographic transition is a temporary affair, a die-off of maladapted genotypes and memeplexes.
↑ comment by nerzhin · 2011-09-05T16:18:55.134Z · LW(p) · GW(p)
It is not at all clear that the people resistant to addictive drugs are reproducing at a higher rate than those who aren't.
Replies from: None↑ comment by [deleted] · 2011-09-05T18:26:35.172Z · LW(p) · GW(p)
If drug addicts do not reproduce at a lower rate than non-addicts, and if this equality persists from generation to generation indefinitely so that the average addict has exactly as many great-great-great-great grandchildren as the average non-addict, then we need to seriously rethink the idea that drug addiction is harmful to the addict.
Harm is a concept that best applies to a living creature. A rock can hardly be harmed. Break a rock in half, and you have two rocks, but there's nothing about the rock that makes this count as a harm. Living creatures can be harmed. What makes something count as a harm in a living creature is that it interferes with biological function. But something is a biological function only if it increases the probability of survival and reproduction (and survival matters only because it is necessary for reproduction - so, in the final analysis, what matters is reproduction). Therefore a harm to a living creature, being something that interferes with biological function, necessarily reduces its probability of reproduction. The flip side of this is that if something does not reduce the probability of reproduction, then it is not a harm.
And if drug addicts are not actually harmed by their addiction, then we must seriously question our intuitions about what is and what is not harm. If they look awful, if they look sick, if they look like walking death, and yet if they reproduce just as robustly as the rest of us, have just as many kids, grandkids, great grand kids, etc, then we need to seriously question our intuition that drugs harm the addict. Once we've done that, then we need to seriously question our specific intuition that getting "hooked by superstimuli and stuff" is as harmful as it looks.
However, I actually think we don't need to seriously question any such things because I really don't think that drug addicts are unharmed by drugs.
Replies from: ArisKatsaris, None, Jack, NancyLebovitz, Oscar_Cunningham↑ comment by ArisKatsaris · 2011-09-05T19:55:42.109Z · LW(p) · GW(p)
So basically you redefine "harm" to mean "whatever impedes reproduction" -- something which grossly does NOT coincide with common human understanding of the word harm. And then using that absolutely different definition, you go on to reach some more ludicrous conclusions, which are however only significant to the extent that people agree to that redefinition.
Which NOBODY does, not even you -- because I bet that if someone tortured you to death but at the same time collected your semen to impregnate a dozen women, I bet you'd still consider this a "harmful" thing.
So what the hell is this whole comment of yours about? Downvoted for sheer nonsense.
Replies from: None↑ comment by [deleted] · 2011-09-05T20:20:57.153Z · LW(p) · GW(p)
So basically you redefine "harm" to mean "whatever impedes reproduction" -- something which grossly does NOT coincide with common human understanding of the word harm.
Take typical examples of harm. Getting cut. Breaking a bone. Losing an eye. What these all have in common is that they adversely impact one or another function.
But our body functions only because it is a product of natural selection, and natural selection occurs on the basis of reproduction. Our skin is closed to protect against infection, and if it is cut we are exposed to infection, which reduces our probability of survival, which reduces our expected number of offspring. If we lose an eye or break a bone, same thing. Break an animal's bone, and you'll reduce the probability that it reproduces. Poke out its eye, and you'll do the same thing. Harms reduce your ability to reproduce.
Do you really not realize that every part of you that has evolved, every tiny little part, every little mechanism of your body which has a function and which has the potential to be harmed, evolved because it helped your ancestors to survive and reproduce and ultimately to produce you? Every little evolved part of you is part of a reproduction machine, whether you acknowledge it or not. And it follows from this as night follows on day, that if you harm any one of those little bits of you that have a function, then you are harming a mechanism whose function is to help you to reproduce, and so you are harming your own ability to reproduce (to the extent that those mechanisms still function in your generation and are not mere functionless leftovers of an earlier generation).
I bet that if someone tortured you to death but at the same time collected your semen to impregnate a dozen women, I bet you'd still consider this a "harmful" thing.
But in that case I am being harmed and helped. The fact that my murderer is simultaneously also helping me to reproduce (in his own way) does not mean that he has not also harmed me. Taken in itself, the murder reduces my own evolved ability to reproduce, and therefore the murder is a harm. That the murderer also committed this other act which helped me through the introduction of a novel method of reproduction (collecting my semen) does not change this.
So what the hell is this whole comment of yours about? Downvoted for sheer nonsense.
I am surprised that it is controversial that harms adversely impact reproduction. Of course they do. Sure, as someone pointed out, if you have already lost your ability to reproduce, then obviously a harm won't impact your ability to reproduce - because it was already impacted. But the harm still would have impacted your ability to reproduce had you not already lost it.
Let's think about it. Imagine some typical harm happening to you, say, somebody throws a stone at you and it hits you in the head. But roll back time. Now roll forward and suppose you duck, avoiding the stone. Why do you duck? You duck because you have evolved to duck. Why have you evolved to duck? Why do we have this duck-when-something-is-flying-at-me instinct cooked into us as firmly as it is? And I do think it's cooked in (I don't think it's learned, though I might be wrong). It's cooked in by natural selection. But natural selection only cares about reproduction. What, did you think natural selection cared about your happiness? No, it cares about whether you reproduce. Your instincts are what they are because they help you reproduce. Therefore you fear the things you do, you object to the things you do, you consider "harms" the things you do, because they adversely affect your ability to reproduce. Whether you think so or not! Because natural selection doesn't care what you think, and natural selection made you.
Replies from: ArisKatsaris, APMason, hairyfigment↑ comment by ArisKatsaris · 2011-09-05T21:29:08.650Z · LW(p) · GW(p)
Do you really not realize that every part of you that has evolved, every tiny little part, every little mechanism of your body which has a function and which has the potential to be harmed, evolved because it helped your ancestors to survive and reproduce and ultimately to produce you?
You keep saying things that are both true and utterly beside the point. You suffer from vast confusions of words.
It's cooked in by natural selection. But natural selection only cares about reproduction.
No, natural selection doesn't have a mind, and DOESN'T CARE ABOUT ANYTHING.
We care. Natural selection made us to care. But natural selection itself doesn't give a damn. Thou Shalt Not Anthropomorphize Natural Selection.
Your instincts are what they are because they help you reproduce.
Again irrelevant.
Because natural selection doesn't care what you think, and natural selection made you.
Gravity also made me, since it held me and my ancestors on the planet, and gravity only cares about mass, by which argument harm must be anything that reduces my mass. Because gravity only cares about mass.
Your argument is nothing but a mishmashed jumble of anthropomorphisms, imaginary duties to the impersonal processes that created us, and term confusions.
You're effectively saying two things: We should redefine "harm" to mean something different than people think when they talk about "harm", because natural selection intended "harm" to mean something else, and when our human intuitions come in conflict with natural selection's "intent" we must obediently bow to our creator's intent.
Well our creator was not just stupid, it was utterly brainless and purposeless and intentionless. So we ought to screw what natural selection thinks, because it DOES NOT THINK.
Replies from: None, sam0345, sam0345↑ comment by [deleted] · 2011-09-05T21:53:25.485Z · LW(p) · GW(p)
No, natural selection doesn't have a mind, and DOESN'T CARE ABOUT ANYTHING.
We care. Natural selection made us to care. But natural selection itself doesn't give a damn. Thou Shalt Not Anthropomorphize Natural Selection.
Now you're just being obtuse. Talk about what natural selection "cares about" is obviously nonliteral and has an obvious meaning. To say that natural selection "cares about" A and does not "care about" B simply means that A, and not B, is a factor in natural selection.
We should redefine "harm" to mean something different than people think when they talk about "harm", because natural selection intended "harm" to mean something else,
That's not what I'm saying at all. I'm saying that the things that we think of as harms (and I listed a few as examples) actually do interfere with reproduction (with obvious but completely understandable exceptions, such as when they happen to people who have already lost their ability to reproduce), whether or not we are aware of this fact. It is not a definition. It is a factual claim about the actual things we consider to be harms.
and when our human intuitions come in conflict with natural selection's "intent" we must obediently bow to our creator's intent.
No, I am not saying that at all. I am saying that we can be wrong about whether something that looks like a harm, is a harm. It's certainly possible. Oh, come on, you have got to admit that it is possible to be wrong about whether someone is harmed.
But please note that I also asserted my confidence that drug addicts really are being harmed. Why am I confident? Because while I accept the theoretical possibility of being wrong, I don't think I actually am wrong. I trust my perception on this matter. When I look at the photographs of meth heads, I have great confidence that they are harmed by their use of the drug. We can in principle be wrong about whether someone is harmed, but in all likelihood, we are not wrong.
And for this reason, I fully expect that if we were to do a multi-generational study of meth-heads, we would find that they don't do all that well in the reproduction department in comparison to a control group.
Replies from: ArisKatsaris, APMason↑ comment by ArisKatsaris · 2011-09-05T22:05:27.817Z · LW(p) · GW(p)
That's not what I'm saying at all. I'm saying that the things that we think of as harms (and I listed a few as examples) actually do interfere with reproduction (with obvious but completely understandable exceptions, such as when they happen to people who have already lost their ability to reproduce),
Most of the things we think of as harms also interfere with one's heartbeat, with one's brainwaves, with one's breathing, with one's digestive system, with one's sleep patterns, etc., since we're our biologies and every biological function is interconnected with the others.
Depending how you define your terms, your whole argument is either obviously false (we don't perceive condoms and contraception to be "harm" though it interferes with reproduction, but we do perceive that being enslaved for breeding purposes is harm, even though it increases the probability of our reproduction) or trivially true (as everything done to us, both good and bad, interferes with every biological function in our bodies).
And for this reason, I fully expect that if we were to do a multi-generational study of meth-heads, we would find that they don't do all that well in the reproduction department in comparison to a control group.
I also don't expect they do well in the life expectancy or health or prosperity department, which is a more customary method of determining well-being and "harm" than "number of descendants" is.
Replies from: None↑ comment by [deleted] · 2011-09-05T22:23:48.228Z · LW(p) · GW(p)
Most of the things we think of as harms also interfere with one's heartbeat, with one's brainwaves, with one's breathing, with one's digestive system, with one's sleep patterns, etc., since we're our biologies and every biological function is interconnected with the others.
Indeed, and for this reason, we might be able to measure harms indirectly by looking at heartbeat. A doctor actually does this sort of thing - he looks at your vital signs to see how you're doing.
Nothing I wrote should be interpreted as excluding these other ways of measuring harms. I was talking about one measure, but that doesn't mean there aren't others.
your whole argument [could be read as] trivially true...
Indeed, and I expected it to be read as trivially true, and the claim I was responding to as trivially false. I wasn't expecting any pushback on such an obvious point. If drug addicts really are harmed by drugs, we should expect this to show up in lower reproduction. It's really quite a trivial point.
I also don't expect they do well in the life expectancy or health or prosperity department, which is a more customary method of determining well-being and "harm" than "number of descendants" is.
Indeed, but these other respects aren't relevant to the claim I was addressing, which concerned the reproduction of drug addicts.
↑ comment by APMason · 2011-09-05T22:01:25.925Z · LW(p) · GW(p)
Okay, you seem to be claiming something and then claiming that you're not, so just answer me this: if meth addicts systematically out-breed not-meth-addicts, is it still possible their meth addiction is harming them?
Replies from: None↑ comment by [deleted] · 2011-09-05T22:30:54.534Z · LW(p) · GW(p)
Recall that I referred to reproduction to several generations, the point being that the meth addicts need not only produce babies but also raise them well enough that the babies grow up to reproduce - and so on, for many generations.
If meth addicts really did better than the rest of us in that department, then yes, I would seriously question whether meth addicts weren't actually a superior breed of human. I mean, really, take this very seriously and extrapolate out many generations. After, I don't know, a hundred generations, the whole planet is populated with meth heads, with just small pockets here and there of non-meth-heads. It's a bit like the Planet of the Apes scenario, but with meth heads instead of apes. In this scenario, yes, I would have to seriously question whether meth heads' addiction is harming them.
But an argument can still be made. For example, it might be that meth-heads, however much better they do than the rest of us, would do even better if they kicked the habit. In that case, they would be better than the rest of us despite their addiction, not because of it. In this case, the addiction would be, possibly, some negative side-effect of an otherwise superior genetic makeup.
Is that enough of an answer or do you want more?
Replies from: APMason↑ comment by APMason · 2011-09-05T22:53:39.056Z · LW(p) · GW(p)
Well, now it seems once again that you're using reproductive success as your criterion for whether or not somebody is being harmed, but you say that's not what you believe. Maybe it's better to go full thought-experiment on this, so:
There is a drug called Reproductene. Taking it causes extreme pain, permanently disables your ability to feel happiness, damages your memory, destroys your imagination, and causes you to have strong cravings for more Reproductene. It also creates a new human being with 50% of your genetic code every time you take it. This human being is created an adult already addicted to Reproductene. After taking a hundred doses of the drug and creating a hundred half-copies of you, you die. Reproductene is in large supply. Is taking Reproductene harmful?
Replies from: smk, None↑ comment by smk · 2011-09-07T14:44:09.774Z · LW(p) · GW(p)
There is a drug called Reproductene. Taking it causes extreme pain, permanently disables your ability to feel happiness, damages your memory, destroys your imagination, and causes you to have strong cravings for more Reproductene.
Is it awful that that makes me burst into giggles?
↑ comment by sam0345 · 2011-09-06T00:22:06.194Z · LW(p) · GW(p)
But natural selection only cares about reproduction.
No, natural selection doesn't have a mind, and DOESN'T CARE ABOUT ANYTHING.
Anthropomorphic terminology for natural selection is standard and well understood. Darwin used it, explicitly making the analogy between conscious breeding and natural selection. Those who use it correctly demonstrate that they read Darwin. Those who fuss about it inappropriately demonstrate that they did not read Darwin.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-06T01:02:27.971Z · LW(p) · GW(p)
Those who fuss about it inappropriately demonstrate that they did not read Darwin.
I fussed over it appropriately, as Constant seemed to keep forgetting that natural selection, being blind, deaf and stupid, isn't obliged to make our intuitions of harm coincide with direct influence on reproductive capacity.
And also, indeed I've not read Darwin (Other significant scientists I've not read: Galileo, Copernicus, Newton). Why does it matter for the purposes of this argument whether I've read Darwin or not? You seem to be playing status games (i.e. "haha, I've read Darwin and you have not") instead of focusing on the merits or demerits of the argument itself.
Replies from: sam0345, sam0345↑ comment by sam0345 · 2011-09-06T02:44:02.022Z · LW(p) · GW(p)
But it will make our intuitions about harm correspond to diminished capability to survive and reproduce well enough. Our intuitions about harm were formed as if for the purpose of ensuring we would avoid impairment to our capability to survive and reproduce, in the sense that our eyes were formed as if for the purpose of seeing.
Darwin then, after explicitly explaining the "as if", made the analogy with a human breeder consciously breeding to a purpose, and then proceeded with anthropomorphic language.
↑ comment by sam0345 · 2011-09-06T02:38:48.789Z · LW(p) · GW(p)
Your argument similarly condemns Darwin.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-06T06:51:00.129Z · LW(p) · GW(p)
Your argument similarly condemns Darwin.
Downvoted. I'm sure it condemns lots of people, that's not an argument against it.
You seem to have misunderstood how this community functions in regards to historical figures of the past. We strive to do better than them.
Replies from: sam0345↑ comment by sam0345 · 2011-09-06T08:24:45.674Z · LW(p) · GW(p)
Words mean what the great used them to mean. If you fail to recognize that meaning in context, you are stupid and ignorant.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-06T08:38:37.401Z · LW(p) · GW(p)
Downvoted for...oh, so many reasons -- insults, generic nastiness, worship of authority, trying to argue about words instead of about meanings, mind-projection fallacy... Your level of discourse is miles beneath what I've come to expect and want from LessWrong in all possible metrics.
↑ comment by sam0345 · 2011-09-06T03:09:03.204Z · LW(p) · GW(p)
Thou Shalt Not Anthropomorphize Natural Selection.
But Darwin did anthropomorphize natural selection:
If man can by patience select variations useful to him, why, under changing and complex conditions of life, should not variations useful to nature's living products often arise, and be preserved or selected? What limit can be put to this power, acting during long ages and rigidly scrutinising the whole constitution, structure, and habits of each creature - favouring the good and rejecting the bad? I can see no limit to this power, in slowly and beautifully adapting each form to the most complex relations of life.
Replies from: shokwave, Jack
↑ comment by shokwave · 2011-09-06T05:09:31.643Z · LW(p) · GW(p)
But Darwin did anthropomorphize natural selection:
So?
Replies from: sam0345↑ comment by sam0345 · 2011-09-06T08:42:21.070Z · LW(p) · GW(p)
So, anthropomorphizing natural selection is scientific terminology with well understood meaning among the educated. When using this terminology to the less educated it is necessary to qualify, explain, and clarify, but such qualification and clarification should not be needed among the intelligent and educated.
Replies from: shokwave↑ comment by shokwave · 2011-09-06T08:50:49.830Z · LW(p) · GW(p)
Among the many things Darwin did, some could be called science. Anthropomorphizing natural evolution is not one of the science-things he did.
Replies from: sam0345↑ comment by sam0345 · 2011-09-06T09:09:55.728Z · LW(p) · GW(p)
Language means what the great use it to mean. You can disapprove of that usage, but misunderstanding the meaning is not a sign of superiority.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-06T09:24:10.479Z · LW(p) · GW(p)
misunderstanding the meaning is not a sign of superiority.
Misunderstanding indeed isn't a sign of superiority, but neither is being misunderstood.
↑ comment by Jack · 2011-09-06T21:15:02.261Z · LW(p) · GW(p)
Metaphorical anthropomorphizing is fine so long as everyone is on the same page about what the metaphor is and it doesn't lead to any equivocations or confusions. Constant's use of anthropomorphizing language seems to have led him to make a very troubling equivocation between what 'harms' genes and what harms human beings. One good strategy for clearing up such confusions is moving away from metaphorical language.
↑ comment by APMason · 2011-09-05T21:21:48.811Z · LW(p) · GW(p)
Your claim that harms adversely impact reproduction is controversial because of the obvious counter examples. X has already lost their ability to reproduce. X may not even care about reproducing. It's still harmful to X to strap him down and torture him. Therefore X can be harmed without adversely affecting his ability to reproduce. This is not to mention the imaginable beings who can undergo suffering (and this is not a claim that harm reduces to suffering alone, but that suffering is a kind of harm), but which were not built by natural selection, and which don't have a biology even remotely related to reproducing. There are also counter examples going the other way, equally obvious. Y doesn't want to have children. She uses contraception. She avoids pregnancy. Her ability to reproduce has been impeded, but she quite clearly has not been harmed (indeed, she has been liberated from the shackles of biology by glorious technology).
I don't think anyone would claim that adverse effects on reproductive ability are completely orthogonal to harm - perhaps a decrease in your ability to reproduce would be more likely to be harmful than not - but to be honest, it doesn't even look to me like the correlation's all that strong. What seems downright obvious, though, is that one does not reduce to the other.
Also, natural selection doesn't care about whether I can reproduce. It's not a caring-type thing. It's an optimisation process which doesn't make use of caring at any point during the process, and I would only care about what natural selection selects if it were important to me to be naturally selected. And why would it be? Producing APMason-like forms is not even close to being my biggest concern.
Replies from: None↑ comment by [deleted] · 2011-09-05T22:17:00.268Z · LW(p) · GW(p)
Your claim that harms adversely impact reproduction is controversial because of the obvious counter examples.
My point is probabilistic (natural selection is about probability) and statistical (in large groups, probabilities become statistical regularities). So let us see how the objections fare.
X has already lost their ability to reproduce.
This doesn't prevent harm from adversely impacting reproduction probabilistically even if we include the groups that lost their ability to reproduce. Take a large and representative sample of humanity, and harm them all in some way - say, reduce their ability to see. While there will be some who have no ability to reproduce that can be lost in the first place, many - the majority - still have that ability. And so we can see an overall reduction of reproduction in that group despite the fact that some subgroup had already lost the ability.
X may not even care about reproducing.
In a large and representative sample of humanity, many care about reproducing. So, same as above.
It's still harmful to X to strap him down and torture him.
I would wager that, on average, someone who has been badly tortured does not fare as well in his later life as someone who has not.
Also, natural selection doesn't care about whether I can reproduce.
Actually it does, in the sense meant. To say that natural selection "cares about" A and does not "care about" B is simply to say that A is a factor and B is not, in determining selection. Obviously, whether you reproduce is a factor in determining whether your genes are selected.
Replies from: APMason↑ comment by APMason · 2011-09-05T22:43:34.301Z · LW(p) · GW(p)
I'm not really sure what you're trying to argue. At first, it seemed fairly obvious that you were trying to say that harm is that which reduces reproduction. Now it seems you're saying that harm is that which, if widely inflicted upon a group, probabilistically reduces that group's rate of reproduction - but I don't want to take that for granted. Maybe I'm misinterpreting you again. I suppose what you could be saying is that harming people, in general, tends to make them have fewer children. I don't in fact think it's obvious that there's any such correlation - having lots of children is not generally a sign of high levels of wealth, health, income, education or freedom - but let's assume that there is. What does that have to do with drug addiction? When we look at a drug addict we don't think "poor guy - probably won't have many kids", nor indeed would we accept a demonstration of his high reproductive capacity as evidence that being a drug addict isn't harmful. So, confused as I am, could you state your position one more time, as clearly as you can?
Replies from: None↑ comment by [deleted] · 2011-09-05T23:00:07.866Z · LW(p) · GW(p)
When we look at a drug addict we don't think "poor guy - probably won't have many kids"
That's right. But I was addressing this point:
It is not at all clear that the people resistant to addictive drugs are reproducing at a higher rate than those who aren't.
Since someone had made this comment, and since I was addressing this comment, then it's neither here nor there whether we typically think about the drug addict's reproduction. That comment concerned that question, so regardless of whether it is something we normally think about, it's something that the commenter was thinking about.
, nor indeed would we accept a demonstration of his high reproductive capacity as evidence that being a drug addict isn't harmful.
Well, it's hard to really gather the relevant evidence. To really sort out cause and effect it's really very weak just to gather evidence from the population like that. So, sure, in the practical world we probably should doubt such evidence unless it was really overwhelming (such as a planet-of-the-apes scenario in which the meth heads take over the planet).
We can easily recognize most harms simply by looking - we can recognize when someone is hurt. E.g., they're bleeding, or they're bruised, or they have trouble walking, or they're disoriented, etc. We can tell. So we don't need to do a vast demographic study to see whether the particular harm in question would reduce the probability of reproduction. Nevertheless, I think we can be sure that it would in fact reduce the probability of reproduction, simply by considering it from an evolutionary standpoint. I don't think there can be any serious doubt that, on average, a recognizable harm does reduce the probability of reproduction (on average, over a large enough population).
I'm quite certain that harms reduce probability of reproduction. This is why I'm quite certain that if drug addicts are harmed by their addiction, this must reduce their probability of reproduction. Obviously, on average. If you come up with crazy scenarios such as one of the commenters did, then in individual cases addiction might lead to enhanced reproduction. For example, if someone puts a gun to your head and tells you they'll shoot you if you don't become an addict, then obviously in this specific situation, becoming an addict will increase your average lifespan and, if you're fertile, will give you more chance to reproduce. But strange hypotheticals aside, I am sure that harms adversely impact reproduction.
Replies from: ArisKatsaris, APMason↑ comment by ArisKatsaris · 2011-09-05T23:06:35.439Z · LW(p) · GW(p)
The problem is that half the time you make a very strong (and obviously false) claim, e.g. that impeding reproduction is a necessity for something to be considered harmful, and the other half time you make a claim as weak (and trivially true) as "what we call harms tend to be negatively correlated with reproductive success".
Replies from: sam0345, APMason↑ comment by sam0345 · 2011-09-06T03:02:28.831Z · LW(p) · GW(p)
The problem is that you are reading Constant looking for Gotchas, rather than reading him for intended meaning. If you read him as if he was Darwin, his meaning is apparent.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-06T07:03:13.243Z · LW(p) · GW(p)
"Apparent" isn't a function with one parameter isApparent(meaning) but rather two: isApparent(reader, meaning) . See illusion of transparency
If "his meaning is apparent" to you, then perhaps you can attempt to answer all of the questions and hypotheticals that Constant either failed to answer or seemed to me to answer in a contradictory manner. Among other things:
- is being enslaved for breeding purposes "harm"?
- Is a woman denying you fertile sex doing you "harm"?
- Are people using contraception harming themselves? Are they aware of this harm?
- If an uFAI reduced us to the intellectual level of cattle to be bred in ranches (but didn't kill us or reduce our reproductive potential) would it be harming us?
- What about APMason's thought experiment regarding Reproductene ?
If Constant's meaning is apparent to you (as it is not apparent to me), and you agree with that meaning, then perhaps you can answer all of the above questions.
Replies from: sam0345↑ comment by sam0345 · 2011-09-06T08:19:44.363Z · LW(p) · GW(p)
is being enslaved for breeding purposes "harm"?
No one is going to enslave a male for breeding purposes, and the once-common practice of enslaving a female for breeding purposes is harm. In the ancestral environment, she will in the long term have fewer offspring, since obviously the offspring of freewomen did better, had more assets invested in them, and so forth.
Is a woman denying you fertile sex doing you "harm"?
No. And neither is someone who turns you down at a job interview doing you harm.
Are people using contraception harming themselves?
Sometimes.
Are they aware of this harm?
When they become cat ladies and start giving their cats birthday parties.
If an uFAI reduced us to the intellectual level of cattle to be bred in ranches (but didn't kill us or reduce our reproductive potential) would it be harming us?
This would require it to first conquer, dominate and rule us, which certainly would harm us. If it subsequently decided to breed us like cattle, this would make its rule slightly less harmful.
What about APMason's thought experiment regarding Reproductene ?
Thought experiments are apt to be contrary to the ancestral environment.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-06T08:30:52.284Z · LW(p) · GW(p)
In the ancestral environment, she will in the long term have fewer offspring, since obviously the offspring of freewomen did better, had more assets invested in them, and so forth.
The words "ancestral environment" were nowhere in the definitions and claims about "harm" that were offered previously in the thread.
If you use the "ancestral environment" context to qualify previous claims about what constitutes harm (by arguing that harm are things that tended to reduce reproductive capacity in the ancestral enviroment), then it follows you ought also use the differences between the ancestral environment and the CURRENT environment (or hypothetical future environments) to figure out how our moral intuitions now about what constitutes harm now is different from what promotes or reduces reproductive capacity now.
Replies from: sam0345↑ comment by sam0345 · 2011-09-06T19:35:56.637Z · LW(p) · GW(p)
The words "ancestral environment" were nowhere in the definitions
My reply assumed I was speaking to someone at least vaguely familiar with ideas, language, and assumptions of natural selection.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-06T19:51:31.730Z · LW(p) · GW(p)
I see you plan to insist on this policy of insults and attempts at knocking the other person's status, whenever you're asked to offer arguments instead.
You and Constant have not been able to produce a coherent, functional philosophy of what constitutes "harm" to people. Repeatedly we give you counterexamples that prove your arguments false, that prove that modern human intuitions about harm coincide little with reproductive success.
To this you only repeat (time and again): "But natural selection produced us, so they necessarily need to." Which is blatantly obvious in the first part, and blatantly false in the second part.
Natural selection produced us, but no they don't need to. With numerous examples and thought experiments we show you how the human concept of "harm" does NOT correspond to reproductive success. That modern people do not consider contraception "harm". That modern people do consider sexual slavery bad.
To this you, sam, have only insults to offer. Your philosophy fails to correspond with reality, and as for the few falsifiable predictions it does offer, it FAILS at them too.
tldr; Go away and don't return until you learn that beliefs should seek to correspond with reality (not with Darwin), and that language should be used to communicate meaning, not insults.
↑ comment by APMason · 2011-09-05T23:09:18.904Z · LW(p) · GW(p)
Okay. I get it now. If you really were just saying that bad things happening to people make it less likely they'll reproduce, well... I don't necessarily agree. I don't have the relevant data. But that's not the position I thought I was arguing against before, I don't think it can be resolved without more information, and I'm not sure it's all that important to resolve it. I'll assume that your answer to the Reproductene question is that, yes, it's harmful, unless you say otherwise.
↑ comment by hairyfigment · 2011-09-06T05:15:25.397Z · LW(p) · GW(p)
And it follows from this as night follows on day, that if you harm any one of those little bits of you that have a function, then you are harming a mechanism whose function is to help you to reproduce, and so you are harming your own ability to reproduce (to the extent that those mechanisms still function in your generation and are not mere functionless leftovers of an earlier generation).
I don't think that argument works, given that evolution never looks forward. It doesn't ask (even metaphorically) whether some bodily feature or desire will continue to serve reproduction in the future. So we could easily wind up caring about goals that no longer serve reproduction. (ETA: This requires our environment to have changed faster than evolution could keep up, which does seem clearly true to some degree.) Calling them "functionless" would just beg the question.
↑ comment by [deleted] · 2011-09-05T19:11:37.048Z · LW(p) · GW(p)
It is possible to harm childless elderly women, yes?
Replies from: None↑ comment by [deleted] · 2011-09-05T19:56:28.401Z · LW(p) · GW(p)
Imagine some harm done to the elderly woman. That elderly woman has, let us say, eyes, and can see. So if her eyes are harmed, then this typically means her sight is reduced or lost. Sight has a biological function. We have evolved eyes because they enhance our biological fitness. Biological fitness is our ability to survive and reproduce. Therefore if this elderly woman's eyes are harmed, then something has happened to her which, had it happened to someone still fertile, would have adversely affected her ability to reproduce. And similarly for other harms done to her.
Harms done to a person are things which would have reduced that person's ability to reproduce, had that person not already lost that ability due, e.g., to other harms, or to age. Harms done to a young fertile person are things which actually do reduce that person's ability to reproduce.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-05T19:59:33.671Z · LW(p) · GW(p)
Harms done to a person are things which would have reduced that person's ability to reproduce,
Are you saying that if someone enslaves you and forces you to breed, that doesn't constitute harm? But a woman not wanting to have fertile sex with you does constitute harm done to you?
Replies from: None↑ comment by [deleted] · 2011-09-05T20:28:34.624Z · LW(p) · GW(p)
Are you saying that if someone enslaves you and forces you to breed, that doesn't constitute harm?
Yes it does constitute a harm because they are depriving you of freedom, which interferes with the function of your evolved ability to choose.
But a woman not wanting to have fertile sex with you does constitute harm done to you?
That is not a harm done to you by someone else but a failure on your part. You've failed to attract the woman, and so you've failed in your reproductive role. Your own body has the function of attracting a mate, and if the mate is not attracted, then it is your body which has failed in its function. I suggest a diet, or possibly getting out more.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-05T21:13:26.841Z · LW(p) · GW(p)
Yes it does constitute a harm because they are depriving you of freedom, which interferes with the function of your evolved ability to choose.
You said that "Harms done to a person are things which would have reduced that person's ability to reproduce". Now you seem to be changing your claim.
That is not a harm done to you by someone else but a failure on your part.
You said that harms done to a person are things that reduce their ability to reproduce. A woman denying you fertile sex certainly reduces your ability to reproduce.
You're not being consistent with your own argument.
Replies from: None↑ comment by [deleted] · 2011-09-05T21:37:04.337Z · LW(p) · GW(p)
You said that "Harms done to a person are things which would have reduced that person's ability to reproduce". Now you seem to be changing your claim.
What I originally wrote was this:
Therefore a harm to a living creature, being something that interferes with biological function, necessarily reduces its probability of reproduction. The flip side of this is that if something does not reduce the probability of reproduction, then it is not a harm.
I am clearly saying that a harm interferes with a biological function. I am not, in that statement, saying that a harm is anything at all that reduces a person's ability to reproduce. And now I am saying:
Yes it does constitute a harm because they are depriving you of freedom, which interferes with the function of your evolved ability to choose.
Notice the very close relationship between "harm ... being something that interferes with biological function..." and "...interferes with the function of..."
I don't seem to be changing my claim at all from the beginning to now. At worst, my wording slipped momentarily in some intervening comment. It really shouldn't be the sort of thing that you seize on for a criticism, if you're seriously trying to rebut my argument. Instead of opportunistically seizing on a specific slip in wording that I made somewhere in the middle of the argument, you should try to address the argument as it is stated from beginning to end.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-05T21:47:17.704Z · LW(p) · GW(p)
Repeatedly throughout the comment I originally responded to, and now again in the text you quote above, you argue this:
a harm to a living creature [...] necessarily reduces its probability of reproduction.
And again you say:
If something does not reduce the probability of reproduction, then it is not a harm.
That's the whole core of your argument. If you are now denying this, SPEAK CLEARLY. Are you claiming that the above two sentences are true, or do you consider them a misstatement or otherwise reject them?
Edited to add:
I am clearly saying that a harm interferes with a biological function.
Since every being is nothing but a group of biological functions, all you are doing is saying that harm to a being is something that interferes with it. Or the equivalent statement of "something that does not interfere with a being does not constitute harm to it".
Which is true but quite obvious, and doesn't justify any of your subsequent points about addiction being or not being harmful -- as addiction quite clearly DOES interfere with people's biological functions.
Replies from: None↑ comment by [deleted] · 2011-09-05T21:59:55.463Z · LW(p) · GW(p)
Since every being is nothing but a group of biological functions,
The ultimate function of which (in each individual) is to enhance probability of reproduction.
all you are doing is saying that harm to a being is something that interferes with it.
And since the ultimate function of each function is to enhance probability of reproduction, then harms can in principle be measured by observing reproduction patterns. It would in practice be difficult to do this, but in principle it can be done.
Or the equivalent statement of "something that does not interfere with a being does not constitute harm to it".
You've completely left out the bit about reproduction, which is key.
Which is true but quite obvious,
It's obvious but also not everything I said.
and doesn't justify any of your subsequent points about addiction being or not being harmful -- as addiction quite clearly DOES interfere with people's biological functions.
I agree with the bit after the dash, and this is why I think it almost certainly reduces the probability of reproduction, as I stated.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-05T22:23:47.534Z · LW(p) · GW(p)
You've completely left out the bit about reproduction, which is key.
I asked you explicitly about whether you believe the sentences that involved reproduction and you didn't answer. Previously you seemed to be denying them. Now you seem to be reaffirming them -- but never clearly enough.
You don't seem to want to convey clarity, and I don't have the time for this.
And since the ultimate function of each function is to enhance probability of reproduction,
By "ultimate" you just mean "something that had to get partially optimized by natural processes or we wouldn't be here discussing this".
And since the ultimate function of each function is to enhance probability of reproduction, then harms can in principle be measured by observing reproduction patterns.
We're Godshatter, which means that our minds don't consider "harm" or "good" to be a single thing, not even reproductive success.
If you are saying that theoretically our minds (or an artificial mind) could have been made to consider a single thing to be harm, namely failure to reproduce, then that's obviously true. An artificial mind could be built to treat our reproductive success as the only metric for our well-being. One of the most obvious ways for an uFAI to destroy us: it can remove most of our brains and any desire other than our reproductive drive, and keep us packed up as cattle to be bred (while doing us no "harm" by your definition, as we'd be enjoying reproductive success as its cattle).
But if you're saying that what actual real-life current-day people consider "harm" is only what impedes their reproductive success, that's obviously and unquestionably false. If you truly believe that, then you must bite the bullet and accept that the above-mentioned cattle scenario doesn't constitute harm.
Once again your words reduce to either trivially true or obviously false.
↑ comment by Jack · 2011-09-06T14:43:51.710Z · LW(p) · GW(p)
What we want for ourselves and our descendants is obviously not (or at least no longer) the same as reproductive fitness. The obvious example is contraception. It seems plausible that, given environmental factors such as contraception and the modern welfare state, drug addicts might reproduce at an above-average rate. That the human race might perpetuate itself under these conditions is no reason to rejoice about a future in which humanity is made up of crack-heads.
Replies from: None↑ comment by [deleted] · 2011-09-06T19:37:13.770Z · LW(p) · GW(p)
What we want for ourselves and our descendants is obviously not (or at least no longer) the same as reproductive fitness.
I don't think any non-human animal ever (consciously) wanted reproductive fitness as such, this being a long-term, multi-generational desire that humans may be the only animal capable of caring about, and I don't think all that many humans do. On second thought, maybe they do, in the sense that many humans want children and grandchildren. But the things we typically do want by and large enhance our reproductive fitness, so reproductive fitness is, in practice, a good proxy for what we want (though it might be more accurate to say that what we want is a good proxy for reproductive fitness). Good music, for example, comes from good musicians, and a high musical ability is thought to be closely related with sexual selection. Same with athletic ability, intelligence, empathy, reliability, trustworthiness, good parenting, and so on and so forth. Pretty much everything we rejoice in about ourselves is closely tied one way or another with reproductive fitness.
no reason to rejoice about a future in which humanity is made up of crack-heads.
When I survey the traits that have been and currently still are correlated in the human species with greater reproductive success, I find admirable traits such as those I've listed above. You are speculating that at some future date, all these other traits will be swamped by a tendency of welfare crack-heads not to use contraceptives. I don't have any proof that your scenario is improbable, but still I think it is.
Replies from: Jack, ArisKatsaris↑ comment by Jack · 2011-09-06T21:02:10.804Z · LW(p) · GW(p)
The problem with your comments on this subject isn't that you dispute the claim that drug addicts are reproducing at a higher than average rate. No one in this thread seems prepared to discuss, in detail, that object-level question. The problem is your claim in the comment I replied to that if drug use did not undermine reproduction we should reconsider whether or not drug use is harmful. The reason others are downvoting you is that regardless of whether or not drug addiction is conducive to reproduction, we still have lots of good reasons for considering it harmful and not wanting people to be addicted to drugs.
As for non-admirable traits being selected for, there is an obvious example.
↑ comment by ArisKatsaris · 2011-09-06T20:08:10.842Z · LW(p) · GW(p)
Good music, for example, comes from good musicians, and a high musical ability is thought to be closely related with sexual selection
Castrati: "In the 1720s and 1730s, at the height of the craze for these voices, it has been estimated that upwards of 4,000 boys were castrated annually in the service of art."
Pretty much everything we rejoice in about ourselves is closely tied one way or another with reproductive fitness.
Blessed Virgin Mary
The Virgin Queen
The Maid of Orleans
Plus all the numerous male and female religious priesthoods in numerous religions (from Christianity to Buddhism) that take vows of sexual abstinence and are nonetheless honored for doing so.
Replies from: grouchymusicologist↑ comment by grouchymusicologist · 2011-09-06T22:44:28.876Z · LW(p) · GW(p)
Castrati: "In the 1720s and 1730s, at the height of the craze for these voices, it has been estimated that upwards of 4,000 boys were castrated annually in the service of art."
I don't think this in any way undercuts your point, but it is very interesting that the most successful castrati were in fact insanely popular sex symbols during the height of their popularity. They had love affairs and were not infrequently implicated in scandal of one kind or another, although I don't think they ever got married. Farinelli, the greatest of the great, had rockstar popularity for decades in the 18th century, which came along with sexual status to match. As you might expect, all this business has been written about pretty extensively by historical musicologists in the age of gender studies.
As I've said elsewhere, our enjoyment of music seems to be wrapped up in multiple (at times competing) cognitive faculties, and is clearly not reducible to mere sexual/status display, although that is surely a component of it. (It's not clear to me whether or not Constant was claiming in the grandparent that that is the main or only reason we enjoy music.)
↑ comment by NancyLebovitz · 2011-09-05T18:36:39.181Z · LW(p) · GW(p)
Vervet monkeys have drinking patterns similar to humans
In particular, it's interesting that there are competent leaders who are pretty serious drinkers. Is it possible that addiction is having too much of a trait which is useful in moderation?
↑ comment by Oscar_Cunningham · 2011-09-05T21:02:40.546Z · LW(p) · GW(p)
Have you read http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/ ?
Replies from: None↑ comment by Kaj_Sotala · 2011-09-04T16:31:05.276Z · LW(p) · GW(p)
Sure, I don't think humanity is in any danger of being destroyed by conventional technologies, and I'm pretty sure the Singularity will happen - in one form or another - way before then. But there may very well be a lot of suffering on the way.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-10T17:11:30.759Z · LW(p) · GW(p)
Have you checked out CFAI? It's like CEV but with less of an emphasis on humans. I really don't like humans and would rather only deal with them via implicit meta-level 'get information about morality from your environment' means, which is more explicit in CFAI than in CEV.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2011-09-10T19:17:52.385Z · LW(p) · GW(p)
I've read part of it, though not all. (I'm a bit confused as to how your comment relates to mine.)
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-10T22:57:47.963Z · LW(p) · GW(p)
CEV takes more of an economic perspective, where agent-extrapolations make deals with each other. The "good" agent-extrapolations might win out in the end (due to having a more-timeless discount rate, say), but there might be a lot of suffering along the way. CFAI, on the other hand, takes a less deal-centric perspective, where the AI is more directly supposed to reason everything through from first principles, which can avoid predictably-stupid-in-retrospect agents getting much of the future's pie, so to speak. So I'm more afraid of CEV-like thinking than CFAI-like thinking, even though both are scary, because I am more afraid of humans being evil than I am of not getting what I want. This may or may not overlap at all with your concerns.
(The difference isn't necessarily whether or not they converge on the same policy, it might also be how quickly they converge on that policy. CFAI seems like it'd converge on justifiedness more quickly, but maybe not.)
↑ comment by MBlume · 2011-09-10T17:19:57.177Z · LW(p) · GW(p)
I feel like this needs to go in some high-level FAQ somewhere:
Genetic natural selection is done operating on people. There is no need to speculate about its future effects.
Genetic natural selection takes tens of thousands of years to operate, and it is incredibly unlikely, short of some planet-wide catastrophe that sets back technology thousands of years, that it gets tens of thousands of years to operate without our either starting to seriously re-engineer our own genomes, or abandoning the genetic game altogether.
Replies from: Jack, None, lessdazed, sam0345↑ comment by [deleted] · 2011-09-19T11:24:30.293Z · LW(p) · GW(p)
I feel like this needs to go in some high-level FAQ somewhere:
Genetic natural selection is done operating on people. There is no need to speculate about its future effects.
I think it is pretty obvious that natural selection is, as we speak, having a massive effect on the frequencies of various alleles and consequently phenotypes. Among other things, we are currently experiencing a massive genetic pruning, comparable in scope to the Black Death, in the form of exposure to modern contraceptives (as I mention elsewhere).
Genetic natural selection takes tens of thousands of years to operate, and it is incredibly unlikely, short of some planet-wide catastrophe that sets back technology thousands of years, that it gets tens of thousands of years to operate without our either starting to seriously re-engineer our own genomes, or abandoning the genetic game all together.
In this context, not really, especially considering that I find, among other things, Henry Harpending's and Gregory Cochran's arguments convincing. Unless you are a firm believer in the Singularity being here before 2040, there is still time for marked changes in what genetically constitutes the "average" human.
I'm very interested in your reasoning, though, since you seem to at least be familiar with the arguments in favour of recent evolutionary change brought about by the advent of agriculture and civilization; I may be missing something here. :)
↑ comment by lessdazed · 2011-09-10T21:48:48.260Z · LW(p) · GW(p)
There is no need to speculate about its future effects.
This could be right or wrong, but is ambiguous.
Genetic natural selection takes tens of thousands of years to operate
Likewise, but probably wrong. Still ambiguous; you should put it differently.
↑ comment by sam0345 · 2011-09-10T21:21:13.338Z · LW(p) · GW(p)
Genetic natural selection is done operating on people. There is no need to speculate about its future effects.
We will not be done with genetic natural selection till we give up the flesh, and perhaps not even then.
In populations recently exposed to the modern diet, and ill adapted to it, there has been significant genetic adaptation in a couple of generations. Byars, S. G.; Ewbank, D.; Govindaraju, D. R.; Stearns, S. C. (2009). "Evolution in Health and Medicine Sackler Colloquium: Natural selection in a contemporary human population". Proceedings of the National Academy of Sciences 107: 1787. Similarly for populations recently exposed to alcohol.
There have been significant and substantial changes in skeletal structure over the last ten thousand years. One can reasonably define the white race in such a way that it is only ten thousand years old and everyone before then was non-white, though we do not know what skin color those earlier people actually had. In Jamaica, we are arguably seeing sympatric race formation, as the upper classes develop a significant genetic difference from the lower classes.
Ashkenazi Jews have evolved very substantial genetic differences from Sephardic Jews since the Crusades, even though they have a single culture and no one discriminates between them; they have become two quite different races: a single culture, a single folk, yet two races.
Replies from: JoshuaZ, None, None, lessdazed, None↑ comment by JoshuaZ · 2011-09-19T13:15:34.328Z · LW(p) · GW(p)
Ashkenazi Jews have evolved very substantial genetic differences from Sephardic Jews since the Crusades, even though they have a single culture and no one discriminates between them; they have become two quite different races: a single culture, a single folk, yet two races.
There are a lot of cultural differences. Different prayers, different foods, different accents, different values, different humor, different cultural history.
There is also discrimination between them, if one looks at the right people, who are aware of what they are looking for. This is more akin to how most Americans can't tell the difference between various East Asian populations.
The genetic evidence also suggests that much of the difference between the Sephardim and Ashkenazim arose from the Ashkenazim getting an influx of European genetic material, not from evolution. See this paper for example (although to be clear, Ashkenazim do not genetically look very European compared to most Europeans).
Replies from: None↑ comment by [deleted] · 2011-09-19T18:13:23.689Z · LW(p) · GW(p)
(although to be clear, Ashkenazim do not genetically look very European compared to most Europeans)
Really depends on who you compare them to.
Naturally, American Whites, with their predominantly Northern European (German, English, Irish, Scottish) origins, aren't really that close to unmixed Ashkenazim. But in nearly every study I've run into, they are, for example, closer to Greeks and Italians than those Southern Europeans are to Austrians, the British, or Russians.
Replies from: None↑ comment by [deleted] · 2011-09-19T18:33:48.747Z · LW(p) · GW(p)
In any case, regardless of their genetics, Ashkenazi Jews are European because:
- They basically do come from Europe (in the geographic sense of where they really became a people distinct from other Jews, complete with their own High German-derived language)
- To a first approximation, they think of themselves, and others think of them, as European-derived/White or at the very least Western nearly anywhere in the world they live (be it France, South Africa, the US or even, rather interestingly, Israel).
- Extensive memeplex exchange with the Christian peoples of Europe.
- High rates of intermarriage in the 19th and 20th centuries.
If history had gone a bit differently and there were a Yiddish-speaking Ashkenazi state somewhere near Poland/Ukraine/Belorussia, geneticists would say that it's an interesting example of an Eastern European population being genetically closer to Southern Europeans than to their neighbours, but they wouldn't really break them out as "genetically non-European" the way, say, some Roma populations are.
Of course, Ashkenazi identity is now somewhat tied to Israel, which is a homeland for all Jews. But even if this evolves into a truly new Jewish Middle Eastern identity, these are still quibbles about geography, religion (quick question: do you think Turks would ever have been considered non-European had they been predominantly Christian?) and culture that have little to do with the genetic reality (though those things do correlate in many circumstances).
↑ comment by [deleted] · 2011-09-19T13:38:59.440Z · LW(p) · GW(p)
Ashkenazic and Sephardic Jews were first identified as distinct less than a millennium ago. That is fast. It took tens of thousands of years for Europeans to diverge from Africans. If Ashkenazim look different from Sephardim, it's because people living in northern Europe are more likely to marry northern Europeans, and people living in the Near East are more likely to marry Arabs.
Replies from: None↑ comment by [deleted] · 2011-09-19T18:03:23.429Z · LW(p) · GW(p)
Ashkenazi Jews seem to be about 40 to 50% Southern European; the second-largest component is basically classical Near Eastern (think Druze or Syrians), and they (according to the current interpretation of the genetic data) seem to have been this way for centuries.
It's rather surprising that they seem to have Southern European admixture while there are only a few extra percent of Eastern and Northern admixture; among European peoples they are closest to Italians. Perhaps the mixture stabilised in the Late Roman Empire, after Europe became less cosmopolitan? Perhaps both Italians and Jews had much in common to start with due to ancient Greek admixture (they are remarkably close to modern Greeks as well)? The Germanic migrations of the 5th century, by which time there are already some indications of Jews settling in what is now Germany, might be another common imprint.
Recent admixture, occurring after Jews and Christians started integrating, seems to have gone mostly into the gentile population, though naturally, with America's massive out-marriage rate and the large population of marginally Jewish Soviet immigrants in Israel, this has probably changed recently.
As to the Sephardim, it depends on how the word is used. In the narrow sense of "Spanish Jews" I don't think the differences are that pronounced (though I must admit I don't recall much of the data regarding them). But if the term is taken to include Mizrahi Jews, as it often is, then the differences are rather significant, and yes, they do seem to have non-negligible Arab admixture, or rather a greater similarity to Arabs (someone really needs to recover some Jewish DNA from the Roman and Hellenic periods; lots of interesting stuff might be found).
↑ comment by lessdazed · 2011-09-10T21:29:26.087Z · LW(p) · GW(p)
Ashkenazi Jews have evolved very substantial genetic differences from Sephardic Jews since the Crusades, even though they have a single culture
Efshar punkt farkert.
Replies from: None↑ comment by [deleted] · 2011-09-19T17:17:58.346Z · LW(p) · GW(p)
No love on LW for Yiddish?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-19T17:36:19.890Z · LW(p) · GW(p)
I figured it was something in Yiddish, but I couldn't translate it. The first word looks like it might be "even" or "although", just going off the Hebrew equivalent. Unfortunately, I can't quite recognize the later words, and Google Translate only handles Yiddish written in Hebrew characters, and I don't know which Hebrew characters correspond to what here.
Edit: I would think from context and potential guesses that it is a point about how the Ashkenaz and Sephard have different languages.
↑ comment by [deleted] · 2011-09-19T11:35:12.025Z · LW(p) · GW(p)
There have been significant and substantial changes in skeletal structure over the last ten thousand years. One can reasonably define the white race in such a way that it is only ten thousand years old, that everyone before then was non white, not that we know what skin color they were.
Modern races, or rather population groups, though they have some deep roots due to archaic admixture, are probably mostly rather young. For example, the West African type seems to have arisen only with tropical agriculture a few thousand years ago, and has expanded its range in basically historical times (one of the reasons that, until quite recently, people liked to think of the Khoisan as something like ur-humans).
↑ comment by [deleted] · 2011-09-04T14:38:24.948Z · LW(p) · GW(p)
.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2011-09-04T16:56:10.134Z · LW(p) · GW(p)
Lots of things, but some off the top of my head:
Communication technologies probably top the list. Sure, the Internet has given birth to lots of great communities, like the one where I'm typing this comment. But it has also created a hugely polarized environment. (See the picture on page 4 of this study.) It's ever easier to follow your biases and only read the opinions of people who agree with you, and to think that anyone who disagrees is stupid or evil or both. On one hand, it's great that people can withdraw to their own subcultures where they feel comfortable, but the groupthink that this allows...
"Television is the first truly democratic culture - the first culture available to everybody and entirely governed by what the people want. The most terrifying thing is what people do want." -- Clive Barnes. That's even more true for the Internet.
Also, it's getting easier and easier to work, study and live for weeks without talking to anyone else than the grocery store clerk. I don't think that's a particularly good thing from a mental health perspective.
Replies from: Alex_Altair, Vaniver, kurokikaze↑ comment by Alex_Altair · 2011-09-05T19:31:06.768Z · LW(p) · GW(p)
I gain great confidence from the principle that rational people win, on average. It is rational people that make the world, and if it gets to be something we don't want, we change it. The only real threat is rationalists with different utility functions (e.g. Quirrelmort).
(Disclaimer: please don't take this as a promotion of an "us/them" dichotomy.)
↑ comment by Vaniver · 2011-09-04T21:31:17.415Z · LW(p) · GW(p)
Also, it's getting easier and easier to work, study and live for weeks without talking to anyone else than the grocery store clerk. I don't think that's a particularly good thing from a mental health perspective.
Talking with your mouth, or talking? Because it's not clear to me that talking online is significantly worse than talking in person at sustaining mental health. I suspect getting a girlfriend/boyfriend will do more for your mental health and social satisfaction than interacting with people face-to-face more.
Replies from: Kaj_Sotala, SilasBarta, luminosity↑ comment by Kaj_Sotala · 2011-09-05T12:34:04.198Z · LW(p) · GW(p)
Personally I find that if I don't hang out with people in real life every 2-4 days I will get increasingly lethargic and incapable of getting anything done. To what degree this generalizes is another matter.
Replies from: Raemon, None↑ comment by Raemon · 2011-09-06T23:08:20.939Z · LW(p) · GW(p)
I find the same thing as Kaj. I've started literally perceiving myself as having that set of "needs" bars in the Sims. Bladder bar gets empty, and I need to use the toilet or I'll be uncomfortable. Sleep bar gets low, and I'll be tired until I get enough. Social bar (face-to-face time) gets low, and I'll feel bleah until I get some face-to-face time.
The good news is that I've noticed this, become able to distinguish between "not enough facetime Bleah" and other types of Bleah, and then make sure to get face-to-face time when I need it.
Replies from: None↑ comment by [deleted] · 2011-09-06T21:36:06.008Z · LW(p) · GW(p)
Very much the same way. The internet has been a mixed blessing -- it allowed me to have the life I have at all, way back when, but now it's also a massive hook for akrasia and encourages sub-optimal use of free time. I'm still trying to get that under control.
↑ comment by SilasBarta · 2011-09-06T17:19:30.763Z · LW(p) · GW(p)
If you mean a face-to-face bf/gf, you're not actually disagreeing with Kaj. Also, I concur with his points about social deprivation leading to lethargy, based on personal experience.
↑ comment by luminosity · 2011-09-04T23:27:35.195Z · LW(p) · GW(p)
I've been working from home for a year now. I don't get out and see people often, and my family live far away, so I don't have many opportunities to see people in person. The exception is that my brother is staying with me while he studies at university. There have been a few periods, however, where he's been away, staying with our parents or off at a different university in a different state. I have a few friends I talk with regularly online through IM, and it helps, but the periods when my brother was away were still very difficult, and I was getting very stressed towards the end, even though we don't interact all that much on a day-to-day basis, and even though I've always been much more tolerant of loneliness, even thriving on it, than most people I know.
Maybe video chatting with people would be an adequate substitute? I haven't tried that, but my anecdote is that IM / talking online alleviates some of the stress, but comes nowhere near eliminating it.
↑ comment by kurokikaze · 2011-09-19T16:40:19.001Z · LW(p) · GW(p)
Sorry, but isn't this a criticism of the inappropriate use of technologies rather than of the technologies themselves?
Replies from: Erebus↑ comment by Erebus · 2011-09-19T17:42:26.270Z · LW(p) · GW(p)
What would be the point of criticizing technology on the basis of its appropriate use?
Technologies do not exist in a vacuum, and even if they did, there'd be nobody around to use them. Thus restricting attention to only the "technology itself" is bound to miss the point of the criticism of technology. When considering the potential effects of future technology, we need to take into account how the technologies will be used, and it is certainly reasonable to believe that some technologies have been and will be used to cause more harm than good. That a critical argument takes into account the relevant features of the society that uses the technology is not a flaw of the argument, but rather the opposite.
Replies from: kurokikaze↑ comment by kurokikaze · 2011-09-20T13:59:12.742Z · LW(p) · GW(p)
No, I'm not talking about the basis for criticizing technology, but more about the actual target of the criticism. Disclaimer: there certainly are technologies that can do more harm than good. Here I will concentrate on communications, as you picked it as one of the top problematic technologies.
For me, it all boils down to the constructive side of criticism: should we change the technologies or the way we use them? Because I think that in the first case, new technologies will be used with the same drawbacks for humans as the old ones. In the second case, successful usage patterns can be applied to new technologies as well.
For example, rather than limit the usage of communication technologies or change the communication technology itself, maybe we should focus on how people use it. Make television more social. Or make going out with other people easier and more fun. Promote social interaction and activities using existing technologies, not relying on some magic future technology that will solve the existing problems. I think building the solution around existing technologies is faster than waiting for new ones.
Surely, there are a technology side and a social/cultural side to the problem. But we cannot change either of these quickly; we can only expand one to help the other. For example, on one programming site, around two years after its creation, people started to organize meetups in local places, much like LW meetups. Then, a year later, another group on the site organized soccer games between different site users. People liked it, and it didn't take much time, because they were building around existing stuff.
Also, sorry for my English. It's not my main language.
Replies from: Erebus↑ comment by wedrifid · 2011-09-04T10:17:43.875Z · LW(p) · GW(p)
We don't optimize for well-being, we optimize for what we (think we) want, which are two very different things. (And often, even calling it "optimization" is a stretch.)
You think we optimize for what we think we want? That's a stretch in itself. ;)
(Totally agree with what you are saying!)
↑ comment by Thomas · 2011-09-05T17:50:02.654Z · LW(p) · GW(p)
I, on the contrary, remain a techno-optimist, even more so.
It's kind of sad that so many clever people here are losing their confidence in technological progress. Well, maybe not sad, but it certainly means that they are not onto something big themselves.
comment by knb · 2011-09-08T06:24:06.164Z · LW(p) · GW(p)
Paul Graham's essay "Why Nerds are Unpopular" has been mentioned a few times on LW, in a very positive way.
My initial reaction upon reading that essay a couple years ago was also very positive. However, upon rereading it, I realized it doesn't really fit with my observations or what I know from social science research at all. I want to write a top level post about why I disagree with Graham, but I'm not really sure if that would be on-topic enough for a top-level.
So I guess I'll just put this to a vote. Please upvote this if you think I should write a top-level post.
Replies from: knb, lessdazed, knb, knb
comment by ahartell · 2011-09-07T19:51:10.157Z · LW(p) · GW(p)
Would it be really stupid to use Harry James Potter-Evans-Verres as the fictional character that had an impact on me for my CommonApp essay? On one hand it seems right since he introduced me to lesswrong which has certainly had a big effect but on the other hand... it's... you know... fanfiction.
Replies from: None, thomblake, imaxwell, Alicorn↑ comment by [deleted] · 2011-09-19T13:45:43.581Z · LW(p) · GW(p)
You can do it. It's good countersignaling. But you have to be absurdly careful about writing quality. It's your job to convey to a skeptical audience that fanfiction can be transformative. You have to be absolutely brutal in avoiding language that signals immaturity -- or, better, find an editor who can be absolutely brutal to you.
My M.O., back in my college-essay days, was to read a New Yorker before sitting down to write. Inhale the style. Better yet, find some essays by Gene Weingarten, the modern master of long-form narrative journalism. Imagine what Gene Weingarten could do with HP:MOR. Then try to do it.
Replies from: Tripitaka↑ comment by Tripitaka · 2011-09-19T14:38:09.864Z · LW(p) · GW(p)
Well, he already did! ---> Here you can help him with his actual text.
↑ comment by thomblake · 2011-09-07T21:27:38.279Z · LW(p) · GW(p)
In general, honesty is the best policy. If you really were influenced to great things by HJPEV, explain it well and it should go over well. If the admissions folks are going to say "This well-written and inspiring essay is about fanfiction" and thus throw it in the garbage, it could just as well have been thrown away for the room's lighting or what they had for breakfast.
Replies from: gwern, None↑ comment by gwern · 2011-09-08T00:09:02.163Z · LW(p) · GW(p)
If you really were influenced to great things by HJPEV, explain it well and it should go over well.
This is important. Deliberately choosing to write about fanfiction is a high-risk move, and so is high-status if you pull it off well! But you might just face-plant. (You don't try out unpracticed tricks in front of a girl you want to impress.)
Or to put it another way:
1. a high-status fictional character like Hamlet treated mediocrely is a mainstream submission
2. a low-status fictional character like Bella Swan treated mediocrely is a contrarian submission, and penalized accordingly - the intellectual equivalent of misspelling "it's/its"
3. a high-status fictional character like Ahab treated well is a conspicuous mainstream signal
4. a low-status fictional character like MoR!Harry treated well is a meta-contrarian submission, and thus a conspicuous contrarian signal
All else equal, 3<4.
Replies from: shokwave, Normal_Anomaly, thomblake, Will_Newsome↑ comment by shokwave · 2011-09-08T00:26:42.077Z · LW(p) · GW(p)
Also, recognising a low-status character as a low-status character is an important part of 4. Trying to pretend it's high status ("the author is an AI researcher, it is the most reviewed fanfiction ever, it's better than Rowling's Harry Potter", etc) will usually backfire.
Honestly, I'd start by baldly and confidently acknowledging that characters from fanfiction about popular books are low-status, and that you are going to do your piece on him anyway.
↑ comment by Normal_Anomaly · 2011-09-08T00:33:28.207Z · LW(p) · GW(p)
As someone currently going through this process (I just wrote the same essay about Terry Pratchett's character Tiffany Aching), the impression I get is that it's very important to be unique: if your essay is the same as 200 others, it will be penalized as much as if it is poorly written. Using a rationalist fanfiction character, if you can write it well and have the guts to write it sincerely (but not too sincerely, or you'll signal naivete), is a good idea. If you don't want to deal with a fanfiction character, write about some other rationalist. Either way, don't mention lesswrong. And please don't write about Howard Roark. I enjoyed The Fountainhead, but it's worse signaling than fanfiction. You'll look like a shallow thinker who falls for propaganda, and most universities lean to the liberal end of the spectrum.
Important note: I'm applying to highly selective colleges with student bodies that think of themselves as contrarian or meta-contrarian. If you aren't, this advice may not apply.
↑ comment by thomblake · 2011-09-09T14:06:06.569Z · LW(p) · GW(p)
I stand by my statement.
If the essay asked about "the fictional character that had the greatest impact on you" or something to that effect and that person is HJPEV, then that's what you should write about. Otherwise, you'd be lying, and apart from the general wrongness of lying, you're going to write better about something that's true.
Replies from: gwern↑ comment by gwern · 2011-09-09T14:12:50.266Z · LW(p) · GW(p)
I stand by my statement.
I didn't disagree.
Replies from: ahartell↑ comment by ahartell · 2011-09-09T19:37:34.667Z · LW(p) · GW(p)
Thank you by the way. Your post convinced me to write about him and illuminated the best way to handle it.
Replies from: gwern↑ comment by Will_Newsome · 2011-09-10T17:14:38.204Z · LW(p) · GW(p)
Has anyone done a thorough social psychological game theoretic analysis of college admissions? Seems right up your alley, gwern.
Replies from: gwern↑ comment by gwern · 2011-09-10T18:31:03.955Z · LW(p) · GW(p)
I only play a deep thinker online, I don't think I could write such a thing in a way that isn't merely extensive plagiarism of, say, Steve Sailer.
(That said, reading over my comment, I missed an opportunity: I should have pointed out that the reason 4>3 is that it is an expensive signal, in the sense that attempting to do #4 but only achieving a #2 exposes one to considerable punishment, whereas one doesn't run such a risk with #1 and #3, and expensive signals are, of course, the most credible signals.)
↑ comment by [deleted] · 2011-09-07T21:49:53.880Z · LW(p) · GW(p)
The other way to look at the situation is that the admissions folks are looking for a very specific essay. That essay requires you to identify yourself with a character from some postmodern South American novel (or possibly Elie Wiesel in "Night") and certainly has no place in it for fan fiction.
Replies from: Kevin, None↑ comment by imaxwell · 2011-09-14T07:24:54.971Z · LW(p) · GW(p)
Hmm... I'm not sure. I'd take the word of someone with experience on an admissions committee, if you can get it.
If you do it, I think you'd be better off talking just a little about the character and much more about the community you found. Writing to the prompt is not really important for this sort of thing. (Usually one of the prompts is pretty much "Other," confirming that.)
↑ comment by Alicorn · 2011-09-07T20:15:40.785Z · LW(p) · GW(p)
What's your second choice?
Replies from: ahartell↑ comment by ahartell · 2011-09-07T20:27:41.636Z · LW(p) · GW(p)
I can't think of any other fictional characters who had a significant impact on me, so if I don't use him I would write about one of the other prompts. Only I can't think of anything for the other choices, and when I saw the fictional character option he immediately jumped to mind.
Replies from: None↑ comment by [deleted] · 2011-09-07T20:31:41.923Z · LW(p) · GW(p)
Howard Roark is usually a shoo-in for these "which fictional character" essays.
EDIT: This is in no way an endorsement of Ayn Rand. She has severe and myriad issues.
Replies from: ahartell↑ comment by ahartell · 2011-09-07T20:38:08.195Z · LW(p) · GW(p)
Thanks, I haven't read any Ayn Rand, but Atlas Shrugged is next in my queue. I guess I'll swap it out for The Fountainhead and see what I think.
I suspect it would be a bit dishonest to say that he had a great impact on me, though, if I read the book basically for the sake of the essay.
Replies from: None, None↑ comment by [deleted] · 2011-09-07T21:08:03.900Z · LW(p) · GW(p)
I bet admissions committees hate when you say you were influenced by Ayn Rand. You want something either very prestigious, or very unexpected, and Atlas Shrugged is neither. You might well be better off with fanfiction, if you can sell it with a really good essay and leave yourself a little bit of ironic detachment wiggle room.
Replies from: Kevin↑ comment by Kevin · 2011-09-09T03:55:53.383Z · LW(p) · GW(p)
Agreed, Rand is a total no-go for college admissions essays.
Replies from: wedrifid↑ comment by wedrifid · 2011-09-09T04:49:41.803Z · LW(p) · GW(p)
Who are some suitably high-status inspirational folks to put in such an essay?
Mind you, college admissions here (Australia) are almost entirely based on high school exam scores, so the information is completely useless to me!
Replies from: Kevin, John_Maxwell_IV↑ comment by Kevin · 2011-09-09T08:41:18.715Z · LW(p) · GW(p)
This is the only inspirational thing I have ever read -- the now-deleted (post-movie-option) journals of a blind man who had his vision restored and had to teach himself how to see. http://web.archive.org/web/20040401192741/http://www.senderogroup.com/mikejournal.htm#Q1%202000
Replies from: wedrifid↑ comment by John_Maxwell (John_Maxwell_IV) · 2011-09-09T04:52:51.710Z · LW(p) · GW(p)
Richard Feynman?
Replies from: wedrifid↑ comment by wedrifid · 2011-09-09T05:32:23.554Z · LW(p) · GW(p)
Oh, yeah. Him. I would be cringing as I wrote that. I'd be imagining myself rolling my eyes as I read piles of cookie-cutter, password-guessing applications. Ick. But I'd force myself to write about him anyway.
I wonder how much status you can get by dropping the name everyone drops. I suppose you at least wouldn't lose points.
↑ comment by [deleted] · 2011-09-07T21:42:37.098Z · LW(p) · GW(p)
Oh, dear. This wasn't meant in any way as an endorsement of Ayn Rand.
I suspect it would be a bit dishonest to say that he had a great impact on me, though, if I read the book basically for the sake of the essay.
Eh, admission essays are games; they must be played.
comment by Bill_McGrath · 2011-09-07T23:50:01.128Z · LW(p) · GW(p)
I have been wondering recently about how to rationally approach topics that are naturally subjective. Specifically, this came up in a conversation about history and historiography. Historical events are objective, of course, but a lot of historical scholarship concerns itself not just with describing events, but with speculating as to their causes and results. This is naturally going to be influenced by the historian's own cultural context and existing biases.
How can rationalists engage with this inherently subjective topic, and apply rationality techniques? We can try to take account of the historian's biases, but in many cases that will require us to do some historical research - it is probably not possible to get an accurate, objective account.
This applies to a certain extent to other fields I am sure, but history and historiography are perhaps the most scholarly ones I can bring to mind.
Replies from: Bill_McGrath↑ comment by Bill_McGrath · 2011-09-08T22:09:32.464Z · LW(p) · GW(p)
Hmm. I was a little tired and rushed when I wrote this. There are a few thoughts I'd like to add concerning historiography.
As I said above, history, because of its subjective nature, is always influenced by the historian's bias. Historiography could maybe be called the study of these biases, but is in itself subject to the same flaws.
No historian's viewpoint on a historical event will be fully objective. But just because no approach can be perfect does not mean that all approaches are equally imperfect. My question isn't so much about how to be a rational historian, but more: is there a rational way to evaluate the relative worths of different historical viewpoints?
comment by byrnema · 2011-09-12T15:11:36.399Z · LW(p) · GW(p)
I noticed a bias about purchasing organic milk this morning, one that is perhaps a combination of the sunk cost fallacy, ugh fields, and compartmentalization.
My mother is sending me information this morning that I should be giving my children organic milk (to avoid hormones, etc). I don't disagree with her, but I'm probably not going to start buying organic milk. This makes me feel a little sorry for my mother, that she is going to some effort to convince me I ought to take this precaution, and I'm going to nod and agree, and then finally not change my behavior.
The twinge of guilt makes me examine the 'why', and I believe the reason I won't buy organic is because my children already drink much less milk than they used to. If there was one year I should have bought organic, it should have been during their first year of drinking cow milk when they drank several bottles a day and it was a major source of their nutrition. Now they only drink a couple glasses a day, and this milk is mixed with many other food sources.
I'm sure the logic is still opaque... Even if they don't drink as much milk as they used to, the milk drinking continues over the rest of their lives and switching to organic now would make a difference. If one of the main objections is the cost of organic milk (and at first I would claim that it was) then this fact means that by switching to organic milk now, I can pay less per day to completely free them of any contaminants normal milk would expose them to. For a few extra dollars a week, my children could be rBGH-free the rest of their lives.
What is my true objection? My true objection, perhaps, is that some part of my brain is already computing what it would feel like to purchase organic milk next time in the store. I'm paying a significant amount more, so I should be feeling good about the purchase, that I am making such-and-such good choices for my family. However, I know I will only feel bad! If the marginal price of organic milk is justified now, I should have been buying it before -- when my kids were small -- and so every single time I purchase organic milk I will feel a dissonance over not having purchased it before. Either organic milk is important or it isn't, and in deciding to ignore my mother and continue to buy regular milk, I am making a choice to behave consistently with past choices.
Some compartmentalization is at work here, because I realize all this quite consciously, and it doesn't matter. I still feel like going to the milk aisle and glibly throwing in the carton that costs $3.49 rather than $5.50 is a viable option that I choose. I can even resolve to look at the label and chant "I am buying this rather than something else that I know is better because I don't want to have to renounce past decisions", and it doesn't matter.
A factor in this locus of irrationality is that I don't feel strongly that organic milk is better, and the extra cost is a weighing factor. Thus, the desire to avoid negative feelings is operating in a landscape that is nearly even. I trust that if I deemed it was more important to go with organic milk, I would do so. On the other hand, this is a reminder that such psychological tensions can affect more important decisions, if the need to avoid negative feelings is stronger, and I should continue to be honest with myself and be aware of them.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-09-12T16:27:38.336Z · LW(p) · GW(p)
Past-you, using the evidence that past-you had, came to a particular conclusion. Present-you, using more evidence, may come to a different conclusion. Future-you, using still more evidence, may come to yet another conclusion. This is as it should be; that's what evidence is for.
comment by smk · 2011-09-07T14:30:39.980Z · LW(p) · GW(p)
A kind of uncomfortably funny video about turning yourself bisexual, a topic that's come up a few times here on LW. http://youtu.be/zqv-y5Ys3fg
Replies from: atucker↑ comment by atucker · 2011-09-09T05:23:43.659Z · LW(p) · GW(p)
I don't know why I clicked on this link, but the video is pretty funny. I feel like it's a parody, mostly because everyone fits their stereotypical role so well.
...Upon reading the bottom of the page, yeah, it's a parody.
Replies from: smkcomment by KPier · 2011-09-21T03:51:25.631Z · LW(p) · GW(p)
I've been debating the validity of reductionism with a friend for a while, and today he presented me with an article (won't link it, it's a waste of your time) arguing that the consciousness-causes-collapse interpretation of QM proves that consciousness is ontologically fundamental/epiphenomenal/etc.
To which I responded: "Yeah, but consciousness-causes-collapse is wrong."
And then realized that the reasons I have rejected it are all reductionist in nature. So he pointed out, fairly, that I was begging the question. And unfortunately, I'm not sufficiently familiar with the literature on QM to point him to an explanation. Does anyone know an explanation of reasons to reject consciousness-causes-collapse that isn't explicitly predicated on reductionism?
Replies from: Kingreaper, Mitchell_Porter, Owen, Manfred, Vladimir_Nesov, JoshuaZ↑ comment by Kingreaper · 2011-09-22T00:25:42.733Z · LW(p) · GW(p)
You don't need to reject CCC without reductionism to defeat his argument. His argument is "If CCC is true, reductionism is false."
That's not a reason to reject reductionism, unless you have a better reason to hold to CCC than to reductionism.
↑ comment by Mitchell_Porter · 2011-09-21T05:01:58.018Z · LW(p) · GW(p)
From the perspective of the Copenhagen interpretation, this is like a debate about whether 'consciousness updates the prior', in which 'the prior' is treated as a physical entity which exists independently of observers and their ignorance.
In the Copenhagen interpretation - at least as originally intended! - a wavefunction is not a physical state. It is instead like a probability distribution.
From this perspective, the mystery of quantum mechanics is not, why do wavefunctions collapse? It is, why do wavefunctions work, and what is the physical reality behind them?
The reification of wavefunctions has apparently become an invisible background assumption to a lot of people. But in the Copenhagen interpretation, wavefunctions do not exist, only "observables" exist: the quantities whose behavior the wavefunction helps you to predict.
Examples of observables are: the position of an electron; the rate of change of a field; the spin of a photon. In the Copenhagen interpretation, these are what exists.
Some examples of things which are not observables and which do not exist: An electron wavefunction with a peak here and a peak there; a photon in a superposition of spin states; in fact, any superposition.
Because quantum mechanics does not offer a nonprobabilistic deeper level of description, it is very easy for people to speak and think as if the wavefunctions are the physical realities, but that is not how Copenhagen is supposed to work.
To reiterate: "consciousness collapses the wavefunction" in exactly the same sense that "consciousness updates the prior". You are free to invent subquantum physical theories in which wavefunctions are real, in an attempt to explain why quantum mechanics works, and maybe in those theories you want to have something "collapsing" wavefunctions, but you probably wouldn't want that to be "consciousness".
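[A minimal sketch of that analogy in Bayesian terms; the symbols here are generic illustrations, not taken from the thread. When an observer learns evidence E, the prior P(H) over a hypothesis H is replaced by the posterior
\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} ,
\]
and nothing physical "collapses"; only the observer's description changes. On the Copenhagen reading sketched above, wavefunction collapse is meant to be the same kind of bookkeeping.]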
↑ comment by Owen · 2011-09-21T04:09:10.694Z · LW(p) · GW(p)
Perhaps point to the fact that extremely simple systems, which no one would consider conscious, can also "cause collapse"? It doesn't take much: just entangle the superposed state with another particle - then when you measure, cancellation can't occur and you perceive a randomly collapsed wavefunction. The important thing is the entangling, not the fact that you're conscious: measuring a superposed state (i.e. entangling your mind with it) will do the trick, but it's entirely unnecessary.
I used to believe the consciousness-causes-collapse idea, and it was quite a relief when I realized it doesn't work like that.
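[A minimal sketch of the entanglement step Owen describes, assuming a two-state system and two orthogonal environment states; the kets are illustrative, not taken from the thread. Start with a superposition,
\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr),
\]
and let a single stray particle (no observer required) become entangled with it:
\[
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle|E_0\rangle + |1\rangle|E_1\rangle\bigr), \qquad \langle E_0 | E_1 \rangle = 0 .
\]
Tracing out that particle leaves the system in the mixed state
\[
\rho_{\mathrm{sys}} = \tfrac{1}{2}\bigl(|0\rangle\langle 0| + |1\rangle\langle 1|\bigr),
\]
whose interference (off-diagonal) terms are gone, so the system already behaves as if it had "collapsed", with consciousness nowhere in the calculation.]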
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-21T04:11:57.658Z · LW(p) · GW(p)
Some of the consciousness causes collapse people would claim that you intended to cause that entanglement. (If you are thinking this sounds like an attempt to make their claims not falsifiable, I'd be inclined to agree.)
Replies from: Owen↑ comment by Owen · 2011-09-21T04:21:11.464Z · LW(p) · GW(p)
I can intentionally do lots of things, some of which cause entanglement and "collapse", and some of which don't. I'd say to them that it still seems like the conscious intent isn't what's important.
If you'd like to substitute a better picture for the layperson, I'd go with "disturbing the system causes collapse". (Where "disturb" is really just a nontechnical way of saying "entangle with the environment.") Then it's clear that conscious observation (which involves disturbing the system somehow to get your measurement) will cause (apparent) collapse, but doesn't do so in a special depends-on-consciousness way. And if they want a precise definition of "disturb", you can get into the not-too-difficult math of superposition and entanglement.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-21T04:28:17.691Z · LW(p) · GW(p)
And if they want a precise definition of "disturb", you can get into the not-too-difficult math of superposition and entanglement.
I'm a math grad student and I consider the math of entanglement and the like to be not easy. There are two types of consciousness-causes-collapse proponents. The first type, who don't know much physics, will find entanglement pretty difficult (they need to already understand complex numbers and basic linear algebra to get the structure of what is going on). Even a genuinely curious person will likely have trouble following it unless they are mathematically inclined. The second, much smaller group are people who already understand entanglement but still buy into consciousness-causes-collapse. They seem to have developed very complicated and sometimes subtle notions of what it means for things to be conscious or to have intent (almost akin to theologians). So in either case this avenue of attack seems unlikely to be successful.
If one is more concerned with convincing bystanders (as is often more relevant on the internet: people might not change their minds often, but people reading along might), then this could actually do a good job against the first category, by making it clear that one knows a lot more about the subject than they do. This seems to work empirically in real life as well, as one can see in various discussions. See for example the cases where Deepak Chopra has tried to invoke a connection between QM and consciousness and gets shot down pretty bluntly when anyone with a bit of math or physics background is around.
Replies from: Owen↑ comment by Owen · 2011-09-21T13:49:24.989Z · LW(p) · GW(p)
You're right; maybe I'm overestimating my ability to explain things so that laypeople will understand. But there are some concessions you can make to get the idea across without the full background of complex linear algebra - often I use polarizers as an example, because most people have some experience with them (from sunglasses or 3D movies), and from there it's only a hop, skip, and a jump to entangled photons.
I do try to explain so that people feel like the explanation is totally natural, but then I often run into the problem of people trying to reason about quantum mechanics "in English", so to speak, instead of going to the underlying math to learn more. Any suggestions?
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-21T13:55:58.619Z · LW(p) · GW(p)
It seems to me that it is easier to get people to realize just that they can't use their regular language to understand what is going on than it is to actually explain it. People seem to have issues understanding this primarily because of Dunning-Kruger and because of the large number of popularizations of difficult science that just use vague analogies.
I'd ask "ok. This is going to take some math. Did you ever take linear algebra?" If yes, then I just explain things. When they answer no (vast majority of the time)I then say "ok do you remember how matrix multiplication works?" They will generally not or have only a vague memory. At that point I then tell them that I could spending a few hours or so developing the necessary tools but that they really don't have the background without a lot of work. This generally results in annoyance and blustering on their part. At this point one tells them the story of Oresme and how he came up with the idea of gravity in the 1300s but since he didn't have a mathematical framework it was absolutely useless. This gets the point across sometimes.
Edit: Your idea of using polarization as an example is an interesting one and I may try that in the future.
Replies from: Owen↑ comment by Manfred · 2011-09-21T11:48:45.231Z · LW(p) · GW(p)
I wouldn't call Occam's razor an explicit part of reductionism. It's basically equivalent to saying you can't just make up information.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-22T00:22:38.726Z · LW(p) · GW(p)
I wouldn't call Occam's razor an explicit part of reductionism. It's basically equivalent to saying you can't just make up information.
I don't think so. This may be the case when your hypotheses are something like "A" and "A v B", but if the hypotheses you are comparing are "A" and "C ^ D ^ E", this sort of summary of Occam's razor seems insufficient.
Replies from: Manfred↑ comment by Manfred · 2011-09-22T01:16:24.743Z · LW(p) · GW(p)
If both hypotheses explain some set of data, I've usually been able to make a direct comparison even in what look like tough cases by following the information in the data - what sort of process generates it, etc. Keeping things in terms of the "language" of the data is in fact also justified by the idea that pulling information from nowhere is bad.
This sort of reliance on our observations is certainly an empiricist assumption, but I don't think a reductionist one.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-22T02:23:49.913Z · LW(p) · GW(p)
Consider the following problem. You know that there is some property that some integers have and others don't, and you are trying to figure out what the property is. After testing every integer under 10^4, you find that there are 1229 integers under 10^4 that work. You have two hypotheses that describe them. One is that they are exactly the prime numbers. The other is given by a degree-1228 polynomial where P(n) gives the nth number in your set. One of these is clearly simpler. This isn't just a language issue: if I tried to write these out in any reasonable equivalent of a Turing machine or programming language, one of them would be a much shorter program. The distinction here, however, is not one of making up information. One is genuinely shorter.
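A minimal sketch of that comparison (Python; the character counts are only a crude, illustrative stand-in for program length on an actual Turing machine):

```python
import inspect

def is_prime(k):
    """Hypothesis 1: the property is primality. A few lines of code."""
    return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))

members = [k for k in range(2, 10**4) if is_prime(k)]
print(len(members))                       # 1229 integers under 10^4

print(len(inspect.getsource(is_prime)))   # description length of Hypothesis 1: roughly 150 characters

# Hypothesis 2: the degree-1228 interpolating polynomial P with P(n) = nth member.
# Specifying it means writing down 1229 (mostly enormous) coefficients; even the raw
# list of data points it is fitted to is already far longer than Hypothesis 1.
print(len(repr(members)))                 # thousands of characters, before any coefficients
```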
If one wants, we can give similar historical examples. In 1620 you could make a Copernican model of the solar system that would rival Kepler's model in accuracy. But you would need a massive number of epicycles. The problem here doesn't seem to be pulling information from nowhere. The problem seems to be that one of the hypotheses is simpler in a different way.
Both of these examples do have something in common: in each case the complicated hypothesis has a lot of observationally dependent parameters, whereas the simpler one has many fewer of them. But that seems to be a distinct issue (although it is possibly a good, very rough way of measuring the complexity of hypotheses).
↑ comment by Vladimir_Nesov · 2011-09-21T10:58:21.695Z · LW(p) · GW(p)
I've been debating the validity of reductionism with a friend for a while [...] Does anyone know an explanation of reasons to reject consciousness-causes-collapse that isn't explicitly predicated on reductionism?
This quite possibly can't be done. If you handicap yourself by refusing to use an idea while examining its merits, you may well draw inferior conclusions about it, and modify it in a way that makes it worse. You should use your whole mind to reflect on itself (unless you conclude some of its parts are not to be trusted). See these posts in particular:
↑ comment by JoshuaZ · 2011-09-21T04:04:48.455Z · LW(p) · GW(p)
There are a variety of different issues.
First, it assumes that consciousness exists as an ontological unit. This isn't just a problem with reductionism but also a problem with Occam's razor. What precisely one means by reductionism can be complicated and subtle, with some versions more definite or plausible than others. But regardless, there's no good evidence that consciousness is irreducible.
Second, it raises serious questions about what things were like before there were conscious entities. If no collapse occurred prior to conscious entities what does that say about the early universe and how it functioned? Note that this actually raises potentially testable claims if one can use telescopes to look back before the dawn of life. Unfortunately, I've never seen any consciousness causes collapse proponent either explain why this doesn't lead to any observable difference or make any plausible claim about what differences one would observe.
Third, it violates a general metapattern of history. As things have progressed, the pattern has consistently been that minds don't interact with the laws of physics in any fundamental way, and more and more ideas about how minds might interact have been thrown out (ETA: there are a few notable exceptions, such as some of the stuff involving the placebo effect). We've spent much of the last few hundred years establishing stronger and stronger versions of this claim. Thus, as a simple matter of induction, one would expect that trend, if anything, to continue. (I don't know how much inducting on the pattern of discoveries is justified.)
Fourth, it is ill-defined. What constitutes a conscious mind? Presumably people are conscious. Are severely mentally challenged people conscious? Are the non-human great apes conscious? Are ravens and other corvids conscious? Are dogs or cats conscious? Are mice conscious? And so on, down to single-celled organisms and viruses.
Fifth, consciousness-causes-collapse is a hypothesis that is easily supported by standard human biases. This raises two issues, one of which is not that relevant but is worth mentioning, and the other of which is very relevant. The first, less relevant issue is that this means we should probably assume we are likely to overestimate the chance that the hypothesis is correct. This is not, however, an argument against the hypothesis. But there is a similar claim that is a sort of meta-argument against it. Since this hypothesis is one supported by human biases, one would expect a lot of motivated cognition producing evidence and arguments for it. So if there were any really good arguments, one should consider it likely that they would already have been hit on. The fact that they have not been suggests that there really aren't any good arguments for it.
comment by anonym · 2011-09-05T17:30:24.664Z · LW(p) · GW(p)
I don't recall any discussion on LW -- and couldn't find any with a quick search -- about the "Great Rationality Debate", which Stanovich summarizes as:
An important research tradition in the cognitive psychology of reasoning--called the heuristics and biases approach--has firmly established that people’s responses often deviate from the performance considered normative on many reasoning tasks. For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they display illogical framing effects, they uneconomically honor sunk costs, they allow prior knowledge to become implicated in deductive reasoning, and they display numerous other information processing biases (for summaries of the large literature, see Baron, 1998, 2000; Dawes, 1998; Evans, 1989; Evans & Over, 1996; Kahneman & Tversky, 1972, 1984, 2000; Kahneman, Slovic, & Tversky, 1982; Nickerson, 1998; Shafir & Tversky, 1995; Stanovich, 1999; Tversky, 1996).
It has been common for these empirical demonstrations of a gap between descriptive and normative models of reasoning and decision making to be taken as indications that systematic irrationalities characterize human cognition. However, over the last decade, an alternative interpretation of these findings has been championed by various evolutionary psychologists, adaptationist modelers, and ecological theorists (Anderson, 1990, 1991; Chater & Oaksford, 2000; Cosmides & Tooby, 1992; 1994b, 1996; Gigerenzer, 1996a; Oaksford & Chater, 1998, 2001; Rode, Cosmides, Hell, & Tooby, 1999; Todd & Gigerenzer, 2000). They have reinterpreted the modal response in most of the classic heuristics and biases experiments as indicating an optimal information processing adaptation on the part of the subjects. It is argued by these investigators that the research in the heuristics and biases tradition has not demonstrated human irrationality at all and that a Panglossian position (see Stanovich & West, 2000) which assumes perfect human rationality is the proper default position to take.
Stanovich, K. E., & West, R. F. (2003). Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality. In D. E. Over (Ed.), Evolution and the psychology of thinking: The debate, Psychological Press. [Series on Current Issues in Thinking and Reasoning]
The lack of discussion seems like a curious gap, given the strong support for both schools of thought (the one that Cosmides/Tooby/etc. represent on the one hand, and the Kahneman/Tversky tradition on the other), and given that they are in radical opposition on the question of the nature of human rationality and of purported deviations from it, both of which are central subjects of this site.
I don't expect to find much support here for the Tooby/Cosmides position on the issue, but I'm surprised that there doesn't seem to have been any discussion of the issue. Maybe I've missed discussions or posts though.
Replies from: rehoot, Vaniver↑ comment by rehoot · 2011-09-06T03:39:42.322Z · LW(p) · GW(p)
I don't understand the basis for the Cosmides and Tooby claim. In their first study, Cosmides and Tooby (1996) solved the difficult part of a Bayesian problem so that the solution could be found by a "cut and paste" approach. The second study was about the same, with some unnecessary percentages deleted (they were not needed for the cut-and-paste solution--yet the authors were surprised when performance improved). Study 3 = Study 2. Study 4 has the respondents literally fill in the blanks of a diagram based on the numbers written in the question. 92% of the students answered that one correctly. Studies 5 & 6 brought the percentages back, and the students made many errors.
Instead of showing innate, perfect reasoning, the study tells me that students at Yale have trouble with Bayesian reasoning when the question is framed in terms of percentages. The easy versions do not seem to demonstrate the type of complex reasoning that is needed to see the problem and frame it without somebody framing it for you. Perhaps Cosmides and Tooby are correct when they show that there is some evidence that people use a "calculus of probability" but their study showed that people cannot frame the problems without overwhelming amounts of help from somebody who knows the correct answer.
Reference
Cosmides, L. & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition 58, 1–73, DOI: 10.1016/0010-0277(95)00664-8
Replies from: anonym↑ comment by anonym · 2011-09-07T03:41:36.494Z · LW(p) · GW(p)
I agree. I was hoping somebody could make a coherent and plausible sounding argument for their position, which seems ridiculous to me. The paper you referenced shows that if you present an extremely simple problem of probability and ask for the answer in terms of a frequency (and not as a single event), AND you present the data in terms of frequencies, AND you also help subjects to construct concrete, visual representations of the frequencies involved by essentially spoon-feeding them the answers with leading questions, THEN most of them will get the correct answer. From this they conclude that people are good intuitive statisticians after all, and they cast doubt on the entire heuristics and biases literature because experimenters like Kahneman and Tversky don't go to equally absurd lengths to present every experimental problem in ways that would be most intuitive to our paleolithic ancestors. The implication seems to be that rationality cannot (or should not) mean anything other than what the human brain actually does, and the only valid questions and problems for testing rationality are those that would make sense to our ancestors in the EEA.
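For anyone unfamiliar with the frequency-format manipulation being described, here is a minimal sketch with illustrative numbers (a classic mammography-style problem, not figures from the Cosmides and Tooby paper) of the same Bayesian question in percentage form and in natural-frequency form:

```python
# Percentage framing: base rate 1%, hit rate 80%, false-positive rate 9.6%.
p_d, p_pos_given_d, p_pos_given_not_d = 0.01, 0.80, 0.096
p_d_given_pos = (p_pos_given_d * p_d) / (
    p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d))
print(round(p_d_given_pos, 3))   # ~0.078 -- stated this way, most subjects get it badly wrong

# Natural-frequency framing of the same facts: of 1000 people, 10 have the condition
# and 8 of them test positive; about 95 of the 990 healthy people also test positive.
print(round(8 / (8 + 95), 3))    # ~0.078 -- same answer, much easier to see
```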
Replies from: JonathanLivengood↑ comment by JonathanLivengood · 2011-09-14T08:52:21.788Z · LW(p) · GW(p)
I was hoping somebody could make a coherent and plausible sounding argument for their position.
I'm not sure I'm up to the challenge, but here goes anyway ...
I think you are being ungenerous to the position Tooby and Cosmides mean to defend. As I read them (see especially Section 22 of their paper), they are trying to do two things. First, they want to open up the question of how exactly people reason about probabilities -- i.e., what mechanisms are at work, not just what answers people give. Second, they want to argue that humans are slightly more rational than Kahneman and Tversky give them credit for being.
First point. Tooby and Cosmides do not actually commit to the position that humans use a probability calculus in their probabilistic reasoning. What they do argue is that Kahneman and Tversky were too quick to dismiss the possibility that humans do use a probability calculus -- not just heuristics -- in their probabilistic reasoning. If humans never gave the output demanded by Bayes' theorem, then K&T would have to be right. But T&C show that in more ecologically valid cases, (most) humans do give the output demanded by Bayes. So, the question is re-opened as to what brain mechanism takes frequency inputs and gives frequency outputs in accordance with Bayes' theorem. That mechanism might or might not instantiate a rule in a calculus.
Second point. If you are tempted (by K&T's research) to say that humans are just dreadfully bad at statistical reasoning, then maybe you should hold off for a second. The question is a little bit under-specified. Do you mean "bad at statistical reasoning in general, in an abstract setting" or do you mean "bad at statistical reasoning in whatever form it might take"? If the former, then T&C are going to agree. If you frame a statistics problem with percentages, you get all kinds of errors. But if you mean the latter, then T&C are going to say that humans do pretty well on problems that have a particular form, and not surprisingly, that form is more ecologically valid.
General rule of charity: If someone appears to be defending a claim that you think is obviously ridiculous, make sure they are actually defending what you think they are defending and not something else. Alternatively (or maybe additionally), look for the strongest way to state their claim, rather than the weakest way.
↑ comment by Vaniver · 2011-09-07T21:33:45.067Z · LW(p) · GW(p)
Typically, the "optimal thinking" argument gets brought up here in the context of evolutionary psychology. Loss aversion makes sound reproductive sense when you're a hunter-gatherer, and performing a Bayesian update carefully doesn't help all that much. But times have changed, and humans have not changed as much.
comment by gwern · 2011-09-20T21:40:41.052Z · LW(p) · GW(p)
I tried writing an essay arguing that popular distaste for politicians is due largely to base rate neglect leading people to think they are worse than they are: http://www.gwern.net/Notes#politicians-are-not-unethical (I don't think it works, though.)
Replies from: Vaniver, Oscar_Cunningham↑ comment by Vaniver · 2011-09-23T02:00:40.110Z · LW(p) · GW(p)
Heart -> Hearst.
Also, the Edwards example you give suggests that one story may not be sufficient (I don't know how many times the Enquirer reported on it before other media picked it up, but I know the rest did only months later).
Replies from: gwern↑ comment by Oscar_Cunningham · 2011-09-21T07:00:59.076Z · LW(p) · GW(p)
I can't find that content on that page.
Replies from: gwern↑ comment by gwern · 2011-09-21T13:35:30.160Z · LW(p) · GW(p)
Caching. (This has been enough of a problem with linking to new content - people having the old page cached - that I've been thinking of turning it off, even with the speed/bandwidth hit.)
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2011-09-21T19:34:35.728Z · LW(p) · GW(p)
Thanks.
comment by ataftoti · 2011-09-08T05:40:26.924Z · LW(p) · GW(p)
Has anyone been able to play Mafia using Bayesian methods? I have tried and failed, due to encountering situations that eluded my attempts to model them mathematically. But since I am not strong at math, I'm hoping others have had success?
And the related question: any mafiascum.net players here?
Edit: I mean specifically using Bayesian methods for online forum-based Mafia games. These seem to me to give the player enough time to do conscious calculations.
Replies from: Jack, katydee, Oscar_Cunningham↑ comment by Jack · 2011-09-12T04:39:53.848Z · LW(p) · GW(p)
I wonder if there are any group rationality games that don't seriously undermine group morale and cohesion. The last time I played Mafia, people ended up crying and my relationship with my brother and cousin went through traumatic upheaval. Diplomacy is not a better option.
Replies from: katydee, ataftoti, shokwave, lessdazed↑ comment by katydee · 2011-09-13T05:24:24.449Z · LW(p) · GW(p)
This seems like an unusual experience to have. I have played Mafia with 3+ non-overlapping groups in person and 4+ non-overlapping groups online, and have yet to encounter any trouble; in fact, in two of the cases we were explicitly playing as a bonding exercise to improve group morale and cohesion, and it seems to have worked both times.
↑ comment by ataftoti · 2011-09-13T04:27:40.595Z · LW(p) · GW(p)
The last time I played Mafia people ended up crying
And what about the times before that?
Playing mafia has never undermined real social relationships in my experience, and I've introduced this game to perhaps 20 people in real life, with at least 2 completely non-overlapping groups.
Also, I doubt face-to-face mafia should be considered a game that especially exercises rationality. It seems to me that you get thrown a huge fuckton of cognitive biases with no time to combat them.
(again, my original question should specify "forum based mafia games"...let me edit that now...)
Replies from: Jack, Will_Sawin, shokwave↑ comment by Will_Sawin · 2011-09-13T05:39:37.582Z · LW(p) · GW(p)
It's more like it teaches a sort of mini-rationality: "You're swimming in cognitive biases, but your intuitions can also be helpful. Empirically develop a few techniques to separate good intuitions from bad with decent error probability."
↑ comment by shokwave · 2011-09-13T08:01:29.826Z · LW(p) · GW(p)
It seems to me that you get thrown a huge fuckton of cognitive biases with no time to combat them.
In my experience playing with a rationality crowd (at a meet-up), it was excellent for learning the visceral feeling of motivated cognition.
↑ comment by katydee · 2011-09-13T05:22:34.428Z · LW(p) · GW(p)
I play online Mafia but haven't attempted to use explicit Bayesian reasoning to do so.
Replies from: ataftoti↑ comment by ataftoti · 2011-09-14T01:59:37.320Z · LW(p) · GW(p)
Please attempt and see if you have better results than I did. And if you succeed come back and tell us all about it!
:-)
Replies from: katydee↑ comment by katydee · 2011-09-14T05:20:09.817Z · LW(p) · GW(p)
I'm not sure that doing so would be useful. It seems like normal Mafia techniques already approximate Bayesian reasoning, and formalizing it would be very challenging and IMO unlikely to offer unusual insights. That said, I'm fairly good at online Mafia and I suspect such techniques would better benefit less advanced players.
Replies from: Normal_Anomaly, ataftoti↑ comment by Normal_Anomaly · 2011-09-16T20:59:50.160Z · LW(p) · GW(p)
There are such things as Mafia techniques? I've never seen anyone do better than chance. Care to explain?
Replies from: katydee, bcoburn, ataftoti↑ comment by katydee · 2011-09-18T04:57:11.405Z · LW(p) · GW(p)
Certainly. A basic Mafia technique is examining the past play of the person you're suspicious of, then looking at whether their current play is more similar to their play as scum or their play as town. There is also wide knowledge (at least online) of moves that are generally "scummy," such as congratulating the doctor after he or she successfully protects, as these moves have been determined to be commonly used by scum. Of course, all of this is constantly evolving, since once something is generally known as a scumtell, advanced scum players avoid it. Further, different things are tells at different levels of play, which tends to make the game much more complicated than my above description might indicate.
That said, I think it's certainly possible to do better than chance-- my own record, at least of games that I can remember, is 4 wins to 1 loss, all as town (I have yet to be scum in my recent games).
Further, there are some situations where certain tactics have been determined, over wide periods of play, to be dominant, and applying these strategies gives you a very high chance to win. For instance, if the town has a doctor and a cop (and knows this) and also knows the scumgroup has no roleblocker, the best strategy is to stop voting to lynch, have the cop claim, and have the cop constantly investigate while protected by the doctor. The scum must then start hitting other targets in hopes of getting the doc. A truly advanced doctor will then, knowing the scum is doing this, not actually protect the cop but instead protect other members of the town in the hopes of blocking the scum's pseudorandom flailing, but a truly advanced scum player might anticipate this and try to kill the cop instead-- so there are mindgames all over the place, but dominant strategies are still known.
Generally, I feel like Mafia-- at least online Mafia-- is a rather good rationality exercise. I could expand this to a top-level post if there's interest.
Replies from: ataftoti, Normal_Anomaly↑ comment by ataftoti · 2011-09-19T03:46:31.444Z · LW(p) · GW(p)
Do make the top-level post, please. I think there is use in making Mafia more well-known in demographics such as the one we have here.
It sounds like online Mafia is a totally different and much better game than what I've played at various icebreaker functions, camps, and times when there's a substitute teacher.
In my experience the outcome of face-to-face mafia can be even more dependent on the players' skill, once you get past the newbie phase. Not just because newbies can't read others well; I think they are also less readable, due to undeveloped meta and vastly suboptimal plays that regular scumhunting techniques don't get a good read on. Once there is some standard in the players' moves and some meta is available, one can read much more accurately in face-to-face games than online, thanks to factors such as tone, moments of hesitation, and body language.
And thus for a given single game, I would rather play mafia face-to-face with groups of regular players than online, though I would prefer playing online to face-to-face with a whole group of newbies.
Replies from: katydee↑ comment by katydee · 2011-09-19T07:29:51.811Z · LW(p) · GW(p)
Face-to-face Mafia is certainly easier to read people in, but this actually (IMO) makes it a worse game. There are other issues as well, such as the inability of the Mafia to communicate articulately at night, but if you're a good lie detector (or the scum are bad liars) the game becomes almost trivial, and introducing the difficulties of online communication IMO adds an appealing element of challenge. That said, I agree that face-to-face Mafia with a regular group can certainly be fun and even educational in itself.
↑ comment by Normal_Anomaly · 2011-09-18T05:07:05.603Z · LW(p) · GW(p)
It sounds like online Mafia is a totally different and much better game than what I've played at various icebreaker functions, camps, and times when there's a substitute teacher. I'll check it out if I ever have a clear enough schedule. Also, I'd definitely enjoy a top-level post if you made one.
↑ comment by bcoburn · 2011-09-17T21:44:05.068Z · LW(p) · GW(p)
I don't know how well it works in games with only 1 scum player, but with at least two just the fact that there are two players who know they each have a partner changes their behavior enough that the game isn't random. There's also some change in what people say just because each side has a different win condition, although again this is less true with just one scum player.
As just a simple example, when you're playing as the scum it can be really hard (at least for me) to make a good argument that someone I know to be a normal villager isn't one, which can be enough for another player to deduce my role.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-18T00:33:11.779Z · LW(p) · GW(p)
That's interesting; I haven't played enough mafia to really study it. And in all the games I have played, the town always lynches the first player someone bothers to accuse--there aren't any actual arguments.
↑ comment by ataftoti · 2011-09-17T03:25:01.674Z · LW(p) · GW(p)
I just had 4 games with the same 5 players (setup is 4 town, 1 scum) that all ended in scum victory. Random lynching should yield only a 53% chance of scum victory, and 0.53^4 seems low enough that this is likely a case of playing better than random.
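A quick sketch of where that 53% figure comes from, assuming the standard flow (random day lynch, then a night kill, scum winning at parity):

```python
# Day 1: the single scum survives a random lynch among 5 players with probability 4/5.
# The night kill leaves 2 town + 1 scum; on Day 2 the scum survives a random lynch
# with probability 2/3, after which the next night kill reaches parity and scum wins.
p_scum_win = (4/5) * (2/3)
print(round(p_scum_win, 3))        # 0.533
print(round(p_scum_win**4, 3))     # ~0.081 -- chance of town losing four straight by luck alone
```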
The players in this case were new to the game with the exception of myself (and after the first couple games I was constantly night killed). I was going to say that this seems to suggest that scum is stronger in newbie games, but then I realized I have no data to draw this comparison with. :-(
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-17T11:36:20.026Z · LW(p) · GW(p)
Were you the scum in any of the games?
Replies from: ataftoti↑ comment by ataftoti · 2011-09-15T03:47:54.451Z · LW(p) · GW(p)
I want to read some games of mafia players who browse this site. Do you mind pointing me to some of your games?
Replies from: katydee↑ comment by katydee · 2011-09-15T22:27:40.854Z · LW(p) · GW(p)
Unfortunately I play mostly as a diversion on a private site, not on mafiascum or epicmafia, so they aren't as out in the open as you'd like. If you want I can link you to a recent newbie game that I was in on mafiascum, but the number of replacements makes it a little hard to follow and it's not exactly anyone's best play either.
Replies from: ataftoti↑ comment by Oscar_Cunningham · 2011-09-08T09:16:33.935Z · LW(p) · GW(p)
Trying to update even on just the well-defined data looks impossible for humans; trying to update on what other people are saying would be difficult even with a computer. Also, it seems like there might be certain disadvantages if you turn out to be Mafia.
Replies from: ataftoti↑ comment by ataftoti · 2011-09-12T04:22:04.408Z · LW(p) · GW(p)
Allow me to specify: I am referring to online forum mafia games.
These games are slow enough that one can do some calculations, if one can find the numbers (and that seems to be the hard part, along with deciding how they should be calculated).
I've thought, and still think, that the fact that I've never heard of Bayesian methods being used in Mafia is simply an observation about the failures of players, not evidence that it inherently cannot be done with available tools.
Frankly I'm surprised Mafia does not seem to attract more attention from the demographic concerned with rationality. If some set of methods were developed that consistently worked and cut through the jungle of biases that is the nature of the game, then that would be an achievement for the progress of rationality, would it not? I think many of the methods that might be developed would transfer easily to other uses as well.
comment by [deleted] · 2011-09-06T12:20:39.138Z · LW(p) · GW(p)
EDIT: this comment was made when I was in a not-too-reasonable frame of mind, and I'm over it.
Is teaching, learning, studying rationality valuable?
Not as a bridge to other disciplines, or a way to meet cool people. I mean, is the subject matter itself valuable as a discipline in your opinion? Is there enough to this? Is there anything here worth proselytizing?
I'm starting to doubt that. "Here, let me show you how to think more clearly" seems like an insult to anyone's intelligence. I don't think there's any sense teaching a competent adult how to change his or her habits of thought. Can you imagine a perfectly competent person -- say, a science student -- who hasn't heard of "rationalism" in our sense of the word, finding such instruction appealing? I really can't.
Of course I'm starting to doubt the value (to myself) of thinking clearly at all.
Replies from: lessdazed, orthonormal, None, wedrifid, handoflixue, Morendil, None↑ comment by lessdazed · 2011-09-06T13:44:35.632Z · LW(p) · GW(p)
Yesterday I spoke with my doctor about skirting around the FDA's not having approved a drug that may be approved in Europe first (or it may be approved in the US first). I explained that one first-world safety organization's imprimatur is good enough for me until the FDA gives a verdict, and that harm from taking a medicine is not qualitatively different from harm from not taking a medicine.
We also discussed a clinical trial of a new drug, and I had to beat him with a stick until he abandoned "I have absolutely no idea at all if it will be better for you or not". I explained that abstractly, a 50% chance of being on a placebo and a 50% chance of being on a medicine with a 50% chance of working was better than assuredly taking a medicine with a 20% chance of working, and that he was able to give a best guess about the chances of it working.
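For concreteness, the arithmetic behind that comparison, taking the stated probabilities at face value:

```python
p_trial = 0.5 * 0.5    # 50% chance of the active drug times a 50% chance it works
p_established = 0.20   # the established medicine's stated 20% chance of working
print(p_trial, p_established)   # 0.25 vs 0.20: in the abstract, the trial is the better bet
```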
In practice, there are other factors involved; in this case it's better to try the established medicine first and just see whether it works, as part of exploration before exploitation.
This is serious stuff.
Replies from: wedrifid↑ comment by wedrifid · 2011-09-07T08:15:30.017Z · LW(p) · GW(p)
We also discussed a clinical trial of a new drug, and I had to beat him with a stick until he abandoned "I have absolutely no idea at all if it will be better for you or not". I explained that abstractly, a 50% chance of being on a placebo and a 50% chance of being on a medicine with a 50% chance of working was better than assuredly taking a medicine with a 20% chance of working, and that he was able to give a best guess about the chances of it working.
Better yet, if you aren't feeling like being altruistic, you go on the trial, then test the drug you are given to see if it is the active substance. If not, you tell the trial folks that placebos are for pussies and go ahead and find either an alternate source of the drug or the next best thing you can get your hands on. It isn't your responsibility to be a control subject unless you choose to be!
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-07T09:01:36.405Z · LW(p) · GW(p)
Downvoted for encouraging people to screw over other people by backing out of their agreements... What would happen to tests if every trial patient tested their medicine to see if it's a placebo? Don't you believe there's value in having control groups in medical testing?
Replies from: AlanCrowe, wedrifid, lessdazed↑ comment by AlanCrowe · 2011-09-07T09:47:26.590Z · LW(p) · GW(p)
Lessdazed is describing quite a messy situation. Let me split out various subcases.
First is the situation with only one approval authority running randomised controlled trials on medicines. These trials are usually in three phases. Phase I on healthy volunteers to check for toxicity and metabolites. Phase II on sufferers to get an idea of the dose needed to affect the course of the illness. Phase III to prove that the therapeutic protocol established in Phase II actually works.
I have health problems of my own and have fancied joining a Phase III trial for early access to the latest drugs. Reading around, for example, it seems to be routine for drugs to fail in Phase III. Outcomes seem to be vaguely along the lines of: three in ten are harmful, six in ten are useless, one in ten is beneficial. So the odds that a new drug will help, given that it was the one out of ten that passed Phase III, are good, while the odds that a new drug will help, given that it is about to start Phase III, are bad.
Joining a Phase III trial is a genuinely altruistic act by which the joiner accepts bad odds for himself to help discover valuable information for the greater good.
I was confused by the idea of joining a Phase III trial and unblinding it by testing the pill to see whether one had been assigned to the treatment arm of the study or the control arm. Since the drug is more likely to be harmful than to be beneficial, making sure that you get it is playing against the odds!
Second, Lessdazed seemed to be considering the situation in which EMA has approved a drug and the FDA is blocking it in America, simply as a bureaucratic measure to defend its home turf. If it were really as simple as that, I would say that cheating to get round the bureaucratic obstacles is justified.
However the great event of my lifetime was man landing on the Moon. NASA was brilliant and later became rubbish. I attribute the change to the Russians dropping out of the space race. In the 1960's NASA couldn't afford to take bad decisions for political reasons, for fear that the Russians would take the good decision themselves and win the race. The wider moral that I have drawn is that big organisations depend on their rivals to keep them honest and functioning.
Third: split decisions with the FDA and the EMA disagreeing, followed by a treat-off to see who was right, strike me as essential. I dread the thought of a single, global medicine agency that could prohibit a drug world wide and never be shown up by approval and successful use in a different jurisdiction.
Hmm, my comment is losing focus. My main point is that joining a Phase III trial is, on average, a sacrifice for the common good.
Replies from: lessdazed↑ comment by wedrifid · 2011-09-07T09:30:12.778Z · LW(p) · GW(p)
Downvoted for encouraging people to screw over other people by backing out of their agreements... What would happen to tests if every trial patient tested their medicine to see if it's a placebo? Don't you believe there's value in having control groups in medical testing?
Downvoted for actively polluting the epistemic belief pool for the purpose of a shaming attempt. I here refer especially (but not only) to the rhetorical question:
Don't you believe there's value in having control groups in medical testing?
I obviously believe there's a value in having control groups. Not only is that an obvious belief but it is actually conveyed by my comment. It is a required premise for the assertion of altruism to make sense.
My comment observes that sacrificing one's own (expected) health for the furthering of human knowledge is an act of altruism. Your comment actively and directly sabotages human knowledge for your own political ends. The latter I consider inexcusable and the former is both true and necessary if you wish to encourage people who are actually capable of strategic thinking on their own to be altruistic.
You don't persuade rationalists to conform to your will by telling them A is made of fire or by trying to fool them into believing A, B and C don't even exist. That's how you persuade suckers.
Replies from: lessdazed, ArisKatsaris↑ comment by lessdazed · 2011-09-07T10:08:29.434Z · LW(p) · GW(p)
Your comment actively and directly sabotages human knowledge for your own political ends.
OK, see, I thought this might happen. I love your first comment, much more than ArisKatsaris', but despite its having some of the problems ArisKatsaris is referring to, not because it is perfect. I only upvoted his comment so I could honestly declare that I had upvoted both of your comments, as I thought that might defuse the situation - to say I appreciated both replies.
Don't get me wrong - I don't really mind ArisKatsaris' comment and I don't think it's as harmful as you seem to, but I upvoted it for the honesty reason.
You just committed an escalation of the same order of magnitude that he did, or more, as his statements were phrased as questions and were far less accusatory. I thought you might handle this situation like this and I mildly disapprove of being this aggressive with this tone this soon in the conversation.
Replies from: wedrifid↑ comment by wedrifid · 2011-09-07T10:40:36.395Z · LW(p) · GW(p)
I don't think it's as harmful as you seem to
A very slightly harmful instance of a phenomenon that is moderately bad when done on things that matter.
I thought you might handle this situation like this and I mildly disapprove of being this aggressive with this tone this soon in the conversation.
Where 'this soon' means the end. There is nothing more to say, at least in this context. (As a secondary consideration, my general policy is that conversations which begin with shaming terminate with an error condition immediately.) I do, however, now have inspiration for a post on the purely practical downsides of suppressing consideration of rational alternatives in situations similar to that discussed by the post.
EDIT: No, not a post. It is an open thread comment by yourself that could have been a discussion post!
Replies from: lessdazed↑ comment by lessdazed · 2011-09-07T10:51:21.257Z · LW(p) · GW(p)
Compare and contrast my (September 7th, 2011) approach to yours (September 7th, 2011), I guess.
Where 'this soon' means the end.
ADBOC, it didn't have to be.
It is an open thread comment by yourself that could have been a discussion post!
↑ comment by ArisKatsaris · 2011-09-07T09:45:42.696Z · LW(p) · GW(p)
I obviously believe there's a value in having control groups. Not only is that an obvious belief but it is actually conveyed by my comment. It is a required premise for the assertion of altruism to make sense.
Not so; there exists altruism that is worthless or even of negative value. An all-altruistic CooperateBot is what allows DefectBots to thrive. Someone can altruistically spend all his time praying to imaginary deities for the salvation of mankind, and his prayers would still be useless. To think that altruism is about value is a map-territory confusion.
My comment observes that sacrificing one's own (expected) health for the furthering of human knowledge is an act of altruism.
Your comment doesn't just say it's altruistic. It also tells him that if he doesn't feel like being an altruist, he should tell people that "placebos are for pussies". Perhaps you were just joking when you effectively told him to insult altruists, and I didn't get it.
Either way, if he defected in this manner, not only would he be partially sabotaging the experiment he signed up for, he'd probably also be sabotaging his future chances of being accepted into any other trial. I know that if I were a doctor, I would be less likely to accept you into a medical trial.
Your comment actively and directly sabotages human knowledge for your own political ends.
Um, what? I don't understand. What deceit do you believe I committed in my above comment?
Replies from: Jack↑ comment by Jack · 2011-09-07T17:00:39.889Z · LW(p) · GW(p)
Let me see if I can summarize this thread:
Wedrifid made a strategic observation: if a person cares more about their own health than the integrity of the trial, it makes sense to find out whether they are on the placebo and, if they are, leave the trial and seek other solutions. He did this with somewhat characteristic colorful language.
You then voted him down for expressing values you disagree with. This is a use of downvoting that a lot of people here frown on, myself included (though I don't downvote people for explaining their reasons for downvoting, even if those reasons are bad). Even if wedrifid thought people should screw up controlled trials for their own benefit, his comment was still clever, immoral or not.
Of course, he wasn't actually recommending the sabotage of controlled trials -- though his first comment was sufficiently ambiguous that I wouldn't fault someone for not getting it. Luckily, he clarified this point for you in his reply. Now that you know wedrifid actually likes keeping promises and maintaining the integrity of controlled trials, what are you arguing about?
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-09-07T17:57:25.683Z · LW(p) · GW(p)
Wedrifid made a strategic observation: if a person cares more about their own health than the integrity of the trial, it makes sense to find out whether they are on the placebo and, if they are, leave the trial and seek other solutions.
To me it didn't feel like an observation; it felt like a very strong recommendation, given phrases like "Better yet", "tell them placebos are for pussies", "It isn't your responsibility!", etc.
Even if wedrifid thought people should screw up controlled trials for their own benefit his comment was still clever, immoral or not.
Eh, not really. It seemed shortsighted -- it doesn't really give an alternate way of procuring this medicine, it has the possibility of slightly delaying the actual medicine from going on the market (e.g. if other test subjects follow the example of seeking to learn whether they're on a placebo and also abandon the testing, forcing the thing to be restarted from scratch), and if a future medicine goes on trial, what doctor will accept test subjects who are known to have defected in this way?
Now that you know wedrifid actually likes keeping promises and maintaining the integrity of controlled trials what are you arguing about?
Primarily I fail to understand what deceit he's accusing me of when he compares my own attitude to claiming that "A is made of fire" (in context meaning effectively that I said defectors will be punished posthumously, that they'll go to hell; that I somehow lied about the repercussions of defection).
He attacks me for committing a crime against knowledge -- when of course that was what I thought he was committing, when I thought he was seeking to encourage control subjects to find out if they're on a placebo and quit the testing. Because, you know -- testing = search for knowledge, sabotaging testing = crime against knowledge.
Basically I can understand how I may have misunderstood him -- but I don't understand in what way he is misunderstanding me.
↑ comment by orthonormal · 2011-09-06T12:53:47.963Z · LW(p) · GW(p)
You're confuting two things here: whether rationality is valuable to study, and whether rationality is easy to proselytize.
My own experience is that it's been very valuable for me to study the material on Less Wrong - I've been improving my life lately in ways I'd given up on before, I'm allocating my altruistic impulses more efficiently (even the small fraction I give to VillageReach is doing more good than all of the charity I practiced before last year), and I now have a genuine understanding (from several perspectives) of why atheism isn't the end of truth/meaning/morals. These are all incredibly valuable, IMO.
As for proselytizing 'rationality' in real life, I haven't found a great way yet, so I don't do it directly. Instead, I tell people who might find Less Wrong interesting that they might find Less Wrong interesting, and let them ponder the rationality material on their own without having to face a more-rational-than-thou competition.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-09-06T17:23:33.369Z · LW(p) · GW(p)
Instead, I tell people who might find Less Wrong interesting that they might find Less Wrong interesting, and let them ponder the rationality material on their own without having to face a more-rational-than-thou competition.
This phrase jumped out in my mind as "shiny awesome suggestion!" I guess in a way it's what I've been trying to do for a while, since I found out early, when learning how to make friends, that most people, and especially most girls, don't seem to like being instructed on living their life. ("Girls don't want solutions to their problems," my dad quotes from a book about the male versus the female brain, "they want empathy, and they'll get pissed off if you try to give them solutions instead.")
The main problem is that most of my social circle wouldn't find LW interesting, at least not in its current format. Including a lot of people who I thought would benefit hugely from some parts, especially Alicorn's posts on luminosity. (I know, for example, that my younger sister is absolutely fascinated by people, and loves it when I talk neuroscience with her. I would never tell her to go read a neuroscience textbook, and probably not a pop science book either. Book learning just isn't her thing.)
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-09-06T18:57:59.609Z · LW(p) · GW(p)
Depending on what you mean by 'format', you might be able to direct those people to the specific articles you think they'd benefit from, or even pick out particular snippets to talk to them about (in a 'hey, isn't this a neat thing' sense, not a 'you should learn this' sense).
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-09-06T19:27:21.738Z · LW(p) · GW(p)
"Pick out particular snippets" seems to work quite well. If something in the topic of conversation tags, in my mind, to something I read on LessWrong, I usually bring it up and add it to the conversation, and my friends usually find it neat. But except with a few select people (and I know exactly who they are) posting an article on their facebook wall and writing "this is really cool!" doesn't lead to the article actually being read. Or at least they don't tell me about reading it.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-09-06T19:42:51.951Z · LW(p) · GW(p)
If Facebook is like Twitter in that regard, I mostly wouldn't expect you to get feedback about an article having been read - but I'd also not expect an especially high probability that the intended person actually read it, either. What I meant was more along the lines of emailing/IMing them individually with the relevant link. (Obviously this doesn't work too well if you know a whole lot of people who you think should read a particular article. I can't advise about that situation - my social circle is too small for me to run into it.)
Replies from: orthonormal, Swimmer963↑ comment by orthonormal · 2011-09-07T01:56:26.097Z · LW(p) · GW(p)
I, uh, just did that, and received this reply half an hour later:
Wow, thanks for destroying my chance of getting any work done for the next 7-10 days! Some friend you are!
I think that counts as a success.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-08T00:40:54.295Z · LW(p) · GW(p)
Upvotes to you for trying something instead of defaulting to doing nothing.
Replies from: orthonormal↑ comment by orthonormal · 2011-09-08T03:46:48.278Z · LW(p) · GW(p)
It wasn't actually on account of this discussion that I introduced my friend to LW (since I didn't read Swimmer and Adelene's comments till afterward)- I just posted the reaction here because it was funny and relevant.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-09-10T13:15:16.140Z · LW(p) · GW(p)
Sorry for the delayed reply...
I don't know what Twitter is like, but the function on Facebook that I prefer to use (private messages) is almost like email and seems to be replacing email among much of my social circle. I will preferentially send my friends FB messages instead of emails, since I usually get a reply faster.
Writing on someone's wall is public, and might result in a slower reply because it seems less urgent. But it's still directed at a particular person, and it would be considered rude not to reply at all. But when I post an article or link, the reply I often get is "thanks, looks neat, I'll read that later."
↑ comment by wedrifid · 2011-09-07T08:18:06.556Z · LW(p) · GW(p)
Not as a bridge to other disciplines, or a way to meet cool people. I mean, is the subject matter itself valuable as a discipline in your opinion?
A little bit but it varies wildly based on who you are.
Is there enough to this? Is there anything here worth proselytizing?
Not really.
↑ comment by handoflixue · 2011-09-09T22:53:30.606Z · LW(p) · GW(p)
"Here, let me show you how to think more clearly"
I was recently around some old friends who are lacking in rationality, and kept finding myself at a complete loss. I wanted to just grab them and say exactly that.
In other news, I've learned that some lessons in how to politely and subtly teach rationality would be quite welcome >.>
↑ comment by Morendil · 2011-09-06T13:01:10.331Z · LW(p) · GW(p)
I'm starting to doubt
Where's that coming from, then?
Replies from: None↑ comment by [deleted] · 2011-09-06T13:23:08.552Z · LW(p) · GW(p)
Well, there's been some talk about organizing a meetup group in my area, and I'm not really comfortable with that.
Replies from: Morendil↑ comment by Morendil · 2011-09-06T14:24:21.402Z · LW(p) · GW(p)
Are you not comfortable with that happening at all, or not comfortable with being involved in one?
What are your concerns - wasting your time, being perceived as belonging to a "weird" group, being drawn into a group process that is a net negative value to you?
I realize I'm not answering your original question. I'm still thinking about that one.
Replies from: None↑ comment by [deleted] · 2011-09-06T15:02:52.690Z · LW(p) · GW(p)
I'm not comfortable with it existing. I think it's not useful.
Replies from: Morendil↑ comment by Morendil · 2011-09-06T15:53:25.000Z · LW(p) · GW(p)
I'm more than a little surprised to see you say this, given your past writings on the subject - if asked I would certainly have guessed that your reply to your own question would have been "yes, of course".
I'm curious to know more, if you're comfortable saying more. Not sure what to say otherwise.
People with a common interest meeting up seems natural enough. I have reservations about normativism with respect to ways of thinking, but it does seem to me that what we are learning here is worthwhile in and of itself: because it is about finding out exactly what we are, and because - just like a zebra - what we are is something rare and peculiar and fascinating.
Replies from: None↑ comment by [deleted] · 2011-09-06T16:30:52.580Z · LW(p) · GW(p)
Well, if there are other people who feel that way, they're free to meet up to share that interest.
My serious answer: I'm not sure there's a well-defined, cumulative, discipline-like body of knowledge in the LessWrong memeplex. I don't know how it could be presented to an intelligent outsider who's never heard of it. I don't know whether it could be presented in a way that makes us look good.
My not-so-serious answer: a lot of the time I just don't care any more.
Replies from: jklsemicolon↑ comment by jklsemicolon · 2011-09-06T17:21:09.332Z · LW(p) · GW(p)
It sounds to me like you might be in some kind of depression or low-enthusiasm state. I don't hear a coherent critique in these comments, so much as a general sense of "boo 'rationality'/LW".
Contrast:
Are you not comfortable with that happening at all, or not comfortable with being involved in one?
I'm not comfortable with it existing. I think it's not useful.
and
People with a common interest meeting up seems natural enough.
Well, if there are other people who feel that way, they're free to meet up to share that interest
This feels inconsistent; as if you had been caught giving a non-true rejection.
Replies from: None↑ comment by [deleted] · 2011-09-07T04:11:11.655Z · LW(p) · GW(p)
.
Replies from: arundelo↑ comment by arundelo · 2011-09-07T04:30:35.709Z · LW(p) · GW(p)
Then we're all doomed.
You might be reading SarahC as saying that teaching a competent adult to change his or her habits of thought is not possible (if you're not, ignore this comment), but I think she's saying that it's not worthwhile.
Replies from: handoflixue↑ comment by handoflixue · 2011-09-09T22:54:49.296Z · LW(p) · GW(p)
If it is not worthwhile for competent adults to learn something as basic as "how to change their mind" then I would have to agree with the conclusion that we are doomed.
Replies from: Jack, Normal_Anomaly↑ comment by Jack · 2011-09-09T23:06:36.097Z · LW(p) · GW(p)
Er, why, exactly? Most competent adults in history have not known how to change their minds. The world has improved because of those who do. It seems to me that the key variable in teaching rationality is whether the student is willing. Most people just don't care that much about the truth of their far-beliefs. But occasional people do, and those are the people you can teach. That's why everyone here is a truth fetishist.
What we need is more pro-truth propaganda so that in the next generation the pool of potential rationalists is larger.
Replies from: handoflixue↑ comment by handoflixue · 2011-09-09T23:30:55.090Z · LW(p) · GW(p)
The emphasis here is on worthwhile: the idea that changing your mind, and knowing how to, has a tangible benefit, and one that is (generally, on average) worth the effort it takes to learn. If there's no particular benefit to changing your mind, then either (a) you have already selected the best outcome or (b) your choices are irrelevant.
If this is the best possible world, then I feel okay calling us doomed; it's a pretty lousy world.
As to irrelevancy, well, to think that I'd live the same life regardless of whether "Will you marry me?" is met with yes or no? That is not a world I want. The idea that given a set of choices, the outcome remains the same across them is just a terrifying nihilistic idea to me.
Replies from: Jack↑ comment by Jack · 2011-09-09T23:34:10.084Z · LW(p) · GW(p)
The claim isn't that it isn't worthwhile to learn rationalism, period. The claim is that for lots of people, it isn't worthwhile.
Replies from: handoflixue↑ comment by handoflixue · 2011-09-09T23:56:49.560Z · LW(p) · GW(p)
The claim is that, for lots of people, the net gain from changing their mind is so minimal as to not be worth the time spent studying. This implies strongly that, for lots of people, they have either (a) already made the best choice or (b) are not faced with any meaningful choices.
(a) implies that either lots of people are completely incapable of good decisions or are the Chosen Of God, their every selection Divinely Inspired from amongst the best of all possible worlds. Which goes back to this being a pretty lousy world.
(b) flies in the face of all the major decisions people normally make (marriage, buying a house, having children, etc.), and suggests that, statistically, a lot of the "important decisions" in my own life are probably meaningless unless I am the Chosen Of Bayes, specially exempt from the nihilism that blights the mundane masses.
For some people there may be the class (c) that the cost of learning rationality is much, much higher than normal. If your focus is on this group, that's a whole different conversation about why I think this is really rare :)
Replies from: Jack↑ comment by Jack · 2011-09-10T00:33:06.087Z · LW(p) · GW(p)
Just to begin with, the above is a terrible way to structure an inductive argument about something as variable as human behavior. Obviously few people are "completely incapable of good decisions or are the Chosen Of God" and no important decisions in life are "meaningless". It is, however, the case that most decisions don't matter all that much and that, when they do, people usually do a pretty good job without special training.
But the real issue that you're missing is opportunity cost. Lots of people don't know how to read or do arithmetic. Lots of people can't manage personal finances. Lots of people need more training to get a better job. Lots of people suffer from addiction. Lots of people don't have significant chunks of free time. Lots of people have children to raise. Almost everyone could benefit from learning something but many people either do not have the time or would benefit far more from learning a particular skill or trade rather than Bayesian math and how to identify cognitive biases.
Replies from: handoflixue↑ comment by handoflixue · 2011-09-11T02:21:05.473Z · LW(p) · GW(p)
Almost everyone could benefit from learning something but many people either do not have the time or would benefit far more from learning a particular skill or trade rather than Bayesian math and how to identify cognitive biases.
I'm not disagreeing with this at all. But given the option of teaching someone nothing or teaching them this? I think it's a net gain for them to learn how to change their mind. And I think most people have room in their life to pretty easily be casually taught a simple skill like this, or at least the basics. I've been teaching it as part of casual conversations with my roommate just because I enjoy talking about it.
Replies from: Jack↑ comment by Jack · 2011-09-11T20:20:21.868Z · LW(p) · GW(p)
But given the option of teaching someone nothing or teaching them this?
But that isn't the question.
I think it's a net gain for them to learn how to change their mind.
I think it is a net gain for a person to learn the arguments of Christian apologetics, that doesn't mean it is worthwhile for everyone to learn the arguments of Christian apologetics. Time is a limited resource.
I've taught aspects of rationality to lots of people because I like talking about it too. But my friends and family have learned it as a side effect of doing something they would be doing anyway, having interesting conversations with me. Some of them are interested in things like cognitive biases and learn on their own. But we don't yet have anything here that makes dramatic differences in people's lives such that it is important they spend precious resources on learning it.
ETA: That was a bit brisk of me. I think we just have different definitions of "worthwhile". :-)
↑ comment by Normal_Anomaly · 2011-09-16T21:04:51.828Z · LW(p) · GW(p)
If something's being worthwhile or not is a major consideration in whether or not we are doomed, doesn't that make it worthwhile? OTOH, if you mean "If we are the same amount of doomed whether or not people learn to change their minds, then we are very doomed," you are right.
Replies from: handoflixue↑ comment by handoflixue · 2011-09-16T23:04:37.562Z · LW(p) · GW(p)
"If we are the same amount of doomed whether or not people learn to change their minds, then we are very doomed"
I think that concisely summarizes the point I was trying to make. Thank you! :)
comment by klkblake · 2011-09-05T11:18:19.511Z · LW(p) · GW(p)
I'm confused about Kolmogorov complexity. From what I understand, it is usually expressed in terms of Universal Turing Machines, but can be expressed in any Turing-complete language, with no difference in the resulting ordering of programs. Why is this? Surely a language that had, say, natural language parsing as a primitive operation would have a very different complexity ordering than a Universal Turing Machine?
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2011-09-05T11:40:09.608Z · LW(p) · GW(p)
The Kolmogorov complexity changes by an amount bounded by a constant when you change languages, but the order of the programs is very much allowed to change. Where did you get that it wasn't?
Replies from: printing-spoon, klkblake↑ comment by printing-spoon · 2011-09-09T00:26:42.843Z · LW(p) · GW(p)
(this is because all Turing-complete languages can simulate each other)
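For concreteness, here is the usual statement of the invariance theorem behind this; the symbols K_A, K_B and c_{A,B} are just notation picked here for the complexities relative to two universal languages and the length of an interpreter for one written in the other:

```latex
% If B can run an interpreter for A of length c_{A,B}, then prefixing that
% interpreter to any A-program for x gives a B-program for x, so
K_B(x) \le K_A(x) + c_{A,B} \qquad \text{for every string } x.
% The constant does not depend on x, which is why changing languages shifts
% Kolmogorov complexity by at most a bounded amount -- while the relative
% order of two particular programs can still change.
```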
↑ comment by klkblake · 2011-09-05T22:52:19.067Z · LW(p) · GW(p)
I knew Kolmogorov complexity was used in Solomonoff induction, and I was under the impression that using Universal Turing Machines was an arbitrary choice.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2011-09-05T22:54:14.998Z · LW(p) · GW(p)
Solomonoff induction is only optimal up to a constant, and the constant will change depending on the language.
comment by jefftk (jkaufman) · 2012-03-19T20:29:21.378Z · LW(p) · GW(p)
Testing nofollow on a link that contains 'lesswrong' somewhere but doesn't point to lesswrong.com.
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2012-03-19T20:48:35.617Z · LW(p) · GW(p)
LessWrong does in fact fail to properly nofollow the link. I've reported it to Trike.
comment by Armok_GoB · 2011-09-03T20:29:30.593Z · LW(p) · GW(p)
I keep running into problems with various versions of what I internally refer to as the "placebo paradox", and can't find a solution that doesn't lead to Regret Of Rationality. Simple example follows:
You have an illness from which you'll either get better or die. The probability of recovering is exactly half of what you estimate it to be, due to the placebo effect/positive thinking. Before learning this you have 80% confidence in your recovery. Since you estimate 80%, your actual chance is 40%, so you update to this. Since the estimate is now 40%, the actual chance is 20%, so you update to this. Then it's 10%, so you update to that, and so on, until both your estimated and actual chance of recovery are 0. Then you die.
An irrational agent, on the other hand, upon learning this could self delude to 100% certainty of recovery, and have a 50% chance of actually recovering.
This is actually causing me real world problems, such as inability to use techniques based on positive thinking, and a lot of cognitive dissonance.
Another version of this problem features in HP:MoR, in the scene where Harry is trying to influence the behaviour of Dementors.
And to show this isn't JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.
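A minimal sketch of that update loop in Python, using the simplified 50% rule from above (the function names are placeholders chosen here, nothing standard):

```python
def consistent_estimate(update_rule, initial_estimate, steps=100):
    """Repeatedly replace the estimate with what the rule says the actual
    chance is, until it settles at a self-consistent (fixed-point) value."""
    estimate = initial_estimate
    for _ in range(steps):
        estimate = update_rule(estimate)
    return estimate

# The simplified rule above: actual chance = half of the estimated chance.
half_rule = lambda estimate: estimate / 2

print(consistent_estimate(half_rule, 0.80))  # ~0.0: the only self-consistent estimate
```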
Replies from: Risto_Saarelma, Eliezer_Yudkowsky, Torben, None, shokwave, None, Pfft, handoflixue, Normal_Anomaly, gwern, Richard_Kennaway, brazil84, christina, Dorikka, None↑ comment by Risto_Saarelma · 2011-09-03T20:44:36.904Z · LW(p) · GW(p)
For actual humans, I'd look into ways of possibly activating the placebo effect without explicit degrees of belief, such as intense visualization of the desired outcome.
Replies from: JoshuaZ, endoself, Armok_GoB↑ comment by JoshuaZ · 2011-09-03T21:42:40.317Z · LW(p) · GW(p)
This is an interesting idea but I'm skeptical that this would actually work. There are studies which I don't have the citations for (they are cited in Richard Wiseman's "59 Seconds") which strongly suggest that positive thinking in many forms doesn't actually work. In particular, having people visualize extreme possibilities of success (e.g. how strong they'll be after they've worked out, or how much better looking they will be when they lose weight, etc.) makes people less likely to actually succeed (possibly because they spend more time simply thinking about it rather than actually doing it). This is not strong evidence but it is suggestive evidence that visualization is not sufficient to do that much. These studies didn't look at medical issues where placebos are more relevant.
Replies from: Manfred↑ comment by Manfred · 2011-09-03T23:34:03.240Z · LW(p) · GW(p)
http://articles.latimes.com/2010/dec/22/health/la-he-placebo-effect-20101223
The human brain is a weird thing. Also, see the entire body of self-hypnosis literature.
↑ comment by endoself · 2011-09-04T05:51:23.930Z · LW(p) · GW(p)
Another method to try is affirmations.
↑ comment by Armok_GoB · 2011-09-03T21:19:25.099Z · LW(p) · GW(p)
Any data on whether this is actually possible, and if so, how to do it? Does it work for other things such as social confidence, positive thinking, etc.?
It certainly SEEMS like it's the declarative belief itself, not visualizations of outcomes, that causes the effects. And the fact that so many attempts at perfect deception have failed seems to indicate it's not possible to disentangle [your best rational beliefs] from what your "brain thinks" you believe.
(... I really need some better notation for talking about these kind of things unambiguously.)
Replies from: atucker, pjeby↑ comment by atucker · 2011-09-05T05:04:09.235Z · LW(p) · GW(p)
It certainly SEEMS like it's the declarative belief itself, not visualizations of outcomes, that causes the effects. And the fact that so many attempts at perfect deception have failed seems to indicate it's not possible to disentangle [your best rational beliefs] from what your "brain thinks" you believe.
I'm skeptical as to how common it is for your beliefs to influence anything outside of your head, except through your actions. If your belief X makes Y happen because of method Z, then in order to get Y you only need to know about Z, and that it works. Then you can do Z regardless of X, because what you do mostly screens off what you think.
If you can't get yourself to do something because of a particular belief, that's another issue.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-05T10:42:18.419Z · LW(p) · GW(p)
No, in humans this is not the case, unless you have a much broader definition of "action" than is useful. For example, other humans can read your intentions and beliefs from your posture and facial expression, the body reacts autonomously to beliefs with stuff like producing drugs and shunting around blood flow, and some entire classes of problems such as mental illness or subjective well being reside entirely in your brain.
Replies from: atucker↑ comment by atucker · 2011-09-05T15:01:29.643Z · LW(p) · GW(p)
Sorry about my last sentence in the previous post sounding dismissive, that was sloppy, and not representative of my views.
I guess my real issue with this is that I don't think there's a 50% placebo, and I disagree that the "declarative belief" does things directly. My anticipation of success or failure has an influence on my actions, but a 50% placebo would, I imagine, work in real life through hidden, unanticipated factors, to the point that someone with accurate beliefs could say "my anticipation contributes this much, X contributes this much, Y contributes this much, Z contributes this much, and given my x, y, z I anticipate this" and be pretty much correct.
In the least convenient possible universe, there seem to be enough hacks that rationality enables that I would reject the 50% placebo and still net a win. I don't think we live in a universe where the majority of utility is behind 50% placebos.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-05T18:36:35.896Z · LW(p) · GW(p)
Why does everyone get stuck on that highly simplified example, which I only made that simple so the math would be easy to follow?
Or are you simply saying that placebos and the like are an unavoidable cost of being a rationalist and we just have to deal with it and it's not that big a cost anyway?
Replies from: atucker↑ comment by atucker · 2011-09-05T18:52:11.133Z · LW(p) · GW(p)
More the latter, with the added caveat that I think that there are fewer things falling under the category of "and the like" than you think there are.
I used to think that my social skills were being damaged by rationality, but then through a combination of "fake it till you make it", learning a few skills, and dissolving a few false dilemmas, they're now better than they were pre-rationality.
If you want to go into more personal detail, feel free to PM.
↑ comment by pjeby · 2011-09-05T02:47:00.932Z · LW(p) · GW(p)
It certainly SEEMS like it's the declarative belief itself, not visualizations of outcomes, that causes the effects.
Taboo "declarative". To me, it sounds like you're talking about a verbal statement ("declared"), in which case it's pretty obviously false. AFAIK, priming effects work just fine without words.
Replies from: Armok_GoB↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-04T06:58:41.995Z · LW(p) · GW(p)
Actually, you can solve this problem just by snapping your fingers, and this will give you all the same benefits as the placebo effect! Try it - it's guaranteed to work!
Replies from: handoflixue, lessdazed, Armok_GoB↑ comment by handoflixue · 2011-09-09T23:49:57.177Z · LW(p) · GW(p)
I've been doing this for years, and it really does work!
(No, really, I actually have; it actually does. The placebo effect is awesome ^_^)
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-16T21:22:15.409Z · LW(p) · GW(p)
Relevant and amusing (to me at least) story: A few months ago when I had a cold, I grabbed a box of zinc cough drops from my closet and started taking them to help with the throat pain. They worked as well or better than any other brand of cough drops I've tried, and tasted better too. Later I read the box, and it turned out they were homeopathic. I kept on taking them, and kept on enjoying the pain relief.
↑ comment by lessdazed · 2011-11-06T16:03:13.717Z · LW(p) · GW(p)
it's guaranteed to work!
Probably not. Try throwing a coin in a wishing well or lighting a dollar bill on fire for more effect.
In the regular-price group, 85.4% (95% confidence interval [CI], 74.6%-96.2%) of the participants experienced a mean pain reduction after taking the pill, vs 61.0% (95% CI, 46.1%-75.9%) in the low-price (discounted) group (P = .02). Similar results occurred when analyzing only the 50% most painful shocks for each participant (80.5% [95% CI, 68.3%-92.6%] vs 56.1% [95% CI, 40.9%-71.3%], respectively; P = .03).
↑ comment by Armok_GoB · 2011-09-04T12:30:55.747Z · LW(p) · GW(p)
... Even YOU miss the point? I guess I utterly failed at explaining it, then.
IF I could solve the problem I'm stating in the first post, then this would indeed be almost true. It might be true in 99% of cases, but 0.99^infinity is still ~0, so that is the only probability I can consistently assign to it. I MIGHT be able to self-modify to be able to hold inconsistent beliefs, but that's doublethink, which you have explicitly, loudly and repeatedly warned against and condemned.
I'm baffled at how I seem unable to point at/communicate the concept. I even tried pointing at a specific instance of you using something very similar in MoR.
Replies from: shokwave, khafra, shokwave↑ comment by shokwave · 2011-09-04T15:37:36.518Z · LW(p) · GW(p)
... Even YOU miss the point? I guess I utterly failed at explaining it, then.
Eliezer is not "the most capable of understanding (or repairing to an understandable position) commentor on LessWrong". He is "the most capable of presenting ideas in a readable format" AND "the person with the most rational concepts" on LessWrong. Please stop assuming these qualities are good proxies for, well, EVERYTHING.
Replies from: wedrifid, Jack, Armok_GoB↑ comment by wedrifid · 2011-09-04T18:30:00.665Z · LW(p) · GW(p)
Eliezer is not "the most capable of understanding (or repairing to an understandable position) commentor on LessWrong".
Agree. I wouldn't go as far as to say he was worse than average at understanding others but it certainly isn't what he is renowned for!
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T20:41:48.043Z · LW(p) · GW(p)
I thought it was all just g factor + understanding of language.
Replies from: wedrifid↑ comment by wedrifid · 2011-09-05T08:12:47.894Z · LW(p) · GW(p)
Not quite. Having the right priors about other people's likely beliefs, patience and humility are all rather important.
There are some people who I consider incredibly intelligent and who clearly understand the language that I basically expect to be replying to a straw man whenever they make a reply, all else being equal. (Not Eliezer.)
Replies from: Armok_GoB↑ comment by Jack · 2011-09-06T14:51:26.876Z · LW(p) · GW(p)
"the person with the most rational concepts"
What does this mean?
Replies from: shokwave↑ comment by shokwave · 2011-09-07T00:27:51.021Z · LW(p) · GW(p)
Each one of his sequence posts represents a concept in rationality - so he has many more of these concepts than anyone else here on LW.
(I just noticed there's some ambiguity - it's the largest amount of rational concepts, not concepts of the highest standard of rational. [most] [rational concepts], not [most rational] [concepts].)
↑ comment by khafra · 2011-09-07T16:14:11.547Z · LW(p) · GW(p)
The probability of recovering is exactly half of what you estimate it to be due to the placebo effect/positive thinking.
It would take an artificially bad situation for this to be the case. In the real world, the placebo effect still works, even if you know it's a placebo--although with diminished efficacy.
But that's beside the point. More on-point is that intentional self-delusion, if possible, is at best a crapshoot. It's not systematic; it relies on luck, and it's prone to Martingale-type failures.
The HPMOR and placebo examples appear, to me, to share another confounding factor: The active ingredient isn't exactly belief. It's confidence, or affect, or some other mental condition closely associated with belief. If it weren't, there'd be no way Harry could monitor his level of belief that the dementors would do what he wanted them to, while simultaneously trying to increase it. Anecdotally, my own attempts at inducing placebo effects feel similar.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-07T19:11:27.247Z · LW(p) · GW(p)
The placebo effect works if your brain thinks that you think that it will work, if I understood things correctly.
And yes, that I can't reliably self delude, and even if I could it would be prone to backfire, is exactly what causes this to be a problem.
I'm decently sure that my brain does not store beliefs separately from confidence, affect, etc.
I thought that was exactly the point of the Dementor sequence: that it was an impossible paradox.
↑ comment by shokwave · 2011-09-04T15:40:51.773Z · LW(p) · GW(p)
The supposed equivalent version in HP:MOR... (I do not wish to speak for anyone else - feel free to chime in yourselves)
That scene was a clear example - to me - of TDT being successful outside of the prisoner's dilemma scheme. In a case where apparently only ignorance would help, TDT can transcend and provide (almost) the same power.
Replies from: Armok_GoB↑ comment by Torben · 2011-09-04T04:34:01.397Z · LW(p) · GW(p)
Your model assumes a constant effect in each iteration. Is this justified?
I would envisage a constant chance of recovery and an asymptotically declining estimate of recovery. It seems more realistic, but maybe it's just me?
Replies from: Armok_GoB↑ comment by [deleted] · 2011-09-07T14:09:26.979Z · LW(p) · GW(p)
Speaking of Omega setting up an isomorphic situation, the Newcomb's Box problems do a good job of expressing this.
http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/
However, I also thought of a side question. Is a person who is caught in a cycle of negative thinking, like the placebo loop you mention, engaging in confirmation bias?
I mean, if that person thinks "I am caught in a loop of updates that will inexorably lead to my certain death," and they are attempting to establish that that is true, they can't simply offer "I went from 80%/40% to 40%/20% to 20%/10%, and this will continue. I'm screwed!" as evidence of its truth, because that's like giving "4,6,8", "6,8,10", "8,10,12" as guesses for the rule that you know "2,4,6" follows, and then saying "The rule is even numbers, right? Look at all this evidence!"
If a person has a hypothesis that their thoughts are leading them to an inexorable and depressing conclusion, then to test the hypothesis, the rational thing to do is for that person to try proving themselves wrong. By trying "10,8,6" and then getting "No, that is not the case." (Because the real rule is numbers in increasing order.)
I actually haven't confirmed this idea myself yet; I just thought of it now. But casting it in this light makes me feel a lot better about all the times I perform what appear at the time to be self-delusions on my brain when I'm caught in depressive thinking cycles, so I'll throw it out here and see if anyone can contradict it.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-07T19:34:16.788Z · LW(p) · GW(p)
Thanks for restating parts of the problem in a much clearer manner!
And yeah, that article is why this problem is wreaking such havoc on me, and I was thinking of it as I wrote the OP. I'm not sure why I didn't link it.
However, I still can't resolve the paradox, although I'm finally starting to see how one might start on doing so: formalizing an entire decision theory that solves the entire class of problems, and then swapping half my mindware out in a single operation. That doesn't seem like a very good solution, though, so I'd rather keep looking for third options.
I don't think I understand the middle paragraph with all the examples. Probably because the way I actually think of it is not the way I used in the OP, but rather an equation where expectation must be equal to actual probability to call my belief consistent, and jumping straight there. Like so: P=E/2, E=P, thus E=0.
Hmm, I just got a vague intuition saying roughly "Hey, but wait a moment, probability is in the mind. The multiverse is timeless and in each Everett branch you either do recover or you don't! ", but I'm not sure how to proceed from there.
↑ comment by shokwave · 2011-09-04T15:32:05.231Z · LW(p) · GW(p)
Updating on the evidence of yourself updating is almost as much of a problem as updating on the evidence of "I updated on the evidence of myself updating". Tongue-in-cheek!
That is to say, the decision theory you are currently running is not equipped to handle the class of problems where your response to a problem is evidence that changes the nature of the very problem you are responding to - in the same way that arithmetic is not equipped to handle problems requiring calculus or CDT is not equipped to handle Omega's two-box problem.
(If it helps your current situation, placebo effects are almost always static modifiers on your scientific/medical chances of recovery)
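A throwaway comparison of the two models (all numbers invented for illustration): a static modifier just shifts the base rate and leaves a sensible self-consistent estimate, while the feedback rule from the original comment collapses to zero:

```python
# Two toy models of "actual recovery chance" as a function of your estimate e.
feedback = lambda e: e / 2        # belief feeds back into the outcome
static = lambda e: 0.35 + 0.10    # base rate plus a flat placebo bonus, independent of e

for name, rule in [("feedback", feedback), ("static", static)]:
    e = 0.8                       # arbitrary starting estimate
    for _ in range(200):          # iterate until the estimate is self-consistent
        e = rule(e)
    print(name, round(e, 4))      # feedback -> 0.0, static -> 0.45
```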
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T16:34:02.866Z · LW(p) · GW(p)
Do you have a suggestion for a better decision theory, or a suggestion on how exactly I have misinterpreted TDT to cause my current problems?
Knowing that MIGHT help, but probably not in practice. Specifically, for every given instance of the problem I'd need to know a probability to assign which, once assigned, is also the actual chance.
↑ comment by [deleted] · 2011-09-06T03:16:56.169Z · LW(p) · GW(p)
Can you see what an absurdly implausible scenario you must use as a ladder to demonstrate rationality as a liability? Rather than being a strike against strict adherence to reality, the fact that we have to stretch so hard to paint it this way further legitimizes the pursuit of rationality.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-06T07:58:54.013Z · LW(p) · GW(p)
Except I happen to, as far as I can tell, be in that "implausible" scenario IRL, or at least an isomorphic one.
Replies from: None, handoflixue↑ comment by [deleted] · 2011-09-06T15:37:46.739Z · LW(p) · GW(p)
I mean no disrespect for your situation, whatever it may be. I gave this some additional thought. You are saying that you have an illness in which the rate of recovery is increased by fifty percent due to a positive outlook and the placebo effect this mindset produces, or that an embrace of the facts of your condition leads to an exponential decline at the rate of fifty percent. Is it depression, or some other form of mental illness? If it is, then the cause of death would likely be suicide. I am forced to speculate because you were purposefully vague.
For the sake of argument I will go with my speculative scenario. It is very common for those with bipolar disorder and clinical depression to create a negative feedback loop which worsens their situation in the way you have highlighted. But it wouldn't carry the exacting percentages of taper (indeed, no illness would carry that exact level of decline based merely on the thoughts in the patient's head). But given your claims that the illness exponentially declines, wouldn't the solution be knowledge of this reality? It seems that the delusion has come in the form of accepting that an illness can be treated with positive thinking alone. The illness is made worse by an acceptance not of rationality, but of this unsupported data, which by my understanding is irrational.
I am very skeptical of your scenario, merely because I do not know of any illnesses which carry this level of health decline due to the absence of a placebo. If you have it please tell me what it is as I would like to begin research now.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-06T15:56:41.212Z · LW(p) · GW(p)
It's not depression or bipolarity, probably, but for the purposes of this discussion the difference is probably irrelevant.
I never claimed the 50% thing was ever anything other than a gross simplification to make the math easier. Obviously it's much more complicated than that with other factors, less extreme numbers, and so on, but the end result is still isomorphic to it. Maybe it's even polynomial rather than exponential, but it's still a huge problem.
↑ comment by handoflixue · 2011-09-09T23:34:23.514Z · LW(p) · GW(p)
Can you actually describe the scenario you really are in? I can think of ways I'd address a lot of real-world analogues, but none of them are actually isomorphic to the example you gave. The solutions generally rely on the lack of a true isomorphism, too.
Replies from: Armok_GoB↑ comment by Pfft · 2011-09-06T22:01:05.917Z · LW(p) · GW(p)
atucker wrote a Discussion post about this.
Replies from: Armok_GoB↑ comment by handoflixue · 2011-09-09T23:39:31.866Z · LW(p) · GW(p)
http://www.guardian.co.uk/science/2010/dec/22/placebo-effect-patients-sham-drug
It is also well worth noting that the Placebo Effect works just fine even if you know it's just a Placebo Effect. I hadn't realized it worked for others, but I've been abusing this one for a lot of my life, thanks to a neurological quirk that makes placebos especially potent for me.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-10T10:00:34.164Z · LW(p) · GW(p)
Yes, but you have to BELIEVE the placebos will help. In fact, the paradox ONLY appears in the case where you know it's a placebo, because that's when the feedback loop can happen.
Replies from: handoflixue↑ comment by handoflixue · 2011-09-11T02:23:39.765Z · LW(p) · GW(p)
I'm not aware of any research that says a placebo won't help a "non-believer" - can you cite a study? Given the study I linked where they were deliberately handed inert pills and told that they were an inert placebo, and they still worked, I actually strongly doubt your claim.
And given the research I linked, why in the world wouldn't you believe in them? They do rationally work.
Replies from: Armok_GoB, Morendil↑ comment by Armok_GoB · 2011-09-11T09:17:09.309Z · LW(p) · GW(p)
A placebo will help if you think the pill you're taking will help. This may be because you think it's a non-placebo pill that'd help even if you didn't know you were taking it, or because you know it's a placebo but think placebos work. If you were given a placebo pill, told it was just a candy, and given no indication it might help anything, it wouldn't do anything, because it's just sugar. Likewise if you're given a placebo, know it's a placebo, and are convinced on all levels that there is no chance of it working.
Replies from: handoflixue↑ comment by handoflixue · 2011-09-12T18:26:25.307Z · LW(p) · GW(p)
Right. So find someone who will tell you it's a placebo, and read up on the research that says it does work. It'd be irrational to believe that they don't work, given the volume of research out there.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-12T18:36:14.786Z · LW(p) · GW(p)
facepalms Did you even read any other post in this thread?
Replies from: handoflixue, lessdazed↑ comment by handoflixue · 2011-09-12T20:27:44.004Z · LW(p) · GW(p)
Yes, but you have to BELIEVE the placebos will help.
Quite a few of them. You're being vague enough that I can only play with the analogies you give me. You gave me the analogy of a placebo not working if you don't believe in it; I pointed out that disbelief in placebos is rather irrational.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-12T21:09:45.353Z · LW(p) · GW(p)
Trying to figure out if it's rational or not, and if so HOW it's rational so I can convince my brain of it, is exactly what the entire discussion is about starting from the first post here: http://lesswrong.com/lw/7fo/open_thread_september_2011/4r8q
↑ comment by Morendil · 2011-09-11T11:04:03.593Z · LW(p) · GW(p)
A single study is not sufficient grounds to believe in something, especially a proposition as complicated as "placebos work" (it may not sound complicated expressed in this way, but if you taboo the words 'placebo' and 'work' you'll see that there is a lot of machinery in there).
See previous discussion here and note my remarks, I recommend reading the linked articles.
Replies from: handoflixue↑ comment by handoflixue · 2011-09-12T18:24:57.087Z · LW(p) · GW(p)
http://scienceblogs.com/insolence/2011/07/dangerous_placebo_medicine_in_asthma.php for a second study, and one that explicitly addresses your concern of psychological vs health benefits (summary: placebos have no actual health benefits, they just manage the psychological side)
Given Armok is looking for a psychological solution, this still seems relevant. There have been a number of interesting studies on placebo effects; whether it's the actual pill or just priming, it does have a well-documented and noted beneficial effect, and it seemed relevant to Armok's situation.
↑ comment by Normal_Anomaly · 2011-09-04T19:02:32.471Z · LW(p) · GW(p)
I think one way to avoid having to call this regret of rationality would be to see optimism as deceiving, not yourself, but your immune system. The fact that the human body acts differently depending on the person's beliefs is a problem with human biology, which should be fixed. If Omega does the same thing to an AI, Omega is harming that AI, and the AI should try to make Omega stop it.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T21:02:57.128Z · LW(p) · GW(p)
Well, deceiving something else by means of deceiving yourself still involves doublethink. It's the same as saying humans should not try to be rational.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-04T22:12:42.645Z · LW(p) · GW(p)
It's saying that it may be worth sacrificing accuracy (after first knowing the truth so you know whether to deceive yourself!) in order to deceive another agent: your immune system. It's still important to be rational in order to decide when to be irrational: all the truth still has to pass through your mind at some point in order to behave optimally.
On another note, you may benefit from reciting the Litany of Tarski:
If lying to myself can sometimes be useful, I want to believe that lying to myself can sometimes be useful.
If lying to myself cannot be useful, I want to believe that lying to myself cannot be useful.
Let me not become attached to beliefs I may not want.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T22:26:44.727Z · LW(p) · GW(p)
I know my brain is a massively parallel neural network with only smooth fitness curves, and certainly isn't running an outdated version of Microsoft Windows, but from how it's behaving in response to this, you couldn't tell. I'm a sucky rationalist. :(
↑ comment by gwern · 2011-09-04T01:22:56.682Z · LW(p) · GW(p)
And to show this isn't JUST a quirk of human mind design, one can envision Omega setting up an isomorphic problem for any kind of AI.
An AI can presumably self-modify. For a sufficient reward from Omega, it is worth degrading the accuracy of one's beliefs, especially if the reward will immediately allow one to make up for the degradation by acquiring new information/engaging in additional processing.
(A hypothetical: Omega offers me 1000 doses of modafinil, if I will lie on one PredictionBook.com entry and say -10% what I truly believe. I take the deal and chuckle every few minutes the first night, when I register a few hundred predictions to make up for the falsified one.)
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T12:15:48.341Z · LW(p) · GW(p)
This entirely misses the point. Yes, you could self modify, but it's a self modification away from rationality and that gives rise to all sorts of trouble as has been elaborated many times in the sequences. For example: http://lesswrong.com/lw/je/doublethink_choosing_to_be_biased/
Also, LYING about what you believe has nothing to do with this. Omega can read your mind.
Replies from: gwern↑ comment by gwern · 2011-09-04T13:57:45.017Z · LW(p) · GW(p)
I was trying to apply the principle of charity and interpret your post as anything but begging the question: 'assume rational agents are penalized. How do they do better than irrational agents explicitly favored by the rules/Omega?'
Question begging is boring, and if that's really what you were asking - 'assume rational agents lose. How do they not lose?' - then this thread is deserving only of downvotes.
And Eliezer was talking about humans, not the finer points of AI design in a hugely arbitrary setup. It may be a bad idea for LWers to choose to be biased, but a perfectly good idea for AIXI stuck in a particularly annoying computable universe.
Also, LYING about what you believe has nothing to do with this. Omega can read your mind.
Since I'm not an AI with direct access to my beliefs in storage on a substrate, I was using an analogy to as close as I can get.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T14:11:00.347Z · LW(p) · GW(p)
Sorry, I was hoping that there was some kind of difference between "penalize this specific belief in this specific way" and "penalize rationality as such in general", some kind of trick to work around the problem that I hadn't noticed and which resolved the dilemma.
And your analogy didn't work for me, is all I'm saying.
↑ comment by Richard_Kennaway · 2011-11-06T17:37:32.109Z · LW(p) · GW(p)
To fully solve this problem requires answering the question of how the placebo effect physically works, which requires answering the question of what a belief physically is, to have that physical effect.
However, no-one yet knows the answers to those questions, which renders all of these logical arguments about as useful as Zeno's proof that arrows cannot move. The problem of how to knowingly induce a placebo response is a physical one, not a logical one. Nature has no paradoxes.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-11-06T19:09:34.082Z · LW(p) · GW(p)
The first part is wrong; the second is obvious, and I never said anything to contradict it. We don't need to know exactly how beliefs are implemented, just approximately how they behave.
Of course this is a physical problem, and of course we don't know enough of the details to give an exact answer, but the math can still be useful for solving the problem.
Replies from: Richard_Kennaway, TimS↑ comment by Richard_Kennaway · 2011-11-06T20:16:35.826Z · LW(p) · GW(p)
the math can still be useful for solving the problem.
The point of your post was that the mathematics you are doing is creating the problem, not solving it. I haven't seen any other mathematics in this thread that is solving the problem either.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-11-06T22:34:35.595Z · LW(p) · GW(p)
Honestly, this discussion was too long ago for me to really remember what it was about well enough to discuss it properly.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2011-11-08T08:54:46.327Z · LW(p) · GW(p)
I have a couple of suggestions more constructive than my earlier comments.
One is that according to a paper recently cited here, placebos can work even if you know they're placebos.
The other is that if belief doesn't work for you, how about visualisation? Instead of trying to believe it will work, just imagine it working. Vividly imagine, not just imagining that it will work. This doesn't raise decision-theoretic paradoxes, and people claim results for it, although I don't know about proper studies. We don't know how placebos work, and "belief" isn't necessarily the key state of mind.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-11-08T11:56:12.060Z · LW(p) · GW(p)
That article was probably what caused me to notice the problem in the first place and write the OP.
Visualization is probably the most promising solution, and even if it's not as strong as the placebo effect it might be worth exploring. My main problems with it are that there's still some kind of psychological resistance to it, and that I have no clear idea of what exact concrete image I'm supposed to visualize given some abstract goal description.
↑ comment by brazil84 · 2011-09-05T11:06:21.539Z · LW(p) · GW(p)
I don't think it's a paradox, it's just that the perfect is sometimes the enemy of the good. Your brain has a lot of different components. With a lot of effort, you can change the way some of them think. Some of them will always be irrational no matter what either because they are impossible to change much or because there just isn't enough time in your life to do it.
Given that some components are irretrievably irrational, you may be better off in terms of accomplishing your goals if other components -- which you might be able to change -- stay somewhat irrational.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-05T11:17:57.923Z · LW(p) · GW(p)
Thing is, I can't consciously choose to be irrational. I'd first have to entirely reject a huge network of ideals that are the only thing making me even attempt to be slightly rational ever.
Replies from: handoflixue↑ comment by handoflixue · 2011-09-09T23:36:33.715Z · LW(p) · GW(p)
I challenge this assumption. I have a very well functioning, blissfully optimistic mindset that I can load when my rationality suggests that this ignorance is indeed my best defense. I wish I had the skill to understand how I reconcile this with the rational compartment in my mind, but the two do seem to co-exist quite happily, and I enjoy many of the perks of a positive outlook.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-10T10:02:53.780Z · LW(p) · GW(p)
Well, I don't and I can't. And I strongly doubt I could ever learn anything like that no matter what.
Replies from: shokwave↑ comment by shokwave · 2011-09-10T10:23:25.096Z · LW(p) · GW(p)
Given that a human brain can do it, you are perhaps too confident. A proof of concept would be to edit your brain with neurosurgery.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-10T10:48:21.988Z · LW(p) · GW(p)
I don't really count lobotomy as "learn".
Replies from: lessdazed, shokwave↑ comment by lessdazed · 2011-09-10T22:04:02.757Z · LW(p) · GW(p)
About Williams syndrome: I have read in several places that language skills are not sub-normal despite brain abnormalities in those areas, because there is much less than normal development in spatial and math/logic type areas. Having less raw brainpower to devote to language, they make up for it by being more subconsciously "focused", though that isn't quite the right word. They can be above or below average with language, depending on how it balances out; "normal" abilities are something like an average.
Also, such people are not naturally racist, unlike "normal" people. This is relevant for the aspie-leaning population here - non-neurotypical isn't inherently normative.
I wonder what severity of Asperger's syndrome is required to be non-racist? I strongly suspect there is a level that would be sufficient.
Replies from: gwern, NancyLebovitz↑ comment by gwern · 2011-09-10T23:18:49.188Z · LW(p) · GW(p)
Language-wise, it's kind of a mixed bag. How much do social things like sarcasm matter for 'language skills'? And how Williams syndrome leads to sociability and lack of racism is very interesting; following extract dump from https://www.nytimes.com/2007/07/08/magazine/08sociability-t.html?reddit
People with Williams tend to lack not just social fear but also social savvy. Lost on them are many meanings, machinations, ideas and intentions that most of us infer from facial expression, body language, context and stock phrasings. If you're talking with someone with Williams syndrome and look at your watch and say: "Oh, my, look at the time! Well it's been awfully nice talking with you . . . ," your conversational partner may well smile brightly, agree that "this is nice" and ask if you've ever gone to Disney World. Because of this — and because many of us feel uneasy with people with cognitive disorders, or for that matter with anyone profoundly unlike us — people with Williams can have trouble deepening relationships. This saddens and frustrates them. They know no strangers but can claim few friends....

Like most people with Williams, Nicki loves to talk but has trouble getting past a cocktail-party-level chatter. Nicki, however, has fashioned at least a partial solution. "Ever since she was tiny," Verna Hornbaker told me, "Nicki has always especially loved to talk to men. And in the last few years, by chance, she figured out how to do it. She reads the sports section in the paper, and she watches baseball and football on TV, and she has learned enough about this stuff that she can talk to any man about what the 49ers or the Giants are up to. My husband gets annoyed when I say this, but I don't mean it badly: men typically have that superficial kind of conversation, you know — weather and sports. And Nicki can do it. She knows what team won last night and where the standings are. It's only so deep. But she can do it. And she can talk a good long while with most men about it."...

In Williams the imbalance is profound. The brains of people with Williams are on average 15 percent smaller than normal, and almost all this size reduction comes from underdeveloped dorsal regions. Ventral regions, meanwhile, are close to normal and in some areas — auditory processing, for example — are unusually rich in synaptic connections. The genetic deletion predisposes a person not just to weakness in some functions but also to relative (and possibly absolute) strengths in others. The Williams newborn thus arrives facing distinct challenges regarding space and other abstractions but primed to process emotion, sound and language....

This window is longer than that for most infants, as Williams children, oddly, start talking a year or so later than most children...

Cognitive scientists argue over whether people with Williams have theory of mind. Williams people pass some theory-of-mind tests and fail others. They get many jokes, for instance, but don't understand irony. They make small talk but tend not to discuss the subtler dynamics of interpersonal relationships. Theory of mind is a slippery, multilayered concept, so the debate becomes arcane. But it's clear that Williamses do not generally sniff out the sorts of hidden meanings and intentions that lie behind so much human behavior....

"And the most important abnormalities in Williams," he says, "are circuits that have to do with basic regulation of emotions." The most significant such finding is a dead connection between the orbitofrontal cortex, an area above the eye sockets and the amygdala, the brain's fear center. The orbitofrontal cortex (or OFC) is associated with (among other things) prioritizing behavior in social contexts, and earlier studies found that damage to the OFC reduces inhibitions and makes it harder to detect faux pas.
The Berman team detected a new contribution to social behavior: They found that while in most people the OFC communicated with the amygdala when viewing threatening faces, the OFC in people with Williams did not. This OFC-amygdala connection worked normally, however, when people with Williams viewed nonsocial threats, like pictures of snakes, sharks or car crashes.
↑ comment by NancyLebovitz · 2011-09-10T22:18:07.672Z · LW(p) · GW(p)
In re "natural racism": Has it been determined whether it's always about the same distinctions?
In some places-- for example, Protestant vs. Catholic in Northern Ireland-- the groups look very similar to outsiders. Does "natural racism" kick in as young as American white-black racism?
Replies from: lessdazed↑ comment by lessdazed · 2011-09-10T22:38:03.503Z · LW(p) · GW(p)
Why wouldn't it be about whatever distinctions the kids can perceive cleanly dividing the group? I don't really know. Here are some Discover articles that are relevant and have different implications:
Probably using those one could backtrack and find the actual research and the citations from it, etc. From the first article:
Even autistic children, who can have severe difficulties with social relationships, show signs of racial stereotypes.
Well, it was a good hypothesis. Not really sure what "signs of" means exactly.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-09-10T23:05:42.955Z · LW(p) · GW(p)
Why wouldn't it be about whatever distinctions the kids can perceive cleanly dividing the group? I don't really know. Here are some Discover articles that are relevant and have different implications:
My hypothesis is that which distinctions the kids find important are the result of adults' involuntary reactions to people from the various groups.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-10T23:12:37.455Z · LW(p) · GW(p)
It's possible it is the result of multiple factors.
Lack of exposure leading to less ability to determine facial differences is a good guess. Glomming on to any difference regardless of culture is a good guess. Modeling adults is a good guess.
↑ comment by shokwave · 2011-09-10T13:46:29.266Z · LW(p) · GW(p)
I strongly doubt that no matter what I couldn't ever produce a lobotomy procedure anything like something you would mistake for learning.
Replies from: lessdazed, Armok_GoB↑ comment by lessdazed · 2011-09-10T22:11:26.305Z · LW(p) · GW(p)
After the fact, many changes in the brain would be justified by various possible resultant persons. This is a weakness of CEV; at least, I do not know the solution to the problem. Were you to become the most fundamentalist Christian alive from futuristic brain implants and lobotomies, you would say something like "I am grateful for the surgery because otherwise I never would have known Jesus," and you would be grateful.
Replies from: shokwave↑ comment by shokwave · 2011-09-11T02:59:24.214Z · LW(p) · GW(p)
My layman's understanding of CEV is that the preceding brain should approve of the results of the improvement. So I would have to fervently desire to know Jesus and somehow be incapable of doing so, for CEV to allow me being turned into a fundamentalist.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-11T03:35:29.165Z · LW(p) · GW(p)
The other side of the coin is that if we require such approval, where does that leave most of humanity? The most vicious 10% of humanity? How do we account for the most fundamentalist Christian alive in forming CEV? How do we account for people who think that beating their children for not believing in god is OK, and would even want their community to do the same to them if they didn't believe?
I think the way you phrased it, "allow me being turned," was very good. Humans see a difference between causing and allowing to happen, so it must be reflected somehow in the first stages of CEV.
↑ comment by christina · 2011-09-04T20:02:44.833Z · LW(p) · GW(p)
If the placebo effect actually worked exactly like that, then yes, you would die while the self-deluded person would do better. However, from personal experience, I highly suspect it doesn't (I have never had anything that I was told I'd be likely to die from, but I believe even minor illnesses give you some nonzero chance of dying). Here is how I would reason in the world you describe:
There is some probability I will get better from this illness, and some probability I will die.
The placebo effect isn't magic, it is a real part of the way the mind interacts with the body. It will also decrease my chances of dying.
I don't want to die.
Therefore I will activate the effect.
To activate the effect for maximum efficiency, I must believe that I will certainly recover.
I have activated the placebo effect. I will recover (Probability: 100%). Max placebo effect achieved!
The world I live in is weird.
In the real world, the above mental gymnastics are not necessary. Think about the things that would make you, personally, feel better during your illness. What makes you feel more comfortable, and less unhappy, when you are ill? For me, the answer is generally a tasty herbal tea, being warm (or cooled down if I'm overheated), and sleeping. If I am not feeling too horrible, I might be up to enjoying a good novel. What would make you feel most comfortable may differ. However, since both of us enjoy thinking rationally, I doubt spouting platitudes like "I have 100% chances of recovery! Yay!" is going to make you personally feel better. Get the benefits of pain reduction and possibly better immune response of the placebo effect by making yourself more physically and mentally comfortable. When I do these things, I don't think they help me get better because they have some magical ability in and of themselves. I think they will help me get better because of the positive associations I have for them. Hope that helps you in some way.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T20:56:06.489Z · LW(p) · GW(p)
Well, yeah, obviously it's a simplified model to make the math easier, but the end result is the same. The real formula might, for example, look more like P=0.2+(expectation^2)/3 rather than P=expectation/2. In that case, the end result is both a real probability and an expectation equal to about 0.2155 (source: http://www.wolframalpha.com/input/?i=X%3D0.2%2B%28X^2%29%2F3 )
Also, while I used the placebo effect as a dramatic and well-known example, it crops up in a myriad of other places. I am uncomfortable revealing too much detail, but it has an extremely real and devastating effect on my daily life, which means I'm kind of desperate to resolve this and get pissed when people say the problem doesn't exist without showing mathematically why.
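A quick numerical check of that fixed point (a throwaway sketch; the 0.8 starting estimate is arbitrary):

```python
# Self-consistency for the softer rule: actual chance = 0.2 + estimate^2 / 3.
e = 0.8
for _ in range(200):
    e = 0.2 + e**2 / 3
print(round(e, 4))  # 0.2155 -- a nonzero self-consistent estimate this time
```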
Replies from: GuySrinivasan↑ comment by SarahNibs (GuySrinivasan) · 2011-09-04T22:11:07.425Z · LW(p) · GW(p)
You're asking too general a question. I'll attempt to guess at your real question and answer it, but that's notoriously hard. If you want actual help you may have to ask a more concrete question so we can skip the mistaken assumptions on both sides of the conversation. If it's real and devastating and you're desperate and the general question goes nowhere, I suggest contacting someone personally or trying to find an impersonal but real example instead of the hypothetical, misleading placebo example (the placebo response doesn't track calculated probabilities, and it usually only affects subjective perception).
Is the problem you're having that you want to match your emotional anticipation of success to your calculated probability of success, but you've noticed that on some problems your calculated probability of success goes down as your emotional anticipation of success goes down?
If so, my guess is that you're inaccurately treating several outcomes as necessarily having the same emotional anticipation of success.
Here's an example: I have often seen people (who otherwise play very well) despair of winning a board game when their position becomes bad, and subsequently make moves that turn their 90% losing position into a 99% losing position. Instead of that, I will reframe my game as finding the best move in the poor circumstances I find myself. Though I have low calculated probability of overall success (10%), I can have quite high emotional anticipation of task success (>80%) and can even be right about that anticipation, retaining my 10% chance rather than throwing 9% of it away due to self-induced despair.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T22:46:06.345Z · LW(p) · GW(p)
Sounds like we're finally getting somewhere. Maybe.
I have no way to store calculated probabilities other than as emotional anticipations. Not even the logistical nightmare of writing them down, since they are not introspectively available as numbers and I also have trouble with expressing myself linearly.
I can see how reframing could work for the particular example of game-like tasks; however, I can't find a similar workaround for the problems I'm facing, and even if I could, I don't have the skill to reframe and self-modify with sufficient reliability.
One thing that seems like it's relevant here is that I seem to mainly practice rationality indirectly, by changing the general heuristics, and usually don't have direct access to the data I'm operating on nor the ability to practice rationality in realtime.
... that last paragraph somehow became more of an analogy because I can't explain it well. Whatever, just don't take it too literally.
Replies from: Barry_Cotter↑ comment by Barry_Cotter · 2011-09-05T00:00:23.709Z · LW(p) · GW(p)
I can see how reframing could work for the particular example of game-like tasks; however, I can't find a similar workaround for the problems I'm facing, and even if I could, I don't have the skill to reframe and self-modify with sufficient reliability.
I asked a girl out today shortly after having a conversation with her. She said no and I was crushed. Within five seconds I had reframed as "Woo, I made a move! In daytime in a non-pub environment! Progress on flirting!"
My apologies if the response is flip but I suggest going from "I did the right thing, woo!" to "I made the optimal action given my knowledge, that's kinda awesome, innit?"
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-05T10:50:52.634Z · LW(p) · GW(p)
That's still the same class of problem: "screwed over by circumstances beyond reasonable control". Stretching it to full generality, "I made the optimal decision given my knowledge, intelligence, rationality, willpower, state of mind, and character flaws", only makes the framing WORSE, because you remember how many things you suck at.
↑ comment by Dorikka · 2011-09-04T19:06:53.325Z · LW(p) · GW(p)
I think that humans can mentally self-modify to some extent, especially if it really, really matters. If you really needed to be optimistic, you might be able to modify yourself to be such by participating significantly in certain types of organized religion. (This is a rather extreme example -- a couple of minutes of brainstorming would probably yield ideas with (much?) lower cost and similar results, but it illustrates the possibility.)
Expected utility maximizers are not necessarily served by updating their map to accurately reflect the territory -- there are cases, such as the one above, where one might make an effort to willingly make one's map reflect the territory less accurately. The reason expected utility maximizers usually do try to update their map to accurately reflect the territory is that doing so usually yields greater utility than the alternative strategies -- having an accurate map is (I would guess) not much of a source of terminal utility for most.
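Here is a minimal toy model of that claim, with invented numbers of my own (the function and its parameters are assumptions, not anything from the comment): if the probability of success itself depends on how optimistic the agent manages to be, the belief setting that maximizes expected utility need not be the most accurate one.

def expected_utility(optimism, utility_success=10.0, utility_failure=0.0):
    # Assume, purely for illustration, that optimism in [0, 1] buys a real edge.
    p_success = min(1.0, 0.3 + 0.2 * optimism)
    return p_success * utility_success + (1 - p_success) * utility_failure

print(expected_utility(optimism=0.3))  # a modest, "realistic" level of optimism: 3.6
print(expected_utility(optimism=1.0))  # the self-modified optimist does better: 5.0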
ETA: Missing words. >.<
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T21:01:05.129Z · LW(p) · GW(p)
I might theoretically be able to do this, but it would involve rejecting the entirety of rationality and becoming a solipsist or something, so after recovery the thing my body would have become would not undo the modification, and would instead go and intentionally create a UFAI as an artistic statement or something.
Ok, a slight exaggeration, but far less slight than I'm comfortable with.
Replies from: Dorikka↑ comment by Dorikka · 2011-09-04T23:18:52.876Z · LW(p) · GW(p)
Since you're likely the one who would benefit from it, hopefully you brainstormed for a few minutes before you decided that my "religion" approach was really the most effective one -- I just typed the first idea that popped into my head and seemed to work.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-05T11:05:56.965Z · LW(p) · GW(p)
Huh? Not only was it just an example, but solipsism is incompatible with every religion I know of.
Anyway, I didn't brainstorm it, for roughly the same reason I don't brainstorm specific ways to build a perpetuum mobile. The way my brain is set up, I can't reject rationality in any single situation like that without rejecting the entire concept of rationality, and without that my entire belief structure disintegrates into postmodern relativist solipsism. Similar but more temporary things have happened before, and the consequences were truly catastrophic.
And yeah, this obviously isn't how it's supposed to work, but I've not been able to fix it, or even figure out what would be needed to do so.
↑ comment by [deleted] · 2011-09-04T17:02:00.296Z · LW(p) · GW(p)
The scenario you propose does seem inevitably to cause a rational agent to lose. However, it is not realistic, and I can't think of any situations in real life that are like this -- your fate is not magically entangled with your beliefs. Though real placebo effects are still not fully understood, they don't seem to work this way: they may make you feel better, but they don't actually make you better. Merely feeling better could actually be dangerous if, say, you think your asthma is cured and decide to hike down into the Grand Canyon.
Maybe there are situations I haven't thought of where this is a problem, though. Can you give a detailed example of how this paradox obtrudes on your life? I think you might get more useful feedback that way.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-09-04T18:44:53.386Z · LW(p) · GW(p)
MAYBE asthma is an exception (I doubt it), but generally, in humans, the scenario actually IS realistic, exactly because outcomes are entangled with your beliefs in a great many powerful ways that influence you every day. It's why you can detect lies, why positive thinking and placebos work, etc.
Edit: realized this might come off as more hostile than I intended, but too lazy to come up with something better.
Replies from: Nonecomment by Pavitra · 2011-09-05T17:10:13.503Z · LW(p) · GW(p)
The bitcoin market seems to be experiencing well-funded deliberate market manipulation. Someone who's good at economics should pick up some of that free money.
comment by [deleted] · 2013-11-14T22:59:32.194Z · LW(p) · GW(p)
Hello. I just signed up. I don't understand exactly the architecture of the site. Where can I post an idea, which is also a request for help, for developing a "revolutionary" computer program, for instance?
Replies from: Nisan↑ comment by Nisan · 2013-11-14T23:07:54.112Z · LW(p) · GW(p)
The current Open Thread would be a good place to do that. If you want to wait a day, there will be a new weekly Open Thread and you will see it at the top of the Discussion section.
EDIT: Also, feel free to introduce yourself in the current Welcome Thread.
Replies from: None↑ comment by [deleted] · 2013-11-14T23:48:23.992Z · LW(p) · GW(p)
What is this "new weekly Open Thread", and what will it be called?
Replies from: Nisan↑ comment by Nisan · 2013-11-15T00:45:35.512Z · LW(p) · GW(p)
Ah, well, if you follow the link to the Discussion section, you'll see a list of the most recent posts, with the newest posts first. Currently, about halfway down the page, you can see "Open Thread, November 8 - 14, 2013". This is a link to what I called the current Open Thread. I expect that in the next 24 hours or so, a new post called "Open Thread, November 15 - 21, 2013" or something similar will appear at the top of the list.
Replies from: Nonecomment by MarkusRamikin · 2011-09-28T06:32:51.339Z · LW(p) · GW(p)
I can't find the delete button. Who's authorised to delete stuff?
Should probably delete the account it was made from too.
comment by MinibearRex · 2011-09-23T01:44:29.238Z · LW(p) · GW(p)
I got in a discussion with a philosophy grad student today, who told me that the question of whether thoughts were "just" patterns of neural flashes, or if there was something epiphenomenal going on, was still a serious open question. I'm really hoping that this is just a description of the current state of affairs in the philosophy world, and not the neuroscience world, but she seemed rather insistent on this point. This isn't actually considered an open question in neurobiology, right?
Replies from: antigonus, wedrifid, Vaniver↑ comment by antigonus · 2011-09-29T05:17:39.202Z · LW(p) · GW(p)
This isn't actually considered an open question in neurobiology, right?
It isn't a question in neurobiology at all. If consciousness is epiphenomenal, then by definition you can't perform any experiment to detect its existence. And insofar as neurology is the attempt to discover the material composition of the brain and the causal structure of brain events, and epiphenomenalism holds that consciousness is immaterial and causally silent, well...
↑ comment by wedrifid · 2011-09-29T03:47:51.487Z · LW(p) · GW(p)
I got in a discussion with a philosophy grad student today
I made that mistake once too.
but she seemed rather insistent on this point
Uh huh.
This isn't actually considered an open question in neurobiology, right?
No. It's crazy talk.
Replies from: pedanterrific↑ comment by pedanterrific · 2011-09-29T04:15:46.985Z · LW(p) · GW(p)
a discussion with a philosophy grad student
crazy talk
Tautology.
↑ comment by Vaniver · 2011-09-23T01:55:23.166Z · LW(p) · GW(p)
I think the question here is not "is this an open question" but "are there people who disbelieve this?". I can imagine neurobiologists who cannot rule out epiphenomena about thoughts.
Replies from: MinibearRex↑ comment by MinibearRex · 2011-09-23T04:02:23.151Z · LW(p) · GW(p)
True, I can imagine that as well. I guess my question was really more about prevalence. How common are these people?
Replies from: Vaniver↑ comment by Vaniver · 2011-09-23T14:18:01.441Z · LW(p) · GW(p)
I came across this in an unrelated discussion:
Neuroscientists generally assume that all mental processes have a concrete neurobiological basis.
Searching for something similar in Google Scholar might give you lots of sources to suggest to the grad student that most neuroscientists are reductionists.
Replies from: Jackcomment by Vivid · 2011-09-22T13:16:26.942Z · LW(p) · GW(p)
Suppose I have already read a few books about institutional microeconomics and evolutionary game theory and I wish to gain a solid grounding in mechanism design and then algorithmic mechanism design. What papers or books on these subjects would you recommend?
comment by Will_Newsome · 2011-09-13T04:52:44.548Z · LW(p) · GW(p)
Is it just me or is Aleister Crowley a pretty cool sanity memeplex?
Replies from: Craig_Heldreth↑ comment by Craig_Heldreth · 2011-09-13T14:55:13.925Z · LW(p) · GW(p)
Yvain posted on this at length here.
His link there is broken; he is referencing the introduction to Book 4.
It ain't just you. Go to your local Barnes and Noble and they will inevitably have a dozen Crowley titles. I still have his books but I don't meet up with his fans any more.
comment by Solvent · 2011-09-05T05:13:00.377Z · LW(p) · GW(p)
What's this SingInst House which I have heard about, which people go to, and it is exciting?
What's the Visiting Fellows program?
Is there some public list of people who've been on it, for verification purposes?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2011-09-05T12:41:32.157Z · LW(p) · GW(p)
http://singinst.org/aboutus/visitingfellows
http://lesswrong.com/lw/1hn/call_for_new_siai_visiting_fellows_on_a_rolling/
http://lesswrong.com/lw/29c/be_a_visiting_fellow_at_the_singularity_institute/
http://lesswrong.com/lw/3fk/where_in_the_world_is_the_siai_house/
While the SIAI page says that "We are currently accepting applications for new Visiting Fellows", I'm under the impression that the program's no longer running.
comment by wedrifid · 2011-09-27T16:37:00.701Z · LW(p) · GW(p)
Ok, my 'last 30 days' karma just dropped 100 over an 8 hour period. Now I'm trying to work out exactly why I need to be reminded that I must have written some awesome comments a month ago. :P
Replies from: wedrifid↑ comment by wedrifid · 2011-09-29T03:45:38.100Z · LW(p) · GW(p)
Ok, now it is a 200 drop in the 'last 30 days' figure while my absolute total has increased by about 100. WTF was I doing back then? I didn't write a top level post. Must have been some sort of political drama that I lucked out and got on the popular side of.
comment by lessdazed · 2011-09-03T23:50:27.168Z · LW(p) · GW(p)
Edit: Original version moved to karma sink to hide it away and leave it available for reference. New version:
Is what we refer to as "status" always best thought of as relative? Is a person's status like shares in a corporation or money in an economy, where the production of more diminishes what they have and does not create wealth? Is it an ability to compel others and resist compulsion? Or is it more like widgets, where if I happen to lose out from you getting more widgets, it is only because of secondary effects like your ability to out-compete me with your widgets?
I am not trying to find a really true definition of "status". To some, it seems right to answer the question "Is status all relative or is status not all relative?" with "It depends on which reasonable meaning of status you mean." Everyone (?) agrees that a valid way of discussing status is to talk about something like what portion of the total (subcategory of) status a person has.
Not everyone agrees that there is a reasonable meaning by which one might speak of non-relative status, other than the one that is shorthand for ignoring small or infinitesimal losses by others. In the same way we may say "The government printed one million dollars and gave it to an agency; no one else lost or gained anything." It's fine to say that, but only because: a) the inflation caused by printing a million dollars is minuscule, and b) we can count on the listener to infer that increasing money does not increase wealth in that way.
So if one's answer is "It depends," then one thinks that speaking of status in terms of an absolute that can be increased or decreased is not just linguistically valid, but literally, logically, true. Not everyone agrees with that, and the poll is to get a general feel for how many here think each way.
So, as a hypothetical: a person in a room magically becomes awesome - say a guy has knowledge of kung fu downloaded into his brain, and he tells everyone, and they believe him. Does it make any sense at all to say that the status of the others has not changed, other than in a way susceptible to a money/inflation/wealth (simple truth sheep/rock) metaphor?
Poll:
Replies from: lessdazed, Normal_Anomaly, lessdazed, lessdazed, Jack, lessdazed↑ comment by lessdazed · 2011-09-05T02:29:23.011Z · LW(p) · GW(p)
Could someone please explain the response to this comment? What I'm most curious about are the responses to the attached poll replies. Multiple people have downvoted each entry in the poll without comment. This ruins the poll for the participants, as one can no longer tell how many people have voted for each option. Do not do this on polls until either LW shows more than net votes, or there is a better way to poll.
I also don't understand downvoting this comment without criticizing it and helping me fix its problems. I have discussed this topic with several LW participants and have gotten each of the two types of responses multiple times, and I think a previously undiscussed issue that gets divergent intuitions from people who theretofore believed themselves to have very similar philosophies is potentially interesting. If I am not criticized, I do not know how to improve. It is currently sitting at -2, but it has been upvoted several times as well; five or more people have downvoted without comment.
I'm not shy about posting things in Discussion if I think they merit it, but I didn't think this topic did, so I posted it in the open thread. If this issue is not appropriate for an open thread, where would it be appropriate?
Replies from: ArisKatsaris, satt↑ comment by ArisKatsaris · 2011-09-06T22:06:44.041Z · LW(p) · GW(p)
I've not downvoted you, nor participated in the poll, but...
...your question about how relative 'status' is reminds me of debates about whether a tree falling in the forest makes a sound. Depends how one defines the word. You don't seem to have an option in your poll for "Depends how one defines 'status'".
...also, you first pose a detailed, specific scenario with a concrete question about what happens with the fires on the first and second islands -- but then the polls don't offer that specific, concrete question; they offer the vague "status is relative/not all relative" questions instead. Which makes it seem as if you want to jumble different questions together, or to make people appear to support one thing by answering another. Or something.
In short it all seems a bit muddled. Mind you, as I said, I wasn't among the people downvoting this, so I don't know their own reasoning behind their votes.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-06T22:37:10.469Z · LW(p) · GW(p)
In short it all seems a bit muddled.
Thank you for your feedback!
I am not used to making up intuition pumps. I will try to become better at writing them.
Depends how one defines the word.
This is a legitimate response, and I certainly didn't intend to debate or try to discover the true meaning of a word. However, it amounts to the claim that for some reasonable definitions of "status", "status is all relative" is true, and for others, "status is not all relative" is true. I consider that equivalent to "status is not all relative" - something I will make clear. By "status is all relative" I mean something like: "for no reasonable (to me, though this is something I expect others can guess at with good accuracy) definition of status is status anything but relative".
Part of the difficulty of expressing this is why I resorted to examples, and I do take to heart that difficulty expressing an idea is often a sign it isn't coherent.
I'll edit the post to try again.
↑ comment by satt · 2011-09-05T04:29:28.422Z · LW(p) · GW(p)
Multiple people have downvoted each entry in the poll without comment. This ruins the poll for the participants, as one can no longer tell how many people have voted for each option.
One user upvoted "Status is not all relative", two users upvoted "Status is all relative", those three users downvoted the karma sink, and three other users downvoted all three comments.
Replies from: lessdazed↑ comment by Normal_Anomaly · 2011-09-04T15:22:35.941Z · LW(p) · GW(p)
On the first island, everyone likes everyone else's joke equally. All still have equal status from each person's perspective. Is there more status on that island than before?
On the second island, everyone dislikes everyone else's joke equally. All still have equal status from each person's perspective. Is there less status on that island than before?
My intuition is that status is meaningful relative to other people's, so this is similar to the inflation of a currency. In all the ways that status can be used to get people to do things, there isn't any more or less of it.
What happens when one person on the first island asks for help building a fire?
Whether or not the others help em depends on the temperature of the island. Like I said before, my intuition is that status is relative. If they do help em, ey gains some amount of status relative to them. If they don't, ey loses a similar amount of status.
EDIT: The following is based on a misinterpretation of lessdazed.
Assuming you mean third island: The other people help em, and ey gains a bit of status in the process. Ey now has slightly more status than the others. The reverse happens on the fourth island.
Replies from: lessdazed↑ comment by Jack · 2011-09-04T15:35:52.326Z · LW(p) · GW(p)
Status isn't the only variable in these scenarios. One can feel more or less bonded to someone independent of status, for example.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-04T15:45:18.106Z · LW(p) · GW(p)
Or one person could have a firearm, or a conch shell.
Assume variables not mentioned are constant.
Replies from: Jack↑ comment by Jack · 2011-09-04T15:59:53.536Z · LW(p) · GW(p)
But they wouldn't be constant given what you describe, which makes me skeptical of the intuitions provoked. The fire is probably more likely to get built cooperatively on the island where the jokes got laughs -- but that has to do with bonding and mood, not status.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-04T16:03:14.121Z · LW(p) · GW(p)
Good point. What example of status changing can I use to best clarify I'm talking about just one variable?
I will try mentioning varying ways of gaining status, each with side effects, and specify that only one variable is considered. Hopefully someone can think of a single good scenario.
Replies from: Jack↑ comment by Jack · 2011-09-06T14:57:18.200Z · LW(p) · GW(p)
I can't really think of a scenario where total status could be raised or lowered -- because I think status is (obviously) always relative. Independent of coming up with intuition pumps, I'd like to know if there are people who disagree with this -- it is a shame your poll was ruined.
Replies from: jsalvatier, Raemon, Pavitra, lessdazed↑ comment by jsalvatier · 2011-09-07T17:54:11.061Z · LW(p) · GW(p)
Let's say that humans have special circuits for figuring out whether a person is more like the band leader or more like the band outcast. Human minds use these circuits to change their behavior towards that person. It seems plausible that those circuits can be 'gamed': say people get into the habit of speaking badly about people who don't exist; then perhaps everyone actually existing will seem high status.
Replies from: Jack↑ comment by Jack · 2011-09-07T18:11:45.374Z · LW(p) · GW(p)
Clever -- and it seems like this could plausibly make everyone feel better about themselves (though, of course, they'll still feel bad when they compare themselves to even higher status people). Note though that this is like making someone popular by giving them imaginary friends -- it's not how the word is ordinarily used. But if this is what people have in mind by 'raising net status', fine, I don't see anything implausible about it.
↑ comment by Raemon · 2011-09-07T16:49:05.464Z · LW(p) · GW(p)
I define status to be "your ability to be treated favorably, all else being equal." I regard bonding as a form of status - members of the in-group have more status than the out-group. A group of three strangers on an island has collectively less status than the same group after they've bonded. Once they've bonded, they are all willing to do each other favors and treat each other nicely in ways that they weren't willing to before. In my mind, this is the entire point of status.
You can define status as "how much more ability to be treated favorably you have compared to other people," but I don't think that's a useful definition. The word "status" has gained popularity particularly because it flexibly describes a wide array of social interactions.
Status SYMBOLS are often zero-sum (buying a big TV makes people want to come over to your house more often to watch football games, and this only works if your TV is bigger than other people's). But those are only one form of status-gain.
(I spoke to lessdazed in real life about this. Our conversation was the impetus for this thread.)
Replies from: jsalvatier, lessdazed↑ comment by jsalvatier · 2011-09-07T17:20:37.071Z · LW(p) · GW(p)
Beware 'defining' things too early.
Replies from: Raemon↑ comment by Raemon · 2011-09-07T17:28:38.305Z · LW(p) · GW(p)
I think that after multiple years of discussing the word status on this site, we BETTER start actually defining it. If there are disagreements as to the definition we need to get them out into the open, so that at the very least we can start mentally translating "Raemon-Status" and "Jack-Status".
Replies from: Jack, jsalvatier↑ comment by Jack · 2011-09-07T18:04:05.333Z · LW(p) · GW(p)
Luckily, we speak a natural language (English) here which lets us use words without having to define them each time we use them and instead refer to our collective understanding as English speakers to tell us what a word means. Conveniently, this collective understanding is routinely organized into books and databases consisting of words and their meanings. Now sometimes these definitions are ambiguous or different from each other and sometimes we use technical words so obscure they don't appear in such databases. On occasion we may even decide to use a word in an unusual context. In these circumstances we better start defining terms.
Thankfully, the discussion of social status is not one of those cases.
Wikipedia: "In sociology or anthropology, social status is the honor or prestige attached to one's position in society (one's social position). It may also refer to a rank or position that one holds in a group, for example son or daughter, playmate, pupil, etc."
Oxford Dictionaries: "the relative social, professional, or other standing of someone or something: an improvement in the status of women"; "high rank or social standing: those who enjoy wealth and status".
Britannica, which Wikipedia plagiarized or vice versa: "social status, also called status, the relative rank that an individual holds, with attendant rights, duties, and lifestyle, in a social hierarchy based upon honour or prestige."
Dictionary.com: "the position of an individual in relation to another or others, especially in regard to social or professional standing."
I could go through anthropology and sociology papers and talk about how they use the word. I could quote Max Weber, too. I think that would be overkill, though. I'm open to an argument that this is "not what we really mean" when we use the word 'status' but I tend to assume people here are just using the regular English word... so you can see how I would have a hard time seeing how status could be non-relative.
Replies from: Raemon↑ comment by Raemon · 2011-09-07T18:24:02.326Z · LW(p) · GW(p)
I don't consider our definitions exclusive - I consider mine to be an unpacking of the general one that explains what it actually means. "What is status?" "Your social position" seems like answering the teacher's password - it doesn't tell me what to actually predict yet. "What is social position?" "Your ability to be treated favorably by people in your tribe" gives me actual information to work with.
Social position occurs within governments (where status determines your ability to influence large sections of a country) but also within small social circles (i.e. Queen Bee or Alpha Male status determines your ability to shape the course of conversations, influence the opinions of your group, decide what clothes are fashionable, etc.). Telling good jokes is a legitimate way to gain status in social circles large and small.
Now, if the ONLY other thing that can shape the course of conversations, group opinions, or clothing fashionability was other people, then yes, that type of status would be zero sum. You wouldn't be able to gain control of it without someone else losing control of it. But that is not the only factor at work. Clothing fashionability is impacted by the weather. One alpha can't necessarily influence a large group to wear skimpy clothes during a snowstorm, but a collection of people who each have influence might be able to.
In the desert island case, building fires and shelters are hard work. In the scenario where everyone tells good jokes, each person doesn't have any greater ability to influence the others against each OTHER, but they do all have the ability to influence the others against lethargy, hunger, or other factors.
Replies from: Jack, lessdazed↑ comment by Jack · 2011-09-07T19:13:22.856Z · LW(p) · GW(p)
If you want to unpack "social position", "relative ability to be treated favorably by people in your tribe" is a much more plausible candidate.
You wrote:
You can define status as "how much more ability to be treated favorably you have compared to other people," but I don't think that's a useful definition. The word "status" has gained popularity particularly because it flexibly describes a wide array of social interactions.
You don't think it is a useful definition? (!) I could see an argument being made that it is less useful than 'absolute ability to be treated favorably' but how is it not useful at all? Even if it is more useful (and I'm not at all convinced a concept becomes more useful because it is broader), is that a reason to use it despite its straightforwardly contradicting the meaning of the usual English word (see all the instances of 'rank' and 'relative' in the above definitions)? I guess this has become a definition debate, which is obviously silly, but as far as I can tell your definition just doesn't match the way the word is used at all.
Replies from: Raemon↑ comment by Raemon · 2011-09-07T19:26:33.087Z · LW(p) · GW(p)
If you want to unpack "social position", "relative ability to be treated favorably by people in your tribe" is a much more plausible candidate.
Given two perfect strangers in a post apocalyptic scenario, and two perfect strangers who soon realize they are both high ranking members of their respective tribes, I think the latter group will show more respect for each other, even though they have no one else to compare themselves to. (I will note that this is an empirical prediction which might be false. Anyone know if data exists on this?)
You don't think it is a useful definition? (!) I could see an argument being made that it is less useful than 'absolute ability to be treated favorably' but how is it not useful at all?
I mostly agree with this, and should have worded it that way.
I think we need a word for each of these concepts. I'm not picky about which word gets used to mean what. But I still don't think the traditional definition automatically implies that status is relative ("position" can be on an absolute or a relative scale), and the word status gets used on Less Wrong in enough contexts that for our purposes it's probably more useful as the broader term.
Edit: Using status as the broad term also saves us the trouble of coming up with a new word, since we can just say "relative status" whenever we mean that, and if we're in a discussion that's obviously about relative status it can probably be abbreviated anyway.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-08T02:07:37.098Z · LW(p) · GW(p)
Given two perfect strangers in a post apocalyptic scenario, and two perfect strangers who soon realize they are both high ranking members of their respective tribes, I think the latter group will show more respect for each other, even though they have no one else to compare themselves to.
The strangers have more social power to mistreat the other without repercussions than the chiefs have.
I think we need a word for each of these concepts.
Disentangle ability to be treated favorably from relative social position. If every one of a group of nations has nuclear weapons sufficient for a MAD policy, we can expect them not to mistreat each other too harshly. If this is a post-apocalyptic scenario and each such nation is populated by robots and one human, then in a meeting of the humans none would necessarily be high status, but severe social harm would not be inflictable.
In a group of thousands of otherwise equal sadists with locked-in syndrome in which each could activate a shock collar on a random other sadist with their eyes, one wouldn't say they are all low status.
That is really a grim hypothetical and I hope to think of a better one.
Replies from: Raemon↑ comment by Raemon · 2011-09-08T16:06:29.931Z · LW(p) · GW(p)
Also, I didn't say that all status was absolute. Relative status definitely exists, and contributes more directly to the ability to mistreat others. I was simply disagreeing with the idea that status is relative all the time, either by definition or by example.
↑ comment by lessdazed · 2011-09-08T02:03:44.186Z · LW(p) · GW(p)
"Your ability to ability to be treated favorably by people in your tribe" gives me actual information to work with.
This ignores the unpleasant aspects: your ability to mistreat other people in your tribe, and your ability to not be mistreated. If one person gains the ability to be favorably treated, others are losing the ability to mistreat.
Replies from: Raemon↑ comment by jsalvatier · 2011-09-07T17:58:18.278Z · LW(p) · GW(p)
We've been talking about intelligence for a long time too, but I haven't seen a good definition for it. Sometimes it's better not to try to define things and just say "well, here are some examples of things I mean when I say status".
Replies from: Raemon↑ comment by Raemon · 2011-09-07T18:01:41.613Z · LW(p) · GW(p)
I can actually see some ways in which my definition might be problematic (I'm not sure if it adequately explains, say, friends who deliberately choose to go to a more expensive restaurant of equivalent quality).
But regardless, I think the question "is status relative?" is a fairly important one, and if we are having disagreements about that, we need to figure out why.
Replies from: Raemon, jsalvatier↑ comment by jsalvatier · 2011-09-07T18:37:39.874Z · LW(p) · GW(p)
I agree that understanding status is useful. I'm not sure arguing about whether status is relative or not is very useful. I get the sense that most people accept that status is often relative or has a large relative component, but intuitions about how human brains work will obviously differ a great deal. I think discussion of empirical work on status is likely to be much more useful.
Replies from: Raemon↑ comment by Raemon · 2011-09-07T18:45:11.126Z · LW(p) · GW(p)
I think assumptions about status being relative lead directly to harmful interactions - beliefs that you must put others down in order to raise yourself up, or that you must choose between high status and other forms of morality.
I agree that most of the work is empirical, but starting with "status is obviously relative, by definition" is a more dangerous assumption than "status is not inherently relative, by definition," even if it did turn out that the most effective ways to gain status were relative. (It seems to me that, between the two statements, "status is obviously relative" is more guilty of arguing via definitions than "status isn't necessarily relative.")
Replies from: lessdazed, wedrifid, Raemon↑ comment by lessdazed · 2011-09-07T20:49:28.226Z · LW(p) · GW(p)
I think assumptions about status being relative lead directly to harmful interactions
Whenever I see a belief criticized for a reason other than it being wrong, I can't help but think that the reason was chosen as a fallback and the arguer would have preferred to criticize it as wrong, had he or she been able to.
Especially among rationalists/"rationalists"/aspiring rationalists/"aspiring rationalists".
↑ comment by wedrifid · 2011-09-07T19:13:45.995Z · LW(p) · GW(p)
I think assumptions about status being relative lead directly to harmful interactions - beliefs that you must put others down in order to raise yourself up, or that you must choose between high status and other forms of morality.
This is how you cooperate, not (only) through denial. Defectors lose. Even in relative terms.
even if it did turn out that the most effective ways to gain status were relative.
That isn't what "status is relative" is about at all.
↑ comment by Raemon · 2011-09-07T18:56:37.831Z · LW(p) · GW(p)
Interesting moment of introspection - I've realized that I'm adopting an adversarial position and attempting to "win" this debate, and using words like "guilty" to describe the arguments of my adversaries.
a) isn't it ironic. Don'cha think?
b) Would you consider me a higher or lower status member of this forum if I were to successfully argue my point in neutral tone or an adversarial one?
↑ comment by lessdazed · 2011-09-07T20:59:20.790Z · LW(p) · GW(p)
You can define status as "how much more ability to be treated favorably you have compared to other people," but I don't think that's a useful definition.
I agree. I think something more like: status is ability to influence or resist influence for social, non-environmental reasons, not covered by other relationship categories such as lust or love - if you have a super-laser-of-doom-and-evil on the moon pointed at the Earth, you may have influence, but not status.
↑ comment by Pavitra · 2011-09-07T14:44:20.244Z · LW(p) · GW(p)
Two grim-trigger strategies are playing the iterated prisoner's dilemma over a noisy telephone line. One mishears the other as saying "defect", and they switch from both always cooperating to both always defecting.
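A rough simulation of that scenario (my own construction; the parameters and the noise model are assumptions, not anything specified in the comment):

import random

def simulate(rounds=100, noise=0.05, seed=0):
    random.seed(seed)
    triggered = [False, False]   # whether each grim-trigger player has "gone grim"
    history = []
    for _ in range(rounds):
        moves = ['D' if t else 'C' for t in triggered]
        history.append(tuple(moves))
        # Each player hears the other's move over the noisy line; sometimes it flips.
        heard = [m if random.random() > noise else ('D' if m == 'C' else 'C')
                 for m in moves]
        triggered = [triggered[0] or heard[1] == 'D',
                     triggered[1] or heard[0] == 'D']
    return history

history = simulate()
first_defection = next((i for i, pair in enumerate(history) if 'D' in pair), None)
print(first_defection)  # the round where cooperation breaks down, permanently

With any nonzero noise, a long enough game in this toy model almost surely ends in permanent mutual defection.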
Replies from: Jack, lessdazed↑ comment by lessdazed · 2011-09-07T20:54:31.013Z · LW(p) · GW(p)
So you're saying that two people, in a closed system, were able to influence each other to cooperate but are no longer able to, that "status" is the ability to influence, and that the ability to influence has been lowered?
If so, I think in this case "ability to influence" was for environmental rather than social reasons. The words on the line describe how the environment will react to an individual's cooperation/defection.
I see status as more ability to influence or resist influence for social, non-environmental reasons - if you have a super-laser-of-doom-and-evil on the moon pointed at the Earth, you may have influence, but not status.
Replies from: Pavitra↑ comment by Pavitra · 2011-09-09T19:03:45.568Z · LW(p) · GW(p)
Maybe I have something different in mind when I say "status". To the extent that an agent's treatment of an opponent is based on some sort of persistent state, rather than on the opponent's local behavior, I would say that that state is a form of status information. (Thus, one can have a status with a grim trigger, but not with a tit-for-tat.)
In humans (and I conjecture in approximately-winful strategies generally?), status tends to be one-dimensional, with one end eliciting "do nice things for this person" behavior, and the other end eliciting "do mean things to this person" behavior.
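A quick sketch of the distinction being drawn (mine, purely illustrative): grim trigger carries persistent state about the opponent, while tit-for-tat's response is a function of the opponent's last move alone.

def tit_for_tat(opponent_last_move):
    # No persistent state: the opponent holds no "status" with this strategy.
    return opponent_last_move

class GrimTrigger:
    def __init__(self):
        self.opponent_status = 'good'   # persistent, one-bit "status" information
    def move(self, opponent_last_move):
        if opponent_last_move == 'D':
            self.opponent_status = 'bad'
        return 'D' if self.opponent_status == 'bad' else 'C'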
↑ comment by lessdazed · 2011-09-04T15:40:47.230Z · LW(p) · GW(p)
Karma sink
Twelve people are stranded on four almost identical desert islands, three apiece (the fourth island is warmer than the other three, which are identical). They have no hope of rescue. Each person has equal status from each person's perspective. On the first two islands, each person tells a joke to lighten the mood, or perhaps they tell each other about their backgrounds; they interact in a way that alters status alone, but not feelings of romance, kinship, etc.
On the first island, everyone likes everyone else's joke equally, or is impressed with everyone's background equally. All still have equal status from each person's perspective. Is there more status on that island than before?
On the second island, everyone dislikes everyone else's joke equally, or despises everyone's background equally. All still have equal status from each person's perspective. Is there less status on that island than before?
On the third island, one person asks for help starting a fire. The other two feel compelled by his status to comply, they do not feel they have the status to refuse.
On the fourth island, one person asks for help starting a fire, but the island is very, very, very slightly warmer than the other, almost identical islands. The other two don't feel compelled by his status to comply; they feel they have the status to refuse. The temperature plays a role: they don't care quite as much about making a fire as the other nine people do, and this small factor was decisive, as was every other factor militating against helping build a fire, such as social ones. All were necessary conditions for not helping.
What happens if subsequently one person on the first island asks for help building a fire?
What happens if subsequently one person on the second island asks for help building a fire? How does this differ from what happened on the first island when someone asked for help building a fire?