Have you changed your mind lately? On what?

post by Emile · 2012-06-04T19:54:46.006Z · LW · GW · Legacy · 106 comments

Admitting to being wrong isn't easy, but it's something we want to encourage.

So ... were you convinced by someone's arguments lately? Did you realize a heated disagreement was actually a misunderstanding? Here's the place to talk about it!

106 comments

Comments sorted by top scores.

comment by lukeprog · 2012-06-05T05:51:27.145Z · LW(p) · GW(p)
  • Six months ago I thought CFAR was probably a bad idea. Now I think it's worth the investment, and have been positively surprised in three major ways in the last two months about the positive effects of already-done CFAR work.
  • Three months ago I thought Amy + some professionals could organize the Summit without much work from the rest of SingInst; I no longer think that's true.
  • Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it's easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.
  • I've downgraded my estimation of my own rationality several times in the past three months.
  • In the past two weeks I switched from thinking CFAR should be a "standard-template" non-profit to thinking it should be a non-profit that acts almost entirely like a for-profit company (it's very complicated, but basically: it will scale better and faster that way).
  • I've updated several times in favor of thinking I can grow into a pretty good long-term executive.
Replies from: PECOS-9, wedrifid, Kawoomba
comment by PECOS-9 · 2012-06-05T06:09:34.114Z · LW(p) · GW(p)

Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it's easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.

Can you elaborate on this? Specifically, what did you learn about simulation shutdown risk, what do you mean by FAI team, and what does one have to do with the other?

Replies from: lukeprog
comment by lukeprog · 2013-07-11T21:01:37.667Z · LW(p) · GW(p)

Kawoomba bumped your comment to my attention, but unfortunately I don't now recall the details of the updates you're asking for more info about. (I don't recall the "three major ways" I was positively surprised by CFAR, either.)

comment by wedrifid · 2012-06-05T06:21:19.677Z · LW(p) · GW(p)

Six months ago I thought CFAR was probably a bad idea. Now I think it's worth the investment, and have been positively surprised in three major ways in the last two months about the positive effects of already-done CFAR work.

I just updated in favor of CFAR being a misleading acronym. Took me a while to work out that this means Center For Applied Rationality, not this. That may become less significant once google actually knows about it.

comment by Kawoomba · 2013-07-11T20:15:37.647Z · LW(p) · GW(p)

Bumping this comment.

comment by David Althaus (wallowinmaya) · 2012-06-04T23:36:16.839Z · LW(p) · GW(p)

Meta-Note: This is great! We should make this into a monthly or bi-monthly recurring thread like "Rationality Quotes" or "What are you working on?".

Back to the topic: I overestimated the efficacy of my anti-depressant and now believe that it was mainly placebo.

Replies from: magfrump
comment by magfrump · 2012-06-05T00:41:27.455Z · LW(p) · GW(p)

I overestimated the efficacy of my anti-depressant and now believe that it was mainly placebo.

What evidence was it that led you to this conclusion?

DISCLAIMER: I'm curious about this because I had the opposite experience; right now I'm just curious but this is a reminder to myself not to get argumentative about the fact that people have different subjective experiences and chemical reactions to things.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2012-06-05T07:59:46.286Z · LW(p) · GW(p)

My average happiness and productivity declined over the last two months from around 8 to 5 on a scale of 10 (I measure those things every day through introspection and write them down). Previously the average was almost never below 6-7 in any given week. I also didn't change my diet, exercise regimen, etc., and there were no difficult experiences I had to handle. Heck, I even doubled the dose of my anti-depressant - with no effect.

I still think that this anti-depressant has some effect; it just isn't nearly as great as I imagined, and a changing worldview or belief-system can have a far higher impact.

Replies from: magfrump, athingtoconsider
comment by magfrump · 2012-06-06T03:43:25.470Z · LW(p) · GW(p)

That's a pretty intense effect!

In fact that makes it look like the medication was having a negative effect. That's a little unsettling, although I guess antidepressants are highly variable in how they work with different people?

That seems like a really huge change though, especially considering you say you didn't change your diet/exercise/stress level/etc. Rather than saying "mainly placebo" I would have said "had a significant negative effect on my mood" with that data!

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2012-06-06T11:09:40.031Z · LW(p) · GW(p)

Sorry, I probably haven't expressed myself clearly. I still believe (~60%) that bupropion has some small positive effects for me and that my mood would be even worse if I didn't take it. Here's a more detailed story:

I've taken this anti-depressant for quite some time, around 8-9 months. At first I noticed a definite increase in mood and productivity (or so it seemed). I also tried to eliminate confounding effects, so I stopped taking it for a week or so - and my happiness declined significantly! However, I was still concerned about placebo effects, so I ran some "blind" tests (i.e. I took capsules either containing bupropion or not, without knowing which contained which). The next day I guessed whether I had taken a mere placebo, and I was right about 85% of the time (n was only 7, but still...) because my mood was worse when I had only taken a placebo. I also took two pills on some days, and it felt like I had much more energy and was way happier. So bupropion seemed like a true panacea to me.
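For a rough sense of how much evidence a blind test like that provides, here is a minimal sketch, assuming 6 correct guesses out of 7 (roughly the "85% of n = 7" described above) and a 50% chance of guessing right if the capsules were truly indistinguishable; the exact counts are hypothetical:

```python
from math import comb

# Hypothetical numbers: 6 correct guesses out of 7 blind trials,
# with a 50% chance of guessing correctly if bupropion were pure placebo.
n, k, p = 7, 6, 0.5

# One-sided binomial tail: P(at least k correct | guessing at chance)
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(p_value)  # 8/128 = 0.0625 -- suggestive, but weak evidence with n = 7
```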

Unfortunately, my mood started to decline around two months ago, so I went to my psychiatrist and he doubled my dose. That was around a month ago, but it didn't really help.

I hope this clears things up! ;)

Replies from: tadrinth, magfrump
comment by tadrinth · 2012-06-06T20:30:56.050Z · LW(p) · GW(p)

Are you sure you're not just building up a resistance/dependence? I tried anti-depressants, but they eventually stopped really doing anything, somewhere between 6 months and a year after starting them, I believe. I think resistance is pretty common.

Also, most anti-depressants take a while to kick in, so I suspect any day-to-day dosage changes are going to be more about withdrawal symptoms than anything else.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2012-06-07T10:15:52.797Z · LW(p) · GW(p)

Yeah, increasing tolerance is probably one of the main factors. But I thought I could counter that with the doubling of the dosage.

Yup, I guess it's likely that the negative day-to-day effects were mostly withdrawal symptoms.

What do you recommend? Just not taking antidepressants for a while?

Replies from: MixedNuts, tadrinth
comment by MixedNuts · 2012-06-07T15:44:53.342Z · LW(p) · GW(p)

My first thought was poop-out, but that doesn't happen with bupropion, only with SSRIs, right? You shouldn't stop and restart the same antidepressant (and, to a lesser extent, antidepressants in the same family, or that target the same neurotransmitter); this builds resistance.

With SSRIs, poop-out tends to be resistance at any dose, not tolerance. I've had poop-out from placebo (WTF?) and larger doses don't work there either.

A week is a bit short for a test, but bupropion is unusually fast so that's a decent test. Why are you trying to correct for the placebo effect at all? Improvement from placebo is real and desirable, and there's probably a feedback loop between placebo- and drug-related improvement that helps even further.

How many antidepressants have you tried? You might just be on the wrong one. How large is your dose? If you're at 300mg/day, going up to 450 is possible, though I'm not very optimistic. Does your depression have a seasonal pattern at all, and if so, could the weather be responsible for the mood drop? You seem oddly trusting in self-experiment and advice from strangers; why are you seeking recommendations besides your psychiatrist's? Cost?

Disclaimer: I'm a nutjob who reads Crazy Meds and Neuroskeptic, not a psychiatrist.

Replies from: wallowinmaya, MixedNuts
comment by David Althaus (wallowinmaya) · 2012-06-12T14:07:28.989Z · LW(p) · GW(p)

Yeah, poop-out could be the culprit. However, this would also suggest that the positive effects were mainly placebo because, as you mention in your second comment, the most reasonable account of poop-out is that the placebo effect wears off.

I was correcting for placebo because I wanted to try lots of different antidepressants and other drugs and see which "really" work. But I guess you're right; the placebo effect is pretty amazing, and one shouldn't risk eliminating it by over-zealous self-experimentation.

I haven't tried any other antidepressants, since I fear the side-effects of SSRIs/SNRIs, and MAOIs seem like too much of a hassle (although selegiline sounds pretty good). But I'm rationalizing. I definitely should try some new stuff.

In the past I was fairly happy during the summer, or at least happier than in the winter, so the weather probably has nothing to do with my current low. I also didn't change the brand of my antidepressant.

You seem oddly trusting in self-experiment and advice from strangers; why are you seeking recommendations besides your psychiatrist's? Cost?

Nah, it has nothing to do with money. My psychiatrist costs me basically nothing.

So, why do I prefer self-experimentation and seek out advice from strangers? Well, I don't think very highly of my psychiatrist or doctors in general to begin with (not especially eager for a long discussion about the reasons).

Furthermore, I learned a huge deal about my body through self-experimentation that I almost certainly couldn't have learned otherwise.

Replies from: MixedNuts
comment by MixedNuts · 2012-06-13T13:52:34.196Z · LW(p) · GW(p)

Do you know more than I do about models of poop-out? That thing is annoying and I want to know about it.

You can get some interactions from trying too many antidepressants. For example, having been on SSRIs may make you build a tolerance to other SSRIs. This appears to be a big reason why psychiatrists like to stop early, along with side effects and convenience. Still, there is some value in exploration.

For comparing antidepressants you probably want open-label tests, and comparing meds directly against each other rather than with placebo.

SSRIs are pretty damn tame; which side effects are you afraid of? Sexual side effects can be a major pain but they'll go away when you go off the meds. Vanilla-ice-cream side effects (nausea, headache, somnolence, insomnia, diarrhea, constipation, dry mouth, sweating, weight gain) are common to the majority of meds, and they all go away except the weight gain. So you should at least try them unless you have some very unusual reason. If you're worried about weight gain, try fluoxetine (which is the usual first resort, and combines well with bupropion) or sertraline.

MAOIs are freaky shit. I hear they're very effective, but they have so many side effects and contraindications that they're often not worth it.

TCAs have a reputation for being effective. Apparently the reason they fell out of style is that they don't tolerate small lapses (unlike e.g. fluoxetine, with its absurdly long half-life) and that a couple of weeks' worth is a fatal dose, which is very bad for suicidal patients.

"Throw lots of things at depression" is an area of expertise of psychiatrists, so consider trusting them more.

Replies from: None
comment by [deleted] · 2012-06-25T13:54:13.908Z · LW(p) · GW(p)

There has been talk that side effects actually enhance the placebo effect.

From Efficacy and Effectiveness of Antidepressants: Current Status of Research:

[…] argue that side effects enhance the placebo effects of antidepressants by confirming to patients that they are taking the active medication and thereby increasing their expectation of improvement.

comment by MixedNuts · 2012-06-09T14:52:12.151Z · LW(p) · GW(p)

More ideas:

There's some evidence that poop-out can affect any antidepressant, perhaps any med. The dominant theory is "When a med doesn't work, it can work at first due to placebo effect, but then be conditioned out of working".

Have you been taking the exact same form of bupropion? Different brands of the same med can work differently for some people. Also, if you're taking sustained/extended-release pills, this will have affected your experiments.

comment by tadrinth · 2012-06-07T17:23:32.512Z · LW(p) · GW(p)

I was on an SSRI so I'm not sure any of my experience is actually relevant to bupropion.

If your depression has an obvious cause, fix that instead. I was depressed because of grad school, and I got better when I graduated.

comment by magfrump · 2012-06-06T17:11:07.578Z · LW(p) · GW(p)

Yes, it does, thank you!

comment by athingtoconsider · 2012-06-05T10:46:33.517Z · LW(p) · GW(p)

I assume you're telling all of this to your psychiatrist and following their advice, and, in addition, speaking with a psychologist about your concerns. Ask yourself if you might have bipolar disorder or seasonal depression rather than just depression. Anti-depressants don't work great in a lot of those cases, and in the case of bipolar disorder they can make your mood swings worse. Also try multiple drugs; they don't all target the same mechanisms.

As for non-professional/non-medication steps, you should probably follow some of lukeprog's advice on happiness (find it from the sequences page). He covers the literature on what has empirically been shown to make people happier. It's not intuitive.

it isn't nearly as great as I imagined and that a changing worldview or belief-system can have a far higher impact.

This illustrates how your intuition about what makes you happy is wrong. It's not about having a certain model of reality or willing your mind to do something from the inside; it's about finicky lizard-brain mood machinery, and beating it into happiness by feeding it tiny daily doses of positive external experiences.

comment by Kaj_Sotala · 2012-06-05T08:28:30.609Z · LW(p) · GW(p)

I read Shalizi's post on the difficulties of central planning, where he noted that even using something as simple as linear optimization to organize things becomes impossible if you need to do it on the scale of a national economy. This made me significantly reduce my belief in the proposition that something like CEV would be anywhere near computationally tractable, at least in the form that it's usually discussed.

That made me consider something like Goertzel & Pitt's human-assisted CBV approach, where much of the necessary computation gets outsourced to humans, as an approach that's more likely to work. Of course, their approach pretty much requires a slow takeoff in order to work, and I consider a hard takeoff pretty likely. Logically I should then have updated more towards expecting that we'll end up losing our complexity of value during the Singularity, but I didn't, possibly because I was already giving that a very high probability anyway and I can't perceive my intuitive probability estimates in sufficiently high precision for the difference to register. However, I did update considerably towards thinking that Goertzel's ideas on Friendliness have more merit than I'd previously presumed, and that people should be looking in a direction like the one Goertzel & Pitt propose.

Replies from: gwern, Mitchell_Porter
comment by gwern · 2012-06-05T16:09:36.047Z · LW(p) · GW(p)

Shalizi's post also points out that if you relax any of the requirements, you can get answers much more quickly, and also notice that modern computers & algorithms run vastly faster. As a matter of fact, linear optimization is one of the best examples of progress:

Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million.

(1988 is, incidentally, way after the cited short paper pointing out the impossibility of computing in time, IIRC.)

Given that CEV is all about extrapolating, making consistent, simplifying and unifying aggregate preferences, I wouldn't take linear programming as much more relevant to CEV than, say, various ruminations about NP or EXP-time.
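To make the point about modern LP solvers concrete, here is a minimal sketch of solving a toy production-planning linear program with an off-the-shelf solver; the numbers and constraints are made up for illustration, and real planning problems have millions of variables, but the interface is the same:

```python
from scipy.optimize import linprog

# Toy production plan: maximize 3*x1 + 2*x2 subject to resource limits.
# linprog minimizes, so the objective is negated.
c = [-3.0, -2.0]                 # objective coefficients (negated for maximization)
A_ub = [[1.0, 1.0],              # labour:   x1 + x2   <= 4
        [2.0, 1.0]]              # material: 2*x1 + x2 <= 5
b_ub = [4.0, 5.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)           # optimal plan (1, 3) and its value 9
```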

Replies from: Vaniver, Kaj_Sotala
comment by Vaniver · 2012-06-05T23:36:54.948Z · LW(p) · GW(p)

The best requirement to relax, in my opinion, is that of optimality (which, incidentally, is a strong reason to be an adaptation executor rather than a utility maximizer!). My professional research is into optimization heuristics that just focus on getting good solutions without worrying whether they're the best ones, which allows tackling problems that are immensely larger. For many problems, it's simply not worth the time to ensure that no better solution exists; it's a lot of effort for little payout.
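As a toy illustration of the "good enough beats provably optimal" point (a generic sketch, not Vaniver's actual research): a random-restart hill climb on a small 0/1 knapsack instance finds a good selection quickly without ever certifying that no better one exists.

```python
import random

# Toy 0/1 knapsack: find a good (not necessarily optimal) selection of items.
values   = [10, 7, 4, 9, 2, 6]
weights  = [ 5, 4, 2, 6, 1, 3]
capacity = 10

def total(selection):
    w = sum(w_ for s, w_ in zip(selection, weights) if s)
    v = sum(v_ for s, v_ in zip(selection, values) if s)
    return v if w <= capacity else -1   # infeasible selections score badly

def hill_climb(restarts=20, steps=200):
    best = None
    for _ in range(restarts):
        x = [random.random() < 0.5 for _ in values]
        for _ in range(steps):
            i = random.randrange(len(x))
            y = x.copy()
            y[i] = not y[i]              # flip one item in or out
            if total(y) >= total(x):
                x = y
        if best is None or total(x) > total(best):
            best = x
    return best, total(best)

print(hill_climb())
```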

comment by Kaj_Sotala · 2012-06-06T13:38:58.825Z · LW(p) · GW(p)

Shalizi's post also points out that if you relax any of the requirements, you can get answers much more quickly, and also notice that modern computers & algorithms run vastly faster.

Yes, but he still considers it impossible even with modern computers and algorithms.

Given that CEV is all about extrapolating, making consistent, simplifying and unifying aggregate preferences, I wouldn't take linear programming as much more relevant to CEV as, say, various ruminations about NP or EXP-time.

I'm not sure how extrapolating, making consistent, simplifying, unifying, and implementing the aggregate preferences-in-general of everybody on Earth would be easier than simply implementing the resource-related preferences of everybody in a single nation.

Replies from: gwern, gwern
comment by gwern · 2012-06-15T14:49:23.425Z · LW(p) · GW(p)

To follow up from http://cscs.umich.edu/~crshalizi/weblog/919.html:

Plans Happen: I should re-iterate that Kantorovich-style planning is entirely possible when the planners can be given good data, an unambiguous objective function, and a problem of sufficiently limited scope. Moreover, what counts as "sufficiently limited" is going to grow as computing power does. The difficulties are about scale, not principle; complexity, not computability. Probably more importantly, there are other forms of control, with good claims on the name "planning", which are not this sort of mathematical programming, and plausibly have much lower computational complexity. (Central banks, for instance, are planning bodies which set certain prices.) In particular, intervening in existing market institutions, or capitalist firms, or creating non-market institutions to do things — none of these are subject to the same critique as Kantorovich-style planning. They may have their own problems, but that's a separate story. I should have been clearer about this distinction.

Let me also add that I focused on the obstacles in the way of planning because I was, at least officially, writing about Red Plenty. Had the occasion for the post been the (sadly non-existent) Red, White, and Blue Plenty, it would have been appropriate to say much more about the flaws of capitalism, not just as we endure it but also it in its more idealized forms.

...Cockshott, and More Equations: The most important issue raised, I think, was the claim that Cockshott has shown that central planning is computationally tractable after all. I don't agree, but unfortunately, there's going to need to be a bit more math.... When I talked about the complexity of solving the planning problem, I was talking about the complexity of this linear programming problem, and I was allowing for it to be solved only up to an accuracy of ε — the solution only had to come to within ε of the optimum, and in fact only to within ε of satisfying the constraints. Since the computational complexity of doing so only grows proportionally to log(1/ε), however, if we can do this at all we can ask for very good approximations. Or, pessimistically, if some other part of the problem, like the number of variables, is demanding lots of resources, we'd have to make the slop (literally) exponentially larger to make up for it.

(Incidentally, one issue which was not explicitly raised, but which I should have mentioned, was the possibility of replacing approximate optimization with satisficing, say taking the first plan where the value of the output was above some threshold and all constraints were met. [This would still leave the computational-political problem of coming up with the value vector.] I have been unable to discover any literature on the complexity of linear satisficing, but I suspect it is no better than that of approximate linear programming, since you could use the former as a sub-routine to do the latter, by ratcheting up the threshold, with each satisficing plan as the starting-point for the next round of the ratchet.)

comment by gwern · 2012-06-06T16:33:04.821Z · LW(p) · GW(p)

Yes, but he still considers it impossible even with modern computers and algorithms.

His fundamental conclusion (see the section on non-convexity) is that it's as hard for capitalism as planning, which isn't really an issue for CEV. ('OK, so fine, we'll go with either system as convenient, and apply optimization to as large subunits as we can manage before it breaks down.')

I'm not sure how extrapolating, making consistent, simplifying, unifying, and implementing the aggregate preferences-in-general of everybody on Earth would be easier than simply implementing the resource-related preferences of everybody in a single nation.

I thought it was obvious. The difficulty is related to the number of arbitrary distinct constraints being enforced. Reduce the number of constraints, and you reduce the difficulty.

Whether CEV is actually possible - whether the reduction in constraints happens and a Parfitian convergence of ethical criteria happens - is the fundamental question and doubted by many, but also unaffected by what kind of complexity linear optimization may be!

comment by Mitchell_Porter · 2012-06-06T14:39:06.873Z · LW(p) · GW(p)

The principal challenge of CEV is not the part where you take into account all the specific, contingent individuals who exist on the earth, in the course of following some recipe for extrapolating, aggregating, etc. The important step is figuring out the "recipe" itself. If we had a recipe which we knew to perfectly reflect true human ideals, but it proved to be computationally infeasible, we (or a FAI) could then think about feasible approximations to this ideal.

The important questions are: What sort of decision system is a typical human being? What are the values and value templates with which it comes equipped? These questions should have exact answers just as objective as the facts about how many limbs or how many bones a typical human has, and those answers should in turn imply that "friendliness" and "CEV" ought to work in some specific way.

comment by Sarokrae · 2012-06-06T08:39:41.224Z · LW(p) · GW(p)

I changed my mind about my own effectiveness at relationships, and downgraded my confidence in being actually in control of my brain. I've upgraded my estimation of the extent to which I am like a typical female.

Specifically, what I've learned is that in dealing with things that are, to a large extent, affected by my unconscious, it is helpful to treat my conscious and unconscious as separate agents, only one of which I am in control of. In doing this, I noticed that the proportion of my decision making affected by unconscious "blips" was higher than I thought, and furthermore that my unconscious reacts to stimuli in a way which is predicted by pua-Game to a far greater extent than I believed (despite me being a very atypical female).

Concrete predictions which have changed as a result of this experience: I've increased my confidence in being able to deal with future relationship problems. If future problems do arise, I plan to use trusted sources on LTR-Game (to decipher my unconscious) as well as conscious reasoning. I've also massively decreased my confidence that polyamory is a worthwhile relationship model for me to attempt at any point (while my conscious thinks it's a great idea, I've now narrowed down how my unconscious would react, and accepted that there is little I can do to change that reaction).

A great positive to this experience is that I can now, if I concentrate, notice which of my thoughts are conscious and which are unconscious "blips" which I rationalise. This helps if I'm trying to make a reasoned decision on anything, not just relationships.

Replies from: Raemon, GLaDOS
comment by Raemon · 2012-06-06T18:37:14.170Z · LW(p) · GW(p)

I had a very similar experience a few months ago (replacing "typical female" with "typical male"). Or at least an experience that could have outputted a nearly identical post.

The experience felt incredibly crippling and dehumanizing. Towards the beginning of my experience I predicted ways in which I was likely to make bad decisions, and ways I was likely to be emotionally affected. For a few weeks I made an effort (which felt near-herculean) NOT to make those errors and avoiding those emotional consequences. Eventually I ran out of willpower, and spent a month watching myself making the bad decisions I had predicted.

I came out of this with a very different mental model of myself. I'm not sure I consider it a positive yet. I make better predictions about myself but am not noticeably better at actually acting on the information.

Replies from: Sarokrae, khafra
comment by Sarokrae · 2012-06-06T19:29:50.364Z · LW(p) · GW(p)

This experience has definitely been a positive for me, because I now have a more accurate model of my own behaviour which does allow me to more successfully solve problems. (Solving this particular problem has caused relationship satisfaction to shoot up from an admittedly quite low slump to a real high point for both me and my OH.)

I'll just share the main specific technique I learned from the experience, just in case it might also work for you. When I treat my conscious and unconscious as separate agents, I accept that I cannot control my unconscious thinking (so just trying really hard not to do it won't help much), but I can /model/ how my unconscious reacts to different stimuli. If you draw the diagram with "Me", "Hamster" (what I call my unconscious) and "World" (including, specifically to here, the behaviour of my OH), and use arrows to represent what can affect what, then it's a two-way arrow between Me and World, a one-way arrow from World to Hamster, and a one-way arrow from Hamster to Me. After I drew that diagram it became pretty bloody obvious that I needed to affect the world in such a way as to cause positive reactions in Hamster (and for which I need accurate models of both World and Hamster), and most ideally, kickstart a Me -> World -> Hamster -> Me positive feedback loop.

comment by khafra · 2012-06-06T20:07:00.725Z · LW(p) · GW(p)

Would the Litany of Tarski and a hug from nyan_sandwich help?

I'm interested in the ways you and Sarokrae actually noticed these "blips." I usually don't notice myself making decisions, when I make them; perhaps if I did spend some time predicting how a person in my circumstances would make bad decisions, I could notice them afterwards.

Replies from: Sarokrae
comment by Sarokrae · 2012-06-07T18:16:27.632Z · LW(p) · GW(p)

I'm not sure if describing what the blip feels like would help without going through the process of discovery, but I'll have a go anyway: it's noticing that you have a thought in your head without remembering the process you went through to reach it. When there's a new thought formed that's within easy mental grasp distance, especially when it's a judgement of a person or an emotion, e.g. attraction, and the reason for it is not within easy grasp distance, then that's a sign for me that it's an unconscious conclusion.

Basically if a new thought that feels "near" appears, but when I ask myself why, the answer feels "far", that's a sign that if I did retrieve the answer it would be a rationalisation rather than the actual explanation, and I attempt to abort the retrieval process (or at least proceed with many mental warning signs).

comment by GLaDOS · 2012-06-15T17:44:39.673Z · LW(p) · GW(p)

Specifically, what I've learned is that in dealing with things that are, to a large extent, affected by my unconscious, it is helpful to treat my conscious and unconscious as separate agents, only one of which I am in control of. In doing this, I noticed that the proportion of my decision making affected by unconscious "blips" was higher than I thought, and furthermore that my unconscious reacts to stimuli in a way which is predicted by pua-Game to a far greater extent than I believed (despite me being a very atypical female).

...

A great positive to this experience is that I can now, if I concentrate, notice which of my thoughts are conscious and which are unconscious "blips" which I rationalise.

I may have read too much Roissy but I've grown used to calling that part of me my hamster. (^_^)

Edit: Heh. It seems I'm not alone in using this terminology. Also overall my personal experience has been very similar, except I haven't yet tried poly.

comment by komponisto · 2012-06-05T09:26:40.041Z · LW(p) · GW(p)

My probability that cryonics will work has gone down after reading this.

Replies from: ciphergoth, CasioTheSane
comment by Paul Crowley (ciphergoth) · 2012-06-06T15:26:44.234Z · LW(p) · GW(p)

I expect to read a dashed-off blog comment that makes half a maybe-plausible technical argument against cryonics but is not turned into a blog post that sets the argument out in detail roughly once every three months. Every day I don't read one, my confidence goes up. Reading that comment returns my confidence down to where it was about three months ago.

Replies from: gwern
comment by gwern · 2012-06-09T19:07:49.888Z · LW(p) · GW(p)

Excellent - now apply a hope function and be even more precise!

comment by CasioTheSane · 2012-06-10T06:03:44.268Z · LW(p) · GW(p)

I think kalla724 might have something there, but I really hope he/she posts something with more details, and some specific examples. However, there is enough information in that post for someone with enough free time, a background in biochemistry, and literature access to begin researching the idea on their own...

comment by gwern · 2012-06-04T22:23:52.925Z · LW(p) · GW(p)

I've continued to research iodine's effect on IQ in adults & children, and the more null studies I manage to find the more pessimistic I get. I think I'm down to 5% from 50% (when I had only Fitzgerald 2012 as a null). The meta-analysis reports, as expected, a very small estimated effect size.

Replies from: CarlShulman
comment by CarlShulman · 2012-06-04T22:36:41.822Z · LW(p) · GW(p)

Do you think the large scale rollout studies were wrong too?

Replies from: gwern
comment by gwern · 2012-06-04T22:38:17.723Z · LW(p) · GW(p)

No, I think they were right; at least, your question suggests you think I am pessimistic on an issue different from the actual issue I am pessimistic on.

Replies from: CarlShulman
comment by CarlShulman · 2012-06-04T23:40:33.021Z · LW(p) · GW(p)

My understanding was that iodine helped with early brain development in children for areas with severe deprivation and poverty diets, and that this was easily vanquished by iodized salt (even if you buy sea salt, you would get iodine from meat and iodized salt in prepared foods). So I haven't worried about going out of my way for it myself, but have been keen on promoting it in regions with widespread goiter and non-iodized salt in Africa and South Asia (and hostile to rich country non-iodized salt for effects on children and pregnant women).

Replies from: gwern
comment by gwern · 2012-06-05T00:55:07.852Z · LW(p) · GW(p)

My current understanding is that 'early' here is turning out to be very early indeed; the window of opportunity is basically pregnancy. This is very bad for my hopes of using it in an adult (myself).

comment by bramflakes · 2012-06-04T21:18:14.177Z · LW(p) · GW(p)

I met a relativist postmodern-type that also understood evolutionary psychology and science in general.

Replies from: Jayson_Virissimo, CarlShulman, None, None
comment by Jayson_Virissimo · 2012-06-05T03:49:21.773Z · LW(p) · GW(p)

What, you hadn't heard of Paul Feyerabend?

comment by CarlShulman · 2012-06-04T21:38:13.110Z · LW(p) · GW(p)

More detail, please.

comment by [deleted] · 2012-06-05T04:57:40.219Z · LW(p) · GW(p)

.

comment by [deleted] · 2012-06-05T22:03:55.034Z · LW(p) · GW(p)

Seconding the call for more details.

Replies from: bramflakes
comment by bramflakes · 2012-06-05T22:17:23.374Z · LW(p) · GW(p)

Well when they made claims like "logic/math is only one way of knowing things, along with metaphor and [something else I forgot]" and some claims about culture being distinct from individuals, I readied myself for explaining how human intuition and culture ultimately reduces to deterministic firing of neurons. When I started on that, they jumped ahead and said that they already knew and accepted a soul-free, lawful universe. I was a bit stumped, until I realised that I'd just pattern-matched those claims onto a stereotype of the kind of people that would write in The Social Text.

Upon reflection, I had been talking more in the vein of "The Universe is lawful, even when we don't know the laws, so math ultimately drives everything", and they had been more along the lines of "We can't go around calculating probabilities all the time, so we might as well go by intuition most of the time". It was a failure to cross inferential gaps on my part.

Replies from: athingtoconsider
comment by athingtoconsider · 2012-06-11T05:59:19.527Z · LW(p) · GW(p)

I think you just broke LW's new commenter CSS.

Replies from: bramflakes
comment by bramflakes · 2012-06-11T07:18:39.153Z · LW(p) · GW(p)

Looks fine to me?

comment by Solvent · 2012-06-05T06:55:03.085Z · LW(p) · GW(p)

Young adult male here.

I've come to the conclusion that I'm nowhere near as attractive or good with girls as I thought I was.

I got my first girlfriend pretty much by accident last year. It was so incredibly amazing that I decided that romantic success was something I needed to become very good at. I spent quite a while reading about it, and thinking about how to be attractive and successful with women. I broke up with my girlfriend as I moved to a small town for two months at the beginning of this year, during which time I practiced approaching girls and flirting with them.

Then I moved to college, and the first attractive, smart girl I saw, I went up to her and got her as a girlfriend pretty much immediately. I thought that I must have been very good and attractive to have gotten such a gorgeous girlfriend so quickly. She broke up with me after a month or two. She immediately moved through two or three boyfriends over the space of a month or two. Meanwhile, I've been looking for a new girlfriend, but haven't had any success.

So I thought I was attractive and good with girls, and then it turned out that I just had a wild stroke of luck. So it goes.

I'm suspicious that I was simply arrogant about how good I was, and if I had thought more dispassionately, I wouldn't have been so wrong in my assessment of my own attractiveness.

Replies from: CasioTheSane
comment by CasioTheSane · 2012-06-05T07:36:26.575Z · LW(p) · GW(p)

Might I suggest that you may be looking at this all wrong: women are more attracted to your confidence than to your looks. I suspect that your physical attractiveness is just fine, but the event of being dumped by this smart and beautiful woman hurt your self-confidence, and caused you to seem less attractive to other women afterwards.

The sort of guy who thinks a girl broke up with him because of his unattractiveness is very unattractive to most women, whereas the sort of guy who thinks "it's her loss, I was out of her league anyways" is highly attractive. If you get (or learn to fake) more self confidence, I predict that your success will return. Ironically, being arrogant about how good you are is both necessary and almost sufficient to actually be good.

Replies from: Solvent
comment by Solvent · 2012-06-05T09:19:54.772Z · LW(p) · GW(p)

I don't mean attractiveness just in the sense of physical looks. I mean the whole thing of my social standing, confidence and perceived coolness.

But thanks for the advice.

comment by Nectanebo · 2012-06-05T07:41:49.639Z · LW(p) · GW(p)

Austrian Economics.

I was fairly convinced too, so I am now very worried about how many other, more blatantly silly things I believe and may believe in the future. I've definitely been at least a bit more wary than usual since realising this particular mistake.

I initially didn't really want to make this post, but I recognised that my reluctance was probably about status (I was embarrassed to admit I had believed something less trivial and more obviously rubbish compared to others in this thread). It was pretty easy to get over once I thought about it, though, especially since this kind of thing is exactly what the thread is looking for.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-06-05T11:13:31.936Z · LW(p) · GW(p)

"Austrian Economics" is a very large set of propositions. Which ones have you significantly changed your degree of belief in?

Replies from: Nectanebo
comment by Nectanebo · 2012-06-05T14:49:13.809Z · LW(p) · GW(p)

I've changed my mind about the kinds of writing produced on the basis of 'praxeology' by people like Ludwig von Mises and Murray Rothbard: I now think they are not very reliable. I don't think the arguments these Austrian economists put forth to justify rejecting empiricism and making weird assumptions are very good. This undermines a sizable amount of their writing; most of it I haven't read, but whatever I did read and believe I now place very little trust in. The Wikipedia synopsis of one of Mises' works should give you a fairly good idea of the kinds of assumptions he makes.

The links are fairly broad because I was admittedly trying to get a better grip on this stuff after having accepted much of it for a while, having given the whole school much benefit of the doubt early on because the kinds of ideas it concluded with (stuff like laissez-faire econ, not regulating) fitted in with my preconceived, often political, beliefs. Politics is the mind-killer, don'tcha know. Sorry for the lack of specific propositions.

I've read some Hayek too, he seems better, but I don't really know.

edit: changed a word to make a sentence make sense

Replies from: syzygy
comment by syzygy · 2012-06-05T18:25:30.946Z · LW(p) · GW(p)

Austrian-minded people definitely have some pretty crazy methods, but their economic conclusions seem pretty sound to me. The problem arises when they apply their crazy methods to areas other than economics (see any libertarian theory of ethics. Crazy stuff)

Replies from: bramflakes
comment by bramflakes · 2012-06-05T19:16:08.317Z · LW(p) · GW(p)

There's lots of different libertarian theories of ethics. Can you be more specific?

Replies from: syzygy
comment by syzygy · 2012-06-05T19:27:23.421Z · LW(p) · GW(p)

"Universally Preferable Behavior" by Stefan Molyneux, "Argumentation Ethics" by Hans Hermann Hoppe, and of course Objectivism, to name the most famous ones. Generally the ones I'm referring to all try to deduce some sort of Objective Ethics and (surprise) it turns out that property rights are an inherent property of the universe and capitalism is a moral imperative.

Forgive me if you're thinking of some other libertarians who don't have crazy ethical theories. I didn't mean to make gross generalizations. I've just observed that libertarian philosophers who consciously promote their theories of ethics tend to be of this flavor.

comment by wedrifid · 2012-06-05T06:24:08.020Z · LW(p) · GW(p)

I just changed my mind in this direction:

Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it's easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.

... and slightly upgraded my expectation of human non-extinction.

Damn it is easy to let other people do (some of) your thinking for you.

comment by WrongBot · 2012-06-05T02:47:12.911Z · LW(p) · GW(p)

Up until a month or so ago, I was convinced I'd landed my dream job. If I had a soul, it would be crushed now.

Which is not to say that it's awful, not by any means. I've just gained a new perspective on the value of best practices in software development.

Replies from: komponisto, wedrifid
comment by komponisto · 2012-06-05T03:41:29.019Z · LW(p) · GW(p)

Can you explain in more detail? I'm interested in learning about the downsides of programming jobs (which have been strongly promoted around here).

Replies from: Viliam_Bur, WrongBot
comment by Viliam_Bur · 2012-06-05T09:11:27.064Z · LW(p) · GW(p)

Seconded, more explanation is needed.

My experience with the best software practices is the following:

  • When a deadline is near, all best software practices are thrown out of the window. Later in the project, a deadline is always near.

  • While the spirit of the best software practices is ignored, it is still possible to follow their letter religiously, and be proud of it. This frequently leads to promotion.

Replies from: WrongBot
comment by WrongBot · 2012-06-05T22:54:13.996Z · LW(p) · GW(p)

When a deadline is near, all best software practices are thrown out of the window. Later in the project, a deadline is always near.

This is precisely the problem. Not really much more to add.

comment by WrongBot · 2012-06-05T23:06:02.328Z · LW(p) · GW(p)

I work in video games, so my experience isn't at all typical of programming more generally. The big issues are that:

  • Development priorities and design are driven by marketing.
  • Lots of time is spent doing throwaway work for particular demos. I (and many others) wasted a couple weeks hacking together a scripted demo for E3 that will never be seen again.
  • The design for my portion of the project has changed directions numerous times, and each new version of the feature has been implemented in a rush, so we still have bits of code from five iterations ago hanging around, causing bugs.
  • Willingness to work extremely long hours (70+/week) is a badge of pride. I'm largely exempt from this because I'm a contractor and paid hourly, but my salaried coworkers frequently complain about not seeing enough of their families. On the other hand, some of them are grateful to have an excuse to get away from their families.
  • The downside of being a contractor is that I don't get benefits like health insurance, sick days, paid time off, etc.

Many of these issues are specific to the games industry and my employer particularly, and shouldn't be considered representative of programming in general. Quality of life in the industry varies widely.

comment by wedrifid · 2012-06-05T02:57:49.022Z · LW(p) · GW(p)

I've just gained a new perspective on the value of best practices in software development.

Which is to say, much less than getting paid a lot and not working with dickheads?

Replies from: Morendil, WrongBot
comment by Morendil · 2012-06-05T06:20:39.055Z · LW(p) · GW(p)

I'm guessing "rewarded much less than knowing your way around office politics".

comment by WrongBot · 2012-06-05T22:55:34.090Z · LW(p) · GW(p)

The people I work with are mostly not dickheads and the pay is reasonable. It's the mountain of ugly spaghetti code I'm expected to build on top of that kills me. There's no time to do refactors, of course.

comment by Filipe · 2012-06-05T21:46:53.144Z · LW(p) · GW(p)

I've changed my mind on whether Cosma Shalizi believes that P=NP. I thought he did, upon reading "Whether there are any such problems, that is whether P=NP, is not known, but it sure seems like it." at his blog, only to discover after emailing him that he had made a typo. I've also learned not to bet with people with such PredictionBook stats, and especially not as much as $100.00.

Replies from: None
comment by [deleted] · 2012-06-06T16:57:33.326Z · LW(p) · GW(p)

Never bet against a Gwern, good advice.

comment by tgb · 2012-06-05T02:16:16.991Z · LW(p) · GW(p)

I have lowered my estimation of how hard it is to write rhyming, rhythmically satisfying poetry (no regard to the literary quality of the product). It has become my hobby on my walk to work. Read some Lewis Carroll and try to think along the beat pattern he uses - just garble words to yourself filling in the occasional blank that sounds good so long as you get the rhythm right. Do that for a while and you can start piecing phrases into that mold with some work and iteration. It's much more fun to just feel the beats than to count.

comment by private_messaging · 2012-06-06T10:34:57.996Z · LW(p) · GW(p)

I changed my mind from having no particular opinion on SI to thinking SI are complete cranks. I also changed my mind from thinking that AIs may be dangerous to a much lower estimate of the potential danger. (I have the post history on my Dmytry account to prove I changed my mind.)

Replies from: jsalvatier
comment by jsalvatier · 2012-06-06T15:39:15.847Z · LW(p) · GW(p)

What changed your mind on the latter?

Replies from: private_messaging
comment by private_messaging · 2012-06-07T08:47:10.030Z · LW(p) · GW(p)

When I saw the notion of serious AI danger circulating, without details, I guess I assumed it originated from better/more relevant arguments.

What I see instead are arguments about the general difficulty of some aspects of AI (such as real-world motivation), crafted so as to suggest updating only towards the unlikelihood of a "friendly AI that genuinely cares about mankind", but not towards the general unlikelihood of real-world motivation in AI, because the person making the arguments tells you to update on the former but says nothing about the latter.

This is combined with a theoretical notion of "rationality" that would work for updates on a complete inference graph, but which, applied to an incomplete inference graph such as the one above, is about as rational as concluding that, by the law of inertia, you'll just keep on walking after stepping off the top of a 10-story building.

comment by beriukay · 2012-06-05T11:29:57.740Z · LW(p) · GW(p)

I've changed my mind on the persuasiveness of a specific argument. I used to hold a high degree of confidence in the line of reasoning that "since nobody can agree on just about anything about god, it is likely that god doesn't exist". But then, in an unrelated conversation, someone pointed out that it would be foolish to say that "since nobody can agree on the shape of the earth, earth has no shape." I must be question-begging!

Replies from: syzygy
comment by syzygy · 2012-06-05T18:21:43.976Z · LW(p) · GW(p)

I think the correct comparison would be, "since no one can agree on the nature of Earth/Earth's existence, Earth must not exist" but this is ridiculous since everyone agrees on at least one fact about Earth: we live on it. The original argument still stands. Denying the existence of god(s) doesn't lead to any ridiculous contradictions of universally experienced observations. Denying Earth's geometry does.

Replies from: beriukay
comment by beriukay · 2012-06-06T08:32:01.497Z · LW(p) · GW(p)

That's the conclusion I came to as well, but I was worried that I was rationalizing, so I had to downgrade my confidence in the argument.

comment by jsalvatier · 2012-06-05T02:37:20.318Z · LW(p) · GW(p)

After reading the comments on my post on Selfish Reasons to Have More Kids, I think it's somewhat less likely that it's substantially correct. I think this might be largely a social effect rather than an evidence effect, though.

comment by Daniel_Burfoot · 2012-06-05T02:31:51.825Z · LW(p) · GW(p)

I wouldn't say I changed my mind, but I substantially increased my p-estimate that the following recipe could produce something very close to intelligence:

1) a vast unlabeled data set (think 1e8 hours of video and audio/speech data plus the text of every newspaper article and novel ever written)

2) a simple unsupervised learning rule (e.g. the restricted Boltzmann machine learning rule)

3) a huge computer network capable of applying many iterations of the rule to the data set.

I previously believed that such an approach would fail because it would be very difficult to "debug" the resulting networks. Now I think that might just not matter.
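For concreteness, here is a minimal sketch of the kind of "simple unsupervised learning rule" meant in point 2: a one-step contrastive-divergence (CD-1) update for a restricted Boltzmann machine, run on made-up binary data. Biases and minibatching are omitted; this is an illustration under those assumptions, not the actual recipe the comment has in mind for 1e8 hours of video.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))   # weights; biases omitted for brevity

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One contrastive-divergence (CD-1) step on a single binary visible vector."""
    h0_prob = sigmoid(v0 @ W)                            # hidden activation probabilities
    h0 = (rng.random(n_hidden) < h0_prob).astype(float)  # sample hidden states
    v1_prob = sigmoid(h0 @ W.T)                          # "reconstruction" of the visible units
    h1_prob = sigmoid(v1_prob @ W)
    # Hebbian-style difference between data-driven and reconstruction-driven statistics
    return lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))

data = (rng.random((100, n_visible)) < 0.5).astype(float)  # stand-in for real unlabeled data
for v in data:
    W += cd1_update(v)
```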

comment by Shmi (shminux) · 2012-06-04T21:54:11.292Z · LW(p) · GW(p)

I expected that my intuitive preference for any number of dust specks over torture would be easy to formalize without stretching it too far. Does not seem like it.

On the other hand, given the preference for realism over instrumentalism on this forum, I'm still waiting for a convincing (for an instrumentalist) argument for this preference.

Replies from: Mitchell_Porter, tenlier, magfrump, Manfred
comment by Mitchell_Porter · 2012-06-06T15:32:00.412Z · LW(p) · GW(p)

If you want a reason to prefer dust specks for others over torture for yourself, consistently egocentric values can do it. That will also lead you to prefer torture for others over torture for yourself. What about preferring torture for others over a dust speck for yourself? It's psychologically possible, but the true threshold (beyond which one would choose torture for others) seems to lie somewhere between inconvenience for oneself and torture for oneself.

It seems that LW has never had a serious discussion about the likely fact that the true human value system is basically egocentric, with altruism being sharply bounded by the personal costs experienced; nor has there been a discussion about the implications of this for CEV and FAI.

ETA: OK, I see I didn't say how a person would choose between dust specks for 3^^^3 others versus torture for one other. Will recently mentioned that you should take the preferences of the 3^^^3 into account: would they want someone to be tortured for fifty years, so that none of them got a dust speck in the eye? "Renormalizing" in this way is probably the best way to get a sensible and consistent decision procedure here, if one employs the model of humans as "basically egocentric but with a personal threshold of cost below which altruism is allowed".

comment by tenlier · 2012-06-05T04:25:24.010Z · LW(p) · GW(p)

Do you really have that preference?

For example, if all but one of trillions of humans were being tortured and had dust specks, would you feel like trading the torture-free human's freedom from torture for the removal of specks from the tortured? If so, then you are just showing a fairly usual preference (inequality is bad!), which is probably fine as an approximation of stuff you could formalize consequentially.

But that's just an example. Often there's some context in which your moral intuition is reversed, which is a useful probe.

(usual caveat: haven't read the sequences)

Topic for discussion: Less Wrongians are frequentists to a greater extent than most folk who are intuitively Bayesian. The phrase "I must update on" is half code for (p<0.05) and half signalling, since presumably you're "updating" a lot, just like regular humans.

Replies from: AspiringRationalist, lessdazed
comment by NoSignalNoNoise (AspiringRationalist) · 2012-06-05T06:27:28.930Z · LW(p) · GW(p)

Less Wrongians are frequentists to a greater extent than most folk who are intuitively Bayesian. The phrase "I must update on" is half code for (p<0.05) and half signalling, since presumably you're "updating" a lot, just like regular humans.

When you consciously think "p<.05", do you really believe that the probability given the null hypothesis is less than 1/20, or are you just using a scientific-sounding way of saying "there's pretty good evidence"? Might this just be that people on LessWrong have (I'm assuming) nearly all studied frequentist statistics in the course of their schooling, but most probably have not studied Bayesian statistics?

comment by lessdazed · 2012-06-05T16:25:16.013Z · LW(p) · GW(p)

since presumably you're "updating" a lot, just like regular humans

It's a psychological trick to induce more updating than is normal. Normal human updating tends to be insufficient.

comment by magfrump · 2012-06-05T00:38:07.705Z · LW(p) · GW(p)

If I recall correctly Alicorn made a reference to reversing the utilities in this argument... would you think it better for someone to give up a life of the purest and truest happiness, if in exchange they created all of the 10 second or less cat videos that will ever be on youtube throughout all of history and the future?

My intuitions here say yes; it can be worth sacrificing your life (i.e. torturing yourself working at a startup) to create a public good which will do a small amount for a lot of people (i.e. making standard immunization injections also give people immunity to dust specks in their eyes)

Replies from: tenlier
comment by tenlier · 2012-06-05T04:39:43.993Z · LW(p) · GW(p)

Manipulative phrasing. Of course, it will always seem worth torturing yourself, yadda yadda, when framed as a volitional sacrifice. Does your intuition equally answer yes when asked if it is worth killing somebody to do etc etc? Doubt it (and not a deontological phrasing issue)

Replies from: magfrump
comment by magfrump · 2012-06-05T05:38:52.860Z · LW(p) · GW(p)

Certainly there's a difference between what I said and the traditional phrasing of the dilemma; certainly the idea of sacrificing oneself versus another is a big one.

But the OP was asking for an instrumentalist reason to choose torture over dust specks. It is pretty far-fetched to imagine that literally torturing someone will actually accomplish... well, almost anything, unless they're a supervillain creating a contrived scenario in which you have to torture them.

A case where you will actually be trading quality of life for a barely-tangible benefit on a large scale is torturing yourself working at a startup. This is an actual decision people make: making their lives miserable in exchange for minor but widespread public goods. And I fully support the actual trades of this sort that people actually make.

That's my instrumentalist argument for, as a human being, accepting the metaphor of dust specks versus torture, not my philosophical argument for a decision theory that selects it.

Replies from: tenlier
comment by tenlier · 2012-06-05T06:18:51.568Z · LW(p) · GW(p)

Was there any reason to think I didn't understand exactly what you said the first time? You agree with me and then restate. Fine, but pointless. Additionally, unimaginative re: potential value of torture. Defending lack of imagination in that statement by claiming torture defined in part by primary intent would be inconsistent.

Replies from: magfrump
comment by magfrump · 2012-06-05T06:43:38.961Z · LW(p) · GW(p)

The reason I thought you didn't understand what I was talking about was that I was calling on examples from day to day life, this is what I took "instrumentalist" to mean, and you starting talking about killing people, which is not an event from day to day life.

If you are interested in continuing this discussion (and if not, I won't object), let's take this one step at a time: does that difference seem reasonable to you?

Replies from: tenlier
comment by tenlier · 2012-06-05T12:28:33.590Z · LW(p) · GW(p)

The day to day life bit is irrelevant. The volitional aspect is not at all. Take the exact sacrifice you described but make it non-volitional. "torturing yourself working at a startup" becomes slavery when non-volitional. Presumably you find that trade-off less acceptable.

The volitional aspect is the key difference. The fact that your life is rich with examples of volitional sacrifice and poor in examples of forced sacrifice of this type is not some magical result of how we treat "real" examples in day-to-day life. It is entirely because "we" (humans) have tried to minimize non-volitional sacrifices, because they are what we find immoral!

Replies from: magfrump
comment by magfrump · 2012-06-06T04:26:06.906Z · LW(p) · GW(p)

Point number one: I don't understand how you can say, when I am making an argument explicitly restricted to instrumental decision theory, that day-to-day life is irrelevant. Instrumentalism should ONLY care about day-to-day life.

With respect to forced sacrifice, my intuitions say I should just do the math, and that the reason volition is so important is that the reasonable expectation that one won't be forced to make sacrifices is a big-ticket public good, meaning the math almost always comes out on its side. I think that you're saying these choices have been screened off, but I think non-volitional choices have been screened off because they are in general bad trades rather than because "volition" is a magic word that lets you get whatever you want.

Point three, let's turn this around... say someone is about to spend their entire life being tortured. Would you rescue them, if you knew it meant throwing a harmless dust speck into the eye of everyone ever to exist or be emulated? This should be equivalent, but both of the sacrifices here are forced since, at a minimum, some human beings are sociopaths and wouldn't agree to take the dust speck.

If you want me to consider volition more closely, can you come up with some forced sacrifice choices that are reasonable exchanges that I might come across if I lived in a different world?

Replies from: magfrump
comment by magfrump · 2012-06-06T04:28:08.411Z · LW(p) · GW(p)

One possible idea: suppose I were the son of an African warlord, and I could make my parents' political decrees more compassionate by talking to them after they blew off steam torturing people, or I could instead make them torture fewer people by talking to them beforehand.

Here my intuitions say I should let the individuals be tortured in exchange for effecting large scale policy decisions.

comment by Manfred · 2012-06-05T00:08:12.807Z · LW(p) · GW(p)

I expected that my intuitive preference for any number of dust specks over torture would be easy to formalize without stretching it too far. Does not seem like it.

Well, preferences are pretty easy to fit. Utility(world) = e^-(# of specks) - 1000*(# of people getting tortured)

However, note that this still requires that there is some probability of someone being tortured that you would trade a dust speck for.
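For what it's worth, a quick numerical sketch of how a bounded-disutility function like this behaves (purely illustrative; the 1000 weight is the constant from the formula above, and the probability at the end is arbitrary):

```python
import math

def utility(specks, tortures):
    # Toy function from above: speck disutility saturates (total loss is bounded by 1 util),
    # while each torture costs a flat 1000 utils.
    return math.exp(-specks) - 1000 * tortures

# No number of dust specks ever outweighs a single torture:
print(utility(0, 0) - utility(1, 0))       # ~0.632 utils lost to the first speck
print(utility(0, 0) - utility(10**9, 0))   # ~1.0 utils lost, no matter how many specks
print(utility(0, 0) - utility(0, 1))       # 1000 utils lost to one torture

# ...but a small enough probability of torture is still preferable to a sure speck:
p = 1e-6
print(p * 1000 < utility(0, 0) - utility(1, 0))  # True: 0.001 < ~0.632
```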

Replies from: shminux
comment by Shmi (shminux) · 2012-06-05T00:53:23.400Z · LW(p) · GW(p)

It doesn't work if you continuously increase the severity of the minor inconvenience / reduce the severity of torture and try to find where the two become qualitatively comparable, as pointed out in this reply. The only way I see it working is to assign zero disutility to specks (I advocated it originally to be at the noise level). Then I thought that it was possible to have the argument work reasonably well even with a non-zero disutility, but at this point I don't see how.

Replies from: Manfred
comment by Manfred · 2012-06-05T03:09:10.643Z · LW(p) · GW(p)

Utility(world) = e^-(# of specks) + X*e^-(# of people getting tortured), where X is some constant larger than 1/(1-1/e) in the incommensurate case, and less than that in the commensurate limit.

Of course, this assumes some stuff about the number of people getting tortured / specked already - but that can be handled with a simple offset.
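To spell out where that threshold comes from (again just a toy check, nothing rigorous): the speck term can lose at most 1 util in total, while the first torture loses X*(1 - 1/e) utils, so no number of specks can outweigh one torture exactly when X > 1/(1 - 1/e) ≈ 1.58:

```python
import math

def utility(specks, tortures, X):
    return math.exp(-specks) + X * math.exp(-tortures)

threshold = 1 / (1 - 1 / math.e)  # ~1.582

def specks_can_outweigh_one_torture(X, many=10**6):
    speck_loss = utility(0, 0, X) - utility(many, 0, X)  # approaches 1 as specks grow
    torture_loss = utility(0, 0, X) - utility(0, 1, X)   # X * (1 - 1/e)
    return speck_loss > torture_loss

print(specks_can_outweigh_one_torture(threshold + 0.01))  # False: incommensurate case
print(specks_can_outweigh_one_torture(threshold - 0.01))  # True: commensurate case
```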

Replies from: shminux
comment by Shmi (shminux) · 2012-06-05T06:27:13.308Z · LW(p) · GW(p)

I don't think this addresses the point in the link. What happens when you go from specks to something slightly more nasty, like a pinch? Or slightly increase the time it takes to get rid of the speck? You ought to raise the disutility limit. Or if you reduce the length of torture, you have to lower the disutility amount from torturing one person. Eventually, the two intersect, unless you are willing to make a sharp qualitative boundary between two very similar events.

Replies from: Manfred
comment by Manfred · 2012-06-05T16:29:32.403Z · LW(p) · GW(p)

Yes, the two intersect. That's what happens when you make things quantitative. Just because we are uncertain about where two things should, morally, intersect, does not mean that the intersection itself should be "fuzzy."

Replies from: shminux
comment by Shmi (shminux) · 2012-06-05T19:52:46.511Z · LW(p) · GW(p)

The point is that without arbitrarily drawing the specks/torture boundary somewhere between x stabbed toes and x+epsilon stabbed toes, the suggested utility function does not work.
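To make that concrete, here is one toy way such a boundary could be drawn (every constant here is arbitrary): give each severity level a cap on its total disutility, and call it incommensurate with torture exactly while that cap stays below the cost of one torture. The boundary then falls sharply at whatever severity first crosses the cap:

```python
import math

TORTURE_COST = 1000.0  # utils lost per person tortured (arbitrary)

def harm_cap(severity):
    # Arbitrary toy rule: total disutility from any number of harms at a given
    # severity saturates at this cap (specks low, stabbed toes higher, etc.).
    return severity ** 2

def total_disutility(severity, count, tortures):
    # Disutility from `count` harms of one severity approaches harm_cap(severity).
    return harm_cap(severity) * (1 - math.exp(-count)) + TORTURE_COST * tortures

def commensurate_with_torture(severity):
    # Can some number of harms at this severity outweigh one torture?
    return harm_cap(severity) > TORTURE_COST

print(total_disutility(1.0, 10**9, 0))  # ~1.0: unlimited mild harms stay below one torture

cutoff = math.sqrt(TORTURE_COST)  # ~31.6 on this made-up severity scale
print(commensurate_with_torture(cutoff - 0.001))  # False: incommensurate
print(commensurate_with_torture(cutoff + 0.001))  # True: commensurate
```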

Replies from: Manfred
comment by Manfred · 2012-06-05T20:20:40.092Z · LW(p) · GW(p)

Hm, how can I help you see why I don't think this is a problem?

How about this. The following two sentences contain exactly the same content to me:

"Without arbitrarily drawing the specks/torture boundary somewhere, the suggested utility function does not work."

"Without drawing the specks/torture boundary somewhere, the suggested utility function does not work."

Why? Because morality is already arbitrary. Every element is arbitrary. The question is not "can we tolerate an arbitrary boundary," but "should this boundary be here or not?"

Replies from: shminux
comment by Shmi (shminux) · 2012-06-05T21:18:05.898Z · LW(p) · GW(p)

Are you saying that you are OK with having x stabbed toes being incommensurate with torture, but x+1 being commensurate? This would be a very peculiar utility function.

Replies from: Manfred
comment by Manfred · 2012-06-06T01:17:19.859Z · LW(p) · GW(p)

Yes, that is what I am saying. One can deduce from this that I don't find it so peculiar.

To be clear, this doesn't reflect at all what goes on in my personal decision-making process, since I'm human. However, I don't find it any stranger than, say, having torture be arbitrarily 3^3^2 times worse than a dust speck, rather than 3^3^2 + 5.

Sarcasm time: I mean, seriously - are you honestly saying that at 3^3^2 + 1 dust specks, it's worse than torture, but at 3^3^2 - 1, it's better? That's so... arbitrary. What's so special about those two dust specks? That would be so... peculiar.

As soon as you allow the arbitrary size of a number to be "peculiar," there is no longer any such thing as a non-peculiar set of preferences. That's just how consistent preferences work. Discounting sets of preferences on account of "strangeness and arbitrariness" isn't worth the effort, really.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-06T04:49:37.604Z · LW(p) · GW(p)

I don't mean peculiar in any negative sense, just that it would not be suitable for goal optimization.

Replies from: Manfred
comment by Manfred · 2012-06-06T07:51:56.652Z · LW(p) · GW(p)

Is that really what you meant? Huh.

Could you elaborate?

comment by RolfAndreassen · 2012-06-05T18:36:04.761Z · LW(p) · GW(p)

Two minutes ago I changed my mind on the source of the problem I was having: It wasn't an off-by-one error in my indexing but rather just failing to supply multiple work spaces for the case of multiple convolutions in the same PDF. D'oh! I must confess, though, that I wasn't convinced by anyone's argument, as such. So perhaps it doesn't count properly for this thread.

comment by Thomas · 2012-06-04T20:11:48.472Z · LW(p) · GW(p)

I was wrong about Neanderthals and us. I was sure that they were much more alien to us than it now appears they were. Now we see that some of them even grandfathered some of us.

I was politically correct, I guess. Australian Aborigines and Europeans are nearly too distant cousins for this correctness.

Replies from: ahartell
comment by ahartell · 2012-06-04T23:48:26.584Z · LW(p) · GW(p)

Sorry, but could you elaborate on that last line?