Comments

Comment by Fadeway on Interesting new Pew Research study on American opinions about radical life extension · 2013-08-10T04:16:27.330Z · LW · GW

I did not expect this, and it seems weird: young people are also more optimistic about their futures, more likely to want to undergo radical life extension, and they haven't yet suffered the effects of aging (having many loved ones die, illness and pain, etc.).

Didn't predictions for the Singularity follow a similar trend? Older people predicted 30-40 years until the event, while younger predictors were more pessimistic, since they're likely to still be alive even if it happens in 60 years.

Comment by Fadeway on Seed Study: Polyphasic Sleep in Ten Steps · 2013-07-13T05:49:27.038Z · LW · GW

Those people may have a better chance of succeeding.

Comment by Fadeway on Seed Study: Polyphasic Sleep in Ten Steps · 2013-07-12T03:51:56.607Z · LW · GW

I've failed Uberman twice myself. You have pretty much an optimal plan, except for the naptation.

"Cut your naps down to 6 as quickly as you can without it hurting too much".

From my own knowledge, which may or may not be trustworthy, naptation doesn't need to be ended prematurely - the whole point is to cram a huge number of naps into a short timeframe in order to learn to get REM within a 24-minute interval (dreaming is a sign of this). Getting a few extra naps will just decrease your REM deprivation. The way I would do it: take 12 naps a day until you find yourself unable to fall asleep for a nap at all. The critical thing is that you stay in bed until the alarm rather than getting up after ten minutes; note also that some people have trouble falling asleep for naps in the first place, which is a separate issue. When you fail to fall asleep for a nap, that's a sign you've had enough and can't sustain 12 a day any longer; either cut two naps or go straight down to 6 a day. I'd choose the latter.

Also, um, give beds a wide berth outside naptime. And get more than two alarms, preferably with one placed more than 10 meters away from the bed - the long walk to it and back will ensure you actually wake up in the process of turning it off.

Comment by Fadeway on Noticing the 5-second mindkill · 2013-03-28T10:07:31.531Z · LW · GW

I discovered this issue for myself by reading a similar article and going through the same process, but with my third thought being "does that guy [the Prime Minister in this story] really believe this thing that I believe [in this case, pro-choice]?" I think he's bad because he broke the rules, then I forgive him because he's on my side, then for one reason or another I start to wonder whether he really is on my side...and notice that I'm trying to decide whether or not to blame him for breaking the rules. (I think this is because I myself use irony a lot, so when I hear a statement that is in some way ambiguous or silly, I reflexively ask myself whether it is sincere or sarcastic, even in situations where irony would be unacceptable/unthinkable, as is the case with a public statement.)

I'm not sure how many times this happened to me before I noticed, but nowadays I just think "broke the rules, -10 points even though I like this guy", and then, "oh and he agrees with me, gotta increase his score for that".

Comment by Fadeway on Solved Problems Repository · 2013-03-28T02:36:33.814Z · LW · GW

Google never fails. The chart shall not allow it.

Comment by Fadeway on Schelling Day: A Rationalist Holiday · 2013-03-27T14:13:51.885Z · LW · GW

Sounds like a fun ritual. Makes me wish I were in Boston so I could attend.

Comment by Fadeway on Soylent Orange - Whole food open source soylent · 2013-03-27T11:46:50.087Z · LW · GW

I've doubted his process from the start - I remember reading someone else's comment pointing out that he had forgotten to add iron, and his subsequent reply that this mistake was the cause of his feeling bad. I know nothing about nutrition (except that it's not a very good science, if it's a science at all), yet iron is obvious even to me. Missing it shows that he didn't really do much double-checking, much less cross-referencing or careful deliberation over the ingredient list.

I'm really hopeful about Soylent - I'd even jump in and risk poisoning to test it myself, if I were living alone. If anything, this experiment highlights how untrustworthy and limited our dietary knowledge is (and should motivate us to improve it). If this fails due to a new form of scurvy, the cause can be found and the experiment retried. If it fails due to not having read information that's already out there, well, that's a downer.

Comment by Fadeway on Recent updates to gwern.net (2012-2013) · 2013-03-18T17:13:23.111Z · LW · GW

I've read a significant amount of your essays/articles and love the stuff. It's kinda hard to keep track of new material, since the RSS feed tends to dump dozens of small changes all at once, so this post is much appreciated.

Comment by Fadeway on Recent updates to gwern.net (2012-2013) · 2013-03-18T17:10:45.588Z · LW · GW

Is it useful to increase reading speed, even if going from the basic level to some rudimentary form of training takes only a minimal amount of time? I've always been under the impression that increases in reading speed are paid for with a decrease in comprehension - which is what we actually care about. Or is that only true at the upper speed levels?

Comment by Fadeway on Don't Get Offended · 2013-03-07T04:35:14.754Z · LW · GW

What was the name of that rule where you commit yourself to not getting offended?

I've always practiced it, though not always as perfectly as I've wanted (when I do slip up, it's never during an argument; my stoicism muscle is fully alert at those times). An annoying aspect of it is when other people get offended - my emotions are my own problem, so why won't they deal with theirs? Do I have to play babysitter with their thought process? You can't force someone to become a stoic, but you can probably convince them that their reaction is hurting them and show them that it's desirable to ignore offense. To that end, I'm thankful for this post; upvoted.

Comment by Fadeway on "What-the-hell" Cognitive Failure Mode: a Separate Bias or a Combination of Other Biases? · 2013-02-26T04:33:54.726Z · LW · GW

I agree - you can get over some slip-ups, depending on how easy the thing you're attempting is relative to your motivation.

As you said, it's a chain - the more you succeed, the easier it gets. Every failure, on the other hand, makes it harder. Depending on the difficulty of what you're trying, a hard reset is sensible because it saves time on an already doomed attempt *and* makes the next one easier (thanks to the deterrent effect).

Comment by Fadeway on Eliezer Yudkowsky Facts · 2013-02-25T11:11:28.987Z · LW · GW

I disagree. This entire thread is so obviously a joke that one could only take it as evidence after having already decided what to believe and merely looking for arguments.

It does show that EY is a popular figure around here, since nobody goes around starting Chuck Norris threads about random people, but that's hardly evidence for a cult. Hell, in the case of Norris himself, it's the opposite.

Comment by Fadeway on "What-the-hell" Cognitive Failure Mode: a Separate Bias or a Combination of Other Biases? · 2013-02-24T04:27:12.738Z · LW · GW

If you want to get up early and you oversleep once, chances are you'll keep your schedule for a few days, then oversleep again, ad infinitum. Better to mark that first oversleep as a big failure, take a break for a few days, and restart the attempt.

Small failures always turning into huge ones also helps as a deterrent - if you know that the single cookie that bends your diet will end with you eating the whole jar and canceling the diet altogether, you will be much more likely to avoid even small deviations like the plague next time.

Comment by Fadeway on Why Politics are Important to Less Wrong... · 2013-02-22T03:43:59.218Z · LW · GW

God. Either with or without the ability to bend the currently known laws of physics.

Comment by Fadeway on Strongmanning Pascal's Mugging · 2013-02-21T15:48:14.009Z · LW · GW

This was my argument when I first encountered the problem in the Sequences. I didn't post it here because I haven't yet figured out what this post is about (I need to sit down and concentrate on the notation and the author's message, and I haven't done that yet), but my first thought when I read Eliezer claiming that it's a hard problem was that as the number of potential victims increases, the chance of the claim actually being true decreases (until it reaches a hard limit equal to the chance of the claimant having a machine that can produce infinite victims without consuming any resources). And the decrease in probability isn't just due to the improbability of a random person having a million torture victims - it also comes from the condition that a random person with a million torture victims also, for some reason, wants $5 from you.

Where is the flaw here? What makes the mugging important, despite how correct my gut reaction appears to me?
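Roughly, a sketch of that intuition in expected-value terms (my own informal framing, not taken from the post; N is the claimed number of victims and u the disutility per victim):

\[ \mathbb{E}[\text{loss}(N)] \;=\; P(\text{claim true} \mid N \text{ victims},\ \$5\ \text{demanded}) \cdot N \cdot u \]

My gut says the probability factor shrinks faster than N grows, so the product stays bounded rather than exploding as the mugger names ever larger numbers.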

Comment by Fadeway on Think Like a Supervillain · 2013-02-20T04:27:14.346Z · LW · GW

The point is that a superhero can't take preemptive action. The author can invent a situation where a raid is possible, but for the most part, Superman must destroy the nuke after it has been launched - preemptively destroying the launch pad instead would look like an act of aggression from the hero. And going and killing the general before he orders the strike is absolutely out of the question. This is fine for a superhero, but most of us can't stop nukes in flight.

A dictatorship is different because aggression from the villain is everywhere anyway - and it's guaranteed that we will be shown at least one poor farm girl assaulted by soldiers before our hero takes action against the mastermind. Only when the villain is breaking the rules egregiously and constantly is the hero allowed to bend them a bit.

If you have a situation with both an antihero and a hero in it, the hero can be easily predicted - as opposed to the antihero, who is actually allowed to plan. Superheroes end up quite simple: the rules they obey are so strict that they can only take one course of action (their choices tend to be about whether to follow the rules or not, rather than between two courses of action that are both allowed). And that course of action often isn't the most effective.

Comment by Fadeway on [SEQ RERUN] Three Worlds Decide (5/8) · 2013-02-17T11:56:51.660Z · LW · GW

I can definitely agree with 5, and to some extent with 3. With 4, it didn't seem to me when I read this months ago that the Superhappies would be willing to wait; it works as a part of 3 (get a competent committee together to discuss after stasis has bought time), but not by itself.

I found it interesting on my first reading that the Superhappies are modeled as a desirable future state, though I never formulated a comprehensive explanation for why Eliezer might have chosen to do that. Probably to avoid overdosing the Lovecraft. It definitely softens the blow from modifying humanity's utility function to match their own.

You definitely hit the nail on the head with 5. Finding the other guy's pain and highlighting it, as well as showing how your offer helps what they actually care about, is both a basic and a vital negotiation technique. Call me when I'm organizing the first contact mission; I might have a space diplomat seat ready for you.

Comment by Fadeway on Three Axes of Prohibitions · 2013-02-16T13:07:31.295Z · LW · GW

What do you mean, specifically? "Having fun" aside, being emotional about a game is socially harmful/uncool in the same way a precommitment can be.

Comment by Fadeway on What are your rules of thumb? · 2013-02-15T17:33:48.854Z · LW · GW

- Hanlon's razor - I always start from the assumption that people seek the happiness of others once their own basic needs are met, then go from there. Helps me avoid the "rich people/fanatics/foreigners/etc. are trying to kill us all [because they're purely evil and nonhuman]" conspiracies.

-"What would happen if I apply x a huge amount of times?" - taking things to the absurd level help expose the trend and is one of my favourite heuristics. Yes, it ignores the middle of the function, but more often than not, the value at x->infinity is all that matters. And when it isn't, the middle tends to be obvious anyway.

Comment by Fadeway on The Virtue of Compartmentalization · 2013-02-15T14:26:09.631Z · LW · GW

When you mentioned compartmentalization, I thought of compartmentalization of beliefs and the failure to decompartmentalize - which I consider a rationalistic sin, not a virtue.

Maybe rename this to something about remembering the end goal, or something about abstraction levels, or keeping the potential application in mind; for example "the virtue of determinism"?

Comment by Fadeway on A Little Puzzle about Termination · 2013-02-03T18:22:28.658Z · LW · GW

Doesn't this machine have a set of ways to generate negative utility (it might feel unpleasant when using up resources, for example, as a way to prevent a scenario where the goal of 32 paperclips becomes impossible)? With fewer and fewer ways to generate utility as the diminishing returns pile on, the machine will either have to terminate itself (to avoid a life of suffering) or seek to counter the negative generators (if suicide = massive utility penalty).

If there's only one way to generate utility and no way to lose it, however, that's going to lead to the behavior of an addicted wirehead.

Comment by Fadeway on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-19T03:56:11.552Z · LW · GW

At night F.Lux is usually great - except when waking up or doing polyphasic sleep (where you treat night and day as the same thing). I discovered the program a week after I started Uberman, and shortly after installing it, I started having trouble staying up during the early morning hours between 3am and 7am, where previously I had no issue at all. Since I am no longer doing polyphasic, it's awesome - I never get blinded by my monitor, etc. I only wish I could make it use the daylight setting if I turn the PC on at night, so it helps me wake up. As it stands, I get two hours of "you should be in bed" lighting before it finally gives up on sending me off for a nap.

Comment by Fadeway on Open Thread, January 1-15, 2013 · 2013-01-12T17:34:17.018Z · LW · GW

From rereading the article, which I swear I stumbled upon recently, I took away that I shouldn't take too long to decide after I've written my list, lest I spend the extra time conjuring extra points and rationalizations to match my bias.

As for the meat of the post, I don't think it applies as much due to the importance of the decision. I could go out and gather more information, but I believe I have enough, and now it's just a matter of weighing all the factors; for which purpose, I think, some agonizing and bias removal is worth the pain.

Hopefully I can get somewhere with the bias removal step, as opposed to getting stuck on it. (And, considering that I just learned something, I guess this can be labeled "progress"! Thanks :))

Comment by Fadeway on Open Thread, January 1-15, 2013 · 2013-01-11T19:54:23.615Z · LW · GW

I have an important choice to make in a few months (about what type of education to pursue). I have changed my mind once already, and after hearing a presentation where the presenter clearly favored my old choice, I'm about to revert my decision - in fact, introspection tells me that my decision had already changed at some point during the presentation. Regarding my original change of mind, I may also have been affected by the friend who gave me the idea.

All of this worries me, and I've started making a list of everything I know as far as pros/cons go of each choice. I want to weigh the options objectively and make a decision. I fear that, already favoring one of the two choices, I won't be objective.

How do I decrease my bias and get myself as close as possible to that awesome point at the start of a discussion where you can list pros and cons and describe the options without having yet gotten attached to any position?

Comment by Fadeway on Morality is Awesome · 2013-01-07T06:38:41.204Z · LW · GW

For the preference ranking, I guess I can mathematically express it by saying that any priority change leads to me doing stuff that would be utility+ at the time, but utility- or utility-neutral now (and since I could be spending the time generating utility+ instead, even neutral is bad). For example, if I could change my utility function to eating babies, and babies were plentiful, this option would result in a huge source of utility+ after the change. Which doesn't change the fact that it also means I'd eat a ton of babies, which makes the option a huge source of utility- currently - I wouldn't want to do something that would lead to me eating a ton of babies. If I greatly valued generating as much utility+ for myself at any moment as possible, I would take the plunge; instead, I look at the future, decide not to take what is currently utility- for me, and move on.

Or maybe I'm just making up excuses to refuse a momentary discomfort in exchange for eternal utility+ - after all, I bet someone having the time of his life eating babies would laugh at me and have more fun than me. The inconsistency here is that I avoid the utility- choice when it comes to changing my terminal values, but I have no issue taking the utility- choice when I decide I want to be in a simulation. Guess I don't value truth that much.

I find that changing my memories leads to similar results as changing my utility function, but on a much, much smaller scale - after all, my memories are what make up my beliefs, my preferences, myself as a person. Changing them at all changes my belief system and preferences; but that's happening all the time. Changing them on a large scale is significantly worse in terms of affecting my utility function - it can't change my terminal values, so it's still far less bad than directly making me interested in eating babies, but still negative. Getting lied to is just bad, with no relation to the above two, and weakest in importance.

My gut says that I should treat others as I want them to treat me. Provided a simulation is a bit more awesome, or comparably awesome but more efficient, I'd rather take that than the real thing. Hence, I'd want to give others what I myself prefer (in terms of ways to achieve preferences) - not because they are certain to agree that being lied to is better than angsting about not helping people, but because my way is either better or worse than theirs, and I wouldn't believe in my way unless I thought it better. Of course, I am also assuming that truth isn't a terminal value for them. In the same way, since I don't want my utility function changed, I'd rather not do it to them.

Comment by Fadeway on Morality is Awesome · 2013-01-07T05:06:03.728Z · LW · GW

I attach negative utility to getting my utility function changed - I wouldn't change myself to maximize paperclips. I also attach negative utility to getting my memory modified - I don't like the normal decay that is happening even now, but getting a large swath of my memory wiped is far worse. I also dislike being fed false information, but that is by far the least negative of the three, provided no negative consequences arise from the false belief. Hence, I'd prefer being fed false information to having my memory modified, and both to being made to stop caring about other people altogether. There is an especially big gap between the last one and the former two.

Thanks for summarizing my argument. I guess I need to work on expressing myself so I don't force other people to work through my roundaboutness :)

Comment by Fadeway on Morality is Awesome · 2013-01-07T04:03:20.923Z · LW · GW

Don't we have to do it (lying to people) because we value other people being happy? I'd rather trick them (or rather, let the AI do so without my knowledge) than have them spend a lot of time angsting about not being able to help anyone because everyone was already helped. (If there are people who can use your help, I'm not about to wirehead you though)

"Do you mean to distinguish this from believing that you have flown a spaceship?"

Yes. Thinking about simulating achievement got me confused about it. I can imagine intense pleasure or pain. I can't imagine intense achievement; if I just got the surge of warmth I normally get, it would feel wrong, removed from flying a spaceship. Yet that doesn't mean I don't have an achievement slider to max; it just means I can't imagine what maxing it indefinitely would feel like. Maxing the slider leading to hallucinations about performing achievement-related activities seems too roundabout - really, that's the only thing I can say; it feels like it won't work that way. Can the pill satisfy terminal values without making me think I satisfied them? I think this question shows that the previous sentence is just me being confused. Yet I can't imagine how an awesomeness pill would feel, hence I can't dispel this annoying confusion.

[EDIT] Maybe a pill that simply maxes the sliders would make me feel achievement, but without flying a spaceship, hence making it incomplete, hence forcing the AI to include a spaceship hallucinator. I think I am/was making it needlessly complicated. In any case, the general idea is that if we are all opposed to just feeling intense pleasure without all the other stuff we value, then a pill that gives us only intense pleasure is flawed and would not even be given as an option.

Comment by Fadeway on Morality is Awesome · 2013-01-07T03:42:49.123Z · LW · GW

No, this is more about deleting a tiny discomfort - say, the fact that I know all of it is an illusion. I attach a big value to my memory and especially disagree with sweeping changes to it, but I'll rely on the pill, and thereby the AI, to decide what shouldn't be deleted (because doing so would interfere with the fulfillment of my terminal values) and what can be deleted (because it brings negative utility that isn't necessary).

Intellectually, I wouldn't care whether I'm the only drugged brain in a world where everyone else is flying real spaceships. I probably can't fully deal with the intuition telling me I'm drugged, though. It's not highly important - just a passing discomfort when I think about the particular topic (passing and tiny, unless there are starving children). Whether it's worth keeping around so I can feel in control and totally not drugged and imprisoned...I guess that depends on the circumstances.

Comment by Fadeway on Morality is Awesome · 2013-01-06T18:45:51.909Z · LW · GW

Can't I simulate everything I care about? And if I can, why would I care about what is going on outside the simulation, any more than I care now about a hypothetical asteroid on which the "true" purpose of the universe is written? Hell, if I could delete from my memory the fact that my utility function is being deceived, I'd gladly do so - yes, it would bring some momentary negative utility, but that teensy bit would be greatly offset by the gains, especially stretched over a huge amount of time.

Now that I think about it...if, without an awesomeness pill, my decision would be to go and do battle in an eternal Valhalla where I polish my skills and have fun, and an awesomeness pill brings me exactly that, except maybe better in some way I wouldn't normally have thought of...what exactly is the problem here? The image of a brain with the utility slider moved to the max is disturbing, but I myself can avoid caring about that particular asteroid. An image of a universe tiled with brains storing infinite integers is disturbing; one of a universe tiled with humans riding rocket-powered tyrannosaurs is great - and yet, they're one and the same; we just can't intuitively penetrate the black box that is the brain storing the integer. I'd gladly tile the universe with awesome.

If I could take an awesomeness pill and be whisked off somewhere where my body would be taken care of indefinitely, leaving everything else as it is, maybe I would decline; probably I wouldn't. Luckily, once awesomeness pills become available, there probably won't be starving children, so that point seems moot.

[PS.] In any case, if my space fleet flies by some billboard saying that all this is an illusion, I'd probably smirk, maybe blow it up with my rainbow lasers, and definitely feel bad for all those other fellas whose space fleets are a bit less awesome and significantly more energy-consuming than mine, all just because they're bothered by silly billboards like this (provided our AI is still limited by, at the very least, entropy - that is, limited in its ability to tile the world to infinity; if it can create as many real giant robots as it can create awesomeness pills, it doesn't matter which option is taken). If I'm allowed to have that knowledge and the resulting negative utility, that is.

[PPS.] I can't imagine how an awesomeness pill would max my sliders for self-improvement, accomplishment, etc. without actually giving me the illusion of doing those things. As in, I can imagine feeling intense pleasure; I can't imagine feeling intense achievement separated from actually flying a spaceship, or at least imagining that I'm flying one - it wouldn't feel as fulfilling, and it makes no sense that an awesomeness pill would separate them if it's possible not to. It probably wouldn't have me go through the roundabout process of doing all the stuff, and it probably would max my sliders even if I can't imagine it, to an effect much different from the roundabout way and by definition superior. As long as it doesn't modify my utility function (as long as I value flying spaceships), I don't mind.

Comment by Fadeway on Morality is Awesome · 2013-01-06T18:15:51.187Z · LW · GW

I can't bring myself to see the creation of an awesomeness pill as the one problem of such huge complexity that even a superintelligent agent can't solve it.

Comment by Fadeway on Group rationality diary, 12/25/12 · 2013-01-03T08:56:24.351Z · LW · GW

I started doing the same thing a few days ago, in an attempt to get back my habit of waking early (polyphasic experimenting got my sleep schedule out of whack). Something I do differently is that I write in the same box twice - once before I go to bed, as a commitment to waking up early, and once after I get up. This solved my problem of getting up, making up some reason to postpone the habit-formation process (or even cancel it and start anew later), and going back to bed. My symbols are a bit more complex, so that I can mark a failure on top of the commitment mark as well, but the idea is something like writing the left part of the tick just as I'm going to bed - and no sooner - and completing it when I get up (or crossing it out if I fail).

I think I'll do the same thing for exercising. It's been great so far!

Comment by Fadeway on Brain-in-a-vat Trolley Question · 2013-01-01T13:58:21.070Z · LW · GW

Don't blame yourself, blame the author. (which you kinda sorta did but then didn't)

Comment by Fadeway on [SEQ RERUN] True Sources of Disagreement · 2012-12-19T07:16:27.583Z · LW · GW

Thanks. That test was fun.

Comment by Fadeway on [SEQ RERUN] True Sources of Disagreement · 2012-12-18T19:35:19.763Z · LW · GW

I remember that when I went through all of the Sequences a year ago, I was curious about the retina issue that Eliezer keeps referring to, but a cursory search didn't return anything useful. I poked around a bit more just now, and found a few short articles on the topic. Could someone point me to more in-depth information regarding the inverted retina?

Comment by Fadeway on Playing the student: attitudes to learning as social roles · 2012-11-24T07:56:13.269Z · LW · GW

As for pep talks, I dislike them because they rely on the "I have this image of you" approach. The motivator is trying to get you to think they think you're great - even if you don't agree, you will want to live up to the expectation regardless, since the alternative is disappointment, and disappointment hurts. For me, this gets me thinking about ways to win, which brings me back to my thoughts about not being very good, and thus the cycle is reinforced. I might try harder, but I won't feel good about it, and I'll quickly feel paralyzed once it becomes apparent that whatever extra effort I've put in hasn't elevated me to "great" and that disappointment is inevitable.

For me, positive pep talks (as opposed to those I see in military movies but never experienced myself, where people are told they're spineless trash and elevated from there) end up having a negative effect when the motivator (the fear of disappointing someone who believes in you) punishes me emotionally before the effort is actually over. Of course, this probably stems from the fact that, as you point out, few people would actually believe someone praising them and change their self-appraisal just based on that; thus, we end up trying to fulfill expectations without believing that we can.

Comment by Fadeway on Playing the student: attitudes to learning as social roles · 2012-11-24T07:54:46.041Z · LW · GW

I share similar behaviors, although with key differences, and you just alerted me that I should be careful with my failure mode. It's gotten to the point where I don't want to try improving particular skillsets around my parents. I've already shown them that I'm bad at those skills and not interested in them; trying to improve through my usual all-or-nothing approach would feel very awkward, like a 180-degree personality turn.

Comment by Fadeway on Cryonics as Charity · 2012-11-10T15:20:07.539Z · LW · GW

I find a hundredfold cost decrease quite unlikely, but then, I'm not at all familiar with the costs involved or their potential to be reduced. If the idea of cryonics were accepted widely enough to become an acceptable alternative to more expensive treatment, though, freezing old people with negative utility to society until a potential technological Singularity, or merely the advent of mind uploading, would not be far off - and that would be efficient even without cheap cryonics.