Posts

Welcome to Berkeley LessWrong Meetup [Edit With Your Details] 2018-03-23T19:35:22.188Z
Meetup : Berkeley: Alpha-testing at CFAR 2014-04-27T04:24:04.055Z

Comments

Comment by PeerInfinity on Human Memory: Problem Set · 2014-01-08T09:25:06.606Z · LW · GW
  1. Lock the door. Then check if the door is locked. Then wait two seconds, then check again if the door is locked. Then walk two steps away, then return and check if the door is locked. Then walk several steps away, then return and check if the door is locked. Repeat with further distances until you're so embarrassed by this process that you'll vividly remember the embarrassment, and also remember that your door is locked. This is especially effective if someone else sees you doing this. Or you could just write yourself a note saying that you locked the door, along with a time/date stamp.

  2. A generally useful technique is to carefully keep track of how many things you are currently trying to remember. That way, hopefully being aware that there is something that you're supposed to remember will make it easier to actually remember the thing. And if you do forget something, at least you'll know how many things you forgot, and you might suddenly remember it later. One technique for remembering how many things you're currently trying to remember is to hold out one finger for each thing you're trying to remember. So far, only twice have I ever had the count exceed 10 before I got a chance to write down the things I was trying to remember, but even then I just started over from one, and it was easy to remember that I restarted the count from one.

  3. Take 5 minutes to practice closing the door properly. Use exaggerated motions. Close the fridge door the way you imagine a professional fridge door closer would do, then make a show of pushing the door to make sure it's sealed. After each repetition, gradually use a more natural method, and experiment with different methods. Check if you can easily seal the door by leaning against it. Check if there is a way to make sure the door is sealed before you remove your hand from the door handle. Find at least one method that you find both effective and convenient. Then try closing the door without sealing it properly. If you're lucky, then this will now feel wrong to you, and you'll be able to notice this feeling of wrongness if you later make the mistake of closing the door without sealing it.

  4. Just write down the information, or at least write down enough hints for you to easily remember the rest. Don't try to remember more than seven things. Or if you somehow can't write down anything, then try using the technique of remembering how many points you are trying to remember, and using whatever other memory techniques you find most useful to remember the points. Spend more effort remembering the final items, since in this case you can safely forget the first items as you finish them. Count down the remaining items as you finish each one.

  5. Again, use the technique of keeping track of how many items you're trying to remember. In this case, it would be helpful to remember the number of each item, if the points need to be presented in a specific order. You could also try making an acronym or other mnemonic, composed of one-word reminders of each item. Or use whatever other memorization tricks you find most useful.

  6. Have a copy of the number someplace easily accessible. Put the card at the front of your wallet, so that you don't need to spend time searching for it in your wallet. Write the number on another piece of paper, preferably strong paper, that's more convenient to pull out than your wallet. Store the number on your cellphone in a place that's just one or two taps from the home screen. Write the number on your hand. Write the number on some other object you often look at. Use other memorization techniques for remembering numbers.

  7. Everyone should have a convenient way to write down ideas they think of in bed. I use an Evernote app on my cellphone, right on the home screen, with no lock screen on the cellphone. If you're awake enough to think of ideas, then you're awake enough to write them down. Decide for yourself if the idea is important enough to be worth the hopefully trivial effort of writing it down. Or if you're really in brainstorming mode, thinking of several ideas and not wanting to pause to write them down, then use the technique of keeping track of how many points you're currently trying to remember; when you're finished brainstorming and ready to write stuff down, you'll at least know how many things you've forgotten, and can try to remember them. If the light of the cellphone would interfere with your sleep, or if you don't have a cellphone, then you could try learning to write on paper without any light, and hope that whatever marks you made on the paper are enough to remind you of the idea. I previously tried using a TI-92+ graphing calculator, which has a full qwerty keyboard, and with which I had enough experience to type (unreliably) in the text editor without the light on, but I found the uncertainty of whether I had typed successfully to be more of a nuisance than turning on a light. Or if you don't want to try any of these ideas, you can try to use the technique of remembering how many ideas you thought of, and hope that after you wake up you'll be able to remember the number, and also what the ideas were.

  8. I don't have anything especially helpful to say about this one. Just use whatever memorization techniques you find most helpful. Also try any anxiety-reducing techniques you find helpful.

  9. I don't have anything especially helpful to say about this one either. Though the first step is to stop panicking, so use whatever panic-reducing techniques you find most helpful. Maybe focus on making at least some progress, rather than becoming discouraged by how much there is to be done.

Comment by PeerInfinity on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T23:13:03.337Z · LW · GW

"HE IS HERE. THE ONE WHO WILL TEAR APART THE VERY STARS IN HEAVEN. HE IS HERE. HE IS THE END OF THE WORLD."

This reminded me of a dream I had the night before Sunday, Dec 2, 2012, which I posted to my livejournal blog the next day. I'm not sure what I expect to accomplish by posting this here, but I thought you might find it interesting. Here is what I wrote about that dream:

" A scene where I dreamt I was reading the next chapter of HPMOR. It was extremely vivid. As if I was there. Very clear image and sound. Even some dramatic music. Ominous countdown to doom music. At least 3 different instruments.

Quirrell's plan is revealed. He plans to destroy the universe and re-create it "in his own image": simpler laws of physics that grant him unlimited power just by physically being at the center of the new universe. The new universe also contains magic; the dream showed a simple two-gesture spell that would allow Quirrell to "become a sun god", allowing him to create, destroy, and manipulate stars.

Quirrell's plan involved some extremely powerful magic, beyond what anyone thought possible. It involved creating a sphere of ultra-condensed matter, energy, space, and time, just outside Hogwarts. Quirrell put his plan into action during the last moments of his life, but as he entered the sphere of "MEST compression", subjective time for him slowed down by orders of magnitude, and he had immense power, allowing him to create the massive structures required for his plan in what looked like just a few seconds to the world outside the sphere. And there were other sentient beings in the sphere with him. Harry was there, near the center of the sphere, tricked into believing that he was saving this universe, not helping to destroy it. Also some other characters, with a generic "shopkeeper" or "smith" personality, who were in charge of helping with the construction of something that vaguely resembled a series of Large Hadron Colliders, enormous metal rings and other structures arranged in a precise 3d structure resembling an enormous lattice, or cage. Quirrell gave instructions to these assistants on how to assemble the structure. The dream showed some of their replies. "You're not going to believe this, but there's this giant metal tube floating towards me. It's exactly the shape you described, but Merlin it's huge! I can't even see the end of it! I'm standing by to attach it to the next piece, which is also floating this way now. This won't be easy."

And so Quirrell continued assembling the structure. Most of it went according to plan, but then one of the helpers, panicking, informed him that one of the pieces wasn't lining up correctly. Harry had figured out that something was wrong with Quirrell's plan. He deactivated the barrier around the sphere, and summoned McGonagall, who also gained immense power when she entered the sphere. Harry told her some of what was happening, and said that they needed to find a wristwatch that Quirrell had charmed, which was somehow controlling the time compression. McGonagall found the watch, and started to move it out of place, but then Quirrell found her. Quirrell was far too powerful to be killed directly, but he was already dying, so they only needed to delay his plans long enough. McGonagall didn't stand a chance in the battle, but Quirrell didn't destroy her entirely; he instead left her mostly powerless. He hadn't given up on regaining Harry's trust.

The scene ended on a cliffhanger. "Wait until next week when I'm finished writing the next chapter to find out what happens next" "

Comment by PeerInfinity on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-04-04T03:12:21.897Z · LW · GW

"You can't put a price on a human life."

"I agree, but unfortunately reality has already put a price on human life, and that price is much less than 5 million dollars. By refusing to accept this, you are only refusing to make an informed decision about which lives to purchase."

Comment by PeerInfinity on POSITION: Design and Write Rationality Curriculum · 2012-01-26T02:20:30.282Z · LW · GW

I like the idea. This is something that could be useful to anyone, not just as part of the Rationality Curriculum.

Here is a related idea I posted about before:

Another random idea I had was to make a text adventure game, where you participate in conversations, and sometimes need to interrupt a conversation to point out a logical fallacy, to prevent the conversation from going off-track and preventing you from getting the information you needed from the conversation.

See also The Less Wrong Video Game

Comment by PeerInfinity on POSITION: Design and Write Rationality Curriculum · 2012-01-20T01:06:13.211Z · LW · GW

One obvious idea for an exercise is MBlume's Positive Bias Test, which is available online.

But of course everyone taking the course would probably already be familiar with the standard example implemented in the app. I would suggest updating the app to have several different patterns, of varying degrees of complexity, and a way for the user to choose the difficulty level before starting the app. I would expect that to be not too hard to implement, and useful enough to be worth implementing.
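For illustration only, here is a minimal sketch of what a multi-difficulty version could look like, in the style of the 2-4-6 task. The rules, names, and structure below are my own guesses, not MBlume's actual implementation:

```python
# Hypothetical sketch of a 2-4-6-style positive bias tester with
# selectable difficulty. The rules here are invented examples.
RULES = {
    "easy":   ("strictly increasing", lambda a, b, c: a < b < c),
    "medium": ("all three are even",  lambda a, b, c: a % 2 == b % 2 == c % 2 == 0),
    "hard":   ("sum is positive",     lambda a, b, c: a + b + c > 0),
}

def play(level="easy"):
    name, rule = RULES[level]
    print("Guess the hidden rule. Enter triples like: 2 4 6 (or 'q' to give up).")
    while True:
        line = input("> ").strip()
        if line.lower() == "q":
            print(f"The rule was: {name}")
            return
        try:
            a, b, c = (int(x) for x in line.split())
        except ValueError:
            print("Please enter three integers.")
            continue
        # The player only gets yes/no feedback, as in the original task.
        print("fits the rule" if rule(a, b, c) else "does not fit the rule")

if __name__ == "__main__":
    play("easy")
```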

Comment by PeerInfinity on HELP! I want to do good · 2011-05-30T05:49:25.067Z · LW · GW

downvote this comment if you want to balance out the karma from an upvote to the other comment.

Comment by PeerInfinity on HELP! I want to do good · 2011-05-30T05:48:56.868Z · LW · GW

Please upvote this comment if you would have at least some use for a "saving the world wiki"

Comment by PeerInfinity on HELP! I want to do good · 2011-05-30T05:38:19.273Z · LW · GW

"persuade other people to do it for me"? Don't you mean "persuade other people to do it with me"?

other than that, this is an awesome post! I totally want to be your ally! :)

Congratulations on your altruism! If you really are as altruistic as you claim to be.

I'm the person who mentioned there should be a "saving the world wiki", by the way. The main thing that's stopping me from going ahead and starting it myself is that no one else has expressed any interest in actually using this wiki if I created it.

Also, I've already made some previous attempts to do something like this, and they were pretty much complete failures. Costly failures. Costing lots of time and money.

(sorry for not noticing this post until now)

Comment by PeerInfinity on Overcoming suffering: Emotional acceptance · 2011-05-29T23:22:27.325Z · LW · GW

I'm going to try to apply some Bayesian math to the question of whether it makes sense to believe "if you aren't sad about my bad situation then that means you don't care about me"

In this example, Person X is in a bad situation, and wants to know if Person Y cares about them.

To use Bayes' theorem, define the following events:

A: 'Person Y cares about Person X'

B: 'Person Y feels sad about Person X's situation'

C: 'Person Y expresses sadness about Person X's situation'

Let's use P(B) as an abbreviation for either P(B given C) or P(B given not C), because we're doing these calculations after Person X already knows whether or not Person Y expressed sadness. In other words, I'm assuming that P(B) has already been updated on C.

Bayes' theorem says that P(A given B) is P(B given A) times P(A) over P(B).

Larger P(B given A) and larger P(A) make P(A given B) go up; larger P(B) makes it go down.

Bayes' theorem says that P(not A given not B) is P(not B given not A) times P(not A) over P(not B).

Larger P(not B given not A) and larger P(not A) make P(not A given not B) go up; larger P(not B) makes it go down.
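In symbols, the two applications of Bayes' theorem above are:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
\qquad\qquad
P(\neg A \mid \neg B) = \frac{P(\neg B \mid \neg A)\, P(\neg A)}{P(\neg B)}
```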

So this tells us:

The more uncertain Person X is about whether Person Y cares about them, the more they'll worry about whether Person Y feels sad about any specific misfortune Person X is experiencing.

Different people probably have different beliefs about what P(A given B) is. Someone who thinks that this value is high will be more reassured by someone feeling sad about their situation, and someone who thinks this value is low will be less reassured. So this value will be different for a different person X, and also for a different person Y.

Different people probably have different beliefs about what P(not A given not B) is. Someone who thinks that this value is high will be more worried by someone not feeling sad about their situation, and someone who thinks this value is low will be less worried. So this value will be different for a different person X, and also for a different person Y.

If Person Y somehow feels equally sad about the misfortune of people e specifically cares about, and people e doesn't even know, then P(B given A) is equal to P(B), and whether they feel sad about any particular misfortune of Person X doesn't give any new information about whether Person Y cares about Person X.

Similarly, if Person Y never feels sadness about anyone's misfortune, then P(B given A) is equal to P(B), and the fact that Person Y doesn't feel sad about any particular misfortune of Person X doesn't give any new information about whether Person Y cares about Person X.

And if Person Y is somehow less likely to feel sad about the misfortunes of people e cares about than people e doesn't care about... then all this would be reversed? This isn't really relevant anyway, so I won't bother checking the math.

I am very likely to have made a mistake somewhere in this comment. Halfway through writing this comment I started to get really fuzzyheaded.

Most of this was already obvious before doing the math, but I think there was at least some value to this exercise.

Also, I very strongly suspect that I'm completely missing the point of... something...

oh, right, I was trying to answer the question of whether it makes sense to believe "if you aren't sad about my bad situation then that means you don't care about me"

and the answer is... sometimes. It depends on the variables described in the math above.

Comment by PeerInfinity on Overcoming suffering: Emotional acceptance · 2011-05-29T23:22:01.302Z · LW · GW

To me, it still feels Wrong to not feel bad when bad things are happening. Especially when bad things are happening to the people you know and interact with.

I suspect that the reason why it feels Wrong is because I would assume that if someone you know was in a really bad situation, and they saw you not feeling bad about it, they would assume that you don't care about them. I was assuming that "feeling bad when bad things happen to someone" is part of the definition of what it means to care about someone. And I'm naturally reluctant to choose to not care.

oops, I just realized... if the rule is "only have emotions about situations that were within my immediate control", and you know that the other person will feel upset if they don't see you feeling bad about their situation, then that counts as something that's within your immediate control... though something about this seems like it doesn't quite fit... it feels like I'm interpreting the rule to mean something other than what was intended...

Also, I'll admit that I have almost no idea how many people believe "if you aren't sad about my bad situation then that means you don't care about me", and how many people don't believe this. I'm still not sure if I believe this, but I think I'm leaning towards "no".

but if you happen to have the "gift" of "sadness asymbolia", then you can go ahead and show sadness about other people's bad situations, and not experience the negative affect of this sadness. And of course it also has all those other benefits that Will mentioned.

"fear asymbolia" also seems like it would be extremely helpful.

Something also feels Wrong about enjoying sadness. If you happen to enjoy sadness, then you need to be really careful not to deliberately cause harmful things to happen to yourself or others, just for the sake of experiencing the sadness.

and yet somehow "nonjudgemental acceptance" doesn't feel wrong... these mindfulness techniques seem like an entirely good idea.

Comment by PeerInfinity on Rationalist horoscopes: A low-hanging utility generator. · 2011-05-25T04:30:00.510Z · LW · GW

The git repository is online now at https://github.com/PeerInfinity/Weighted-Horoscopes

Comment by PeerInfinity on Rationalist horoscopes: A low-hanging utility generator. · 2011-05-24T00:19:36.055Z · LW · GW

I'll admit that after I first read that comment, I was about to make this mistake:

"When faced with a choice between doing a task and attempting to prove that it's unnecessary, most people immediately begin on the latter."

(I'm probably misremembering that quote. I tried googling but didn't find the original quote, and I don't remember where it's from.)

So a more appropriate course of action would be for me to at least check how much effort would be required to set up a github account. And so I did. I discovered that it was more complex than I expected, but I started to set up a github account and the project repository anyway. After about an hour of setting stuff up, I got to the step where I need to choose the project name. Another hour later, I still hadn't thought of an obviously correct choice for the project name. "Rationalist Horoscopes" wouldn't be an appropriate name for the project, because there's nothing especially rationalist about it without the database of good horoscopes, except the scoring system. There is already another project on github titled "horoscope". Other names I considered were "Scored Horoscopes", "Rated Horoscopes", or "Weighted Horoscopes", but none of these seemed clearly better than the others. I have lots of trouble making decisions like this, and github doesn't let you change the project name after it has been created. And so I decided to post a comment here asking if anyone can think of a better name, or if any of the names I thought of so far sounds better than the others.

The inconvenience of posting this to github ended up being a lot less trivial than I expected.

Comment by PeerInfinity on Rationalist horoscopes: A low-hanging utility generator. · 2011-05-23T07:30:13.564Z · LW · GW

I still haven't bothered to set up a github account. But if someone else wants to put it on github, they're welcome to.

Comment by PeerInfinity on Rationalist horoscopes: A low-hanging utility generator. · 2011-05-22T15:08:18.988Z · LW · GW

one idea is to have 12 separate tumblr accounts, one for each zodiac sign; users would then subscribe to the tumblr account for their own zodiac sign.

Comment by PeerInfinity on Rationalist horoscopes: A low-hanging utility generator. · 2011-05-22T09:58:32.532Z · LW · GW

The code for this project can be downloaded here

The code is written in PHP, and uses a MySQL database. A cron job is set up to post each day's horoscope to the Tumblr account.
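(The actual code is PHP, as noted above. Purely as an illustrative sketch of the score-weighted selection idea implied by the project's scoring system, with invented names and data, here is the shape of it in Python:)

```python
import random

# Illustrative sketch only: the real project is PHP + MySQL, and these
# horoscopes and field names are invented for the example.
horoscopes = [
    {"text": "Beware the planning fallacy today.", "score": 5},
    {"text": "A cached thought will mislead you.", "score": 2},
    {"text": "Update incrementally on new evidence.", "score": 8},
]

def pick_horoscope(entries):
    """Pick one entry with probability proportional to its (non-negative) score."""
    weights = [max(entry["score"], 0) for entry in entries]
    return random.choices(entries, weights=weights, k=1)[0]

# The daily cron job would then post something like this to Tumblr:
print(pick_horoscope(horoscopes)["text"])
```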

This is completely free software, so you're welcome to do whatever you like with it.

Contributions and feedback are appreciated.

Update:

A git repository for this project is online now at https://github.com/PeerInfinity/Weighted-Horoscopes

Comment by PeerInfinity on How to Save the World · 2011-05-21T04:52:05.817Z · LW · GW

sorry, I should have stated explicitly that I'm NOT assuming that "donating to a church = donating to a good cause".

What I am assuming is that the christians think that "donating to a church = donating to a good cause"

Comment by PeerInfinity on A survey of anti-cryonics writing · 2011-03-29T23:31:57.288Z · LW · GW

I recently found this article, which attempts to survey the arguments against cryonics. It finds only two arguments that don't contain any obvious flaws:

  1. Memory and identity are encoded in such a fragile and delicate manner that cerebral ischemia, ice formation or cryoprotectant toxicity irreversibly destroy it.

  2. The cell repair technologies that are required for cryonics are not technically feasible.

Comment by PeerInfinity on Procedural Knowledge Gaps · 2011-02-07T06:18:58.697Z · LW · GW

Thanks for explaining that! But, um... I still have more questions... What is the procedure for washing the surfaces, the utensils, and my hands? How do I know when the meat is cooked enough to not qualify as raw? And for stir-frying raw meat, do I need to pause the stir-frying process to wash the stir-frying utensils, so that I don't contaminate the cooked food with any raw juices that happen to still be on the utensils?

Comment by PeerInfinity on Procedural Knowledge Gaps · 2011-02-07T03:55:51.283Z · LW · GW

I think I have lots of gaps to report, but I'm having lots of trouble trying to write a coherent comment about them... so I'm going to just report this trouble as a gap, for now.

Oh, and I also have lots of trouble even noticing these gaps. I have a habit of avoiding doing things that I haven't already established as "safe". Unfortunately, this often results in gaps going undetected and uncorrected.

Anyway, the first gap that comes to mind is... I don't dare to cook anything that involves handling raw meat, because I'm afraid that I lack the knowledge necessary to avoid giving myself food poisoning. Maybe if I tried, I would be able to do it with little or no problem, but I don't dare to try.

Comment by PeerInfinity on Knowledge doesn't just happen · 2011-01-23T22:11:05.886Z · LW · GW

An obvious implication of this post is that if someone tells you that you "should have known better", then rather than getting upset and instantly trying to defend yourself, it might be a better idea to calmly ask the person "How should I have known better?".

Possible answers include:

1) "using this simple and/or obvious method that I recommend as a general strategy" 2) "using this not simple and/or not obvious method that I didn't think of until just now" 3) "I don't know" 4) "how dare you ask that!"

The first two of those answers are useful information about how to do things, and thus valuable. You can then perform a quick cost/benefit analysis to check if the cost of implementing the suggested strategy outweighs the cost of risking another instance of whatever mistake you just made.

The third generally means the technique worked: the speaker now realizes that maybe you didn't have any way to know better, and so maybe it would be inappropriate to blame you for whatever went wrong.

The fourth is a sign that the person you're talking to is probably someone you would be better off not interacting with if you can help it (and thus is useful information). There are ways of dealing with that kind of person, but they involve social skills that not everyone has.


Another obvious implication of this post is, if you're about to tell someone else that they "should have known better", then it might be a good idea to take a moment to think how they should have known better.

The same 4 possible answers apply here.

Again, in cases 1 and 2, you might want to take a moment to perform a quick cost/benefit analysis to check if the cost of implementing the suggested strategy outweighs the cost of risking another instance of whatever mistake the person just made. If your proposed solution makes sense as a general strategy, then you can tell the person so, and recommend that they implement it. If your proposed solution doesn't make sense as a general strategy, then you can admit this, and admit that you don't really blame the person for whatever went wrong. Or you can let the other person do this analysis themself.

In case 3, you can admit that you don't know, and admit that you don't blame the person for whatever went wrong. Or you can just not tell the person that you think they "should have known better", skipping this whole conversation.

In case 4, you obviously need to take a moment to calm down until you can give one of the other answers.

Comment by PeerInfinity on Humor · 2010-12-20T18:49:40.600Z · LW · GW

There's also this: Arbuckle: Garfield through Jon's eyes

Comment by PeerInfinity on Friendly AI Research and Taskification · 2010-12-17T20:45:05.686Z · LW · GW

"I don't even see how one would start to research the problem of getting a hypothetical AGI to recognize humans as distinguished beings."

I'm still not convinced that human beings should be treated as a special case, as opposed to getting the AGI to recognize sentient beings in general. It's easy to imagine ways in which either strategy could go horribly wrong.

Comment by PeerInfinity on TrailMemes for Sequences · 2010-12-17T20:21:33.982Z · LW · GW

Here are some other links that are relevant to this post:

Andrew Hay's dependency graphs of Eliezer's LW posts

A Java applet for browsing through these dependency graphs. (Warning: this will take a long time to load, and may crash your browser.)

A Java applet for browsing through the concepts in the LW wiki. (Warning: this will take a long time to load, and may crash your browser.)

All of these graphs are out of date now.

Comment by PeerInfinity on TrailMemes for Sequences · 2010-12-17T17:54:21.900Z · LW · GW

TrailMeme is awesome, thanks for posting this!

If TrailMeme had a tool to import/export to/from a file, then I might have volunteered to create a script to generate trails for the LW sequences.

Creating these trails manually would be tedious, but probably worthwhile.

But I'm not volunteering to do this myself, at least not any time soon, sorry.

Comment by PeerInfinity on What is Evil about creating House Elves? · 2010-12-16T18:18:52.036Z · LW · GW

sorry if this squicks anyone here, but...

Not all of these people are sex slaves. Many of them are "service slaves".

I, personally, want to be a service slave, aka "minion", to someone whose life is dedicated to reducing x-risks.

The main purpose of this arrangement would be to maximize the combined effectiveness of me and my new master, at reducing x-risks. I seem to be no good at running my own life, but I am reasonably well-read on topics related to x-risks, and enjoy doing boring-but-useful things.

And I might as well admit that I would enjoy being a sex slave in addition to being a service slave, but that part of the arrangement is optional. But if you're interested, I'm bisexual, and into various kinds of kink.

Adelene Dawner has generously offered to help train me to be a good minion. I plan to spend the next few months training with her, to gain some important skills, and to overcome some psychological issues that have been causing me lots of trouble.

I haven't set up a profile on collarme.com for myself yet.

Comment by PeerInfinity on What topics would you like to see more of on LessWrong? · 2010-12-16T16:45:48.821Z · LW · GW

Existential Risks

More specifically, topics other than Friendly AI. Groups other than SIAI and FHI that are working on projects to reduce specific x-risks that might happen before anyone has a chance to create a FAI. Cost/benefit analysis of donating to these projects instead of or in addition to SIAI and FHI.

I thought the recent post on How to Save the World was awesome, and I would like to see more like it. I would like to see each of the points from that post expanded into a post of its own.

Is LW big enough for us to be able to form sub-groups of people who are interested in specific topics? Maybe with sub-reddits, or a sub-wiki? Regular IRC/Skype/whatever chat meetings? I still haven't thought through the details of how this would work. Does anyone else have ideas about this?

Comment by PeerInfinity on Expansion of "Cached thought" wiki entry · 2010-12-16T15:54:35.249Z · LW · GW

random trivia: I recently noticed that "The concept of cached thoughts is the most useful thing I learned from Less Wrong" is now a cached thought, in my mind.

Comment by PeerInfinity on One Chance (a short flash game) · 2010-12-16T01:33:02.137Z · LW · GW

I realize that this is kinda stretching the limits of plausibility, but maybe...

obgu lbhe jvsr naq gur ryringbe ynql jrer arire erny va gur svefg cynpr. V zrna, vg znxrf frafr gung gur ryringbe ynql vfa'g erny, fvapr fur unf 4gu-jnyy-oernxvat xabjyrqtr, ohg vs gur thl'f jvsr vf vzntvanel, gura gung zrnaf ur'f ernyyl penml, naq qrfcrengryl arrqrq gurfr qernzf gb fanc uvz bhg bs gur penmvarff. Gur ynpx bs pnef ba gur svany qnl pbhyq or rkcynvarq ol gur bgure pnef nyfb orvat vzntvanel, be ol gur thl orvat fb yngr gung qnl gung ur pbzcyrgryl zvffrq ehfu ubhe. Npghnyyl, guvf pbhyq rkcynva gur nofrapr bs uvf jvsr gbb, znlor ur jnf fb yngr gung qnl gung uvf jvsr naq gur ryringbe ynql jrer nyernql fbzrcynpr ryfr.

Comment by PeerInfinity on One Chance (a short flash game) · 2010-12-15T20:10:59.636Z · LW · GW

Zl vagrecergngvba bs "Rirel qnl gur fnzr qernz" jnf gung rirel qnl rkprcg sbe gur ynfg qnl jnf n qernz, naq gur ynfg qnl jnf ernyvgl. Naq gung gur thl jub lbh frr whzc ng gur raq vfa'g lbh, vg'f gur ynfg bs gur bgure rzcyblrrf, naq rirelbar ryfr va gur pbzcnal unq nyernql whzcrq, nf n erfhyg bs rvgure gur pbzcnal snvyvat (lbh fnj gur tencu?), be gurve bja fgerff, be obgu. Naq gung gur ernfba jul lbh'er abg whzcvat nybat jvgu gurz vf orpnhfr lbh unq guvf frevrf bs qernzf va juvpu lbh rkcyberq nyy bs gur cbffvoyr bcgvbaf, vapyhqvat fhvpvqr, naq orpnzr cflpubybtvpnyyl pncnoyr bs abg whzcvat.

Nf sbe "Bar Punapr", V tbg gur orfg raqvat, ohg nffhzrq gung vg jnf gur frpbaq-jbefg raqvat bhg bs n cbffvoyr guerr, hagvy V ernq gur YJ pbzzragf. V gubhtug gung gur tnzr jnf enaqbzyl chavfuvat zr sbe tbvat gb gur ebbs ba gur frpbaq qnl vafgrnq bs tbvat qverpgyl gb gur yno. V nyfb fhfcrpgrq gung zl cebterff ng gur yno zvtug unir orra fybjrq qbja nf n erfhyg bs orvat oheag bhg, naq gung V fubhyq unir gnxra ng yrnfg bar bs gur bccbeghavgvrf gb gnxr gvzr bss sebz jbex, gb nibvq oheavat bhg. Naq vg jnf naablvat gung gur tnzr qvqa'g fubj nal qrgnvyf nobhg jung jrag ba va gur yno.

Comment by PeerInfinity on Best career models for doing research? · 2010-12-12T00:56:18.324Z · LW · GW

I'm surprised that no one has asked Roko where he got these numbers from.

Wikipedia says that there are about 80 billion galaxies in the "observable universe", so that part is pretty straightforward. Though there's still the question of why all of them are being counted, when most of them probably aren't reachable with slower-than-light travel.

But I still haven't found any explanation for the "25 galaxies per second". Is this the rate at which the galaxies burn out? Or the rate at which something else causes them to be unreachable? Is it the number of galaxies, divided by the distance to the edge of the observable universe, multiplied by the speed of light?

calculating...

Wikipedia says that the comoving distance from Earth to the edge of the observable universe is about 14 billion parsecs (46 billion light-years short scale, i.e. 4.6 × 10^10 light years) in any direction.

Google Calculator says 80 billion galaxies / 46 billion light years = 1.73 galaxies per light-year, which (sweeping outward at one light-year per year, i.e. at the speed of light) is 1.73 galaxies per year, or 5.48 × 10^-8 galaxies per second

so no, that's not it.
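(Double-checking that arithmetic with a quick script, using the same numbers as above:)

```python
# Reproduce the back-of-the-envelope calculation from the comment.
galaxies = 80e9                 # galaxies in the observable universe
radius_light_years = 46e9       # comoving distance to the edge, in light-years
seconds_per_year = 365.25 * 24 * 3600

per_light_year = galaxies / radius_light_years   # ~1.74 galaxies per light-year
per_year = per_light_year                        # edge swept at 1 light-year per year
per_second = per_year / seconds_per_year         # ~5.5e-8 galaxies per second

print(per_year, per_second)   # nowhere near 25 galaxies per second
```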

If I'm going to allow my mind to be blown by this number, I would like to know where the number came from.

Comment by PeerInfinity on (Virtual) Employment Open Thread · 2010-12-09T14:16:17.897Z · LW · GW

That's the basic idea, yes.

Most people in the network are looking for jobs as programmers. The second most popular job category is finance.

Comment by PeerInfinity on (Virtual) Employment Open Thread · 2010-12-08T18:15:41.829Z · LW · GW

So far it's been only about people helping each other find paid employment, but helping each other find grants is also a good idea, thanks.

Comment by PeerInfinity on How to Save the World · 2010-12-06T20:16:19.602Z · LW · GW

good points, thanks. I made some more edits.

I added a note mentioning that the mean is 2.9%, and quoted the comment "Being charitable ≠ doing good."

I replaced "their mission of converting the whole world to christianity" with "their vaguely defined mission"

Comment by PeerInfinity on How to Save the World · 2010-12-06T19:37:12.895Z · LW · GW

"So, I think you just said that the average Christian does X, but doesn't do X, and therefore I should do X. I can't quite figure out if there's a typo in there somewhere, or whether I'm just misunderstanding radically."

You're right, thanks, the previous wording was confusing. I removed the paragraph that said "I suspect that the average christian actually gives significantly less than 10% of their income to the church, and doesn't go to church every sunday, but I haven't actually looked up the statistics yet." The point of that paragraph was that I'm admitting that I'm probably overestimating the contributions of the average christian.

Comment by PeerInfinity on How to Save the World · 2010-12-06T19:34:59.759Z · LW · GW

you're right. thanks. I updated the comment to include your change.

Comment by PeerInfinity on How to Save the World · 2010-12-06T18:28:33.038Z · LW · GW

A random thought:

If you donate less than 10% of your income to a cause you believe in, or you spend less than one hour per week learning how to be more effective at helping a cause you believe in, or you spend less than half an hour per week socializing with other people who support the cause... then you are less instrumentally rational than the average christian.

edit: shokwave points out that the above claim is missing a critical inferential step: "if one of your goals is to be charitable"

edit: Nick_Tarleton points out that the average christian only donates 2.9% of their income to the church. And they don't go to church every sunday either. Also, being charitable ≠ doing good.

explanation:

The average christian donates about 10% of their income to the church. This is known as "tithing". The average christian spends about 1 hour per week listening to a pastor talk about how to become a better christian, and be more effective at helping the cause of christianity. This is known as "going to church", or "listening to a sermon". And going to church usually involves socializing with the other members of the church, for an amount of time that I'm estimating at half an hour.

And that's not counting the time they spend reading advice from other supporters of the cause (i.e. reading the bible), or meditating on how to improve their own lives, or the lives of others, or other ways to support the cause, or hacking their mind to feel happy and motivated despite the problems they're having in life (i.e. praying), or the other ways that they socialize with, and try to help, or get help from other people who support the cause (i.e. being friends with other christians).

The point I'm trying to make is that the christians are investing a lot of resources into their vaguely defined mission, and it would be sad if people who care about other, actually worthwhile causes are less instrumentally rational than the christians.

edit: oops, there's already a good LW article on this topic.

Comment by PeerInfinity on How to Save the World · 2010-12-02T20:24:03.685Z · LW · GW

The LW wiki has made it approximately one order of magnitude easier to find the best content from LW.

You could try to quantify that by the factors below (one toy way to combine them is sketched after the list):

  • the time it takes to find a specific thing you're looking for
  • the probability of giving up before finding it
  • the probability that you wouldn't even have bothered looking if the information wasn't organized in a wiki.
  • maybe more
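As an illustration only (my own toy model, not from the original comment), the first three factors could combine into an expected cost per lookup, where p_try is the probability you bother looking at all, t_search is the average search time when you do look, p_fail is the probability of giving up before finding it, and c_miss is the cost of ending up without the information:

```latex
E[\text{cost}] \approx p_{\text{try}} \, t_{\text{search}}
  + \left( p_{\text{try}} \, p_{\text{fail}} + (1 - p_{\text{try}}) \right) c_{\text{miss}}
```

A better-organized wiki lowers t_search and p_fail and raises p_try, which is one way to cash out the "order of magnitude easier" claim.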
Comment by PeerInfinity on How to Save the World · 2010-12-02T18:26:33.047Z · LW · GW

Another obvious suggestion:

  • If there isn't already a wiki for the cause that you are interested in helping, then consider starting one.

Most people reading this are probably well aware of the awesome power of wikis. LW's own wiki is awesome, and LW would be a whole lot less awesome without its wiki.

What we need is a wiki that lists all the people and groups who are working towards saving the world, what projects they are working on, and what resources they need in order to complete these projects. And each user of the wiki could create a page for themselves, listing what specific causes they're interested in, what skills and resources they have that they're willing to contribute to the cause, and what things they could use someone else's help with. The wiki could also have useful advice like this LW post, on how to be more effective at world-saving.

I already made a few attempts to set up something like this, but these involved ridiculously complicated systems that probably wouldn't have worked as well as I hoped. It would probably be a much better idea to start with just a simple wiki, where users can contribute the most important information. We can add more advanced features later, if it looks like the features will be worth the added complexity.

Maybe the wiki will end up saying "Just donate to SIAI. Unless you're qualified to work for SIAI, there really isn't much else you can do to help save the world." But even in this case, I think it would be really helpful to at least have an explanation why there is no point trying to help in any other way. And even then, we could still use the wiki for projects to generate cash.

I find it really disturbing that the cause of saving the world doesn't have its own wiki. And none of the individual groups working towards saving the world have their own wiki. SIAI doesn't have a wiki. Lifeboat doesn't have a wiki. FHI doesn't have a wiki. H+ doesn't have a wiki. GiveWell doesn't have a wiki. Seriously, how did the cause of saving the world manage to violate The Wiki Rule?

Several years ago, Eliezer started the SL4 Wiki, and that was awesome, but then somehow after a few months everyone lost interest in it, and it died. Then I tried to revive it by importing all of its content to MediaWiki and renaming it the transhumanist wiki. But no one other than me made any significant effort to edit or add content to the wiki. And even I haven't done much with the wiki in the past few months.

A few weeks ago, H+ contacted me, expressing interest in making the transhumanist wiki an official part of the humanityplus website, but I haven't heard any more about that since then.

Oh, and there's also the Accelerating Future People Database. This is a database of people who are working towards saving the world. This is a critical component of the system that I was describing, but we also need a list of projects, and a list of ways for volunteers to help.

Does anyone here think that a wiki like this would be a good idea? Does anyone here have any interest in helping to create such a wiki? If I created a wiki like this on my own, would anyone have a use for it? Is there some other reason I'm not aware of, why creating a wiki like this would be a very bad idea?

Comment by PeerInfinity on How to Save the World · 2010-12-01T22:24:38.370Z · LW · GW

good point, thanks, but I think it would still be a very bad idea to avoid having any friends who are world-savers, just to avoid seeming cult-like.

And I should mention that I think that it would also be a bad idea to avoid being friends with anyone who currently isn't a world-saver, because of a mistaken belief that only world-savers are worthy of friendship.

Also, even the cults know that making friends with non-cult-members can be an effective recruitment strategy.

I rephrased the second point as "Spend less time with your current friends, if it's obvious that they are causing you to be significantly less effective at world-saving, and the situation isn't likely to improve any time soon. But don't break contact with any of your current friends entirely, just because they aren't world-savers."

the original version was "Spend less time with your current friends, if it's obvious that they have no interest in world-saving, and they aren't helping you be more effective at world-saving, and you're not likely to make them any more interested in world-saving."

or maybe I should just drop the second point entirely...

Comment by PeerInfinity on How to Save the World · 2010-12-01T21:54:23.192Z · LW · GW

I made this same mistake, and ended up being significantly less optimized at world-saving as a result.

Comment by PeerInfinity on How to Save the World · 2010-12-01T18:47:29.604Z · LW · GW

This is an awesome post! Thanks, Louie :)

some obvious suggestions:

  • Make friends with other world-savers.
  • Spend less time with your current friends, if it's obvious that they are causing you to be significantly less effective at world-saving, and the situation isn't likely to improve any time soon. But don't break contact with any of your current friends entirely, just because they aren't world-savers.
  • Find other world-savers who can significantly benefit from skills or other resources that you have, and offer to help them for free.
  • Find other people who are willing to help you for free, with things that you especially need help with.
  • Look for opportunities to share resources with other world-savers. Share a house, share an apartment, share a car... There's lots of synergy among the people living at the SIAI house.
  • Join the x-risks career network
  • If you know of an important cause that currently doesn't have a group dedicated to that cause, consider starting a group. For example, the x-risks career network didn't exist a year ago.
  • Check out the Rationality Power Tools
  • really, anything that will help make your life more efficient will help you be more efficient at world-saving. Getting Things Done, 4 Hour Workweek, lots more...
Comment by PeerInfinity on Rational Project Management · 2010-11-25T15:14:04.708Z · LW · GW

An obvious suggestion: Getting Things Done

The GTD system is designed for helping individuals decide what to do next, and isn't really designed for organizations.

So GTD would help with:

  • what should be done
  • when it should be done
  • who should do it

but might not help much with:

  • how much of a budget in money, office space, website space, etc. a project should receive
  • when and how to evaluate the success of a project...

Though it might only take some relatively minor adjustments to make the GTD system able to handle these points as well.

I should also mention that most of the details of actually implementing the GTD system are left as an exercise to the reader.

Comment by PeerInfinity on Startups · 2010-11-25T15:04:18.819Z · LW · GW

There have been a few posts recently to the Existential Risk Reduction Career Network about startups that want to hire programmers. You might want to check those out.

Comment by PeerInfinity on "Target audience" size for the Less Wrong sequences · 2010-11-25T14:45:13.159Z · LW · GW

I'm a bisexual male, but can't upvote my own comment.

So the results so far are: 4 women, 4 gay/bisexual men, and 5 heterosexual men.

This means that there are approximately as many women on LW as there are gay/bisexual men. And almost half of the men on LW are gay/bisexual.

And yes, I know that there are probably several reasons why this poll's results are biased or otherwise unreliable, but at least now we have some data.

One obvious problem with this poll: it contradicts a previous survey, which said:

"(96.4%) were male, 5 (3%) were female, and one chose not to reveal their gender."

so it looks like there were lots of heterosexual males who didn't bother voting.

Comment by PeerInfinity on "Target audience" size for the Less Wrong sequences · 2010-11-23T23:15:45.981Z · LW · GW

downvote this comment if you find this poll annoying

Comment by PeerInfinity on "Target audience" size for the Less Wrong sequences · 2010-11-23T23:14:51.332Z · LW · GW

upvote this comment if you somehow don't belong in any of the other categories listed here.

Comment by PeerInfinity on "Target audience" size for the Less Wrong sequences · 2010-11-23T23:14:01.719Z · LW · GW

upvote this comment if you're a heterosexual male

Comment by PeerInfinity on "Target audience" size for the Less Wrong sequences · 2010-11-23T23:09:49.106Z · LW · GW

downvote this comment if you upvoted one of the other comments

Comment by PeerInfinity on "Target audience" size for the Less Wrong sequences · 2010-11-23T23:09:17.846Z · LW · GW

upvote this comment if you're a gay or bisexual male

Comment by PeerInfinity on "Target audience" size for the Less Wrong sequences · 2010-11-23T23:08:56.031Z · LW · GW

upvote this comment if you're female