Comment by xachariah on Why CFAR's Mission? · 2016-01-06T08:13:12.965Z · score: 2 (2 votes) · LW · GW

This is my main question. I've never seen anything to imply that multi-day workshops are effective methods of learning. Going further, I'm not sure how Less Wrong supports spaced repetition and distributed practice on one hand while also supporting an organization whose primary outreach seems to be crash courses. It's like Less Wrong is showing a forum-wide cognitive dissonance that nobody notices.

That leaves a few options:

  • I'm wrong (though I consider it highly unlikely)
  • CFAR never bothered to look it up, or uses self-selection to convince themselves it's effective
  • CFAR is trying to optimize for something aside from spreading rationality, but they aren't actually saying what.
Comment by xachariah on Stupid Questions May 2015 · 2015-05-02T19:53:09.679Z · score: 2 (2 votes) · LW · GW

I didn't mean to imply nonlinear functions are bad. It's just how humans are.

Picking gambles 1A and 2B, on the other hand, cannot be described by any utility function.

Prospect Theory describes this and even has a post here on lesswrong. My understanding is that humans have both a non-linear utility function and a non-linear risk function. This seems like a useful safeguard against imperfect risk estimation.

[Insurance is] not a Dutch Book in the usual sense: it doesn't guarantee either side a profit.

If you set up your books correctly, then it is guaranteed. A Dutch book doesn't need to work with only one participant; in fact, many Dutch books only work on populations rather than individuals, in the same way insurance only guarantees a profit when properly spread across groups.

Comment by xachariah on Stupid Questions May 2015 · 2015-05-02T18:33:49.717Z · score: 2 (2 votes) · LW · GW

The point of the Allais paradox is less about how humans violate the axiom of independence and more about how our utility functions are nonlinear, especially with respect to infinitesimal risk.

There is an existing Dutch Book for eliminating infinitesimal risk, and it's called insurance.

Comment by xachariah on Stupid Questions January 2015 · 2015-01-03T13:49:25.265Z · score: 0 (0 votes) · LW · GW

You may be interested in the term 'inverted classroom', if you're not already aware of it.

The basic idea is that it's the normal school system you grew up with, except students watch video lectures as homework, then do all work in class while they've got an expert there to help. Also, the time when the student is stuck in one place and forced to focus is when they're actually doing the hard stuff.

There are so many reasons why it's better than traditional education. I just hope inverted classrooms start to catch on sooner rather than later.

(Edit: I know this isn't your exact proposal, but it uses many of the features you mention and it can be immediately grafted into the existing public school system with a single change of curriculum and the creation of some videos. It's the low hanging fruit for education.)

Comment by xachariah on Stupid Questions (10/27/2014) · 2014-10-29T06:26:29.573Z · score: 1 (1 votes) · LW · GW

Anecdotally someone close to me did one of those and it was a quick way to burn thousands of dollars.

I tried to dissuade them, but in the end they came back with less knowledge of the subject than I had, and all I did was follow some YouTube tutorials and look at Stack Overflow to create a couple of learning apps for Android.

Comment by xachariah on Ethical frameworks are isomorphic · 2014-08-15T03:33:58.519Z · score: 1 (1 votes) · LW · GW

All ethical frameworks are equal the same way that all graphing systems are equal.

But I'll be damned if it isn't easier to graph circles with polar coordinates than it is with Cartesian coordinates.

Comment by xachariah on Open thread, 11-17 August 2014 · 2014-08-13T06:56:49.930Z · score: 1 (1 votes) · LW · GW

You don't need to upvote them necessarily. Just flip a coin.

If you downvote them too, then it just looks like they made a bad post.

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-07-27T22:27:12.363Z · score: 4 (4 votes) · LW · GW

Transfiguration sickness isn't because things turn into poison. Your body goes into a transfigured state, minor changes occur, and when you come back from that state things are different. It'd be tiny things. Huge problems would cause you to die instantly, but little transcription errors would kill you in the timeframe described.

Eg, your veins wouldn't match up right. The DNA in your cells would be just a little bit off and you'd get spontaneous cancer in your entire body. Some small percent of neurotransmitters and hormones would be transformed into slightly different ones... etc. None of that would be contagious or even harmful to somebody consuming it. But to the animal itself it'd be devastating.

Also remember that once the transfiguration reverts and you're back to yourself, you're in a stable state. The only issue is that you're not back together perfectly. Quirrell would only get sick if he drank the blood while it was transfigured and then it changed form while inside of him.

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-07-27T00:56:58.743Z · score: 8 (8 votes) · LW · GW

Death iss not truly gainssaid. Real sself is losst, as you ssay. Not to my pressent tasste. Admit I conssidered it, long ago.

It's still not a lie.

He considered it long ago and then he did it. He doesn't want to try it again because he's already got some and/or they wouldn't fix his current situation. Literally truthful but appropriately misleading.

Comment by xachariah on [LINK] Another "LessWrongers are crazy" article - this time on Slate · 2014-07-19T07:57:52.734Z · score: 16 (16 votes) · LW · GW

I thought the article was quite good.

Yes, it pokes fun at lesswrong. That's to be expected. But it's well written and clearly conveys all the concepts in an easy-to-understand manner. The author understands lesswrong and our goals and ideas on a technical level, even if he doesn't agree with them. I was particularly impressed by how the author explained why TDT solves Newcomb's problem. I could give that explanation to my grandma and she'd understand it.

I don't generally believe that "any publicity is good publicity." However, this publicity is good publicity. Most people who read the article will forget it and only remember lesswrong as that kinda weird place that's really technical about decision stuff (which is frankly accurate). Those people who do want to learn more are exactly the people lesswrong wants to attract.

I'm not sure what people's expectations are for free publicity but this is, IMO, best case scenario.

Comment by xachariah on Open Thread April 8 - April 14 2014 · 2014-04-08T21:34:18.295Z · score: 0 (0 votes) · LW · GW

Maybe I should back up a bit.

I agree that at 1000004:1000000, you're looking at the wrong hypothesis. But in the above example, 104:100, you're looking at the wrong hypothesis too. It's just that a factor of 10,000x makes it easier to spot. In fact, at 34:30 or even fewer iterations, you're probably also getting the wrong hypothesis.

A single percentage point of doubt gets blown up and multiplied, but that percentage point has to come from somewhere. It can't just spring forth from nothingness once you get past 50 iterations. That means you can't actually be 96.6264% certain at the start (Eliezer's pre-rounding figure); you must be at least a little lower.

The real question in my mind is when that 1% of doubt actually becomes a significant 5%->10%->20% that something's wrong. 8:4 feels fine. 104:100 feels overwhelming. But how much doubt am I supposed to feel at 10:6 or at 18:14?

How do you even calculate that if there's no allowance in the original problem?
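For reference, the idealized calculation, which makes no such allowance, is straightforward. Assuming the standard bookbag setup (one bag 70% red chips, the other 70% blue; those exact proportions are my assumption), the posterior depends only on the difference in chip counts:

```python
from fractions import Fraction

def p_red_bag(n_red, n_blue, p_red=Fraction(7, 10)):
    """Posterior probability we hold the mostly-red bag, from a 50/50
    prior, after drawing n_red red and n_blue blue chips (with
    replacement). Each red chip multiplies the odds by the likelihood
    ratio; each blue chip divides by it, so only the difference counts."""
    ratio = p_red / (1 - p_red)          # 7:3 per red chip
    odds = ratio ** (n_red - n_blue)     # prior odds are 1:1
    return odds / (1 + odds)

print(float(p_red_bag(5, 1)))                          # ≈ 0.967
print(p_red_bag(5, 1) == p_red_bag(1000004, 1000000))  # True
```

Which is exactly the problem: the model returns ~97% for 5:1 and for 1000004:1000000 alike, so any doubt about the setup itself has to come from outside the model.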

Comment by xachariah on Open Thread April 8 - April 14 2014 · 2014-04-08T14:09:38.471Z · score: 2 (2 votes) · LW · GW

Surely that can't be correct.

Intuitively, I would be pretty ready to bet that I know the correct bookbag if I pulled out 5 red chips and 1 blue. 97% seems a fine level of confidence.

But if we get 1,000,004 red and 1,000,000 blues, I doubt I'd be so sure. It seems pretty obvious to me that you should be somewhere close to 50/50 because you're clearly getting random data. To say that you could be 97% confident is insane.

I concede that you're getting screwed over by the multi-verse at that point, but there's got to be some accounting for ratio. There is no way that you should be equally confident in your guess regardless of if you receive ratios of 5:1, 10:6, 104:100, or 1000004:1000000.

Comment by xachariah on Open thread for January 1-7, 2014 · 2014-01-07T10:22:56.790Z · score: 3 (3 votes) · LW · GW

Hmm, reward myself after a fixed interval of 30 minutes? That's just crazy enough to work! (I have heard of the Pomodoro technique before, and I'm not quite sure why I didn't just go for that at the start.)

The hidden random timer is to make myself resilient to extinction and ingrain the habit even without reward. Although randomly choosing to reward at the end of pomodoros would work too. IIRC, a variable-interval schedule is the reward structure that survives the longest without extinction, whereas a variable-ratio reward structure creates the most vigorous workers.

Also, I think what you describe is a conditioned reinforcer and not an intermittent one. What I mean is that, after long enough, the subject would become attached to the coin flip itself as a partial reward. Kind of like clicker training for animals, or how a shot at a jackpot pull is rewarding even when it doesn't pay out. Then you could use the stronger conditioned-training systems...

Your suggestion is brilliant. Aaaand now I've got "write a gamblerdoro app" on my to-do list.

Comment by xachariah on Open thread for January 1-7, 2014 · 2014-01-07T05:54:33.611Z · score: 6 (6 votes) · LW · GW

Dumb reinforcement question: How do I reward the successful partial-completion of an open ended task without reinforcing myself for quitting?

Basically I'm picking up the practice of using chocolates as reinforcement. I reward myself when I start and when I finish. This normally works very well. Start doing dishes -> chocolate -> do dishes -> finish doing dishes -> chocolate. It seems viable for anything with discrete end states.

Problem - I've got a couple long term tasks (fiction writing and computer program I'm making) that don't have markers, and I can put anywhere from 30 minutes to 3 days into them without necessarily seeing a stopping point. I'm worried that rewarding chocolates whenever I get up from working will (in the long run) reinforce me to quit more frequently. I don't want to end up with a hummingbird work ethic for these tasks.

How should I reinforce to maximize my time-on-task?

(So far my best plan is to write a smartphone app with a hidden random timer, between 5 and 55 minutes on a bell curve, that goes off, and I reward myself chocolate if I'm on task when the alarm activates. But there are logistical hurdles, and it seems like quite a bit of work for something that might be solved more easily otherwise. Plus, I don't know what bad behavior it might incentivize.)
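For what it's worth, the core of that plan is only a few lines. A sketch (the function names and the 30-minute mean / 10-minute spread are my own picks, chosen to fit the 5-55 window):

```python
import random
import time

def draw_interval(mean=30.0, sd=10.0, lo=5.0, hi=55.0):
    """Hidden interval in minutes: roughly bell-curved around the mean,
    clipped to the 5-55 minute window."""
    return min(max(random.gauss(mean, sd), lo), hi)

def gamblerdoro_cycle():
    """Wait out one hidden interval, then prompt the on-task check.
    The length is never shown, so the reward stays unpredictable."""
    time.sleep(draw_interval() * 60)
    print("Timer! If you're on task right now, eat a chocolate.")

# gamblerdoro_cycle()  # uncomment to run a single hidden-timer cycle
```

Clipping a Gaussian like this isn't a true bell curve at the edges, but for a reward schedule the exact shape shouldn't matter much; unpredictability is the point.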

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, part 28, chapter 99-101 · 2013-12-15T00:24:06.703Z · score: 15 (15 votes) · LW · GW

I'd think that unicorn blood has unique properties over phoenix tears.

Otherwise Quirrell would be tracking down phoenixes and... showing them the first 5 minutes of 'Up' or something.

Comment by xachariah on 2013 Census/Survey: call for changes and additions · 2013-11-15T01:18:16.960Z · score: 2 (2 votes) · LW · GW

I normally understand the LW use of 'taboo'.

It's just that 'taboo sex' brings up its own meaning/mental image faster.

Comment by xachariah on 2013 Census/Survey: call for changes and additions · 2013-11-14T20:37:44.305Z · score: 4 (4 votes) · LW · GW

"Taboo 'sex'" might want to be rephrased though.

Until I saw Luke's response I thought it meant "Yes", "No", and "Yes, and it was really kinky sex!"

Comment by xachariah on 2013 Census/Survey: call for changes and additions · 2013-11-14T20:34:35.292Z · score: 1 (1 votes) · LW · GW

"Voting in primaries" is US specific, but it is significantly stronger than "voting in other elections." We have an order of magnitude more people voting in state elections than in primaries.

In fact, it's probably the strongest thing that you can do to influence politics in America. It's significantly rarer than volunteering to help elect parties or writing letters to your senator, and everyone who's at a primary already does those things.

Comment by xachariah on 2013 Census/Survey: call for changes and additions · 2013-11-14T20:15:54.703Z · score: 0 (0 votes) · LW · GW

According to them, everyone does research before voting.

My family members aren't familiar with even the most basic differences between the executive and legislative branches, and routinely make mistakes that would be cleared up by a first-year understanding of government. They attribute blame or praise to one branch for things it couldn't possibly be responsible for, given how the separation of powers works.

But they've all "done their research, and [they] know a lot better than [I] do about who to vote for."

Comment by xachariah on The 50 Shades of Grey Book Club · 2013-08-26T12:45:52.938Z · score: 20 (22 votes) · LW · GW

This seems dangerous. Last time I did something like this, I became a huge fan of ponies.

Comment by xachariah on Are ‘Evidence-based’ policies damaging policymaking? · 2013-08-22T21:37:41.156Z · score: 4 (10 votes) · LW · GW

What the heck does opposing 'Evidence-based' policy mean that you support?

Non-evidence based policy? Really?

Super-evidence-based policy? (That's some damn interesting marketing propaganda.)

I literally cannot wrap my head around what the first article wants us to base our policy on except "listen to what we say, and ignore any contrary evidence."

Comment by xachariah on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-28T05:08:53.070Z · score: 1 (7 votes) · LW · GW

raising the minimum wage makes lower-productivity workers permanently unemployable, because their work is not worth the price, so no one can afford to hire them any more.

Employment is a function of being "worth the price," as you put it. But "worth the price" is not a fixed point; it is a function of demand. If only a handful of people want to buy your product, adding another worker at $5/hr may not be worth the price. If everyone in the world were willing and able to buy your product, you'd hire even at $50/hr.

Demand is a function of employment and wages. If wages go up then demand goes up... which increases employment.

Increasing the minimum wage has never been shown to drive jobs away.

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-27T04:51:29.606Z · score: 6 (6 votes) · LW · GW

until one day he took it off and got screwed

He took it off and gave it to his son. In canon he meets death intentionally.

Comment by xachariah on Instrumental rationality/self help resources · 2013-07-25T11:02:35.934Z · score: 0 (0 votes) · LW · GW

Correct. My point is that the Blueprint has conflicting messages about male attraction. It says one thing explicitly and a very different thing implicitly.

I hold that the implicit teachings more closely match reality.

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, part 24, chapter 95 · 2013-07-21T23:28:28.446Z · score: 1 (1 votes) · LW · GW

The people who point this out would be asked "Where's the proof?"

And if they could produce some, everyone would believe. And if they couldn't produce any... well why should they believe it in the first place?

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, part 24, chapter 95 · 2013-07-21T23:27:09.964Z · score: 2 (4 votes) · LW · GW

Your intuitions about evolution and my intuitions must be drastically different.

I can imagine no possible world where human bodies were attached to an immortal decision-making engine, on an evolutionary timescale, where human brain biology still looks practically indistinguishable from all other mammal brain biology and where human grief behavior still corresponds to other mammal grief behavior.

Comment by xachariah on Instrumental rationality/self help resources · 2013-07-20T09:39:09.789Z · score: 0 (0 votes) · LW · GW

whoops, misplaced a word. I've edited it.

Comment by xachariah on Rationality Quotes July 2013 · 2013-07-20T04:26:20.279Z · score: 5 (5 votes) · LW · GW

If wishes were horses we'd all be eating steak.

  • Jayne Cobb, Objects in Space, Firefly
Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, part 24, chapter 95 · 2013-07-19T23:44:44.971Z · score: 8 (10 votes) · LW · GW

It's not Harry's observations; it's everybody's observations of the world. People don't act like souls exist. If Dumbledore really thought that people just go on to another great adventure when they die, he wouldn't have a bunch of pedestals of broken wands.

Nobody in HPMOR believes in souls or acts like they exist. That's why Harry can decisively conclude that they don't exist.

Comment by xachariah on Instrumental rationality/self help resources · 2013-07-19T20:32:27.077Z · score: 1 (3 votes) · LW · GW

Yes, I disagree. The blueprint covers that both sexes' attraction is value-based. Women's attraction is dynamic because men's value is dynamic; men's attraction is static because women's value is static (looks-based). I'd argue that women's value is static because they don't know how to hold intrinsic value and project that value to others aside from with their looks, just as 90% of men don't know how to do so either.

A repeated message in the blueprint is the idea that you'll become attractive to women, sleep with a lot of attractive girls, then find the one that you really want, use your blueprint skills to maximize your chance of getting her, and settle down with her when you're ready to exit the game. This is basically the promise that's made throughout. However, the one you really want isn't defined as the hottest girl, but the awesome girl that you want to be with more than anything. There's an implicit acknowledgement that traits other than physical attractiveness matter when men look at women.

My argument is that yes those traits matter, and yes they're the same traits that the blueprint teaches men to have.

Comment by xachariah on Instrumental rationality/self help resources · 2013-07-19T10:10:44.250Z · score: 5 (7 votes) · LW · GW

The blueprint makes that distinction but it's wrong. Male attraction is isomorphic to female attraction. The blueprint simply doesn't look into what it takes to attract men, so it doesn't make any statements about male attraction other than the superficial.

Anecdotally, as I became more attractive to women I became more attractive to men too. Not in a gay way, they just wanted my approval more and listened more and wanted to be my friends more than before. I felt the same way about my PUA friends as well; I could tell that they were getting cooler and I just wanted to be around them more.

There's no doubt in my mind that the teachings work on men too.

Comment by xachariah on Instrumental rationality/self help resources · 2013-07-18T23:58:47.277Z · score: 3 (3 votes) · LW · GW

Yes. It's not exclusively about picking up women, and from what I can recall the vast majority is about becoming an attractive person in general. Even the stuff about attracting women should generalize to women/gay men attracting straight/gay men.

Comment by xachariah on Open thread, July 16-22, 2013 · 2013-07-16T04:16:58.769Z · score: 4 (4 votes) · LW · GW

Karen Pryor's Don't Shoot the Dog.

Just kidding... sorta (Spoiler: It's a book on behavior training.)

Comment by xachariah on [LINK] If correlation doesn’t imply causation, then what does? · 2013-07-12T05:54:47.897Z · score: 1 (5 votes) · LW · GW

Seems like a much longer (and harder to read) version of Eliezer's Causal Model post. What can I expect to get out of this one that I wouldn't find in Eliezer's version?

Correlation doesn't imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing 'look over there'.


Comment by xachariah on Seed Study: Polyphasic Sleep in Ten Steps · 2013-07-12T00:57:46.618Z · score: 14 (14 votes) · LW · GW

Having done Uberman in the past, I'd like to recommend a few tweaks and general advice.

1) Consider swapping from 20 minute naps to 24 minute naps. The optimal is 24 minutes asleep with 1-2 minutes to fall asleep. I can't seem to find the original article by NASA atm, but here's an article referencing NASA's original study on optimal naps.

2) Devise a program to get from part 7 to part 10 instead of "try to space out longer". I did a standard uberman schedule and going even 20 minutes past my appointed nap time would immediately put me into sleep deprivation which risked oversleeping an alarm. I'm not sure how you plan to soft adjust to longer spaces between naps, but just winging it while sleep deprived seems like a recipe for danger.

3) Stock up on food. Expect to eat 50% more.

4) Make a list of things to do while sleep deprived at 4am for the adjustment period. I thought I was going to learn a new language with Rosetta Stone; I ended up reading fanfiction and watching through TV series. I simply didn't have the mental juice to do anything until after I'd adjusted.

5) Set up a sleep cycle calendar so you'll know what you're going to do every wedge, and what you're doing every day. It's very easy to lose track of days on a polyphasic schedule.

6) Buy a sleep mask. It's so much easier to sleep during the day with one.

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94 · 2013-07-09T01:34:00.951Z · score: 2 (2 votes) · LW · GW

ch 30

He'd sustained that Transfiguration for seventeen days, and would now need to start over.

Could've been worse. He could've done this fourteen days later, after Professor McGonagall had approved him to Transfigure his father's rock. That was one very good lesson to learn the easy way.

Note to self: Always remove ring from finger before completely exhausting magic.

This implies to me that his finger would've gotten wrecked if that had been a rock. Remember, finite on the rock blew out both the front and back of the troll's head. It wasn't just expanding in a confined spot; it expanded fast enough to blow out both sides. Explosive de-transfiguration. Certainly enough to tear off a ten-year-old's finger.

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94 · 2013-07-08T22:14:33.169Z · score: 5 (5 votes) · LW · GW

Harry will already lose a finger if anyone finite's his ring. He practiced with the marshmallow because the rapid expansion of the rock would tear off his finger.

Ringmione isn't much more of a personal risk to him.

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T23:46:54.759Z · score: 3 (3 votes) · LW · GW

He's familiar with cryonics then, or at least the concept of suspended animation.

It was another good outside-the-box idea, but Harry told his brain to keep thinking...

The next line implies that he'd have used the plan if he didn't immediately think up a better one. Any plan he comes up with to save Hermione has to be at least as likely to succeed as cryonics.

Comment by xachariah on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T21:57:56.563Z · score: 8 (10 votes) · LW · GW

Why are people here reacting like Hermione is perma-dead?

I get that they'd act that way on reddit, but people here actually believe and sign up for cryonics. Harry's got a whole team of Alcor cryonic specialists right in his wand. And if he can't manage the magic, Dumbledore can. Hermione's soul and magic may have exploded in an impressive lightshow, but her brain is still fully oxygenated and hasn't even begun to decompose.

Everything that makes her her is still doing fine.

(And on a meta level, Eliezer knows that fictional examples are strong drivers of behavior. A fictional example of cryonics working would be big for cryonics adoption.)

Comment by xachariah on On manipulating others · 2013-06-16T21:55:33.527Z · score: 26 (28 votes) · LW · GW


Never use "I'm too good at something to win" or "I only lose because other people are so bad". Those sorts of explanations are never true. Not ever.

I don't know if there's some kind of word for this fallacy (maybe a relative of the Dunning-Kruger effect), but if your mind ever uses it in the future then you need to give your logic center a curbstomp in the balls. This sort of logic is ego protection bullshit. Hearing this explanation is the number one indicator that a person will never improve in a skillset.

How could they possibly get better if they think they already have the answer and it doesn't involve any work on their part?

Here's my alternate hypothesis. Manipulating people is hard and takes tons of practice. You haven't put in your 10,000 hours.

Edit: Also, you aren't getting downvoted because this belongs in the Open Thread. The downvotes are because you're wrapped in one of the most dangerous self-delusions that exists. It's even more insidious than religion in some ways, because it can snake its way into any thought about any skillset. The good news is that you've given it voice and you can fight it. And I hope you do.

Comment by xachariah on Rationality Quotes June 2013 · 2013-06-09T01:47:42.034Z · score: 7 (7 votes) · LW · GW

I'm fairly certain that's not how you're supposed to develop a habit of gratitude. It's not about doublethinking yourself into believing you like things that you dislike; it's to help you notice more things you like.

I've been keeping a gratitude journal. I write three short notes from the last day where I was thankful for something a person did (e.g., saving me a brownie). Then I take the one that makes me happiest and write a one-paragraph description of what occurred and how I felt, so that writing the paragraph makes me relive the moment. Then I write out a note (usually transcribed later) to that person in my gratitude journal.

When I think of that person or think back to that day, I'm immediately able to recall any nice things they did that I wrote down. Also, as I go through my life, I'm constantly looking for things to be thankful for, and notice and remember them more easily.

If you do something like in the quote, it seems more likely that you'll remember negative things (that you pretend are positive). It goes against the point of the exercise.

Comment by xachariah on Exercise isn't necessarily good for people · 2013-06-09T00:52:40.911Z · score: 18 (20 votes) · LW · GW

For 12% of people it makes blood pressure higher, not worse. The presenter chose the terms 'adverse' and 'worse' for his own purposes.

My girlfriend has chronic hypotension, and can easily faint or become dizzy when standing or if she doesn't have enough water. Anecdotally, regular exercise seems to help prevent that. It's hard to find numbers, but the closest thing to solid I could find is that 26% of people with diabetes (couldn't find general population) suffer from hypotension.

If the 32% of people seeing no change or an increase in blood pressure fall on the bottom end of the blood-pressure spectrum, they're getting exactly what they need, regardless of how the speaker labels the results.

Comment by xachariah on Post ridiculous munchkin ideas! · 2013-05-11T01:39:19.598Z · score: 2 (2 votes) · LW · GW

This seems very interesting, and it's really cool that you've already been working on it. To clarify, you said you don't eat or drink anything unless it's a reward. Does this mean halting all meals?

How do you manage to eat healthily if all food has to stay within arm's reach? I suppose some fruits could stay out, but what about cooked meats or vegetables?

What do you do for recreation times: hanging out with others, visiting relatives, or just going to the beach or something, etc?

Comment by xachariah on Use Search Engines Early and Often · 2013-05-05T14:20:27.651Z · score: 9 (11 votes) · LW · GW

Why do you advertise for Goodsearch? As far as I can tell, Goodsearch itself is a for-profit LLC that makes its money by drawing referrals to Yahoo, by way of putting kittens and children on its front page and making people feel good about doing searches. They're just trading a portion of revenue for increased market share.

But you're trading time. You earn 1 cent per search; at minimum wage, that's 5 seconds of labor time. A rough estimate of their criteria says that less than half my searches would be eligible for donation. So, for Goodsearch to be worth it, I'd need to be able to find exactly what I'm looking for on Yahoo no more than 2.5 seconds slower than on Google/DuckDuckGo. A quick Yahoo search shows that to be off... probably by at least an order of magnitude. If you think you'd personally get better results, by all means go ahead.

Time has value too, and that's what you're spending when you switch to Goodsearch.
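Spelling out the arithmetic, assuming the US federal minimum wage of $7.25/hr and my rough guess that half of searches are donation-eligible:

```python
# Break-even: how much slower can each Yahoo search be before the
# 1-cent donation costs more in time than it's worth?
wage_per_hour = 7.25   # US federal minimum wage, in dollars (assumed)
donation = 0.01        # dollars earned per eligible search
eligible = 0.5         # rough guess: half of searches qualify

seconds_per_cent = donation / wage_per_hour * 3600   # ≈ 5 seconds
break_even_slowdown = seconds_per_cent * eligible    # ≈ 2.5 seconds
print(round(break_even_slowdown, 1))                 # 2.5
```

Any higher hourly value on your time only shrinks that break-even window further.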

Comment by xachariah on [Link] 2012 Winter Intelligence Conference videos available · 2013-04-30T21:57:55.170Z · score: 0 (0 votes) · LW · GW

The labeling isn't very confusing right now, but in 9 months it will be.

Comment by xachariah on Minor, perspective changing facts · 2013-04-24T02:58:23.100Z · score: 4 (4 votes) · LW · GW

Woah. Instead of potato chips, I could be eating an equal amount of bacon.

I'm never going to eat another potato chip again.

Comment by xachariah on Pascal's wager · 2013-04-22T06:40:36.052Z · score: 2 (4 votes) · LW · GW

Regarding whitespace, I usually like this style of spacing in articles. It makes it much easier to see what's going on and identify clusters of ideas.
It's like punctuation for paragraphs!

In this case, however, the whitespace seemed to have been placed at random and did not separate ideas.

Comment by xachariah on New applied rationality workshops (April, May, and July) · 2013-04-16T22:19:54.444Z · score: 2 (2 votes) · LW · GW

Are you suggesting that rationality takes the same level of one-on-one contact that dancing does?

I'm sure it wouldn't be too much trouble to go out and find rationality partners, if that's what it took. I'd still need the curriculum though.

Comment by xachariah on New applied rationality workshops (April, May, and July) · 2013-04-16T04:43:04.604Z · score: 1 (1 votes) · LW · GW

I can see the usefulness of networking. Though I don't feel like I'm in the phase of my life where I'd want to go to such lengths just to network and have excellent conversations. I could imagine that being worthwhile enough to me one day.

It's good to hear about the followups. They mentioned that in the original post, but I didn't know how extensive it was. I suppose it's unfair to demand concrete data, when dealing with such small sample sizes.

Reference class tennis, but I did learn dancing via youtube + a partner. A couple of years of dance classes were pretty useless compared to how much I learned after a month of study online.

Comment by xachariah on New applied rationality workshops (April, May, and July) · 2013-04-16T03:20:34.647Z · score: 5 (5 votes) · LW · GW

I have a couple of questions:
1) Does the $3900 cost primarily cover the cost of the workshop itself, or is it mostly used as a revenue source for CFAR to keep the doors open year-round?

2) What advantages does the weekend retreat system offer that other systems don't, specifically distance learning?

3) Are there any plans to expand into distance learning, for example into the Khan Academy, edX, Udacity model?

Everything you've mentioned sounds great, but I'm inherently skeptical of a weekend retreat format. After all, even gay cure camps 'work' and have their testimonials; how do you control for the self-help effect? Additionally, it seems odd to me to be building a program around teaching rationality in-person, when every other educational institution is trying to move in the opposite direction. I'm just curious about why you've chosen to structure the program the way it is.

Exploiting the Typical Mind Fallacy for more accurate questioning?

2012-07-17T00:46:26.935Z · score: 31 (36 votes)

Punctuality - Arriving on Time and Math

2012-05-03T01:35:57.920Z · score: 84 (87 votes)

Harry Potter and the Methods of Rationality discussion thread, part 12

2012-03-25T11:01:11.948Z · score: 5 (8 votes)