Help create an instrumental rationality "stack ranking"?

post by Elliot_Olds (internety) · 2012-06-28T06:06:24.237Z · LW · GW · Legacy · 12 comments


I recently heard about SIAI's Rationality Minicamp and thought it sounded cool, but for logistical/expense reasons I won't be going to one. 

There are probably lots of people who are interested in improving their instrumental rationality and who know about and like LessWrong, but who haven't read the vast majority of its content, because there is just so much material and the practical payoff is uncertain.

It would be cool if it were much easier for people to find the highest-ROI material on LessWrong.

My rough idea for how this new instrumental rationality tool might work:

 

 

Do you think others would find this useful? Anyone have suggested improvements?

12 comments

Comments sorted by top scores.

comment by JenniferRM · 2012-06-28T17:19:49.417Z · LW(p) · GW(p)

Pretests. Pretests are critical.

Replies from: jsalvatier
comment by jsalvatier · 2012-06-28T17:53:57.292Z · LW(p) · GW(p)

Whoa! Interesting! Here's the pdf for the curious.

Replies from: gwern
comment by gwern · 2012-06-28T22:18:56.419Z · LW(p) · GW(p)

Nifty. I think I'll add those 2 links to my Spaced repetition page.

EDIT: OK, maybe not. The PDF turns out to not support the blog post claims of causal efficacy, since it's doing something different.

comment by Vladimir_Golovin · 2012-06-28T11:15:54.404Z · LW(p) · GW(p)

This is something I've been thinking about for quite some time:

I had an idea for a web-based app for evaluating instrumental rationality techniques, something like Digg or a UserVoice-based forum where techniques get upvoted, downvoted, merged, separated, and discussed. However, I don't currently have a solution for the problem of 'impulse upvoting' ("hey, this technique sounds cool, let's upvote it!") -- I don't know how to make the upvotes reflect the long-term usefulness of the techniques.

My current best idea regarding the impulse upvote problem is "making techniques pay rent", literally:

Each app user has a fixed small number of homepage slots for techniques. If a technique doesn't work for a user, they can kick it out of the slot and replace it with another promising technique. Also, users can purchase more homepage slots for real-world money. Or the app can go even further: it can be free to download, but each slot will cost the user $0.99 per month.

This way we can rank the techniques by their "rent time", i.e. the total time they spend in users' slots, counted only for active users of the app.
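A minimal sketch (all names hypothetical) of how such a rent-time ranking could be computed, assuming the app logs when a technique enters and leaves each user's slots:

```python
from collections import defaultdict
from datetime import datetime

class SlotEvent:
    """One technique's stay in one user's homepage slot."""
    def __init__(self, technique, start, end=None):
        self.technique = technique
        self.start = start
        self.end = end  # None means the technique still occupies the slot

def rent_time_ranking(users, now):
    """Rank techniques by total time spent in active users' slots."""
    totals = defaultdict(float)
    for user in users:
        if not user["active"]:  # rent time only counts for active users
            continue
        for event in user["slot_events"]:
            end = event.end or now
            totals[event.technique] += (end - event.start).total_seconds()
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Example: the inactive user's slot history is ignored.
users = [
    {"active": True, "slot_events": [
        SlotEvent("pomodoro", datetime(2012, 6, 1), datetime(2012, 6, 20)),
        SlotEvent("beeminder", datetime(2012, 6, 20)),
    ]},
    {"active": False, "slot_events": [SlotEvent("pomodoro", datetime(2012, 5, 1))]},
]
print(rent_time_ranking(users, now=datetime(2012, 6, 28)))
```

One refinement would be to normalize by the number of users who ever slotted a technique, so an obscure but sticky technique isn't drowned out by a popular mediocre one.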

Replies from: GuySrinivasan
comment by SarahSrinivasan (GuySrinivasan) · 2012-06-28T16:55:25.052Z · LW(p) · GW(p)

Many of the techniques I've found on Less Wrong have increased my available time, money, energy, and mood. If the only way I could have learned and used a technique was to pay money for it, I would gladly have paid. If there were a way to pay back, say, 10% of my actual gains from How to Beat Procrastination to Luke to do with as he wishes, I would press that button. Issues include:

- not correctly estimating the counterfactual (without technique X, how well would I really have done? Surely not a complete crash-and-burn... and what were the actual consequences of not doing well? Surely not as bad as my losses-overestimating brain claims...);
- overcounting "extra time" that partly gets filled with things I will later remove for more "extra time";
- sending the rewards to the proximate cause of my learning the technique rather than to someone further up the origin tree;
- people gaming the system as soon as it involves money they can steal;
- and probably others too marginal to bring to mind.

Replies from: gwern
comment by gwern · 2012-06-28T22:15:43.378Z · LW(p) · GW(p)

If there were a way to pay back, say, 10% of my actual gains from How to Beat Procrastination to Luke to do with as he wishes, I would press that button.

I think it's the grey button midway down http://singularity.org/donate/

Replies from: GuySrinivasan
comment by SarahSrinivasan (GuySrinivasan) · 2012-06-28T23:40:54.462Z · LW(p) · GW(p)

Sure, maybe, for Luke. What I want is not for SI to receive more money because Luke shared content that improved my life.

Instead, I want the inconveniences removed that make it hard to establish the incentive "whosoever shares content that significantly improves lives will receive part of that improvement in compensation", regardless of whether the recipient then turns around and sends that compensation to SI. Then I want to do my part in establishing that incentive by pressing, and letting it be known that I press, all of those buttons, while also magically not incurring the cost of lots of low-quality fishing posts.

That said, parts of me were not happy with my giving as much as I have to SI until I pointed out to them that I clearly paid more per unit of benefit for a college education... those parts are a little silly anyway, so they didn't notice that I overpaid for the benefits I gained from college, and they're happy now. ;)

comment by Viliam_Bur · 2012-06-28T14:24:41.873Z · LW(p) · GW(p)

The usefulness of a technique may depend on the domain where one wants to use it. For example, the technique "if you don't know something, google it and follow the 3 highest links" depends on whether your problems are described on the internet and on how trustworthy those answers are. For "how do I join two strings in Python?" this technique works great. For financial questions, not so great, because website owners have a huge incentive to promote wrong answers.
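(For that particular question, the top answers one finds do check out; the standard Python idioms are:

```python
first, second = "Less", "Wrong"
print(first + second)            # simple concatenation: LessWrong
print("".join([first, second]))  # str.join, preferred when combining many pieces
```

No such easy verification exists for "where should I invest my savings?".)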

Also, the same technique may have different results for different kinds of people, because of their environment, prior knowledge, personality, gender, social class, financial situation, or whatever. If you omit those details, you only get average results across the general population, which is not bad either, but it does not lead to an optimal choice.

Measuring the impact of a technique is difficult. How sure are you that it was this technique that helped, and not something else? Maybe it was a placebo effect or just a coincidence. If we had hundreds of data points, the coincidences would average out, but we probably won't have that much data.
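(A quick simulation of that averaging-out, with made-up numbers: each user's measured benefit is a true effect plus individual noise from placebo, coincidence, and circumstance, and the study-to-study spread of the estimated effect shrinks roughly as 1/sqrt(n):

```python
import random

def effect_spread(n_users, true_effect=0.2, noise_sd=1.0, trials=1000):
    """Spread (std. dev.) of the mean measured effect across simulated studies."""
    means = []
    for _ in range(trials):
        sample = [true_effect + random.gauss(0, noise_sd) for _ in range(n_users)]
        means.append(sum(sample) / n_users)
    grand = sum(means) / trials
    return (sum((m - grand) ** 2 for m in means) / trials) ** 0.5

for n in (10, 100, 1000):
    print(f"{n:5d} users -> spread of estimated effect ~ {effect_spread(n):.3f}")
```

With a true effect of 0.2 and per-user noise of 1.0, you need on the order of a hundred users before the estimate reliably rises above the coincidences -- exactly the "we probably won't have that much data" worry.)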

comment by ChristianKl · 2012-07-01T22:02:38.722Z · LW(p) · GW(p)

It always amazes me how a self-proclaimed rationalist community that wouldn't accept a medical intervention based on anecdotal evidence has no problem accepting techniques for improving rationality based on anecdotal evidence.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-07-01T22:37:50.376Z · LW(p) · GW(p)

It ... amazes me how a self-proclaimed rationalist community ... has no problem accepting techniques ... based on anecdotal evidence.

What evidence do you base this observation on? :-) The post is at +4 after a few days, which is barely better than going negative, and there isn't much on-topic discussion in the comments. Whatever positive reception it gets is probably due to hypothetical variations on the suggestion that would indeed have some merit, unlike the specific thing proposed.

comment by Fhyve · 2012-07-01T06:59:12.559Z · LW(p) · GW(p)

I would like basic short-answer/multiple-choice tests for LessWrong content. I have not been through a lot of the Sequences, but I have been around people who have, been in discussions about those sequences, and read a lot of important posts from sequences that I haven't read in full. I would like to know whether I actually know the material. Such tests could also be a good teaching tool in themselves (see the comment on pretests), and they could assess knowledge of LessWrongian terms and phrases.

Even if they don't really test one's skill as a rationalist, such tests could still be incredibly useful.
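(A minimal sketch of such a test as a pretest/posttest pair -- the question content below is made up purely for illustration:

```python
QUESTIONS = [
    # (prompt, choices, index of the correct choice) -- sample content only
    ("'Making beliefs pay rent' means...",
     ["charging for blog access",
      "requiring beliefs to constrain anticipated experience",
      "betting money on every prediction"],
     1),
]

def administer(questions):
    """Ask each multiple-choice question on stdin; return the fraction correct."""
    correct = 0
    for prompt, choices, answer in questions:
        print(prompt)
        for i, choice in enumerate(choices):
            print(f"  {i}) {choice}")
        if int(input("answer number: ")) == answer:
            correct += 1
    return correct / len(questions)

pre = administer(QUESTIONS)    # pretest, taken before reading the material
# ... the reader studies the sequence here ...
post = administer(QUESTIONS)   # posttest, taken afterwards
print(f"pretest {pre:.0%} -> posttest {post:.0%}")
```

Per the pretest comment above, even the pretest itself would plausibly improve learning.)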

comment by keefe · 2012-06-30T00:03:27.080Z · LW(p) · GW(p)

This is a nicely written proposal for a practical, actionable idea. If you're not in tech, then you should consider doing this. It starts with a practical idea and has ways to branch out from there toward longer-term goals, like including psychometrics and so forth.

I'm biased due to my open-source project, but I think this is the kind of idea that fits well with cryptographically secure peer-to-peer systems that then aggregate into groups, since individual opinions are highly variable (correctly so, as different brains need different training).