POSITION: Design and Write Rationality Curriculum
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-19T06:50:04.438Z · LW · GW · Legacy · 179 comments

Update March 2012: We are still accepting and processing applications for this work on an ongoing basis.
Imagine trying to learn baseball by reading essays about baseball techniques. [1]
We're trying to make the jump to teaching people rationality by, metaphorically speaking, having them throw, catch, and hit baseballs in the company of friends. And as we develop curriculum to do that, we're noticing that we often improve quite a lot ourselves in the course of coming up with 20 examples of the sunk cost fallacy. This suggests that the best of us have a lot to gain from practicing basic skills more systematically. Quoth Anna Salamon:
There are huge numbers of basic, obviously useful rationality habits that I do about 10% as often as it would be useful to do them. Like “run cheap experiments/tests often”, and “notice mental flinches, and track down the thought you’re avoiding”.
Eliezer Yudkowsky, Anna Salamon, several others paid on an hourly basis, and a few volunteers have been designing exercises and exercise-sets for a rationality curriculum. Our current working point is the exercises for "Motivated Cognition". Currently the only completed session is "Sunk Costs", which is still being tested - yes, we're actually testing these things repeatedly as we build them. The sessions are meant primarily to be performed in person, not read online, but the current version of the Sunk Costs material - presentation and exercise booklets - is nonetheless available as a sample: [0] [1] [2] [3] [4] [5]. This is a presentation on sunk costs in which background explanations are interspersed with "do as many of these exercises as you can in 3 minutes", followed by "now pair up with others to do the 'transfer step' parts, where you look for instances in your past life and probable future life."
We're looking for 1-2 fulltime employees who can help us build more things like that (unless the next round of tests shows that the current format doesn't work), and possibly a number of hourly contractors (who may be local or distant). We will definitely want to try your work on an hourly or monthly basis before making any full-time hires.
The complete labor for building a rationality kata - we are not looking for someone who can do all of this work at once, we are looking for anyone who can do one or more steps - looks something like this:
The complete labor for building a rationality kata - we are not looking for someone who can do all of this work at once; we are looking for anyone who can do one or more steps - looks something like this:
- Select an important rationality skill and clearly perceive the sort of thinking that goes into executing it.
- Invent several new exercises which make people's brains execute that type of thinking.
- Compose many instances of those exercises.
- Compose any background explanations required for the skills.
- Figure out three things to tell people to watch out for, or do, over the next week.
- Turn all of that into a complete 90-minute user experience, which includes random cute illustrations for the exercise booklets, designing graphics for any low-level technical points made, building a presentation, testing it in front of a live naive audience, making large changes, and testing it again.
We are not looking only for people who can do all of this labor simultaneously. If you think you can help on one or more of those steps, consider applying — for a full-time job, a part-time hourly gig (perhaps from a distance), or a volunteer role. We will want candidates to try hourly work or a trial month before we make any full-time hires. Salary will be SIAI-standard, i.e. $3K/month, but if you do strong work and Rationality-Inst takes off, your salary will eventually go much higher. Very strong candidates who can do large amounts of work independently may request higher salaries. You will be working mostly with Anna Salamon and will report to her (although in the short term you may also be working directly with Eliezer on the "isolate a useful skill and invent new exercises to develop it" phase).
If you think you have the idea for a complete rationality kata and want to develop the entire thing on your own, send us a short email about your idea - we're open to setting a lump-sum price.
Skills needed:
We need folks with at least one of the following skills (do not feel you need them all; you'll be part of a team; and repeated experience shows that the people we end up actually hiring report that they almost didn't contact us because they thought they weren't worthy):
- Catchy professional writing. We need folks who can take rough-draft exercises and explanations, and make them fun to read — at the level of published books.
- Curriculum design. We need folks who can zoom in on the component skills for rationality (the analogs of throwing, catching, keeping your eye on the ball), and who can invent new exercises that systematically practice those components. E.g., the thought process that goes from "sunk cost fallacy" to "transform a sunk cost to a purchased option".
- Example generation. Given an exercise, we need someone who can think of lots of specific examples from real life or important real-world domains, which illustrate the exact intended point and not something almost-like the intended point. E.g., turn "Sunk cost fallacy" into 20 story snippets like "Lara is playing poker and has bet $200 in previous rounds..." (Our experience shows that this is a key bottleneck in writing a kata, and a surprisingly separate capacity from coming up with the first exercise.)
- Teaching or tutoring experience in any subject (e.g., math / programming / science, martial arts / sports / dance, cognitive behavioral therapy, corporate trainings, social skills, meditation).
- Technical diagram design. We need someone who can be asked for "A diagram that somehow represents the human tendency to overweight near pains relative to distant pains", understand the concept that is being conveyed, and invent a new diagram that conveys it.
- Presentation design. The current intended form of a rationality kata involves a visual presentation with accompanying spoken words.
- Powerpoint and Photoshop polishing. See above.
- Illustration / cartooning. It would be nice if the exercises were accompanied by small, whimsical drawings. These drawings should prime the reader to both: (a) feel warmly toward the characters in the story-snippets (who will generally be struggling with rationality errors); (b) notice how ridiculous those characters, and the rest of us, are.
- Enough social initiative to gather guinea pigs and run many practice trials of draft curriculum, while collecting data.
Bonuses:
- Skill at running scientific literature searches; knowledge of the heuristics and biases literature, the literature on how to teach critical thinking or rationality, neuroscience literature, or other literatures that should inform our curriculum design;
- Background in game design, curriculum design, or in other disciplines that help with designing exercises that are fun and conducive to learning;
- Having read and understood the core Sequences; having a serious interest in learning and teaching rationality.
If this project appeals to you and you think you may have something to add, apply using this short form or just shoot us an email. Please err on the side of applying; so many freaking amazing people have told us that they waited months before applying because they “didn’t want to waste our time”, or didn’t think they were good enough. This project needs many sorts of talents, and volunteers are also welcome — so if you’d like to help launch an awesome curriculum, send us an email. Your email doesn’t have to be super-detailed or polished — just tell us how you might be able to contribute, and any experience we should know about.
[1] If the baseball analogy seems far-fetched, consider algebra. To learn algebra, one typically drills one subskill at a time — spending a day on exponent rules, for example, understanding why x^a * x^b = x^(a+b) and then practicing it bunches of times, in bunches of algebra problems, until it becomes part of one's problem-solving habits and reflexes, a step one can do fluently while attending to larger puzzles. If there were a world in which algebra had been learned only through reading essays, without subskill-by-subskill practice, it would not be surprising if the world’s best algebra practitioners could be outperformed by an ordinary student who worked diligently through the exercises in a standard textbook. We’d like you to help us build that first textbook.
Comments sorted by top scores.
comment by MC_Escherichia · 2012-01-19T21:20:49.612Z · LW(p) · GW(p)
As an aside: the use of "Org" (i.e. Rationality Org) seems really unusual and immediately makes me think of Scientology (Sea Org); am I unusual in having this reaction?
Replies from: Alicorn, AnnaSalamon, orbenn, Nornagest, Nick_Tarleton, ata, bbarth, David_Gerard, Giles, Raemon, Kaj_Sotala, Aleksei_Riikonen, Solvent, Bugmaster, None
↑ comment by AnnaSalamon · 2012-01-20T00:48:38.774Z · LW(p) · GW(p)
Yikes, thanks for mentioning this; I will stop saying "rationality org". JenniferRM actually brought this up to me once, but I forgot; there's nothing quite like having seven people agree with a point to make it stick in memory.
↑ comment by Nornagest · 2012-01-20T00:22:42.398Z · LW(p) · GW(p)
It crossed my mind as well. Hopefully it's just a placeholder name; an association with Scientology is a really bad thing if we're trying to avoid accusations of cultishness, particularly if katas or similar exercises are going to be a fixture of the organization.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-20T01:59:15.691Z · LW(p) · GW(p)
Placeholder name, check. We don't really have any good names at the moment.
Replies from: jimrandomh, Bugmaster, Solvent
↑ comment by jimrandomh · 2012-01-26T16:01:23.460Z · LW(p) · GW(p)
One problem with pitching "rationality" is that implying that someone lacks rationality puts them on the defensive; it sounds as though you're trying to bring people from below-baseline up to baseline, rather than from baseline to a higher level. I've gotten better reactions when using the phrase "advanced sanity techniques". Suggesting that someone study them conveys the existence of a higher level, without being perceived as a status attack. I think that if the name is descriptive, it's important that it contains something ("advanced" or an equivalent word) which clearly communicates the fact that it is not aimed at low-status people.
Replies from: thomblake, daenerys
↑ comment by thomblake · 2012-01-26T16:08:29.257Z · LW(p) · GW(p)
Absolutely agreed. While rhetorically it might work at cross-purposes with "raising the sanity waterline" (in the sense of raising people's expectations about "sanity"), it would be good to have a term that says "There's nothing wrong with you, but here's a way to be even better that you might not know about".
↑ comment by daenerys · 2012-01-26T17:20:47.191Z · LW(p) · GW(p)
Advanced critical thinking skills?
Both "rationality" and "sanity" imply that the person to whom the class is addressed isn't rational or sane. OTOH people already tend to think that "critical thinking skills" are a good thing to learn. (i.e. the popular cached thought: "A liberal arts degree may not prepare you for a specific job, but it does impart valuable critical thinking skills")
↑ comment by Bugmaster · 2012-01-20T04:31:05.661Z · LW(p) · GW(p)
I like "Waterline", as proposed by Alicorn.
Replies from: lessdazed
↑ comment by lessdazed · 2012-01-20T20:24:10.444Z · LW(p) · GW(p)
My friend says it makes her think it has to do with boating. There should be a separate focused attempt to come up with the best name.
Replies from: Wrongnesslessness
↑ comment by Wrongnesslessness · 2012-01-20T21:50:05.007Z · LW(p) · GW(p)
I agree. The waterline metaphor is not so commonly known outside LW that it would evoke anything except some watery connotations.
So, what about a nice-looking acronym like "Truth, Rationality, Universe, Eliezer"? :)
Replies from: PhilSchwartz, bbarth
↑ comment by PhilSchwartz · 2012-01-23T21:41:44.879Z · LW(p) · GW(p)
If there is concern that people outside of LW won't know the metaphor, then the name "Rationality Waterline" can be used at first with the goal of gaining enough recognition to move on to simply "Waterline" at a later date.
↑ comment by Nick_Tarleton · 2012-01-19T21:28:10.885Z · LW(p) · GW(p)
Same here.
↑ comment by David_Gerard · 2012-01-20T00:37:44.642Z · LW(p) · GW(p)
Having spent years as a Scientology critic, I have also become aware of how often people use "org" as an abbreviation for "organisation" in general, so I actually thought it was fine :-) How much do they want for rationality.org?
↑ comment by Kaj_Sotala · 2012-01-19T23:19:15.356Z · LW(p) · GW(p)
I have the same reaction.
↑ comment by Aleksei_Riikonen · 2012-01-20T10:17:38.781Z · LW(p) · GW(p)
No, not unusual. I had the same reaction, and assumed it's probably partly a deliberate joke to have such a placeholder name (or alternatively the Scientology connotation simply didn't occur to folks at SIAI).
I btw commented on this a couple of days ago in a comment to the SIAI blog, and note now that comments there seem to take a rather long time to be moderated for spam, as apparently no comments have appeared for many months. (Ok, sorry for the joke. More likely you've forgotten about the blog comments or something, rather than it really being a matter of the spam moderation that commenters are told might take some time.)
comment by cousin_it · 2012-01-19T13:52:36.730Z · LW(p) · GW(p)
Sometime ago you believed, correctly IMO, that you need a way of testing rationality skills first, and only then get busy on the exercises. What made you change your mind? (I hope it wasn't something like "we need to push ahead asap".) What's the current plan for preventing the slide into epistemic viciousness? (I hope it isn't something like "we will be smart and won't let it happen".)
Replies from: AnnaSalamon, wedrifid, None, John_Maxwell_IV, tetsuo55
↑ comment by AnnaSalamon · 2012-01-19T18:13:59.382Z · LW(p) · GW(p)
We are interested in developing rationality measures; if you have ideas for how to do this, please post; if you're interested in doing larger chunks of work toward developing such measures, please fill in the application form or email me. Blake Riley and I and some other rationality campers worked on this some over the summer, and slow work continues on the same front. Aaron Tucker and I made an experimental daily checklist that we've been playing with, for estimating one's own habits and progress. I'd love to see this work go faster. (I just added a checkbox about this to the application form; thanks for pointing that out; there was a similar item on the call for volunteers that I posted locally some weeks ago, but I forgot about it when posting this round).
It seems to me that rationality measures are valuable, but that creating exercises does not make our present lack of robust measures worse than it already is. Take a look at the linked unit on the sunk costs fallacy, above; when I tested it on newbies (and on LWers), they seemed interested, and started noticing sunk cost fallacy examples in their lives, and did not seem to be much flummoxed by questions of who was how rational or how one could really tell. The sequences already teach some thinking skill without measures (much as the dance class I took a few years ago helped my dancing some without ever measuring my skill). Measures would be helpful; but refraining from creating exercises until after we have measures does not seem helpful to me.
Replies from: cousin_it
↑ comment by cousin_it · 2012-01-19T18:31:04.528Z · LW(p) · GW(p)
creating exercises does not make our present lack of robust measures worse than it already is (...) they seemed interested, and started noticing sunk cost fallacy examples in their lives
Martial arts masters and psychotherapy gurus could say the same. Instead of sunk costs you could teach newbies to notice post-colonial alienation or intelligent design, and sure enough they'd get better at noticing that thing in their lives. I hear scientologists do lots of exercises too. Maybe creating exercises before measures is a positive expected value decision, but I wouldn't bet on that.
Replies from: Vladimir_Nesov, AnnaSalamon, John_Maxwell_IV
↑ comment by Vladimir_Nesov · 2012-01-19T22:31:00.763Z · LW(p) · GW(p)
"Sunk cost" is a pretty well-defined idea, we can reliably figure out whether something is a sunk cost, and whether a decision commits sunk cost fallacy, by checking whether the decision controls the amount of lost value and whether the (immutable) amount of lost value controls the decision. Skill at noticing sunk cost fallacy would then be ability to parse such situations quickly/automatically.
Testing effectiveness of training a skill is easier than testing usefulness of the skill, and I think figuring out how to train people to avoid a list of fallacies or to find correct decisions of standard kinds faster and more reliably is a reasonable goal, even if practical usefulness of having those skills remains uncertain.
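Nesov's two-way test (does the decision control the lost value, and does the immutable loss control the decision?) can be sketched as a pair of toy decision rules; everything below (the option names, payoffs, and threshold) is invented purely for illustration, not taken from the post:

```python
# Illustrative sketch of the "control" test above; all names and numbers
# here are invented for this example.

def rational_choice(options, sunk_cost):
    """A sunk cost never controls the decision: pick the best future payoff."""
    return max(options, key=lambda o: o["future_payoff"])

def sunk_cost_choice(options, sunk_cost):
    """Fallacious rule: the immutable loss controls the decision, steering it
    toward whatever we have already invested in."""
    committed = [o for o in options if o["already_invested"]]
    if committed and sunk_cost > 100:  # arbitrary threshold, for illustration
        return max(committed, key=lambda o: o["future_payoff"])
    return max(options, key=lambda o: o["future_payoff"])

options = [
    {"name": "finish the old project", "future_payoff": 10, "already_invested": True},
    {"name": "switch to a new project", "future_payoff": 50, "already_invested": False},
]

# Varying only the (immutable) sunk cost changes the second rule's output
# but never the first's, which is exactly the diagnostic described above.
print(rational_choice(options, sunk_cost=500)["name"])   # switch to a new project
print(sunk_cost_choice(options, sunk_cost=500)["name"])  # finish the old project
print(sunk_cost_choice(options, sunk_cost=0)["name"])    # switch to a new project
```

The point of the sketch is that the fallacy is detectable purely from the decision rule's input/output behavior, which is what makes "parse such situations quickly/automatically" a trainable skill.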
↑ comment by AnnaSalamon · 2012-01-19T18:38:32.347Z · LW(p) · GW(p)
How do you think we should proceed?
Replies from: orthonormal, cousin_it
↑ comment by orthonormal · 2012-01-19T19:20:30.813Z · LW(p) · GW(p)
The first task of your full-time hire should be coming up with rationality-measuring tools that are better than human intuition.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-19T19:47:38.950Z · LW(p) · GW(p)
If Anna and I can't think of a simple way, you seem to have a rather exaggerated idea of what the fulltime hire needs to be able to do. I don't understand why people are reading this ad and thinking, "Hm, they want Superperson!" But it clearly needs to be rewritten.
Replies from: orthonormal, bbarth, Kaj_Sotala
↑ comment by orthonormal · 2012-01-19T21:42:40.011Z · LW(p) · GW(p)
I would be very, very surprised if you and Anna literally came up with nothing of value on measuring rationality; I expect there's some raw material for a full-time employee to test, tweak and build on. This just seems to me like a higher priority than curriculum-building, and achieving a measure that's better than subjective impressions doesn't even seem impossible to me.
↑ comment by bbarth · 2012-01-19T20:30:51.994Z · LW(p) · GW(p)
Here's how typical people read typical job ads (typically), especially ones that are this long: Read the title. Scan for a dollar sign or the words "salary" or "salary range". If both are good enough, scan for the first bulleted list of qualifications. Most ads call these "required qualifications". If the reader meets enough of these, they scan for the second bulleted list of qualifications which is usually called "preferred qualifications". Then, if they meet enough of both of these, they'll go back and start reading in detail to understand the position better before they consider sending in an application or contacting the hiring entity for more information.
I suspect that most people expected your job ad to follow this form since it almost does. Your sections are labeled, effectively "needed" and "bonus". It's not until you get to reading the now-bolded details that you find out that not all of the "needed" stuff is required of the applicant and that essentially any one of the needed qualifications will be sufficient. Basically, you don't have any required qualifications, but you do have a general description of the sort of person you're interested in and a list of preferred qualifications. In this regard, the ad is defective as it fails to comport with the usual format of a typical ad.
Non-standard forms get experienced people's hackles up. It often indicates that there's something unprofessional about the organization.
↑ comment by Kaj_Sotala · 2012-01-19T20:44:23.510Z · LW(p) · GW(p)
It's a project that has people such as you and lukeprog involved in it. (Luke wasn't mentioned, but he was running the rationality camps etc., so people are going to associate him with this regardless of whether his name is actually mentioned.) You two can, with good reason, be considered Superpeople. I expect that many people will automatically assume that for a cause as important as this, you will only accept folks who are themselves Superpeople as well.
↑ comment by cousin_it · 2012-01-19T18:53:51.044Z · LW(p) · GW(p)
Don't proceed. Stay at the drawing board until you figure out a viable attack. Stay there until you die, if you have to.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-01-22T05:16:15.973Z · LW(p) · GW(p)
This seems like a rather extreme position to me. I'd be curious to hear you explain your thinking.
Replies from: cousin_it
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-01-22T05:22:44.808Z · LW(p) · GW(p)
To the extent that irrationality is a result of compartmentalization, this may be the same thing as creating a way to measure how effectively you are accomplishing your goals, which is going to vary between people depending on what their goals are.
For most interesting goals I can think of, creating a rigorous quantitative measure is next to impossible. However, there are a few goals, like running a mile in under four minutes, that lend themselves well to this approach. Perhaps SI could find a group of individuals engaged in such a goal and offer their services as rationality consultants?
↑ comment by wedrifid · 2012-01-19T16:05:26.565Z · LW(p) · GW(p)
Sometime ago you believed, correctly IMO, that you need a way of testing rationality skills first, and only then get busy on the exercises.
This is something that I was expecting them to do - or at least attempt - in the rationality bootcamp they ran last year. Yet they seemed to have lost all interest in testing by the time the camp came around. It seemed like a waste of potential.
↑ comment by [deleted] · 2012-01-19T19:39:41.339Z · LW(p) · GW(p)
Assume Eliezer instead said
"I'm recruiting to put together a rationality test. It's based on how you score on this series of individual questions. I am posting the "Sunk Costs" questions (see these linked PDF files), and we would like to hire people to develop this test further for other things which seem to be components of rationality."
This would appear to meet your objection of "Sometime ago you believed, correctly IMO, that you need a way of testing rationality skills first, and only then get busy on the exercises." because in the way I am casting the argument, they are working on a test.
However, functionally, this seems very similar to what they are doing right now.
That being said, I don't get an intuitive feeling that I'm refuting your central point, so either I need to improve my counter argument, I'm wrong about my refutation, or I need to update my intuition.
After trying to identify possible flaws in my argument, it occurs to me that a "test" would not have learning material such as the powerpoint. It would also have a grading metric. But it would be hard to develop a grading metric without the full list of topics which are being planned for the .pdf files (you can't develop a full rationality grading metric off of only sunk cost questions), and I feel like you would need to develop question and answer sets like those that are in the .pdf files whether you were making exercises or tests.
If I'm correct, another way of expressing your point might be "Fewer powerpoints with M&M rewards and mantra repetition. Those strike me as cultish. More questions like those in the .pdf files. You could use those to build a test, and I agree with your earlier point that testing is critical."
If I'm incorrect, can you help me understand where I went wrong?
Replies from: cousin_it
↑ comment by cousin_it · 2012-01-19T21:26:37.917Z · LW(p) · GW(p)
Sure, all exercises can also be viewed as tests, but they make for pretty narrow tests and risk being irrelevant to the big picture. I'd like a more comprehensive test that would use many subskills at once. For example, when learning a foreign language, a simple exercise may look like "conjugate this verb", and a comprehensive test may look like "translate this text" or "carry on a freeform conversation". When learning a martial art, a simple exercise may look like "punch the bag exactly as I show you", and a comprehensive test may look like "stay on your feet for two rounds against this guy".
It seems that comprehensive tests are often toy versions of real-life problems. They guide the development of simple exercises and let you tell good exercises from bad ones. If someone cannot imagine a comprehensive test for their skillset, I don't see how they convince themselves that their simple exercises are relevant to anything.
Replies from: rwallace
↑ comment by rwallace · 2012-01-20T18:37:24.671Z · LW(p) · GW(p)
Testing rationality is something of an ill posed problem, in part because the result depends greatly on context. People spout all kinds of nonsense in a social context where it's just words, but usually manage to compartmentalize the nonsense in a material context where they will be affected by the results of their actions. (This is a feature! Given that evolution wasn't able to come up with minds that infallibly distinguish true beliefs from false ones, it's good that at least it came up with a way to reduce the harm from false beliefs.) I'm not sure how to create an accurate test in the face of that.
Your martial arts analogy isn't a bad one. The outcome of a karate contest is often not the same as the outcome of a street fight between the same participants. There are any number of cases of a black belt karateka with ten years training getting into a fight with a scrawny untrained criminal, and getting his ass kicked in three seconds flat. Martial arts practitioners have had this testing problem for centuries and still don't seem close to solving it, which doesn't make for optimism about our prospects of solving the rationality testing problem this century. Given that, proceeding as best we can in the absence of a comprehensive and accurate test seems reasonable.
Replies from: None
↑ comment by [deleted] · 2012-01-20T19:04:18.223Z · LW(p) · GW(p)
People spout all kinds of nonsense in a social context where it's just words, but usually manage to compartmentalize the nonsense in a material context where they will be affected by the results of their actions.
But doesn't it seem that if you decompartmentalized with correct beliefs you should do way better? Possibly in a testable way?
Martial arts practitioners have had this testing problem for centuries and still don't seem close to solving it, which doesn't make for optimism about our prospects of solving the rationality testing problem this century.
See MMA. There is still a problem of whether being a good fighter is as important or related to being good at self-defense, but martial arts are now measured at least relative to all fighting styles.
Replies from: rwallace
↑ comment by rwallace · 2012-01-20T19:36:13.102Z · LW(p) · GW(p)
But doesn't it seem that if you decompartmentalized with correct beliefs you should do way better?
Maybe; there are all sorts of caveats to that. But that aside, more directly on the question of tests:
Possibly in a testable way?
You still run into the problem that the outcome depends greatly on context and phrasing. There is the question about turning over cards to test a hypothesis (the Wason selection task), on which people's performance dramatically improves when you rephrase it as an isomorphic question about social rules. There are the trolley questions and the specks-versus-torture question and the ninety-seven-percent-versus-one-hundred-percent question, on which the right answer depends entirely on whether you treat it as a mathematical question that happens to be expressed in English syntax or a question about what you should do if you believed yourself to really be in that situation. There are questions about uncertain loss isomorphic to questions about uncertain gain where people nonetheless give different answers, which is irrational if considered as a material problem, but rational in the more likely and actual situation where the only thing at stake is social status, which sometimes does depend on how the question was phrased. Etc.
That's why I called the testing problem ill posed; it's not just that it's hard to figure out the solution, it's hard to see what would be the criteria of a good solution in the first place.
Replies from: None
↑ comment by [deleted] · 2012-01-20T19:43:49.885Z · LW(p) · GW(p)
Those examples are good evidence for us not being able to test coherently yet, but I don't think they are good evidence that the question is ill-posed.
If the question is "how can we test rationality?", and the only answers we've come up with are limited in scope and subject to all kinds of misinterpretation, I don't think that means we can't come up with broad tests that measure progress. I am reminded of a quote: "what you are saying amounts to 'if it is possible, it ought to be easy'"
I think the place to find good tests will be instead of looking at how well people do against particular biases, look at what it is we think rationality is good for, and measure something related to that.
Replies from: rwallace
↑ comment by rwallace · 2012-01-20T19:54:12.312Z · LW(p) · GW(p)
Ill posed does not necessarily mean impossible. Most of the problems we deal with in real life are ill posed, but we still usually manage to come up with solutions that are good enough for the particular contexts at hand. What it does mean is that we shouldn't expect the problem in question to be definitely solved once and for all. I'm not arguing against attempting to test rationality. I'm arguing against the position some posters have taken that there's no point even trying to make progress on rationality until the problem of testing it has been definitely solved.
Replies from: None
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-01-22T05:12:27.890Z · LW(p) · GW(p)
The author of the original epistemic viciousness essay seems to think that culture (in other words, "being smart and not letting it happen", or not) is actually pretty important:
Just last week I was on the way home from a judo class with a friend—a senior judoka and university student—who insisted that although there was nothing wrong with lifting weights, strength was unimportant in judo, and it wouldn’t help one to become a better judo player. To this the appropriate reply is of course, unprintable.
...
Judo is an art in which there is relatively little room for pretence; in randori, either you manage to throw your opponent, or you don’t. In newaza, either you escape from your opponent’s hold or you don’t.
...
Why are there so many fantasists in the martial arts, as compared to other activities? And there are; you won’t find many sprinters or removal-men who would tell you that strength doesn’t matter to their chosen tasks, nor will you find power-lifters who think they can move the bar without touching it or engineers who specialise in ki-distribution.
http://www.artsci.wustl.edu/~grussell/epistemicviciousness.pdf
Replies from: Prismattic
↑ comment by Prismattic · 2012-01-22T05:40:18.352Z · LW(p) · GW(p)
I believe the judoka being quoted may have misheard, misremembered, or is misapplying a different point that is sometimes taught and that is not insane. I have elsewhere heard the advice that bulking up too early in one's judo studies is counterproductive, because you have more margin for error in techniques if you can make up for doing them not-quite-correctly by being very strong, so really buff people may fail to notice and correct flaws in their form. Then they get whupped by people who actually mastered the techniques.
Of course, once you've reached yudansha, and already have a good grasp of form, then you're supposed to bulk up to be able to beat other yudansha.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-01-22T05:59:32.820Z · LW(p) · GW(p)
Could be true.
It's not that important to what I was saying, though: the essay is mostly about how martial artists in particular have terrible epistemic hygiene. The lack of measurement is only mentioned in passing, along with the remark that theoretical physics manages to be respectable despite the same lack; the real problem is not that the martial arts lack measurement, but that martial artists are much more sure of themselves than their paucity of data justifies.
↑ comment by tetsuo55 · 2012-01-19T23:19:30.850Z · LW(p) · GW(p)
Defining key performance indicators for things like these is not very hard, and neither is developing ways to measure the performance. The harder part comes once the basics are done: tweaking the accuracy and fixing the gameable parts. Also, these metrics should, like any theory, stay in a continual beta state and keep getting tweaked; just make it clear when a tweak breaks comparability with previous measurements. I can spend a little time on IRC teaching someone how to do this, but my time is extremely limited right now, so it will have to be a formal-ish appointment with an eager student.
comment by Mercurial · 2012-01-19T17:14:27.520Z · LW(p) · GW(p)
If there were a world in which algebra had been learned only through reading essays, without subskill-by-subskill practice, it would not be surprising if the world’s best algebra practitioners could be outperformed by an ordinary student who worked diligently through the exercises in a standard textbook.
This actually happened. The ancient Greeks weren't very capable algebraists because they didn't develop a symbol system that they could systematically manipulate according to prescribed rules. Their descriptions of formal logical inferences were insane to read: "If the first and the second, or the third, implies the fourth, then the first or the fourth, implying the third...." The reason our word "algebra" comes from Arabic isn't that the Muslims were better algebraists; it's that they used symbol systems (to avoid making icons of Mohammad) to encode the material they were reading in the Greek literature. The result was something reasonably close to our modern symbol-manipulation system, which made it possible to train in algebra.
So this isn't just a theoretical example. Really, honestly, the first textbook ("al-jebr..." I don't quite remember the title) absolutely trounced several hundred years of careful, intelligent Greek thought on the topic of numerical reasoning.
Edit: Please see this. There's some question about the accuracy of my statement here.
Replies from: Anatoly_Vorobey↑ comment by Anatoly_Vorobey · 2012-01-20T21:08:27.168Z · LW(p) · GW(p)
This description is very plausible, but entirely wrong. It was almost completely the opposite of what you're saying. The Muslim mathematicians used fewer symbols than the Greek tradition they inherited for almost the entire timeline of medieval Arabic/Islamic mathematics. The "first textbook" you're referring to, Al-Khwarizmi's Al-jabr wa'l muqabalah, the one ultimately responsible for the word "algebra", did not use any symbols at all, and wrote everything out in words.
Greek mathematicians started to use something like symbols (abbreviated letters with fixed positional meaning) by the time of Diophantus, around the 3rd century CE. The Arab mathematicians did not adopt that when they translated the Greek texts, and for the first 500 years of their work, wrote everything out fully. Moreover, it is those texts devoid of any symbolic systems that were translated into Latin and used to help fuel the European tradition in the 12th-13th centuries CE. Even though some Islamic mathematicians later did develop the beginnings of a symbolic notation, in the 14th-15th centuries, this happened roughly in parallel with the Europeans inventing their own symbols, and did not influence the modern tradition that derives from those European symbols.
To be sure, Muslim mathematicians were much better algebraists than the Greeks. But that was because Greeks never quite reached the idea of decoupling numbers (and unknown quantities) from geometry and manipulating them as separate objects on their own. The Muslim mathematicians were able to do that (and as a result, much more), despite not having any symbolic system at their disposal.
Replies from: Mercurial, None↑ comment by Mercurial · 2012-01-21T17:25:59.718Z · LW(p) · GW(p)
Huh. This directly contradicts what I encountered. I'll have to explore this a bit. I knew the Greeks had a problem with decoupling their idea of number from their concepts of geometric construction, but I was told that their lack of symbol-system machinery handicapped them, certainly in formal logic and, I thought, in numerical reasoning as well. The Muslims, on the other hand, wouldn't use pictures of the ideas they wanted to refer to because of the ban on iconography, so they had to encode their concept of quantity differently; I thought that's where symbol machinery came from.
So... I'll have to look into this. Upvoted for offering a correction, although I don't know yet if it's actually correct. Thank you!
↑ comment by [deleted] · 2012-01-20T21:12:14.979Z · LW(p) · GW(p)
What do you mean by decoupling geometry and numbers/unknowns? Sounds interesting but I don't understand.
Replies from: Anatoly_Vorobey↑ comment by Anatoly_Vorobey · 2012-01-20T23:35:45.421Z · LW(p) · GW(p)
To a Greek mathematician, a number was fundamentally a measure of something geometric, like the length of a segment. The square of a number is not some abstract operation: it's just the area of a particular figure, a square. An equation was a way of describing a geometrical problem. Equations were solved geometrically.
Here's an example. Suppose you have unknown numbers x and y, and you know the difference between them and also their product. Can you find x and y? In algebraic terms, you manipulate some unknowns, express y in terms of x and substitute, arrive at a quadratic equation in x, and find the result. Greek mathematicians weren't able to write it this way, in words or in symbols. That just wasn't a way of looking at this problem, or a method of solving it, that they could recognize.
Here's how they thought: you have two unknown lengths. You know by how much one is greater than the other, and you also have a square whose area is equal to the rectangle built on those lengths. Can you find these unknown lengths? Well, you can do it this way: take the difference between them, drawn as a line segment AB. Find its middle point C. Draw a line BQ perpendicular to AB at point B, of length equal to the side of the square you have. Now take the hypotenuse CQ, and add it to the original line AB, prolonging it to the point D. You have one of the unknown lengths in the segment CD.
This is straight out of Euclid. There's also a proof that what I just described actually solves the problem; the proof is based on considering the rectangle built on the unknown lengths, cutting it into a few parts, reassembling them elsewhere, etc. That's how Greek mathematicians solved equations. They didn't have the mental image of x and y as these abstract entities that you can shuffle around in an equation (another abstract entity), multiply/divide by some numbers (more abstract entities) to simplify, and arrive at the algebraic result. To them, x and y were lengths you don't know how to measure yet, and all the manipulations were inherently geometric.
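In modern notation, the construction can be checked in a couple of lines. (This is my own reconstruction; the symbols $x$, $y$, $d$, $A$ are not in the original comment.)

```latex
% Reconstruction (mine) of the Euclidean construction in modern algebra.
% Given the difference d = x - y and the product A = xy (the area of the
% square with side q, so A = q^2), the smaller unknown y satisfies
% y(y + d) = A, i.e. (y + d/2)^2 = A + d^2/4, so:
\[
  y = \sqrt{\tfrac{d^2}{4} + A} - \tfrac{d}{2}, \qquad
  x = y + d = \sqrt{\tfrac{d^2}{4} + A} + \tfrac{d}{2}.
\]
% The hypotenuse in the construction is CQ = sqrt((d/2)^2 + q^2), which is
% exactly the square-root term above; laying it off from C is what recovers
% the unknown lengths, since BD = CQ - d/2 = y and AD = d + y = x.
```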
Arab mathematicians changed that, and opened the way to looking at numbers, unknowns and equations "algebraically", as separate abstract entities of their own which can be manipulated according to strict rules.
Replies from: None↑ comment by [deleted] · 2012-01-21T00:45:11.697Z · LW(p) · GW(p)
Thank you. Makes much more sense now. The Greeks failed to abstract number from length, so they failed to develop abstract mathematics.
How aware were they of measurements of time?
Replies from: Anatoly_Vorobey↑ comment by Anatoly_Vorobey · 2012-01-21T10:49:21.508Z · LW(p) · GW(p)
They measured time on the large scale accurately enough for astronomical purposes, and on the small scale precisely enough to build something as amazing as the Antikythera mechanism. They probably didn't divide their days into minutes and seconds the way we do; the everyday functioning of their society didn't need, and couldn't use, such precision.
comment by Clippy · 2012-01-19T20:35:17.798Z · LW(p) · GW(p)
I'm going to apply for this role.
Replies from: lavalamp↑ comment by lavalamp · 2012-01-20T16:08:14.542Z · LW(p) · GW(p)
I would love to read a rationality textbook authored by a paperclip maximizer.
Replies from: katydee, wedrifid↑ comment by wedrifid · 2012-01-20T16:47:35.691Z · LW(p) · GW(p)
I would love to read a rationality textbook authored by a paperclip maximizer.
If for no other reason than that it means they aren't actually an agent that is maximizing paperclips. That'd be dangerous!
Replies from: JamesAndrix↑ comment by JamesAndrix · 2012-01-20T18:46:25.441Z · LW(p) · GW(p)
Almost any human existential risk is also a paperclip risk.
comment by katydee · 2012-01-19T08:13:12.139Z · LW(p) · GW(p)
Example generation. Given an exercise, we need someone who can think of lots of specific examples from real life or important real-world domains, which illustrate the exact intended point and not something almost-like the intended point. E.g., turn "Sunk cost fallacy" into 20 story snippets like "Lara is playing poker and has bet $200 in previous rounds..." (Our experience shows that this is a key bottleneck in writing a kata, and a surprisingly separate capacity from coming up with the first exercise.)
...so, who else just opened up Evernote and started seeing how many they could bang out?
comment by orthonormal · 2012-01-19T19:17:18.996Z · LW(p) · GW(p)
We're looking for 1-2 fulltime employees who can help us build more things like that (unless the next round of tests shows that the current format doesn't work)
Nice to see you taking to heart the lesson you taught MoR!Harry:
"So what's next?" said Hermione.
Harry rested his head against the bricks. His forehead was starting to hurt where he'd been banging it. "Nothing. I have to go back and design different experiments."
Over the last month, Harry had carefully worked out, in advance, a course of experimentation for them that would have lasted until December.
It would have been a great set of experiments if the very first test had not falsified the basic premise.
Harry could not believe he had been this dumb.
"Let me correct myself," said Harry. "I need to design one new experiment. I'll let you know when we've got it, and we'll do it, and then I'll design the next one. How does that sound?"
"It sounds like someone wasted a whole lot of effort."
Thud. Ow. He'd done that a bit harder than he'd planned.
comment by Said Achmiz (SaidAchmiz) · 2012-01-20T03:37:05.108Z · LW(p) · GW(p)
I'm not sure if this is the right thread for this, but I don't see how example #11 in the "Skill 1: Noticing Sunk Costs" PDF is an actual example of the sunk cost fallacy.
Ellery's options are to eat the slightly-burnt soup, or not to eat and instead to spend $12 (and, presumably, some cooking time) to replace the damaged soup with good soup and eat that instead. She has to eat something (again, presumably); the option to simply abandon the sunk cost, by discarding the soup and then eating nothing, seems implausible. In other words, her decision whether to eat or not eat the soup directly affects her future financial situation, and her decision is based on that effect.
Am I misunderstanding something?
EDIT: Of course it's possible that the negative utility Ellery would get from eating untasty soup outweighs the positive utility from eating at all plus the value of $12, but in any case the question concerns future utility, right?
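For what it's worth, the comparison I'm gesturing at can be made explicit. Here is a minimal sketch with made-up numbers; the utility values and the function name `decide` are my own illustration, not anything from the booklet:

```python
# Illustrative sketch: Ellery's choice evaluated on FUTURE consequences only.
# All utility numbers are invented for the example.

def decide(u_burnt_soup, u_good_soup, replacement_cost):
    """Compare 'eat the burnt soup' vs 'discard it and make new soup',
    counting only future utilities. The money already spent on the burnt
    batch appears in neither option -- that's what makes it a sunk cost."""
    eat = u_burnt_soup
    replace = u_good_soup - replacement_cost
    return "eat" if eat >= replace else "replace"

# If the burnt soup is only mildly worse, eating it wins:
print(decide(u_burnt_soup=5, u_good_soup=8, replacement_cost=4))  # eat
# If it's nearly inedible, replacing wins despite the extra $12:
print(decide(u_burnt_soup=0, u_good_soup=8, replacement_cost=4))  # replace
```

Either way, the decision turns entirely on future utility, which was my point.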
(My apologies if this is a thread derail.)
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-01-20T21:16:40.772Z · LW(p) · GW(p)
Yes, “Ellery wouldn't eat soup like this if it were free” != “Ellery wouldn't eat soup like this if paid $12”
comment by Xom · 2012-01-19T23:34:29.949Z · LW(p) · GW(p)
Please talk to David McRaney (http://youarenotsosmart.com) to see if he'd be interested. His recent book, while far from comprehensive, has become the first place I look whenever I want to reference an accessible explanation of a particular cognitive bias.
comment by Alex Flint (alexflint) · 2012-01-19T14:48:59.884Z · LW(p) · GW(p)
Would it be helpful for us to try out these exercises with a small group of people and report back?
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2012-01-19T17:25:41.272Z · LW(p) · GW(p)
Yes; please do.
Replies from: Kytael↑ comment by Kytael · 2012-01-20T01:13:32.536Z · LW(p) · GW(p)
I'm planning on doing this- is there any particular type of feedback you want?
Replies from: AnnaSalamon, AnnaSalamon, AnnaSalamon↑ comment by AnnaSalamon · 2012-01-20T01:56:48.964Z · LW(p) · GW(p)
Also, tests on non-LWers would be especially valuable, although tests on LW-ers would add info too.
↑ comment by AnnaSalamon · 2012-01-20T01:55:33.324Z · LW(p) · GW(p)
Great question. Yes, there totally is. Best if you could print out copies of this participant survey, or else have folks complete it online, and if you could accompany those survey results with your own report of what you basically did (e.g., did you go through the whole PowerPoint as shown?) and with a brief description of anything that struck you as you were going through.
↑ comment by AnnaSalamon · 2012-01-22T01:42:58.732Z · LW(p) · GW(p)
Also, if you run tests, please consider also testing the "transfer step": a set of activities that should occur toward the end of the kata, where participants pair off on their own (after finishing their exercise booklets), and look for examples of sunk costs and the sunk cost fallacy in their own lives.
We have some handouts for the transfer step here: 1, 2, 3, 4, 5.
Printing instructions for the booklets are here.
We aren't quite sure how to structure the transfer step yet -- still in early testing there -- so feel free to play around, try something, and let us know how it goes. Much thanks if you do!
comment by Shmi (shminux) · 2012-01-19T03:17:28.497Z · LW(p) · GW(p)
This forum has a pool of people who have many of the talents you requested, but who would not bother applying for one reason or another (unwillingness to overtly commit a significant chunk of time, general akrasia, you name it). However, they would be happy to comment on a single focused question, be it to suggest an activity, to improve a write-up, to do a literature search, or to polish a presentation. I am not sure whether you have anyone on staff who is good at partitioning a project into tiny little tasks of under-an-hour length, but if so, consider an LW section or a post tag [Rationality_org Task] as a means of tapping the low-availability talent pool.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-19T03:39:49.083Z · LW(p) · GW(p)
Keep in mind that we're looking for full-time hires here, not just volunteers.
Replies from: shminux, thomblake↑ comment by Shmi (shminux) · 2012-01-19T17:24:57.639Z · LW(p) · GW(p)
My point is that you might be trying to fill a wrong position.
A qualified part-time volunteer coordinator can do orders of magnitude more good for a non-profit than a full time staff member working on their own. Consider, for example, the VanDusen Botanical Garden. All grounds-keeping and nearly all activities are done by volunteers, with a single coordinator on staff. Some of these volunteer jobs, like the Master Gardener, would be equivalent to probably $50/hr on an open market, maybe more. Some smaller organizations even go one level up, and have a volunteer volunteer coordinator.
Of course, it is harder to properly parcel the jobs in the SI than those in gardening. Then again, none of you in the SI do what you do because you wanted it easy.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2012-01-20T19:29:36.449Z · LW(p) · GW(p)
Next step up is the volunteer volunteer volunteer coordinator coordinator.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-20T19:44:02.965Z · LW(p) · GW(p)
In a meeting this morning I suggested that my company was well on its way to needing a development process management suggestion management process manager. Nobody actually threw anything at me, which I attribute to my having been on the phone.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-21T09:28:04.980Z · LW(p) · GW(p)
I am amused by the fact that both of these reports obey the rule - universal in my experience so far - that "All infinite recursions are at most three levels deep."
Replies from: TheOtherDave, Will_Sawin↑ comment by TheOtherDave · 2012-01-21T17:49:13.820Z · LW(p) · GW(p)
Three?
Hm.
By my parsing, it's ((((((development process) management) suggestion) management) process) manager)... that is, a manager for the process of managing suggestions for managing the process of development. What's your parsing?
Of course, it isn't embedded, which makes it much more parseable.
"The goat the cat the dog the stick the fire the water the cow the butcher slaughtered drank extinguished consumed hit ate bit was purchased for the two zuzim my father spent" is a different matter.
Replies from: bogdanb↑ comment by bogdanb · 2012-01-22T23:23:22.296Z · LW(p) · GW(p)
Recursion technically means “doing something to (something derived from) the result of doing that same thing earlier”, not just “doing stuff repeatedly”. There are three “management” steps above.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-23T00:44:05.012Z · LW(p) · GW(p)
You're right, of course. I was thinking about nesting depth. Thanks for the correction.
↑ comment by Will_Sawin · 2012-01-21T10:13:43.164Z · LW(p) · GW(p)
One can derive an obvious corollary to this rule...
↑ comment by thomblake · 2012-01-19T14:39:09.978Z · LW(p) · GW(p)
For $3k a month, you're practically looking for volunteers.
Replies from: None, Yvain↑ comment by [deleted] · 2012-01-19T15:50:17.981Z · LW(p) · GW(p)
.
Replies from: bbarth, thomblake, Swimmer963, wedrifid, Jayson_Virissimo, army1987↑ comment by bbarth · 2012-01-19T15:57:53.560Z · LW(p) · GW(p)
I don't know what others think (besides myself and thomblake, clearly), but I think it's between 3 and 4x under market for a person with those skills in the Bay Area. It's between 2 and 3x under market in a place like Austin, TX, depending on experience.
People with experience doing the things listed above make high 5 and low 6-figure salaries plus benefits (medical, 401k with some matching, etc.) in industry jobs, or they are university or secondary school teachers who have reasonable salaries, health care, and other benefits like tenure not available to industry workers.
Replies from: None↑ comment by [deleted] · 2012-01-19T16:23:26.799Z · LW(p) · GW(p)
.
Replies from: bbarth↑ comment by bbarth · 2012-01-19T16:27:18.016Z · LW(p) · GW(p)
It's also possible, for example, that they don't actually want people with work experience doing these things and would settle for folks who are decent at them but have so far only done these activities as a hobby/self-training exercise. If that's the case, then $36k/yr might be OK, and it might be a good opportunity for someone to get these skills on their resume for a later job search in a relevant industry. If that's what they're really looking for, they should state it as such. Otherwise, I remain highly skeptical of the position.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-19T16:47:07.213Z · LW(p) · GW(p)
It's a lot if you're a student, I guess. The most I've ever made was about $2500/month, and that's working 55 hours a week...at $12/hour. Pretty much any non-student job pays more.
Replies from: bbarth↑ comment by bbarth · 2012-01-19T17:10:44.266Z · LW(p) · GW(p)
Agreed.
We pay grad students ~$45k for 40 hours a week. Most of them only work half time, so they take home a lot less than that. Of course they also get health insurance. Also, this doesn't appear to be seeking a student.
Edited to add: We pay their tuition, too.
↑ comment by wedrifid · 2012-01-19T15:58:20.134Z · LW(p) · GW(p)
Am I the only one who thinks $3k/month is actually a lot of money?
More or less. There would not be many people who meet the criteria mentioned that couldn't earn a lot more than that if they wanted it.
Replies from: daenerys↑ comment by daenerys · 2012-01-19T17:33:01.530Z · LW(p) · GW(p)
There would not be many people who meet the criteria mentioned that couldn't earn a lot more than that if they wanted it.
You're right, but they don't need many people, they only need one.
(Speaking as someone who applied, has most of those skills pretty solidly (from unusual experiences that employers generally don't care for: professional hula hoop instructor???), but has rarely made more than half of what they are offering)
Replies from: None, shminux↑ comment by Shmi (shminux) · 2012-01-19T18:07:42.431Z · LW(p) · GW(p)
Rooting for ya :)
↑ comment by Jayson_Virissimo · 2012-01-21T10:20:15.960Z · LW(p) · GW(p)
No you aren't. $3,000 a month would easily cover rent, utilities, Internet, transportation costs, a healthy diet, a textbook or two per month, and the occasional eating out or moviegoing (at least, it would where I live).
↑ comment by A1987dM (army1987) · 2012-01-21T11:56:55.586Z · LW(p) · GW(p)
It is where I am, but I guess the Bay area is way more expensive...
↑ comment by Scott Alexander (Yvain) · 2012-01-19T20:48:30.736Z · LW(p) · GW(p)
They're offering 150% of the average US income during a recession with 9% unemployment as starting salary for an entry-level position doing satisfying creative work for an organization that could actually improve the world. I like money as much as anyone else, and I would fight for this job if I weren't otherwise engaged. If my hunt for residency positions this summer falls flat, I might still try to fight for it.
Replies from: thomblake↑ comment by thomblake · 2012-01-19T20:59:47.555Z · LW(p) · GW(p)
I do forget not everybody works in computing.
Replies from: Raemon↑ comment by Raemon · 2012-01-19T21:07:45.530Z · LW(p) · GW(p)
I have been continuously weirded out by how people in our circles seem to take for granted ridiculous salaries during what's supposed to be an economic recession.
Replies from: daenerys, satt↑ comment by daenerys · 2012-01-19T21:26:28.448Z · LW(p) · GW(p)
I have been continuously weirded out by how people in our circles seem to take for granted ridiculous salaries during what's supposed to be an economic recession.
This.
Seeing people scoff about how easy it is to make a near six figure income is extremely off-putting.
Replies from: katydee↑ comment by katydee · 2012-01-19T22:53:18.485Z · LW(p) · GW(p)
Keep in mind that SIAI is headquartered in the San Francisco Bay Area, where the cost of living (and thus salaries in general) tend to be higher. I just did a quick Google search and found that in this area, an entry-level police officer can make six figures plus benefits (and eventually pension), so such incomes aren't really outside the realm of normal possibility.
That being said, I think the offered salary is reasonable, especially given the interesting and important nature of the work being done, and will likely apply for the position.
Replies from: Raemon↑ comment by Raemon · 2012-01-20T19:09:42.194Z · LW(p) · GW(p)
How important is it for SIAI to be located where it is? (I know that proximity to the tech industry is relevant, but how relevant, exactly?)
Replies from: katydee↑ comment by katydee · 2012-01-20T22:41:40.570Z · LW(p) · GW(p)
I don't work for SIAI and don't have special knowledge relating to this-- that said, I do know that SIAI has at least considered locating some operations in other areas (and I believe did not always inhabit its current premises), so presumably there has been some analysis of this behind the scenes.
Replies from: gwern, Raemon↑ comment by gwern · 2012-02-04T22:59:23.794Z · LW(p) · GW(p)
Charities benefit a lot from being in a city, I think. GiveWell, known for its numeric focus, relocated to Mumbai, India for 3 months and found it a valuable experience, but they returned to their NYC digs and not, say, Appalachia. Similarly, the Wikimedia Foundation moved to SF from Florida the moment it could.
Replies from: katydee↑ comment by satt · 2012-01-21T16:42:10.697Z · LW(p) · GW(p)
I think the Bay Area factor is warping things as well in this case. When I read thomblake's first comment about $3k a month being volunteer-level pay, my first reaction was "$36k a year is practically for volunteers? Are you shitting me? That must be more than most PhD students make!" When he followed up by mentioning it was about what rent might cost in the Bay Area, the penny dropped and I thought "ohhh, right, Bay Area, say no more".
Replies from: ksvanhorn↑ comment by ksvanhorn · 2012-01-21T17:17:22.799Z · LW(p) · GW(p)
Even outside of the Bay Area an experienced software engineer can easily make 3 times that amount.
In the Bay Area... well, my very first job out of college -- in 1989, with a Master's in computer science -- paid $40K a year; adjusting for inflation, that is the equivalent of $76K a year now.
Replies from: satt↑ comment by satt · 2012-01-21T17:57:35.230Z · LW(p) · GW(p)
Even outside of the Bay Area an experienced software engineer can easily make 3 times that amount.
I expect so, but I doubt the Rationality Org is necessarily looking for experienced software engineers. Going by the skills EY listed, even a cartoonist with a knack for PowerPoint might be just who they're looking for, even if they have no degree & no job experience. Were it not for the Bay Area factor, $36k/year would likely be a great salary for them.
comment by Kaj_Sotala · 2012-01-19T20:33:05.208Z · LW(p) · GW(p)
Looking at the sunk cost materials, the answer to question 10 in booklet 1 seems a bit off:
Question: Sandra is starting to grow bored with her current boyfriend. Is it time to move on? She left her job in Los Angeles and moved to New York, just so that she could be with him—but they haven't had a really interesting conversation in months.
Answer: Sunk cost yes, sunk cost fallacy yes.
The time and effort it took to move are already gone and not coming back. Sandra's old job is already gone, also.
Sandra is acting as though her past costs are relevant to her present decision.
I would have marked this as "sunk cost yes, sunk cost fallacy no". The way I read it, Sandra's sunk cost of her leaving her job and moving is mentioned as background information, but it's not stated that Sandra would consider this to be a reason to stay.
comment by PeerInfinity · 2012-01-20T01:06:13.211Z · LW(p) · GW(p)
One obvious idea for an exercise is MBlume's Positive Bias Test, which is available online.
But of course everyone taking the course would probably already be familiar with the standard example implemented in the app. I would suggest updating the app to have several different patterns, of varying degrees of complexity, and a way for the user to choose the difficulty level before starting the app. I would expect that to be not too hard to implement, and useful enough to be worth implementing.
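To gesture at what "several different patterns, of varying degrees of complexity" might look like in code, here is a minimal sketch; the specific rules and names are my own invention, not part of MBlume's app:

```python
# Hypothetical sketch of a 2-4-6-style positive bias test with selectable
# difficulty. Each level hides a different rule about number triplets.

RULES = {
    "easy":   ("strictly ascending",     lambda t: t[0] < t[1] < t[2]),
    "medium": ("all even",               lambda t: all(n % 2 == 0 for n in t)),
    "hard":   ("sum divisible by three", lambda t: sum(t) % 3 == 0),
}

def check_triplet(difficulty, triplet):
    """Return whether the triplet fits the hidden rule for this difficulty.
    The player only ever sees the yes/no answer, never the rule itself."""
    _description, rule = RULES[difficulty]
    return rule(triplet)
```

The point of the harder levels would be rules that the classic (2, 4, 6) triplet also happens to satisfy, so confirmation-seeking guesses stay uninformative.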
Replies from: Ratheka↑ comment by Ratheka · 2012-01-25T11:31:26.218Z · LW(p) · GW(p)
What do you think of the idea of an RPG-type game where the quests are designed to trigger biases in people, and where clear thinking is required to win? I'd be a big fan of a game that required you to read quests and think about them, and moved away from the 'track arrow, kill everything en route' model that many have today. Of course, it still needs to be fun to entice people to play it. Functional edutainment seems to be a hard balance to strike.
Replies from: PeerInfinity↑ comment by PeerInfinity · 2012-01-26T02:20:30.282Z · LW(p) · GW(p)
I like the idea. This is something that could be useful to anyone, not just as part of the Rationality Curriculum.
Here is a related idea I posted about before:
Another random idea I had was to make a text adventure game, where you participate in conversations, and sometimes need to interrupt a conversation to point out a logical fallacy, to prevent the conversation from going off-track and preventing you from getting the information you needed from the conversation.
See also The Less Wrong Video Game
comment by TsviBT · 2012-01-22T23:57:32.387Z · LW(p) · GW(p)
Shouldn’t the presentation suggest what sort of reasoning is actually going on in someone’s head while they are committing the sunk costs fallacy? For example, (skill 1, problem 12) Peggy is at a baseball game with her son Tim, who is bored and wants to leave. Peggy says, “You want to leave? Those tickets were expensive!”. Her expressed thoughts are a fine example of the sunk costs fallacy. But she may be really thinking: “This event is the sort of thing normal people pay a lot to attend - Tim must not be thinking clearly. We should stay because he will enjoy the game, even though he says he’s bored.” If the presentation pointed out that (unconsciously) applying the scarcity heuristic can lead to or sound like the sunk costs fallacy, rationalists-in-training could more easily catch themselves committing the fallacy.
Also, learning to distinguish solid and fallacious reasoning, even when they produce the same conclusion, helps one correctly cash out the Teleporting Alien viewpoint. Tim responds to his mother, “We’ve already spent the money for the baseball tickets. I’m bored so there’s no point in sitting through the game.” Peggy realizes that she is committing the sunk costs fallacy. She tells Tim, “You’re right. But this baseball game is expensive - which makes it likely that it is entertaining in some way. Let’s stay a little longer and see if something exciting happens.”
The obvious problem with trying to fully explain the reasoning in sunk-cost situations is that you have to explain those other biases and heuristics. But I think the exercises can briefly mention what to look out for and reference other kata while remaining focused on the particular skill at hand. And indeed, rationality skills are interdependent, which should be reflected even in a training course with a narrow scope.
comment by AnnaSalamon · 2012-01-20T19:38:33.216Z · LW(p) · GW(p)
Since a few people have asked me privately: if you're interested in this position, please send in an application even if I already know you. The application is short, and it will let me know that you are interested, and what parts of the work you are interested in and/or able to do.
comment by duckduckMOO · 2012-01-20T14:03:40.973Z · LW(p) · GW(p)
question 2 sheet 5.
"Answer to 1. Disliked story: Bought a cupboard full of yucky peanut butter. Preferred story: Cleverly purchased good peanut butter in bulk. Fixed reality: You bought a year's supply of a peanut butter you haven't tried, and it's either tasty or yucky; if yucky, pretending to like it and choking it down won't get your money back."
That reality isn't fixed. Tastiness is partly determined by your attitude towards the peanut butter; tastiness is not a one-place property. More generally, you can make yourself like different things. Doesn't mere familiarity make you like things? There's a large grey area between tasty and yucky that includes a section for "has the potential to be tasty if I decide to damn well like it."
Sometimes, yes, you can make something not be wasted. But the point is to reason from the situation as it is, not to contrive ways of making sunk costs turn out not to have been a waste. As things go, food is pretty easy to choose to enjoy, or at least not mind.
Replies from: lessdazed
comment by fburnaby · 2012-01-20T00:40:13.732Z · LW(p) · GW(p)
Having seen many concerns about the low salary/skill ratio: it seems like about the same situation that I had for my graduate degree. I took half the salary for a job that was (by my estimate at the time) twice as cool as the alternatives. In that light, the position seems like quite a good deal, as most grad students don't earn that much. If you expect to experience side-benefits from this job, such as benefiting from increased rationality, or having cool stories to tell people about your interesting job, then this seems like a good deal.
Replies from: bbarth
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-19T19:52:45.696Z · LW(p) · GW(p)
Quick rewrite with some bolding to try harder to correct people's impressions that we're trying to hire someone who can do all these things simultaneously.
comment by [deleted] · 2012-01-19T14:29:38.112Z · LW(p) · GW(p)
The formatting is all messed up for me; the text in some later paragraphs is spaced super far apart and runs off the edge of the page. Might be an Opera thing, but I've only seen this on LW, and I've seen it before. Can someone remind me why Markdown is not used for top-level posts?
Here's what it looks like for me.
Replies from: Vladimir_Nesov, Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-03-03T16:33:54.910Z · LW(p) · GW(p)
Should be fixed now.
↑ comment by Vladimir_Nesov · 2012-03-03T14:55:02.754Z · LW(p) · GW(p)
Is it better now? I cleaned up the markup.
comment by tetsuo55 · 2012-01-19T20:16:49.226Z · LW(p) · GW(p)
I would like to have this as an interactive application. It's not always so easy to get into a teacher student situation.
There are applications out there for developing this curriculum in a better way than the regular approach, like the app these guys are selling: http://www.knowledge-values.com/. I use apps like this on a daily basis to create training content at work.
I don't need a whole game or anything; something as simple as the math exercises on Khan Academy will do the trick. And I would prefer each skill taught as separately and atomically as is logically possible.
comment by bbarth · 2012-01-19T14:03:28.918Z · LW(p) · GW(p)
The salary for this position seems off by a factor of between 3 and 4 given the sort of background you want. You're asking for someone with professional-level design skills coupled to the skills of a university professor, really good high school teacher, or video game designer (depending on your perspective). People with these skills get paid a lot more than you're offering. $36k/yr isn't going to get you a bright recent college grad, especially if they have to live in the Bay Area.
It seems to me that you're more interested in hiring folks that are deeply dedicated to the movement so that you can pay them a sub-market salary than hiring the best person you can find. Which is fine, but you should be upfront about it.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-19T14:39:14.207Z · LW(p) · GW(p)
I guess we have to emphasize "you do not need all of these skills simultaneously" even harder than bold text can manage. And stating the standard salary straight out is upfront.
Replies from: bbarth
↑ comment by bbarth · 2012-01-19T15:38:37.181Z · LW(p) · GW(p)
I guess my points were a little too oblique. People with even a handful of these skills get paid a lot more than you're offering (e.g. school teachers have curriculum design and teaching experience, and generally make much more than $36k/yr). Clearly, stating the salary is "upfront" about the salary, but that wasn't my complaint. My complaint was that it appears that, by offering a well-below-market salary, you're looking for a fellow traveler/true believer/movement participant who is so highly dedicated to the cause that they are willing to sacrifice a good chunk of their potential earnings to advance SIAI's goals. If that's the case, then you should state it directly. If it's not the case, then another possibility that comes to mind is that you're hoping to exploit the passion of a young person who feels strongly about the cause but doesn't realize what they're worth on the open market.
My concern is that by not stating anything about this obviously (to me) below market salary, you're leaving your motivations open to serious question. I think it better to lay out some sort of reasoning behind it than to leave it ambiguous.
Replies from: AnnaSalamon, Yvain, Eliezer_Yudkowsky
↑ comment by AnnaSalamon · 2012-01-19T17:51:35.814Z · LW(p) · GW(p)
The motivation is simply that we need help now, that we do not have budget now, that SingInst's experience suggests that at least some skilled people are willing to work for such money (e.g. me, Carl, Michael Anissimov, Lukeprog), and if rationality org's efforts are successful we will probably have more money for skilled people in the future.
There are several reasons someone might apply, given that. The ones that spring to my mind are:
- This sounds like a fun job where they could learn a lot over the next year. (I've learned a huge amount, working here.)
- At their present stage in life, they don't need all that much money. (No kids, etc.; I've been living comfortably on the SI standard salary, and am probably happier here than I would be making more money doing something less varied where I didn't get to learn from the folks here)
- Someone may be passionate about rationality education (either for itself, or because they expect it to help with existential risk) (this is the possible reason you list)
- Someone may think we have a good chance at creating, over the next year or two, the sort of "rationality org" that could afford to pay people market wages, and may be willing to take a risk for the next couple years in order to be part of that.
However, our current low salaries are not a sneaky attempt to obtain only dedicated idealists; they are just an attempt to launch a rationality training effort with the budget we currently have, making the best use we can of our donors' dollars. I'm a little confused as to why this caused so much offense. The job is surely not right for some people, such as people who care a lot about present salaries, or who have high current income needs; but we posted the salary so that people could see for themselves whether it might work for them, and so that folks it might be right for could contact us.
Nonprofit salaries are typically lower than salaries for comparable work outside of nonprofits; start-up salaries are typically low with the potential for more later on. Rationality org at the moment is perhaps somewhere in between.
Replies from: bbarth
↑ comment by bbarth · 2012-01-19T18:41:23.424Z · LW(p) · GW(p)
I'm sorry if I came across as overly critical. I had a flashback to the job ad that EY promoted in September of '10 which came off in a similar way to me (though, clearly, this one has much more detail), and that probably drove the tone of my posts. I'm certainly not offended.
Now, that being said, I've noticed that there are a number of young idealists in this community, and I think it would be good if we could help them understand what they're getting into. We have a responsibility to help the up-and-coming among us to make good decisions. Making it clear that the SIAI "standard" salary is well under market for skilled people, and that applicants should understand the opportunity costs associated with working for a non-profit for a period of time, should be part of the job description when it comes from a rationalist source to this audience. I presume that EY knows this, and so I attribute the lack of it to something being fishy. If nothing's fishy, then this discussion let us clear the air.
↑ comment by Scott Alexander (Yvain) · 2012-01-19T20:57:26.556Z · LW(p) · GW(p)
They're offering 150% of the average US income during a recession with 9% unemployment as starting salary for a potentially entry-level position doing satisfying creative work for an organization that could actually improve the world. I like money as much as anyone else, and I would fight for this job if I weren't otherwise engaged. If my hunt for residency positions this summer falls flat, I might still try to fight for it.
Replies from: bbarth, thomblake
↑ comment by bbarth · 2012-01-19T21:12:53.355Z · LW(p) · GW(p)
There's no indication that this is entry-level. Also, if you look further on that page, you'll see that the median full-time employed person over 25 years of age with a Bachelor's degree in the US makes $56k/yr. My read of the position description leans towards college grads given some of the qualifications that they want. If you look at overall median household incomes in the Bay Area, you'll see that they top $74k/yr depending on the county of choice. Given the way that full-time vs. part-time seems to skew the data, I still say they're undershooting for their area of the country.
Don't sell yourself short. Unless you're willing to forgo income now for the possibility that the movement needs you now, perhaps it will be better off if you go and make a lot more money with your skills, improve the world through some different work, and give as much of your income as you don't want or need to SIAI instead.
↑ comment by thomblake · 2012-01-19T21:19:53.511Z · LW(p) · GW(p)
FWIW, I disagree with bbarth's assessment of your prospects. SIAI seems to do good things to bright folks like yourself, and I think you'd both benefit from the arrangement.
Replies from: bbarth
↑ comment by bbarth · 2012-01-19T21:28:05.046Z · LW(p) · GW(p)
It's not a question of SIAI not being good enough for Yvain; it's a question of whether they might both do even better if he pursues something else. It clearly sounds like he's pursuing a different path than joining SIAI now, so he must have done at least some of the math. He's in med school according to his webpage, so I suspect his prospects for helping the cause might be higher if he does well as a doctor and sends every dime he doesn't need (say, his salary as a doctor less $36k/yr) to SIAI. It certainly seems like it might be a waste of his current efforts to drop his medical aspirations and become a curriculum producer at SIAI, but I might be suffering from a form of the Sunk Cost Fallacy here.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2012-01-19T21:46:52.311Z · LW(p) · GW(p)
Thanks to the magic of guilds, all new trainee doctor jobs in the US start on July 1st*. If I don't get a job by then, I will probably have to wait until next July and find something to occupy me and provide me with money for a year. Hence my comment that I would be interested if my job search fell flat.
Even though it doesn't look like it sometimes, I do give at least five minutes thought to most of my major life decisions.
*which is why some people have very reasonably argued that you should avoid hospitals at that time of year.
Replies from: XFrequentist
↑ comment by XFrequentist · 2012-01-20T06:38:37.430Z · LW(p) · GW(p)
Ooh, my wife was just talking about a large study that apparently did not support this claim (using administrative data, I think). I'll find out the title when she wakes up and post a link.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-19T19:34:13.745Z · LW(p) · GW(p)
Lots of writers and philosophy postgrads get paid less than this. I don't mean to discourage people with fewer qualifications - a PhD is not required - but we posted a Craigslist ad recently for a different potential position, at a similar salary, and got applications from PhDs with 3 years experience. In any case, we shall see what the market thinks of our offer, and I see no reason for you to take offense at it a priori.
Replies from: bbarth, NancyLebovitz
↑ comment by NancyLebovitz · 2012-02-05T07:40:29.959Z · LW(p) · GW(p)
It occurs to me that the qualifications are framed as skills rather than (in some cases) expensive credentials, and this might make a lowish salary make more sense.
comment by alexvermeer · 2012-01-23T19:54:19.678Z · LW(p) · GW(p)
Any idea when you will be choosing candidates? How long should an applicant wait before assuming they were not accepted?
Replies from: AnnaSalamon, None
↑ comment by AnnaSalamon · 2012-01-26T20:30:08.158Z · LW(p) · GW(p)
We'll be sending out responses in the coming week; mostly requests for people to try particular sample work tasks.
↑ comment by [deleted] · 2012-01-24T19:06:38.624Z · LW(p) · GW(p)
My quick searches on that topic suggest following up 1-2 weeks later if no timeline is given, or after any timeline has expired. That matches the advice my parents gave me when I was job hunting as well.
Here are some of the links I found:
http://money.usnews.com/money/blogs/outside-voices-careers/2012/01/23/how-to-follow-up-on-your-job-application
http://www.resumeservice.com/thejobbored/ask-brian-how-long-should-i-wait-to-hear-back-some-rules_489/
If you want a sample of the opposite view, here is someone saying "Never ask for followup unless you have already been interviewed." http://www.careerrocketeer.com/2011/03/when-should-you-follow-up-after-submitting-job-application.html
However, that person sounds like a person who is sick of drowning in useless applicants. That is not a feeling I have gotten from either Eliezer or Anna. So I would go with 1-2 weeks from the evidence I've seen so far.
comment by Schlega · 2012-01-20T05:35:18.969Z · LW(p) · GW(p)
The iPhone app example in the presentation confuses me.
The way it is presented, it looks like the conclusion is that you should always be willing to spend an additional $6999.99 no matter how much you have already spent. If current you is willing to spend the extra money regardless of whether you have already spent $4000 or $10999.99, then I don't see why future you would feel any different.
I would think that you should take into account the fact that your original estimate of the cost was too low. Given that this is the case, you should expect that your current estimate of the cost to finish is also too low. I would multiply your cost-to-finish estimate by (current estimated total cost) / (original estimate), and only continue if the result is less than $7000.
Going over this in the presentation would introduce complications to the problem that would most likely lead to even more confusion, but when the details are left out, it looks like you are endorsing the same decisions that the sunk cost fallacy would lead to. I suggest changing the example to something else entirely.
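The adjustment proposed here can be sketched as a quick calculation. This is only an illustration of the commenter's heuristic; the function name and the $5000 original-estimate figure are assumptions for the example, not numbers from the presentation:

```python
def should_continue(original_estimate, spent_so_far, estimated_to_finish, payout):
    """Continue iff the overrun-adjusted cost to finish is below the payout.

    The remaining-cost estimate is scaled by how badly the original budget
    was underestimated so far, on the theory that past overruns predict
    future overruns.
    """
    current_total_estimate = spent_so_far + estimated_to_finish
    overrun_factor = current_total_estimate / original_estimate
    adjusted_to_finish = estimated_to_finish * overrun_factor
    return adjusted_to_finish < payout

# Hypothetical numbers: originally budgeted $5000, already spent $4000,
# believe $5000 more will finish the app, payout on completion is $7000.
# The naive rule says continue ($5000 < $7000); the adjusted rule says stop,
# because the 1.8x overrun so far implies ~$9000 of remaining cost.
print(should_continue(5000, 4000, 5000, 7000))
```

This contrasts with the plain sunk-cost-free rule (continue whenever the unadjusted cost to finish is below the payout) by treating the overrun itself as evidence about the reliability of your current estimate.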
Replies from: Schlega, MartinB
↑ comment by MartinB · 2012-01-20T12:23:49.123Z · LW(p) · GW(p)
The example is bad in that it compares income forecasting with expenses. In reality you would have an expected distribution of revenue, some probabilities, etc. You can fix the example by imagining it as contract work where you get paid the mentioned $7000 on delivery, with no penalty for non-delivery. The problem you see is the point of the sunk cost fallacy. Current "you" should ignore the money already sunk, and just look at the options presented. Therefore, sinking more money into the project to complete it is worthwhile as long as the additional money required is less than your payout. If in the future you are even only $1 away from finishing the app, it does not matter how much you put into it already. The money is gone, sunk. You get to invest the $1 and reap the benefit, or not.
Replies from: Schlega↑ comment by Schlega · 2012-01-21T08:29:12.677Z · LW(p) · GW(p)
I agree that if the numbers given in the example were trustworthy, then it would be a good example. The part that confused me was that there would be no incentive to start the project unless the original estimate of the cost was significantly less than $7000. It seems reasonable to expect that the next $4000 you spend will have a similar effect on your expected cost to finish. If you perpetually think "Just another $5000 and I will be done", then you are no better off than if you think "I already spent so much, I can't just quit now."
The more money that is sunk into it, the stronger the evidence that you are bad at estimating the cost. I assume that this evidence is implied to be included in the new cost estimate, but I think a general audience would not immediately notice this.
Replies from: MartinB
comment by [deleted] · 2012-01-19T04:18:55.397Z · LW(p) · GW(p)
Social initiative enough to gather guinea pigs and run many practice trials of draft curriculum, while collecting data.
This seems like something that existing LW meetup groups could do. Perhaps you could send prototype lessons to meetup organizers who would be willing to participate?
Replies from: badger
↑ comment by badger · 2012-01-19T04:32:08.109Z · LW(p) · GW(p)
Asking LW meetups to try out materials is worthwhile, but has a relatively large lag between ideas and results. Eliezer and Anna probably want someone who can test something within a day or two if it's high priority. Being able to iterate daily rather than weekly would be very useful.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-19T06:49:16.035Z · LW(p) · GW(p)
Existing LW readers aren't the target audience - the first iteration is aimed at around, say, the level of the Silicon Valley startup crowd. (Note that you have to imagine yourself aiming lower than this target to successfully hit it.)
comment by juped · 2012-03-03T11:50:28.115Z · LW(p) · GW(p)
The formatting for this page is really broken for me. I use Opera 11.whatever the latest is.
Replies from: None
↑ comment by [deleted] · 2012-03-03T13:42:17.168Z · LW(p) · GW(p)
Broken on my Opera too. Tab-size spaces between words and the lines of text spilling to the right, going underneath the sidebar and beyond.
Replies from: None, Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-03-03T14:57:36.680Z · LW(p) · GW(p)
Does it work better now?
comment by Armok_GoB · 2012-01-20T18:44:33.234Z · LW(p) · GW(p)
I am entirely incapable or many of the things mentioned, I am unable to commit or fully participate due to helth and geographical reasons, and a bunch of other things...
On the other hand I have a feeling feeling of ability to be useful, despite all those things. This kind of deliberate growing of intuitions and making the brain do some specific thing is right up my alley and I might be very good at coming up with certain types of examples or providing a certain kind of hard to articulate meta intuition. I also know a bunch abaut art and design.
I'd also likely find it great fun and far more meaningful than anything I normally do.
So this is a bit of a dillema, what should I do?
Replies from: free_rip, Swimmer963
↑ comment by free_rip · 2012-01-21T00:00:51.116Z · LW(p) · GW(p)
"I am entirely incapable or many of the things mentioned"

"I might be very good at coming up with certain types of examples or providing a certain kind of hard to articulate meta intuition. I also know a bunch abaut art and design."

They've stated they're looking for people who can do just one or a couple of steps (as well as all-rounders), so it sounds like you should go for it.
"I am unable to commit or fully participate due to helth and geographical reasons, and a bunch of other things... what should I do?"

If I were you, I'd send in an email saying just that, but with more detail of course - what things you'd like to do, what limits on time you're likely to have, etc. My guess is you'd still be a help, and if not you've only lost 5 mins in checking.
Replies from: Armok_GoB
↑ comment by Armok_GoB · 2012-01-21T16:23:01.513Z · LW(p) · GW(p)
Wow, I must have been really tired or something when I write that.
I... am not sure where I were going with that post at all. I'm kinda confused right now.
Replies from: chaosmosis
↑ comment by chaosmosis · 2012-04-13T15:30:52.637Z · LW(p) · GW(p)
Upvoted because the typo makes this post amusing.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-21T00:12:13.675Z · LW(p) · GW(p)
Apply for 'can work part-time but would not be willing to relocate.' Or 'volunteer', I guess.
comment by Grognor · 2012-07-21T20:55:54.093Z · LW(p) · GW(p)
The "rationality" link in the "Bonuses" section has become broken.
comment by Bugmaster · 2012-01-25T04:50:48.807Z · LW(p) · GW(p)
From the booklet at link #3:
If you’re going to eat sinful foods, eat high-quality chocolate rather than potato chips—don’t waste calories.
But what if I enjoy potato chips way more than any chocolate? This piece of advice sounds weird on all kinds of levels...
Replies from: free_rip, Kevin
↑ comment by free_rip · 2012-01-26T04:56:38.021Z · LW(p) · GW(p)
If you like chips better, eat chips. The rule here is 'don't waste calories' - not 'eat chocolate rather than chips.'
I.e., instead of buying the cheap stuff you don't like as much, buy the expensive stuff, because it's costing you more than money - it's costing calories as well - so you should get the most utility out of it you can for the minimum calories. If you happen to like the cheap stuff more per calorie, then eat that. I always feel satisfied sooner with high-quality chocolates than with chips, and I assume that's true for most people, but it won't be for everyone.
comment by beoShaffer · 2012-01-25T04:20:05.701Z · LW(p) · GW(p)
About how much detail are you looking for in the free response sections of the applications?
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2012-01-26T20:34:39.869Z · LW(p) · GW(p)
More detail is better; but sending in an application today is also better, so don't sweat composition too much.
Replies from: beoShaffer
↑ comment by beoShaffer · 2012-01-26T22:06:55.140Z · LW(p) · GW(p)
Right, sent. Hope you find it useful.
comment by lronbayes · 2012-01-20T04:25:26.910Z · LW(p) · GW(p)
I just wanted to say that the presentation is wonderful. It's a refreshing change from the usual posts and I would love to see it continue. In fact, you should stop the other posts, or let their authors post them on their own blogs or something. This type of content will make a difference in the world; the other kind is more like a well-researched "top 10 ways to lose weight" article, with limited tangible benefit.
More please. This is what lesswrong should be about!
Replies from: Swimmer963
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-20T12:51:17.947Z · LW(p) · GW(p)
Probably true. Still, I think the number of people who would spend hours of their time reading LW would decrease if they were disallowed from posting. And EY may be too busy now to write posts like this every week.
Replies from: lronbayes
↑ comment by lronbayes · 2012-01-20T23:44:32.174Z · LW(p) · GW(p)
I think that people keep subscriptions here because of posts like these, and I know that I certainly do not recommend the site to others because of the low quality of some of the other authors. However, if the content were similar to this, it would be a different story.