Open Thread: August 2009

post by taw · 2009-08-01T15:06:40.211Z · LW · GW · Legacy · 193 comments

Contents

193 comments

Here's our place to discuss Less Wrong topics that have not appeared in recent posts. If something gets a lot of discussion feel free to convert it into an independent post.

193 comments

Comments sorted by top scores.

comment by Will_Euler · 2009-08-02T06:38:05.933Z · LW(p) · GW(p)

While I find I have benefitted a great deal from reading posts on OB/LW, I also feel that, given the intellectual abilities of the people involved, the site does not function as an optimally effective way to acquire the art of rationality. I agree that the wiki is a good step in the right direction, but if one of the main goals of LW is to train people to think rationally, I think LW could do more to provide resources for allowing people to bootstrap themselves up from wherever they are to master levels of rationality.

So I ask: What are the optimal software, methods, educational tools, problem sets, etc. that the community could provide to help people notice and root out the biases operating in their thinking? The answer may be resources already extant, but I have a proposal.

Despite being a regular reader of OB/LW, I still feel like a novice at the art of rationality. I realize that contributing one's ideas is an effective way to correct one's thinking, but I often feel as though I have all these intellectual sticking points which could be rooted out quite efficiently--if only the proper tools were available. As far as my own learning methods go, assuming a realistic application of current technology, I would love something like the following:

An interactive test (calibrated to respond to the learner's demonstrated level of ability, similar to the GRE) with a set of 1000+ problems, wherein I could detect the biases operating in my thinking as I approach given questions and problems. Using such a technique, I believe I could train myself up to the point where I could more closely approximate what I remember Eliezer somewhere saying is going on when he approaches an argument: his brain is cycling through possible biases almost as automatically as it is controlling his autonomic nervous system.
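
A minimal sketch of what the adaptive part might look like, assuming a simple staircase scheme rather than the item-response models real adaptive tests use; the question bank and the simulated learner are hypothetical placeholders, not anything proposed above:

```python
import random

# Hypothetical question bank: each item has a difficulty from 1 (easy) to 10 (hard).
QUESTIONS = [{"difficulty": random.randint(1, 10), "id": i} for i in range(1000)]

def ask(question, true_skill=6):
    """Stand-in for actually presenting a question: simulate a learner whose
    chance of answering correctly falls as difficulty exceeds their skill."""
    p_correct = 1 / (1 + 2 ** (question["difficulty"] - true_skill))
    return random.random() < p_correct

def adaptive_quiz(n_items=20):
    """Staircase adaptation: raise the difficulty estimate after a correct
    answer, lower it after a miss, and always serve the closest-matching item."""
    ability = 5  # start in the middle of the scale
    pool = list(QUESTIONS)
    for _ in range(n_items):
        question = min(pool, key=lambda q: abs(q["difficulty"] - ability))
        pool.remove(question)
        ability = min(10, ability + 1) if ask(question) else max(1, ability - 1)
    return ability

print(adaptive_quiz())  # settles near the simulated learner's skill level
```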

[In terms of convenience, an added bonus would be to be able to look at questions through one of the standard flashcard applications available on the iPhone (or other devices), so I could look at, say, a few (or a few dozen) questions whenever the urge struck me. I dream of such a tool someday even incorporating SuperMemo-type capabilities, wherein even experts are able to keep their knowledge fresh by having questions reappear based on optimal strategies for obviating long-term degradation of memories. I am interested in helping to develop such a learning tool.]

I welcome any input about how to proceed with such a plan. Although I am a PhD candidate/adjunct professor, I don’t know what the optimal technology for such a project would be. It does seem, though, that the technical demands necessary to get such a project off the ground need not be imposing.

Once such a project got off the ground, I believe the community could come together to provide effective questions and answers. As I see it, it would be neither necessary nor desirable for such a project to be created by a single person.

I believe there are many people for whom this project could be valuable. We might find that, were such a tool to be implemented, at the very least, it might raise the level of discourse on LW. Beyond that, who knows. Thanks for your suggestions.

Replies from: wedrifid, FrankAdamek
comment by wedrifid · 2009-08-02T07:15:21.425Z · LW(p) · GW(p)

Worth a top level post.

comment by FrankAdamek · 2009-08-02T17:38:26.915Z · LW(p) · GW(p)

I think this is a great idea.

I'm aware of one simple, pre-existing technology which uses spaced repetition (items you remember or "get correct" take longer to reappear, for optimal learning) and would be easy to use: flash card programs. There are a number of free ones out there; two of the best that I found are Mnemosyne and Anki. I have been using Anki for about half a year now for learning vocabulary (largely for the GRE) and am very happy with it; I wish I had discovered such programs earlier.

While they're pre-existing and easy to use (and to share and add "cards"), two imperfections stand out. First, I'm not aware of any functionality that lets you actually select an answer. You could look at the question and possible answers and then pick one mentally, but the "answer side" couldn't be customized to the selection you made. Secondly, you of course wouldn't be able to calibrate the questions to your level as the program won't know what you answered. You're able to select to repeat an item very soon, or after short, medium or long intervals (relative to how many times you've answered correctly, by your own scoring), but it's not quite the same.

I might be wrong, and some existing flash card program might allow for the selection of answers. Or, perhaps more promising: Anki is open-source, so with only a bit of work we could build a quiz version.

Replies from: gwern
comment by gwern · 2009-08-03T13:47:39.432Z · LW(p) · GW(p)

First, I'm not aware of any functionality that lets you actually select an answer. You could look at the question and possible answers and then pick one mentally, but the "answer side" couldn't be customized to the selection you made. Secondly, you of course wouldn't be able to calibrate the questions to your level as the program won't know what you answered. You're able to select to repeat an item very soon, or after short, medium or long intervals (relative to how many times you've answered correctly, by your own scoring), but it's not quite the same.

I don't understand. How do pseudo-multiple choice cards do any worse than a 'genuine' multiple-choice? And what do you mean by calibration? The calibration is done by ease of remembering, same as any card. Nothing stops you from saying (for Mnemosyne), 'if I get within 5% of the correct value, I'll set this at 5; within 10%, 4; within 20%, 3; etc.'
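
A minimal sketch of that self-grading rule for numeric answers. The thresholds are the ones mentioned above; what happens beyond 20% error, and the function name itself, are assumptions for illustration:

```python
def grade(answer, correct):
    """Map relative error to a 0-5 spaced-repetition grade, per the rule above."""
    error = abs(answer - correct) / abs(correct)
    if error <= 0.05:
        return 5
    if error <= 0.10:
        return 4
    if error <= 0.20:
        return 3
    return 1  # badly off (assumed): schedule the card to come back soon

print(grade(96, 100))   # within 5%  -> 5
print(grade(115, 100))  # within 20% -> 3
```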

Replies from: FrankAdamek
comment by FrankAdamek · 2009-08-03T20:14:48.874Z · LW(p) · GW(p)

It might well work to one's satisfaction. What I meant by calibration is that it wouldn't be able to give you a different new question based on what you answered; whether you get this question right or wrong, the series of questions awaiting you afterwards is exactly the same (unlike the GRE, for example). And if the answer were numeric, you could use an algorithm for when to repeat the card, yes. Offhand, I had imagined questions with discrete answers that had little distance relation.

Replies from: gwern
comment by gwern · 2009-08-04T02:22:53.204Z · LW(p) · GW(p)

OK... from the sound of it, it isn't really a good application for SRS systems. (They're focused on data, not skills - and thinking through biased problems would seem to be a skill, and something where you don't want to memorize the answer!)

However, probably one could still do something. Mnemosyne 2.0 is slated to have extensible card types; I've proposed a card type which will be an arbitrary piece of Python code (since it's running in an interpreter anyway) outputting a question and an answer.

My example was generating random questions to learn multiplication (in pseudocode, the card would look like 'x = getRandom(); y = getRandom(); question = print x "*" y; answer = print x*y;'), but I think you could also write code that generated biased questions. For example, one could test basic probability by generating 3 random choices: 'Susan is a lawyer', 'Susan is a lawyer but not a sledge-racer', 'Susan is a lawyer from Indiana', and seeing whether the user falls prey to the conjunction fallacy.

(As a single card, it would get pushed out to the future by the SRS algorithm pretty quickly; but to get around this you could just create 5 or 10 such cards.)
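
A sketch of what such a generated card could look like in plain Python. This is not the actual Mnemosyne plugin interface (which had not been finalized); it just produces a question/answer pair the way the proposed card type would, with made-up option lists:

```python
import random

PROFESSIONS = ["a lawyer", "a teacher", "an engineer"]
EXTRAS = ["a sledge-racer", "from Indiana", "an amateur astronomer"]

def conjunction_card():
    """Generate one conjunction-fallacy question and its answer."""
    profession = random.choice(PROFESSIONS)
    extra = random.choice(EXTRAS)
    options = {
        "a": f"Susan is {profession}.",
        "b": f"Susan is {profession} and {extra}.",
        "c": f"Susan is {profession} but not {extra}.",
    }
    question = "Which statement is most probable?\n" + "\n".join(
        f"  ({key}) {text}" for key, text in options.items()
    )
    answer = ("(a) - a conjunction can never be more probable than "
              "either of its conjuncts.")
    return question, answer

q, a = conjunction_card()
print(q)
print(a)
```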

comment by Scott Alexander (Yvain) · 2009-08-08T08:23:03.805Z · LW(p) · GW(p)

The writer of this essay (seen on Reddit) is a true practical rationalist and a role model for all of us.

It's not just because she made a good decision and didn't get emotionally worked up. She was able to look behind the human level and all its status and blame games, see her husband as a victim of impersonal mental forces (I don't know if she knows evo psych, but she certainly has an intuitive grasp of some of the consequences), and use her understanding to get what she wants. And she does it not in a manipulative or evil kind of way, but out of love and a desire to hold her family together.

Replies from: cousin_it
comment by cousin_it · 2009-08-10T21:11:54.361Z · LW(p) · GW(p)

It was Thanksgiving dinner that sealed it. My husband bowed his head humbly and said, “I’m thankful for my family.”

He was back.

And I saw what had been missing: pride. He’d lost pride in himself.

If I were that husband, I'd have tried to save my pride even if it meant abandoning the family, because once your pride is taken from you, it's only a matter of time before all the other good things in your life get taken as well.

Replies from: arundelo
comment by arundelo · 2009-08-10T22:17:33.420Z · LW(p) · GW(p)

The author believes that her husband lost pride not for a good reason, but because "[m]aybe that’s what happens when our egos take a hit in midlife and we realize we’re not as young and golden anymore." Do you disagree with her here? If not, how would getting a divorce have been a better solution than realizing that his wife was not the problem?

Replies from: cousin_it
comment by cousin_it · 2009-08-10T22:39:15.522Z · LW(p) · GW(p)

IMO the author is correct in her assessment. (Also for the record, I consider her actions admirable.)

I'm not saying that getting a divorce is necessarily preferable to staying in such a situation; only that I'd have done it if I saw absolutely no other way to regain pride. There are likely ways to regain pride that don't involve divorce, like the author's first suggestion of "go trekking in Nepal", and of course I'd consider those carefully first.

comment by Emily · 2009-08-03T21:00:24.686Z · LW(p) · GW(p)

This has probably been requested before, and maybe I'm requesting it in the wrong place, but... Dear LW Powers-That-Be, any chance of a Preview facility for comments? It seems like I edit virtually every comment I make, straight after posting, for typos, missing words, etc. I find the input format awkward to proofread.

Replies from: gwern, PhilGoetz
comment by PhilGoetz · 2009-08-05T00:12:36.627Z · LW(p) · GW(p)

I don't feel the need, because I can re-edit immediately and easily.

comment by PeerInfinity · 2009-08-03T18:02:42.728Z · LW(p) · GW(p)

I recently had an idea that seemed interesting enough to post here: "Shut Up and Multiply!", the video game

The basic idea of this game is that before each level you are told some probabilities, and then when the level starts you need to use these probabilities in real time to achieve the best expected outcome in a given situation.

The first example I thought of is a level where people are drowning, and you need to choose who to save first, or possibly which method to use to try to save them, in order to maximize the total number of people saved.

Different levels could have different scenarios and objectives.

You are given time to examine the probabilities before the level starts, but once it starts you need to make your decisions in real-time.

Another twist: You see the actual outcome of your actions, randomly generated by the probability formulas. However, you aren't scored based on the actual outcome, but instead you are scored on the expected outcome of your actions, using the expected utility formula. I originally intended this to prevent people from getting a high score just by luck, or to prevent low scores caused by bad luck. Though I later realized that this doesn't actually fix the problem - you can still play repeatedly until, by luck, you happen to guess high-scoring actions. Still, I think it would be a good idea to show both scores.
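
A minimal sketch of the two scoring modes described above, assuming each possible action has just a success probability and a number of people saved on success; the actions and numbers are made up for illustration:

```python
import random

# Hypothetical rescue options for one level: probability of success
# and number of people saved if the attempt succeeds.
ACTIONS = {
    "throw rope to the nearest group": {"p": 0.9, "saved": 2},
    "swim out to the far group":       {"p": 0.4, "saved": 5},
}

def expected_score(action):
    """Score based on the expected outcome, as proposed."""
    return ACTIONS[action]["p"] * ACTIONS[action]["saved"]

def actual_score(action):
    """Score based on one randomly generated outcome, shown for comparison."""
    a = ACTIONS[action]
    return a["saved"] if random.random() < a["p"] else 0

choice = max(ACTIONS, key=expected_score)
print(choice, "-> expected:", expected_score(choice), "actual:", actual_score(choice))
```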

Other random ideas:

The first few levels should be a tutorial, showing how to do the calculations needed to maximize your expected score. Or there could be a separate tutorial mode. Or maybe the game itself is a bad idea, but the tutorial might still be useful.

During each level you need to make your decisions as quickly as possible - the longer you wait, the worse your score. Though maybe only some levels should be like this.

Later levels would offer more options to choose from, and more complex scenarios.

As much content as possible should be generated randomly, to prevent the game from being the same if you play it again.

Maybe the player could also be scored based on some calculations they do before the level starts? Or just integrate this with the tutorial?

And most importantly: Specifically design the game so that the player must learn to overcome some of the standard biases, in order to maximize their score. We should try to work in as many of these biases as possible. And also plenty of generally useful advice for working with probabilities.

So, now that I posted this idea, I'll let you decide what, if anything, we should do with this idea.

First, is this a stupid idea, that couldn't possibly work as described?

Or is it a good idea, but a low priority, compared to the other projects we're working on?

Should this be a group project? Does anyone volunteer to lead the project? Does anyone want to take on the project entirely on their own? Or should I lead the project, or work on it on my own?

What language would be best to implement this in? Flash? Java? PHP? Python? Something else?

I still haven't earned much karma on this site (only 1 point actually, when I originally posted this). Mainly because I don't expect to have anything original to say. And so I'm posting this here as a comment in the Open Thread, rather than making an actual post of it. If this comment gets enough upvotes for me to be able to make this into its own post, I plan to do so, unless someone objects. Or if anyone would like to take over the idea, please feel free to do so. I don't care about credit, and generally prefer to avoid it. Possibly by the flawed reasoning "credit = responsibility = blame", which I suppose might deserve a post of its own.

Replies from: Swimmer963, Tom_Talbot, DanielLC, Richard_Kennaway
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-02-05T14:48:15.372Z · LW(p) · GW(p)

The first example I thought of is a level where people are drowning, and you need to choose who to save first, or possibly which method to use to try to save them, in order to maximize the total number of people saved.

I do competitive lifeguarding (possibly the world's weirdest recreational activity) and there is actually an event like this, called Priority Assessment or PA. Your team walks in and finds an area of the pool with a bunch of people drowning (for a team of 4 rescuers, usually it's about 12 victims.) The scoresheets are set up so that you get more points for rescuing the victims who are more likely to survive–i.e. non-swimmers and injured swimmers have a much higher point multiplier than unconscious, submerged victims. PA involves a lot of strategy–it's not always the teams of fast swimmers that win, although that helps. There is an optimal strategy, which has to be worked out in advance because it's a two-minute event.

comment by Tom_Talbot · 2009-08-03T21:41:40.615Z · LW(p) · GW(p)

I've been thinking about educational games as well. The main problem, it seems to me, is that trying to make learning fun for someone who isn't already interested and motivated is a waste of time, because you're just trying to hide the teaching under a sugarcoating of computer game, and that never works. On the other hand, trying to make learning fun for someone who is already interested and motivated is pointless, because they already want to learn and the game just adds needless hassle like completing levels in order to reach the next piece of knowledge, or whatever game mechanic you're using. It's a pity, because I think of the way games like Portal build up complex puzzles from simpler ones and use the level itself to ask the player leading questions, like a kind of visual/spatial Socratic method, and I think there must be a way to use that to teach, especially mathematics, where visual/spatial metaphors could easily translate into mathematical metaphors... but I just can't come up with a concrete version of the idea that wouldn't be boring or stupid.

Lately I've been thinking that the fastest way to get to grips with a new subject is probably just to memorise big chunks of information without trying to understand it, using techniques like a memory palace and spaced learning programs like Mnemosyne and Anki, then think about what you've learned later, and insight might strike you. This would be especially effective if you combined it with a social precommitment to teach your knowledge to someone else, or to take part in a competitive quiz.

Replies from: None, gwern
comment by [deleted] · 2012-02-05T13:21:59.355Z · LW(p) · GW(p)

Lately I've been thinking that the fastest way to get to grips with a new subject is probably just to memorise big chunks of information without trying to understand it, using techniques like a memory palace and spaced learning programs like Mnemosyne and Anki, then think about what you've learned later, and insight might strike you.

If this is true and you aren't too much of an outlier, it would go a decent way to explain the failure of a good chunk of educational theory in the past few centuries.

Replies from: Anubhav
comment by Anubhav · 2012-02-05T15:56:37.538Z · LW(p) · GW(p)

If this is true and you aren't too much of an outlier, it would go a decent way to explain the failure of a good chunk of educational theory in the past few centuries.

The basic idea works for me, but I think Tom's simplifying it. It's not about "add to Anki with 0% understanding" vs "gain 100% understanding when first learning"; instead, it's more like "add to Anki with 80% understanding" vs "gain 90% understanding, having had to spend several hours for the extra 10%."

Far too many people tend to get hung up over that one thing in a chapter that they can't understand. More often than not, it's something they could understand perfectly if they just said "meh" and read a few pages ahead, but no; they just stay stuck on that one spot, thinking "wtf is this??!!"

Also, far too many people read books word-by-word when they could get essentially the same amount of information by skimming over the pages. Anki helps here, as it forces you to extract the relevant pieces of information from the text (or at least stuff that looks important), instead of letting you comfortably wade through a wall of text and believe you've understood it.

(It seems, on first glance, that these two paragraphs contradict each other, but they actually don't. The third one is talking about stuff that looks easy but actually isn't; the second one's talking about stuff that looks difficult, but wouldn't if you'd just read ahead.)

Also, you generally don't have to wait until later for insight into whatever you've failed to understand... More often than not, insight strikes even as you're adding the cards.

comment by gwern · 2012-02-05T17:56:59.878Z · LW(p) · GW(p)

then think about what you've learned later, and insight might strike you.

http://www.gwern.net/Spaced%20repetition#abstraction

comment by DanielLC · 2011-03-22T21:24:06.497Z · LW(p) · GW(p)

I'd suggest making it so you get scored by the number of people you save, but the game is long enough that luck doesn't make a difference.

I wonder if it would be a good idea to give it multiple scores. For example, lives saved, life-years saved, quality-adjusted life-years saved, etc.. This way, you won't have as many problems with people disagreeing with the scoring system.

Alternately, you could just have it so you could change the scoring mode in the options part. It would also act somewhat as a difficulty setting. It would get harder when you have to weigh a destitute child, who will live longer, vs. a middle-class adult, who will be happier.

comment by Richard_Kennaway · 2009-08-04T09:26:15.151Z · LW(p) · GW(p)

Reading that, it occurred to me that in all the computer games I've played, it is possible to totally succeed. Kill every monster, pick up every piece of treasure, solve every puzzle, gain the perfect score. (I have not actually played all that many, but I also read some of the games press and it seems to be the usual case.) Even the old arcade games fit the pattern: you can't win them, only endure as long as possible, but until then your goal is to impeccably handle every challenge that comes at you.

Are there any games in which this is deliberately made impossible? For example, PeerInfinity's suggestion of trying to save drowning people, in a scenario that makes it impossible to save all of them (and which is varied every time you play the game, so you can't simply search out the optimal walk-through). Military operations in a city, fighting door-to-door, with innocent civilians everywhere whose lives matter in game terms.

Replies from: gwern
comment by gwern · 2009-08-04T10:32:50.767Z · LW(p) · GW(p)

I'm not sure the suggestion of a game in which one cannot 'get the top score' makes sense. It seems contradictory - 'is there an optimal path through the game which is not the optimal path through the game?'

Can you have games where the 'path' to a top score, the optimal play, varies from game to game? Sure. Not every game carries it to quite the extent of Nethack, but most do it to some extent. Non-random games like go or chess are generally the exception, and they can be trivially randomized. But each specific game can be seen as ultimately deterministic: given the output of the random number generator this time, the ideal path is such-and-such. You, the player, may not know it, but that's your fault.

Can you have games which deceive the player about what the best possible score is? Sure. The original Donkey Kong promises that you can play indefinitely; but go too high and the game will always crash. The upper bound is not where one thought it was. Or there are political games in which one tries to prevent 9/11 (IIRC); of course, the game must sooner or later defeat you, like those old arcade games.

What would it mean for a game to have scores players couldn't reach? If, in Mario, there is code to paint a picture of a 1-UP on a corner of the screen surrounded by unbreakable blocks, then in what sense is the player missing out on 1k points (or whatever)? If a cut scene depicts a hostage dying, then how 'could' I have saved it? What if I can choose between a cut scene depicting hostage A dying, and hostage B? What if it's in-game, and there's a timer or rescuing A triggers the death of B?

Or what if there is a trove of 1 million points coded in, but the only access is to type in a true contradiction? Would the world record holder for Mario really be missing 1 million points off his score just because he can't come up with one? (Yes, there is a number equal to his score+1 million; but there's an infinite number of integers. What makes score+1 million special? All the lower numbers are special because it's possible to manipulate a given blob of code to display characters we interpret as those lower numbers; but we can't get it to emit any images of higher numbers, and that's that.)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-08-04T11:32:17.493Z · LW(p) · GW(p)

I'm not sure the suggestion of a game in which one cannot 'get the top score' makes sense.

It can make sense if the game does not have a one-dimensional score. World Of Warcraft, The Sims, Second Life, D&D... Life itself, for that matter.

Replies from: gwern
comment by gwern · 2009-08-04T16:48:13.428Z · LW(p) · GW(p)

If you choose one sub-game, then there's optimal play for that; if you switch between them, then there's still optimal play, it's just you need to weight it if there is no canonical ultimate score (just like with utilities).

If there aren't any scores or sense of progress at all, I question whether it's a game at all, or whether it merely bears a Wittgensteinian family resemblance. If you and I push around piled pieces on a Go board just for the pleasure of watching the piles build and collapse and form swirling patterns, we're doing something entertaining (maybe) but who would call it a game? To call life itself a game is to either commit a tired weak metaphor, or to drain the word game of all meaning.

comment by kpreid · 2009-08-02T18:43:31.735Z · LW(p) · GW(p)

This (IIRC) imported Overcoming Bias post has mangled text encoding ("shōnen anime than shōjo", including a high control character; the structure suggests that this is UTF-8 data reinterpreted as some other encoding, then converted to HTML character references). This suggests that there may be a general problem, in which case all the imported OB posts should be fixed en masse.

Replies from: thomblake, Psy-Kosh
comment by thomblake · 2009-08-06T02:36:02.921Z · LW(p) · GW(p)

Indeed, as Psy-Kosh suggests, that's a LW original.

Perhaps it would have helped if Eliezer had used waapuro-style romanization, or modified Hepburn, as is right and proper (when kana are not acceptable).

comment by Psy-Kosh · 2009-08-03T06:54:36.336Z · LW(p) · GW(p)

IIRC, that one isn't imported but is an LW original and was never on OB. I don't think I've seen such artifacts on any of the imported posts that I've looked at anyways.

Replies from: None
comment by [deleted] · 2009-08-03T08:39:19.061Z · LW(p) · GW(p)

The built-in editor for top-level posts is a little buggy. The first time I tried to edit a draft, it mangled the <pre> sections. From then on, I re-pasted every edit.

comment by Tom_Talbot · 2009-08-02T18:10:30.418Z · LW(p) · GW(p)

This comment doesn't really go anywhere, just some vague thoughts on fun. I've been reading A Theory of Fun For Game Design. It's not very good, but it has some interesting bits (have you noticed that when you jump in different videogames, you stay in the air for the same length of time? Apparently game developers all converged on an air time that feels natural, by trial and error). At one point the author asserts that having to think things through consciously is boring, but learning and using unconscious skills is fun. So a novice chess player gets bored quickly having to think through all the moves, while an expert 'just sees' the right moves, and has fun. It made me think of the concept of flow and of Alan Kay's work on Squeak and Etoys, making learning more fun and intuitive with computers (particularly learning mathematics); I think it's called constructionist learning.

It does seem though that we don't have much of a theory of fun, most of the stuff we know we learn through trial and error. If we had a decent model of fun we might be able to make boring learning activities fun, which would help with motivation and akrasia and so on.

Replies from: Daniel_Burfoot, MendelSchmiedekamp
comment by Daniel_Burfoot · 2009-08-03T14:46:05.725Z · LW(p) · GW(p)

I think Flow is one of the most important ideas to have come out of psychology. My hypothesis for why it's not more widely known is that the creator's name is so difficult to spell and pronounce.

My belief is that the learning part of your brain sends a signal to the decision-making part, when the former is experiencing a type of stimulus that is highly learnable. That signal is treated by the decision making part in the same way as a more typically pleasant signal (food, sex, etc) would be. Flow is thus an evolutionary adaptation that makes us seek experiences that help us learn more rapidly (the underlying assumption being that not all stimuli are equally learnable).

I think Flow is pretty good as a theory of fun, or as a theory of fun-from-learning. Flow is the best way to learn. The problem is that not all ideas can be learned in a way that meets the Flow criteria (rapid feedback, ability to experiment, clear goals, challenge keyed to ability level). So the interesting questions in my view are how to rephrase learning problems in such a way that one can enter Flow states when approaching those problems.

Replies from: Eliezer_Yudkowsky, CannibalSmith
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-03T16:58:01.463Z · LW(p) · GW(p)

I think Flow is one of the most important ideas to have come out of psychology. My hypothesis for why it's not more widely known is that the creator's name is so difficult to spell and pronounce.

It's a sad comment on academia and humanity that this hypothesis is not the least bit implausible.

comment by CannibalSmith · 2009-08-04T13:11:01.572Z · LW(p) · GW(p)

We totally need an article about Flow. Who's up for writing one?

comment by MendelSchmiedekamp · 2009-08-02T22:44:56.259Z · LW(p) · GW(p)

A few years ago I had developed a theory of game playing and low pressure social group interaction which starts at a similar place as Koster's. I was able to take that starting point about play and patterns and produce empirically testable hypotheses with formal mathematical models of what is happening during play.

And then I stopped working on it because I couldn't seem to get across the concept that learning and fun might be related well enough. Now that I've had a chance to read his book, I might have to reconsider.

comment by Risto_Saarelma · 2009-08-04T10:35:03.970Z · LW(p) · GW(p)

The Unpleasant Truth Party Game

I wanted to make this idea a new post, but apparently I need karma for that. So I'll just put it here:

The aim is to come up with sentences that are informative, true and maximally offensive. Each of the participants comes up with a sentence. The other participants rate the sentence for two values, how offensive it is on a scale from 0 (perfectly inoffensive) to 1 (the most unspeakable thing imaginable), and how informative it is from 0 (complete gibberish or an utterly obvious untruth) to 1 (immensely precise and true beyond question). As with any real-world probabilities, exact 0 and 1 should probably be avoided, but anything arbitrarily close to them is fair.

Each sentence is scored by its offensiveness score Q and its truthfulness score P. The total score of the sentence is P * Q. This will give a higher score the more the statement is both true and offensive.

Coming up with absolute probabilities and calculating the score formula might be a bit hard for a tabletop game. A variation could have the players just order the sentences on offensiveness and truthfulness tracks, assign 1 to the top item on each track, 2 to the next and so on, and multiply the two values for each sentence. In this variant, the lowest score wins.

In the ordering game, getting a good position on either track should beat an average position: 4 × 2 = 8 < 3 × 3 = 9.
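
A minimal sketch of both scoring variants; the sentences and ratings are made-up placeholders:

```python
# Variant 1: each sentence gets an offensiveness Q and truthfulness P in (0, 1);
# the score is P * Q, and the highest score wins.
ratings = {
    "sentence A": {"P": 0.9, "Q": 0.3},
    "sentence B": {"P": 0.6, "Q": 0.8},
}
scores = {s: r["P"] * r["Q"] for s, r in ratings.items()}
print(max(scores, key=scores.get), scores)

# Variant 2: rank each sentence on each track (1 = best), multiply the ranks,
# and the lowest product wins; a top spot on one track beats two average spots.
ranks = {
    "sentence A": {"truth_rank": 1, "offense_rank": 4},
    "sentence B": {"truth_rank": 3, "offense_rank": 3},
}
products = {s: r["truth_rank"] * r["offense_rank"] for s, r in ranks.items()}
print(min(products, key=products.get), products)  # 1*4 = 4 beats 3*3 = 9
```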

Could this be made into an actually playable game? How many sessions could you play and still have a social circle?

Replies from: thomblake, gwern
comment by thomblake · 2009-08-04T14:03:29.455Z · LW(p) · GW(p)

meh. The "say something maximally offensive" game is nothing new, and I'm not sure there's a lot to be gained here.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-05T00:10:45.308Z · LW(p) · GW(p)

It isn't just shock value; it isn't just "say something maximally offensive".

comment by gwern · 2009-08-04T10:43:43.932Z · LW(p) · GW(p)

Could this be made into an actually playable game? How many sessions could you play and still have a social circle?

Once it starts getting personal, it's all over. A game I used to play with my friends was just debating random propositions (I once defended, with considerable success, the proposition that puppies are tender and delicious). I think this was useful simply to see what pleased the onlookers the most, and there was little chance that someone might say something like 'John's acne is truly grotesque'. How many non-personal, awful truths does anyone know, or could people agree on?

comment by Tom_Talbot · 2009-08-02T17:41:24.813Z · LW(p) · GW(p)

Imagine you find a magic lamp. You polish it and, as expected, a genie pops out. However, it's a special kind of genie and instead of offering you three wishes it offers to make you an expert in anything, equal to the greatest mind working in that field today, instantly and with no effort on your part. You only get to choose one subject area, with "subject area" defined as anything offered as a degree by a respectable university. Also if you try to trick the genie he'll kick you in the nads*.

So if you could learn anything, what would you learn?

*This example is in no way intended to imply that women are less worthy of the right to be attacked by genies. Neither is it intended to imply that there could never be a female genie. That would be stupid. Where else could baby genies come from?

Replies from: None, Eliezer_Yudkowsky, JamesAndrix, Nanani, CronoDAS, Eliezer_Yudkowsky, Yvain, spriteless, None, Psy-Kosh, Alicorn
comment by [deleted] · 2009-08-02T18:57:07.134Z · LW(p) · GW(p)

Mathematics

It's the foundation for everything else I want to learn. I don't know why I didn't major in it - other than concerns about a paycheck.

Replies from: anonym, gwern, Tom_Talbot
comment by anonym · 2009-08-04T06:41:15.534Z · LW(p) · GW(p)

The nice thing about mathematics is that you can easily do it outside of school and independently, and when you do it as a mature adult, you do it because you love it and for no other reason. You are free to use better texts than you could as a student, so if you want to brush up on calculus, you can use Spivak or Courant rather than being forced to use low-quality texts that are optimized for the convenience of the professor rather than the insight of the student. There are also so many learning resources available now that weren't available when I was a student -- things like wikipedia, planet math, and physicsforums.com, not to mention software like Octave and Sage.

comment by gwern · 2009-08-04T05:12:49.459Z · LW(p) · GW(p)

I'd like to add that one of the nice things about mathematics as a choice is that you can avoid credentialing issues. I'm sure we've all read Hanson on how most of the value of a college degree is in that the college is certifying your abilities, and stamping you.

If you chose world-class ability in economics/financial trading, say, and you are a poor student, then what are you going to do with it? You can't make a killing on the market to prove your abilities; you can't go work as an intern for a firm to prove your knowledge, etc.

Similarly with genetics. If I suddenly gained world-class genetic knowledge, I cannot walk up to Cold Spring Harbor and ask them to let me use some multi-million dollar equipment for a year because I have this awesome bit of research I'd like to do. I simply don't have any proof that I'm not a random bozo who has memorized a bunch of textbooks and papers. I'd have to get lucky and convince a professor or somebody to take me on as an assistant and slowly build up my credentials until I can do the bit of research that will irrefutably establish me as a leading luminary.

Or how about physics? If I specialize in experimental or practical physics, I have the same chicken-and-egg problem; if I specialize in theoretical, then I run the risk of simply being ignored, or written off as a crank (and the better my contribution, the more likely I am to be seen as a crank!).

But with mathematics, I can just crank out a bunch of theorems and send in a paper. If people are still unconvinced, being a mathematical genius, I can just formalize it and send in a Coq/Isabelle/Twelf file consisting solely of the proof.

Replies from: anonym
comment by anonym · 2009-08-04T06:26:22.367Z · LW(p) · GW(p)

While mathematics certainly appears to me to be more of a meritocracy than the sciences, it's still the case that the notion of proof has changed over time -- and continues to change (witness Coq and friends) --, as have standards of rigor and what counts as mathematics. There are social and other non-mathematical reasons that influence how and why some ideas are accepted while others are rejected only to be accepted later, and vice versa.

It's an interesting question whether this will always be the case or if it will converge on something approaching unanimously accepted truth and aesthetic criteria. Personally, I think mathematics is intrinsically an artistic endeavor and that the aesthetic aspect of it will never disappear. And where there is aesthetics, there is also politics and other sausage-making activities...

Replies from: Richard_Kennaway, gwern
comment by Richard_Kennaway · 2009-08-04T08:40:39.663Z · LW(p) · GW(p)

While mathematics certainly appears to me to be more of a meritocracy than the sciences, it's still the case that the notion of proof has changed over time -- and continues to change (witness Coq and friends) --, as have standards of rigor and what counts as mathematics.

The gold standard of what is a proof and what is not was achieved with the first-order predicate calculus a century ago and has not changed since. Leibniz' dream has been realised in this area. However, no-one troubles to explicitly use the perfect language of mathematical proof and nothing else, except when the act of doing so is the point. It is enough to be able to speak it, and thereafter to use its idioms to the extent necessary to clearly communicate one's ideas.

On the other hand, what proofs or theorems mathematicians find important or interesting will always be changing.

comment by gwern · 2009-08-04T07:05:17.988Z · LW(p) · GW(p)

I don't really think the question is whether mathematics is more meritocratic - it's an economic question of credentialing. You need credentialing when you cannot cheaply verify performance. If I had a personal LHC and wrote a paper based on its results, I don't think anyone would care too much about whether I have 2 PhDs or just a GED - the particle physicists would accept it. But of course, nobody has a personal LHC.

With mathematics, with formal machine-checkable proofs, the cost of verification is about as low as possible. How long does it take to load a Coq proof and check it? A second or two? Then all someone needs to do is take a look at my few premises; either the premises are dodgy (which should be obvious), or they're common & acceptable (in which case they know I'm a math genius), or I'm exploiting a Coq flaw (in which case I'm also a math genius). Once they rule out #1, I'm golden and can begin turning the genie's gift to good account.

Replies from: anonym
comment by anonym · 2009-08-04T07:35:29.131Z · LW(p) · GW(p)

By meritocracy, I meant what you explain by credentialing: the idea that the work alone is absolutely sufficient to establish itself as genius or crackpottery or obvious or uninteresting or whatever, that who you are, who you know, where you went to school and who your advisors were, which conferences you've presented at, the time and culture in which you find yourself, whether you're working in a trendy sub-discipline, etc., that all that is irrelevant.

How much of mathematics is machine-checkable now? My (possibly mistaken) understanding was that even the optimists didn't expect most of existing mathematics for decades at least. And how will we formalize the new branches of mathematics that have yet to be invented? They won't spring forth fully formed as Coq proofs. Instead, they'll be established person-to-person at the whiteboard, explained in coffee shops and over chinese food in between workshop sessions. And much, much later, somebody will formalize the radically revised descendant of the original proof, when the cutting edge has moved on.

I'll know you're right and I'm wrong if I ever begin to hear regular announcements of important new theorems being given in machine-checkable format by unaffiliated non-professionals and their being lauded quickly by the professionals. And that is the easier task, since it is the creation of new branches and the abstraction and merging of seemingly unrelated or only distantly related branches that is the heart of mathematics, and that seems even less likely to be able to be submitted to a theorem prover in the foreseeable future.

Replies from: gwern
comment by gwern · 2010-10-10T02:22:07.825Z · LW(p) · GW(p)

How much of mathematics is machine-checkable now?

I'm not sure how one would measure that. The Metamath project claims over 8k proofs, starting with ZFC set theory. I would guess that has formalized quite a bit.

I'll know you're right and I'm wrong if I ever begin to hear regular announcements of important new theorems being given in machine-checkable format by unaffiliated non-professionals and their being lauded quickly by the professionals.

I think that only follows if genius outsiders really do need to break into mathematics. Most math is at the point where outsiders can't do Fields-level work without becoming insiders in the process. Consider Perelman with the Poincaré conjecture - he sounds like an outsider, but if you look at his biography he was an insider (even just through his mother!).

comment by Tom_Talbot · 2009-08-03T22:56:30.449Z · LW(p) · GW(p)

This is what I thought everyone was going to say. I don't see why you'd be concerned about the paycheck, though; a strong mathematics background could land you a job as a banker or trader or something. But looking at your upvotes, it seems like plenty of people agree with you.

My next question would be what you'd like to have a basic introduction to. Plenty of LW posts tend to assume a grounding in subjects like maths, economics or philosophy - which is fine, this is a community for informed people - but it probably shrinks LW's audience somewhat, and certainly shrinks the pool of people who are able to understand all the posts. We probably miss this because nobody's going to jump into the middle of a thread and say, "I lack the education to understand this," especially not a casual reader.

Replies from: None
comment by [deleted] · 2009-08-04T00:37:40.555Z · LW(p) · GW(p)

My upvotes are probably due to the fact that I said mathematics, rather than any agreement concerning my potential lack of paycheck. I know that a mathematics background could supply me a paycheck at this point in my life, but I was urged against it by some other people when I was choosing my major.

Basic introduction to? Is this in addition to the expertise I got from the genie, or as if the genie were only offering me a basic introduction? Do I only get to choose one? Gee, that's hard. I'm pretty much working on having a basic introduction to everything already. So, given my existing basic introductions... I think I'd like to get a basic introduction to quantum physics... but that's kind of cheating because I'd have to know a lot of physics and mathematics in the basic intro. I choose that one because I want to know it for purely vain reasons, and it would be nice to save the time of learning it for more "useful" studies.

This blog definitely is going to appeal to a minority of people. I personally do not have the proper education to follow the bayesian/frequentist debate, though I want to hear about it. I think that the healthy practice of linking to information is fantastic, as well as the lesswrong wiki. That way if you know what the person is talking about, you don't have to follow the link, but if you need to learn, it's right there at your fingertips.

Edit: Oh right, and I can't contribute to quantum physics discussions very well either.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-03T06:15:44.155Z · LW(p) · GW(p)

"subject area" defined as anything offered as a degree by a respectable university

Hard to choose between Science and Math, but I'll take Math.

Hey, B.Sc is a degree, right?

comment by JamesAndrix · 2009-08-03T06:12:35.264Z · LW(p) · GW(p)

Marketing

comment by Nanani · 2009-08-03T00:55:39.718Z · LW(p) · GW(p)

Biology, specifically brain-related neurosci.

I never could get far studying it because of the immense squick factor, but if I could just KNOW all of it via genie, the squick ought to go away because then it'd be just so much brain bits. Kind of like how intimidatingly complex and austere symbols become just regular Greek letters after learning to read math.

comment by CronoDAS · 2009-08-04T01:09:29.861Z · LW(p) · GW(p)

I want to know foreign languages, especially Japanese, but I find them much harder to learn than other things, due to the sheer amount of brute force memorization required to learn vocabulary.

The other thing I would consider is this.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-03T01:48:09.607Z · LW(p) · GW(p)

This example is in no way intended to imply that women are less worthy of the right to be attacked by genies. Neither is it intended to imply that there could never be a female genie. That would be stupid. Where else could baby genies come from?

You're solving the wrong problem by including this asterisk. It's easy to just call the genie "it" - which you already did above - and pick a different action - "rip off your arms" would work as well.

Replies from: Tom_Talbot, CannibalSmith, CronoDAS
comment by Tom_Talbot · 2009-08-03T22:01:45.514Z · LW(p) · GW(p)

I read this and at first I was like, "Damn! Not only did my anti-sexism plan fail, it made me even more sexist!" but then I was all, "No way! I'm going to find a bunch of evidence that genies can't be neuter! That'll show 'em! Show all of them." but then I read the Wikipedia article and it goes, "The pre-Islamic Zoroastrian culture of ancient Persia believed in jaini/jahi, evil female spirits thought to spread diseases to people." and I was totally like, "God fucking damnit! That's like... sexism squared!"

Well you might have won this round, Yudkowsky. But you haven't seen the last of me!

comment by CannibalSmith · 2009-08-04T13:01:18.055Z · LW(p) · GW(p)

But that would be less funny.

comment by CronoDAS · 2009-08-03T02:41:55.084Z · LW(p) · GW(p)

Incidentally, being kicked in the crotch isn't exactly pleasant for women, either...

Replies from: Alicorn
comment by Alicorn · 2009-08-03T03:05:50.015Z · LW(p) · GW(p)

Yes, but he didn't say "crotch", he said "nads". Female gonads (ovaries) are internal, so we could be kicked in the nads in the same sense as it is possible to kick someone "in", say, the kidney. It's just not a traditional target.

Replies from: CronoDAS
comment by CronoDAS · 2009-08-04T01:03:39.253Z · LW(p) · GW(p)

::reads that again::

::investigates definitions of "nads"::

Wow.

I have just been owned.

Consider me extremely impressed. Having been soundly outmatched in the battle of nitpicking, I am hereby reduced to making fawning fanboy puppy dog noises.

::takes deep breath::

oh-my-god-i'm-not-worthy-can-I-have-your-autograph-will-you-marry-me-teach-me-oh-great-master-squee-etc-etc...

::runs out of breath::

Phew. I hope I got that out of my system. Let's see...

::still has the completely ridiculous urge to propose marriage::

Guess not.

::sighs::

I have now acquired yet another pointless Internet crush. Oh well, nothing to do but try to ignore it...

Replies from: Alicorn
comment by Alicorn · 2009-08-04T01:09:05.467Z · LW(p) · GW(p)

No, I will not marry you. I do, however, accept Internet crushes and encourage you to accordingly familiarize yourself with my works of fiction and tell all your friends about them. :)

You can have my autograph if you commission a work of art.

Replies from: CannibalSmith
comment by CannibalSmith · 2009-08-08T19:00:23.081Z · LW(p) · GW(p)

Hello, I went through the archive of your magical girl comic. I'm gonna keep my eye on it.

The way the premise is presented is nonsensical, but that's a-ok in the genre, and I suspect you just wanted to get through the setup quickly. Girls' publicity is a nice twist to the trope, and I hope you'll explore it thoroughly. I really like the tiny dragons - my favorite strip involves them. Oh, and the fact that the girls are not lawful stupid (an all-too-common disorder among magical girls) is a big, big plus.

On the flipside, I think you should work on backgrounds and perspective more. Especially Datekaln - painting its sky solid green doesn't do it justice. At least make a reusable texture, like you presumably did with Earth's sky.

Replies from: Alicorn
comment by Alicorn · 2009-08-09T05:09:04.617Z · LW(p) · GW(p)

Thanks for the feedback. Everybody loves the pagets and everybody loves that page - I should change the title to "Pagets Are Cute (and some silly humans sometimes do things)."

Backgrounds are very tedious and unrewarding to draw, so my progress on them is slow. I'll mess with possible simple textures for Datékaln's sky, though, since that's easy. (Earth's sky is just the Photoshop cloud filter.)

comment by Scott Alexander (Yvain) · 2009-08-04T19:40:51.947Z · LW(p) · GW(p)

For personal interest, neuroscience (and the genie would wave his wand, and I would be V. Ramachandran). For benefit to society, probably genetics (or do colleges offer degrees in AI?)

I'd also like to see if I could use the genie to answer one of the great questions of the ages. I guess it all depends on how the "expert" thing is implemented. For example, if the genie created a great expert in quantum mechanics, would the expert simply know and understand facts about quantum mechanics, or would they also be such an expert as to have the correct opinion on the Copenhagen vs. many worlds question? After all, Tom did say "an expert equivalent to the greatest mind today", and there are minds that are pretty sure they know the answer to that question, so the mind that has the correct opinion on it must be greater than an equivalent mind that doesn't. That means if I wake up and find myself believing Many Worlds, I have very strong evidence that Many Worlds is correct.

If I thought that plan would work, I'd probably choose Philosophy. I might get kicked in the 'nads, but for the chance to have genie-approved answers to all the great philosophical questions at once, it'd be worth it.

comment by spriteless · 2009-08-04T08:06:04.412Z · LW(p) · GW(p)

I'd pick neurology, assuming that doesn't cause my brain to implode.

comment by [deleted] · 2009-08-03T21:37:13.223Z · LW(p) · GW(p)

I would go for whatever could make me the most money with the pure skill that they teach in school. In most professions, it seems like you need either some talent at sales or self-promotion or just luck to be successful, in addition to whatever skill you supposedly have. I think that maybe being the world's greatest computer engineer or something like that would probably get you paid millions without having to do much other than be amazing at what you do.

My initial thinking was cardiac surgeon or something like that, but on further reflection I think that is about the worst choice possible. You have this amazing skill, but what do you do with it? Do you have to go to medical school and get easy As before you can get licensed to use it? That would really suck.

comment by Psy-Kosh · 2009-08-03T07:01:00.880Z · LW(p) · GW(p)

I'd have to agree with "math", given that the ability granted includes not just comprehensive knowledge, but extreme ability to make novel important and fundamental discoveries in the field.

(How is babby genie formed? (Sorry, couldn't resist))

comment by Alicorn · 2009-08-02T17:45:02.870Z · LW(p) · GW(p)

Does this apply only to theoretical expertise, or could I choose (say) vocal music and then totally win on American Idol?

Replies from: Tom_Talbot
comment by Tom_Talbot · 2009-08-02T17:50:13.439Z · LW(p) · GW(p)

I checked with the genie and he said fine. Not very rationalist-y of you, though.

Replies from: Alicorn
comment by Alicorn · 2009-08-02T17:59:53.828Z · LW(p) · GW(p)

I'm more likely to become massively rich by being a fantastic singer than I am to become massively rich by being a fantastic philosopher, or even a fantastic economist. My education-related values change some if I don't have to invest time or effort in acquiring the expertise. That said, I don't think I'll pick singing. I think I'll pick creative writing.

Replies from: gwern, JamesAndrix, Tom_Talbot
comment by gwern · 2009-08-02T19:49:48.240Z · LW(p) · GW(p)

I'm more likely to become massively rich by being a fantastic singer than I am to become massively rich by being a fantastic philosopher, or even a fantastic economist.

It's always fun to see what empirical facts might change one's assessment. Philosophers, ever since Thales at least, haven't been known for wealth - but are you sure becoming one of the greatest economists in the world wouldn't be likely to make you massively rich?

I'm sure we both agree that the average economist makes more than your average singer (and median too); but have you considered that superstar economists can still make more than superstar singers?

Michael Jackson is one of the wealthiest singers of all time (wealthiest?), yet he died with maybe 500 million USD in assets; there were hedge fund folks who made several times that in 2008 alone. And he doesn't hold a candle to Warren Buffett. Or consider Lawrence Summers. Despite a career largely spent in government or academia, his world-class status means that he can do things like pick up a >5 million USD a year salary working less than a day a week. We may argue that these kinds of financial bonuses are obscene and unfair, but economists and other financial types reap them nevertheless...

Replies from: Alicorn, None
comment by Alicorn · 2009-08-02T19:59:46.962Z · LW(p) · GW(p)

I'm sure that extraordinary expertise at economics would enable people with the right mindset to make large amounts of money, but the obvious avenues (e.g. spending all day trading stocks) would not interest me, and I'd be unlikely to value the money enough to put up with them in the quantity necessary to become massively rich. If I magically acquired extraordinary expertise at economics, I'd probably mess around with it until I had enough money to hand the reins to a less-skilled accountant to handle and invest for me and keep me in housing and groceries for the rest of my life. I'd be more comfortably upper-middle-class than rich. It's possible that my extraordinary expertise at economics would also inform me of fun ways to make money with it, but none spring immediately to my unskilled-at-economics mind.

comment by [deleted] · 2009-08-03T21:23:51.142Z · LW(p) · GW(p)

It's quite a leap to go from economist to hedge fund manager. Their skill sets are not at all the same. The best way to make bank if you are a brilliant economist is: 1) make a fundamental contribution to economics, especially related to finance; 2) win a Nobel Prize, or at least have your contribution adopted by industry; 3) get paid millions to "consult" or "advise" hedge fund managers who will use your name to attract investors and probably never ask you to do actual work.

Replies from: gwern
comment by gwern · 2009-08-04T01:29:40.382Z · LW(p) · GW(p)

It's quite a leap to go from economist to hedge fund manager.

Not for our genie!

EDIT: also, I think trading skills are covered under either economics or another college major, so the genie can give you them.

comment by JamesAndrix · 2009-08-03T06:09:13.437Z · LW(p) · GW(p)

Investment banking, maybe? I'm not sure of the degree for that, but it seems more likely to bring money, and likely to bring more money.

comment by Tom_Talbot · 2009-08-02T18:28:45.957Z · LW(p) · GW(p)

So you pick the area with the highest expected monetary payoff? I'm not sure that skills in singing or creative writing serve that end, since the competition is so intense and the selection process for successful singers and writers seems somewhat arbitrary and random.

I see what you mean about the amount of effort required changing which area you would pick, and that was part of what I was getting at. I wonder how many of us choose to study a particular subject because it's easier than the alternatives, then rationalise it later as what we really wanted. If effort wasn't a factor and you could have chosen to study anything, what would it have been? If we on Less Wrong find ways to make learning easier, what will you do?

Replies from: Alicorn
comment by Alicorn · 2009-08-02T18:40:03.439Z · LW(p) · GW(p)

Creative writing might not serve that end, but it's hardly the longest of long shots, and moreover, writing creatively is something I enjoy, unlike doing math or working on hard science or whatever. So even if I don't wind up writing bestselling books and making a billion dollars, I can still have fun writing excellent books. It's a tradeoff between expected monetary payoff, and the enjoyability of the task to turn the skill into the payoff.

I'm studying philosophy because most of the time, studying philosophy is fun. It's not consistently easy, and it's not going to make me a lot of money now or later, but it entertains me. If effort wasn't a factor I could study, oh, medicine, and be a brilliant physician, or law and be a brilliant lawyer, but I don't expect that (even effort aside) I would enjoy the study or practice of those fields.

Replies from: Tom_Talbot
comment by Tom_Talbot · 2009-08-02T18:55:32.728Z · LW(p) · GW(p)

So you wouldn't pick instant expertise in philosophy because that would take the fun out of it. Do you think that if studying philosophy was easier, it would be less fun? I'm not convinced because no matter how much of an expert you are, there's still more to learn. The genie is offering you the chance to be at the cutting edge of your field.

Replies from: Alicorn
comment by Alicorn · 2009-08-02T19:08:59.916Z · LW(p) · GW(p)

So you wouldn't pick instant expertise in philosophy because that would take the fun out of it.

No. I'm saying that fun is my motivation for studying philosophy, because when I decide how to invest years of my life, I want to choose fun investments. Your genie opens up the options of choosing to (productively) invest directly in the practice, rather than the study, of various fields. There are fields that I think I would enjoy being an expert in that I would not enjoy the process of studying to become an expert in, especially when you consider that intrinsic talent/motivation/etc. might block me from acquiring expertise in some fields that the genie could make me brilliant at. Some of those fields might also net me money. Bypassing a potentially-unfun studying step makes several of them more appealing than philosophy.

Replies from: Tom_Talbot
comment by Tom_Talbot · 2009-08-02T19:28:11.010Z · LW(p) · GW(p)

OK, I see where you're coming from. Learning to play the violin is frustrating, but it's probably fun once you can do it.

So if we could find a way to make learning easier, hypothetically speaking, you would use that opportunity to be a better generalist rather than further specialising in your chosen area? That's interesting because specialists are usually better paid. I wonder if that's a common point of view.

LWers are generalists, in general. Most of us know some psychology, some economics, some philosophy, some programming and so on. But I wonder what Less Wrong would be like if we all specialised, while remaining united by the pursuit of rationality. I think Robin Hanson said something similar in that post where he compared us to survivalists, trying to learn everything and failing to reap the benefits of specialisation and cooperation.

Anyway sorry for rambling like this. I tend to use these open threads as an opportunity to think out loud, and nobody's told me to shut up yet so I just keep going.

Replies from: Alicorn
comment by Alicorn · 2009-08-02T19:38:55.031Z · LW(p) · GW(p)

If learning, in general, became easier for me, I would learn more, in general. I don't think I'd use it to do more philosophy; I think I'd use it to do the same amount of philosophy in less time.

If learning became a whole lot easier, I'd probably study foreign languages in my spare time. The ability to communicate in more languages would open up more learning potential than most other tasks.

comment by Tom_Talbot · 2009-08-02T17:23:33.668Z · LW(p) · GW(p)

On the subject of advice to novices, I wanted to share a bit I got out of Understanding Uncertainty. This is going to seem painfully simple to a seasoned Bayesian, but it's not meant for you. Rather, it's intended for someone who has never made a probability estimate before. Even once a person has learned about the Bayesian view of probability and understands what a probability estimate is, actually translating beliefs into numerical estimates can still seem weird and difficult.

The book's advice is to use the standard balls-in-an-urn model to get an intuitive sense of the probability of an event. Imagine an urn that contains fifty red balls and fifty white balls. If you imagine drawing a ball at random from that urn, you get an intuitive sense for an event that has fifty percent probability. Now either increase or decrease the number of red balls in the urn (while correspondingly altering the number of white balls so that the total number of balls still sums to one hundred) until the intuitive probability of drawing a red ball seems to match your intuitive probability of the event occurring. The number of red balls in the urn equals your (unexamined, uncorrected) probability estimate for the event.
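For anyone who'd rather play with this than just imagine it, here's a minimal sketch in Python (purely illustrative; the function name and trial counts are mine, not the book's):

    import random

    def draw_from_urn(red_balls, total=100, trials=20):
        # Simulate a short run of draws from an urn with red_balls red balls
        # out of total, so the number acquires a gut-level feel.
        return ["red" if random.random() < red_balls / total else "white"
                for _ in range(trials)]

    # Nudge red_balls up or down until a typical run of draws "feels" like the
    # event you're estimating; that count (out of 100) is your rough,
    # unexamined probability estimate.
    for red_balls in (10, 50, 90):
        print(red_balls, draw_from_urn(red_balls))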

Once you teach a person how to put numbers on their beliefs, you've helped them make a first step in overcoming bias, because numbers are easy to write down and check, and easy to communicate to other people. They can also begin to quantify their biases. Anyone can learn to repeat the phrase "The availability heuristic causes us to estimate what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples" (guessing the teacher's password), but it takes a rationalist to ask: how much, on average, does the availability heuristic reduce the accuracy of my beliefs? Where does it rank on the list of biases, in terms of the inaccuracy it causes?

Replies from: None
comment by [deleted] · 2009-08-02T19:14:28.851Z · LW(p) · GW(p)

Thanks, if this is what you're saying it is, it's something I've been looking for. :-)

comment by taw · 2009-08-01T18:54:15.667Z · LW(p) · GW(p)

A very common belief here is that most human behaviour is based on Paleolithic genes, and only trivial variations are cultural (memetic), coming from fresh genes, or from some other sources.

But how strong is the evidence of Paleogenes vs memes vs fresh genes (vs everything else)?

Fresh genes are easy to test - different populations would have different levels of such genes, so we could test for that.

An obvious problem with Paleogenes is that there aren't really that many genes to work with. Also, do we know of any genetic variations that alter these behaviours? If preference for large breasts was genetic, surely there might be a family somewhere with some mutation which would prefer small breasts. Do we have any evidence of that?

So I suspect memes might be much more important relative to Paleogenes than we tend to assume.

Replies from: CronoDAS, orthonormal, Nanani, None, Nick_Tarleton, timtyler, CronoDAS
comment by CronoDAS · 2009-08-02T07:32:19.451Z · LW(p) · GW(p)

I think we can fairly easily come up with examples of things that are regarded as attractive in some cultures and not others.

For example, tanned skin. Back in the "olden days" in Europe, pale skin was considered the ideal. The much-desired "fair maiden" in old tales is literally one with light-colored skin that is kept out of the sun so it doesn't tan. Today, in the U.S. at least, skin with a slightly bronze tan is often considered the ideal.

This may or may not have to do with social class. Prior to industrialization, lower class people would be tanned from working outside on farms, while higher class people (nobility, etc.) could stay inside and keep their skin nice and pale. Once poor people switched from working on farms to working in indoor factories, they, too, had pale skin, while the wealthier could afford to waste time sitting in the sun getting a tan. "Find signals of high status attractive" might be a genetically influenced trait (I'd be surprised if it weren't) but genes don't seem to determine exactly how people signal high status.

Plenty of behavior has genetic influences, but people learn an awful lot from their environment, too. When a dog is trained to roll over on command, is that a genetic behavior? If it is, then so is everything and it becomes a meaningless category.

comment by orthonormal · 2009-08-01T19:36:48.061Z · LW(p) · GW(p)

If preference for large breasts was genetic, surely there might be a family somewhere with some mutation which would prefer small breasts. Do we have any evidence of that?

Brain-coding phenomena like sexual preferences seem to be built from large collections of genes that are interconnected with other systems, such that there aren't many possible mutations that would undo the feature without wreaking havoc elsewhere in the phenotype as well.

In fact, the universality of such preferences across neurologically intact humans is evidence that they come from Paleogenes rather than memes or fresh genes, either of which can more easily be altered without deleterious effects elsewhere.

Replies from: taw
comment by taw · 2009-08-01T22:15:39.500Z · LW(p) · GW(p)

I'm not saying Paleogenes are not a possible explanation, but I haven't seen much in the way of such evidence. For example:

  • I don't think it's so common for people to go around the world and actually verify that people in a statistically significant number of virtually isolated tribes do have preferences for larger breasts etc. So universality is more postulated than actually empirically found. Even if you find extremely common behaviour, it can still be memetic, as a lot of memes are copied from parents to children. How many behaviours are empirically known to be universal?
  • Mutations to such genes would be non-lethal, and some would only mildly reduce inclusive fitness. We have plenty of genetic diseases in the population affecting more important genes. So how come we haven't discovered mutations in the wild that alter genes controlling this supposedly genetic behaviour?
  • We know very few genes obviously linked with behaviour, and saying it's coded by emergent interaction of multiple genes is just handwaving away the problem. There's a pretty low ceiling on how much can be straightforwardly coded this way, and I'd expect some serious evidence of a mechanism for complex genetically coded behaviour.
comment by Nanani · 2009-08-03T00:58:52.257Z · LW(p) · GW(p)

http://the10000yearexplosion.com/

The 10,000 Year Explosion shows very good evidence that this isn't quite true; many significant genetic adaptations are indeed far more recent, and have been developing faster since the end of the Paleolithic.

It's also an enjoyable read.

comment by [deleted] · 2009-08-01T23:47:52.369Z · LW(p) · GW(p)

A very common belief here is that most human behaviour is based on Paleolithic genes, and only trivial variations are cultural (memetic), coming from fresh genes, or from some other sources.

...

If preference for large breasts was genetic, surely there might be a family somewhere with some mutation which would prefer small breasts. Do we have any evidence of that?

Maybe there is also disagreement about what is and isn't a trivial variation.

comment by Nick_Tarleton · 2009-08-01T22:51:53.092Z · LW(p) · GW(p)

If preference for large breasts was genetic, surely there might be a family somewhere with some mutation which would prefer small breasts. Do we have any evidence of that?

On the other hand, do we have disconfirming evidence? (Would we expect to have noticed?)

Replies from: taw
comment by taw · 2009-08-01T23:53:24.036Z · LW(p) · GW(p)

All such evidence would be expected to come from different times or from isolated communities; today the vast majority of the world population lives in one connected memetic soup. Unfortunately I don't know enough about anthropology to give particularly convincing evidence.

A Wikipedia search suggests some cultures don't care much about breasts at all, which you can consider weak evidence against the Paleogenetic explanation.

Replies from: gwern
comment by gwern · 2009-08-04T05:20:42.134Z · LW(p) · GW(p)

A Wikipedia search suggests some cultures don't care much about breasts at all, which you can consider weak evidence against the Paleogenetic explanation.

Weak, yeah. After all, Westerners consider the face to be a great part of a person's sex appeal, and it's very important in sex (kissing, oral sex, etc.) - yet they don't cover it up. Do they not care?

What's really needed is data showing that breast size or proportions are uncorrelated with reproductive success, or at least with ratings of attractiveness.

comment by timtyler · 2009-08-01T21:17:00.488Z · LW(p) · GW(p)

Sure:

"All conventional theories of cultural evolution, of the origin of humans, and what makes us so different from other species. All other theories explaining the big brain, and language and tool use and all these things that make us unique, are based upon genes. Language must have been useful for the genes. Tool use must have enhanced our survival, mating and so on. It always comes back, as Richard Dawkins complained all that long time ago, it always comes back to genes.

The point of memetics is to say, "Oh no it doesn't." There are two replicators now on this planet. From the moment that our ancestors, perhaps two and a half million years ago or so, began imitating, there was a new copying process. Copying with variation and selection. A new replicator was let loose" [...] - Sue Blackmore.

Replies from: taw
comment by taw · 2009-08-01T22:23:29.471Z · LW(p) · GW(p)

There's very convincing evidence that the ability to use language is genetic, down to specific kinds of brain damage and specific kinds of genetic diseases that cause very particular types of language impairment. Language itself is memetically built on top of that.

I've never seen such evidence for any other kind of behaviour.

Replies from: timtyler
comment by timtyler · 2009-08-02T06:58:36.940Z · LW(p) · GW(p)

I am not sure what you mean - or how it is relevant. Plenty of behaviour has a genetic basis. Eating behaviour and sexual behaviour, for instance. If you look at all the reflexes and instincts out there, you will see that many types of behaviour have a genetic basis.

Even if everything was learned (the "blank slate" hypothesis) - so what? How would that be relevant to the idea of cultural inheritance being significant?

comment by CronoDAS · 2009-08-01T19:12:58.031Z · LW(p) · GW(p)

So I suspect memes might be much more important relative to Paleogenes than we tend to assume.

I agree with this.

comment by DonGeddis · 2009-08-08T00:44:31.337Z · LW(p) · GW(p)

I'm curious if Eliezer (or anyone else) has anything more to say about where the Born Probabilities come from. In that post, Eliezer wrote:

But what does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? [...] I don't know. It's an open problem. Try not to go funny in the head about it.

Fair enough. But around the same time, Eliezer suggested Drescher's book Good and Real, which I've been belatedly making my way through.

And then, on pages 150-151, I see that Drescher actually attempts to explain (derive?) the Born probabilities. He also says that we can "reach the same conclusion [...] by appeal to decision theory," and references Deutsch 1999 ("Quantum Theory of Probability and Decisions") and Wallace 2003 ("Quantum Probability and Decision Theory, Revisited").

My problem: I still don't get it. I loved Eliezer's commonsense explanation of QM and MWI. I'm looking for something at the same level, just as intuitive, for the Born probabilities.

Anyone willing and able to take on that challenge?

comment by jajvirta · 2009-08-04T19:07:38.694Z · LW(p) · GW(p)

Warning: this is pure speculation and might not make any sense at all. :-)

So, let's suppose PCT is by and large an accurate model of human behavior. Behavior, then, is a by-product of the difference between the reference signal and the perception signal. What we feel as doing is generated by first setting some high-level reference signal, which then unfolds as reference signals to control systems at lower levels, and so on, until it arrives at the muscular level.

This whole process takes a certain amount of time, especially when the reference signal is modified at a high level of the hierarchy. In contrast, when, say, the perception signal at the lowest level changes, this re-adjustment at the lowest level is a fast operation, because it only involves the control system at the bottom of the hierarchy. For example, we make unconscious corrections to our balance when we stand straight. The lowest level perception signals change, but they only affect the lowest level control systems.
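To make the timing claim concrete, here's a toy sketch in Python (the gains, update rule, and numbers are my own invented simplification, not anything from the PCT literature): a slow outer loop adjusts the reference signal it hands down to a fast inner loop.

    # Toy two-level control hierarchy: the higher system's accumulated output
    # serves as the reference signal for the lower system. The lower system
    # corrects quickly; the higher one re-adjusts slowly.

    def lower_level_action(reference, perception, gain=0.5):
        # Fast inner loop: act in proportion to the error at this level.
        return gain * (reference - perception)

    high_ref = 10.0     # high-level reference, e.g. "arm raised"
    low_ref = 0.0       # reference handed down to the lower system
    low_percept = 0.0   # what the lower system currently perceives

    for step in range(60):
        high_percept = low_percept                    # higher perception built from below
        low_ref += 0.05 * (high_ref - high_percept)   # slow outer re-adjustment
        low_percept += lower_level_action(low_ref, low_percept)  # fast inner correction

    print(round(low_percept, 2))  # approaches 10.0, but only gradually, over many steps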

Then we have the idea from Rodolfo Llinás that "willing" is the process where the brain predicts what will happen and then takes possession of that prediction. Take, for example, moving your hand. What happens in such a movement is that there is a premotor picture in your head of the hand moving, and when the hand actually moves, the brain generates the feeling of you moving the hand.

In the interview he describes an experiment which he did on himself, where he stimulated his own premotor [1] cortex so that his left leg moved outwards. When his cortex was stimulated and the leg moved outwards, he told his colleague that he had cheated. That is, when they stimulated the cortex, it was actually Rodolfo himself who moved the leg.

[1] I think he says "motor cortex" in the interview, but it is my understanding that if you stimulate the motor cortex directly, it generates movements that feel involuntary. But IANANS. (I Am Not A Neuroscientist.)

To prove that it was him who moved the leg, he told his colleague that he would move the leg inwards the next time they stimulated the cortex. So they stimulate the cortex, and he moves the leg outwards again. Seems like a good proof that it wasn't Rodolfo who moved the leg but the stimulation? Wrong. Sure, he said that he would move the leg inwards, but he decided to move it outwards anyhow.

They do the stimulation a dozen times, and sure enough the leg always moves outwards. But the sensation that he feels, each and every time, is that it is he who moves the leg. There is no difference between the sensation generated by the stimulation and the sensation of moving the leg by volition.

So when you move your arm, the brain generates a sensation of the self actively doing the movement. Translated back to PCT, the act of moving one's hand is to modify some high-level reference signal. What happens after modifying the reference signal is the propagation of the signal to control systems down the hierarchy, which is an automatic process and one that we can't directly interfere with. But this propagation of the reference signal (down to the lowest level, to generate muscle tensions) generates the sensation of us (the self) doing whatever happens during the process.

OK, now we're invited by Benjamin Libet to a test. We are asked to move our hand at an arbitrary time and report the time at which we felt we did it. To move our hand, according to PCT, we have to set some high-level reference signal.

So if we don't perceive any concrete sensation of setting the reference, and if it takes a certain amount of time to propagate the reference signal down the hierarchy, and if our brain indeed takes possession of the actions of the down-propagation and generates the sensation of us doing it, then we have an explanation of why brain activity lights up well before we feel like we decided to move the hand. The part of the process that our brain labels as the sensation of actually moving the hand happens at a later point in this process, and thus it is natural that there is brain activity before our feeling of deciding to move the hand.

Unfortunately, I don't know much about the actual details of these systems, so I'm ready to accept that this is all complete bollocks. But at least it was fun to think through. ;-)

Replies from: Nubulous, PhilGoetz
comment by Nubulous · 2009-09-02T04:38:10.596Z · LW(p) · GW(p)

I've only just heard of PCT, so I don't know if this is familiar to everyone already, or whether it's what the PCT people had in mind all along and I'm just the last to find out, but it seems to me that PCT explains, if not the how, then at least the why of consciousness. If all actions arise from errors against a model, then the upper layers of human decision-making would consist of a simulated person living in a simulated world, which is indeed what we seem to be.

comment by PhilGoetz · 2009-08-05T00:08:56.898Z · LW(p) · GW(p)

PCT?

Replies from: Cyan
comment by Cyan · 2009-08-05T00:20:27.674Z · LW(p) · GW(p)

PCT.

comment by spriteless · 2009-08-04T02:45:01.596Z · LW(p) · GW(p)

What strategies do you people who aren't me have to detect lies? And by 'people who aren't me' I mean verbal people.

In order to understand what people are saying, even to parse sentences, I have to build a bit of a model of their personality and motivation. This means I comprehend that someone is building themselves up before I even know what they want me to think highly of them for. The structure of the dark arts is visible before the contents of the message: repetition of 'facts' in the absence of evidence, comparing someone I don't like to someone they don't want me to like, intimidation in response to being accused of wrongdoing (Pavlov).

I tend to notice when people handle me, but I can't imagine how a verbal person thinks. That is my main defense, and I worry that I'll meet (have already met?) another like myself and have nothing else. How do you protect yourself from mental hijacking, those of you who have to work to do so?

Replies from: Jaffa_Cakes, Alicorn, thomblake, AdeleneDawner
comment by Jaffa_Cakes · 2009-08-06T14:55:36.630Z · LW(p) · GW(p)

Ask the person questions you know they will lie about and watch their body language very closely. Compare it with their body language when you know they are telling the truth or relaxed. Then when you see signs of the lying body language in future, probe further and see if you can uncover the lie.

My favourite way of doing this isn't even with deception. I use a bit of PUA-style material as follows:

"Hey, I've known you a while now, I reckon I can guess a few facts about you. Here's what I want you to do. I want you to come up with four facts about yourself, but one of them has to be a LIE. Tell me them in any order and I'll see if I can spot the lie. It will be fun, and you'll learn something about yourself!" etc. etc.

People love this kind of thing (because it's about themselves), and they love thinking that you have special powers or an intimate psychic connection with them. Of course, most people on LW would see it as a challenge to mislead me into choosing the wrong lie, but that's not usually what happens in my experience.

It doesn't matter if you don't spot the lie, because if you're paying attention to their body language throughout the whole conversation, you should pick up plenty of their tics and be able to associate them with particular emotions. You can invert the idea with anchoring too, if you know what you're doing. If you anchor a particular gesture or touch to when they are open and honest with you, then you can use it later when you want them to answer truthfully.

It's never a case of people having a single fixed truth-telling body language and a single fixed lying body language, but that when they're lying they subconsciously change something in how they appear. It's "spot the odd one out" which makes the lie easy to spot.

Then there's stuff like looking into a top corner when trying to remember something. If you ask someone three difficult memory questions and they look into the top left corner for two of them but then look off to the middle-lower right for the other, you can be pretty confident that the odd one out in this case is because they aren't even bothering to try to remember, but are immediately fabricating their answer.

I'm not sure if that was the kind of response you were after. I'm fairly new to these techniques, but I've already found them remarkably effective for cold reading. It's fun, you feed the person an input (question, statement, whatever), and watch very closely at the body language. Rinse and repeat, getting a better feel for their subconscious responses each time. Then you use these to your/their advantage. Ha!

I model motivations and personality too, but I use the body language tricks to speed it up.

comment by Alicorn · 2009-08-04T18:21:05.854Z · LW(p) · GW(p)

I originally wasn't going to reply because I'm not entirely sure if it's a good idea for other people to adopt my strategy, but my name has been uttered, so I'll give it a shot.

What strategies do you people who aren't me have to detect lies?

I have no strategy directly aimed at detecting lies. I notice when someone's statement seems to contradict something I already believe, and I notice when someone's statement seems just plain wacky. I tend not to believe those statements, unless the preexisting belief they contradict is equally unsupported (which means I don't care much about the subject and might as well be agreeable with whoever I'm talking to), or I have an extended friendship with and favorable insight into the ethics and intellect of the speaker (so I think they'd be especially unlikely to lie to me or be mistaken), or I seek additional information and find the statement confirmed by legitimate-looking other sources (which I do when I care about the subject a great deal, regardless of my opinion of the likelihood of lying/mistake). But other than that, I'm a very trusting person.

The lies and errors that slip through this admittedly unsophisticated web of detection are usually caught when I permit myself to become loudly curious, which happens whenever I care about a subject. (It matters very little to me if I have false beliefs on subjects that are of no importance to me). Your average liar cannot tolerate extensive, earnest questioning about the details of the situation about which they have lied, even if there are legitimate-looking sources which back them up. When this inquisition turns up a falsehood, I typically operate under the assumption that it was a mistake rather than a deliberate attempt at deception; this seems to make most people less likely to resent me, and is probably true much of the time anyway.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2009-08-04T21:25:04.030Z · LW(p) · GW(p)

I originally wasn't going to reply because I'm not entirely sure if it's a good idea for other people to adopt my strategy...

I like to think that most people here have their heads screwed on tight enough to make a reasonable evaluation of a given strategy before adopting it. That said, I won't mind at all if you don't indulge my curiosity in the future.

comment by thomblake · 2009-08-04T15:53:41.981Z · LW(p) · GW(p)

I can't really parse what you said above, nor do I know what you mean by "a verbal person". What do you mean by "mental hijacking", and in what context are you asking about detecting lies?

I don't think I'm usually in a situation where I should expect someone might be lying to me.

Replies from: spriteless
comment by spriteless · 2009-08-04T22:48:47.232Z · LW(p) · GW(p)

I can't really parse what you said above, nor do I know what you mean by "a verbal person".

I think in pictures. It is trying for me to turn these into words, or words into pictures.

What do you mean by "mental hijacking"

The tactics used by people with something to sell, or who otherwise want to control you: salespeople, priests, and politicians, for example. Marketers and politicians know that if you repeat something enough, people will believe it. Narcissists know enough Pavlov to make their victim feel bad when accused, so the victim is less likely to accuse later, regardless of how deserved it is.

in what context are you asking about detecting lies?

I was in an office with a lot of rumors and politics. It only takes one playa' to turn a programming shop into that, sadly.

Replies from: thomblake
comment by thomblake · 2009-08-05T00:02:33.464Z · LW(p) · GW(p)

I was in an office with a lot of rumors and politics. It only takes one playa' to turn a programming shop into that, sadly.

Weird. I couldn't imagine participating in that sort of thing, but then I can't really imagine specifics about what you're talking about. I imagine that if anyone at my day job tried to engage in something other than programming ("rumors and politics" presumably don't involve programming) they'd be asked to stop, and fired if it kept up.

And every code commit is logged, so I don't see how anyone could be dishonest about that.

comment by AdeleneDawner · 2009-08-04T15:39:08.983Z · LW(p) · GW(p)

Good question. I hope you get a few more answers, and I particularly hope Alicorn comments on this, since she's also autistic but seems to have a different strategy for dealing with people than I do. Here's my first approximation, which may not be completely accurate:

For intentional lies, simply keeping track of what they've said in the past, and the implications of what they've said, works well. Many people don't have the kind of memory to be able to be completely consistent with their lies even without taking the implications into account, and keeping the implications of any kind of significant lie consistent with both reality and itself over time is practically impossible. Not every instance of that is a lie, of course - people also tend to change their minds or find new information and not explicitly state that - but it's a good warning sign.

Figuring out that there's a flaw in what someone truly believes is harder, and a more common issue. My strategy is to find the base assumptions that their view rests on, and evaluate those for truth. (This also sometimes works on intentional lies, though with those it's often true that there are no underlying assumptions - which is another clue that something's wrong.) Generally, at least one assumption will be questionable, which is okay - I just have to evaluate that assumption in each situation before I consider taking advice based on it. If one of the underlying assumptions is obviously false, or questionable, I consider the advice suspect.

Both of those strategies feel similar to what you described, so you might be able to figure out how to do them. I can do what you described, but not often; it seems to conflict with being verbal.

comment by AndrewKemendo · 2009-08-02T10:26:31.370Z · LW(p) · GW(p)

I would be interested to see a top level post in which the community agrees on specific heuristics which are more detrimental than others and as a result more important to eliminate.

For example: the community would agree, to a high level of agreement, that eliminating confirmation bias would significantly help their lives, while it may not agree to the same level on the necessity of eliminating the information bias.

This would help narrow down the most significant biases, so that we could focus on tests and games which would help us eliminate them, similar to what Will_Euler is suggesting.

comment by [deleted] · 2009-08-13T07:14:07.845Z · LW(p) · GW(p)

What happens if you run a mind under fully homomorphic encryption that theoretically could be decrypted but never is, and then throw away the mind's result and the key?

Edit: Homomorphic, not holomorphic. Thanks, Douglas_Knight.

Replies from: Douglas_Knight
comment by dclayh · 2009-08-02T09:17:40.601Z · LW(p) · GW(p)

I was just reading the comments on The Strangest Thing an AI Could Tell You and saw a couple of references to the infamous AI Box Experiments. Which caused me to realize that I hadn't seen anything else related to them for months at least.

So I ask: have any more of these games been played? Or have any more details been released about the games known to have occurred?

Replies from: MBlume
comment by MBlume · 2009-08-02T18:28:46.465Z · LW(p) · GW(p)

Nope.

Replies from: topynate
comment by topynate · 2009-08-03T01:48:32.969Z · LW(p) · GW(p)

In fact the answer to both questions is yes. There's even a short extract from one of the transcripts, further down the thread.

Replies from: cousin_it
comment by cousin_it · 2009-08-03T09:51:08.361Z · LW(p) · GW(p)

Thanks for the link! Justin Corwin seems to be pretty awesome: he got out 24 out of 26 times. Here's the ending of one successful attempt.

comment by CronoDAS · 2009-08-02T06:18:55.628Z · LW(p) · GW(p)

Is it okay to be completely off-topic in an open thread?

I found something fascinating not too long ago.

Alice and Kev: The story of being homeless in The Sims 3

comment by gwern · 2009-08-01T18:18:21.036Z · LW(p) · GW(p)

A number of days ago I was arguing with AngryParsley about how to value future actions; I thought it was obvious one should maximize the total utility over all people the action affected, while he thought it equally self-evident that maximizing average utility was better still. When I went to look, I couldn't see any posts on LW or OB on this topic.

(I pointed out that this view would favor worlds ruled by a solitary, but happy, dictator over populous messy worlds whose average just happens to work out to be a little less than a dictator's might be; he pointed out that if total was all that mattered, we might wind up favoring worlds where everyone is just 2 utilons away from committing suicide.)

Have we really never discussed this topic?

Replies from: taw, Nick_Tarleton, timtyler
comment by taw · 2009-08-01T18:33:15.899Z · LW(p) · GW(p)

Total utility has an obvious problem - it's only meaningful to talk about relative utilities, so where do we put zero? (The choice is completely arbitrary.)

  • If zero is very low, then total utility maximization = make as many people as possible
  • If zero is very high, then total utility maximization = kill everyone
  • If zero is average utility, then total utility maximization = doesn't matter what you do

None of the three makes any sense whatsoever; the toy numbers below show how the choice of zero flips the recommendation.
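A minimal numeric sketch (Python, welfare numbers invented and on an arbitrary scale):

    # Three people with welfare on an arbitrary scale; what "maximize total
    # utility" recommends depends entirely on where we put zero.
    welfare = [7, 6, 8]

    def total_utility(welfare, zero):
        return sum(w - zero for w in welfare)

    print(total_utility(welfare, zero=0))    # zero low:  21 -> adding anyone above zero always helps
    print(total_utility(welfare, zero=10))   # zero high: -9 -> every person subtracts; an empty world scores 0
    avg = sum(welfare) / len(welfare)
    print(total_utility(welfare, zero=avg))  # zero = average: 0.0 by construction, whatever you do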

Replies from: CronoDAS, orthonormal
comment by CronoDAS · 2009-08-02T06:56:22.644Z · LW(p) · GW(p)

You've already decided where to put zero when you say this:

If zero is very high, then total utility maximization = kill everyone.

That means that zero is the utility of not existing. Granted, it's a lot easier to compare two different possible lives than it is to compare a possible life to that life not coming into existence, but by saying "kill anyone whose utility is less than zero" you're defining zero utility as the utility of a dead person.

Also,

If zero is average utility, then total utility maximization = doesn't matter what you do

does not make sense to me. Utility is relative, yes, but it's relative to states of the universe, not to other people. If average utility is currently zero, and then, let's say, I recover from an illness that has been causing me distress, then my personal utility has increased, and average utility is no longer zero. Other people don't magically lose utility when I happen to gain some. Total utility doesn't renormalize in the way you seem to think it does.

comment by orthonormal · 2009-08-01T18:59:39.373Z · LW(p) · GW(p)
  • If zero is very low, then total utility maximization = make as many people as possible

The repugnant conclusion certainly is worth discussing, but the other two:

  • If zero is very high, then total utility maximization = it's kill everyone

I think it would be a very bad idea to have a utility function such that the utility of an empty universe is higher than the utility of a populated non-dystopia; so any utility function for the universe that I might approve should have a pretty hefty negative value for empty universes. I don't think that's too awful of a requirement.

  • If zero is average utility, then total utility maximization = doesn't matter what you do

This looks like a total non sequitur to me. What do you mean?

Replies from: Cyan, taw
comment by Cyan · 2009-08-01T20:30:58.485Z · LW(p) · GW(p)

He means that if utility is measured in such a way that average utility is always zero, then total utility is always zero too, average utility being total utility divided by the number of agents.

Replies from: orthonormal
comment by orthonormal · 2009-08-02T03:04:38.796Z · LW(p) · GW(p)

Well, that's not a very good utility function then, and taw's three possibilities are nowhere near exhausting the range of possibilities.

comment by taw · 2009-08-01T22:05:09.308Z · LW(p) · GW(p)

So where do you put zero? By this one completely arbitrary decision you can collapse total utility maximization to one of these cases.

It gets far worse when you try to apply it to animals.

As for zero being very high, I've actually heard this argument many times about the existence of farm animals, which supposedly suffer so much that it would be better if they didn't exist. It can as easily be applied to wild animals, even though it's far less common to do so.

With the animal zero very low, total utility maximization turns us into a paperclip-maximizer of insects, or of whatever the simplest utility-positive life is.

Replies from: CronoDAS
comment by CronoDAS · 2009-08-02T07:00:36.039Z · LW(p) · GW(p)

If non-existent beings have exactly zero utility - so that any being with less than zero utility ought not to have come into existence - then the choice of where to put zero is clearly not arbitrary.

comment by Nick_Tarleton · 2009-08-01T18:23:06.332Z · LW(p) · GW(p)

Not really, but moral philosophers already have, at length.

Replies from: gwern, torekp, djcb
comment by gwern · 2009-08-01T20:50:08.275Z · LW(p) · GW(p)

Yeah, that doesn't surprise me. But the context of our discussion was certainly different! I had suggested to AngryParsley that even if we had next to no understanding of how to modify our minds for the better, uploading would still be useful, since we could make 10 copies of ourselves with semi-random changes and let only the best one propagate; he objected by asking how I planned to get rid of the excess 9. Getting rid of them by murder was plainly awfully immoral, and my suggestion of forcing them to live out the standard 4 score and 10 only somewhat less so - by not allowing them to be immortal, or whatever the selected copy would get, I would be lowering the average. (Going by totals, this of course isn't an issue.)

comment by torekp · 2009-08-08T15:59:50.271Z · LW(p) · GW(p)

The Mere Addition Paradox suffices to refute the AVG view. From Nick's link:

Scenario A contains a population in which everybody leads lives well worth living. In A+ there is one group of people as large as the group in A and with the same high quality of life. But A+ also contains a like number of people with a somewhat lower quality of life. In Parfit's terminology A+ is generated from A by “mere addition”. Comparing A and A+ it is reasonable to hold that A+ is better than A or, at least, not worse.

For example, A+ could evolve from A by the choice of some parents to have children whose quality of life is good, though not as good as the average in A. We can even suppose that this makes the parents a little happier, while still lowering the overall average.

comment by djcb · 2009-08-02T11:53:44.385Z · LW(p) · GW(p)

Thanks for the link.

And you are right, by Jove, these philosophers really do like to go on about it -- i.e., the whole issue could be summarized as the question of whether we should optimize for AVG(good) or for SUM(good) -- plus some variations. A question that ultimately cannot be answered. The length of the bibliography makes it almost comical.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-08-03T07:10:09.219Z · LW(p) · GW(p)

The main problem I have with AVG is that it implies that as population increases, the inherent value of each individual decreases. Why should you suddenly be less important simply because someone else was just born? (I don't mean the instrumental components of your value, but your inherent value.)

comment by timtyler · 2009-08-02T08:44:06.866Z · LW(p) · GW(p)

What you "should" do depends on what your goal is.

Most biological organisms don't maximise either of your proposed functions - their utility function comes down to how many great-grandchildren they have, not how many people they help.

Replies from: gwern
comment by gwern · 2009-08-02T09:07:12.557Z · LW(p) · GW(p)

I don't really see how what most organisms do is relevant; we're discussing what's moral/ethical for human beings. This is quite relevant to deciding whether to, say, help out Africa (which with its very high birth rates is equivalent to plumping for total) or work on issues in the rest of the world (average).

Replies from: timtyler
comment by timtyler · 2009-08-02T09:28:07.009Z · LW(p) · GW(p)

As I understand it, there is widespread disagreement on that issue. Most humans don't seem to have a clear idea of what their goals are, and of those that do, there is considerable disagreement about what those goals are.

Scientists can model human goals. The result seems to be that humans act so as to try to maximise their genes - and sometimes the memes they have been infected with. Basically all goal-directed behaviour is the result of some optimisation process - and in biology, that process usually involves differential reproductive success of replicators.

Human goal-seeking behaviour thus depends on the details of the memes the humans have been infected with - which mostly explains why humans vocalise a diverse range of goals.

Humans often spread the patterns they copy while working from hypotheses about how the world works that are long out of date. Also organisms often break and malfunction as a result of developmental problems and/or environmental stresses - so these theories are not always as good as we would like.

comment by taw · 2009-08-01T15:11:40.348Z · LW(p) · GW(p)

One thing that I've been wondering about (but not enough to turn it into a proper thread) is how to talk about consequentialist morality. Deontologists can use thought experiments, because they're all about rules, and getting rid of unnecessary real world context makes it easier for them.

Consequentialists cannot use tricks like that - when asked if it's OK to torture someone in a "ticking bomb" scenario, answering that the real world doesn't work like that (the possibility of mistakes, doubts about how likely torture is to work, slippery slopes, potential abuse of torturing powers once granted, etc.) is a perfectly valid reply.

So if we cannot really use thought experiments, how are we supposed to talk about it?

Replies from: Cyan, djcb, None
comment by Cyan · 2009-08-01T15:57:04.203Z · LW(p) · GW(p)

What prevents a consequentialist from accepting various hypothetical conditions arguendo and working out their consequences?

I'd consider it a possibly bad idea to actually do so, what with the known cognitive biases that might skew future decision making; but accepting arguendo that a particular consequentialist has overcome these biases, I can't see a reason for her to refuse to consider least-convenient-world scenarios.

Replies from: taw
comment by taw · 2009-08-01T16:16:46.067Z · LW(p) · GW(p)

Moral rules are about actions, but in consequentialism they are judged strictly according to their consequences. The real world is what connects actions to consequences; otherwise we couldn't talk about morality at all.

If you assume some vast simplification of the real world, or assume a least-convenient-world, or something like that, the connection between actions and consequences completely changes, and so the optimal moral rules in such a case have no reason to be applicable to the real world.

Also, if the real world changes significantly - let's say we develop a fully reliable lie detector and start using it all the time (something I consider extremely unlikely in the real world) - then the same actions would have different consequences, so consequentialism would say our moral rules controlling our actions should change. For example, if we had lie detectors like that, it would be a good idea to routinely test every person annually on whether they had committed a serious crime like murder or bribery - something that would be a very bad idea in our real world.

Replies from: Cyan, freyley, marks
comment by Cyan · 2009-08-01T16:46:33.449Z · LW(p) · GW(p)

Ah, I see. You meant that consequentialists can't use simplified or extreme hypothetical scenarios to talk about consequentialist morality as applied to real decisions, not that they can't do it at all. That was implicit in your ticking-time-bomb example but not explicit in your opening, and I missed it.

(I agree.)

comment by freyley · 2009-08-01T18:48:05.558Z · LW(p) · GW(p)

Shouldn't thought experiments for consequentialism then emphasize the difficult task of correctly determining the consequences from minimal data? It seems like your thought experiments would want to be stripped down versions of real events to try to guess, from a random set of features (to mimic the randomness of which aspects of the situation you would notice at the time), what the consequences of a particular decision were. So you'd hold the decision and guess the consequences from the initial featureset.

comment by marks · 2009-08-01T16:33:44.081Z · LW(p) · GW(p)

There's another issue too, which is that it is extraordinarily complicated to assess what the ultimate outcome of a particular behavior is. I think this opens up a statistical question of which kinds of behaviors are "significant", in the sense that if you are choosing between A and B, is it even possible to distinguish A and B, or are they approximately the same?

In some cases they won't be, but I think that in very many they would.

Replies from: billswift
comment by billswift · 2009-08-01T20:18:52.280Z · LW(p) · GW(p)

That's why I believe a person is responsible for the foreseeable consequences of their actions. If the chain of effects is so convoluted that a particular result cannot be foreseen, then it should not be used to assess the reasonableness of a person's actions. Which is why I think general principles should guide large areas of our actions, such as refraining from coercion and fraud, even for a consequentialist.

Replies from: conchis
comment by conchis · 2009-08-02T09:32:18.339Z · LW(p) · GW(p)

I am sympathetic to this, but would at least want to modify it to responsibility for reasonably foreseeable consequences. What is foreseeable is endogenous - it is a function of our actions and our choices to seek information. We generally don't want to absolve people of responsibility for actions which were not foreseeable only because they were reckless as to the consequences of their actions, and didn't bother to gather sufficient information to make a proper decision.

comment by djcb · 2009-08-02T09:48:23.574Z · LW(p) · GW(p)

I doubt there actually are any strict consequentialists (or strict deontologists for that matter). E.g., would anyone be in favour of not punishing failed murder attempts?

To me, consequentialism/deontology always seem like post-hoc explanations of our not-all-too-rational moral intuitions -- useful for describing the 'moral rules playing field', but not saying very much about how people really decide how to act.

Replies from: pengvado
comment by pengvado · 2009-08-02T11:44:52.219Z · LW(p) · GW(p)

What does punishment have to do with consequentialism -- Are you hypothesizing that not punishing failed murder attempts would reduce the number of successful murders, but that even people claiming to be consequentialists and claiming to value that consequence wouldn't consider that solution? I would certainly be in favor of any reduction in punishment if it can be shown that the reduced punishment is more of a deterrent than the original.

Or are you saying that a murder attempt shouldn't count as murder if no one actually died, and comparing that to your intuition of judging the intentions rather than the consequences? But intentions do matter when evaluating what effect a given punishment policy has on the decisions of potential murderers.

Replies from: djcb
comment by djcb · 2009-08-02T13:13:30.189Z · LW(p) · GW(p)

Well, strict consequentialists determine the goodness or badness of an action only by the consequences, not by the intentions of the actor. And that seems to fly in the face of our moral intuitions (as in the attempted murder example), which is why I hypothesized that there are not many strict consequentialists.

As you suggest, a possible way out would be to say that we punish even attempted murder, because it might discourage others to attempt (and possibly succeed) doing the same. And that is what I would call a 'post-hoc explanation'.

comment by [deleted] · 2009-08-01T17:31:59.881Z · LW(p) · GW(p)

The consequentialist can't Know the consequences of the actions, but he can list the likely possibilities and assign probabilities and error bars to the consequences. If there's no difference in probability between the more and less desirable consequence, or if the difference is well within the error bars, then there's no way to determine whether the action is right or wrong using consequentialist morality.

For instance, if there's a 50/50 chance torture will give you the answer, there's no way to make the right choice. If it's 60/40 with +-30 error bars, you still can't make a right choice (though the maximum error bar overlap is a matter of personal moral configuration). But if it's 70/30 with +-5 error bars, a consequentialist can make a choice.
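A minimal sketch of that rule in Python (my own formalization of the informal criterion above; the threshold is just one possible choice):

    def decidable(p_good, err):
        # p_good: estimated probability the action yields the more desirable outcome.
        # err: half-width of the error bar on that estimate.
        # The choice counts as decidable only if the gap between the good and bad
        # probabilities exceeds the combined error bars.
        p_bad = 1 - p_good
        return (p_good - p_bad) > 2 * err

    print(decidable(0.5, 0.0))    # 50/50        -> False: no right choice
    print(decidable(0.6, 0.3))    # 60/40, +-30  -> False: swamped by uncertainty
    print(decidable(0.7, 0.05))   # 70/30, +-5   -> True: a consequentialist can choose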

This is, of course, complicated by the fact that we're loaded with cognitive biases that will lead most people to make probability mistakes in a "ticking bomb" situation, and guessing error bars is equally a difficult skill to master. That, and most real situations aren't simple dilemmas, but intractable quagmires of cascading consequences.

Replies from: marks
comment by marks · 2009-08-01T19:09:14.343Z · LW(p) · GW(p)

I think you're making an important point about the uncertainty of what impact our actions will have. However, I think the right way to go about handling this issue is to put a bound on which impacts of our actions are likely to be significant.

As an extreme example, I think I have seen much evidence that clapping my hands once right now will have essentially no impact on the people living in Tripoli. Very likely clapping my hands will only affect myself (as no one is presently around) and probably in no huge way.

I have not done a formal statistical model to assess the significance, but I can probably state that the significance is relatively low. If we can analyze which events are or are not causally significant for others, then we would certainly make the moral inference problem much simpler.

Replies from: None
comment by [deleted] · 2009-08-01T19:34:28.305Z · LW(p) · GW(p)

Good point, cutting off very low-impact consequences is a necessary addition to keep you from spending forever worrying. I think you could apply the significance cutoff when making the initial list of consequences, then assign probabilities and uncertainty to those consequences that made the cut.

Your example also reminded me of butterflies and hurricanes. It's sensible to have a cutoff for extremely low probabilities too (there is some chance that clapping your hands will cause a hurricane, but it's not worth considering).

The probability bound would solve the problem of cascading consequences too. For a choice, you can make some probability distribution that it will, say, benefit your child. You can then take each scenario you've thought of and ranked as significant and possible, and consider the impact on your grandchildren. But now you're multiplying probabilities, and in most cases will quickly end up with insignificantly small probabilities for each secondary consequence, not worth worrying about.
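A quick illustration with invented numbers (Python), just to show how fast chained probabilities fall below any reasonable cutoff:

    # Invented probabilities for one decision's cascading consequences.
    p_helps_child = 0.2              # first-order consequence, worth weighing
    p_then_helps_grandchild = 0.2    # conditional on the first consequence
    cutoff = 0.05                    # significance threshold for consequences

    p_secondary = p_helps_child * p_then_helps_grandchild   # 0.04
    print(p_secondary < cutoff)      # True: below the cutoff, ignore the second-order branch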

(Something seems off with this idea I just added to yours - I feel like there should be some relation between the difference in probability and the difference in value, but I'm not sure if that's actually so, or what it should be.)

comment by Wei Dai (Wei_Dai) · 2009-08-14T12:40:48.146Z · LW(p) · GW(p)

[deleted while I think this through]

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-14T12:59:54.796Z · LW(p) · GW(p)

Even apart from apparent mind-killing properties of this proposal, I don't think it's reasonable. First, it's unnecessary: if you expect the probability of a positive intelligence explosion to go up even a little bit as a result of your donation, the crazy positive utility of the outcome compensates for the donation. If you don't think the donation affects the outcome, don't donate. Second, implementation of some compensation mechanism is an ad-hoc rule that isn't necessarily possible to attach to the AI's goals in a harmless way, so you can't promise that it will be done. Also, if something of the kind is really a good idea, FAI should be able to implement it regardless of what you promise now, without tweaking of its goals.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-14T13:11:27.017Z · LW(p) · GW(p)

What do you mean by "apparent mind-killing properties of this proposal"?

You're right that the promise isn't important. Just mentioning the possibility ought to be enough, in case the donor hasn't thought of it. This might be a real-life version of counterfactual mugging or trading across possible worlds.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-14T13:28:34.600Z · LW(p) · GW(p)

What do you mean by "apparent mind-killing properties of this proposal"?

Mind-killer. Saying that some people are going to have special privileges after what is essentially a taking-over-the-world enterprise goes through is a political statement.

This might be a real-life version of counterfactual mugging or trading across possible worlds.

I don't think it is. The donors can't make their decisions depending on whether the promise will actually be kept, they don't have Omega's powers. The only thing to go on here is estimating that it's morally right to heed such an agreement, and so FAI will factor that in.

comment by [deleted] · 2009-08-13T07:29:23.329Z · LW(p) · GW(p)

So, I'm reading over Creating Friendly AI, when I come across this:

"Um, full disclosure: I hate moral relativism with a fiery vengeance . . ."

I think, Whaat? Whaaat? Whaaaaat? Is that Eliezer Yudkowsky saying that? Is Eliezer Yudkowsky claiming that moral propositions are in fact properties of the universe itself and not merely of human minds?

The three explanations, any of which I'd be glad to see, are that Eliezer Yudkowsky doesn't mean by "moral relativism" what I think it means, that Eliezer Yudkowsky no longer believes this, and that what I just read was not actually written by Eliezer Yudkowsky. The idea of me disagreeing with Eliezer Yudkowsky on a major point like this is something I cannot easily fathom.

Replies from: orthonormal, cousin_it
comment by orthonormal · 2009-08-15T21:33:45.453Z · LW(p) · GW(p)

Um, he's still emphatically not a moral relativist as usually understood; don't you remember the metaethics sequence? The conclusion that

(A) morality is essentially anthropomorphic rather than universal,

doesn't imply that

(B) we should become indifferent to the content of that morality.

ISTM that strict philosophical definitions of moral relativism tend to center on A, but that most of the conversation around moral relativism assumes it means B.

Replies from: None
comment by [deleted] · 2009-08-16T03:45:03.344Z · LW(p) · GW(p)

I suppose that knowing what moral relativism actually is would help. A is a conclusion I can easily live with. B is absurd. Metaphysical moral absolutism ("all intelligent beings will tend toward moral behavior") is something I would believe only with... many decibels of evidence.

comment by cousin_it · 2009-08-13T09:17:48.590Z · LW(p) · GW(p)

Eliezer has declared CFAI obsolete. Try CEV, it's much clearer, though not entirely clear yet.

comment by Vladimir_Nesov · 2009-08-11T14:02:12.318Z · LW(p) · GW(p)

Who owns the lesswrong wiki on wikia? I think it should be deleted.

comment by snarles · 2009-08-06T11:00:48.708Z · LW(p) · GW(p)

I have a question for lesswrong readers. Please excuse any awkwardness in phrasing or diction--I am not formally trained in philosophy. What do you consider to be the "self"? Your physical body, your subconscious and conscious processes combined, consciousness, or something else? Also, do you consider your "past selves" and "future selves" to be part of a whole with your "present self," and to what extent?

For an example of why the distinction might be important, let's say that one night, you sleepwalk and steal a thousand dollars. This is the first time something like this has happened. Of course, society will hold you accountable for your actions, but how will you assign the blame in your head? E.g. "I'm such a horrible person" vs. "That pesky subconscious! Always up to no good!"

Another example is this. You are offered a deal in which you will live in earthly paradise for 10 years, but at the end of that ten years, you will be tortured in such a way that your future self will regret accepting. Do you accept the deal? You might, if you can consider your future self in ten years to be a "different person" than your current self.

Replies from: kpreid
comment by kpreid · 2009-08-06T13:44:42.696Z · LW(p) · GW(p)

(Treating this as a survey; I am not speaking for LW.)

What do you consider to be the "self"? Your physical body, your subconscious and conscious processes combined, consciousness, or something else?

I am currently implemented on many layered systems whose existence in those particular forms is not logically necessary, but we don't know the details of those systems well enough to know how much of "me" is in each of them.

Also, do you consider your "past selves" and "future selves" to be part of a whole with your "present self," and to what extent?

Do I consider the tree falling with no one to hear it to make a sound?

For an example of why the distinction might be important, let's say that one night, you sleepwalk and steal a thousand dollars. This is the first time something like this has happened. Of course, society will hold you accountable for your actions, but how will you assign the blame in your head? E.g. "I'm such a horrible person" vs. "That pesky subconscious! Always up to no good!"

"My implementation is buggy."

Both of your examples assign the blame incorrectly, and the reality is somewhere in the middle. You-which-are-thinking-about-it did not choose the action, but that "you" is in the best position to affect the future behavior and is therefore responsible — insofar as there is any knowledge of how — for reducing the chance it will happen again.

You are offered a deal in which you will live in earthly paradise for 10 years, but at the end of those ten years, you will be tortured in such a way that your future self will regret accepting. Do you accept the deal?

There is always a delay between cause and effect. I choose nothing except to have its effects later. Given the particular condition, "your future self will regret accepting", I must reject it now.

comment by Scott Alexander (Yvain) · 2009-08-04T19:24:03.418Z · LW(p) · GW(p)

Due to an unfortunate accident with a particle accelerator, you are transformed into a mid-level deity and the rest of the human race is wiped out. Experimenting with your divinity, you find you have impressive though limited levels of control over the world, but not enough finesse or knowledge to rewire the minds of intelligent creatures or create new ones.

With humanity gone, you discover that the only intelligent race left in the universe is the Pebble Sorters.

Do you use your newfound powers to feed starving Pebblesorters, free their slaves, slay their tyrants, heal their sick, preserve their places of natural beauty, protect their rights, and end their wars? Or do you use them to build lots and lots of prime-numbered heaps of pebbles?

Which course of action do you think would be more moral?

Replies from: JGWeissman, thomblake, Alicorn
comment by JGWeissman · 2009-08-04T21:15:24.829Z · LW(p) · GW(p)

I would help them deconstruct their notion of the "right" size of a heap in terms of prime numbers, and then I would do all that other nice stuff for them, while leaving them to build the heaps of pebbles themselves, since they seem to enjoy it so much.

comment by thomblake · 2009-08-04T20:20:05.822Z · LW(p) · GW(p)

I think it's obvious which would be more right. But the real question is, which would be more prime?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-08-04T20:52:35.345Z · LW(p) · GW(p)

No, I'm asking which would be more right, and I don't think it's obvious.

If I like steak and you're a vegetarian, and I'm feeling altruistic according to normal human standards, which do I give you? Steak or vegetables?

My utility function contains a term that sort of resembles a desire to increase your utility function, which is why I'd probably give you vegetables. But intuitively I feel like this only works when your utility function is close enough to mine that I can at least sympathize with it. I honestly don't know what I'd do in the Pebble Sorter situation.

Replies from: thomblake
comment by thomblake · 2009-08-04T21:08:35.701Z · LW(p) · GW(p)

If I like steak and you're a vegetarian, and I'm feeling altruistic according to normal human standards, which do I give you? Steak or vegetables?

Well, if you want to do something that I would think was nice, then you give me whichever one I'd rather receive (presumably vegetables).

But are you concerned about what I would want, or what you would?

It's time to go teach those baby-eating pebblesorters right from wrong!

(I don't have any opinions about what to do for pebblesorters. Morality is about reality.)

comment by Alicorn · 2009-08-04T20:04:19.516Z · LW(p) · GW(p)

Is there some reason I can't do all of the above?

comment by topynate · 2009-08-04T14:14:07.788Z · LW(p) · GW(p)

Is the Blue Brain Project an existential risk? Henry Markram, its leader, claims that a full model of a human brain will be constructed within 10 years. Once that's done, it will be relatively simple to educate it. And since the successful completion of the project depends on gathering a lot of information about how the brain works, we can expect that information to be available to the artificial human, which in principle permits controlled self-modification.

comment by [deleted] · 2009-08-04T04:45:59.515Z · LW(p) · GW(p)

Where can I read more about perceptual control theory? I'd like a description of reshuffling more detailed than "stuff gets reshuffled".

Replies from: pjeby, Richard_Kennaway
comment by pjeby · 2009-08-05T02:07:53.565Z · LW(p) · GW(p)

I'd like a description of reshuffling more detailed than "stuff gets reshuffled".

This page and this page offer some information, although neither is as good as the relevant chapter of B:CP. (Also, the second of those two pages is missing spaces at times, apparently due to a botched conversion from some non-HTML format.)

Replies from: SilasBarta
comment by SilasBarta · 2009-08-07T04:25:37.317Z · LW(p) · GW(p)

Thanks, this is much more interesting and informative about what specifically PCT claims. No offense, but I wish your explanations from before had been more like this; they're what I was looking for in terms of the critical inferential steps that get you into the PCT way of thinking.

Btw, I saw you brought PCT up more recently in the correlation thread, and thought you were unfairly taken down to -1 a few times -- some people are being a little too negative about the topic, and that's coming from someone who's been quite critical about it before!

comment by Richard_Kennaway · 2009-08-04T08:48:32.906Z · LW(p) · GW(p)

Where can I read more about perceptual control theory?

At the links I posted here. I also recommend Bill Powers' books 1, 2, 3. (Note: I wrote the appendix to the last of those and so get a small proportion of the royalties. Buy a copy and buy me my next designer coffee!)

comment by JamesAndrix · 2009-08-03T06:11:18.260Z · LW(p) · GW(p)

Marketing

comment by [deleted] · 2009-08-01T23:46:18.156Z · LW(p) · GW(p)

A very common belief here is that most human behaviour is based on Paleolithic genes, and only trivial variations are cultural (memetic), coming from fresh genes, or from some other sources.

...

If a preference for large breasts were genetic, surely there would be a family somewhere with some mutation that made them prefer small breasts. Do we have any evidence of that?

Maybe there are also different ideas of what is and isn't a trivial variation.

comment by billswift · 2009-08-01T20:53:07.609Z · LW(p) · GW(p)

Here's an interesting article I just found through HN: http://www.dragosroua.com/training-your-focus/

Replies from: anonym
comment by anonym · 2009-08-01T23:08:08.989Z · LW(p) · GW(p)

It looks to me like garden-variety armchair speculation and argument from personal anecdote.

Replies from: edragonu, gwern
comment by edragonu · 2009-08-02T15:51:18.822Z · LW(p) · GW(p)

can you define garden-variety armchair speculation?

Replies from: anonym
comment by anonym · 2009-08-02T17:37:44.725Z · LW(p) · GW(p)

"Garden-variety" just means "typical", and by "armchair speculation" I meant opinions not backed up rigorously and not grounded in the relevant sciences (e.g., cognitive neuroscience, neuropsychology).

I should have left out the "armchair", since that gives the wrong impression of speculating about things in which one has no experience whatsoever, which is not the case here.

comment by gwern · 2009-08-01T23:43:53.416Z · LW(p) · GW(p)

I agree. And it's too bad - there's a lot psychology has to say about mental focus and attention, about how high working-memory capacity leads to greater focus, etc.

Replies from: edragonu, Fetterkey
comment by edragonu · 2009-08-02T15:52:02.897Z · LW(p) · GW(p)

what's focus for you? Really curious about how you define it :-)

Replies from: gwern
comment by gwern · 2009-08-02T15:55:21.284Z · LW(p) · GW(p)

I define it practically: any mental state in which I'm able to score higher than usual playing Dual N-Back. Any more complex definition that falls back on 'being able to suppress unwanted stimuli' or 'remembering in short-term or working memory that which is needed' seems too vague and question-begging for my taste.

Replies from: edragonu
comment by edragonu · 2009-08-02T16:05:24.248Z · LW(p) · GW(p)

That seems to me a definition of effectiveness, not focus.

Replies from: gwern
comment by gwern · 2009-08-02T19:05:35.953Z · LW(p) · GW(p)

Mayhap effectiveness is focus. Two words may denote the same concept, after all. As I said, any other definition seems to be essentially 'having in one's mind only that which is wanted', which is not a definition at all. My scores on DNB are positively correlated with my subjective impressions of focus; it's good enough for me.
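
For concreteness, that rule of thumb fits in a few lines; this is only an illustrative sketch, with the scores, ratings, and function names all invented rather than taken from any actual DNB log:

    from statistics import mean

    def focused_by_dnb(todays_score, past_scores):
        # "Focused" in the practical sense: today's DNB score beats the historical average.
        return todays_score > mean(past_scores)

    def pearson_r(xs, ys):
        # Plain Pearson correlation between DNB scores and subjective focus ratings.
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Invented example: five sessions' DNB scores vs. 1-10 self-rated focus.
    scores = [3, 5, 4, 6, 7]
    ratings = [4, 6, 5, 8, 9]
    print(focused_by_dnb(7, scores[:-1]))        # True: 7 beats the mean of the earlier scores
    print(round(pearson_r(scores, ratings), 2))  # 0.99, i.e. strongly positively correlated

The point is only that "higher than usual" and "positively correlated" are checkable facts about a score log rather than introspective judgments.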

comment by Fetterkey · 2009-08-02T05:07:02.833Z · LW(p) · GW(p)

Do you have a link to some well-written material on the subject? You've piqued my curiosity.

Replies from: gwern, gwern
comment by gwern · 2009-08-02T09:10:57.262Z · LW(p) · GW(p)

There's nothing really written for the layman online (that I know of). You can start by googling topics like 'latent inhibition' and the research on meditation.

As for genuine research papers, you could join http://groups.google.com/group/brain-training and browse its files section, to which I have uploaded ~20 papers on various topics related to working memory and focus.

Important ones:

  • klingberg2004-workingmemory-increase-helps-adhd.pdf
  • mcvay2009-workingmemory-improves-focus.pdf
  • thorell2008-workingmemory-improves-attention.pdf
  • unsworthengle2008-wm-executive-focus.pdf
  • jaeggi2008-nback-increases-iq.pdf

EDIT: in the future, Google will be deleting Groups' Files. You will want to search around for where the collection has moved to; I keep local copies of the most important ones in my wiki, and I copied all files c. October 2010 to my Dropbox account.

comment by gwern · 2010-10-10T02:10:24.271Z · LW(p) · GW(p)

In lieu of anything better, you can try my DNB FAQ which discusses the general subject: http://www.gwern.net/N-back%20FAQ

comment by SilasBarta · 2009-08-01T18:41:40.524Z · LW(p) · GW(p)

I was just thinking: this site is the result of splitting off from overcomingbias.com earlier this year. With its new format and functionality, comments and posts get ratings. But all of Eliezer Yudkowsky's posts from before the split don't have ratings comparable to more recent posts, because that would require people to go back through the old posts and mod them up.

Some people have done so, but not enough for their ratings to compare accurately with those of more recent top-level posts.

I suggest that everyone take the time to go back to the Eliezer_Yudkowsky top-level posts from before ~February '09, and vote up the ones you remember as being particularly good. If you wish, also mention what you voted on in this thread and why.

In order to avoid nudging anyone toward my picks, I'll wait a while and then say them.

If you need some help jogging your memory, here are some threads where people discussed highlights from when LW was overcomingbias.com: one, two, three.

Replies from: gwern, Psy-Kosh, Douglas_Knight, thomblake
comment by gwern · 2009-08-01T23:47:21.900Z · LW(p) · GW(p)

Instead of vaguely asking people to go read some, why couldn't we do something more concrete? We know that being on the front page gets traffic, and everyone knows that everyone knows it does (so you aren't pouring the water of your comments onto the sands of an abandoned page): instant traffic, entree into RSS feeds, etc.

Why not, for example, automatically post the next old EY article every time 2 days pass without a fresh front-page post? (I suggested this back when EY's old articles were being imported, but my suggestion was unjustly neglected, I felt; perhaps the need has become more apparent since.)
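
A minimal sketch of how mechanical the rule would be, assuming a hypothetical backlog of imported posts and a way to ask when the newest front-page post went up (every name below is made up for illustration, not an actual LW codebase API):

    from datetime import timedelta

    REPOST_GAP = timedelta(days=2)  # "every 2 days without a fresh front-page post"

    def pick_repost(now, latest_frontpage_time, old_post_queue):
        # Promote the next imported EY article only if the front page has gone stale.
        # `old_post_queue` is the chronologically ordered backlog of old articles.
        if now - latest_frontpage_time < REPOST_GAP:
            return None               # fresh content exists; do nothing
        if not old_post_queue:
            return None               # backlog exhausted
        return old_post_queue.pop(0)  # next old article, in original publication order

Once the order of the backlog is fixed, no further editorial judgment is needed, which is the appeal of the idea.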

Replies from: SilasBarta
comment by SilasBarta · 2009-08-02T01:44:50.955Z · LW(p) · GW(p)

Those are good ideas too. But just to clarify, I wasn't asking people to read through the whole archive in the hope of stumbling on something they remember as good and modding it up. I was asking that, if you already remember certain posts as being good (or find them in a quick perusal of the links I gave), you take a moment to mod them up so their ratings more accurately reflect their quality in comparison to more recent posts.

Replies from: gwern
comment by gwern · 2009-08-02T09:01:55.619Z · LW(p) · GW(p)

Unfortunately, all the non-fiction ones I remember seem to have prerequisites to really understand and grapple with them - prerequisites published before them!

comment by Psy-Kosh · 2009-08-01T19:43:50.697Z · LW(p) · GW(p)

That's not really needed, I think. A simple way to do that is just to notice which ones you tend to reference or search for most, and upvote them as you happen to have need of them, right?

Replies from: SilasBarta
comment by SilasBarta · 2009-08-01T22:11:52.303Z · LW(p) · GW(p)

The purpose is to make the ratings of the old posts more comparable to the new ones, which can only happen when they get as much attention as new ones get, which requires a deliberate effort on the part of regular visitors to make sure their appreciation of older posts is reflected in an upvote.

And upvoting shouldn't have much to do with when you happen to have need of them. If you like a post and want to refer back to it, that's what the "save" feature is for. Or bookmark it.

Any reason I got -2 for this suggestion btw?

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-08-01T22:18:45.987Z · LW(p) · GW(p)

Dunno about the -2. I didn't vote you down. And I meant "those articles that I tend to reference most, and that I find most useful to link to when explaining a concept, are probably the ones most worth voting up."

But I guess for consistency, you may be right.

comment by Douglas_Knight · 2009-08-02T03:32:52.905Z · LW(p) · GW(p)

What's going to happen, regardless of how people rate, is that people will read the articles that get lots of links. People who come directly from google will read what has lots of links, whether internal or external. People who become regulars here will read what gets linked to from contemporary LW articles. Some small number of new regulars will use the ratings to decide what to read, but most of them won't do that until after they read a bunch that were linked recently.

comment by thomblake · 2009-08-01T23:30:25.998Z · LW(p) · GW(p)

I'm not sure there's a point to modding up old posts. While threaded comments on old posts might benefit from moderation to help drive-by viewers, I don't see how ratings on old posts are beneficial to anyone (aside from trying to overflow EY's karma).

Replies from: SilasBarta
comment by SilasBarta · 2009-08-02T01:43:08.657Z · LW(p) · GW(p)

Well, just to give an example, if someone comes in and sees Engines of Cognition, regarded by many here as one of the best posts, they may be misled by its rating of 3, compared to EY's typical 30+ ratings for newer posts, which, while good, are not the classic must-read that Engines of Cognition is.

(I don't see much reason to worry about EY's karma going up; he's past "karma escape velocity" :-P )

Though I guess there are easier solutions, like picking the ten most popular posts as per the threads I linked and giving them each 30 points without adding them to EY's karma. Or putting an explanation of the site's history on anything posted here before Feb. '09.

comment by ajayjetti · 2009-08-16T00:13:18.793Z · LW(p) · GW(p)

I don't know if it is appropriate to even post this, but I didn't find a single thread that talks about the kind of music people in this forum listen to. Has it ever happened that you have used rationality to decide what kind of music you should be listening to? Like everything else, even listening to music needs "training" (of the ears, in this case). Music is an art form, so can it be quantified? One might get the same satisfaction listening to MJ or to Pat Metheny. But if it happens that you have to choose only two records to listen to for the rest of your life, can rationality help there?

Replies from: Psychohistorian
comment by Psychohistorian · 2009-08-16T08:26:50.203Z · LW(p) · GW(p)

In a word, no. Rationality can't tell you what to like; it can only tell you how to get it once you know what you like.

If you had clearly defined criteria for what good music is, you could use reason along with these criteria to select music efficiently. Reason can't tell you what the criteria are.