Relevant article in the New Yorker: http://www.newyorker.com/reporting/2007/08/20/070820fa_fact_page?currentPage=all
If management are doing that then they are neglecting a powerful tool in their tool-kit, because announcing a G* will surely cause G* to fall, and experience says that to begin with a well-chosen G* and G remain correlated (because many of the things you do to reduce G* also reduce G). It is only over time that G* and G detach.
At work a large part of my job involves choosing G*, and I can report that Goodhart's Law is very powerful and readily observable.
Further: rational players in the workplace know full well that management desire G, and that G* is not well correlated with G, but nonetheless if they are rewarded on G*, then that's what they will focus on.
The best solution - in my experience - is mentioned in the post: the balanced scorecard. Define several measures G1, G2, G3 and G4 that are normally correlated with G. The correlation is then more persistent: if all four measures improve, it is likely that G will improve.
G1, G2, G3 and G4 may be presented as simultaneous measures, or, if setting four measures in one go is too confusing for people trying to prioritise (the fewer the measures the more powerful), they can be sequential. I.e. if you hope to improve G over two years, measure G1 for two quarters, then switch the measurement to G2 for the next two, and so on. (Obviously you don't tell people in advance.) NB this approach can be effective, but will make you very unpopular.
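A toy sketch of the idea (my own illustration, not anything from the post - the numbers, the weakest-link form of G and the effort-shifting rule are all invented assumptions) showing how optimising a single announced proxy detaches from G, while a scorecard of several proxies stays coupled to it:

```python
# Toy model: the true goal G depends on four kinds of effort, but each proxy
# measure G_i only observes one of them. People rewarded on a single announced
# proxy shift effort towards the measured component, so the proxy keeps
# improving while G detaches; rewarding all four proxies keeps effort spread
# out, so G improves alongside them.

N_COMPONENTS = 4   # kinds of effort the true goal G depends on
ROUNDS = 20

def true_G(effort):
    # weakest-link: G needs every kind of effort (an invented assumption)
    return min(effort)

def simulate(measured):
    """measured: indices of the proxies people are told to improve."""
    effort = [1.0] * N_COMPONENTS
    for _ in range(ROUNDS):
        for i in range(N_COMPONENTS):
            if i in measured:
                effort[i] += 0.1                       # gaming the measure
            else:
                effort[i] = max(0.0, effort[i] - 0.1)  # neglected work decays
    proxy_avg = sum(effort[i] for i in measured) / len(measured)
    return proxy_avg, true_G(effort)

single_proxy, single_G = simulate([0])            # one announced G*
score_proxy, score_G = simulate([0, 1, 2, 3])     # balanced scorecard G1..G4

print(f"single proxy:       G* = {single_proxy:.1f}, true G = {single_G:.1f}")
print(f"balanced scorecard: proxies = {score_proxy:.1f}, true G = {score_G:.1f}")
```

With the single proxy the measured number climbs while G collapses; with the scorecard both climb together, which is the persistence-of-correlation point above.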
That's true (that they have biases), although I understand the training is to attend to the nature of the injury, and the practicalities of the situation - e.g. danger to the firefighter - rather than the age of the victim.
However, what one might expect to see in firefighters is that ethical dilemmas like the trolley problem trigger the cerebral cortex more, and the amygdala less, than in other people.
Perhaps.
Unless of course the training works by manipulating the emotional response. So firefighters are just as emotional, but their emotions have been changed by their training.
This is the sort of problem Kahane was talking about when he said it is very difficult to interpret brain scans.
A person in the audience suggested taking firefighters, who sometimes face dilemmas very like this (do I try to save life-threatened person A or seriously injured baby B?), and hooking them up to scans and seeing if their brains work differently - the hypothesis being that they would make decisions in dilemmas more 'rationally' and less 'emotionally', as a result of their experience and training, or the predisposition that led to them becoming firefighters in the first place.
The opening was deliberate - it's a common way that newspaper diarists start their entries.... but perhaps it's a common way that British newspaper diarists start their entries, and sounds wrong to American ears. So I have changed it. Nations divided by a common language etc.
Yes. People get bogged down with the practical difficulties. Another common one is whether you have the strength to throw the stranger off the bridge (might he resist your assault and even throw you off).
I think the problem is the phrasing of the question. People ask 'would you push the fat man', but they should ask 'SHOULD you push the fat man'. A thought experiment is like an opinion poll: the phrasing of the question has a large impact on the answers given. Another reason to be suspicious of them.
No, I wasn't declaring it meaningless.
My (perhaps trivial) points were that all hypothetical thought experiments are necessarily conducted in Far mode, even when the thought experiment is about simulating Near modes of thinking. Does that undermine it a little?
And
- while all Thought Experiments are Far
- Actual Experiments are Near.
I was illustrating that with what I hoped was an amusing anecdote -- the bizarre experience I had last week of having the trolley problem discussed with the fat man actually personified and present in the room, sitting next to me, and how that nudged the thought experiment into something just slightly closer to a real experiment.
It's easy to talk about sacrificing one person's life to save five others, but hurting his feelings by appearing to be rude or unkind, in order to get to a logical truth, was harder. This is somewhat relevant to the subject of the talk - decisions may be made emotionally and then rationalised afterwards.
Look, I wasn't hoping to provoke one of Eliezer's 'clicks', just to raise a weekend smile and to discuss a scenario where lesswrong readers had no cached thought to fall back on.
:-( no, not a draft! It was just supposed to be light-hearted - fun even - and to make a small point along the way.... it's a shame if every lesswrong article must be earnest and deep.
no, not at all, I don't think rational = unemotional (and I liked EY's article explaining how it is perfectly rational to feel sad ... when something sad happens).
But rationality does seem to be strongly associated with a constant meta-analytical process: always thinking about a decision, then thinking about the way we were thinking about the decision, and then thinking about the self-imposed axioms we have used to model the way that we were thinking about the meta-thinking, and some angst about whether there are undetected biases in the way that .. yada yada yada.
which is all great stuff,
but I wondered whether rationalists are like that all the time, or whether they ever come home late, open a beer or two and pick their nose while transfixed by Czechoslovakian wrestling on ESPN, without stopping to wonder why they are doing it, and wouldn't it be more rational to go to bed already.
Do you act all rational at home . . or do you switch out of work mode and stuff pizza and beer in front of the TV like any normal akratic person? (and if you do act all rational, what do your partner/family/housemates make of it? do any of them ever give you a slap upside the head?)
:-)
Can you make a living out of this rationality / SI / FAI stuff . . . or do you have to be independently wealthy?
I have been in Ashley's situation - roped in to play a similar parlour game to demonstrate game theory in action.
In my case it was in a work setting: part of a two-day brainstorming / team-building boondoggle.
In my game there were five tables, each with eight people, all playing the same iterated game.
On four out of the five tables every single person cooperated in every single iteration - including the first and the last. On the fifth table they got confused about the rules.
The reason for the behaviour was clear - the purpose of the game was to demonstrate that cooperation increased the total size of the pot (the game was structured that way). In a workplace setting the prize was to win the approbation of the trainers and managers, by demonstrating that we were team players, and certainly NOT to be the asshole who cheated his tablemates and walked off with $50.
On the fifth table they managed to confuse themselves such that in the first iteration two of them unwittingly defected. Their table therefore ended up with the least money, but the two individuals of course ended up the richest in the room - they were hideously embarrassed.
I was left wondering what amount of money it would have taken to change behaviour. Would people defect if there was $1000 at stake? In that setting, I think still not. $10,000? $100,000 ?
Practical game-theory experiments would be quite expensive to run, I think.
"Do not be too moral. You may cheat yourself out of much life. Aim above morality. Be not simply good; be good for something" Thoreau
What a lot of comments! (And I was worried that it was all too trivial. Lesson: never underestimate the power of Dr Who.) Thanks all.
@Nanani - yes, indeed, the initial round-up of 600 or so was composed of waifs and strays like that, including the ill. But when the demand for 10% was acceded to there wasn't time to hand-pick.
@SharedPhoenix - I agree, and a strength of this story was that there was no easy way out. The scenario was played out right to the end, with the main character forced to make a rational sacrifice. OK, he found a way for it to be just one child, but there was still a choice.
@mikem - I disagree. Yes, there were selfish cabinet members simply looking out for their own (this was dealt with in several contexts - there was an assumption that the interests of one's own child are beyond the limit of human rationality), however the decision to accede to this, and actually make it policy, was taken by the prime minister for rational reasons. He recognised that unless he spared the children of the decision makers and enforcers, there would be no decisions and no enforcing. It was purely rational. (And 'units' - yes, I meant that it was plausible that such a sinister euphemism would be employed.)
@jwdink - yes, I was surprised they took that route (the rational give-in rather than fight to the death); in TV-Land it was an unusual decision. That's why I wrote the post about it :-)
perhaps an arm-wrestling contest would be acceptable... hmm, but not possible on bloggingheadstv... a face-pulling contest?
what I'd actually like to see would be Robin Hanson v Mencius Moldbug
William Gibson? http://www.williamgibsonbooks.com/index.asp
He also thinks a lot - and cleverly - about the future but in a different way from Eliezer.
good luck. I am out of town today, but perhaps the next one...
It all goes to show that
what's grist to the mill is nose to the grindstone.
Browne's description of his own symptoms reminds me of interviews I have read and seen of Terry Pratchett talking about his early-onset dementia - particularly this:
"It's unusual because people deal with me and they refuse to believe I have Alzheimer's because at the moment I can speak very coherently, I can plot a novel"
Yvain, thanks for this - a fascinating case I hadn't read about before.
I think OB has improved since LW started up. OB now feels calmer and it's better paced.
"I'm pretty sure most of my Christian friends don't believe that any of Genesis is literally true"
Have you asked them? Probably not - it's considered rude to ask christians questions like that, isn't it? (which is no doubt one reason why religious beliefs are able to persist)
But if you did ask them you might be surprised by the answer.
Actually I suspect you are probably somewhat right: they don't believe Genesis literally. However I suspect they don't disbelieve it, either.
I actually don't think religious belief has much to do with doctrine, and I don't think many western christians ever actually sit down to assess exactly 'what' they believe, and what they don't. Religion isn't about believing silly things, it's primarily about belonging. Belonging to a group that, at a social, everyday level, is mostly harmless and normally well intentioned.
I also tried manually upvoting my own post - just to see what would happen.
Never never do that.
More seriously - shouldn't you get Karma from people REPLYING to your comments? Lots of Karma - I mean: someone upvotes me - that's nice - but someone actually REPLIES to me - wo hoo!
OK, so as an ardent game player and natural pedant, I need the rules and scoring system of this 'karma' thing explained to me - can I find it on the site somewhere?
To start with: I seem to get a karma point just for making a comment.. is that right?
(or is my mum on-line here upvoting my every post)
I like this article (but then I liked Dennett's ideas of belief in belief right from the start) and I've been thinking about this off and on all day.
But I think perhaps Eliezer over-analyses: on the surface this person's beliefs and thoughts seem fuzzy, so Eliezer admirably digs deeper - but perhaps it's just fuzz all the way down.
Perhaps she believes P and ~P; perhaps she believes P>Q and she believes P but she believes ~Q.
Perhaps you just have to shrug, and move on.
My experience is that most religious people give very, very, very little thought to what they actually believe (about a 10,000th of the introspection that Eliezer performs, say :-) ), and analysing it in terms of doctrine, beliefs (or indeed impressions) is simply using the wrong tools. Perhaps it's better to think about the emotions involved in 'being religious' and being 'part of' a religion.
Eliezer asks "how did you come to rationality?" It surprises me how many people answer: "this is how I lost my religion"
Clearly you can't be a rationalist while also being religious, but there is more to rationality than simply the absence of religion.
Anyway... personally: there's no one moment, but I'm a natural-born sceptic and persistently curious analyst. Perhaps rationality attracted me because it seems like methodical, organised, analytical scepticism.
Single biggest book: Hofstadter's G-E-B, right when it first came out. I just didn't know there could be a book like that....