Posts

Trial and error in policy making 2011-06-15T10:39:10.812Z

Comments

Comment by DavidAgain on A Challenge: Maps We Take For Granted · 2015-05-30T10:47:35.406Z · LW · GW

"Have you tried flying into a third world nation today and dragging them out of backwardness and poverty? What would make it easier in the 13th century?"

I think this is an interesting angle. How comparable are 'backward' nations today with historical nations? There are obvious differences: technology exists in the modern third world even if the infrastructure/skills to create and maintain it don't. In that way, I suppose they're more comparable to places in the very early middle ages, when people used Roman buildings etc. that they couldn't create themselves. But I also wonder how 13th century government compares to modern governments that we'd consider 'failed states'.

Comment by DavidAgain on LW should go into mainstream academia ? · 2015-05-14T21:30:58.903Z · LW · GW

I think this is the vital thing: not 'does academia work perfectly', but 'can you work more effectively THROUGH academia'. Don't know for sure the answer is yes, but it definitely seems like one key way to influence policy. Decision makers in politics and elsewhere aren't going to spend all their time looking at each field in detail, they'll trust whatever systems exist in each field to produce people who seem qualified to give a qualified opinion.

Comment by DavidAgain on LW should go into mainstream academia ? · 2015-05-14T21:29:06.314Z · LW · GW

Isn't there an argument that having a million voices synthesising and popularising and ten doing detailed research is much less productive than the opposite? Feels a bit like Aristophanes:
"Ah! the Generals! they are numerous, but not good for much"

Everyone going around discussing their overarching synthesis of everything sounds like it would produce a lot of talk and little research

Comment by DavidAgain on Translating bad advice · 2015-04-15T20:05:52.322Z · LW · GW

They often look the same.

You make a bit of effort to make conversation with someone you don't know: they give the minimum responses, move away when they can do so, and don't reciprocate initiation.

This could be shyness or arrogance. Very tough to tell the difference. Plus the two can actually be connected: if you see yourself as very different from others, the natural instinct is a mixture of insecurity ('I don't fit!') with arrogance ('I see things these guys don't'). I think the main way not to end up with a mix of both is just if one is very strong: if you're too insecure to be arrogant or too arrogant to be insecure.

Comment by DavidAgain on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-13T17:10:54.610Z · LW · GW

I basically agree with you, but I think situation B to quite that extent is rare. And of course identifying similarity to that is pretty open to bias if you just don't like that movement.

Concrete example - I used to use the Hebrew name of God in theological conversations, as this was normal at my college. I noticed a Jewish classmate of mine was wincing. I discussed it with him, he found it uncomfortable, I stopped doing it. Didn't cost me anything, happy to do it.

Also, I think some of this is bleeding over from 'I am not willing to inconvenience myself' to actively enjoying making a point (possibly in some vague sense that it will help them reform, though not sure if that's evidenced). I can get that instinct, and the habit of "punishing" people who push things can make sense in game theory terms. But I think the idea of not feeling duty-bound is different to getting to the position where some commenters might turn UP the music.

Comment by DavidAgain on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-13T17:05:08.317Z · LW · GW

You seem to be equivocating between 'a step towards being a utility monster' and 'being a utility monster'. Someone asking you to turn your music down is surely more likely to just be them actually having an issue with noise. There are literally hundreds of things I do without even feeling that strongly about them. So it seems eminently sensible to me that people tell me if they do matter a lot to them. If everyone in society gets to do that, even with a few free-riders, everyone ends up better off.

Obviously one way to organise the universally better off thing is to turn every interaction of this kind into a contractual agreement. But this is not how we deal with interactions between neighbours, generally. So you just act flexibly for others when asked unless you've got a fairly strong reason not to (including them constantly making unreasonable demands).

Comment by DavidAgain on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T06:17:54.776Z · LW · GW

This reads like quite a lot of bile towards a hypothetical person who doesn't like loud music.

You don't know what the neighbour's tried, you're putting a lot of weight on the word 'complained', which can cover a range of different approaches, and you're speculating about her nefarious motivations.

In my experience with neighbours, co-workers, generally other people, it's best to assume that people aren't being dicks unless you have positive reasons to think they are. And to lean towards accommodation.

Comment by DavidAgain on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T06:07:46.630Z · LW · GW

Interesting question. Not sure I agree with the premise, in that certainly where I live, I don't think there is a clear objective line of acceptable noise dictated by 'social norms'. I'd say that the social expectation should and does include reference to others' preferences and your own situation.

So if someone has a reason to dislike noise, you make more effort to avoid noise. But on the other hand, you're more tolerant of noise if, e.g. someone's just had a baby, than if they just like playing TV at maximum volume. Bit of give and take and all that.

Basically, I don't think there's really a hard division between 'objective requirement' and 'completely free favour you might choose to do' (unless the objective requirement is REALLY low, like at the legal level. But at that point doing what's 'required' would be seen almost universally as asshattery).

Social interaction is more complicated and blurry like that

Comment by DavidAgain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T11:37:25.825Z · LW · GW

Haven't seen this solution elsewhere: I think it's actually strong on its own terms, but doubt it's what Eliezer wants (I'm 90% sure it's about AI boxing, exploiting the reliability granted by Unbreakable Vows and Parseltongue)

However, this being said, I think Harry could avoid imminent death by pointing out that if a prophecy says he'll destroy the world, then he presumably can't do that dead. Given that we have strong reasons to think prophecies can't be avoided, this doesn't mean killing him is safe, but the opposite - what Voldemort should do is make him immortal. Then the point at which he destroys the world can be delayed indefinitely. Most likely to a point when Voldemort gets bored and wants to die, after the heat death of the universe.

This isn't a great solution for Harry, because the best way to keep him alive would be paralysed/imprisoned in some fairly extreme way. But it should hit the criteria. The one really big point against it is that all this info is very available to Voldemort, so not sure why he hasn't come up with it himself.

Comment by DavidAgain on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 108 · 2015-02-23T18:57:01.211Z · LW · GW

I love this idea in general: but don't see how he could have faked the map, given:

"Did you tamper with thiss map to achieve thiss ressult, or did it appear before you by ssurprisse?"

"Wass ssurprisse," replied Professor Quirrell, with an overtone of hissing laughter. "No trickss."

Comment by DavidAgain on Unpopular ideas attract poor advocates: Be charitable · 2014-09-15T21:43:20.291Z · LW · GW

Interesting and useful post!

But on your last bullet, you seem to be conflating 'leadership' with 'people presenting the idea'. I'm not sure they are always the same thing: the 'leaders' of any group are quite often going to be there because they're good at forging consensus and/or because they have general social/personal skills that stop them appearing like cranks.

Take a fringe political party: I would guess that people promoting that party down the pub or in online comments on newspaper websites or whatever are more likely to be the sort of advocate you describe. But in all but the smallest fringe parties, you'd expect the actual leadership to have rather more political skill.

Comment by DavidAgain on Deception detection machines · 2014-09-06T08:03:53.673Z · LW · GW

To me it sounds like the full information provided to avoid being incomplete would be so immense and complex that you'd need another AI just to interpret that! But I may be wrong.

Comment by DavidAgain on Deception detection machines · 2014-09-05T21:25:43.527Z · LW · GW

Not sure what allowing a small chance of false negatives does: you presumably could just repeat all your questions?

More substantially, I don't know how easy 'deception' would be to define - any presentation of information would be selective. Presumably you'd have to use some sort of definition around the AI knowing that the person it's answering would see other information as vital?

Comment by DavidAgain on What motivates politicians? · 2014-09-05T20:07:27.112Z · LW · GW

On the terminal value, the first thing I thought when I read this post was the quote below. Not sure if I actually find it convincing psychology, or I just find it so aesthetically effective that it gains truthiness.

"Now I will tell you the answer to my question. It is this. The Party seeks power entirely for its own sake. We are not interested in the good of others; we are interested solely in power, pure power. What pure power means you will understand presently. We are different from the oligarchies of the past in that we know what we are doing. All the others, even those who resembled ourselves, were cowards and hypocrites. The German Nazis and the Russian Communists came very close to us in their methods, but they never had the courage to recognize their own motives. They pretended, perhaps they even believed, that they had seized power unwillingly and for a limited time, and that just around the corner there lay a paradise where human beings would be free and equal. We are not like that. We know that no one ever seizes power with the intention of relinquishing it. Power is not a means; it is an end. One does not establish a dictatorship in order to safeguard a revolution; one makes the revolution in order to establish the dictatorship. The object of persecution is persecution. The object of torture is torture. The object of power is power. Now you begin to understand me."

Second thing I thought was that if the query was genuine, adamzerner would in some ways be ideal to be appointed dictator of something, though probably less great at actually trying to win at the Game of Politics (you win or you're deselected)

Comment by DavidAgain on What motivates politicians? · 2014-09-05T07:18:14.979Z · LW · GW

Others are probably right that politicians have plenty of genuine choices, where they don't have to use their decisions to cater to lobbyists (or even voters). It's a bit different in the UK, because legislators also form the executive: congressmen may have rather blunter tools to get their way, but Ministers in the UK definitely make LOADS of decisions and a large number aren't fixed by voter demand, lobby power or even party position: working in the civil service supporting Ministers I've seen fairly substantial changes to policy made simply because the individual Minister is replaced by one with a different outlook.

There's also the point that politicians 'cater to' THEIR voters/lobbyists, for the most part. They rely on the support of those that broadly agree with them.

I also think you're approaching this too much as if being a politician is something someone's worked out carefully as a strategy to do a specific thing. People find politics exciting and engaging - a lot of this, though not all, is because of genuinely caring about the issues - and that's why they want to be involved with it. Once involved, they want to be successful, that's human nature. I doubt many people go into politics purely because they've calculated it's the way they can get a list of policies delivered, although I think there are a few and they can add a lot to the political process.

Come to think of it, it's worth you looking at other countries if you're interested in this. Your theory of lobbying assumes that individual politicians can rack up $ms of advertising money, but various countries have spending caps (UK) or have systems of proportional representation that mean you can't really advertise as an individual. If you observe the same phenomena without the personal job-security element, then your model is probably flawed!

Comment by DavidAgain on Multiple Factor Explanations Should Not Appear One-Sided · 2014-08-08T15:45:39.259Z · LW · GW

I think my intuition depends on the context, to be honest: and I don't have Diamond's book to hand (don't think I own it, though I read it a few years ago).

I think it's clear that the briefest possible explanation of why a specific event happened is the key positive causes. Then you have the option of including two other sorts of things

  • Why the countervailing factors didn't stop it
  • Why similar things did not happen at other times/places in similar conditions

Say you're explaining why a country elected a particular political party. You would most naturally talk about the positives: 'polls showed they were trusted on issues X and Y'. You'd mostly talk about overcoming negatives where there was an important change in that area: 'people previously didn't like them because they associated them with policy Z, but the new leader convinced many voters that this was a thing of the past'. It wouldn't be as relevant in a short summary to say 'they probably lost some votes because of issues A and B' or 'while another party in a different country is trusted on the same issues, they lost an election six months later - the difference is due to C and D'

There's a difference here with policy debates, because they are saying what we should do, rather than trying to trace the line of what led to a particular thing that happened. Personally, I'd be much happier giving a one-sided account that said 'political position X is widely supported because of the following factors' than a one-sided account that said 'political position X SHOULD BE widely supported because of the following factors', even if the following detail was identical.

A lot of this might be quite parochial and based on various academic/journalistic/professional traditions, though. I'm trying to wrap my head round the underlying point about facts causing their evidence but this not applying to policy debates/moral positions/multiple factor explanations. I think I basically agree that multiple factor explanations are analogous to policy debates in this regard, but I'm trying to unpack some examples on the moral front to see if I agree there...

Comment by DavidAgain on Multiple Factor Explanations Should Not Appear One-Sided · 2014-08-08T14:32:09.105Z · LW · GW

Cheers for the thoughtful response! I think your global warming argument is subtly different: people don't want to just explain why temperatures rose at a certain point in the past (which would be the equivalent of Diamond's argument). They want to understand whether we should expect temperatures to rise in the future.

The question here is not 'Why did Athens beat Sparta', but closer to 'as Corinthians watching the arms race, should we expect Athens or Sparta to win next time'. In this case, we definitely want to know both sides, even if Athens has won all of the conflicts we've seen: for instance if all the other conflicts they won with ships and this is a landlocked struggle for some reason, that would change our conclusions.

What stands out here is that this sort of balanced account is what you want as soon as you expect your beliefs to 'pay rent'. A historical explanation which simply explains what series of events led to a certain event isn't necessarily particularly useful, even if it's true.

So for instance, a historian might seek to show that the First World War was caused by the shooting of Franz Ferdinand or that it inevitably followed from the alliance system of the early twentieth century. These explanations wouldn't necessarily ask 'what other things might cause world wars?' or 'what things were going on that might have stopped this world war?' unless they were directly relevant. And because of that, the primary purpose is to establish one particular incident of causation, not to draw general lessons that shooting archdukes is a Bad Idea or that all world wars are caused by the fallacious belief that having two huge blocs would deter each other.

On the other hand, historians might argue that more general economic/social laws apply through history: that slave cultures are at a significant advantage/disadvantage in war, that wars tend to lead to greater/lesser power for the previously downtrodden, that feudal systems tend to turn into democratic systems or whatever. For those cases the aspiration is to have some predictive power, and so both sides are needed.

Comment by DavidAgain on Roles are Martial Arts for Agency · 2014-08-08T09:54:15.115Z · LW · GW

Very interesting. I wonder how general the roles are. What you talk about at the end is basically bystander effect: I believe that different people are more or less vulnerable to that, and I wonder whether being more 'bystander' prone goes with being more likely to go along with pressure to conform (Milgram etc.) and possibly (to make it clear this isn't a straightforwardly ethical thing) more likely to cooperate in the Prisoner's Dilemma. The most important role question might be simply whether you see yourself as a generalised Agent with responsibility for what actually happens beyond fulfilling set roles you've been given.

To quote HPMOR again: 'PC or NPC, that is the question'

Comment by DavidAgain on Multiple Factor Explanations Should Not Appear One-Sided · 2014-08-08T09:37:54.755Z · LW · GW

I don't think I really disagree with any of this! My point was that, as things stand, this isn't a case of individuals having confirmation bias, but of the system of how we as a society/culture/academy tend to approach the concept of 'explaining something'.

As far as I can see, your approach ends up not being focused on actually explaining a specific thing at all, but rather identifying all the stuff going on in a certain area under certain categories. Reminds me a bit of http://lesswrong.com/lw/h1/the_scales_of_justice_the_notebook_of_rationality/ in that regard.

If we know loads about a certain thing then this might also clearly point to why it was 'inevitable' that what happened did. But before then (and I doubt we know that much about Athens/Sparta or about the rise of agriculture), it mainly functions to turn 'explanations' into 'enumerations of relevant facts'. This is good in some ways because it stops people thinking issues have been resolved - I can imagine lots of people take Diamond's analysis to 'disprove' other accounts of the rise of agriculture, for instance. The downside is that given our psychology as it is, I suspect we think about things better when people are creating hypotheses and arguing for/against them rather than contesting the detail of a list of possible factors with no clear conclusion.

Comment by DavidAgain on Multiple Factor Explanations Should Not Appear One-Sided · 2014-08-07T20:52:12.068Z · LW · GW

This is very interesting indeed! I'm not sure how much we can get to bias, or whether it's about what the argument is trying to say. Is he asserting that those 8 are the (only) relevant things that could make agriculture more likely? It's a while since I read it, but I saw it more as saying that those 8 are the reason why historically it was the Fertile Crescent. Not that it would always be those things on any remotely similar world, or even necessarily that it would always be there if you re-ran history. In fact, as you say, he seems to mostly be arguing why it's plausibly NOT the 'people from the Fertile Crescent are superior' argument. Or more strongly, why the geographical case is more compelling than the gene-based one.

Say there's a ninth category (I dunno, 'distance from steppes which tend to be full of dangerous nomads') which Fertile Crescent scores badly on, and which makes it 'less likely' to develop agriculture. If what we're trying to explain is why Fertile Crescent succeeded, we don't necessarily focus on that. If we wanted to give a complete explanation, we might do, but it's not necessary. Similarly, if we wanted to say 'why did Sparta beat Athens' we could point to the army, and if we wanted to ask 'why did Athens beat Sparta', we'd point to the navy (or whatever). The fact we can go either way shows that this explanation isn't strong enough to be predictive, but it gives a compelling alternative to 'innate cultural/genetic superiority'

Comment by DavidAgain on Open thread, August 4 - 10, 2014 · 2014-08-06T08:30:35.927Z · LW · GW

Thought that people (particularly in the UK) might be interested to see this, a blog from one of the broadsheets on Bostrom's Superintelligence

http://blogs.telegraph.co.uk/news/tomchiversscience/100282568/a-robot-thats-smarter-than-us-theres-one-big-problem-with-that/

Comment by DavidAgain on Gaming Democracy · 2014-08-01T09:55:13.868Z · LW · GW

I think the Kickstarter idea is interesting as a way to try to identify large-enough areas where a voting bloc might exist - and might make a more credible commitment than just a petition from people saying they'll vote based on something, which people might sign several of just because they support the cause. Otherwise, as people have said, this is something that pretty much exists already in various forms.

I think the fundamental problem with all the things you mention (transhumanism, homeopathy, FAI etc) is that the number of people per constituency who actually care about these enough to vote based on them would be minuscule. If this sort of thing was going to work, it would be on something with wider and more visceral appeal.

You also have to get things on the agenda - and realistically on the agenda - as well as get votes. The Government of the day sets the vast majority of what's debated, and deals with budgets etc. I therefore think this works better for things that are already being debated, and where the decision can be made somewhat independently from the broader government programme (e.g. the Eurosceptic one you mentioned). Otherwise there's little reason to think that the MP will get a chance to vote for your policy. You mention private members' Bills, but (i) they may not get one; (ii) they're certainly not likely to get many, so you're asking them to give up a chance to promote their real priorities and/or build credibility with some other group. This is really unlikely, and makes defecting more likely: how many of your Kickstarters would really blame them if they raised an issue that had just emerged as a big urgent problem? (iii) they don't get passed all that often, ESPECIALLY on budgetary-type things: the science spend will already have been allocated to the Research Councils on a fairly long-term basis as part of an overall Spending Review, MPs don't very often just vote for a slug of money to go to something. You're talking about getting an extra £1bn out of the Treasury or diverting about a 30th of the already-allocated science budget: I don't see it happening.

Finally, I wouldn't be entirely surprised if that Kickstarter commitment to vote certain ways was regarded as breaking a law established to protect the secret ballot and prevent vote-rigging

Comment by DavidAgain on Separating university education from grading · 2014-07-05T09:33:09.103Z · LW · GW

This sort of standardised/independent testing would have a much more radical effect than the professor-teacher relationship. From my experience in the UK, plenty of people could get a decent grade at various humanities subjects just by doing a couple of weeks of 'revising' the subjects raised in exams and making sure they understood how it was graded. With university increasingly expensive (in the UK fees were introduced about 20 years ago, rose to £3k a year 10 years ago and rose to up to £9k a year a couple of years ago), it would be very interesting to see the effect of someone being able to get a degree in English by demonstrating their ability rather than having to pay the time/money cost of 3 years of studying.

I'm not sure the results would be overall good: you'd get more hothousing and less depth of knowledge etc. etc. But I think it would be quite meritocratic both for first-time students and for people in work who'd like to respecialise in something needing a new degree but find the costs of doing the course prohibitive.

It would also expose a huge difference between degrees that just require a library and a computer, and degrees that require access to labs of various kinds.

Comment by DavidAgain on Conservation of expected moral evidence, clarified · 2014-06-23T13:02:59.982Z · LW · GW

This area (or perhaps just the example?) is complicated somewhat because for authority-based moral systems (parental, religious, legal, professional...) directly ignoring a command/ruling is in itself considered to be an immoral act: on top of whatever the content of said act was. And even if the immorality of the act is constant, most of those systems seem to recognise in principle and/or in practice that acting when you suspect you'd get a different order is different to direct breaking of orders.

This makes sense for all sorts of practical reasons: caution around uncertainty, the clearer Schelling point of 'you directly disobeyed', and, cynically, the fact that it can allow those higher in the authority chain plausible deniability (the old "I'm going to pay you based on your sales. Obviously I won't tell you to use unscrupulous methods. If you ask me directly about any I will explicitly ban them. But I might not ask you in too much detail what you did to achieve those sales: that's up to you")

Comment by DavidAgain on A Story of Kings and Spies · 2014-06-13T13:01:52.326Z · LW · GW

To be charitable, it says that he'd be making 'payments' on 200 coins for the rest of his life. So possibly this means that he can pay off the interest, but not the capital? This would assume that he can pass on the debt to his children or somesuch, or just that banks grudgingly lend money to people who owe the paranoid king and then just extract as much money as they can from those people...

Comment by DavidAgain on What should normal people do? · 2013-10-26T16:43:45.019Z · LW · GW

Yep! But it's the best way I can imagine that someone could plausibly create on the forum.

Comment by DavidAgain on What should normal people do? · 2013-10-25T16:50:02.228Z · LW · GW

I reject the assumption behind 'ability with (and consequentially patience for and interest in)'. You could equally say 'patience for and interest in (and consequentially ability in)', and it's entirely plausible that said patience/interest/ability could all be trained.

Lots of people I know went to schools where languages were not prioritised in teaching. These people seem to be less inherently good at languages, and to have less patience with languages, and to have less interest in them. If someone said 'how can they help the Great Work of Translation without languages', I could suggest back office roles, acting as domestic servants for the linguists, whatever. But my first port of call would be 'try to see if you can actually get good at languages'

So my answer to your question is basically that by the time someone is the sort of person who says 'I am not that intelligent but I am a utilitarian rationalist seeking advice on how to live a more worthwhile life' that they are either already higher on the bellcurve than simple 'intelligence' would suggest, or at least they are highly likely to be able to advance.

Comment by DavidAgain on What should normal people do? · 2013-10-25T16:43:57.396Z · LW · GW

True. I don't think I can define the precise level of inaccuracy or anything. My point is not that I've detected the true signal: it's that there's too much noise for there to be a useful signal.

Do I think the average LessWronger has a higher IQ? Sure. But that's nothing remotely to do with this survey. It's just too flawed to give me any particularly useful information. I would probably update my view of LW intelligence more based on its existence than its results. If anything, reading the thread lowers my opinion of LW intelligence, simply because this forum is usually massively more rational and self-questioning than every other forum I've been on, which I would guess is associated with high IQ, and people taking the survey seriously is one of the clearest exceptions.

BTW, I'm not sure your assessments of knitting/auto maintenance/comic books/web forums are necessarily accurate. I'm not sure I have enough information on any of them to reasonably guess their intelligence. Forums are particularly exceptional in terms of showing amazing intelligence and incredible stupidity side by side.

Comment by DavidAgain on What should normal people do? · 2013-10-25T15:09:41.937Z · LW · GW

I bet the average LessWrong person has a great sense of humour and feels things more than other people, too.

Seriously, every informal IQ survey amongst a group/forum I have seen reports very high IQ. My (vague) memories of the LessWrong one included people who seemed to be off the scale (I don't mean very bright. I mean that such IQs either have never been given out in official testing rather than online tests, or possibly that they just can't be got on those tests and people were lying).

There's always a massive bias in self-reporting: those will only be emphasised on an intellectual website that starts the survey post by saying that LessWrongers are, on average, in the top 0.11% for SATs, and gives pre-packaged excuses for not reporting inconvenient results - "Many people would prefer not to have people knowing their scores. That's great, but please please please do post it anonymously. Especially if it's a low one, but not if it's low because you rushed the test", (my emphasis).

If there's a reason to be interested in average IQ beyond mutual ego-massage, I guess the best way would be to have an IQ test where you logged on as 'Less Wrong member X' and then it reported all the results, not just the ones that people chose to share. And where it revealed how many people pulled out halfway through (to avoid people bailing if they weren't doing well).

Comment by DavidAgain on What should normal people do? · 2013-10-25T15:00:29.244Z · LW · GW

This doesn't seem to me to be about fundamental intelligence, but upbringing/training/priorities.

You say in another response that IQ correlates heavily with conscientiousness (though others dispute it). But even if that's true, different cultures/jobs/education systems make different sort of demands, and I don't think we can assume that most people who aren't currently inclined to read long, abstract posts can't do so.

I know from personal experience that it can take quite a long while to get used to a new way of taking in information (lectures rather than lessons, reading rather than lectures, reading different sorts of things - from science to arguments relying on formal or near-formal logic to broader humanities). And even people who are very competent at focusing on a particular way of gaining information can get out of the habit and find it hard to readjust after a break.

In terms of checking privilege, there is a real risk that those with slightly better training/jargon, or simply those who think/talk more like ourselves are mistaken for being fundamentally more intelligent/rational.

Comment by DavidAgain on Notes on Brainwashing & 'Cults' · 2013-09-14T14:49:06.405Z · LW · GW

Note that the survey says that they believe that their *countrymen* venerated Il-Sung. Defectors may be likely to dislike Il-Sung themselves, but my (low-certainty) expectation would be that they'd be more likely to see the population at large as slavishly devoted. People who take an unusual stance in a society are quite likely to caricature everyone else's position and increase the contrast with their own. Mind you, they sometimes take the 'silent majority' line of believing everyone secretly agrees with them: I don't know which would be more likely here.

But I'd guess that defectors would both be more likely to think everyone else is zealously loyal, AND more likely to believe that everyone wishes they could overthrow the government. I'd imagine them to be more likely to end up at the extremes, in short.

Comment by DavidAgain on Notes on Brainwashing & 'Cults' · 2013-09-14T14:45:10.196Z · LW · GW

I don't think 'brainwashing' is a helpful or accurate term here, in the sense that I think most people mean it (deliberate, intensive, psychological pressure of various kinds). Presumably most North Koreans who believe such a thing do so because lots of different authority sources say so and dissenting voices are blocked out. I'm not sure it's helpful to call this 'brainwashing', unless we're going to say that people in the middle ages were 'brainwashed' to believe in monarchy, or to be racist, or to favour their country over their neighbours etc.

Even outside of repressive regimes, there are probably a whole host of things that most Americans believe that most Brits don't and vice versa, and that's in a case with shared language and culture. I'm not sure 'brainwashing' can be used just because lots of people in one place believe something that hardly anyone from outside does.

Comment by DavidAgain on Please share your reading habits/techniques/strategies · 2013-09-13T15:37:57.674Z · LW · GW

At university, I had to write an essay (2,000 words or so) every week or two on the subject we were currently studying. Then I had to talk about it for an hour or so with someone far better informed on the topic than me. I retained far, far more about subjects studied this way than I did about subjects I just read about or attended lectures on, even though a lot of the time was used in apparently less optimal ways (skimming for things to quote, writing the actual essays to be elegant as well as to make the relevant arguments, etc.).

As a caveat to this, I should say that the subject was often very subjective (I wasn't embedding fundamental complex truths, more taking sides on debates: the most rigorous it got was analytic philosophy and science-being-talked-about-by-a-humanities-student), and that I really enjoy those sort of arguments, so I might be predisposed towards them.

For me, this way of learning things is a bit like realising that I can be incredibly creative (in the sense of making up arguments and crystallising my thoughts) in a test situation: I know it works, but I find it very difficult to force myself into the artificial situation of having to do it. If I need to in the future for some reason, I think I'd need to find a buddy or something to provide pre-commitment.

Comment by DavidAgain on [LINK] Behind the Shock Machine: book reexamining Milgram obedience experiments · 2013-09-13T15:26:06.275Z · LW · GW

Interesting. I have to resist the urge to dismiss this (because finding out about the experiment felt like such an amazing revelation, you don't want to think it's all made up).

I think it's quite possible that the results were exaggerated in the way people do with anecdotes: simplified to hammer the point home, and losing some of the truth in doing so. I don't know what the standards of accuracy were in psychological papers of the time, so it's unclear whether to take this as merely unfortunate or as evidence of dishonesty in the sense of breaking the unspoken protocols of the discipline. I'd be interested to hear what counted as 'coercing' people to press the button, though; without coercion, I don't see how the distinction between 'obey command to press button' and 'don't' can be blurred as this suggests.

The 'reason to believe people saw through it' is weird: would like more detail.

But I also tend to distrust the author (at least as represented in this editorial) because of a later section which seems very shoddy thinking to me:

"Gradually, Perry came to doubt the experiments at a fundamental level. Even if Milgram’s data was solid, it is unclear what, if anything, they prove about obedience. Even if 65 percent of Milgram’s subjects did go to the highest shock voltage, why did 35 percent refuse? Why might a person obey one order but not another? How do people and institutions come to exercise authority in the first place? Perhaps most importantly: How are we to conceptualize the relationship between, for example, a Yale laboratory and a Nazi death camp? Or, in the case of Vietnam, between a one-hour experiment and a multiyear, multifaceted war? On these questions, the Milgram experiments—however suggestive they may appear at first blush—are absolutely useless."

This seems to be a case of rejecting a very powerful and useful piece of information because it doesn't answer a whole series of additional, arbitrarily chosen questions. If we can show that penicillin can stop infection, this is useful. And it isn't 'doubted at a fundamental level' by saying:

"Even if 65 percent of patients got better, why did 35 percent not? Why might one infection respond, and not another? How do people get infected in the first place? Perhaps most importantly: How are we to conceptualize the relationship between, for example, a London hospital and a battlefield? Or between an urgent case of gangrene and a chronic illness slowly becoming more life-threatening? On these questions, the Fleming experiments—however suggestive they may appear at first blush—are absolutely useless."

These should be interesting new angles to explore, not reasons to ignore the original study.

Comment by DavidAgain on Biases of Intuitive and Logical Thinkers · 2013-08-14T09:04:11.030Z · LW · GW

In his situation, I'd probably read 'any' in the second sense simply because as a non-mathematician I can imagine the second sense being a practical test: (I give you a number, you show me that the difference between A and B is smaller, we reach a conclusion) whereas the first seems esoteric (you test every conceivable small number...)

On the other hand, the first reading is so blatantly wrong, the commenter really should have stepped back and thought 'could this sentence be saying something that wasn't obviously incorrect?' Principle of charity and all that.

Comment by DavidAgain on How to understand people better · 2013-08-14T09:01:29.724Z · LW · GW

Indeed. Which is why I like having discussions with people who follow the same ruleset as me and engage with metaphors in that pure, stripped-down way. It saves a hell of a lot of time. But there are lots of things that save time in communication that do not make for good communication in general.

Comment by DavidAgain on How to understand people better · 2013-08-13T15:23:35.503Z · LW · GW

If I understand what you mean, I used to see 'metaphor blindness' in a lot of people. But I think it's more about how well people wall off the relevant bit of the metaphor/analogy from its general tone. I see this a lot in politics, on all sides, and I don't think the 'metaphor-blind' people are just deliberately misunderstanding to score points. It may be an inability to separate the two, or it may be a feeling on their part that the metaphor is smuggling in unfair implications.

For instance, on same-sex marriage (a good case for me to observe this, because I'm instinctively pro-, and the cases I'm looking at are metaphor-blindness by people who are also pro-), two arguments come to mind:

1) Pro-SSM argument: 'Marriage should be allowed as long as there is consent between the two people.' Counter-analogy: 'But on those grounds, incestuous marriage or polygamy should also be allowed.'

2) Pro-SSM argument: 'If you don't like gay marriage, don't get gay married.' Counter-analogy: "Imagine for a moment that the Government had decided to legalise slavery but assured us that 'no one will be forced to keep a slave'. Would such worthless assurances calm our fury? Would they justify dismantling a fundamental human right? Or would they simply amount to weasel words masking a great wrong?" (this one is a direct quote from a Cardinal)

In these cases, the general response from pro-SSM people has been 'I can't believe you're comparing gay marriage to incest/slavery', because the toxicity of the comparison point overwhelms the quite focused analogy in both cases. People often feel the same when you try to convince them of something by analogy, particularly if they feel you are trying to show that they are wrong by intellectual force rather than just taking them along with you. It took me a while to adjust to this one: I just felt everyone else was *wrong*, and at a gut level I still do, and I prefer arguing with people who take analogies in a narrow sense. But eventually, in line with the post above, you can't repeatedly, reliably have failed communications with the rest of society and still consider everyone else to be the aberration.

Comment by DavidAgain on How to understand people better · 2013-08-13T14:57:37.345Z · LW · GW

Upvoted. The general phenomenon is interesting, the gendered aspect could also be interesting, but is also potentially a big distraction. In my relationship, I am definitely often Alex. Although my girlfriend is better at being Bob than most men are, including me (in terms of resolving the issue in a way that we're both happy with, not 'winning the conversation').

Comment by DavidAgain on Biases of Intuitive and Logical Thinkers · 2013-08-13T14:49:38.476Z · LW · GW

I'm not sure if your first example "Ignoring information they cannot immediately fit into a framework" includes "sticking to an elegant, logical framework and considering cases where this does not occur to be exceptions or aberrations even when they are very common".

That's something you see quite a lot in some otherwise quite rational people: the 'if my system can't explain it, the world's wrong' attitude. As illustrated here: http://xkcd.com/1112/

Comment by DavidAgain on Biases of Intuitive and Logical Thinkers · 2013-08-13T14:45:11.675Z · LW · GW

I'm not sure the first example is really an error on the part of the commenter, unless there was an implicit shared technical usage at play. The word 'any' in the quote you give below is not very clear. I knew what it meant, but only because I understood what the argument was getting at.

"If, for any small positive number you give me (epsilon), I can show that the difference between A and B is less than epsilon, then I have shown A and B are equal."

In this case, 'any' means 'if, whatever number is given, the following analysis applies, then the conclusion is reached'.

Compare:

"If, for any small positive number you can give me (epsilon), I can show that the difference between A and B is greater than epsilon, then I have shown A and B are not equal".

Here, the natural reading is 'if a single case is found where the following analysis applies, the conclusion is reached'

As I said, this may be a failing of technical language on my part, but I don't think normal English is clear here.
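The two readings can be pinned down with quantifiers (my own formalisation, not from the original thread):

```latex
% Intended reading of the first sentence: "any" is universal.
\bigl(\forall \varepsilon > 0 : |A - B| < \varepsilon\bigr) \implies A = B
% This is true: the only non-negative number smaller than every
% positive epsilon is zero, so |A - B| = 0.

% The misreading takes "any" as existential:
\bigl(\exists \varepsilon > 0 : |A - B| < \varepsilon\bigr) \implies A = B
% This is false: |A - B| = 0.5 < 1 would "prove" unequal numbers equal.

% The second sentence really is existential, which is why it reads naturally:
\bigl(\exists \varepsilon > 0 : |A - B| > \varepsilon\bigr) \implies A \neq B
```

The ambiguity is that English 'any' can express either quantifier depending on context, which is exactly why the two sentences invite different readings.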

Comment by DavidAgain on What's the name of this cognitive bias? · 2013-07-09T18:18:49.252Z · LW · GW

I identify with this very strongly. It's even stronger for me if the distance I have to travel is already 'extra': e.g. if I forget my train ticket, I'd rather take a much slower bus than spend ten minutes walking back to the house, because I find the latter intensely frustrating.

It's interesting because you don't just feel it at the point of being about to retrace your steps: you're aware of it as part of journey planning.

Comment by DavidAgain on Can we dodge the mindkiller? · 2013-06-29T09:02:13.731Z · LW · GW

These are both risks. But the issue about manipulation at various points is presumably unlikely to add up to systematically misleading results: the involvement of many manipulators here would presumably create a lot of noise.

Comment by DavidAgain on On manipulating others · 2013-06-26T15:37:54.747Z · LW · GW

Yes: buying stuff from people is pretty much instrumentalising them. That's capitalism! Although there tend to be limits as you note. And the 'would they like this if they knew what I was doing' is obviously a very good rule of thumb.

Occasionally, you'll have to break this. Sometimes somebody is irrationally self-destructive and you basically end up deciding that you have a better sense of what is best for them. But that's an INCREDIBLY radical/bold decision to make and shouldn't be done lightly.

Comment by DavidAgain on Can we dodge the mindkiller? · 2013-06-26T15:34:25.152Z · LW · GW

I'm not sure exactly what you're referring to, so it's hard to respond. I think most of the damage done to evidence-gathering is done in fairly open ways: the organisation explains what it's doing even while it's selecting a dodgy method of analysis. At least that way you can debate about the quality of the evidence.

There are also cases of outright black-ops in terms of evidence-gathering, but I suspect they're much rarer, simply because that sort of work is usually done by a wide range of people with varied motivations, not a dedicated cabal who will work together to twist data.

Comment by DavidAgain on On manipulating others · 2013-06-21T06:25:18.325Z · LW · GW

You clearly implied "only". The external favours were the basis of the motivation.

"It isn't immoral to notice that someone values friendship, and then to be their friend **in order to get the favors** from them that they willingly provide to their friends"

In answer to your question: I'd still find it a little weird, tbh.

Comment by DavidAgain on Can we dodge the mindkiller? · 2013-06-21T06:21:52.209Z · LW · GW

Well, everything has risks. But you can generally tell when people are doing that. And it's harder if the evidence is systematic rather than post-hoc reviews of specific things.

Comment by DavidAgain on Rationality Quotes June 2013 · 2013-06-20T17:21:41.032Z · LW · GW

Well, until we know how to identify if something/someone is conscious, it's all a bit of a mystery: I couldn't rule out consciousness being some additional thing. I have an inclination to do so because it seems unparsimonious, but that's it.

Comment by DavidAgain on On manipulating others · 2013-06-20T17:05:23.867Z · LW · GW

Not revealing your own preferences and giving a balanced analysis that doesn't make them too obvious usually works.

But I don't think you can meaningfully manipulate people by accident. The nearest thing is probably having/developing a general approach that leads to you getting your way over other people, noticing it, and deciding that you like getting your way and not changing it.

What you really can do (and what almost everyone does) is manipulate people while maintaining plausible deniability (including sometimes to yourself). But I suspect most people can identify when they're manipulating people and trying to trick themselves into thinking they're not.

Comment by DavidAgain on On manipulating others · 2013-06-20T16:58:02.453Z · LW · GW

Ah: this may be the underlying confusion. I don't see the instrumentalist evo-psych approach as bad and everything else as good. I see any deceptive, treating-people-as-things approach as not valuing people.

I don't see the people who brag about cheating and slag off their wives as models to aspire to. This is both in that I don't particularly value the outcome they're aiming for, and that I object to the deception and the treating people as things.

But on the broader point about attitude mattering: obviously it might change the activity in that way. But my point was more that you can't step outside of your own psychology and humanity: thinking about people in this detached, strategic way is not something done by a person looking in from outside the system. Your sex life isn't a game of The Sims. My intuition and experience is that doing something in a way constantly focused on trying to get individual bits of stuff out of it ('I will now buy this wine to get sex; I will now comfort my friend so that they will help me move house next week; I will try to understand this subject I'm studying so that I get a higher mark in the exam') leads to you having less fun and doing less good than engaging with things on their own terms (which is compatible with being aware of the underlying dynamics).

There's also an issue of sincerity here, which, to unpack it into something that might be more appealing to your approach, is essentially game-theoretic. If you reassess for your own benefit at every point, people can't rely on you in tough situations. I would like people to be able to rely on me, and to be able to rely on them. Taking other people seriously and relating to them as people rather than as strategies allows you, in effect, to pre-commit.
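The pre-commitment point can be made concrete with a toy iterated prisoner's dilemma (my own illustrative payoffs and strategies, not anything from the original discussion): against a partner who reciprocates, a player who re-optimises every round does worse over time than one who has committed to cooperating.

```python
# Standard prisoner's dilemma payoffs: (my_move, their_move) -> my score.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(my_strategy, rounds=20):
    """Partner plays tit-for-tat: cooperate first, then copy my last move."""
    total, partner_move = 0, "C"
    for _ in range(rounds):
        my_move = my_strategy(partner_move)
        total += PAYOFF[(my_move, partner_move)]
        partner_move = my_move  # tit-for-tat: partner copies me next round
    return total

# Re-assessor: within any single round, defection strictly dominates,
# so round-by-round "what's best for me right now" always defects.
myopic = lambda partner_last: "D"

# Pre-committed cooperator: has decided in advance to be reliable.
committed = lambda partner_last: "C"

print(play(myopic), play(committed))
```

Over 20 rounds the myopic player scores 24 (one round of exploitation, then mutual defection) while the committed cooperator scores 60: being the kind of person others can rely on pays off precisely because it changes how others treat you.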

Comment by DavidAgain on Rationality Quotes June 2013 · 2013-06-20T16:27:30.992Z · LW · GW

I dunno about essences. The point is that you can observe lots of interactions of neurons and behaviours and be left with an argument from analogy to say "they must be conscious because I am and they are really similar, and the idea that my consciousness is divorced from what I do is just wacky".

You can observe all the externally observable, measurable things that a black hole or container can do, and then if someone argues about essences you wonder if they're actually referring to anything: it's a purely semantic debate. But you can observe all the things a fish, or tree, or advanced computer can do, predict it for all useful purposes, and still not know if it's conscious. This is bothersome. But it's not to do with essences, necessarily.