I attended the minicamp last summer, at more personal expense than most participants, since I flew in from Europe (I did have other things to do in California, so the cost wasn't entirely for minicamp).
If you want an analogy with minicamp, think of an academic summer school. At the most important level, I think the only thing that really separates minicamp (or an academic summer school) from Christian camps is that the things they teach at minicamp (and summer schools) are mostly correct.
I go to summer schools to learn from people who have thought about things that I care about, in greater depth than I have. If you don't believe that will be true, don't go. You should be able to make a reasonable guess as to whether you have things to learn by looking at the instructors' posts on Less Wrong.
I definitely agree with many things that the other participants said. I found that minicamp gave me a sense that things that normal people consider insoluble are often not, and a well thought out series of actions can lead you to places that most people would not believe. I also found it inspiring to be around a group of people that really care about improving themselves - something that I have found relatively rarely.
I have one genuine criticism of minicamp. There are reasons to be tactically 'irrational' in the real world. As a cartoon example: if disagreeing repeatedly with your boss will get you fired from your well-paid job, and you're giving significant amounts of money to the efficient charity, then stay quiet.
Now, Eliezer is too smart to fall for this - it's reading his writing that let me clearly understand the difference between faux-rational (Spock-like dedication to the truth, and getting fired) and truly rational (shutting up). Indeed, the complexities of this are beautifully explored in Harry Potter and the Methods of Rationality. However, at minicamp, I felt like the less inspiring aspects of being rational were under-emphasised. That is totally understandable, since talking about bending to social domination, lying, etc., is low-status. Also, the instructors at minicamp have, quite deliberately, created a community where they are somewhat isolated from having to deal with irrational people, so they probably don't viscerally experience its importance on quite such a regular basis.
I felt that, at the end of minicamp, there should have been a session pointing out a few aspects of living rationally in an irrational world. I think we needed a lecture from Professor Quirrell, so that we don't create rationalists who can spot every bias known to psychology (and a few more) but aren't actually having a positive impact on the world, because they don't know how to get things done.
I'll end by pointing out that I've just asked Anna whether I can go back this year, maybe as a participant, maybe as a volunteer. Hopefully that should let you estimate how I truthfully rate the overall experience.
This isn't hugely relevant to the post, but LessWrong doesn't really provide a means for a time-sensitive link dump, and it seems a shame to miss the opportunity to promote an excellent site for a slight lack of functionality.
For any cricket fans that have been enjoying the Ashes, here is a very readable description of Bayesian statistics applied to cricket batting averages.
Although I didn't actually comment, I based my choice on the fact that most people only seem to be able to cope with two or three recursions before they get bored and pick an option. The evidence for this was based on the game where you have to pick a number between 0 and 100 that is 2/3 of the average guess. I seem to recall that the average guess is about 30, way off the true limit of 0.
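For anyone who hasn't played it, here is a minimal sketch of the level-k reasoning in that game; the level-0 anchor of 50 and the cut-off after a few levels are illustrative assumptions, not data from any particular study.

```python
# Level-k reasoning in the "guess 2/3 of the average" game (a sketch).
# Assumption: a level-0 player anchors on the midpoint, 50; a level-k player
# best-responds to a population of level-(k-1) players.

def level_k_guess(k, anchor=50.0, fraction=2 / 3):
    guess = anchor
    for _ in range(k):
        guess *= fraction  # best response to the level below
    return guess

for k in range(6):
    print(f"level {k}: guess about {level_k_guess(k):.1f}")

# 50.0, 33.3, 22.2, 14.8, 9.9, 6.6, ... -> 0 in the limit (the Nash equilibrium).
# An observed average near 30 is consistent with most people stopping after
# one or two recursions rather than iterating all the way down to zero.
```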
I'd be interested. So far my schedule has prevented me from attending most of the London meetups, even though I live there, so I can't guarantee anything.
I think you're probably correct in your presumptions. I find it an interesting idea and would certainly follow any further discussion.
I don't think you'd have much success mastering non-verbal communication through Skype.
I think it may have something to do with limiting violence.
I'm trying to remember the reference (it might be Hanson or possibly the book The Red Queen - if I remember I'll post it) but the vast majority of violence is over access to women, at least in primitive societies. Obviously monogamy means that the largest possible number of males get access to a female, thereby reducing the losses from violent competition over females. I think this would certainly explain why rich societies tend to be monogamous - less destructive waste.
Additionally I can imagine societies with high levels of polygyny (think emperors with giant harems) could be extremely unstable due to sexual jealousy, but that's mere speculation.
Apologies if this has already been posted - I was late to this thread and there's an unmanageable number of comments to search through.
This might be of interest to people here; it's an example of a genuine confusion over probability that came up in a friend's medical research today. It's not particularly complicated, but I guess it's nice to link these things to reality.
My friend is a medical doctor and, as part of a PhD, he is testing people's sense of smell. He asked if I would take part in a preliminary experiment to help him get to grips with the experimental details.
At the start of the experiment, he places 20 compounds in front of you, 10 of which are type A and 10 of which are type B. You have to select two from that group, smell them, and determine whether they are the same (i.e. both A or both B) or different (one is A, the other B). He's hoping that people will be able to distinguish these two compounds reliably.
It turned out that I was useless at distinguishing them - over a hundred-odd trials I managed to hit 50% correct almost exactly. We then discussed the methodology and realised that it was possible to do a little bit better than 50% without any extra sniffing skills.
Any thoughts on how?
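(Spoiler below for anyone who wants to work it out themselves.) Here is a minimal sketch of the combinatorics, assuming the chosen pair is effectively random with respect to type:

```python
# Exact probabilities for "pick two of twenty and say same or different",
# assuming 10 type-A and 10 type-B vials and a pair that is random with
# respect to type (the smeller can't tell the vials apart beforehand).
from math import comb

total_pairs = comb(20, 2)      # 190 possible pairs
same_pairs = 2 * comb(10, 2)   # both A or both B: 2 * 45 = 90
diff_pairs = 10 * 10           # one of each: 100

print(f"P(same)      = {same_pairs / total_pairs:.3f}")   # about 0.474
print(f"P(different) = {diff_pairs / total_pairs:.3f}")   # about 0.526

# So a guesser who always answers "different", without any sniffing at all,
# is right about 52.6% of the time - the small edge over 50%.
```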
I'm interested, definitely online, possibly IRL. I'm in London.
I'm going to the H+ event but I'm also going to the dinner, so I'm not sure how that will fit in with the pub. If I can make it, I will.
I'll also come to the 6/6 meet up.
Nope. You've misunderstood counter-signalling. Alicorn wrote a great post about it.
I agree. But that doesn't stop people getting high-status behaviours confused with counter-signalling (as with standing up straight), which makes making these lists difficult.
This wasn't really meant as the thrust of the comment. I was trying to raise awareness of the difficulty of creating an absolute list of high-status behaviours when people can counter-signal. It means that there are always exceptions.
But since you replied to this aspect:
I think I now understand. Are you using "standing up straight" in an extremely literal way? If you mean that standing to attention - in an uncomfortable military style - is low status, then I would agree. I don't think those models prove anything except that, within the bounds of what normal people would call standing up straight, they pretty much do.
The problem with trying to define a list of high status actions is that they are context dependent.
Counter-signalling means that, in a particular context, it could be higher status to perform in a manner that, in any other context, would appear low status.
Under most general circumstances though, good posture is high status (because the assumption is that they just stand like that - not that they are standing like that to make an impression). In general, people don't think as carefully as you about motivations. You are over-iterating your thinking beyond what an average person would ever consider. Go out and look at people on the street and see how the high and low status people stand.
I'm sure I can sort out a room at UCL. I'll find out whether it would be free.
UCL is particularly convenient for transport links since Euston and Kings Cross are <10mins walk and Paddington is a short tube ride away.
There are some nice little restaurants and pubs around for food/drink.
I think it's important not to conflate two separate issues.
The term 'science' is used to denote both the scientific method and also the social structure that performs science. It's critical to separate these in one's mind.
What you call "idealistic science" is the scientific method; what you call "social network" science is essentially a human construct aimed at getting science done. I think this is basically what you said.
The key point, and where I seem to disagree with you, is that these views are not mutually exclusive. I see 'social network' science as a reasonably successful mechanism to take humans, with all their failings, and end up with, at least some, 'idealistic science' as an output.
You do that by awarding people higher status when they show a more detailed understanding of nature. I would agree that this process is subject to all kinds of market failures, but I don't think that it's as bad as you make out. And I certainly don't think that it has anything to do with why we haven't discovered quantum gravity (which, it appears, is the only discovery that would satisfy your definition of progress). There is literally no field of human endeavour that isn't defined by a search for status; 'network science' accepts this and asks how we can use our rationality to structure the game so that when we win, we win from both an individual perspective (get promoted to professor) and a team perspective (humanity gets new understanding/technology/wealth).
But this in no way calls into question 'idealistic science' since 'network science' is merely the process by which we try to attain 'idealistic science' in the real world.
[full disclosure: I am a young scientist]
I'm about to start writing up my doctoral thesis in experimental quantum computing.
If people are interested I might be able to write a few posts introducing quantum computing/quantum algorithms and many worlds over the next couple of months. I'm by no means an expert in the theory side, but I'll try to chat about it with people who are.
From a personal perspective it might help me to start the words flowing.
I hadn't realised that you were taking the karma ratings as indicative of agreement. I didn't vote it down before because I have tended only to use my downvote on stupid or thoughtless comments - not valid comments that disagree with what I think.
Once it became clear that you thought the votes weren't just appreciating effort but were signalling agreement, it would have been dishonest not to vote it down.
I don't agree with your summary.
By your own admission you haven't watched the entire talk. That might make it difficult to provide a full review.
By reducing what Deutsch said to the conjunction fallacy you missed the different emphasis that both Vladimir and I found interesting. If the people that voted up your comment didn't watch the talk (which seems plausible because of the negative nature of the review) then they wouldn't appreciate the difference between what Deutsch says and what you say. Therefore they aren't agreeing with your summary, they're simply appreciating your effort.
I think that 'whilst preserving the predictions' was assumed. Otherwise what's the constraint that's making things hard?
Perhaps it's clearer when written more explicitly though.
I agree that there's nothing new to people who have been on Overcoming Bias and Less Wrong for a few years (hence the cautionary statement at the start of the post) but I do think it's important that we don't forget that there are new people arriving all the time.
Not everyone would consider "the conjunction fallacy and how each detail makes your explanation less plausible" a standard point. We shouldn't make this site inaccessible to those people. Credit where it's due - Deutsch does a nice job of presenting this in a way that most people can understand.
That's a fair point, but I've never actually seen it mentioned explicitly. Maybe there should be a 'tips on writing posts' post.
I guess that quantum computers could halve the doubling time, as compared to a classical computer, because doubling the number of qubits squares the available state space (each extra qubit doubles it). This could give the factor of two in the exponent of Moore's law.
Quantum computing performance currently isn't doubling, but it isn't jammed either. Decoherence is no longer considered to be a fundamental limit; it's more a practical inconvenience. The change that brought this about was the invention of quantum error-correcting codes.
However experimental physicists are still searching for the ideal practical implementation. You might compare the situation to that of the pre-silicon days of classical computing. Until this gets sorted I doubt there will be any Moore's law type growth.
Were this true it would also seem to fit with Robin's theories on art as signalling. If you pick something bad to defend then the signal is stronger.
If you want to signal loyalty, for example, it's not that good picking Shakespeare. Obviously everyone likes Shakespeare. If you pick an obscure anime cartoon then you can really signal your unreasonable devotion in the face of public pressure.
In a complete about-turn, though, a situation with empirical data might be sports fans. And I'm fairly certain that as performances get worse, generally speaking, the number of fans (at least those who attend games) drops. This would seem to imply the opposite.
I agree that the quality of the argument is an important first screening process in accepting something into the rationality canon. In addition, by truly understanding the argument, it can allow us to generalise or apply it to novel situations. This is how we progress our knowledge.
But the most convincing argument means nothing if we apply it to reality and it doesn't map the territory. So I don't understand why I'd be crazy to think well of 'Argument screens off authority' if reading it makes me demonstrably more rational. Could you point me towards the earlier comments you allude to?
Can you clarify?
Exactly which material are you referring to? What basis would you suggest that you're assessing it on?
If you don't attempt to do something while you develop your rationality then you're not constraining yourself to be scored on your beliefs' effectiveness. And we know that being scored makes you less likely to signal and more likely to predict accurately.
I agree for the most part with Tom. Here's a quote from an article that I drafted last night but couldn't post due to my karma:
"I read comments fairly regularly that certainly imply that people are less successful or less fulfilled than they might be (I don't want to directly link to any but I'm sure you can find them on any thread where people start to talk about their personal situation). Where are the posts that give people rational ways to improve their lives? It's not that this is particularly difficult - there's a huge psychological literature on various topics (for instance happiness, attraction and influence) that I'm sure people here have the expertise to disseminate. And it would have obvious applications in making people more successful and fulfilled in their day to day lives.
It seems to me that the Less Wrong community concentrates on x-rationality, which is a larger and more intellectually stimulating challenge (and, cynically, better for signalling intellectual prowess) at the expense of simple instrumental rationality. It's as if we think that because we're thinking about black belt x-rationality, we're above applying blue belt instrumental rationality.
In my life I'm constantly learning new and more accurate models with which to understand the world, models that don't come anywhere near the one-box-or-two-box question in pure complexity terms. They are useful more often, though.
This isn't to denigrate x-rationality. Obviously it's important, but it currently seems like there's no balance on LW between that and instrumental rationality. As a side benefit, I'll bet good money that the best way to get people interested in rationality is to simply show them how successful you are when applying it - something that would be more possible with instrumental rationality than x-rationality."
I disagree with Tom over the terminology, though. I quite like the terms x-rationality and instrumental rationality because they allow me to easily talk about two broad types of rational thought even though I would be hard pressed to draw a specific line between them.
I think that you can legitimately worry about both for good reasons.
Fast growth is something to strive for but I think it will require that our best communicators are out there. Are you concerned that rationality teachers without secret lives won't be inspiring enough to convert people or that they'll get things wrong and head into death spirals?
From a personal perspective I don't have that much interest in being a rationality teacher. I want to use rationality as a tool to make the greatest success of my life. But I also find it fascinating and, in an ideal world, would stay in touch with a 'rational community' both as a guard against veering off into a solo death spiral and as a subject of intellectual interest. I'm sure that there must be other people like me who are more accomplished and could give inspiring lectures on how rationality helped them in their chosen profession. That would go some way to covering the inspiration angle.
As an aside, I appreciate why you care about this; I'm always a bit suspicious of self-help gurus whose only measurable success is in the self-help theory they promote. I wonder whether I'm selecting for people who effectively sell advice rather than effectively use advice.
I guess the failure mode that you're concerned with is a slow dilution because errors creep in with each successive generation and there's no external correction.
I think that the way we currently prevent this in our scientific efforts is to have both a research and a teaching community. The research community is structured to maximise the chances of weeding out incorrect ideas. This community then trains the teachers.
The benefits of this are that you get the people who are best at communicating doing the teaching and the people who are the best at research doing research.
Is it possible that, having taught yourself, you haven't so directly experienced that there's not necessarily a correlation between a person's understanding of a subject and their ability to teach it?
Is it possible that humans, with their limited simulation abilities, do not have the mental computational resources to simulate an irrational person's more effective beliefs?
This would mean that the 'irrational' course of action would be the more effective.
I definitely enjoyed the meet up.
In defence of my fairly poor estimate, I was unconvinced by the assumption that all the maize in Mexico was eaten by Mexicans. It seemed that this was an uncontrolled assumption, but I felt that I could put reasonable bounds on all the assumptions in the land area estimate (if you're asking, yes, the final answer did fall within my 90-10 bounds :) ).
Hopefully with a bit more notice we can get a few extra people next time, but I think it was a great idea to get the ball rolling. Thanks to Tomasz for organising.
What about cases where any rational course of action still leaves you on the losing side?
Although this may seem to be impossible according to your definition of rationality, I believe it's possible to construct such a scenario because of the fundamental limitations of a human brain's ability to simulate.
In previous posts you've said that, at worst, the rationalist can simply simulate the 'irrational' behaviour that is currently the winning strategy. I would contend that humans can't simulate effectively enough for this to be an option. After all, we know that several biases stem from our inability to effectively simulate our own future emotions, so to effectively simulate an entire other being's response to a complex situation would seem to be a task beyond the current human brain.
As a concrete example I might suggest the ability to lie. I believe it's fairly well established that humans are not hugely effective liars and therefore the most effective way to lie is to truly believe the lie. Does this not strongly suggest that limitations of simulation mean that a rational course of action can still be beaten by an irrational one?
I'm not sure that, even if this is true, it should affect a universal definition of rationality - but it would place bounds on the effectiveness of rationality in beings of limited simulation capacity.
Have you really never seen this before? I actually find that I myself struggle with it. When you define yourself as the plucky outsider it's difficult and almost unsatisfying when you conclusively win the argument. It ruins your self-identity because you're now just a mainstream thinker.
I've heard of similar stories when people are cured of various terminal diseases. The disease becomes so central to their definition of self that to be cured makes them feel slightly lost.