Posts
Comments
On philosophy, I think it's important to realize that most university philosophy classes don't assign textbooks in the traditional sense. They assign anthologies. So rather than read Russell's History of Western Philosophy or The Great Conversation (both of which I've read), I'd recommend something like The Norton Introduction to Philosophy.
Link(s)?
OH MY GOD. THAT WAS IT. THAT WAS VOLDEMORT'S PLAN. RATIONAL!VOLDEMORT DIDN'T TRY TO KILL HARRY IN GODRIC'S HOLLOW. HE WAITED ELEVEN YEARS TO GIVE HARRY A GRADE IN SCHOOL SO THAT ANY ASSASSINATION ATTEMPT WOULD BE IN ACCORDANCE WITH THE PROPHECY.
Duplicate comment, probably should be deleted.
Agreed. I actually looked up tax & spending for UK vs. Scandinavian countries, and they aren't that different. It may not be a good distinction.
I thought of this last year after I completed the survey, and rated anti-agathics less probable than cryonics. This year I decided cryonics counted, and rated anti-agathics 5% higher than cryonics. But it would be nice for the question to be clearer.
Done, except for the digit ratio, because I do not have access to a photocopier or scanner.
Liberal here, I think my major heresy is being pro-free trade.
Also, I'm not sure if there's actually a standard liberal view of zoning policy, but it often feels like the standard view is that we need to keep restrictive zoning laws in place to keep out those evil gentrifiers, in which case my support for looser zoning regulations is another major heresy.
You could argue I should call myself a libertarian, because I agree with the main thrust of Milton Friedman's book Capitalism and Freedom. However, I suspect a politician running on Friedman's platform today would be branded a socialist if a Democrat, and a RINO if a Republican.
(Friedman, among other things, supported a version of guaranteed basic income. To which today's GOP mainstream would probably say, "but if we do that, it will just make poor people even lazier!")
Political labels are weird.
and anyone smart has already left the business since it's not a good way of making money.
Can you elaborate? The impression I've gotten from multiple converging lines of evidence is that there are basically two kinds of VC firms: (1) a minority that actually know what they're doing, make money, and don't need any more investors and (2) the majority that exist because lots of rich people and institutions want to be invested in venture capital, can't get in on investing with the first group, and can't tell the two groups apart.
A similar pattern appears to occur in the hedge fund industry. In both cases, if you just look at the industry-wide stats, they look terrible, but that doesn't mean that Peter Thiel or George Soros aren't smart because they're still in the game.
Hi! Welcome to LessWrong! A lot of people on LessWrong are worried about the problem you describe, which is why the Machine Intelligence Research Institute exists. In practice, the problem of getting an AI to share human values looks very hard. But, given that human values are implemented in human brains, it looks like it should be possible in principle to implement them in computer code as well.
I think the "Well-kept gardens die by pacifism" advice is cargo culted from a Usenet world where there weren't ways to filter by quality aside from the binary censor/don't censor.
Ah... you just resolved a bit of confusion I didn't know I had. Eliezer often seems quite wise about "how to manage a community" stuff, but also strikes me as a bit too ban-happy at times. I had thought it was just overcompensation in response to a genuine problem, but it makes a lot more sense as coming from a context where more sophisticated ways of promoting good content aren't available.
So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years.
I should note that it's not obvious what the experts responding to this survey thought "greatly surpass" meant. If "do everything humans do, but at x2 speed" qualifies, you might expect AI to "greatly surpass" human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.
I like the idea of this fanfic, but it seems like it could have been executed much better.
EDIT: Try re-writing later? As the saying goes, "Write drunk; edit sober."
So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)
Have you guys given any thought to doing pagerankish stuff with karma?
Can you elaborate more? I'm guessing you mean people with more karma --> their votes count more, but it isn't obvious how you do that in this context.
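One way to make "higher-karma users' votes count more" concrete is a PageRank-style fixed point, where each vote's weight derives from the voter's own (recursively computed) karma. This is purely a hypothetical sketch of the idea, not anything LessWrong implements; all names and constants here are made up:

```python
# Hypothetical sketch: karma computed PageRank-style, so votes cast by
# high-karma users count for more. Illustrative only.

def karma_fixed_point(votes, users, iterations=50, damping=0.85):
    """votes: list of (voter, target, +1 or -1) tuples."""
    weight = {u: 1.0 for u in users}
    for _ in range(iterations):
        # Everyone starts each round with a small baseline weight.
        new = {u: (1 - damping) for u in users}
        for voter, target, sign in votes:
            # A vote transfers influence proportional to the voter's weight.
            new[target] += damping * sign * weight[voter]
        # Clamp at zero so heavily downvoted users can't cast negative-weight votes.
        weight = {u: max(new[u], 0.0) for u in users}
    return weight

votes = [("alice", "bob", 1), ("bob", "carol", 1), ("mallory", "carol", -1)]
print(karma_fixed_point(votes, ["alice", "bob", "carol", "mallory"]))
```

One consequence of this scheme is that a mass-downvoter with low karma does little damage, since their votes carry almost no weight.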
Everyone following the situation knew it was Eugine. At least one victim named him publicly. Sometimes he was referred to obliquely as "the person named in the other thread" or something like that, but the people who were following the story knew what that meant.
I'm glad this was done, if only to send a signal to the community that something is being done, but you have a point that this is not an ideal solution and I hope a better one is implemented soon.
I'm not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be "F you for daring to cause Eliezer pain, by criticizing him and the organization he founded."
If that's the intended message, I submit that when someone is a public figure, who writes and speaks about controversial subjects and is the founder of an org that's fairly aggressive about asking people for money, they really shouldn't be insulated from criticism on the basis of their feelings.
The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).
My guess is that he cares not nearly as much about LW in general now as he used to...
This. Eliezer clearly doesn't care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major reason why this comment is the first thing I've posted on LessWrong in well over a month.
I know a number of people have been working on launching a LessWrong-like forum dedicated to Effective Altruism, which is supposedly going to launch very soon. Here's hoping it takes off—because honestly, I don't have much hope for LessWrong at this point.
...my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".
Seems like noticing yourself wanting to imitate successful people around you should be an occasion for self-scrutiny. Do you really have good reasons to think the things you're imitating them on are the cause of their success? Are the people you're imitating more successful than other people who don't do those things, but whom you don't interact with as much? Or is this more about wanting to affiliate with the high-status people you happen to be in close proximity to?
I love how understated this comment is.
People voluntarily hand over a bunch of resources (perhaps to a bunch of different AIs) in the name of gaining an edge over their competitors, or possibly for fear of their competitors doing the same thing to gain such an edge. Or just because they expect the AI to do it better.
Maximizing your chances of getting accepted: Not sure what to tell you. It's mostly about the coding questions, and the coding questions aren't that hard—"implement bubble sort" was one of the harder ones I got. At least, I don't think that's hard, but some people would struggle to do that. Some people "get" coding, some don't, and it seems to be hard to move people from one category to another.
Maximizing value given that you are accepted: Listen to Ned. I think that was the main piece of advice people from our cohort gave people in the incoming cohort. Really. Ned, the lead instructor, knows what he's doing, and really cares about the students who go through App Academy. And he's seen what has worked or not worked for people in the past.
(I might also add, based on personal experience, "don't get cocky about the assessments." Also "get enough sleep," and should you end up in a winter cohort, "if you go home for Christmas, fly back a day earlier than necessary.")
Presumably. The question is whether we should accept that belief of theirs.
And the solution to how not to catch false positives is to use some common sense. You're never going to have an automated algorithm that can detect every instance of abuse, but even an instance that is not detectable by automatic means can be detectable if someone with sufficient database access takes a look when it is pointed out to them.
Right on. The solution to karma abuse isn't some sophisticated algorithm. It's extremely simple database queries, in plain English along the lines of "return list of downvotes by user A, and who was downvoted," "return downvotes on posts/comments by user B, and who cast the vote," and "return lists of downvotes by user A on user B."
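For illustration, here's roughly what the first of those queries might look like against a hypothetical votes table (the schema is invented for this sketch and is not LessWrong's actual database):

```python
# Illustrative only: a made-up votes table, not LessWrong's real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE votes (
    voter TEXT, author TEXT, comment_id INTEGER, direction INTEGER)""")
conn.executemany("INSERT INTO votes VALUES (?, ?, ?, ?)", [
    ("userA", "userB", 1, -1),   # userA downvotes two of userB's comments
    ("userA", "userB", 2, -1),
    ("userA", "userC", 3, 1),    # ...and upvotes one of userC's
])

# "Return list of downvotes by user A, and who was downvoted."
downvotes_by_a = conn.execute(
    "SELECT author, COUNT(*) FROM votes "
    "WHERE voter = 'userA' AND direction = -1 GROUP BY author").fetchall()
print(downvotes_by_a)
```

The other two queries are the same shape with the WHERE clause flipped around, which is the point: detecting a mass-downvoting pattern takes a GROUP BY, not machine learning.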
Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.
This is a failure mode I worry about, but I'm not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, "oh yeah, my deconversion process was totally slowed down by stuff like that from atheists," but I'd be surprised.
Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?
What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going to have good ways to measure them directly, aside from looking for "statements that follow proper logical form and make good arguments." I pointed out that "good arguments" is circular if we're trying to decide who to read charitably, and you had no response to that.
That leaves us with "proper logical form," about which you said:
Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?
In response to this, I'll just point out that this is not an argument in proper logical form. It's a lone assertion followed by a rhetorical question.
Skimming the "disagreement" tag in Robin Hanson's archives, I found a few posts that I think are particularly relevant to this discussion:
Username explicitly linked to torture vs. dust specks as a case where it makes sense to use torture as an example. Username is just objecting to using torture for general decision theory examples where there's no particular reason to use that example.
But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them.
With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions that are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accomplishes anything.
The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them, because they get angry at having their ideas rejected and so start acting out...)
Examples?
We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.
(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)
I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)
So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.
Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true." "Good arguments" is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say "this guy's arguments are terrible," and you say, "you should read those arguments more charitably," it doesn't do much good for you to defend that claim by saying, "well, he has a track record of making good arguments."
I question how objective these objective criteria you're talking about are. Usually when we judge someone's intelligence, we aren't actually looking at the results of an IQ test, so that's subjective. Ditto rationality. And if you were really that concerned about education, you'd stop paying so much attention to Eliezer or people who have a bachelor's degree at best, and pay more attention to mainstream academics who actually have PhDs.
FWIW, actual heuristics I use to determine who's worth paying attention to are
- What I know of an individual's track record of saying reasonable things.
- Status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it).
- Looking for other crackpot warning signs I've picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.
Which may not be great heuristics, but I'll wager that they're better than IQ (wager, in this case, being a figure of speech, because I don't actually know how you'd adjudicate that bet).
It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: "The thing that people forget sometimes, is that even though appearances can be misleading, they're usually not."
You've also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I'm fuzzy on what exactly they think about cryonics (more here).
Oh, I see now. But why would Eliezer do that? Makes me worry this is being handled less well than Eliezer's public statements indicate.
Plantinga's argument defines God as a necessary being, and assumes it's possible that God exists. From this, and the S5 axioms of modal logic, it follows that God exists. But you can just as well argue, "It's possible the Goldbach Conjecture is true, and mathematical truths are if true necessarily true, therefore the Goldbach Conjecture is true." Or even "Possibly it's a necessary truth that pigs fly, therefore pigs fly."
(This is as much as I can explain without trying to give a lesson in modal logic, which I'm not confident in my ability to do.)
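For readers who want the skeleton rather than a modal logic lesson, here's a rough sketch of the structure (my reconstruction, not Plantinga's exact formulation) alongside the pigs-fly parody:

```latex
% G = "a necessary being (God) exists"; P = "pigs fly"
\begin{align*}
&\text{Premise: } \Diamond\,\Box G
  && \text{(it is possible that $G$ holds necessarily)}\\
&\text{S5 theorem: } \Diamond\,\Box p \rightarrow \Box p
  && \text{(whatever is possibly necessary is necessary)}\\
&\text{Hence } \Box G\text{, and so } G.\\[1ex]
&\text{Parody: } \Diamond\,\Box P,\ \text{hence } \Box P,\ \text{hence } P
  && \text{(possibly-necessarily pigs fly, so pigs fly)}
\end{align*}
```

The parody makes clear that all the work is done by the innocent-sounding "possibility" premise: granting that it's possible that something is necessary is already granting the conclusion, under S5.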
People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.
My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yourself to be perfectly rational. That's the sense of rationalist that goes back at least as far as Bertrand Russell. In the first paragraph of his "Why I Am A Rationalist", for example, Russell identifies as a rationalist but also says, "We are not yet, and I suppose men and women never will be, completely rational."
This also seems like it would be a futile linguistic fight. A better solution might be to consciously avoid using "rationalist" when talking about Aumann's agreement theorem—use "ideal rationalists" or "perfect rationalist". I also tend to use phrases like "members of the online rationalist community," but that's more to indicate I'm not talking about Russell or Dawkins (much less Descartes).
His assertion that there is no way to check seems to me a better outcome than these posts shouting into the wind that don't get any response.
Did he assert that, exactly? The comment you linked to sounds more like "it's difficult to check." Even that puzzles me, though. Is there a good reason for the powers that be at LessWrong not to have easy access to their own database?
The right rule is probably something like, "don't mix signaling games and truth seeking." If it's the kind of thing you'd expect in a subculture that doesn't take itself too seriously or imagine its quirks are evidence of its superiority to other groups, it's probably fine.
You're right, being bad at signaling games can be crippling. The point, though, is to watch out for them and steer away from harmful ones. Actually, I wish I'd emphasized this in the OP: trying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.
Abuse of the karma system is a well-known problem on LessWrong, which the admins appear to have decided not to do anything about.
Update: actually, it appears Eliezer has looked into this and not been able to find any evidence of mass-downvoting.
How much have you looked into potential confounders for these things? With the processed meat thing in particular, I've wondered what could be so bad about processing meat, and if this could be one of those things where education and wealth are correlated with health, so if wealthy, well-educated people start doing something, it becomes correlated with health too. In that particular case, it would be a case of processed meat being cheap, and therefore eaten by poor people more, while steak tends to be expensive.
(This may be totally wrong, but it seems like an important concern to have investigated.)
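As a sanity check on the confounding story, here's a toy simulation (all numbers invented for illustration) where wealth independently drives both food choice and health, so processed meat looks harmful in the raw data but the gap vanishes once you stratify by wealth:

```python
# Toy simulation of the confounding story above: wealth drives both food
# choice and health, so processed meat "predicts" poor health with no
# causal link. Every parameter here is made up.
import random

random.seed(0)
rows = []
for _ in range(10000):
    wealthy = random.random() < 0.5
    # Wealthy people eat less processed meat and are healthier,
    # for reasons unrelated to meat.
    processed = random.random() < (0.2 if wealthy else 0.7)
    healthy = random.random() < (0.9 if wealthy else 0.6)
    rows.append((wealthy, processed, healthy))

def health_rate(subset):
    return sum(h for _, _, h in subset) / len(subset)

eaters = [r for r in rows if r[1]]
abstainers = [r for r in rows if not r[1]]
print("naive gap:", round(health_rate(abstainers) - health_rate(eaters), 3))

# Stratify by wealth: within each stratum the apparent effect disappears.
for w in (True, False):
    stratum = [r for r in rows if r[0] == w]
    e = [r for r in stratum if r[1]]
    a = [r for r in stratum if not r[1]]
    print("wealthy" if w else "poor",
          "gap:", round(health_rate(a) - health_rate(e), 3))
```

The naive comparison shows abstainers noticeably healthier, yet within each wealth stratum the gap is roughly zero, which is exactly the pattern the confounding hypothesis predicts.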
So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.
I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.
The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm definitely right; this other person has nothing to teach me."
So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong - like Plantinga being apparently a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we're too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says "I'm sure that creationism is true, and it doesn't matter whether really fancy scientists who use big words tell me it isn't."
I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them. This is actually a thing that happens with some creationists—people thinking "because I'm an , I can see those evolutionary biologists are talking nonsense." Creationists would do better to attend to the domain expertise of evolutionary biologists. (See also: my post on the statistician's fallacy.)
I'm also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?
Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice "be less charitable" is more helpful than the advice "be more charitable".
As I said above, you can try to pinpoint where to apply this advice. You don't need to be charitable to really stupid people with no knowledge of a field. But once you've determined someone is in a reference class where there's a high prior on them having good ideas - they're smart, well-educated, have a basic commitment to rationality - advising that someone be less charitable to these people seems a lot like advising people to eat more and exercise less - it might be useful in a couple of extreme cases, but I really doubt it's where the gain for the average person lies.
On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you're probably right that people aren't too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group. And you seem to endorse this selective charity.
I've already said why I don't think high IQ is super-relevant to deciding who you should read charitably. Overall education also doesn't strike me as super-relevant either. In the US, better educated Republicans are more likely to deny global warming and think that Obama's a Muslim. That appears to be because (a) you can get a college degree without ever taking a class on climate science and (b) more educated conservatives are more likely to know what they're "supposed" to believe about certain issues. Of course, when someone has a Ph.D. in a relevant field, I'd agree that you should be more inclined to assume they're not saying anything stupid about that field (though even that presumption is weakened if they're saying something that would be controversial among their peers).
As for "basic commitment to rationality," I'm not sure what you mean by that. I don't know how I'd turn it into a useful criterion, aside from defining it to mean people I'd trust for other reasons (e.g. endorsing standard attitudes of mainstream academia). It's quite easy for even creationists to declare their commitment to rationality. On the other hand, if you think someone's membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I'm calling that self-congratulatory nonsense.
And that's the essence of my reply to your point #5. It's not people having self-congratulatory attitudes on an individual level. It's the self-congratulatory attitudes towards their in-group.
Saying
Interesting point. I'm not entirely clear how you arrived at that position. I'd like to look up some detail questions on that. Could you provide references I might look at?
sort of implies you're updating towards the other's position. If you not only disagree but are totally unswayed by hearing the other person's opinion, it becomes polite but empty verbiage (not that polite but empty verbiage is always a bad thing).
There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.
Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.
I won't try to comment on the formal argument (my understanding of that literature comes mostly from what Robin Hanson has said about it), but intuitively, this seems wrong. It seems like two people trading probability estimates shouldn't need to deduce exactly what the other has observed; they just need to make inferences along the lines of, "wow, she wasn't swayed as much as I expected by me telling her my opinion, she must think she has some pretty good evidence." At least, that's the inference you would make if you both knew you trusted each other's rationality. More realistically, of course, the correct inference is usually "she wasn't swayed by me telling her my opinion, because she doesn't just trust me to be rational."
Consider what would have to happen for two rationalists who knowingly trust each other's rationality to have a persistent disagreement. Because of conservation of expected evidence, Alice has to think her probability estimate would on average remain the same after hearing Bob's evidence, and Bob must think the same about hearing Alice's evidence. That seems to suggest they both must think they have better, more relevant evidence on the question at hand. And it might be perfectly reasonable for them to think that at first.
But after several rounds of sharing their probability estimates and seeing the other not budge, Alice will have to realize Bob thinks he's better informed about the topic than she is. And Bob will have to realize the same about Alice. And if they both trust each other's rationality, Alice will have to think, "I thought I was better informed than Bob about this, but it looks like Bob thinks he's the one who's better informed, so maybe I'm wrong about being better informed." And Bob will have to have the parallel thought. Eventually, they should converge.
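A toy version of this, assuming a shared prior and conditionally independent evidence: announcing your posterior odds is equivalent to announcing your likelihood ratio, so after one honest exchange both parties can compute the same pooled estimate. The numbers below are illustrative:

```python
# Toy model of the inference described above. With a common prior and
# conditionally independent evidence, stating your posterior odds reveals
# your likelihood ratio, and the pooled posterior is just the prior odds
# times both ratios.

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

prior = 0.5              # shared prior: 1:1 odds
alice_posterior = 0.8    # implies her evidence has likelihood ratio 4
bob_posterior = 0.2      # implies his evidence has likelihood ratio 1/4

alice_lr = odds(alice_posterior) / odds(prior)
bob_lr = odds(bob_posterior) / odds(prior)

# Once each hears the other's number, both compute the same pooled answer.
# Here the ratios cancel, so they converge back to the prior.
pooled = prob(odds(prior) * alice_lr * bob_lr)
print(pooled)
```

The point of the toy model is that convergence doesn't require reconstructing the other person's raw observations, only the evidential strength their announcement implies, which is roughly the "she must have pretty good evidence" inference described above.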
Personally, I am entirely in favor of the "I don't trust your rationality either" qualifier.
Upvoted for publicly changing your mind.
Further, the idea that the tribe of Honest Except When I Benefit is the vast majority while Always Honest is a tiny minority is not one that I'll accept without evidence.
Here's one relevant paper: Lying in Everyday Life
We can't forecast anything, so let's construct some narratives...?
I think the point is more "good forecasting requires keeping an eye on what your models are actually saying about the real world."
In addition to mistakes other commenters have pointed out, it's a mistake to think you can neatly divide the world into "defectors" and "non-defectors," especially when you draw the line in a way that classifies the vast majority of the world as defectors.
Oops, sorry.
"Much of real rationality is learning how to learn from others."
I once talked to a theorist (not RBC, micro) who said that his criterion for serious economics was stuff that you can’t explain to your mother. I would say that if you can’t explain it to your mother, or at least to your non-economist friends, there’s a good chance that you yourself don’t really know what you’re doing.
--Paul Krugman, "The Trouble With Being Abstruse"