There is a lot going on with Nobel prize winners. The most common trait is that they work extremely hard. There are 40-something g-loaded subtests that I know of. It's quite possible that winners have an exceptional "recipe" of sub-abilities drawn from these elemental cognitive abilities, one that won't show up on a traditional WAIS-IV.
But this is to be expected; the primary purpose of IQ testing is (1) to measure cognitive decline or some other kind of medical concern and/or (2) to determine which children go into gifted programs. Adult IQ is rarely tested outside of these settings, yet adult IQ is also where people try to draw the most generalizations.
(The reason you can infer that IQ tests aren't meant to be a measure of ability so much as tools for these other two purposes is that so few safeguards are put in place to prevent cheating. With enough money it is quite possible to retake an IQ test and get 140+; you can even google the answers if you want. They really don't care. Meanwhile, the SAT is psychotic about cheating, and the people who have successfully cheated had to pull off preposterous schemes to do it.)
There is a lot to say about IQ. I plan to make a video about it. It's not my field, but I've been reading the literature on and off for 17 years. Recently, I have noticed an explosion in what we can (for the purpose of this post) call SecretSauce-ism, which is adjacent to a "cult of genius" mindset, i.e. the idea that there is some secret genius juice that lends godlike credibility to a person. This is harmful, so I've been rereading the literature, and over the past week have spent about 50-100 hours refamiliarizing myself with the current state of the field.
It's essential to know that IQ is primarily used to measure g, a general factor extracted from a factor analysis of subtest scores; the three primary domains are perceptual, verbal, and spatial. (Professional gamers would score high on perceptual.) What a lot of people don't understand is that when people talk about genius or IQ they're talking very broadly about highly conditional phenomena.
For example, IQ is predictive more downwards than it is upwards. There is a debate in IQ research (SPLODR) which presupposes an IQ threshold after which there are diminishing returns or little benefit. This was originally posited at 120, which is trivially easy to refute because e.g. mathematicians have an average of 130. However, it's much less certain whether, say, a 160 IQ confers any meaningful benefit over a 145 IQ. (160 is the ceiling on the WAIS-IV. If someone says they have e.g. 172, someone used some kind of statistical reaching to get that number — like putting all of the 4.0 students in one class and using that class's grade curve to determine who has a "5.0" GPA.)
To make an analogy: the test is scored by the rarity of people who get that score, not by a straightforward competence measure like the math section of the SAT. If you make 20 free throws in a row, you're probably pretty good at free throws. If you make 75 in a row, you're really good. If you make 2000 in a row, you're one of the few freaks who compete for world records — and the current world record is 5200, many SD beyond the 2000 scorer. What is this percentile difference measuring? Likewise with a 3.91 GPA vs. a 4.0 GPA. And, not to get into literal dick measuring, but you would be surprised at how much rarer each .25in of erect penis length is, despite each increment being the same addition of length.
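To make the rarity scaling concrete, here's a minimal sketch using Python's standard library. The 100/15 norming is the usual convention; the three scores are just illustrations, not figures taken from the text above:

```python
from statistics import NormalDist

# IQ is conventionally normed to mean 100, SD 15. Scores are rarity-based:
# they map a percentile onto a normal curve rather than counting "points
# of ability" directly, so equal 15-point steps are wildly unequal in rarity.
iq = NormalDist(mu=100, sigma=15)

def one_in(score):
    """Roughly 1 person in N scores at or above `score`."""
    return round(1 / (1 - iq.cdf(score)))

print(one_in(130))  # 44     (+2 SD)
print(one_in(145))  # 741    (+3 SD)
print(one_in(160))  # 31574  (+4 SD, the WAIS-IV ceiling)
```

Each step adds the same 15 points, but the pool of people who can hit it shrinks by more than an order of magnitude — the free-throw point in numerical form.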
So, several things are true, which a lot of people don't know:
- general dumbness correlates more than general intelligence, i.e. the subtests intercorrelate more strongly at lower scores.
- at higher scores, there is more likely to be a skew with some subtrait or set of subtraits, e.g. a person is much more likely to have a high spatial relative to verbal at high IQ, whereas this is more likely to be even around the average.
- high IQ is less heritable than IQ in general. (going from memory, heritability was .76 for IQ in general and about .54 for high IQ.) This is intuitive if you think of IQ like diet: it's more about what you avoid than what you add. (Genes that boost IQ and lower IQ are grouped into additive/nonadditive.) It's also just a lot easier to get the wrong brain configuration than it is to land on the goldilocks zone.
- the heritability of IQ is not identical to the midparent IQ correlation, which creates enough variability — especially with higher IQ — that selecting your partner based on what you think their IQ is may be foolish. the intellectual version of this can still happen, in both directions: https://i.imgur.com/oG0crQB.jpg
- general intelligence as it applies to humans may have very little to do with general intelligence as it applies to AI. https://i.imgur.com/C87aP24.png (bouchard/johnson 2014).
- finally, there is probably a "threshold effect", also called SPLODR, which shows diminishing returns past a certain point. the original hypothesis was that this occurred at 120 IQ, but this is trivial to refute; it's less clear that there isn't a threshold effect for, say, 145 IQ. (many reasons for this — one is that it doesn't pass a sniff test. if there were utility in measuring beyond 160, test companies would do it; further, test companies that are highly invested in predicting professional success, like the creators of the GRE or LSAT, tend to have percentiles that cap at what would be the equivalent of 140-145 as well.)
This is among many other things; it's a toe-dip. But there are many misinformed beliefs about the construct.
When I used the website, contributors like lukeprog fostered the self-study of rationality through rigorous source materials like textbooks.
This is no longer remotely close to the case, and the community has become a farcical hangout club with redundant jargon/nerd-slang and intolerable idiosyncrasies. Example below:
If the answer is in the video, I won't see it.
Hopeful to disappoint you, but the answer is in the video.
I am way more likely to make fun of you for posting stuff like this than any AI ever will.
epistemic status: lmao
Kindness will only affect decisions where altruistic behavior wouldn't occur without it. Integrity I'm even less sure about. Rationality could affect any decision where bias or fuzzy reasoning is involved, which is almost every decision.
You should get a Ph.D. in Philosophy if you consider the material studied in philosophy to be an end in itself. Philosophy is a truthseeking discipline, so if you find that inherently rewarding and could imagine doing it for a large part of your life, it's a good decision. Don't worry about the wariness toward philosophy here: I can guarantee you that the criticisms levied here against philosophy have been addressed tenfold in actual philosophy departments, by people with sympathies closer to Luke's than you'd think.
That said, a lot of people go into graduate programs for bad reasons. Here are two I've been tempted by:
#1.
Minimizing Status Risk. A lot of people think about risk in terms of financial gain or loss, but few think about risk in terms of status, even though it's a real concern for many people. Graduating college can be intimidating, especially if you're at a prestigious college, because you're about to be stripped of your hierarchical standing among people your age. If you've attended, say, Harvard for four years, you've spent those four years thinking of yourself as at the top of the food chain relative to other college students.
Once you're out of college, this is no longer true, and you're measured by what kind of job you have. It's extremely tempting to avoid this by applying to graduate school, because graduate school allows you to continue the imagined hierarchical standing that you've had for the past few years. Eventually you'll get a Ph.D. and be on top of the intellectual food chain. This has nothing to do with "avoiding the real world", because "the real world" as an employment area is conspicuously centered on office jobs or whatever the majority of people happen to do for money. (I wonder if farmers consider everyone else to have a "fake" job. Probably.)
It's a way of avoiding vulnerability to your status, because working as a clerk or receptionist or barista or server or whatever after college is generally not prestigious and makes you feel like your intellect isn't worth anything. That's an uncomfortable feeling, sure, but make sure you're not eyeing a Ph.D. just to avoid that feeling.
#2.
Even if you're not avoiding Status Risk, make sure you're not getting a Ph.D. just to feel like an intellectual hotshot anyway. A lot of people reason about competence in binary ways (expert or non-expert) even though competence obviously exists on a spectrum, so it's tempting to get a title that lends you immediately to the "expert" end of any discussion. That way, you can throw your weight around whenever there's a clash of words.
With philosophy especially, it's enigmatic to a lot of people. There's a mystery of what you're actually learning in an advanced program. So a Ph.D. looks like a "certified smart person" badge to a lot of people, and that's tempting. Make sure you're not getting it for that reason either.
Here's the litmus test. Ask yourself: "would I self-study this material anyway if I had the next three to five years paid for? Would this occupy a large part of my time regardless of what I'm doing?" If so, it's worth it.
Sure, in the very short run (starting from absolutely no knowledge of the game) you'd have to make mistakes to learn anything at all. But the process of getting better is a gradual decrease of the frequency of those mistakes. You'd want to minimize your mistakes as much as possible as you got better, because the frequency of mistakes will be strongly correlated with how much you lose.
I think you're seeing "try to minimize how many mistakes you make" and reading that as "trying to make no mistakes." There are certainly mistakes you'll have to make to get better, but then there are superfluous mistakes that some people may make while others won't, or catastrophic mistakes that would make you look really bad which you'd definitely want to avoid. The depth of mistakes can go much deeper than the necessary mistakes you'd have to make to get better, in other words.
I really liked this post, and I think a lot of people aren't giving you enough credit. I've felt similarly before -- not to the point of suicide, and I think you might want to find someone you can confide those anxieties in -- but about being angered by someone's dismissal of rationalist methodology. Because ultimately, it's the methodology that makes someone a rationalist, not necessarily a set of beliefs. The categorizing of emotions as in opposition to logic, for example, is a framing I've been frustrated with for quite some time, because emotions aren't anti-logical so much as they are alogical. (In my personal life, I'm an archetype of someone who gets emotional about logical issues.)
What I suspect was going on is that you felt this person was being dismissive of the methodology and did not believe reason to be an arbiter of disagreements. That reads to me like saying "I'm not truth-seeking, and I think my gut perception of reality is more important than the truth" -- a stance that sounds both arrogant and immoral. I've run across people like this too, and every time I feel like someone is deprioritizing the truth in favor of their kneejerk reaction, it's extremely insulting. Perhaps that's what you felt?
I don't currently work at a restaurant, so at the moment I'm afraid of nothing.
But for the purposes of the example, it's not about discovering mistakes or incompetence -- it's about your level of incompetence being much greater than you previously estimated, for reasons you were unaware of prior to being exposed to those reasons.
I find that similar to the concept of fractal wrongness. What distinguishes an iceberg from a fractal is that you're in situations where someone is resisting exposing the whole iceberg for one reason or another. In the dishonesty scenario, you realize one lie reveals many others but only because that person has left you a tidbit of information that cracks their facade and allows you to infer just how deeply they've lied to you -- or in the case of attraction, an event or action that only would occur if they had a much greater level of attraction existing below the surface.
I think LessWrong actually has a higher barrier for contribution -- at least for articles -- because you're expected to have 20 comment karma before you can submit. This means that, if you're honest anyway, you'll have to spend your time in the pit interacting with people who could potentially shout you down, or call you a threat to their well-kept garden, or whatever.
I have at least 3 articles in draft format that I want to submit once I reach that total, but I don't comment on discussions as much because most of what I would say is usually said in one comment or another. For people like me, the barrier of "must email someone" is actually easier, since discussion contribution requires a sense of knowing how the community works, intuiting a sense of what the community deems a good comment, and posting along those lines.
Luke, I was curious: where does informal logic fit into this? It is the principal method of reasoning tested on the LSAT's logical reasoning section, and I would say the most practical form of reasoning one can engage in, since most everyday arguments will utilize informal logic in one way or another. Honing it is valuable, and the LSAT percentiles would suggest that not nearly as many people are as good at it as they should be.
His understanding of philosophy is barely up to undergraduate level. Sorry, but that's the way it is.
I feel like the phrasing "barely up to undergraduate level" is like saying something is "basic" or "textbook" not when it's actually basic or textbook but because it insinuates there is an ocean of knowledge that your opponent has yet to cross. If luke is "barely undergraduate" then I know a lot of philosophy undergrads who might as well not call themselves that.
While I agree that reform is far more likely to be done by a Dewey or Erasmus, your reasoning gives me a very "you must be accepted into our system if you want to criticize it" vibe.
The general population would contain 50 sociopaths per 1000; I don't think LessWrong contains 50 sociopaths per 1000. Rationality is a truth-seeking activity at its core, and I suspect a community of rationalists would do their best to avoid lying consciously.
I am not sure what "the common definition of the word 'lie'" is, especially since there are a lot of differing interpretations of what it means to lie. I know that wrong answers are distinct from lies, however. I think that a lot of LessWrong people might have put an IQ that does not reflect an accurate result, but I doubt that many have put a deliberately inaccurate result. Barring "the common definition" (I don't know what that is), I'm defining a lie as "stating something you know to be false," since someone can put down a number without knowing the true number for certain, yet also without knowing that the number they are stating is false.
I don't know what you mean by "mean something" with respect to Mensa Denmark's normings. They will probably be less accurate than a professional IQ testing service, but I don't know why they would be inaccurate or "meaningless" by virtue of their organization not being a professional IQ testing service.
The only way I can think of in which the self-reported numbers would be more accurate than the IQTest.dk numbers is if the LW respondents knew that their IQ numbers were from a professional testing service and they had gone to this service recently. But since the test didn't specify how they obtained this self-report, I can't say, nor do I think it's likely.
IQTest.dk uses Raven's Progressive Matrices, a standard way to measure IQ across cultures, partly because large verbal/spatial splits are not that common. It wouldn't discriminate against autistics, because it actually discriminates in favor of autistics; people with disorders on the autism spectrum are likely to score higher, not lower.
I'm not sure how the bolding of "in that way" bolsters your argument. Paraphrased, it would be "in the way that the user types the IQ score into the survey box themselves, the IQTest.dk questions are equally flawed to the other intelligence questions." But this neglects to consider that the source of the number is different; they are self-reports in the sense that the number is up to someone to recall, but if someone types in their IQTest.dk number you know it came from IQTest.dk. If someone types in their IQ without specifying the source, you have no idea where they got that number from -- they could be estimating, it could be a childhood test score, and so on.
Please consider getting some rationality training or something.
Remarks like these are unnecessary, especially since I've just joined the site.
Over 1000 people took the test. Statistically speaking, it should have included about 50 sociopaths.
Not if LessWrong values truthseeking activities more than the general population, or considers lying/truth-fabrication a greater sin than the general population does, or if LessWrong just generally attracts fewer sociopaths than the general population. If over 1000 fitness enthusiasts take a test about weight, the statistics re: obesity are not going to reflect the general population's. Considering the CRT scores of LessWrong and the nature of this website to admire introspection and truthseeking activities, I doubt that LW would be reflective of the general population in this way.
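For what it's worth, the disputed arithmetic is just prevalence times sample size. A tiny sketch, where the 5% figure matches the "50 in 1000" claim above and the 1% alternative is a purely hypothetical selection effect:

```python
def expected_count(prevalence, n):
    """Expected number of people with a trait in a random sample of size n."""
    return prevalence * n

# The "50 in 1000" claim assumes general-population prevalence of ~5%.
print(expected_count(0.05, 1000))  # 50.0

# A self-selected community need not share that base rate; if selection
# effects cut prevalence to a hypothetical 1%, the expectation drops too.
print(expected_count(0.01, 1000))  # 10.0
```

The disagreement is entirely about whether the community's prevalence equals the population's, not about the multiplication.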
Lies are more than untrue statements; at least, in the context of self-reports, they are conscious manipulations of what one knows to be true. Someone might think they know their IQ because they've taken less reliable IQ tests, or because they had a high childhood IQ, or because they extrapolated their IQ from SAT scores, or for a host of other reasons. In this case they haven't actually lied, they've just stated something inaccurate.
Someone could put an IQ when they have no idea what their IQ is, yes, in the sense that they have never taken a test of any sort and have no idea what their IQ would be if they took one, even an inaccurate one. I don't think many people here would do that, though, because of the truthseeking reasons mentioned earlier.
Mensa is a club not a professional IQ testing center. They're not even legally allowed to give out scores anymore. Their test scores are not considered to be accurate.
Mensa doesn't need to be a professional IQ testing center for their normings to be accurate, however. I am also not sure how not accounting for learning disorders would seriously alter IQTest.dk's validity over self-reports.
However, it's inaccurate to say that because someone puts their number in the box from IQTest.dk that they're "equally flawed" to the other intelligence questions. Someone who self-reports an IQ number, any number, may not know if that number was obtained using accurate methodology. It may be an old score from childhood, and childhood IQ scores vary wildly compared to adult IQ scores. It may be an extrapolation from SAT scores, as I mentioned above. There are a number of ways in which self-reported IQ differs from reported IQtest.dk IQ.
LessWrong is going to eat you alive, honey. Get out while you're ahead.
This reads as unnecessarily tribalistic to me. I take it you think I am an undiscriminating skeptic? In any case, cool it.
Some personality traits may be conducive to "natural" rationality. High scores on the Narcissistic Personality Inventory for example may indicate ego-preserving tendencies that make greater levels of rationality more difficult to obtain. I'd imagine that natural levels of introversion would also help, and I say that as someone who usually maxes out the extroversion scale on these kinds of tests.
Does anyone know if there is a similar trope website for rationality -- does the LessWrong wiki qualify? Or trope websites for humor? Or rhetorical devices?
Hell, Silva Rhetoricae is sort of like TVTropes for rhetoric but managed by one person instead of a community.
Often people who dismiss philosophy end up going over the same ground philosophers trod hundreds or thousands of years ago. That's one reason philosophers emphasize the history of ideas so much. It's probably a mistake to think you are so smart you will avoid all the pitfalls they've already fallen into.
While I agree that it's important to avoid succumbing to these ideas, philosophy curricula tend to emphasize not just the history of ideas but the history of philosophers, which makes getting up to speed with contemporary philosophy take entirely too long. It is not so important to know what Augustine or Hume thought as to know why, by current lights, their ideas can't be right.
Also, "the history of ideas" is really broad, because there are a lot of ideas that by today's standards are just absurd. Including the likes of Anaximander and Heraclitus in "the history of ideas" is probably a waste of time and cognitive energy.
YeahOKButStill has an interesting take on the interaction between philosophy done in blogs and philosophy done in journals:
"... Many older philosophers lament the current lack of creativity and ingenuity in the field (as compared to certain heady, action-packed periods of the 20th century), yet, it is a well-established fact that in order to be published in a major journal or present at a major conference, a young philosopher has to load their paper/presentation with enormous amounts of what is called the "relevant literature". This means that even the most creative people among us (a group I do not count myself as belonging to) must spend huge amounts of time, space and energy trying to demonstrate just how widely they have read and just how many possible objections to their view they can consider, lest some irritable senior philosopher think that their view has not been given a fair shake. Of course, there is no evidence whatsoever that the great philosophers of the 20th century wrote and thought in this manner, as a quick survey of that relevant literature will show.
Blogs are a space for young philosophers to explore their ideas without these sorts of constraints, to try ideas on for size and to potentially get feedback from a wide audience. Indeed, the internet has the potential to host forums that could make reading groups at Oxford and Cambridge look positively stultifying. Yet, this is not how things are playing out: most young philosophers I know are afraid to even sign their real names to a comment thread. This, as anyone can see, is an absurd situation. However, since I have no control over it, I must bid this public space adieu."
I have always despised the term "pseudointellectualism" since there isn't exactly a set of criteria for a pseudointellectual, nor is there a process of accreditation for becoming an intellectual; the closest thing I'm aware of is, perhaps, a doctorate, but the world isn't exactly short of Ph.D.s who put out crap. There are numerous graduate programs where the GRE/GPA combination to get in is barely above the undergrad averages, for example.
I don't think anyone on Less Wrong has lied about their IQ. (addendum: not enough to seriously alter the results, anyway.) If you come up with a "valuing the truth" measure, LessWrong would score pretty highly on that considering the elaborate ways people who post here go about finding true statements in the first place. To lie about your IQ would mean you'd have to know to some degree what your real IQ is, and then exaggerate from there.
However, I do think it's more likely than you suggest that most people on LessWrong self-reporting IQ simply don't know what their IQ is in certain terms, since to know your adult IQ you'd have to see a psychometrician and receive an administered IQ test. IQTest.dk is normed by Mensa Denmark, so it's far more reliable than self-reports. You don't know where the self-reported IQ figures are coming from -- they could be from a psychometrician measuring adult IQ, or they could be from somewhere far less reliable. It could be that they know their childhood IQ was measured at somewhere around 135, for example, and are going by memory. Or they could know by memory that their SAT is 99th percentile and spent a minute looking up what the 99th percentile is for IQ, not knowing it's not a reliable proxy. Or they might have taken an online test somewhere that gave ~140 and are recalling that number. Who knows? Either way, I consider "don't attribute to malice what you can attribute to cognitive imperfection" a good mantra here.
126 is actually higher than a lot of people think. As an average for a community, that's really high -- probably higher than all groups I can think of except math professors, physics professors and psychometricians themselves. It's certainly higher than the averages for MIT and Harvard, anyway.
About the similarity between self-reported IQ and SAT scores: SAT scores post-1994 (which most of the scores here are likely to be) are not reliable as IQ test proxies; Mensa no longer accepts them. This is because the modern test is much easier to game. I tutor the SAT, and when I took it prior to applying to a tutoring company my reading score was 800, but in high school it was only in the mid-600s. SAT reading scores are heavily influenced by (1) your implicit understanding of informal logic, and (2) your familiarity with English composition and how arguments/passages may be structured. Considering the SAT has contained these kinds of questions since the mid-90s, I am inclined to throw its value as a proxy IQ test out the window, and I don't think you can draw conclusions about LessWrong's real collective IQ from the reported SAT scores.
The IQTest.dk result may have given the lowest measure, but I also think it's the most accurate measure. It would not put LessWrong in the 130s, maybe, but it would mean that the community is on the same level of intellect as, say, surgeons and Harvard professors, which is pretty formidable for a community.