post by [deleted]

This is a link post.

Comments sorted by top scores.

comment by TekhneMakre · 2023-02-21T13:31:40.549Z · LW(p) · GW(p)

As of this comment, this post (1 hour after posting at 8:30am EST) has 43 karma from 41 votes. I suspect it's being upvoted through sockpuppet accounts. (This isn't a comment on the content of the podcast.)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2023-02-21T14:02:09.647Z · LW(p) · GW(p)

I also notice that alfredmacdonald last posted or commented here 10 years ago, and the content of the current post is a sharp break from his earlier (and brief) participation. What brings you back, Alfred? (If the answer is in the video, I won't see it. The table of contents was enough and I'm not going to listen to a 108-minute monologue.)

Replies from: ChristianKl, alfredmacdonald
comment by ChristianKl · 2023-02-21T17:49:18.160Z · LW(p) · GW(p)

That's not the case. He posted the same post [LW · GW] two months ago under another account, and that account was banned [LW(p) · GW(p)].

comment by alfredmacdonald · 2023-02-21T14:14:22.395Z · LW(p) · GW(p)

When I used the website, contributors like lukeprog fostered the self-study of rationality through rigorous source materials like textbooks.

This is no longer remotely close to the case, and the community has become a farcical hangout club with redundant jargon/nerd-slang and intolerable idiosyncrasies. Example below:

If the answer is in the video, I won't see it.

Hopeful to disappoint you, but the answer is in the video.

comment by IrenicTruth · 2023-02-21T13:52:52.165Z · LW(p) · GW(p)

Duplicating the description

Timestamps

  • 00:00 intro
  • 0:53 most of the sequences aren't about rationality; AI is not rationality
  • 3:43 lesswrong and IQ mysticism
  • 32:20 lesswrong and something-in-the-waterism
  • 36:49 overtrusting of ingroups
  • 39:35 vulnerability to believing people's BS self-claims
  • 47:35 norms aren't sharp enough
  • 54:41 weird cultlike privacy norms
  • 56:46 realnaming as "doxxing"
  • 58:28 no viable method for calling out rumors/misinformation if realnaming is 'doxxing'
  • 1:00:16 the strangeness and backwardness of LW-sphere privacy norms
  • 1:04:07 EA: disregard for the homeless and refusal to do politics because it's messy
  • 1:10:16 EA: largely socially inept, does not understand how truly bad the SBF situation is
  • 1:13:36 EA: treatment of utilitarianism and consciousness is simplistic
  • 1:20:20 EA rigor: vitamin A charity example
  • 1:23:39 extreme techno optimism and weak knowledge of human biology
  • 1:25:24 exclusionary white nerd millennial culture
  • 1:27:23 comfort class culture
  • 1:30:25 pragmatics-agnosticism
  • 1:33:13 shallow analysis of empirical topics
  • 1:34:18 idiosyncrasies of communication, e.g. being extremely obtuse at the thesis level
  • 1:39:50 epistemic rationality matters much more than instrumental rationality
  • 1:43:00 the scene isn't about rationality, it's about hanging out and board games (which is fine, just don't act like you're doing anything important)

References

  1. sample WAIS report https://www.pearsonassessments.com/co...
  2. what is g https://www.youtube.com/watch?v=jSo5v...
  3. childhood IQ vs. adult IQ https://pubmed.ncbi.nlm.nih.gov/12887...
  4. wonky attempts to measure IQ above 160 https://archive.vn/kFCY1
  5. computer-based verbal memory test https://humanbenchmark.com/tests/verb...
  6. typing speed / IQ https://eric.ed.gov/?id=ED022127
  7. simple choice reaction time https://www.psytoolkit.org/lessons/ex...
  8. severity of 83 IQ https://www.youtube.com/watch?v=5-Ur7...
  9. googleability of WAIS https://nda.nih.gov/data_structure.ht...
  10. uses of WAIS in clinical care https://www.ncbi.nlm.nih.gov/pmc/arti...
  11. drunk reaction time experiment https://imgur.com/a/IIZpTol
  12. how g correlates with WAIS https://archive.vn/gyDcM
  13. low murderer IQ https://archive.vn/SrenV
  14. tom segura bit about the first 48 https://www.youtube.com/watch?v=B0l2l...
  15. rarity of perfect LSAT scores (30 out of 100,000) https://archive.vn/KWAzf
  16. limits on human reading speed (1) https://archive.vn/IVU8x
  17. limits on human reading speed (2) https://psycnet.apa.org/record/1998-1...
  18. kinobody fitness callout by philion https://www.youtube.com/watch?v=WjytE...
  19. summary of lesswrong drama (Jan-Mar. 2022) https://alfredmacdonald.medium.com/su...
  20. leverage / geoff anders pseudo-cult https://archive.vn/BKvtM
  21. the questionability of michael vassar and related organizations https://archive.vn/8A8QO
  22. sharp vs soft culture https://archive.vn/VOpya
  23. something-in-the-waterism https://alfredmacdonald.medium.com/so...
  24. on the fakeness of many bayesian priors https://alfredmacdonald.substack.com/...
  25. criticism of the "postrationalist" subculture and the problems created by pseudonyms and hyper-privacy norms https://alfredmacdonald.substack.com/...
  26. proliferation of "technoyogi" woo in this culture due to lack of BS-calling norms https://alfredmacdonald.substack.com/...
  27. questionability of the vitamin A charity I mentioned https://archive.vn/2AxlK
  28. MIRI support from Open Philanthropy https://archive.vn/JW6WT
  29. MIRI publication record https://archive.vn/9hIhT
  30. MIRI staff https://archive.vn/hJeuT
  31. MIRI budget, 50% of which is spent on research personnel https://archive.vn/z6bvz
  32. benefits of sharp culture (or at least a mean robot boss) https://archive.vn/onIfM
  33. daniel dennett on, among other things, the problems with treating all suffering as interchangeable https://archive.vn/5SLEy
  34. on reading comprehension limits: https://catalog.shepherd.edu/mime/med... -- while a 50th-percentile student reads (with retention) at 250wpm and a 75th-percentile student at 500wpm for "general expository reading (e.g. news)", the same group reads at a 50th percentile of 149wpm and a 75th percentile of 170wpm for "advanced scientific and/or technical material". assuming a gaussian distribution, the distance between the 50th and 75th percentiles is about 2/3 of an SD -- so with an SD of ~31.5, reading such material at 306.5wpm is 5 SD from the mean, or about 1 in 3.5 million. the average audible narration rate is 155wpm, so this casts severe doubt on those who say they're 2xing or even 1.75xing advanced audiobooks/lectures.
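The arithmetic in item 34 can be checked directly. A minimal sketch (the percentile figures are from the cited handout; the gaussian assumption is the comment's own):

```python
from statistics import NormalDist

# Percentiles for "advanced scientific and/or technical material":
# 50th percentile = 149 wpm, 75th percentile = 170 wpm (cited handout).
z75 = NormalDist().inv_cdf(0.75)   # 75th percentile sits ~0.674 SD above the mean
sd = (170 - 149) / z75             # implied SD, ~31 wpm

rate_2x = 2 * 155                  # 2x the average audible narration rate (155 wpm)
z = (rate_2x - 149) / sd           # how many SDs above the mean that rate is
tail = 1 - NormalDist().cdf(z)     # fraction of readers at or above that rate

print(round(sd, 1), round(z, 2), tail)
```

With the unrounded SD (~31.1 wpm rather than ~31.5), a 310 wpm rate lands slightly past 5 SD, a tail on the order of one reader in several million, consistent with the comment's "about 1/3.5 million" estimate.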

Duplicating the first comment (@alfredmacdonald's proposed alternative)

A READING LIST FOR RATIONALITY THAT IS NOT LESSWRONG / RENDERS THE SEQUENCES SUPERFLUOUS

objection: "but I learned a lot about rationality through lesswrong"

response: maybe, but probably inadequately.

  1. while unorthodox, I usually suggest this above everything else: the PowerScore Logical Reasoning Bible, while meant as LSAT prep, is the best test of plain-language reasoning that I am aware of. the kinds of questions you are meant to do will humble many of you. https://www.amazon.com/PowerScore-LSAT-Logical-Reasoning-Bible/dp/0991299221 and you can take a 10-question section of practice questions at https://www.lsac.org/lsat/taking-lsat/test-format/logical-reasoning/logical-reasoning-sample-questions — many of you will not get every question right, in which case there is room to sharpen your ability and powerscore's book helps do that.
  2. https://www.amazon.com/Cengage-Advantage-Books-Understanding-Introduction/dp/1285197364 in my view, the best book on argumentation that exists; worth reading either alongside PowerScore's book, or directly after it.
  3. https://www.amazon.com/Rationality-What-Seems-Scarce-Matters/dp/B08X4X4SQ4 pinker's "rationality" is an excellent next step after learning how to reason through the previous two texts, since you will establish what rationality actually is.
  4. https://www.amazon.com/Cambridge-Handbook-Reasoning-Handbooks-Psychology/dp/0521531012 this is a reference text, meaning it's not meant to be read front-to-back. it's one of the most comprehensive of its kind.
  5. https://www.amazon.com/Handbook-History-Logic-Valued-Nonmonotonic/dp/044460359X — this is both prohibitively and ludicrously expensive, so you will probably need to pirate it. however, this history of logic covers many useful concepts.
  6. https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 this is a standard text that established "irrationality" as a mainstream academic concept. despite being a psychologist, kahneman won the 2002 nobel prize in economics (shared with vernon smith) for some of this work.
  7. https://www.amazon.com/Predictably-Irrational-audiobook/dp/B0014EAHNQ this is another widely-read text that expands on the mainstream concept of irrationality.
  8. https://www.amazon.com/BIASES-HEURISTICS-Collection-Heuristics-Everything/dp/1078432317 it is exactly what it says: a list of about 100 cognitive biases. many of these biases are worth rereading and/or flashcarding. there is also https://en.wikipedia.org/wiki/List_of_cognitive_biases
  9. https://www.amazon.com/Informal-Logical-Fallacies-Brief-Guide/dp/0761854339 also exactly what it says, but with logical fallacies rather than biases. (a bias is an error in weight or proportion or emphasis; a fallacy is a mistake in reasoning itself.) there is also https://en.wikipedia.org/wiki/List_of_fallacies
  10. here is another fantastic handbook of rationality, which is a wonderfully integrated work spanning psychology, philosophy, law, and other fields with 806 pages of content. https://www.amazon.com/Handbook-Rationality-Markus-Knauff/dp/0262045079 (it is quite expensive -- no one will blame you if you pirate it from libgen.)

you will learn more through these texts than through the LessWrong Sequences. as mentioned, many of these are expensive, and no one will blame you if you need to pirate/libgen them. many or maybe even most of these texts you will need to reread, perhaps multiple times.

"but I'd rather have a communi - " yes, exactly. hence the thesis of a video I made: lesswrong is primarily nerds who want a hangout group/subculture rather than a means of learning rationality. this disparity between claimed purpose and actual purpose produces most of the objections people have, many of my objections in my video, and the reason I created this alternate reading list.

Replies from: IrenicTruth
comment by IrenicTruth · 2023-02-21T15:19:27.062Z · LW(p) · GW(p)

I haven't listened to the video yet. (It's very long, so I put it on my watch-later list.) Nor have I finished Eliezer's Sequences (I'm on "A Technical Explanation of Technical Explanation.") However, I looked at the above summaries to decide whether it would be worth listening to the video.

Potential Weaknesses

  • None of the alternative books say anything about statistics. A rough intro to Bayesian statistics is an essential part of the Sequences. Without this, you have not made them superfluous.
    • A rough understanding of Bayesian statistics is a valuable tool.
    • Anecdote: I took courses in informal logic when I was a teenager and was aware of cognitive biases. However, the a-ha moment that took me out of the religion of my childhood was asking whether a particular theodicy was probable. This opened the way to asking whether some of my other beliefs were probable (not merely possible, as I'd done before). Within an hour of asking the first question, I was an atheist. (Though it took me another year to "check my work" by meeting with the area pastors and elders.) I thought to ask it because I'd been studying statistics. So, for me, the statistical lens helped where the other lenses failed to reveal my errors. I already knew a host of problems with the Bible, but the non-probabilistic approaches allowed me to deal with the evidence piece by piece. I could propose a fix for each one. For example, following Origen, I could say that Genesis 1 was an allegory. Then it didn't count against the whole structure.
    • The above anecdote took place several years before I encountered LessWrong. I'm not saying that the Sequences/LessWrong helped me escape religion. I'm saying that Bayesian stats worked where other things failed, so it was useful to me, and you should not consider that you've replaced the sequences if you leave it out.
  • Handbook of the History of Logic: The Many Valued and Nonmonotonic Turn in Logic is on the reading list. I haven't read it, but the title gives me pause. Nonmonotonic logics are subtle and can be misapplied. I misapplied Zadeh's possibilistic logic to help justify my theism.
  • The promotion of the LSAT and legal reasoning seems out of place. Law is the opposite of truth-seeking. Lawyers create whatever arguments they can to serve their clients. A quick Google couldn't dig up statistics, but I'd guess that more lawyers are theists than scientists.
  • For me, the LessWrong community is a place I can get better data and predictions than other news sources. I know only one person who is also on LessWrong. They live across an ocean from me, and we haven't talked in 8 months. I don't think hanging out and playing board games is a major draw. If this is the thesis, it is far from my personal experience.
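The probabilistic lens in the anecdote above can be sketched as a Bayes update in odds form (the numbers are made up for illustration; the point is that individually "fixable" pieces of evidence still multiply):

```python
# Hypothetical likelihoods: P(observation | H) vs. P(observation | not-H).
# Each observation alone can be explained away, but the ratios compound.
prior = 0.9                      # strong initial credence in hypothesis H
observations = [(0.3, 0.8),      # (P(e|H), P(e|not-H)) -- illustrative numbers
                (0.2, 0.7),
                (0.4, 0.9)]

odds = prior / (1 - prior)       # prior odds: 9:1 in favor of H
for p_h, p_not_h in observations:
    odds *= p_h / p_not_h        # Bayes' rule in odds form

posterior = odds / (1 + odds)
print(round(posterior, 3))       # → 0.3: the 0.9 prior has dropped below 50%
```

Dealing with each observation piece by piece never forces this multiplication; asking for the joint probability does.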

Potential Strengths

  • The emphasis of the sequences on epistemic over instrumental rationality.
    • Other people in the LessWrong community have pointed this out. (I remember a sequence with the word "Hammer" in it that talks about instrumental rationality.)
    • The alternative reading list does not seem to address instrumental rationality.
  • Treating suffering as interchangeable doesn't always produce good outcomes. (Though I don't know how to deal with this - if you can only take one course of action, you must reify everything into a space where you can compare options.)

Other

An alternative to piracy in the USA is to request books through the Interlibrary Loan system. It is free in most places. Also, academic libraries at public universities frequently offer membership for a small fee ($10-$20 per month) or free to community members, especially students, so if you have a local university, you might ask them.

comment by Jonas Hallgren · 2023-02-21T14:32:45.919Z · LW(p) · GW(p)

I'm just going to write a review of this for anyone who wants to get to the meat of the critique. I would consider myself outside the drama part of this, as I've basically only engaged with the ideas and not that much with the "community". So I won't go into any of the community drama stuff, but will mostly stick to factual disagreements.

(My take will of course still be biased however.)

General: I think Alfred did this in reasonably good faith most of the time. There was stuff he definitely skipped engaging with, but I was honestly expecting it to be more dramatic and hit-piecey than it was.

TL;DR (of good points):

Suboptimal models of cognition when it comes to who can do good work: LW tends to care more about g than about, for example, conscientiousness and creativity, which should be prioritised higher.

LessWrongers tend to miss the forest for the trees when it comes to self-optimisation. Exercise more, and start thinking about the underlying cognitive algorithms of getting stuff done in the world.

There are weird cultural norms where privacy and a homogenous population lead to in-group thinking and a bias towards rediscovering, within the LW sphere, things that already exist in the world.

On different time stamps:

0:53 most of the sequences aren't about rationality; AI is not rationality

Death is bad is all I'll say here.

3:43 lesswrong and IQ mysticism

A bit handwavy but points to misunderstandings in how people actually make stuff in the world and the cult of genius.

32:20 lesswrong and something-in-the-waterism

Claim: you can do most things online anyway, so why go to the Bay? //This seems like an initially good argument, but there are major serendipity effects in terms of encountering new ideas that I feel he doesn't bring up. He doesn't address arguments such as the modes of cognition being more variable when you can go on a walk or have an in-person discussion with people.

36:49 overtrusting of ingroups

Summary: LW is way too in-group with too little mechanisms that can allow for calling out BS.

39:35 vulnerability to believing people's BS self-claims

Summary: People lie about their own capabilities in terms of, for example, writing speed, which is bad.

47:35 norms aren't sharp enough

Summary: people aren't calling out bullshit. Points to something like "you sound ridiculous". An example is apparently someone saying "I drink soylent because it is more efficient" and getting no pushback. //Yet pushback depends on whether you share beliefs or not, so this seems more based on personal disagreements.

{54:41 weird cultlike privacy norms

56:46 realnaming as "doxxing"

58:28 no viable method for calling out rumors/misinformation if realnaming is 'doxxing'

1:00:16 the strangeness and backwardness of LW-sphere privacy norms}:

Summary: Privacy norms are weird in LW. //might be true, uncertain.

{1:04:07 EA: disregard for the homeless and refusal to do politics because it's messy

1:10:16 EA: largely socially inept, does not understand how truly bad the SBF situation is

1:13:36 EA: treatment of utilitarianism and consciousness is simplistic

1:20:20 EA rigor: vitamin A charity example}:

Vibe: "I disagree with longtermism and animal suffering, so it's bad" //I know this isn't charitable, but the level of discussion here is low, so I will respond in the same way. He also makes some good points about QALYs and the perception of pain to counter utilitarianism; not that deep a discussion, but still pretty good points.

1:23:39 extreme techno optimism and weak knowledge of human biology

//I would tend to agree that LWers could exercise more and learn more about neurobiology.

1:25:24 exclusionary white nerd millennial culture

Summary: homogenous culture

1:27:23 comfort class culture

Summary: a sheltered, upper-middle-class background creates sheltering and implicit cultural norms

1:30:25 pragmatics-agnosticism

Summary: yes, you may be rational, but what about writing and intonation and other parts of life?

1:33:13 shallow analysis of empirical topics

1:34:18 idiosyncrasies of communication, e.g. being extremely obtuse at the thesis level

Summary: People on LW don't listen that well. The letter of an argument is followed rather than the spirit.

1:39:50 epistemic rationality matters much more than instrumental rationality

1:43:00 the scene isn't about rationality, it's about hanging out and board games (which is fine, just don't act like you're doing anything important)

Summary: Some epic ranting about how LW is about talking about AI, and since he doesn't believe the AI stuff, the participants are reduced to "hanging out and playing board games".

If you cba with implicit moral disagreements (such as the ones that underlie the very epic EA drama that keeps popping up (not saying it's only that on the EA forum)), I would recommend thinking "I should exercise more and be more careful with implicit group norms in the future" and moving on with your life.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-02-21T14:42:16.222Z · LW(p) · GW(p)

On IQ, I think a weaker criticism is justifiable: Once you select for high IQ once or twice, other traits matter more.

IQ matters, but it doesn't wholly determine your life.

Replies from: alfredmacdonald
comment by alfredmacdonald · 2023-02-21T15:09:51.733Z · LW(p) · GW(p)

There is a lot to say about IQ. I plan to make a video about it. It's not my field, but I've been reading the literature on and off for 17 years. Recently, I have noticed an explosion in what we can (for the purpose of this post) call SecretSauce-ism, which is adjacent to a "cult of genius" mindset, i.e. the idea that there is some secret genius juice that lends godlike credibility to a person. This is harmful, so over the past week I've spent about 50-100 hours refamiliarizing myself with the current literature.

It's essential to know that IQ is primarily used to measure g, a factor derived from factor analysis of subdomains; the three primary ones are perceptual, verbal, and spatial. (Professional gamers would score high on perceptual.) What a lot of people don't understand is that when people talk about genius or IQ, they're talking very broadly about highly conditional phenomena.

For example, IQ is more predictive downwards than it is upwards. There is a debate in IQ research (SPLODR) which presupposes an IQ threshold after which there are diminishing returns or little benefit. This was originally posited at 120, which is trivially easy to refute because e.g. mathematicians have an average of 130. However, it's much less certain whether, say, a 160 IQ will benefit over a 145 IQ in any meaningful way. (160 is the ceiling on the WAIS-IV. If someone says they have e.g. 172, someone used some kind of statistical reaching to get this number, like putting all of the 4.0 students in a class and using that class's grade curve to determine who has a "5.0" GPA.)

To make an analogy, you are scoring the test by the rarity of people who get that score, not some kind of straightforward competence test like the math section of the SAT. If you make 20 free throws in a row, you're probably pretty good at free throws. If you make 75 in a row, you're really good. If you make 2000 in a row, you're one of the few freaks who compete for world records — and the current world record is 5200, which is many SD beyond the 2000 scorer. What is this percentile difference measuring? Likewise with 3.91 GPA vs 4.0 GPA. And, not to get into literal dick measuring, but you would be surprised at how much rarer each .25in of erect penis length is, despite being the same addition of length with each increment.
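The rarity framing can be made concrete. A sketch assuming the standard mean-100, SD-15 norming (the function name is mine):

```python
from statistics import NormalDist

def iq_from_rarity(one_in_n):
    """IQ score whose rarity is 1 in `one_in_n` under mean-100/SD-15 norming."""
    z = NormalDist().inv_cdf(1 - 1 / one_in_n)  # z-score at that percentile
    return 100 + 15 * z

print(round(iq_from_rarity(31_600)))     # → 160, roughly the WAIS-IV ceiling
print(round(iq_from_rarity(1_000_000)))  # → 171, a "one in a million" claim
```

A reported 172 is therefore a claim about rarity finer than one in a million, which no feasible norming sample can support; hence the "statistical reaching" described above.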

So, several things are true, which a lot of people don't know:

  • general dumbness correlates more than general intelligence, i.e. the subtests are more likely to intercorrelate with lower scores.
  • at higher scores, there is more likely to be a skew with some subtrait or set of subtraits, e.g. a person is much more likely to have a high spatial relative to verbal at high IQ, whereas this is more likely to be even around the average.
  • high IQ is less heritable than IQ in general. (going from memory, heritability was .76 for IQ in general and about .54 for high IQ.) This is intuitive if you think of IQ like diet: it's more about what you avoid than what you add. (Genes that boost IQ and lower IQ are grouped into additive/nonadditive.) It's also just a lot easier to get the wrong brain configuration than it is to land on the goldilocks zone.
  • the heritability of IQ is not identical to the midparent IQ correlation, which creates enough variability — especially with higher IQ — that selecting your partner based on what you think their IQ is may be foolish. the intellectual version of this can still happen, in both directions: https://i.imgur.com/oG0crQB.jpg
  • general intelligence as it applies to humans may have very little to do with general intelligence as it applies to AI. https://i.imgur.com/C87aP24.png (bouchard/johnson 2014).
  • finally, there is probably a "threshold effect", also called SPLODR, which shows diminishing returns past a certain point. the original hypothesis was that this occurred at 120 IQ, but this is trivial to refute; it's less clear that there isn't a threshold effect for, say, 145 IQ. (there are many reasons for this; one is that it doesn't pass a sniff test: if there were utility in measuring beyond 160, test companies would do so. further, test companies that are highly invested in predicting professional success, like the creators of the GRE or LSAT, tend to have percentiles that cap at what would be the equivalent of 140-145 as well.)
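The percentile-cap point in the last bullet can be translated into IQ equivalents the same way (a sketch; the mean-100/SD-15 norming is my assumption, and the 30-in-100,000 perfect-LSAT figure is from reference 15):

```python
from statistics import NormalDist

def iq_equivalent(percentile):
    # IQ score sitting at a given population percentile (mean 100, SD 15)
    return 100 + 15 * NormalDist().inv_cdf(percentile)

# A score report capped at the 99.9th percentile tops out near IQ 146:
print(round(iq_equivalent(0.999)))            # → 146
# A perfect LSAT (~30 per 100,000 takers) sits at roughly IQ 151:
print(round(iq_equivalent(1 - 30 / 100_000))) # → 151
```

Caveat: LSAT takers are a selected group rather than the general population, so the true population equivalent would be higher; the sketch only illustrates why percentile ceilings land in the 140s-150s.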

This is among many other things; it's a toe-dip. But there are many misinformed beliefs about the construct.

Replies from: Jonas Hallgren
comment by Jonas Hallgren · 2023-02-21T15:19:53.470Z · LW(p) · GW(p)

The last time I read the literature (1-2 years ago), this was also my understanding. If I remember correctly, the predictability of Nobel prize winners based on their IQ was very low, conditional on their IQ being above 130. I think conscientiousness and creativity were generally described as more predictive of getting a Nobel prize at higher IQs.

(FYI: creativity is quite hard to measure, as it is a complex topic. Huberman has a great episode from December on the mechanisms behind creativity.)

I do, however, also want to mention that there probably exists a "package" that makes someone very capable, of which genius is a part. I just think that IQ is overhyped when it comes to predicting this.

Replies from: alfredmacdonald
comment by alfredmacdonald · 2023-02-21T15:35:16.125Z · LW(p) · GW(p)

There is a lot going on with Nobel prize winners. The most common trait is that they work extremely hard. There are 40-something g-loaded subtasks that I know of. It's quite possible that winners have an exceptional "recipe" of sub-abilities drawn from these elemental cognitive abilities that won't show up on a traditional WAIS-IV.

But this is to be expected; the primary purpose of IQ testing is (1) to measure cognitive decline or some other kind of medical concern and/or (2) to determine which children go into gifted programs. Adult IQ is rarely tested outside these settings, yet it is where people try to draw the most generalizations.

(The reason you can infer that IQ tests aren't meant to be as much of a measure of ability as they are to do these other two things is because so few safeguards are put in place to prevent cheating. With enough money it is quite possible to retake an IQ test and get 140+; you can even google the answers if you want. They really don't care. Meanwhile, the SAT is psychotic about cheating and the people who have successfully cheated had to pull off preposterous schemes to do it.)