How do you Murphyjitsu essentially risky activities? 2020-06-23T21:09:54.593Z · score: 9 (3 votes)
ribbonfarm: A Brief History of Existential Terror 2017-03-01T01:18:52.888Z · score: 1 (2 votes)
Wireheading Done Right: Stay Positive Without Going Insane 2016-11-22T03:16:33.340Z · score: 4 (4 votes)


Comment by 9eb1 on How do you Murphyjitsu essentially risky activities? · 2020-06-24T17:36:55.532Z · score: 1 (1 votes) · LW · GW

When it comes to problems that are primarily about motivation, the cost-benefit is so lopsided that the cost of implementing the plan probably doesn't seem worth considering, but this is a good point.

I like the idea of using Murphyjitsu for modeling shorter iterations, that's probably generally applicable.

Comment by 9eb1 on How do you Murphyjitsu essentially risky activities? · 2020-06-24T17:34:13.399Z · score: 2 (2 votes) · LW · GW

That seems mostly about the emotional content of a particular plan, while I see Murphyjitsu as a tool for avoiding the planning fallacy, forcing yourself to fully think through the implications of a plan, or getting more realistic predictions from System 1. I haven't viewed it much as an emotional tool, but maybe other people do find it useful for that.

Comment by 9eb1 on Thomas Kwa's Bounty List · 2020-06-19T17:35:04.112Z · score: 1 (1 votes) · LW · GW

Whew, glad I didn't invest more time in this. Seems there is lurking complexity everywhere.

Comment by 9eb1 on Thomas Kwa's Bounty List · 2020-06-13T02:00:34.403Z · score: 2 (2 votes) · LW · GW

At this price point this seems potentially doable. Some ideas in the order I'd try them:

  1. There is a person who has Kickstarted similar projects, and you could contact them to see if they are willing to do a custom one-off. They'd probably be willing to just give you advice if you asked, too. Given that their entire Kickstarter was only $7,000, at your price point this seems pretty likely.
  2. You can download a 3D model online and find a local machine shop to CNC you one. For example, just googling "tungsten machine shop san francisco" turned up a shop that will probably mill tungsten from CAD.
  3. Same, but find a 3D printing company that can make one for you. There are a few online, and you'd have to request a quote, but it may be a better option if the feedstock for CNC ends up being cost-prohibitive. I'm not sure whether this kind of place will do individual retail orders.

This is a pretty fun format. Actually, I really like this gomboc idea and briefly considered doing a Kickstarter on it after reading your post. But then I realized that a Kickstarter would only really make sense if everyone were willing to pay $800. The market is so niche that it would have to be a passion project to be worth the hassle, I think.

Comment by 9eb1 on Is there any scientific evidence for benefits of meditation? · 2020-05-11T02:14:03.752Z · score: 5 (4 votes) · LW · GW

I admit there might be reasons to invest in meditation practice that are not based on scientifically proven benefits (e.g., curiosity, sense of novelty, sense of belonging to a community). At the same time, I hope that most LW readers attach very little weight to those non-evidence-based reasons to meditate, just like I do.

I suppose I should admit the main reason I started meditating a long time ago was curiosity. I read Mastering the Core Teachings of the Buddha (reviewed on SSC here) and thought "well, this person sounds like they are explaining mental states that seem pretty unbelievable to me, I wonder if this is all BS." I was, and am, more mentally healthy and emotionally stable than the average person. I don't meditate that consistently anymore, only when things are more stressful than usual. Having it in the toolbox, like fitness, is enough for me. I did enough practice to know that what MCTB is pointing at is a real phenomenon, but that's it. I actually think that viewing it as a hobby is the healthiest way to approach the kind of serious practice needed for enlightenment.

Let's start with the easy to verify claims that I generalise as...

In my experience, these claims are false. I occasionally tried to use mindfulness to help me with dieting or exercising, since those are also things I do, and it never helped in a way I could discern.

Do you have some sources to back this up? I've heard many declared reasons why people begin their meditation practice, and they were quite a diverse set; none seemed dominant.

Thank you for challenging me on this; that claim was based only on personal observation, which, as I admit above, doesn't even square with my own experience! This survey has concrete data on why people meditate in Fig. 1. The top reason is "General wellness and general disease prevention." None of them are specifically happiness-related, so maybe mine was an overly specific claim.

I don't buy this at all. If the only observable benefit of me meditating is that I used to self-report average well-being of 5.17 out of 10, and now I self-report 7.39 on average

Based on my mental model of meditation, you probably would be dissatisfied with the results. In section IV of the post above, Scott Alexander summarizes thus:

Ingram dedicates himself hard to debunking a lot of the things people would use to fill the gap. Pages 261-328 discuss the various claims Buddhist schools have made about enlightenment, mostly to deny them all. He has nothing but contempt for the obviously silly ones, like how enlightened people can fly around and zap you with their third eyes. But he’s equally dismissive of things that sort of seem like the basics. He denies claims about how enlightened people can’t get angry, or effortlessly resist temptation, or feel universal unconditional love, or things like that. Some of this he supports with stories of enlightened leaders behaving badly; other times he cites himself as an enlightened person who frequently experiences anger, pain, and the like. Once he’s stripped everything else away, he says the only thing one can say about enlightenment is that it grants a powerful true experience of the non-dual nature of the world. [9eb1: I've excluded the possible counterargument here for brevity]

There are external benefits I think meditation has given me that feel like they are real, but the effect size is too small for studies to realistically find them. I can fall asleep reliably by using meditation as a tool. I can tactically break my own rumination cycles by meditating (or I can work out, but sometimes you've already worked out that day). I definitely feel like I am harder to surprise (lack of "jump"), but that's not a particularly practical superpower.

I've meditated for >300 hours (maybe 400 or 500). I don't regret those hours. It is a hobby: it satisfies my curiosity, and it makes me happy when I need it. Lack of personal transformation is fine.

To be clear, my values with regard to self-rated wellness are different from yours. I am glad to improve my self-rated wellness even if it has no measurable outward impact on my behavior. My happiness is super important to me. If I move the needle on that, that's great even if I'm still an asshole. I have no interest in being a miserable saint.

There are several characteristics of nutrition and eating that make scientific scrutiny very difficult, and those characteristics are not shared with meditation

Those differences are subsumed in the "high short-term costs" side of my statement; the exact costs are different, that's all. You can tell people not to do all the things you mentioned during a diet study, but they won't follow your instructions.

Comment by 9eb1 on Is there any scientific evidence for benefits of meditation? · 2020-05-10T06:24:53.690Z · score: 13 (9 votes) · LW · GW

I think it is right to be skeptical of the science around meditation. Meditation fits perfectly into the Bermuda triangle of phenomena that our current scientific institutions and practices are not well-prepared to study.

It shares with psychological studies the challenge that the thing under investigation is the internal mental state of the subject. When studies do have objective endpoints, usually the objective endpoint isn't the thing we want out of the practice; it's just a more reliable metric, so we know the subjects aren't fooling themselves. As Science-Based Medicine says:

But the more concrete and physiological the outcome, the smaller the placebo effect. Survival from serious forms of cancer, for example, has no demonstrable placebo effect. There is a “clinical trial effect,” as described above – being a subject in a trial tends to improve care and compliance, but no placebo effect beyond that. There is no compelling evidence that mood or thought alone can help fight off cancer or any similar disease.

In the case of meditation, people usually begin the practice to gain mental well-being or greater happiness, which are among the outcomes least amenable to reliable objective observation. If meditation happens to also do something that can be reliably measured with a medical instrument, that is somewhat beside the point.

Meditation shares with nutritional science (also a wrecked landscape of low-quality studies that fail to answer our real questions) that performing the study relies on the subjects to reliably do something with a huge, short-term cost and an uncertain, long-term benefit, which humans are bad at.

High-quality studies on nutritional interventions rarely answer the questions that normal fitness-minded folks want answered, because we want the answer to "assuming I perfectly adhere to diet X, what results would I obtain?" Studies can only measure "assuming we take a random sample of people with varying levels of conscientiousness and investment in their diet, and tell them to do X, what happens?", which is too big a difference to be useful.

Similarly with meditation, what meditators want to know is "is it worth my time meditating if I do it approximately perfectly" not "is it worth someone 'intervening' to tell me about meditation taking into account the possibility that I'm too lazy to really follow through with it." The second has more clinical relevance, but less personal relevance for the kind of people on Less Wrong.

All of that is a long precursor to saying that "Is there any scientific evidence for benefits of meditation?" and "Are there good reasons for a typical reader of LessWrong to invest their time and effort into meditation practice?" are subtly different questions, so it would be wrong to literally equate them. The second is the question we really care about; the first is one input which would, if available, fully resolve it instead of leaving us in a state of uncertainty. You're entitled to arguments, but not (that particular) proof.

There is objective evidence that meditation does something real (EEG studies of Tibetan monks, for example), but the evidence it does something both real and valuable is probably not up to that standard.

Another, smaller point I'd like to make is that this post is attempting to perform its own meta-analysis, but with a higher quality bar than academic meta-analyses. I don't think crowdsourcing the best studies of meditation is likely to work this way. If you are interested in running a project to identify the top studies of meditation, I think you would need to identify all the relevant studies, get individuals who are interested in your project to review them, and then collate the results. Just asking "the crowd" for the best studies they happen to have on hand is, I think, likely to fail regardless of what the evidence is.

Comment by 9eb1 on What are sensible ways to screen event participants to reduce COVID-19 risk? · 2020-03-04T04:43:32.858Z · score: 5 (3 votes) · LW · GW

As for cutoffs, just look up the maximum healthy forehead temperature, maybe 37.5°C. More important is to have hand sanitizer pumps prominently available, encourage people to use them before and after the event, and remind them not to touch their faces.

Comment by 9eb1 on In defense of deviousness · 2020-01-15T22:59:34.637Z · score: 8 (6 votes) · LW · GW

There are several sources of spaghetti code that are possible:

  1. A complex domain, as you mention, where a complex entangled mess is the most elegant possible solution.
  2. Resource constraints and temporal tradeoffs. Re-architecting the system after adding each additional functionality is too time expensive, even when a new architecture could simplify the overly complex design. Social forces like "the market" or "grant money" mean it makes more sense to build the feature in the poorly architected way.
  3. Performance optimizations. If your code needs to fit inside a 64 KB ROM, you may be very limited in your ability to structure your code cleanly.
  4. Lack of requisite skill. A person may not be able to provide a simple design even though one exists, even given infinite time.

If I had to guess, number 2 is the largest source of spaghetti code that Less Wrong readers are likely to encounter. Number 4 may account for the largest volume of spaghetti code worldwide, because of the incredible amounts of line-of-business code churned out by major outsourcing companies. But even that is a reflection of economic realities. Therefore, one could say that spaghetti code is primarily an economic problem.

Comment by 9eb1 on Might humans not be the most intelligent animals? · 2020-01-06T14:41:10.885Z · score: 1 (1 votes) · LW · GW

Sorry, I could have been clearer. The empirical evidence I was referring to was the existence of human civilization, which should inform priors about the likelihood of other animals being as intelligent.

I think you are referring to a particular type of "scientific evidence" which is a subset of empirical evidence. It's reasonable to ask for that kind of proof, but sometimes it isn't available. I am reminded of Eliezer's classic post You're Entitled to Arguments, But Not (That Particular) Proof.

To be honest, I think the answer is that there is just no fact of the matter here. David Chapman might say that "most intelligent" is nebulous: while there can be some structure, there is no definite answer as to what constitutes "most intelligent." Even when you try to break the concept down further, to "raw innovative capacity," I think you face the same inherent nebulosity.

Comment by 9eb1 on What will quantum computers be used for? · 2020-01-02T13:31:09.347Z · score: 1 (1 votes) · LW · GW

The database search thing is, according to my understanding, widely misinterpreted. As Wikipedia says:

Although the purpose of Grover's algorithm is usually described as "searching a database", it may be more accurate to describe it as "inverting a function". In fact since the oracle for an unstructured database requires at least linear complexity, the algorithm cannot be used for actual databases.

To actually build Quantum Postgres, you would need something that can store an enormous number of qubits, the way a hard drive stores bits.
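To make the contrast concrete, here is a sketch (pure Python, using the standard rotation picture rather than any real quantum library; the numbers are illustrative) of why Grover gives a quadratic speedup for function inversion: with one marked item among N, each iteration rotates the state by 2θ toward the marked item, where sin θ = 1/√N, so roughly (π/4)√N iterations suffice.

```python
import math

def grover_success_prob(n_items: int, iterations: int) -> float:
    """Probability that Grover's algorithm measures the single marked item
    after k amplitude-amplification iterations. In the rotation picture,
    the state starts at angle theta from the unmarked subspace, with
    sin(theta) = 1/sqrt(N), and each iteration adds 2*theta."""
    theta = math.asin(1.0 / math.sqrt(n_items))
    return math.sin((2 * iterations + 1) * theta) ** 2

N = 1_000_000
k_opt = math.floor(math.pi / 4 * math.sqrt(N))  # ~785 quantum oracle queries,
print(k_opt, grover_success_prob(N, k_opt))     # versus ~N/2 = 500,000 expected
                                                # classical queries
```

Note that the oracle call hidden inside each iteration is exactly where the "database" caveat bites: for a real database, merely implementing the oracle already costs linear time.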

Comment by 9eb1 on Might humans not be the most intelligent animals? · 2019-12-24T14:23:16.813Z · score: 3 (2 votes) · LW · GW

Your take is contrarian, as I suspect you will admit. There is quite a bit of empirical evidence, and if it turned out that humans were not the most intelligent animals it would be very surprising. There is probably just enough uncertainty that it's still within the realm of possibility, but only by a small margin.

Comment by 9eb1 on Against Premature Abstraction of Political Issues · 2019-12-19T15:18:56.897Z · score: 7 (2 votes) · LW · GW

This sort of existence argument is reasonable for hypothetical superhuman AIs, but real-world human cognition is extremely sensitive to the structure we can find or make up in the world. Sure, just saying "politics" does not provide a clear reference class, so it would be helpful to understand what you want to avoid about politics and engineer around it. My hunch, though, is that avoiding the highly technical definition of bad discourse you are using to replace "politics" just leads to a lot of time spent on analysis, with approximately the same topics avoided as under a very simple rule of thumb.

I stopped associating or mentioning LW in real life largely because of the political (maybe some parts cultural as well) baggage of several years ago. Not even because I had any particular problem with the debate on the site or the opinions of everyone in aggregate, but because there was just too much stuff to cherry-pick from in our world of guilt by association. Too many mixed signals for people to judge me by.

Comment by 9eb1 on CO2 Stripper Postmortem Thoughts · 2019-12-08T01:00:28.251Z · score: 4 (2 votes) · LW · GW

I was very confused about your proposed setup after reading the wikipedia article on heat exchangers, since I couldn't figure out what thermal masses you proposed exchanging heat between. But I found this article which resolved my confusion.

Comment by 9eb1 on Do we know if spaced repetition can be used with randomized content? · 2019-11-17T23:14:12.413Z · score: 6 (4 votes) · LW · GW

It is still useful to memorize the flashcards. The terminology provides hooks that will remind you of the conceptual framework later. If you want to practice actually recognizing the design patterns, you could read some real code and actively try to spot design patterns in it. When you want to learn to do something, it's important to practice a task that is as close as possible to what you are trying to learn.

In real life when a software design pattern comes up, it's usually not as something that you determine from the code. More often it's by talking with the author, reading the documentation, or inferring from variable names.

The strategy described there, assuming you have read it, seems to suggest that just using Anki to cover enough of the topic space probably gives you a lot of the benefits, even if you aren't doing the mental calculation.

Comment by 9eb1 on Where should I ask this particular kind of question? · 2019-11-03T14:24:06.650Z · score: 2 (2 votes) · LW · GW

Perhaps the right community to ask mostly doesn't depend on the expertise of its denizens, but on your ability to get a response. If so, it matters more whether your question is something that will "hook" the people there, which depends more on the specific topic of the question than on the knowledge required to answer it. For example, if it were about the physics of AI, you'd be likely to get an answer on LessWrong. If it's about academic physics, Reddit might be better. If you are using it to write fanfiction, just ask on a fanfiction forum.

It matters quite a bit how hypothetical the scenario is. For example, is it a situation that is actually physically impossible? Does it likely have a specific, concrete answer, even if no one currently knows it, or will it end up being a matter of interpretation? Would a satisfying answer to the question advance the field of physics or any other field?

Anyway, another option is Twitter. Personally, I'd ask on LessWrong, PhysicsOverflow, or Reddit.

Comment by 9eb1 on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-31T11:31:52.121Z · score: 4 (3 votes) · LW · GW

Yes, that seems like a reasonable perspective. I can see why that would be annoying.

Comment by 9eb1 on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-30T23:18:54.036Z · score: 3 (3 votes) · LW · GW

I really appreciate that this post was on the front page, because I wouldn't have seen it otherwise and it was interesting. From an external viewer perspective on the "status games" aspect of it, I think the front page post didn't seem like a dominance attempt, but read as an attempt at truth seeking. I also don't think that it put your arguments in a negative light. Your comments here, on the other hand, definitely feel to an outside observer to be more status-oriented. My visceral reaction upon reading your comment above this one, for example, was that you were trying to demote IFS because it sounds like you make a living promoting this other non-IFS approach.

That said, I remember reading many of your posts on the old LessWrong and I have occasionally wondered what you had gotten up to, since you had stopped posting.

Comment by 9eb1 on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-22T15:02:40.996Z · score: 1 (1 votes) · LW · GW

There appears to be some sort of bug with the editor, I had to switch to markdown mode to fix the comment. Thanks for the heads up.

I use Anki for this purpose and it works well as long as you already have a system to give you a strong daily Anki review habit.

Comment by 9eb1 on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-19T04:30:05.093Z · score: 2 (2 votes) · LW · GW

If this is true, then this post by Michael Nielsen may be interesting to the poster. He uses a novel method of understanding a paper by using Anki to learn the areas of the field relevant to, in this case, the AlphaGo paper. I don't have a good reason to do this right now, but this is the strategy I would use if I wanted to understand Stuart's research program.

Comment by 9eb1 on The Hacker Learns to Trust · 2019-06-22T13:10:15.360Z · score: 18 (10 votes) · LW · GW

The phenomenon I was pointing out wasn't exactly that the person's decision was made because of status. It was that a prerequisite for them changing their mind was that they were taken seriously and engaged with respectfully. That said, I do think that it's interesting to understand the way status plays into these events.

First, they started the essay with a personality-focused explanation:

To explain how this all happened, and what we can learn from it, I think it’s important to learn a little bit more about my personality and with what kind of attitude and world model I came into this situation.


I have a depressive/paranoid streak, and tend to assume the worst until proven otherwise. At the time I made my first twitter post, it seemed completely plausible in my mind that no one, OpenAI or otherwise, would care or even notice me. Or, even worse, that they would antagonize me.

The narrative that the author themselves is setting up is that they had irrational or emotional reasons for behaving the way they did, then they considered longer and changed their mind. They also specifically call out their perceived lack of status as an influencing factor.

If someone has an irrational, status-focused explanation for their own initial reasoning, and then we see high-status people providing them extensive validation, it doesn't mean that they changed their mind because of the high-status people, but it's suggestive. My real model is that they took those ideas extra seriously because the people were nice and high status.

Imagine a counterfactual world where they posted their model, and all of the responses they received were the same logical argument, but instead made on 4Chan and starting with "hey fuckhead, what are you trying to do, destroy the world?" My priors suggest that this person would have, out of spite, continued to release the model.

The gesture they are making here, not releasing the model, IS purely symbolic. We know the model is not as good as mini-GPT2. Nonetheless, it may be useful to real hackers who aren't supported by large corporate interests, either for learning or just for understanding ML better. Since releasing the model is not a bona fide risk, part of not releasing it is so they can feel like they are part of history. Note the end, where they talk about the precedent they are setting by not releasing it.

I think the fact that the model doesn't actually work is an important aspect of this. Many hackers would have done it as a cool project and released it without pomp, but this person put together a long essay, explicitly touting the importance of what they'd done and the impact it would have on history. Then it turned out the model did not work, which must have been very embarrassing. It is fairly reasonable to suggest that the person then took the action that made them feel best about their legacy and status: writing an essay about why they were not releasing the model, for good rationalist-approved reasons. It is not even necessarily the case that the person is aware that this influenced the decision; this is a full Elephant in the Brain situation.

When I read that essay, at least half of it is heavily-laden with status concerns and psychological motivations. But, to reiterate: though pro-social community norms left this person open to having their mind changed by argument, probably the arguments still had to be made.

How you feel about this should probably turn on questions like "Who has the status in this community to have their arguments taken seriously? Do I agree with them?" and "Is it good for only well-funded entities to have access to current state-of-the-art ML models?"

Comment by 9eb1 on The Hacker Learns to Trust · 2019-06-22T05:07:25.669Z · score: 7 (9 votes) · LW · GW

As is always the case, this person changed their mind because they were made to feel valued. The community treated what they'd done with respect (even though, fundamentally, they were unsuccessful and the actual release of the model would have had no impact on the world), and as a result they capitulated.

Comment by 9eb1 on BYOL (Buy Your Own Lunch) · 2018-04-09T01:49:14.812Z · score: 6 (2 votes) · LW · GW

It is not at all rude, at a business lunch, to say "Oh, thank you!" when someone says they will pay for lunch. Especially if you are a founder of a small company and meeting with people at more established companies who will likely be able to expense the meal. Those people don't care, because it's not their money.

If you are meeting with people in a similar position (fellow founders), you can just ask to split the check, which people will either accept or counter by offering to pay, in which case see above.

If you are meeting with casual acquaintances, you can also say "Split the check?" and it's totally fine.

The weirdness points of adding that to your e-mail and including a link to this post are far greater than those of saying "Thank you" when someone else offers to pay, so carefully consider whether it's worth spending them this way.

Comment by 9eb1 on Is Rhetoric Worth Learning? · 2018-04-07T04:56:39.448Z · score: 7 (2 votes) · LW · GW

In a best case scenario, a fellow traveler will already have studied rhetoric and will be able to provide the highlights relevant to LWers. In the spirit of offering the "obvious advice" I've heard the "Very Short Introduction" series of books can give you an introduction to the main ideas of a field and maybe that will be helpful for guiding your research beyond the things that are easily googleable.

Comment by 9eb1 on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-26T07:04:36.709Z · score: 13 (4 votes) · LW · GW

The case of the Vietnamese monk who famously set himself on fire may meet your criteria. The Vietnamese government claimed that he had drugged himself, but it's hard to imagine a drug that would allow you to get out of a car under your own power, walk over and assume a seated position, and then light a match to set yourself on fire, yet still show no reaction as your flesh burns off.

Comment by 9eb1 on Hammertime Postmortem · 2018-03-25T22:11:58.362Z · score: 1 (1 votes) · LW · GW

It's too bad the link for the referenced *"Focusing," for skeptics* article in your post on the tactic only leads to a 404 now. I wonder if it was taken down intentionally?

Comment by 9eb1 on Feedback on LW 2.0 · 2017-10-01T20:04:25.935Z · score: 8 (8 votes) · LW · GW

I love that the attempt is being made and I hope it works. The main feedback I have is that the styling of the comment section doesn't work for me. One of the advantages of the existing LessWrong comment section is that the information hierarchy is super clear. The comments are bordered and backgrounded, so when you decide to skip a comment your eye can very easily scan down to the next one. At the new site all the comments are relatively undifferentiated, so it's much harder to skim them. I also think that the styling of the blockquotes in the new comments needs work. Currently there is not nearly enough difference between blockquoted text and comment text; it needs more spacing and more indentation, and preferably a typographical difference as well.

Comment by 9eb1 on LW 2.0 Strategic Overview · 2017-09-17T14:47:31.887Z · score: 0 (0 votes) · LW · GW


Since then I've thought of a couple more sites that are neither hierarchical nor tag-based: Facebook and eHow-style sites.

There is another pattern that is neither hierarchical, tag-based nor search-based, which is the "invitation-only" pattern of a site like pastebin. You can only find content by referral.

Comment by 9eb1 on LW 2.0 Strategic Overview · 2017-09-17T03:13:06.707Z · score: 0 (0 votes) · LW · GW

That is very interesting. An exception might be "Google search pages." Not only is there no hierarchical structure, there is also no explicit tag structure and the main user engagement model is search-only. Internet Archive is similar but with their own stored content.

With respect to TV Tropes, I'd note that while it is nominally organized according to those indexes, the typical usage pattern is as a sort of pure garden path in my experience.

Comment by 9eb1 on Priors Are Useless · 2017-06-21T14:17:15.296Z · score: 6 (6 votes) · LW · GW

Now analyze this in a decision theoretic context where you want to use these probabilities to maximize utility and where gathering information has a utility cost.
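A sketch of the kind of analysis this comment points at (the payoff table and priors are invented for illustration): compute the expected value of perfect information, i.e. how much utility learning the true state would add over acting on the prior alone. Information is worth gathering only when this exceeds its cost, and a sharper prior makes information less valuable.

```python
def evpi(prior, payoffs):
    """Expected value of perfect information for a finite decision problem.
    prior: probability of each state; payoffs[action][state]: utility table."""
    # best expected utility achievable by committing to one action under the prior
    act_on_prior = max(sum(p * u for p, u in zip(prior, row)) for row in payoffs)
    # expected utility if we could observe the state first, then pick the best action
    act_with_info = sum(p * max(row[s] for row in payoffs)
                        for s, p in enumerate(prior))
    return act_with_info - act_on_prior

payoffs = [[10, 0],   # action A pays off in state 0
           [0, 10]]   # action B pays off in state 1
print(evpi([0.5, 0.5], payoffs))  # maximally uncertain prior: information is valuable
print(evpi([0.9, 0.1], payoffs))  # confident prior: information is worth much less
```

Under the uniform prior it is worth paying up to 5 utility to learn the state; under the 90/10 prior, only 1. That asymmetry is why the choice of prior matters once information has a cost.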

Comment by 9eb1 on Change · 2017-05-07T07:14:59.896Z · score: 4 (3 votes) · LW · GW

This was incomprehensible to me.

Comment by 9eb1 on Open thread, Apr. 03 - Apr. 09, 2017 · 2017-04-04T22:15:50.777Z · score: 0 (0 votes) · LW · GW

Bryan Caplan responded to this exchange here.

Comment by 9eb1 on Open thread, Apr. 03 - Apr. 09, 2017 · 2017-04-04T16:06:36.654Z · score: 5 (5 votes) · LW · GW

I think no one would argue that the rationality community is at all divorced from the culture that surrounds it. People talk about culture constantly, and are looking for ways to change the culture to better address shared goals. It's sort of silly to say that this means it should be called the "irrationality community." Tyler Cowen is implicitly putting himself at the vantage point of a more objective observer with this criticism, which I find ironic.

Where Tyler is wrong is that it's not JUST another kind of culture. It's a culture with a particular set of shared assumptions, and it's nihilistic to imply that all cultures are equal no matter from what shared assumptions they issue forth. Cultures are not interchangeable. Tyler would also have to admit (and I'm guessing he likely would admit if pressed directly) that his culture of mainstream academic thought is "just another kind of religion" to exactly the same extent that rationality is, it's just less self-aware about that fact.

As an aside, I think Lumifer is a funny name. I always associate it with Lumiere from Beauty and the Beast, and with Lucifer. Basically I always picture your posts as coming from a cross between a cartoon candle and Satan.

Comment by 9eb1 on IQ and Magnus Carlsen, Leo Messi and the Decathlon · 2017-03-29T19:42:26.306Z · score: 0 (0 votes) · LW · GW

You are correct, there are things that can negatively impact someone's IQ. With respect to maximizing, the fact that people have been trying for decades to find something that reliably increases IQ, and that everything has led to a dead end, means we are pretty close to what's achievable without revolutionary new technology. Maybe you aren't at 100% of what's achievable, but you're probably at 95% (and of course percentages don't really have any meaning here, because there is no metric that grounds IQ in absolute terms).

Comment by 9eb1 on IQ and Magnus Carlsen, Leo Messi and the Decathlon · 2017-03-29T19:23:15.250Z · score: 0 (0 votes) · LW · GW

I agree that IQ is plenty interesting by itself. My goal with this article was to explore the boundaries of that usefulness and explore the ways in which the correlations break down.

The Big 5 personality traits have a correlation with some measures of success which is independent of IQ. For example, in this paper:

Consistent with the zero-order correlations, Conscientiousness was a significant positive predictor of GPA, even controlling for gender and SAT scores, and this finding replicated across all three samples. Thus, personality, in particular the Conscientiousness dimension, and SAT scores have independent effects on both high school and college grades. Indeed, in several cases, Conscientiousness was a slightly stronger predictor of GPA than were SAT scores.

Notably, the Openness factor is the factor that has the strongest correlation with IQ. I'm guessing Gwern has more stuff like this on his website, but if someone makes the claim that IQ is the only thing that matters to success in any given field, they are selling bridges.

Comment by 9eb1 on IQ and Magnus Carlsen, Leo Messi and the Decathlon · 2017-03-29T01:39:23.397Z · score: 4 (4 votes) · LW · GW

The tallest player ever to play in the NBA was Gheorghe Mureșan, who was 7'7". He was not very good. Manute Bol was almost as tall, and he was good but not great. By contrast, the best basketball player of all time was 6'6" [citation needed]. In fact, perhaps an athletic quotient would be better than height for predicting top-end performance, since Jordan, LeBron and Kareem are all way more athletic than Mureșan and Bol.

I will attempt to explain the strongest counterargument that I'm aware of regarding your first thesis. When you take a bunch of tests of mental ability and create a correlation matrix, you obtain a positive manifold, where all the correlations are positive. When you perform a factor analysis of these subtests, you obtain a first factor that is very large, and secondary through n-iary factors that are small and vary depending on the number of factors you use. This suggests there is some sort of single causal force responsible for the majority of test performance variation. If you performed a factor analysis of a bunch of plausible measures of athleticism, I think you would find that, for example, bench press and height do not participate in a positive manifold, and you would likely find multiple relevant, stable factors rather than one athletic quotient that accounts for >50% of the variation. Cardio ability and muscular strength are at odds, so that would be at least two plausible stable factors. This argument appears in the Wikipedia article on the g factor, under "Factor structure of cognitive abilities." Personally, in light of the dramatic differences between the different parts of an IQ test battery, I find this fact surprising and underappreciated. Most people do not realize this, and the folk wisdom is that there are very clear different types of intelligence.
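The positive-manifold observation above can be illustrated with a toy simulation. The factor loadings here (0.8 on a shared factor, 0.6 test-specific) are made up for illustration, not taken from any real test battery; the point is just that a single common factor produces all-positive correlations and a dominant first eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 5000, 6

# Each "subtest" score = shared general factor + test-specific noise.
g = rng.normal(size=(n_people, 1))
specific = rng.normal(size=(n_people, n_tests))
scores = 0.8 * g + 0.6 * specific  # hypothetical loadings

corr = np.corrcoef(scores, rowvar=False)

# Positive manifold: every pairwise correlation is positive.
off_diag = corr[~np.eye(n_tests, dtype=bool)]
print("all correlations positive:", bool((off_diag > 0).all()))

# The first eigenvalue of the correlation matrix approximates the share
# of variance a single common factor captures; here it exceeds 50%.
eigvals = np.linalg.eigvalsh(corr)[::-1]
print("first-factor share:", eigvals[0] / eigvals.sum())
```

If you instead drew "bench press" and "cardio" scores from negatively related factors, the off-diagonal correlations would no longer all be positive and no single factor would dominate, which is the contrast the argument turns on.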

The second point I would make regarding your first thesis is that there are plenty of researchers who don't like g, and they have spent decades trying to come up with alternative breakdowns of intelligence into different categorizations that don't include a single factor. Those efforts were mostly fruitless, because every time they were tested, it turned out that all the tests still individually correlated with g. Many plausible combinations of "intelligences" received this treatment. Currently popular models do have subtypes of intelligence, but they are all viewed as sharing g as an important top-level factor (e.g. CHC theory), rather than g simply being a happenstance correlation of multiple factors. In this case absence of evidence is evidence of absence (in light of the effort that has gone into trying to uncover such evidence).

To be honest, I very much doubt that actual IQ researchers would disagree with your second thesis. My argument would be that for most fields there is enough randomness that you would not expect the most intelligent person to also be the most lauded. Even Einstein had to have the luck to have the insights he did, and there were undoubtedly many people who were just as smart but had different circumstances that led to them not having those insights. Additionally, there is a thing called Spearman's law of diminishing returns, which is the theory that the higher your g is, the less correlated your subtype intelligences are with your g factor. That is, for people who have very high IQs, there is a ton more variation between your different aspects of intelligence than there is for people with very low IQs. This has been measured and is apparently true, and would seem to support your thesis. It is true that these two observations (the factor decomposition and Spearman's law) seem to be in tension, but hopefully one day someone will come through with an explanation for intelligence that neatly explains both of these things and lots more besides.

Unrelated to your two theses, I think the fact that IQ correlates with SO MANY things makes it interesting alone. IQ correlates with school performance, job performance, criminality, health, longevity, pure reaction speed, brain size, income, and almost everything else (it seems like) that people bother to try correlating it with. If IQ hadn't originally come from psychometric tests, people would probably simply call it your "favor with the gods factor" or something.

There are enough correlations that any time I read a social sciences paper with statistics on outcomes between people with different characteristics, I always wish they would have controlled for IQ (but they never do). This may seem silly, but I think there is definitely an argument that can be made that IQ is "prior to" most of the things people study. We already know that IQ can't be meaningfully changed. It's pretty much set by the time you are an adult, and we know of nothing besides iodine deficiency that has a meaningful impact on it in the context of a baseline person in modern society.

Comment by 9eb1 on Open thread, March 13 - March 19, 2017 · 2017-03-15T07:44:34.759Z · score: 5 (5 votes) · LW · GW

I've read a lot of TLP and this is roughly my interpretation as well. Alone's posts do not come with nicely-wrapped thesis statements (although the conclusion of this one is as close as it gets). The point she is making here is that the system doesn't care about your happiness, but you should. The use of "goals" here isn't the LessWrong definition, but the more prosaic one implying achievements in life and especially in careers. Real people who want to be happy do want someone who is passionate, and the juxtaposition of passionate with "mutual respect and shared values" is meant to imply a respectful but loveless marriage. If someone asks you about your partner and the most central characteristic you have to define your marriage is "mutual respect and shared values," that says something very different than if your central characteristic is "passionate." It's sterile, and that sterility is meant to suggest that the person who says "passionate" is going to be happier regardless of their achievements in the workplace.

Comment by 9eb1 on Open thread, March 13 - March 19, 2017 · 2017-03-14T01:33:04.853Z · score: 0 (0 votes) · LW · GW

This is one of the more confusing problem statements, but I think I understand. So if we choose a regular hexagon with height = 0.5, as in this link, the scoring for this solution would be ((area of triangle - area of hexagon) + (area of square - 3 * area of hexagon)) / area of hexagon?

edit: Gur orfg fbyhgvba V pbhyq pbzr hc jvgu jnf whfg gur n evtug gevnatyr gung'f unys gur nern bs gur rdhvyngreny gevnatyr. Lbh pna svg 4 va gur fdhner naq gjb va gur gevnatyr, naq gur fpber vf cbvag fvk. V bayl gevrq n unaqshy bs erthyne gvyvatf gubhtu.

Comment by 9eb1 on Open thread, March 13 - March 19, 2017 · 2017-03-13T22:19:33.719Z · score: 0 (0 votes) · LW · GW

You can come arbitrarily close by choosing any tiling shape and making it as small as necessary.

Comment by 9eb1 on why people romantice magic over most science. · 2017-03-08T20:15:55.821Z · score: 0 (0 votes) · LW · GW

I have sometimes mused that accumulating political power (or generally being able to socially engineer) is the closest to magic that we have in the real world. It's the force multiplier that magic is used for in fiction by a single protagonist. Most people who want magic also do not follow political careers. Of course, this is only a musing because there are lots of differences. No matter how much power you accumulate you are still beholden to someone or something, so if independence is a big part of your magical power fantasy then it won't help.

Comment by 9eb1 on ribbonfarm: A Brief History of Existential Terror · 2017-03-02T06:01:06.440Z · score: 0 (0 votes) · LW · GW

The non-binariness of things seems to me to be a fundamental tenet of the post-rationality thing (ribbonfarm is part of post-rationality). In particular, Chapman writes extensively on the idea that all categories are nebulous and structured.

I also think there are options to control your risk factor, depending on the field. You can found a startup, you can be the first startup employee, you can join an established startup, you can join a publicly traded corporation, you can get a job in the permanent bureaucracy. Almost every spot on the work risk-stability spectrum is available.

Perhaps the real question is why some particular fields or endeavors lend themselves to seemingly continuous risk functions. All of Viliam's categories are purely social structures, where other people are categorizing you. So perhaps it's not the risk inherent in an activity but the labeling that fits his intuition. People might label you a drug user in their map if you smoke marijuana, but in the territory the continuum from "having used marijuana once" to "uses heroin daily" is not only continuous but many-dimensioned.

Comment by 9eb1 on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-23T17:14:53.880Z · score: 2 (2 votes) · LW · GW

That is true for people who you are going to become friends with, but the difference in negative environments is much bigger. If your job has a toxic social environment, you are free to find a new one at any time. You also have many bona fide levers for adjusting the environment, for example by complaining to your boss, suing the company, etc.

When your high school has a toxic social environment, you have limited ability to switch out of it. Complaints about other students face an extremely high bar to be taken into account, because attendance is mandatory and acting on complaints isn't in the administrators' best interests. If someone isn't doing something plainly illegal, it's unlikely you will get much help.

Comment by 9eb1 on A semi-technical question about prediction markets and private info · 2017-02-20T04:11:11.090Z · score: 0 (0 votes) · LW · GW

This is an interesting puzzle. I catch myself fighting the hypothetical a lot.

I think it hinges on what would be the right move if you saw a six, and the market also had six as the favored option. In that situation, it would be appropriate to bet on the six which would move it past the 50% equilibrium, because you have the information from the market and the information from the die. I think maybe your equilibrium price can only exist if there is only one participant currently offering all of those bets, and they saw a six (so it's not really a true market yet, or there is only one informed participant and many uninformed). In that case, you having seen a six would imply a probability of higher than 50% that it is the weighted side. Given that thinking, if you see that prediction market favoring a different number ("3"), you should indeed bet against it, because very little information is contained in the market (one die throw worth).

The market showing a 50% price for one number and 10% for the rest is an unstable equilibrium. If you started out with a market-maker with no information offering 1/6 for all sides, and there were many participants who each saw only a single die, the betting would increase on the correct side. At each price that favors that side, every person who again sees that side would have both the guess from the market and the information from their roll; they would use both to estimate a slightly greater probability, and the price would shift further in the correct direction. It would blow past 50% probability without even pausing.

Those don't seem like very satisfactory answers though.
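The convergence dynamic described above (an uninformed market maker, many traders each seeing one private roll) can be sketched as a toy simulation. The 0.5/0.1 weighting and the assumption that each trader moves the price all the way to their Bayesian posterior are simplifications of mine, not part of the original puzzle:

```python
import numpy as np

rng = np.random.default_rng(1)

SIDES = 6
weighted = 5                      # index of the weighted side (hypothetical)
true_probs = np.full(SIDES, 0.1)
true_probs[weighted] = 0.5

# Market starts at the uninformed 1/6 price for every side.
price = np.full(SIDES, 1 / SIDES)

# Each trader privately sees one roll, treats the current market price
# as a prior, and moves the price to their posterior. The likelihood of
# rolling side r is 0.5 if r is the weighted side, 0.1 otherwise.
for _ in range(500):
    roll = rng.choice(SIDES, p=true_probs)
    likelihood = np.where(np.arange(SIDES) == roll, 0.5, 0.1)
    posterior = price * likelihood
    price = posterior / posterior.sum()

print("final price on the weighted side:", price[weighted])
```

In this model the price on the weighted side does blow past 50% without pausing, since each trader's update compounds on the last; a real market with strategic traders and limited liquidity would converge more messily, but in the same direction.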

Comment by 9eb1 on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-06T00:49:34.594Z · score: 4 (5 votes) · LW · GW

You may be interested in a post on gwern's site related to this topic.

Comment by 9eb1 on How often do you check this forum? · 2017-02-03T19:32:40.970Z · score: 1 (1 votes) · LW · GW

Is there any information on how well-calibrated the community predictions are on Metaculus? I couldn't find anything on the site. Also, if one wanted to get into it, could you describe what your process is?

Comment by 9eb1 on A question about the rules · 2017-02-02T02:08:41.323Z · score: 1 (1 votes) · LW · GW

It certainly has something to do with his post, even if the main point of the post was specifically about domains from which to choose examples for your writing.

Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.

This is the whole crux of the issue. According to some people, Less Wrong is a site to learn about and discuss rationality among people who are not already rational (or to "refine the art of human rationality"). According to some others, it's a community site around which aspiring rationalists should discuss topics of interest to the group. Personally I think phl43's posts are decent, and will likely improve with more practice, but I didn't think that particular post was very relevant or appropriate for Less Wrong specifically.

Comment by 9eb1 on How often do you check this forum? · 2017-01-31T18:11:42.533Z · score: 3 (3 votes) · LW · GW

I use Inoreader, which is paid and web-based, but has browser plugins. I wouldn't say I "mostly" look for activity, since time is limited. The Open Thread is extremely active, I think it gets about 200 comments per day, but I don't frequently browse it. Slate Star Codex and Less Wrong are the only actual "communities" in my feed reader. The others that are directly or tangentially related are blogs, and most blogs don't really engender that much discussion.

Comment by 9eb1 on How often do you check this forum? · 2017-01-30T22:50:41.142Z · score: 3 (3 votes) · LW · GW

I use a feed reader, so I check out almost all the posts and links. I click through to the comments on almost all of them as well, since that's the real point.

The reddit /r/slatestarcodex community is very active too, and I like that.

Comment by 9eb1 on 80,000 Hours: EA and Highly Political Causes · 2017-01-26T23:41:45.267Z · score: 10 (10 votes) · LW · GW

This is a trend in effective altruism and it's a very dangerous one for their cause. As soon as people outside the movement think that EAs are trying to spend everyone's money on "pet causes," it puts a distinct upper limit on the growth of the movement. Going after political targets seems really appealing because the leverage seems so high. If we could somehow install Holden Karnofsky as president it would probably improve the lives of a billion people, but there is no majority group of people that cares more about the global poor than they care about their own money.

It's very appealing, psychologically, because big political wins have outsized importance in how you feel about yourself. When a big decision comes down (like when gay marriage was legalized by the Supreme Court) it is literally a cause for celebration. Your side won and your enemies lost. If instead you somehow got people to donate $50 million to Against Malaria Foundation, it wouldn't be that salient.

Since you reference Robin Hanson's idea of pulling ropes sideways, I figure I should provide a link.

Comment by 9eB1 on [deleted post] 2017-01-23T22:12:15.885Z

I basically agree with all of this with one quibble. I think it is very easy to underestimate the impact that LessWrong has had. There are a lot of people (myself included) who don't want to be associated with rationality, but whose thoughts it has nonetheless impacted. I know many of them in real life. LessWrong is weird enough that there is a social cost to having the first google result for your real name point to your LessWrong comments. If I am talking to someone I don't know well about LessWrong or rationality in general I will not give it a full-throated defense in real life, and in venues where I do participate under my real name, I only link to rationalsphere articles selectively.

Partially because of this stigma, many people in startupland will read the sequences, put them in their toolbox, then move on with their lives. They don't view continuing to participate as important, and surely much of the low-hanging fruit has long since been plucked. But if you look at the hints you will find tidbits of information that point to rationality having an impact.

  1. Ezra Klein and Patrick Collison (CEO of Stripe) had an extensive conversation about rationality, and both are famous, notable figures.

  2. A member of the Bay Area rationality community was rumored to be a member of the Trump cabinet.

  3. Dominic Cummings (the architect of the Brexit "Leave" campaign) points to concept after concept that is both core to and adjacent to rationality, so much so that I would be genuinely surprised if he were not aware of it. (Perhaps this isn't good for rationality depending on your political views, but don't let it be said that he isn't winning).

  4. OpenAI was launched with $1B in funding from a Silicon Valley who's who and they have been in dialogue with MIRI staff (and interestingly Stripe's former CTO is the OpenAI CTO, obviously he knows about the rationalsphere). In general there has been tons of interest that has developed around AI alignment from multiple groups. Since this was the fundamental purpose of LessWrong to begin with, at least Eliezer is winning beyond what anyone could have ever expected based on his roundabout way of creating mindshare. We can't say with certainty that this wouldn't have happened without LessWrong, but personally I find it hard to believe that it didn't make a huge impact on Eliezer's influence within this field of thought.

Do we have an army of devout rationalists that are out there winning? No, it doesn't seem so. But rationalism has had a lot of children that are winning, even if they aren't looking back to improve rationalism later. Personally, I didn't expect LessWrong to have had as much impact as it has. I realized how hard it is to put these ideas into action when I first read the sequences.

Comment by 9eB1 on [deleted post] 2017-01-22T17:50:34.210Z

The Meaningness book's section on Meaningness and Time is all about culture viewed through Chapman's lens. Ribbonfarm has tons of articles about culture, most of which I haven't read. I haven't been following post-rationality for very long. Even on the front page now there is this piece, which is interesting and typical of the thought.

Post-rationalists write about rituals quite a bit I think (e.g. here). But they write about it from an outsider's perspective, emphasizing the value of "local" or "small-set" ritual to everyone as part of the human experience (whether they be traditional or new rituals). When Rationalists write about ritual my impression is that they are writing about ritual for Rationalists as part of the project of establishing or growing a Rationalist community to raise the sanity waterline. Post-rationalists don't identify as a group to the extent that they want to have "post-rationalist rituals." David Chapman is a very active Buddhist, for example, so he participates in rituals (this link from his Buddhism blog) related to that community, and presumably the authors at ribbonfarm observe rituals that are relevant within their local communities.

Honestly, I don't think there is much in the way of fundamental philosophical differences. I think it's more like Rationalists and post-Rationalists are drawn from the same pool of people, but some are more interested in model trains and some are more interested in D&D. It would be hard for me to make this argument rigorous though, it's just my impression.