Comment by 9eb1 on The Hacker Learns to Trust · 2019-06-22T13:10:15.360Z · score: 18 (10 votes) · LW · GW

The phenomenon I was pointing out wasn't exactly that the person's decision was made because of status. It was that a prerequisite for them changing their mind was that they were taken seriously and engaged with respectfully. That said, I do think that it's interesting to understand the way status plays into these events.

First, they started the essay with a personality-focused explanation:

To explain how this all happened, and what we can learn from it, I think it’s important to learn a little bit more about my personality and with what kind of attitude and world model I came into this situation.


I have a depressive/paranoid streak, and tend to assume the worst until proven otherwise. At the time I made my first twitter post, it seemed completely plausible in my mind that no one, OpenAI or otherwise, would care or even notice me. Or, even worse, that they would antagonize me.

The narrative that the author themselves is setting up is that they had irrational or emotional reasons for behaving the way they did, then they considered longer and changed their mind. They also specifically call out their perceived lack of status as an influencing factor.

If someone has an irrational, status-focused explanation for their own initial reasoning, and then we see high-status people providing them extensive validation, it doesn't mean that they changed their mind because of the high-status people, but it's suggestive. My real model is that they took those ideas extra seriously because the people were nice and high status.

Imagine a counterfactual world where they posted their model, and all of the responses they received were the same logical argument, but instead made on 4Chan and starting with "hey fuckhead, what are you trying to do, destroy the world?" My priors suggest that this person would have, out of spite, continued to release the model.

The gesture they are making here, not releasing the model, IS purely symbolic. We know the model is not as good as mini-GPT2. Nonetheless, it may be useful to real hackers who aren't supported by large corporate interests, either for learning or just for understanding ML better. Since releasing the model poses no bona fide risk, part of not releasing it is so they can feel like they are part of history. Note the end, where they talk about the precedent they are setting by not releasing it.

I think the fact that the model doesn't actually work is an important aspect of this. Many hackers would have done it as a cool project and released it without pomp, but this person put together a long essay, explicitly touting the importance of what they'd done and the impact it would have on history. Then it turned out the model did not work, which must have been very embarrassing. It is fairly reasonable to suggest that the person then took the action that made them feel best about their legacy and status: writing an essay about why they were not releasing the model for good, rationalist-approved reasons. The person is not even necessarily aware that this is influencing the decision; this is a fully Elephant in the Brain situation.

When I read that essay, at least half of it is heavily-laden with status concerns and psychological motivations. But, to reiterate: though pro-social community norms left this person open to having their mind changed by argument, probably the arguments still had to be made.

How you feel about this should probably turn on questions like "Who has the status in this community to have their arguments taken seriously? Do I agree with them?" and "Is it good for only well-funded entities to have access to current state-of-the-art ML models?"

Comment by 9eb1 on The Hacker Learns to Trust · 2019-06-22T05:07:25.669Z · score: 7 (9 votes) · LW · GW

As is always the case, this person changed their mind because they were made to feel valued. The community treated what they'd done with respect (even though, fundamentally, they were unsuccessful and the actual release of the model would have had no impact on the world), and as a result they capitulated.

Comment by 9eb1 on BYOL (Buy Your Own Lunch) · 2018-04-09T01:49:14.812Z · score: 6 (2 votes) · LW · GW

It is not at all rude, at a business lunch, to say "Oh, thank you!" when someone says they will pay for lunch. Especially if you are a founder of a small company and meeting with people at more established companies who will likely be able to expense the meal. Those people don't care, because it's not their money.

If you are meeting with people in a similar position (fellow founders), you can just ask to split the check, which people will either accept, or they will offer to pay, in which case see above.

If you are meeting with casual acquaintances, you can also say "Split the check?" and it's totally fine.

The weirdness points spent by adding that to your e-mail and including a link to this post far exceed those spent by saying "Thank you" when someone else offers to pay, so carefully consider whether it's worth spending them this way.

Comment by 9eb1 on Is Rhetoric Worth Learning? · 2018-04-07T04:56:39.448Z · score: 7 (2 votes) · LW · GW

In the best case scenario, a fellow traveler will already have studied rhetoric and will be able to provide the highlights relevant to LWers. In the spirit of offering the "obvious advice": I've heard the "Very Short Introduction" series of books can give you an overview of the main ideas of a field, and maybe that will be helpful for guiding your research beyond the things that are easily googleable.

Comment by 9eb1 on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-26T07:04:36.709Z · score: 13 (4 votes) · LW · GW

The case of the Vietnamese monk who famously set himself on fire may meet your criteria. The Vietnamese government claimed that he had drugged himself, but it's hard to imagine a drug that would allow you to get out of a car under your own power, walk to a seated position, and light a match to set yourself on fire, yet still have no reaction as your flesh burns off.

Comment by 9eb1 on Hammertime Postmortem · 2018-03-25T22:11:58.362Z · score: 1 (1 votes) · LW · GW

It's too bad the link to the referenced *"Focusing," for skeptics* article in your post on the tactic now leads to a 404. I wonder if it was taken down intentionally?

Comment by 9eb1 on Feedback on LW 2.0 · 2017-10-01T20:04:25.935Z · score: 8 (8 votes) · LW · GW

I love that the attempt is being made and I hope it works. The main feedback I have is that the styling of the comment section doesn't work for me. One of the advantages of the existing LessWrong comment section is that the information hierarchy is super clear. The comments are bordered and backgrounded, so when you decide to skip a comment your eye can very easily scan down to the next one. On the new site all the comments are relatively undifferentiated, so it's much harder to skim them. I also think the styling of the blockquotes in the new comments needs work. Currently there is not nearly enough difference between blockquoted text and comment text. It needs more spacing and more indentation, and preferably a typographical difference as well.

Comment by 9eb1 on LW 2.0 Strategic Overview · 2017-09-17T14:47:31.887Z · score: 0 (0 votes) · LW · GW


Since then I've thought of a couple more sites that are neither hierarchical nor tag-based: Facebook, and eHow-style sites.

There is another pattern that is neither hierarchical, tag-based nor search-based, which is the "invitation-only" pattern of a site like pastebin. You can only find content by referral.

Comment by 9eb1 on LW 2.0 Strategic Overview · 2017-09-17T03:13:06.707Z · score: 0 (0 votes) · LW · GW

That is very interesting. An exception might be "Google search pages." Not only is there no hierarchical structure, there is also no explicit tag structure and the main user engagement model is search-only. Internet Archive is similar but with their own stored content.

With respect to TV Tropes, I'd note that while it is nominally organized according to those indexes, the typical usage pattern is as a sort of pure garden path in my experience.

Comment by 9eb1 on Priors Are Useless · 2017-06-21T14:17:15.296Z · score: 6 (6 votes) · LW · GW

Now analyze this in a decision theoretic context where you want to use these probabilities to maximize utility and where gathering information has a utility cost.

Comment by 9eb1 on Change · 2017-05-07T07:14:59.896Z · score: 4 (3 votes) · LW · GW

This was incomprehensible to me.

Comment by 9eb1 on Open thread, Apr. 03 - Apr. 09, 2017 · 2017-04-04T22:15:50.777Z · score: 0 (0 votes) · LW · GW

Bryan Caplan responded to this exchange here.

Comment by 9eb1 on Open thread, Apr. 03 - Apr. 09, 2017 · 2017-04-04T16:06:36.654Z · score: 5 (5 votes) · LW · GW

I think no one would argue that the rationality community is at all divorced from the culture that surrounds it. People talk about culture constantly, and are looking for ways to change the culture to better address shared goals. It's sort of silly to say that that means it should be called the "irrationality community." Tyler Cowen is implicitly putting himself at the vantage point of a more objective observer with the criticism, which I find ironic.

Where Tyler is wrong is that it's not JUST another kind of culture. It's a culture with a particular set of shared assumptions, and it's nihilistic to imply that all cultures are equal no matter from what shared assumptions they issue forth. Cultures are not interchangeable. Tyler would also have to admit (and I'm guessing he likely would admit if pressed directly) that his culture of mainstream academic thought is "just another kind of religion" to exactly the same extent that rationality is, it's just less self-aware about that fact.

As an aside, I think Lumifer is a funny name. I always associate it with Lumiere from Beauty and the Beast, and with Lucifer. Basically I always picture your posts as coming from a cross between a cartoon candle and Satan.

Comment by 9eb1 on IQ and Magnus Carlsen, Leo Messi and the Decathlon · 2017-03-29T19:42:26.306Z · score: 0 (0 votes) · LW · GW

You are correct, there are things that can negatively impact someone's IQ. With respect to maximizing, I think the fact that people have been trying for decades to find something that reliably increases IQ, and that everything has led to a dead end, means we are pretty close to what's achievable without revolutionary new technology. Maybe you aren't at 100% of what's achievable, but you're probably at 95% (and of course percentages don't really have any meaning here, because there is no metric which grounds IQ in absolute terms).

Comment by 9eb1 on IQ and Magnus Carlsen, Leo Messi and the Decathlon · 2017-03-29T19:23:15.250Z · score: 0 (0 votes) · LW · GW

I agree that IQ is plenty interesting by itself. My goal with this article was to explore the boundaries of that usefulness and explore the ways in which the correlations break down.

The Big 5 personality traits have a correlation with some measures of success which is independent of IQ. For example, in this paper:

Consistent with the zero-order correlations, Conscientiousness was a significant positive predictor of GPA, even controlling for gender and SAT scores, and this finding replicated across all three samples. Thus, personality, in particular the Conscientiousness dimension, and SAT scores have independent effects on both high school and college grades. Indeed, in several cases, Conscientiousness was a slightly stronger predictor of GPA than were SAT scores.

Notably, the Openness factor is the factor that has the strongest correlation with IQ. I'm guessing Gwern has more stuff like this on his website, but if someone makes the claim that IQ is the only thing that matters to success in any given field, they are selling bridges.

Comment by 9eb1 on IQ and Magnus Carlsen, Leo Messi and the Decathlon · 2017-03-29T01:39:23.397Z · score: 4 (4 votes) · LW · GW

The tallest player ever to play in the NBA was Gheorghe Mureșan, who was 7'7". He was not very good. Manute Bol was almost as tall, and he was good but not great. By contrast, the best basketball player of all time was 6'6" [citation needed]. In fact, perhaps an athletic quotient would be better than height for predicting top-end performance, since Jordan, LeBron and Kareem are all way more athletic than Mureșan and Bol.

I will attempt to explain the strongest counterargument I'm aware of regarding your first thesis. When you take a bunch of tests of mental ability and create a correlation matrix, you obtain a positive manifold, where all the correlations are positive. When you perform a factor analysis of these subtests, you obtain a first factor that is very large, and secondary through n-ary factors that are small and vary depending on the number of factors you use. This suggests that there is some single causal force responsible for the majority of the variation in test performance. If you performed a factor analysis of a bunch of plausible measures of athleticism, I think you would find that, for example, bench press and height do not participate in a positive manifold, and you would likely find multiple relevant, stable factors rather than one athletic quotient that accounts for >50% of the variation. Cardio ability and muscular strength are at odds, so that would be at least two plausible stable factors. This argument is on Wikipedia, in the g factor article's section on the factor structure of cognitive abilities. Personally, in light of the dramatic differences between the different parts of an IQ test battery, I find this fact surprising and underappreciated. Most people do not realize this, and the folk wisdom is that there are very clear different types of intelligence.
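This factor structure is easy to illustrate with a toy example. The sketch below uses a made-up correlation matrix for five hypothetical subtests (the numbers are invented, not real psychometric data) and extracts the principal factor via an eigendecomposition: whenever every pairwise correlation is positive, a single factor ends up carrying most of the variance.

```python
import numpy as np

# Synthetic correlation matrix for 5 mental-ability subtests:
# a "positive manifold", where every pairwise correlation is positive.
R = np.array([
    [1.0, 0.6, 0.5, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4, 0.5],
    [0.5, 0.5, 1.0, 0.6, 0.4],
    [0.5, 0.4, 0.6, 1.0, 0.5],
    [0.4, 0.5, 0.4, 0.5, 1.0],
])

# Principal-axis shortcut: eigendecomposition of the correlation matrix.
# eigvalsh returns eigenvalues in ascending order, so reverse them.
eigenvalues = np.linalg.eigvalsh(R)[::-1]
variance_explained = eigenvalues / eigenvalues.sum()
print(variance_explained)
```

With uniformly positive correlations around 0.5, the first eigenvalue comes out near 3 out of a total of 5, i.e. roughly 60% of the variance, which is the kind of dominant first factor the g literature describes. Replace a few correlations with negatives (say, bench press vs. marathon time) and the first factor collapses.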

The second point I would make regarding your first thesis is that there are plenty of researchers who don't like g, and they have spent decades trying to come up with alternative breakdowns of intelligence into different categorizations that don't include a single factor. Those efforts were mostly fruitless, because every time they were tested, it turned out that all the tests still individually correlated with g. Many plausible combinations of "intelligences" received this treatment. Currently popular models do have subtypes of intelligence, but they are all viewed as sharing g as an important top-level factor (e.g. CHC theory), rather than g simply being a happenstance correlation of multiple factors. In this case absence of evidence is evidence of absence (in light of the effort that has gone into trying to uncover such evidence).

To be honest, I very much doubt that actual IQ researchers would disagree with your second thesis. My argument would be that for most fields there is enough randomness that you would not expect the most intelligent person to also be the most lauded. Even Einstein had to have the luck to have the insights he did, and there were undoubtedly many people who were just as smart but had different circumstances that led to them not having those insights. Additionally, there is a thing called Spearman's law of diminishing returns, which is the theory that the higher your g is, the less correlated your subtype intelligences are with your g factor. That is, for people who have very high IQs, there is a ton more variation between your different aspects of intelligence than there is for people with very low IQs. This has been measured and is apparently true, and would seem to support your thesis. It is true that these two observations (the factor decomposition and Spearman's law) seem to be in tension, but hopefully one day someone will come through with an explanation for intelligence that neatly explains both of these things and lots more besides.

Unrelated to your two theses, I think the fact that IQ correlates with SO MANY things makes it interesting alone. IQ correlates with school performance, job performance, criminality, health, longevity, pure reaction speed, brain size, income, and almost everything else (it seems like) that people bother to try correlating it with. If IQ hadn't originally come from psychometric tests, people would probably simply call it your "favor with the gods factor" or something.

There are enough correlations that any time I read a social sciences paper with statistics on outcomes between people with different characteristics, I always wish they would have controlled for IQ (but they never do). This may seem silly, but I think there is definitely an argument that can be made that IQ is "prior to" most of the things people study. We already know that IQ can't be meaningfully changed. It's pretty much set by the time you are an adult, and we know of nothing besides iodine deficiency that has a meaningful impact on it in the context of a baseline person in modern society.

Comment by 9eb1 on Open thread, March 13 - March 19, 2017 · 2017-03-15T07:44:34.759Z · score: 5 (5 votes) · LW · GW

I've read a lot of TLP and this is roughly my interpretation as well. Alone's posts do not come with nicely-wrapped thesis statements (although the conclusion of this one is as close as it gets). The point she is making here is that the system doesn't care about your happiness, but you should. The use of "goals" here isn't the LessWrong definition, but the more prosaic one implying achievements in life, especially in careers. Real people who want to be happy do want someone who is passionate, and the juxtaposition of "passionate" with "mutual respect and shared values" is meant to imply a respectful but loveless marriage. If someone asks you about your partner, and the most central characteristic you have to define your marriage is "mutual respect and shared values," that says something very different than if your central characteristic is "passionate." It's sterile, and that sterility is meant to suggest that the person who says "passionate" is going to be happier regardless of their achievements in the workplace.

Comment by 9eb1 on Open thread, March 13 - March 19, 2017 · 2017-03-14T01:33:04.853Z · score: 0 (0 votes) · LW · GW

This is one of the more confusing problem statements, but I think I understand. So if we choose a regular hexagon with height = 0.5, as in this link, the scoring for this solution would be ((area of triangle - area of hexagon) + (area of square - 3 * area of hexagon)) / area of hexagon?

edit: Gur orfg fbyhgvba V pbhyq pbzr hc jvgu jnf whfg gur n evtug gevnatyr gung'f unys gur nern bs gur rdhvyngreny gevnatyr. Lbh pna svg 4 va gur fdhner naq gjb va gur gevnatyr, naq gur fpber vf cbvag fvk. V bayl gevrq n unaqshy bs erthyne gvyvatf gubhtu.

Comment by 9eb1 on Open thread, March 13 - March 19, 2017 · 2017-03-13T22:19:33.719Z · score: 0 (0 votes) · LW · GW

You can come arbitrarily close by choosing any tiling shape and making it as small as necessary.

Comment by 9eb1 on why people romantice magic over most science. · 2017-03-08T20:15:55.821Z · score: 0 (0 votes) · LW · GW

I have sometimes mused that accumulating political power (or generally being able to socially engineer) is the closest thing to magic that we have in the real world. It's the force multiplier that magic provides to a single protagonist in fiction. Most people who want magic also do not pursue political careers. Of course, this is only a musing, because there are lots of differences. No matter how much power you accumulate, you are still beholden to someone or something, so if independence is a big part of your magical power fantasy then it won't help.

Comment by 9eb1 on ribbonfarm: A Brief History of Existential Terror · 2017-03-02T06:01:06.440Z · score: 0 (0 votes) · LW · GW

The non-binariness of things seems to me to be a fundamental tenet of the post-rationality thing (ribbonfarm is part of post-rationality). In particular, Chapman writes extensively on the idea that all categories are nebulous and structured.

I also think there are options to control your risk factor, depending on the field. You can found a startup, you can be the first startup employee, you can join an established startup, you can join a publicly traded corporation, you can get a job in the permanent bureaucracy. Almost every spot on the work risk-stability spectrum is available.

Perhaps the real question is why some particular fields or endeavors lend themselves to seemingly continuous risk functions. All of Viliam's categories are purely social structures, where other people are categorizing you. So perhaps it's not the risk inherent in an activity but the labeling that fits his intuition. In their map, people might label you a drug user if you smoke marijuana, but in the territory the continuum from "having used marijuana once" to "uses heroin daily" is not only continuous but many-dimensional.

Comment by 9eb1 on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-23T17:14:53.880Z · score: 2 (2 votes) · LW · GW

That is true for people you are going to become friends with, but the difference in negative environments is much bigger. If your job has a toxic social environment, you are free to find a new one at any time. You also have many bona fide levers for adjusting the environment, for example complaining to your boss, suing the company, etc.

When your high school has a toxic social environment, you have limited ability to switch out of that environment. Complaints about other students face an extremely high bar before they are taken into account, because attendance is mandatory and acting on them isn't in the administrators' best interests. If someone isn't doing something plainly illegal, it's unlikely you will get much help.

Comment by 9eb1 on A semi-technical question about prediction markets and private info · 2017-02-20T04:11:11.090Z · score: 0 (0 votes) · LW · GW

This is an interesting puzzle. I catch myself fighting the hypothetical a lot.

I think it hinges on what would be the right move if you saw a six and the market also had six as the favored option. In that situation, it would be appropriate to bet on the six, which would move it past the 50% equilibrium, because you have both the information from the market and the information from the die. I think maybe your equilibrium price can only exist if there is a single participant currently offering all of those bets, and they saw a six (so it's not really a true market yet, or there is only one informed participant and many uninformed ones). In that case, your having seen a six would imply a probability higher than 50% that it is the weighted side. Given that thinking, if you see the prediction market favoring a different number ("3"), you should indeed bet against it, because very little information is contained in the market (one die throw's worth).

The market showing a 50% price for one number and 10% for the rest is an unstable equilibrium. If you started out with a market-maker with no information offering 1/6 for all sides, and there were many participants who each saw only a single roll, the betting would increase on the correct side. At each price that favors that side, every new person who sees that side has both the guess from the market and the information from their roll; they would use that information to estimate a slightly greater probability, and the price would shift further in the correct direction. It would blow past 50% probability without even pausing.
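The runaway dynamic can be sketched with a toy Bayesian simulation. Everything here is made up for illustration: assume the die's weighted side (say side 6) comes up with probability 1/2 and the others 1/10 each, and each trader in turn takes the current market price as their prior, sees one private roll, and moves the price to their posterior.

```python
# Hypothetical stream of private rolls from a die whose weighted side is 6.
rolls = [6, 2, 6, 6, 5, 6, 1, 6, 6, 3, 6, 6]

price = 1 / 6  # market-maker starts with no information
for r in rolls:
    if r == 6:
        # P(see 6 | side 6 weighted) = 1/2, P(see 6 | not weighted) = 1/10
        price = (0.5 * price) / (0.5 * price + 0.1 * (1 - price))
    else:
        # P(see non-6 | side 6 weighted) = 1/2, P(see non-6 | not weighted) = 9/10
        price = (0.5 * price) / (0.5 * price + 0.9 * (1 - price))
    # nothing special happens at 50%: each trader just moves the price
    # to their posterior, so it sails straight past that level

print(round(price, 4))
```

In odds terms, each sighting of the weighted side multiplies the market odds by 5, and each non-sighting multiplies them by only 5/9, so on a realistic stream of rolls the price converges toward 1 rather than stabilizing at the 50%/10% configuration.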

Those don't seem like very satisfactory answers though.

Comment by 9eb1 on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-06T00:49:34.594Z · score: 4 (5 votes) · LW · GW

You may be interested in a post on gwern's site related to this topic.

Comment by 9eb1 on How often do you check this forum? · 2017-02-03T19:32:40.970Z · score: 1 (1 votes) · LW · GW

Is there any information on how well-calibrated the community predictions are on Metaculus? I couldn't find anything on the site. Also, if one wanted to get into it, could you describe what your process is?

Comment by 9eb1 on A question about the rules · 2017-02-02T02:08:41.323Z · score: 1 (1 votes) · LW · GW

It certainly has something to do with his post, even if the main point of the post was specifically about domains from which to choose examples for your writing.

Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.

This is the whole crux of the issue. According to some people, Less Wrong is a site to learn about and discuss rationality among people who are not already rational (or to "refine the art of human rationality"). According to some others, it's a community site around which aspiring rationalists should discuss topics of interest to the group. Personally I think phl43's posts are decent, and will likely improve with more practice, but I didn't think that particular post was very relevant or appropriate for Less Wrong specifically.

Comment by 9eb1 on How often do you check this forum? · 2017-01-31T18:11:42.533Z · score: 3 (3 votes) · LW · GW

I use Inoreader, which is paid and web-based but has browser plugins. I wouldn't say I "mostly" look for activity, since time is limited. The Open Thread is extremely active (I think it gets about 200 comments per day), but I don't frequently browse it. Slate Star Codex and Less Wrong are the only actual "communities" in my feed reader. The others that are directly or tangentially related are blogs, and most blogs don't really engender that much discussion.

Comment by 9eb1 on How often do you check this forum? · 2017-01-30T22:50:41.142Z · score: 3 (3 votes) · LW · GW

I use a feed reader, so I check out almost all the posts and links. I click through to the comments on almost all of them as well, since that's the real point.

The reddit /r/slatestarcodex community is very active too, and I like that.

Comment by 9eb1 on 80,000 Hours: EA and Highly Political Causes · 2017-01-26T23:41:45.267Z · score: 10 (10 votes) · LW · GW

This is a trend in effective altruism and it's a very dangerous one for their cause. As soon as people outside the movement think that EAs are trying to spend everyone's money on "pet causes," it puts a distinct upper limit on the growth of the movement. Going after political targets seems really appealing because the leverage seems so high. If we could somehow install Holden Karnofsky as president it would probably improve the lives of a billion people, but there is no majority group of people that cares more about the global poor than they care about their own money.

It's very appealing, psychologically, because big political wins have outsized importance in how you feel about yourself. When a big decision comes down (like when gay marriage was legalized by the Supreme Court) it is literally a cause for celebration. Your side won and your enemies lost. If instead you somehow got people to donate $50 million to Against Malaria Foundation, it wouldn't be that salient.

Since you reference Robin Hanson's idea of pulling ropes sideways, I figure I should provide a link.

Comment by 9eB1 on [deleted post] 2017-01-23T22:12:15.885Z

I basically agree with all of this with one quibble. I think it is very easy to underestimate the impact that LessWrong has had. There are a lot of people (myself included) who don't want to be associated with rationality, but whose thoughts it has nonetheless impacted. I know many of them in real life. LessWrong is weird enough that there is a social cost to having the first google result for your real name point to your LessWrong comments. If I am talking to someone I don't know well about LessWrong or rationality in general I will not give it a full-throated defense in real life, and in venues where I do participate under my real name, I only link to rationalsphere articles selectively.

Partially because of this stigma, many people in startupland will read the sequences, put them in their toolbox, then move on with their lives. They don't view continuing to participate as important, and surely much of the low-hanging fruit has long since been plucked. But if you look at the hints, you will find tidbits of information that point to rationality having an impact.

  1. Ezra Klein and Patrick Collison (CEO of Stripe) had an extensive conversation about rationality, and both are famous, notable figures.

  2. A member of the Bay Area rationality community was rumored to be a member of the Trump cabinet.

  3. Dominic Cummings (the architect of the Brexit "Leave" campaign) points to concept after concept that is both core to and adjacent to rationality, so much so that I would be genuinely surprised if he were not aware of it. (Perhaps this isn't good for rationality depending on your political views, but don't let it be said that he isn't winning).

  4. OpenAI was launched with $1B in funding from a Silicon Valley who's who and they have been in dialogue with MIRI staff (and interestingly Stripe's former CTO is the OpenAI CTO, obviously he knows about the rationalsphere). In general there has been tons of interest that has developed around AI alignment from multiple groups. Since this was the fundamental purpose of LessWrong to begin with, at least Eliezer is winning beyond what anyone could have ever expected based on his roundabout way of creating mindshare. We can't say with certainty that this wouldn't have happened without LessWrong, but personally I find it hard to believe that it didn't make a huge impact on Eliezer's influence within this field of thought.

Do we have an army of devout rationalists that are out there winning? No, it doesn't seem so. But rationalism has had a lot of children that are winning, even if they aren't looking back to improve rationalism later. Personally, I didn't expect LessWrong to have had as much impact as it has. I realized how hard it is to put these ideas into action when I first read the sequences.

Comment by 9eB1 on [deleted post] 2017-01-22T17:50:34.210Z

The Meaningness book's section on Meaningness and Time is all about culture viewed through Chapman's lens. Ribbonfarm has tons of articles about culture, most of which I haven't read. I haven't been following post-rationality for very long. Even on the front page now there is this which is interesting and typical of the thought.

Post-rationalists write about rituals quite a bit I think (e.g. here). But they write about it from an outsider's perspective, emphasizing the value of "local" or "small-set" ritual to everyone as part of the human experience (whether they be traditional or new rituals). When Rationalists write about ritual my impression is that they are writing about ritual for Rationalists as part of the project of establishing or growing a Rationalist community to raise the sanity waterline. Post-rationalists don't identify as a group to the extent that they want to have "post-rationalist rituals." David Chapman is a very active Buddhist, for example, so he participates in rituals (this link from his Buddhism blog) related to that community, and presumably the authors at ribbonfarm observe rituals that are relevant within their local communities.

Honestly, I don't think there is much in the way of fundamental philosophical differences. I think it's more like Rationalists and post-Rationalists are drawn from the same pool of people, but some are more interested in model trains and some are more interested in D&D. It would be hard for me to make this argument rigorous though, it's just my impression.

Comment by 9eB1 on [deleted post] 2017-01-22T07:29:13.719Z

My observation is that post-rationalists are much more interested in culture and community type stuff than the Rationalist community is. This is not to say that the Rationalist community doesn't value culture and community, and in fact it gets discussed quite frequently (e.g. the solstice has been established explicitly to create a sense of community and "the divine"). The difference is that while Rationalists are most interested in biases, epistemology, and decision theory, post-rationalists are most interested in culture, community, and related things. Mainstream Rationalists are usually only loosely tied to Rationalism as a culture (otherwise the solstice wouldn't exist), but mostly they define their interests as whatever wins and the intellectual search for right action. Post-rationalists, on the other hand, view the world through a lens where culture and community are highly important, which is why they think that Rationalism even represents a thing you can be "post" to, while many Rationalists don't see it that way.

I don't think that Rationalists are wrong when they write about culture; they usually have well-argued points that point to true things. The main difference is that post-rationalists have a sort of richness to their descriptions and understanding that is lacking in Rationalist accounts. When Rationalists write about culture it has an ineffable dryness that doesn't ring true to my experience, while post-rationalist accounts don't. The main exception to this is Scott Alexander, but in most other cases I think the rule holds.

Ultimately, I don't think there is much difference between the quality of insights offered by Rationalists and post-rationalists, and I don't think one is more right than the other. When reading the debates between Chapman and various Rationalist writers, the differences seem fairly minute. But there is a big difference in the sorts of things they write about. For myself, I find both views interesting and so far have not noticed any significant actual conflict in models.

Edit: Another related difference is that post-rationality authors are more willing to go out on a limb with ideas. Most of their ideas, dealing in softer areas, are necessarily less certain. It's not even clear that certainty can be established with some of their ideas, or whether they are just helpful models for thinking about the world. In the Rationalsphere people prefer arguments that are clearly backed up at every stage, ideally with peer reviewed evidence. This severely limits the kind of arguments you can make, since there are many things that we don't have research on and will plausibly never have research on.

Comment by 9eb1 on Dominic Cummings: how the Brexit referendum was won · 2017-01-14T02:47:46.601Z · score: 0 (0 votes) · LW · GW

Thanks for noting that, I found some more interesting discussion there (Linked for others' convenience).

Comment by 9eb1 on Dominic Cummings: how the Brexit referendum was won · 2017-01-14T02:27:49.268Z · score: 2 (2 votes) · LW · GW

Part of the reason is also that this is a UK issue and most LessWrong readers are not from there, so people have a little bit more of an outsider's or non-tribalist perspective on it (although almost all LW commenters would certainly have voted for Remain).

Comment by 9eb1 on Open thread, Jan. 02 - Jan. 08, 2017 · 2017-01-07T19:18:03.795Z · score: 3 (3 votes) · LW · GW

SlateStarCodex open threads or the weekly culture war thread on probably.

Comment by 9eb1 on Claim explainer: donor lotteries and returns to scale · 2016-12-31T21:50:59.314Z · score: 1 (1 votes) · LW · GW

It's only relevant if you're so confident in it that you don't feel the need to do any double-checking - that the right amount of research to do is zero or nearly zero.

My contention is that the people who are willing to participate in this have already done non-negligible amounts of thinking on this topic, because they are EA hobbyists. How could one be engaging with the EA community if they are not spending time thinking about the core issues at hand? Because of diminishing marginal returns, they are already paying the costs for the research that has the highest marginal value, in terms of their engagement with the community and reflection on these topics. I do not believe this is addressed in the original article. I believe this is our fundamental disagreement.

The objection of value misalignment can't be priced in because there is no pricing mechanism at play here, so I'm not sure what you mean (except for paulfchristiano's fee for administering the fund). That exact point was not the main thrust of the paragraph, however. The main thrust of that paragraph was to explain the two possible outcomes in the lottery, and explain how both lead to potential negative outcomes in light of the diminishing marginal returns to original research and the availability of a person's time in light of outside circumstances.

I am in the target market in the sense that I donate to EA charities, and I think that SOMEONE doing research improves its impact, but I guess I am not in the target market in the sense that I think that person has to be me.

Regarding your snipes about my not reading the article: it's true that if I had more time and more interest in this topic, I would offer better-quality engagement with your ideas, so I apologize that I lack those things.

Comment by 9eb1 on Claim explainer: donor lotteries and returns to scale · 2016-12-31T00:15:17.330Z · score: 6 (6 votes) · LW · GW

I think practical interest in these things is somewhat bizarre.

All of the people that would be interested in participating are already effective altruists. That means that as a hobby they are already spending tons of time theorizing on what donations they would make to be more efficient. Is the value of information from additional research really sufficient to make it worthwhile in this context? Keep in mind that much of the low-hanging analysis from a bog-standard EA's perspective has already been performed by GiveWell, and you can't really expect to meaningfully improve on their estimates. This limits the pool of rational participants to only those who know they have values that don't align with the community at large.

For me, the whole proposition is a net negative. If I don't get selected, then someone else chooses what to do with my money. Since they don't align with my values, they might donate it to the KKK or whatever. If I DO get selected, it's arguably worse, because now I have to do a bunch of research that has low value to me to make a decision. Winning the lottery to spend $100,000 of other people's money doesn't suddenly endow me with tens or hundreds of hours to use for extra research (unless I can spend some of the money on my research efforts...).
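For concreteness, the lottery mechanics being discussed can be sketched with toy numbers (the $1,000 contributions, the 100-donor pool, and the 20-hour research cost are all assumptions for illustration, not figures from the article):

```python
import random

def donor_lottery(contributions, seed=0):
    """Toy donor-lottery draw: each participant wins the whole pot with
    probability proportional to their contribution (assumed mechanics)."""
    rng = random.Random(seed)
    pot = sum(contributions.values())
    names = list(contributions)
    weights = [contributions[n] for n in names]
    winner = rng.choices(names, weights=weights, k=1)[0]
    return winner, pot

# 100 donors of $1,000 each: any one donor directs the full $100,000
# with probability 1%, so each donor's expected dollars directed is
# unchanged at $1,000...
participants = {f"donor_{i}": 1_000 for i in range(100)}
winner, pot = donor_lottery(participants)
print(winner, pot)

# ...but a fixed research cost (say 20 hours) is paid only by the winner,
# so the expected research burden per donor falls from 20 hours to 0.2.
expected_hours = 0.01 * 20
print(expected_hours)
```

That amortization of research time is the scheme's claimed benefit; the comment's objection is that the hours saved were low-value to begin with, while the winner's hours (and the value misalignment borne by the losers) are real costs.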

The complexity of the system, its administration, and the time spent thinking about whether to participate are all deadweight loss in the overall system. Someone, or many someones, has to spend time considering whether to participate and managing the actual money and logistics. This is all conceptual overhead for the scheme.

Not to get too psychoanalytical or whatever, but I think this stems partly from the interest of people in the community to appreciate complex, clever, unusual solutions BECAUSE they are complex, clever and unusual. My engagement with effective altruism is very boring. I read the GiveWell blog and occasionally give them money. It's not my hobby, so I don't participate in things like the EA forum.

If you are considering participating, first figure out what actual research you would do if you won the award, what the VoI is for that time, and how you would feel if you either had to do that research or had someone else choose the least efficient plausible alternative for your values. Think about whether the cleverness and complexity of this system is actually buying you anything. If you like being contrarian and signalling your desire to participate in schemes that show you Take Utility Seriously, by all means, go for it.

Comment by 9eb1 on What if jobs are not the solution, but the problem · 2016-11-30T18:56:57.917Z · score: 1 (1 votes) · LW · GW

I don't think we disagree, fundamentally. The fact that GDP is a measure of currency-denominated trading IS the abstraction layer. The fact that it doesn't capture barter and value you create for yourself is part of the friction in the abstraction. (In the European Union GDP figures actually include estimates for black market transactions and barter, but not housework and the like.)

Taking value created as a focus, my conclusion is much the same. If 30% of people leave the labor force, the amount of value being generated for other people will decrease by some amount between 0% and 30%. If those people continue doing "activities that benefit other people but are technically not paid work" that amount will be smaller than if they just sit around watching TV, for sure.

People who are no longer "working" but still "creating value" face a couple big sources of inefficiencies that would certainly mean that they produce less value:

  • The pricing function is a useful technology they would no longer have access to. It's likely that the work they choose to do for others will be less valuable than if they kept working. If instead they are working for an alternative currency, such as in a reputation economy, they can't be said to have stopped working in the way the article posits.

  • If they aren't working for an organization, they don't benefit from the efficiencies that naturally arise from organization.

If you strip the bizarre political grab-bag of issues and economic misinterpretation from the article, it is definitely pointing at an issue we will face. What do we do when people no longer provide a net economic benefit from working? There are already people who are completely unemployable (and there always have been, e.g. the severely intellectually disabled), but with better technology that group will continue to grow. But if the answer to that is to give everyone the right to leisure, tens of millions who actually are on-net productive are going to take that option and we will be left dramatically poorer than otherwise. It's that answer that I take exception to, not the issue.

Comment by 9eb1 on What if jobs are not the solution, but the problem · 2016-11-30T04:30:13.592Z · score: 3 (3 votes) · LW · GW

Some people have a very weird understanding of economics that works for high-level analyses when considering situations similar to our current economic situation, but isn't grounded all the way down to the level of individual people working. What they consider economics is only the highest level of abstraction, and they don't realize that when you are talking about gigantic changes in the economic regime, you can't assume that the top levels of abstraction will be mostly correct or even useful. Even professional economists sometimes forget this fact, so it's excusable.

GDP is an abstraction that grounds out in people doing work, period. In America, 125 million people go to work every day for an entire year, and all that work goes into creating our $18T of GDP. That GDP accounts for all the movies you watch, the food you eat, the police that protect you, the bombs you drop and everything else. If you want 30% of those people to literally stop working, it's going to come out of GDP somehow, and GDP is a decent but not perfect proxy for things we want as a society. Sure, the 30% of people who stop working are the least productive so the impact on GDP will be less than 30% reduction, but even so you would be left getting dramatically less stuff each year than you otherwise could have. If you consider the effect this would have on technological development, it is even more dramatic, because technological progress compounds.
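The claim that removing the least productive 30% of workers cuts output by much less than 30% can be illustrated with a toy simulation (the log-normal productivity distribution and the scaled-down worker count are assumptions, chosen only because real productivity is heavily right-skewed):

```python
import random

random.seed(0)
# Toy economy: 125 million workers, scaled down to 125,000 for speed,
# with log-normally distributed individual productivity (an assumption).
workers = sorted(random.lognormvariate(0, 1) for _ in range(125_000))

gdp = sum(workers)
cutoff = int(0.30 * len(workers))
gdp_without_bottom_30 = sum(workers[cutoff:])

loss = 1 - gdp_without_bottom_30 / gdp
print(f"Output lost if the least productive 30% stop working: {loss:.1%}")
# Well under 30%, but far from zero -- which is the comment's point.
```

The exact figure depends entirely on the assumed distribution; the qualitative conclusion (a real but sub-proportional loss, compounding through slower technological progress) does not.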

Comment by 9eb1 on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T18:37:48.253Z · score: 24 (24 votes) · LW · GW

At one point I was planning on making a contribution. It was difficult just getting the code set up, and there was very little documentation on the big picture of how everything was supposed to work. It is also very frustrating to run in development mode. For example, on Mac you have to run it from within a disk image, the VM didn't work, and setting up new user accounts for testing purposes was a huge pain.

I started trying to understand the code after it was set up, and it is an extremely confusing mess of concepts with virtually no comments, and I am fluent in web development with Python. After 4-6 hours I was making progress on understanding what I needed to make the change I was working on, but I wasn't there yet. I realized that making the first trivial contribution would probably take another 10-15 hours and stopped. The specific feature I was going to implement was an admin view link that would show the usernames of people who had upvoted / downvoted a comment.

The issues list on GitHub represents at least several hundred hours of work. I think 3 or 4 contributors could probably do a lot of damage in a couple months of free time, if it weren't quite so unenjoyable. $10K is definitely a huge underestimate for paying an outsider. I do think that a lot of valuable low-hanging fruit, like stopping karma abuses and providing better admin tools, could be done for $10-20K though.

Comment by 9eb1 on Industry Matters · 2016-11-19T20:33:58.547Z · score: 1 (1 votes) · LW · GW

Estimates of productivity growth over long periods of time are entangled with hedonics in a complex way that makes it difficult to make very strong statements. It's very possible that manufacturing productivity is being understated due to increases in the quality or variety of goods that is not reflected in inflation figures. Accelerating innovation makes hedonics adjustments more difficult, and the manufacturing sector is very susceptible to these changes.

Comment by 9eb1 on Voting is like donating hundreds of thousands to charity · 2016-11-03T05:40:55.336Z · score: 1 (1 votes) · LW · GW

Yeah, 75% is pure nonsense. 50% of the budget is Social Security and Medicare/Medicaid/CHIP/marketplace subsidies, which are almost entirely locked in. Maybe at the margin policy can adjust this somewhat, especially the marketplace subsidies portion. 16% is defense spending, which reflects decisions made by the military bureaucracy with a little bit of pork-barrel politics. Maybe they could adjust the growth rate of that up or down 5%. 10% is safety net and welfare, 6% is interest on the national debt, and 8% is pensions that have already been agreed to. 11% of the budget is for the rest of the government's services, and most of those budgets are requested by the bureaucracies and individually fulfilled plus or minus some percent. Then, the president has to share the responsibility for setting those budgets with the House and Senate. I would be surprised if 15% of the budget went to different people under Obama than under John McCain, and, in terms of realistic impact, where very similar entities have relatively the same effect on my political views, the number is probably in the single-digit percentages.

Also, I would like to point out for Gleb's benefit that "tenure-track professor" sounds worse than simply "professor" for the same reason that "junior carpenter" sounds worse than "carpenter." Most people don't intuitively realize that the typical professor is not on the tenure track, so the distinction wouldn't even be salient except that "tenure-track" was mentioned.

Budget source

Comment by 9eb1 on There are 125 sheep and 5 dogs in a flock. How old is the shepherd? / Math Education · 2016-10-18T18:19:51.689Z · score: 0 (0 votes) · LW · GW

This reminds me of this comment of mine, although it is not directly related.

Comment by 9eb1 on There are 125 sheep and 5 dogs in a flock. How old is the shepherd? / Math Education · 2016-10-18T18:15:35.797Z · score: 0 (0 votes) · LW · GW

I'm not sure, I have different forms of identification that state that they are different colors, but they are not brown.

Comment by 9eb1 on There are 125 sheep and 5 dogs in a flock. How old is the shepherd? / Math Education · 2016-10-17T17:11:24.919Z · score: 0 (0 votes) · LW · GW

The prior is that his eyes are brown, since most people have brown eyes. But in the most populous countries with brown eyes and buses, the buses always have way more than 7 people on them, so that slightly shifts the probability towards non-brown colors. But even in countries where there are fewer bus passengers, most people have brown eyes, so brown is still most likely.
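The update described here is ordinary Bayesian conditioning. With invented numbers (the prior and likelihoods below are illustrative assumptions, not data) it looks like:

```python
# Prior over the driver's eye color, and likelihood of the observed
# evidence ("a bus with only 7 passengers") under each hypothesis.
prior = {"brown": 0.79, "other": 0.21}
# Small buses are assumed rarer in the populous, mostly-brown-eyed
# countries, so the evidence is slightly less likely given brown eyes.
likelihood = {"brown": 0.10, "other": 0.15}

# Bayes' rule: posterior ∝ prior × likelihood, normalized.
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

for h, p in posterior.items():
    print(f"{h}: {p:.3f}")
# Brown drops a little but remains by far the most likely color.
```

The shape matches the verbal argument: the passenger count shifts probability away from brown without coming close to overturning the prior.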

Comment by 9eb1 on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T17:01:32.249Z · score: 3 (3 votes) · LW · GW

I would be very interested in this as well. In the meantime, there is a subreddit for the site that has a thread with best posts for a new reader, and a thread on people's favorite things from TLP.

Comment by 9eb1 on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-26T15:06:19.462Z · score: 5 (5 votes) · LW · GW

I have read Convict Conditioning. The programming in that book (that is, the way the overall workout is structured) is honestly pretty bad. I highly recommend doing the reddit /r/bodyweightfitness recommended routine.

  1. It's free.

  2. It has videos for every exercise.

  3. It is a clear and complete program that actually allows for progression (the convict conditioning progression standards are at best a waste of time) and keeps you working out in the proper intensity range for strength.

  4. If you are doing the recommended routine you can ask questions at /r/bodyweightfitness.

The main weakness of the recommended routine is the relative focus of upper body vs. lower body. Training your lower body effectively with only bodyweight exercises is difficult though. If you do want to use Convict Conditioning, /r/bodyweightfitness has some recommended changes which will make it more effective.

Comment by 9eb1 on New music powers · 2016-09-02T15:24:05.533Z · score: 1 (1 votes) · LW · GW

A similar thing happened to me with music as a result of practicing mindfulness meditation. I was listening to music in my car and I thought, "Well, I should bring some mindfulness to this task." One of the common things you do in mindfulness is try to direct your mental attention at more specific aspects of something you are perceiving, and I realized that paying attention to individual musical instruments had a significant impact on how the song seemed. I wasn't exactly surprised, because many things are like this (maybe everything?), but it was neat how strongly your mind can filter out other aspects. This is fundamentally the same as when people write tasting notes for wine, or when a designer focuses on the blank space in a design, or when you pay attention to the feeling of your butt on your chair.

Next time you are eating, try to pay separate attention to the flavor, the smell and the texture of what you're eating. Also pay attention to the way it feels different to swallow the food than to chew it. If you are hungry, you may notice that the satisfaction-sensation from eating actually comes from the act of swallowing, rather than chewing, which is why diets where you chew food and spit it out have never been popular.

Comment by 9eb1 on Hedging · 2016-08-29T16:44:54.000Z · score: 1 (1 votes) · LW · GW

The original site was, but its robots.txt disallows the Internet Archive. Someone has recovered some of the blog posts and they are posted here. There are also a number of articles that have been captured at a later date, which actually show the epistemic status markers I was talking about, described here.

Comment by 9eb1 on Do you want to be like Kuro5hin? Because this is how you get to be like Kuro5hin. · 2016-08-27T00:14:03.704Z · score: 16 (16 votes) · LW · GW

LessWrong is no longer even large or active enough for downvoting to be necessary. The activity of posts here is similar to Usenet, which had no moderation.