ribbonfarm: A Brief History of Existential Terror 2017-03-01T01:18:52.888Z · score: 1 (2 votes)
Wireheading Done Right: Stay Positive Without Going Insane 2016-11-22T03:16:33.340Z · score: 4 (4 votes)


Comment by 9eb1 on What are sensible ways to screen event participants to reduce COVID-19 risk? · 2020-03-04T04:43:32.858Z · score: 5 (3 votes) · LW · GW

As for cutoffs, just look up the maximum healthy forehead temperature, maybe 37.5 °C. More important is to have hand sanitizer pumps prominently available, encourage people to use them before and after the event, and remind them not to touch their faces.

Comment by 9eb1 on In defense of deviousness · 2020-01-15T22:59:34.637Z · score: 8 (6 votes) · LW · GW

There are several possible sources of spaghetti code:

  1. A complex domain, as you mention, where a complex entangled mess is the most elegant possible solution.
  2. Resource constraints and temporal tradeoffs. Re-architecting the system after each new piece of functionality is too expensive in time, even when a new architecture could simplify the overly complex design. Social forces like "the market" or "grant money" mean it makes more sense to build the feature in the poorly architected way.
  3. Performance optimizations. If your code needs to fit inside a 64 KB ROM, you may be very limited in your ability to structure it cleanly.
  4. Lack of requisite skill. A person may not be able to provide a simple design even though one exists, even given infinite time.

If I had to guess, number 2 is the largest source of spaghetti code that Less Wrong readers are likely to encounter. Number 4 may account for the largest volume of spaghetti code worldwide, because of the incredible amount of line-of-business code churned out by major outsourcing companies. But even that is a reflection of economic realities. Therefore, one could say that spaghetti code is primarily an economic problem.

Comment by 9eb1 on Might humans not be the most intelligent animals? · 2020-01-06T14:41:10.885Z · score: 1 (1 votes) · LW · GW

Sorry, I could have been clearer. The empirical evidence I was referring to was the existence of human civilization, which should inform priors about the likelihood of other animals being as intelligent.

I think you are referring to a particular type of "scientific evidence" which is a subset of empirical evidence. It's reasonable to ask for that kind of proof, but sometimes it isn't available. I am reminded of Eliezer's classic post You're Entitled to Arguments, But Not (That Particular) Proof.

To be honest, I think the answer is that there is just no truth to this matter. David Chapman might say that "most intelligent" is nebulous, so while there can be some structure, there is no definite answer as to what constitutes "most intelligent." Even when you try to break down the concept further, to "raw innovative capacity" I think you face the same inherent nebulosity.

Comment by 9eb1 on What will quantum computers be used for? · 2020-01-02T13:31:09.347Z · score: 1 (1 votes) · LW · GW

The database search thing is, according to my understanding, widely misinterpreted. As Wikipedia says:

Although the purpose of Grover's algorithm is usually described as "searching a database", it may be more accurate to describe it as "inverting a function". In fact since the oracle for an unstructured database requires at least linear complexity, the algorithm cannot be used for actual databases.

To actually build Quantum Postgres, you need something that can store an enormous number of qubits, like a hard drive.
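To make the "inverting a function" framing concrete, here is a minimal classical statevector sketch of Grover's algorithm (the problem size and marked item are invented for illustration; this simulates the math, not a quantum device):

```python
import numpy as np

def grover_probs(n_items: int, marked: int) -> np.ndarray:
    """Simulate Grover's algorithm and return measurement probabilities."""
    state = np.full(n_items, 1 / np.sqrt(n_items))  # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(n_items))  # ~O(sqrt(N)) oracle calls
    for _ in range(iterations):
        state[marked] *= -1                # oracle: phase-flip the marked item
        state = 2 * state.mean() - state   # diffusion: inversion about the mean
    return state ** 2

probs = grover_probs(64, marked=13)
print(round(probs[13], 3))  # ≈ 0.997 after only 6 oracle calls, vs ~32 classically
```

The speedup only counts oracle *evaluations*: the oracle must be a cheap function of its input, which is exactly why an unstructured database, where every lookup touches storage, gets no benefit.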

Comment by 9eb1 on Might humans not be the most intelligent animals? · 2019-12-24T14:23:16.813Z · score: 3 (2 votes) · LW · GW

Your take is contrarian as I suspect you will admit. There is quite a bit of empirical evidence, and if it turned out that humans were not the most intelligent it would be very surprising. There is probably just enough uncertainty that it's still within the realm of possibility, but only by a small margin.

Comment by 9eb1 on Against Premature Abstraction of Political Issues · 2019-12-19T15:18:56.897Z · score: 7 (2 votes) · LW · GW

This sort of existence argument is reasonable for hypothetical superhuman AIs, but real-world human cognition is extremely sensitive to the structure we can find or make up in the world. Sure, just saying "politics" does not provide a clear reference class, so it would be helpful to understand what you want to avoid about politics and engineer around it. My hunch is that avoiding your highly technical definition of bad discourse, the one you are using to replace "politics," just leads to a lot of time spent on political analysis, with approximately the same topics avoided as under a very simple rule of thumb.

I stopped associating or mentioning LW in real life largely because of the political (maybe some parts cultural as well) baggage of several years ago. Not even because I had any particular problem with the debate on the site or the opinions of everyone in aggregate, but because there was just too much stuff to cherry-pick from in our world of guilt by association. Too many mixed signals for people to judge me by.

Comment by 9eb1 on CO2 Stripper Postmortem Thoughts · 2019-12-08T01:00:28.251Z · score: 4 (2 votes) · LW · GW

I was very confused about your proposed setup after reading the wikipedia article on heat exchangers, since I couldn't figure out what thermal masses you proposed exchanging heat between. But I found this article which resolved my confusion.

Comment by 9eb1 on Do we know if spaced repetition can be used with randomized content? · 2019-11-17T23:14:12.413Z · score: 6 (4 votes) · LW · GW

It is still useful to memorize the flashcards. The terminology provides hooks that will remind you of the conceptual framework later. If you want to practice actually recognizing the design patterns, you could read some real code and actively try to recognize the design patterns in it. When you want to learn to do something, it's important to practice a task that is as close as possible to what you are trying to learn.

In real life when a software design pattern comes up, it's usually not as something that you determine from the code. More often it's by talking with the author, reading the documentation, or inferring from variable names.

The strategy described in that post, assuming you have read it, seems to suggest that just using Anki to cover enough of the topic space probably gives you a lot of benefits, even if you aren't doing the mental calculation.

Comment by 9eb1 on Where should I ask this particular kind of question? · 2019-11-03T14:24:06.650Z · score: 2 (2 votes) · LW · GW

Perhaps the community to ask on mostly doesn't depend on the expertise of the denizens, but your ability to get a response. If so, it matters more whether your question is something that will "hook" the people there, which depends more on the specific topic of the question than on the knowledge required to answer it. For example, if it were about the physics of AI, you'd be likely to get an answer on LessWrong. If it's about academic physics, reddit might be better. If you are using it to write fanfiction, just ask on a fanfiction forum.

It matters quite a bit how hypothetical the scenario is. For example, is it a situation that is actually physically impossible? Does it likely have a specific concrete answer, even if neither you nor anyone else knows it, or will it end up being a matter of interpretation? Would a satisfying answer to the question advance the field of physics or any other field?

Anyway, another option is Twitter. Personally, I'd ask on LessWrong, PhysicsOverflow, or Reddit.

Comment by 9eb1 on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-31T11:31:52.121Z · score: 4 (3 votes) · LW · GW

Yes, that seems like a reasonable perspective. I can see why that would be annoying.

Comment by 9eb1 on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-30T23:18:54.036Z · score: 3 (3 votes) · LW · GW

I really appreciate that this post was on the front page, because I wouldn't have seen it otherwise and it was interesting. From an external viewer perspective on the "status games" aspect of it, I think the front page post didn't seem like a dominance attempt, but read as an attempt at truth seeking. I also don't think that it put your arguments in a negative light. Your comments here, on the other hand, definitely feel to an outside observer to be more status-oriented. My visceral reaction upon reading your comment above this one, for example, was that you were trying to demote IFS because it sounds like you make a living promoting this other non-IFS approach.

That said, I remember reading many of your posts on the old LessWrong and I have occasionally wondered what you had gotten up to, since you had stopped posting.

Comment by 9eb1 on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-22T15:02:40.996Z · score: 1 (1 votes) · LW · GW

There appears to be some sort of bug with the editor, I had to switch to markdown mode to fix the comment. Thanks for the heads up.

I use Anki for this purpose and it works well as long as you already have a system to give you a strong daily Anki review habit.

Comment by 9eb1 on What technical prereqs would I need in order to understand Stuart Armstrong's research agenda? · 2019-09-19T04:30:05.093Z · score: 2 (2 votes) · LW · GW

If this is true, then this post by Michael Nielsen may be interesting to the poster. He uses a novel method of understanding a paper by using Anki to learn the areas of the field relevant to, in this case, the AlphaGo paper. I don't have a good reason to do this right now, but this is the strategy I would use if I wanted to understand Stuart's research program.

Comment by 9eb1 on The Hacker Learns to Trust · 2019-06-22T13:10:15.360Z · score: 18 (10 votes) · LW · GW

The phenomenon I was pointing out wasn't exactly that the person's decision was made because of status. It was that a prerequisite for them changing their mind was that they were taken seriously and engaged with respectfully. That said, I do think that it's interesting to understand the way status plays into these events.

First, they started the essay with a personality-focused explanation:

To explain how this all happened, and what we can learn from it, I think it’s important to learn a little bit more about my personality and with what kind of attitude and world model I came into this situation.


I have a depressive/paranoid streak, and tend to assume the worst until proven otherwise. At the time I made my first twitter post, it seemed completely plausible in my mind that no one, OpenAI or otherwise, would care or even notice me. Or, even worse, that they would antagonize me.

The narrative that the author sets up is that they had irrational or emotional reasons for behaving the way they did, then they considered longer and changed their mind. They also specifically call out their perceived lack of status as an influencing factor.

If someone has an irrational, status-focused explanation for their own initial reasoning, and then we see high-status people providing them extensive validation, it doesn't mean that they changed their mind because of the high-status people, but it's suggestive. My real model is that they took those ideas extra seriously because the people were nice and high status.

Imagine a counterfactual world where they posted their model, and all of the responses they received were the same logical argument, but instead made on 4Chan and starting with "hey fuckhead, what are you trying to do, destroy the world?" My priors suggest that this person would have, out of spite, continued to release the model.

The gesture they are making here, not releasing the model, IS purely symbolic. We know the model is not as good as mini-GPT2. Nonetheless, it might have been useful to real hackers who aren't backed by large corporate interests, either for learning or just for understanding ML better. Since releasing the model is not a bona fide risk, part of not releasing it is so they can feel like they are part of history. Note the end, where they talk about the precedent they are setting by not releasing it.

I think the fact that the model doesn't actually work is an important aspect of this. Many hackers would have done it as a cool project and released it without pomp, but this person put together a long essay, explicitly touting the importance of what they'd done and the impact it would have on history. Then, it turned out the model did not work, which must have been very embarrassing. It is fairly reasonable to suggest that the person then took the action that made them feel the best about their legacy and status: writing an essay about why they were not releasing the model for good rationalist approved reasons. It is not even necessarily the case that the person is aware that this is influencing the decision, this is a fully Elephant in the Brain situation.

When I read that essay, at least half of it is heavily-laden with status concerns and psychological motivations. But, to reiterate: though pro-social community norms left this person open to having their mind changed by argument, probably the arguments still had to be made.

How you feel about this should probably turn on questions like "Who has the status in this community to have their arguments taken seriously? Do I agree with them?" and "Is it good for only well-funded entities to have access to current state-of-the-art ML models?"

Comment by 9eb1 on The Hacker Learns to Trust · 2019-06-22T05:07:25.669Z · score: 7 (9 votes) · LW · GW

As is always the case, this person changed their mind because they were made to feel valued. The community treated what they'd done with respect (even though, fundamentally, they were unsuccessful and the actual release of the model would have had no impact on the world), and as a result they capitulated.

Comment by 9eb1 on BYOL (Buy Your Own Lunch) · 2018-04-09T01:49:14.812Z · score: 6 (2 votes) · LW · GW

It is not at all rude, at a business lunch, to say "Oh, thank you!" when someone says they will pay for lunch. Especially if you are a founder of a small company and meeting with people at more established companies who will likely be able to expense the meal. Those people don't care, because it's not their money.

If you are meeting with people in a similar position (fellow founders), you can just ask to split which people will either accept or they will offer to pay, in which case see above.

If you are meeting with casual acquaintances, you can also say "Split the check?" and it's totally fine.

The weirdness points of adding that to your e-mail and including a link to this post are far greater than those of saying "Thank you" when someone else offers to pay, so carefully consider whether they're worth spending this way.

Comment by 9eb1 on Is Rhetoric Worth Learning? · 2018-04-07T04:56:39.448Z · score: 7 (2 votes) · LW · GW

In the best-case scenario, a fellow traveler will already have studied rhetoric and will be able to provide the highlights relevant to LWers. In the spirit of offering the "obvious advice": I've heard the "Very Short Introduction" series of books can give you an overview of the main ideas of a field, and maybe that will be helpful for guiding your research beyond the things that are easily googleable.

Comment by 9eb1 on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-26T07:04:36.709Z · score: 13 (4 votes) · LW · GW

The case of the Vietnamese monk who famously set himself on fire may meet your criteria. The Vietnamese government claimed that he had drugged himself, but it's hard to imagine a drug that would let you get out of a car under your own power, walk to a seated position, and light a match to set yourself on fire, yet still have no reaction as your flesh burns off.

Comment by 9eb1 on Hammertime Postmortem · 2018-03-25T22:11:58.362Z · score: 1 (1 votes) · LW · GW

It's too bad the link for the referenced *"Focusing," for skeptics* article in your post on the tactic only leads to a 404 now. I wonder if it was taken down intentionally?

Comment by 9eb1 on Feedback on LW 2.0 · 2017-10-01T20:04:25.935Z · score: 8 (8 votes) · LW · GW

I love that the attempt is being made and I hope it works. The main feedback that I have is that the styling of the comment section doesn't work for me. One of the advantages of the existing LessWrong comment section is that the information hierarchy is super clear. The comments are bordered and backgrounded, so when you decide to skip a comment your eye can very easily scan down to the next one. At the new site all the comments are relatively undifferentiated, so it's much harder to skim them. I also think that the styling of the blockquotes in the new comments needs work. Currently there is not nearly enough difference between blockquoted text and comment text. It needs more spacing and more indentation, and preferably a typographical difference as well.

Comment by 9eb1 on LW 2.0 Strategic Overview · 2017-09-17T14:47:31.887Z · score: 0 (0 votes) · LW · GW


Since then I've thought of a couple more sites that are neither hierarchical nor tag-based: Facebook and eHow-style sites.

There is another pattern that is neither hierarchical, tag-based nor search-based, which is the "invitation-only" pattern of a site like pastebin. You can only find content by referral.

Comment by 9eb1 on LW 2.0 Strategic Overview · 2017-09-17T03:13:06.707Z · score: 0 (0 votes) · LW · GW

That is very interesting. An exception might be "Google search pages." Not only is there no hierarchical structure, there is also no explicit tag structure and the main user engagement model is search-only. Internet Archive is similar but with their own stored content.

With respect to TV Tropes, I'd note that while it is nominally organized according to those indexes, the typical usage pattern is as a sort of pure garden path in my experience.

Comment by 9eb1 on Priors Are Useless · 2017-06-21T14:17:15.296Z · score: 6 (6 votes) · LW · GW

Now analyze this in a decision theoretic context where you want to use these probabilities to maximize utility and where gathering information has a utility cost.
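A toy sketch of what that analysis looks like, with all probabilities and payoffs invented for illustration: the expected value of information puts an upper bound on what you should pay to gather it, priors or no priors.

```python
import numpy as np

# Toy setup: two hypotheses, a bet that pays 1 utilon if you pick the true
# one and 0 otherwise (all numbers invented for illustration).
prior = np.array([0.5, 0.5])
# A noisy test: P(test says "A" | A) = 0.8, P(test says "A" | B) = 0.3
lik_says_A = np.array([0.8, 0.3])

# Without the test, the best you can do is bet on the prior favorite.
ev_no_test = prior.max()

# With the test, average the best achievable posterior over both outcomes.
p_A = prior @ lik_says_A
post_if_A = prior * lik_says_A / p_A
post_if_B = prior * (1 - lik_says_A) / (1 - p_A)
ev_test = p_A * post_if_A.max() + (1 - p_A) * post_if_B.max()

voi = ev_test - ev_no_test
print(round(voi, 2))  # 0.25: pay for the information only if it costs less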

Comment by 9eb1 on Change · 2017-05-07T07:14:59.896Z · score: 4 (3 votes) · LW · GW

This was incomprehensible to me.

Comment by 9eb1 on Open thread, Apr. 03 - Apr. 09, 2017 · 2017-04-04T22:15:50.777Z · score: 0 (0 votes) · LW · GW

Bryan Caplan responded to this exchange here

Comment by 9eb1 on Open thread, Apr. 03 - Apr. 09, 2017 · 2017-04-04T16:06:36.654Z · score: 5 (5 votes) · LW · GW

I think no one would argue that the rationality community is at all divorced from the culture that surrounds it. People talk about culture constantly, and are looking for ways to change the culture to better address shared goals. It's sort of silly to say that that means it should be called the "irrationality community." Tyler Cowen is implicitly putting himself at the vantage point of a more objective observer with the criticism, which I find ironic.

Where Tyler is wrong is that it's not JUST another kind of culture. It's a culture with a particular set of shared assumptions, and it's nihilistic to imply that all cultures are equal no matter from what shared assumptions they issue forth. Cultures are not interchangeable. Tyler would also have to admit (and I'm guessing he likely would admit if pressed directly) that his culture of mainstream academic thought is "just another kind of religion" to exactly the same extent that rationality is, it's just less self-aware about that fact.

As an aside, I think Lumifer is a funny name. I always associate it with Lumiere from Beauty and the Beast, and with Lucifer. Basically I always picture your posts as coming from a cross between a cartoon candle and Satan.

Comment by 9eb1 on IQ and Magnus Carlsen, Leo Messi and the Decathlon · 2017-03-29T19:42:26.306Z · score: 0 (0 votes) · LW · GW

You are correct, there are things that can negatively impact someone's IQ. With respect to maximizing, the fact that people have been trying for decades to find something that reliably increases IQ, and everything has led to a dead end, means we are pretty close to what's achievable without revolutionary new technology. Maybe you aren't at 100% of what's achievable, but you're probably at 95% (and of course percentages don't really have any meaning here, because there is no metric that grounds IQ in absolute terms).

Comment by 9eb1 on IQ and Magnus Carlsen, Leo Messi and the Decathlon · 2017-03-29T19:23:15.250Z · score: 0 (0 votes) · LW · GW

I agree that IQ is plenty interesting by itself. My goal with this article was to explore the boundaries of that usefulness and explore the ways in which the correlations break down.

The Big 5 personality traits have a correlation with some measures of success which is independent of IQ. For example, in this paper:

Consistent with the zero-order correlations, Conscientiousness was a significant positive predictor of GPA, even controlling for gender and SAT scores, and this finding replicated across all three samples. Thus, personality, in particular the Conscientiousness dimension, and SAT scores have independent effects on both high school and college grades. Indeed, in several cases, Conscientiousness was a slightly stronger predictor of GPA than were SAT scores.

Notably, the Openness factor is the factor that has the strongest correlation with IQ. I'm guessing Gwern has more stuff like this on his website, but if someone makes the claim that IQ is the only thing that matters to success in any given field, they are selling bridges.
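The "independent effects" finding can be illustrated on synthetic data (the coefficients and correlation structure below are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Invented standardized scores: SAT and Conscientiousness barely correlated,
# GPA depends on both (the 0.4 and 0.3 coefficients are made up).
sat = rng.standard_normal(n)
consc = 0.1 * sat + np.sqrt(1 - 0.1 ** 2) * rng.standard_normal(n)
gpa = 0.4 * sat + 0.3 * consc + 0.5 * rng.standard_normal(n)

# Multiple regression: does Conscientiousness still predict GPA
# after controlling for SAT?
X = np.column_stack([np.ones(n), sat, consc])
beta, *_ = np.linalg.lstsq(X, gpa, rcond=None)
print(np.round(beta[1:], 2))  # both coefficients recovered: independent effects
```

Controlling for one predictor does not wash out the other here, which is the shape of the result the quoted paper reports.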

Comment by 9eb1 on IQ and Magnus Carlsen, Leo Messi and the Decathlon · 2017-03-29T01:39:23.397Z · score: 4 (4 votes) · LW · GW

The tallest player ever to play in the NBA was Gheorghe Mureșan, who was 7'7". He was not very good. Manute Bol was almost as tall, and he was good but not great. By contrast, the best basketball player of all time was 6'6" [citation needed]. In fact, perhaps an athletic quotient would be better than height for predicting top-end performance, since Jordan, LeBron and Kareem are all way more athletic than Mureșan and Bol.

I will attempt to explain the strongest counterargument that I'm aware of regarding your first thesis. When you take a bunch of tests of mental ability and you create a correlation matrix, you obtain a positive manifold, where all the correlations are positive. When you perform a factor analysis of these subtests, you obtain a first factor that is very large, and secondary through n-iary factors that are small and vary depending on the number of factors you use. This is suggestive that there is some sort of single causal force that is responsible for the majority of test performance variation. If you performed a factor analysis of a bunch of plausible measures of athleticism, I think you would find that, for example, bench press and height do not participate in a positive manifold, and you would likely find multiple relevant, stable factors rather than one athletic quotient that accounts for >50% of the variation. Cardio ability and muscular strength are at odds, so that would be at least two plausible stable factors. This argument is made on Wikipedia under "Factor structure of cognitive abilities." Personally, in light of the dramatic differences between the different parts of an IQ test battery, I find this fact surprising and underappreciated. Most people do not realize it, and the folk wisdom is that there are very clear different types of intelligence.
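A toy simulation of the positive-manifold argument, with a single made-up latent factor generating all subtest scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 5000, 6

# One invented latent factor ("g") with different loadings per subtest,
# plus independent noise so each subtest has unit variance.
g = rng.standard_normal(n_people)
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])
noise = rng.standard_normal((n_people, n_subtests))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings ** 2)

corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted largest first

print((corr > 0).all())  # positive manifold: every pairwise correlation positive
print(eigvals)           # one dominant first factor, small remainder
```

Running the logic in reverse is the empirical claim: real IQ subtest batteries produce exactly this signature, which is why a one-big-factor model fits them so well.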

The second point I would make regarding your first thesis is that there are plenty of researchers who don't like g, and they have spent decades trying to come up with alternative breakdowns of intelligence into different categorizations that don't include a single factor. Those efforts were mostly fruitless, because every time they were tested, it turned out that all the tests still correlated with g individually. Many plausible combinations of "intelligences" received this treatment. Currently popular models do have subtypes of intelligence, but they are all viewed as sharing g as an important top-level factor (e.g. CHC theory), rather than g simply being a happenstance correlation of multiple factors. In this case absence of evidence is evidence of absence (in light of the effort that has gone into trying to uncover such evidence).

To be honest, I very much doubt that actual IQ researchers would disagree with your second thesis. My argument would be that for most fields there is enough randomness that you would not expect the most intelligent person to also be the most lauded. Even Einstein had to have the luck to have the insights he did, and there were undoubtedly many people who were just as smart but had different circumstances that led to them not having those insights. Additionally, there is a thing called Spearman's law of diminishing returns, which is the theory that the higher your g is, the less correlated your subtype intelligences are with your g factor. That is, for people who have very high IQs, there is a ton more variation between your different aspects of intelligence than there is for people with very low IQs. This has been measured and is apparently true, and would seem to support your thesis. It is true that these two observations (the factor decomposition and Spearman's law) seem to be in tension, but hopefully one day someone will come through with an explanation for intelligence that neatly explains both of these things and lots more besides.

Unrelated to your two theses, I think the fact that IQ correlates with SO MANY things makes it interesting alone. IQ correlates with school performance, job performance, criminality, health, longevity, pure reaction speed, brain size, income, and almost everything else (it seems like) that people bother to try correlating it with. If IQ hadn't originally come from psychometric tests, people would probably simply call it your "favor with the gods factor" or something.

There are enough correlations that any time I read a social sciences paper with statistics on outcomes between people with different characteristics, I always wish they would have controlled for IQ (but they never do). This may seem silly, but I think there is definitely an argument that can be made that IQ is "prior to" most of the things people study. We already know that IQ can't be meaningfully changed. It's pretty much set by the time you are an adult, and we know of nothing besides iodine deficiency that has a meaningful impact on it in the context of a baseline person in modern society.

Comment by 9eb1 on Open thread, March 13 - March 19, 2017 · 2017-03-15T07:44:34.759Z · score: 5 (5 votes) · LW · GW

I've read a lot of TLP and this is roughly my interpretation as well. Alone's posts do not come with nicely wrapped thesis statements (although the conclusion of this one is as close as it gets). The point she is making here is that the system doesn't care about your happiness, but you should. The use of "goals" here isn't the LessWrong definition but the more prosaic one, implying achievements in life and especially in careers. Real people who want to be happy do want someone who is passionate, and the juxtaposition of passionate with "mutual respect and shared values" is meant to imply a respectful but loveless marriage. If someone asks you about your partner and the most central characteristic you have to define your marriage is "mutual respect and shared values," that says something very different than if your central characteristic is "passionate." It's sterile, and that sterility is meant to suggest that the person who says "passionate" is going to be happier regardless of their achievements in the workplace.

Comment by 9eb1 on Open thread, March 13 - March 19, 2017 · 2017-03-14T01:33:04.853Z · score: 0 (0 votes) · LW · GW

This is one of the more confusing problem statements, but I think I understand. So if we choose a regular hexagon with height = 0.5, as in this link, the scoring for this solution would be ((area of triangle - area of hexagon) + (area of square - 3 * area of hexagon)) / area of hexagon?

edit: Gur orfg fbyhgvba V pbhyq pbzr hc jvgu jnf whfg gur n evtug gevnatyr gung'f unys gur nern bs gur rdhvyngreny gevnatyr. Lbh pna svg 4 va gur fdhner naq gjb va gur gevnatyr, naq gur fpber vf cbvag fvk. V bayl gevrq n unaqshy bs erthyne gvyvatf gubhtu.

Comment by 9eb1 on Open thread, March 13 - March 19, 2017 · 2017-03-13T22:19:33.719Z · score: 0 (0 votes) · LW · GW

You can come arbitrarily close by choosing any tiling shape and making it as small as necessary.

Comment by 9eb1 on why people romantice magic over most science. · 2017-03-08T20:15:55.821Z · score: 0 (0 votes) · LW · GW

I have sometimes mused that accumulating political power (or generally being able to socially engineer) is the closest to magic that we have in the real world. It's the force multiplier that magic is used for in fiction by a single protagonist. Most people who want magic also do not follow political careers. Of course, this is only a musing because there are lots of differences. No matter how much power you accumulate you are still beholden to someone or something, so if independence is a big part of your magical power fantasy then it won't help.

Comment by 9eb1 on ribbonfarm: A Brief History of Existential Terror · 2017-03-02T06:01:06.440Z · score: 0 (0 votes) · LW · GW

The non-binariness of things seems to me to be a fundamental tenet of the post-rationality thing (ribbonfarm is part of post-rationality). In particular, Chapman writes extensively on the idea that all categories are nebulous and structured.

I also think there are options to control your risk factor, depending on the field. You can found a startup, you can be the first startup employee, you can join an established startup, you can join a publicly traded corporation, you can get a job in the permanent bureaucracy. Almost every spot on the work risk-stability spectrum is available.

Perhaps the real question is why some particular fields or endeavors lend themselves to seemingly continuous risk functions. All of Viliam's categories are purely social structures, where other people are categorizing you. So perhaps it's not the risk inherent in an activity but the labeling that fits his intuition. In their map, people might label you a drug user if you smoke marijuana, but in the territory the continuum from "having used marijuana once" to "uses heroin daily" is not only continuous but many-dimensional.

Comment by 9eb1 on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-23T17:14:53.880Z · score: 2 (2 votes) · LW · GW

That is true for people you are going to become friends with, but the difference in negative environments is much bigger. If your job has a toxic social environment, you are free to find a new one at any time. You also have many bona fide levers for adjusting the environment, for example complaining to your boss, suing the company, etc.

When your high school has a toxic social environment, you have limited ability to switch out of it. Complaints about other students face an extremely high bar before they are taken into account, because attendance is mandatory and acting on them isn't in the administrators' best interest. If someone isn't doing something plainly illegal, it's unlikely you will get much help.

Comment by 9eb1 on A semi-technical question about prediction markets and private info · 2017-02-20T04:11:11.090Z · score: 0 (0 votes) · LW · GW

This is an interesting puzzle. I catch myself fighting the hypothetical a lot.

I think it hinges on what would be the right move if you saw a six, and the market also had six as the favored option. In that situation, it would be appropriate to bet on the six which would move it past the 50% equilibrium, because you have the information from the market and the information from the die. I think maybe your equilibrium price can only exist if there is only one participant currently offering all of those bets, and they saw a six (so it's not really a true market yet, or there is only one informed participant and many uninformed). In that case, you having seen a six would imply a probability of higher than 50% that it is the weighted side. Given that thinking, if you see that prediction market favoring a different number ("3"), you should indeed bet against it, because very little information is contained in the market (one die throw worth).

The market showing a 50% price for one number and 10% for the rest is an unstable equilibrium. If you started out with a market-maker with no information offering 1/6 for all sides, and there were many participants who each saw a single roll, the betting would increase on the correct side. At each price that favors that side, every person who again sees that side would have both the guess from the market and the information from their own roll; they would combine the two to estimate a slightly greater probability, and the price would shift further in the correct direction. It would blow past 50% probability without even pausing.
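The dynamic described above can be sketched as a simulation. This is my own toy model, not something from the thread: I assume the weighted side comes up 50% of the time and each other side 10% (matching the 50%/10% prices mentioned above), and I model each trader as doing one Bayesian update on the market's current distribution using their single private roll.

```python
import random

random.seed(0)

# Assumed weights (hypothetical): one heavy side at 50%, others at 10% each.
WEIGHTED = 5                     # index of the truly weighted side
P_HEAVY, P_LIGHT = 0.5, 0.1

def roll():
    # Sample one throw of the weighted die.
    if random.random() < P_HEAVY:
        return WEIGHTED
    return random.choice([s for s in range(6) if s != WEIGHTED])

def bayes_update(posterior, observed):
    # A trader takes the market's current distribution as their prior and
    # multiplies in the likelihood of their one private roll under each
    # hypothesis "side h is the weighted one", then renormalizes.
    post = [p * (P_HEAVY if observed == h else P_LIGHT)
            for h, p in enumerate(posterior)]
    total = sum(post)
    return [p / total for p in post]

market = [1 / 6] * 6             # uninformed market-maker's starting prices
for _ in range(60):              # 60 traders, one private roll each
    market = bayes_update(market, roll())

print(market[WEIGHTED])          # price on the weighted side ends well past 0.5
```

Each roll of the heavy side multiplies the odds on that hypothesis by 5 relative to the others, so with many one-roll traders the price sails through 50% rather than stalling there, which is the instability claimed above.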

Those don't seem like very satisfactory answers though.

Comment by 9eb1 on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-06T00:49:34.594Z · score: 4 (5 votes) · LW · GW

You may be interested in a post on gwern's site related to this topic.

Comment by 9eb1 on How often do you check this forum? · 2017-02-03T19:32:40.970Z · score: 1 (1 votes) · LW · GW

Is there any information on how well-calibrated the community predictions are on Metaculus? I couldn't find anything on the site. Also, if one wanted to get into it, could you describe what your process is?

Comment by 9eb1 on A question about the rules · 2017-02-02T02:08:41.323Z · score: 1 (1 votes) · LW · GW

It certainly has something to do with his post, even if the main point of the post was specifically about domains from which to choose examples for your writing.

Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.

This is the whole crux of the issue. According to some people, Less Wrong is a site to learn about and discuss rationality among people who are not already rational (or to "refine the art of human rationality"). According to some others, it's a community site around which aspiring rationalists should discuss topics of interest to the group. Personally I think phl43's posts are decent, and will likely improve with more practice, but I didn't think that particular post was very relevant or appropriate for Less Wrong specifically.

Comment by 9eb1 on How often do you check this forum? · 2017-01-31T18:11:42.533Z · score: 3 (3 votes) · LW · GW

I use Inoreader, which is paid and web-based but has browser plugins. I wouldn't say I "mostly" look for activity, since time is limited. The Open Thread is extremely active (I think it gets about 200 comments per day), but I don't frequently browse it. Slate Star Codex and Less Wrong are the only actual "communities" in my feed reader. The others that are directly or tangentially related are blogs, and most blogs don't really engender that much discussion.

Comment by 9eb1 on How often do you check this forum? · 2017-01-30T22:50:41.142Z · score: 3 (3 votes) · LW · GW

I use a feed reader, so I check out almost all the posts and links. I click through to the comments on almost all of them as well, since that's the real point.

The reddit /r/slatestarcodex community is very active too, and I like that.

Comment by 9eb1 on 80,000 Hours: EA and Highly Political Causes · 2017-01-26T23:41:45.267Z · score: 10 (10 votes) · LW · GW

This is a trend in effective altruism and it's a very dangerous one for their cause. As soon as people outside the movement think that EAs are trying to spend everyone's money on "pet causes," it puts a distinct upper limit on the growth of the movement. Going after political targets seems really appealing because the leverage seems so high. If we could somehow install Holden Karnofsky as president it would probably improve the lives of a billion people, but there is no majority group of people that cares more about the global poor than they care about their own money.

It's very appealing, psychologically, because big political wins have outsized importance in how you feel about yourself. When a big decision comes down (like when gay marriage was legalized by the Supreme Court) it is literally a cause for celebration. Your side won and your enemies lost. If instead you somehow got people to donate $50 million to Against Malaria Foundation, it wouldn't be that salient.

Since you reference Robin Hanson's idea of pulling ropes sideways, I figure I should provide a link.

Comment by 9eB1 on [deleted post] 2017-01-23T22:12:15.885Z

I basically agree with all of this, with one quibble: I think it is very easy to underestimate the impact that LessWrong has had. There are a lot of people (myself included) who don't want to be associated with rationality, but whose thinking it has nonetheless shaped. I know many of them in real life. LessWrong is weird enough that there is a social cost to having the first Google result for your real name point to your LessWrong comments. If I am talking in real life to someone I don't know well about LessWrong or rationality in general, I will not give it a full-throated defense, and in venues where I participate under my real name, I only link to rationalsphere articles selectively.

Partially because of this stigma, many people in startupland will read the Sequences, put them in their toolbox, then move on with their lives. They don't view continued participation as important, and surely much of the low-hanging fruit has long since been plucked. But if you look for the hints, you will find tidbits of information that point to rationality having an impact.

  1. Ezra Klein and Patrick Collison (CEO of Stripe) had an extensive conversation about rationality, and both are famous, notable figures.

  2. A member of the Bay Area rationality community was rumored to be a member of the Trump cabinet.

  3. Dominic Cummings (the architect of the Brexit "Leave" campaign) points to concept after concept that is both core to and adjacent to rationality, so much so that I would be genuinely surprised if he were not aware of it. (Perhaps this isn't good for rationality depending on your political views, but don't let it be said that he isn't winning).

  4. OpenAI was launched with $1B in funding from a Silicon Valley who's who and they have been in dialogue with MIRI staff (and interestingly Stripe's former CTO is the OpenAI CTO, obviously he knows about the rationalsphere). In general there has been tons of interest that has developed around AI alignment from multiple groups. Since this was the fundamental purpose of LessWrong to begin with, at least Eliezer is winning beyond what anyone could have ever expected based on his roundabout way of creating mindshare. We can't say with certainty that this wouldn't have happened without LessWrong, but personally I find it hard to believe that it didn't make a huge impact on Eliezer's influence within this field of thought.

Do we have an army of devout rationalists that are out there winning? No, it doesn't seem so. But rationalism has had a lot of children that are winning, even if they aren't looking back to improve rationalism later. Personally, I didn't expect LessWrong to have had as much impact as it has. I realized how hard it is to put these ideas into action when I first read the sequences.

Comment by 9eB1 on [deleted post] 2017-01-22T17:50:34.210Z

The Meaningness book's section on Meaningness and Time is all about culture viewed through Chapman's lens. Ribbonfarm has tons of articles about culture, most of which I haven't read. I haven't been following post-rationality for very long. Even on the front page now there is this which is interesting and typical of the thought.

Post-rationalists write about rituals quite a bit I think (e.g. here). But they write about it from an outsider's perspective, emphasizing the value of "local" or "small-set" ritual to everyone as part of the human experience (whether they be traditional or new rituals). When Rationalists write about ritual my impression is that they are writing about ritual for Rationalists as part of the project of establishing or growing a Rationalist community to raise the sanity waterline. Post-rationalists don't identify as a group to the extent that they want to have "post-rationalist rituals." David Chapman is a very active Buddhist, for example, so he participates in rituals (this link from his Buddhism blog) related to that community, and presumably the authors at ribbonfarm observe rituals that are relevant within their local communities.

Honestly, I don't think there is much in the way of fundamental philosophical differences. I think it's more like Rationalists and post-Rationalists are drawn from the same pool of people, but some are more interested in model trains and some are more interested in D&D. It would be hard for me to make this argument rigorous though, it's just my impression.

Comment by 9eB1 on [deleted post] 2017-01-22T07:29:13.719Z

My observation is that post-rationalists are much more interested in culture and community type stuff than the Rationalist community is. This is not to say that the Rationalist community doesn't value culture and community; in fact they get discussed quite frequently (e.g. the solstice was established explicitly to create a sense of community and "the divine"). The difference is that while Rationalists are most interested in biases, epistemology, and decision theory, post-rationalists are most interested in culture, community, and related things. Mainstream Rationalists are at least loosely tied to Rationalism as a culture (otherwise the solstice wouldn't exist), but mostly they define their interests as whatever wins and the intellectual search for right action. Post-rationalists, on the other hand, view the world through a lens where culture and community are highly important, which is why they think that Rationalism even represents a thing you can be "post" to, while many Rationalists don't see it that way.

I don't think that Rationalists are wrong when they write about culture; they usually make well-argued points that point to true things. The main difference is that post-rationalists have a sort of richness to their descriptions and understanding that is lacking in Rationalist accounts. When Rationalists write about culture it has an ineffable dryness that doesn't ring true to my experience, while post-rationalist writing doesn't. The main exception to this is Scott Alexander, but in most other cases I think the rule holds.

Ultimately, I don't think there is much difference between the quality of insights offered by Rationalists and post-rationalists, and I don't think one is more right than the other. When reading the debates between Chapman and various Rationalist writers, the differences seem fairly minute. But there is a big difference in the sorts of things they write about. For myself, I find both views interesting and so far have not noticed any significant actual conflict in models.

Edit: Another related difference is that post-rationality authors are more willing to go out on a limb with ideas. Most of their ideas, dealing in softer areas, are necessarily less certain. It's not even clear that certainty can be established for some of their ideas, or whether they are just helpful models for thinking about the world. In the Rationalsphere, people prefer arguments that are clearly backed up at every stage, ideally with peer-reviewed evidence. This severely limits the kinds of arguments you can make, since there are many things we don't have research on and plausibly never will.

Comment by 9eb1 on Dominic Cummings: how the Brexit referendum was won · 2017-01-14T02:47:46.601Z · score: 0 (0 votes) · LW · GW

Thanks for noting that, I found some more interesting discussion there (Linked for others' convenience).

Comment by 9eb1 on Dominic Cummings: how the Brexit referendum was won · 2017-01-14T02:27:49.268Z · score: 2 (2 votes) · LW · GW

Part of the reason is also that this is a UK issue and most LessWrong readers are not from there, so people have a little more of an outsider's or non-tribalist perspective on it (although almost all LW commenters would certainly have voted Remain).

Comment by 9eb1 on Open thread, Jan. 02 - Jan. 08, 2017 · 2017-01-07T19:18:03.795Z · score: 3 (3 votes) · LW · GW

SlateStarCodex open threads or the weekly culture war thread, probably.

Comment by 9eb1 on Claim explainer: donor lotteries and returns to scale · 2016-12-31T21:50:59.314Z · score: 1 (1 votes) · LW · GW

It's only relevant if you're so confident in it that you don't feel the need to do any double-checking - that the right amount of research to do is zero or nearly zero.

My contention is that the people who are willing to participate in this have already done non-negligible amounts of thinking on this topic, because they are EA hobbyists. How could one be engaging with the EA community without spending time thinking about the core issues at hand? Because of diminishing marginal returns, they have already paid the costs for the research with the highest marginal value, through their engagement with the community and reflection on these topics. I do not believe this is addressed in the original article, and I believe it is our fundamental disagreement.

The objection of value misalignment can't be priced in because there is no pricing mechanism at play here, so I'm not sure what you mean (except for paulfchristiano's fee for administering the fund). That exact point was not the main thrust of the paragraph, however. The main thrust was to explain the two possible outcomes of the lottery, and how both lead to potential negative outcomes in light of the diminishing marginal returns to original research and the limits outside circumstances place on a person's time.

I am in the target market in the sense that I donate to EA charities, and I think that SOMEONE doing research improves its impact, but I guess I am not in the target market in the sense that I think that person has to be me.

Regarding your snips about my not reading the article, it's true that if I had more time and more interest in this topic, I would offer better quality engagement with your ideas, so I apologize that I lack those things.

Comment by 9eb1 on Claim explainer: donor lotteries and returns to scale · 2016-12-31T00:15:17.330Z · score: 5 (7 votes) · LW · GW

I think practical interest in these things is somewhat bizarre.

All of the people who would be interested in participating are already effective altruists. That means that, as a hobby, they already spend tons of time theorizing about which donations would be most efficient. Is the value of information from additional research really sufficient to make it worthwhile in this context? Keep in mind that much of the low-hanging analysis, from a bog-standard EA's perspective, has already been performed by GiveWell, and you can't really expect to meaningfully improve on their estimates. This limits the pool of rational participants to only those who know their values don't align with the community at large.

For me, the whole proposition is a net negative. If I don't get selected, then someone else chooses what to do with my money. Since they don't align with my values, they might donate it to the KKK or whatever. If I DO get selected, it's arguably worse, because now I have to do a bunch of research that has low value to me to make a decision. Winning the lottery to spend $100,000 of other people's money doesn't suddenly endow me with tens or hundreds of hours to use for extra research (unless I can spend some of the money on my research efforts...).
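For context, the standard arithmetic case for the lottery (the one this objection pushes against) is that it's expected-value-neutral in dollars directed, only concentrating the research burden. A minimal sketch of that arithmetic, with hypothetical numbers of my own choosing:

```python
# Hypothetical numbers: 20 donors each stake $5,000; one winner directs the pot.
n_donors, stake = 20, 5_000
pot = n_donors * stake            # $100,000 pot
p_win = 1 / n_donors              # each donor's chance of directing the pot

# Dollars a given donor directs, in expectation.
expected_directed = p_win * pot
print(expected_directed)          # same $5,000 either way: EV-neutral in dollars
```

The comment's point survives this arithmetic: EV-neutrality in dollars says nothing about the value-misalignment risk of the losing branch or the research burden of the winning one.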

The complexity of the system, its administration, and the time spent deciding whether to participate are all deadweight loss. Someone, or many someones, has to spend time considering whether to participate and to manage the actual money and logistics. This is all conceptual overhead for the scheme.

Not to get too psychoanalytical or whatever, but I think this stems partly from the tendency of people in the community to appreciate complex, clever, unusual solutions BECAUSE they are complex, clever, and unusual. My engagement with effective altruism is very boring: I read the GiveWell blog and occasionally give them money. It's not my hobby, so I don't participate in things like the EA forum.

If you are considering participating, first figure out what actual research you would do if you won, what the VoI of that time is, and how you would feel if you either had to do that research or had someone else choose the least efficient plausible alternative for your values. Think about whether the cleverness and complexity of this system is actually buying you anything. If you like being contrarian and signalling your desire to participate in schemes that show you Take Utility Seriously, by all means, go for it.