Posts

Survival and Flourishing grant applications open until March 7th ($0.8MM-$1.5MM planned for dispersal) 2020-01-28T23:36:40.191Z · score: 20 (3 votes)
Studying Early Stage Science: Research Program Introduction 2020-01-17T22:12:03.829Z · score: 34 (10 votes)
Open & Welcome Thread - January 2020 2020-01-06T19:42:36.499Z · score: 11 (3 votes)
Open & Welcome Thread - December 2019 2019-12-03T00:00:29.481Z · score: 12 (3 votes)
Matthew Walker's "Why We Sleep" Is Riddled with Scientific and Factual Errors 2019-11-16T20:27:57.039Z · score: 67 (26 votes)
Open & Welcome Thread - November 2019 2019-11-02T20:06:54.030Z · score: 12 (4 votes)
Long Term Future Fund application is closing this Friday (October 11th) 2019-10-10T00:44:28.241Z · score: 29 (5 votes)
AI Alignment Open Thread October 2019 2019-10-04T01:28:15.597Z · score: 28 (8 votes)
Long-Term Future Fund: August 2019 grant recommendations 2019-10-03T20:41:16.291Z · score: 37 (10 votes)
Survival and Flourishing Fund Applications closing in 3 days 2019-10-02T00:12:21.287Z · score: 21 (4 votes)
SSC Meetups Everywhere: St. Louis, MO 2019-09-14T06:41:26.972Z · score: 0 (0 votes)
SSC Meetups Everywhere: Singapore 2019-09-14T06:38:47.621Z · score: 0 (0 votes)
SSC Meetups Everywhere: San Antonio, TX 2019-09-14T06:37:06.931Z · score: 0 (0 votes)
SSC Meetups Everywhere: Rochester, NY 2019-09-14T06:35:57.399Z · score: 2 (1 votes)
SSC Meetups Everywhere: Rio de Janeiro, Brazil 2019-09-14T06:34:49.726Z · score: 0 (0 votes)
SSC Meetups Everywhere: Riga, Latvia 2019-09-14T06:31:30.880Z · score: 0 (0 votes)
SSC Meetups Everywhere: Reno, NV 2019-09-14T06:24:01.941Z · score: 0 (0 votes)
SSC Meetups Everywhere: Pune, India 2019-09-14T06:22:00.590Z · score: 0 (0 votes)
SSC Meetups Everywhere: Prague, Czechia 2019-09-14T06:17:22.395Z · score: 0 (0 votes)
SSC Meetups Everywhere: Pittsburgh, PA 2019-09-14T06:13:43.997Z · score: 0 (0 votes)
SSC Meetups Everywhere: Phoenix, AZ 2019-09-14T06:10:21.429Z · score: 0 (0 votes)
SSC Meetups Everywhere: Oxford, UK 2019-09-14T05:59:04.728Z · score: 0 (0 votes)
SSC Meetups Everywhere: Ottawa, Canada 2019-09-14T05:56:03.155Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Oslo, Norway 2019-09-14T05:52:44.748Z · score: 0 (0 votes)
SSC Meetups Everywhere: Orange County 2019-09-14T05:49:28.441Z · score: 0 (0 votes)
SSC Meetups Everywhere: Oklahoma City 2019-09-14T05:44:02.157Z · score: 0 (0 votes)
SSC Meetups Everywhere: Norman, OK 2019-09-14T05:37:04.278Z · score: 0 (0 votes)
SSC Meetups Everywhere: New York City, NY 2019-09-14T05:33:27.384Z · score: 0 (0 votes)
SSC Meetups Everywhere: New Haven, CT 2019-09-14T05:29:45.664Z · score: 0 (0 votes)
SSC Meetups Everywhere: New Delhi, India 2019-09-14T05:27:28.837Z · score: 0 (0 votes)
SSC Meetups Everywhere: Munich, Germany 2019-09-14T05:22:58.408Z · score: 1 (1 votes)
SSC Meetups Everywhere: Moscow, Russia 2019-09-14T05:14:03.792Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Miami, FL 2019-09-14T03:36:45.087Z · score: 0 (0 votes)
SSC Meetups Everywhere: Memphis, TN 2019-09-14T03:34:28.740Z · score: 0 (0 votes)
SSC Meetups Everywhere: Melbourne, Australia 2019-09-14T03:32:23.510Z · score: 0 (0 votes)
SSC Meetups Everywhere: Medellin, Colombia 2019-09-14T03:30:32.369Z · score: 0 (0 votes)
SSC Meetups Everywhere: Manchester, UK 2019-09-14T03:28:08.448Z · score: 0 (0 votes)
SSC Meetups Everywhere: Madrid, Spain 2019-09-14T03:26:27.015Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Madison, WI 2019-09-14T03:24:44.933Z · score: 0 (0 votes)
SSC Meetups Everywhere: Lexington, KY 2019-09-14T03:19:52.765Z · score: 0 (0 votes)
SSC Meetups Everywhere: Kitchener-Waterloo, ON 2019-09-14T03:16:50.644Z · score: 0 (0 votes)
SSC Meetups Everywhere: Kiev, Ukraine 2019-09-14T03:14:32.244Z · score: 0 (0 votes)
SSC Meetups Everywhere: Jacksonville, FL 2019-09-14T03:11:45.407Z · score: 0 (-1 votes)
SSC Meetups Everywhere: Iowa City, IA 2019-09-14T03:10:24.372Z · score: 0 (0 votes)
SSC Meetups Everywhere: Indianapolis, IN 2019-09-14T03:05:13.331Z · score: 0 (0 votes)
SSC Meetups Everywhere: Honolulu, HI 2019-09-14T03:02:49.127Z · score: 0 (0 votes)
SSC Meetups Everywhere: Helsinki, Finland 2019-09-14T03:01:22.561Z · score: 0 (0 votes)
SSC Meetups Everywhere: Fairbanks, AK 2019-09-14T02:58:05.828Z · score: 0 (0 votes)
SSC Meetups Everywhere: Halifax, Nova Scotia, Canada 2019-09-14T02:54:32.900Z · score: 0 (0 votes)
SSC Meetups Everywhere: Edinburgh, Scotland 2019-09-14T02:52:42.732Z · score: 0 (0 votes)

Comments

Comment by habryka4 on Jan Bloch's Impossible War · 2020-02-17T21:04:50.582Z · score: 2 (1 votes) · LW · GW

I can set it up for you; it's all done via RSS, so no action is necessary on your side.

I will set it up for all of your posts for now, but feel free to just tell me a tag that you want to apply instead, and then I can switch it over to that.

Comment by habryka4 on Jan Bloch's Impossible War · 2020-02-17T19:47:30.756Z · score: 11 (3 votes) · LW · GW

This post is great! Would you be up for us setting up automatic crossposting of your posts? We can also make it dependent on a tag you apply to your posts (Scott from SSC uses the "LW" tag to crosspost). 

Comment by habryka4 on You Only Live Twice · 2020-02-13T23:19:19.801Z · score: 2 (1 votes) · LW · GW

It's still a few months, though I am curious about the answer.

Comment by habryka4 on Writeup: Progress on AI Safety via Debate · 2020-02-06T07:20:08.283Z · score: 2 (1 votes) · LW · GW

You have an entire copy of the post in the commenting guidelines, fyi :)

Oops, sorry. My bad. Fixed.

Comment by habryka4 on What Money Cannot Buy · 2020-02-05T21:19:06.033Z · score: 2 (1 votes) · LW · GW

Edit note: Cleaned up your formatting a bunch.

Comment by habryka4 on Raemon's Scratchpad · 2020-02-02T19:11:15.052Z · score: 4 (2 votes) · LW · GW

I do also think that, in addition to that, people also just vote less. If I remember correctly, the number of people voting in a given week is about 60% of what it was at the peak, but the total number of votes per week is closer to 35% or something like that. There are also a bunch fewer comments, so you likely get some quadratic effects that at least partially explain this (e.g., if both voters and comments are at ~60% of peak, votes would scale like 0.6 × 0.6 ≈ 36%).

Comment by habryka4 on REVISED: A drowning child is hard to find · 2020-02-01T04:02:11.459Z · score: 27 (8 votes) · LW · GW

I don't think Ben is implying that CEA and GiveWell are claiming that the average price is low. Here is what I understand to be his argument: 

  • What you actually mean by marginal price is something like "the price I would have to pay to cause a marginal life to be saved, right now"
  • GiveWell and the Gates Foundation have already pledged billions of dollars towards saving marginal lives with the most cost-effective interventions
  • This means that if I am trying to understand how much of a difference a counterfactual additional dollar would make, the relevant question is "what difference would my money make, after GiveWell and the Gates Foundation have spent their already-pledged $50B+ on saving marginal lives with the most cost-effective intervention"
  • He then argues that the world does not look like it actually has $50B of life-saving opportunities at $5k apiece lying around
  • As such, as an independent donor, trying to assess the marginal cost of saving a life, I should estimate that as much higher than $5000, since we should expect the marginal cost of saving a life to go up with investment, and we already have $50B of investment into this area
  • Maybe GiveWell and the Gates Foundation state that they have done some weird things to commit themselves to not take some of the opportunities for saving lives at $5k apiece, but he argues both of the following (I am least clear on this part of the argument, both in my understanding of Benquo and in my understanding of what the correct game theory here is):
    • Doing so is pretty similar to extortion, and you should ignore it
    • They are most likely lying about that, having in the past just funded opportunities at that price point, and their overall messaging sure seems to communicate that they will take those opportunities

I think Ben is straightforwardly arguing that the marginal cost of saving a life, taking into account some basic game theory and economics, must be much higher than $5k. 

Comment by habryka4 on Existing work on creating terminology & names? · 2020-01-31T19:08:21.735Z · score: 10 (3 votes) · LW · GW

I've found the term/field of "Information Architecture" to be the most useful for finding things here. Books that I liked reading in this space: 

A bunch of the books in this space also include chapters on naming and clustering things. The O'Reilly book also included a chapter on naming schemes. 

Most of it is focused on names within ontologies though, not really on stand-alone names. 

Comment by habryka4 on Mod Notice about Election Discussion · 2020-01-30T00:05:47.419Z · score: 5 (3 votes) · LW · GW

We definitely talk some about it. I don't currently have any super precise takeaways that aren't highly context-dependent, but could try writing up some things. 

I do also want to note that the goal is not necessarily to discourage the discussion in a blanket way. I generally think that political discussion between long-time members with lower visibility is good and productive, and some of the goals here are to allow that to happen, without deteriorating in predictable ways.

Comment by habryka4 on Algorithms vs Compute · 2020-01-29T21:31:16.881Z · score: 2 (1 votes) · LW · GW

Matrix Multiplication

Comment by habryka4 on Mod Notice about Election Discussion · 2020-01-29T17:57:20.921Z · score: 5 (2 votes) · LW · GW

Last time we didn’t have the personal blog infrastructure in place, so I think there was basically just a ban on politics stuff. Now, that ban isn’t in place on personal blogs, so I think there is a good chance a lot of people would go and start writing about that on their personal blogs, and transform the overall site culture. 

Comment by habryka4 on Have epistemic conditions always been this bad? · 2020-01-28T20:47:25.203Z · score: 2 (1 votes) · LW · GW

Oops, looks like we commented at the same time. You basically said the same thing I did, so I am glad we're on the same page.

Comment by habryka4 on Have epistemic conditions always been this bad? · 2020-01-28T20:43:03.197Z · score: 2 (1 votes) · LW · GW

one of the LW 2.0 admins has stated that it's fine to post about politics here, they'll just stay as "personal blogposts" (unfortunately I can't find that comment now).

That's roughly correct. The important caveat is that we do want to avoid the site being dominated by discussion of politics, so we are likely going to reduce the visibility of that discussion somewhat, in order to compensate for the natural tendency of those topics to consume everything (I am not yet really sure how precisely we would go about that, since it hasn't been an issue so far), and also because I really want to avoid newcomers first encountering all the political discussion (and selecting for newcomers who come for the political discussion). 

Comment by habryka4 on The Main Sources of AI Risk? · 2020-01-28T04:00:03.766Z · score: 2 (1 votes) · LW · GW

Done! Daniel should now be able to edit the post. 

Comment by habryka4 on Open & Welcome Thread - January 2020 · 2020-01-27T19:49:51.841Z · score: 3 (2 votes) · LW · GW

You should be able to already. When you add a picture, you can drag its left and right edges to resize it.

Comment by habryka4 on Modest Superintelligences · 2020-01-25T17:57:44.988Z · score: 4 (2 votes) · LW · GW

Most estimates of heritability would still be significant even in a genetically identical population (since cultural factors are heritable due to shared family environments). You can try to control for this with twin adoption studies, which control for shared family environment but still leave a lot of other aspects of the environment the same. You could also adjust for all kinds of other things and so get closer to something like the “real effect of genes”. 

I am not fully sure what Donald Hobson meant by “effect of genes”, but more generally heritability is an upper bound on the effect of genes on individuals, and we should expect the real effect to be lower (how much lower is a debate with lots of complicated philosophical arguments and people being confused about how causality works).

From Wikipedia: 

In other words, heritability is a mathematical estimate that indicates an upper bound on how much of a trait's variation within that population can be attributed to genes.

Comment by habryka4 on Modest Superintelligences · 2020-01-25T08:34:11.663Z · score: 2 (1 votes) · LW · GW

Heritability != genetic components!

Comment by habryka4 on 2018 Review: Voting Results! · 2020-01-24T22:06:11.643Z · score: 7 (4 votes) · LW · GW

There were also something like 10 users who didn't spend their full vote budget, so my guess is that optimality concerns aren't a super big deal for many users. That said, I generally think we should align the natural way of interacting with the system with the one that also spends your points most effectively, since anything else just weirdly biases the results towards people who either just vote differently naturally, or think more about meta-level voting strategies, neither of which seems like a particularly good bias.

Comment by habryka4 on 2018 Review: Voting Results! · 2020-01-24T22:02:54.945Z · score: 18 (7 votes) · LW · GW

We have written some things about our motivation on this, though I don't think we've been fully comprehensive by any means (since that itself would have increased the cost of the vote a good amount). Here are the posts that we've written on the review and the motivation behind it: 

The first post includes more of our big-picture motivation for this. Here are some of the key quotes: 

Quotes

In his LW 2.0 Strategic Overview, habryka noted:

We need to build on each other’s intellectual contributions, archive important content, and avoid primarily being news-driven.

We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing

[...]

Modern science is plagued by severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas. 

The physics community has this system where the new ideas get put into journals, and then eventually if they’re important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them.

Over the past couple years, much of my focus has been on the early-stages of LessWrong's idea pipeline – creating affordance for off-the-cuff conversation, brainstorming, and exploration of paradigms that are still under development (with features like shortform and moderation tools).

But, the beginning of the idea-pipeline is, well, not the end.

I've written a couple times about what the later stages of the idea-pipeline might look like. My best guess is still something like this:

I want LessWrong to encourage extremely high quality intellectual labor. I think the best way to go about this is through escalating positive rewards, rather than strong initial filters.

Right now our highest reward is getting into the curated section, which... just isn't actually that high a bar. We only curate posts if we think they are making a good point. But if we set the curated bar at "extremely well written and extremely epistemically rigorous and extremely useful", we would basically never be able to curate anything.

My current guess is that there should be a "higher than curated" level, and that the general expectation should be that posts should only be put in that section after getting reviewed, scrutinized, and most likely rewritten at least once. 

I still have a lot of uncertainty about the right way to go about a review process, and various members of the LW team have somewhat different takes on it.

I've heard lots of complaints about mainstream science peer review: that reviewing is often a thankless task; the quality of review varies dramatically, and is often entangled with weird political games.

------

Before delving into the process, I wanted to go over the high level goals for the project:

1. Improve our longterm incentives, feedback, and rewards for authors

2. Create a highly curated "Best of 2018" sequence / physical book

3. Create common knowledge about the LW community's collective epistemic state regarding controversial posts

-------

Longterm incentives, feedback and rewards

Right now, authors on LessWrong are rewarded essentially by comments, voting, and other people citing their work. This is fine, as things go, but has a few issues:

  • Some kinds of posts are quite valuable, but don't get many comments (and these disproportionately tend to be posts that are more proactively rigorous, because there's less to critique, or critiquing requires more effort, or building off the ideas requires more domain expertise)
  • By contrast, comments and voting both nudge people towards posts that are clickbaity and controversial.
  • Once posts have slipped off the frontpage, they often fade from consciousness. I'm excited for a LessWrong that rewards Long Content that stands the test of time and is updated as new information comes to light. (In some cases this may involve editing the original post. But if you prefer old posts to serve as a time-capsule of your past beliefs, adding a link to a newer post would also work)
  • Many good posts begin with an "epistemic status: thinking out loud", because, at the time, they were just thinking out loud. Nonetheless, they turn out to be quite good. Early-stage brainstorming is good, but if 2 years later the early-stage-brainstorming has become the best reference on a subject, authors should be encouraged to change that epistemic status and clean up the post for the benefit of future readers.

The aim of the Review is to address those concerns by: 

  • Promoting old, vetted content directly on the site.
  • Awarding prizes not only to authors, but to reviewers. It seems important to directly reward high-effort reviews that thoughtfully explore both how the post could be improved, and how it fits into the broader intellectual ecosystem. (At the same time, not having this be the final stage in the process, since building an intellectual edifice requires four layers of ongoing conversation)
  • Compiling the results into a physical book. I find there's something... literally weighty about having your work in printed form. And because it's much harder to edit books than blogposts, the printing gives authors an extra incentive to clean up their past work or improve the pedagogy.

------

Common knowledge about the LW community's collective epistemic state regarding controversial posts

Some posts are highly upvoted because everyone agrees they're true and important. Other posts are upvoted because they're more like exciting hypotheses. There's a lot of disagreement about which claims are actually true, but that disagreement is crudely measured in comments from a vocal minority.

The end of the review process includes a straightforward vote on which posts seem (in retrospect) useful, and which seem "epistemically sound". This is not the end of the conversation about which posts are making true claims that carve reality at its joints, but my hope is for it to ground that discussion in a clearer group-epistemic state.

Further Comments

I expect we will write some more in the future about some of the broader goals behind the review, but the above I think summarizes a bunch of the high-level considerations reasonably well. 

I think one way one could describe at least my motivation for the review is this: one of the big holes that I've always perceived in LessWrong, and the internet at large, is the focus on things that are popular in the moment, which makes it hard for people to really build on other people's ideas and make long-term intellectual progress. The review is an experiment in creating an incentive and attention-allocation mechanism that tries to counteract those forces. I am not yet sure how much it succeeded at that, though I am broadly pleased with how it went. 

Comment by habryka4 on 2018 Review: Voting Results! · 2020-01-24T21:49:57.122Z · score: 9 (4 votes) · LW · GW

I wonder whether there is a way to take someone's ballot, infer a more optimal allocation of their votes, and then scale that up to use the full available points, so that we could potentially estimate the size of the impact of this.
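
To make the rescaling idea concrete, here is a minimal sketch, assuming the quadratic cost rule used in the vote (casting v votes on a post costs v² points) and a per-user budget; the function name and the numbers are illustrative only:

```typescript
// Scale a ballot so its total quadratic cost exactly exhausts the budget.
// Multiplying every vote by k multiplies the total cost by k^2, so the
// right factor is k = sqrt(budget / currentCost).
function scaleToBudget(votes: number[], budget: number): number[] {
  const spent = votes.reduce((sum, v) => sum + v * v, 0);
  if (spent === 0) return votes; // empty ballot, nothing to scale
  const k = Math.sqrt(budget / spent);
  return votes.map((v) => v * k);
}

// Example: a ballot spending only 14 points of a 500-point budget.
const ballot = [3, 2, 1]; // cost: 9 + 4 + 1 = 14
console.log(scaleToBudget(ballot, 500)); // same ratios, cost scaled to 500
```

One wrinkle: actual votes are integers, so the scaled ballot would need rounding, which is part of what would make the resulting impact estimate fuzzy.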

Comment by habryka4 on 2018 Review: Voting Results! · 2020-01-24T19:14:30.933Z · score: 4 (2 votes) · LW · GW

Promoted to curated: This is a bit of an odd curation, but my guess is that the vote results are of interest to many people, and many will be happy to have read them, so it seems like a good curation target. It's curated less for the statistics or the writing, and more to establish common knowledge of the vote results.

Comment by habryka4 on 2018 Review: Voting Results! · 2020-01-24T18:00:06.622Z · score: 8 (4 votes) · LW · GW

Yep, we considered this case, and so intentionally capped how much quadratic vote weight a single qualitative vote can translate to. So I am quite confident that this was intentional.

Comment by habryka4 on New paper: The Incentives that Shape Behaviour · 2020-01-24T00:40:17.772Z · score: 9 (3 votes) · LW · GW

Removed it, since I am 90%+ confident that it was an accident.

Comment by habryka4 on Use-cases for computations, other than running them? · 2020-01-23T19:28:14.929Z · score: 2 (1 votes) · LW · GW

Mod note: Fixed formatting of this comment.

Comment by habryka4 on New paper: The Incentives that Shape Behaviour · 2020-01-23T19:24:06.079Z · score: 10 (4 votes) · LW · GW

Mod note: I edited the abstract into the post, since that makes the paper more easily searchable in the site-search, and also seems like it would help people get a sense of whether they want to click through to the link. Let me know if you want me to revert that. 

Comment by habryka4 on New paper: The Incentives that Shape Behaviour · 2020-01-23T19:22:03.108Z · score: 10 (4 votes) · LW · GW

I quite liked this paper, and read through it this morning. It also seems good to link to the accompanying Medium post, which I found to be a good introduction to the ideas: 

https://medium.com/@RyanCarey/new-paper-the-incentives-that-shape-behaviour-d6d8bb77d2e4

Comment by habryka4 on Three signs you may be suffering from imposter syndrome · 2020-01-22T02:39:27.978Z · score: 4 (4 votes) · LW · GW

You are a con man. You are an impostor.

?

Comment by habryka4 on Becoming Unusually Truth-Oriented · 2020-01-20T23:58:22.181Z · score: 6 (4 votes) · LW · GW

Promoted to curated: I think this post ties together a large variety of ideas and concepts that I think are quite important, and does so in a very practical manner that I think is broadly undersupplied. I do think I was a bit confused about what the goal of the post was, and would have benefitted from a bit more context-setting at the beginning. 

Comment by habryka4 on [deleted post] 2020-01-20T20:51:58.150Z

Makes sense. 

Comment by habryka4 on [deleted post] 2020-01-20T18:45:28.770Z

I will leave this here, but not add it to the AI Alignment Newsletter sequence, since presumably the content there has already been edited.

Comment by habryka4 on Open & Welcome Thread - January 2020 · 2020-01-18T05:30:50.945Z · score: 2 (1 votes) · LW · GW

Well, that sure is an interesting case. Fixed it. The account was marked as deleted and banned until late 2019 for some reason, so my guess is they were caught by our anti-spam measures in late 2018, which ban people for one year, and then they ended up posting again after the ban expired. 

Comment by habryka4 on Open & Welcome Thread - January 2020 · 2020-01-18T03:26:20.751Z · score: 2 (1 votes) · LW · GW

It's a fine place, though the best place is through the Intercom chat in the lower right corner (the gray chat bubble).

Comment by habryka4 on Bay Solstice 2019 Retrospective · 2020-01-18T01:57:06.726Z · score: 2 (1 votes) · LW · GW

I think the lyrics around that section actually say "5 billion years", and say it a bunch of times in a row (implying multiple intervals of billions of years passing), so I think that line is basically accurate. 

Edit: Apparently Ben meant the line as a compliment, not as an epistemic critique. Oops. 

Comment by habryka4 on Why Quantum? · 2020-01-16T20:40:02.547Z · score: 2 (1 votes) · LW · GW

Comments should be indexed by Google. I just went to 5 very old posts with hundreds of comments and randomly searched text strings from them on Google, and all of them returned a result. 

If anyone can find any comments that are not indexed, please let me know, and I will try to fix it, but it seems (to me) that all comments are indexed for now. 

Comment by habryka4 on [deleted post] 2020-01-11T22:20:24.466Z

Lol. We should probably finally get around to disabling posting and commenting from deleted accounts. I will delete this post and this comment though, since that seems unnecessarily confusing.

Comment by habryka4 on We run the Center for Applied Rationality, AMA · 2020-01-11T21:41:23.351Z · score: 4 (2 votes) · LW · GW

For whatever it's worth, my sense is that it's actually reasonably doable to build an institution/process that does well here, and gets trust from a large fraction of the community, though it is by no means an easy task. I do think it would likely require more than one full-time person, and at least one person of pretty exceptional skill in designing processes and institutions (as well as general competence). 

Comment by habryka4 on Why Quantum? · 2020-01-11T04:23:00.568Z · score: 13 (3 votes) · LW · GW

Wait, the comments there are mostly pointing out that the parts of Barbour that Eliezer is referring to are obvious and not novel, not that what he is saying is wrong!

His first idea, that time is simply another coordinate parameterizing a mathematical object (like a manifold in GR) and that its specialness is an illusion, is ancient. His second idea, that any theory more fundamental than QM or GR will necessarily feature time only in a relational sense (in contrast to the commonly accepted, and beautiful, gauge freedom of all time and space coordinates), is interesting and possibly true, but it is most likely not profound. I can't read all of his papers, so perhaps he has some worthwhile work.

As far as I can tell, Eliezer is referring to the much more "trivial" aspects of Barbour's work as described here. 

To be clear, I am not a huge fan of the post in question here, but it is important to separate saying wrong things from saying confusing things. 

I also want to separate making wrong claims from attacking academic institutions. I think it's fine to say whatever you want about Eliezer's tone, but your original comment said: 

Most of the entire quantum sequence has been wrong

Which is primarily a claim about factual correctness, which I think is quite misplaced. Though I am not super confident, so if you do have a comment that points out a concrete error in one of his posts, then that would definitely convince me (though still leave me skeptical about the claim of "most", since a lot of the sequence is just really introductory quantum mechanics that I myself can easily verify as correct).

Comment by habryka4 on Why Quantum? · 2020-01-10T17:12:32.127Z · score: 7 (4 votes) · LW · GW

As far as I can tell, this is wrong. Over the years many people with a graduate background in quantum physics have fact-checked the sequence, and as far as I can tell there are no significant factual errors in it. Of course there are philosophical disagreements about how to evaluate the evidence about things like MWI, but in terms of basic facts that can meaningfully be checked, the sequence seems to hold up quite well, and I would take a bet that you can’t find a simple error in it that hasn’t been addressed.

Comment by habryka4 on Voting Phase of 2018 LW Review · 2020-01-09T22:39:31.152Z · score: 13 (6 votes) · LW · GW

I think it would make sense if you weakly vote on them, by spending relatively few points of your quadratic budget on them. Voting very strongly on them feels wrong to me. Basically, my guess would be to vote with strength proportional to your confidence times the goodness or badness of your assessment of the post.

Comment by habryka4 on Subscripting Typographic Convention For Citations/Dates/Sources/Evidentials: A Proposal · 2020-01-09T21:55:21.657Z · score: 5 (2 votes) · LW · GW

Seems reasonable to me. We use markdown-it for markdown conversion, so does this plugin look like what you would want? 

https://github.com/markdown-it/markdown-it-sub 

If so, I think I can probably get around to adding that to our markdown plugins sometime this week or early next week.
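
For illustration, here is a minimal sketch of what enabling that plugin could look like (assuming a Node/TypeScript setup with markdown-it and markdown-it-sub installed; the actual integration into our pipeline would differ):

```typescript
import MarkdownIt from "markdown-it";
// markdown-it-sub adds "~text~" syntax that renders as <sub>text</sub>.
import sub from "markdown-it-sub";

const md = new MarkdownIt().use(sub);
console.log(md.render("H~2~O")); // <p>H<sub>2</sub>O</p>
```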

Comment by habryka4 on 2020's Prediction Thread · 2020-01-08T05:40:13.894Z · score: 2 (1 votes) · LW · GW

Would you count Paul's "altruistic equity allocation" as part of an impact certificate market?

Comment by habryka4 on Voting Phase of 2018 LW Review · 2020-01-08T04:51:25.357Z · score: 2 (1 votes) · LW · GW

Should be fixed now.

Comment by habryka4 on Open & Welcome Thread - January 2020 · 2020-01-08T04:12:00.086Z · score: 3 (2 votes) · LW · GW

Welcome! (Inasmuch as that makes sense to say to someone who has been around for 10 years)

Is it just me or are the Open Threads kind of out of the way?

Open Threads should be pinned to the frontpage if you have the "Include Personal Blogposts" checkbox enabled. So for anyone who has done that, they should be pretty noticeable. Though you saying otherwise does make me update that something in the current setup is wrong. 

Comment by habryka4 on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T22:21:04.019Z · score: 4 (2 votes) · LW · GW

The Wikipedia article states that he was tried for treason at least two times, once for his involvement in the Main Plot, and once for the things he did on his El Dorado adventure. So I think that doesn't contradict what Scott said.

Comment by habryka4 on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T19:20:34.487Z · score: 2 (1 votes) · LW · GW

Sorry for editing it! I accidentally hit the submit button before the comment was ready (the thing I posted was a first draft). I will make sure to edit back some version of the comment next week, just so that your comment here doesn't end up lacking necessary context.

Comment by habryka4 on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T18:45:34.993Z · score: 2 (1 votes) · LW · GW

[Accidentally submitted something, will probably respond sometime early next week]

Comment by habryka4 on human psycholinguists: a critical appraisal · 2020-01-04T04:44:47.966Z · score: 15 (5 votes) · LW · GW

Promoted to curated: I found this post quite compelling, and also generally think that understanding how people have historically modeled progress in AI (and to what degree we have beaten benchmarks that we previously thought were quite difficult, and how much goalpost-moving there is) is a pretty important aspect of modeling future developments in the field. 

Comment by habryka4 on Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical · 2020-01-04T00:11:09.430Z · score: 4 (2 votes) · LW · GW

Write a review!

Comment by habryka4 on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T22:29:15.855Z · score: 4 (2 votes) · LW · GW

I thought the comment was pretty clear that it was trying to give a summary of my comments, and a suggestion for how I should phrase my comment in order to better get my point across. A suggestion which (at least for the case of the use of "sealioning") I disagreed with. 

I agree with you that there was an implicature in Duncan's comment that he thought the term was an accurate characterization, though I am honestly not that confident that Duncan actually believes the term accurately describes your commenting patterns (in addition to it accurately describing my model of your commenting patterns). I would currently give it about 75% probability, but not more. 

In general, I think implicatures of this type should be treated differently than outright accusations, though I also don't think they should be completely ignored. 

On a more general note, since the term appears to be a relatively niche term that I haven't heard before, it seems to me that the correct way for us to deal with this would be for people to say openly what connotations the term has for them, and, if enough people agree that the term has unhelpful connotations, to then avoid using it. I don't think we should harshly punish introducing a term like this if there isn't an established precedent for the connotations of that term.

Comment by habryka4 on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T22:09:42.460Z · score: 9 (5 votes) · LW · GW

Note that at least from the little I have read about the term, this seems like a reasonable stance to me, and my guess (as the person who instigated this thread) is that it is indeed better to avoid importing the existing connotations that term has. 

My guess is that the term is still fine to bring up as something to be analyzed at a distance (e.g. asking questions like "why did people feel the need to invent the term sealioning?"), but my sense is that it's better to not apply it directly to a person or interlocutor, given its set of associations. 

This is a relatively weakly held position of mine though, given that I only learned about that term yesterday, so I don't have a great map of its meanings and connotations.

Edit: I do want to say that the summary of "I don't expect engaging with you to be productive, therefore I must decline this and all future requests for dialogue from you" doesn't strike me as a very accurate summary of what people usually mean by sealioning. I don't think it matters much for my response, but I figured I would point out that I disagree with that summary.