Posts

Review Voting Thread 2020-12-30T03:23:06.075Z
Final Day to Order LW Books by Christmas for US 2020-12-09T23:30:36.877Z
The LessWrong 2018 Book is Available for Pre-order 2020-12-02T11:21:00.221Z
AGI Predictions 2020-11-21T03:46:28.357Z
Rationalist Town Hall: Pandemic Edition 2020-10-21T23:54:03.528Z
Sunday October 25, 12:00PM (PT) — Scott Garrabrant on "Cartesian Frames" 2020-10-21T03:27:12.739Z
Sunday October 18, 12:00PM (PT) — Garden Party 2020-10-17T19:36:52.829Z
Have the lockdowns been worth it? 2020-10-12T23:35:14.835Z
Fermi Challenge: Trains and Air Cargo 2020-10-05T21:51:45.281Z
Postmortem to Petrov Day, 2020 2020-10-03T21:30:56.491Z
Open & Welcome Thread – October 2020 2020-10-01T19:06:45.928Z
What are good rationality exercises? 2020-09-27T21:25:24.574Z
Honoring Petrov Day on LessWrong, in 2020 2020-09-26T08:01:36.838Z
Sunday August 23rd, 12pm (PDT) – Double Crux with Buck Shlegeris and Oliver Habryka on Slow vs. Fast AI Takeoff 2020-08-22T06:37:07.173Z
Forecasting Thread: AI Timelines 2020-08-22T02:33:09.431Z
[Oops, there is actually an event] Notice: No LW event this weekend 2020-08-22T01:26:31.820Z
Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) 2020-08-20T00:49:49.639Z
Survey Results: 10 Fun Questions for LWers 2020-08-19T06:10:55.386Z
10 Fun Questions for LessWrongers 2020-08-18T03:28:05.276Z
Sunday August 16, 12pm (PDT) — talks by Ozzie Gooen, habryka, Ben Pace 2020-08-14T18:32:35.378Z
Is Wirecutter still good? 2020-08-07T21:54:06.141Z
Sunday August 9, 1pm (PDT) — talks by elityre, jacobjacob, Ruby 2020-08-06T22:50:21.550Z
Sunday August 2, 12pm (PDT) — talks by jimrandomh, johnswentworth, Daniel Filan, Jacobian 2020-07-30T23:55:44.712Z
What Failure Looks Like: Distilling the Discussion 2020-07-29T21:49:17.255Z
"Should Blackmail Be Legal" Hanson/Zvi Debate (Sun July 26th, 3pm PDT) 2020-07-20T04:06:26.275Z
Sunday July 19, 1pm (PDT) — talks by Raemon, ricraz, mr-hire, Jameson Quinn 2020-07-16T20:04:37.974Z
Sunday July 12 — talks by Scott Garrabrant, Alexflint, alexei, Stuart_Armstrong 2020-07-08T00:27:57.876Z
The silence is deafening – Devon Zuegel 2020-07-04T02:30:59.409Z
Inviting Curated Authors to Give 5-Min Online Talks 2020-07-01T01:05:39.794Z
Radical Probabilism [Transcript] 2020-06-26T22:14:13.523Z
Sunday June 28 – talks by johnswentworth, Daniel Kokotajlo, Charlie Steiner, TurnTrout 2020-06-26T19:13:23.754Z
DontDoxScottAlexander.com - A Petition 2020-06-25T05:44:50.050Z
Prediction = Compression [Transcript] 2020-06-22T23:54:22.170Z
Online Curated LessWrong Talks 2020-06-19T02:16:14.824Z
Sunday June 21st – talks by Abram Demski, alkjash, orthonormal, eukaryote, Vaniver 2020-06-18T20:10:38.978Z
Superexponential Historic Growth, by David Roodman 2020-06-15T21:49:00.188Z
The one where Quirrell is an egg 2020-04-15T06:02:36.337Z
Coronavirus: Justified Key Insights Thread 2020-04-13T22:40:03.104Z
Hanson & Mowshowitz Debate: COVID-19 Variolation 2020-04-08T00:07:28.315Z
April Fools: Announcing LessWrong 3.0 – Now in VR! 2020-04-01T08:00:15.199Z
Small Comment on Organisational Disclaimers 2020-03-29T17:07:48.339Z
[Update: New URL] Today's Online Meetup: We're Using Mozilla Hubs 2020-03-29T04:00:18.228Z
March 25: Daily Coronavirus Updates 2020-03-27T04:32:18.530Z
Hanson vs Mowshowitz LiveStream Debate: "Should we expose the youth to coronavirus?" (Mar 29th) 2020-03-26T23:46:08.932Z
March 24th: Daily Coronavirus Link Updates 2020-03-26T02:22:35.214Z
Announcement: LessWrong Coronavirus Links Database 2.0 2020-03-24T22:07:29.162Z
How to Contribute to the Coronavirus Response on LessWrong 2020-03-24T22:04:30.956Z
Against Dog Ownership 2020-03-23T09:17:41.438Z
March 21st: Daily Coronavirus Links 2020-03-23T00:43:29.913Z
March 19th: Daily Coronavirus Links 2020-03-21T00:00:54.173Z

Comments

Comment by benito on Coherent decisions imply consistent utilities · 2021-01-15T00:43:58.181Z · LW · GW

(Once again, great use of that link)

Comment by benito on Voting Phase for 2019 Review · 2021-01-14T22:45:43.914Z · LW · GW

Woop, go you! :)

Comment by benito on Voting Phase for 2019 Review · 2021-01-14T15:44:04.205Z · LW · GW

I have voted! :)

I will maybe check it over once again before the voting period ends, but I'm taking a vacation and cutting myself off from most social/communal places on the internet including LW, so no promises.

Out of interest, due to the new renormalizing button, all my votes were lowered by 2 points. Whereas last year my votes spanned from about -1 to +8, this year they span from -3 to +6. I spent exactly 500/500 points. Doing a manual sum, my average vote was 0.39, meaning I used most of my voting power.

(In lots of my reviews, I said how I expected to vote on the post. Once you account for the -2 on everything, I was accurate in all of the predictions I made.)
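For anyone wanting to check sums like this themselves, here is a minimal sketch of the budget arithmetic in Python. It assumes a quadratic cost rule (a vote of strength v costs v**2 points); both that rule and the sample votes below are illustrative assumptions, not the actual ballot.

    # Vote-budget arithmetic under an assumed quadratic cost rule.
    votes = [6, 4, 4, 3, 2, 1, 0, -1, -3]  # hypothetical per-post vote strengths

    budget = 500
    cost = sum(v ** 2 for v in votes)   # points spent: each vote of strength v costs v**2
    average = sum(votes) / len(votes)   # mean vote across these posts

    print(f"points spent: {cost}/{budget}")
    print(f"average vote: {average:.2f}")
    print(f"vote range: {min(votes)} to {max(votes)}")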

I've also written down my guesses for what will be at the top of the vote once it's all done.

Voting Suggestions

My current 3 guesses for most underrated posts are:

So I encourage you to check them out when voting :)

Comment by benito on Simple Rules of Law · 2021-01-13T21:51:43.990Z · LW · GW

Thanks! :)

Comment by benito on Less Competition, More Meritocracy? · 2021-01-13T07:29:08.135Z · LW · GW

Zvi wrote two whole posts on perfect/imperfect competition and how more competition can be bad. However, this is the only post that has really stuck with me in teaching me how increased competition can be worse overall for the system, and helped me appreciate Moloch in more detail. I expect to vote for this post around +4 or +5.

As with one or two others by Zvi, I think it's a touch longer than it needs to be, and can be made more concise.

Comment by benito on Simple Rules of Law · 2021-01-13T07:15:09.409Z · LW · GW

This is another great response post from Zvi.

It takes a list of issues that Zvi didn't get to cherry-pick, and then proceeds to explain them all with a couple of core tools: Goodhart's Law, Asymmetric Justice/Copenhagen Interpretation of Ethics, Forbidden Considerations, Power, and Theft. I learned a lot and put a lot of key ideas together in this post. I think it makes a great follow-up read to some of the relevant articles (e.g. Asymmetric Justice, Goodhart Taxonomy, etc).

The only problem is that it's very long: 8.5k words. That's about 4% of last year's book, IIRC. I think it's worth a lot, but probably a bit less than that. So I'd like it to be shortened if it makes it in. That said, I think Zvi's probably up for that if it's getting published.

I expect to vote on this between +3 and +6.

Comment by benito on How Much is Your Time Worth? · 2021-01-13T07:03:59.546Z · LW · GW

This is a core piece of a mental toolkit, being able to quantify life choices like this, and the post explains it well. I think I would like a version in the book to spend a bit more space helping the reader do the calculation that you do in the Clearer Thinking tool. A lot of the value of the post is in showing how to use the number to make decisions.

I think it's a valuable post, and I expect to vote for it somewhere in the range of +2 to +4.

Comment by benito on Complex Behavior from Simple (Sub)Agents · 2021-01-13T07:00:39.048Z · LW · GW

I love this post, it's a really healthy way of exploring assumptions about one's goals and subagents. I think it's really hard to come up with simple diagrams that communicate key info, and I am impressed by choices such as changing the color of the path over time. I also find it insightful in matters relating to what a distracted agent looks like, or how adding subgoals can improve things.

It's the sort of thing I'd like to see more rationalists doing, and it's a great read, and I feel very excited about more of this sort of work on LessWrong. I hope it inspires more LessWrongers to build on it. I expect to vote on it somewhere between +5 and +7.

Comment by benito on human psycholinguists: a critical appraisal · 2021-01-13T02:55:51.423Z · LW · GW

This was a great read at the time and still holds up. It's one of the rare artifacts that can only be produced after a decade or two: an account of major shifts in a person's perspective over that span. (In that way it's similar in genre, for me, to Buck's post in the review.)

It's a very excitingly written history, and gives me insight into the different perspectives on the issue of psycholinguistics, and helps me frame the current situation in AI. I expect to vote on this somewhere between +5 and +7.

Comment by benito on Dishonest Update Reporting · 2021-01-13T02:49:50.829Z · LW · GW

This is a true engagement with the ideas in Paul's original post. It actively changed my mind – at first I thought Paul was making a good recommendation, but now I think it was a bad one. It helped me step back from a very detailed argument and notice what rationalist virtues were in play. I think it's a great example of what a rebuttal of someone else's post looks like. I'd like to see it in the review, and I will vote on it somewhere between +3 and +7.

Comment by benito on [Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration · 2021-01-13T02:45:33.025Z · LW · GW

I stand by my nomination. This was the most serious attempt I am aware of to set up straightforward amplification of someone's reasoning in this way, it was competently executed, the diagrams showing the results are awesome, and I am proud that this sort of work is on LessWrong. It's only a baby step, but I think this step is exciting and I hope it encourages others to run further with it.

Comment by benito on Avoid Unnecessarily Political Examples · 2021-01-12T22:16:12.696Z · LW · GW

Hm, in writing out a response to your comment, I noticed that we've not written up the reasoning for frontpaging Zvi's covid updates when we've done it (because it is an exception to the rules). Habryka wrote in March about why we were going to encourage a lot of covid content, and when we've curated Zvi's covid posts (twice) we've written about why we're making the exception in the curation notice. But not for frontpaging. [Edit: I stand corrected, see Habryka's reply to your comment, he did write it.]

For the record, the history of frontpaging here is that we largely stopped frontpaging covid content in May, including Zvi's content. Since then, we frontpaged one of his updates in October, and have consistently been frontpaging them since December. We also curated two of his posts, a general update on safety and precautions in May, and the piece about the new strain in December.

Briefly, I want to do this because I think many in the LessWrong community do not have good information sources during this crisis, and I am concerned about their health and safety, and because I think Zvi's updates are high-quality, honest, readable, and trustworthy. It's a judgment call, and it's costly to the norms around politics. I recognize that once you make enough exceptions, the rule is lost. I don't think we're close to that, and I think that especially the covid model curation and the new strain curation were worth it.

Edit: Cut out a paragraph about planning to write an account of why we frontpage Zvi's stuff. Though I still would like to address some of the political stuff.

Comment by benito on In My Culture · 2021-01-12T07:08:22.603Z · LW · GW

The post attempts to point out the important gap between fighting over norms/values and getting on the same page about what people's norms/values even are, and offers a linguistic tool to help readers navigate it in their life.

A lot of (the first half of) the post feels like An Intuitive Introduction to Being Pro Conversation Before Fighting, and it's all great reading.

I think the OP wants to see people really have conversations about these important differences in values, and is excited about that. Duncan believes that this phrase is a key step in allowing people (certainly Duncan himself) to have these conversations, and I am happy that this seems accurate for some number of people.

There are many perspectives on why people avoid having conversations about value differences. Scott has written a lot trying to encourage people to have actual conversations about value differences, such as Guided By the Beauty of Our Weapons and Fundamental Values Differences Are Not That Fundamental. I think often there are forces that try to delegitimize honest talk about values differences in favor of just punishing those that don't share their values, in an attempt to gain power. I think it's also the case that many people have a kind of learned helplessness around values talk – they're scared because they expect those forces are out to get them, and that phrases that attempt to move the conversation there are just scary.

The post doesn't address this much, and in that regard it feels a touch naive to me. That said, I think we can build our own small, walled garden here and have trust in each other to have real conversations. And for that purpose, this sort of "laying out the basics and offering a linguistic tool" has a lot in common with how much of the sequences provided value.

I haven't ever used the phrase myself. One way to update on that would be "this is evidence that it's not the right phrase", but on reflection I feel more like "I regret not doing so and would like to make an explicit effort to try using it 3 times".

I think overall it's very clear and is valuable as a post for many to read. I expect to vote for it with somewhere between +2 and +4.

I think if it's included, it would probably be worth also including some of the comment section, which was also good.

--

P.S. For me, while I liked the post, I didn't really get the phrase until I saw it being used in the wild a bunch of times. I think I would've grokked the phrase sooner if there had been a concise instance early on of how both sides of the conversation go. For example, I think (?) that in most interactions, it's good for both people to use the phrase "in my culture" and explain what norms they think are right, and then decide which norms they're going to coordinate on together.

Comment by benito on Blackmail · 2021-01-12T06:03:36.960Z · LW · GW

Because Robin thought it was a 'checkmate' case that blackmail was good and should be legal. 

How exactly Robin comes up with his weird ideas is a mystery that scholars have written many dozens of books theorizing about, but we have learned to rule him in as a thinker who comes up with excellent ideas, and engage with them substantively even if they seem obviously wrong (as was true for Zvi in this instance).

Comment by benito on What are the open problems in Human Rationality? · 2021-01-12T05:58:33.027Z · LW · GW

So, I think this post is pretty bad as a 'comprehensive' list of the open problems, or as 'the rationality agenda'. All of the top answers (Wei, Scott, Brienne, Thrasymachus) add something valuable, but I'd be pretty unhappy if this was considered the canonical answer to "what is the research agenda of LW", or our best attempt at answering that question (I think we can do a lot better). I think it doesn't address many things I care about. Here's a few examples:

  • What are the best exercises for improving your rationality? Fermi estimates, Thinking Physics, and Calibration Training are all good, but are there much better ones?
  • What are the best heuristics for how to fight Moloch? What are examples of ways in which we have sold our souls to Moloch?
  • What are practical heuristics for how to get in touch with the world in a way that is reality-revealing rather than reality-masking?
  • What challenges do we face as embedded agents, and how should we think about them?
  • (This one's a bit weird) What is the best rationality advice in the utilitarianism/deontology/virtue ethics ontology?
    • For virtue ethics, right now we think that curiosity and caring about something intensely is key. Is there a different virtue we're not noticing?
    • For deontology, we have rules like "hold off on proposing solutions" and "sit down by a clock for 5 minutes trying to solve a problem before giving up on it". What are the most important rules for rationality?
    • For utilitarianism, we have ways to improve our precise modeling like "practice fermi estimates, solve thinking physics problems, do calibration training". Are there other quantitative practices that improve our ability to bring ourselves and the world into alignment?

I also think that I don't come away from the answers feeling like I "learned" something, in the way that I do from posts that set out big problems like Embedded Agency, Reality-Revealing and Reality-Masking Puzzles, and The Treacherous Path to Rationality. What Failure Looks Like is a great example of setting up a set of open problems by putting in the work to communicate them. (It's focused on AI, not humans, so I didn't include it in the above list.)

So I feel conflicted on the list. I think there's lots of valuable ideas in it, but it doesn't feel at all like something right now I'd want to give someone as our best list of the open problems. I think I might vote this between -1 and -3 at the minute.

(I notice I think I'd be pretty happy if the post title just changed to "What are some open problems in Human Rationality?". I think then I'd vote at somewhere between +1 and +4.)

Comment by benito on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2021-01-12T05:23:43.476Z · LW · GW

Fair enough. Probably will vote on it somewhere between 0 and -2.

Comment by benito on Blackmail · 2021-01-12T05:22:43.898Z · LW · GW

On initially reading it, I found it quite interesting, but over time it's come to shape my thinking much more than I expected.

Robin has correctly pointed out that blackmail is just a special case of free trade between apparently consenting adults, which tends to be pretty good, and you need quite a strong argument for making the law interfere with that. He also points out that it creates good incentives not to do things that you wouldn't want people finding out about.

However, Zvi's point is that this is an incredibly strong incentive for someone to ruin your life and create information that you are not willing to make public (e.g. private photos, manipulating you into a minor illegality that would be very damaging to your reputation, embarrassing information about your relatives, etc.) and then take you for all you're worth.

(Especially combined with asymmetric justice: most people are already interacting with things they'd be judged for if they were made very public, and all you'd have to do is look into their lives and threaten to make some part of them immensely public. Just take photos of any not-obviously-poor person walking past a homeless person, or publish a one-sided story from someone they had a conflict / falling-out with, or whatever.)

Essentially, making blackmail illegal largely removes the financial apparatus for people to do immense harm to you in the interest of taking all of your resources. We have built immensely powerful incentive systems with financial markets, and the law against blackmail says "If you think of a way to extort someone for all of their money by threatening to destroy their life/reputation, you will not be able to be rewarded using our core currency of exchange." And this backpropagates helpfully into not incentivising the destruction attempts in the first place.

This also fed into my understanding of Petrov Day. I previously had conceived of Petrov Day as being about punishing unilateral action, but on reflection I don't really want to stigmatize unilateral action; it's often good and virtuous. What I want to stigmatize is "putting horribly-negative-sum outcomes on the table", which is what led to the Cold War, and I want people to take responsibility for not using extortion in negotiation. You should never create the ability to wipe out another country in order to gain power over them. You shouldn't attempt to use the ability to take down LessWrong in order to get money. Of course, most sacred values can be overcome at some quantity of secular value, but the point is that it is sacred and should be considered a deep Schelling point. The important part of the Cold War was never really whether the two countries would actually destroy each other; it was that they had raced to create the ability to wipe each other out at the push of a few buttons. Petrov took responsibility for the part he had to play there, and wouldn't engage in that sort of game.

All the bolded parts of the essay are the most important parts for me, and I strongly recommend including this in the book. I expect to vote on it somewhere between +5 and +9.

Comment by benito on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2021-01-12T04:56:13.162Z · LW · GW

I guess I felt some obligation to take responsibility for nominating it in the first place, given that I anticipated it was not gonna stand up to review.

Comment by benito on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2021-01-12T04:21:09.887Z · LW · GW

Alas, I haven't made it through this post. I do not know what to make of it, and nor does anyone else I know (except maybe Jacob Falkovich). I do wish there had been real conversation around this post, and I think there's some probability (~30%) that I will look back and deeply regret not engaging with it much more, but in my current epistemic state I can only vote against its inclusion in the book. Somewhere around -1 to -4.

Comment by benito on Risks from Learned Optimization: Introduction · 2021-01-12T04:18:27.727Z · LW · GW

For me, this is the paper where I learned to connect ideas about delegation to machine learning. The paper sets up simple ideas of mesa-optimizers, and shows a number of constraints and variables that will determine how the mesa-optimizers will be developed – in some environments you want to do a lot of thinking in advance and then delegate execution of a very simple algorithm to do your work (e.g. the simple algorithm Critch developed that my group house uses to decide on the rent for each room), and in some environments you want to do a little thinking and then delegate to a very complex algorithm to figure out what to do (e.g. evolution is very stupid, and then makes very complex brains to figure out what to do in lots of the situations that humans encountered in the EEA).

Seeing this more clearly in ML shocked me with how inadequate ML is for doing this with much direction whatsoever. It just doesn't seem like something that we have much control over. Of course I may be wrong, and there are some simple proposals (though they have not worked so far). Nonetheless, it's a substantial step forward in discussing delegation in modern ML systems. It discusses lots of related ideas very clearly.

Definitely should be included in the review. I expect to vote on this with something like +5 to +8.

I don't do research in this area; I expect others like Daniel Filan and Adam Shimi will have more detailed opinions of the sequence's strengths and weaknesses. (Nonetheless I stand by my assessment and will vote accordingly.)

Comment by benito on Review of 'Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More' · 2021-01-12T04:02:12.623Z · LW · GW

Yeah I agree. I think it's useful to have a public record of it, and I'm glad that public conversation happened, but I don't think it's an important part of the ongoing conversation in the rationality community, and the conversation wasn't especially insightful.

I hope some day we'll have better debates with more resources devoted by either side than a FB comment thread, and perhaps one day that will be good for the review.

Comment by benito on Group house norms really do seem toxic to many people. · 2021-01-12T02:04:41.644Z · LW · GW

I went to boarding school, and my heuristic after a number of years there was: Living with people you like is awesome. Living with people you don't like is terrible. And living with people you think are okay... is also terrible.

I think it's pretty rough to move in with people you don't know. At my house we do a 2-month probationary period, after which we all vote on the person, and if the score isn't high enough we don't offer them a full-time spot.

Comment by benito on In Defense of Twitter's Decision to Ban Trump · 2021-01-11T22:24:21.264Z · LW · GW

One point that stuck out to me in the post was that Twitter is only used by 22% of Americans, therefore it's not a monopoly.

22% is pretty close to the share of Americans who read newspapers just 20 years ago.

Admittedly there is much more competition among platforms today, but given that almost every major platform deplatformed Trump, I think the current situation is in some ways analogous to every newspaper in the US refusing to interview the president 20 years ago.

Comment by benito on Everybody Knows · 2021-01-11T22:08:11.045Z · LW · GW

Once I got what this post was talking about, I really liked it. People linking to it made that abundantly clear (for me). But I didn't get what the post was talking about the first time I read it. I think the Cohen quote and the Dothraki saying didn't help point me to the phenomenon the post is talking about. I might spend some time looking for an example that helps pinpoint it more quickly; if the post is highly voted, I think that would fix it for me.

Comment by benito on Dual Wielding · 2021-01-11T22:05:41.819Z · LW · GW

Quick thoughts for solving the problem of telling your two phones apart: (a) attach a differently colored piece of tape to the back of each, and (b) change the lock screen images to be totally different colours (so that you can see it even peripherally). Also perhaps (c) attach the colored tape to the front of the phone so that you can always check even while using it.

Comment by benito on Launching Forecast, a community for crowdsourced predictions from Facebook · 2021-01-11T10:21:08.639Z · LW · GW

A few things. First, the primary reason I want new people to visit LessWrong is to contribute to the ongoing conversation about rationality and related ideas, and so that's what I want logged-out users to see rather than a bunch of recency-based content. Second, I want a simple rule that prevents us from going down the gradient of having lots of things on the frontpage that want to ask you for your time or money, and that have any sense of being Out to Get You. I think a lot of projects are like this, especially those asking for donations (not present here).

I think this isn't too costly for the existing community, because many logged-in users have turned off the frontpage filter, and they'll also see discussion in recent discussion if they check regularly. But should I take it as a datapoint that you didn't see this post when it was originally posted?

Comment by benito on Launching Forecast, a community for crowdsourced predictions from Facebook · 2021-01-11T08:32:52.706Z · LW · GW

As a rule we don’t put project announcements on the front page, especially for things that are looking to attract users. Sometimes we make exceptions, but I don’t want to make an exception for Facebook.

Comment by benito on Utility ≠ Reward · 2021-01-11T02:44:07.121Z · LW · GW

For another datapoint, I'll mention that I didn't read this post or Gradient Hacking at the time; I read the sequence, and I found that to be pretty enlightening and quite readable.

Comment by benito on You Have About Five Words · 2021-01-11T00:33:53.670Z · LW · GW

If it were easy to make Elicit questions, I'd post one here for people to give a probability that "You get about five words" is better than "You have about five words". Would appreciate someone doing that.

Comment by benito on You Have About Five Words · 2021-01-10T23:19:39.291Z · LW · GW

I'm also interested in someone else who managed to get this stuff from the post (e.g. Kaj, Zvi, Orthonormal, etc.) trying to make me less confused about how people are getting things from it.

Comment by benito on You Have About Five Words · 2021-01-10T23:18:08.717Z · LW · GW

Okay, whenever I read this post, I don't get it.

There's some fermi-estimation happening, but the fermi is obviously wrong. As Benquo points out, certain religions have EVERYONE read their book, memorize it, chant it, discuss it every Sunday (or Saturday).

I feel like the post is saying "there are lots of bandwidth problems, and the solution to all of them is '5'", and I don't get why 5.

So I read Ray's comment on Daniel Filan's review, where he says:

...at some maximum scale, your coordination-complexity is bottlenecked on a single working-memory-cluster, which (AFAICT based on experience and working memory research) amounts to 3-7 chunks of concepts that people already are familiar with. 

So, I am fairly confident that in the limit it is actually about 5 words +/- 2, because Working Memory Science and some observations about what slogans propagate.

Now THAT is a great point. If you CANNOT assume shared context beyond this idea, and you want to be able to have common knowledge of the idea whilst continuing to make further points... sounds like you get about 5 words.

That does change my mind significantly about the idea. That said, I would want a basic version of that worked into the post. I think it can be done, even if it's not the 'rigorous' version Ray wants.

Before reading that, I was going to downvote the post in the review. Now I'm kinda neutral. If Ray says he'll very likely incorporate it in, should it pass review, then I'm moving toward voting on it with strength 1-3.

P.S. Zvi suggests "You GET about five words" and I also like that. Would encourage Ray to seriously think about the alternative and then pick whichever seems best to him.

Comment by benito on The Hard Work of Translation (Buddhism) · 2021-01-10T05:52:14.882Z · LW · GW

(Note that your link here and in all your other reviews doesn't go to this post, and it took me a bit to find it.)

Comment by benito on The AI Timelines Scam · 2021-01-10T05:43:39.137Z · LW · GW

My guess is we agree that talk of being able to build AGI soon has led to substantially increased funding in the AGI space (e.g. it was involved in the acquisition of DeepMind and the $1 billion from Microsoft to OpenAI)? Naturally it's not the sole reason for funding, but I imagine it was a key part of the value prop, given that both of them describe themselves as 'building AGI'.

Given that, I'm curious to what extent you think that such talk, if it was indeed responsible, has been open to scrutiny, or whether it's been systematically defended from skeptical analysis?

Comment by benito on COVID-19: home stretch and fourth wave Q&A · 2021-01-07T06:21:57.664Z · LW · GW

great points, thanks.

Comment by benito on COVID-19: home stretch and fourth wave Q&A · 2021-01-07T06:16:41.725Z · LW · GW

Why get it intentionally? Surely you should just "stop caring about it" and then you might get it, but you also might not, and that's the best of all possible worlds.

Comment by benito on Approval Extraction Advertised as Production · 2021-01-07T06:14:51.447Z · LW · GW

This was a great idea!

Companies that scored highly include Doordash, Dropbox and Gusto (all 2's), and companies that score low include Scale.com and Twitch (-2 and -1). 

I can't quite tell why you think Twitch is bad. It is subject to network effects and is kind of a social media company; is that why? And I don't know what Scale.com is, other than some AI company.

For many of these companies I feel like my opinion changes as they become monopolies. For example, we use Gusto at LW, it's great. That said, if it became the primary company people used in a country to interact with a part of government, then I could imagine Gusto working with that government to extract money from people in some way. So I like it to a point, then suddenly I might really not like it.

Overall, the distributions ended up very similar, though YC did come out with a higher mean, mostly driven by fewer negative tail companies. 

On the topic of tails, I wonder if your distribution would've come out differently had the scale been -10, -1, 0, 1, 10.

Comment by benito on Approval Extraction Advertised as Production · 2021-01-07T06:10:36.785Z · LW · GW

The basic claim of this post is that Paul Graham has written clearly and well about unlearning the desire to do perfectly on tests, but that his actions are incongruous, because he has built the organization that most encourages people to do perfectly on tests.

Not that he has done no better – he has done better than most – but that he is advertising himself as doing this, when he has instead probably just made much better tests to win at.

Sam Altman's desire to be a monopoly

On this, the post offers quotes giving evidence that:

  • YC is a gatekeeper to funding and a broader network of valuable supplies
  • Sam Altman ambitiously wants YC to be the primary funder globally of good companies (and this could imply being the sole gatekeeper)

YC creating tests

The post says that the natural way to find such people would be proactive talent scouting rather than creating a formal test, and judges YC for not doing this, and claims that the test filters for people who are obsessed with passing tests.

Here are the points made by the post, in order:

  • One factor they care about is extreme responsiveness. The post points out that if you goodhart on this metric strongly enough, it will become 'performed' responsiveness.
  • The post also quotes the YC CEO (Sam Altman) saying that the primary type of person they select for is smart, upper-middle-class people, which is the set the post thinks is most likely to have the desire to do perfectly on tests.
  • The post also quotes Altman talking about the desire to maximise numbers regarding health, and then quotes Hotel Concierge talking about a time when maximising the numbers was to the clear detriment of the reality and of their personal health, suggesting that Altman is selecting for people who maximise at the expense of reality rather than in accordance with it.
  • The next section is about how YC forces the founders in its program to do this: to be poor, and to make their companies profitable enough to earn their food and living.
  • The post points out that the obsession with growth can be goodharted on in many ways, and notes that one company advertised "fifty-per-cent word-of-mouth growth", which sounds like a straightforward nonsense metric unrelated to building a great product, created by someone who wanted to show growth.

If I were to abstract this a bit, I'd say that if you goodhart on YC's metrics and tests, you will be able to pass them yet keep the desire to do perfectly on tests, and there is suggestive evidence that this has occurred.

I think they should be praised for having built better tests. Much of society is about building better metrics to optimize for, and then when we have goodharted on them, learning from our mistakes and making better ones.

Related: I am reminded of Zvi's Less Competition, More Meritocracy. That post talks about how, if the pool of selection gets sufficiently big, the people being selected on are encouraged to take riskier strategies to pass the filters, and the selection process breaks down. It seems plausible to me that YC substantially changed as an organism at a certain level of success, where initially nobody cared and so the people who passed the tests were very naturally aligned people (the founders of Airbnb, DoorDash, Stripe), but as the competition increased, the meritocracy decreased.
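To make that mechanism concrete, here is a minimal Monte Carlo sketch of the selection-pressure half of that claim. It is my own construction rather than anything from either post, and the distributions, parameters, and the risky_share helper are illustrative assumptions: applicants with equal expected quality compete for a fixed number of slots, and as the pool grows the slots go almost entirely to the high-variance strategy.

    import random

    def risky_share(pool_size, n_slots=10, trials=1000):
        # Fraction of selected slots won by high-variance applicants.
        wins = 0
        for _ in range(trials):
            # Both groups have the same expected quality (0.5); only variance differs.
            safe = [("safe", random.gauss(0.5, 0.05)) for _ in range(pool_size // 2)]
            risky = [("risky", random.gauss(0.5, 0.25)) for _ in range(pool_size // 2)]
            top = sorted(safe + risky, key=lambda a: a[1], reverse=True)[:n_slots]
            wins += sum(1 for kind, _ in top if kind == "risky")
        return wins / (trials * n_slots)

    for pool in (20, 200, 2000):
        print(pool, round(risky_share(pool), 2))  # risky share rises toward 1.0 as the pool grows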

Psychologizing Paul Graham

The post gives some arguments psychologizing Paul Graham:

  • The post says that Paul Graham has deceived himself on whether mean people fail, because many mean people succeed.
  • It also points out that Paul Graham does not follow his own advice when it comes to funding companies, because (as he says) he would find it psychologically intolerable.

It argues that "this is a case of fooling oneself to avoid confronting malevolent power".

I think that this has some validity. My sense is that Paul Graham has made a lot of success out of production-based strategies, and has somewhat blinded himself to the existence of adversarial strategies. He seems to me much less politically minded than other very productive people like Bezos and Musk, who I think have engaged much more directly with politics and still succeeded.

I also think that Sam Altman's expressed desire to be something of a monopoly is not something that Paul Graham has engaged with in his writing, and that this would bring with it many political and coordination issues he has not addressed and that could be harmful for the world.

Conclusion

I don't think it's bad for YC to have tests, even tests that are goodharted on to an extent, but I don't think the post is actually that interested in YC. It's more interested in the phenomenon of action being incongruous with speech. 

The post is a ton of primary sources followed by a bit of psychologizing of Paul Graham. It's often impolite to publicly psychologize someone. I generally try to emphasize lots of true and positive things about the person when I do so, to avoid the person feeling attacked, but this post didn't choose to do that, which is a fine choice. Either way, I think it is onto something when it talks about Paul Graham being somewhat blind to adversarial and malevolent forces that can pass his tests. If I wanted to grade Paul Graham overall, I feel like this post is failing to properly praise him for his virtues. But the post isn't trying to grade him overall; instead its focus is on the gap between his speech and his actions, and it analyzes what's going on there.

I do feel like there's much more to be said there, many more essays, but the post does say quite valuable things about this topic. I feel like I'd get a lot of returns from this post being more fleshed out (in ways I discuss below) and elaborating more on its ideas. In its current form, I'll probably give it a +1 or +2 in the review. I generally found it hard to read, but worthwhile.

Further work

Here are some more questions I'd like to see discussed and answered:

  • How did YC change over time with respect to optimizing for production-strategies and adversarial ones?
    • (I can imagine the answers here being (1) whenever YC became successful enough for the news media to notice it and make it prestigious, and (2) when Sam Altman became president.)
  • How much do the more recent YC companies care about production vs growth?
    • Seems like the best YC companies are the early ones, but they naturally have an age advantage. I've heard rumours but would be interested in more evidence here.

I'd also be interested in a more fleshed-out version of much of the discussion in this post. "This is a case of fooling oneself to avoid confronting malevolent power" -> what are other cases in the modern world, and what are some of the forces at play here? "If, to participate in higher growth rates, you have to turn into something else, then in what sense is it you that's getting to grow faster?" -> this is a great point, and I'm interested in where exactly things like YC made the decision to turn into something else, and what that looked like. "He then sets up an institution optimizing for "success" directly, rather than specifically for production-based strategies." -> I'd be interested in a more detailed sketch of what that organisation would look like.

I also think that there'd be some good work in tying this into the ontology of Goodhart's law.

Comment by benito on The AI Timelines Scam · 2021-01-07T04:41:15.352Z · LW · GW

This is a cogent, if sparse, high-level analysis of the epistemic distortions around megaprojects in AI and other fields.

It points out that projects like the Human Brain Project and the Fifth Generation Computer Systems project made massive promises, raised around a billion dollars, and totally flopped. I don't expect this was a simple error; I expect there were indeed systematic epistemic distortions involved, perpetuated at all levels.

It points out that similar-scale projects are being evaluated today involving various major AI companies globally, and that the same sorts of distortionary anti-epistemic tendencies can still be observed. Critics of the ideas that are currently getting billions of dollars (deep learning leading to AGI) are met with replies that systematically exclude the possibility of 'stop, halt, and catch fire' and instead only include 'why are you talking about problems and not solutions' and 'do this through our proper channels within the field and not in this unconstrained public forum', which are clearly the sorts of replies you'd expect to see when a megaproject is protecting itself.

The post briefly also addresses why it's worth modeling the sociopolitical arguments, and not just the technical arguments. I think it's clear that megaprojects like this are subject to major distortionary forces – at the point where you're talking about arguments against the position that is literally funding the whole field, it is obviously not acceptable to constrain dialogue to the channels that field controls; this is a mechanism that is open to abuse of power. I like this short section.

The post ends with the claim that 'people are being duped into believing a lie'. I don't feel convinced of this.

I tried to write down why simply, but I'm not having the easiest time. A few pointers:

  • A chain is as strong as its weakest link, but not all organisations are chains. Many mathematicians can be doing nonsense symbol-manipulation while Andrew Wiles solves Fermat's Last Theorem. I expect there was an overlap between the time when science became substantially broken (as it is today) and the time when Feynman was around making diagrams and building the atom bomb. In the intermediary time there was a lot of 'science' you could point to as not actually science and as supported by anti-epistemic arguments, but this was somewhat separable from Feynman, who was still doing real work.
  • There can be many levels of fraud, combined with many levels of actual competence at the object level. The modern field of ML has mightily impressed me with AlphaGo and GPT and so on. I think that the "full scam" position is that this is entirely a consequence of increased compute and not ML expertise, and that basically there is not much expertise at all in these fields. I find this plausible but not at the 50% level. So just because there's evidence of anti-epistemic and adversarial behavior, this does not preclude real work from being done.
  • I do think it's pretty normal for projects to have marketing that is run in an epistemically adversarial way, kept at arm's length, and brings in resources.
  • I also think that sometimes very competent people are surrounded by distortionary forces. I think I should be able to come up with strong examples here, and I thought a bit about making the case for Thiel or Cummings (who've both shown the ability to think clearly but have also engaged in somewhat political narrative-building). Perhaps Hoover is an example? Still, I think that sometimes a project can engage adversarially with the outside world and still be competent at its work. But I don't think I've shown that strongly, and in most actual cases I am repulsed by projects that do the adversarial stuff and think it's delusional to be holding out hope. I also think it's especially delusional to think this about science. Science isn't supposed to be a place where the real conversation happens in private.

Conclusion

I think this post raises a straightforward and valid hypothesis to evaluate the field against as a whole. I don't think it's sufficiently detailed to convince me that the overall hypothesis holds. I do think it's a valuable conversation to have; it's such an important topic, especially for this community. I think this post is valuable, and I expect I will give it a small positive vote in the review, around +2 or +3.

Further Work

Here are some further questions I'd like to see discussed and answered, to get a better picture of this:

  • What are a few other examples of criticism of the current wave of AI hype, and how were they dealt with?
  • What do leaders of these projects say on this topic, and in response to criticism?
    • (I recall an FLI panel with Demis Hassabis on it, where the one detailed argument he made about the decision to put more/less funding into AGI right now was that it would be easier for lots of groups to do it in the future as compute gets cheaper, so in order to have centralized control and be able to include time for safety we should push as fast as we can on AGI now. I don't think it's an unreasonable argument, but I was hardly surprised to hear it coming from him.)
  • How open are the channels of communication with the field? How easy is it for an outsider to engage with the people in the field?
  • Who are the funders of AI? To what extent are they interested in public discourse around this subject?
    • (My guess is that the answer here is something like "the academic field and industry have captured the prestige associated with it so that nobody else is considered reasonable to listen to".)
  • What is the state of the object level arguments around the feasibility of AGI?
  • Does the behavior of the people who lead the field match up with their claims?
  • What are some other megaprojects or fields with billions of dollars going into projects, and how are these dynamics playing out in those areas?

Comment by benito on Predictions for 2021 (+ a template for yours) · 2021-01-06T08:15:54.976Z · LW · GW

We do.

Comment by benito on 100 Tips for a Better Life · 2021-01-05T09:10:09.478Z · LW · GW

Gotta agree with Said, your examples seem pretty bad to me. Especially the first one. Public health officials have lied repeatedly throughout the crisis – there was the stuff about masks (which we didn't fall for on LW – 1, 2, 3), and there was the time when one of the major public advisors literally admitted to lying. I could name lots more, but in sum: no, do not defer to public health officials. They have lied multiple times; instead, think critically when incorporating their judgments into your own.

I have followed the climate stuff less well. I can tell that lots of government officials and news media types are exaggerating and fitting narratives everywhere, and don't trust them. I've less of an opinion on the scientists.

Comment by benito on Bets, Bonds, and Kindergarteners · 2021-01-04T22:57:52.433Z · LW · GW

wow this is so cute

Comment by benito on Open & Welcome Thread - January 2021 · 2021-01-04T08:38:10.050Z · LW · GW

You wrote in markdown, but we have a WYSIWYG editor! Just highlight a piece of text to see the edit menu popup, and you can put the link in that way. Or use cmd-k. Anyway, FTFY.

Comment by benito on Open & Welcome Thread - December 2020 · 2021-01-02T03:13:35.349Z · LW · GW

I'm taking a vacation from LessWrong work at the end of the work day today, to take some rest. I stuck it out for much of the book, and we ended the year with 2,271 sets sold, which is good. Jacob and Habryka are leading the charge on that for now.

So for the next few weeks, by default I will not be responsive via PM or via other such channels for admin-related responsibilities. Have a good annual review, write lots of reviews, and vote!

Please contact someone else on the LW team if you have a question or a request. (I'll probably write another update when I'm back.)

Comment by benito on Covid 12/31: Meet the New Year · 2020-12-31T23:58:20.559Z · LW · GW

Basic question about the two-dose one-dose cost-benefit analysis:

What is the prior to have on how soon you need the second dose to confer long-term immunity? The data we have is on 3 weeks between dose 1 and 2, and that shows 92% immunity reached with the single dose, but now it sounds like e.g. the UK is gonna hold back that second dose for some amount of time, and I can imagine that being like 2 months or so. Anyone know how long is too long?

Comment by benito on Luna Lovegood and the Chamber of Secrets - Part 12 · 2020-12-31T21:38:30.267Z · LW · GW

why is this so fun

Comment by benito on Some end-of-year media recommendations · 2020-12-31T20:24:39.827Z · LW · GW

Strong upvote, this was great. 

Which are the top five things you'd keep if you had to forget all of the others? Not as an assessment of which are 'the best', just interested in which ones were the most meaningful for you.

Comment by benito on Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More · 2020-12-31T19:59:30.131Z · LW · GW

Re the last line, it's public insofar as anyone can read it (you don't have to be friends with Yann LeCun to read it; you can read it logged out). Saying "actual debate" was intended somewhat as praise for the people involved having the conversation. I agree it was something like the MVP of a debate, but I think it is the first time I've ever seen these people really have this conversation, and the first sample gives the most information.

Can't imagine this post going in the book, but if people vote it in I'll make some effort to track down the participants and ask if they're willing to give legal permission for inclusion.

Comment by benito on Review Voting Thread · 2020-12-31T19:30:28.227Z · LW · GW

+1, I was a bit surprised. Don't think it matters too much, except I mildly think it increases the chance those posts get reviewed.

Comment by benito on Dario Amodei leaves OpenAI · 2020-12-31T05:03:34.151Z · LW · GW

Interested to know if Paul's team and Chris Olah's team and Jan Leike's team/project are moving with him.

Comment by benito on One Year of Pomodoros · 2020-12-31T05:02:30.828Z · LW · GW

We have had all the conversations worth having, and now we are resigned to having them all again. :D