Posts

What currents of thought on LessWrong do you want to see distilled? 2021-01-08T21:43:33.464Z
The National Defense Authorization Act Contains AI Provisions 2021-01-05T15:51:28.329Z
The Best Visualizations on Every Subject 2020-12-21T22:51:54.665Z
ryan_b's Shortform 2020-02-06T17:56:33.066Z
Open & Welcome Thread - February 2020 2020-02-04T20:49:54.924Z
We need to revisit AI rewriting its source code 2019-12-27T18:27:55.315Z
Units of Action 2019-11-07T17:47:13.141Z
Natural laws should be explicit constraints on strategy space 2019-08-13T20:22:47.933Z
Offering public comment in the Federal rulemaking process 2019-07-15T20:31:39.182Z
Outline of NIST draft plan for AI standards 2019-07-09T17:30:45.721Z
NIST: draft plan for AI standards development 2019-07-08T14:13:09.314Z
Open Thread July 2019 2019-07-03T15:07:40.991Z
Systems Engineering Advancement Research Initiative 2019-06-28T17:57:54.606Z
Financial engineering for funding drug research 2019-05-10T18:46:03.029Z
Open Thread May 2019 2019-05-01T15:43:23.982Z
StrongerByScience: a rational strength training website 2019-04-17T18:12:47.481Z
Machine Pastoralism 2019-04-03T16:04:02.450Z
Open Thread March 2019 2019-03-07T18:26:02.976Z
Open Thread February 2019 2019-02-07T18:00:45.772Z
Towards equilibria-breaking methods 2019-01-29T16:19:57.564Z
How could shares in a megaproject return value to shareholders? 2019-01-18T18:36:34.916Z
Buy shares in a megaproject 2019-01-16T16:18:50.177Z
Megaproject management 2019-01-11T17:08:37.308Z
Towards no-math, graphical instructions for prediction markets 2019-01-04T16:39:58.479Z
Strategy is the Deconfusion of Action 2019-01-02T20:56:28.124Z
Systems Engineering and the META Program 2018-12-20T20:19:25.819Z
Is cognitive load a factor in community decline? 2018-12-07T15:45:20.605Z
Genetically Modified Humans Born (Allegedly) 2018-11-28T16:14:05.477Z
Real-time hiring with prediction markets 2018-11-09T22:10:18.576Z
Update the best textbooks on every subject list 2018-11-08T20:54:35.300Z
An Undergraduate Reading Of: Semantic information, autonomous agency and non-equilibrium statistical physics 2018-10-30T18:36:14.159Z
Why don’t we treat geniuses like professional athletes? 2018-10-11T15:37:33.688Z
Thinkerly: Grammarly for writing good thoughts 2018-10-11T14:57:04.571Z
Simple Metaphor About Compressed Sensing 2018-07-17T15:47:17.909Z
Book Review: Why Honor Matters 2018-06-25T20:53:48.671Z
Does anyone use advanced media projects? 2018-06-20T23:33:45.405Z
An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes 2018-04-19T17:30:39.893Z
Death in Groups II 2018-04-13T18:12:30.427Z
Death in Groups 2018-04-05T00:45:24.990Z
Ancient Social Patterns: Comitatus 2018-03-05T18:28:35.765Z
Book Review - Probability and Finance: It's Only a Game! 2018-01-23T18:52:23.602Z
Conversational Presentation of Why Automation is Different This Time 2018-01-17T22:11:32.083Z
Arbitrary Math Questions 2017-11-21T01:18:47.430Z
Set, Game, Match 2017-11-09T23:06:53.672Z
Reading Papers in Undergrad 2017-11-09T19:24:13.044Z

Comments

Comment by ryan_b on [Linkpost] [Fun] CDC To Send Pamphlet On Probabilistic Thinking · 2022-01-16T17:34:13.323Z · LW · GW

So shall we file this under “do as I say, not as I do”? Ha!

Comment by ryan_b on Do you want quadratic voting in the Final Voting Phase? · 2022-01-14T15:52:51.026Z · LW · GW

Weak desire for quadratic voting. This is chiefly because it never shows up anywhere else, and there are very few areas of life where I care enough to vote and have the surplus capacity to actually engage with a new voting system.

If I don't endorse it in these conditions, then I effectively don't endorse new voting systems anywhere, which feels weird.

Comment by ryan_b on The 2020 Review [Updated Review Dashboard] · 2021-12-02T20:27:46.613Z · LW · GW

I expect the quadratic voting not to be very different from the 1-4-9 system, but I favor including quadratic voting again even if that is the case. I have two actual reasons for this:

  1. It's a cool mechanism, with flexible levels of engagement, and this is a good way to practice using it. If we don't make options like this available when voting opportunities arise, we can't expect them to ever appear in critical arenas like elections or governance.
  2. The more posts there are, the more valuable being able to fine-tune our votes becomes, operating under the assumption that the number of quality posts correlates with the number of posts overall (which I strongly expect). Since there are more posts this year, more granular voting has more value than it did last year. I want to be able to capture the additional value of the opportunity for granular voting.
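As an illustration of why I expect the two systems to be so similar (my own sketch; the square-cost rule is the standard quadratic voting cost, and the budget number is made up): the 1-4-9 system is exactly the quadratic cost schedule for vote strengths 1, 2, and 3.

```python
def qv_cost(strength: int) -> int:
    """Quadratic voting: a vote of a given strength costs strength^2 points."""
    return strength ** 2

# The 1-4-9 system is the quadratic cost rule with strengths capped at 3:
assert [qv_cost(s) for s in (1, 2, 3)] == [1, 4, 9]

def best_allocation(budget: int, strengths):
    """Greedy allocation assuming equal value per unit of strength on each post.
    With quadratic costs, raising an item from s to s+1 costs 2s+1 points,
    so the cheapest next upgrade is always the lowest-strength item."""
    strengths = list(strengths)
    while True:
        i = min(range(len(strengths)), key=lambda j: 2 * strengths[j] + 1)
        marginal = 2 * strengths[i] + 1
        if marginal > budget:
            return strengths
        budget -= marginal
        strengths[i] += 1

# With a 16-point budget over 3 posts, spreading beats concentrating:
# three strength-2 votes cost 12, while a single strength-4 vote costs 16.
print(best_allocation(16, [0, 0, 0]))  # → [2, 2, 2]
```

The granularity point falls out of the marginal cost: each extra unit of strength on the same post gets more expensive, so the mechanism rewards spreading votes in proportion to how much you actually care.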
Comment by ryan_b on [deleted post] 2021-11-19T20:22:38.691Z

Ha! This is a good one!

 The part of the book that got skimmed is titled 1984.

Comment by ryan_b on Quadratic Voting and Collusion · 2021-11-19T14:56:08.296Z · LW · GW

I have not read this one, thank you for the link!

Comment by ryan_b on Quadratic Voting and Collusion · 2021-11-18T19:09:01.975Z · LW · GW

From the MACI link, my objection is a generalized version of this:

Problems this does not solve

  • A key-selling attack where the recipient is inside trusted hardware or a trustworthy multisig
  • An attack where the original key is inside trusted hardware that prevents key changes except to keys known by an attacker

This is the level where trust is a problem in most real elections, not the voter level. I also note this detail:

It’s assumed that  is a smart contract that has some procedure for admitting keys into this registry, with the social norm that participants in the mechanism should only act to support admitting keys if they verify two things

Emphasis mine. In total this looks like it roughly says "Assuming we trust everyone involved, we can eliminate some of the incentive to breach that trust by eliminating certain information."

That is a cool result on the technical merits, but doesn't seem to advance the pragmatic goal of finding a better voting system.

Comment by ryan_b on Quadratic Voting and Collusion · 2021-11-18T15:20:43.717Z · LW · GW

I agree collusion is not a showstopper, because individual people very rarely bother to try anything dishonest, and even when they do it isn't effective. Also political parties will simply disseminate recommended spending plans. To prevent this would require something like absolute power over all communication, wielded by an entity over which no political party has any influence.

The truly secret voting suggestion is possibly the most awful idea I have ever heard with respect to voting: while individual voters rarely commit fraud or do anything else inappropriate with their votes, a very common and highly successful method of cheating an election is for the people who tally the votes to simply declare victory for one candidate or the other. If we cannot prove who anyone actually voted for, we can't prove who actually won at all.

Comment by ryan_b on Why I am no longer driven · 2021-11-17T03:28:19.946Z · LW · GW

A note on the metaphor of sprint, marathon, and hike: where you wound up is the only pace associated with carrying any load.

Comment by ryan_b on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-15T23:59:57.005Z · LW · GW

I am struck by a few elements of this conversation, which this post helped confirm stuck out the way I thought they did (weigh this lightly if at all; I'm speaking from the motivated peanut gallery here).

A. Eliezer's commentary around proofs has a whiff of Brouwer's intuitionism about it to me. This seems to be the case on two levels: first, the consistent "this is not what math is really about, and we are missing the fundamental point in a way that will cripple us" tone; second, on a more technical level, it seems very close to the intuitionist attitude about the law of the excluded middle. That is to say, Eliezer is saying pretty directly that what we need is P, and not-not-P is an unacceptable substitute because it is weaker.

B. That being said, I think Steve Omohundro's observations about the provability of individual methods wouldn't be dismissed in the counterfactual world where they didn't exist; rather I expect that Eliezer would have included some line about how to top it all off, we don't even have the ability to prove our methods mean what we say they do, so even if we crack the safety problem we can still fuck it up at the level of a logical typo.

C. The part about incentives being bad for researchers which drives too much progress, and lamenting that corporations aren't more amenable to secrecy around progress, seems directly actionable and literally only requiring money. The solution is to found a ClosedAI (naturally not named anything to do with AI), go ahead and set those incentives, and then go around outbidding the FacebookAIs of the world for talent that is dangerous in the wrong hands. This has even been done before, and you can tell it will work because of the name: Operation Paperclip.

I really think Eliezer and co. should spend more time wish-listing about this, and then it should be solidified into a more actionable plan. Under entirely-likely circumstances, it would be easy to get money from the defense and intelligence establishments to do this, resolving the funding problem.

Comment by ryan_b on Where did the 5 micron number come from? Nowhere good. [Wired.com] · 2021-11-10T18:07:54.514Z · LW · GW
  1. This article is a wild ride.
  2. They do not jest about the difficulty of acquiring the book (Airborne Contagion and Air Hygiene: An Ecological Study of Droplet Infections). It has no DOI number; Worldcat confirms it was digitized in 2009 but it must have been a weird method because it doesn't get referenced like other old books I've searched for. I did find at least one review that said the book was to airborne disease as the pumphandle investigation was to waterborne disease, which is about the highest conceivable endorsement. Put the damn thing back into print, Harvard!
  3. Katie Randall's historical research.
  4. Access to PDF versions of a few articles co-authored by Linsey Marr:
    1. The indoors influenza article from 2011.
    2. Letter published in Science, Oct 2020.
    3. Minimizing indoor transmission of COVID, Sept 2020.
    4. A review in Science from Aug, 2021
  5. Almost everything by Wells and co is unavailable.
    1. The first page of Wells's tuberculosis rabbits experiment, 1948.
    2. The guinea pig and UV study, done by Wells's student Richard Riley, 1962.

I have examined none of these in depth, but the publications all appear to be real and to make the reported claims. However, I notice that once you start from Wells, information about this was pretty widespread in the 2010-2019 timeframe. We had plenty of time not to screw this one up.

I feel like agencies who make recommendations to the public, either as a matter of routine or in times of crisis, should have a historian of science on staff whose job is to discover and maintain the intellectual history of these recommendations. This way we will know how to update them in light of whatever current crisis.

Comment by ryan_b on ryan_b's Shortform · 2021-11-09T23:14:20.275Z · LW · GW

I also have a notion this would help with things like the renewal of old content by making it incremental. For example, there has been a low-key wish for the Sequences to be revised and updated, but they are huge and this has proved too daunting a task for anyone to volunteer to tackle by themselves, and Eliezer is a busy man. With a tool similar to this, the community could divide up the work into comment-size increments, and once a critical mass has been reached someone can transform the post into an updated version without carrying the whole burden themselves. Also solves the problem of being too dependent on one person's interpretations.

Comment by ryan_b on ryan_b's Shortform · 2021-11-09T23:10:38.316Z · LW · GW

I want to be able to emphasize how to make a great comment, and therefore contribution to the ongoing discussion. Some people have the norm of identifying good comments, but that doesn't help as much with how to make them, or what the thought process looks like. It would also be tedious to do this for every comment, because the workload would be impossible.

What if there were some kind of nomination process, where if I see a good comment I could flag it in such a way the author is notified that I would like to see a meta-comment about writing it in the first place?

I already enjoy meta-posts which explain other posts, and the meta-comments during our annual review where people comment on their own posts. The ability to easily request such a thing in a way that doesn't compete for space with other commentary would be cool.

Comment by ryan_b on ryan_b's Shortform · 2021-11-09T23:04:08.154Z · LW · GW

What about a parallel kind of curation, where posts marked with a special R symbol or something are curated by the mods (maybe plus other trusted community members) exclusively on their rationality merits? I mention this because the curation process now runs on more general-intellectual-pipeline criteria, of which rationality is only a part.

My reasoning here is that I wish it were easier to find great examples to follow. It would be good to have a list of posts to look up to, in the spirit of "display rationality in your post the way these posts display rationality."

Comment by ryan_b on ryan_b's Shortform · 2021-11-09T22:57:45.764Z · LW · GW

It would be nice if we had a way to separate what a post was about from the rationality displayed by the post. Maybe something like the Alignment Forum arrangement, where there is a highly-technical version of the post and a regular public version of the post, but we replace the highly technical discussion with the rationality of the post.

Another comparison would be the Wikipedia talk pages, where the page has a public face but the talk page dissecting the contents requires navigating to specifically.

My reasoning here is that when reading a post and its comments, the subject of the post, the quality of the post on regular stylistic grounds, and the quality of the post on rationality grounds all compete for my bandwidth. Creating a specific zone where attention can be focused exclusively on the rationality elements will make it easier to identify where the problems are, and capitalize on the improvements thereby.

In sum: the default view of a post should be about the post. We should have a way to be able to only look at and comment on the rationality aspects.

Comment by ryan_b on ryan_b's Shortform · 2021-11-09T22:45:19.299Z · LW · GW

I read Duncan's posts on concentration of force and stag hunts. I noticed that a lot of the tug-of-war he describes seems to stem from the object-level stuff about a post and the meta-level stuff (by which I mean rationality) competing for the same space. The posts also take the strong position that eliminating the least-rational is the way to improve LessWrong along the dimension the posts are about.

I feel we can do more to make getting better at rationality easier through redirecting some of our efforts. A few ideas follow.

Comment by ryan_b on Paths Forward: Scaling the Sharing of Information and Solutions · 2021-11-04T17:25:12.785Z · LW · GW

In the military case, I strongly recommend Supplying War by Martin van Creveld. It is a history, but systematically demolishes popular misconceptions about how supplies work in the military. It also completely changed my perspective of several important events, foremost among them Napoleon's invasion of Russia and Operation Overlord in WWII.

Otherwise, I think that logistics is mostly divided up on the private side into different specializations by industry. For using the existing logistical infrastructure to manage supply, there is Supply Chain Management; international shipping and the railways are their own specializations; I suspect that things like building truckyards is actually a subtask of owning a trucking company more than anything else.

This calls for a high-level survey of the field, I think. Putting it on the TODO.

Comment by ryan_b on Paths Forward: Scaling the Sharing of Information and Solutions · 2021-11-03T20:34:34.101Z · LW · GW

I am confused; what do you imagine this series of posts is doing?

Comment by ryan_b on Paths Forward: Scaling the Sharing of Information and Solutions · 2021-11-03T20:26:07.462Z · LW · GW

The whole thing makes me want to take up logistics. It’s high stakes, fascinating stuff where there’s high returns for actually solving problems properly.

I strongly endorse this. On LessWrong I see a reasonable awareness of communications and finance, but virtually none of logistics, and it is the third element that makes up the global economy. It is a tremendous torrent of object-level problems, and even introductory knowledge makes lots of other things much clearer. For example, military things make no sense sans logistics. But I don't know anything about commercial logistics, so I would be excited to explore the object level question of how stuff moves from A to B here.

Comment by ryan_b on True Stories of Algorithmic Improvement · 2021-11-03T18:56:55.591Z · LW · GW

Reflecting on this, I think I should have said that algorithms are the perspective that lets us handle dimensionality gracefully, but also that algorithms and compute are really the same category, because algorithms are how compute is exploited.

Algorithm vs compute feels like a second-order comparison in the same way as CPU vs GPU, or RAM vs Flash, or SSD vs HDD, just on the abstract side of the physical/abstraction divide. I contrast this with compute v. data v. expertise, which feel like the first-order comparison.

Chris Rackauckas has an informal explanation of algorithm efficiency which I always think of in this context. The pitch is that your algorithm will be efficient in proportion to how much information about your problem it has, because it can exploit that information.
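A toy version of that pitch (my own illustration, not Rackauckas's example): the same lookup gets dramatically cheaper once the algorithm is allowed to exploit a single fact about the problem, namely that the input is sorted.

```python
import bisect

def find_generic(xs, target):
    """Knows nothing about xs: must scan every element, O(n)."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def find_sorted(xs, target):
    """Exploits one piece of problem information -- xs is sorted -- for O(log n)."""
    i = bisect.bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

xs = list(range(0, 1_000_000, 2))  # sorted even numbers
# Same answer, but the second needs ~20 comparisons instead of ~60,000:
assert find_generic(xs, 123456) == find_sorted(xs, 123456) == 61728
```

The generic version is "dumb algorithm plus compute"; the second trades a tiny amount of problem knowledge for an exponential reduction in work, which is the sense in which algorithms are how compute gets exploited.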

Comment by ryan_b on True Stories of Algorithmic Improvement · 2021-11-01T17:08:42.825Z · LW · GW

there’s a common narrative in which AI progress has come mostly from throwing more and more compute at relatively-dumb algorithms.

Is this context-specific to AI? This position seems to imply that new algorithms come out of the box at only a factor of 2 above maximum efficiency, which seems like an extravagant claim (if anyone were to actually make it).

In the general software engineering context, I understood the consensus narrative to be that code has gotten less efficient on average, due to the free gains coming from Moore's Law permitting a more lax approach.

Separately, regarding the bitter lesson: I have seen this come up mostly in the context of the value of data. Some example situations are the supervised vs. unsupervised learning approaches; AlphaGo's self-play training; questions about what kind of insights the Chinese government AI programs will be able to deliver with the expected expansion of surveillance data, etc. The way I understand this is that compute improvements have proven more valuable than domain expertise (the first approach) and big data (the most recent contender).

My intuitive guess for the cause is that compute is the perspective that lets us handle the dimensionality problem at all gracefully.

Comment by ryan_b on An Unexpected Victory: Container Stacking at the Port of Long Beach · 2021-10-29T20:28:03.574Z · LW · GW

This is an important dimension of the problem; a rambly explanation of my intuitions about this:

It seems to me that if the basic technique of recruiting attention is used all the time, it cannot be a distinctive feature of the success in this case; almost all forms of attention appeals fail, and I would go so far as to say the very largest fail the most frequently.

My model of how attention works in problems like this is that it has a threshold, after which further attention doesn't help. This is how special interests work in politics: it doesn't matter whether something is a good idea or what its overall impact is; what matters is that some groups can consistently meet the minimum attention threshold on issues important to them, which puts them on equivalent footing to universal acclaim. Contrast this with advertising for a product, where every additional person who responds buys the product, so the gains are only limited by the population you can reach.
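To make the contrast concrete (a toy formalization of my own, with made-up numbers), the two response curves I have in mind look like this:

```python
def policy_response(attention: float, threshold: float) -> float:
    """Special-interest model: attention past the threshold buys nothing extra."""
    return 1.0 if attention >= threshold else 0.0

def ad_response(attention: float, rate: float) -> float:
    """Advertising model: every additional unit of attention keeps paying off."""
    return attention * rate

# Past the threshold, tripling attention changes nothing for policy...
assert policy_response(10, threshold=5) == policy_response(30, threshold=5) == 1.0
# ...but triples the return for advertising.
assert ad_response(30, rate=2.0) == 3 * ad_response(10, rate=2.0)
```

The step function is why a small, consistent interest group can match universal acclaim, and why piling on extra attention past the threshold is wasted effort.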

What I think did the work here is the specificity: Ryan gave the action that needed to be taken, which made success an option at all. If there were no specific prescriptions in the twitter thread, and it was all just some variation on FIX THE PORT, the result would have been nothing even with orders of magnitude more attention.

Another way to frame this is that it is an attention economy problem, but the problem we need to solve is directing the attention of the relevant authority figures to the specific actions they can take that will impact the issue at hand. This leaves the medium of twitter as one option among many for how to meet the threshold that gets the official in question to take the message seriously.

I notice that the stacking rule change is the thing that happened, which also happened to be the only thing on the list where the relevant official (the governor) was specifically identified. Stuff like establishing a temporary truckyard, loaning trucks from the military, and bossing around the railroads is much less clear cut, so even if people take action it takes a long time to suss out how it could possibly happen. Sort of the converse of avoiding single points of failure in system resilience: we want to identify the fewest points of success, with the added proviso that we want them to be as close to the problem as possible.

The default approach is to try to get the attention of the highest-ranking person they can think of, but this runs afoul of the exact mechanism you mention where attention is precious and the higher the rank, the more fierce the competition for it, and the higher the threshold we need to reach to direct them. But I think this is a power-law distribution, which is to say that as you go down the ladder of hierarchy the attention threshold drops rapidly.

To sum up, we can mitigate the attention problem by aiming as low on the totem pole as possible, and providing as explicit an action as possible.

Comment by ryan_b on An Unexpected Victory: Container Stacking at the Port of Long Beach · 2021-10-28T18:27:22.775Z · LW · GW

This strikes me as being essentially pro-social lobbying. Lobbying succeeds in a lot of cases because of things like: talking to the correct people who make the decision; easing the workload by providing a shovel-ready solution (for variants of shovel-ready that include "write it this way"); asking for specific things which don't obviously harm the mission or reputation of the people being lobbied.

Considering the importance of the logistics issue, a natural candidate is to develop one of the other suggestions further. For example, points 3 and 4 (temporary truckyard adjacent to a rail terminal) are kind of a matched set. Off the top of my head:

  • identifying suitable truckyard sites, and identifying the owners/managers who would have to give approval
  • pre-work on some of the expected objections, like environmental impacts
  • some considerations on costs:
    • the point that stuck out to me here was switching how the trains run to short shuttle trips. Trains hate moving empty, because they don't get paid; the simple answer is to pay them. The government probably won't, but other parties further back in the chain might be motivated (like the ship owners who want their ships moving; maybe Amazon and Walmart want their new stock; maybe there would be government money available, but they cannot do a contract with the railroads for weird procurement-rule reasons)
  • I feel like it might make sense to stand up a company called Emergency Logistics Incorporated or something, where the pitch is connecting the dots such that the ship owners who are hemorrhaging value are willing to pay the trains to get stuff out of the port so they can put more stuff into it. The Western US is scarcely the only place to have this problem; the whole world seems to be having issues like this.
Comment by ryan_b on Petrov Day Retrospective: 2021 · 2021-10-23T23:10:04.595Z · LW · GW

I see a lot of commentary here about Petrov which flatly disagrees with the Wikipedia article about him. Some central notes, bolding mine:

  • On the opinion of his superiors about his actions:

General Yury Votintsev, then commander of the Soviet Air Defense's Missile Defense Units, who was the first to hear Petrov's report of the incident (and the first to reveal it to the public in the 1990s), states that Petrov's "correct actions" were "duly noted".[2] Petrov himself states he was initially praised by Votintsev

  • On being forced out of the army:

He was reassigned to a less sensitive post,[18] took early retirement (although he emphasized that he was not "forced out" of the army

  • He left the army to work for the R&D institute that designed the alarm system:

In 1984, Petrov left the military and got a job at the research institute that had developed the Soviet Union's early warning system. He later retired so he could care for his wife after she was diagnosed with cancer.[7]

  • Whether he abandoned his duty, or was a conscientious objector, or similar:

In an interview for the film The Man Who Saved the World, Petrov says, "All that happened didn't matter to me—it was my job. I was simply doing my job, and I was the right person at the right time, that's all. My late wife for 10 years knew nothing about it. 'So what did you do?' she asked me. 'Nothing. I did nothing.'"

The most important conclusion here is that Stanislav Petrov was assigned to monitor an alarm system. He reported a false alarm because he believed the alarm was false. If he had believed the alarm was real, he would have reported an attack, because that was his job.

Comment by ryan_b on What Do GDP Growth Curves Really Mean? · 2021-10-20T17:48:09.559Z · LW · GW

Ben Pace has a linkpost for the booklet "Is the rate of scientific progress slowing down?" by Tyler Cowen and Ben Southwood, which is completely about the discoveries-to-economic-measurement problem. They interrogate the signal in GDP, and conclude it is very weak; they move on to use Total Factor Productivity instead.

Comment by ryan_b on [Book Review] Altered Traits · 2021-10-18T15:57:45.816Z · LW · GW

When probed via fMRI the scientists found Mingyur's circuitry for empathy activated stronger than they had ever observed in normal people―a level normally associated with brief seizures lasting mere seconds.

It's kind of surprising to me that the yogis had such high empathy signals when they encounter almost no humans with whom to be empathic. This makes me wonder if the problem with meditating in civilization is that we keep encountering the kind of jerk that really strains our abilities.

Also, would you be willing to describe how you arrived at your current meditation practice?

Comment by ryan_b on Choice Writings of Dominic Cummings · 2021-10-15T18:21:32.182Z · LW · GW

The unrecognized simplicities of effective action series of posts; in particular #2(b) linked above. The dominant examples are the Manhattan Project, Atlas, and Apollo. He also spends quite a bit of time on ARPA and Xerox/PARC.

Included in the blog posts are the relevant books he was reading at the time, if I recall. 

Comment by ryan_b on Choice Writings of Dominic Cummings · 2021-10-15T17:47:34.083Z · LW · GW

It's important to distinguish seeking the truth from speaking the truth. The truth-seeking credential here is that the Vote Leave campaign applied basic epistemics, at his direction: review the literature to determine what methods actually work; gather as much data as possible; update methods according to feedback; aggressively ignore recommendations from high-status-but-wrong people who are nominally on the same side.

Comment by ryan_b on Book Review Review (end of the bounty program) · 2021-10-15T15:11:28.269Z · LW · GW

It is on my list of reviews to read, so never fear! Feedback will be available.

Comment by ryan_b on Choice Writings of Dominic Cummings · 2021-10-15T14:42:22.889Z · LW · GW

The findings are similar in the US; the story I have pulled from it so far is that this basically boils down to tallying responses wrongly in political science research. The popular example from the US is that you might have a survey with multiple responses, and one person responds:

Q1. How do you feel about gay marriage?
A1. Gay people should have civil unions rather than marriage

Q2. How involved should the government be in the economy?
A2. Government should keep taxes low

But another person responds with:

A1. Gay people should not be allowed to get married, or adopt, or teach children

A2. Government should heavily tax the rich and important industries should be nationalized

Since both answers for the first person were conservative, the surveys marked that person as "very conservative." The second person, with one extremely conservative answer and one extremely liberal answer, got marked as a moderate.
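A toy version of the tallying method (my own illustration; the -1 to +1 coding and both respondents are made up):

```python
# Code each answer on a -1 (most liberal) .. +1 (most conservative) scale.
person_1 = {"gay_marriage": +0.5, "economy": +0.5}   # mildly conservative on both
person_2 = {"gay_marriage": +1.0, "economy": -1.0}   # extreme on both, in opposite directions

def ideology_score(answers):
    """The tallying method in question: average the per-question scores."""
    return sum(answers.values()) / len(answers)

print(ideology_score(person_1))  # 0.5 -> coded as solidly conservative
print(ideology_score(person_2))  # 0.0 -> coded as "moderate", hiding two extreme views
```

Averaging collapses the cross-pressured respondent to the center, so the survey cannot distinguish a genuine moderate from someone with strong but opposing views.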

This distinction flew under the radar for a long time because in the US there are only two political parties (which can realistically hold seats in the legislature), so the question of which way a given voter would go was a matter of salience, which in political terms means which issues are top of mind at election time.

When looking for an older article I read on the subject, I came across a better one from 538, wherein they take some of these older survey questions and graph the outputs.

Comment by ryan_b on Choice Writings of Dominic Cummings · 2021-10-14T14:28:04.064Z · LW · GW

As a way to contextualize this, he describes the Vote Leave campaign as a pretty straightforward case of Working With Monsters.

Comment by ryan_b on What to read instead of news? · 2021-10-13T17:17:35.001Z · LW · GW

I enjoy blogs from experts in fields where I have an interest, but don't have the background to make anything of actual papers, or from the rationalsphere, or the odd specific commentator. These meet your criteria. Some usual suspects of mine are, sorted by frequency:

- Astral Codex Ten (psychiatry, rationalsphere)
- OvercomingBias (economics, rationalsphere)
- The Scholar's Stage (history, commentator)

- A Collection of Unmitigated Pedantry (history)
- InfoProc (physics, genomics)
- Dominic Cummings substack (politics, commentator)

The reason I like blog posts is that they not only do the work of summarization, but also are functionally the only venue for capturing important details like a single-person perspective of a field, or first-hand accounts of uncertainty or thought processes.

Comment by ryan_b on What to read instead of news? · 2021-10-13T17:00:54.096Z · LW · GW

These are cool suggestions; I will check them out! Thanks!

Comment by ryan_b on LessWrong is paying $500 for Book Reviews · 2021-10-13T16:59:35.378Z · LW · GW

Well gang, today's the last day. A rough count from a few minutes ago, judging by the Book Reviews tag, put us at ~31 potential entries, and the day isn't over yet.

Congratulations on the huge success! I do not envy you the judging workload thus generated.

Comment by ryan_b on LessWrong is paying $500 for Book Reviews · 2021-10-13T16:45:31.122Z · LW · GW

Based on the Book Reviews tag scan, this has been a smashing success. My scan around lunch showed ~33 entries which are "1 month" or younger. However, I don't know how many will get payouts.

Comment by ryan_b on The LessWrong Team is now Lightcone Infrastructure, come work with us! · 2021-10-01T14:19:45.170Z · LW · GW

I experience an entirely absurd glee seeing the word infrastructure as part of the actual name.

Comment by ryan_b on This War of Mine · 2021-09-28T20:49:53.834Z · LW · GW

My current prediction is sometime between ages 10 and 16; I think the most obvious trigger will be something like "showing distress over world events," such as a war or famine abroad, or possibly bad riots in the US.

I'm going to push for it before college at least, because having this kind of perception will save so much time when learning anything adjacent to history or politics.

Comment by ryan_b on [Book Review] "The Vital Question" by Nick Lane · 2021-09-28T20:36:43.154Z · LW · GW

I was speaking to the probability of life appearing in different parts of the (finite) ocean, so possibly I misunderstood what you were addressing. But since the reasoning should generalize:

I mean different sizes of infinity like these:

  • the set of natural numbers is infinite, but its power set is strictly larger (Cantor's theorem)
  • given an infinite length, 1/2 the length is still infinite, and so is 1/3 the length; but measured by density, the 1/2 length is larger than the 1/3 length
  • an infinite tube of soap foam and an infinite rod of steel of the same diameter both contain infinitely many atoms, but the steel packs more atoms into any finite segment, because it is much more dense
  • there are infinitely many numbers between 0 and 1, and just as many (in cardinality) between 0 and 2, but the interval from 0 to 2 has twice the measure (length)
  • More germane to the life example: if process 1 generates life at 1 per unit time and process 2 generates life at 2 per unit time, then as the arrow of time extends infinitely both processes generate life infinitely many times, but process-2 genesis outpaces process-1 genesis two-to-one at every finite horizon.
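The last bullet can be sketched numerically (a toy model of my own, with hypothetical constant rates): both counts diverge as the time horizon grows, but their ratio stays fixed at 2, which is the density comparison doing the work rather than any cardinality comparison.

```python
# Toy comparison of two constant-rate generation processes.
# Both counts grow without bound as the horizon grows, but the
# ratio of counts is 2 at every horizon: a comparison of densities,
# not of cardinalities.

def event_count(rate: float, horizon: float) -> float:
    """Events produced by a constant-rate process up to `horizon`."""
    return rate * horizon

for t in (10, 1_000, 100_000):
    n1 = event_count(1, t)  # process 1: one genesis per unit time
    n2 = event_count(2, t)  # process 2: two geneses per unit time
    print(t, n1, n2, n2 / n1)  # ratio is 2.0 at every horizon
```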

I'm not familiar enough with either SSA or SIA to apply them, and my grasp of anthropic reasoning is shaky in the extreme, but the idea of not rewarding population sizes baffles me. Do you have a preferred breakdown for this point I should check out, or will google serve me well enough?

If this claim is true, wouldn't it also be true that a hypothesis in which life appears on every planet is more probable than a hypothesis in which life appears on every 10^40th planet?

If planets and stars lasted infinitely long, and were sufficiently constrained in their composition, then I would say yes. But this chain of reasoning ignores the local information we have about the problem. Returning to the primordial soup quote you were responding to, the argument is that what we know of thermodynamics doesn't allow a causal mechanism to work. By contrast, the white-smoker vent hypothesis does allow a causal mechanism to work; therefore we should prefer it as the explanation for the origin of life (as we know it).

When I try running the intuition of your example of different frequencies of planetary genesis in reverse, and on primordial soup:

If we should keep primordial soup on the table because a big world predicts it will still work an infinite number of times, then surely we must also keep a less-complex primordial soup (say, a primordial cocktail) on the table for the same reason; and then bare rock, with no soup at all; and then a complex cell springing into existence with no causal history; etc. This may be true, but it doesn't seem helpful in terms of what to expect.

An alternative framing: would it be fair to say that under a big world, every prediction happens an infinite number of times? If I accept that all infinities should be treated the same, that still leaves us with the ability to compare the number of infinities, which should lead us to favor the hypotheses according to how many predictions they allow us to make.

Regarding the appearance of aliens, have you checked out the grabby aliens post yet? I recommend it.

Comment by ryan_b on This War of Mine · 2021-09-28T15:24:40.703Z · LW · GW

I appreciate this review. I have this game on my radar as part of a scheme to provide emotional context when the time comes to teach my child about the awful elements of the world. I expect to schedule this around when we dump school history for actual history.

Comment by ryan_b on [Book Review] "The Vital Question" by Nick Lane · 2021-09-28T15:07:26.048Z · LW · GW

I'm confused by this - even if everything happens infinitely many times, there are still different sizes of infinity and we select the biggest as the most likely. How does a big world shift this perspective?

Comment by ryan_b on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-27T20:29:00.852Z · LW · GW

I note that the feelings you describe are the underlying assumption which makes the risk real: if no one thought the consequences of pushing the button were entertaining, or a learning opportunity, then no one would push the button, and the tension would go away.

Comment by ryan_b on This War of Mine · 2021-09-27T19:40:12.456Z · LW · GW

Is there any kind of reciprocity mechanism, where you can ask other survivors for supplies and/or help? Or are the altruistic actions always one-way?

Comment by ryan_b on Where's my magic sword? · 2021-09-27T15:21:44.002Z · LW · GW

This isn't a feature of military first aid either, though on reflection I can't really conceive of a situation where it would be a relevant decision point given the procedures otherwise.

Comment by ryan_b on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-27T14:30:40.944Z · LW · GW

Calling a website going down for a bit "destruction of real value" is technically true, but connotationally just so over the top

I wonder if you are anchoring at the wrong point of comparison here. The point is that it is technically true, as distinct from button-whose-only-function-is-to-disable-the-button. Your post reads like you worry that we are all comparing this to actual nuclear destruction, which I agree would be deeply absurd.

In my view, the stakes are being a bit of a dick. The standard is: can we all agree not to be a bit of a dick? It's a goofy sort of game, but we have it because of its similarity to the nuclear case: the winning move is not to play.

Comment by ryan_b on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-27T14:18:25.467Z · LW · GW

My perspective is that the ritual has more than one dimension: I claim that this is low-risk training for future events, rather than only a celebration of a past event. Many senseless risks remain (including nuclear weapons), and we have no control whatsoever over whether they persist. Petrov is a foundation story because rationalist-as-we-use-it means making good decisions, even when we did not make the risks, even when the situation is fundamentally stupid.

If we never even attempt to simulate these situations, then I believe we're not giving the problem its due.

Comment by ryan_b on Shared Frames Are Capital Investments in Coordination · 2021-09-25T18:52:19.400Z · LW · GW

My head-chunked relationships with the other writing outside the Gears/Capital Investments line of essays:

Frames are the costly coordination mechanism of common meta-knowledge.

Distillation lowers the cost of frames because you have four words.

Comment by ryan_b on Insights from Modern Principles of Economics · 2021-09-22T18:31:41.464Z · LW · GW

Referring to the "criticism of these numbers" arguments: the only one that stands out to me as very serious is 5. From my reading of history and historiography, the problem of quantifying changes among groups of people who mostly practice subsistence farming and have no records of birth, health, death, or productivity is notorious. It looks to me like it would come down to archaeological and anthropological data to determine what their lives were like, and only then could the comparison with the lives of people on $X/day begin.

I wouldn't go as far as calling the numbers bullshit, but wew lad do I expect the error bars to be huge on the early end of that chart. Time to go digging through those links to find out what they actually did!

Edit: yep, that's the case. They took the extant work from historians using the archaeology/remains methods and combined them as well as they were able. I was interested to see that the highest-uncertainty parts aren't so much the earlier periods as the periods where completely new products or radical product quality changes were introduced. So if we were to see the uncertainty, I expect it would start wider at the beginning and narrow as time went on, with spikes during stuff like the introduction of vaccines or factories.

Comment by ryan_b on Testing The Natural Abstraction Hypothesis: Project Update · 2021-09-21T19:11:28.839Z · LW · GW
  1. Meta: I greatly appreciate that you took the time to contextualize the earlier relevant posts within this one.
  2. Do you already have a plan of attack for the experimental testing? By this I mean using X application, or Y programming language, with Z amount of compute. If not, I would like to submit a request that you post that information when the time comes.
  3. Recalling the Macroscopic Prediction paper by Jaynes, am I correct in interpreting this as being conceptually replacing the microphenomena/macrophenomena choices with near/far abstractions?
  4. Following in this vein, does the phase-space trick seem to generalize to the abstractions level? By this I mean something like replacing

    predict the behavior that can happen in the greatest number of ways, while agreeing with whatever information you have 

    with

    choose the low-dimensional summaries which have been constrained in the greatest number of ways, while accurately summarizing the far-away information
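The counting principle in item 4 can be illustrated with a toy microstate count (my own hypothetical example, not from the post or the Jaynes paper): among all configurations of four coins, the macrostate realizable in the greatest number of ways is the one to predict absent other information.

```python
from itertools import product
from collections import Counter

# Toy version of "predict the behavior that can happen in the
# greatest number of ways": count how many microstates (coin
# configurations) realize each macrostate (number of heads).
n_coins = 4
counts = Counter(sum(flips) for flips in product([0, 1], repeat=n_coins))

# 2 heads is realizable in 6 of the 16 configurations; 0 or 4 heads
# in only 1 each. With no further constraints, we predict 2 heads.
most_likely = max(counts, key=counts.get)
print(counts)       # Counter({2: 6, 1: 4, 3: 4, 0: 1, 4: 1})
print(most_likely)  # 2
```

Adding a constraint (conditioning on "whatever information you have") just restricts which microstates get counted; the prediction rule is otherwise unchanged.
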

Comment by ryan_b on Book review: The Checklist Manifesto · 2021-09-19T12:40:59.704Z · LW · GW

The tools I have used at work in the past were as much reference material as checklist; this had the effect of making them a completely separate, optional action item that people use only if they remember.

The example checklists from the post are all as basic as humanly possible: FLY AIRPLANE and WASH HANDS. These are all things everyone knows and can coordinate on anyway, but the checklist needs to be so simple that it doesn’t really register as an additional task. This feels like the same sort of bandwidth question as getting dozens or hundreds of people to coordinate on the statement USE THE CHECKLIST.

Put another way, I think that the reasoning in You Have About Five Words is recursive.

Comment by ryan_b on Book review: The Checklist Manifesto · 2021-09-17T23:49:27.195Z · LW · GW

I’ve lately been contemplating the problem of developing high-quality checklists at work for troubleshooting programs that work with big data. It is easily the most difficult thing I am considering, but also easily the most productivity-improving given adoption. Previous efforts at getting such tools to work were not successful, but neither were they very good. The viability threshold seems *very* high, probably for You Have About Five Words reasons.

Comment by ryan_b on Writing On The Pareto Frontier · 2021-09-17T23:09:24.203Z · LW · GW

In Being The Pareto Best In The World you mention the problem of elbow room:

Problem is, for GEM purposes, elbow room matters. Maybe I’m on the pareto frontier of Bayesian statistics and gerontology, but if there’s one person just a little bit better at statistics and worse at gerontology than me, and another person just a little bit better at gerontology and worse at statistics, then GEM only gives me the advantage over a tiny little chunk of the skill-space.

I notice the converse of a multi-dimensional skillset is multi-dimensional assessment. In the same way it is hard to hire good programmers without knowing anything about programming, it will be hard for anyone else to assess a pareto-optimal product or skillset along multiple dimensions simultaneously.

It seems to me this challenge is pareto legibility. The more dimensions on the frontier, the noisier the assessment will necessarily be. This introduces a meta-problem where one of the skills on which you want to get good-enough is making your pareto frontier position legible enough for others to benefit from it.

As a practical matter this doesn't seem like that big a deal for consumer goods like books, where even laypeople can take reviews of "X about this book was so good" and "I liked Y about this book" and round this off into a feeling of "muchly good." By contrast, legibility seems exceptionally important for something like the econometric modeling applied to proteomics example.