Posts

A Personal (Interim) COVID-19 Postmortem 2020-06-25T18:10:40.885Z · score: 164 (62 votes)
Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? 2020-04-27T22:43:26.034Z · score: 39 (10 votes)
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics 2020-03-09T06:59:19.610Z · score: 35 (14 votes)
Ineffective Response to COVID-19 and Risk Compensation 2020-03-08T09:21:55.888Z · score: 29 (15 votes)
Link: Does the following seem like a reasonable brief summary of the key disagreements regarding AI risk? 2019-12-26T20:14:52.509Z · score: 11 (5 votes)
Updating a Complex Mental Model - An Applied Election Odds Example 2019-11-28T09:29:56.753Z · score: 10 (4 votes)
Theater Tickets, Sleeping Pills, and the Idiosyncrasies of Delegated Risk Management 2019-10-30T10:33:16.240Z · score: 26 (14 votes)
Divergence on Evidence Due to Differing Priors - A Political Case Study 2019-09-16T11:01:11.341Z · score: 27 (11 votes)
Hackable Rewards as a Safety Valve? 2019-09-10T10:33:40.238Z · score: 18 (5 votes)
What Programming Language Characteristics Would Allow Provably Safe AI? 2019-08-28T10:46:32.643Z · score: 5 (5 votes)
Mesa-Optimizers and Over-optimization Failure (Optimizing and Goodhart Effects, Clarifying Thoughts - Part 4) 2019-08-12T08:07:01.769Z · score: 17 (9 votes)
Applying Overoptimization to Selection vs. Control (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 3) 2019-07-28T09:32:25.878Z · score: 19 (6 votes)
What does Optimization Mean, Again? (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 2) 2019-07-28T09:30:29.792Z · score: 29 (6 votes)
Re-introducing Selection vs Control for Optimization (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 1) 2019-07-02T15:36:51.071Z · score: 31 (7 votes)
Schelling Fences versus Marginal Thinking 2019-05-22T10:22:32.213Z · score: 23 (14 votes)
Values Weren't Complex, Once. 2018-11-25T09:17:02.207Z · score: 34 (15 votes)
Oversight of Unsafe Systems via Dynamic Safety Envelopes 2018-11-23T08:37:30.401Z · score: 11 (5 votes)
Collaboration-by-Design versus Emergent Collaboration 2018-11-18T07:22:16.340Z · score: 12 (3 votes)
Multi-Agent Overoptimization, and Embedded Agent World Models 2018-11-08T20:33:00.499Z · score: 9 (4 votes)
Policy Beats Morality 2018-10-17T06:39:40.398Z · score: 15 (15 votes)
(Some?) Possible Multi-Agent Goodhart Interactions 2018-09-22T17:48:22.356Z · score: 21 (5 votes)
Lotuses and Loot Boxes 2018-05-17T00:21:12.583Z · score: 29 (6 votes)
Non-Adversarial Goodhart and AI Risks 2018-03-27T01:39:30.539Z · score: 65 (15 votes)
Evidence as Rhetoric — Normative or Positive? 2017-12-06T17:38:05.033Z · score: 1 (1 votes)
A Short Explanation of Blame and Causation 2017-09-18T17:43:34.571Z · score: 1 (1 votes)
Prescientific Organizational Theory (Ribbonfarm) 2017-02-22T23:00:41.273Z · score: 3 (4 votes)
A Quick Confidence Heuristic; Implicitly Leveraging "The Wisdom of Crowds" 2017-02-10T00:54:41.394Z · score: 1 (2 votes)
Most empirical questions are unresolveable; The good, the bad, and the appropriately under-powered 2017-01-23T20:35:29.054Z · score: 7 (5 votes)
A Cruciverbalist’s Introduction to Bayesian reasoning 2017-01-12T20:43:48.928Z · score: 1 (2 votes)
Map:Territory::Uncertainty::Randomness – but that doesn’t matter, value of information does. 2016-01-22T19:12:17.946Z · score: 6 (11 votes)
Meetup : Finding Effective Altruism with Biased Inputs on Options - LA Rationality Weekly Meetup 2016-01-14T05:31:20.472Z · score: 1 (2 votes)
Perceptual Entropy and Frozen Estimates 2015-06-03T19:27:31.074Z · score: 17 (12 votes)
Meetup : Complex problems, limited information, and rationality; How should we make decisions in real life? 2013-10-09T21:44:19.773Z · score: 3 (4 votes)
Meetup : Group Decision Making (the good, the bad, and the confusion of welfare economics) 2013-04-30T16:18:04.955Z · score: 4 (5 votes)

Comments

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-07-07T12:59:49.117Z · score: 4 (2 votes) · LW · GW

I don't understand the hypothetical.

If every country in the world had closed their borders well enough to stop all movement before it left China, yes, spread would have been prevented. But that's infeasible even if there had been political will, since border closures are never complete, and there was already spread outside of China by mid-January.

Once there is spread somewhere, you can't reopen borders. And even if you keep them closed, no border closure is 100% effective - unless you have magical borders, spread will inevitably end up in your country. And at that point, countries are either ready to suppress domestic spread without closures, or they aren't, and end up closing later instead of earlier.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-07-07T08:18:44.472Z · score: 2 (1 votes) · LW · GW

In general, I think that earlier closures could have delayed spread enough to save lives, by getting vaccine development and testing capacity further along than they were.

I'm also claiming that now, with a fully in place and adequate test-and-trace program, including screening for passengers and isolation for positives, border closures have low marginal benefit. Without such a test and trace program, travel modifies the spread dynamics by little enough that it won't matter for places that don't have spread essentially controlled. The key case where it would matter is if the border closures delayed spread by long enough to put in place such systems, in which case they would have been very valuable. And yes, border closures in place have allowed this in some places, but certainly not the US or UK.

So, conditional on the policy failures, I think border closures were effectively only a way to signal, and if they distracted from putting in place testing and other systems by even a small amount, they were net negative.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-07-07T08:10:02.137Z · score: 2 (1 votes) · LW · GW

See the back-and-forth with John Wentworth in the comments earlier - https://www.lesswrong.com/posts/B7sHnk8P8EXmpfyCZ/a-personal-interim-covid-19-postmortem?commentId=ntGR3rpnSW6yKRoAP

Comment by davidmanheim on When a status symbol loses its plausible deniability, how much power does it lose? · 2020-07-07T08:07:44.659Z · score: 4 (3 votes) · LW · GW

The claim that Harvard is just a status symbol is that the entire variance in success from attending Harvard is explained by the two factors of 1) the characteristics of individual people entering the program, and 2) the prestige from being able to claim they graduated.

This seems implausible - so to extend this, I'd say all of the variance can be explained by those two plus a third factor, 3) the value of networking with Harvard students, faculty and staff.

In either case, the central point is that the benefit from the services provided by Harvard is unrelated to the education they claim to provide.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-07-05T18:23:47.784Z · score: 2 (1 votes) · LW · GW

Again, it didn't actually stop spread - it slowed it slightly. Borders haven't been actually closed. Flights have continued, you just need connections to get a visa. But people have been able to return home - and dual citizens have been able to travel both ways - the entire time.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-07-04T19:43:58.658Z · score: 4 (2 votes) · LW · GW

Assuming away the political problem of making it stick, it seems clear that without universal border closures by countries, it would have made only a minor difference in spread - most cases that came to Europe, the US, and elsewhere didn't come from China.

If some set of countries were willing to completely shut down all borders, those countries might have avoided infections - might, but I'm skeptical. Even now, the countries that shut down international travel still have a fair amount of international travel, from diplomatic travel to repatriation of citizens to shipping and trucking. So it could plausibly have delayed spread by a month. In places that mounted a really effective response, a month might have made the difference between slow control and faster control. In most places, I think it would have shifted spread a couple weeks later.

Comment by davidmanheim on Conditions for Mesa-Optimization · 2020-07-02T20:56:37.799Z · score: 4 (2 votes) · LW · GW

The better link is the final version - https://www.mdpi.com/2504-2289/3/2/21/htm

The link in the original post broke because it included the trailing period by accident.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-07-02T20:55:09.425Z · score: 3 (2 votes) · LW · GW

I agree that it could be a death spiral, and think the caution is in general warranted. My personal situation was one where I had fairly little personal interaction with members of the community - though this is likely less true now - but that was why I decided that explicitly considering the consensus opinions was reasonable.

Comment by davidmanheim on Betting with Mandatory Post-Mortem · 2020-06-30T07:38:31.137Z · score: 2 (1 votes) · LW · GW

"The more interesting thing is when you make a bet where a negative outcome should force a large update."

I think that's what odds are for. If you're convinced (incorrectly) that something is very unlikely, you should be willing to give large odds. You can't really say "I thought this was 40% likely, and I happened to get it wrong" if you gave 5:1 odds initially.

(And on the other side, the person who took the bet should absolutely say they are making a small update towards the other model, because it's far weaker evidence for them.)
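As an illustrative sketch of the asymmetry being described (all numbers here are hypothetical, not from the comment): the odds you offer encode how strongly the outcome should move you, via the likelihood ratio.

```python
# Hypothetical example: I offered 5:1 odds against an event (implying
# P = 1/6), while the other bettor believed P = 0.6.
p_mine = 1 / 6
p_theirs = 0.6

# If the event occurs, the Bayes factor favoring their model is the
# likelihood ratio: the event was 3.6x as probable under their beliefs.
bf_if_event = p_theirs / p_mine                  # 3.6

# If it does not occur, the evidence for my model is weaker - which is
# why the winner of such a bet should make only a small update.
bf_if_no_event = (1 - p_mine) / (1 - p_theirs)   # ~2.08
print(bf_if_event, bf_if_no_event)
```

The point is that giving 5:1 odds and losing is much stronger evidence against your model than winning at those odds is evidence for it.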

Comment by davidmanheim on Betting with Mandatory Post-Mortem · 2020-06-30T07:35:29.350Z · score: 6 (1 votes) · LW · GW

"In the absence of an oracle, I would end up writing up praise for, and updating towards, your more wrong model, which is obviously not what we want."

Perhaps I'm missing something, but I think that's exactly what we want. It leads to eventual consistency / improved estimates of odds, which is all we can look for without oracles or in the presence of noise.

First, strength of priors will limit the size of the bettor's updates. Let's say we both used beta distributions, and had weak beliefs. Your prior was Beta(4,6), and mine was Beta(6,4). These get updated to B(5,6) and B(7,4). That sounds fine - you weren't very sure initially, and you still won't over-correct much. If the priors are stronger, say, B(12,18) and B(18,12), the updates are smaller as well, as they should be given our clearer world models and less willingness to abandon them due to weak evidence.
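The beta-updating example above can be computed directly; this sketch just reproduces the numbers in the comment (a single observed success added to each bettor's alpha).

```python
def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) belief about the event probability."""
    return a / (a + b)

# Weak priors: Beta(4,6) and Beta(6,4); both observe one success.
weak_before = beta_mean(4, 6)     # 0.40
weak_after = beta_mean(5, 6)      # ~0.455

# Stronger priors with the same means: Beta(12,18) and Beta(18,12).
strong_before = beta_mean(12, 18)  # 0.40
strong_after = beta_mean(13, 18)   # ~0.419

# The stronger prior shifts less on identical evidence.
print(weak_after - weak_before, strong_after - strong_before)
```

As the comment argues, the same single data point moves the weak prior by about 0.055 but the strong prior by only about 0.019.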

Second, we can look at the outside observer's ability to update. If the expectation is 40% vs. 60%, unless there are very strong priors, I would assume neither side is interested in making huge bets, or giving large odds - that is, if this bet would happen at all, given transaction costs, etc. This should implicitly limit the size of the update other people make from such bets.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-06-28T08:20:06.335Z · score: 5 (3 votes) · LW · GW

There is a lot here to reply to, and I'm only going to address a few points.

First, on forecasting, there is a lot to discuss, but I think John Wentworth's comment and my reply are all that I have to say about this for now.

Second, on Government response, I'm also unsure how much we disagree. I definitely think that I have a number of useful insights about institutions, but this is an area where expertise seems to be non-predictive. That means I'm less sure how valuable it is - but I discussed this in more depth here, on Ribbonfarm. That said, I'll make comments anyways.

I agree that many countries were underprepared, but they also historically relied on American leadership for many of these types of events. America was the acknowledged world leader in biodefense and preparation, has spent more time and money on the problem than elsewhere, and has much more money and expertise than most places - so the failure is much more noteworthy than it otherwise would be.

I also think the EU "failures" should be counted as partial successes, since they mostly have case counts declining, and are well prepared to avoid the worst of a possible second wave. That's a solid half credit in an absolute sense, since they seem poised to have gotten it under control before it ended up everywhere, though they didn't catch it early enough to prevent spread at first, which would have been the goal. The US (and to a lesser extent, the UK) didn't manage to control things enough to even get past the first wave, and are poised to fail all the way to herd immunity in most places - a shocking level of failure, especially given how well other countries have managed this.

For counterfactual predictions, on B, if the US did as well as Germany, Japan, France, and other G-7 nations, they would have kept deaths under 20,000, or at least around there. I'd give at least 50% to keeping it below 20k so far. (I'm unsure how bad the Republican Governors would have made this, or what the rest of the world looks like under Clinton. Would the Chinese have cooperated earlier? Counterfactual predictions this far back are basically about writing an alternative timeline - there are WAY too many potential issues to really consider well.) But the epidemic seems under control in the EU, contra the US. So that seems like the relevant counterfactual. (Aside: It seems non-coincidental, though a surprisingly strong effect, that right-wing populist leaders are especially bad at controlling infectious diseases - BoJo, Trump, and Putin all got this very, very wrong. I think the default reaction of trying to control the narrative over dealing with problems is a particularly dangerous approach with infectious diseases.) And for the A counterfactual, it's similar, but with 20+% probability mass on "this was stopped enough before it left China that there was no pandemic."

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-06-27T18:39:51.865Z · score: 2 (3 votes) · LW · GW

Yes - it took me until mid or late March to be fully on board. See my comment here to a post arguing for pushing handwashing instead of suggesting masks, which I changed my mind about in mid to late March.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-06-27T18:35:36.794Z · score: -1 (2 votes) · LW · GW

Agreed - but **for protecting the wearer alone**, I'd say that 10% more handwashing by most people would easily beat 50% more mask wearing.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-06-27T18:34:20.542Z · score: -1 (5 votes) · LW · GW

I think Tyler is far more impressed by himself and his discipline than he should be. There's a saying about economists making fortune tellers look good that seems appropriate here. And he probably shouldn't be posting insulting things about epidemiologists in the same breath as saying most economists are just as bad - which he followed up by saying he wants to be rude by asking questions he could have answered with half an hour of googling - he hadn't even done basic research. I also think that people on LessWrong give too little credit to public health officials for being properly cautious about overreacting, especially given that even for COVID-19, many people are saying that we went too far, and the economic harms were not worth the damage averted.

Also see this thread: https://twitter.com/davidmanheim/status/1235274008142270466

Next, should academics and public servants in epidemiology simply be paid more? No, and no. If anything, there is not enough disincentive to enter academia, since there are so many more good applicants than spots, across disciplines. Something else needs to be fixed there first. (Everything, actually.) And government isn't set up well to pay people more in ways that get better candidates - doubling salaries wouldn't be enough to get anyone more competent to run for the Senate, much less become a senior government appointee, unless they already wanted to do that and didn't actually care about the money. (There are other ways we underpay and sabotage government that money could fix, but that's a different discussion.) And I'm surprised that an economist doesn't know enough about these structures to see why higher pay isn't a useful lever.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-06-27T18:15:18.947Z · score: 0 (2 votes) · LW · GW

I mean now - it's clear that masks are not particularly effective at preventing people from getting COVID, and are somewhat but not very effective at preventing people who have COVID from infecting others. That's enough to be incredibly important at a population level, which makes them a key tool, but it's nothing like what proponents had been claiming.

Comment by davidmanheim on Is there a good way to simultaneously read LW and EA Forum posts and comments? · 2020-06-26T15:53:18.490Z · score: 2 (1 votes) · LW · GW

Yeah, Wordpress isn't the best platform for this.

I could imagine clear ways of doing this by having, say, Python scripts to ingest the data, running hourly on cloud servers, and then producing RSS feeds that could then be used in Wordpress - but I'm guessing there are people on here who would have far better ideas for how to engineer this.
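The ingestion step described above can be sketched with the standard library alone. This is a minimal toy, not a working aggregator: the feed XML is inline and hypothetical, a real script would fetch the LW and EA Forum feeds over HTTP on a schedule, parse dates properly, and deduplicate.

```python
# Minimal sketch: merge items from two RSS 2.0 documents into one feed,
# newest first. Feed contents below are hypothetical stand-ins.
import xml.etree.ElementTree as ET

def items(rss_text):
    """Extract the <item> elements from an RSS 2.0 document."""
    return ET.fromstring(rss_text).findall("./channel/item")

def merge_feeds(feeds, title="Combined feed"):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    all_items = [it for f in feeds for it in items(f)]
    # Sort newest-first by pubDate text; a real script would parse dates.
    all_items.sort(key=lambda it: it.findtext("pubDate", ""), reverse=True)
    for it in all_items:
        channel.append(it)
    return ET.tostring(rss, encoding="unicode")

feed_a = ("<rss><channel><item><title>LW post</title>"
          "<pubDate>2020-06-25</pubDate></item></channel></rss>")
feed_b = ("<rss><channel><item><title>EA post</title>"
          "<pubDate>2020-06-26</pubDate></item></channel></rss>")
combined = merge_feeds([feed_a, feed_b])
```

The merged XML could then be served wherever Wordpress (or any reader) expects an RSS URL.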

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-06-26T15:50:39.964Z · score: 8 (5 votes) · LW · GW

Thank you - and I strongly endorse this answer. And now that you point this out, I realize that it should have been clear. I have speculated in the past that a large part of the value of Superforecasting is that there are people actually motivated to investigate and do the expensive updating. I have also said that I'm unsure how worthwhile it is to pay for the time of the types of people who can superforecast. This seems like a clear case where it would be worthwhile, if only it worked.

Given that, I think there's a strong case that we need large rewards for early correct updates away from consensus, especially for very rare events. (In a case like COVID, the value of faster information is in the tens or hundreds of billions of dollars. A tiny fraction of that would be more than enough.) But the typical time-weighted forecast scores don't account for heterogeneous update costs or give sufficient reward to figuring it out a day sooner than the average - though metaculus's score and the scoring Ozzie Gooen has looked at are trying to do this better. This seems very worth more consideration.
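To illustrate why typical time-averaged scores under-reward early updates (numbers here are made up for the illustration, not any platform's actual scoring rule):

```python
# Average daily Brier score over a 5-day question that resolves YES.
# Lower is better. One forecaster updates on day 2, the other on day 5.
def avg_brier(probs, outcome=1.0):
    return sum((p - outcome) ** 2 for p in probs) / len(probs)

early = [0.2, 0.8, 0.8, 0.8, 0.8]   # moved away from consensus early
late  = [0.2, 0.2, 0.2, 0.2, 0.8]   # same final answer, four days later

print(avg_brier(early), avg_brier(late))  # 0.16 vs 0.52
```

The early updater does score better here, but the per-day reward for being four days ahead is modest relative to the real-world value of that information, which is the gap the comment points at.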

Comment by davidmanheim on Is there a good way to simultaneously read LW and EA Forum posts and comments? · 2020-06-26T09:07:38.276Z · score: 4 (2 votes) · LW · GW

Given OP's question, an obvious, if perhaps annoying / difficult, idea to implement is to have an expandable [+] next to each post which allows seeing it on that site, and a nested expandable button to see comments.

I'm unsure how happy or unhappy people who are indexed would be with this for non EA Forum / LW blogs.

Comment by davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2020-06-26T08:37:42.309Z · score: -2 (6 votes) · LW · GW

You said that "In epidemiology it is a basic fact in the 101 textbook that slowing long distance transmission (using quarantines / travel restrictions) is very important." The parentheses make the statement incorrect. Obviously there are discussions of this, but I just checked my copy of "Modern Infectious Disease Epidemiology: Concepts, Methods, Mathematical Models, and Public Health." It discusses travel and the contribution to spread, but mostly focuses on the way IHR limits the imposition of travel bans, and why such bans are considered problematic. It does mention quarantines and travel restrictions, but they aren't the key tools that are recommended.

Also, you said "I would be interested in some justification of the claim that face masks are not very useful." That isn't what I said. I said that "mask wearing by itself is only marginally effective." See this FHI paper, which estimated, albeit with very low confidence, that mask policies were almost entirely ineffective - far more pessimistic than my claim. That paper is likely to be understating the impact, as its authors admit. It seems clear that mask wearing reduces spread somewhat, but note that this is because it reduces spread from infectious individuals, especially pre-symptomatic and asymptomatic people, not because it protects mask wearers. The early skepticism was in part based on the assumption, which in March seemed to have been shared by both promoters and skeptics, that the benefit was that masks were individually protective, rather than that they helped population-level spread reduction. It turns out that (contra the FHI paper) there seems to be some impact helping spread reduction. Even so, it's not enough to bring R below 1 without other interventions, either closures or an effective test-and-trace program, as our forthcoming paper argues. (I will also note that one key thing that changed from that pre-print version is that reviewers pointed out we were likely too optimistic in our estimate of mask effectiveness, and the literature supports much smaller impacts.)
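The "not enough to bring R below 1 alone" claim is just multiplicative arithmetic; this sketch uses illustrative effect sizes that are assumptions, not figures from the FHI paper or the forthcoming one.

```python
# Toy model: each intervention multiplies the reproduction number by
# (1 - effect). Effect sizes below are illustrative assumptions.
r0 = 2.5
mask_effect = 0.15            # assumed modest population-level reduction
trace_effect = 0.60           # assumed effective test-and-trace program

masks_only = r0 * (1 - mask_effect)              # 2.125: still above 1
combined = masks_only * (1 - trace_effect)       # 0.85: below 1
print(masks_only, combined)
```

Even doubling the assumed mask effect leaves masks-only R well above 1 from an R0 of 2.5, which is the structural point: masks help, but only in combination with other interventions.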

EDIT: I notice I am confused about why people downvote comments that make substantive points without replying. If the tone or substance is problematic, I certainly think downvotes are acceptable, but I think the norm is supposed to be that you also tell people what you think they did wrong.

Comment by davidmanheim on Don't punish yourself for bad luck · 2020-06-25T18:39:40.259Z · score: 4 (2 votes) · LW · GW

You're right - but the basic literature on principal-agent dynamics corrects this simple model to properly account for non-binary effort and luck, and I think that is the better model for looking at luck and effort.

Comment by davidmanheim on Credibility of the CDC on SARS-CoV-2 · 2020-06-25T18:26:58.622Z · score: 30 (11 votes) · LW · GW

I want to apologize, and make sure there is a clear record of what I think both on the object level, and about my comment, in retrospect. (For other mistakes I made, not related to this comment, see here.)

I handled this very poorly, and wasted a significant amount of people's time. I still think that the claims in the post were materially misleading (and think some of the claims still are, after edits). The authors replaced the section saying not to listen to the CDC with a very different disclaimer, which now says: "Notably we’re not saying any of the things they do recommend are bad." I think we should have a clear norm that potentially harmful posts need a much greater degree of caution than this one displayed. But calling for it to be removed was stupid.

Above and beyond my initial comment, critically, I screwed up by being pissed off and responding angrily below about what I saw as an uninformed and misleading post, and continued to reply to comments without due consideration of the people involved in both the original post and the comments. This was in part due to personal biases, and in part due to personal stress, which is not an excuse. This led to what can generously be described as a waste of valuable people's time, at a particularly bad time. I have apologized to some of those involved already, but wanted to do so publicly here as well.

Reviewing the arguments

I initially said the post should have been removed. I also used the term "infohazard" in a way that was alarmist - my central claim was that it was damaging and misleading, not that it was an infohazard in the global catastrophic risk sense that people assumed.

Several counterarguments and responses to my claim that the post should be taken down were advanced, which follow. I originally responded poorly, so I wanted to review them here, along with my view on the strength of each claim.

1) I should not have been a jerk.

I was dismissive and annoyed about what seemed to me to be many obvious factual errors. My attitude was a mistake. It was also stupid for a number of reasons, and at the very least I should have contacted the authors directly and privately, and been less confrontational. Again, I apologize.

2) Telling people to check with others before posting, and threatening to remove posts which were not so checked, is censorship, which is harmful in other ways.

As I mentioned above, saying the post should be removed was stupid, but I maintain, as I did then, that when a person is unsure about whether saying something is a good idea, and it is consequential enough to matter, they should ask for some outside advice. I think this should be a basic norm, one that lesswrong and the rationality community should not just recommend but where feasible, should try to enforce. I do think that there was a reasonable sense of urgency in getting the message out in this case, and that excuses some level of failure to vet the information carefully.

3) We should encourage people to say true things even when harmful, or as one person said "I'd want people to err heavily on the side of sharing information even if it might be dangerous."

This stops short of Nietzschean honesty, but I still don't think this holds up well. First, as I said, I think the post was misleading, so this simply does not apply. But the discussion in the comments and privately pushed on this more, and I think it's useful to clarify what I claimed. I agree that we should not withhold information which could be important because of a vague concern, and if this post were correct, it would fall under that umbrella. However, what the post seemed to me to be trying to do was collect misleading statements to make it clearer that a bad organization is, in fact, bad - playing level 2 regardless of truth. That seems obviously unacceptable. I do not think lying is acceptable to pursue level-2 goals in Zvi's explanation of Simulacra, except in dire circumstances.

But the principle advocated here says to default to level-1 brutal / damaging honesty far more often than I think is advisable, not to lie. My initial impression was that the CDC was doing far better than it in fact was, and that the negative impacts of the post were greatly under-appreciated.

I can understand why the balance of how much truth to say when the effect is damaging is critical, and think that Lesswrong's norms are far better than those elsewhere. I agree that the bare minimum of not actively lying is insufficient, but as I said above, I disagree with others about how far to go in saying things that might be harmful because they are true.

4) We should not attempt to play political games by shielding bad organizations and ignoring or obscuring the truth in order to build trust incorrectly.

I think this is a claim that people should never play level 3. I endorse this. I agree that I was attempting to defend an institution that was doing poorly from claims that it was doing poorly, on the basis that a significant fraction of those claims were unfair. As I said above, this would be a defense. In retrospect, the organization was far worse than I thought at the time, as I realized far too late, and discussed more here. On the other hand, many of the claims were in fact misleading, and I don't think that false attacks on bad things are OK either.

Comment by davidmanheim on Don't punish yourself for bad luck · 2020-06-25T06:27:28.493Z · score: 4 (4 votes) · LW · GW

To nitpick on your throwaway Ringworld reference, that's exactly the opposite of the point. Other humans don't benefit from the fact that the Ringworld is going to shield Teela Brown from the core explosion. She would be the person who accidentally bought zoom stock in January because it sounded like a cool company name, or the immortal baby from an unreproducible biomedical research accident that is prompted by post COVID-19 research funding, probably extra lucky to be living in a mostly-depopulated high-technology world due to massive death tolls from some other disaster.

Comment by davidmanheim on Don't punish yourself for bad luck · 2020-06-25T06:21:10.855Z · score: 2 (1 votes) · LW · GW

I disagree, and that's my central issue with the post.

"So that is the irony of the situation: An optimal contract punishes you for bad luck, and for nothing else."

The post gets this exactly backwards - the optimal contract exactly balances punishing lack of effort and bad luck, in a way that the employer is willing to pay as much as the market dictates for that effort under the uncertainty that exists.

Comment by davidmanheim on Is there a good way to simultaneously read LW and EA Forum posts and comments? · 2020-06-25T06:16:42.156Z · score: 5 (3 votes) · LW · GW

Check out http://eablogs.net/, which aggregates both of them, plus more - and amusingly, I was recently pointed to it, which is how I found this post. (But it doesn't have comments, obviously, so it's not a full solution for you.)

Comment by davidmanheim on Plausible cases for HRAD work, and locating the crux in the "realism about rationality" debate · 2020-06-24T17:34:34.675Z · score: 9 (2 votes) · LW · GW

(I really like this post, as I said to Issa elsewhere, but) I realized after discussing this earlier that I don't agree with a key part of the precise vs. imprecise model distinction.

A precise theory is one which can scale to 2+ levels of abstraction/indirection.
An imprecise theory is one which can scale to at most 1 level of abstraction/indirection.

I think this is wrong. More levels of abstraction are worse, not better. Specifically, if a model exactly describes a system on one level, any abstraction will lose predictive power. Ignoring computational cost (which I'll get back to), quantum theory is more specifically predictive than Newtonian physics. The reason that we can move up and down levels is that we understand the system well enough to quantify how much precision we are losing, not that we can move further without losing precision.

The reason that precise theories are better is because they are tractable enough to quantify how far we can move away from them, and how much we lose by doing so. The problem with economics isn't that we don't have accurate enough models of human behavior to aggregate them, but that the inaccuracy isn't precise enough to allow understanding how the uncertainty from psychology shows up in economics. For example, behavioral economics is partly useless because we can't build equilibrium models - and the reason is that we can't quantify how they are wrong. For economics, we're better off with the worse model of rational agents, which we know is wrong, but can kind-of start to quantify by how much, so we can do economic analyses.

Comment by davidmanheim on The ground of optimization · 2020-06-24T17:15:26.290Z · score: 4 (2 votes) · LW · GW

I think this is covered in my view of optimization via selection, where "direct solution" is the third option. Any one-shot optimizer is implicitly relying on an internal model completely for decision making, rather than iterating, as I explain there. I think that is compatible with the model here, but it needs to be extended slightly to cover what I was trying to say there.

Comment by davidmanheim on The ground of optimization · 2020-06-24T17:12:07.715Z · score: 8 (2 votes) · LW · GW

I think this is great.

I would want to relate it to a few key points out which I tried to address in a few earlier posts. Principally, I discussed selection versus control, which is about the difference between what optimization does externally, and how it uses models and testing. This related strongly to your conception of an optimizing system, but focused on how much of the optimization process occurs in the system versus in the agent itself. This is principally important because of how it relates to misalignment and Goodharting of various types.

I had hoped to further apply that conceptual model to mesa-optimization, but I was a bit unsure how to think about it, and have been working on other projects. At this point, I think your discussion is probably a better conceptual model than the one I was trying to build there - it just needs to be slightly extended to cover the points I was trying to work out in those posts. I'd like to think about how it relates to mesa-optimization as well, but I'm unlikely to actually work on that.

Comment by davidmanheim on Source code size vs learned model size in ML and in humans? · 2020-05-20T10:35:46.186Z · score: 13 (5 votes) · LW · GW

2 points -

First, this information will be hard to compile because of the way the systems work, but it seems like a very useful exercise. I would add that the program complexity should include some measure of the "size" of the hardware architecture, as well as the libraries, etc. used.

Second, I think that for humans, the relevant size is not just the brain, but the information embedded in the cultural process used for education. This seems vaguely comparable to training data and/or architecture search for ML models, though the analogy should probably be clarified.

Comment by davidmanheim on Goodhart Taxonomy · 2020-05-20T09:12:42.402Z · score: 4 (2 votes) · LW · GW

Note to add: We did formalize this more, and it has been available on Arxiv for quite a while.

Comment by davidmanheim on Goodhart Taxonomy · 2020-05-20T09:10:03.889Z · score: 8 (2 votes) · LW · GW

Note that this post has been turned into a paper, which expands on the ideas, and incorporates some more details.

(Scott - should you edit the post to link to the paper?)

Comment by davidmanheim on How to learn from a stronger rationalist in daily life? · 2020-05-20T08:57:08.397Z · score: 3 (2 votes) · LW · GW

I don't know of specific 2-person exercises, in part because I think most of the benefits are from personal practice rather than interaction. But you should certainly ask for general feedback on what they think you can improve in your life, and talk through things with others when you are feeling confused - and people who are good at thinking clearly / rationally are valuable partners for doing that.

Comment by davidmanheim on What can currently be done about the "flooding the zone" issue? · 2020-05-20T08:51:58.870Z · score: 7 (2 votes) · LW · GW

"Flooding the zone" is straight out of the Russian disinformation playbook:

https://www.rand.org/pubs/perspectives/PE198.html (Disclosure: I worked with him on a couple of semi-related projects at RAND.)

Unfortunately, we don't have great answers for how to respond.

Comment by davidmanheim on Covid-19 Points of Leverage, Travel Bans and Eradication · 2020-05-17T19:02:16.203Z · score: 2 (1 votes) · LW · GW

Right, meaning that the population fatality rate looks like it will end up close to 0.1%, so saying 20% would need medical support to survive is incredibly alarmist.

Comment by davidmanheim on COVID-19 from a different angle · 2020-05-05T08:45:51.221Z · score: 2 (1 votes) · LW · GW

The death rate per thousand is at least approximately correct, and doesn't imply people live to 120. You can't infer time to death by just dividing, because the population is not evenly distributed across ages - partly because of birth cohort sizes, and partly because people die as they age, so younger people are always over-represented.
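To see why naively inverting the crude death rate overstates lifespan, here is a toy sketch with made-up numbers: everyone dies at exactly 80, but 1% annual population growth (a hypothetical figure) makes younger cohorts larger, so the crude rate suggests people live well past 120.

```python
# Toy model, purely illustrative: everyone dies at exactly age 80,
# but the population grows 1% per year, so younger cohorts are larger
# and deaths are a smaller share of the population than 1/80.
LIFE_SPAN = 80
GROWTH = 0.01  # hypothetical annual population growth rate

# Cohort sizes: each younger cohort is (1 + GROWTH) times larger.
cohorts = [(1 + GROWTH) ** (LIFE_SPAN - age) for age in range(LIFE_SPAN)]
population = sum(cohorts)
deaths_per_year = cohorts[-1]  # only the oldest cohort dies this year

crude_death_rate = deaths_per_year / population
naive_lifespan = 1 / crude_death_rate  # the fallacious "just divide" estimate

print(f"true lifespan: {LIFE_SPAN}, naive 1/CDR estimate: {naive_lifespan:.0f}")
```

In a stationary population (GROWTH = 0) the naive estimate would be exactly 80; the cohort imbalance alone pushes it past 120.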

Comment by davidmanheim on COVID-19 from a different angle · 2020-05-05T08:40:22.942Z · score: 2 (1 votes) · LW · GW

The last several weeks of data isn't entered yet - it takes time for them to get and enter death certificates.

" *Data during this period are incomplete because of the lag in time between when the death occurred and when the death certificate is completed, submitted to NCHS and processed for reporting purposes. This delay can range from 1 week to 8 weeks or more, depending on the jurisdiction, age, and cause of death. "

Comment by davidmanheim on COVID-19 from a different angle · 2020-05-04T20:22:21.255Z · score: 2 (1 votes) · LW · GW

You'll probably be interested in Good Judgement COVID-19 Dashboard, which asks "How many people will die in the U.S. in 2020 relative to 2019, according to the Centers for Disease Control and Prevention (CDC)?"

(Especially see the comments.)

Comment by davidmanheim on COVID-19: An opportunity to help by modelling testing and tracing to inform the UK government · 2020-04-19T16:41:33.114Z · score: 8 (2 votes) · LW · GW

Having done a large part of my dissertation doing infectious disease modelling in STAN, I'd be happy to work on this, but I doubt it is the best tool for modeling the type of interventions they are discussing.

Comment by davidmanheim on An Orthodox Case Against Utility Functions · 2020-04-16T13:51:44.428Z · score: 4 (2 votes) · LW · GW

2 points about how I think about this that differ significantly. (I just read up on Bolker and Jeffrey, as I was previously unfamiliar.) I had been thinking about writing this up more fully, but have been busy. (If people think it's worthwhile, tell me and I will be more likely to do so.)

First, utility is only ever computed over models of reality, not over reality itself, because it is a part of the decision making process, not directly about any self-monitoring or feedback process. It is never really evaluated against reality, nor does it need to be. Evidence for this in humans is that people suck at actually noticing how they feel, what they like, etc. The updating of their world model is a process that happens alongside planning and decision making, and is only sometimes actively a target of maximizing utility because people's model can include correspondence with reality as a goal. Many people simply don't do this, or care about map/reality correspondence. They are very unlikely to read or respond to posts here, but any model of humans should account for their existence, and the likely claim that their brains work the same way other people's brains do.

Second, Jeffrey's "News Value" is how he fits in a relationship between utility and reality. As mentioned, for many people their map barely corresponds to the territory, and they don't seem to suffer much. (Well, unless an external event imposes itself on them in a way that affects them in the present. And even then, how often do they update their model?) So I don't think Jeffrey is right. Instead, I don't think an agent could be said to "have" utility at all - utility maximization is a process, never an evaluated goal. The only reason reality matters is because it provides feedback to the model over which people evaluate utility, not because utility is lost or gained. I think this also partly explains happiness set points - as a point of noticing reality, humans are motivated by anticipated reward more than reward. I think the model I propose makes this obvious, instead of surprising.

Comment by davidmanheim on Seemingly Popular Covid-19 Model is Obvious Nonsense · 2020-04-14T11:31:52.157Z · score: 5 (3 votes) · LW · GW

Yes, the model isn't properly sensitive to uncertainties - but the projection that they are near zero isn't unreasonable, if transmission is stopped.

Comment by davidmanheim on Seemingly Popular Covid-19 Model is Obvious Nonsense · 2020-04-12T20:24:35.501Z · score: 10 (5 votes) · LW · GW

"If I am incorrect, and that is how any of this works I have some very, very large bets I would like to place."

Maybe you can state what bets you'd like to make? Are you predicting that the number of cases or deaths in, say, NYC will look very different from consensus estimates?

Comment by davidmanheim on Seemingly Popular Covid-19 Model is Obvious Nonsense · 2020-04-12T17:51:30.907Z · score: 1 (6 votes) · LW · GW

The problem the modelers have is how to account for reduced transmission in a continuous model. If you don't set it to zero, you can end up with 1/10,000th of a person still sick, and then the virus comes back full force a couple months later, despite having literally eradicated it. So yes, setting it to zero is wrong, but not doing so is also wrong. Because all models are wrong.

Perhaps you think they should be using an entirely different and more sophisticated model, and maybe they should, but it turns out that those have other drawbacks, like needing far more data than we have to calibrate and build, or needing you to make up inputs.
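A minimal discrete-time SIR sketch, with made-up parameters not calibrated to COVID, shows the artifact: under strong distancing the infected compartment falls to a tiny fraction of one person, and unless the model truncates it to zero, the epidemic "resurrects" itself once distancing lifts.

```python
# Minimal discrete-time SIR sketch. All parameters are hypothetical
# illustrations, not calibrated to any real epidemic.
def simulate(truncate):
    s, i, r = 0.999, 0.001, 0.0  # susceptible, infected, recovered fractions
    gamma = 0.1                  # recovery rate per day
    history = []
    for day in range(400):
        # strong distancing between days 20 and 200, open otherwise
        beta = 0.02 if 20 <= day < 200 else 0.3
        new_infections = beta * s * i
        s, i, r = s - new_infections, i + new_infections - gamma * i, r + gamma * i
        # with truncation, less than ~one person (in a population of
        # ~10 million) is rounded down to literal eradication
        if truncate and i < 1e-7:
            i = 0.0
        history.append(i)
    return history

rebound = max(simulate(truncate=False)[200:])
eradicated = max(simulate(truncate=True)[200:])
print(rebound, eradicated)
```

Without truncation, roughly a billionth of a person remains infected at day 200, and that sliver regrows into a full second wave; with truncation, the epidemic stays eradicated. Both behaviors are "wrong" in different scenarios, which is the modelers' dilemma.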

Comment by davidmanheim on April Coronavirus Open Thread · 2020-04-07T14:25:26.325Z · score: 5 (2 votes) · LW · GW

I've been forecasting a high probability that almost all of the low case-count growth in Africa and Southeast Asia is due to limited testing.

Comment by davidmanheim on Taking Initial Viral Load Seriously · 2020-04-06T11:35:32.123Z · score: 7 (4 votes) · LW · GW

I'm more concerned about increased rates of central nervous system impacts and cytokine storms, both of which are rare in typical COVID cases, but seem closely related to high fatality rates in the minority where they occur.

Comment by davidmanheim on Taking Initial Viral Load Seriously · 2020-04-05T19:00:13.487Z · score: 10 (4 votes) · LW · GW

It's unclear to me that you wouldn't end up with a worse clinical course in this case - perhaps you wouldn't, but I'm not sure why you'd assume it's safer.

Comment by davidmanheim on What are the costs, benefits, and logistics of opening up new vaccine facilities? · 2020-04-02T16:48:43.857Z · score: 2 (1 votes) · LW · GW

Unfortunately, 1bn doses is likely no more than a quarter of the world's need - though the need would be less if COVID is stopped in more places.

Comment by davidmanheim on What is the typical course of COVID-19? What are the variants? · 2020-04-01T08:10:16.462Z · score: 2 (1 votes) · LW · GW

See image here for a best-estimate of the course of infection. (Matches a number of other analyses, unfortunately doesn't have good representation of uncertainty.)

Comment by davidmanheim on What is the typical course of COVID-19? What are the variants? · 2020-04-01T08:05:46.722Z · score: 2 (1 votes) · LW · GW

They kept them there for long enough that this seems unlikely.

Comment by davidmanheim on LessWrong Coronavirus Agenda · 2020-03-31T07:14:22.518Z · score: 2 (1 votes) · LW · GW

Interesting - I'd ask Robin Hanson if that fits with his variolation suggestion.

Comment by davidmanheim on LessWrong Coronavirus Agenda · 2020-03-26T08:22:19.525Z · score: 6 (4 votes) · LW · GW

That's not quite right. I can't get to that book right now, but measles and mumps for MMR are also grown in chicken eggs, IIRC, as are herpes- and poxviruses, while cell lines and other media can be used to grow other viruses - but the remainder of the facilities are still similar, and can be repurposed.

But I agree that we do need new platform technologies.

Comment by davidmanheim on Thinking About Filtered Evidence Is (Very!) Hard · 2020-03-25T07:11:05.956Z · score: 6 (3 votes) · LW · GW

This seems related to my speculations about multi-agent alignment. In short, for embedded agents, keeping the complexity of building models of other decision processes tractable either requires a reflexively consistent view of their reactions to modeling my reactions to their reactions, etc. - or it requires simplification that clearly precludes ideal Bayesian agents. I made the argument much less formally, and haven't followed the math in the post above (I hope to have time to go through it more slowly at some point).

To lay it out here, the basic argument in the paper is that even assuming complete algorithmic transparency, in any reasonably rich action space, even games as simple as poker become completely intractable to solve. Each agent needs to simulate a huge space of possibilities for the decision of all other agents in order to make a decision about the probability that the agent is in each potential position. For instance, what is the probability that they are holding a hand much better than mine and betting this way, versus that they are bluffing, versus that they have a roughly comparable strength hand and are attempting to gauge my reaction, etc. But evaluating this requires evaluating the probability that they assign to me reacting in a given way in each condition, etc. The regress may not be infinite, because the space of states is finite, as is the computation time, but even in such a simple world it grows too quickly to allow fully Bayesian agents within the computational capacity of, say, the physical universe.
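To make the growth concrete, here is a back-of-the-envelope sketch with made-up numbers: if each level of mutual modeling ("what do they think I think they will do...") multiplies the required simulations by a branching factor of 1000 (hands times bet sizes, say - an assumed figure, not taken from the paper), the cost passes the roughly 1e80 atoms in the observable universe within a few dozen levels of regress.

```python
# Back-of-the-envelope sketch with assumed toy numbers. Each level of
# recursive opponent modeling multiplies the simulations needed by the
# branching factor (states/hands/strategies considered per level).
def simulations_needed(branching: int, depth: int) -> int:
    return branching ** depth

ATOMS_IN_UNIVERSE = 10 ** 80  # rough standard estimate

depth = 1
while simulations_needed(1000, depth) < ATOMS_IN_UNIVERSE:
    depth += 1
print(f"with branching 1000, the regress exceeds ~1e80 simulations at depth {depth}")
```

So even though the regress is finite, anything but an aggressively simplified model of the other agents is physically out of reach.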