Posts

Elicitation for Modeling Transformative AI Risks 2021-12-16T15:24:04.926Z
Modelling Transformative AI Risks (MTAIR) Project: Introduction 2021-08-16T07:12:22.277Z
Maybe Antivirals aren’t a Useful Priority for Pandemics? 2021-06-20T10:04:08.425Z
A Cruciverbalist’s Introduction to Bayesian reasoning 2021-04-04T08:50:07.729Z
Systematizing Epistemics: Principles for Resolving Forecasts 2021-03-29T20:46:06.923Z
Resolutions to the Challenge of Resolving Forecasts 2021-03-11T19:08:16.290Z
The Upper Limit of Value 2021-01-27T14:13:09.510Z
Multitudinous outside views 2020-08-18T06:21:47.566Z
Update more slowly! 2020-07-13T07:10:50.164Z
A Personal (Interim) COVID-19 Postmortem 2020-06-25T18:10:40.885Z
Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? 2020-04-27T22:43:26.034Z
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics 2020-03-09T06:59:19.610Z
Ineffective Response to COVID-19 and Risk Compensation 2020-03-08T09:21:55.888Z
Link: Does the following seem like a reasonable brief summary of the key disagreements regarding AI risk? 2019-12-26T20:14:52.509Z
Updating a Complex Mental Model - An Applied Election Odds Example 2019-11-28T09:29:56.753Z
Theater Tickets, Sleeping Pills, and the Idiosyncrasies of Delegated Risk Management 2019-10-30T10:33:16.240Z
Divergence on Evidence Due to Differing Priors - A Political Case Study 2019-09-16T11:01:11.341Z
Hackable Rewards as a Safety Valve? 2019-09-10T10:33:40.238Z
What Programming Language Characteristics Would Allow Provably Safe AI? 2019-08-28T10:46:32.643Z
Mesa-Optimizers and Over-optimization Failure (Optimizing and Goodhart Effects, Clarifying Thoughts - Part 4) 2019-08-12T08:07:01.769Z
Applying Overoptimization to Selection vs. Control (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 3) 2019-07-28T09:32:25.878Z
What does Optimization Mean, Again? (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 2) 2019-07-28T09:30:29.792Z
Re-introducing Selection vs Control for Optimization (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 1) 2019-07-02T15:36:51.071Z
Schelling Fences versus Marginal Thinking 2019-05-22T10:22:32.213Z
Values Weren't Complex, Once. 2018-11-25T09:17:02.207Z
Oversight of Unsafe Systems via Dynamic Safety Envelopes 2018-11-23T08:37:30.401Z
Collaboration-by-Design versus Emergent Collaboration 2018-11-18T07:22:16.340Z
Multi-Agent Overoptimization, and Embedded Agent World Models 2018-11-08T20:33:00.499Z
Policy Beats Morality 2018-10-17T06:39:40.398Z
(Some?) Possible Multi-Agent Goodhart Interactions 2018-09-22T17:48:22.356Z
Lotuses and Loot Boxes 2018-05-17T00:21:12.583Z
Non-Adversarial Goodhart and AI Risks 2018-03-27T01:39:30.539Z
Evidence as Rhetoric — Normative or Positive? 2017-12-06T17:38:05.033Z
A Short Explanation of Blame and Causation 2017-09-18T17:43:34.571Z
Prescientific Organizational Theory (Ribbonfarm) 2017-02-22T23:00:41.273Z
A Quick Confidence Heuristic; Implicitly Leveraging "The Wisdom of Crowds" 2017-02-10T00:54:41.394Z
Most empirical questions are unresolveable; The good, the bad, and the appropriately under-powered 2017-01-23T20:35:29.054Z
Map:Territory::Uncertainty::Randomness – but that doesn’t matter, value of information does. 2016-01-22T19:12:17.946Z
Meetup : Finding Effective Altruism with Biased Inputs on Options - LA Rationality Weekly Meetup 2016-01-14T05:31:20.472Z
Perceptual Entropy and Frozen Estimates 2015-06-03T19:27:31.074Z
Meetup : Complex problems, limited information, and rationality; How should we make decisions in real life? 2013-10-09T21:44:19.773Z
Meetup : Group Decision Making (the good, the bad, and the confusion of welfare economics) 2013-04-30T16:18:04.955Z

Comments

Comment by Davidmanheim on Elicitation for Modeling Transformative AI Risks · 2021-12-28T08:20:55.196Z · LW · GW

The model is available privately now, and I strongly agree that it's particularly important to do elicitations well!

Comment by Davidmanheim on Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? · 2021-12-26T16:34:00.851Z · LW · GW

This was a promising and practical policy idea, of a type that I think is generally under-provided by the rationalist community. Specifically, it attempts to actually consider how to solve a problem, instead of just diagnosing or analyzing it. Unfortunately, it took far too long to get attention, and the window for its usefulness has passed.

Comment by Davidmanheim on MichaelA's Shortform · 2021-12-26T15:23:47.987Z · LW · GW

Appendix D of this report informed a lot of work we did on this, and in decreasing order of usefulness, it lists Shafer's "Belief functions," Possibility Theory, and the "Dezert-Smarandache Theory of Plausible and Paradoxical Reasoning." I'd add "Fuzzy Sets" / "Fuzzy Logic."

(Note that these are all formalisms in academic writing that predate and anticipate most of what you've listed above, but are harder ways to understand it. Except DST, which is hard to justify except as trying to be exhaustive about what people might want to think about non-probability belief.)

Comment by Davidmanheim on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-26T12:57:31.475Z · LW · GW

You didn't respond to my comment that addressed this, but: "even granting prophecy, I think that there is no world in which even an extra billion dollars per year 2015-2020 would have been able to pay for enough people and resources to get your suggested change done. And if we had tried to push on the idea, it would have destroyed EA Bio's ability to do things now. And more critically, given any limited level of public attention and policy influence, focusing on mitigating existential risks instead of relatively minor events like COVID would probably have been the right move even knowing that COVID was coming!"

Comment by Davidmanheim on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-26T07:59:17.555Z · LW · GW

iGem seems to be a project about getting people to do more dangerous research, and not a project about reducing the amount of dangerous research that happens. Such an organization has bad incentives to take on the virology community to stop them from doing harm.


Did you look at what Open Philanthropy is actually funding? https://igem.org/Safety 

Or would you prefer that safety people not try to influence education and safety standards of people actually doing the work? Because if you ignore everyone with bad incentives, you can't actually change the behaviors of the worst actors.

Comment by Davidmanheim on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-26T07:21:22.816Z · LW · GW

Yes, a huge one.

"COVAX, the global program for purchasing and distributing COVID-19 vaccines, has struggled to secure enough vaccine doses since its inception...

Nearly 100 low-income nations are relying on the program for vaccines. COVAX was initially aiming to deliver 2 billion doses by the end of 2021, enough to vaccinate only the most high-risk groups in developing countries. However, its delivery forecast was wound back in September to only 1.425 billion doses by the end of the year.

And by the end of November, less than 576 million doses had actually been delivered."

Comment by Davidmanheim on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-24T09:33:16.227Z · LW · GW

The idea is that the extra production capacity funded with that $4b doesn't just move up access a few months for rich countries; it also means poor countries get enough doses in months, not years, and that there is capacity for making boosters, etc. (It's a one-time purchase to increase the speed of vaccines for the medium-term future. In other words, it changes the derivative, not the level or the delivery date.)

Comment by Davidmanheim on What’s Up With the CDC Nowcast? · 2021-12-24T09:27:00.625Z · LW · GW

If people have any confidence at all that the CDC is wrong, this market looks like free money. (Which is both further evidence for markets overinterpreting the most recent data, and evidence that the mostly-efficient prediction market thinks the crazily increasing numbers will actually mostly check out, at the same time.)

Comment by Davidmanheim on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-23T09:53:17.341Z · LW · GW

I can go through details, and you're wrong about what the mentioned orgs have done that matters, but even ignoring that, I strongly disagree about how we can and should push for better policy. I don't think that even unlimited funding (which we effectively had) could have supported enough people working on this to do what you suggest - we still don't have enough people for high-priority projects, despite, again, an effectively blank check! And you're suggesting that we should have prioritized a single task, stopping Chinese BSL-2 work, based purely on post-hoc information, instead of pursuing the highest-EV work as it was, IMO correctly, assessed at the time.

But even granting prophecy, I think that there is no world in which even an extra billion dollars per year 2015-2020 would have been able to pay for enough people and resources to get your suggested change done. And if we had tried to push on the idea, it would have destroyed EA Bio's ability to do things now. And more critically, given any limited level of public attention and policy influence, focusing on mitigating existential risks instead of relatively minor events like COVID would probably have been the right move even knowing that COVID was coming! (Though it would certainly have changed the strategy so we could have responded better.)

Comment by Davidmanheim on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-22T13:49:03.108Z · LW · GW

See my reply above, but this was actually none of your 4 options - it was "funders in EA were pouring money into this as quickly as they could find people willing to work on it." 

And the reasons no-one was pushing the specific proposal of "publicly shame China into stopping [so-called] GoF work" include the fact that US labs have done and still do similar work in only slightly safer conditions, as do microbiologists everywhere else, and that building public consensus about something no-one but a few specific groups of experts care about isn't an effective use of funds.

Comment by Davidmanheim on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-22T13:44:41.460Z · LW · GW

If you had done even a bit of homework, you'd see that there was money going into all of this. iGem and the Blue Ribbon Panel have been getting funded for over half a decade, and CHS for not much less. The problem was that there were too few people working on the problem, and there was no public will to ban scientific research which was risky. And starting from 2017, when I was doing work on exactly these issues - lab safety and precautions, and trying to make the case for why lack of monitoring was a problem - the limitation wasn't a lack of funding from EA orgs. Quite the contrary - almost no-one important in biosecurity wasn't getting funded well to do everything that seemed potentially valuable.

So it's pretty damn frustrating to hear someone say that someone should have been working on this, or funding this. Because we were, and they were.

Comment by Davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2021-12-17T13:05:11.743Z · LW · GW

As mentioned in the post, I think it's personally helpful to look back, and it is a critical service to the community as well. Looking back at looking back, there are things I should add to this list - and even something (hospital transmission) which I edited more recently because I have updated toward thinking I was not actually wrong about it in this post - but it was, of course, an interim postmortem, so both of these types of post-hoc updates seem inevitable.

I think that the most critical lesson I learned was to be more skeptical of information sources generally - even the most accurate, including superforecasters and the rationalist community, are fallible in ways which are somewhat predictable, and hard to evaluate prior to knowing the ground truth. This highlights both the value of staying uncertain and entertaining multiple hypotheses, and the importance of keeping diverse information sources available. The points made by John Wentworth in his comment about the need to do expensive updates were also very clear and valuable.

I certainly think additional posts of this type, by myself and by others, would provide value - and I could see it being its own genre. Unfortunately, there have been very few. I am happy to see several projects looking back at the community's reactions, successes, and failures, but they are still in progress. The 2020 Petrov Day postmortem and similar are also evaluating community behavior, and some have evaluated failures in companies, but I see fairly few, and I would think we could use more, and more individual posts. (I'd hoped to write another actual after-action report, but I have been busy - an insufficient excuse - and we're unfortunately still not post-COVID-19.)

Comment by Davidmanheim on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-16T14:49:45.936Z · LW · GW

I will point out that my work proposing funding mechanisms to work on that, and the idea, was being funded by exactly those EA orgs which OpenPhil and others were funding. (But I'm not sure why the people you spoke with claim that they wouldn't fund this, and following your lead, I'll ignore the various issues with the practicalities - we didn't know mRNA was the right thing to bet on in May 2020, the total cost for enough manufacturing for the world to be vaccinated in <6 months is probably a (single-digit) multiple of $4bn, etc.)

Comment by Davidmanheim on Inner Alignment: Explain like I'm 12 Edition · 2021-12-12T08:14:58.573Z · LW · GW

This post is both a huge contribution, giving a simpler and shorter explanation of a critical topic, with a far clearer context, and has been useful to point people to as an alternative to the main sequence. I wouldn't promote it as more important than the actual series, but I would suggest it as a strong alternative to including the full sequence in the 2020 Review. (Especially because I suspect that those who are very interested are likely to have read the full sequence, and most others will not even if it is included.)

Comment by Davidmanheim on A Personal (Interim) COVID-19 Postmortem · 2021-12-12T08:10:38.618Z · LW · GW

In further retrospect, this was very, very incorrect.

Comment by Davidmanheim on Taking Clones Seriously · 2021-12-05T09:34:21.820Z · LW · GW

Not really, given the huge disparity in numbers - unless you have a magic way of feeding/housing/clothing/caring for children which costs far less than is currently possible? (And note that we know baby warehousing REALLY doesn't work well.)

Comment by Davidmanheim on Taking Clones Seriously · 2021-12-01T20:13:00.574Z · LW · GW

Expected value compared to hiring the top 100 people in the international math Olympiad each of the next 20 years?

Comment by Davidmanheim on Perceptual Entropy and Frozen Estimates · 2021-11-19T09:59:15.391Z · LW · GW

Link fixed, and title added. (If you didn't have another reason to dislike the CIA, they broke the link by moving it. Jerks.)

Comment by Davidmanheim on An Idea for a More Communal Petrov Day in 2022 · 2021-10-22T04:38:56.246Z · LW · GW

Could the ceremony's big red button also be mirrored on the site, with a similar shutdown trigger? Non-attendees would still see the results, similar to the status quo. (Much like actual wars are decisions of a small, hopefully trusted group but affect the world more broadly.)

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-22T04:35:16.724Z · LW · GW

I didn't say "domestic pressure / public agreement is strong evidence," I said that a reversal of the decision for those reasons would be strong evidence. And yes, I think that a majority of voters agreeing it was so much of a mistake that it is worth it to re-enter on materially worse terms, which it would need to be, would be a clear indication that the original decision was a bad one.

And I'm not sure why you say that a change in the long-term trajectory of growth is a myopic criterion. If the principal benefit is a better ability to react to crises, given the variety of crises that occur and their frequency, that should be obvious over the course of years, not centuries, and would absolutely affect economic growth over the long term.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-19T06:09:38.011Z · LW · GW

I agree that evidence is weak, but I think it will be much clearer in the future whether it was a mistake - and the pathways for it to have been good are different than for it to have been bad.

Two concrete things that would be strong evidence either way which we'd see in the next 5 years:
- Significant divergence from previous economic trajectory that differs from changes in the EU.
- UK choosing to rejoin the EU due to domestic pressure, or general public agreement that it was good.

Perhaps more likely, we see a mix of evidence, and we conclude that like most complex policy decisions, it will take an additional decade or two for a consensus of economists and historians to emerge so we clearly see what the impact was.

That said, I would be very happy to bet at even odds about it resolving as a clear negative - albeit with a very long resolution time frame, needing a somewhat qualitative resolution criterion.

Comment by Davidmanheim on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T18:19:35.983Z · LW · GW

I don't specifically know about mental health, but I do know specific stories about financial problems being treated as security concerns - and I don't think I need to explain how incredibly horrific it is to have an employee say to their employer that they are in financial trouble, and be told that they lost their job and income because of it.

Comment by Davidmanheim on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T16:27:12.802Z · LW · GW

I agree that there is a real issue here that needs to be addressed, and I wasn't claiming that there is no reason to have support - just that there is a reason to compartmentalize.

And yes, US military use of mental health resources is off-the-charts. But in the intelligence community there are some really screwed up incentives, in that having a mental health issue can get your clearance revoked - and you won't necessarily lose your job, but the impact on a person's career is a great reason to avoid mental health care, and my (second-hand, not reliable) understanding is that there is a real problem with this.

Comment by Davidmanheim on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T07:06:22.817Z · LW · GW

To attempt to make this point more legible:

Standard best practice in places like the military and intelligence organizations, where lives depend on secrecy being kept from outsiders - but not insiders - is to compartmentalize and maintain "need to know." Similarly, in information security, the best practice is to give people access only to what they need, to granularize access to different services / data, and to differentiate read / write / delete access. Even in regular organizations, lots of information is need-to-know - HR complaints, future budgets, estimates of profitability of a publicly traded company before quarterly reports, and so on. This is normal, and even though it's costly, those costs are needed.

This type of granular control isn't intended to stop internal productivity, it is to limit the extent of failures in secrecy, and attempts to exploit the system by leveraging non-public information, both of which are inevitable, since costs to prevent failures grow very quickly as the risk of failure approaches zero. For all of these reasons, the ideal is to have trustworthy people who have low but non-zero probabilities of screwing up on secrecy. Then, you ask them not to share things that are not necessary for others' work. You only allow limited exceptions and discretion where it is useful. The alternative, of "good trustworthy people [] get to have all the secrets versus bad untrustworthy people who don't get any," simply doesn't work in practice.

Comment by Davidmanheim on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T06:53:08.479Z · LW · GW

I think this is much more complex than you're assuming. As a sketch of why, costs of communication scale poorly, and the benefits of being small and coordinating centrally often beats the costs imposed by needing to run everything as one organization. (This is why people advise startups to outsource non-central work.)

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-17T09:49:23.056Z · LW · GW

I'm happy to make more specific recommendations on how to think about policy, depending on what you're looking for - but I'm generally happy recommending James Q. Wilson's "Bureaucracy" and Eugene Bardach's "A Practical Guide for Policy Analysis" - the former largely explains why things would be so dysfunctional, and the latter is a generally great introduction to understanding what policy analysis and interventions can do.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-17T09:48:19.510Z · LW · GW

First, I think that even understanding "post-covid" as now, it's early to look at the overall impacts - and again, see the linked survey. Economists still think this was overall a mistake, from that perspective at least.

Second, as I said in a different response, the reasoning seems to be the claim that they wanted to take back, to slightly paraphrase from memory, "their money, their borders, and their laws." Yes, "laws" definitely includes the sort of policy choice he's pointing to, but it wouldn't have needed to slow their early purchase or their excellent distribution system. (That system would perhaps have started a couple of weeks later due to the vaccine approval delay, which they likely could have pushed forward, but given how slowly doses started to arrive, this would have made at most a small difference in vaccine timing for most people.) And the other two claims came first, and seemed like the central parts of the question.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-17T09:35:59.586Z · LW · GW

Thanks - that seems plausible. But again, I think not mentioning the obvious reason for people's distaste led to a clearly incorrect claim.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-17T09:34:42.039Z · LW · GW

Yes, there are downsides to bureaucracy - but I'm entirely unconvinced that the UK has reduced the number of downsides via Brexit. It seems more like they traded one set for a larger and more expensive set of bureaucratic problems both internally, and interacting with the EU. Finding a single example which turned out (very) well, like vaccine distribution - which would likely have been possible even if they had been EU members - doesn't really seem like a convincing pitch, even if it's true that it was only possible because they left.

Comment by Davidmanheim on Common knowledge about Leverage Research 1.0 · 2021-10-14T16:35:54.306Z · LW · GW

One of the negative consequences of our information policy, as we have learned, is the way it made some regular interactions with people outside of the relevant information circles more difficult than intended.


Is Leverage willing to grant a blanket exemption from the NDAs which people evidently signed, to rectify the potential ongoing harms of not having information available? If not, can you share the text of the NDAs?

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-14T14:20:21.231Z · LW · GW

Improving existing institutions is inherently about distrusting how they operate. 


That's true, and a fair criticism, but the replication crisis was about object-level criticisms of the science - it certainly did not start with strategizing about convincing people to take political action.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-14T14:17:46.579Z · LW · GW

I didn't claim to know all of the post Brexit effects, I linked to a survey of economists. But I don't think I need to defend the claim that Brexit was damaging.

And when asked about what they were taking back control of, I recall that the leaders pushing for Brexit said they wanted control of their money, their borders, and their laws. Only the last of those is plausibly what you meant - the first is a weird misunderstanding about where money came from and went, and the second is about disliking immigration.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-14T14:11:54.900Z · LW · GW

First, yes, I've read a fair amount of his writing, albeit only up to a couple years ago. And no, he's not "uniquely bad" - quite the opposite. But I wouldn't advise people interested in rationality to read about political strategy generally. Even though Cummings is significantly better than most - which I think he is, to clarify - that doesn't mean it's worth reading his material.

For those familiar with LW, I thought the distaste for politics was obvious. And yes, I think it's rare for political strategists not to almost exclusively play level 3 and 4 simulacra games, and engage in what has been called dark arts of rationality on this blog for years. 

Comment by Davidmanheim on Non-Adversarial Goodhart and AI Risks · 2021-10-14T07:01:59.280Z · LW · GW

Yes on point Number 1, and partly on point number 2.

If humans don't have incredibly complete models for how to achieve their goals, but know they want a glass of water, telling the AI to put a cup of H2O in front of them can create weird mistakes. This can even happen because of causal connections the humans are unaware of. The AI might have better causal models than the humans, but still cause problems for other reasons. In this case, a human might not know the difference between normal water and heavy water, but the AI might decide that since there are two forms, it should have them present in equal amounts, which would be disastrous for reasons entirely beyond the understanding of the human who asked for the glass of water. The human needed to specify the goal differently, and was entirely unaware of what they did wrong - and in this case it will be months before the impacts of the weirdly different than expected water show up, so human-in-the-loop RL or other methods might not catch it.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-14T06:48:37.776Z · LW · GW

Generally, it's hard to judge whether someone does things for causes you agree with or don't agree with when you don't know what the causes are. 


One way to do this is to trust the people when they claim to tell you what their motives are. But Cummings spends his time talking about how politicians need to lie about that, and talking about how to do that type of manipulation well. And ceteris paribus, I will trust someone less if they say they study how to lie effectively. I'm not saying I don't trust Cummings - I think he's relatively honest, and extremely / unfortunately so for a political figure - I'm saying that I don't think encouraging people to learn the skills he wants to teach is a good thing for enhancing trust more generally.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-14T06:43:29.394Z · LW · GW

I intended to make something like the last claim here. I don't need to shun political strategists, but I do think we should shun their methods.

Yes, perhaps current politics requires a level of dishonesty and manipulation (though I'd agree with your supposition that it is not usually at the level seen in Brexit), and even if it's critical for some people to engage in these dark arts for laudable goals (which is unclear, and certainly contrary to the goal of raising the sanity waterline), LessWrong will be worse off for trying to communally learn the lessons of how to lie to the public.


To use an analogy, learning how to be a pickpocket might be useful, and might even have benefits aside from theft, but I don't want to need to guard my wallet, so if some of the people I knew started saying we should all learn to be better pickpockets, I'd want to spend less time with them.

My unease with studying Cummings's ideas is not just because it's horrific PR - though I think it is - and definitely not just because I don't think it could teach anything, but because it is geared towards learning things which enhance distrust among people. Given that we're otherwise involved in honest and truth-seeking conversations, this seems particularly bad. Otherwise, every conversation that even potentially relates to the real world becomes subject to lots of really bad epistemological pressures, with LWers trying to operate on simulacra level 2, or even worse, playing levels 3 and 4. In my view, that would be a tragic loss - so maybe we should avoid trying to get better.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-14T06:30:46.575Z · LW · GW

I think it's somewhere between very early and unreasonable to ask about "post-COVID" impacts when we're probably a year away from returning to any semblance of normal globally. At the same time, while I don't think there is a clear answer, the consensus of economists seems to be that overall Brexit was clearly bad, as of January this year, i.e. mid-pandemic.

Next, the UK going it alone on vaccination, which probably would have been possible even without Brexit, seems to contrast with it going it alone on pushing for herd immunity, which was both bad in retrospect and predictably so according to economists and epidemiologists who were shouting about it at the time.

Second, my understanding is that the stated reasoning for doing Brexit had little or nothing to do with this type of policy freedom. But even if it was mentioned, I think it's strange to defend the impacts of Brexit on the basis of a difficult-to-explore counterfactual understanding of how the UK would have behaved differently during this tail event, ignoring the consensus that the impact on the economic situation was very negative.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-14T06:21:32.658Z · LW · GW

That all seems fair - I was just surprised and disappointed to see one obviously important explanation of why people were put off by Cummings be completely ignored in the post.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-14T06:20:13.884Z · LW · GW

Thanks for the response. First, economists and experts seem pretty unified in thinking that Brexit will be bad for the UK, and somewhat less bad but still negative for the EU. That's not proof, but it's fairly convincing data, and I haven't seen plausible claims to the contrary.

Regarding the rest, I think you've just admitted that there were places where lies were used in service of a supposed greater truth, and that the claims used to promote Brexit were willfully inconsistent - but that's exactly what we mean by dark arts, and no additional empirical data is needed to support the claim. Of course I agree that neither side was honest - but a policy of getting involved in (epistemic) mud fights isn't about relative muddiness, it's about actually staying clean. If we care about our epistemic health, there are lots of things we might want to avoid, and dishonesty in service of our prior (debatably effective / correct) ideas seems like a great candidate.

Comment by Davidmanheim on Choice Writings of Dominic Cummings · 2021-10-13T09:46:45.940Z · LW · GW

most people get rebuffed by the sheer number of words and posts he’s written


I think most people are far more put off by his close association with Vote Leave, and the damage it caused. He's clearly brilliant and insightful, but I'm very wary about promoting rationality "dark arts" like how to manipulate the public, especially when coming from someone whose primary claim to fame is that they hurt their own country, further destabilized the European Union, and worsened the world economy.

Comment by Davidmanheim on Coordination as a Scarce Resource · 2021-10-11T09:49:16.107Z · LW · GW

An additional point worth noting is that there is tremendous social value in reducing coordination costs, but it's nearly impossible to capture that value, so it's very under-provided.

What does lowering coordination costs look like? Trade meetups, conferences, and similar events or locations that foster communication and coordination (like EA and LW meetups and forums), as well as trustworthy information sharing - which is costly to individuals and mostly benefits others (like GiveWell, which provides analysis that doesn't benefit itself, and so is a largely trusted broker).

I'd be very interested in thinking about what other general strategies could exist - these seem like great targets for world optimization.

Comment by Davidmanheim on AI Prediction Services and Risks of War · 2021-10-04T09:11:23.328Z · LW · GW

Very interesting, and I think it mostly goes in the right direction - but I'm not very convinced by the arguments, mostly because I don't think the analysis of causes of war is sufficient here.

For example, even within rational actor models, I don't think you give enough credence to multi-level models of incentives for war, which I discussed a bit here. Leaders often are willing to play at brinksmanship or even go to war because it's advantageous regardless of whether they win. A single case can illustrate: a dictator might go to war to prevent internal dissent, and in that case, even losing the war can be a rallying cry for him to consolidate power. An AI system might even tell people that, but it won't keep him from making the decision if it's beneficial to have a war. And even without a dictator, different constituencies will support or avoid war for reasons unrelated to whether the country is likely to win - because "good for the country overall" isn't any single actor's reason for any decision, and prediction services won't (necessarily) change that.

Comment by Davidmanheim on AI learns betrayal and how to avoid it · 2021-10-02T17:47:56.797Z · LW · GW

This seems really exciting, and I'd love to chat about how betrayal is similar to or different than manipulation. Specifically, I think the framework I proposed in my earlier multi-agent failure modes paper might be helpful in thinking through the categorization. (But note that I don't endorse thinking of everything as Goodhart's law, despite that paper - though I still think it's technically true, it's not as useful as I had hoped.)

Comment by Davidmanheim on Sam Altman Q&A Notes - Aftermath · 2021-09-13T08:39:11.521Z · LW · GW

I agree that in general there is a tradeoff, and that there will always be edge cases. But in this case, I think judgement should be tilted strongly in favor of discretion. That's because a high trust environment is characterized by people being more cautious around public disclosure and openness. Similarly, low trust environments have higher costs of communication internal to the community, due to lack of willingness to interact or share information. Given the domain discussed, and the importance of collaboration between key actors in AI safety, I default to placing more value on higher trust and less disclosure than on higher transparency and more sharing.

Comment by Davidmanheim on Sam Altman Q&A Notes - Aftermath · 2021-09-11T19:59:31.434Z · LW · GW

That's all fair, and given what has been said, despite my initial impression, I don't think this was "obviously wrong" - but I do have a hope that  in this community, especially in acknowledged edge cases, people wait and check with others rather than going ahead.

Comment by Davidmanheim on Paths To High-Level Machine Intelligence · 2021-09-11T17:57:17.887Z · LW · GW

On the topic of growth rate of computing power, it's worth noting that we expect the model which experts have to be somewhat more complex than what we represented as "Moore's law through year " - but as with the simplification regarding CPU/GPU/ASIC compute, I'm unsure how much this is really a crux for anyone about the timing for AGI.

I would be very interested to hear from anyone who said, for example, "I would expect AGI by 2035 if Moore's law continues, but I expect it to end before 2030, and it will therefore likely take until 2050 to reach HLMI/AGI."

Comment by Davidmanheim on Sam Altman Q&A Notes - Aftermath · 2021-09-11T17:45:32.015Z · LW · GW

I don't have a very good (or even halfway decent) memory for phrases, so I have no idea, and since no-one else heard it, I assume it wasn't said. Still, it seemed clear to me that the request was intended for the talk to be off the record, in the journalistic sense. 

The phrase "no recording and no transcript," which you seem to agree was said explicitly, seems to indicate that he didn't want there to be a record of what he said. At that point, maybe you didn't technically do anything he requested you not to do, but it seems like the responsible and decent thing would have been to ask Sam if he minded.

Comment by Davidmanheim on Sam Altman Q&A Notes - Aftermath · 2021-09-11T17:30:46.691Z · LW · GW

I don't recall what phrase was used, but I thought that it was clear enough. If someone said that they agree to do a talk on the condition that there be no recording and no transcript, unlike every other talk in the series, it seems to take a really weird model of the situation to claim that you had no idea that they would not want people publicly posting notes. At the very least, it merits checking.

Comment by Davidmanheim on Sam Altman Q&A Notes - Aftermath · 2021-09-11T17:26:04.198Z · LW · GW

Sam has said he thought this was "off-the-record-ish," and it was clearly known that not recording the talk was a precondition for giving it. I don't recall what terms were used, but I thought it was pretty obvious - and Sam's later responses seem to agree - that he expected notes like this not to be made public.

Edit to add: I thought at the time that it was clear that this was off the record, despite that phrase likely not being used. If not, I would not have asked the question which I asked during the meetup.

Comment by Davidmanheim on Sam Altman Q&A Notes - Aftermath · 2021-09-09T04:30:57.234Z · LW · GW

The request for this to be off the record was explicit during the introduction to the talk, so I'm not sure why it's ambiguous. And "off the record" has a pretty clear meaning - I certainly had the clear expectation that my question, and his answer, weren't going to be published.

Edit: I do not recall the phrasing, but as I said below, I was under a distinct impression that the request for no recording and no transcript was at least indicative, and that asking him if you could share notes publicly would have been the right thing to do.