Still Not in Charge
post by Zvi · 2021-02-09T16:00:01.474Z · LW · GW · 24 comments
Epistemic Status: Speed premium, will hopefully flesh out more carefully in future
Previously: Why I Am Not in Charge
A brief update about my exchange with Scott from this past week.
After Scott Alexander wrote a post about why WebMD is terrible and why it would be good but impossible to put me in charge of the CDC, which is super flattering and unsurprisingly blew up my inbox, I wrote a quick response post to expand on things and give more color on my model of the dynamics involved in organizations like the CDC and FDA, my view of how people in my position can often get reliably ahead of such organizations, and what would happen if one tried to change them and get them to do the most useful things. That required tying things back to several key past posts, including Zeroing Out, Leaders of Men, Asymmetric Justice, Motive Ambiguity, and the sequences on Simulacra and Moral Mazes.
The disagreements between my model and Scott’s, and the places in which my communication attempts fell short, are broadly in two (closely linked) categories, which his Reddit response captured well and made clearer to me, as did other reactions on the same thread.
The first category is where claims about how perverse things are get rounded down and not seen. Scott is advocating here for what we might call the Utility Function Hypothesis (UFH).
The second is (as Scott explicitly notes) the generalized form of the Efficient Market Hypothesis, which one might call the Efficient Action Hypothesis (EAH).
UFH and EAH are linked.
If UFH is impactfully false, then EAH is also false. If you're acting meaningfully differently from how you would act with a coherent utility function, you are leaving money on the table.
If UFH is effectively true (i.e. it is not impactfully false), then EAH is plausible, but can still be either true or false. EAH could still be false if one could have a better model of the impact of decisions than those making them (this could be better political sense, better physical-impact sense that then feeds into the political calculus, or both). EAH could also still be false if the utility function being maximized doesn't match what we'd mean by politically successful.
If the EAH is true, then UFH is effectively true as well. Anyone acting in a way that can't be improved upon isn't meaningfully different from how they'd act with a utility function.
If the EAH is false, then that’s strong evidence that UFH is also false (since EAH -> UFH), but UFH could still be true if the EAH is false due to our having superior information, especially superior information about physical impacts.
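To make the logical structure above explicit, here is a minimal propositional sketch, entirely my framing rather than anything from the exchange itself. It assumes the only hard constraint argued for above is EAH → UFH, and enumerates which truth-value combinations survive:

```python
from itertools import product

def consistent(ufh: bool, eah: bool) -> bool:
    """EAH implies UFH; contrapositive: if UFH is impactfully false, EAH is false."""
    return (not eah) or ufh

for ufh, eah in product([True, False], repeat=2):
    print(f"UFH={str(ufh):5}  EAH={str(eah):5}  consistent={consistent(ufh, eah)}")

# Only UFH=False, EAH=True is ruled out: an agent with no effective utility
# function whose actions nonetheless cannot be improved upon. UFH=True leaves
# EAH open in both directions, matching the cases above.
```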
On the first disagreement, I think we can look at the decisions in detail and find evidence for who is right. The utility function hypothesis expects to find sensible trade-offs being made, with bad decisions only being made when good information was unavailable or political pressure was stronger than the physical stakes. We can ask ourselves if we see this pattern. The difficulty is that the political pressures are often invisible to us, or hard to measure in magnitude. But if it’s a matter of battling real interests, we should expect political pressure to generally push in favor of useful actions for particular interests, rather than for perversity in general.
Another way of looking at the question is, do such folks seem to be goal optimizers or adaptation executors? To the extent that actions seem to reflect goals, including political ones, what kind of time horizon and discount rate seem to be in play?
We can also ask whether making destructive decisions seems to correlate with political profit. The issue here is that both sides have competing hypotheses that explain this correlation. The UFH says this is because of trade-offs. The alternate hypothesis says that this happens because there has been selection for those who act as if they need to be careful not to be seen favoring the constructive over the destructive because it is constructive.
What distinguishes these claims is that the UFH thinks that when the explicit political stakes are low then constructive decisions will dominate, whereas the alternate hypothesis thinks that there can be no explicit stakes at all and the destructive decisions still happen, because the mechanism causing them is still functioning.
On the second disagreement, my claim is that if you executed with an ordinary level of tactical political savvy, avoided continuous unforced errors, and, when it mattered, used my judgment to make constructive decisions and implement constructive policies, this would have a good chance of working out, especially if applied to the FDA along with the CDC.
My claim is importantly not that putting me in as CDC director tomorrow would have a good chance of working out, which it most certainly wouldn’t because I wouldn’t have the ordinary skill in the art necessary to not have the whole thing blow up for unrelated reasons.
But again, I don’t need to be a better overall judge of what is politically savvy and be able to execute the entire job myself to point to a better path, I only need to make improvements on the margin, even if that margin is relatively wide and deep. I can claim the EAH is importantly false and point to big improvements without being able to reimplement the rest of the system.
In particular, don’t make me head of the FDA, that’s crazy, but do appoint my father Solomon Mowshowitz as head of the FDA, give us some rope, and watch what happens. Scott was talking about the director of the CDC, which I’d also accept, but I think you can have a lot more impact right now at the FDA.
Why do I think a lot of politicians are leaving money on the ground for 'no reason', other than 'Donald Trump spent four years as President of the United States'?
First, my model is that politicians mostly aren't goal maximizers with utility functions; they're adaptation executors who have developed systems that have been shaped to seek power. Those adaptations lead to political success. They've been heavily selected for those attributes by people looking for those attributes. One of the adaptations is to be visibly part of this selection process. Another is avoiding displaying competing loyalties, such as caring about money on the ground enough to both see it and pick it up.
Second, the politicians don't directly know what would and wouldn't work out, and have been selected for not thinking it is possible to know such things. To the extent they try to do things that would work out, they approximate this with a mechanism that avoids being blamed for things within the next two weeks, which is then subject to back propagation by others. If you do something whose consequences turn bad next month or next year, the hope is that, if it's bad enough, other people notice and get mad about it now; you notice that, and so you choose differently. The advantage of this approach is that Avoid Blame is a Fully General Excuse for action (or even better, for inaction), so it doesn't cause suspicion that you prefer constructive to destructive action, or think you can tell the difference.
Third, this is all outside of the training data that the politicians learned on. Everyone involved is trained on data where the feedback loops are much longer, and the physical impacts are slow and far away. It hasn't sunk in that this is a real emergency, and that in an emergency the rules are different. One can think of this as a battle between perception and reality, to see who can get inside whose OODA loop. People in mazes (and everyone involved here is in a maze) are used to making destructive decisions and then either not having the consequences rebound back at all, or being long gone before the physical consequences can catch up with them. Also, a lot of these learned behaviors go back to earlier times, before rapid social media feedback loops and other ways for us to distinguish constructive from destructive action, or identify lies or obvious nonsense. Back then there was more elite control over opinion and more reason to care greatly about being a Very Serious Person who properly demanded their Sacrifices to the Gods, and consequences were less likely to back propagate into current blame and credit assignments.
Fourth, Leaders of Men, but also imposter syndrome and the illusion of competence. Everyone is everywhere and always muddling through far more than anyone realizes. Always have been. They make dumb mistakes and unforced errors, they overlook giant opportunities, they improvise and act confident and knowledgeable. How many political campaigns must one watch commit unforced error after unforced error, how many times must one say 'how did we end up with all these horrible choices and no good ones?' and watch candidates be woefully unprepared time and time again, before one realizes that this is the normal state of affairs? We're choosing the people who were in the right place at the right time with the skills that most impact the ability to raise money and campaign, not the people who are the best at governance. There is a dramatic difference in effectiveness levels between different politicians and leaders, not only in government but also in other areas. You take what you can get. And again, we're aiming at improving on the margin, and it would be pretty shocking if there weren't large marginal improvements available that we could spot if we tried.
Fifth, because they’re not properly modeling the shifts that occur when policy changes. The people who get to move elite opinion, and hence move ‘expert’ opinion, don’t realize this is within their power. Again, each time there was a fight over a shift from a destructive policy stance or claim to a constructive one in the pandemic, once the shift was made, most of the ‘experts’ saying otherwise fell in line immediately. At maximum, they nominally complained and then kept quiet. It’s almost like they’re not offering their expertise except insofar as they use it to back up what elites decided to tell us.
It’s an interesting question to what extent that mechanism doesn’t work when the new decision is destructive, but again we have data on that, so think back and form your own opinion on who would push back how much on such questions.
You could also respond that the constructive changes were chosen and timed exactly in order to be politically beneficial, and thus this isn't a fair test. There are certainly some selection effects, but if you compare the results to a naive prior or to the prediction of the trade-off model, I think you'll notice a big difference.
Sixth, because bandwidth is limited. Politicians aren't looking on the sidewalk for the bill, so they don't notice it and therefore don't pick it up. When you are a powerful person there are tons of things to do and no time to do them, whatever your combination of avoiding blame, executing adaptations, cashing in, gathering power and trying to do the most good as you see it. Everyone trying to contact you and give you ideas has an agenda of some form. Getting good ideas onto the radar screen at all is hard, even when you have a relatively competent and well-meaning group of people.
Seventh, this is a known blind spot: there is reliably not enough attention paid to satisfying the real needs of voters and giving them what they care about, even though neglecting those needs reliably loses people power, while satisfying them reliably gets rewarded. This is true for things voters are right about, and also for things voters are wrong or selfish about.
Eighth, suppose a process filters out actions by making them unthinkable to anyone with the power to execute them, partly by filtering who gets power and partly by getting those who seek power to self-modify. Such people never think seriously about those actions, and when they accidentally do, there are lots of people whose job it is to point out how unthinkable the actions are, usually via what are essentially Beware Trivial Inconveniences [LW · GW] arguments. It is then hard to turn around and call not taking those actions evidence that the actions wouldn't work if someone actually did them.
Lastly, because 'there's a good chance this would actually work out' does not translate to free money on the ground. The political calculus is not 'free money' here; it's 'if this works sufficiently well, you reap large political benefits that outweigh your costs.' You'd be betting on an alternate mechanism kicking in. Doing a different thing that isn't as legible leaves one very open to large amounts of blame via Asymmetric Justice, and inherently feels terrible for those trained on such data. None of this looks like a safe play, even if a lot of it on the margin is safe. Doing the full throttle version would definitely not be safe.
In general, rather than look at this as 'all trading opportunities are bad or someone would have taken them already' or 'if the fence should have been taken down then the fence-removal experts would have already taken care of it,' look at this as the "How are you f***ing me?" or Chesterton's Fence question. Before you take down a fence, you need to know why someone put it up. Before you do a trade, you need to know why you have the opportunity to do this trade. We see politicians failing to do seemingly obviously correct things that would be physically beneficial to people, look like they would be popular, and net them political capital, so we need an explanation for why they're not acting on it.
We have plenty of interlocking explanations for why they're not acting on it. That doesn't mean that any given instance doesn't have additional good explanations, including explanations that could plausibly carry the day. And some of the explanations given here are reasonable reasons not to do some of the things, including the pure 'there are people who don't want you to do that, and they pressure you, and that sucks and raises the cost of acting by an amount that's hard for us to measure.'
As for the pure modesty argument, that I am not a political expert and thus shouldn’t expect to be able to predict what will win political capital, the response is in two parts.
First, I’m also not a medical or biological expert, yet here we are. I fully reject the idea that smart people can’t improve on the margin on ‘expert’ opinion, period. Welcome to 2021. Modesty shmodesty.
Second, much of the difference is in our physical world models and not our political models. To fully model politics, one must fully understand the world.
I don’t think this fully did justice to the questions involved. That will require several posts that I’ve been working on for a while in one form or another and are difficult to write. This did make writing those posts easier, so there is hope.
24 comments
Comments sorted by top scores.
comment by Scott Alexander (Yvain) · 2021-02-12T07:59:14.243Z · LW(p) · GW(p)
Thanks for this.
I think the UFH might be more complicated than you're making it sound here - the philosophers debate whether any human really has a utility function.
When you talk about the CDC Director sometimes doing deliberately bad policy to signal to others that she is a buyable ally, I interpret this as "her utility function is focused on getting power". She may not think of this as a "utility function", in fact I'm sure she doesn't, it may be entirely a selected adaptation to execute, but we can model it as a utility function for the same reason we model anything else as a utility function.
I used the example of a Director who genuinely wants the best, but has power as a subgoal since she needs it in order to enact good policies. You're using the example of a Director who really wants power, but (occasionally) has doing good as a subgoal since it helps her protect her reputation and avoid backlash. I would be happy to believe either of those pictures, or something anywhere in between. They all seem to me to cash out as a CDC Director with some utility function balancing goodness and power-hunger (at different rates), and as outsiders observing a CDC Director who makes some good policy and some bad-but-power-gaining policy (where the bad policy either directly gains her power, or gains her power indirectly by signaling to potential allies that she isn't a stuck-up goody-goody. If the latter, I'm agnostic as to whether she realizes that she is doing this, or whether it's meaningful to posit some part of her brain which contains her "utility function", or metaphysical questions like that).
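As a minimal sketch of this point (my construction, with invented payoff numbers; Scott specifies no model): both pictures cash out as the same functional form, differing only in the weights.

```python
# Toy model: each policy has a (goodness, power_gain) payoff, and the
# Director picks the argmax of a weighted sum. All numbers are made up.

policies = {
    "authorize vaccines":  (0.9, 0.3),    # good, modestly power-gaining
    "signal buyability":   (-0.3, 0.8),   # bad, but strongly power-gaining
    "ban all antibiotics": (-1.0, -0.9),  # bad AND politically costly
}

def choose(weight_good: float, weight_power: float) -> str:
    """Pick the policy maximizing weight_good * goodness + weight_power * power."""
    return max(policies, key=lambda p: weight_good * policies[p][0]
                                     + weight_power * policies[p][1])

print(choose(0.9, 0.1))  # mostly-altruistic Director   -> 'authorize vaccines'
print(choose(0.1, 0.9))  # mostly-power-seeking Director -> 'signal buyability'

# Neither weighting ever picks 'ban all antibiotics': a policy bad on both
# axes satisfies neither term, whatever the weights.
```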
I'm not sure I agree with your (implied? or am I misreading you?) claim that destructive decisions don't correlate with political profit. The Director would never ban all antibiotics, demand everyone drink colloidal silver, or do a bunch of stupid things along those lines; my explanation of why not is something like "those are bad and politically-unprofitable, so they satisfy neither term in her utility function". Likewise, she has done some good things, like grant emergency authorization for coronavirus vaccines - my explanation of why is that doing that was both good and obviously politically profitable. I agree there might be some cases where she does things with neither desideratum but I think they're probably rare compared to the above.
Do we still disagree on any of this? I'm not sure I still remember why this was an important point to discuss.
I am too lazy to have opinions on all nine of your points in the second part. I appreciate them, I'm sure you appreciate the arguments for skepticism, and I don't think there's a great way to figure out which way the evidence actually leans from our armchairs. I would point to Dominic Cummings as an example of someone who tried the thing, had many advantages, and failed anyway, but maybe a less openly confrontational approach could have carried the day.
comment by AnnaSalamon · 2021-02-09T20:31:35.354Z · LW(p) · GW(p)
Why and when does self-interest (your "utility function hypothesis") ever arise? (As opposed to people effectively being a bunch of not-very-conscious flinchy reflexes that can find their way to a local optimum, but can't figure out how to jump between optima?)
I keep feeling a sense of both interest/appreciation and frustration/this-isn't-quite-it-yet when I read your posts, and the above seems like one of the main gaps for me.
↑ comment by Zvi · 2021-02-11T18:44:58.069Z · LW(p) · GW(p)
Freeform answer:
My first instinct is to say this is a wrong question, in the sense that self-interest doesn't arise but rather pre-exists, and either survives or is suppressed. There's a small group that learns explicitly about utility functions and starts doing more maximization, but mostly self-interest starts out as something people care about? And then they learn to stop, through a combination of implicit instruction and observation, gradual conditioning and so on, and/or those that don't stop get selected out?
In some places these suppression and replacement effects are very large; in other places, where people have to sit around doing real things, the effects are small or even non-existent, and people can act in their own interests, or in the interests of those around them, or towards whatever goal they care about.
There's still some of it there in almost all cases, even if it's suppressed, and when someone has sufficiently large self-interests (or other things they value, doesn't have to be selfish) at stake, that creates an opportunity to shock the person into reality and to caring about outcomes increasingly directly and explicitly. But it's not reliable. Some (not many, but some) people are so far gone they really do give up everything that matters or literally die before that happens even without an intentional boil-the-frog strategy designed to push them to do that, and if you use such a strategy you can do that to a lot more people.
So essentially, self-interest (in the sense of caring about any outcomes at all relative to any other outcomes at all) is the baseline scenario, which gets increasingly suppressed under some conditions including mazes, in the extreme with severe selection effects against anyone not actively acting against such interests as a way of passing the continuous no-utility-function tests others implicitly are imposing. Then they muddle along this way until sufficiently high and sufficiently clear and visible stakes shock some of them into utility-function mode at least temporarily, and if not enough of them do that enough then reality causes the whole thing to come crashing down and get defeated by outsiders and the cycle starts again.
comment by Rob Bensinger (RobbBB) · 2021-02-09T16:50:48.708Z · LW(p) · GW(p)
On the first disagreement, I think we can look at the decisions in detail and find evidence for who is right. The utility function hypothesis expects to find sensible trade-offs being made, with bad decisions only being made when good information was unavailable or political pressure was stronger than the physical stakes. We can ask ourselves if we see this pattern. The difficulty is that the political pressures are often invisible to us, or hard to measure in magnitude. But if it’s a matter of battling real interests, we should expect political pressure to generally push in favor of useful actions for particular interests, rather than for perversity in general.
In addition to looking at outcomes, on priors I expect it to be possible to pick up evidence here and there by listening to what people say (especially off-the-cuff responses to new information and challenges) and drawing inferences about what specific cognitive moves are occurring.
E.g., Fauci exhibits amused self-awareness about the fact that he lied. This is evidence that he has some amount of self-awareness as well as reflective consistency ('I still endorse saying false things before, for the same reason I endorse saying less-false things now -- to encourage the right level of caution in people'). This in turn is nonzero evidence that he's optimizing a real-world outcome rather than acting on reflex, because that level of self-awareness is more necessary for optimizing real-world outcomes.
Basically, I think it's very hard for sufficiently confused/myopic/rationalizing humans to properly simulate a long-term-outcome-optimizing human in detail, and vice versa; so I think just listening could help. It's like inferring anosognosia [LW · GW] from listening to what the patient says and inferring cognition from their slip-ups and improvisations ('wait, that makes absolutely no sense'), vs. inferring anosognosia from macro-outcomes like 'how well do they hold down a job?' and 'how good are they at navigating obstacle courses?'.
comment by Ben Pace (Benito) · 2021-02-10T04:44:39.594Z · LW(p) · GW(p)
(Writing a summary of the post and my reading of it, in large part for my own understanding.)
The first disagreement is about adaptation-executors vs utility-maximisers. I take the adaptation-executors side mostly, although once in a while a utility maximiser gets through and then lots of stuff happens, and it’s not clear if it’s good but it is exciting.
The second disagreement is whether Zvi can beat the status quo. I think the status quo is adaptation-executors, or blue-minimizing robots. Things that have learned to follow local gradients in a bunch of basic ways, and not really think about reality nor optimize it.
Still, it is not obvious to me that you can easily do better than the adaptation-executor that was selected for by the environment. Put me alone in the North Pole, and for all that I’m an intelligent human, a polar bear will do better.
I think that I’d bet against Zvi working out there, but not overwhelmingly, and it would be worth it to get him there given the potential upside. I’d give him 30% chance of winning and creating greatness. Though I could easily be persuaded further in either direction with more detailed info about the job and about Zvi.
The rest of the post is Zvi giving the details of why the adaptation-executor isn't optimal, and it's a lot of detail, but all of it very good. In summary:
- They are adaptation-executors, which almost by-definition never works well when the environment suddenly changes (e.g. pandemic)
- They have been selected to not really think about reality, just to think about whether they’re getting blamed within the next 2 weeks.
- Once again, the environment changed suddenly, and they don’t model it.
- They’re just actually not that competent.
- For some reason they’re incorrectly worried about the costs of good policy change (as I saw in the case of First Doses First). I don’t know why they’re making this mistake though.
- They have no time to really think, and also all the information sources around them are adversarial.
- The short time horizons of feedback with the public mean that even doing actually good things is kind of not going to be rewarded.
- (Here Zvi just makes the argument that if the above is all true, you can’t reason from their not taking an action that the action will not work. Which is fine, but is exactly what Zvi is trying to argue in the first place, so I don’t get this point.)
- Doing actually good things just gets you feedback in a very different way than covering your ass avoids getting you punished, and these agents haven’t learned to model that reward. This is nearly the same as 7.
So I do feel like they’re adaptation executors much more than they’re utility function maximisers. Give them an option to do good, where there‘s no risk of being punished, and I don’t think they’ll jump at it. I don’t think they’ll think much about it at all. They’re not built to really notice, they’re weird life forms that exist to be in exactly the weird role they’re usually in, and not get blamed on short time scales.
This isn't the whole of my thinking; sometimes politicians actually make longer-term bets, or have some semblance of cognition, and appointees aren't subject to this as much as elected officials. But this is most of what's happening, I think.
This all gives me a much scarier feeling about politics. How does literally anything happen at all, if these are the strange beasts we’ve selected to run things? My my. The British TV show The Thick Of It is excellent at portraying many of these properties of people, weak people constantly just trying to cover their asses. (It’s also hilarious.) I suggest Zvi watch one or two episodes, it might help him see more clearly the things he’s trying to say.
↑ comment by Raemon · 2021-02-10T06:50:09.306Z · LW(p) · GW(p)
Assuming this is an accurate summary, I feel skeptical about "the deal is that The People At The Top are Adaptation Executors". It just... seems like it must be pretty hard to get to the top without having some kind of longterm planning going on (even if it's purely manipulative)
↑ comment by jaspax · 2021-02-10T07:28:08.612Z · LW(p) · GW(p)
The main issue here is that the people in question (heads of the FDA and CDC) are not really The People At The Top. They are bureaucrats promoted to the highest levels of the bureaucracy, and their attitudes and failures are those of career bureaucrats, not successful sociopaths (in the sense of Rao's "Gervais Principle").
↑ comment by AnnaSalamon · 2021-02-10T17:17:20.375Z · LW(p) · GW(p)
It just... seems like it must be pretty hard to get to the top without having some kind of longterm planning going on (even if it's purely manipulative)
I think I would bet against the quoted sentence, though I'm uncertain. The crux for me is whether the optimization-force that causes a single person to end up "at the top" (while many others don't) is mostly that person's own optimization-force (vs a set of preferences/flinches/optimization-bits distributed in many others, or in the organization as a whole, or similar).
(This overlaps with jaspax's comment; but I wanted to state the more general version of the hypothesis.)
See also Kaj's FB post from this morning.
↑ comment by Kaj_Sotala · 2021-02-11T11:21:33.666Z · LW(p) · GW(p)
See also Kaj's FB post from this morning.
(Now also on LW. [LW · GW])
↑ comment by Raemon · 2021-02-10T18:09:00.206Z · LW(p) · GW(p)
I don't have a very strong guess here. I can imagine the world where being a full adaptation executor outcompetes deliberate strategy. It seems plausible that, at lower levels, adaptation-execution outcompetes strategy because people's deliberate strategies just aren't as good as copy-the-neighbors social wisdom.
But, to end up at the very top (even of a career bureaucrat ladder), you have to outcompete all the other people who were copying the neighbors / adaptation executing (I'm currently lumping these strategies together, not sure if that seems fair). It seems to me like this should require some kind of deliberate optimization, of some sort.
The crux for me is whether the optimization-force that causes a single person to end up "at the top" (while many others don't) is mostly that person's own optimization-force (vs a set of preferences/flinches/optimization-bits distributed in many others, or in the organization as a whole, or similar).
This phrasing feels overly strong – a world where 65% of the variance is distributed flinches/optimization but 35% personal optimization seems like the sort of world that might exist, in which personal optimization plays a significant role in who ends up at the top.
(Note that I'm basically arguing "the case for 'there is some kind of deliberate optimization pressure at the top of bureaucracies' seems higher than Zvi thinks it is", not making any claims about "this seems most likely how it is", or "the optimization at the top is altruistic.")
↑ comment by Rob Bensinger (RobbBB) · 2021-02-10T18:20:09.071Z · LW(p) · GW(p)
One relevant question is how many smart, strategic, long-term optimizers exist. (In politics, vs. in business, or in academia, etc.)
E.g., if 1 in 3 people think this way at the start of their careers, that's very different than if 1 in 1000 do, or 1 in 15. The rarer this way of thinking is, the more powerful it needs to be in order to overcome base rates.
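To make the base-rate point concrete, here is a toy Bayes calculation (my numbers; the 100x advantage is an arbitrary assumption):

```python
def frac_strategic_at_top(base_rate: float, advantage: float) -> float:
    """Fraction of top positions held by strategic people, assuming strategic
    people are `advantage` times likelier than others to reach the top."""
    strategic = base_rate * advantage
    non_strategic = (1.0 - base_rate) * 1.0
    return strategic / (strategic + non_strategic)

# Even a 100x advantage barely registers at a 1-in-1000 base rate:
print(frac_strategic_at_top(1 / 1000, 100))  # ~0.09: the top is ~91% non-strategic
print(frac_strategic_at_top(1 / 3, 100))     # ~0.98: the top is almost all strategic
```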
↑ comment by Ben Pace (Benito) · 2021-02-10T19:35:49.540Z · LW(p) · GW(p)
I was also trying to think about the numbers, it felt important but I didn’t get anywhere with it. Where to start? Should I assume a base rate of 1 in a 1,000 people being strategic? More? Less?
↑ comment by Ben Pace (Benito) · 2021-02-10T19:41:25.495Z · LW(p) · GW(p)
I think one natural way to get to the top without being long-term strategic and able to model reality is nepotism. Some companies are literally handed down father-to-son, so the son doesn't actually need to show great competence at the task. The Clinton and Bush families are also examples. There is some strategy here, but it isn't strategy that's closely connected to being competent at the job or even modeling reality that much (other than the social reality within the family).
comment by ChristianKl · 2021-02-09T18:20:26.315Z · LW(p) · GW(p)
The utility function hypothesis expects to find sensible trade-offs being made, with bad decisions only being made when good information was unavailable or political pressure was stronger than the physical stakes.
Bad decisions can be made by organizations even when the individuals are generally pursuing their goals well. If you have time pressure and people disagree on what to do, a meeting might run until 2AM, with everybody understanding that people can only go home once a compromise is found. Then at 2AM one party suggests a compromise, and somehow people agree on a bad policy with which both sides can live, because they want to go home.
↑ comment by AnnaSalamon · 2021-02-10T17:26:31.185Z · LW(p) · GW(p)
Yes; the test Zvi mentions seems like it actually tests "folks have utility functions and good coordination ability". (Like, good ability to form binding contracts, or make trades.)
↑ comment by ChristianKl · 2021-02-10T19:20:28.453Z · LW(p) · GW(p)
Good coordination ability alone is not enough when problems arise because people defect in prisoner's dilemmas.
↑ comment by AnnaSalamon · 2021-02-10T19:33:10.864Z · LW(p) · GW(p)
If the social substrate people are in makes it easy to form binding contracts, people won't defect in prisoner's dilemmas. Maybe I'm using the wrong words; I'm trying to agree with your point. I don't mean "coordination ability" to be a property just of the individuals; it's a property of them and their context.
comment by Felix Karg (felix-karg) · 2021-02-09T21:59:23.867Z · LW(p) · GW(p)
I appreciate your fast meta-takes and responses on the current situation.
Speed premium, will hopefully flesh out more carefully in future
I would love to see a carefully fleshed out version, since it seems to have insights, applications and implications significantly beyond their immediately discussed content.
comment by Dorikka · 2021-02-09T22:20:23.954Z · LW(p) · GW(p)
Out of curiosity, how come the strong speed premium on these posts? AFAICT there's nothing here that informs short-term decisions for readers; I've been skimming and mostly tossing these into my to-read pile for that reason. I know I'm not exactly an important stakeholder here, but personally I'd sorta prefer to read the synthesis from a chat between yourself and Scott rather than the blow-by-blow.
↑ comment by Zvi · 2021-02-10T13:55:41.183Z · LW(p) · GW(p)
Interest in things on the internet has a half-life of between 0.5 and 2 days, and I get an order of magnitude or more additional attention after an interest spike like this one.
(Also Rob's answer that the underlying problem has a giant speed premium of its own, which is why the weekly posts and such.)
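As a rough illustration of what that half-life implies (my arithmetic, using the numbers above): modeling attention as exponential decay,

$$\text{attention}(t) = A_0 \cdot 2^{-t/h}, \qquad h \approx 0.5\text{–}2\ \text{days},$$

a post published three days after the spike arrives at $2^{-3} = 12.5\%$ of peak attention if $h = 1$ day, and about $2^{-6} \approx 1.6\%$ if $h = 0.5$ days. Hence the speed premium.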
↑ comment by Dorikka · 2021-02-10T22:57:36.695Z · LW(p) · GW(p)
Thank you both! Zvi - makes sense re short duration of increased interest and effective to capitalize on it while that lasts. Rob - the part I'm not seeing is the causal link between these posts and influencing/improving decisions made by the FDA and CDC.
↑ comment by Ben Pace (Benito) · 2021-02-10T23:47:31.556Z · LW(p) · GW(p)
I note that posts from Marginal Revolution and others on First Doses First likely had a serious effect on causing that to happen. So I think it's fair to think that the discussion around here is having real effects, even if it's indirect and hard to pin down very explicitly.
↑ comment by Rob Bensinger (RobbBB) · 2021-02-09T22:45:59.521Z · LW(p) · GW(p)
The FDA and CDC's decisions over the coming weeks and months will have a large effect on how much death, suffering, and waste COVID-19 causes. If the FDA and CDC's decisions aren't "efficient", then it makes more sense to try to influence and improve those decisions.
We're also early into a presidential administration, when fewer policy and staffing decisions have been locked in (compared to a few months from now).
↑ comment by Raemon · 2021-02-09T23:36:20.828Z · LW(p) · GW(p)
That said, it's not obvious that trading longform posts makes more sense than Scott and Zvi (or other people) doing a more iterated chat of some kind, and then summarizing afterwards.
(I don't have a strong opinion on one being better than the other, just noting the possibility)