Jimrandomh's Shortform

post by jimrandomh · 2019-07-04T17:06:32.665Z · score: 29 (4 votes) · LW · GW · 37 comments

This post is a container for my short-form writing. See this post [LW(p) · GW(p)] for meta-level discussion about shortform as an upcoming site feature.

37 comments

Comments sorted by top scores.

comment by jimrandomh · 2019-09-12T01:19:07.010Z · score: 22 (6 votes) · LW(p) · GW(p)

Eliezer has written about the notion of security mindset [LW · GW], and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.

An1lam's recent shortform post [LW(p) · GW(p)] talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that engineer-types get to actually write software that might have security holes, and have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least of the type Eliezer described.

My hypothesis is that to acquire security mindset, you have to:

  • Practice optimizing from a red team/attacker perspective,
  • Practice optimizing from a defender perspective, and
  • Practice modeling the interplay between those two perspectives.

So a software engineer can acquire security mindset because they practice writing software which they don't want to have vulnerabilities, they practice searching for vulnerabilities (usually as an auditor simulating an attacker rather than as an actual attacker, but the cognitive algorithm is the same), and they practice going meta when they're designing the architecture of new projects. This explains why security mindset is very common among experienced senior engineers (who have done each of the three many times), and rare among junior engineers (who haven't yet). It explains how Eliezer can have security mindset: he alternates between roleplaying a future AI-architect trying to design AI control/alignment mechanisms, roleplaying a future misaligned-AI trying to optimize around them, and going meta on everything-in-general. It also predicts that junior AI scientists won't have this security mindset, and probably won't acquire it except by following a similar cognitive trajectory.

Which raises an interesting question: how much does security mindset generalize between domains? I.e., if you put Theo de Raadt onto a hypothetical future AI team, would he successfully apply the same security mindset there as he does to general computer security?

comment by NaiveTortoise (An1lam) · 2019-09-12T02:13:38.445Z · score: 8 (5 votes) · LW(p) · GW(p)

I like this post!

Some evidence that security mindset generalizes across at least some domains: the same white hat people who are good at finding exploits in things like kernels seem to also be quite good at finding exploits in things like web apps, real-world companies, and hardware. I don't have a specific person to give as an example, but this observation comes from going to a CTF competition and talking to some of the people who ran it about the crazy stuff they'd done that spanned a wide array of different areas.

Another slightly different example: Wei Dai is someone who I actually knew about outside of Less Wrong from his early work on cryptocurrency stuff, so he was at least at one point involved in a security-heavy community (I'm of the opinion that early cryptocurrency folks were on average much better about security mindset than the average current cryptocurrency community member). Based on his posts and comments, he generally strikes me as having security-mindset-style thinking, and from my perspective he has contributed a lot of good stuff to AI alignment.

Theo de Raadt is notoriously... opinionated, so it would definitely be interesting to see him thrown on an AI team. That said, I suspect someone like Ralph Merkle, who's a bona fide cryptography wizard (he co-invented public key cryptography and invented Merkle trees!) and is heavily involved in the cryonics and nanotech communities, could fairly easily get up to speed on AI control work and contribute from a unique security/cryptography-oriented perspective. In particular, now that there seems to be more alignment/control work that involves at least exploring issues with concrete proposals, I think someone like this would have less trouble finding ways to contribute. That said, having cryptography experience in addition to security experience does seem helpful. Cryptography people are probably more used to combining their security mindset with their math intuition than your average white-hat hacker.

comment by jimrandomh · 2019-09-13T22:24:50.632Z · score: 22 (4 votes) · LW(p) · GW(p)

I'm kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm -- hash by xor'ing the results of several highly-dissimilar hash functions, etc, so that a mathematical advance which breaks one algorithm doesn't break the overall security of the system. But I don't see anyone doing this in practice, and also don't see signs of a debate on the topic. That makes me think that, to the extent they have security mindset, it's either being defeated by political processes in the translation to practice, or it's weirdly compartmentalized and not engaged with any practical reality or outside views.

comment by Wei_Dai · 2019-09-15T01:09:19.661Z · score: 5 (2 votes) · LW(p) · GW(p)

Combining hash functions is actually trickier than it looks, and some people are doing research in this area and deploying solutions. See https://crypto.stackexchange.com/a/328 and https://tahoe-lafs.org/trac/tahoe-lafs/wiki/OneHundredYearCryptography. It does seem that if cryptography people had more of a security mindset (one that is not being defeated) then there would be more research and deployment of this already.
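For intuition, one simple combiner discussed in sources like those linked above is concatenation: a collision for the combined function requires a simultaneous collision in every component, so its collision resistance is at least that of the strongest component (other security properties are subtler; see the linked discussion). A minimal sketch, where the choice of SHA-256 and BLAKE2b as the two dissimilar components is my own illustrative assumption:

```python
import hashlib

def combined_hash(data: bytes) -> bytes:
    """Concatenate digests of two structurally dissimilar hash functions."""
    return hashlib.sha256(data).digest() + hashlib.blake2b(data).digest()

digest = combined_hash(b"one hundred year cryptography")
print(len(digest))  # 32-byte SHA-256 digest + 64-byte BLAKE2b digest = 96
```

Note this is not a panacea: concatenation roughly doubles digest size, and for iterated (Merkle-Damgard) hashes the combined collision resistance is known to be less than the naive sum of the components' strengths.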

comment by NaiveTortoise (An1lam) · 2019-09-14T21:50:20.849Z · score: 3 (2 votes) · LW(p) · GW(p)

In fairness, I'm probably over-generalizing from a few examples. For example, my biggest inspiration from the field of crypto is Daniel J. Bernstein, a cryptographer who's in part known for building qmail, which has an impressive security track record & guarantee. He discusses principles for secure software engineering in this paper, which I found pretty helpful for my own thinking.

To your point about hashing the results of several different hash functions, I'm actually kind of surprised to hear that this might help protect against the sorts of advances I'd expect to break hash algorithms. I was under the very amateur impression that basically all modern hash functions relied on the same numerical algorithmic complexity (and number-theoretic results). If there are any resources you can point me to about this, I'd be interested in getting a basic understanding of the different assumptions hash functions can depend on.

comment by Wei_Dai · 2019-09-15T01:12:22.846Z · score: 3 (1 votes) · LW(p) · GW(p)

Can you give some specific examples of me having security mindset, and why they count as having security mindset? I'm actually not entirely sure what it is or that I have it, and would be hard pressed to come up with such examples myself. (I'm pretty sure I have what Eliezer calls "ordinary paranoia" at least, but am confused/skeptical about "deep security".)

comment by NaiveTortoise (An1lam) · 2019-09-15T04:28:52.001Z · score: 5 (2 votes) · LW(p) · GW(p)

Sure, but let me clarify that I'm probably not drawing as hard a boundary between "ordinary paranoia" and "deep security" as I should be. I think Bruce Schneier's and Eliezer's buckets for "security mindset" blended together in the months since I read both posts. Also, re-reading the logistic success curve post reminded me that Eliezer calls into question whether someone who lacks security mindset can identify people who have it. So it's worth noting that my ability to identify people with security mindset is itself suspect by this criterion (there's no public evidence that I have security mindset, and I wouldn't claim to have a consistent ability to do "deep security"-style analysis).

With that out of the way, here are some of the examples I was thinking of.

First of all, at a high level, I've noticed that you seem to consistently question assumptions other posters are making and clarify terminology when appropriate. This seems like a prerequisite for security mindset, since it's a necessary first step towards constructing systems.

Second and more substantively, I've seen you consistently raise concerns about human safety problems [LW · GW] (also here [LW(p) · GW(p)]). I see this as an example of security mindset because it requires questioning the assumptions implicit in a lot of proposals. The analogy to Eliezer's post here would be that ordinary paranoia is trying to come up with more ways to prevent the AI from corrupting the human (or something similar), whereas I think a deep security solution would look more like avoiding the assumption that humans are safe altogether and instead seeking clear guarantees that our AIs will be safe even if we ourselves aren't.

Last, you seem to be unusually willing to point out flaws in your own proposals, the prime example being UDT. The most recent example of this is your comment about the bomb argument, but I've seen you do this quite a bit and could find more examples if prompted. On reflection, this may be more of an example of "ordinary paranoia" than "deep security", but it's still quite important in my opinion.

Let me know if that clarifies things at all. I can probably come up with more examples of each type if requested, but it will take me some time to keep digging through posts and comments so figured I'd check in to see if what I'm saying makes sense before continuing to dig.

comment by riceissa · 2020-02-01T06:04:59.355Z · score: 1 (1 votes) · LW(p) · GW(p)

This comment [LW(p) · GW(p)] feels relevant here (not sure if it counts as ordinary paranoia or security mindset).

comment by jimrandomh · 2019-07-04T17:09:37.876Z · score: 20 (7 votes) · LW(p) · GW(p)

Bullshit jobs are usually seen as an absence of optimization: firms don't get rid of their useless workers because that would require them to figure out who they are, and risk losing or demoralizing important people in the process. But alternatively, if bullshit jobs (and cover for bullshit jobs) are a favor to hand out, then they're more like a form of executive compensation: my useless underlings owe me, and I will get illegible favors from them in return.

What predictions does the bullshit-jobs-as-compensation model make, that differ from the bullshit-jobs-as-lack-of-optimization model?

comment by mr-hire · 2019-07-04T17:59:28.850Z · score: 22 (7 votes) · LW(p) · GW(p)

When I tried to inner sim the "bullshit jobs as compensation" model, I expected to see a very different world than I do see. In particular, I'd expect the people in bullshit jobs to have been unusually competent, smart, or powerful before they were put in the bullshit job, and this is not in fact what I think actually happens.

The problem being that the kind of person who wants a bullshit job is not typically the kind of person you'd necessarily want a favor from. One use for bullshit jobs could be to help the friends (or more likely the family) of someone who does "play the game." This I think happens more often, but I still think the world would be very different if this was the main use case for bullshit jobs- In particular, I'd expect most bullshit jobs to be isolated from the rest of the company, such that they don't have ripple effects. This doesn't seem to be the case as many bullshit jobs exist in management.

When I inquired about the world I actually do see, I got several other potential reasons for bullshit jobs that may or may not fit the data better:

  • Bullshit jobs as pre-installed scapegoats: Lots of middle management might fit into this role. This could be viewed as a favor (I'll give you a cushy job now in exchange for you throwing yourself on the sword when the time comes.) However, I think the predictive model is to view it in terms of the Gervais principle: The clueless middle managers don't realize they're being manipulated by the sociopaths.
  • Bullshit jobs as a way to make people feel important: Lets say you have a preinstalled scapegoat. You need to keep them happy enough that they'll stay in their position and not ask too many questions. One way to do that for a certain type of person is to give them underlings. But if you gave them underlings with real jobs they could screw things up for the organization, so you give them underlings with bullshit jobs.
    • Another instance of this that I imagined might happen: Someone is really great at what they do (say they're a 10x employee), but to feel important wants to be a manager. You know if you don't promote them you'll lose them, but you know they'll be an awful manager. You promote them, give them a couple underlings with a bullshit job, and now they're still only a 4x employee because they spend a lot of their time managing, but you still manage to squeeze a little bit of productivity out of the deal. This one I'm less sure about, but it's interesting because it turns the Peter principle on its head.

Edit: As I continued to inner sim the above reasons, a few feedback loops began to become clear:

  • To be a proper scapegoat, your scapegoat has to seem powerful within the organization. But to prevent them from screwing things up, you can't give them real power. This means the most effective scapegoats have lots of bullshit jobs underneath them.
  • There are various levels of screwup. I might not realize I'm a scapegoat for the very big events above me, but still not want to get blamed for the very real things that happen on the level of organization I actually do run. One move I have is to hire another scapegoat who plays the game one level below me, install them as a manager, and then use them as a scapegoat. Then there's another level at which they get blamed for things that happen on their level, and this can recurse for several levels of middle management.
  • Some of the middle management installed as scapegoats might accidentally get their hands on real power in the organization. Because they're bad managers, they're bad at figuring out what jobs are needed. This then becomes the "inefficiency" model you mentioned.

comment by Benquo · 2019-07-05T00:37:28.696Z · score: 12 (3 votes) · LW(p) · GW(p)

In particular, I'd expect the people in bullshit jobs to have been unusually competent, smart, or powerful before they were put in the bullshit job, and this is not in fact what I think actually happens.

Moral Mazes claims that this is exactly what happens at the transition from object-level work to management - and then, once you're at the middle levels, the main traits relevant to advancement (and value as an ally) are the ones that make you good at coalitional politics, favor-trading, and a more feudal sort of loyalty exchange.

comment by mr-hire · 2019-07-05T02:52:13.197Z · score: 4 (2 votes) · LW(p) · GW(p)

Do you think that the majority of direct management jobs are bullshit jobs? My intuition is that especially the first level of management that is directly managing programmers is a highly important coordination position.

comment by jimrandomh · 2020-02-15T02:20:40.348Z · score: 19 (7 votes) · LW(p) · GW(p)

I suspect that, thirty years from now with the benefit of hindsight, we will look at air travel the way we now look at tetraethyl lead. Not just because of nCoV, but also because of disease burdens we've failed to attribute to infections, in much the same way we failed to attribute crime to lead.

Over the past century, there have been two big changes in infectious disease. The first is that we've wiped out or drastically reduced most of the diseases that cause severe, attributable death and disability. The second is that we've connected the world with high-speed transport links, so that the subtle, minor diseases can spread further.

I strongly suspect that a significant portion of unattributed and subclinical illnesses are caused by infections that counterfactually would not have happened if air travel were rare or nonexistent. I think this is very likely for autoimmune conditions, which are mostly unattributed, are known to sometimes be caused by infections, and have risen greatly over time. I think this is somewhat likely for chronic fatigue and depression, including subclinical varieties that are extremely widespread. I think this is plausible for obesity, where it is approximately #3 of my hypotheses.

Or, put another way: the "hygiene hypothesis" is the opposite of true.

comment by leggi · 2020-02-20T04:42:12.592Z · score: 3 (2 votes) · LW(p) · GW(p)

Some comments:

we've wiped out or drastically reduced most of the diseases that cause severe, attributable death and disability

we've wiped out or drastically reduced some diseases in some parts of the world. There are a lot of infectious diseases still out there: HIV, influenza, malaria, tuberculosis, cholera, ebola, infectious forms of pneumonia, diarrhoea, hepatitis...


we've connected the world with high-speed transport links, so that the subtle, minor diseases can spread further.

Disease has always spread wherever people go, far and wide. It just took longer over land and sea (rather than the rapid node-to-node spread across global maps that we can see these days).


... very likely for autoimmune conditions ... have risen greatly over time

"autoimmune conditions" covers a long list of conditions lumped together because they involve the immune system 'going wrong'. (and the immune system is, at least to me, a mind-bogglingly complex system)

Given the wide range of conditions that could be "auto-immune", saying they've risen greatly over time is vague. Is there data for more specific conditions?

Increased rates of autoimmune conditions could just be due to the increase in the recognition, diagnosis and recording of cases (I don't think so, but it should be considered).

What things other than high speed travel have also changed in that time-frame that could affect our immune systems?   The quality of air we breathe, the food we eat, the water we drink, our environment, levels of exposure to fauna and flora, exposure to chemicals, pollutants ...? Air travel is just one factor.


I think this is somewhat likely for chronic fatigue and depression, including subclinical varieties that are extremely widespread.

Fatigue and depression are clinical symptoms - they are either present or not (to what degree, mild or severe, is another matter), so "sub-clinical" is poor terminology here. Sub-clinical disease has no recognisable clinical findings; undiagnosed/unrecognised would be closer. But I agree there are widespread issues with health and well-being these days.


Or, put another way: the "hygiene hypothesis" is the opposite of true.

Opposite of true?  Are you saying you believe the "hygiene hypothesis" is false?

In which case, that's a big leap from your reasoning above.

comment by Adam Scholl (adam_scholl) · 2020-02-15T19:21:12.892Z · score: 2 (2 votes) · LW(p) · GW(p)

I'm curious: what are your first and second hypotheses regarding obesity?

comment by jimrandomh · 2020-02-18T00:32:27.427Z · score: 3 (2 votes) · LW(p) · GW(p)

Disruption of learning mechanisms by excessive variety and separation between nutrients and flavor. Endocrine disruption from adulterants and contaminants (a class including but not limited to BPA and PFOA).

comment by jimrandomh · 2019-07-04T17:22:49.463Z · score: 16 (8 votes) · LW(p) · GW(p)

The discussion so far on cost disease seems pretty inadequate, and I think a key piece that's missing is the concept of Hollywood Accounting. Hollywood Accounting is what happens when you have something that's extremely profitable, but which has an incentive to not be profitable on paper. The traditional example, which inspired the name, is when a movie studio signs a contract with an actor to share a percentage of profits; in that case, the studio will create subsidiaries, pay all the profits to the subsidiaries, and then declare that the studio itself (which signed the profit-sharing agreement) has no profits to give.

In the public contracting sector, you have firms signing cost-plus contracts, which are similar; the contract requires that profits don't exceed a threshold, so they get converted into payments to de-facto-but-not-de-jure subsidiaries, favors, and other concealed forms. Sometimes this involves large dead-weight losses, but the losses are not the point, and are not the cause of the high price.

In medicine, there are occasionally articles which try to figure out where all the money is going in the US medical system; they tend to look at one piece, conclude that that piece isn't very profitable so it can't be responsible, and move on. I suspect this is what's going on with the cost of clinical trials, for example; they aren't any more expensive than they used to be, they just get allocated a share of the profits from R&D ventures that're highly profitable overall.

comment by Elizabeth (pktechgirl) · 2019-07-04T20:27:18.131Z · score: 4 (2 votes) · LW(p) · GW(p)

they aren't any more expensive than they used to be, they just get allocated a share of the profits from R&D ventures that're highly profitable overall.

Did you mean "allocated a share of the costs"? If not, I am confused by that sentence.

comment by jimrandomh · 2019-07-04T20:46:52.061Z · score: 4 (2 votes) · LW(p) · GW(p)

I'm pretty uncertain how the arrangements actually work in practice, but one possible arrangement is: You have two organizations, one of which is a traditional pharmaceutical company with the patent for an untested drug, and one of which is a contract research organization. The pharma company pays the contract research organization to conduct a clinical trial, and reports the amount it paid as the cost of the trial. They have common knowledge of the chance of success, of the future probability distribution of future revenue for the drug, how much it costs to conduct the trial, and how much it costs to insure away the risks. So the amount the first company pays to the second is the costs of the trial, plus a share of the expected profit.
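As a toy illustration of that arrangement (all numbers are made up for the sketch), the reported "cost of the trial" absorbs the profit share:

```python
# Hypothetical figures for the two-organization arrangement described above.
true_trial_cost = 50          # $M actually needed to run the trial
expected_drug_profit = 200    # $M expected profit from the drug overall
profit_share_to_cro = 0.8     # fraction routed to the contract research org

# The pharma company reports its payment to the CRO as "the cost of the trial".
reported_trial_cost = true_trial_cost + profit_share_to_cro * expected_drug_profit
print(reported_trial_cost)  # 210.0 -- more than 4x the true cost, on paper
```

On paper the trial looks expensive and the patent-holder looks unprofitable, even though most of the payment is really disguised profit.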

Pharma companies making above-market returns are subject to political attack from angry patients, but contract research organizations aren't. So if you control both of these organizations, you would choose to allocate all of the profits to the second organization, so you can defend yourself from claims of gouging by pleading poverty.

comment by Elizabeth (pktechgirl) · 2019-07-04T21:05:40.922Z · score: 2 (1 votes) · LW(p) · GW(p)

Ah, that makes sense. Thanks for explaining.

comment by jimrandomh · 2020-04-04T05:45:01.934Z · score: 15 (4 votes) · LW(p) · GW(p)

This tweet raised the question of whether masks really are more effective if placed on sick people (blocking outgoing droplets) or if placed on healthy people (blocking incoming droplets). Everyone in public or in a risky setting should have a mask, of course, but we still need to allocate the higher-quality vs lower-quality masks somehow. When sick people are few and are obvious, and masks are scarce, masks should obviously go on the sick people. However, COVID-19 transmission is often presymptomatic, and masks (especially lower-quality improvised masks) are not becoming less scarce over time.

If you have two people in a room and one mask, one infected and one healthy, which person should wear the mask? Thinking about the physics of liquid droplets, I think the answer is that the infected person should wear it.

  1. A mask on a sick person prevents the creation of fomites; masks on healthy people don't.
  2. Outgoing particles have a larger size and shrink due to evaporation, so they'll penetrate a mask less, given equal kinetic energy. (However, kinetic energies are not equal; they start out fast and slow down, which would favor putting the mask on the healthy person. I'm not sure how much this matters.)
  3. Particles that stick to a mask but then un-stick lose their kinetic energy in the process, which helps if the mask is on the sick person, but doesn't help if the mask is on the healthy person.

Overall, it seems like for a given contact-pair, a mask does more good if it's on the sick person. However, mask quality also matters in proportion to the number of healthy-sick contacts it affects; so, upgrading the masks of all of the patients in a hospital would help more than upgrading the masks of all the workers in that hospital, but since the patients outnumber the workers, upgrading the workers' masks probably helps more per-mask.
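A toy model of that last comparison, with all numbers hypothetical: suppose every worker contacts every patient, and (per points 1-3) a mask upgrade blocks more extra transmission on the sick side than on the healthy side:

```python
n_patients, n_workers = 100, 20
contacts = n_patients * n_workers   # assume every worker-patient pair meets

delta_sick = 0.30      # extra transmission blocked per contact, upgrade on sick side
delta_healthy = 0.15   # extra transmission blocked per contact, upgrade on healthy side

total_upgrade_patients = contacts * delta_sick      # every contact gets the sick-side benefit
total_upgrade_workers = contacts * delta_healthy    # every contact gets the healthy-side benefit

per_mask_patients = total_upgrade_patients / n_patients
per_mask_workers = total_upgrade_workers / n_workers
print(per_mask_patients, per_mask_workers)
```

With these made-up values, total benefit favors upgrading the (more numerous) patients, but per-mask benefit favors upgrading the workers, matching the conclusion above.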

comment by jimrandomh · 2019-07-09T02:20:15.059Z · score: 13 (6 votes) · LW(p) · GW(p)

Among people who haven't learned probabilistic reasoning, there's a tendency to push the (implicit) probabilities in their reasoning to the extremes; when the only categories available are "will happen", "won't happen", and "might happen", too many things end up in the will/won't buckets.

A similar, subtler thing happens to people who haven't learned the economics concept of elasticity. Some example (fallacious) claims of this type:

  • Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.
  • Building more housing will cause more people to move into the area from far away, so additional housing won't decrease rents.
  • A company made X widgets, so there are X more widgets in the world than there would be otherwise.

This feels like it's in the same reference class as the traditional logical fallacies, and giving it a name - "zero elasticity fallacy" - might be enough to significantly reduce the rate at which people make it. But it does require a bit more concept-knowledge than most of the traditional fallacies, so, maybe not? What happens when you point this out to someone with no prior microeconomics exposure, and does logical-fallacy branding help with the explanation?
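To sketch why claims like these implicitly assume infinite elasticity (the constant-elasticity demand curve and all numbers here are my own illustrative assumptions): with demand Q = A * p^(-e) and a fixed housing stock S, the market-clearing price is p = (A/S)^(1/e), which falls when supply increases for any finite elasticity e. The fallacious "rents won't fall" claim corresponds to the e -> infinity limit.

```python
def clearing_rent(stock, demand_scale=1e6, elasticity=1.0):
    # Constant-elasticity demand Q = demand_scale * p**(-elasticity);
    # set Q equal to the fixed stock and solve for p.
    return (demand_scale / stock) ** (1.0 / elasticity)

for e in [0.5, 1.0, 2.0, 10.0]:
    before = clearing_rent(100_000, elasticity=e)
    after = clearing_rent(110_000, elasticity=e)   # build 10% more housing
    print(f"elasticity={e}: rent falls {100 * (1 - after / before):.1f}%")
```

The drop shrinks as elasticity grows but never reaches zero, so "more people will move in" only implies "rents won't fall" in the limiting case.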

comment by Kaj_Sotala · 2019-07-09T12:50:32.584Z · score: 12 (3 votes) · LW(p) · GW(p)

Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.

Is this really fallacious? I'm asking because while I don't know the topic personally, I have some friends who are really into city planning. They've said that this is something which is pretty much unambiguously accepted in the literature, now that we've had the time to observe lots and lots of failed attempts to fix traffic by building more road capacity.

A quick Googling seemed to support this, bringing up e.g. this article which mentions that:

In this paper from the Victoria Transport Policy Institute, author Todd Litman looks at multiple studies showing a range of induced demand effects. Over the long term (three years or more), induced traffic fills all or nearly all of the new capacity. Litman also modeled the costs and benefits for a $25 million line-widening project on a hypothetical 10-kilometer stretch of highway over time. The initial benefits from congestion relief fade within a decade.

comment by habryka (habryka4) · 2019-07-11T01:53:19.186Z · score: 4 (2 votes) · LW(p) · GW(p)

Yeah, I do agree that for the case of traffic, elasticity is pretty close to 1. Importantly, that doesn't mean building more road capacity is a bad idea; it's actually indicative of demand for traffic capacity being really high, meaning the marginal value of building more is likely also really high.

comment by jimrandomh · 2020-02-10T02:21:34.520Z · score: 7 (4 votes) · LW(p) · GW(p)

Some software costs money. Some software is free. Some software is free, with an upsell that you might or might not pay for. And some software has a negative price: not only do you not pay for it, but someone third party is paid to try to get you to install it, often on a per-install basis. Common examples include:

  • Unrelated software that comes bundled with software you're installing, which you have to notice and opt out of
  • Software advertised in banner ads and search engine result pages
  • CDs added to the packages of non-software products

This category of software is frequently harmful, but I've never seen it called out in terms of this economic definition. For laypeople, about 30% of computer security is recognizing the telltale signs of this category of software, and refusing to install it.

comment by Viliam · 2020-02-10T21:43:27.840Z · score: 4 (2 votes) · LW(p) · GW(p)

I wonder what would be a non-software analogy of this.

Perhaps those tiny packages with labels "throw away, do not eat" you find in some products. That is, in a parallel world where 99% of customers would actually eat them anyway. But even there it isn't obvious how the producer would profit from them eating the thing. So, no good analogy.

comment by mr-hire · 2020-02-10T23:44:50.729Z · score: 2 (1 votes) · LW(p) · GW(p)

I'm trying to wrap my head around the negative price distinction. A business can't be viable if the cost of user acquisition is higher than the lifetime value of a user.

Most software companies spend money on advertising, and then they have to make that money back somehow. In a direct business model, they'll charge the users of the software directly. In an indirect business model, they'll charge a third party for access to the users or an asset that the user has. Facebook is more of an indirect business model, where they charge advertisers for access to the users' attention and data.

In my mind, the above is totally fine. I choose to pay with my attention and data as a user, and know that it will be sold to advertisers. Viewing this as "negatively priced" feels like a convoluted way to understand the business model however.

Some malware makes money by trying to hide the secondary market they're selling. For instance, by sneaking in a default browser search that sells your attention to advertisers, or selling your computers idle time to a botnet without your permission. This is egregious in my opinion, but it's not the indirect business model that is bad here, it's the hidden costs that they lie about or obfuscate.

comment by jimrandomh · 2020-02-11T19:05:03.917Z · score: 6 (3 votes) · LW(p) · GW(p)

User acquisition costs are another frame for approximately the same heuristic. If software has ads in an expected place, and is selling data you expect them to sell, then you can model that as part of the cost. If, after accounting for all the costs, it looks like the software's creator is spending more on user acquisition than they should be getting back, it implies that there's another revenue stream you aren't seeing, and the fact that it's hidden from you implies that you probably wouldn't approve of it.

comment by mr-hire · 2020-02-11T19:26:03.020Z · score: 4 (2 votes) · LW(p) · GW(p)

Ahhh I see, so you're making roughly the same distinction of "hidden revenue streams".

comment by jimrandomh · 2020-02-27T19:53:20.056Z · score: 4 (2 votes) · LW(p) · GW(p)

The Diamond Princess cohort has 705 positive cases, of which 4 are dead and 36 serious or critical. In China, the reported ratio of serious/critical cases to deaths is about 10:1, so figure there will be 3.6 more deaths. From this we can estimate a case fatality rate of 7.6/705 ~= 1%. Adjust upward to account for cases that have not yet progressed from detection to serious, and downward to account for the fact that the demographics of cruise ships skew older. There are unlikely to be any undetected cases in this cohort.
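The arithmetic in that estimate, as a quick check (figures as reported in the comment):

```python
positive = 705
dead = 4
serious_or_critical = 36

# Assume the ~10:1 serious-to-death ratio reported from China.
expected_further_deaths = serious_or_critical / 10   # 3.6
cfr = (dead + expected_further_deaths) / positive
print(f"{cfr:.1%}")  # 1.1%, i.e. roughly 1%
```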

comment by steve2152 · 2020-02-27T21:10:43.280Z · score: 5 (3 votes) · LW(p) · GW(p)

Hang on, maybe I'm being stupid, but I don't get the 3.6. Why not say 36+4=40 serious/critical cases and the 10%=4 of them have already passed away?

comment by jimrandomh · 2020-02-27T21:25:28.713Z · score: 5 (3 votes) · LW(p) · GW(p)

You're right, adding deaths+.1*serious the way I did seems incorrect. But, since not all of the serious cases have recovered yet, that would seem to imply that the serious:deaths ratio is worse in the Diamond Princess than it is in China, which would be pretty strange. It's not clear to me that the number of serious cases is as up to date as the number of positive tests.

So, widen the error bars some more I guess?

comment by Dagon · 2020-02-27T20:57:54.057Z · score: 4 (2 votes) · LW(p) · GW(p)

How many passengers were exposed? Capacity of 2670; I haven't seen (and haven't looked that hard for) how many actual passengers and crew were aboard when the quarantine started. So maybe over 1/4 of those exposed became positive, 6% of those positive became serious, and 10% of those fatal.

Assuming it escapes quarantine and most of us are exposed at some point, that leads to an estimated fatality rate of 0.0015 (call it 1/6 of 1%). Recent annual deaths are 7.7 per 1000, so my best guess is this adds 20%, assuming all deaths happen in the first year and any mitigations we come up with don't change the rate by much. I don't want to downplay 11.5 million deaths, but I also don't want to overreact (and in fact, I don't know how to overreact usefully).
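The chain of rough figures above, made explicit (all inputs are the comment's guesses, not data):

```python
capacity = 2670          # stand-in for the number exposed
positive = 705
serious = 40             # 36 serious/critical plus the 4 who died

attack_rate = positive / capacity        # ~0.26, "over 1/4"
serious_rate = serious / positive        # ~0.057, "6%"
fatality_if_serious = 0.10

population_fatality = attack_rate * serious_rate * fatality_if_serious
print(f"{population_fatality:.4f}")      # ~0.0015

baseline_deaths_per_1000 = 7.7
print(f"adds ~{population_fatality * 1000 / baseline_deaths_per_1000:.0%} to annual deaths")
```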

I'd love to know how many of the serious cases have remaining disability. The duration and impact of the surviving cases could easily be the difference between unpleasantness and disruption that doubles the death rate, and societal collapse that kills 10x or more as many as the disease does directly.