I could imagine this turning into a flexible system of alliances similar to the conference system in NCAA college football and other sports (see here for a nice illustrated history of the many changes over time). Just as conferences and schools negotiate membership based on the changing quality of their sports programs, ability to generate revenue, and so on, states could form coalitions that could be renegotiated based on changing populations or voter preferences.
Thinking from that perspective, one potential Schelling point could be a "Northwest" coalition of WA/OR/ID/MT/WY/ND/SD/NE. This is quite well-balanced, as these states combined to give 21 EV to each candidate. And although the state populations are higher in WA/OR (12.0M) than the six red states (7.4M), the combined vote totals actually show a small lead for Trump (4.1M vs 3.9M, with more votes remaining to be counted in the blue states likely to close the gap).
After this, maybe the remaining "Southwest" states (NV, UT, CO, AZ, NM) decide to join forces? Here a state-by-state analysis is less useful, especially since two of them still haven't been called, but the current combined vote count is a very narrow Trump lead of 4.07M to 4.05M.
The eastern half of the country seems harder to predict - clearly there are large potential blocs of blue states in the northeast and red states in the southeast, but it's harder to see clear geographical groupings that make sense.
Unlikely any of this happens of course, but fun to think about.
I was thinking more that the acidic environment of the stomach could break down the aggregates to the protein monomers. This step wouldn't be reliant on proteases, although proteases might then be able to further break down the monomers. But I haven't looked into whether this has been studied.
I'm not convinced that eating prion-contaminated tissue is a major factor in transmitting prion diseases. Prions are still proteins, which are broken down to amino acids very readily by the digestive system. Even if prions are more stable than most proteins because they have formed these crystalline-like oligomers, such large molecules would have little chance of being absorbed intact into the bloodstream. Instead, I would imagine they would pass through the digestive system and be excreted in feces.
The Wikipedia article on kuru proposes an alternate mechanism which seems more plausible to me:
the strong possibility exists that it was passed on to women and children more easily because they took on the task of cleaning relatives after death and might have had open sores and cuts on their hands.
This would allow the prion particles to enter the bloodstream directly where they could be absorbed into tissues and persist for a long time, bypassing the digestive system entirely.
If this is the primary mechanism of transmission, it would support your argument that eating cooked meat would have minimal risk, while handling diseased tissue would actually be the much higher risk activity.
Do you know if any of the ~200 people who came down with vCJD (the human form of BSE) were involved in handling/butchering the meat, as opposed to just buying contaminated meat at the store and eating it? (I suppose people who bought meat at the grocery store could have still gotten infected during meal prep, but if a substantial number of victims were butchers/slaughterhouse workers/etc., it could be evidence in support of this hypothesis.)
Should we humans broadcast more explicitly to future AGIs that we greatly prefer the future where we engage in mutually beneficial trade with them to the future where we are destroyed?
(I am making an assumption here that most, if not all, people would agree with this preference. It seems fairly overdetermined to me. But if I'm missing something where this could somehow lead to unintended consequences, please feel free to point that out.)
Sorry, I thought that would be more commonly understood. As Carl said, it stands for Contract Research Organization. Hiring one is a way to get additional resources to perform specific tasks without having them be part of your organization, understand your corporate strategy, or even know what project you're working on. For example, a pharma company can hire a CRO to synthesize a specific set of potential drug compounds, without telling them what the biological target is or what disease they are trying to treat. Or think of the scenario where a rogue AGI hires someone to make a DNA sequence which turns out to code for a pathogen that kills all humans. This would likely be done at a CRO.
CROs are often thought of as being fairly competent at executing the specific task required of them, but less competent at thinking strategically, understanding the big picture, etc. So they are generally only hired for very well-defined trades, as you mentioned above.
Maybe it's better to model the army of ants as a CRO you would hire instead of an employee? And by extension, I would much prefer to be part of an AGI's CRO than be extinct.
I often use the heuristic that if two sources with opposing Narratives both claim that a certain fact is true, it is strong evidence that the fact is indeed true. Are there cases where this heuristic fails? E.g. where both sides claim a fact is true (likely with different motives), but it is actually false?
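One way to see where the heuristic can fail is a quick Bayes sketch. The numbers below are purely illustrative assumptions, not data; the point is that the heuristic's strength depends on how unlikely it is that *both* sides would assert a false fact, which collapses when both sides share an incentive to make the claim:

```python
# Toy Bayes check of the "two opposed sources agree" heuristic.
# All probabilities here are illustrative assumptions.

def posterior(prior, p_both_claim_if_true, p_both_claim_if_false):
    """P(fact is true | both opposing sources claim it), via Bayes' rule."""
    num = prior * p_both_claim_if_true
    den = num + (1 - prior) * p_both_claim_if_false
    return num / den

# Independent motives: a false fact rarely gets endorsed by both sides,
# so joint agreement is strong evidence.
print(posterior(0.5, 0.9, 0.05))   # ~0.95

# Shared incentive: both sides benefit from the claim even if it's false
# (e.g. each uses it to energize its own base), so the evidence is much weaker.
print(posterior(0.5, 0.9, 0.60))   # ~0.60
```

So the failure mode is exactly the case where the two Narratives, despite opposing each other, both profit from the same false "fact".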
If you have hierarchy in a company, regardless of whether people are "middle managers" per se, there's a tendency for people to come to care about advancing in the hierarchy. It's a natural thing to want to do.
I would take this a step further and say that once maze levels are high enough, it essentially becomes a requirement to care (or at least pretend to care) about advancing in the hierarchy. Instead of advancement being something that some employees might want and others might not want, it becomes almost an axiom within the organization that everyone must strive for advancement at all times. But although advancement can be a natural thing to want, it's certainly not a universal thing to want. And for people like me who aren't strongly motivated by their place in the hierarchy, this can lead to a lot of conflict, stress, and low morale.
When I was a kid (maybe around 10) I learned about the Peter Principle: the idea that everyone in an organization gets promoted to the level of their incompetence. I thought that was one of the saddest things I'd ever heard. Why would everyone try so hard to get promoted to a role they weren't good at? Just for the extra money? I decided that when I started working, I would rather stay in a role I was good at and enjoyed on a day-to-day basis than get promoted to a managerial role which already sounded awful, even if it meant staying at a lower salary.
Once in the maze, however, I found it a lot harder to stay in my happy, productive role than I was expecting. I constantly felt pressure to want to get promoted. But I secretly didn't want to, because that would mean spending less time doing the actual hands-on work that I liked and more time spent in the maze world interacting with other managers. This led to a lot of tension with my bosses. They couldn't comprehend why anyone wouldn't be excited about getting promoted. Higher level jobs were just better; why couldn't I see that? But to me, they weren't better and I couldn't get them to see my perspective. Ironically, their desire to promote me incentivized me to be less productive than I would have been otherwise - if we had been able to come to an agreement where I could stay in my desired role, I would have been more motivated to work harder without the fear of accidentally getting promoted too quickly.
This was all very frustrating and confusing to me for a long time. Eventually I came across the Moral Mazes sequence and the Gervais principle, which together seemed to explain a lot of what I was experiencing and ultimately gave me the courage to leave that organization.
Anyway, that's my story of working in a maze - happy to discuss further if this was useful or informative.
Another potential assumption/limitation of the EMH:
- Socially acceptable to trade: It must be socially acceptable for people who have enough financial resources to noticeably affect market prices to trade based on the new information.
I initially proposed this idea to try to explain the market's slow response to the early warning signs of Covid in this comment. Similar dynamics may come into play with respect to the social acceptability of ESG vs anti-ESG investing based on political affiliation, although in this case I don't think there is enough anti-ESG money to affect the prevailing ESG trends much at this point.
Maybe the market is predicting that R0 will be >1, but isolation and contact tracing will be enough to prevent a wider outbreak?
What about the combo: a tic-tac-toe board position, a tic-tac-toe board position with X winning, and a tic-tac-toe board position with O winning. Would it give realistic positions matching the descriptions?
That's fair. Maybe I was more trying to get at the chances that current live orgs will develop this know-how, or if it would require new orgs designed with that purpose.
Does an organization's ability to execute a "pivotal act" overlap with Samo Burja's idea of organizations as "live players"? How many are there, and are there any orgs that you would place in one category and not the other?
Do you prefer D over E? I do.
Is this backwards? Seems like it should be E over D.
Galeev mentions Navalny in his newest thread about power dynamics and how they might change in response to the current crisis. It's a long thread so you'll need to scroll down quite a bit to see the section on Navalny. Galeev doesn't portray him in a very positive manner.
Yeah, I like his prediction that if Europe stops buying Russian energy it could force Russia into greater economic dependence on China. I'm wondering how likely Europe is to actually move away from Russian energy though. It sounds like the obvious thing to do, but I don't know from a practical standpoint how easy it would be without causing a lot of disruption in the short to medium term. I doubt they can just flip a switch and convert to new energy sources overnight, especially in Eastern Europe which is heavily reliant on Russian supply.
I think the longer the war lasts, the more likely it is for Europe to move away from Russian energy. But if the war ends relatively quickly, the motivation to do so might fade away due to economic considerations as well as general inertia/political difficulty when trying to make substantial changes.
There is also the consideration that the earlier you pick a partner, the longer you get to enjoy the benefits of having one.
Thanks for the link. I found it well thought out and plausible, but it seems strongly based on the assumption that Russia will remain isolated from the global financial system for the next 5-20 year timeframe discussed in the article. Is that a reasonable assumption? Although Russia is a pariah now, once the hostilities have ended I would guess the sanctions will be lifted over time, since they are also expensive for the West to maintain.
This article by Tomas Pueyo looks at Russia from a historical and geographical perspective. It makes the case that much of Russia's foreign policy is based on the need to protect Moscow, which is in the middle of the vast Eurasian plain with no natural barriers for defense, and so is vulnerable to attack from all directions. So Russia's strategy has been to expand as much as possible, to either control directly the land where invasions might have otherwise come from (e.g. Siberia), or failing that, to at least create predictably controllable buffer states (the former Soviet republics) between them and their rivals. From that perspective, Ukraine may have been becoming too unpredictable as a buffer state recently, giving Russia an incentive to want to control the land directly.
#21's response that "If a pill form was available... I would" might be related to needle phobia, although not explicitly stated.
When I've asked my Red Tribe friends their thoughts about the vaccines, they've generally given answers similar to those in your survey: they don't trust the government, media, big pharma, etc. But I think the main underlying reason is just signaling which tribe they belong to. Early in the pandemic, Trump's messaging was that covid was no big deal, just the flu, it will go away soon, etc., and this became the party line. For relatively young, healthy people, choosing to get the vaccine might be seen as disloyal to the tribe. So when they weighed these social costs against the seemingly negligible risk of death or serious illness, they chose their social identity over getting vaccinated. And as Matthew Barnett mentioned, they could easily come up with plausible-sounding rationalizations like the ones from your survey.
This seems consistent with Zvi's concept of Asymmetric Justice.
I'm not a lawyer, but their newest Terms of Service imply otherwise:
USE OF A VIRTUAL PRIVATE NETWORK (“VPN”) TO CIRCUMVENT THE RESTRICTIONS SET FORTH HEREIN IS PROHIBITED.
Not sure how willing and able they would be to enforce such a regulation, but that's a different question. (Not legal advice!)
Any thoughts on if/when Polymarket might be available again in the US? I found their Compliance Update which says they are still looking to build a US product, but given the recent CFTC settlement it's hard to tell how likely this is to happen.
Of course, one pleasingly meta way to get at this question would be to create a new prediction market asking "Will Polymarket be available in the US on [date]", but I wonder if Polymarket would be willing to put up a market like this, since the regulators they are dealing with might not find it amusing.
I agree that attending large events is a type of risk compensation, and we may be referring to similar behavior patterns using different words here. But I'm trying to distinguish between these two types of infection:
- Infections resulting from people going about their daily activities (e.g. getting exposed at work, in a store or restaurant, at other small gatherings, etc.) Here, individuals might indeed change their behavior based on their own vaccination status and risk tolerance. But since Omicron is so widespread at this point, the probability that an infected person was vaccinated should be close to the base rate of vaccination among the overall population (although somewhat lower, since the vaccines still prevent some transmission of Omicron). In other words, P(vaccinated | type-1 infection) is a little less than P(vaccinated).
- Infections resulting directly from attending large superspreader events where proof of vaccination was required. In this case, while P(vaccinated | type-2 infection) won't be exactly 1 due to the possibility of fake vaccine ID cards or weak enforcement of the policy at the event, I think it would still be quite close to 1.
If type-2 infections are a high enough percentage of overall infections, this could make it look like vaccinated people are more likely to get infected (which would be true at the population level!) even though getting vaccinated makes it less likely for any individual to get infected (assuming their behavior after vaccination remains the same).
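To make that concrete, here is a toy numeric sketch. Every number below (base vaccination rate, relative risk, event share) is an assumption chosen for illustration, not real data; the point is just that as the type-2 share of infections grows, the vaccinated share of all infections can climb above the population base rate even though vaccination lowers each individual's risk:

```python
# Toy model: vaccinated-only superspreader events can push
# P(vaccinated | infected) above the population base rate.
# All numbers are illustrative assumptions.

base_rate = 0.70   # assumed fraction of the population vaccinated
rel_risk = 0.70    # assumed relative infection risk if vaccinated (type-1 settings)

# P(vaccinated | type-1 infection): a bit below the base rate
p_vax_type1 = (base_rate * rel_risk) / (base_rate * rel_risk + (1 - base_rate))

# P(vaccinated | type-2 infection): close to 1 at vaccine-required events
p_vax_type2 = 0.95

for type2_share in (0.0, 0.25, 0.50):
    p_vax_infected = ((1 - type2_share) * p_vax_type1
                      + type2_share * p_vax_type2)
    print(f"type-2 share {type2_share:.2f}: "
          f"P(vaccinated | infected) = {p_vax_infected:.3f}")
```

Under these assumed numbers, P(vaccinated | type-1 infection) is about 0.62, below the 0.70 base rate; but once type-2 events account for a quarter or more of infections, the overall vaccinated share of cases exceeds the base rate.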
Apologies if much of this is obvious or redundant - I'm still trying to understand the gears behind this dynamic better myself. I agree there is likely a component coming from "vaccinated people take on more risk in general", but I hadn't considered that policies which only allow vaccinated people (to a first approximation) to attend large potentially-superspreading events could lead to increased transmission among the vaccinated relative to the unvaccinated, which could lead to negative perceived vaccine effectiveness, until seeing Peter's post.
That may contribute as well, but I think Peter was implying that if enough cases overall take place during superspreader events where ~all of the attendees were vaccinated, vaccinated people may be more likely to test positive just because they were substantially more likely to be attending those superspreader events than unvaccinated people.
I also strongly upvoted for the same reasons. Very much looking forward to the results of the ELISA mucus test!
Bitcoin can only go as low as $0. Bitcoin could, in theory, go up not only to $100k but to $1 million or more.
I'm confused. In theory, $50k currently invested in VTI could also go to any of those values. Is there something I'm missing about the relative likelihood of different outcomes that would make Bitcoin the more attractive investment? I feel like there's some Econ 101 lesson I'm forgetting here.
There’s no trade, since (as many people reminded me) Metaculus is not a prediction market and you can’t trade on its values, but there’s still a big contradiction with market prices here.
In this case, isn't the trade to just use the info Metaculus provides to inform your trades elsewhere? In a way, that's an advantage of having Metaculus in addition to money-based prediction markets - predictors at money-based vs. points-based prediction markets have different motivations for predicting, so they're likely to be self-selected from different populations and may generate different, complementary predictions. Granted, for any individual question it would be easier to be able to trade directly in the money-based market, but I think there's an overall benefit in having both types available.
Welcome, and thanks for making your first comment!
As a fellow scientist with decades of experience in the industry, I disagree with several of your claims.
First, you will never know if it really works until you run blinded clinical trials against a placebo. This is the only way to tell and that is why it's required for any new drug/vaccine to be launched on the market.
Clinical trials are helpful for understanding whether a drug/vaccine works on the population level. But on the individual level, clinical trials are not the only way to tell. For example, you can just take an antibody test and see if it works.
You can't just take an antibody test and see if it works.
Of course you can.
Even if there were the right antibody tests for these peptides
Anna Czarnota posted an initial protocol here. I haven't tried it, but it seems reasonable and likely to provide useful information about one's level of protection.
but without using rigorous scientific method, there could be many other factors why you could see a response. Like you were exposed already to the virus and didn't know it.
The "rigorous scientific method" is not the only way to generate knowledge that allows individuals to update their priors. But setting that aside, the question of whether one's immune response came from the vaccine or from previous exposure to the virus is not very relevant to one's future decision making. Either way, the antibody test provides information about one's current level of immunity, which one can use to update their risk tolerance and behaviors.
It feels like your comments are aimed at the question, "What is the best vaccine (or vaccines) to approve and mass produce for the general population?" which is a perfectly valid and important question. As things currently stand, this relies on the standard clinical trials/FDA approval process. But this process takes a long time and is prone to all sorts of delays and inefficiencies due to politics and organizational maze behaviors, during which the pandemic continues to spread. Realizing that, the radvac developers and many commenters here have been asking a different question: "What can individuals do now (or in a future pandemic) to mitigate their personal risk of being infected?"
Both questions are important, but the large organizations responsible for developing/approving new vaccines have very different incentives than individuals looking for ways to minimize their own risk of infection.
I think it's just that a few weeks is the going rate for avoiding blame, as Zvi outlined in his posts Asymmetric Justice and Motive Ambiguity.
A politician can choose between two messages that affirm their loyalty: Advocating a beneficial policy, or advocating a useless and wasteful policy. They choose useless, because the motive behind advocating a beneficial policy is ambiguous. Maybe they wanted people to benefit!
Good question. I hadn't defined it in any more detail in my mind. But my basic thought is that someone should be able to build an online presence under a pseudonym (from the beginning, without having revealed their real name publicly like Scott had) as long as they comply with the rules of the communities they choose to join, without legal obligation to declare their real name. I would imagine some exceptions would have to apply (for example, in the case of a legally enforceable warrant) but others, including journalists, would refer to the pseudonym if they wanted to report on such a person.
But of course there could be unintended consequences of this sort of rule that I haven't considered.
Strongly agree with your analysis.
I also think a lesson to take away here is that, assuming we agree pseudonymity is generally considered a desirable option to have available, it falls on us to assert the right to it.
I agree this is an important topic for discussion, and I hope others will continue to weigh in with their thoughts. I'm sure this won't be the last time a journalist writes/is interested in writing an article about this community, and it would be good to coordinate around some norms here.
- Scott was told that the way to get ahead of damaging journalism is to reveal everything they might want to find out. For those of us writing under a pseudonym, should we all just be revealing our real names, and letting friends, family members, and colleagues (where appropriate) know about our connection with SSC and this community?
I'm personally not ready to do that yet. I also feel that revealing it too early would risk some of the positive things I'm trying to do within my community, and I don't want to take that chance.
Agree with John, thank you so much!
Yes, I think we are all in agreement on the topic. On my first reading, seeing the isolated quote between the other two examples of poor vaccine responses made me think this was another example of a poor response, and the quote itself can be interpreted that way if read alone (i.e. We think only vaccinating 75-year-olds is the correct policy, and it's hard but necessary work to enforce it).
The loss of life and health of innocent people who got suckered into a political issue without considering the ramifications?
By now, everyone has had a year to consider the ramifications of their decisions. People are free to make their own choices about the vaccine and their response to covid in general. If they make their choices based on their political affiliation or in-group signaling, so be it.
But with these numbers (death rate, long term health conditions, effectiveness of vaccines) around are you seriously suggesting trying to help them is not cost-effective?
I am seriously suggesting it is not cost-effective for me to try to influence others to get the vaccine. Most of the people I know have either already decided to get the vaccine at their first opportunity, or decided they will never get it. In November/December, as the vaccines were starting to get approved, I had some discussions with my few friends who I thought might be on the fence, but they weren't moved much by my arguments. I don't actually think I know anyone that I could convince at this point.
On a population level, I agree it is worthwhile and most likely cost-effective to continue to encourage people to get vaccinated. But that is almost entirely beyond my ability to influence. And I reject any blame for observing this situation and commenting on it without completely fixing it.
I believe the quote in the Janelle Nanos tweet (after "Meanwhile, in Boston, priorities are straight:") was taken out of context here. The full article shows how Dr. Ivers was trying to point out the inefficiency of the state's rigid system and offer improvements:
For weeks, Dr. Louise Ivers has been advocating for Massachusetts to speed-up the pace of its COVID-19 vaccinations. But it’s not just the slowness of the rollout that is causing the Boston doctor consternation when it comes to the state’s vaccine push.
The executive director of Massachusetts General Hospital Global Health and interim head of MGH’s Division of Infectious Diseases told Boston.com that while she’s been disappointed by the state’s vaccine efforts, she isn’t completely surprised by the sluggish and fragmented rollout based on the response to the virus over the last year.
...
Ivers told Boston.com she believes that if the pace of the vaccine were ramped up with more flexibility to start new phases as others plateau, that some of the issues around equity that the state has seen would “settle a little more carefully.”
“It’s quite complicated — you spend a lot of operational resources and planning and logistics to make sure that you only vaccinate 75-year-olds,” Ivers said. “There’s a lot of time and energy spent on making sure that a 74-year-old doesn’t accidentally get vaccinated.”
Instead, Ivers said the state should be moving more quickly to expand vaccine access to those 65 years old and up, as well as groups with comorbidities.
I also notice that there is a large part of me that thinks, once it’s easily and widely available, you know what? Straight up, just f*** ‘em if they don’t want the vaccine.
This is how I was planning to act at that point, and basically how I plan to act as soon as I'm able to get an official vaccine. Once it's readily available I'll feel no guilt about continued cases (assuming no major vaccine escape, that would be a different story). Even once I've gotten the official vaccine, I'll want to propagate the norm that vaccinated people should live their lives as if they were, you know, vaccinated, so I intend to act that way, unless there's a reason I'm not considering.
Is this something that can be done at home with readily available and affordable equipment? If so, would you be willing to share more details of how someone might get started? I think a lot of readers would be interested in hearing more about this - it could even be its own post.
Maintaining 4 °C sounds doable with a good fridge and a data logging thermometer. -20 °C is more tricky - maybe use a home freezer (*** is specced at ≤ -18 °C) and add a data logger. If it then turns out that it can't reach -20 °C, it might be possible to fix that by modding its internal thermostat somehow. Or have access to a lab freezer, or shell out the big bucks (four figures) to buy one.
As someone who has worked in labs for a long time, I wouldn't worry about having to hit exactly -20 °C; that basically just means "freezer temperature". Lab freezers don't work any differently than home freezers as far as I can tell, although they do have certain safety features that a home freezer wouldn't. But the temperature can still vary a few degrees up or down, and it shouldn't affect your storage much. The (very) general rule of thumb is that a difference of +/- 10 °C makes chemical reactions (such as peptide degradation) go 2x faster/slower. So even having to store in a fridge temporarily would only be ~5x faster than a freezer, still maybe good enough for one's purposes.
The big difference comes for -20 °C vs -80 °C, since there you have a 2^6 or 64-fold rate difference. So something that can last for a month at -80 °C might degrade in half a day in a freezer. Hence the complex supply chains needed for such vaccines.
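The rule of thumb above can be sketched in a few lines. The doubling-per-10 °C figure is itself only an approximation (a Q10 of roughly 2), and the function here is just an illustration of the arithmetic, not a validated degradation model:

```python
# Rough rule of thumb: reaction rates (e.g. peptide degradation)
# roughly double for every 10 °C increase in temperature.
# The Q10 ~= 2 factor is an approximation, not an exact law.

def relative_rate(temp_c, ref_temp_c=-20.0, q10=2.0):
    """Degradation rate at temp_c relative to storage at ref_temp_c."""
    return q10 ** ((temp_c - ref_temp_c) / 10.0)

# Fridge (4 °C) vs freezer (-20 °C): 24 °C warmer, ~5x faster
print(relative_rate(4))                      # ~5.3

# Freezer (-20 °C) vs ultracold (-80 °C): 60 °C difference, 2^6 = 64x
print(relative_rate(-20, ref_temp_c=-80))    # 64.0
```

The 64x factor is why something stable for a month at -80 °C might only last about half a day at -20 °C (30 days / 64 ≈ 0.5 days).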
I didn't know that! OP, you can also highlight the desired text and click the block quote button. You can also add links that way.
Totally agree, and this is pretty much what I had in mind as well. The organizer can also host a Zoom call beforehand where they explain the procedure, answer any questions, and let people sign up for times spaced out by 5-10 minutes to self administer.
Agree that neither Sarah nor you had explicitly mentioned a clinical trial. I was pushing back more against Sarah's statement “Take a random peptide that has never been tested on any living thing” and your statement "She doesn't explicitly state that this has never been tested on any living thing", which I interpreted as endorsing the claim that this vaccine has never been tested on any living thing. My point is that there is evidence this vaccine has been tested in living things, namely the humans who claim to have self administered it. I have no strong reason to doubt they have done so, and I haven't seen any reports of harm coming to these individuals as a result (although admittedly I have no idea if such reports would be publicly available). When I mentioned clinical trials, I was trying to think of what evidence might convince Sarah this approach is not as risky as she fears, and a clinical trial was the first thing that came to mind.
This should be fairly easy to do, for someone with access to a good lab, personal-scale funding, and motivation. I have to assume that Church et. al. have the first two, so either they don't care enough to bother, or they did but the results weren't encouraging (and either kept quiet or just unnoticed).
Agree they almost certainly have the first two, but I don't see why they would have had motivation to perform the kind of cell-based studies you are looking for. Here is how I imagine their motivation and incentives throughout the last year, mostly drawn from the article I linked above and info from the radvac website:
- They see Covid is becoming a pandemic, estimate that a commercial vaccine is >1 year away, and wonder if they can develop an open source vaccine that will provide some level of protection more quickly. At this point, their strongest motivation is to develop a vaccine for their own personal use.
- They design the radvac vaccine, and based on their personal and collective understanding of vaccines, biochemistry, immunology, etc., each individual decides it is in their personal best interest to self administer the vaccine.
- They are torn between competing desires to make their protocol and the underlying research public, and to avoid unnecessary attention from regulatory authorities. From the article:
Given the international attention on covid-19 vaccines, and the high political stakes surrounding the crisis, the Radvac group could nevertheless find itself under scrutiny by regulators. “What the FDA really wants to crack down on is anything big, which makes claims, or makes money. And this is none of those,” says Church. “As soon as we do any of those things, they would justifiably crack down. Also, things that get attention. But we haven’t had any so far.”
- Therefore they settle on the strategy of publishing the white paper under the radar, so it is publicly available but attracts as little attention as possible. (With great success I might add, since we are only having this discussion 6 months later!)
- Each individual has already made the decision to self administer based on their personal risk-benefit analysis, without the need for cell-based studies.
- Publishing additional cell-based studies could increase the chance of drawing unwanted regulatory attention to their effort.
- Thus, they don't have strong incentives to carry out any cell-based studies (which would also take time and effort away from higher priority things they might work on instead), and they likely do have incentives to avoid publishing any cell-based studies.
Which leaves us in the current equilibrium where there are no published cell-based studies.
I think your claim that "they don't care enough to bother" is not very accurate, and a consideration of their incentives as I outlined above provides an alternative reason why we might not expect to find any published cell-based studies.
At the end of the day, we all still have to make personal decisions based on the information at our disposal, as incomplete or challenging to interpret as it may be.
Happy to hear any additional thoughts on this topic!
Why would they have to gather in close quarters? One person could make it in their kitchen, then leave the room while others come in one at a time to self-administer their dose.
This article from July 2020 claims that George Church and many of his colleagues had already self-administered their vaccine at that point. It's almost certainly true that there hasn't been a clinical trial, because nobody has ever had an incentive to run a clinical trial. I don't think their intent was to publicize this widely or profit commercially from it. Rather, they realized they could just do it, went ahead and did it, and wrote up their findings publicly but under the radar, so other like-minded individuals could duplicate their procedure at their own risk. Remember that they are an academic research group and they face very different incentives than the drug companies trying to vaccinate the general public. In any case, it seems clear that these vaccines have been tested on many living things, just not in an official study.
For the average Less Wrong reader, I tend to agree. But a nurse in an area with a strong, vocal anti-vaccine community may face substantial social pressure to (at least publicly) reject commercial vaccines, for the reasons I stated above.
Agree it is extremely unlikely that many nurses have done so, and your probabilities seem quite reasonable. I think the main reason why many nurses have declined the vaccine is social signaling - either to maintain their social status within a mostly anti-vaccine peer group, or to maintain credibility with their anti-vaccine patients, who may be reluctant or outright refuse to be treated by a nurse who has been vaccinated because such a nurse is on "the wrong side" and can no longer be trusted. However, a nurse could self-administer the radvac vaccine and get some protection, while still being able to honestly claim they have no plans to get the commercial vaccines.
I hadn't read the whitepaper yet before my initial post, and after a quick scan it looks like you are correct that radvac covers different epitopes than the commercial vaccines (I haven't done my own detailed analysis yet). Are you and others who plan to take radvac still planning to get a commercial vaccine once you are eligible?
Crazy thought, and I doubt this is likely on a large scale or it would have been in the news, but any chance this could explain the higher than expected percentage of nurses who have rejected getting the vaccine? Perhaps some have already vaccinated themselves under the radar! And therefore have no need to take the "real" one.
Also from nostalgebraist's summary:
Meanwhile, the change which the essay does argue for – towards more legibility – feels only tangentially relevant to the problem. Yes, designs that are easier to understand are often easier to customize.
For voting systems, I think the key insight is instead: Designs that are easier to understand are easier to trust.
One last comment/reminder to myself: I read nostalgebraist's summary of Weyl's "Why I am not a technocrat" argument (haven't read the original yet), and his last few points seem very relevant to my argument:
8. What needs to be true for a mechanism to be open to modification by the masses? For one thing, the masses need to understand what the mechanism is! This is clearly not sufficient but it at least seems necessary.
9. Elites should design mechanisms that are simple and transparent enough for the masses to inspect and comprehend. This goal (“legibility”) trades off against fidelity, which tends to favor illegible models.
10. But the elite’s mechanisms will always have problems with insufficient fidelity, because they miss information known to the masses (#3). The way out of this is not to add ever more fidelity as viewed from the elite POV. We have to let the masses fill in the missing fidelity on their own.
And this will require more legibility (#8), which will come at the cost of short-term fidelity (#9). It will pay off in fidelity gains over the long term as mass intervention supplies the “missing” fidelity.
I take this to be the central piece of advice articulated in the essay.