Posts

JBlack's Shortform 2021-08-28T07:42:42.667Z

Comments

Comment by JBlack on Newcomb's Lottery Problem · 2022-01-28T08:17:19.026Z · LW · GW

I take both boxes for a whole bunch of reasons. Included among them are the facts that Omega is terrible at setting up these problems, and also that she is a very poor predictor.

I have no idea who the faceless mob are who "know" that Omega is 99% accurate in her predictions and completely honest, but they're wrong.

Comment by JBlack on What's Up With Confusingly Pervasive Consequentialism? · 2022-01-24T00:18:37.766Z · LW · GW

Combining multiple sources of information, double checking, etc. are ways to decrease error probability, certainly. The problem is that they're not independent. For highly complex spaces, not only does the number of additional checks you need increase super-linearly, but the number of types of checks you need likely also increases super-linearly.

That's my intuition, at least.
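Here's a minimal sketch of that intuition with made-up numbers: k checks that each miss an error 10% of the time, where all the checks share a small common blind spot. The shared-failure probability c is entirely an assumption for illustration.

```python
e = 0.10   # assumed per-check miss rate
c = 0.01   # assumed probability that all checks share a blind spot
for k in (1, 2, 4, 8):
    independent = e ** k               # naive independent-checks estimate
    correlated = c + (1 - c) * e ** k  # crude shared-failure-mode model
    print(f"{k} checks: independent {independent:.1e}, "
          f"correlated {correlated:.1e}")
```

The independent estimate keeps shrinking geometrically, while the correlated version flattens out near c: past that point you need new types of checks, not more of the same type.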

Comment by JBlack on Is AI Alignment a pseudoscience? · 2022-01-24T00:08:39.536Z · LW · GW

The field of AI alignment is definitely not a rigorous scientific field, but nor is it anything like a fictional-world-building exercise. It is a crash program to address an existential risk that appears to have a decent chance of happening, and soon on the timescale of a civilization, let alone a species.

By its very nature it should not be a scientific field in the Popperian sense. By the time we have any experimental data on how any artificial general superintelligence behaves, the field is irrelevant. If we could be sure that it couldn't happen soon, we could take more time to map out the field and start the likely centuries-long process of making it more rigorous.

So I answer your question by rejecting it. You have presented a false dichotomy.

Comment by JBlack on Risk and Safety in the age of COVID · 2022-01-23T23:24:29.811Z · LW · GW

Plexiglass separators were a reasonable precaution back when the mainstream view was that the disease spread mostly via large droplets that fell to the ground within seconds. They seem less useful now that nearly everyone gives higher credence to primarily aerosol spread.

That said, we still don't have great data on how easily COVID spreads in practice through various transmission routes. Maybe they do significantly reduce probability of transmission after all.

Comment by JBlack on What's Up With Confusingly Pervasive Consequentialism? · 2022-01-22T01:25:28.203Z · LW · GW

Your "harmfulness" criteria will always have some false negative rate.

If you incorrectly classify a harmful plan as beneficial one time in a million, in the former case you'll get 10^44 plans that look good but are really harmful for every one that really is good. In the latter case you get 10000 plans that are actually good for each one that is harmful.
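The arithmetic can be reconstructed under assumed base rates (my inference of the two cases, not figures stated here): harmful plans outnumbering good ones by 10^50 to 1 in the former case and by 100 to 1 in the latter.

```python
false_negative = 1e-6  # P(harmful plan misclassified as beneficial)

for harmful_per_good in (1e50, 1e2):
    # harmful plans that slip past the filter, per genuinely good plan
    slipped = harmful_per_good * false_negative
    print(f"{harmful_per_good:.0e} harmful per good -> "
          f"{slipped:.0e} slip through per good plan")
# -> 1e+44 harmful-but-passing per good plan in the former case,
#    and 1e-04 in the latter (i.e. 10000 good plans per harmful one).
```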

Comment by JBlack on Risk and Safety in the age of COVID · 2022-01-22T00:46:54.357Z · LW · GW

There possibly is a moral dimension, but it is much more about the risk of spreading the disease to others than about catching it. Individuals are morally responsible for limiting how likely they are to spread it, but responsible for avoiding catching it only in the usual pragmatic sense that applies to risks like riding a motorcycle.

Compare: "I went to a nightclub and caught COVID there" vs "I had a positive test but was feeling fine so I went to work anyway". The former is more likely to be viewed as risky behaviour while the latter is more likely to be viewed as immoral behaviour.

Even then, this isn't specific to zero-COVID at all in my experience. The distinction between various policies in the people I've talked to seems to be more about different models of disease, costs, and benefits over various timescales than anything moral at all.

Comment by JBlack on What's Up With Confusingly Pervasive Consequentialism? · 2022-01-21T06:31:50.199Z · LW · GW

I think the problem is not quite so binary as "good/bad". It seems to be more effective vs ineffective and beneficial vs harmful.

The problem is that effective plans are more likely to be harmful. We as a species have already done a lot of optimization along a lot of dimensions that are important to us, and the most highly effective plans almost certainly have greater side effects that make things worse along dimensions that we aren't explicitly telling the optimizer to care about.

It's not so much that there's a direct link between sparsity of effective plans and likelihood of bad outcomes, as that more complex problems (especially dealing with the real world) seem more likely to have "spurious" solutions that technically meet all the stated requirements, but aren't what we actually want. The beneficial effective plans become sparse faster than the harmful effective plans, simply because in a more complex space there are more ways to be unexpectedly harmful than good.

Comment by JBlack on What's Up With Confusingly Pervasive Consequentialism? · 2022-01-21T05:52:51.974Z · LW · GW

Yes, it definitely doesn't work with A or C. It might work with B, because judging whether a poem is Shakespeare-level or not is heavily entangled with human society and culture and it may turn out that manipulating humans to rave about whatever you wrote (whether it's actually Shakespeare-level poetry or not) might be easier. I expect not, but it's hard to be sure. I would certainly put C as safer than B.

Everything else is obviously much more dangerous.

Comment by JBlack on Risk and Safety in the age of COVID · 2022-01-21T04:34:32.924Z · LW · GW

While it is interesting drawing a distinction between "risky" and "unsafe" on moral grounds, I don't know anyone else who does the same, and most of the rest of the post felt like it entirely missed the point somewhere adjacent to that.

I do know some people who have a "zero COVID" viewpoint. Exactly zero of them hold the view you describe as "if you failed to take a precaution and ended up getting infected, then you are morally responsible for any negative effects that this caused". Likewise the distinctions you draw seem to completely miss the point in most of the other categories, where I also personally know people who hold the corresponding views.

This doesn't invalidate your post, but I thought it might be worth presenting counter-evidence, even if it is a very small sample. Perhaps it's a cultural variation between wherever you live and where I live?

Comment by JBlack on The unfalsifiable belief in (doomsday) superintelligence scenarios? · 2022-01-20T02:11:27.455Z · LW · GW

Utilities in decision theory are invariant under positive scaling and translation. It makes no sense to ask what the utility of going extinct "would be" in isolation from the utilities of every other outcome. All that matters are ratios of differences of utilities, since those are all that are relevant to finding the argmax of the probability-weighted combination of utilities.
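A minimal sketch of the invariance claim, with arbitrary made-up numbers: applying any positive affine transformation a*u + b to the utilities leaves the argmax of expected utility unchanged.

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])        # outcome probabilities
u = np.array([[10.0, 0.0, -5.0],     # utilities: one row per action
              [4.0, 3.0, 2.0],
              [-100.0, 8.0, 8.0]])

a, b = 3.7, -42.0                    # arbitrary positive scale and shift
assert np.argmax(u @ p) == np.argmax((a * u + b) @ p)
```

Since p sums to 1, the transformation rescales and shifts every action's expected utility identically, which is why only ratios of differences can matter.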

I'm not sure what you mean by "I believe that e has to be 0", since e is a set of observations, not a number. Maybe you meant P(e) = 0? But this makes no sense either since then conditional probabilities are undefined.

Comment by JBlack on MackGopherSena's Shortform · 2022-01-20T01:52:06.342Z · LW · GW

It only implies that you can have no moral imperative to change the past. It has no consequences whatsoever for morally evaluating the past.

Comment by JBlack on MackGopherSena's Shortform · 2022-01-19T02:02:43.099Z · LW · GW

The motivation remains the same regardless of whether your first 'if' is just an if, but at least it would answer part of the question.

My motivation is to elicit further communication about the potential interesting chains of reasoning behind it, since I'm more interested in those than in the original question itself. If it turns out that it's just an 'if' without further interesting reasoning behind it, then at least I'll know that.

Comment by JBlack on MackGopherSena's Shortform · 2022-01-19T01:14:19.249Z · LW · GW

Mostly that it's a very big "if". What motivates this hypothesis?

Comment by JBlack on Entropy isn't sufficient to measure password strength · 2022-01-19T00:50:47.656Z · LW · GW

Taking it a bit further, these conditional probabilities depend upon some prior for various types of attacks. The numbers you have are P(guess password | password scheme AND attacker guesses via login attempts). They are both still really bad, but the difference is slightly greater if the password can be attacked by somehow exfiltrating a password hash database and running a billion guesses against that.
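A sketch of that comparison under a crude guessing model; the entropy and guess-budget figures below are placeholders, not the numbers from the post:

```python
def p_compromise(entropy_bits: float, guesses: float) -> float:
    """P(guess password | scheme, attack model), capped at 1."""
    return min(1.0, guesses / 2 ** entropy_bits)

online = 1e3    # assumed budget for rate-limited login attempts
offline = 1e9   # assumed budget against an exfiltrated hash database

for bits in (40, 44):  # hypothetical entropies of the two schemes
    print(f"{bits} bits: online {p_compromise(bits, online):.1e}, "
          f"offline {p_compromise(bits, offline):.1e}")
```

The same pair of schemes ends up with a larger absolute gap in compromise probability under the offline attack, which is the point about the conditional depending on the assumed attack.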

Comment by JBlack on Value extrapolation partially resolves symbol grounding · 2022-01-15T06:01:19.055Z · LW · GW

There would, so long as the extra dimensions are irrelevant. If there are more relevant dimensions then the total space becomes larger much faster than the happy space. Even having lots of irrelevant dimensions can be risky because it makes the training data sparser in the space being modelled, thus making superexponentially many more alternative hypotheses viable.

Comment by JBlack on How to tradeoff utility and agency? · 2022-01-14T02:48:31.130Z · LW · GW

Superficially (which is all this simplified scenario permits), X > 10 suffices for me. In any real scenario my threshold for X will change, because other things are never equal.

Comment by JBlack on Omicron Post #14 · 2022-01-14T00:47:09.869Z · LW · GW

I'm very happy to be able to work 100% from home, where previously I was 0% able to work from home. I had previously asked to work from home and was flatly refused.

I would pay quite a large amount (such as a month's income) to avoid going back to that, especially since the pandemic has not yet had any comparable negative effects on my life.

Comment by JBlack on On the Impossibility of Interaction with Simulated Agents · 2022-01-14T00:10:04.904Z · LW · GW

I can't communicate with you, for exactly the same reason. This is a serious objection, though phrased frivolously. Your statement

Each simulated agent's experience is the sum of all possible ways that that agent's subjective experience could happen, including all possible ways it could exist in fundamental base reality, and all possible ways it could be simulated

(emphasis mine) is given no justification whatsoever. You appear to be conflating the ensemble of all possible agents with the individual agents having different experiences in that ensemble.

Nothing you have said applies only to simulated agents, and so it seems that you are proving too much.

That is, you seem to be saying that "I" can't communicate with "you" because some versions of "you" could be atmospheric life forms in a gas giant in a different universe where "I" am not even present.

Comment by JBlack on elifland's Shortform · 2022-01-11T22:45:59.627Z · LW · GW

On the same line but more commercial is the game Screeps, which has both ongoing and seasonal servers run by the developers as well as private servers (you can run your own).

Comment by JBlack on The "gestalt" operator · 2022-01-11T02:54:13.528Z · LW · GW

I presume the "{x|y}" notation is what is being defined here by example, but I have little idea what the examples are intended to convey. I have a lot of ideas about what the examples might be intended to convey, but very little information about which ones might be correct. I notice that all the examples seem to "expand to" two sentences with the common syntactic elements combined and the differences enclosed in the "{|}" symbols.

What I don't get is the function of the two sentences. Are you trying to "point at" a concept that lies between the corresponding words in the two sentences but isn't expressed by either? Are you intending to draw a distinction between the sentences and using the notation to reduce repetition? Are you pointing at common features of both, some sort of conceptual intersection? Something else entirely?

Comment by JBlack on adamzerner's Shortform · 2022-01-11T02:12:07.332Z · LW · GW

Yes, this was a point of confusion for me. The point of confusion that followed very quickly afterward was why the strong nuclear force didn't mean that everything piles up into one enormous nucleus, and from there it led to a lot of other points of confusion - some of which still haven't been resolved because nobody really knows yet.

The most interesting thing to me is that the strong nuclear force is just strong enough without being too strong. If it were somewhat weaker then we'd have nothing but hydrogen, and somewhat stronger would make diprotons, neutronium, or various forms of strange matter more stable than atomic elements.

Comment by JBlack on Lies, Damn Lies, and Fabricated Options · 2022-01-10T00:42:06.429Z · LW · GW

Yes, that was my point. If you set reasonable numbers for these things, you get something on the order of a 1% chance of missing any given flight as a good target. Hence if you've made fewer than 100 flights then having not yet missed one is extremely weak evidence for having spent too much time in airports, and is largely consistent with having spent too little.

Most people have not made 100 flights in their lives, so the advice stands a very high chance of being actively harmful to most people.

It would be more reasonable to say that if you have missed a flight then you're spending "too much" time in airports, because you're probably doing way too much flying.
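A quick check of those numbers, assuming a ~1% per-flight miss probability for someone who arrives efficiently (my assumed figure, consistent with the order of magnitude above):

```python
p_miss = 0.01  # assumed per-flight miss probability when cutting it fine

for n in (10, 50, 100, 300):
    print(f"{n} flights: P(never missed one) = {(1 - p_miss) ** n:.2f}")
# 10 flights: 0.90 ... 100 flights: 0.37 -- a clean record over a typical
# flying history is only weak evidence of spending too long in airports.
```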

Comment by JBlack on Omicron Post #13: Outlook · 2022-01-09T01:32:46.514Z · LW · GW

If Australia was on the graph it would destroy the y-axis. That’s what happens when procedures that previously were sufficient become inadequate.

No, this is actually completely incorrect. Most of the procedures that previously were sufficient were deliberately abolished when the 16+ vaccination rate went over 80%. For most states that happened just before the first reports of Omicron from South Africa.

This is what happens when a government changes from "COVID zero" to "let it rip" just as a strain comes along that infects vaccinated people almost as easily as unvaccinated. Except in the state of Western Australia, which peaked at 16 new cases a few days ago and returned to zero yesterday.

We currently have almost no restrictions everywhere else. People who have been in contact outside their home with known cases have no requirement to isolate or get tested, and there are essentially no restrictions on large gatherings in most areas. There are some ridiculous and ineffective restrictions in some areas (such as no dancing or singing in hospitality venues). Some states are considering asking health workers to come to work while known to be infected, provided they're not displaying symptoms.

Update: Household close contacts of known cases who would normally be required to quarantine (used to be 14 days, now 7) are now allowed back to work if they are in "critical" positions such as hospital workers and not showing symptoms.

Rather than looking up the relevant government websites or news, you could look at the "stringency index" graph for Australia on OurWorldInData: from being one of the highest among Western nations in November, it's now lower than most of them.

Comment by JBlack on Why maximize human life? · 2022-01-09T01:04:24.312Z · LW · GW

I only partly value maximizing human life, but I'll comment anyway.

Where the harm done seems comparatively low, it makes sense to increase the capacity for human lives. Whether that capacity actually goes into increasing population or improving the lives of those that already exist is a decision that others can make. Interfering with individual decisions about whether or not new humans should be born seems much more fraught with likelihoods of excess harms. Division of the created capacity seems more of a fiddly social and political problem than a wide-view one in the scope of this question.

The main problem is that on this planet there is a partial trade-off between the capacity for humans to live and the capacity for other species to live. I unapologetically favour sapient species there. Not to the exclusion of all else, and particularly not to the point of endangerment or extinction, but I definitely value a population of a million kangaroos and two million humans (or friendly AGIs or aliens) more than ten million kangaroos and one million humans. There is no exact ratio here, and I could hypothetically support some people who are better than human (and not inimical to humans) having greater population capacity, though I would hope that humans would be able to improve over time just as I hope that they do in the real world.

In the long term, I do think we should spread out from our planet, and be "grabby" in that weak sense. The cost in terms of harm to other beings seems very much lower than on our planet, since as far as we can tell, the universe is otherwise very empty of life.

If we ever encounter other sapient species, I would hope that we could coexist instead of anything that results in the extinction or subjugation of either. If that's not to be then it may help to already have the resources of a few galactic superclusters for the inevitable conflict, but I don't see that as a primary reason to value spreading out.

Comment by JBlack on Newcomb's Problem as an Iterated Prisoner's Dilemma · 2022-01-07T06:08:26.599Z · LW · GW

The essence of Prisoner's Dilemma is that it is symmetric, and both players individually have incentive to defect if the other cooperates. How does Omega gain from defecting if you cooperate? Or indeed, how does Omega gain or lose at all?

Comment by JBlack on MackGopherSena's Shortform · 2022-01-07T05:49:54.680Z · LW · GW

I suspect that I don't understand your last sentence at all.

Human behavior becomes a lot less confusing when you categorize each intentional action according to the two aforementioned categories.

Do you mean in this hypothetical universe? I imagine that it would diverge from our own very quickly if feats like your examples were possible for even a small fraction of people. I don't think intentional actions in such a universe would split nicely into secular and mana actions; they would probably combine and synergize.

Comment by JBlack on An Observation of Vavilov Day · 2022-01-04T05:41:07.231Z · LW · GW

I aspire to be a person who does good things, and who is capable of doing hard things in service of that. This is a plan to test that capacity.

I haven’t been in a battle, but if you gave me the choice between dying in battle and slowly starving to death, I would immediately choose battle. Battles are scary but they are short and then they are over.

A large proportion of "battle deaths" historically have been due to infection following serious injuries, and many of those who died lingered for weeks. I'm not sure whether this would change your decision, but in case you ever actually had to make this choice it might be worth considering.

Personally, also aspiring to be a person who does good things, I tend to think of "dying in battle" as involving taking actions to kill and/or maim other people and therefore something to avoid as being worse than plague.

That aside, thank you for posting about the actions of these people, and Vavilov in particular.

While I generally am not a fan of people making public displays of personal sacrifice, I can understand some of the reasons why you might be doing so, and hope that you achieve your goals.

Comment by JBlack on Each reference class has its own end · 2022-01-03T07:18:53.213Z · LW · GW

In that case, we come to the idea of some kind of “reference class of qualified observers”, which consists of the minds who do think about anthropics or at least can do it.

Or it specifically consists of the minds who think about anthropics in the same confused way that we do.

If most intelligent species continue for a billion years but their anthropic questions are resolved early using something other than SSA, the conditional probability of using SSA to get an incorrect short doomsday timeline is high, because those species that use SSA at all discard it early in their development.

You can take this as anthropic evidence that using SSA as a model is doomed soon.

Comment by JBlack on We need a theory of anthropic measure binding · 2022-01-03T05:13:16.373Z · LW · GW

That's fair. One problem is that without constraining the problem to a ridiculous degree, we can't even agree on how many of these decisions are being made and by whom.

Comment by JBlack on COVID Skepticism Isn't About Science · 2022-01-02T01:15:30.869Z · LW · GW

The number of connection paths to the person who died, and therefore the average number of connections via which you find out about their death, is basically just proportional to the square of the number of direct associates, regardless of overlaps. In a "small world" you might find out about the same person's death more than once via these connections, but the number of people you directly associate with who have someone close to them die is the same as in a "large world", so I don't think the degree of overlap matters much.

I do agree that the impact of deaths decreases with the indirectness of the connections. I was only commenting on the numbers in the example.

Comment by JBlack on We need a theory of anthropic measure binding · 2021-12-30T11:10:41.232Z · LW · GW

Is this really an anthropic problem? It looks more like a problem of decision making under uncertainty.

The fundamental problem is that you have two hypotheses, and don't know which is true. There are no objective probabilities for the hypotheses, only subjective credences. There may even be observers in your epistemic situation for which neither hypothesis is true.

As an even more fun outcome, consider the situation where the mind states of the two types of entity are so exactly alike that they are guaranteed to think exactly the same thoughts and make the same decision, and only diverge afterward. Are there one, two, or four people making that decision? Does the distinction even make a difference, if the outcomes are exactly the same in each case?

Comment by JBlack on COVID Skepticism Isn't About Science · 2021-12-29T23:46:51.899Z · LW · GW

I think this grossly underestimates the second tier connectivity. In the hypothetical society, this person has 9900 indirect associates (such as the great uncle of a friend). If a random 0.2% of the population were to die from a disease, then there would not be just one indirect associate that dies, there would be around 20 of them.

In a small isolated community many of these would overlap, maybe even just in one person in extreme cases, but then it still wouldn't be just one "friends' great uncle" who died, it would also be "my boss's friend" and "my co-worker's grandfather" and "my cousin's neighbour" and so on for a dozen more second-layer relationships.
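The arithmetic behind the ~20 figure, using the hypothetical society's numbers (about 100 direct associates each, so 100 × 99 second-degree relationships):

```python
direct = 100                  # assumed direct associates per person
indirect = direct * 99        # 9900 second-degree associates
death_rate = 0.002            # 0.2% of the population dies
print(indirect * death_rate)  # ~19.8, i.e. around 20 indirect losses
```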

But no, the real tragedy is that in modern society older people - those most likely to die or suffer severe effects of COVID - generally have much weaker associations with the population who post most loudly on the Internet. Many people over 70 are not friends with anyone under 50, are not co-workers with anyone, not playing any sport played by younger people, and often not even in regular contact with more than a few members of their younger family. There are still just as many people dying, but from the point of view of some of the public discourse, very few that you would care about.

Comment by JBlack on A good rational fiction about IT-inspired magic system? · 2021-12-28T01:54:39.176Z · LW · GW

Avaunt has a magic system that sounds similar to this. It's not directly the central focus of the story, but it does contribute to the flavour of the story and some of the plot.

Comment by JBlack on A good rational fiction about IT-inspired magic system? · 2021-12-28T01:41:14.170Z · LW · GW

I suspect that in such universes that are not destroyed very quickly, an early user creates fail-safe spell constructs that limit such destruction by future users (including themselves under most conditions). This does leave open the possibility that some primordial magic user with root access still exists somewhere, and is very careful to use such power only when absolutely necessary, and only with the minimum weakening of ordinary constraints.

Comment by JBlack on Gerald Monroe's Shortform · 2021-12-28T01:31:22.458Z · LW · GW

This is definitively not AGI.

And it lacks the cognitive ability to consider most of these things because this doesn't improve reward during the training phase.

If it lacks the cognitive ability to consider things that humans can consider, then it's not AGI.

Comment by JBlack on Quinn's Shortform · 2021-12-27T08:18:45.318Z · LW · GW

The main problem is that prior investment into the oil method of powering stuff doesn't translate into having a comparative advantage in a renewable way of powering stuff. They want a return on their existing massive investments.

While this looks superficially like a sunk cost fallacy, it isn't. If a comparatively small investment (mere billions) can ensure continued returns on their trillions of sunk capital for another decade, it's worth it to them.

Investment into renewable ways of powering stuff would require substantially different skill sets in employees, in very different locations, and highly non-overlapping investment. At best, such an endeavour would constitute a wholly owned subsidiary that grows while the rest of the company withers. At worst, a parasite that hastens the demise of the parent while eventually failing in the face of competition anyway.

Comment by JBlack on What is a probabilistic physical theory? · 2021-12-27T07:47:12.425Z · LW · GW

The use of the word "Bayesian" here means that you treat credences according to the same mathematical rules as probabilities, including the use of Bayes' rule. That's all.

Comment by JBlack on What is a probabilistic physical theory? · 2021-12-26T02:07:53.876Z · LW · GW

I'm not sure what the problem is, nor why you connect Bayesian approaches with "how some agent with a given expected utility should act". There is a connection between those concepts, but they're certainly not the same thing.

The Bayesian approach is simply that you can update prior credences of hypotheses using evidence to get posterior credences. If the posterior credence is literally zero then that hypothesis is eliminated in the sense that every remaining hypothesis with nonzero credence now outweighs it. There will always be hypotheses that have nonzero credence.
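A minimal sketch of that update with made-up numbers; it also shows concretely why P(e) = 0 breaks conditioning, since normalising divides by P(e):

```python
priors = {"H1": 0.5, "H2": 0.5}
likelihoods = {"H1": 0.8, "H2": 0.2}  # P(e | H), assumed values

unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
p_e = sum(unnormalised.values())      # P(e); conditioning fails if this is 0
posteriors = {h: v / p_e for h, v in unnormalised.items()}
print(posteriors)                     # {'H1': 0.8, 'H2': 0.2}
```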

Comment by JBlack on Gerald Monroe's Shortform · 2021-12-26T01:49:34.979Z · LW · GW

Yes, you can avoid AGI misalignment if you choose to not employ AGI. What do you do about all the other people who will deploy AGI as soon as it is possible?

Comment by JBlack on Tough Choices and Disappointment · 2021-12-25T00:34:15.360Z · LW · GW

I've certainly had many tough choices that were not preceded by disappointment, so I can't relate to the premise at all.

As a comment on English usage, I'm not sure about the intended meaning of "I think this is so obvious that it's most certainly true in most situations". Taken literally, qualifiers such as "I think" are redundant: you wrote it, so you thought it. In practice they are used to reduce the expressed confidence of a statement. But you then go on to say "it's obvious" and "most certainly true", which are statements of extremely high confidence, before weakening it again with the qualifier "most situations", which contradicts the opening sentence's "[...] always preceded by a disappointment."

I am left with contradictory information about how strongly you believe this hypothesis, which rather defeats the whole point of an epistemic status.

Comment by JBlack on Worldbuilding exercise: The Highwayverse. · 2021-12-24T10:22:33.751Z · LW · GW

Thanks, that explains why I had no idea what you meant by deterministic. It's not a meaning for the term that I would have guessed. I obviously wasn't assuming that the universe is deterministic in that sense.

It does open up more questions, and so is interesting. Let us use the function notation D2(x) to refer to the deterministically single allowed timeline of semiverse 2 given that semiverse 1 has timeline x, and similarly for D1. Such a universe is only possible if D1 and D2 satisfy certain conditions, in particular that there exists at least one pair (x,y) such that D2(x) = y and D1(y) = x.

We can eliminate the case where D1 or D2 are constant, since those correspond to causally isolated or one-way semiverses and are therefore boring.

For almost all other function pairs, almost all timelines x in one semiverse have no corresponding timeline y, since in general D1(D2(x)) != x. This places drastic limitations either on which single-semiverse timelines are allowed, in ways that are utterly foreign to conventional causality or even continuity, or on what sorts of functions D1 and D2 are allowed. Almost all ordinary deterministic laws of physics will fail this condition.
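A toy version of this claim: give each semiverse only n possible timelines and draw D1 and D2 uniformly at random. A consistent universe needs some x with D1(D2(x)) = x, and for random function pairs such x are rare (about one expected among all n timelines).

```python
import random

n = 100_000  # assumed number of possible timelines per semiverse
for trial in range(10):
    D2 = [random.randrange(n) for _ in range(n)]
    D1 = [random.randrange(n) for _ in range(n)]
    fixed = sum(1 for x in range(n) if D1[D2[x]] == x)
    print(fixed)  # typically 0, 1, or 2 out of 100,000 timelines
```

Real laws of physics aren't uniformly random functions, of course; the sketch only illustrates how special the mutual-consistency condition is.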

So for this notion of determinism to be sustained, we have to consider universes in which even within a single semiverse with no flipping, the laws of physics are utterly different from our own.

Comment by JBlack on Worst-case thinking in AI alignment · 2021-12-24T00:36:53.441Z · LW · GW

For example, I don’t think that “a terrorist infiltrates the team of labellers who are being used to train the AGI and poisons the data” is a very likely AI doom scenario. But I think there are probably 100 scenarios as plausible as that one, each of which sounds kind of bad.

There are even much more likely scenarios which have the same basic mechanism and effect, such as "a disgruntled employee poisons the data", "nation state operation", "criminal group", "software bug", "one intern making an error", or even "internet trolls for the lulz". All of these have actually happened to corrupt data for important software projects in subtle and destructive ways.

Comment by JBlack on Worldbuilding exercise: The Highwayverse. · 2021-12-24T00:01:14.796Z · LW · GW

I suspect that I don't know what you mean by "deterministic" here, since the meaning I have in mind can't possibly apply to such a universe. That is, that future states are completely determined by prior states. That can't possibly apply here since the universe has no global distinction between future and past. Even splitting our view into timelines within each semiverse doesn't help, since determinism is violated by the sudden appearance of sentient beings and other materials that are not in any way determined by that semiverse's prior states. So you must be using some other meaning for "deterministic".

Perhaps you just mean that the universe timeline is single-valued? That is, only one set of events actually happens at each point in spacetime in each semiverse. That is also the model I'm using, but from a different point of view. Rather than taking the existence of a contrary person as a fixed event that must be worked around, I am taking a wider view of what fraction (in some sense) of possible timelines that are otherwise very similar contain that contrary person vs those that don't. Since the actual timeline has to be one of the possible timelines, this seems to be a useful consideration.

My conclusion is that contrary people drastically lower the measure of mostly-similar timelines that contain them, and so over trillions of sentient beings it seems likely that the proportion of contrary people is much more likely to be very, very small than that they are relatively frequent and cause lots of bizarre events.

Comment by JBlack on Worldbuilding exercise: The Highwayverse. · 2021-12-23T00:38:38.218Z · LW · GW

I think the first question is the most important.

To me the simplest solution is that such people simply don't exist. There are possible timelines in which they don't exist, and timelines that have them seem likely to be unstable, so the actual timeline will be one of the stable ones in which they don't exist.

There are other possible solutions of course, but "increasingly bizarre events occur in which their desires are thwarted" seems far more convoluted than "one of the other millions of sperm met the egg instead and that person never existed" or even a more general "this species' brains develop in such a way that they don't have such ideas".

Of course, in general this leads to the much more stable state in which although (as per premise) every sentient being potentially has the capability to flip universes, in practice none of them know that it's possible and even if they do then they don't know how to actually do it.

Comment by JBlack on Worldbuilding exercise: The Highwayverse. · 2021-12-23T00:27:42.165Z · LW · GW

Interesting premise.

This universe definitely violates the second law of thermodynamics, and even our concept of probability seems like it would be something that isn't useful there. Everything is in a steady state of some acausal equilibrium that is strongly entangled with the entire universe's population of sentient minds.

Apparently if any sentient being wants the timeline changed, then that's not a stable equilibrium because essentially any of them can act on it. So it seems that the only equilibria are those in which essentially nobody ever wants it changed, at least not enough to take two flips to change it.

Initially this seems great, in that everyone gets what they want forever. However, that's not really the only type of equilibrium. One in which no sentient beings exist at all is another. A third is one in which plenty of people want things to be different but they can't flip, whether because they were born defective or through external constraints. A fourth is one in which they hate their life (or some events in it) but don't believe that they can do anything to change it. A fifth is one in which they are merely observers to things their bodies do and experience, outside of their control.

There are probably even more bizarre possibilities. We don't have any rule that tells us how the actual situation is determined from all the possible equilibria, and the rules that we have developed for investigating likelihood in our apparently one-way causal universe may have no applicability to one governed by acausal equilibrium.

Comment by JBlack on Understand the exponential function: R0 of the COVID · 2021-12-22T02:47:34.954Z · LW · GW

The difference cancels out.

That's a strong claim. Do you have any evidence for it?

Comment by JBlack on Manipulation resistance of futarchy · 2021-12-22T02:43:11.485Z · LW · GW

In the simple conditional case with N possible outcomes, you are (in the basic case) paying $1 to create 2N stocks: W|D_i and (1-W)|D_i for each of the N decisions D_i, where W is the agreed welfare metric ranging from 0 to 1. When decision n is implemented and the outcome measured, the W|D_n and (1-W)|D_n stocks pay out appropriately.

So yes, if you never sold your |D_n stocks then you get $(W + 1-W) = $1 back. However, you don't have an unlimited number of dollars and can't create an unlimited number of stocks.
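A minimal sketch of these bundle mechanics in my own toy framing (not any standard futarchy implementation): paying $1 mints one share of each of the 2N conditional stocks, and settlement pays out only the implemented decision's pair, so an unsold bundle returns exactly $1.

```python
from collections import defaultdict

class ConditionalMarket:
    def __init__(self, n_decisions: int):
        self.n = n_decisions
        # holdings[(owner, decision, side)] -> shares; side is "W" or "1-W"
        self.holdings = defaultdict(float)

    def create_bundle(self, owner: str):
        """Owner pays $1 and receives all 2N conditional stocks."""
        for d in range(self.n):
            self.holdings[(owner, d, "W")] += 1
            self.holdings[(owner, d, "1-W")] += 1

    def settle(self, implemented: int, welfare: float) -> dict:
        """Pay W|D_n at $welfare, (1-W)|D_n at $(1-welfare), others $0."""
        payouts = defaultdict(float)
        for (owner, d, side), shares in self.holdings.items():
            if d == implemented:
                price = welfare if side == "W" else 1 - welfare
                payouts[owner] += shares * price
        return dict(payouts)

market = ConditionalMarket(n_decisions=3)
market.create_bundle("alice")                     # alice pays $1
print(market.settle(implemented=1, welfare=0.7))  # {'alice': 1.0}
```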

Comment by JBlack on Manipulation resistance of futarchy · 2021-12-21T07:03:42.854Z · LW · GW

In most models of prediction markets that I've seen so far, stocks aren't finite. Any investor can pay to create an outcome-neutral bundle.

If the benefiting speculator is willing to pay more than E(D) for a D stock, then other investors can create more and sell D to that buyer for a price greater than E(D) while holding or selling off the rest for net expected profit. In most cases this will result in a price somewhere between E(D) and E(D*).
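A worked version of that arbitrage with made-up prices: a speculator bids p = 0.75 for the W|D stock when the market's estimate is E(W|D) = 0.60.

```python
p = 0.75            # speculator's bid for the W|D stock (assumed)
e_w_given_d = 0.60  # market's expected welfare if D is implemented (assumed)

# A bundle creator pays $1, sells the W|D leg at p, and if D is implemented
# still holds the (1-W)|D leg, worth 1 - E(W|D) in expectation.
print(p + (1 - e_w_given_d) - 1.0)  # 0.15 expected profit per bundle
```

(If some other decision ends up implemented instead, the creator keeps a full pair for it, worth $1, and does even better.) That supply keeps coming until the price is pushed back toward E(W|D).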

If E(D*) and E(D) are very close, or the D buyer financially dominates the whole market, then this could still result in market manipulation such that price(D) > E(D*). In the former case, there's an argument that the correct decision really is D rather than D*: the expected loss to the public is tiny while the benefit to the single person (or perhaps minority coalition) is great enough to outweigh the combined difference to the rest of the market.

The second case is more problematic, but really if a single entity already dominates the markets to that extent, there are other problems.

Comment by JBlack on Understand the exponential function: R0 of the COVID · 2021-12-20T02:18:24.923Z · LW · GW

I was presuming that we (and many other readers) are already familiar with such simplistic models.

I don't know why you are asking me to do calculations using them when my post explicitly notes some of the errors in the assumptions of such models, and how the actual spread of infectious diseases does not follow such models as scale increases.

Comment by JBlack on Understand the exponential function: R0 of the COVID · 2021-12-18T01:40:41.214Z · LW · GW

There is quite a lot of evidence that vaccination, on average, reduces:

  1. the chance of contracting disease at all compared with those who are not vaccinated (~40-70% for Delta, reduced to maybe ~10-30% for Omicron);
  2. the duration of detectable infection and presumably infectiousness (~20-30%, unknown for Omicron);
  3. the quantity of virus present in respiratory tract, which may affect infectiousness (numbers vary wildly between studies); and
  4. severity of illness in those who contract the disease (as you note).

The problem is not that (1), (2), and (3) don't exist; the problem is that they weren't sufficient to prevent widespread transmission, even with large fractions of the population vaccinated and fairly substantial non-medical interventions such as masks and distancing.

One other thing to consider is that in the broader picture virus transmission isn't exponential or even logistic. Reproduction number R isn't quite a lie, but it's a drastic simplification that's only useful in the early stages of an outbreak.

Associations that lead to transmission are non-uniform and non-random at every scale. Consider R_0 = 10. If one person can spread the virus to 10 other people, who can each spread it to 10 other people, it is very likely that those latter groups substantially overlap, so that the second-generation number of infections isn't 10^2 = 100 but may be only 40. You can see such slowing in every graph of every outbreak in every region, varying in size from towns to continents, with the magnitude of the slowdown increasing with scale.
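A toy simulation of that overlap effect, with assumed numbers: R_0 = 10, but everyone's contacts are drawn from the same local cluster of 50 people.

```python
import random

random.seed(0)
cluster, r0, trials = 50, 10, 1000  # assumed cluster size and R_0
total = 0
for _ in range(trials):
    people = range(cluster)
    gen1 = random.sample([p for p in people if p != 0], r0)  # index case is 0
    infected = {0, *gen1}
    gen2 = set()
    for case in gen1:
        contacts = random.sample([p for p in people if p != case], r0)
        gen2.update(c for c in contacts if c not in infected)
    total += len(gen2)
print(total / trials)  # roughly 35 distinct second-generation cases, not 100
```

The assumed cluster size does all the work here: larger, less overlapping contact pools push the second-generation count back toward the naive 100.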

The behaviour of any one outbreak is not the end game, though. COVID will not be contained within the next decade. Everyone should assume that they will sooner or later be exposed to multiple variants in the coming years. Lockdowns, masks, distancing, and current vaccines buy most of us time: time that can be used to improve treatments and make newer vaccines that protect better.