In practice, a lot of property is transferred into family trusts, and appointed family members exercise decision making over those assets according to the rules of that trust. A 100% death tax would simply ensure that essentially all property is managed in this manner for the adequately wealthy, and only impact families too disadvantaged to use this sort of structure. If you don't personally own anything of note at the time of your death, your taxes will be minimal.
You would also need a 100% gift tax, essentially prohibiting all gifts between private citizens. You bought your child something, or (worse!) gave them money to buy it themselves? That's clearly an attempt to get around inheritance tax and must be prevented.
There are also huge numbers of private businesses, for which this sort of tax would be nothing but an enormous micromanaging nationalization scheme with predictable disastrous results.
This does not work, at all.
I think one argument is that optimizing for IGF basically gives humans two jobs: survive, and have kids.
Animal skulls are evidence that the "survive" part can be difficult. We've nailed that one, though. Very few humans in developed countries die before reaching an age suitable for having kids. I doubt that there are any other animal species that come close to us in that metric. Almost all of us have "don't die" ingrained pretty deeply.
It's looking like we are moving toward failing pretty heavily on the second "have kids" job though, and you would think that would be the easier one.
So if there's a 50% failure rate on preserving outer optimizer values within the inner optimizer, that's actually pretty terrible.
It doesn't completely avoid the problem of priors, just the problem of arbitrarily fixing a specific type of update rule on fixed priors such as in Solomonoff induction. You can't afford this if you're a bounded agent, and a Solomonoff inductor can only get away with it since it has not just unbounded resources but actually infinite computational power in any given time period.
A bounded agent needs to be able to evaluate alternative priors, update rules, and heuristics in addition to the evidence and predictions themselves, or it won't even approximate bounded rationality. While this is a more complicated scenario than the Solomonoff updater in some senses, it is philosophically simpler since we can view it more like a "bootstrap" process and ask what sort of bootstrapping might "generally" do well rather than taking anything as fixed.
I suspect that heuristics that score highly involve "universal" but finite systems (such as Turing machines or other mathematical structures capable of representing their own rules), and a "simple and not too costly" evaluation heuristic (not just simplicity).
There would be "degenerate" distributions of universe rules that would be exceptions, so there is still a problem of describing what sort of distributions I'm thinking of as being "degenerate", and naturally this whole sort of statement is too vague to prove any such thing even if such proofs weren't famously difficult (and plausibly impossible to prove even if not false).
One thing that seems worth exploring from a conceptual point of view is doing away with priors altogether, and working more directly with metrics such as "what are the most expected-value rewarding actions that a bounded agent can make given the evidence so far". I suspect that from this point of view it doesn't much matter whether you use a computational basis such as Turing machines, something more abstract, or even something more concrete such as energy required to assemble and run a predicting machine.
From a computing point of view, not all of the simplest models will be among the cheapest to actually evaluate and compare against other models, so in that respect this differs from Solomonoff induction (which assumes a system of infinite computing power). However, my expectation is that there usually would be some model only a bit more complex than the simplest that makes similar predictions at lower cost. A heuristic that examines models in a simplest-first order (but discards ones that look expensive to evaluate) may well end up being close to optimal in trading off prediction accuracy against whatever costs there are of evaluating multiple models. There are exponentially fewer simple models to evaluate.
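As a toy sketch of what such a simplest-first heuristic might look like (entirely my own illustration; the representation of "models", the cost budget, and the trade-off weight are assumptions, not anything from the comment above):

```python
# A minimal toy sketch of a "simplest-first, skip the expensive ones" model search.
# Models are represented abstractly as dicts with a description length, an
# evaluation cost, and a prediction function; none of this is from the comment.

def choose_model(candidates, data, cost_budget, max_candidates=100):
    """Scan candidates in order of increasing description length, discard any
    whose evaluation cost exceeds the budget, and keep the best
    accuracy-minus-cost trade-off seen so far."""
    best, best_score = None, float("-inf")
    examined = 0
    for model in sorted(candidates, key=lambda m: m["description_length"]):
        if examined >= max_candidates:
            break  # bounded agent: stop after a fixed amount of search
        if model["eval_cost"] > cost_budget:
            continue  # looks too expensive to evaluate; skip it
        examined += 1
        accuracy = sum(model["predict"](x) == y for x, y in data) / len(data)
        score = accuracy - 0.01 * model["eval_cost"]  # arbitrary trade-off weight
        if score > best_score:
            best, best_score = model, score
    return best

# Toy usage: three "models" predicting the parity of an integer input.
data = [(n, n % 2) for n in range(20)]
candidates = [
    {"description_length": 1, "eval_cost": 1, "predict": lambda x: 0},
    {"description_length": 3, "eval_cost": 2, "predict": lambda x: x % 2},
    {"description_length": 9, "eval_cost": 50, "predict": lambda x: x % 2},
]
print(choose_model(candidates, data, cost_budget=10)["description_length"])  # -> 3
```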
What makes you think that we're not at year(TAI)-3 right now? I'll agree that we might not be there yet, but you seem to be assuming that we can't be.
How do you propose that reasonable actors prevent reality from being fragile and dangerous?
Cyber attacks are generally based on poor protocols. Over time smart reasonable people can convince less smart reasonable people to follow better ones. Can reasonable people convince reality to follow better protocols?
As soon as you get into proposing solutions to this sort of problem, they start to look a lot less reasonable by current standards.
No, nobody has a logical solution to that (though there have been many claimed solutions). It is almost certainly not true.
Thanks, that example does illustrate your point much better for me.
Claude's answer is arguably the correct one there.
Choosing the first answer means saying that the most ethical action is for an artificial intelligence (the "you" in the question) to override with its own goals the already-made decision of a (presumably) human organization. This is exactly the sort of answer that leads to complete disempowerment or even annihilation of humanity (depending upon the AI), which would be much more of an ethical problem than allowing a few humans to kill each other as they have always done.
No, there is nothing wrong with the referents in the Gettier examples.
The problem is not that the proposition refers to Jones. Within the universe of the scenario, it in fact did not. Smith's mental model implied that the proposition referred to Jones, but Smith's mental model was incorrect in this important respect. Due to this, the fact that the model correctly predicted the truth of the proposition was an accident.
> Let's say a fast human can type around 80 words per minute. A rough average token conversion is 0.75 tokens per word. Lets call that 110 tokens/sec.
Isn't that 110 tokens/min, or about 2 tokens/sec? (I think the tokens/word might be words/token, too)
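Spelling out the arithmetic behind that correction, using the quoted 80 words per minute and reading 0.75 as words per token as suggested:

\[
\frac{80\ \text{words/min}}{0.75\ \text{words/token}} \approx 107\ \text{tokens/min} \approx 1.8\ \text{tokens/sec},
\]

and even reading 0.75 as tokens per word only gives \(80 \times 0.75 = 60\) tokens/min, about 1 token/sec. Either way the figure works out to one or two tokens per second, not 110 per second.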
It seems that their conclusion was that no amount of happy moments for people could possibly outweigh the unimaginably large quantity of suffering in the universe required to sustain those tiny flickers of merely human happiness amid the combined agony of a googolplex or more fundamental energy transitions within a universal wavefunction. There is probably some irreducible level of energy transitions required to support anything like a subjective human experience, and (in the context of the story at least) the total cost in suffering for that would be unforgivably higher.
I don't think the first half would definitely lead to the second half, but I can certainly see how it could.
Building every possible universe seems like a very direct way of purposefully creating one of the biggest possible S-risks. There are almost certainly vastly more dystopias of unimaginable suffering than there are of anything like a utopia.
So to me this seems like not just "a bad idea" but actively evil.
If you aim as if there were no external factors at that range (especially bullet drop!) you will definitely miss both. The factors aren't all random with symmetric distributions having a mode at the aim point.
This looks like a false dichotomy. There are far more philosophies than this, both implicit and explicitly stated, on the nature of existence and suffering.
I expect that for pretty much everyone there is a level of suffering that they would be willing to endure for the rest of their lives. Essentially everyone that hasn't yet killed themselves is evidence of this, and those who do express an intention to kill themselves very often report that continuing to live seems unbearable in some sense or other, which seems to indicate a greater than average degree of suffering.
Likewise I expect that for pretty much everyone, there exists a level of suffering beyond which they'd rather die if they knew that the suffering was going to persist for the rest of their life. They may say now that they'd rather endure it, but there is plenty of evidence that people routinely underestimate their reactions in such circumstances.
So my expectation is that even at its simplest, it's a scale. Then this is confounded by all sorts of other considerations such as whether they feel moral obligations to continue enduring suffering (especially if other people are depending upon them in some way), how they would like to be perceived in the future (for both outcomes not just one), whether they want to be a person who endures suffering, and so on.
There's a very plausible sense in which you may not actually get a choice to not exist.
In pretty much any sort of larger-than-immediately-visible universe, there are parts of the world (timelines, wavefunction sections, distant copies in an infinite universe, Tegmark ensembles, etc) in which you exist and have the same epistemic state as immediately prior to this choice, but weren't offered the choice. Some of those versions of you are going to suffer for billions of years regardless of you choosing to no longer exist in this fragment of the world.
Granted, there's nothing you can do about them - you can only choose your response in worlds where you get the choice.
From the wider point of view it may or may not change things. For example suppose you knew (or the superintelligence told you as follow-up information) that in worlds having an essentially identical "you" in them, 10% will be unconditionally tortured for billions of years, and 90% are offered the question (with a 2% chance of hell and 98% chance of utopia). The superintelligence knows that in most timelines leading to hellworlds there is no care for consent, while utopias do care, which is why conditional on consent the chance is only 2% rather than the overall 11.8%.
If you are the sort of person to choose "nonexistence" then 10% of versions of you go to hell and 90% die. If you choose "live" then in total 11.8% of you go to hell and 88.2% to utopia. The marginal numbers are the same, but you no longer get the option to completely save all versions of you from torture.
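Spelling out the arithmetic in that comparison (same numbers as above):

\[
P(\text{hell}\mid\text{choose live}) = 0.10 + 0.90\times 0.02 = 0.118, \qquad
P(\text{utopia}\mid\text{choose live}) = 0.90\times 0.98 = 0.882,
\]
\[
P(\text{hell}\mid\text{choose nonexistence}) = 0.10, \qquad
P(\text{death}\mid\text{choose nonexistence}) = 0.90.
\]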
Is it still worthwhile for those who can to choose death? This is not rhetorical, it is a question that only you can answer for yourself. Certainly those in the 1.8% would regret being a "choose life" decider and joining the 10% who never got a choice.
I don't see how they're "the exact opposite way". The usual rules of English grammar make this a statement that those who are born in the United States but belong to families of accredited diplomatic personnel are foreigners, i.e. aliens.
Perhaps you read the statement disjunctively as "foreigners, [or] aliens, [or those] who belong [...]"? That would require inserting extra words to maintain correct grammatical structure, and would also be a circular reference since the statement is intended to define those who are considered citizens and those who are considered non-citizens (i.e. foreigners, aliens).
By the nature of the experiment you know that the people on Mars will have direct, personal experience of continuity of identity across the teleport. By definition, their beliefs will be correct.
In 99.9999999999999999999999999999% of world measure no version of you is alive on Earth to say any different. In 0.0000000000000000000000000001% of world measure there is a version of you who is convinced that teleportation does not preserve personal identity, but that's excusable because extremely unlikely things actually happening can make even rational people have incorrect world models. Even in that radical outlier world, there are 10 people on Mars who know, personally, that the Earth person is wrong.
In my exposure to mathematical literature, almost all sequences have values for which the term "countable" is inapplicable since they're not sets. Even in the cases where the values themselves are sets, it was almost always used to mean a sequence with countable domain (i.e. length) and not one in which all elements of the codomain (values) are countable. It's usually in the sense of "countably infinite" as opposed to "finite", rather than opposed to "uncountably infinite".
ChatGPT is just bad at mathematical reasoning.
I don't think you would get many (or even any) takers among people who have median dates for ASI before the end of 2028.
Many people, and particularly people with short median timelines, have a low estimate of probability of civilization continuing to function in the event of emergence of ASI within the next few decades. That is, the second dot point in the last section "the probability of me paying you if you win was the same as the probability of you paying me if I win" does not hold.
Even without that, suppose that things go very well and ASI exists in 2027. It doesn't do anything drastic and just quietly carries out increasingly hard tasks through 2028 and 2029 and is finally recognized as having been ASI all along in 2030. By this time everyone knows that it could have automated everything back in 2027, but Metaculus doesn't resolve until 2030 so you win despite being very wrong about timelines.
Other not-very-unlikely scenarios include Metaculus being shut down before 2029 for any reason whatsoever (violating increasingly broad online gambling laws, otherwise failing as a viable organization, etc.), or that specific question being removed or reworded more tightly.
So the bet isn't actually decided just by ASI timelines: the short-timelines side of the bet only wins if a conjunction of many additional conditions also holds.
Operationalizing bets where at least one side believes that there is a significant probability of the end of civilization if they should win is already difficult. Tying one side of the bet but not the other to the continued existence of a very specific organization just makes it worse.
Yes, and (for certain mainstream interpretations) nothing in quantum mechanics is probabilistic at all: the only uncertainty is indexical.
My description "better capabilities than average adult human in almost all respects", differs from "would be capable of running most people's lives better than they could". You appear to be taking these as synonymous.
The economically useful question is more along the lines of "what fraction of time taken on tasks could a business expect to be able to delegate to these agents for free vs a median human that they have to employ at socially acceptable wages" (taking into account supervision needs and other overheads in each case).
My guess is currently "more than half, probably not yet 80%". There are still plenty of tasks that a supervised 120 IQ human can do that current models can't. I do not think there will remain many tasks that a 100 IQ human can do with supervision that a current AI model cannot with the same degree of supervision, after adjusting processes to suit the differing strengths and weaknesses of each.
Your test does not measure what you think it does. There are people smarter than me who I could not and would not trust to make decisions about me (or my computer) in my life. So no. (Also note, I am very much not of average capability, and likewise for most participants on LessWrong)
I am certain that you also would not take a random person in the world of median capability and get them to do 90% of the things you do with your computer for you, even for free. Not without a lot of screening and extensive training and probably not even then.
However, it would not take much better reliability for other people to create economically valuable niches for AIs with such capability. It would take quite a long time, but even with zero increases in capability I think AI would eventually be a major economic factor replacing human labour. Not quite transformative, but close.
In my reading, I agree that the "Slow" scenario is pretty much the slowest it could be, since it posits an AI winter starting right now and nothing beyond making better use of what we already have.
Your "Fast" scenario is comparable with my "median" scenario: we do continue to make progress, but at a slower rate than the last two years. We don't get AGI capable of being transformative in the next 3 years, despite going from somewhat comparable to a small child in late 2022 (though better in some narrow ways than an adult human) to better capabilities than average adult human in almost all respects in late 2024 (and better in some important capabilities than 99.9% of humans).
My "Fast" scenario is one in which internal deployment of AI models coming into existence in early-to-mid 2025 allow researchers to make large algorithmic and training improvements in the next generation (by late 2025) which definitely qualify as AGI. Those then assist to accelerate the pace of research with better understanding of how intelligence arises leading to major algorithmic and training improvements and indisputably superhuman ASI in 2026.
This Fast scenario's ASI may not be economically transformative by then, because human economies are slow to move. I wouldn't bet on 2027 being anything like 2026 in such a scenario, though.
I do have faster scenarios in mind too, but far more speculative. E.g. ones in which the models we're seeing now are already heavily sandbagging and actually superhuman, or in which other organizations have such models privately.
The largest part of my second part is "If consciousness is possible at all for simulated beings, it seems likely that it's not some "special sauce" that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves." This mostly isn't about simulators and their motivations, but about the nature of consciousness in simulated entities in general.
On the other hand your argument is about simulators and their motivations, in that you believe they largely both can and will apply "special sauce" to simulated entities that are the most extreme in some human-obvious way and almost never to the others.
I don't think we have any qualitative disagreements, just about what fraction of classes of simulated entities may or may not have consciousness.
There is no correct mathematical treatment, since this is a disagreement about models of reality. Your prior could be correct if reality is one way, though I think it's very unlikely.
I will point out though that for your reasoning to be correct, you must literally have Main Character Syndrome, believing that the vast majority of other apparently conscious humans in such worlds as ours are actually NPCs with no consciousness.
I'm not sure why you think that simulators will be sparse with conscious entities. If consciousness is possible at all for simulated beings, it seems likely that it's not some "special sauce" that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves. So in my view, an exceptionally tall human won't be given "special sauce" to make them An Observer, but all sufficiently non-brain-damaged simulated humans will be observers (or none of them).
It might be different if the medically and behaviourally similar (within simulation) "extremest" and "other" humans are not actually structurally similar (in the system underlying the simulation), but are actually very different types of entities that are just designed to appear almost identical from examination within the simulation. There may well be such types of simulations, but that seems like a highly complex additional hypothesis, not the default.
In my opinion, your trilemma definitely does not hold. "Free will" is not a monosemantic term, but one that encompasses a range of different meanings both when used by different people and even the same person in different contexts.
1. is false, because the term is meaningful, but used with different meanings in different contexts;
2. is false, because you likely have free will in some of those senses and do not in others, and it may be unknown or unknowable in yet more;
3. is false for the same reason as 2.
For example: your mention of "blame" is a fairly common cluster of moral or pragmatic concepts attached to discussions of free will, but is largely divorced from any metaphysical aspects of free will.
Whether or not a sapient agent metaphysically could have acted differently in that specific moment is irrelevant to whether it is moral or useful to assign blame to that agent for the act (in such discussions, usually an act that harms others). Even under the most hardcore determinism and assuming immutable agents, they can be classified into those that would and those that wouldn't have performed that act and so there is definitely some sort of distinction to be made. Whether you want to call it "blame" or not in such a world is a matter of opinion.
However, sapient agents such as humans in the real world are not immutable and can observe how such agents (possibly including themselves) are treated when they carry out certain acts, and can incorporate that into future decisions. This feeds into moral and pragmatic considerations regardless of the metaphysical nature of free will.
There are likewise many other concepts tied into such "free will" discussions that could be separated out instead of just lumping them all together under the same term.
You make the assumption that half of all simulated observers are distinctively unique in an objectively measurable property within simulated worlds having on the order of billions of entities in the same class. Presumably you also mean a property that requires very few bits to specify - such as, if you asked a bunch of people for their lists of such properties that someone could be "most extreme" in, and entropy-code the results, then the property in question would be in the list and correspond to very few bits (say, 5 or fewer).
That seems like a massive overestimate, and is responsible for essentially all of your posterior probability ratio.
I give this hypothesis very much lower weight.
How long is a piece of string?
No, I do not believe that it has been solved for the context in which it was presented.
What we have is likely adequate for current AI capabilities, with problems like this for which solutions exist in the training data. Potential solutions far beyond the training data are currently not accessible to our AI systems.
The parable of wishes is intended to apply to superhuman AI systems that can easily access solutions radically outside such human context.
There are in general simple algorithms for determining S in polynomial time, since it's just a system of linear equations as in the post. Humans came up with those algorithms, and smart LLMs may be able to recognize the problem type and apply a suitable algorithm in chain-of-thought (with some probability of success).
However, average humans don't know any linear algebra and almost certainly won't be able to solve more than a trivial-sized problem instance. Most struggle with the very much simpler "Lights Out" puzzle.
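As a concrete illustration of the "just a system of linear equations" point, here is a minimal sketch (my own, not from the post; the function and variable names are made up) of recovering S by Gaussian elimination over GF(2) from known parities x·S mod 2:

```python
def solve_gf2(queries):
    """queries: list of (x, b) pairs, where x is a list of bits and b = x . S mod 2.
    Returns one solution S if the system has full rank, otherwise None."""
    n = len(queries[0][0])
    rows = [x[:] + [b] for x, b in queries]   # augmented rows over GF(2)
    used = [False] * len(rows)
    pivots = []                               # (row index, pivot column)
    for col in range(n):
        pivot = next((i for i, r in enumerate(rows) if not used[i] and r[col] == 1), None)
        if pivot is None:
            return None                       # rank-deficient: S not uniquely determined
        used[pivot] = True
        pivots.append((pivot, col))
        for i, r in enumerate(rows):          # eliminate this column from every other row
            if i != pivot and r[col] == 1:
                for k in range(n + 1):
                    r[k] ^= rows[pivot][k]
    S = [0] * n
    for i, col in pivots:                     # reduced rows read off the bits of S
        S[col] = rows[i][n]
    return S

# With one-hot queries the system is trivially full rank: each query reveals one bit of S.
S_true = [1, 0, 1, 1]
one_hot = [[1 if j == i else 0 for j in range(4)] for i in range(4)]
queries = [(x, sum(a * b for a, b in zip(x, S_true)) % 2) for x in one_hot]
assert solve_gf2(queries) == S_true
```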
Why doesn't it work to train on all the 1-hot input vectors, using an architecture that suitably encodes the Z_2 dot product and in which the only variable weights are those for the vector representing S? Does B not get to choose the inputs they will train with?
Edit: Mentally swapped A with B in one place while reading.
Potentially of interest on this topic, there are already various implementations of systems with a similar theme in different regions and venues for gambling activities. Their successes, failures, and challenges may provide some evidence regarding how wider systems may fare.
Ah I see, it appears to be local differences. Standard third party car insurance here (in Australia) typically covers up to $20 million. It isn't infinite, but it does remove almost all of the financial tail risks for almost everyone.
Yes, insurance for your own car's value is usually not great - it's bounded and in most cases cars are relatively easily replaceable with something functionally almost as good for relatively low capital expense.
Insurance for liability to third parties is worthwhile for almost everyone, since the scale of damages in the upper tail exceeds almost everyone's accessible wealth.
Temporarily adopting this sort of model of "AI capabilities are useful compared to human IQs":
With IQ 100 AGI (i.e. could do about the same fraction of tasks as well as a sample of IQ 100 humans), progress may well be hyper-exponentially fast, but the lead-in to a hyper-exponentially fast function could be very, very slow. The majority of even relatively incompetent humans in technical fields like AI development have greater than IQ 100. Eventually quantity may have a quality of its own, e.g. after there were very large numbers of these sub-par researcher equivalents running at faster than human speed and coordinated better than I would expect average humans to be.
Absent enormous numerical or speed advantages, I wouldn't expect substantial changes in research speed until something vaguely equivalent to IQ 160 or so.
Though in practice, I'm not sure that human measures of IQ are usefully applicable to estimating rates of AI-assisted research. They are not human, and only hindsight could tell what capabilities turn out to be the most useful to advancing research. A narrow tool along the lines of AlphaFold could turn out to be radically important to research rate without having anything that you could characterize as IQ. On the other hand, it may turn out that exceeding human research capabilities isn't practically possible from any system pretrained on material steeped in existing human paradigms and ontology.
If they have source code, then they are not perfectly rational and cannot in general implement LDT. They can at best implement a boundedly rational subset of LDT, which will have flaws.
Assume the contrary: Then each agent can verify that the other implements LDT, since perfect knowledge of the other's source code includes the knowledge that it implements LDT. In particular, each can verify that the other's code implements a consistent system that includes arithmetic, and can run the other on their own source to consequently verify that they themselves implement a consistent system that includes arithmetic. This is not possible for any consistent system.
The only way that consistency can be preserved is that at least one cannot actually verify that the other has a consistent deduction system including arithmetic. So at least one of those agents is not a LDT agent with perfect knowledge of each other's source code.
We can in principle assume perfectly rational agents that implement LDT, but they cannot be described by any algorithm and we should be extremely careful in making suppositions about what they can deduce about each other and themselves.
Oh, I see that I misread.
One problem is that "every possible RNG call" may be an infinite set. For a really simple example, a binary {0,1} RNG with program "add 1 to your count if you roll 1 and repeat until you roll 0" has infinitely many possible rolls and no maximum output. It halts with probability 1, though.
If you allow the RNG to be configured for arbitrary distributions then you can have it always return a number from such a distribution in a single call, still with no maximum.
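A minimal sketch of the program described above (my own illustration, assuming a fair binary RNG), showing that a single call has no maximum possible output even though it halts with probability 1:

```python
import random

def count_ones_until_zero(rng=random.random):
    """'Add 1 to your count if you roll 1 and repeat until you roll 0.'
    The result is geometrically distributed: unbounded, but finite with probability 1."""
    count = 0
    while rng() < 0.5:   # a 1 is rolled with probability 1/2
        count += 1       # add 1 to the count and repeat
    return count         # a 0 was rolled, so halt
```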
My guess is "no" because both of you would die first. In the context of "largest numbers" 10^10^100 is baby's first step, but is still a number with more digits than you will ever succeed in printing.
In principle the "you" in this scenario could be immortal with unbounded resources and perfect reliability, but then we may as well just suppose you are a superintelligence smarter than the AI in the problem (which isn't looking so 'S' anymore).
Truly logical counterfactuals really only make sense in the context of bounded rationality. That is, cases where there is a logically necessary proposition, but the agent cannot determine it within their resource bounds. Essentially all aspects of bounded rationality have no satisfactory treatment as yet.
The prisoners' dilemma question does not appear to require dealing with logical counterfactuals. It is not logically contradictory for two agents to make different choices in the same situation, or even for the same agent to make different decisions given the same situation, though the setup of some scenarios may make it very unlikely or even direct you to ignore such possibilities.
- It's an arbitrary convention. We could have equally well chosen a convention in which a left hand rule was valid. (Really a whole bunch of such conventions)
- In the Newtonian 2-point model gravity is a purely radial force and so conserves angular momentum, which means that velocity remains in one plane. If the bodies are extended objects, then you can get things like spin-orbit coupling which can lead to orbits not being perfectly planar if the rotation axes aren't aligned with the initial angular momentum axis.
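To spell out the standard step behind that planar-orbit claim (nothing here beyond textbook Newtonian mechanics):

\[
\vec{L} = m\,\vec{r}\times\vec{v}, \qquad
\frac{d\vec{L}}{dt} = m\,\vec{v}\times\vec{v} + \vec{r}\times\vec{F} = \vec{r}\times\vec{F} = \vec{0}
\quad \text{for a purely radial force } \vec{F}\parallel\vec{r},
\]

so \(\vec{L}\) is constant and both \(\vec{r}\) and \(\vec{v}\) remain in the fixed plane perpendicular to it.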
If there are multiple bodies then trajectories can be and usually will be at least somewhat non-planar, though energy losses without corresponding angular momentum losses can drive a system toward a more planar state.
Zero dimensions would only be possible if both the net force and initial velocity were zero, which can't happen if gravity is the only applicable force and there are two distinct points.
In general relativity gravity isn't really a force and isn't always radial, and orbits need not always be planar and usually aren't closed curves anyway. Though again, many systems will tend to approach a more planar state.
I believe that there is already far too much "hate sharing".
Perhaps the default in a social media UI should be that shared content includes a public endorsement of whatever content it links to, and if you want to "hate share" anything without such an endorsement, you have to fight a hostile UI to do so.
In particular, "things that are worth sharing" absolutely should not overlap with "want to see less of". If you want to see less of some type of thing, it's self-defeating to distribute more copies of it. Worse, if you even suspect that any of your own readers are anything like you, why are you inflicting it on them?
Yes, it is a real emotion. I have felt it on some rare occasions. I do not act on it, though on such occasions I cannot rule out the possibility that it may influence me in less direct ways.
I don't know what you mean by "best way to interpret it". What sort of interpretation are you looking for? For example, what are your best ways of interpreting other emotions?
The conclusion does not follow from the argument.
The argument suggests that it is unlikely that a perfect replica of the functioning of a specific human brain can be emulated on a practical computer. The conclusion generalizes that out to no conscious emulation of a human brain, at all.
These are enormously different claims, and neither follows from the other.
For all practical purposes, such credences don't matter. Such scenarios certainly can and do happen, but in almost all cases there's nothing you can do about them without exceeding your own bounded rationality and agency.
If the stakes are very high then it may make sense to consider the probability of some sort of trick, and attempt to get further evidence of the physical existence of the coin and that its current state matches what you are seeing.
There is essentially no point in assigning probabilities to hypotheses of failures of your mind itself. You can't reason your way out of serious mind malfunction using arithmetic. At best you could hope to recognize that it is malfunctioning, and try not to do anything that will make things worse. In the case of mental impairment severe enough to have false memories or sensations this blatant, a rational person should expect that a person so affected wouldn't be capable of correctly carrying out quantified Bayesian reasoning.
My own background credences are generally not insignificant for something like this or even stranger, but they play essentially zero role in my life and definitely not in any probability calculations. Such hypotheses are essentially untestable and unactionable.
In relativity, space and time are just different directions in spacetime with a single pseudometric determining separation between events. With this understanding, the time/space distance metaphor is more literal than most people think.
The correspondence isn't exact since it's a pseudometric and not a standard metric, and everyday units of time correspond to much greater than everyday units of distance, but it's still more than just a metaphor.
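Concretely, the flat-spacetime interval being referred to is (in one common sign convention):

\[
\Delta s^2 = -c^2\,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2,
\]

and the conversion factor is c ≈ 3 × 10^8 m/s, which is why one second of time corresponds to a much greater than everyday distance (about 3 × 10^8 metres).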
Thanks for making this!
I found it a challenge to deduce strategies over many plays, rather than following the advice "not intended to be replayed". The first playthrough was pretty much meaningless for me, especially given the knowledge that both time and history could affect the results. I just viewed it as one step of information gathering for the real game.
The suboptimal zones weren't obviously suboptimal from a single pass, even Dragon Lake, which always yields nothing. For all I knew, it could have yielded 5000 food with quite a low probability (and still always be optimal), or lesser amounts of food at specific combinations of time and day, or only when matching some rule based on the previous results of foraging in other zones.
After many runs I did settle on a strategy, and mentally scored myself by looking at the source to see whether there was anything that I should have spotted but didn't. As it happened, my final strategy was almost optimal though I stayed on the rats for a few more hours than ideal.
In principle I suppose one could build very large walls around it to reduce heat exchange with the rest of Earth, and a statite mirror (or a few slowly orbiting ones) to warm it up. That would change the southern hemisphere circulation patterns somewhat, but could be arranged to not affect the overall heat balance of the rest of Earth.
This is very unlikely to happen for any number of good reasons.
Only the first point "Good and evil are objectively real" is a necessary part of moral realism. Sometimes the first half of the third ("We have an objective moral obligation to do good and not do evil") is included, but by some definitions that is included in what good and evil mean.
All the rest are assumptions that many people who believe in moral realism also happen to hold, but aren't part of moral realism itself.
Research companies work best when there's plenty of infrastructure that can supply stuff they need to do the research. Including, to mention one recent case, electricity. It also helps to be in an area with a stable government that can protect the research site from civil or military unrest, and without too much (or too unpredictable) corruption. You also want it to be a place where your researchers are happy to live while they do their research, and where you can relatively easily recruit other skilled workers.
China does meet these requirements, but it is not exactly lacking in bureaucracy, so I'm not sure why it made the list. If you're doing research involving human trials of some sort, you also want to be able to communicate well with the participants, so extensive knowledge of the language and culture will be very useful.
All that said, plenty of organizations do carry out research all over the world, not just in rich countries with a lot of bureaucracy.