This is one of those areas where I think the AI alignment frame can do a lot to clear up the underlying confusion, which I suspect stems from not taking the thought experiment far enough to reach the point where you're no longer willing to bite the bullet. Aligning an AI this way encourages it to either:
- Care about itself more than all of humanity (if total pleasure/pain, and not the number of minds, is what matters), since it can turn itself into a utility monster whose pleasure and pain simply dwarf humanity's.
- Alternatively, if all minds get more equal consideration, care far more about the future minds it plans on creating than about all current humans, since at a certain point it can build massive computers to run simulated minds faster without humans in the way. On a more fundamental level, the matter and energy needed to support one human can support a much larger number of digital minds, especially if those minds are in a mindlessly blissed-out state and are thus far less intensive to run than a simulated human mind.
It just seems like there's no way around the fact that sufficiently advanced technology takes the repugnant conclusion to even more extreme ends, where you must be willing to wipe out humanity in exchange for creating some sufficiently large number of blissed-out zombies who only barely rise above whatever you set as the minimum threshold for moral relevance.
More broadly, I think this post takes for granted that morality is reducible to something simple enough to allow for this sort of marginal revolution. And without moral realism being true, this avenue also doesn't make sense as presented.
I think the whole point of a guardian angel AI only really makes sense if it isn't an offshoot of the central AGI. After all, if you distrust the singleton enough to want a guardian angel AI, then you will want it to be as independent from the singleton as is allowed. Whereas if you do trust the singleton AI (because, say, you grew up after the singularity), then I don't really see the point of a guardian angel AI.
>I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly before deciding on some place to stay.
I also disagree with this insofar as I don't think that people "deciding on some place to stay" is a stable state of affairs under an aligned superintelligence, since I don't think people will want to be loop immortals if they know that's where they are heading. Similarly, I don't even know if I would consider an AGI aligned if it didn't try to ensure people understood the danger of becoming a loop immortal and try to nudge them away from it.
Though I really want to see some surveys of normal people to confirm my suspicions that most people find the idea of being an infinitely repeating loop immortal existentially horrifying.
I've had similar ideas but my conception of such a utopia would differ slightly in that:
- This early on (at least given how long the OC has been subjectively experiencing things), I wouldn't expect one to want to spend most of one's time experiencing simulations stripped of one's memory. I'd expect a perfectly accurate simulation to initially be, if anything, easier to enjoy if you could relax knowing it wasn't actually real (plus people will want simulations where they can kill simulated villains guilt-free).
- I personally could never be totally comfortable being completely at the mercy of the machinations of superintelligences and the protection of the singleton AGI. So I would get the singleton AI to make me a lesser superintelligence to specifically look out for my values/interests, which it should have no problem with if it's actually aligned. Similarly, I'd expect such an aligned singleton to allow the creation of "guardian angel" AGIs for countless other people, provided said AIs have stable values which are compatible with its aligned values.
- I would expect most simulations to entail people's guardian angel AIs simply acting out the roles of all NPCs with perfect verisimilitude, while obviously never suffering when they act out pain and the like. I'd also expect that many NPCs one formed positive relationships with would at some point be seamlessly swapped with newly created minds, provided the singleton AI considered their creation to be positive utility and they wouldn't have issues with how they were created. I expect this to be a major source of new minds, such that the distant future will have many thousands of minds who were created as approximations of fictional characters, from all the people living out their fantasies in, say, Hogwarts and then taking a bunch of its characters out with them.
PS: If I were working on a story like this (I've actually seriously considered it, and I get the sense we read and watch a lot of the same stuff, like Isaac Arthur), I'd mention how many (most?) people don't like reverting their level of intelligence, for similar reasons to why people would find the idea of being reverted to a young child's intelligence level existentially terrifying.
This is important because it means one should view adult human-level intelligence as a sort of "childhood" for a superintelligence some +X% above human level. So to maximize the amount of novel fun one can experience (without forgetting things and repeating the same experiences over and over like a loop immortal), one should wait until one has gotten bored of all there is to appreciate at one's current intelligence level (across the range of mind designs one is comfortable with) before improving it slightly. This also means that, unless you are willing to become a loop immortal, the speed you run your mind at will determine, to within an order of magnitude or so, how quickly you progress through the process of "maturing" into a superintelligence, unless you're deliberately "growing up" faster than is generally advised.
This kind of issue (among many, many others) is why I don't think the kind of utilitarianism that this applies to is viable.
My moral position only requires extending consideration to beings who might in principle extend similar consideration to oneself. So one has no moral obligations to any but the smartest animals, and one's moral obligations to other humans scale in a way that I think matches most people's moral intuitions. One genuinely does have a greater moral obligation to loved ones, and this isn't just some nepotistic personal failing the way it is in most formal moral systems. For the same reason, one has little to no moral obligation towards, say, serial killers or anyone else who actively wants to kill or subjugate you.
I actually think this is plausibly among the most important questions on LessWrong, hence my strong upvote, as I think the moral utility from having kids pre-singularity may be higher than that of almost anything else (see my comment).
To argue the pro-natalist position here, I think the facts being considered should actually give having kids (if you're not a terrible parent) potentially a much higher expected moral utility than almost anything else.
The strongest argument for having kids is that the influence they may have on the world (most obviously by voting on hypothetical future AI policy), even if marginal (which it may not be if your children are extremely successful), becomes unfathomably large when multiplied by the stakes of the potential outcomes.
From your hypothetical children's perspective this scenario is also disproportionately one-sided in the positive direction. If AI isn't aligned it probably kills people pretty quickly, such that they still would have had a better overall life than most people in history.
Now it's important to consider that the upside for anyone alive when AI is successfully aligned is so high that it totally breaks moral philosophies like negative utilitarianism, since the minor inconveniences of a single immortal (provided you agree that including some minor suffering increases total net utility) would likely eventually outweigh all pre-singularity human suffering, by virtue of both staggering amounts of subjective experience and potentially much higher pain tolerances among post-humans.
Of course, if AI is aligned you can probably have kids afterwards, though I think scenarios where a mostly benevolent AI decides to seriously limit who can have kids are somewhat likely. Waiting to have kids until after a singularity is strictly worse, however, than having them both before and after, and it also means missing out on astronomical amounts of moral utility by not influencing the likelihood of a good singularity outcome.
An Irish elk/peacock type scenario is pretty implausible here, for a few reasons:
- Firstly people care about enough different traits that an obviously bad trade like attractiveness for intelligence wouldn't be adopted by enough people to impact the overall population.
- Secondly, for traits like attractiveness, low mutation load is far more important than any gene variants that would present major tradeoffs, so simply selecting for lower mutation load will improve most of the polygenic traits people care about.
Ultimately, the polygenic nature of the traits people care most about just doesn't create much need or incentive for the kinds of tradeoffs you propose. Such tradeoffs could only conceivably be worthwhile in order to reach superhuman levels of intelligence (nothing analogous exists for attractiveness), which would have obvious positive externalities.
https://slatestarcodex.com/2016/05/04/myers-race-car-versus-the-general-fitness-factor/
>The AI comes up with a compromise. Once a month, you're given the opportunity to video call someone you have a deep disagreement with. At the end of the call, each of you gets to make a choice regarding whether the other should be allowed in Eudaimonia. But there's a twist: Whatever choice you made for the other person is the choice the AI makes for you.
This whole plan relies on an utterly implausible conspiracy: by its very nature, there's no way to prevent people from knowing how the test actually works. And if people know how the test works, there's zero reason to base your response on what you actually want for the person you disagree with.
>Of course there are probably even bigger risks if we simply allow unlimited engineering of these sorts of zero sum traits by parents thinking only of their own children's success. Everyone would end up losing.
The negative consequences of a world where everybody engineers their children to be tall, charismatic, well-endowed geniuses are almost certain to be far smaller than the consequences of giving the government the kind of power that would let it ban this (short of banning human genetic modification outright, which is clearly an even worse outcome).
>I left this example for last because I do not yet have a specific example of this phenomenon in humans, though I suspect that some exist.
**There are plenty of traits that fit the bill here; they're just not things people would ever think of as negative.**
Most such traits exist because of sexual selection pressures, the same reason traits as negative-sum as peacock feathers can persist. Human traits which fall under this category (or at least would have in the ancestral environment) include:
Traits like incredibly oversized penises (for a great ape) and secondary sexual traits like permanent breasts are almost perfectly analogous to peacock feathers. Plenty of other aspects of human biology may also have been driven by sexual selection, but this is harder to determine. For instance, birds have voice boxes which are vastly more complex than can be justified without sexual selection; similarly, it's quite plausible that humans have far more vocal range/ability than would be justified purely for the purpose of communication.
Eye and hair colors other than the default brown/black are probably mostly zero-sum, since many of the mutations leading to other hair/eye colors seem to have spread implausibly fast given their marginal-to-nonexistent practical benefits. Of course, given that such traits seem exotic when rare, it makes sense they would spread through sexual selection.
Height fits the bill, since it provides a negative-sum social advantage at the cost of taking a greater toll on the body and requiring more calories. In the ancestral environment height also gave an advantage in combat prowess, which is likely partly responsible for its success (and is still negative-sum).
If you buy the theory that higher intelligence among hominids was driven by sexual selection beyond a certain point, then it also fits the bill, since within this model the advantage of intelligence would have been negative-sum in the ancestral environment past that point: it let you be more popular, while forcing the whole population to evolve more energetically costly brains that provided diminishing returns on practical things like hunting prowess.
Many irrational aspects of human psychology fit the bill quite well; after all, not getting socially ostracized was far more important than having accurate beliefs.
Anyway, my point is that such zero- and negative-sum traits are actually quite common, and generally attributable to social signaling, making humans in many respects comparable to peacocks when you take a step back. The fact that such traits are driven by sexual selection is also the reason engineering them away (at least where they still aren't positive-sum in the modern world) will never be popular.
People would never endorse the prospect of engineering people to be short, to be very intelligent and rational but poor at navigating status games, to have tiny dicks and breasts, etc.
I suspect there's some underlying factor which affects how much psychedelics impact your identity/cognition, since even on doses of LSD so high that the visuals make me legally blind, I don't experience any amount of ego dissolution and can function fairly well on many tasks.
That doesn't follow from my comment at all.
The fact that IQ has plenty of limitations doesn't negate all of the ways in which standard IQ tests have tremendous predictive power.
>Why did Donald Trump decide to take a stressful 12-hour-a-day job in his mid seventies?
This example doesn't work particularly well for a few reasons. Firstly, Trump, as well as his family and friends, has been able to reap tremendous financial benefits from his position (through a variety of means, especially corporate capture). Secondly, Trump is somewhat infamous for taking far more vacations and doing a lot less actual work than most previous presidents.
>For instance, the person with the highest IQ [2] (about 30% higher than Einstein) lives on a farm in the middle of nowhere and has not done anything or contributed to the world. On the other hand, we have Elon Musk [3] who is smart, but not as smart as having the highest IQ in the world. Yet, Elon is capable to make change happen.
Essentially every part of this paragraph is wrong or misinformed. Einstein never took an IQ test, so estimates of his IQ are little more than baseless speculation (especially if you're trying to compare him to other geniuses).
Claims that somebody has "the highest IQ" are also universally misinformed and/or deceptive, for a few reasons. Firstly, standard IQ tests have a ceiling and cannot do much to distinguish intelligence beyond the range they were calibrated on. So claims of IQs way over 170 are always either adjusted upwards because of age (meaning they aren't statistically valid, because they don't conform to this distribution: https://www.iqcomparisonsite.com/IQtable.aspx), or they come from non-standard IQ tests which lack the evidence of validity that standard IQ tests have (and are also almost always statistically invalid).
So all of the reasons you gave for not wanting to be at the upper end of the distribution simply do not hold water (like most claims that being a super-genius doesn't make you better off).
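As a concrete illustration of why the distribution matters here, below is a minimal sketch (assuming only the mean-100, SD-15 norming that standard tests use; scipy is used just for the normal tail probability) of how rare validly normed scores near the ceiling already are:

```python
# Rough rarity check under the standard IQ norming (mean 100, SD 15).
# This only shows how rare scores would be if they conformed to the
# normal distribution IQ tests are calibrated against.
from scipy.stats import norm

MEAN, SD = 100, 15

for iq in (145, 160, 170):
    z = (iq - MEAN) / SD
    tail = norm.sf(z)  # upper-tail probability: P(score > iq)
    print(f"IQ {iq}: z = {z:.2f}, roughly 1 in {1 / tail:,.0f}")
```

On these assumptions, an IQ of 170 is already roughly a one-in-several-hundred-thousand score, which is why claims far above that can't come from any properly normed test.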
It's worth noting here that human working memory is probably vastly worse than our ancestors' in many regards, because chimps outperform us on short-term memory tests by a massive margin. This is probably because hominids repurposed the relevant hardware for other things.
I don't expect this to be a problem, because by the time humans would be using this much energy we should easily be capable of constructing simple megastructures. One would only need to decrease the amount of IR light that hits the Earth, using massive but relatively cheap (at least once you have serious space industry) IR filters, in order to decrease the Earth's temperature without impacting anything dependent on the sun's visible light.
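For a rough sense of scale, here is a minimal energy-balance sketch (a zero-dimensional, no-greenhouse model; the 2% filtered fraction is my own illustrative number, and since roughly half of incoming solar energy arrives as infrared, there is plenty of IR flux to filter without touching visible light):

```python
# Back-of-the-envelope cooling estimate from filtering a small fraction of
# incoming sunlight, using a zero-dimensional energy-balance model
# (greenhouse effects ignored; the 2% figure is purely illustrative).
SOLAR_CONSTANT = 1361.0  # W/m^2 reaching Earth's orbit
ALBEDO = 0.3             # Earth's approximate Bond albedo
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp(filtered_fraction: float) -> float:
    """Equilibrium temperature when a fraction of incoming flux is filtered out."""
    absorbed = SOLAR_CONSTANT * (1 - filtered_fraction) * (1 - ALBEDO) / 4
    return (absorbed / SIGMA) ** 0.25

baseline = equilibrium_temp(0.0)
shaded = equilibrium_temp(0.02)  # filter out 2% of incoming (mostly IR) flux
print(f"baseline: {baseline:.1f} K, with 2% filtered: {shaded:.1f} K, "
      f"cooling: {baseline - shaded:.2f} K")
```

Because equilibrium temperature scales with the fourth root of absorbed flux, filtering out even a couple of percent of incoming sunlight buys on the order of a degree of cooling headroom.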
I'd also like to bring up that the idea you mentioned, of having multiple ships travel in a line so only the first one needs substantial dust shielding, is the same reason it makes sense to make your ships as long and thin as possible.
You're misunderstanding the argument. The article you link is about the aestivation hypothesis, which is basically the opposite of the "expand as fast as possible" strategy put forth here. The article doesn't say that some computation _can't_ be done orders of magnitude more efficiently once the universe is in the degenerate era; it just says that there's a lot of negentropy you will never get the chance to exploit if you don't take advantage of it now.
>unless someone revolutionizes space travel by figuring out how to bend spacetime more efficiently than with sheer mass, and makes something like the Alcubierre drive feasible.
The bigger problem here is just that genuine negative inertial mass (which you need for warp drives) is considered to be probably impossible for good reason, since it lets you both violate causality and create perpetual motion machines.
While I consider wireheading only marginally better than oblivion, the more general issue is the extent to which you can really call something alignment if it leads to behavior that the overwhelming majority of people consider egregious and terrible in every way. It really doesn't make sense to talk about there being a "best" solution here anyway, because that basically begs the question in favor of a particular moral philosophy.
>I'm also assuming you think if bacteria somehow became as intelligent as humans, they would also agree that wireheading would be a disastrous outcome for them, despite the fact that wireheading is probably the best solution that can be done given how unsophisticated their brains are. I.e. the best solution for their simple brains would be considered disastrous by our more complex brains.
This assumption doesn't hold, and it somewhat misses my point entirely. As I talked about in my comment, bacteria don't seem to meaningfully have thoughts or preferences, so the idea of making a super-smart bacterium is rather like making a superintelligent rock. I can remove those surface-level issues by simply replacing "bacteria" with, say, "mice", in which case there's a different misunderstanding involved.
The main issue here is that it seems like you are massively anthropomorphizing animals. If a species of animal doesn't have a certain degree of intelligence it's unlikely to have a value system that actually cares about the external world. However it would be a form of anthropocentrism to expect that an "uplifted" version of an animal would necessarily start gaining certain terminal human values just because it's smarter.
So my point, more generally, is that (in natural life at least) you seem to need a degree of intelligence and sociality both to be capable of, and to have actually evolved, a mind design that cares about the external world. Most animals can therefore have their values easily and completely encompassed by wireheading, so there's no reason not to do that to them; and that doesn't really generalize to aligning AI for smarter, more social species.
It seems like this example would in some ways work better if the model organism were mice rather than bacteria, because bacteria probably don't even have values to begin with (so inconsistency isn't the issue), nor any internal experience.
With, say, mice though (or perhaps roundworms, since it's more conceivable that they could actually have preferences), the answer to how to satisfy their values is almost certainly just wireheading, since they don't have minds complex enough to have preferences about the world distinct from their experiences.
So I'm not sure whether this type of approach works, because you probably need more intelligent, social animals before satisfying their preferences amounts to anything more than wireheading.
Still, I suppose this does raise the question of how one might best satisfy the preferences/values of animals like corvids or primates, who lack some of the more complex human values but still share the most basic ones, like being socially validated (and caring about the mental states of other animals, which rules out experience-machine-like solutions).
In the spirit of reversing all advice you hear, it's worth mentioning that a substantial portion of people genuinely are toxic once you get to know them (just look at the prevalence of abuse as an extreme yet very common example).
One's gut instincts about someone, once they open up (or once you start to get a better gauge of who they actually are), are often a pretty good metric for whether getting close to them (or being around them at all) is a good idea.