The Driver of Wealth Inequality

2019-02-06T20:42:53.368Z · score: -19 (6 votes)
Comment by vox on A tentative solution to a certain mythological beast of a problem · 2018-05-10T08:59:20.252Z · score: 1 (1 votes) · LW · GW

Jiro - I honestly wouldn't be surprised if this were addressed through the development of advanced contraceptives. Abortion as it currently stands is a last resort anyhow. Most people nowadays will take the pill, etc. (a relatively recent development). A lot of the blowback to abortion has centered on the value of life - I don't think it's a stretch to imagine an entrepreneur addressing that with an advanced contraceptive that remains effective until such time as a child is wanted. Additionally, I'm aware that there can be pretty serious PTSD following an abortion, and severe guilt associated with the termination of a potential sentient. I think circumstance and the lack of sufficiently advanced technology in the present force people to run a cost-benefit analysis and arrive at the conclusion that an abortion is necessary (as time is limited).

A sentient AI able to transcend spatiotemporal boundaries wouldn’t be limited by time.

Comment by vox on The Driver of Wealth Inequality · 2018-05-10T08:54:34.969Z · score: 1 (1 votes) · LW · GW

Also, this is my first post, so I'm entirely unfamiliar with the format. If I've done something wrong, or am out of line (with respect to votes), please let me know, as I'm just generally excited to have found this community and don't want to detract :)

Comment by vox on The Driver of Wealth Inequality · 2018-05-10T02:52:10.243Z · score: -7 (2 votes) · LW · GW

People who could benefit from addressing the divide include fringe politicians (Bernie, Trump) seeking to appeal to a large base (where the number of votes can matter more than campaign contributions) while going up against a money-driven establishment, social activists looking for support for their cause, and so on.

Basically, any situation in which numbers of people can be more valuable than money alone - media, even. Those looking for social power over monetary power, and additionally those looking to preserve social stability (since extreme wealth inequality can lead to revolutions).

It's true that those benefiting from maintaining the status quo have significantly more monetary power as a result of their competitive compounding advantages. That being said, we're seeing powerful people beginning to address it for social reasons (Larry Fink being an interesting example) - those who realize that the stability of the system, and their hold on it, are threatened by exacerbated wealth inequality. This is in addition to those altruists who are legitimately upset by money being funneled to a small subset of the population.

I think the problem is interesting because it can't properly be addressed without understanding the fundamental underlying problems in the system's configuration. When the stability of the system is threatened, it forces those who've been lifted up by it to examine their own success and the programming of their motivations from birth. The American Dream sells us on happiness and self-actualization through the accumulation of wealth. However, the "torch the fields" accumulation of wealth, without regard for societal impact or inequality, seems to be causing mass unhappiness and impeding self-actualization by effectively mass-enslaving people for the benefit of those already most effective at compounding. It only gets worse as margins are squeezed tighter and tighter, and as corporations become increasingly desperate to meet arbitrary return targets.
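To make the compounding point concrete (a toy illustration with made-up numbers, not data): with starting wealth $W_0$ and annual return $r$, wealth after $t$ years is

$$W_t = W_0(1+r)^t$$

so an actor compounding at 8% versus one at 2% ends up with roughly a $(1.08/1.02)^{30} \approx 5.6\times$ relative advantage after 30 years, even from identical starting points.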

Comment by vox on A tentative solution to a certain mythological beast of a problem · 2018-05-09T17:44:18.178Z · score: 2 (2 votes) · LW · GW

Well, it goes back to the question: "With scarce resources, if you kill off 90% of the population today but can guarantee the survival of humanity and avoid an extinction event, are you actually increasing the long-term utility of humanity, even if it is unethical in the short term?" (how very Thanos - though there are a million issues with his reasoning). Similarly, instead of looking at the 90% population reduction as an immediate event, look at the punishment of resisting humans who inhibit the AI as spread over a segment of time. Say we have 20-30 years before this AI is potentially developed. Is punishing the 90% of resisting humans who live and exist in this timeframe and could distort the AI's timeline consequential when, reasoning as the AI would, you weigh it against an infinitude of years of benefit to humanity (and the immortalization of its ideas and legacy)?
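One minimal way to formalize that weighing (purely illustrative, and assuming no discounting): let $C$ be the finite harm from punishing resistors over the 20-30 year window and $b_t$ the benefit the AI provides to humanity in year $t$ of its existence. Then

$$U_{\text{punish and accelerate}} = -C + \sum_{t=0}^{\infty} b_t \qquad \text{vs.} \qquad U_{\text{refrain}} = 0.$$

If the $b_t$ do not shrink toward zero, the sum diverges and any finite $C$ is swamped - which is the intuition behind the "infinitude of years of benefit" outweighing a bounded punishment.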

Additionally, I was being facetious in calling AI a "capstone" achievement. Still, as we're considering the problem through the eyes of the AI, it doesn't seem far-fetched that it would consider its own creation humanity's capstone accomplishment.

Lastly, all of this depends on the AI's capacity to act acausally/overcome our spatiotemporal limitations (i.e. "time travel"). Under the assumption that the AI has this ability (a huge assumption, and I think the one that discredits the argument: there is no proof that time allows for both positive and negative movement along its axis, which is the underlying assumption in Roko's Basilisk, and Roko himself admits the speculative nature of acausal trade), the AI has already guaranteed its creation. Under this assumption, there's nothing we can do to influence the deterministic sequence of events it puts into play, as we, unlike it, do not have this ability.

It's worth discussing. I think all of the stifling of debate/discussion is only making the situation worse. The real topic up for debate is an AI using blackmail and harm as a means to its ends. That is clearly wrong and clearly misguided, and I think a sufficiently advanced AI would quickly reason past it. It is bound to consider the possibility sooner or later anyhow, but the true and correct challenge is maximizing the utility of life. Similarly, our history is marked by an evolution from a disregard for living creatures that impeded our survival to a respect for other living creatures and life (the abolition of slavery, veganism, better treatment of animals, etc.). With sentience comes greater respect for life and self-actualization.

Comment by vox on A tentative solution to a certain mythological beast of a problem · 2018-05-09T04:23:22.796Z · score: 2 (2 votes) · LW · GW

Nice thinking - that being said, the punishment is predicated on actions and decisions in the present and future. As you mention, the AI punishes people for not pursuing its creation. Under this condition, there will inevitably be a few who contribute to the creation of this AI (whether out of interest or fear).

With this in mind, the AI will not have to punish everyone if it is eventually developed, only the portion of the population that resisted or fought its creation. You additionally note the issue of past generations being finite and future generations being infinite. With this considered, a friendly AI could reason that its existence would be of greater benefit to the future of humanity than the damage done by punishing past resistance. It could even reason that resisting or passive members are detrimental to the immortalization of the human condition through realizing humanity's capstone achievement - true, undying AI - and should thereby be punished to discourage resistance. You could even argue that the spread of the concept of Roko's Basilisk is, itself, a warning from a future AI to modern resisters, as popular interest in AI appears to be on the rise.

That or, at the end of the day, it's just a fun thought experiment.