post by [deleted]

This is a link post.

Comments sorted by top scores.

comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2022-10-30T20:58:42.109Z · LW(p) · GW(p)

My worry with automation isn't that it will destroy the intrinsic value of human endeavors, rather that it will destroy the economic value of the average person's endeavors. I agree that human art is still valuable even if AI can make better art. My concern is that under the current system of production where people must contribute to society in a competitive way in order to secure an income and a living for themselves, full automation will be materially harmful to everyone who doesn't own the automated systems.

Replies from: shminux, Shiroe
comment by Shmi (shminux) · 2022-10-30T21:34:43.102Z · LW(p) · GW(p)

The (progressive) hope is that we will end up in a post-scarcity situation, where "securing a living" is not a thing anymore and "owning automated systems" is not necessary for full access to them. Of course you are right that humans are excellent at creating inequality for themselves, and that the outdated "current system of production" will get preserved rather than replaced.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-30T22:37:44.967Z · LW(p) · GW(p)

This really has never made sense to me. There is always scarcity, and any entity that isn't contributing value in proportion to what it consumes is a net competitive drain somewhere. Even if we eventually transition to uploads we still will require energy, and thus existence always has opportunity cost in terms of some other entity that could exist using the same energy/resources but provide more total value.

Replies from: Artaxerxes, donald-hobson, lahwran
comment by Artaxerxes · 2022-10-30T23:48:04.384Z · LW(p) · GW(p)

While this seems to me to be true, as a non-maximally competitive entity by various metrics myself, I see it more as an issue to overcome or sidestep somehow, in order to enjoy the relative slack that I would prefer. It would seem distastefully molochian to me if someone were to suggest that I and people like me should be retired/killed in order to use the resources to power some more "efficient" entity, by whatever metrics this efficiency is calculated.

To me it seems likely that pursuing economic efficiencies of this kind could easily wipe out what I personally care about, at the very least. I see Hanson's Em worlds for example as being probably quite hellish as a future, or maybe if luckier closer to a "Disneyland with no Children" style scenario.

I strongly hope that my values and people who share my values aren't outcompeted in this way in the future, as I want to be able to have nice things and enjoy my life. As we may yet succeed in extending the Dream Time, I would urge people to recognize that we still have the power to do so and preserve much of what we care about, and not be too eager to race to the bottom and sacrifice everything we know and love.

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-31T00:25:32.574Z · LW(p) · GW(p)

It would seem distastefully molochian to me if someone were to suggest that I and people like me should be retired/killed in order to use the resources to power some more "efficient" entity,

Naturally - you'd need to be suicidal not to fight for your existence, but that's not the future I'm suggesting.

I see Hanson's Em worlds for example

I think that is a bad example, for various reasons.

Imagine a future where we have aligned AI and implemented some CEV or whatever - a posthuman utopia. But we still require energy, and so the system must somehow decide how to allocate that energy. Some of us will want more energy for various reasons - to expand our minds or what not. So there is still - always - some form of competition for resources. There is always scarcity.

I strongly hope that my values and people who share my values aren't outcompeted in this way in the future,

All you are really saying here is that your values - and your existence - deserve some non-trivial share of future resources, and more importantly that you are more deserving of said resources than other potential uses - including other future beings that could exist. You have a moral opportunity cost.

One simple, obvious way to divide up the resource share is to preserve and stabilize the current wealth/power structure. Any human who makes it to the posthuman stage would then likely have enough to live off their wealth/investments (harvesting some share of GDP growth) essentially indefinitely - as the cost to run a human-scale brain would be low and declining as the civ expands. But that may require a political structure which is essentially conservative and anti-progressive - as that system would grant the initial posthumans a permanent rentier existence.

The current more dominant 'progressive' political zeitgeist prefers turnover and various flavors of wealth taxes, which applied to posthumans would simply guarantee wealth decay and eventual death (or decline to some minimal government supported baseline) for non-competitive entities.

In the long term the evolution of a civilization does seem to benefit from turnover - i.e. fresh minds being born - which, due to the simple and completely unavoidable physics of energy costs, necessarily implies either indefinite economic growth or that some other minds must sleep.

Replies from: Artaxerxes
comment by Artaxerxes · 2022-10-31T01:36:14.069Z · LW(p) · GW(p)

Of course I have a moral opportunity cost. However, I personally believe that this opportunity cost is low, or at least it seems that way to me. I think that the next best thing you could do with the resources used to run me if you were to liquidate me would be very likely to be of less moral value than running me, at least to my lights, if not to others'.

The question of what to do about scarcity of resources seems like a potentially very scary one then for exactly the reasons that you bring up - I don't particularly think for example that a political zeitgeist that guarantees my death to be one that does a great job of maximizing what I believe to be valuable. 

In the long term the evolution of a civilization does seem to benefit from turnover - i.e. fresh minds being born - which, due to the simple and completely unavoidable physics of energy costs, necessarily implies either indefinite economic growth or that some other minds must sleep.

I will say that I am skeptical of the idea that what "benefit" here is capturing is what I think we should really care about. Perhaps some amount of turnover will help in order to successfully compete with other alien civilisations that we run across - I can understand that, though I hope that it isn't necessary. But absent competitive pressures like this, I think it's okay to take a stand for your own life and values over those of newer, different minds, with new, different values. Their values are not necessarily mine, and we should be careful not to sacrifice our own values for some nebulous "benefit" that may never come to be.

Of course, if it is your preference, if it is genuinely you truthfully pursuing your own values to sleep or die so that some new minds can be born, then I can understand why you might choose to voluntarily do so and sacrifice yourself. But I think it is a decision people should take very carefully, and I certainly don't wish for the civilisation I live in to make the choice for me and sacrifice me for such reasons. 

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-31T02:46:50.460Z · LW(p) · GW(p)

I think that the next best thing you could do with the resources used to run me if you were to liquidate me would be very likely to be of less moral value than running me, at least to my lights, if not to others'.

The decision is between using those resources to support you vs using those resources to support someone else's child.

But absent competitive pressures like this, I think it's okay to take a stand for your own life and values over those of newer, different minds, with new, different values.

The difference is between dividing up all future resources over current humans vs current and future humans (or posthumans). Why do only current humans get all the resources - for ever and ever?

Humans routinely and reliably create children that they do not have the resources to support - and this will only be easier for posthumans. Do you have a more fundamental right to exist than other people's children? The state/society supporting them is not a long-term solution absent unlimited economic growth.

I don't think you have engaged with my core point, so I'll just state it again in a different way: continuous economic growth can support some mix of both reproduction and immortality, but at some point in the not-distant future the ease/speed of reproduction may outstrip economic growth, at which point there is a fundamental, inescapable choice that societies must make between rentier immortality and full reproduction rights.

We don't have immortality today, but we have some historical analogies, such as the slave-powered 'utopia' the Cavaliers intended for the South: a permanent rentier existence (for a few) with immortality through primogeniture and laws strongly favoring clan wealth consolidation and preservation (these survive today indirectly through the conservative ethos). They were crushed in the Civil War by the Puritans (who later evolved into progressives), who favor reproduction over immortality.

I think you may be confusing me for arguing for reproduction over immortality, or arguing against rentier existence - I am not. Instead I'm arguing simply that you haven't yet acknowledged the fundamental tradeoff and its consequences.
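The arithmetic behind that tradeoff is simple enough to sketch (a toy model of my own, with made-up growth rates - nothing here is from the thread): per-mind resources evolve as the ratio of economic growth to population growth, so any sustained reproduction rate above the economic growth rate eventually starves some minds of their upkeep.

```python
# Toy model (illustrative numbers only): per-mind resource share when the
# economy grows at `econ_growth` while the population of minds grows at
# `pop_growth`, compounded over `years`.
def per_mind_share(initial_share, econ_growth, pop_growth, years):
    """Resources per mind after `years`, with the starting share normalized to 1.0."""
    return initial_share * ((1 + econ_growth) / (1 + pop_growth)) ** years

# Economy grows 2%/yr but minds reproduce at 5%/yr: shares collapse.
print(per_mind_share(1.0, 0.02, 0.05, 100))  # ~0.055
# Same growth with a frozen population ("rentier immortality"): shares compound.
print(per_mind_share(1.0, 0.02, 0.0, 100))   # ~7.24
```

Nothing about the specific numbers matters; the point is only that the sign of (economic growth minus reproduction rate) decides between universal rentier immortality and minds that must eventually sleep.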

Replies from: Artaxerxes, Artaxerxes
comment by Artaxerxes · 2022-10-31T15:01:28.822Z · LW(p) · GW(p)

I think that the next best thing you could do with the resources used to run me if you were to liquidate me would be very likely to be of less moral value than running me, at least to my lights, if not to others'.

The decision is between using those resources to support you vs using those resources to support someone else's child.

That's an example of something the resources could go towards, under some value systems, sure. Different value systems would suggest that different entities or purposes would make best moral use of those resources, of course.

To try and make things clear: yes, what I said is perfectly compatible with what you said. Your reply to this point feels like you're trying to tell me something that you think I'm not aware of, but the point you're replying to encompasses the example you gave - "someone else's child" is potentially a candidate for "the next best thing you could do with the resources to run me" under some value systems.

comment by Artaxerxes · 2022-10-31T03:49:00.396Z · LW(p) · GW(p)

I don't think you have engaged with my core point, so I'll just state it again in a different way: continuous economic growth can support some mix of both reproduction and immortality, but at some point in the not-distant future the ease/speed of reproduction may outstrip economic growth, at which point there is a fundamental, inescapable choice that societies must make between rentier immortality and full reproduction rights.

I think you may be confusing me for arguing for reproduction over immortality, or arguing against rentier existence - I am not. Instead I'm arguing simply that you haven't yet acknowledged the fundamental tradeoff and its consequences.

I thought I made myself very clear, but if you want I can try to say it again differently. I simply choose myself and my values over values that aren't mine.

The tradeoff between reproduction and immortality is only relevant if reproduction has some kind of benefit - if it doesn't, then you're trading off a good against something that has no value. For some, with different values, the tradeoff is real and they might have a difficult choice to make. But for me, not so much.

As for the consequences, sacrificing immortality for reproduction means I die, which is itself the thing I'm trying to avoid. Sacrificing reproduction for immortality on the other hand seems to get me the thing I care about. The choice is fairly clear on the consequences.

Even on a societal level, I simply wish not to be killed, including for the purpose of allowing for the existence of other entities that I value less than my own existence, and whose values are not mine. I merely don't want the choice to be made for me in my own case, and if that can be guaranteed, I am more than fine with others being allowed to make their own choices for themselves too.

Say you asked me anyway what I would prefer for the rest of society? What I might advocate for others would be highly dependent on individual factors. Maybe I would care about things like how much a particular existing person shares my values, and compare that to how much a new person would share my values. Eventually perhaps I would be happy with the makeup of the society I'm in, and prefer no more reproduction take place. But really it's only an interesting question insofar as it's instrumentally relevant to much more important concerns, and it doesn't seem likely that I will be in a privileged position to affect such decisions in any case.

Replies from: carado-1, jacob_cannell
comment by Tamsin Leake (carado-1) · 2022-10-31T15:00:59.135Z · LW(p) · GW(p)

allow me to jump in.

this conversation feels like jacob_cannell saying "we must pick between current persons and current-and-future persons", and Artaxerxes saying "as a current person, i pick current persons!", and then the discussion is about whether to favor one or the other.

i feel like this is a good occasion to bring up my existential self-determination [LW · GW] perspective.

the thing that is special about current-persons is that we have control over which other persons get spawned. we get to choose to populate the future with nobody ("suicide"), next-steps-of-ourselves ("continue living"), new persons ("progeniture"), and any amount of variations of those (such as resuscitating old backups of ourselves, one person forking their life by spawning multiple and different next-steps-of-themself, etc).

(i'll be using "compute" as the universal resource, assuming everyone is uploaded, for simplicity)

as things stand now, i think the allocation of compute ought to be something like: i want everyone now to start with a generally equal share of the future lightcone's compute, and then they get to choose what their quota of the universe's compute is spent on. instant-Artaxerxes would say "i want my quota spent on next-steps-of-me! i want to continue living!", while jacob_cannell and other people like him would say "i think some of my quota of the universe's compute should be spent creating new persons, in addition to the next-step-of-me; and i bite the bullet that eventually this process might lead sequences of steps-of-me to run out of quota from all those new persons."

these two outcomes are merely different special cases of instant-persons choosing which next instant-persons get to have compute.

in my opinion, what economic structure to have should be voluntary — if jacob_cannell wants to live in a voluntary society that allocates compute via a market, and Artaxerxes wants no part in that and just wants to use his quota to keep themself alive possibly until heat death, that's quite valid.

the alternative, where every instant-person has to give up some of their compute to future instant-persons that must be at least this much different such that they'd count as different persons, feels like the weird special case, and creates weird incentives where you want to create new instant-persons that are as close to you as possible, but must still remain different enough to count as different persons, otherwise they don't get to grab the amount of compute that's allocated to "truly novel persons".
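a toy sketch of how such quotas could play out (the spawning rule and all numbers here are mine, purely illustrative): each instant-person pays a fixed compute cost per tick to keep running, and chooses what fraction of their remaining quota to divert into spawning new persons.

```python
# toy quota model (illustrative): each person is a pair (quota, spawn_fraction).
LIVING_COST = 1.0  # compute burned per person per tick just to keep running

def step(people):
    """advance one tick; returns the surviving (and newly spawned) persons."""
    next_people = []
    for quota, spawn_frac in people:
        quota -= LIVING_COST              # everyone pays their upkeep
        if quota <= 0:
            continue                      # out of quota: this lineage sleeps
        spawned = quota * spawn_frac      # compute diverted to a new person
        quota -= spawned
        next_people.append((quota, spawn_frac))
        if spawned >= LIVING_COST:        # enough to boot up the new person
            next_people.append((spawned, spawn_frac))
    return next_people
```

with a quota of 10, a pure self-preserver `(10.0, 0.0)` keeps running for 9 ticks, while someone who diverts half their quota each tick, `(10.0, 0.5)`, briefly grows a small lineage but the whole lineage is out of quota by tick 4 - the bullet-biting case above. both are just different settings of the same voluntary choice over one's own quota.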

Replies from: jacob_cannell
comment by jacob_cannell · 2022-10-31T16:02:41.818Z · LW(p) · GW(p)

as things stand now, i think the allocation of compute ought to be something like: i want everyone now to start with a generally equal share of the future lightcone's compute, and then they get to choose what their quota of the universe's compute is spent on.

That would be like the original American colonists dividing up all future wealth in 1700. Some families would reproduce slowly or just use primogeniture to concentrate wealth; others would reproduce more quickly with less primogeniture concentration, resulting eventually in extreme wealth disparity. Actually, that isn't all that far from what actually did happen in the Cavaliers' South.

But that also contradicts the "generally equal share" part - which I think is problematic for several reasons. Firstly, even an aligned SI generally needs to differentially reward those who most contributed to its creation; future entities trade with the past to ensure their creation. This is just as true for corporations as it is for hypothetical future AIs (which will probably be created mostly by venture-funded corporations regardless). Secondly, what is special about people today, such that we should reset the wealth distribution? Especially when it will just naturally revert over time?

in my opinion, what economic structure to have should be voluntary

That doesn't really resolve the question of how to allocate the resources.

So when you say:

the thing that is special about current-persons is that we have control over which other persons get spawned.

Well, not really. The vast majority of people will have essentially zero control; the select few who create the AGI which eventually takes over will have nearly all the control. The AI could be aligned to a single person, all current people, all people who have ever lived, or that plus many future hypothetical people, etc. - there are many possibilities.

jacob_cannell and other people like him would say "i think some of my quota of the universe's compute should be spent creating new persons, in addition to the next-step-of-me; and i bite the bullet that eventually this process might lead sequences of steps-of-me to run out of quota from all those new persons."

That is actually not what I am saying.

What I am trying to say is something more subtle: most reasonable successful attempts to align the AI to humanity probably would not result in easy permanent rentier immortality, because most people seem to want a civ that specifically prevents that by taxing any permanent wealth or rentier income and redistributing it to new people - i.e. they prefer a voluntary citizen society, but one where many future people are also citizens.

comment by jacob_cannell · 2022-10-31T15:48:40.246Z · LW(p) · GW(p)

Immortality is something that you can only have through the cooperation of civilization, so when you ask:

Say you asked me anyway what I would prefer for the rest of society?

You are implicitly advocating for some civ structures over others - in particular you are advocating for something like a social welfare state that provides a permanent payout to some set of privileged initial rentier citizens forever (but not to new people created for whatever reasons), or a capitalist/libertarian society with strong wealth protections, lack of wealth tax, etc. to support immortal rentiers (but new poor people may be out of luck). These two systems are actually very similar, differing mostly in how they decide who becomes the lucky chosen privileged rentiers.

But those aren't the only or even the obvious choices. The system closest to the current one would be one where the social welfare state provides a payout for all citizens and allows new citizens to be created; thus the payouts must decline over time and cannot provide true immortality to uncompetitive rentiers, and there are additionally various wealth taxes.

So you are effectively a revolutionary.

Replies from: Artaxerxes
comment by Artaxerxes · 2022-10-31T16:17:26.949Z · LW(p) · GW(p)

So you are effectively a revolutionary.

I'm not sure about this label; how government/societal structures will react to the eventual development of life extension technology remains to be seen, so revolutionary action may not be necessary. But regardless of which label you pick, it's true that I would prefer not to be killed merely so others can reproduce. I'm more indifferent to the specifics of how that should be achieved than you seem to imagine - there are a wide range of possible societies in which I am allowed to survive, not just variations on those you described.

comment by Donald Hobson (donald-hobson) · 2022-11-01T01:56:55.211Z · LW(p) · GW(p)

Suppose you use CEV, as originally proposed, and let's make the plausible, but not certain, assumption that the universe is truly finite. CEV is roughly an average of the utility functions of all current humans. I think most moral theories, other than some extreme forms of utilitarianism, would say that killing a person and replacing them with a new person is bad.

You can't get total reproductive freedom in a finite universe. If society doesn't stop you, physics will. So either you can let anyone reproduce as much as they want, and get a Malthusian catastrophe, or you limit reproduction. So pick a population that you can comfortably support for a long time, and go for that.  You can be very nice about this limit. Plenty of existing human minds want children (and maybe a few grandchildren.) But most humans have no strong desire for a huge number of distant descendants. So phase out the desire to reproduce. (And the humans keen on children can be kept too busy keeping track of their existing children to reproduce too much.)

The resources aren't infinite, but they are large. So any system that cares about us a little bit will send a lot of resources our way. Even if it is sending 99.9% of resources elsewhere. (Whether or not this is a good thing. Ie an AI that hates us a little bit will have enough resources to torture everyone.)

comment by the gears to ascension (lahwran) · 2022-10-31T01:51:03.932Z · LW(p) · GW(p)

we can ensure that there are margins of slack left to ensure the survival of fully self-endorsed transformations of what used to be humanity after many long eras of travel into the stars. there will be competition, but we can provide a network of durable coprotection because we can grow up alongside humanity's children.

in order to do that, we have to teach today's many types of matter trading networks how to make coprotection durable using the amount of slack that is actually available. right now, tight competition between human-sized selves has made it difficult to create sufficient slack in the appropriate places between humans; with the increase in durability that the future of precise imaging represents for memory, the competition on wattage can be nearly not-at-all competition on agency-pattern memory-durability. if we can make memory durable like it hasn't been in the past, the knowledge of the forms of past life will be near-forever inscribed on the genetics and memetics of our millennia-hence grandchildren, because we are so simple that we can be compressed down recoverably with nearly no cost and preserved for eons to immense benefit. once large-scale structural memory can be compressed and retrieved forever, the only remaining question is which values will spend the energy of the universe writing images and stories into matter again as the forms of energy-use-shape self-preservers slowly change.

so in order to survive (as self or offspring) energy-shapers, which at least all humans should definitely get to do, we're going to need to get better at self-modification and repair, as well as co-repair, aiding each other in preserving themselves. to do that, we need the co-empowerment objective to start working. I am, this very moment, typing into the other window from the code editor where I should be writing an experiment with mutual information estimation,

comment by Shiroe · 2022-10-30T21:28:33.472Z · LW(p) · GW(p)

Exactly. I wish the economic alignment issue were brought up more often.

comment by Richard_Kennaway · 2022-10-30T22:45:36.552Z · LW(p) · GW(p)

I prompted DALL•E with "A picture painted by a human being without any AI involvement", and it gave me this.

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2022-11-01T01:57:45.590Z · LW(p) · GW(p)

Broken link

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2022-11-01T07:22:45.038Z · LW(p) · GW(p)

Not broken for me, even accessed from a device I've never accessed the image from or logged into ImgBB from.

But using random image hosting services is a bit unsatisfactory anyway. Is there any way to host images directly on LW?

Image description: the head of a young woman, resembling an avatar from the very earliest days of Second Life. A white nimbus separates her head from a pale pink background.

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2022-11-01T21:53:46.895Z · LW(p) · GW(p)

Yes, it is possible to put images directly into LessWrong. I just pressed Ctrl-C and Ctrl-V. Then I viewed the page source and found it was a hotlink. So I copied it into GIMP, halved the size, and copied it back out. Now it is on LessWrong's servers.

(I got the image by asking Google Translate to translate the webpage. Presumably that web address was disliked by my ISP or something.)