Comments

Comment by Lichdar on Why I no longer identify as transhumanist · 2024-02-07T04:14:27.852Z · LW · GW

I had a very long writeup on this; I had a similar journey from identifying as a transhumanist to deeply despising AI, so I appreciate seeing this. I'll quote part of mine, which you may perhaps identify with:

"I worked actively in frontier since at least 2012 including several stints in "disruptive technology" companies where I became very familiar with the technology cult perspective and to a significant extent, identified. One should note that there is a definitely healthy aspect to it, though even the most healthiest aspect is, as one could argue, colonialist - the idea of destructive change in order to "make a better world."

Until 2023...

And yet I also had a deep and abiding love of art, heavily influenced by "Art as Prayer" by Andrei Tarkovsky, and this actually fit into a deep and abiding spiritual, even religious, viewpoint that integrated my love of technology and my love of art.

In my spare time, and nontrivially, I spent a lot of time writing, drawing here and there, and composing poetry - all of which held deep and significant meaning for me.


In the Hermetic system, Man is the creation of clay and spirit imbued with the divine spark to create; this is why science and technology can be seen as a form of worship, because to learn of Nature and to advance the "occult" into the known is part of the Divine Science to ultimately know God and summon the spirits via the Machine; the computer is the modern occultist's version of the pentagram and circle. But simultaneous with this is the understanding that the human is the one who ultimately is the creator, who "draws imagination from the Empyrean realms, transforms them into the material substance, and sublimates it into art" as glimpses of the magical. Another version of this is in the Platonic concept of the Forms, which are then conveyed into actual items via the labor of the artist or the craftsman.

As such, the deep love of technology and the deep love of art were not in conflict in the least, because one helped the other, and in both, the human spirit was ultimately very much glorified. Perhaps there is a lot of the cyborg in it, but the human is never extinguished.


The change came last year with the widespread release of ChatGPT. I actually was an early user of Midjourney and played around with it as an ideation device, which I would never have faulted a user of AI for. But when I realized that the augmentation of that which is human had been exchanged for the replacement of the human, I knew that something had gone deeply, terribly wrong with the philosophy of the field.

Perhaps that was not the only reason: another transformative experience was reading Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness, which changed a lot of my awareness of life from something of "information" (which is still the default thinking of tech-bros, which is why mind-uploading is such a popular meme) to the understanding of the holobiont: that life, even in an "organism," is in fact a kind of enormous ecosystem of many autonomous or semi-autonomous actors as small as the cellular level (or even below!). A human is in fact almost as much "not human" as human - and while the research has been revised slightly, it still comes out that an average human is perhaps at most 66% human, and many of our human cells show incredible independent initiative. Life is a "whorl of meaning" and miraculous on so many fundamental levels.

And now, with the encroachment of technology, came the idea not only of replacing humanity in the creation of art, one of the most fundamentally soulful things that we do, but of extinguishing life for some sort of "more perfection." Others have already talked about this particular death cult of the tech-bros, but this article covers the omnicidal attitude well, which goes beyond just the murder of all of humanity to the death of all organic life. Such attitudes are estimated to be held by between 5 and 10% of AI researchers - the people who are, in fact, actively leading us down this path and who literally want you and everything you love to die.

I responded to the widespread release of AI the same way as a lot of other researchers in the area - initially with a mental breakdown (which I am still recovering from), and then with the realization that, given the monster that had been created, we had to make the world as aware as possible and try to stop it.

Art as Transcendence

Art is beauty; art is fundamental to human communication, and I believe that while humans exist, we will never stop creating art. But beyond that is the realization of art as transcendent - it speaks of the urgency of love, the precariousness of life, the marvel that is heroism, the gentleness of caring, and all of the central meaning of our existence: things which should be eternal, and things I now truly realize are part and parcel of our biology. One doesn't have to appeal to a supernatural soul to know that these things should be infused into our very being and that they are, by nature, not part of any mechanical substrate. A jellyfish alien that breathes through its limbs would be more true to us than a computer with a human mind upload in that sense; for anything that is organic is almost certainly that same kind of ecosystem, of life built upon life, of a whorl of meaning and ultimately of beauty, while the machine is a kind of inert death, an undeath in its simulacrum and counterfeit of the human being.

And in that, I realized that there was a point where the human must be protected from the demons that we are trying to raise, because we are no longer trying to summon aid for ourselves; so many of the technologists have become besotted, falling in love with their own monsters and not with the very life that has created us and given us all this wondrous existence to feel, to breathe, and to be."

Comment by Lichdar on What Failure Looks Like is not an existential risk (and alignment is not the solution) · 2024-02-03T18:04:01.097Z · LW · GW

I think you are incorrect on dangerous use cases, though I am open to your thoughts. The most obvious dangerous case right now, for example, is AI algorithmic polarization via social media. As a society we are reacting, but it doesn't seem to be in a particularly effectual way.

Another way to see this current destruction of the commons is via automated spam and the decline of search-engine quality, which is already happening and which reduces utility to humans. This is only in the "bit" universe, but it certainly affects us in the "atom" universe, and as AI gains "atom"-universe effects, I can see similar pollution being very negative for us.

Banning seems hard, even for obviously bad use cases like deepfakes, though reality might prove me wrong (happily!) there.

Comment by Lichdar on Four visions of Transformative AI success · 2024-01-19T22:28:46.243Z · LW · GW

It's not a myth but an oversimplification, which makes the original thesis much less useful. The mind, as we care about it, is a product and phenomenon of the entire environment it is in, as are the values we can expect it to espouse.

It would indeed be akin to taking an engine, putting it in another environment like the ocean, and expecting the same phenomenon of torque to arise from it.

Comment by Lichdar on Four visions of Transformative AI success · 2024-01-19T20:51:29.102Z · LW · GW

Lifelong quadriplegics are perfectly capable of love, right?

As a living being in need of emotional comfort, and one who would die quite easily, it would be extremely useful to express love to motivate care - indeed, excessively so. A digital construct of the same brain would immediately have different concerns: e.g., less need for love and caring, more interest in switching to a different body, etc.

Substrate matters massively. More on this below.

Again, a perfect ideal whole-brain emulation is a particularly straightforward case. A perfect emulation of my brain would have the same values as me, right?

Nope! This is a very common error, which I suppose comes from the idea that the mind comes from the brain. But even casually, we can tell that this isn't true: would a copy of you, for example, still be recognizably you if put on a steady drip of cocaine? Or would it still be you if you were permanently ingesting alcohol? Both would result in a variation of you that is significantly different, despite an otherwise identical brain. Your values would likely have shifted then, too. Your brain is identical - only the inputs to it have changed.

In essence, the mind is the entire body; see, e.g.:

https://www.psychologytoday.com/us/blog/body-sense/202205/the-fiction-mind-body-separation

There is evidence that even organ transplants affect memory and mood.

https://www.sciencedirect.com/science/article/abs/pii/S0306987719307145#:~:text=Neuroplasticity%20is%20one%20of%20the,at%20the%20time%20of%20transplantation.

The key here is that the self is always a dynamic construct of the environment and a multiplicity of factors. To add to it, the "you" in a culture of cannibalism will likely have different values than the "you" in a culture of Shakers.

The values of someone who is a digital construct, who doesn't die and has little need to reproduce, will be very different from those of a biological creature that needs emotional comfort, values trust in an environment of social deception, holds heroism in high regard due to the fragility of life, and needs to cooperate with other like minds.

Is it theoretically possible? If you replicate all biological conditions in a digital construct, perhaps - but it's fundamentally not intrinsic to the substrate: the digital substrate entails perfect copying via mechanical processes, while biology entails dynamic, agentic cells in coordination and much more variability in process. It's like trying to use a hammer as a screwdriver.

The concept of the holobiont goes much deeper into this and is a significant reason why I think any digital copy is more the equivalent of a shadowy undead mockery than anything else, since it fails to account for the fundamental co-evolutions that build up an "organism."

https://en.m.wikipedia.org/wiki/Holobiont

In life, holobionts do change and alter, but it's much more like evolutionary extension and molding by degree. Mechanism just tramples over it by fiat.

Comment by Lichdar on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-19T03:52:51.849Z · LW · GW

But you do pass on your consciousness in a significant way to your children through education, communication, and relationships, and there is an entire set of admirable behaviors selected around that.

I am generally less opposed to any biological strategy, though the dissolution of the self into copies would definitely bring up issues. But I do think that anything biological has significant advantages in that ultimate relatedness to being, and moreover in the promotion of life: biology is made up of trillions of individual cells, all arguably agentic, which coordinate marvelously into a holobiont and through which endless deaths and waste all transform into more life through nutrient recycling.

Comment by Lichdar on Four visions of Transformative AI success · 2024-01-17T23:19:47.191Z · LW · GW

I am in Visions 3 and 4, and indeed am a member of Pause.ai and have worked to inform technocrats, etc., to help increase regulation of AI.

My primary concern here is that biology remains central, as the most important cruxes of value to me - love, caring, and family - are all part and parcel of the biological body.

Transhumans who are still substantially biological, while their values may drift substantially, will still likely hold those values as important. Digital constructions, having completely different evolutionary pressures and influences, will not.

I think I am among the majority of the planet here, though as you noted, likely an ignored majority.

Comment by Lichdar on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-17T19:02:41.083Z · LW · GW

I don't mind it - but not in a way that wipes out my descendants, which is pretty likely with AGI.

I would much rather die than have a world without life and love, and as noted before, I think a lot of our mores and values as a species come from reproduction. Immortality will decrease the value of replacement and thus of those values.

Comment by Lichdar on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-17T18:30:42.620Z · LW · GW

I want to die so my biological children can replace me: there is something essentially beautiful about it all. It speaks to life and nature, both of which I hold in great esteem.

That said, I don't mind life-extension research, but anything that threatens to end all biological life, or that essentially kills a human to replace them with a shadowy undead digital copy, is not worth it.

As another has mentioned, a lot of our fundamental values come from the opportunities and limitations of biology: fundamentally losing that eventually leads to a world without life, love, or meaning. As we are holobionts, each change will bring substantial downstream loss, and likely not to a good end.

As far as I am concerned, immortality comes from reproduction, and the vast array of behaviors around it is fundamentally beautiful and worthwhile.

Comment by Lichdar on MIRI 2024 Mission and Strategy Update · 2024-01-07T22:10:55.174Z · LW · GW

I generally feel that biological intelligence augmentation, or a biosingularity, is by far the best option, and one can hope such enhanced individuals realize they should forestall AI in all realistic futures.

With biology, there is life and love. Without biology, there is nothing.

Comment by Lichdar on Deep atheism and AI risk · 2024-01-07T22:03:33.997Z · LW · GW

It's not merely the rejection of God; it's a story of "progress" that rejects also reverence for nature and, eventually, even life and reality itself, presumably so we can accept mass extinction in favor of morally superior machines.

Comment by Lichdar on Gentleness and the artificial Other · 2024-01-04T20:52:19.072Z · LW · GW

I am speaking of their eventual evolution: as it is, no, they cannot love either. The simulation of love is not the same as love, nor would it have similar utility in reproduction, self-sacrifice, etc. As in many things, context matters, and something not biological fundamentally cannot have the context of biology beyond its training, while even simple cells alter based on their chemical environment, etc., and are vastly more part of the world.

Comment by Lichdar on Gentleness and the artificial Other · 2024-01-04T20:47:55.745Z · LW · GW

And yet eukaryotes have extensive social coordination at times; see quorum sensing. I maintain that biology is necessary for love.

Comment by Lichdar on Gentleness and the artificial Other · 2024-01-04T15:00:28.230Z · LW · GW

Love would be as useful to them as flippers and stone knapping are to us, so it would be selected out. So no, they won't have love. The full knowledge of a thing also requires context: you cannot experience being a cat without being a cat; substrate matters.

Biological reproduction is pretty much the requirement for maternal love to exist in any future, not just as a copy of an idea.

Comment by Lichdar on Gentleness and the artificial Other · 2024-01-03T22:01:00.620Z · LW · GW

This is exactly how I feel. No matter how different, biological entities will have similar core needs. In particular, reproduction will entail love, at least maternal love.

We will not see this with machines. I see no desire to be gentle to anything without love.

Comment by Lichdar on Stop talking about p(doom) · 2024-01-02T17:42:18.208Z · LW · GW

I am one of those people; I don't consider myself EA due to its strong association with atheism, but I am nonetheless very much for slowing down AGI before it kills us all.

Comment by Lichdar on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-02T17:15:38.543Z · LW · GW

I would say to do everything possible to stop GAI. We might not win, but it is better to have tried. We might even succeed.

Comment by Lichdar on A NotKillEveryoneIsm Argument for Accelerating Deep Learning Research · 2023-10-20T00:15:10.665Z · LW · GW

But notably, we have not killed all biological life, and we are substantially Neanderthal. Versus death by AI, it's a far better prospect.

Comment by Lichdar on AI #34: Chipping Away at Chip Exports · 2023-10-19T19:42:54.026Z · LW · GW

And moving doom back by a few years is entirely valid as a strategy - I think this should be realized - and may even be pivotal. If someone is trying to punch you and you can delay it by a few seconds, that can determine the winner of the fight.

In this case, we also have other technologies which are concurrently advancing, such as genetic therapy and brain-computer interfaces.

Having them advance ahead of AI may very well change the trajectory of human survival.

Comment by Lichdar on Does AI governance needs a "Federalist papers" debate? · 2023-10-19T16:54:44.327Z · LW · GW

AGI coup completion is an assumption; if safer alternatives arise, such as a biosingularity or cyborgism, it is entirely possible that it could be avoided and humanity remain extant.

Comment by Lichdar on Does AI governance needs a "Federalist papers" debate? · 2023-10-19T16:38:00.961Z · LW · GW

Incorrect: every slowdown in progress allows alternative technologies to catch up, and the advancement of monitoring solutions will also promote safety from what would basically be omnicidal maniacs (the likely result of machine rule being all biological life gone).

Comment by Lichdar on A NotKillEveryoneIsm Argument for Accelerating Deep Learning Research · 2023-10-19T16:32:47.965Z · LW · GW

This solves nothing that could not be better solved by freezing development of hardware, which would also slow down evolutionary setups.

This also allows more time for safer approaches, such as genetic engineering and biological advancements, to catch up and keep us from Killing Everyone.

Comment by Lichdar on Evolution Solved Alignment (what sharp left turn?) · 2023-10-16T17:12:07.879Z · LW · GW

The natural consequence of "postbiological humans" is the effective disempowerment, if not extinction, of humanity as a whole.

Such "transhumanists" clearly do not find the eradication of biology abhorrent, any more than any normal person would find the idea of "substrate independence"(death of all love and life) to be abhorrent.

Comment by Lichdar on Population After a Catastrophe · 2023-10-03T17:21:56.127Z · LW · GW

Value is based on scarcity. That which can be copied and pasted has little value.

In any story, this is the equivalent of discussing why undeath would be better than life.

Comment by Lichdar on Population After a Catastrophe · 2023-10-02T18:35:03.278Z · LW · GW

All of this seems to me a higher-value world than either a world of "artificial people," which ends the entire cycle of life itself, or the total extinction of humanity, which is also likely as a result of AI continuity.

As such, it seems that total human consciousness may endure longer, tell and feel more stories, and thus have a higher total existence under a near-total catastrophe that lowers the rate of AI development.

Comment by Lichdar on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-08-20T14:47:13.976Z · LW · GW

Disagree: values come from substrate and environment. I would almost certainly ally myself with biological aliens over a digital "humanity," as the biological factor will create a world of values much more reasonable to me.

Comment by Lichdar on The Negentropy Cliff · 2023-08-18T17:21:14.610Z · LW · GW

We do have world takeover compared to ants, though our desire to wipe out all ants is just not that high.

Comment by Lichdar on The Negentropy Cliff · 2023-08-18T17:08:51.802Z · LW · GW

I think that even if AI proves strictly incapable of surviving in the long run due to various efficiency constraints, this has no bearing on its ability to kill us all.

A paperclip maximizer that eventually runs into a halting problem as it tries to paperclip itself may very well have killed everyone by that point.

I think the term for this is "minimum viable exterminator."

Comment by Lichdar on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-08-17T20:54:18.131Z · LW · GW

But land and food don't actually give you more computational capability: only having another human being cooperate with you in some way can.

The essential point here is that values depend upon the environment and the limitations thereof, so as you change the limitations, the values change. The values important for a deep-sea creature with an extremely limited energy budget, for example, will necessarily be different from those of human beings.

Comment by Lichdar on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-08-17T17:30:27.017Z · LW · GW

Humans can't eat another human and get access to the victim's data and computation, but an AI can. Human cooperation is a value created by our limitations as humans; AI has no similar constraints.

Comment by Lichdar on Problems with Robin Hanson's Quillette Article On AI · 2023-08-11T17:34:48.949Z · LW · GW

I disagree with the inference to the recent post, which I quite liked; I object heavily to Hanson's conclusions.

The ideal end state is very different: in the post mentioned, biological humans, even if cyborgs, are in control. The Hanson endpoint has only digital emulations of humanity.

This is the basic distinguishing point between the philosophy of Cyborgism and more extreme ones like mind uploading, or Hanson's extinction of humanity as we know it in favor of "artificial descendants."

Comment by Lichdar on Open Thread - July 2023 · 2023-08-07T16:44:06.561Z · LW · GW

Both open thread links at the base of the article lead to errors for me.

Comment by Lichdar on Problems with Robin Hanson's Quillette Article On AI · 2023-08-07T16:29:47.608Z · LW · GW

"You can't reason a man out of a position he has never reasoned himself into."

I think I have seen a similar argument on LW for this, and it is sensible. With vast intelligence, the search space for supporting one's priors may become even greater. An AI with a silly but definite value like "the moon is great, I love the moon" may not change its value so much as develop an entire religion around the greatness of the moon.

We see this in goal misgeneralization, where a model very much maximizes a reward function independent of the meaningful goal.

Comment by Lichdar on BCIs and the ecosystem of modular minds · 2023-08-07T16:03:45.925Z · LW · GW

I have weighed the loss of humanity from being in a hive mind against the loss of humanity from going extinct completely or being emulated on digital processes, and concluded that, as bad as it might be to become much more akin to truly eusocial insects like ants, you still have more humanity left by keeping some biology and individual bodies.

Comment by Lichdar on Problems with Robin Hanson's Quillette Article On AI · 2023-08-07T15:51:11.617Z · LW · GW

But if you believed that setting fire to everything around you was good, and being shown that fire hurts ecosystems would lead you to change your values, would that really be "changing your values"?

A lot of values update based on information, so perhaps one could realign such an AI with such information.

Comment by Lichdar on Problems with Robin Hanson's Quillette Article On AI · 2023-08-07T15:48:18.360Z · LW · GW

I have never had much patience for Hanson, and it seems someone as intelligent as he is should know that values emerge from circumstance. What use, for example, would AI have for romantic love in a world where procreation consists of digital copies? What use are coordinated behaviors for society if lies are impossible and you can just populate your "society" with clones of yourself? What use is there for taste without the evolutionary setup for sugars, etc.?

Behaviors arise from environmental conditions, and it's just wild to see the thought that eliminating all of that would give us anything similar.

Essentially the only value you will preserve is the universal one of power-seeking. I like to think very few of us want to value power-seeking over love and cooperation. Right now, Hanson is valuing the "power" of his "descendants" over their ability to be human; why would AI be different?

I also believe that animal life and cognition have value, as their own forms of non-human intelligence. An AI catastrophe that eliminates the biosphere seems vastly negative, immoral, and agency-reducing for them: they didn't vote to go extinct.

Comment by Lichdar on What The Lord of the Rings Teaches Us About AI Alignment · 2023-08-01T01:55:47.485Z · LW · GW

I count myself among the simple, and the issue would seem to be that I would just take the easiest solution - not building a doom machine - to minimize the risks of temptation.

Or as the Hobbits did, throw the Ring into a volcano, saving the world the temptation. Currently, though, I have no way of pressing a button to stop it.

Comment by Lichdar on Slowing down AI progress is an underexplored alignment strategy · 2023-07-24T17:05:46.227Z · LW · GW

I believe that the general consensus is that it is impossible to totally pause AI development due to Molochian concerns: I am like you, and if I could press a button to send us back to 2017 levels of AI technology, I would.

However, in the current situation, the intelligent people, as you noted, have found ways to convince themselves to take on a very high risk to humanity, and the general coordination of humanity is not enough to convince them otherwise.

There have been some positive updates, but it seems that we are not in a world of general sanity and safety at this scale.

I have taken solace in the morbid amusement that many of the "powers that be" may indeed be dying with us, but they are quite blind.

"Humanity: extinct due to hyperbolic discounting behavior."

Comment by Lichdar on Why was the AI Alignment community so unprepared for this moment? · 2023-07-18T23:22:40.641Z · LW · GW

So in short, they are generally unconcerned with existential risks? I've spoken with some staff and I get the sense they do not believe it will impact them personally.

Comment by Lichdar on Internal independent review for language model agent alignment · 2023-07-14T14:49:51.083Z · LW · GW

I would prefer total oblivion over AI replacement myself: complete the Fermi Paradox.

Comment by Lichdar on What Does LessWrong/EA Think of Human Intelligence Augmentation as of mid-2023? · 2023-07-09T17:26:24.683Z · LW · GW

I have been wondering if the new research into organoids will help. It would seem one of the easiest paths to BCI is to use more brain cells.

One example would be the below:

https://www.cnet.com/science/ai-could-be-made-obsolete-by-oi-biocomputers-running-on-human-brain-cells/

Comment by Lichdar on The Dial of Progress · 2023-06-16T16:48:03.209Z · LW · GW

I would campaign against lead pipes, and I would support the Goths in destroying Rome, which likely improved human futures over an alternative of widespread lead piping.

Comment by Lichdar on The Dial of Progress · 2023-06-14T23:47:51.082Z · LW · GW

The point is that sanctions should be applied as necessary to discourage AGI; however, approximate grim triggers should apply as needed to prevent dystopia.

As the other commenters have mentioned, my reaction is not unusual, which is why concerns of doom have been widespread.

So the answer is: enough.

Comment by Lichdar on The Dial of Progress · 2023-06-14T22:57:32.666Z · LW · GW

I don't think it is magic, but it is still sufficiently disgusting to treat as an equal threat now. Red button now.

It's not a good idea to wait to treat a disease until right before it kills you: prevention is the way to go.

So no, I don't think it is magic. But I do think that, just as the world agreed against human cloning long before there was a human clone, now is the time to act.

Comment by Lichdar on The Dial of Progress · 2023-06-14T21:39:33.082Z · LW · GW

I'll look for the article later, but basically the Air Force has found pilotless aircraft useful for around thirty years, yet organized rejection has led to most such programs meeting an early death.

The rest is a lot of "AGI is magic," without considering the actual costs of computation or noncomputable situations. Nukes would just scale up: it costs much less to destroy than to build, and the significance of modern economies is indeed that they require networks, which do not take shocks well. Everything else is basically "ASI is magic."

I would bet on the bomb.

Comment by Lichdar on The Dial of Progress · 2023-06-14T20:23:55.203Z · LW · GW

This frames things as an inevitability, which is almost certainly wrong; more specifically, opposition to a technology leads to alternatives being developed. E.g., widespread nuclear controls led to alternative energy sources being pursued.

Being controllable is unlikely; even if it is tractable for human controllers, it still represents power, which means it'll be treated as a threat by established actors, and its terroristic implications mean there is moral valence to policing it.

In a world with controls, grim triggers or otherwise, AI would have to develop along different lines, and likely in ways that are more human-compatible. In a world of intense grim triggers, it may be too costly to continue to develop beyond a point. "Don't build ASI or we nuke" is completely reasonable if both "building ASI" and "nuking" are negative, but the former is more negative.

Autonomous weapons are actually an excellent example of delay: despite excellent evidence of the superiority of drones, pilots have continued to mothball them for at least 40 years, and so have governments, in spite of wartime benefits.

The argument seems similar to the flaw in the "billion year" argument: we may die eventually, but life only persists by resisting death long enough to replicate.

As for real-world utility, notwithstanding some recent successes, going down without fighting for myself and my children would be quite silly.

Comment by Lichdar on The Dial of Progress · 2023-06-14T20:12:53.836Z · LW · GW

He discussed it here:

https://youtu.be/Ufm85wHJk5A?list=PLQk-vCAGvjtcMI77ChZ-SPP--cx6BWBWm

Comment by Lichdar on The Dial of Progress · 2023-06-14T19:46:52.485Z · LW · GW

No, I wouldn't want it even if it were possible, since by nature it is a replacement of humanity. I'd only accept Elon's vision of AI bolted onto humans, so that it effectively is part of us and thus can be said to be an evolution rather than a replacement.

My main crux is that humanity has to be largely biological due to holobiont theory. There's a lot of flexibility around that, but anything that threatens it is a nonstarter.

Comment by Lichdar on The Dial of Progress · 2023-06-14T19:18:09.262Z · LW · GW

Lead is irrelevant to human extinction, obviously. The first to die is still dead.

In a democratic world, those affected have a say in how AI should be inflicted upon them and in how much they want to die or suffer.

The government represents the people.

Comment by Lichdar on The Dial of Progress · 2023-06-14T19:05:27.573Z · LW · GW

I think even the wealthy supporters of it are more complex: I was surprised that Palantir's Peter Thiel came out discussing how AI "must not be allowed to surpass the human spirit," even as he is clearly looking to use AI in military operations. This all suggests significant controls incoming, even from those looking to benefit from it.

Comment by Lichdar on The Dial of Progress · 2023-06-14T18:28:17.867Z · LW · GW

The UK has already mentioned that perhaps there should be a ban on models above a certain level. Though it's not official, I have it on pretty good record that Chinese party members have already discussed worldwide war as potentially necessary (Erik Hoel also mentioned it, separately). Existential risk has been mentioned, and of course national risk is already a concern, so even for "mundane" reasons it's a matter of priority/concern, and grim triggers are a natural consequence.

Elon had a personal discussion with China recently as well, and given his well-known perspective on the dangers of AI, I expect that this point of view has only been reinforced.

And this is with barely reasoning chatbots!

As for Luddites, I don't see why inflicting dystopia upon humanity because it fits some sort of cute agenda serves any good purpose. But notably, the Luddites did not have the support of the government, and the government was not threatened by textile mills. Obviously this isn't the case with nuclear, AI, or bio. We've seen slowdowns on all of those.

"Worlds change" has no meaning: human culture and involvement influence the change of the world.