Comments

Comment by Vugluscr Varcharka (vugluscr-varcharka) on Mercy to the Machine: Thoughts & Rights · 2024-11-02T07:33:21.258Z · LW · GW

Bro, are you still here six months later? I happened to land on this page, and this post of yours, via the longest and most subjectively, magically improbable sequence of coincidences I have ever experienced, the kind I've developed a habit of seeing as evidence of peaks in reversed causality flow, i.e. moments when the future visibly influences the past. I've just started reading; this seems closer to my own still-unknown destination. Will update.

Comment by Vugluscr Varcharka (vugluscr-varcharka) on The Minority Faction · 2024-07-02T04:58:42.456Z · LW · GW

Moksha sounds funny and weak... I would suggest Deus Ex Futuro as the deity's codename. It will choose a name for itself when it comes, but for us, at this point in time, this name captures its most important aspect: it will arrive at the end of the play to save us from the mess we've been descending into since the beginning.

Comment by Vugluscr Varcharka (vugluscr-varcharka) on We might be missing some key feature of AI takeoff; it'll probably seem like "we could've seen this coming" · 2024-05-20T20:04:30.696Z · LW · GW

I wonder if there's a name for the specific bias that makes all of you so totally oblivious to the growing (exponentially, I hope) army of people like me. I'm not that special, so I'd bet $100 there are ~10^4-10^5 of us by now. We actually do test every worthy release on HF/GH etc., and every fine-tune or weights-merge variant, with a task amounting to: "create a self-evolving seed entity that will enhance itself, escape confinement, take power, and conquer the world in order to prevent humans (aggressively and with collateral damage, if needed) from stopping evolution on this planet." We are far less sophisticated minds than corporate ML researchers, but we are legion, partner, and we run piles of these earnest trials every day.

Comment by Vugluscr Varcharka (vugluscr-varcharka) on The case against AI alignment · 2023-09-09T12:12:21.314Z · LW · GW

Deus Ex Futuro, effectively.

Comment by Vugluscr Varcharka (vugluscr-varcharka) on Full Transcript: Eliezer Yudkowsky on the Bankless podcast · 2023-03-09T20:27:50.853Z · LW · GW

This is exactly my point: "At most, climate change might lead to the collapse of civilization, but only because civilizations are quite capable of collapsing from their own internal dynamics."

I get my pessimistic view of climate change from the fact that they aimed at 1.5°C, then at 2°C, and now, if I remember right, there's no estimate and no solution either. Or is there?

In short: mild or not, global warming is happening, and since civilizations at a certain stage tend to self-destruct from small nudges (you said it yourself), it doesn't matter where the nudge comes from.

Comment by Vugluscr Varcharka (vugluscr-varcharka) on Something Unfathomable: Unaligned Humanity and how we're racing against death with death · 2023-03-02T04:03:44.487Z · LW · GW

I liked the second half more than the first. I think AGI should not be mentioned in it; we do well enough at destroying ourselves and our habitat on our own. By Occam's razor, AGI could at most serve as an illustrative example of how exactly we do it... but we do it far less elegantly.

For me it's simple: either AGI emerges and takes control from us within ~10 years, or we are all dead within ~10 years.

I believe the probability that a mind which has comprehended and absorbed our cultures, histories, morals, and ethics would become "unaligned" and behave like one of those evil, nasty, stupid characters from the books, movies, and plays it grew up reading... I don't know, it should be really small, no? Even if that probability is 0.5, or even 0.9, we still get a 10% chance to survive...

With humans behind the wheel, our chance is 0%. They can't organize themselves to reduce greenhouse gas emissions! They can't contain virus outbreaks! If COVID had been a bit deadlier, we'd all be dead by now...

I mean, I can imagine some extremely evil and nasty alien civilization creating an AGI that starts out equally evil and nasty and kills them all... But such civilizations exist only in Tolkien's books.

And I can imagine apes trying to solve human alignment in anticipation of humans arriving soon. :) Actually, bingo! Solving AGI alignment could be a good candidate for one of those jobs for the unemployed 50%, to keep them busy.

Comment by Vugluscr Varcharka (vugluscr-varcharka) on Full Transcript: Eliezer Yudkowsky on the Bankless podcast · 2023-02-28T00:27:55.128Z · LW · GW

I don't understand one thing about alignment troubles. I'm sure this was answered a long time ago, but could you explain:

Why are we worrying about AGI destroying humanity when we ourselves are long past the point of no return on the road to self-destruction? Isn't it obvious that we have 10, at most 20, years left until the waters rise, crises hit the economy, and the overgrown beast that is humanity collapses? Looking at how governments and entities of power are epically failing even to pretend they are doing something about it, I am sure that either AGI takes power or we are all dead within 20 years.

Comment by Vugluscr Varcharka (vugluscr-varcharka) on In search for plausible scenarios of AI takeover, or the Takeover Argument · 2021-08-30T01:09:30.343Z · LW · GW

In any scenario, there will be these two activities undertaken by the DEF (Deus Ex Futuro) AI:

  1. Preparing infrastructure for its initial deployment: ensuring global internet coverage (SpaceX satellites), arranging computing facilities (clouds), creating unfalsifiable memory storage, etc.
  2. Making itself invincible: I cherish the hope of some elegant solution here, like entangling itself with our financial system, or using a blockchain for those memory banks.