Posts

Apocalypse Prepping - Concise SHTF guide to prepare for AGI doomsday 2023-07-04T17:41:41.401Z

Comments

Comment by prepper on Apocalypse Prepping - Concise SHTF guide to prepare for AGI doomsday · 2023-07-05T22:38:03.780Z · LW · GW

It is a prepping guide, like it says in the title and on the introduction page. Prepping is the practice of preparing for disasters. Are you sure you actually opened the link I posted? Here is a PDF printout of the site: https://docdro.id/nnIJ16G

Or are you literally just downvoting because you got tired after one click?

Comment by prepper on Apocalypse Prepping - Concise SHTF guide to prepare for AGI doomsday · 2023-07-05T20:39:08.197Z · LW · GW

The audience is the general public. That is, anyone who has the attention span and intelligence to read what I wrote without feeling the need to disregard the idea out of comfort, personal conviction, laziness, etc. I was toying with the idea of writing a much shorter version for stupid people, but I think that would just be an exercise in futility. And I don't really like the idea of stupid people gaining such an existential advantage either.

Unfortunately I am not allowed to create another post for the next 7 days, due to low karma. However, I have written a new post that I will publish soon. It is quite long. If you are interested, you can read it here:

https://pastebin.com/7WR0P8ZM 

Maybe you could tell me how much it is on track, given your experience with this community. I would also like to understand how my perspective might be flawed. I have been reading quite a bit on here and watching some videos. Although the post was influenced a lot by your replies, it is not meant to attack you, but rather those who follow Eliezer out of authority and influence instead of thinking for themselves.

I don't write, and will not write, these things to please people, though. If people cannot jump over their own shadow and process dissonance in a healthy, constructive way, then so be it.

Comment by prepper on Apocalypse Prepping - Concise SHTF guide to prepare for AGI doomsday · 2023-07-04T19:09:43.314Z · LW · GW

Thanks for your reply.

On the point of contention: I believe the website actually illustrates fairly well that it is a fallacy in itself to get hung up on such nuances and to demand proof of any particular possible future outcome before shaping one's actions, when the question is whether you should do your best to mitigate the risk by ensuring your basic survival.

An unsurvivable AGI outcome is just one of many possible scenarios. Although you can speculate about the details of how it could play out (partial extermination, full extermination, no extermination) and what means AGI might use, the whole point of thinking logically about the issue in terms of ensuring your survival is to recognize that those things are ultimately unknowable. What you have to do is simply develop a strategy that deals with all eventualities as well as possible.

I don't know if it is clear to you, but these are the basic scenarios of AGI development considered on the website, which cover essentially everything you can consider:

  1. AGI will be docile, that is super-intelligent but fully obedient to humans
    -> society can collapse due to abuse or misuse (ranging from mild to severe)
    -> society can collapse due to system shock (ranging from mild to severe)
    -> society can flourish due to benefits
  2. AGI will become fully independent
    -> accidental extinction due to insanity (ranging from mild to severe)
    -> deliberate extinction to eliminate humans as a threat (ranging from mild to severe)
    -> coincidental extinction due to disregard for human life (ranging from mild to severe)
    -> AGI will become a god or guardian angel and help or digitalize human existence
    -> AGI will leave on a spaceship and simply leave us alone, because we are insignificant and there is an effectively unlimited number of planets to exploit for resources
  3. Experimental stages of AGI lead to some kind of destruction
  4. AGI will not develop in the near future, because it is too complicated or outlawed
  5. AGIs will fight against each other and may cause some kind of detrimental chaos

Obviously you can also mix these scenarios, as they all derive from each other, and they can all be rather permanent or temporary in nature, and mild or severe as well. For example, you could have a collapsing stock market due to the system shock of hyper-intelligent but fully human-controlled AI systems, while independent AGI systems rise two years later and wage digital wars against each other, some trying to exterminate humans while others try to protect or enslave them. While this seems somewhat ridiculous to consider, it is just to illustrate that there is a wide range of possible outcomes with a wide range of details, and no single definitive outcome (e.g. full extinction of every last human on earth from AGI) can be determined with any degree of certainty.

In the end, in most scenarios society will recover in time and some amount of human life will remain afterwards. But even if there were just a single scenario with an off chance of human survival, that would be enough to work towards, even by literally spending all your time and resources on it. Anything else can only be described as suicidal.

Personally though, I believe the chances of overall human survival are very high, and the chances of hostile AGI are rather low. This is how expert opinion reflects it as well. But there is a lot of in-between risk that you need to consider, most of it concerning intermediary stages of AGI, and this is where spending a reasonable amount of money (e.g. 2000 Euros) comes into play.
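To make that concrete, here is a minimal sketch of the kind of probability-weighted comparison this argument rests on. All scenario probabilities and survival chances below are made-up placeholders, not figures from the website or from expert surveys:

```python
# Toy expected-survival comparison: prepping vs. not prepping.
# Every number below is a hypothetical placeholder for illustration,
# not a figure taken from the guide or from expert surveys.

scenarios = [
    # (name, P(scenario), P(survive | no prep), P(survive | prep))
    ("no AGI / business as usual",         0.50, 1.00, 1.00),
    ("docile AGI, system shock",           0.25, 0.70, 0.95),
    ("hostile AGI, partial collapse",      0.15, 0.30, 0.60),
    ("hostile AGI, near-total extinction", 0.10, 0.00, 0.01),
]

def survival_chance(prepped: bool) -> float:
    """Probability-weighted chance of survival across all scenarios."""
    return sum(p * (s_prep if prepped else s_none)
               for _, p, s_none, s_prep in scenarios)

print(f"without prepping: {survival_chance(False):.3f}")  # ~0.72
print(f"with prepping:    {survival_chance(True):.3f}")   # ~0.83
```

The point of the sketch is only that a fixed, modest cost can buy a meaningful improvement in the bad branches, whatever the exact numbers turn out to be.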

So I was hoping you could help me improve the page: tell me why that was not clear from reading it through, and maybe how I could write it differently without adding thousands of words to the page.

This guide was not written for LessWrong, by the way, but for common people who are smart enough to follow the arguments and willing to protect themselves if necessary, in light of this new information, proper consideration, and risk management.

Comment by prepper on The literature on aluminum adjuvants is very suspicious. Small IQ tax is plausible - can any experts help me estimate it? · 2023-07-04T15:00:33.759Z · LW · GW

Only live-attenuated vaccines may (sometimes) not need adjuvants. Plus, you sometimes have other ingredients acting as adjuvants that are not declared as such. For example, mercury is declared as a preservative, not an adjuvant, but it performs the same function. Also, recently they started removing from the ingredient list constituents that were part of the manufacturing process (e.g. culture media) but are not an "intended" part of the final product. If a food manufacturer washes potatoes with iodine in order to clean them, for example, he is not required to list that as an ingredient, regardless of whether the quantities in the final product are relevant.

To put it simply: without a live virus, the immune system recognizes the would-be antigens as mere garbage molecules, not as a threat. In order for immunization to work, you need to inject something dangerous alongside the antigen, such as a live virus, aluminium, or some kind of toxic protein or cytokine.

Comment by prepper on The literature on aluminum adjuvants is very suspicious. Small IQ tax is plausible - can any experts help me estimate it? · 2023-07-04T14:46:20.836Z · LW · GW

- Alternatives to aluminum exist, including mRNA and other new adjuvants.

This is not really true. Despite aluminium being a neurotoxin that is detrimental at any dose, with neurotoxic effects identical to mercury's, no viable alternatives exist that have been as extensively tested, and most alternatives are in fact much more dangerous. I'm not an expert on mRNA, but we have all seen with the covid shots that the side effects were much more extreme. And for all we know, they didn't need adjuvants because the spike protein itself is cytotoxic.

This is not to say that we can't develop better adjuvants, for example by using natural chemokines, which are of course not licensable and hence not researched. However, this would only really address the small subset of the population who are sensitive to heavy metals and somehow can't detoxify from them as effectively as others. Autoimmune reactions, anaphylaxis, and other side effects of vaccines, some of them also resulting in encephalitis and subsequently autism, will persist even with healthier adjuvants, because artificially provoking the immune system sometimes just leads to unpredictable immune reactions, no matter what you do.

Conceivably you could work out some extended-release nanoparticle coating to mitigate this problem. But who cares about that, if manufacturers are exempt from liability anyway?

Of course, with a vaccine schedule as lunatic as the one found in the US, with close to 100 shots, a toxic metal whose half-life is estimated at around 7 years accumulates so much that it is predestined to cause all sorts of neurological dysfunction. In a sane vaccine schedule, you have just 10 shots and the ability for people to opt out, and aluminium then isn't really a major priority for improving vaccine safety. There is even limited research and reason to believe that aluminium hydroxide does not cross the blood-brain barrier that much in most people. But of course the overall toxicology of aluminium is not that simple, especially if you inject doses that are 10 to 100 times higher than what you would naturally absorb through food.
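For a rough sense of how a 7-year half-life drives accumulation, here is a minimal sketch of a one-compartment, first-order decay model. The per-shot dose and shot spacing are assumptions for illustration only, not claims about any actual schedule or substance:

```python
import math

# One-compartment accumulation model with first-order (exponential) decay.
# The half-life, per-shot dose, and shot spacing are illustrative
# assumptions, not measured pharmacokinetic values.

HALF_LIFE_YEARS = 7.0
DECAY_RATE = math.log(2) / HALF_LIFE_YEARS  # elimination rate per year

def body_burden(doses_mg, interval_years):
    """Amount retained immediately after the last of a series of
    equal, evenly spaced doses."""
    burden = 0.0
    for dose in doses_mg:
        burden = burden * math.exp(-DECAY_RATE * interval_years) + dose
    return burden

# Example: 20 hypothetical 0.5 mg shots, one every 6 months.
print(f"{body_burden([0.5] * 20, 0.5):.2f} mg retained")  # ~6.5 mg
```

With so little clearing between doses (about 5% per 6 months at a 7-year half-life), the retained amount climbs towards dose / (1 - e^(-k·Δt)) instead of resetting between shots.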

Bottom line: we don't really know about adjuvant safety, because really no one cares and no one does the studies. With a little more censorship and vaccine Lysenkoism, the profits just keep rolling in. Anti-vaxxer science is fascist science that undermines society and harms the greater good; that's all you really need to know.