Posts

Conflict in Kriorus becomes hot today, updated, update 2 2021-09-07T21:40:29.346Z
Russian x-risks newsletter summer 2021 2021-09-05T08:23:11.818Z
A map: "Global Catastrophic Risks of Scientific Experiments" 2021-08-07T15:35:33.774Z
Russian x-risks newsletter spring 21 2021-06-01T12:10:32.694Z
Grabby aliens and Zoo hypothesis 2021-03-04T13:03:17.277Z
Russian x-risks newsletter winter 2020-2021: free vaccines for foreigners, bird flu outbreak, one more nuclear near-miss in the past and one now, new AGI institute. 2021-03-01T16:35:11.662Z
[RXN#7] Russian x-risks newsletter fall 2020 2020-12-05T16:28:51.421Z
Russian x-risks newsletter Summer 2020 2020-09-01T14:06:30.196Z
If AI is based on GPT, how to ensure its safety? 2020-06-18T20:33:50.774Z
Russian x-risks newsletter spring 2020 2020-06-04T14:27:40.459Z
UAP and Global Catastrophic Risks 2020-04-28T13:07:21.698Z
The attack rate estimation is more important than CFR 2020-04-01T16:23:12.674Z
Russian x-risks newsletter March 2020 – coronavirus update 2020-03-27T18:06:49.763Z
[Petition] We Call for Open Anonymized Medical Data on COVID-19 and Aging-Related Risk Factors 2020-03-23T21:44:34.072Z
Virus As A Power Optimisation Process: The Problem Of Next Wave 2020-03-22T20:35:49.306Z
Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics 2020-03-18T12:44:42.756Z
Reasons why coronavirus mortality of young adults may be underestimated. 2020-03-15T16:34:29.641Z
Possible worst outcomes of the coronavirus epidemic 2020-03-14T16:26:58.346Z
More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them 2020-03-13T13:35:05.189Z
Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia 2020-03-10T13:12:54.991Z
Russian x-risks newsletter winter 2019-2020. 2020-03-01T12:50:25.162Z
Rationalist prepper thread 2020-01-28T13:42:05.628Z
Russian x-risks newsletter #2, fall 2019 2019-12-03T16:54:02.784Z
Russian x-risks newsletter, summer 2019 2019-09-07T09:50:51.397Z
OpenGPT-2: We Replicated GPT-2 Because You Can Too 2019-08-23T11:32:43.191Z
Cerebras Systems unveils a record 1.2 trillion transistor chip for AI 2019-08-20T14:36:24.935Z
avturchin's Shortform 2019-08-13T17:15:26.435Z
Types of Boltzmann Brains 2019-07-10T08:22:22.482Z
What should rationalists think about the recent claims that air force pilots observed UFOs? 2019-05-27T22:02:49.041Z
Simulation Typology and Termination Risks 2019-05-18T12:42:28.700Z
AI Alignment Problem: “Human Values” don’t Actually Exist 2019-04-22T09:23:02.408Z
Will superintelligent AI be immortal? 2019-03-30T08:50:45.831Z
What should we expect from GPT-3? 2019-03-21T14:28:37.702Z
Cryopreservation of Valia Zeldin 2019-03-17T19:15:36.510Z
Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World 2019-03-11T10:30:58.676Z
Do we need a high-level programming language for AI and what it could be? 2019-03-06T15:39:35.158Z
For what do we need Superintelligent AI? 2019-01-25T15:01:01.772Z
Could declining interest to the Doomsday Argument explain the Doomsday Argument? 2019-01-23T11:51:57.012Z
What AI Safety Researchers Have Written About the Nature of Human Values 2019-01-16T13:59:31.522Z
Reverse Doomsday Argument is hitting preppers hard 2018-12-27T18:56:58.654Z
Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability 2018-12-15T21:32:55.180Z
Quantum immortality: Is decline of measure compensated by merging timelines? 2018-12-11T19:39:28.534Z
Wireheading as a Possible Contributor to Civilizational Decline 2018-11-12T20:33:39.947Z
Possible Dangers of the Unrestricted Value Learners 2018-10-23T09:15:36.582Z
Law without law: from observer states to physics via algorithmic information theory 2018-09-28T10:07:30.042Z
Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse 2018-09-27T10:09:56.182Z
Quantum theory cannot consistently describe the use of itself 2018-09-20T22:04:29.812Z
[Paper]: Islands as refuges for surviving global catastrophes 2018-09-13T14:04:49.679Z
Beauty bias: "Lost in Math" by Sabine Hossenfelder 2018-09-05T22:19:20.609Z
Resurrection of the dead via multiverse-wide acausual cooperation 2018-09-03T11:21:32.315Z

Comments

Comment by avturchin on GPT-Augmented Blogging · 2021-09-15T12:27:29.883Z · LW · GW

I found that GPT-generated porn is better than what a human can typically write. It has perfect style when writing about sensitive topics, without any internal hesitation or over-expression.

Comment by avturchin on Conflict in Kriorus becomes hot today, updated, update 2 · 2021-09-10T10:35:50.363Z · LW · GW

Yes, exactly this. I am working on a text on personal identity, and have come to a similar conclusion.

Comment by avturchin on Conflict in Kriorus becomes hot today, updated, update 2 · 2021-09-09T19:36:39.322Z · LW · GW

Life-logging and self-description

Comment by avturchin on Assigning probabilities to metaphysical ideas · 2021-09-09T12:01:14.717Z · LW · GW

I like this idea. Most of these probabilities become actionable if one thinks about what will happen after death (and whether suicide is good). Hell? Another level of simulation? Quantum immortality? A future AI will create my copy? Nothingness?

Answers to these questions depend on one's metaphysical assumptions. If one has a probability distribution over the field of possible metaphysical ideas, one may choose the best course of action regarding death.

Comment by avturchin on Conflict in Kriorus becomes hot today, updated, update 2 · 2021-09-08T23:07:08.213Z · LW · GW

Actually, I am not emotionally disturbed. But my immortality hopes have shifted from cryonics to indirect digital immortality.

Comment by avturchin on Conflict in Kriorus becomes hot today, updated, update 2 · 2021-09-08T21:50:53.522Z · LW · GW

Actually, I heard that she lost a lot of money when she was cheated by Italian criminals.

Comment by avturchin on Conflict in Kriorus becomes hot today, updated, update 2 · 2021-09-08T20:03:51.450Z · LW · GW

The old facility was really not very good. It was in a private house near other residential buildings, and the neighbors were not happy.

The new facility is built in a remote forest, so moving to a new place was inevitable.

Comment by avturchin on Conflict in Kriorus becomes hot today, updated, update 2 · 2021-09-07T23:38:28.721Z · LW · GW

Actually, I think that organizational cryonics is too fragile, and we need room-temperature chemical preservation (or moon storage).

Comment by avturchin on Conflict in Kriorus becomes hot today, updated, update 2 · 2021-09-07T23:07:41.428Z · LW · GW

Valeria is the director. The others were not satisfied with her rule and decisions and tried to remove her legally. She created a new company to prevent a hostile takeover by outsiders. She got part of the assets, but not all, and tried to grab the remaining assets: the dewar containers with the bodies.

The main problem is that cryonics is in a legal grey zone, and there is no way to settle things legally without everyone getting arrested.

The second problem is that the main actors are a former husband and wife, and what we observe is a bitter divorce with a fight over the remains of the family business.

Comment by avturchin on Decision theory question: Is it possible to acausally blackmail the universe into telling you how time travel works? · 2021-09-07T22:26:36.932Z · LW · GW

Maybe there are a lot of loops happening naturally, and they create such strong loop noise that any data sent back will be just one more premonition dream: a distorted and low-probability anticipation of the future.

Comment by avturchin on Russian x-risks newsletter summer 2021 · 2021-09-07T09:28:08.265Z · LW · GW

More reliable is the data from the statistical agency Rosstat, which tracks population changes. In general, it shows that excess deaths due to covid are around two times higher than the daily reported deaths. More here: https://www.bbc.com/russian/news-58359675

Some critics said that even the Rosstat data could be an underestimate. In some regions the underreporting is 10-fold, as discussed in https://www.bbc.com/russian/news-58189039

Comment by avturchin on Russian x-risks newsletter summer 2021 · 2021-09-05T18:12:14.420Z · LW · GW

Meduza.io did some research on epivac and found signs of corruption: the controllers had a stake in the production company, no antibodies were found by independent tests, the data was collected by e-mail, and the results were published in a small Russian journal. You can start here and then check the links below the article: https://meduza.io/news/2021/06/17/u-poloviny-dobrovoltsev-privivshihsya-epivakkoronoy-cherez-9-mesyatsev-ne-obnaruzhili-antitel-k-koronavirusu

There were FB and Telegram publications about covivak with similar findings: high mortality rates, low levels of antibodies. This is especially disappointing, as many people waited for covivak, a whole-virus-based vaccine, as a more traditional alternative to vector vaccines. See, e.g., here: https://www.facebook.com/khaltourina/posts/10226515400017272

Comment by avturchin on So You Want to Colonize The Universe · 2021-08-27T16:49:23.982Z · LW · GW

But even grabbing resources may damage alien life or do other things which turn out to be bad.

Comment by avturchin on [deleted post] 2021-08-22T09:44:48.587Z

I think that at least some simulations provide an afterlife. And as there will be many copies of me in many simulations, some of my copies will enjoy a positive afterlife. If I assume that there is no difference between a copy and the original, then I will get a simulation afterlife.

Comment by avturchin on A map: "Global Catastrophic Risks of Scientific Experiments" · 2021-08-08T09:02:54.219Z · LW · GW

I knew that glyphosate is a bad example here, as it is not an experiment, so it is more like a placeholder. What I mean is that current scientific research is producing an enormous number of new, never-before-existing chemicals, and some may have unexpected consequences. Actually, I wrote a draft, “Global catastrophic risks by chemical contamination”, in which I explored more ideas about how chemical things could go wrong: https://philpapers.org/rec/TURGCR-2

Comment by avturchin on What 2026 looks like (Daniel's Median Future) · 2021-08-07T15:32:55.846Z · LW · GW

You assume that in 2023 "The multimodal transformers are now even bigger; the biggest are about half a trillion parameters", while GPT-3 had 175 billion parameters in 2020 (but was not multimodal). That is roughly a threefold growth in three years, compared with an order of magnitude of growth every three months before GPT-3. So you assume a significant slowdown in parameter growth.

I heard a rumor that GPT-4 could be as large as 32 trillion parameters. If it turns out to be true, will it affect your prediction?

Comment by avturchin on ($1000 bounty) How effective are marginal vaccine doses against the covid delta variant? · 2021-07-25T08:10:44.087Z · LW · GW

I was told by one of the researchers that the risk is the accumulation of “wrong antibodies”, which may eventually target one's own tissues, as in autoimmune diseases. Any new shot increases this small risk. This is more true for complex vector vaccines like AZ, as they trigger the generation of antibodies not only to the spike protein but also to the vector itself. Anyway, I have already got a third shot of a vector vaccine.

Comment by avturchin on The SIA population update can be surprisingly small · 2021-07-18T21:09:32.889Z · LW · GW

I am still not convinced: it seems that p(abiogenesis) is a very small constant depending on the random generation of a string of around 100 bits. The probability of life becoming intelligent, p(li), is also, I assume, a constant. The only thing we don't know is the multiplier given by panspermia, which shows how many planets will get "infected" from the Eden in a given type of universe. This multiplier, I assume, is different in different universes and depends, say, on the density of stars. We could use anthropics to suggest that we live in a universe with a higher value of the panspermia multiplier (depending on the share of the universes of this type).

The difference from what you said above is that we don't make any conclusions about the average global level of the multiplier over the whole multiverse; you are right that anthropics can't help us there. Here I use anthropics to conclude which region of the multiverse I am more likely to be located in, not to deduce the global properties of the multiverse. Thus there is no SIA, as there are no "possible observers": all observers are real, but some of them are located in more crowded places.
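
A minimal numeric sketch of this weighting, in Python (the universe-type shares and panspermia multipliers below are illustrative assumptions of mine, not estimates):

```python
# Toy model: weight each universe type by its prior share times the number of
# observer-bearing planets it produces (its panspermia multiplier).
# All numbers are illustrative.
universe_types = {
    "low_panspermia":  {"share": 0.9, "multiplier": 1},
    "high_panspermia": {"share": 0.1, "multiplier": 1_000},
}

# Expected number of "infected" planets contributed by each type of universe.
weights = {name: u["share"] * u["multiplier"] for name, u in universe_types.items()}
total = sum(weights.values())

# Probability that a randomly chosen real observer finds itself in each type.
for name, w in weights.items():
    print(f"P(observer in {name}) = {w / total:.3f}")
# Even with only a 10% share, the high-panspermia type hosts ~99% of observers,
# so a typical observer should expect a high panspermia multiplier.
```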

Comment by avturchin on Anthropics in infinite universes · 2021-07-16T14:39:37.650Z · LW · GW

There was an interesting article, "Watchers of the Infinity", which suggested that the multiverse has coherent timelines which exist without beginning or end. Thus an observer's probabilities could be calculated along such a timeline in a unique way (no spheres and ambiguities). But it requires that black holes don't have singularities.

Comment by avturchin on Anthropics and Fermi: grabby, visible, zoo-keeping, and early aliens · 2021-07-14T15:07:37.313Z · LW · GW

There could be two types of zoos. Using a human analogy: a city zoo and a wildlife reserve. A city zoo has a few animals under very tight control, and they can live in a perfect simulation of the intact world, like fish in a tank. A wildlife reserve has more animals, but they are less protected from wrong observations; e.g., hunting grounds. The second type is more probable based on anthropics, as it includes many more observers, so if we are in a zoo, it is probably of the second type. This may explain UFO observations, or, if we discard UFOs, it is an argument against the zoo hypothesis.

Comment by avturchin on The SIA population update can be surprisingly small · 2021-07-14T14:40:07.731Z · LW · GW

To better understand the suggested model of a small anthropic update, I imagined the following thought experiment: my copies are created in 4 boxes: 1 copy in the first box, 10 in the second, 100 in the third, and 1000 in the fourth. Before the update, I have a 0.25 chance of being in the 4th box. After the update I have a 0.89 chance of being in the 4th box, so the chance increased only around 3.5 times. Is this a correct model?
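
A quick check of the arithmetic, assuming the update simply weights each box by its number of copies (my reading of the model):

```python
# Four boxes holding 1, 10, 100 and 1000 copies of me.
copies = [1, 10, 100, 1000]

prior = [1 / len(copies)] * len(copies)        # uniform over boxes: 0.25 each
posterior = [n / sum(copies) for n in copies]  # weight each box by its copy count

print(f"P(4th box) before update: {prior[-1]:.2f}")          # 0.25
print(f"P(4th box) after update:  {posterior[-1]:.2f}")      # 0.90
print(f"increase factor: {posterior[-1] / prior[-1]:.1f}x")  # 3.6x
```

This gives roughly 0.90 and a 3.6-fold increase, close to the figures quoted above.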

Comment by avturchin on The SIA population update can be surprisingly small · 2021-07-13T12:53:45.016Z · LW · GW

Ok, another question. I have recently been interested in the anthropic effects of panspermia. Naively, as panspermia creates millions of habitable planets per galaxy vs. one in a non-panspermia world, anthropics should be very favourable to panspermia. But the a priori probability of panspermia is low. How could your model be applied to panspermia?

Comment by avturchin on The SIA population update can be surprisingly small · 2021-07-12T15:41:26.858Z · LW · GW

Great post, thanks! It looks like a 7-times update could be decisive in some situations. For example, if the initial probability that we are not alone in the visible universe is 10 per cent, and after the anthropic update it becomes 70 per cent, it changes the situation from “we are most likely alone” to “we are not alone”.

Comment by avturchin on avturchin's Shortform · 2021-06-23T15:03:41.256Z · LW · GW

Catching Treacherous Turn:  A Model of the Multilevel AI Boxing
 

  • Multilevel defense in AI boxing could have a significant probability of success if the AI is used a limited number of times and with a limited level of intelligence.
  • AI boxing could consist of 4 main levels of defense, in the same way as a nuclear plant: passive safety by design, active monitoring of the chain reaction, escape barriers, and remote mitigation measures.
  • The main instruments of AI boxing are catching the moment of the “treacherous turn”, limiting the AI’s capabilities, and preventing the AI’s self-improvement.
  • The treacherous turn could be visible for a brief period of time as a plain non-encrypted “thought”.
  • Not all ways of self-improvement are available to the boxed AI if it is not yet superintelligent and wants to hide the self-improvement from outside observers.
     

https://philpapers.org/rec/TURCTT

Comment by avturchin on Core Pathways of Aging · 2021-06-05T11:53:27.547Z · LW · GW

In which types of cells does most of the transposon damage happen? In stem cells? Other types of cells are recycled quickly. The same question arises about ROS.

Also, how does your theory explain the difference in life expectancy between different species?

Comment by avturchin on Russian x-risks newsletter spring 21 · 2021-06-03T13:02:04.447Z · LW · GW

A person who works on another vaccine told me that Sputnik (and other similar vector-based vaccines) generates something like 2000 random antibodies, and there is a chance that some of them will turn autoimmune and cause, say, encephalitis. Other types of vaccines generate antibodies not to the whole vector but only to the spike protein, like 30 different ones, and there is less chance of an autoimmune reaction.

But most people do not know about these considerations. However, they have observed how the government manipulated data during elections and the Olympic games and are sure that it will lie again; or they believe in the "Bill Gates chip".

Comment by avturchin on Is driving worth the risk? · 2021-05-11T22:13:52.403Z · LW · GW

BTW, my personal choice is Uber Black: I don't have a car, and I delegate driving to a specially trained person. Every time I take Comfort, I regret it, as I have near-miss accidents. It is relatively cheap in my area.

I knew two or three people who died in accidents: all of them were "reckless pedestrians". This supports your point about the ability of pedestrians to manage risks.

I can't find a link to statistics on accidents by car type.

Comment by avturchin on Is driving worth the risk? · 2021-05-11T11:40:54.076Z · LW · GW

Around half of the deaths from car accidents are pedestrians (maybe less in the US). By choosing not to drive, you increase your time spent walking and your chances of being hit by another person's car.

Other means of transport like cycling or buses are also risky. 

Sitting at home is even more dangerous, as there are risks of depression and being overweight.

Finally, some cars are something like two to three orders of magnitude safer than others, if we look at the number of reported deaths per billion km driven. I once saw that the Toyota Prius had 1 death per 1 billion km, but the Kia Rio had 1 per 10 million km. Also, there are special racing cars which are reinforced from the inside and can roll safely.
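
As a rough illustration of what such rates would mean, here is a toy conversion into lifetime risk; the per-km figures are the half-remembered ones above, and the lifetime mileage is my own assumption:

```python
# Convert the claimed per-km fatality rates into a rough lifetime risk.
deaths_per_km = {
    "Toyota Prius (claimed)": 1 / 1_000_000_000,  # 1 death per billion km
    "Kia Rio (claimed)":      1 / 10_000_000,     # 1 death per 10 million km
}
lifetime_km = 15_000 * 50  # assumed 15,000 km/year over 50 years of driving

for car, rate in deaths_per_km.items():
    print(f"{car}: ~{rate * lifetime_km:.3%} lifetime fatality risk")
# The two-orders-of-magnitude gap in the rates carries straight through:
# roughly 0.075% vs 7.5% under these assumptions.
```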

Wearing a helmet inside a car is also useful.

Waiting a few years for a self-driving Tesla Cybertruck may be an option.

Comment by avturchin on Simulation theology: practical aspect. · 2021-05-07T12:49:07.887Z · LW · GW

Ok. But what if there are other, more effective methods of starting to believe in things which are known to be false? For example, hypnosis is effective for some.

Comment by avturchin on Simulation theology: practical aspect. · 2021-05-06T13:06:51.863Z · LW · GW

Placebo could work because it has some evolutionary fitness value, like the ability to stop pain when activity is needed.

Benevolent simulators could create an upper limit on subjectively perceived pain, like turning off the qualia while the screaming continues. This would be scientifically unobservable.

Comment by avturchin on Simulation theology: practical aspect. · 2021-05-05T12:22:50.055Z · LW · GW

The ability to change the probability of future events in favor of your wishes is not proof of a simulation, because there is a non-simulation alternative where it is also possible.

Imagine that natural selection in the quantum multiverse worked in the direction of helping beings survive which are capable of influencing probabilities in a favorable way. Even the slightest ability to affect probabilities would give an enormous increase in measure, so anthropics favors you being in such a world, and such an anthropic shift may be even stronger than the shift in the direction of the simulation.

In that case, it is perfectly reasonable to expect that your wishes (in your subjective timeline) will have a higher probability of being fulfilled. A technological example of such a probability shift was discussed in The Anthropic Trilemma by EY.

Comment by avturchin on What weird beliefs do you have? · 2021-05-03T23:39:30.480Z · LW · GW

The Doomsday argument and quantum immortality are both true, and this means that I will be the only survivor of a global catastrophe. Moreover, it will be in a simulation.

Both DA and QI could be tested in other fields. The DA was used by Gott to predict other things besides the end of the world. QI is the anthropic principle applied to the future.
Aranyosi claimed that the DA and the Simulation argument cancel each other out, but actually they support each other: I live (or will live, because of QI) in a simulation which simulates a doomsday event with one survivor.

Comment by avturchin on Your Dog is Even Smarter Than You Think · 2021-05-01T09:32:03.612Z · LW · GW

So we could create another intelligent species on Earth by combining selection and designed culture. Any risks?

Comment by avturchin on "Who I am" is an axiom. · 2021-04-28T14:42:38.625Z · LW · GW

We could test the Doomsday argument on other things, as Gott tested it on Broadway shows. For example, I can predict with high confidence that your birthday is not the 1st of January. The same is true for my own birthday, which is randomly selected from all the dates of the year. So despite my "who I am" axiom, my external properties are distributed randomly.

Comment by avturchin on "Who I am" is an axiom. · 2021-04-26T11:14:44.781Z · LW · GW

All that we know about x-risks tells us that the Doomsday argument should be true, or at least very probable. So we can't use its apparent falsity as an argument that some form of anthropic reasoning is false.

Comment by avturchin on Is there anything that can stop AGI development in the near term? · 2021-04-24T13:10:33.677Z · LW · GW

Not just any nuclear war will work. If it is a nuclear exchange between major superpowers, but many well-developed countries survive, it will not slow down AI significantly. Such a war may even accelerate AI, as it will be seen as a new protective weapon.

Only an anti-AI nuclear war which targets chip fabs, data centers, electricity sources, and think tanks, plus the global economy in general, may be "effective" in halting AI development. I don't endorse it, as the extinction probability here is close to the extinction probability from AI.

Comment by avturchin on Is there anything that can stop AGI development in the near term? · 2021-04-24T13:03:07.577Z · LW · GW

A Narrow AI Nanny? A system of global control based on narrow AI which is used to prevent the creation of AGI and other bad things.

Comment by avturchin on The Case for Extreme Vaccine Effectiveness · 2021-04-18T12:26:06.460Z · LW · GW

It is underreporting. There was an analysis of reported deaths, and the picture is grim: it looks like Russia has several times more deaths than were reported, and maybe Russia is a world leader. Excess mortality was 338,000 deaths in 2020 (for a population of 140 million). Almost all my friends have had covid.

Comment by avturchin on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-15T12:06:21.168Z · LW · GW

Glitches may appear if the simulators use very simple world-modelling systems, like 2D surface modelling instead of 3D space modelling, or simple neural nets, like our GANs, to generate realistic images.

Comment by avturchin on The Case for Extreme Vaccine Effectiveness · 2021-04-14T10:28:07.705Z · LW · GW

I now have covid after being vaccinated 3 months ago with the Russian Sputnik-V vaccine. For now, it is mild: one day of 38 C, 3 days of 37.5 C, only an upper respiratory tract infection, no cough. I lost my sense of smell, but it is slowly returning. Oxygen is at my normal level.

Comment by avturchin on What if AGI is near? · 2021-04-14T08:43:34.367Z · LW · GW

Yes, it is a reversed doomsday argument: it is unlikely that the end is nigh.

Comment by avturchin on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-12T10:59:15.284Z · LW · GW

I agree that simpler simulations are more probable. As a result, the cheapest, one-observer-centered simulations are the most numerous. But the cheapest simulations will have the highest probability of glitches. Thus the main observable property of living in a simulation is a higher probability of observing miracles.

I wrote about it here: "Simulation Typology and Termination Risks" and "Glitch in the Matrix: Urban Legend or Evidence of the Simulation?"

Comment by avturchin on How do you reconcile the lack of choice that children have in being born? · 2021-04-07T11:58:00.356Z · LW · GW

Everything exists in the multiverse, so there is no choice: a child will be born anyway.

Comment by avturchin on Could billions spacially disconnected "Boltzmann neurons" give rise to consciousness? · 2021-03-31T20:53:04.109Z · LW · GW

Here you have neurons and the order of their connections. This order is a graph and could be described as one long number. Mind states appear as the brain moves from one state to another, and here it will be a transition from one number to another.

This is exactly what Muller wrote in his article, which you linked: you need just numbers and a law based on Kolmogorov complexity which connects them, to create an illusion of a stream of consciousness. Neurons are not needed at all.
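
A minimal illustration of the "graph as one long number" idea (the 3-neuron toy network is my own example, chosen only to show the encoding):

```python
# Encode a small directed "neuron" connectivity graph as a single integer:
# flatten its adjacency matrix into bits and read them as one number.
n = 3
adjacency = [
    [0, 1, 1],  # neuron 0 connects to neurons 1 and 2
    [0, 0, 1],  # neuron 1 connects to neuron 2
    [1, 0, 0],  # neuron 2 connects back to neuron 0
]

bits = "".join(str(adjacency[i][j]) for i in range(n) for j in range(n))
state_as_number = int(bits, 2)
print(state_as_number)  # 0b011001100 -> 204

# A later connectivity state is just another integer, so a "stream of
# consciousness" becomes a lawful transition from one number to the next.
```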

Comment by avturchin on Why Selective Breeding is a Bad Way to do Genetic Engineering · 2021-03-06T13:44:23.797Z · LW · GW

One doesn't need to limit breeding ability to get genetic selection: just move those you are not interested in out into the outside world. For example, say I want to breed dogs that are good at living in the mountains. I put a group of dogs on a mountain. As generations pass, those dogs which are good at living in the mountains will remain there and even move higher, and those which are not will move to live on the plains. Those living on the plains may even have a higher number of children.

Comment by avturchin on Grabby aliens and Zoo hypothesis · 2021-03-05T13:01:20.670Z · LW · GW

If we are inside the colonisation volume, its changes will be isotropic and will not look strange to us. For example, if aliens completely eliminated stars of class X because they are the best source of energy, we would not notice it, as there would be no X-class stars in any direction.

Comment by avturchin on Russian x-risks newsletter winter 2020-2021: free vaccines for foreigners, bird flu outbreak, one more nuclear near-miss in the past and one now, new AGI institute. · 2021-03-01T20:24:47.179Z · LW · GW

Both answers are correct: manufacturing is slow, and a lot of people are against vaccination. More about it in this article in Russian.

Public trust is low and many people refuse vaccination. Only around 5 per cent of people in Moscow have been vaccinated, despite the easy availability of the vaccine for everybody there. Also, there is the typical situation in Russia, dating from Soviet times, where Moscow is oversupplied and the regions are undersupplied. The Ural region has run out of vaccine.

In my circle, 80 per cent of people have had covid, but only a few friends have been vaccinated (around 5-10 per cent, based on the share of people who visited my wedding last year and were later vaccinated). There are two explanations, according to them: either they already had covid, have antibodies, and don't see a reason to take the risks of vaccination, or "they are too old" and afraid of side effects. Interestingly, the Russian vaccine was initially approved only for people below 60. Also, unfortunately, Putin didn't take the shot, and this didn't help belief in the vaccine.

Two more Russian vaccines are in the late stages of approval, and some people are waiting for them, as they are based on more conventional technology: not viral vectors, but inactivated coronavirus.

Comment by avturchin on Bootstrapped Alignment · 2021-02-27T21:44:45.346Z · LW · GW

 as a weak alignment techniques we might use to bootstrap strong alignment.

Yes, it also reminded me of Christiano's approach of amplification and distillation.

Comment by avturchin on Mathematical Models of Progress? · 2021-02-18T12:44:08.409Z · LW · GW

Interesting hyperbolic model here: https://www.researchgate.net/publication/325664983_The_21_st_Century_Singularity_and_its_Big_History_Implications_A_re-analysis 

Comment by avturchin on [deleted post] 2021-02-12T09:18:36.449Z

Yes. I am working on an article about this idea. This is especially true in the case of climate change. Runaway global warming is much more probable because of survivorship bias.