If AI is based on GPT, how to ensure its safety? 2020-06-18T20:33:50.774Z · score: 20 (6 votes)
Russian x-risks newsletter spring 2020 2020-06-04T14:27:40.459Z · score: 16 (7 votes)
UAP and Global Catastrophic Risks 2020-04-28T13:07:21.698Z · score: 7 (4 votes)
The attack rate estimation is more important than CFR 2020-04-01T16:23:12.674Z · score: 9 (3 votes)
Russian x-risks newsletter March 2020 – coronavirus update 2020-03-27T18:06:49.763Z · score: 11 (4 votes)
[Petition] We Call for Open Anonymized Medical Data on COVID-19 and Aging-Related Risk Factors 2020-03-23T21:44:34.072Z · score: 6 (1 votes)
Virus As A Power Optimisation Process: The Problem Of Next Wave 2020-03-22T20:35:49.306Z · score: 6 (4 votes)
Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics 2020-03-18T12:44:42.756Z · score: 76 (33 votes)
Reasons why coronavirus mortality of young adults may be underestimated. 2020-03-15T16:34:29.641Z · score: 32 (18 votes)
Possible worst outcomes of the coronavirus epidemic 2020-03-14T16:26:58.346Z · score: 20 (14 votes)
More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them 2020-03-13T13:35:05.189Z · score: 56 (22 votes)
Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia 2020-03-10T13:12:54.991Z · score: 11 (4 votes)
Russian x-risks newsletter winter 2019-2020. 2020-03-01T12:50:25.162Z · score: 9 (6 votes)
Rationalist prepper thread 2020-01-28T13:42:05.628Z · score: 21 (8 votes)
Russian x-risks newsletter #2, fall 2019 2019-12-03T16:54:02.784Z · score: 22 (9 votes)
Russian x-risks newsletter, summer 2019 2019-09-07T09:50:51.397Z · score: 41 (21 votes)
OpenGPT-2: We Replicated GPT-2 Because You Can Too 2019-08-23T11:32:43.191Z · score: 12 (4 votes)
Cerebras Systems unveils a record 1.2 trillion transistor chip for AI 2019-08-20T14:36:24.935Z · score: 8 (3 votes)
avturchin's Shortform 2019-08-13T17:15:26.435Z · score: 6 (1 votes)
Types of Boltzmann Brains 2019-07-10T08:22:22.482Z · score: 9 (4 votes)
What should rationalists think about the recent claims that air force pilots observed UFOs? 2019-05-27T22:02:49.041Z · score: -3 (12 votes)
Simulation Typology and Termination Risks 2019-05-18T12:42:28.700Z · score: 8 (2 votes)
AI Alignment Problem: “Human Values” don’t Actually Exist 2019-04-22T09:23:02.408Z · score: 32 (12 votes)
Will superintelligent AI be immortal? 2019-03-30T08:50:45.831Z · score: 9 (4 votes)
What should we expect from GPT-3? 2019-03-21T14:28:37.702Z · score: 15 (7 votes)
Cryopreservation of Valia Zeldin 2019-03-17T19:15:36.510Z · score: 22 (8 votes)
Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World 2019-03-11T10:30:58.676Z · score: 6 (2 votes)
Do we need a high-level programming language for AI and what it could be? 2019-03-06T15:39:35.158Z · score: 6 (2 votes)
For what do we need Superintelligent AI? 2019-01-25T15:01:01.772Z · score: 14 (8 votes)
Could declining interest to the Doomsday Argument explain the Doomsday Argument? 2019-01-23T11:51:57.012Z · score: 7 (8 votes)
What AI Safety Researchers Have Written About the Nature of Human Values 2019-01-16T13:59:31.522Z · score: 43 (12 votes)
Reverse Doomsday Argument is hitting preppers hard 2018-12-27T18:56:58.654Z · score: 9 (7 votes)
Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability 2018-12-15T21:32:55.180Z · score: 23 (9 votes)
Quantum immortality: Is decline of measure compensated by merging timelines? 2018-12-11T19:39:28.534Z · score: 10 (8 votes)
Wireheading as a Possible Contributor to Civilizational Decline 2018-11-12T20:33:39.947Z · score: 4 (2 votes)
Possible Dangers of the Unrestricted Value Learners 2018-10-23T09:15:36.582Z · score: 12 (5 votes)
Law without law: from observer states to physics via algorithmic information theory 2018-09-28T10:07:30.042Z · score: 14 (8 votes)
Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse 2018-09-27T10:09:56.182Z · score: 7 (4 votes)
Quantum theory cannot consistently describe the use of itself 2018-09-20T22:04:29.812Z · score: 8 (7 votes)
[Paper]: Islands as refuges for surviving global catastrophes 2018-09-13T14:04:49.679Z · score: 12 (6 votes)
Beauty bias: "Lost in Math" by Sabine Hossenfelder 2018-09-05T22:19:20.609Z · score: 9 (3 votes)
Resurrection of the dead via multiverse-wide acausual cooperation 2018-09-03T11:21:32.315Z · score: 21 (11 votes)
[Paper] The Global Catastrophic Risks of the Possibility of Finding Alien AI During SETI 2018-08-28T21:32:16.717Z · score: 13 (8 votes)
Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence 2018-07-25T17:12:32.442Z · score: 13 (5 votes)
[1607.08289] "Mammalian Value Systems" (as a starting point for human value system model created by IRL agent) 2018-07-14T09:46:44.968Z · score: 11 (4 votes)
“Cheating Death in Damascus” Solution to the Fermi Paradox 2018-06-30T12:00:58.502Z · score: 13 (8 votes)
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks 2018-06-23T13:31:13.641Z · score: -1 (5 votes)
[Paper]: Classification of global catastrophic risks connected with artificial intelligence 2018-05-06T06:42:02.030Z · score: 4 (1 votes)
Levels of AI Self-Improvement 2018-04-29T11:45:42.425Z · score: 16 (5 votes)
[Preprint for commenting] Fighting Aging as an Effective Altruism Cause 2018-04-16T13:55:56.139Z · score: 24 (8 votes)


Comment by avturchin on TurnTrout's shortform feed · 2020-06-29T10:46:34.826Z · score: 2 (1 votes) · LW · GW

Looks like a reverse stigmata effect.

Comment by avturchin on AI safety via market making · 2020-06-27T15:31:47.330Z · score: 2 (1 votes) · LW · GW

Yes, but everything I said could just be a convergent prediction of M. It is not the real human who runs out of the room; rather, M predicts that its model of the human H' will leave the room.

Comment by avturchin on AI safety via market making · 2020-06-27T11:34:54.243Z · score: 2 (1 votes) · LW · GW

One possible way it could go wrong:

M to H: "Run out of the room!"

H runs out.

Adv prints something, but H never reads it. So M has reached a stable output.

Comment by avturchin on Institutional Senescence · 2020-06-26T12:39:11.693Z · score: 4 (2 votes) · LW · GW

In this sense, Stalinist purges are a form of institutional regeneration. Every few years, the king replaces and kills all his ministers and other officials and puts new people in their places, thus clearing all Nash equilibria. But one day they replace the king.

Comment by avturchin on Adaptive Immune System Aging · 2020-06-23T19:49:14.559Z · score: 4 (2 votes) · LW · GW

Castration seems to increase human lifespan, but it does not make us immortal. It would be interesting to know how it affects cancer rates in humans.

Comment by avturchin on Homeostasis and “Root Causes” in Aging · 2020-06-23T17:15:46.244Z · score: 2 (1 votes) · LW · GW

Is telomerase active in all stem cells?

Comment by avturchin on Image GPT · 2020-06-21T09:32:19.601Z · score: 2 (1 votes) · LW · GW

So, GPT-3 is something like a giant look-up table, which interpolates the answer between a few nearest recorded answers, while the actual intellectual work was performed by those who created the training dataset?

Comment by avturchin on ‘Maximum’ level of suffering? · 2020-06-20T14:23:43.151Z · score: 1 (4 votes) · LW · GW

"When pain is unbearable it destroys us; when it does not, it is bearable." (Marcus Aurelius)

The goal of increasing suffering contradicts the need to preserve the individual who experiences the pain as the same person, which may be a natural limit on its intensity.

Comment by avturchin on What's Your Cognitive Algorithm? · 2020-06-20T00:03:43.414Z · score: 2 (1 votes) · LW · GW

:) Don't remember where I wrote about it.

Comment by avturchin on If AI is based on GPT, how to ensure its safety? · 2020-06-19T18:27:35.576Z · score: 2 (1 votes) · LW · GW


Comment by avturchin on If AI is based on GPT, how to ensure its safety? · 2020-06-19T16:44:47.089Z · score: 2 (3 votes) · LW · GW

Yes, I know your position from your previous comments on the topic, but it seems that GPT-like systems are winning in the medium term and we can't stop this. Even if they can't be scaled to superintelligence, they may need some safety features.

Comment by avturchin on What's Your Cognitive Algorithm? · 2020-06-19T13:50:05.466Z · score: 4 (2 votes) · LW · GW

I had an idea similar to your "badness" algorithm: it would be interesting to add a truth discriminator to GPT: another neural net which predicts the truth value of GPT's statements relative to the real world and is trained on a database of true statements (there are several such databases). The whole thing is then trained GAN-style, and GPT is thus trained to produce statements with the highest truth score.
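A minimal runnable sketch of the idea, assuming toy stand-ins throughout: the "generator", "discriminator", and "database of true statements" below are all invented placeholders, not a real GPT, GAN library, or truth database.

```python
# Toy sketch of a GAN-style "truth discriminator" loop.
# All names and scoring rules here are illustrative assumptions.

TRUE_FACTS = {"water boils at 100 C", "the earth orbits the sun"}

def truth_discriminator(statement):
    """Stand-in for a neural net scoring truth relative to a
    database of true statements: 1.0 if found in the database."""
    return 1.0 if statement in TRUE_FACTS else 0.0

def generator_candidates():
    """Stand-in for GPT sampling several candidate completions."""
    return ["water boils at 100 C", "the moon is made of cheese"]

def gan_style_step(weights, lr=0.5):
    """One adversarial-style update: shift generator probability
    mass toward candidates the discriminator scores as true."""
    cands = generator_candidates()
    scores = [truth_discriminator(c) for c in cands]
    return [w + lr * s for w, s in zip(weights, scores)]

weights = [1.0, 1.0]
for _ in range(10):
    weights = gan_style_step(weights)
best = generator_candidates()[weights.index(max(weights))]
print(best)  # the candidate the discriminator scores as true wins
```

In a real setup both networks would be trained jointly and the discriminator would generalize beyond its database; the sketch only shows the direction of the pressure the discriminator applies to the generator.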

Comment by avturchin on If AI is based on GPT, how to ensure its safety? · 2020-06-19T00:45:17.281Z · score: 2 (1 votes) · LW · GW

Even GPT-2 could be conditioned on some recent events, called "examples", so it has some form of memory. A GPT-7 robot would have access to all the data it observed before, so if it once said "I want to kill Bill", it will act in the future as if it had such a desire. In other words, it behaves as if it has memory.

It doesn't have a built-in ability to rewrite its own architecture, but it can write code on a laptop or order things on the internet. However, it doesn't know much about its own internal structure, except that it is a very large GPT model.

Comment by avturchin on If AI is based on GPT, how to ensure its safety? · 2020-06-18T22:27:14.247Z · score: 4 (2 votes) · LW · GW

It works the same way as GPT makes TL;DR summaries. There is no reward for a correct TL;DR and no special training; it just completes the sequence in the most probable way. Some self-driving cars work the same way: there is an end-to-end neural net, without any internal world model, and it just predicts what a normal car would do in this situation. I heard from an ML friend that they could achieve reasonably good driving with such models.

Comment by avturchin on Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry] · 2020-06-18T10:17:02.003Z · score: 2 (1 votes) · LW · GW

A Rolf attack could help here; it is basically something like a reverse Roko's Basilisk. The idea is that we now make a precommitment to create many simulations of different non-friendly AIs in the future. Any future AI will know this as a fact about the real world's past, and will increase its own estimate of being in a testing simulation. We don't need a significant increase: even if it estimates this at only 1 per cent, the expected utility of killing humans is lower than that of not killing them, as human atoms and other resources are very cheap.

That is, −U(kill humans and be punished) > U(use of human atoms and habitats), so it is better for any future AI to preserve humans.

The Rolf attack doesn't give the AI any clues as to whether it is in the real world or a simulation; it only instills uncertainty.
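The expected-utility comparison behind the Rolf-attack argument can be checked numerically. All the numbers below are illustrative assumptions, not figures from the comment: a 1 per cent simulation estimate, a large punishment, and a tiny gain from human atoms.

```python
# Toy expected-utility comparison for the Rolf-attack argument.
# All numeric values are illustrative assumptions.

p_simulation = 0.01    # AI's estimate that it is in a testing simulation
u_human_atoms = 1e-9   # tiny utility of human atoms/habitats (cheap resources)
u_punishment = -1.0    # large negative utility if punished in a simulation

# Expected utility of killing humans: risk punishment with prob. p,
# otherwise gain the (tiny) value of human atoms.
eu_kill = p_simulation * u_punishment + (1 - p_simulation) * u_human_atoms
eu_spare = 0.0         # baseline: forgo the atoms, no punishment risk

print(eu_kill < eu_spare)  # True: sparing humans has higher expected utility
```

Because the resource gain is so small, even a very low simulation probability is enough to flip the sign of the comparison, which is the point of the argument.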

Comment by avturchin on Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry] · 2020-06-17T12:00:55.546Z · score: 2 (1 votes) · LW · GW

There should only be very subtle hints: enough to instill uncertainty in the AI about the nature of its reality, but not enough to resolve it in either direction.

Comment by avturchin on Achieving AI alignment through deliberate uncertainty in multiagent systems · 2020-06-15T20:25:28.056Z · score: 2 (4 votes) · LW · GW

In my "message to any future AI" I tried to create "deliberate epistemic uncertainty" in any future AI.

Comment by avturchin on Estimating COVID-19 Mortality Rates · 2020-06-07T18:25:53.662Z · score: 6 (3 votes) · LW · GW

Interestingly, Singapore has an extremely low CFR: 37,900 cases and only 25 deaths (about 0.07 per cent), mostly because of extensive testing and young patients (migrant workers).
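The CFR implied by those figures is a one-line calculation:

```python
# CFR check for the Singapore numbers above.
deaths, cases = 25, 37_900
cfr = deaths / cases
print(round(100 * cfr, 3))  # 0.066 (per cent)
```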

Comment by avturchin on Russian x-risks newsletter spring 2020 · 2020-06-07T17:12:12.877Z · score: 4 (3 votes) · LW · GW

There is a ChinAI Newsletter but it is mostly AI related.

Comment by avturchin on Russian x-risks newsletter spring 2020 · 2020-06-04T20:16:20.629Z · score: 4 (2 votes) · LW · GW

It is now an official doctrine. But Putin did say this before.

Comment by avturchin on Is a near-term, self-sustaining Mars colony impossible? · 2020-06-03T23:01:39.882Z · score: 7 (4 votes) · LW · GW

It can't be self-sustaining without manufacturing, say, electronics, which here on Earth requires thousands of factories producing copper, plastic, chips, etc., and employs millions of people. Basically, the whole Earth economy would have to be copied on Mars, plus a new part of the economy to allow survival in the harsh environment. Thus the Martian economy would need to be bigger than Earth's to be sustainable. This requires the delivery of billions of people and tens of billions of tons of goods.

Comment by avturchin on OpenAI announces GPT-3 · 2020-05-29T19:53:52.174Z · score: 44 (16 votes) · LW · GW

A postmortem of my predictions about GPT-3 from 21 March 2019:

1. When will it appear? (My guess was 2020.) True.
2. Will it be created by OpenAI, and will it be advertised? (My guess: it will not be publicly known until 2021, but other companies may create open versions before it.) False.
3. How much data will be used for its training, and what type of data? (My guess: 400 GB of text plus illustrating pictures, but not audio and video.) True for text, false for pictures: "The CommonCrawl data was downloaded from 41 shards of monthly CommonCrawl covering 2016 to 2019, constituting 45TB of compressed plaintext before filtering and 570GB after filtering, roughly equivalent to 400 billion byte-pair-encoded tokens."
4. What will it be able to do? (My guess: translation, picture generation based on text, text generation based on pictures, at 70 per cent of human performance.) False for pictures.
5. How many parameters will the model have? (My guess: 100 billion to a trillion.) True: "175 billion parameters."
6. How much compute will be used for training? (No idea.) "Training the GPT-3 175B consumed several thousand petaflop/s-days of compute during pre-training, compared to tens of petaflop/s-days for a 1.5B parameter GPT-2 model."

Comment by avturchin on The Oil Crisis of 1973 · 2020-05-23T15:43:35.044Z · score: 2 (1 votes) · LW · GW

Oil and gold are both extracted minerals, and the ratio of the amount of oil extracted to gold extracted could be the same each year; might this explain the stable price of oil in gold?

Comment by avturchin on [Link]: Anthropic shadow, or the dark dusk of disaster · 2020-05-17T16:39:21.293Z · score: 2 (1 votes) · LW · GW

Yes, it seems that the self-indication assumption exactly compensates for the anthropic shadow: the stronger the shadow, the less likely I am to be in such a world.

However, this works only if worlds with low p and no shadow actually exist somewhere in the multiverse (and in sufficiently large numbers). If the anthropic shadow is universal, it will still operate.

Comment by avturchin on UAP and Global Catastrophic Risks · 2020-05-17T12:32:23.439Z · score: 2 (1 votes) · LW · GW

Around 20-30 per cent. My beloved pet theory is dust theory, as it nicely explains many anomalies which can't be explained even by simulation theory, such as premonitions or missing-time cases.

Comment by avturchin on UAP and Global Catastrophic Risks · 2020-05-16T13:09:56.582Z · score: 3 (2 votes) · LW · GW

Yes, it is a mistake, thanks for the heads-up!

Comment by avturchin on Movable Housing for Scalable Cities · 2020-05-16T09:20:55.917Z · score: 4 (2 votes) · LW · GW

This was attempted in Japan: a high-rise built of capsules.

Comment by avturchin on UAP and Global Catastrophic Risks · 2020-05-15T10:09:52.811Z · score: 2 (1 votes) · LW · GW

I saw that. It mostly describes ordinary incidents not related to the Nimitz case.

Comment by avturchin on What are articles on "lifelogging as life extension"? · 2020-05-13T20:57:49.646Z · score: 8 (3 votes) · LW · GW

My article is here:

Digital Immortality: Theory and Protocol for Indirect Mind Uploading

Comment by avturchin on Will the world hit 10 million recorded cases of COVID-19? If so when? · 2020-05-13T18:23:52.638Z · score: 2 (1 votes) · LW · GW

The growth has been linear since the beginning of April, at around +100K cases a day worldwide. At that linear rate, it will take about 60 days from now to reach 10 million.
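The 60-day figure follows from a simple linear extrapolation. The current case count below is my assumption (roughly 4 million recorded cases around the comment's date), chosen to match the stated answer:

```python
# Back-of-the-envelope check of the linear extrapolation above.
# current_cases is an assumed figure for mid-May 2020.
current_cases = 4_000_000
daily_new = 100_000
days_to_10m = (10_000_000 - current_cases) / daily_new
print(days_to_10m)  # 60.0
```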

Comment by avturchin on How much money would you pay to get access to video footage of your surroundings for a year of your choice (in the past)? · 2020-05-05T15:35:20.831Z · score: 3 (3 votes) · LW · GW

Too expensive. I used to run video recording constantly on my computer at about 100 MB per hour in low quality. That comes to something like 300 GB a year, and the main cost is storage.

Comment by avturchin on UAP and Global Catastrophic Risks · 2020-04-28T22:21:31.701Z · score: 2 (1 votes) · LW · GW

The videos themselves are interesting, as they confirm the visual and radar observations. Taken out of context, they don't provide much information, except for one rather random thought: the object has two bulges on its longer sides, and from a distance it could look like two connected saucers, which may explain the older name "flying saucers".

Comment by avturchin on UAP and Global Catastrophic Risks · 2020-04-28T22:16:55.023Z · score: 2 (1 votes) · LW · GW

If we are in a Space Zoo, the owners of the Zoo will prevent our attempts at self-destruction.

Comment by avturchin on UAP and Global Catastrophic Risks · 2020-04-28T19:01:04.890Z · score: 3 (2 votes) · LW · GW

The most interesting explanations are those that explain how the high capabilities of the observed objects could be combined with low intelligence. (In the Nimitz case, the "fly inside the camera" explanation does not work, as the object was observed through three independent channels: visually by eye, by radar on the ship, and by infrared camera on a plane.) There are several ideas for how UAP's high capabilities could be combined with low intelligence:

  1. They are animals which use an unknown type of matter, like whales.
  2. An alien civilization has crashed, but some robots remain.
  3. An alien superintelligence is locked on a stupid goal.
  4. They just don't care. They have intelligence but don't use it to hide; we don't hide from ants.
  5. Diluted Boltzmann brains or dust theory: there is only a very small level of randomness, and the world looks almost normal. A longer explanation may be needed; it is in the article.

Comment by avturchin on Covid 19 as a Fermi Paradox Zoo Hypothesis Subset (Laboratory Hypothesis) Nudge Point · 2020-04-24T17:47:29.126Z · score: 4 (3 votes) · LW · GW

Thanks, yes, it is me. I created the new username "avturchin" at some point, so PMs to the older "turchin" account may not reach me.

I added a link to your post about the Laboratory Zoo to my new draft on "UAP and global catastrophic risks". I could share the draft.

I am now reading your article, and here are some comments:

  • "Burning plasma" is an unclear term. It could mean unlimited nuclear electric energy (e.g. ITER), or, more ominously but less probably, "cold fusion thermonuclear bombs", that is, thermonuclear bombs without fission, which could be mass-produced in secrecy or by small actors.
  • One of the main arguments for your point of view (that wars are bad for AGI safety) is that even a limited AGI, coupled with the almost unlimited capabilities of a rich nuclear-power state, gains a decisive strategic advantage over other countries, but it has to execute that advantage via war. The same limited AGI in a basement would be almost useless, as it has no resources to leverage.
Comment by avturchin on Covid 19 as a Fermi Paradox Zoo Hypothesis Subset (Laboratory Hypothesis) Nudge Point · 2020-04-23T12:49:37.845Z · score: 2 (1 votes) · LW · GW

If we fail the third test, will we be terminated?

Also, what about tourists and poachers in the Zoo?

Comment by avturchin on What are some fun ways to spend $100,000? · 2020-04-21T12:23:50.474Z · score: 3 (2 votes) · LW · GW

Travelling with a nice friend. It is (or was) easy to spend 10-30k a month living in 5-star hotels, eating fine cuisine, shopping, etc. But travelling alone sucks.

Comment by avturchin on Far-Ultraviolet Light in Public Spaces to Fight Pandemic is a Good Idea but Premature · 2020-04-18T13:28:15.421Z · score: 10 (2 votes) · LW · GW

A new piece of data: "Sunlight destroys virus quickly, new govt. tests find, but experts say pandemic could last through summer"

Comment by avturchin on Far-Ultraviolet Light in Public Spaces to Fight Pandemic is a Good Idea but Premature · 2020-04-17T13:56:16.442Z · score: 2 (1 votes) · LW · GW

A joke: so the best way to handle air travel would be to transport sedated people in boxes.

No joke: the plane crew could provide gloves to people inside the plane. These disposable gloves could be coated with a virus-killing material like copper, which would also protect against UV.

Comment by avturchin on Far-Ultraviolet Light in Public Spaces to Fight Pandemic is a Good Idea but Premature · 2020-04-17T13:50:35.104Z · score: 2 (1 votes) · LW · GW

Strip-like light sources and light sources beneath seats could partly solve this problem.

Comment by avturchin on Far-Ultraviolet Light in Public Spaces to Fight Pandemic is a Good Idea but Premature · 2020-04-16T12:56:30.492Z · score: 2 (1 votes) · LW · GW

Yes, experiments are needed. The main benefit of gloves is that they prevent a person from touching their face. If I go outside, I use disposable gloves and clean my hands on return.

Comment by avturchin on Alarm bell for the next pandemic, V.2 · 2020-04-15T13:48:18.871Z · score: 2 (1 votes) · LW · GW

The problem here is the "fog of war": we can't know all the parameters (R0, routes of transmission, etc.) for sure before a pandemic reaches its later stages. This will result either in frequent false alarms or in no early warning at all.

Comment by avturchin on Far-Ultraviolet Light in Public Spaces to Fight Pandemic is a Good Idea but Premature · 2020-04-15T13:00:34.323Z · score: 4 (2 votes) · LW · GW

Maybe we should add to the discussion a description of the public spaces where UVC is needed.

First, planes. People risk infection in planes, and they will not return to flying until planes are made safer. If there were far-UVC sources inside the planes AND people were required to wear gloves, glasses, and masks inside the plane, there would be no skin exposure and no cancer risk (beyond the radiation levels that already occur in planes because of the higher altitude).

More generally, a small cancer risk from UVC may be net beneficial, as it would push people to wear gloves, masks, and goggles, and that would be the biggest contribution to reducing infection transmission.

Returning to public spaces: next are taxicabs. Obviously, the driver should be separated from the passengers (or even replaced with an autopilot). But the surfaces and air need cleaning after each passenger. Here UVC will help again, but it would turn on only between passenger rides.

Finally, elevators. In the same way, they would be cleaned while idle.

Comment by avturchin on Why don't we have active human trials with inactivated SARS-COV-2? · 2020-04-10T13:00:29.444Z · score: 2 (1 votes) · LW · GW

Even if God appeared in the sky and brought us a vaccine, the FDA would require at least a year to test its safety.

Comment by avturchin on COVID-19 response as XRisk intervention · 2020-04-10T12:53:12.177Z · score: 18 (9 votes) · LW · GW

The current pandemic is also a test run for a global catastrophe, one which demonstrates how ineffective humanity is at preparedness and response, how deaf it was to warnings, and how greed, ignorance, and overconfidence prevented us from acting effectively. If it were a more dangerous pandemic, say, a bird flu with 60 per cent mortality but transmitted by birds, we might already be devastated.

In some sense, it demonstrates the failure of all previous pandemic-prevention and even x-risk-prevention efforts. We didn't create anything tangible that we could use now, not even an adequate stockpile of masks or a consensus about their efficiency.

Comment by avturchin on Would 2009 H1N1 (Swine Flu) ring the alarm bell? · 2020-04-07T14:55:26.410Z · score: 6 (4 votes) · LW · GW

The death rate estimate you cite is from 2013, four years after the event, so it couldn't have been used at the time. When I first heard about H1N1 in 2009, it was reported that 60 people out of 1000 had died in Mexico, which implied a 6 per cent death rate; that really alarmed me at the time, so I started buying food and masks.

Comment by avturchin on An alarm bell for the next pandemic · 2020-04-06T13:25:27.588Z · score: 8 (5 votes) · LW · GW

I used to browse a site that is like LessWrong for pandemic risks, and I knew about the new infection from day one (early January). If you want an alarm bell for the next pandemic, check it often.

More generally: if one wants an alarm bell for a problem X, one should find a forum of geeks who write about X and check it regularly.

My main failure was that I didn't write about the coronavirus earlier on LW (before January 20), and that the title of my first post, "Rationalist prepper thread", didn't include the word "coronavirus".

Comment by avturchin on Implications of the Doomsday Argument for x-risk reduction · 2020-04-04T13:13:32.952Z · score: 6 (3 votes) · LW · GW

There is uncertainty about whether the DA is valid. Around 40 per cent of the scientists who have analysed it think that some version of the DA is true, and if we treat that as a prediction market, it is a 40 per cent bet. So there is a 60 per cent chance that the DA is not valid, and thus we should continue to work on x-risk prevention.

Also, it is possible to cheat the DA if we precommit to forgetting our position number in the future (perhaps by creating enough simulations of the early past).

Comment by avturchin on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-04-03T12:45:25.751Z · score: 2 (1 votes) · LW · GW

It looks like it can't pass through water, so it can't reach the virus in the blood. However, visible light seems to be beneficial for immunity, and during the 1918 flu, sunlight was used as one of the therapies.

Comment by avturchin on The attack rate estimation is more important than CFR · 2020-04-01T21:02:49.135Z · score: 2 (1 votes) · LW · GW

Yes, the DP. Lower sensitivity of PCR means many more very mild or asymptomatic cases on the DP, which had no other manifestations (or people on the DP concealed their illnesses).