Posts

David Chalmers on LessWrong and the rationalist community (from his reddit AMA) 2017-02-22T19:07:32.402Z
The Leverhulme Centre for the Future of Intelligence officially launches. 2016-10-21T01:22:19.504Z
Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority 2016-10-14T19:58:14.077Z
UC Berkeley launches Center for Human-Compatible Artificial Intelligence 2016-08-29T22:43:19.018Z
[Link] NYU conference: Ethics of Artificial Intelligence (October 14-15) 2016-07-16T21:07:57.167Z

Comments

Comment by ignoranceprior on The case for C19 being widespread · 2020-04-12T15:36:26.810Z · LW · GW
FWIW, if UK death toll will surpass 10,000, then this wouldn't fit very well with this hypothesis here.

The UK death toll currently stands at 10,612 according to:

https://www.worldometers.info/coronavirus/country/uk/

Comment by ignoranceprior on The case for C19 being widespread · 2020-04-11T17:23:31.954Z · LW · GW
Alternatively, if the Covid-19 deaths in NY state go above 3,333 in the first week of April, that seems like it would also falsify the hypothesis. (NY state has fewer than one third the population of the UK.) Unfortunately I think this is >80% to happen.

On April 4, the death toll in NY state surpassed 3,333. As of April 10, there are 7,844 deaths.

Comment by ignoranceprior on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-05T08:29:14.844Z · LW · GW

The "Rationalist prepper thread" was actually posted on January 28, not January 20.

Comment by ignoranceprior on The case for C19 being widespread · 2020-03-29T00:31:22.456Z · LW · GW

This is indeed what I meant. Also I was thinking about once-the-dust-settles IFR, not "crude IFR".

Comment by ignoranceprior on The case for C19 being widespread · 2020-03-28T01:40:30.309Z · LW · GW

If the IFR is indeed 0.003% (the upper end of your range), then even in the worst-case scenario where 100% of the UK population eventually gets infected, only 0.003% × 66.4 million ≈ 2,000 people will die in total.

Would you consider the theory falsified if the death toll in the UK surpasses 2000?
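
For concreteness, here is a minimal sketch of that back-of-the-envelope arithmetic in Python. It simply restates the calculation above, assuming the 0.003% IFR and the roughly 66.4 million UK population figure quoted in this thread:

```python
# Back-of-the-envelope ceiling on total deaths, assuming the 0.003% IFR
# (upper end of the range discussed above) and a UK population of ~66.4M.
ifr = 0.003 / 100          # infection fatality rate as a fraction
uk_population = 66.4e6     # approximate UK population

# Worst case: 100% of the population is eventually infected.
implied_death_ceiling = ifr * uk_population
print(f"Implied total deaths if everyone is infected: {implied_death_ceiling:,.0f}")
# -> roughly 2,000
```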

Comment by ignoranceprior on The case for C19 being widespread · 2020-03-28T01:29:57.376Z · LW · GW

I'm confused why you assume that 36-68% of the population in the UK is infected. I thought, based on comments here, that those numbers were the output of a model that made highly optimistic assumptions about IFR, not an attempt at estimating the actual proportion of infections.

Do you think this is a realistic range for the proportion already infected in the UK?

Comment by ignoranceprior on The case for C19 being widespread · 2020-03-28T00:22:27.870Z · LW · GW

What is your personal point estimate or credible interval for IFR?

Comment by ignoranceprior on March Coronavirus Open Thread · 2020-03-26T16:51:43.854Z · LW · GW

Epidemiologist Behind Highly-Cited Coronavirus Model Admits He Was Wrong, Drastically Revises Model (archive)

Epidemiologist Neil Ferguson, who created the highly-cited Imperial College London coronavirus model, which has been cited by organizations like The New York Times and has been instrumental in governmental policy decision-making, offered a massive revision to his model on Wednesday.

Ferguson’s model projected 2.2 million dead people in the United States and 500,000 in the U.K. from COVID-19 if no action were taken to slow the virus and blunt its curve.
However, after just one day of ordered lockdowns in the U.K., Ferguson has changed his tune, revealing that far more people likely have the virus than his team figured. Now, the epidemiologist predicts, hospitals will be just fine taking on COVID-19 patients and estimates 20,000 or far fewer people will die from the virus itself or from its agitation of other ailments.

Ferguson thus dropped his prediction from 500,000 dead to 20,000.

Author and former New York Times reporter Alex Berenson broke down the bombshell report via Twitter on Thursday morning (view Twitter thread below).

“This is a remarkable turn from Neil Ferguson, who led the [Imperial College] authors who warned of 500,000 UK deaths — and who has now himself tested positive for #COVID,” started Berenson.

“He now says both that the U.K. should have enough ICU beds and that the coronavirus will probably kill under 20,000 people in the U.K. — more than 1/2 of whom would have died by the end of the year in any case [because] they were so old and sick,” he wrote.

Thoughts on this?

Comment by ignoranceprior on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-25T19:46:39.054Z · LW · GW

I'd greatly appreciate it if you could respond here:

https://www.greaterwrong.com/posts/ACyGvQchWzGjGkKgS/coronavirus-open-thread/comment/LeYZeyPGndDGaMhMQ

Comment by ignoranceprior on March Coronavirus Open Thread · 2020-03-25T19:45:01.636Z · LW · GW

Does anyone have thoughts on the recent Oxford study that claims that only a very small minority of infections lead to hospitalization or death, and that >50% of the UK population is already infected?

https://www.dropbox.com/s/oxmu2rwsnhi9j9c/Draft-COVID-19-Model%20%2813%29.pdf

Comment by ignoranceprior on March Coronavirus Open Thread · 2020-03-20T06:03:30.864Z · LW · GW

Questions about buying chloroquine:

1. Is it better to buy hydroxychloroquine or regular chloroquine? The studies I've found suggest hydroxychloroquine is safer and more potent, but it is a bit more expensive.

2. How many days' worth of the drug is it reasonable to buy per person?

3. How much should someone take per day and how should the dosage be timed?

4. Can someone confirm that the products you can find on reliablerxpharmacy.com when searching for "Lariago" (500 mg chloroquine as phos) and "OXCQ" (200 mg Hydroxychloroquine Sulfate) are the right things to buy? If not, is there any other reputable or semi-reputable source that sells the right product?

Comment by ignoranceprior on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-04T17:25:42.268Z · LW · GW

Maybe birth rates will increase if there are massive quarantines, for the same reason birth rates are said to increase during natural disasters (???). Very uncertain. Just throwing this idea out there, since I've seen little discussion of it.

Comment by ignoranceprior on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-01T16:29:32.180Z · LW · GW

Is the 5-10% global mortality prediction conditional on COVID-19 infecting >10% of the world, or unconditional?

What do you think of the prospects for antivirals like remdesivir to be tested and mass-produced? How much could they lower CFR?

Why do you think other predictions, such as those given by Metaculus (1, 2, 3), are much less pessimistic?

Do you think shorting the market is a good idea still?

Comment by ignoranceprior on Is there an intuitive way to explain how much better superforecasters are than regular forecasters? · 2020-02-19T02:57:45.670Z · LW · GW

This AI Impacts article includes three intuitive ways to think about the findings.

Comment by ignoranceprior on Have epistemic conditions always been this bad? · 2020-01-26T06:56:45.367Z · LW · GW
It confuses me that I seem to be the first person to talk much about this on either LW or EA Forum, given that there must be people who have been exposed to the current political environment earlier or to a greater extent than me.

This isn't an answer to your historical question, but I would like to point out that an EA recently wrote up his thoughts on speech policing here on the EA Forum, and I recall some previous relevant discussions as well (example).

Comment by ignoranceprior on Open & Welcome Thread - November 2019 · 2019-11-23T07:01:01.515Z · LW · GW

Do any AI safety researchers have little things they would like to get done, but don't have the time for?

I'm willing to help out for no pay.

I have a background in computer science and mathematics, and I have basic familiarity with AI alignment concepts. I can write code to help with ML experiments, and can help you summarize research or do literature reviews.

Comment by ignoranceprior on Wirehead your Chickens · 2018-06-22T15:46:35.879Z · LW · GW

If you're interested in this idea, you may want to join the "Reducing pain in farm animals" Facebook group. (It's currently very small.)

Comment by ignoranceprior on David C Denkenberger on Food Production after a Sun Obscuring Disaster · 2017-09-18T17:38:41.967Z · LW · GW

I thought you were a negative utilitarian, in which case disaster recovery seems plausibly net-negative. Am I wrong about your values?

Comment by ignoranceprior on Is Feedback Suffering? · 2017-09-11T01:57:27.324Z · LW · GW

Could you please try to keep discussion on topic and avoid making everything about politics? Your comment does not contribute to the discussion in any way.

Comment by ignoranceprior on Is Feedback Suffering? · 2017-09-11T01:50:57.199Z · LW · GW

According to this study, the law appears to be inaccurate for academic articles.

Comment by ignoranceprior on [deleted post] 2017-08-30T20:03:51.182Z
Comment by ignoranceprior on [deleted post] 2017-08-30T13:31:28.816Z
Comment by ignoranceprior on Torture vs. Dust Specks · 2017-08-27T00:06:42.599Z · LW · GW

Here's the latest working link (all three above are dead).

Also, here's an archive in case that one ever breaks!

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T18:30:47.357Z · LW · GW

I believe I already told you that I don't consider "spreading wild animal suffering" to be absurd; it's a plausible scenario. What may be intuitively absurd is the claim that "destroying nature is a good thing" -- which is not necessarily the same as the claim that "spreading wild animal suffering to new realms is bad, or ought to be minimized". (And there are possible interventions to reduce non-human suffering conditional on spreading non-human life. E.g. "value spreading" is often discussed in the EA community.)

Anyway, I'm done with this conversation for now as I believe other activities have higher EV.

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T18:09:29.759Z · LW · GW

Yes, I think it does because it's a plausible scenario and most plausible (IMO) ethical views say that causing non-human suffering is bad. Further exploration of the probability of such scenarios could influence my EA cause priorities, donation targets, and/or general worldview of the future.

seems like you'll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...

Those have very low prior probabilities and low decision-relevance to me.

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T17:57:23.583Z · LW · GW

I don't see much in the way of empirical claims here (these would require a hard definition of "suffering" and falsifiability to start with), so I guess I'm talking about counterintuitive normative claims.

Fair point. This is one problem I have had with moral realist utilitarianism. That said, I think it may still be the case that sentience and suffering are objective, just not (currently) measurable. Regardless, I don't think the claim of net suffering in nature is all that absurd.

The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.

The claim I made is that spreading non-human life throughout the galaxy constitutes an s-risk, i.e. it could drastically increase the total amount of suffering. Any plausible moral view would say that s-risks are generally bad things, but it is not necessarily the case that suffering can never be outweighed by positive value. E.g., if one is not something like a negative utilitarian, then it could still be permissible to spread non-human life throughout the galaxy, as long as you take action to ensure that the benefits outweigh the harms, however you want to define that: perhaps by genetically altering the organisms to reduce infant mortality rates or their capacity to experience suffering, by having a singleton prevent suffering from re-emerging through Darwinian processes, etc.

So how do you pick absurd ideas to engage with? There are a LOT of them.

This is a hard problem in practice, and I don't claim to know the solution. Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high Value of Information. Then you would probably transition from an exploration stage to an exploitation stage (see the "multi-armed bandit" problem).
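
As a toy illustration of that exploration-to-exploitation transition, here is a minimal epsilon-greedy bandit sketch; the "arms", payoffs, and parameters below are invented for illustration and are not anyone's actual prioritization method:

```python
import random

def pull(arm):
    # Hypothetical noisy payoff of investigating each "research direction";
    # the true means are made up for illustration.
    true_means = [0.2, 0.5, 0.8]
    return random.gauss(true_means[arm], 0.1)

def epsilon_greedy(n_arms=3, steps=1000, epsilon=0.1):
    counts = [0] * n_arms
    estimates = [0.0] * n_arms
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates, counts

if __name__ == "__main__":
    estimates, counts = epsilon_greedy()
    print("estimated payoffs:", estimates)
    print("times each arm was tried:", counts)
```

With a small epsilon, most pulls go to whichever direction currently looks best, while the others are still occasionally sampled; that is roughly the exploration/exploitation trade-off the bandit framing captures.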

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T17:32:10.375Z · LW · GW

Are you referring to empirical or normative claims? I don't consider the idea that wild animals experience net suffering absurd, although the idea that habitat destruction is morally beneficial is counterintuitive to most people. I think the idea that we should reduce the chance of spreading extreme involuntary suffering, including wild-animal suffering, throughout the universe is much less counterintuitive, and is consistent with a wide range of moral views.

Since I give significant (but not 100%) weight to "the overwhelming importance of the far future" (Nick Beckstead), and since the future is always absurd, I think we should probably spend significant time engaging with ideas that seem intuitively absurd. I don't think opposition to spreading wild-animal suffering is one of these, although things like suffering subroutines and some of the ideas mentioned in the OP (e.g., quantum immortality, multiverses) might be. Some people consider the intelligence explosion absurd, but I still think it has non-negligible plausibility.

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T17:05:09.039Z · LW · GW

Someone once proposed a possible s-risk:

If the suffering of hypothetical entities is morally relevant, then Brian Tomasik’s electron thought experiment was a crime of unimaginable proportions. In fact, it may well be that Tomasiks spontaneously forming in empty space outweigh every “conventional” source of suffering in the Universe. I call this the Boltzmann Brian problem.

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T16:53:06.349Z · LW · GW
  1. No, it doesn't necessarily imply that. Suppose wild animals have net-positive aggregate welfare, but a subset of these lives contains extreme involuntary suffering. Spreading this throughout the universe would still be considered an s-risk according to FRI's definition: "Finally, some futures may contain both vast amounts of happiness and vast amounts of suffering, which constitutes an s-risk but not necessarily a (severe) x-risk. For instance, an event leading to a future containing 10^35 happy individuals and 10^25 unhappy ones, would constitute an s-risk, but not an 'x-risk'."

  2. It may actually be the case that wild animals have net-negative welfare. The economist Yew-Kwang Ng has argued for this position. Brian Tomasik takes a similar view, and even endorses your attempted reductio (Edit: Ng has explicitly rejected it at this point). Michael Plant has written several counter-arguments to the Ng/Tomasik view. There doesn't seem to be any way to resolve this at present. There may also be other ways to reduce wild animal suffering besides destroying nature (e.g., see Pearce's abolitionist project).

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T16:31:51.317Z · LW · GW

FRI has focused on a few s-risks that you didn't mention (perhaps because they are not "colossal" enough):

Spread of wild animals (Related to your #2, "Normal Level") - "Humans may colonize other planets, spreading suffering-filled animal life via terraforming. Some humans may use their resources to seed life throughout the galaxy, which some sadly consider a moral imperative."

A possible compromise between the pro-panspermia and suffering-focused groups would be directed panspermia based on gradients of bliss (if Pearce's abolitionist project is possible).

Michael Dello-Iacovo also wrote a paper on the possible spread of wild animal suffering through the cosmos.

Sentient simulations: "Given astronomical computing power, post-humans may run various kinds of simulations. These sims may include many copies of wild-animal life, most of which dies painfully shortly after being born. For example, a superintelligence aiming to explore the distribution of extraterrestrials of different sorts might run vast numbers of simulations of evolution on various kinds of planets. Moreover, scientists might run even larger numbers of simulations of organisms-that-might-have-been, exploring the space of minds. They may simulate decillions of reinforcement learners that are sufficiently self-aware as to feel what we consider conscious pain."

I don't know whether such simulations would experience net-positive or net-negative welfare according to classical utilitarian standards, but running them could very well cause a lot of suffering. There may also be evolutionary reasons for having more pain than pleasure, which could apply to the kinds of beings that would be simulated.

Suffering subroutines: "It could be that certain algorithms (say, reinforcement agents) are very useful in performing complex machine-learning computations that need to be run at massive scale by advanced AI. These subroutines might be sufficiently similar to the pain programs in our own brains that we consider them to actually suffer. But profit and power may take precedence over pity, so these subroutines may be used widely throughout the AI's Matrioshka brains."

PETRL.org advocates the idea that such "voiceless" algorithms deserve moral consideration. Tomasik argues that even some current-day reinforcement learners may be sentient. These claims rely on controversial positions about the philosophy of mind, but it may still be worth erring on the safe side.

Brian Tomasik also mentions lab universes as a potential source of infinite suffering (but also infinite happiness? and how do we deal with infinite utilities?). Still, if you give even some small nonzero moral weight to negative utilitarianism, you may want to err on the side of not creating lab universes.

BTW, I don't understand how non-existence could be considered an s-risk, except insofar as existing people may have a preference to continue living and we define suffering as preference frustration. So while you can argue that death is a form of suffering, it does not really make sense to say that "never having existed" is a form of suffering. I think if you broaden the term that much, it loses most of its value.

Comment by ignoranceprior on [deleted post] 2017-07-05T00:59:44.711Z

Some people in the EA community have already written a bit about this.

I think this is the kind of thing Mike Johnson (/user/johnsonmx) and Andres Gomez Emilsson (/user/algekalipso) of the Qualia Research Institute are interested in, though they probably take a different approach. See:

Effective Altruism, and building a better QALY

Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk

The Foundational Research Institute also takes an interest in the issue, but they tend to advocate an eliminativist, subjectivist view according to which there is no way to objectively determine which beings are conscious because consciousness itself is an essentially contested concept. (I don't know if everyone at FRI agrees with that, but at least a few including Brian Tomasik do.) FRI also has done some work on measuring happiness and suffering.

Animal Charity Evaluators announced in 2016 that they were starting a deep investigation of animal sentience. I don't know if they have done anything since then.

Luke Muehlhauser (/u/lukeprog) wrote an extensive report on consciousness for the Open Philanthropy Project. He has also indicated an interest in further exploring the area of sentience and moral weight. Since phenomenal consciousness is necessary to experience either happiness or suffering, this may fall under the same umbrella as the above research. Lukeprog's LW posts on affective neuroscience are relevant too, as are a couple by Yvain.

Comment by ignoranceprior on Idea for LessWrong: Video Tutoring · 2017-06-29T19:52:36.523Z · LW · GW

What would count as "LessWrong-esque"?

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-21T07:12:55.313Z · LW · GW

And the concept is much older than that. The 2011 Felicifia post "A few dystopic future scenarios" by Brian Tomasik outlined many of the same considerations that FRI works on today (suffering simulations, etc.), and of course Brian has been blogging about risks of astronomical suffering since then. FRI itself was founded in 2013.

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-20T20:53:36.808Z · LW · GW

Oh, in those cases, the considerations I mentioned don't apply. But I still thought they were worth mentioning.

In Star Trek, the Federation has a "Prime Directive" against interfering with the development of alien civilizations.

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-20T20:11:50.331Z · LW · GW

You might like this better:

https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-20T20:07:47.493Z · LW · GW

The flip side of this idea is "cosmic rescue missions" (a term coined by David Pearce), which refers to the hypothetical scenario in which human civilization helps to reduce the suffering of sentient extraterrestrials (in the original context, it referred to the use of technology to abolish suffering). Of course, this is more relevant for simple animal-like aliens and less so for advanced civilizations, which would presumably have already either implemented similar technology or decided to reject it. Brian Tomasik argues that cosmic rescue missions are unlikely.

Also, there's an argument that humanity conquering alien civs would only be considered bad if you assume that either (1) we have non-universalist-consequentialist reasons to believe that preventing alien civilizations from existing is bad, or (2) the alien civilization would produce greater universalist-consequentialist value than human civilizations with the same resources. If (2) is the case, then humanity should actually be willing to sacrifice itself and let the aliens take over (as in the "utility monster" thought experiment), assuming that universalist consequentialism is true. If neither (1) nor (2) holds, then human civilization would have greater value than the ET civilization. Seth Baum's paper on universalist ethics and alien encounters goes into greater detail.

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-20T14:28:25.562Z · LW · GW

Want to improve the wiki page on s-risk? I started it a few months ago but it could use some work.

Comment by ignoranceprior on Book recommendation requests · 2017-06-14T06:36:19.833Z · LW · GW

Thank you very much!

Comment by ignoranceprior on Book recommendation requests · 2017-06-04T05:12:04.285Z · LW · GW

I don't know specifically. Where would be the best place to start?

Comment by ignoranceprior on Book recommendation requests · 2017-06-03T19:57:24.947Z · LW · GW

What are good introductory books on chemistry and biology that do not require any background knowledge? I'm ashamed to say it, but I don't really even have a high-school level knowledge of either subject, and what little I knew is now forgotten. My background in basic (classical) physics is much better, but I have forgotten some of that too.

Comment by ignoranceprior on AI Safety reading group · 2017-01-28T23:19:02.173Z · LW · GW

You could advertise this on /r/ControlProblem too.

Comment by ignoranceprior on Dialectic algorithm - For calculating if an argument is sustained or refuted · 2016-11-21T04:30:07.927Z · LW · GW

Yes, for cases of Gish gallop it would be impractical to refute every single point.

Comment by ignoranceprior on [Link] NYU conference: Ethics of Artificial Intelligence (October 14-15) · 2016-10-16T04:35:10.983Z · LW · GW

You can watch the archived videos here: http://livestream.com/nyu-tv/ethicsofAI

Comment by ignoranceprior on Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority · 2016-10-15T22:41:02.118Z · LW · GW

A similar question is whether happiness and suffering are equally energy-efficient.

Comment by ignoranceprior on Open Thread, Sept 5. - Sept 11. 2016 · 2016-09-05T02:05:10.016Z · LW · GW

Has anyone here had success with the method of loci (memory palace)? I've seen it mentioned a few times on LW but I'm not sure where to start, or whether it's worth investing time into.

Comment by ignoranceprior on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-24T19:27:10.697Z · LW · GW

You need at least 10 karma points to vote (you currently have 2 points, according to your profile). Once you have 10 points you should be able to see the voting buttons. Incidentally, after a troll downvoted me from 12 to 4, I lost the ability to vote, and now I can no longer see the buttons.

Comment by ignoranceprior on Should you change where you live? (also - a worked “how to solve a question”) · 2016-07-23T18:38:46.660Z · LW · GW

It might be that downvote troll everyone keeps talking about. Eugine?

Comment by ignoranceprior on Post ridiculous munchkin ideas! · 2016-07-14T13:09:55.612Z · LW · GW

Archive.org copy (takes a few seconds to load)

Archive.is copy

Comment by ignoranceprior on [Link] White House announces a series of workshops on AI, expresses interest in safety · 2016-07-02T01:20:34.105Z · LW · GW

Sort of a follow-up post here: http://lesswrong.com/r/discussion/lw/nqp/notes_on_the_safety_in_artificial_intelligence/

Comment by ignoranceprior on Rationality Quotes July 2016 · 2016-07-01T20:42:22.291Z · LW · GW

Source, since you didn't link it.