Comment by ignoranceprior on Wirehead your Chickens · 2018-06-22T15:46:35.879Z · score: 2 (1 votes) · LW · GW

If you're interested in this idea, you may want to join the "Reducing pain in farm animals" Facebook group. (It's currently very small.)

Comment by ignoranceprior on David C Denkenberger on Food Production after a Sun Obscuring Disaster · 2017-09-18T17:38:41.967Z · score: 1 (1 votes) · LW · GW

I thought you were a negative utilitarian, in which case disaster recovery seems plausibly net-negative. Am I wrong about your values?

Comment by ignoranceprior on Is Feedback Suffering? · 2017-09-11T01:57:27.324Z · score: 6 (6 votes) · LW · GW

Could you please try to keep discussion on topic and avoid making everything about politics? Your comment does not contribute to the discussion in any way.

Comment by ignoranceprior on Is Feedback Suffering? · 2017-09-11T01:50:57.199Z · score: 2 (2 votes) · LW · GW

According to this study, the law appears to be inaccurate for academic articles.

Comment by ignoranceprior on Is life worth living? · 2017-08-30T20:03:51.182Z · score: 1 (1 votes) · LW · GW

> And finally, everyone who answers (1), can you identify the point when your past turned from nonnegative to negative? If not, you probably have a skewed memory and the sum of your experiences at those points in time is probably higher value than your aggregated memory at this point in time.

Personally, I've always had very high levels of anxiety and neuroticism, and no social enjoyment (due to social anxiety/autism) or other happiness to make up for it, so I'm not sure that even my early childhood was positive (I'm 18 now). But I can definitely pinpoint a time when I developed other medical issues, and I'm fairly confident my life after that point has been more negative than my life before it was positive (assuming it even was positive).

Also, you could flip this the other way: "everyone who answers (2), can you identify the point when your past turned from nonpositive to positive? If not, you probably have a skewed memory and the sum of your experiences at those points in time is probably lower value than your aggregated memory at this point in time." It's good to avoid pessimism bias, but let's not fall prey to Pollyanna/optimism bias either.

Comment by ignoranceprior on Is life worth living? · 2017-08-30T13:31:28.816Z · score: 3 (3 votes) · LW · GW

Probably in the minority here, but I'd choose not to relive my life. I don't think my life is worth living. Partly because I have a lot of medical issues which cause significant suffering, and partly because the strongest intensities of suffering I have experienced are much worse than the strongest intensities of happiness are good. However, I do think it's plausible that most lives in the developed world are worth living.

(I am implicitly using a classical utilitarian definition of life worth living.)

Comment by ignoranceprior on Torture vs. Dust Specks · 2017-08-27T00:06:42.599Z · score: 1 (1 votes) · LW · GW

Here's the latest working link (all three above are dead).

Also, here's an archive in case that one ever breaks!

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T18:30:47.357Z · score: 1 (1 votes) · LW · GW

I believe I already told you that I don't consider "spreading wild animal suffering" to be absurd; it's a plausible scenario. What may be intuitively absurd is the claim that "destroying nature is a good thing" -- which is not necessarily the same as the claim that "spreading wild animal suffering to new realms is bad, or ought to be minimized". (And there are possible interventions to reduce non-human suffering conditional on spreading non-human life. E.g. "value spreading" is often discussed in the EA community.)

Anyway, I'm done with this conversation for now as I believe other activities have higher EV.

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T18:09:29.759Z · score: 2 (2 votes) · LW · GW

Yes, I think it does because it's a plausible scenario and most plausible (IMO) ethical views say that causing non-human suffering is bad. Further exploration of the probability of such scenarios could influence my EA cause priorities, donation targets, and/or general worldview of the future.

> seems like you'll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...

Those have very low prior probabilities and low decision-relevance to me.

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T17:57:23.583Z · score: 2 (2 votes) · LW · GW

> I don't see much in the way of empirical claims here (these would require a hard definition of "suffering" and falsifiability to start with), so I guess I'm talking about counterintuitive normative claims.

Fair point. This is one problem I have had with moral realist utilitarianism. Although I think it may still be the case that sentience and suffering are objective, just not (currently) measurable. Regardless, I don't think the claim of net-suffering in nature is all that absurd.

> The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.

The claim I made is that spreading non-human life throughout the galaxy constitutes an s-risk, i.e. it could drastically increase the total amount of suffering. Any plausible moral view would say that s-risks are generally bad, but it is not necessarily the case that suffering can never be outweighed by positive value. E.g., if one is not something like a negative utilitarian, it could still be permissible to spread non-human life throughout the galaxy, as long as you take action to ensure that the benefits outweigh the harms, however you want to define that: perhaps genetically altering the animals to reduce infant mortality rates, reducing their capacity to experience suffering, or establishing a singleton to prevent suffering from re-emerging through Darwinian processes.

> So how do you pick absurd ideas to engage with? There are a LOT of them.

This is a hard problem in practice, and I don't claim to know the solution. Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high value of information. Then you would probably transition from an exploration stage to an exploitation stage (see the "multi-armed bandit" problem).
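The exploration-to-exploitation transition I'm gesturing at is the standard multi-armed bandit trade-off. As a loose illustration only (the "arms" and payoff numbers below are purely hypothetical, not a claim about how to actually prioritize research), a minimal epsilon-greedy sketch looks like this:

```python
import random

def epsilon_greedy(true_means, steps=10_000, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy multi-armed bandit.

    With probability epsilon we explore a random arm; otherwise we
    exploit the arm with the best observed average payoff so far.
    """
    rng = random.Random(seed)
    counts = [0] * len(true_means)      # pulls per arm
    estimates = [0.0] * len(true_means) # running mean payoff per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))  # explore
        else:
            # exploit: pick the arm with the highest estimated payoff
            arm = max(range(len(true_means)), key=lambda i: estimates[i])
        reward = rng.gauss(true_means[arm], 1.0)  # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

# Three hypothetical "ideas to engage with" with hidden expected
# values 0.1, 0.5, 0.9; effort concentrates on the best one over time.
counts, estimates = epsilon_greedy([0.1, 0.5, 0.9])
print(counts, [round(e, 2) for e in estimates])
```

Early pulls are spread out (exploration); as estimates sharpen, almost all remaining pulls go to the highest-value arm (exploitation), which is the shape of the research-prioritization analogy above.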

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T17:32:10.375Z · score: 3 (3 votes) · LW · GW

Are you referring to empirical or normative claims? I don't consider the idea that wild animals experience net suffering absurd, although the idea that habitat destruction is morally beneficial is counterintuitive to most people. I think the idea that we should reduce the chance of spreading extreme involuntary suffering, including wild-animal suffering, throughout the universe is much less counterintuitive, and is consistent with a wide range of moral views.

Since I give significant (but not 100%) weight to "the overwhelming importance of the far future" (Nick Beckstead), and the future is always absurd, we should probably spend significant time engaging with ideas that seem intuitively absurd. I don't think opposition to spreading wild-animal suffering is one of these, although things like suffering subroutines and some of the ideas mentioned in the OP (e.g., quantum immortality, multiverses) might be. Some people consider the intelligence explosion absurd, but I still think it has some non-negligible plausibility.

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T17:05:09.039Z · score: 2 (2 votes) · LW · GW

Someone once proposed a possible s-risk:

> If the suffering of hypothetical entities is morally relevant, then Brian Tomasik’s electron thought experiment was a crime of unimaginable proportions. In fact, it may well be that Tomasiks spontaneously forming in empty space outweigh every “conventional” source of suffering in the Universe. I call this the Boltzmann Brian problem.

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T16:53:06.349Z · score: 1 (1 votes) · LW · GW
  1. No, it doesn't necessarily imply that. Suppose wild animals have net-positive aggregate welfare, but a subset of these lives contains extreme involuntary suffering. Spreading this throughout the universe would still be considered an s-risk according to FRI's definition: "Finally, some futures may contain both vast amounts of happiness and vast amounts of suffering, which constitutes an s-risk but not necessarily a (severe) x-risk. For instance, an event leading to a future containing 10^35 happy individuals and 10^25 unhappy ones, would constitute an s-risk, but not an “x-risk”."

  2. It may actually be the case that wild animals have net-negative welfare. The economist Yew-Kwang Ng has argued for this position. Brian Tomasik takes a similar view, and even endorses your attempted reductio (Edit: Ng has explicitly rejected it at this point). Michael Plant has written several counter-arguments to the Ng/Tomasik view. There doesn't seem to be any way to resolve this at present. There may also be other ways to reduce wild animal suffering besides destroying nature (e.g., see Pearce's abolitionist project).

Comment by ignoranceprior on Mini map of s-risks · 2017-07-11T16:31:51.317Z · score: 4 (4 votes) · LW · GW

FRI has focused on a few s-risks that you didn't mention (perhaps because they are not "colossal" enough):

Spread of wild animals (Related to your #2, "Normal Level") - "Humans may colonize other planets, spreading suffering-filled animal life via terraforming. Some humans may use their resources to seed life throughout the galaxy, which some sadly consider a moral imperative."

A possible compromise between the pro-panspermia and suffering-focused groups would be directed panspermia based on gradients of bliss (if Pearce's abolitionist project is possible).

Michael Dello-Iacovo also wrote a paper on the possible spread of wild animal suffering through the cosmos.

Sentient simulations: "Given astronomical computing power, post-humans may run various kinds of simulations. These sims may include many copies of wild-animal life, most of which dies painfully shortly after being born. For example, a superintelligence aiming to explore the distribution of extraterrestrials of different sorts might run vast numbers of simulations of evolution on various kinds of planets. Moreover, scientists might run even larger numbers of simulations of organisms-that-might-have-been, exploring the space of minds. They may simulate decillions of reinforcement learners that are sufficiently self-aware as to feel what we consider conscious pain."

I don't know whether such simulations would experience net-positive or net-negative welfare by classical utilitarian standards, but they could very well cause a lot of suffering. There may also be evolutionary reasons for having more pain than pleasure, which could apply to the kinds of beings that would be simulated.

Suffering subroutines: "It could be that certain algorithms (say, reinforcement agents) are very useful in performing complex machine-learning computations that need to be run at massive scale by advanced AI. These subroutines might be sufficiently similar to the pain programs in our own brains that we consider them to actually suffer. But profit and power may take precedence over pity, so these subroutines may be used widely throughout the AI's Matrioshka brains."

PETRL.org advocates the idea that such "voiceless" algorithms deserve moral consideration. Tomasik argues that even some current-day reinforcement learners may be sentient. These claims rely on controversial positions about the philosophy of mind, but it may still be worth erring on the safe side.

Brian Tomasik also mentions lab universes as a potential source of infinite suffering (but also infinite happiness? how would we deal with infinite utilities?). Still, if you give even some small nonzero moral weight to negative utilitarianism, then you may want to err on the side of not creating lab universes.

BTW, I don't understand how non-existence could be considered an s-risk, except insofar as existing people may have a preference to continue living and we define suffering as preference frustration. So while you can argue that death is a form of suffering, it does not really make sense to say that "never having existed" is a form of suffering. I think if you broaden the term that much, it loses most of its value.

Comment by ignoranceprior on We need a better theory of happiness and suffering · 2017-07-05T00:59:44.711Z · score: 4 (4 votes) · LW · GW

Some people in the EA community have already written a bit about this.

I think this is the kind of thing Mike Johnson (/user/johnsonmx) and Andres Gomez Emilsson (/user/algekalipso) of the Qualia Research Institute are interested in, though they probably take a different approach. See:

Effective Altruism, and building a better QALY

Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk

The Foundational Research Institute also takes an interest in the issue, but they tend to advocate an eliminativist, subjectivist view according to which there is no way to objectively determine which beings are conscious because consciousness itself is an essentially contested concept. (I don't know if everyone at FRI agrees with that, but at least a few including Brian Tomasik do.) FRI also has done some work on measuring happiness and suffering.

Animal Charity Evaluators announced in 2016 that they were starting a deep investigation of animal sentience. I don't know if they have done anything since then.

Luke Muehlhauser (/u/lukeprog) wrote an extensive report on consciousness for the Open Philanthropy Project. He has also indicated an interest in further exploring the area of sentience and moral weight. Since phenomenal consciousness is necessary to experience either happiness or suffering, this may fall under the same umbrella as the above research. Lukeprog's LW posts on affective neuroscience are relevant as well (as well as a couple by Yvain).

Comment by ignoranceprior on Idea for LessWrong: Video Tutoring · 2017-06-29T19:52:36.523Z · score: 0 (0 votes) · LW · GW

What would count as "LessWrong-esque"?

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-21T07:12:55.313Z · score: 1 (1 votes) · LW · GW

And the concept is much older than that. The 2011 Felicifia post "A few dystopic future scenarios" by Brian Tomasik outlined many of the same considerations that FRI works on today (suffering simulations, etc.), and of course Brian has been blogging about risks of astronomical suffering since then. FRI itself was founded in 2013.

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-20T20:53:36.808Z · score: 1 (1 votes) · LW · GW

Oh, in those cases, the considerations I mentioned don't apply. But I still thought they were worth mentioning.

In Star Trek, the Federation has a "Prime Directive" against interfering with the development of alien civilizations.

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-20T20:11:50.331Z · score: 5 (5 votes) · LW · GW

You might like this better:

https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-20T20:07:47.493Z · score: 4 (4 votes) · LW · GW

The flip side of this idea is "cosmic rescue missions" (a term coined by David Pearce), which refers to the hypothetical scenario in which human civilization helps to reduce the suffering of sentient extraterrestrials (in the original context, it referred to the use of technology to abolish suffering). Of course, this is more relevant for simple animal-like aliens and less so for advanced civilizations, which would presumably have already either implemented a similar technology or decided to reject such technology. Brian Tomasik argues that cosmic rescue missions are unlikely.

Also, there's an argument that humanity conquering alien civs would only be considered bad if you assume that either (1) we have non-universalist-consequentialist reasons to believe that preventing alien civilizations from existing is bad, or (2) the alien civilization would produce greater universalist-consequentialist value than human civilization would with the same resources. If (2) is the case, then humanity should actually be willing to sacrifice itself to let the aliens take over (like in the "utility monster" thought experiment), assuming that universalist consequentialism is true. If neither (1) nor (2) holds, then human civilization would have greater value than the ET civilization. Seth Baum's paper on universalist ethics and alien encounters goes into greater detail.

Comment by ignoranceprior on S-risks: Why they are the worst existential risks, and how to prevent them · 2017-06-20T14:28:25.562Z · score: 1 (1 votes) · LW · GW

Want to improve the wiki page on s-risk? I started it a few months ago but it could use some work.

Comment by ignoranceprior on Book recommendation requests · 2017-06-14T06:36:19.833Z · score: 0 (0 votes) · LW · GW

Thank you very much!

Comment by ignoranceprior on Book recommendation requests · 2017-06-04T05:12:04.285Z · score: 0 (0 votes) · LW · GW

I don't know specifically. Where would be the best place to start?

Comment by ignoranceprior on Book recommendation requests · 2017-06-03T19:57:24.947Z · score: 0 (0 votes) · LW · GW

What are good introductory books on chemistry and biology that do not require any background knowledge? I'm ashamed to say it, but I don't really even have a high-school level knowledge of either subject, and what little I knew is now forgotten. My background in basic (classical) physics is much better, but I have forgotten some of that too.

David Chalmers on LessWrong and the rationalist community (from his reddit AMA)

2017-02-22T19:07:32.402Z · score: 13 (14 votes)
Comment by ignoranceprior on AI Safety reading group · 2017-01-28T23:19:02.173Z · score: 4 (4 votes) · LW · GW

You could advertise this on /r/ControlProblem too.

Comment by ignoranceprior on Dialectic algorithm - For calculating if an argument is sustained or refuted · 2016-11-21T04:30:07.927Z · score: 1 (2 votes) · LW · GW

Yes, for cases of Gish gallop it would be impractical to refute every single point.

The Leverhulme Centre for the Future of Intelligence officially launches.

2016-10-21T01:22:19.504Z · score: 1 (2 votes)
Comment by ignoranceprior on [Link] NYU conference: Ethics of Artificial Intelligence (October 14-15) · 2016-10-16T04:35:10.983Z · score: 1 (1 votes) · LW · GW

You can watch the archived videos here: http://livestream.com/nyu-tv/ethicsofAI

Comment by ignoranceprior on Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority · 2016-10-15T22:41:02.118Z · score: 2 (2 votes) · LW · GW

A similar question is whether happiness and suffering are equally energy-efficient.

Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority

2016-10-14T19:58:14.077Z · score: 6 (7 votes)
Comment by ignoranceprior on Open Thread, Sept 5. - Sept 11. 2016 · 2016-09-05T02:05:10.016Z · score: 2 (2 votes) · LW · GW

Has anyone here had success with the method of loci (memory palace)? I've seen it mentioned a few times on LW but I'm not sure where to start, or whether it's worth investing time into.

UC Berkeley launches Center for Human-Compatible Artificial Intelligence

2016-08-29T22:43:19.018Z · score: 10 (11 votes)
Comment by ignoranceprior on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-24T19:27:10.697Z · score: 2 (2 votes) · LW · GW

You need at least 10 karma points to vote (you currently have 2 points, according to your profile). Once you have 10 points you should be able to see the voting buttons. Incidentally, after a troll downvoted me from 12 to 4, I lost the ability to vote, and now I can no longer see the buttons.

Comment by ignoranceprior on Should you change where you live? (also - a worked “how to solve a question”) · 2016-07-23T18:38:46.660Z · score: 10 (12 votes) · LW · GW

It might be that downvote troll everyone keeps talking about. Eugine?

[Link] NYU conference: Ethics of Artificial Intelligence (October 14-15)

2016-07-16T21:07:57.167Z · score: 4 (5 votes)
Comment by ignoranceprior on Post ridiculous munchkin ideas! · 2016-07-14T13:09:55.612Z · score: 1 (1 votes) · LW · GW

Archive.org copy (takes a few seconds to load)

Archive.is copy

Comment by ignoranceprior on [Link] White House announces a series of workshops on AI, expresses interest in safety · 2016-07-02T01:20:34.105Z · score: 1 (1 votes) · LW · GW

Sort of a follow-up post here: http://lesswrong.com/r/discussion/lw/nqp/notes_on_the_safety_in_artificial_intelligence/

Comment by ignoranceprior on Rationality Quotes July 2016 · 2016-07-01T20:42:22.291Z · score: 6 (6 votes) · LW · GW

Source, since you didn't link it.