Posts

steven0461's Shortform Feed 2019-06-30T02:42:13.858Z · score: 36 (7 votes)
Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet 2018-07-11T02:59:12.278Z · score: 28 (19 votes)
Meetup : San Jose Meetup: Park Day (X) 2016-11-28T02:46:20.651Z · score: 0 (1 vote)
Meetup : San Jose Meetup: Park Day (IX), 3pm 2016-11-01T15:40:19.623Z · score: 0 (1 vote)
Meetup : San Jose Meetup: Park Day (VIII) 2016-09-06T00:47:23.680Z · score: 0 (1 vote)
Meetup : San Jose Meetup: Park Day (VII) 2016-08-15T01:05:00.237Z · score: 0 (1 vote)
Meetup : San Jose Meetup: Park Day (VI) 2016-07-25T02:11:44.237Z · score: 0 (1 vote)
Meetup : San Jose Meetup: Park Day (V) 2016-07-04T18:38:01.992Z · score: 0 (1 vote)
Meetup : San Jose Meetup: Park Day (IV) 2016-06-15T20:29:04.853Z · score: 0 (1 vote)
Meetup : San Jose Meetup: Park Day (III) 2016-05-09T20:10:55.447Z · score: 0 (1 vote)
Meetup : San Jose Meetup: Park Day (II) 2016-04-20T06:23:28.685Z · score: 0 (1 vote)
Meetup : San Jose Meetup: Park Day 2016-03-30T04:39:09.532Z · score: 1 (2 votes)
Meetup : Amsterdam 2013-11-12T09:12:31.710Z · score: 4 (5 votes)
Bayesian Adjustment Does Not Defeat Existential Risk Charity 2013-03-17T08:50:02.096Z · score: 48 (48 votes)
Meetup : Chicago Meetup 2011-09-28T04:29:35.777Z · score: 3 (4 votes)
Meetup : Chicago Meetup 2011-07-07T15:28:57.969Z · score: 2 (3 votes)
PhilPapers survey results now include correlations 2010-11-09T19:15:47.251Z · score: 6 (7 votes)
Chicago Meetup 11/14 2010-11-08T23:30:49.015Z · score: 8 (9 votes)
A Fundamental Question of Group Rationality 2010-10-13T20:32:08.085Z · score: 10 (11 votes)
Chicago/Madison Meetup 2010-07-15T23:30:15.576Z · score: 9 (10 votes)
Swimming in Reasons 2010-04-10T01:24:27.787Z · score: 8 (17 votes)
Disambiguating Doom 2010-03-29T18:14:12.075Z · score: 16 (17 votes)
Taking Occam Seriously 2009-05-29T17:31:52.268Z · score: 28 (28 votes)
Open Thread: May 2009 2009-05-01T16:16:35.156Z · score: 4 (5 votes)
Eliezer Yudkowsky Facts 2009-03-22T20:17:21.220Z · score: 140 (218 votes)
The Wrath of Kahneman 2009-03-09T12:52:41.695Z · score: 25 (26 votes)
Lies and Secrets 2009-03-08T14:43:22.152Z · score: 14 (25 votes)

Comments

Comment by steven0461 on What are some unpopular (non-normative) opinions that you hold? · 2019-10-25T20:25:10.715Z · score: 5 (2 votes) · LW · GW

Expressing unpopular opinions can be good and necessary, but doing so merely because someone asked you to is foolish. Have some strategic common sense.

Comment by steven0461 on What are some unpopular (non-normative) opinions that you hold? · 2019-10-25T20:14:52.400Z · score: 9 (5 votes) · LW · GW

(c) unpopular ideas hurt each other by association, (d) it's hard to find people who can be trusted to have good unpopular ideas but not bad unpopular ideas, (e) people are motivated by getting credit for their ideas, (f) people don't seem good at group writing curation generally

Comment by steven0461 on The Power to Solve Climate Change · 2019-09-15T17:38:39.211Z · score: 2 (1 vote) · LW · GW

Even if you assume no climate policy at all and make various other highly pessimistic assumptions about the economy (RCP 8.5), I think the probability is still far under 10% conditional on those assumptions, though it's tricky to extract this kind of estimate.

Comment by steven0461 on The Power to Solve Climate Change · 2019-09-13T21:37:21.744Z · score: 2 (1 vote) · LW · GW
We're predicting it to be as high as a 6°C warming by 2100, so it's actually a huge fluctuation.

6°C is something like a worst-case scenario.

Comment by steven0461 on What are the best resources for examining the evidence for anthropogenic climate change? · 2019-08-07T02:04:19.092Z · score: 4 (3 votes) · LW · GW

The question you should ask for policy purposes is how much the temperature would rise in response to different possible increases in CO2. It's basically a matter of estimating a continuous parameter that nobody thinks is zero and whose space of possible values has no natural dividing line between "yes" and "no". Attribution of past warming partly overlaps with the "how much" question and partly just distracts from it. That said, I would just read the relevant sections of the latest IPCC report.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-11T20:31:30.483Z · score: 6 (3 votes) · LW · GW

Online posts function as hard-to-fake signals of readiness to invest verbal energy into arguing for one side of an issue. This gives readers the feeling they won't lose face if they adopt the post's opinion, which overlaps with the feeling that the post's opinion is true. This function sometimes makes posts longer than would be socially optimal.

Comment by steven0461 on FB/Discord Style Reacts · 2019-07-04T17:05:35.533Z · score: 7 (4 votes) · LW · GW

"This is wrong, harmful, and/or in bad faith, but I expect arguing this point against determined verbally clever opposition would be too costly."

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-03T05:18:54.104Z · score: 3 (2 votes) · LW · GW

I guess I wasn't necessarily thinking of them as exact duplicates. If there are 10^100 ways the 21st century can go, and for some reason each of the resulting civilizations wants to know how all the other civilizations came out when the dust settled, each civilization ends up having a lot of other civilizations to think about. In this scenario, an effect on the far future still seems to me to be "only" a million times as big as the same effect on the 21st century, only now the stuff isomorphic to the 21st century is spread out across many different far future civilizations instead of one.

Maybe 1/1,000,000 is still a lot, but I'm not sure how to deal with uncertainty here. If I just take the expectation of the fraction of the universe isomorphic to the 21st century, I might end up with some number like 1/10,000,000 (because I'm 10% sure of the 1/1,000,000 claim) and still conclude the relative importance of the far future is huge but hugely below infinity.
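
Spelling out that expectation (a sketch, assuming the only live possibilities are "the 1/1,000,000 figure is right" at 10% credence and "the isomorphic fraction is negligible" at 90%):

$$
\mathbb{E}[f] \approx 0.1 \times \frac{1}{1{,}000{,}000} + 0.9 \times 0 = \frac{1}{10{,}000{,}000}
$$

Taking $1/\mathbb{E}[f] \approx 10^{7}$ as the importance ratio (and gliding over the fact that the expectation of a ratio isn't the ratio of expectations) gives a multiplier that's large but far from infinite.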

Comment by steven0461 on Being Wrong Doesn't Mean You're Stupid and Bad (Probably) · 2019-07-01T20:52:15.965Z · score: 12 (4 votes) · LW · GW

If you don't just learn what someone's opinion is, but also how they arrived at it and how confidently they hold it, that can be much stronger evidence that they're stupid and bad. Arguably over half the arguments one encounters in the wild could never be made in good faith.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T20:11:33.739Z · score: 3 (2 votes) · LW · GW

How much should I worry about the unilateralist's curse when making arguments that it seems like some people should have already thought of, and that they might have avoided making because they anticipated side effects that I don't understand?

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T20:06:55.701Z · score: 2 (1 vote) · LW · GW
based on the details of the estimates that doesn't look to me like it's just bad luck

For example:

  • There's a question about whether the S&P 500 will end the year higher than it began. When the question closed, the index had increased from 2500 to 2750. The index has increased most years historically. But the Metaculus estimate was about 50%.
  • On this question, at the time of closing, 538's estimate was 99+% and the Metaculus estimate was 66%. I don't think Metaculus had significantly different information from 538.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T19:57:43.606Z · score: 3 (2 votes) · LW · GW

A naive argument says the influence of our actions on the far future is ~infinity times as intrinsically important as the influence of our actions on the 21st century because the far future contains ~infinity times as much stuff. One limit to this argument is that if 1/1,000,000 of the far future stuff is isomorphic to the 21st century (e.g. simulations), then having an influence on the far future is "only" a million times as important as having the exact same influence on the 21st century. (Of course, the far future is a very different place so our influence will actually be of a very different nature.) Has anyone tried to get a better abstract understanding of this point or tried to quantify how much it matters in practice?

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T19:11:32.633Z · score: 7 (3 votes) · LW · GW

Newcomb's Problem sometimes assumes Omega is right 99% of the time. What is that conditional on? If it's just a base rate (Omega is right about 99% of people), what happens when you condition on having particular thoughts and modeling the problem on a particular level? (Maybe there exists a two-boxing lesion and you can become confident you don't have it.) If it's 99% conditional on anything you might think, e.g. because Omega has a full model of you but gets hit by a cosmic ray 1% of the time, isn't it clearer to just assume Omega gets it 100% right? Is this explained somewhere?

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T19:07:59.869Z · score: 2 (1 vote) · LW · GW

I think one could greatly outperform the best publicly available forecasts through collaboration between 1) some people good at arguing and looking for info and 2) someone good at evaluating arguments and aggregating evidence. Maybe just a forum thread where a moderator keeps a percentage estimate updated in the top post.

Comment by steven0461 on steven0461's Shortform Feed · 2019-07-01T17:01:35.959Z · score: 5 (3 votes) · LW · GW

I would normally trust it more, but it's recently been doing way worse than the Metaculus crowd median (average log score 0.157 vs 0.117 over the sample of 20 yes/no questions that have resolved for me), and based on the details of the estimates that doesn't look to me like it's just bad luck. It does better on the whole set of questions, but I think still not much better than the median; I can't find the analysis page at the moment.
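
For concreteness, here is one common way such an average log score can be computed (a sketch that assumes the score is the mean negative log-likelihood of the realized outcomes, so lower is better; Metaculus's exact scoring rule may differ, and the numbers below are made up for illustration):

```python
import math

def avg_log_score(forecasts, outcomes):
    """Mean negative log-likelihood over binary questions (lower is better).

    forecasts: predicted probability of YES for each question
    outcomes:  True/False resolutions
    """
    total = 0.0
    for p, happened in zip(forecasts, outcomes):
        total += -math.log(p if happened else 1.0 - p)
    return total / len(forecasts)

# Hypothetical numbers for illustration, not the actual question data:
algorithm    = [0.85, 0.80, 0.10, 0.88]
crowd_median = [0.90, 0.88, 0.07, 0.92]
resolutions  = [True, True, False, True]

print(avg_log_score(algorithm, resolutions))     # ~0.155 (worse)
print(avg_log_score(crowd_median, resolutions))  # ~0.097 (better)
```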

Comment by steven0461 on steven0461's Shortform Feed · 2019-06-30T02:46:07.865Z · score: 9 (5 votes) · LW · GW

Considering how much people talk about superforecasters, how come there aren't more public sources of superforecasts? There are prediction markets and sites like ElectionBettingOdds that make it easier to read their odds as probabilities, but only for limited questions. There's Metaculus, but it only shows a crowd median (with a histogram of predictions) and in some cases the result of an aggregation algorithm that I don't trust very much. There's PredictionBook, but it's not obvious how to extract a good single probability estimate from it. Both prediction markets and Metaculus are competitive and disincentivize public cooperation. What else is there if I want to know something like what the probability of war with Iran is?

Comment by steven0461 on Writing children's picture books · 2019-06-27T22:35:21.874Z · score: 9 (4 votes) · LW · GW
how much of the current population would end up underwater if they didn’t move

(and if they didn't adapt in other ways, like by building sea walls)

Comment by steven0461 on Writing children's picture books · 2019-06-27T20:18:33.986Z · score: 9 (2 votes) · LW · GW
I think I’ve heard that, with substantial mitigation effort, the temperature difference might be 2 degrees Celsius from now until the end of the century.

Usually people mean the change from pre-industrial times, not from now; 2 degrees from pre-industrial times means about 1 degree from now.

Comment by steven0461 on Should rationality be a movement? · 2019-06-21T19:24:57.750Z · score: 20 (11 votes) · LW · GW
the development of a new 'mental martial art' of systematically correct reasoning

Unpopular opinion: Rationality is less about martial arts moves than about adopting an attitude of intellectual good faith and consistently valuing impartial truth-seeking above everything else that usually influences belief selection. Motivating people (including oneself) to adopt such an attitude can be tricky, but the attitude itself is simple. Inventing new techniques is good but not necessary.

Comment by steven0461 on Is "physical nondeterminism" a meaningful concept? · 2019-06-19T15:29:19.648Z · score: 5 (3 votes) · LW · GW

What does it mean for a bit to pop into existence? As I see it, if I measure a particle's spin at time t, then it's either timelessly the case that the result is "up" or timelessly the case that the result is "down". Maybe this is an issue of A Theory versus B Theory?

Comment by steven0461 on Discourse Norms: Moderators Must Not Bully · 2019-06-18T22:19:54.220Z · score: 3 (3 votes) · LW · GW

(I agree with this and made the uncle comment before seeing it. Also, my experience wasn't like that most of the time; I think it was mainly that way toward the end of LW 1.0.)

Comment by steven0461 on Discourse Norms: Moderators Must Not Bully · 2019-06-18T22:04:55.626Z · score: 14 (5 votes) · LW · GW

This seems like an argument for the hypothesis that nitpicking is net bad, but not for mr-hire's hypothesis in the great-grandparent comment that nitpicking caused LW 1.0 to have a lot of mediocre content as a second-order effect.

Comment by steven0461 on Is the "business cycle" an actual economic principle? · 2019-06-18T21:38:32.164Z · score: 11 (6 votes) · LW · GW

It's not the gambler's fallacy if recessions are caused by something that builds up over time (but is reset during recessions), like a mismatch between two different variables. In that case, more time having passed means there's probably more of that thing, which means there's more force toward a recession. I have no idea if this is what's actually happening, though.
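
As a toy illustration of why this isn't the gambler's fallacy (purely hypothetical numbers, not a model of any real economy):

```python
# Toy model: a "mismatch" builds up by 0.02 per year, the chance of a
# recession in a given year equals the accumulated mismatch, and the
# mismatch resets to zero whenever a recession happens.
def recession_hazard(years_since_last, buildup_per_year=0.02):
    return min(1.0, years_since_last * buildup_per_year)

# Unlike a fair coin, the conditional probability genuinely rises with
# elapsed time since the last reset:
for gap in (1, 5, 10, 20):
    print(gap, recession_hazard(gap))
```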

Comment by steven0461 on Discourse Norms: Moderators Must Not Bully · 2019-06-18T21:08:01.794Z · score: 5 (4 votes) · LW · GW

Only if nitpicking (or the resulting lower posting volume, or something like that) demotivates good posters more strongly than it demotivates mediocre posters. If this is true, it requires an explanation. My naive guess would be it demotivates mediocre posters more strongly because they're wrong more often.

Comment by steven0461 on In physical eschatology, is Aestivation a sound strategy? · 2019-06-18T19:18:59.016Z · score: 3 (2 votes) · LW · GW

Unimaginably large amounts of theory can often compensate for small amounts of missing empirical data. I can imagine the possibility that all of our current observations truly underdetermine facts about the universe's future large-scale evolution, but it wouldn't be my default guess.

For what it's worth, my intuition agrees that any superintelligence, even if using an aestivation strategy, would leave behind some sort of easily visible side effects, and that there aren't actually any aestivating aliens out there.

Comment by steven0461 on In physical eschatology, is Aestivation a sound strategy? · 2019-06-18T17:32:48.129Z · score: 5 (3 votes) · LW · GW

The computing resources in one star system are already huge and it's not clear to me that you need more than that to be certain for all practical purposes about both the fate of the universe and how best to control it.

Comment by steven0461 on In physical eschatology, is Aestivation a sound strategy? · 2019-06-18T17:13:30.634Z · score: 4 (2 votes) · LW · GW

That doesn't sound like it would work in UDT or similar decision theories. Maybe in Heat Death world there's one me and a thousand Boltzmann brains with other observations (as per the linked post), and in Big Rip world there's only the one me. If I'm standing outside the universe trying to decide what response to the observation that I'm me would have the best consequences, why shouldn't I just ignore the Boltzmann brains? (This is just re-arguing the controversy of how anthropics works, I guess, but considered by itself this argument seems strong to me.)

Comment by steven0461 on In physical eschatology, is Aestivation a sound strategy? · 2019-06-18T15:46:21.038Z · score: 2 (1 vote) · LW · GW
Big Rip now seems more plausible

How so? I looked on the web for a defense of Big Rip being more plausible than heat death but couldn't immediately find it.

Comment by steven0461 on Is "physical nondeterminism" a meaningful concept? · 2019-06-17T18:57:26.436Z · score: 14 (5 votes) · LW · GW

In MWI, the future state of the universe is uniquely determined by the past state of the universe and the laws of physics. In Copenhagen, the future state of the universe isn't uniquely determined by those things, but it is uniquely determined by those things plus a lot of additional bits that represent how each measurement goes. You could either count those bits as part of the state of the universe (in which case Copenhagen is deterministic) or count them as something else (in which case Copenhagen is nondeterministic), so it seems like a matter of convention. The usual convention is not to count the bits as part of the state of the universe, making Copenhagen nondeterministic, but I don't think there's a fully principled, theory-independent way to decide what counts as part of the state of the universe.
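
One way to make the convention-dependence explicit (a schematic sketch, not standard notation): write the dynamics as

$$
s_{t+1} = U(s_t) \quad \text{(MWI)} \qquad \text{vs.} \qquad s_{t+1} = U(s_t, b_t) \quad \text{(Copenhagen)}
$$

where $s_t$ is the state of the universe at time $t$, $U$ encodes the laws of physics, and $b_t$ are the extra bits fixing how each measurement comes out. Copenhagen is deterministic if and only if the $b_t$ are counted as part of the state, and nothing in the formalism forces either bookkeeping choice.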

Comment by steven0461 on Discourse Norms: Moderators Must Not Bully · 2019-06-17T18:36:01.206Z · score: 24 (9 votes) · LW · GW

I suspect being nitpicked is only aversive if you feel the audience is using the nitpicks to dismiss you. People aren't going to leave the site over "I agree with your post, but China has a population of 1.4 billion, not 1.3 billion". They might leave the site over "Your post is nonsense: China has a population of 1.4 billion, not 1.3 billion. Downvoted!" But then the problem isn't that unimportant errors are being pointed out, but that they're being mistaken for important errors, and it's a special case of the problem of people being mistaken in general.

Comment by steven0461 on FB/Discord Style Reacts · 2019-06-15T17:27:53.629Z · score: 5 (2 votes) · LW · GW

A lot of the benefit from reacts would be the ability to distinguish between "this comment makes the thread a little worse given constraints on attention and reading time" and "die, monster, you don't belong in this world". Downvotes are aversive because they come across as a mix of those two despite being mostly the former.

Comment by steven0461 on Discourse Norms: Moderators Must Not Bully · 2019-06-15T17:20:11.531Z · score: 35 (11 votes) · LW · GW

My memory of LW 1.0 is that it had a lot of mediocre content that made me not want to read it regularly.

Comment by steven0461 on Discourse Norms: Moderators Must Not Bully · 2019-06-15T17:09:24.580Z · score: 12 (10 votes) · LW · GW

What's included in "and the like"?

Comment by steven0461 on Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet · 2019-06-01T18:47:18.597Z · score: 2 (1 vote) · LW · GW

See my reply to Rohin above: I wasn't very clear about it in the OP, but I meant to consider questions where the AI knows that no philosophy papers etc. are available.

Comment by steven0461 on Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet · 2019-06-01T18:18:51.107Z · score: 3 (2 votes) · LW · GW

I meant to assume that away:

But we'll assume that her information stays the same while her utility function is being inferred, and she's not doing anything to get more; perhaps she's not in a position to.

In cases where you're not in a position to get more information about your utility function (e.g. because the humans you're interacting with don't know the answer), your behavior won't depend on whether or not you think it would be useful to have more information about your utility function, so someone observing your behavior can't infer the latter from the former.

Maybe practical cases aren't like this, but it seems to me like they'd only have to be like this with respect to at least one aspect of the utility function for it to be a problem.

Paul above seems to think it would be possible to reason from actual behavior to counterfactual behavior anyway, I guess because he's thinking in terms of modeling the agent as a physical system and not just as an agent. I'm confused about that, though, so I haven't responded, and I don't claim he's wrong.

Comment by steven0461 on "But It Doesn't Matter" · 2019-06-01T17:52:21.127Z · score: 10 (6 votes) · LW · GW

This is a valid criticism of the second sentence as it stands, but I think Zack is pointing at a real pattern, where the same person will alternate between suggesting it matters that H is true, and, when confronted with evidence against H, suggesting it doesn't matter whether or not H is true, as an excuse not to change the habit of saying or thinking H.

Comment by steven0461 on Does the Higgs-boson exist? · 2019-05-23T19:32:53.560Z · score: 4 (3 votes) · LW · GW
That is what we mean when we say “quarks exist”: We mean that the predictions obtained with the hypothesis agrees with observations.

That's not literally what we mean. I can easily imagine a universe where quarks don't exist but Omega intervenes to make observations agree with quark-based predictions in response to predictions being made (though not, say, in parts of the universe causally inaccessible to humans). Maybe this is a strawman interpretation, but if so, it's not obvious to me what the charitable interpretation is.

edit: by "quark-based predictions" I mean predictions based on the hypothesis that quarks exist outside of the mind of Omega, including in causally inaccessible parts of the universe

Comment by steven0461 on Getting Out of the Filter Bubble Outside Your Filter Bubble · 2019-05-20T19:42:21.543Z · score: 16 (5 votes) · LW · GW

When people talk about expanding their filter bubbles, it often seems like a partial workaround for a more fundamental problem that they could be addressing directly instead, which is that they don't update negatively enough on hearing surprisingly weak arguments in directions where effort has been expended to find strong arguments. If your bubble isn't representing the important out-of-bubble arguments accurately, you can still gain information by getting them directly from out-of-bubble sources, but if your bubble is biasing you toward in-bubble beliefs, you're not processing your existing information right.

Comment by steven0461 on Disincentives for participating on LW/AF · 2019-05-11T20:43:00.345Z · score: 16 (5 votes) · LW · GW

The expectation of small numbers of long comments instead of large numbers of short comments doesn't fit with my experience of how productive/efficient discourse happens. LW culture expects posts to be referenced forever and it's hard to write for the ages. It's also hard to write for a general audience of unknown composition and hard to trust such an audience not to vote and comment badly in a way that will tax your future attention.

Comment by steven0461 on Has "politics is the mind-killer" been a mind-killer? · 2019-03-26T17:27:45.139Z · score: 12 (3 votes) · LW · GW

On the other hand, whenever you do something, you practice it whether you intend to or not.

Comment by steven0461 on Is Clickbait Destroying Our General Intelligence? · 2018-11-17T23:08:29.583Z · score: 10 (6 votes) · LW · GW
It feels to me like the historical case for this thesis ought to be visible by mere observation to anyone who watched the quality of online discussion degrade from 2002 to 2017.

My impression is that politics is more prominent and more intense than it used to be, and that this is harming people's reasonableness, but that there's been no decline outside of that. I feel like I see fewer outright uninformed or stupid arguments than I used to; probably this has to do with faster access to information and to feedback on reasoning. EA and AI risk memes have been doing relatively well in the 2010s. Maybe that's just because they needed some time to germinate, but it's still worth noting.

Comment by steven0461 on [deleted post] 2018-08-15T01:43:05.806Z

It didn't look to me like my disagreement with your comment was caused by hasty summarization, given how specific your comment was on this point, so I figured this wasn't among the aspects you were hoping people wouldn't comment on. Apparently I was wrong about that. Note that my comment included an explanation of why I thought it was worth making despite your request and the implicit anti-nitpicking motivation behind it, which I agree with.

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-14T20:34:23.255Z · score: 2 (3 votes) · LW · GW

If a moral hypothesis gives the wrong answers on some questions that we don't face, that suggests it also gives the wrong answers on some questions that we do face.

Comment by steven0461 on [deleted post] 2018-08-14T20:19:36.052Z

Moral circle widening groups together two processes that I think mostly shouldn't be grouped together:

1. Changing one's values so the same kind of phenomenon becomes equally important regardless of whom it happens in (e.g. suffering in a human who lives far away)

2. Changing one's values so more different phenomena become important (e.g. suffering in a squid brain)

Maybe if you do it right, #2 reduces to #1, but I don't think that should be assumed.

Comment by steven0461 on [deleted post] 2018-08-14T20:13:57.639Z
“CEV”, i.e. “coherent extrapolated volition”, refers (as I understand it) to the notion of aggregating the extrapolated volition across many (all?) individuals (humans, usually), and to the idea that this aggregated EV will “cohere rather than interfere”. (Aside: please don’t anyone quibble with this hasty definition; I’ve read Eliezer’s paper on CEV and much else about it besides, I know it’s complicated. I’m just pointing at the concept.)

I'll quibble with this definition anyway because I think many people get it wrong. The way I read CEV, it doesn't claim that extrapolated preferences cohere, but specifically picks out the parts that cohere, and it does so in a way that's interleaved with the extrapolation step instead of happening after the extrapolation step is over.

Comment by steven0461 on [deleted post] 2018-08-14T20:08:07.543Z

If it were up to me, I'd use "CEV" to refer to the proposal Eliezer calls "CEV" in his original article (which I think could be cashed out either in a way where applying the concept to subselves makes sense or in a way where that does not make sense), use "extrapolated volition" to refer to the more general class of algorithms that extrapolate people's volitions, and use something like "true preferences" or "ideal preferences" or "preferences on reflection" when the algorithm for finding those preferences isn't important, like in the OP.

If I'm not mistaken, "CEV" originally stood for "Collective Extrapolated Volition", but then Eliezer changed the name when people interpreted it in more of a "tyranny of the majority" way than he intended.

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-12T15:35:02.272Z · score: 12 (3 votes) · LW · GW

In thought experiments about utilitarianism, it's generally a good idea to consider composite beings. A bus is a utility monster in traffic. If it has 30 people in it, its interests count 30 times as much. So maybe there could be things we'd think of as one mind whose internals mapped onto the internals of a bus in a moral-value-preserving way. (I guess the repugnant conclusion is about utility monsters but for quantity instead of quality.)

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-12T14:59:58.304Z · score: 2 (1 vote) · LW · GW

One line of attack against the idea that we should reject the repugnant conclusion is to ask why the lives are barely worth living. If it's because the many people have the same good lives but are p-zombies 99.9999% of the time, I can easily believe that increasing the population until there's more total conscious experience makes the tradeoff worthwhile.

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-12T14:29:03.166Z · score: 5 (3 votes) · LW · GW

I think in the philosophy literature it's generally interpreted as independent of resource constraints. A quick scan of the linked SEP article seems to confirm this. Apart from the question of what Parfit said, it makes a lot of sense to consider the questions of "what is good" and "what is feasible" separately. And people find the claim that sufficiently many barely-good lives are better than fewer happy lives plenty repugnant even if it has no direct implications for population policy. (In my opinion this is largely because a life barely worth living is better than they imagine.)

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-10T19:12:19.611Z · score: 6 (4 votes) · LW · GW

The repugnant conclusion just says "a sufficiently large number of lives barely worth living is preferable to a smaller number of good lives". It says nothing about resources; e.g., it doesn't say that the sufficiently large number can be attained by redistributing a fixed supply.