Comment by steven0461 on Does the Higgs-boson exist? · 2019-05-23T19:32:53.560Z · score: 2 (1 votes) · LW · GW
That is what we mean when we say “quarks exist”: We mean that the predictions obtained with the hypothesis agrees with observations.

That's not literally what we mean. I can easily imagine a universe in which quarks don't exist but Omega intervenes to make observations agree with quark-based predictions whenever predictions are made (though not, say, in parts of the universe causally inaccessible to humans). Maybe this is a strawman interpretation, but if so, it's not obvious to me what the charitable interpretation is.

edit: by "quark-based predictions" I mean predictions based on the hypothesis that quarks exist outside of the mind of Omega, including in causally inaccessible parts of the universe

Comment by steven0461 on Getting Out of the Filter Bubble Outside Your Filter Bubble · 2019-05-20T19:42:21.543Z · score: 15 (4 votes) · LW · GW

When people talk about expanding their filter bubbles, it often seems like a partial workaround for a more fundamental problem that they could be addressing directly instead, which is that they don't update negatively enough on hearing surprisingly weak arguments in directions where effort has been expended to find strong arguments. If your bubble isn't representing the important out-of-bubble arguments accurately, you can still gain information by getting them directly from out-of-bubble sources, but if your bubble is biasing you toward in-bubble beliefs, you're not processing your existing information right.

Comment by steven0461 on Disincentives for participating on LW/AF · 2019-05-11T20:43:00.345Z · score: 13 (4 votes) · LW · GW

The expectation of small numbers of long comments instead of large numbers of short comments doesn't fit with my experience of how productive/efficient discourse happens. LW culture expects posts to be referenced forever and it's hard to write for the ages. It's also hard to write for a general audience of unknown composition and hard to trust such an audience not to vote and comment badly in a way that will tax your future attention.

Comment by steven0461 on Has "politics is the mind-killer" been a mind-killer? · 2019-03-26T17:27:45.139Z · score: 12 (3 votes) · LW · GW

On the other hand, whenever you do something, you practice it whether you intend to or not.

Comment by steven0461 on Is Clickbait Destroying Our General Intelligence? · 2018-11-17T23:08:29.583Z · score: 10 (6 votes) · LW · GW
It feels to me like the historical case for this thesis ought to be visible by mere observation to anyone who watched the quality of online discussion degrade from 2002 to 2017.

My impression is that politics is more prominent and more intense than it used to be, and that this is harming people's reasonableness, but that there's been no decline outside of that. I feel like I see fewer outright uninformed or stupid arguments than I used to; probably this has to do with faster access to information and to feedback on reasoning. EA and AI risk memes have been doing relatively well in the 2010s. Maybe that's just because they needed some time to germinate, but it's still worth noting.

Comment by steven0461 on The ever expanding moral circle · 2018-08-15T01:43:05.806Z · score: 4 (2 votes) · LW · GW

It didn't look to me like my disagreement with your comment was caused by hasty summarization, given how specific your comment was on this point, so I figured this wasn't among the aspects you were hoping people wouldn't comment on. Apparently I was wrong about that. Note that my comment included an explanation of why I thought it was worth making despite your request and the implicit anti-nitpicking motivation behind it, which I agree with.

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-14T20:34:23.255Z · score: 2 (3 votes) · LW · GW

If a moral hypothesis gives the wrong answers on some questions that we don't face, that suggests it also gives the wrong answers on some questions that we do face.

Comment by steven0461 on The ever expanding moral circle · 2018-08-14T20:19:36.052Z · score: 2 (1 votes) · LW · GW

Moral circle widening groups together two processes that I think mostly shouldn't be grouped together:

1. Changing one's values so the same kind of phenomenon becomes equally important regardless of whom it happens in (e.g. suffering in a human who lives far away)

2. Changing one's values so more different phenomena become important (e.g. suffering in a squid brain)

Maybe if you do it right, #2 reduces to #1, but I don't think that should be assumed.

Comment by steven0461 on The ever expanding moral circle · 2018-08-14T20:13:57.639Z · score: 0 (2 votes) · LW · GW
“CEV”, i.e. “coherent extrapolated volition”, refers (as I understand it) to the notion of aggregating the extrapolated volition across many (all?) individuals (humans, usually), and to the idea that this aggregated EV will “cohere rather than interfere”. (Aside: please don’t anyone quibble with this hasty definition; I’ve read Eliezer’s paper on CEV and much else about it besides, I know it’s complicated. I’m just pointing at the concept.)

I'll quibble with this definition anyway because I think many people get it wrong. The way I read CEV, it doesn't claim that extrapolated preferences cohere, but specifically picks out the parts that cohere, and it does so in a way that's interleaved with the extrapolation step instead of happening after the extrapolation step is over.

Comment by steven0461 on The ever expanding moral circle · 2018-08-14T20:08:07.543Z · score: 2 (1 votes) · LW · GW

If it were up to me, I'd use "CEV" to refer to the proposal Eliezer calls "CEV" in his original article (which I think could be cashed out either in a way where applying the concept to subselves makes sense or in a way where that does not make sense), use "extrapolated volition" to refer to the more general class of algorithms that extrapolate people's volitions, and use something like "true preferences" or "ideal preferences" or "preferences on reflection" when the algorithm for finding those preferences isn't important, like in the OP.

If I'm not mistaken, "CEV" originally stood for "Collective Extrapolated Volition", but then Eliezer changed the name when people interpreted it in more of a "tyranny of the majority" way than he intended.

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-12T15:35:02.272Z · score: 12 (3 votes) · LW · GW

In thought experiments about utilitarianism, it's generally a good idea to consider composite beings. A bus is a utility monster in traffic. If it has 30 people in it, its interests count 30 times as much. So maybe there could be things we'd think of as one mind whose internals mapped onto the internals of a bus in a moral-value-preserving way. (I guess the repugnant conclusion is about utility monsters but for quantity instead of quality.)

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-12T14:59:58.304Z · score: 2 (1 votes) · LW · GW

One line of attack against the idea that we should reject the repugnant conclusion is to ask why the lives are barely worth living. If it's because the many people have the same good lives but they're p-zombies 99.9999% of the time, I can easily believe that increasing the population until there's more total conscious experiences makes the tradeoff worthwhile.

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-12T14:29:03.166Z · score: 5 (3 votes) · LW · GW

I think in the philosophy literature it's generally interpreted as independent of resource constraints. A quick scan of the linked SEP article seems to confirm this. Apart from the question of what Parfit said, it makes a lot of sense to consider the questions of "what is good" and "what is feasible" separately. And people find the claim that sufficiently many barely-good lives are better than fewer happy lives plenty repugnant even if it has no direct implications for population policy. (In my opinion this is largely because a life barely worth living is better than they imagine.)

Comment by steven0461 on Logarithms and Total Utilitarianism · 2018-08-10T19:12:19.611Z · score: 6 (4 votes) · LW · GW

The repugnant conclusion just says "a sufficiently large number of lives barely worth living is preferable to a smaller number of good lives". It says nothing about resources; e.g., it doesn't say that the sufficiently large number can be attained by redistributing a fixed supply.

Agents That Learn From Human Behavior Can't Learn Human Values That Humans Haven't Learned Yet

2018-07-11T02:59:12.278Z · score: 28 (14 votes)
Comment by steven0461 on A Rationalist Argument for Voting · 2018-06-13T16:48:57.208Z · score: 8 (1 votes) · LW · GW

By "following" I just meant "paying attention to", which is automatically not low cost. I think it's plausible that you could make decent decisions without paying any attention, but in practice people who think about rationalist arguments for/against voting do pay attention, and would pay less attention (perhaps 10-100 hours' worth per election?) if they didn't vote.

Comment by steven0461 on A Rationalist Argument for Voting · 2018-06-13T16:36:46.000Z · score: 8 (1 votes) · LW · GW

Thanks, I did mean per hour and I'll edit it. I think my impression of people's lightcones per hour is higher than yours.

As a stupid model, suppose lightcone quality has a term of 1% * ln(x) or 10% * ln(x), where x is the size/power of the x-risk movement. (Various hypotheses under which the x-risk movement has surprisingly low long-term impact, e.g. humanity is surrounded by aliens or there's some sort of moral convergence, also imply elections have no long-term impact, so maybe we should be estimating something like the quality of humanity's attempted inputs into optimizing the lightcone.) Then you only need to increase x by 0.01% or 0.001% to win a microlightcone per lifetime. I think there are hundreds or thousands of people who can achieve this level of impact. (Or rather, I think hundreds or thousands of lifetimes' worth of work with this level of impact will be done, and the number of people who could add some of these hours if they chose to is greater than that.)

Of course, at this point it matters to estimate the parameters more accurately than to the nearest order of magnitude or two. (For example, Trump vs. Clinton was probably more closely contested than my numbers above, even in terms of expectations before the fact.) Also, of course, putting this much analysis into deciding whether to vote is more costly than voting, so the point is mostly to help us understand similar but different questions.
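Spelling that toy model out numerically (a minimal sketch; the coefficients and the increases in x are only the illustrative values above, not serious estimates):

```python
import math

# Toy version of the "stupid model" above: lightcone quality includes a term
# c * ln(x), where x is the size/power of the x-risk movement. The coefficients
# and the relative increases in x are illustrative values, not estimates.

def quality_gain(c, relative_increase):
    """Change in the c * ln(x) term when x grows by the given fraction."""
    return c * math.log(1 + relative_increase)

print(quality_gain(0.01, 1e-4))  # ~1e-6: 1% coefficient, x up 0.01% -> about a microlightcone
print(quality_gain(0.10, 1e-5))  # ~1e-6: 10% coefficient, x up 0.001% -> about a microlightcone
```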

Comment by steven0461 on A Rationalist Argument for Voting · 2018-06-12T19:29:54.429Z · score: 13 (4 votes) · LW · GW

The real cost of voting is mostly the cost of following politics. Maybe you could vote without following politics and still make decent voting decisions, but that's not a decision people often make in practice.

Comment by steven0461 on A Rationalist Argument for Voting · 2018-06-12T19:20:38.500Z · score: 8 (1 votes) · LW · GW
With millions of voters, the chance that you are correlated to thousands of them is much better.

It seems to me there are also millions of potential acausal trade partners in non-voting contexts, e.g. in the context of whether to spend most of your effort egoistically or altruistically and toward which cause, whether to obey the law, etc. The only special feature of voting that I can see is it gives you a share in years' worth of policy at the cost of only a much smaller amount of your time, making it potentially unusually efficient for altruists.

Comment by steven0461 on A Rationalist Argument for Voting · 2018-06-12T19:01:27.578Z · score: 10 (2 votes) · LW · GW

Naive and extremely rough calculation that doesn't take logical correlations into account: If you're in the US and your uncertainty about vote counts is in the tens of millions and the expected vote difference between candidates is also in the tens of millions, then the expected number of elections swayed by the marginal vote might be 1 in 100 million (because almost-equal numbers of votes have lower probability density). If 0.1% of the quality of our future lightcone is at stake, voting wins an expected 10 picolightcones. If voting takes an hour, then it's worth it iff you're otherwise winning less than 10 picolightcones per hour. If a lifetime is 100,000 hours, that means less than a microlightcone per lifetime. The popular vote doesn't determine the outcome, of course, so the relevant number is much smaller in a non-swing state and larger in a swing state or if you're trading votes with someone in a swing state.
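A minimal sketch of that arithmetic, with every number an order-of-magnitude placeholder from the comment rather than a real estimate:

```python
# Back-of-envelope version of the calculation above; all numbers are
# order-of-magnitude placeholders, not estimates I'd defend.

p_sway = 1e-8              # ~1 in 100 million chance the marginal vote sways the election
lightcone_at_stake = 1e-3  # 0.1% of future lightcone quality riding on the outcome
hours_per_vote = 1.0       # time cost of voting
lifetime_hours = 1e5       # ~100,000 hours in a lifetime

ev_per_vote = p_sway * lightcone_at_stake      # expected lightcones won by voting
ev_per_hour = ev_per_vote / hours_per_vote
breakeven_per_lifetime = ev_per_hour * lifetime_hours

print(f"{ev_per_vote:.0e} lightcones per vote")                # 1e-11, i.e. 10 picolightcones
print(f"{breakeven_per_lifetime:.0e} lightcones per lifetime") # 1e-06, i.e. a microlightcone
```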

Comment by steven0461 on A Rationalist Argument for Voting · 2018-06-12T18:31:40.202Z · score: 19 (3 votes) · LW · GW

If your decision is determined by an x-risk perspective, it seems to me you only correlate with others whose decision is determined by an x-risk perspective, and logical correlations become irrelevant because their votes decrease net x-risk if and only if yours does (on expectation, after conditioning on the right information). This doesn't seem to be the common wisdom, so maybe I'm missing something. At least a case for taking logical correlations into account here would have to be more subtle than the more straightforward case for acausal cooperation between egoists.

Comment by steven0461 on [deleted post] 2018-06-11T19:54:06.937Z

LW is a public website existing in a conflict-theorist world. My impression is discussions on this subject and various others are doomed to be "fake" in the sense that important considerations will be left out, and will provide material for critics to misrepresent as being typical of rationalists. If I recall correctly, a somewhat similar thread on LW 1.0 (I can't immediately find it, but it involved someone being on fire as a metaphor) turned into a major blow-up that people left the site over. I don't see any upside to outweigh these downsides. Maybe there's honor in being able to handle this, but if we can't handle this, then that doesn't mean it will help to act as if we can.

Comment by steven0461 on [deleted post] 2018-05-30T20:07:07.496Z

I agree that it doesn't affect many users and didn't mean to claim it should be a priority.

Comment by steven0461 on [deleted post] 2018-05-30T20:01:57.358Z

I didn't mean to argue that this deserves mod attention, just that it shouldn't have been posted or commented on.

Comment by steven0461 on [deleted post] 2018-05-30T19:30:00.098Z

the people who come to a thread like this more or less know what they're getting into

That's true to an extent, but humans are notorious for clicking on web links against their better judgment, and comments here appear in people's comment histories at least.

Comment by steven0461 on [deleted post] 2018-05-30T19:22:43.995Z

I strongly suspect that it's harmful for LessWrong to have unpersuasive posts arguing for unpopular views on emotionally fraught, low value topics, and that it's harmful for LessWrong to have object-level comments on such posts.

Comment by steven0461 on Expressive Vocabulary · 2018-05-24T20:38:21.039Z · score: 12 (4 votes) · LW · GW

You can solve this by adding scare quotes or the phrase "so to speak". E.g., "That brand of dip is full of 'chemicals', so to speak." That way, you're safe from pedants without intruding on the existing meaning of the word "chemicals".

Comment by steven0461 on Mental Illness Is Not Evidence Against Abuse Allegations · 2018-05-13T22:12:31.931Z · score: 16 (4 votes) · LW · GW

I expect that these effects come largely from mentally ill people being in worse circumstances, and that they largely disappear if you condition on circumstances, which it seems like you can usually do in practice.

Comment by steven0461 on Metaphilosophical competence can't be disentangled from alignment · 2018-04-01T22:41:45.273Z · score: 12 (4 votes) · LW · GW

I think the number of safe people depends sensitively on the details of the 1,000,000,000,000,000xing. For example: Were they given a five minute lecture on the dangers of value lock-in? On the universe's control panel, is the "find out what would I think if I reflected more, and what the actual causes are of everyone else's opinions" button more prominently in view than the "turn everything into my favorite thing" button? And so on.

Comment by steven0461 on Browser Bug Hunt for LessWrong.com migration · 2018-03-23T15:10:17.515Z · score: 3 (1 votes) · LW · GW

I've seen a gray (or blue?) oscillating oval at the bottom of some pages on Android + Chrome in the past. I haven't tried to type into it. I don't think it's there now.

Comment by steven0461 on Strengthening the foundations under the Overton Window without moving it · 2018-03-23T15:01:53.655Z · score: 7 (2 votes) · LW · GW

user:tempus has been reposting his reply to this comment from many different accounts (without also reposting my reply). Meanwhile, I think the parent comment received multiple downvotes. I think the same may be true of user:gjm's comment below. If these downvotes are from legitimate users, then I apologize, but if I happened to be hunting for further user:tempus sockpuppets, that's where I'd look.

Comment by steven0461 on Reductionism Revisited · 2018-03-21T16:47:03.018Z · score: 11 (3 votes) · LW · GW
It you can’t the alignment problem of getting yourself to sleep and wake up on time, expect to hurt yourself trying to save the world.

I disagree with this example. Bad wake and sleep times are often a physiological problem. Such problems, like any other problems, can sometimes be solved with competence and good decision making, but this post suggests sleeping at the right times is just a matter of playing the good decision making game on easy mode, and that definitely hasn't been my experience. (I've gone from a terrible sleep cycle to a great sleep cycle through strategies such as aging.)

Comment by steven0461 on Internal Double Crux · 2018-03-21T03:05:06.116Z · score: 14 (3 votes) · LW · GW

Most dream descriptions that I've come across have sounded to me like the output of Magic Realism Bot, which is arguably somewhat useful/interesting/coherent but definitely has no intentional content.

Comment by steven0461 on Internal Double Crux · 2018-03-19T20:00:17.189Z · score: 3 (1 votes) · LW · GW

This all seems to match my experience. I've looked for differences between my Dutch language mind and English language mind and nothing stood out. (They're more similar languages and cultures, of course.) Dreams seem very random, more like a monkey hitting my brain's narrative soundboard than like a story with an author.

Comment by steven0461 on Internal Double Crux · 2018-03-19T17:20:57.386Z · score: 30 (7 votes) · LW · GW

(edit: on second thought, feel free to delete this, because I should think more about how to frame the discussion, and this is probably not the best place for it)

Most small conflicts are just battles in raging wars between two giant elephants in the brain.

People keep telling me I contain multiple agents, but I subjectively feel like a single coherent agent working within non-agenty constraints of pain, pleasure, stupidity, and ignorance, I don't experience different voices struggling for control, and I haven't gotten much mileage out of modeling my mind as a battleground or parliament of selves. So it seems like either I'm confused, or you're confused, or we use words differently, or we're both right about ourselves but I'm atypical. How do I check which of these is true?

Comment by steven0461 on Values determined by "stopping" properties · 2018-03-16T15:49:00.667Z · score: 3 (1 votes) · LW · GW

It seems to me that the fact that we're having conversations like this implies that there's some meta level where we agree on the "rules of the game".

Comment by steven0461 on Strengthening the foundations under the Overton Window without moving it · 2018-03-16T15:35:11.673Z · score: 3 (1 votes) · LW · GW

All I'm saying is that near-unanimous agreement about P in the relevant scientific field is pretty strong probabilistic evidence for P, and that reasonable people are more likely than unreasonable people to take probabilistic evidence into account, so if all you know is that someone disagrees with P and hasn't heard the best arguments, such near-unanimous agreement constitutes probabilistic evidence against that person being reasonable.
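As a toy Bayes calculation of that inference pattern (every probability here is a made-up illustrative number, not a claim about any particular field or person):

```python
# Toy Bayes calculation for the inference above; all probabilities are made up.

p_true = 0.95  # credence in P after updating on near-unanimous expert agreement

# Chance that someone who hasn't heard the best arguments disagrees with P,
# depending on whether they're reasonable and whether P is actually true:
p_disagree = {
    ("reasonable", True): 0.10,   ("reasonable", False): 0.70,
    ("unreasonable", True): 0.40, ("unreasonable", False): 0.60,
}
prior_reasonable = 0.5

def posterior_reasonable():
    """P(reasonable | they disagree with P), marginalizing over whether P is true."""
    posts = {}
    for kind, prior in (("reasonable", prior_reasonable),
                        ("unreasonable", 1 - prior_reasonable)):
        lik = p_true * p_disagree[(kind, True)] + (1 - p_true) * p_disagree[(kind, False)]
        posts[kind] = prior * lik
    return posts["reasonable"] / sum(posts.values())

print(round(posterior_reasonable(), 2))  # 0.24: disagreement is evidence against reasonableness
```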

Comment by steven0461 on Appropriateness of Discussing Rationalist Discourse of a Political Nature on LW? · 2018-03-15T21:42:53.441Z · score: 12 (3 votes) · LW · GW
All magazines with a long history started out with a short history at some point, and presumably they do not generally change their names when the history is long enough.

But these magazines mostly took names typical of their own time instead of names typical of times before their own time, so when they were young magazines, readers weren't misled into thinking they were old magazines. (In other words, the argument isn't that magazines should be named so as to suggest the right age, but that they should be named so as to suggest the right date of birth.)

Comment by steven0461 on Yoda Timers 3: Speed · 2018-03-15T18:23:12.606Z · score: 9 (2 votes) · LW · GW

I've sometimes found it helpful to subvocalize the word "go" as often as possible.

Comment by steven0461 on Appropriateness of Discussing Rationalist Discourse of a Political Nature on LW? · 2018-03-15T18:12:35.199Z · score: 17 (5 votes) · LW · GW
turns out there are plenty of other mindkillers and banning one doesn't make the other ones go away

I don't understand this argument. The claim was never that banning politics would drive out most or all mindkill; just that it would drive out politics-related mindkill and that this was worth the cost.

LW discouraged politics, but didn't ban it in any consistent way. Consequences included David Gerard's hate campaign, Eugine_Nier's long-running sockpuppet voting abuses, and (as far as I can tell mostly false) associations in the public mind with neoreactionaries.

We can also assess our ability to manage what happens when a person is being mindkilled in a comment thread; if we can't handle it in a political discussion then we probably can't handle it elsewhere either.

Plenty of political threads already exist in the LW archives. We do have different commenters and a different karma system now (though if the site migrates, the commenter base might return more to what it used to be), so maybe that's not a good reason not to have another such thread.

I do worry that many harms from political mindkill would be subtle. Political discussion might draw in commenters with different interests from the site, might be divisive in that it creates awareness of people being on different "sides", might create grudges even if we succeed at downvoting needless hostility, and would take up people's time and attention. None of those would necessarily be visible in a train wreck kind of way.

Comment by steven0461 on Strengthening the foundations under the Overton Window without moving it · 2018-03-15T15:43:34.175Z · score: 3 (1 votes) · LW · GW

It's a pretty strong authority, so it affects what inferences you should make about the reasonableness of people who hold the belief.

I'm not arguing that anyone who disagrees with a scientific majority is automatically unreasonable, or that the present is right on all points where the past and present disagree, if that's what you're worried about.

Comment by steven0461 on Values determined by "stopping" properties · 2018-03-15T15:35:21.846Z · score: 6 (2 votes) · LW · GW
For example, if the human values are determined by human feedback, then we can forbid the AI from coercing the human in any way, or restrict it to only using some methods (such as relaxed conversation).

It seems to me the natural way to do this is by looking for coherence among all the possible ways the AI could ask the relevant questions. This seems like something you'd want anyway, so if there's a meta-level step at the beginning where you consult humans on how to consult their true selves, you'd get it for free. Maybe the meta-level step itself can be hacked through some sort of manipulation, but it seems at least harder (and probably there'd be multiple levels of meta).
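A minimal sketch of the coherence idea, assuming a hypothetical ask_human(question) oracle (e.g. a learned model of the human); this is my gloss, not anything from the original post:

```python
from collections import Counter

def coherent_answer(ask_human, phrasings, agreement_threshold=0.9):
    """Ask the same underlying question under many phrasings and accept the
    modal answer only if enough phrasings agree; otherwise treat the question
    as unresolved rather than locking in a framing-dependent answer."""
    answers = [ask_human(q) for q in phrasings]
    answer, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= agreement_threshold:
        return answer
    return None  # answers depend too much on framing; kick it up to the meta level
```

An answer pattern that shifts with the framing, as it would under manipulation, comes back as None instead of being adopted.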

Comment by steven0461 on Values determined by "stopping" properties · 2018-03-15T15:29:47.147Z · score: 8 (2 votes) · LW · GW
nevertheless it seems possible to state that some values are further away from this undefined starting point than others (paperclipers are very far, money-maximiser quite far, situations where recognisably human beings do recognisably human stuff are much closer)

Whether a value system recommends creating humans doing human stuff depends not just on the value system but also on the relative costs of creating humans doing human stuff versus creating other good things. So it seems like defining value distance requires either making some assumptions about the underlying universe, or somehow measuring the distance between utility functions and not just the distance between recommendations. Maybe you'd end up with something like "if a hedon is a thousand times cheaper than a unit of eudaimonia and the values recommend using the universe's resources for hedons, that means the values are very distant from ours, but if a hedon is a million times cheaper than a unit of eudaimonia and the values recommend using the universe's resources for hedons, the values could still be very close to ours".
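To make that concrete, here's a toy illustration in which one fixed utility function yields different recommendations depending on relative costs; the weights and cost ratios are arbitrary illustrative numbers:

```python
# Toy illustration: one fixed utility function, two universes differing only in
# how cheap hedons are, opposite recommendations. All numbers are arbitrary.

def best_use(budget, value_per_hedon, value_per_eudaimon, cost_per_hedon, cost_per_eudaimon):
    """Which single good would this value system spend the whole budget on?"""
    hedon_total = (budget / cost_per_hedon) * value_per_hedon
    eudaimon_total = (budget / cost_per_eudaimon) * value_per_eudaimon
    return "hedons" if hedon_total > eudaimon_total else "eudaimonia"

# "Their" values: a unit of eudaimonia is worth 10,000 hedons (arguably close to ours).
for hedon_cheapness in (1_000, 1_000_000):  # how many times cheaper a hedon is
    print(hedon_cheapness, best_use(1.0, 1, 10_000, 1, hedon_cheapness))
# 1000    -> eudaimonia: same recommendation as a purely eudaimonia-focused system
# 1000000 -> hedons: the recommendation flips with the cost ratio, though the values didn't change
```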

Comment by steven0461 on Strengthening the foundations under the Overton Window without moving it · 2018-03-14T16:21:53.409Z · score: 10 (3 votes) · LW · GW
Since it took the whole of humanity thousands of years to reject V, even if these new humans are especially smart and moral, they probably do not each have the resources to personally out-reason the whole of civilization for thousands of years.

New humans, even those who don't know the arguments, have background knowledge that the past didn't have. There are many false beliefs V such that past naive humans could reasonably believe V and present naive humans can reasonably believe V, but also many false beliefs V such that past naive humans could reasonably believe V but present naive humans can't reasonably believe V. (Maybe a reasonable person without domain knowledge could doubt evolution before it was known that almost all biologists would end up believing in it, but not after. Maybe it takes much more unreasonableness to be a fascist after WW2 and the Holocaust than before. And so on.) I think a lot of disagreement about whether to argue with people who believe V is driven by disagreement about how obvious not-V is given general modern background knowledge instead of by disagreement about general policy toward people of different reasonableness levels.

Comment by steven0461 on Person-moment affecting views · 2018-03-07T14:49:29.277Z · score: 10 (3 votes) · LW · GW

The closest view to person-affecting ethics that makes any sense to me is something like "it's hard for future lives to have much positive value except when seen as part of an organic four-dimensional human civilization, like notes in a piece of music, and individual survival is a special case of this". (If this were true, I'm not sure if it would limit the number of people whose lives could eventually have much positive value. I'm specifying positive value here because it seems plausible that there's an asymmetry between positive and negative, like how a good note outside a musical piece can be only slightly beautiful and nails on chalkboard outside a musical piece can still be very ugly.)

Comment by steven0461 on Arguments about fast takeoff · 2018-02-28T06:07:19.200Z · score: 2 (1 votes) · LW · GW

(2) was only meant as a claim about AGI effort needed to reach seed AI (perhaps meaning "something good enough to count as an upper bound on what it would take to originate a stage of the intelligence explosion that we agree will be very fast because of recursive self-improvement and copying"). Then between seed AI and superintelligence, a lot of additional R&D (mostly by AI) could happen in little calendar time without contradicting (2). We can analyze the plausibility of (2) separately from the question of what its consequences would be. (My guess is you're already taking all this into account and still think (2) is unlikely.)

Maybe I should have phrased the intuition as: "If you predict sufficiently many years of sufficiently fast AI acceleration, the total amount of pressure on the AGI problem starts being greater than I might naively expect is needed to solve it completely."

(For an extreme example, consider a prediction that the world will have a trillion ems living in it, but no strongly superhuman AI until years later. I don't think there's any plausible indirect historical evidence or reasoning based on functional forms of growth that could convince me of that prediction, simply because it's hard to see how you can have millions of Von Neumanns in a box without them solving the relevant problems in less than a year.)

Comment by steven0461 on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-27T16:04:33.737Z · score: 13 (5 votes) · LW · GW

I suspect in practice the epistemic status of a post is signaled less by what it says under "epistemic status" and more by facts about where it's posted, who will see it, and how long it will remain accessible.

Sites acquire entrenched cultures. "The medium is the message" and the message of a LessWrong post is "behold my eternal contribution to the ivory halls of knowledge".

A chat message will be seen only by people in the chat room, a tweet will be seen mostly by people who chose to follow you, but it's much harder to characterize the audience of a LessWrong post or comment, so these will feel like they're aimed at convincing or explaining to a general audience, even if they say they're not.

In my experience, playing with ideas requires frequent low-stakes high-context back-and-forths, and this is in tension with each comment appearing in various feeds and remaining visible forever to everyone reading the original post, which might have become the standard reference for a concept. So I think LessWrong has always been much better for polished work than playing with ideas, and changing this would require a lot of tradeoffs that might not be worth it.

Comment by steven0461 on More on the Linear Utility Hypothesis and the Leverage Prior · 2018-02-27T14:46:35.212Z · score: 6 (2 votes) · LW · GW
Like, why would I care so much whether experiences are instantiated in this piece of universe over here or that piece of universe over there, if there's no real sense in which there is more of the experience if it is instantiated in one place than if it is instantiated in the other?

I suspect this question has a similar answer to the question "why would I care so much whether physical phenomena can be interpreted as minds with a simple program or with an extremely complicated program?" E.g., consider the case where locating a mind in space and time takes as many bits as reading it into the activity of your city's traffic lights. If the latter involves a measure that's physically real and the former does not, then I don't think I understand what you mean by that. Measures seem like the kind of thing that can be natural or unnatural but not physically real or physically not real.

It doesn't seem to me like we have especially strong reasons to believe such a measure to exist, and we certainly shouldn't believe that there is such a measure with probability 1. So you still have to decide what your preferences are in the absence of an objective probability measure on the universe.

Any probabilistic mix of moral theories has to decide this, and it's not clear to me that it's more of a problem for a mix that uses linear utility in the bounded case than for a mix that uses nonlinear utility in the bounded case. When we're not sure if alternative moral theories are even coherent, we're more in the domain of moral uncertainty than straightforward EU maximization. Utils in a bounded moral universe and an unbounded moral universe don't seem like the same type of thing; my intuition is there's no one-to-one calibration of bounded-moral-universe-utils to unbounded-moral-universe-utils that makes sense, and someone who accepts linear utility conditional on the moral universe being bounded isn't forced to also accept linear utility conditional on the moral universe being unbounded.

Comment by steven0461 on Arguments about fast takeoff · 2018-02-26T18:37:26.337Z · score: 2 (1 votes) · LW · GW

Here's an argument why (at least somewhat) sudden takeoff is (at least somewhat) plausible.

Supposing:

(1) At some point P, AI will be as good as humans at AI programming (grandprogramming, great-grandprogramming, ...) by some reasonable standard, and less than a month later, a superintelligence will exist.

(2) Getting to point P requires AI R&D effort roughly comparable to total past AI R&D effort.

(3) In an economy growing quickly because of AI, AI R&D effort increases by at least the same factor as general economic growth.

Then:

(4) Based on (3), if there's a four year period during which economic growth is ten times normal because of AI (roughly corresponding to a four year doubling period), then AI R&D effort during that period is also at least ten times normal.

(5) Because 4*10=40 and because of additional R&D effort between now and the start of the four year period, total AI R&D effort between now and the end of such a period would be at least roughly comparable to total AI R&D effort until now.

(6) Therefore, based on (2) and (1), at most a month after the end of the first four year doubling period, a superintelligence will exist.

I think (1) is probable and (2) is plausible (but plausibly false). I'm more confused about (3), but it doesn't seem wrong.
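To make the bookkeeping in (4) and (5) explicit (a toy calculation; the multipliers are just the illustrative numbers from the argument):

```python
# Toy bookkeeping for steps (4)-(5), measuring AI R&D effort in "normal years"
# (one normal year = today's annual effort). The multipliers are the
# illustrative numbers from the argument, not forecasts.

fast_period_years = 4    # the hypothesized four year doubling period
growth_multiplier = 10   # per (3): R&D effort grows at least as fast as the 10x economy

effort_in_fast_period = fast_period_years * growth_multiplier
print(effort_in_fast_period)  # 40 normal years, before counting pre-period effort

# If total past AI R&D effort amounts to a few decades of normal years (roughly
# what (5) leans on), then by (2) this is enough to reach point P, and by (1)
# superintelligence follows within about a month.
```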

There's a lot of room to doubt as well as sharpen this argument, but I hope the intuition is clear. Something like this comes out if I introspect on why it feels easier to coherently imagine a sudden than a gradual takeoff.

If there's a hard takeoff claim I'm 90% sure of, though, the claim is more like (1) than (6); more like "superintelligence comes soon after an AI is a human-level programmer/researcher" than like "superintelligence comes soon after AI (or some other technology) causes drastic change". So as has been said, the difference of opinion isn't as big as it might at first seem.

Comment by steven0461 on Marginal Revolution Thoughts on Black Lives Matter Movement · 2017-01-22T18:24:44.512Z · score: 3 (3 votes) · LW · GW

Random opinions on hot-button political issues are off-topic, valueless, and harmful; please take them elsewhere.

Comment by steven0461 on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-22T18:15:13.311Z · score: 3 (3 votes) · LW · GW

"How do you get a clean sewer system if you insist on separating it from the rest of the city?"

Meetup : San Jose Meetup: Park Day (X)

2016-11-28T02:46:20.651Z · score: 0 (1 votes)

Meetup : San Jose Meetup: Park Day (IX), 3pm

2016-11-01T15:40:19.623Z · score: 0 (1 votes)

Meetup : San Jose Meetup: Park Day (VIII)

2016-09-06T00:47:23.680Z · score: 0 (1 votes)

Meetup : San Jose Meetup: Park Day (VII)

2016-08-15T01:05:00.237Z · score: 0 (1 votes)

Meetup : San Jose Meetup: Park Day (VI)

2016-07-25T02:11:44.237Z · score: 0 (1 votes)

Meetup : San Jose Meetup: Park Day (V)

2016-07-04T18:38:01.992Z · score: 0 (1 votes)

Meetup : San Jose Meetup: Park Day (IV)

2016-06-15T20:29:04.853Z · score: 0 (1 votes)

Meetup : San Jose Meetup: Park Day (III)

2016-05-09T20:10:55.447Z · score: 0 (1 votes)

Meetup : San Jose Meetup: Park Day (II)

2016-04-20T06:23:28.685Z · score: 0 (1 votes)

Meetup : San Jose Meetup: Park Day

2016-03-30T04:39:09.532Z · score: 1 (2 votes)

Meetup : Amsterdam

2013-11-12T09:12:31.710Z · score: 4 (5 votes)

Bayesian Adjustment Does Not Defeat Existential Risk Charity

2013-03-17T08:50:02.096Z · score: 43 (46 votes)

Meetup : Chicago Meetup

2011-09-28T04:29:35.777Z · score: 3 (4 votes)

Meetup : Chicago Meetup

2011-07-07T15:28:57.969Z · score: 2 (3 votes)

PhilPapers survey results now include correlations

2010-11-09T19:15:47.251Z · score: 6 (7 votes)

Chicago Meetup 11/14

2010-11-08T23:30:49.015Z · score: 8 (9 votes)

A Fundamental Question of Group Rationality

2010-10-13T20:32:08.085Z · score: 10 (11 votes)

Chicago/Madison Meetup

2010-07-15T23:30:15.576Z · score: 9 (10 votes)

Swimming in Reasons

2010-04-10T01:24:27.787Z · score: 8 (17 votes)

Disambiguating Doom

2010-03-29T18:14:12.075Z · score: 16 (17 votes)

Taking Occam Seriously

2009-05-29T17:31:52.268Z · score: 28 (28 votes)

Open Thread: May 2009

2009-05-01T16:16:35.156Z · score: 4 (5 votes)

Eliezer Yudkowsky Facts

2009-03-22T20:17:21.220Z · score: 137 (216 votes)

The Wrath of Kahneman

2009-03-09T12:52:41.695Z · score: 25 (26 votes)

Lies and Secrets

2009-03-08T14:43:22.152Z · score: 14 (25 votes)