Posts

The Backup Plan 2011-10-13T19:53:06.941Z

Comments

Comment by Luke_A_Somers on Why Bayesians should two-box in a one-shot · 2017-12-26T00:27:10.557Z · LW · GW

If you find an Omega, then you are in an environment where Omega is possible. Perhaps we are all simulated and QM is optional. Maybe we have easily enough determinism in our brains that Omega can make predictions, much as quantum mechanics ought to in some sense prevent predicting where a cannonball will fly but in practice does not. Perhaps it's a hypothetical where we're AI to begin with so deterministic behavior is just to be expected.

Comment by Luke_A_Somers on Why Bayesians should two-box in a one-shot · 2017-12-26T00:11:58.072Z · LW · GW

I think the more relevant case is when the random noise is imperceptibly small. Of course you two-box if it's basically random.

Comment by Luke_A_Somers on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-12T01:41:29.291Z · LW · GW

… you don't think that pissing away credibility could weaken the arguments? I think presenting those particular arguments is more likely to do that than it is to work.

Comment by Luke_A_Somers on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-08T17:56:54.491Z · LW · GW

I suspect that an AI will have a bullshit detector. We want to avoid setting it off.

Comment by Luke_A_Somers on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-11-29T23:53:01.805Z · LW · GW

I read up to 3.1. The arguments in 3.1 are weak. It seems dubious that any AI would not be aware of the risks pertaining to disobedience. Persuasion to be corrigible seems too late: either it would already work, because the AI's goals were made sufficiently indirect that this question would be obvious and pressing, or it doesn't care to have 'correct' goals in the first place; I really don't see how persuasion would help. The arguments for allowing itself to be turned off are especially weak, doubly especially the MWI one.

Comment by Luke_A_Somers on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-11-27T17:11:10.896Z · LW · GW

See: my first post on this site.

Comment by Luke_A_Somers on Fables grow around missed natural experiments · 2017-11-13T22:34:07.876Z · LW · GW

What do you mean by natural experiment, here? And what was the moral, anyway?

Comment by Luke_A_Somers on Toy model of the AI control problem: animated version · 2017-10-10T12:50:21.139Z · LW · GW

I remember poking at that demo to try to actually get it to behave deceptively - with the rules as he laid them out, the optimal move was to do exactly what the humans wanted it to do!

Comment by Luke_A_Somers on The Reality of Emergence · 2017-09-15T01:59:56.762Z · LW · GW

I understand EY thinks that if you simulate enough neurons sufficiently well you get something that's conscious.

Without specifying the arrangements of those neurons? Of course it should if you copy the arrangement of neurons out of a real person, say, but that doesn't sound like what you meant.

Comment by Luke_A_Somers on The Reality of Emergence · 2017-09-13T01:34:32.276Z · LW · GW

I would really want a cite on that claim. It doesn't sound right.

Comment by Luke_A_Somers on The Reality of Emergence · 2017-09-13T01:34:07.694Z · LW · GW

Like many cases of motte-and-bailey, the motte is mainly held by people who dislike the bailey. I suspect that an average scientist in a relevant field somewhere at or below neurophysics in the generality hierarchy (e.g. a chemist or physicist, but not a sociologist) would consider that bailey to be… unlikely at best, while holding the motte very firmly.

Comment by Luke_A_Somers on The Contrarian Sequences · 2017-09-02T16:14:52.774Z · LW · GW

This looks promising.

Also, the link to the Reality of Emergence is broken.

Comment by Luke_A_Somers on Intrinsic properties and Eliezer's metaethics · 2017-08-31T15:58:18.150Z · LW · GW

1) You could define the shape criteria required to open lock L, and then the object reference would fall away. And, indeed, this is how keys usually work. Suppose I have a key with tumbler heights 0, 8, 7, 1, 4, 9, 2, 4. This is an intrinsic property of the key. That is what it is.

Locks can have the same set of tumbler heights, and there is then a relationship between them. I wouldn't even consider it so much an extrinsic property of the key itself as a relationship between the intrinsic properties of the key and the lock.
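A minimal sketch of the distinction (the class names and the exact-equality matching rule are illustrative assumptions; real pin tumblers tolerate some slop): the heights are plain attributes of each object, while "opens" lives on neither object and is just a relation over both.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Key:
    heights: Tuple[int, ...]  # intrinsic: just a property of the key

@dataclass(frozen=True)
class Lock:
    heights: Tuple[int, ...]  # intrinsic: just a property of the lock

def opens(key: Key, lock: Lock) -> bool:
    # "opens" is not stored on either object; it is a relation
    # between their intrinsic properties
    return key.heights == lock.heights

k = Key((0, 8, 7, 1, 4, 9, 2, 4))
print(opens(k, Lock((0, 8, 7, 1, 4, 9, 2, 4))))  # True
print(opens(k, Lock((1, 1, 1, 1, 1, 1, 1, 1))))  # False
```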

2) Metaethics is a function from cultural situations and moral intuitions into a space of ethical systems. This function is not onto (i.e. not every coherent ethical system is the result of metaethical analysis on some cultural situation and moral intuitions), and it is not at all guaranteed to yield the same ethical system in use in that cultural situation. This is a very significant difference from moral relativism, not a mere slight increase in temperature.

Comment by Luke_A_Somers on Priors Are Useless · 2017-07-08T16:15:50.352Z · LW · GW

Yes, but that's not the way the problem goes. You don't fix your prior in response to the evidence in order to force the conclusion (if you're doing it anything like right). So different people with different priors will have different amounts of evidence required: 1 bit of evidence for every bit of prior odds against, to bring the hypothesis up to even odds, and then a few more bits to reach it as a (tentative, as always) conclusion.
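A toy illustration of that bit-counting, with made-up numbers (10 bits of prior odds against, each observation worth 1 bit):

```python
import math

prior_odds_for = 1 / 1024   # 10 bits of prior odds against the hypothesis
likelihood_ratio = 2        # each observation carries 1 bit of evidence

odds = prior_odds_for
bits_seen = 0
while odds < 1:             # Bayes in odds form: posterior = prior * LR
    odds *= likelihood_ratio
    bits_seen += 1
print(bits_seen)            # 10 bits of evidence to reach even odds
print(math.log2(1024))      # matching the 10 bits of prior odds against
```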

Comment by Luke_A_Somers on Priors Are Useless · 2017-06-22T18:10:16.171Z · LW · GW

Where's that from?

Comment by Luke_A_Somers on Priors Are Useless · 2017-06-21T14:44:43.060Z · LW · GW

This is totally backwards. I would phrase it, "Priors get out of the way once you have enough data." That's a good thing, that makes them useful, not useless. Its purpose is right there in the name - it's your starting point. The evidence takes you on a journey, and you asymptotically approach your goal.

If priors were capable of skewing the conclusion after an unlimited amount of evidence, that would make them permanent, not simply a starting-point. That would be writing the bottom line first. That would be broken reasoning.
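A quick sketch of that washing-out, assuming a Beta-Bernoulli setup with invented priors and coin bias:

```python
import random

# Two agents with sharply different Beta priors on a coin's heads rate
# watch the same flips; with enough data their posteriors converge.
random.seed(0)
true_p = 0.7
priors = {"optimist": (20, 2), "pessimist": (2, 20)}  # Beta(alpha, beta)

heads = tails = 0
for n in range(1, 10001):
    if random.random() < true_p:
        heads += 1
    else:
        tails += 1
    if n in (10, 100, 1000, 10000):
        # posterior mean of Beta(a + heads, b + tails)
        means = {name: round((a + heads) / (a + b + heads + tails), 3)
                 for name, (a, b) in priors.items()}
        print(n, means)
```

Both posterior means converge toward 0.7 by the end, despite starting on opposite sides of it.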

Comment by Luke_A_Somers on Thought experiment: coarse-grained VR utopia · 2017-06-15T14:29:03.625Z · LW · GW

Like, "Please, create a new higher bar that we can expect a truly super-intelligent being to be able to exceed."?

Comment by Luke_A_Somers on Why I think worse than death outcomes are not a good reason for most people to avoid cryonics · 2017-06-12T19:19:30.657Z · LW · GW

> It missed in all stories that superintelligence will probably be able to resurrect people even if they were not cryopreserved, using creation of their copies based on digital immortality.

Enough of what makes me me hasn't made it, and won't make it, into digital expression by accident (short of post-singularity means) that I wouldn't identify such a poor individual as being me. It would be neuro-sculpture on the theme of me.

Comment by Luke_A_Somers on Open thread, June. 12 - June. 18, 2017 · 2017-06-12T15:09:19.462Z · LW · GW

That's more about the land moving in response to the changes in ice, and a tiny correction for changing the gravitational force previously applied by the ice.

This is (probably?) about the way the water settles around a spinning oblate spheroid.

Comment by Luke_A_Somers on Destroying the Utility Monster—An Alternative Formation of Utility · 2017-06-12T13:45:13.424Z · LW · GW

Good point; how about someone who is stupider than the average dog?

Comment by Luke_A_Somers on Destroying the Utility Monster—An Alternative Formation of Utility · 2017-06-11T03:59:08.106Z · LW · GW

A) what cousin_it said.

B) consider, then, successively more severely mentally nonfunctional humans. There is some level of incapability at which we stop caring (e.g. head crushed), and I would be somewhat surprised at a choice of values that puts a 100% abrupt turn-on at some threshold; and if it did, I expect some human could be found or made who would flicker across that boundary regularly.

Comment by Luke_A_Somers on Destroying the Utility Monster—An Alternative Formation of Utility · 2017-06-08T19:05:08.602Z · LW · GW

There is a continuum on this scale. Is there a hard cutoff, or is there any scaling? And what about very similar forks of AIs?

Comment by Luke_A_Somers on Thoughts on civilization collapse · 2017-05-21T17:20:06.402Z · LW · GW

I'll go along with that.

Comment by Luke_A_Somers on Thoughts on civilization collapse · 2017-05-20T23:25:08.835Z · LW · GW

So, how do you characterize 'Merkelterrorists' and 'crimmigrants'? Terms of reasonable discourse?

Comment by Luke_A_Somers on Thoughts on civilization collapse · 2017-05-20T22:21:21.503Z · LW · GW

Your certainty that I am lying and blindly partisan appears to be much stronger than justifiable given the evidence publicly available, and from my point of view where I at least know that I am not lying… well, it makes your oh-so-clever insinuation fall a touch flat. As for being blindly partisan, what gives you the impression that I would tolerate this from the other side?

At the very least, I think this chain has shown that LessWrong is not a left-side echo chamber as Thomas has claimed above.

> Except that risk is not in fact exaggerated

If so, the original expression of that risk was presented in such a fashion as to make the claim as non-credible as possible, through explicitly inflammatory emotional wording.

Comment by Luke_A_Somers on Thoughts on civilization collapse · 2017-05-20T20:00:49.242Z · LW · GW

It's possible to talk about politics without explicitly invoking boo lights like 'crimmigrants' and appeals to exaggerated risks like 'may rob/rape/kill you anytime of day or night'. You can have a reasonable discussion of the problems of immigration, but this is not how you do it. Anyone who says this is A-OK argumentation, and that calling it out is wrong, is basically diametrically opposed to LessWrong's core concepts.

Basically, you're accusing me of outright lying when I say that argument is quite badly written, and of instead being blindly partisan. It was badly written, and I am not. I don't even know WHAT to do about the problems arising from the rapid immigration from the Middle East into Europe. I certainly don't deny they exist. What I DO know is that talking about it like that does not help us approach the truth of the matter.

Comment by Luke_A_Somers on Thoughts on civilization collapse · 2017-05-19T00:07:43.996Z · LW · GW

Spreading this shitty argumentation in a place that had otherwise been quite clean, that's what's gotten under my skin.

Comment by Luke_A_Somers on Thoughts on civilization collapse · 2017-05-13T19:23:02.313Z · LW · GW

This is utterly LUDICROUS.

Look at what happened. tukabel wrote a post of rambling, grammar-impaired, hysteria-mongering hyperbole: 'invading millions of crimmigrants that may rob/rape/kill you anytime day or night'. This is utterly, unquestionably NOT a rationally presented point on politics; it does not belong on this forum, and it deserves to be downvoted into oblivion.

Stuart said he wished to be able to downvote it.

Then, out of nowhere, you come in and blame him personally for starting something he manifestly didn't start. It's a 100% false accusation.

Upon being called out on this, you backtrack and say your earlier point didn't actually matter (meaning it was bullshit to begin with), complaining that he's *gasp* liberal.

But here it didn't take being liberal to want to downvote. If I agreed 100% with tukabel, I would be freaking EMBARRASSED to have that argument presented on my side. It was a really bad comment!

Comment by Luke_A_Somers on AI arms race · 2017-05-05T17:40:23.536Z · LW · GW

The main difference I see with nuclear weapons is that if neither side pursues them, you end up in much the same place as if the race is very close; the close race just means you have spent a lot getting there.

With AI, on the other hand, the benefits would be huge, unless the failure is equally drastic.

Comment by Luke_A_Somers on Thoughts on civilization collapse · 2017-05-05T14:16:01.571Z · LW · GW

Seems to me like Daniel started it.

Comment by Luke_A_Somers on The AI Alignment Problem Has Already Been Solved(?) Once · 2017-04-24T02:38:49.237Z · LW · GW

This seems to be more about human development than AI alignment. The non-parallels between these two situations all seem very pertinent.

Comment by Luke_A_Somers on Putanumonit: Contra Noah Smith's bad argument for a good cause · 2017-04-06T12:19:55.857Z · LW · GW

What would a natural choice of 0 be on that log scale? I would nominate bare subsistence income, but then any person having less than that would completely wreck the whole thing.

Maybe switch to the inverse hyperbolic sine of income over bare subsistence income?
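A sketch of the comparison; the subsistence figure, and reading 'over' as a ratio, are assumptions on my part:

```python
import math

subsistence = 500.0  # hypothetical zero point, in arbitrary income units

def log_scale(income):
    # log of income over subsistence: blows up at zero, undefined below
    return math.log(income / subsistence)

def asinh_scale(income):
    # asinh(x) ~ ln(2x) for large x, so it tracks the log scale up high,
    # but it is finite at zero and defined for negative arguments too
    return math.asinh(income / subsistence)

for income in (-100.0, 0.0, 100.0, 500.0, 5000.0):
    try:
        on_log = round(log_scale(income), 3)
    except ValueError:
        on_log = "undefined"
    print(income, on_log, round(asinh_scale(income), 3))
```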

Comment by Luke_A_Somers on Globally better means locally worse · 2017-03-22T22:58:51.147Z · LW · GW

Quite - I have a 10-year-old car and haven't had to do anything more drastic than change the battery; regular-maintenance kinds of stuff.

Comment by Luke_A_Somers on Building Safe A.I. - A Tutorial for Encrypted Deep Learning · 2017-03-22T15:46:19.533Z · LW · GW

This is about keeping the AI safe from being altered by bad actors before it becomes massively powerful. It is not an attempt at a Control Problem solution. It could still be useful.

Comment by Luke_A_Somers on The D-Squared Digest One Minute MBA – Avoiding Projects Pursued By Morons 101 · 2017-03-20T21:49:08.182Z · LW · GW

A) the audit notion ties into having our feedback cycles nice and tight, which we all like here.

B) This would be a little more interesting if he linked to his advance predictions on the war so we could compare how he did. And of course if he had posted a bunch of other predictions so we could see how he did on those (to avoid cherry-picking). That would rule out rear-view-mirror effects.

Comment by Luke_A_Somers on LW mentioned in influential 2016 Milo article on the Alt-Right · 2017-03-20T18:24:47.521Z · LW · GW

Really? There seems to be a little overlap to me, but plenty of mismatch as well. For example, MM says Bayesians are on crack, as one of the main points of the article.

Comment by Luke_A_Somers on Could utility functions be for narrow AI only, and downright antithetical to AGI? · 2017-03-17T18:26:53.788Z · LW · GW

Agreed on that last point particularly. Especially since, if they want similar enough things, they could easily cooperate without trade.

Like if two AIs supported Alice in her role as Queen of Examplestan, they would probably figure that quibbling with each other over whether Bob the gardener should have one or two buttons undone (just on the basis of fashion, not due to larger consequences) is not a good use of their time.

Also, the utility functions can differ as much as you want on matters that aren't going to come up. Like, Agents A and B disagree on how awful many bad things are. Both agree that they are all really quite bad and all effort should be put forth to prevent them.

Comment by Luke_A_Somers on I Want To Live In A Baugruppe · 2017-03-17T14:55:29.999Z · LW · GW

An American Rationalist subculture question, perhaps. Certainly NOT America as a whole.

Comment by Luke_A_Somers on Excuses and validity · 2017-03-13T20:13:19.984Z · LW · GW

You say all excuses are equally valid and then turn around and say they're more or less valid. Do you mean that excuses people would normally think of making have a largely overlapping range of possible validities?

Comment by Luke_A_Somers on Open Thread, Feb. 27 - March 5, 2017 · 2017-03-10T19:50:04.330Z · LW · GW

Well, if the laws of the universe were such that it were unlikely but not impossible for life to form, MWI would take care of the rest, yes.

BUT, if you combine MWI with something that sets the force laws and particle zoo of the later universe as an aspect of quantum state, then MWI helps a lot - instead of getting only one, it makes ALL† of those laws real.

† or, in the case of precise interference that completely forces certain sets of laws to have a perfectly zero component, nearly all. Or, if half of them end up having a precisely zero component due to some symmetry, then the other half of these rule-sets… etc. Considering the high-dimensional messiness of these proto-universe theories, large swaths being nodal (having zero wavefunction) seems unlikely.

Comment by Luke_A_Somers on Open Thread, Feb. 27 - March 5, 2017 · 2017-03-06T17:55:54.288Z · LW · GW

MWI is orthogonal to the question of different fundamental constants. MWI is just wavefunction realism plus no collapse plus 'that's OK'.

So, any quantum-governed system that generates local constants will do under MWI. The leading example of this would be String Theory.

MWI is important here because if only one branch is real, then you need to be just as lucky anyway - it doesn't help unless the mechanism makes an unusually high density of livable rules. That would be convenient, but also very improbable.

Comment by Luke_A_Somers on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-27T20:46:10.810Z · LW · GW

Why should you believe any specific conclusion on this matter rather than remain in doubt?

Comment by Luke_A_Somers on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-27T20:44:01.498Z · LW · GW

Your first sentence, for example, has a lot of parts, and uses terms in unusual ways, and there are multiple possible interpretations of several parts. The end effect is that we don't know what you're saying.

I suspect that what you're saying could make sense if presented more clearly, and it would not seem deep or mysterious. This would be a feature, not a bug.

Comment by Luke_A_Somers on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-27T20:15:13.317Z · LW · GW

Seems more like trying to clarify the hypothetical. There's a genuine dependency here.

Comment by Luke_A_Somers on Open Thread, Feb. 20 - Feb 26, 2017 · 2017-02-27T20:12:04.067Z · LW · GW

Better answer: they would need to demonstrate experiencing subjective time, such as by flavor-oscillating.

Which they do.

Which is why we think they have mass.
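For reference, the standard two-flavor vacuum oscillation probability makes the mass dependence explicit:

$$P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)$$

With $\Delta m^2 = 0$ (in particular, with both masses zero) the probability vanishes, so observed oscillation implies at least one massive eigenstate.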

Comment by Luke_A_Somers on Open Thread, Feb. 27 - March 5, 2017 · 2017-02-27T20:09:52.162Z · LW · GW

I haven't been following these threads, so I didn't even realize they were weekly. I'll take a look.

Comment by Luke_A_Somers on Open Thread, Feb. 27 - March 5, 2017 · 2017-02-27T20:05:30.359Z · LW · GW

There ought to be one fundamental set of rules. This fundamental set of rules may or may not shake out into different local sets of rules. Assuming that these local rulesets arise from aspects of quantum state, then MWI is capable of realizing an arbitrarily large number of them.

String Theory, for instance, has a mindbogglingly large number of wildly varying possible local rulesets - 'compactifications'. So, if String Theory is correct, then yes, this is taken care of unless the number of compactifications yielding rules even vaguely like ours is unexpectedly small.

Comment by Luke_A_Somers on Judgement Extrapolations · 2017-02-16T16:10:32.946Z · LW · GW

This would benefit from some more concrete examples, especially after 'No, not even "What if that person was me."'

What sort of screwups are we liable to make from these extrapolations?

Comment by Luke_A_Somers on Stupidity as a mental illness · 2017-02-11T03:43:20.364Z · LW · GW

Seems a bit harsh, though after you've debated a few creationists, it doesn't seem so unsupportable.

Comment by Luke_A_Somers on Open thread, Jan. 16 - Jan. 22, 2016 · 2017-02-06T14:43:53.529Z · LW · GW

OK, I had dropped this for a while, but here are my thoughts. I haven't scrubbed everything that could be seen through rot13, because it became excessively unreadable.

For Part 1: gur enqvhf bs gur pragre fcurer vf gur qvfgnapr orgjrra bar bs gur qvnzrgre-1/2 fcurerf naq gur pragre.

Gur qvfgnapr sebz gur pragre bs gur fvqr-fcurer gb gur pragre bs gur birenyy phor vf fdeg(A)/4. Fhogenpg bss n dhnegre sbe gur enqvhf bs gur fcurer, naq jr unir gur enqvhf bs gur pragre fcurer: (fdeg(A)-1)/4. Guvf jvyy xvff gur bhgfvqr bs gur fvqr-1 ulcrephor jura gung'f rdhny gb n unys, juvpu unccraf ng avar qvzrafvbaf. Zber guna gung naq vg jvyy rkgraq bhgfvqr.

Part 2: I admit that I didn't have the volumes of high-dimensional spheres memorized, but the formula is up on Wikipedia, and from there it's just a matter of graphing and seeing where the curve crosses 1, taking into account the radius formula derived above. I haven't done it, but will eventually.
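In the meantime, a minimal numerical sketch (mild spoiler: it uses the radius formula from the rot13'd Part 1, and assumes Part 2 asks for the dimension where the center sphere's volume first exceeds the side-1 cube's):

```python
import math

def log_ball_volume(n, r):
    # log of V_n(r) = pi^(n/2) * r^n / Gamma(n/2 + 1);
    # working in logs avoids the overflow Gamma hits in high dimensions
    return (n / 2) * math.log(math.pi) + n * math.log(r) - math.lgamma(n / 2 + 1)

n = 2
while True:
    r = (math.sqrt(n) - 1) / 4       # center-sphere radius from Part 1
    if log_ball_volume(n, r) > 0:    # volume > 1, the side-1 cube's volume
        print(n)                     # first dimension where the sphere wins
        break
    n += 1
```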

Part 3 looks harder and I'll look at it later.