Comments

Comment by dlthomas on Eliezer Yudkowsky Facts · 2014-07-15T18:30:18.543Z · LW · GW

I'll accept that.

Comment by dlthomas on Eliezer Yudkowsky Facts · 2014-07-15T15:34:56.040Z · LW · GW

I don't see that that makes other formulations "not Occam's razor"; it just makes them less useful attempts at formalizing Occam's razor. If an alternative formalization were found to work better, it would not be MDL - would MDL cease to be "Occam's razor"? Or would the new, better formalization "not be Occam's razor"? If the latter, by what metric, since the new one "works better"?

For the record, I certainly agree that "space complexity alone" is a poor metric. I just don't see that it should clearly be excluded entirely. I'm generally happy to exclude it on the grounds of parsimony, but this whole subthread started from "How could MWI not be the most reasonable choice...?"

Comment by dlthomas on Eliezer Yudkowsky Facts · 2014-07-15T13:03:29.171Z · LW · GW

Is there anything in particular that leads you to claim Minimum Description Length is the only legitimate claimant to the title "Occam's razor"? It was introduced much later, and the Wikipedia article claims it is "a formulation of Occam's razor".

Certainly, William of Occam wasn't dealing in terms of information compression.

Comment by dlthomas on Eliezer Yudkowsky Facts · 2014-07-14T18:58:27.040Z · LW · GW

What particular gold-standard "Occam's razor" are you adhering to, then? It seems to fit well with "entities must not be multiplied beyond necessity" and "pluralities must never be posited without necessity".

Note that I'm not saying there is no gold-standard "Occam's razor" to which we should be adhering (in terms of denotation of the term or more generally); I'm just unaware of an interpretation that clearly lays out how "entities" or "assumptions" are counted, or how the complexity of a hypothesis is otherwise measured, which is clearly "the canonical Occam's razor" as opposed to having some other name. If there is one, by all means please make me aware!

Comment by dlthomas on The AI in a box boxes you · 2014-01-21T18:11:37.463Z · LW · GW

But there is good reason to expect it not to torture people at greater than the maximum rate its hardware is capable of, so if you can bound that, there exist some positive values of belief that cannot be inflated into something meaningful by upping the number of copies.
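To make the bound explicit (my notation: p is your credence that the threat is real, R the maximum torture rate the hardware can sustain, T the time it will run, and d the disutility per person-moment):

$$\text{expected disutility} \;\le\; p \cdot R \cdot T \cdot d,$$

and the right-hand side does not grow with the number of copies the AI claims to run, so a small enough p > 0 keeps it negligible no matter how many copies are asserted.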

Comment by dlthomas on The Meditation on Curiosity · 2013-07-31T04:32:53.489Z · LW · GW

Yes, fixed.

Comment by dlthomas on The Fabric of Real Things · 2012-10-17T20:33:23.782Z · LW · GW

I am not saying, "You value her continued existence, therefore you should believe in it." I am rather saying that your values may extend to things you do not (and will not, ever) know about, and therefore it may be necessary to estimate the likelihood of things that you do not (and will not, ever) know about. In this case, the epistemological work is being done by an assumption of regularity and a non-privileging of your particular position in the physical laws of the universe, which make it seem more likely that there is nothing special about crossing out of your light cone as opposed to just moving somewhere else where she will happen to have no communication with you in the future.

Comment by dlthomas on The Fabric of Real Things · 2012-10-16T23:39:50.502Z · LW · GW

Values aren't things which have predictive power. I don't necessarily have to be able to verify it to prefer one state of the universe over another.

Comment by dlthomas on Firewalling the Optimal from the Rational · 2012-10-09T23:32:39.775Z · LW · GW

You're assuming that display of loyalty can radically increase your influence. My model was that your initial influence is determined situationally, and your disposition can decrease it more easily than increase it.

That said, let's run with your interpretation; Bot-swa-na! Bot-swa-na!

Comment by dlthomas on Firewalling the Optimal from the Rational · 2012-10-09T22:12:01.458Z · LW · GW

Because states are still a powerful force for (or against) change in this world, you are limited in the number of them you can directly affect (determined largely by where you and your relatives were born), and for political and psychological reasons that ability is diminished when you fail to display loyalty (of the appropriate sort, which varies by group) to those states.

Also, apple pie is delicious.

Comment by dlthomas on Rationality Quotes October 2012 · 2012-10-09T22:05:59.137Z · LW · GW

Irrelevant. The quote is not "If goods do cross borders, armies won't."

Comment by dlthomas on Bayes for Schizophrenics: Reasoning in Delusional Disorders · 2012-08-21T17:25:57.301Z · LW · GW

But one or more drawing-inferences-from-states-of-other-modules module could certainly exist, without invoking any separate homunculus. Whether they do and, if so, whether they are organized in a way that is relevant here are empirical questions that I lack the data to address.

Comment by dlthomas on What Is Signaling, Really? · 2012-07-10T09:00:42.114Z · LW · GW

What are the costs associated with flowers?

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-26T17:45:12.478Z · LW · GW

Interesting.

Comment by dlthomas on Problematic Problems for TDT · 2012-05-25T20:33:32.692Z · LW · GW

Could you construct an agent which was itself disadvantaged relative to TDT?

"Take only the box with $1000."

Which itself is inferior to "Take no box."

Comment by dlthomas on Holden's Objection 1: Friendliness is dangerous · 2012-05-24T20:28:40.973Z · LW · GW

My point is that we currently have methods of preventing this that don't require an AI, and which do pretty well. Why do we need the AI to do it? Or more specifically, why should we reject an AI that won't, but may do other useful things?

Comment by dlthomas on Holden's Objection 1: Friendliness is dangerous · 2012-05-24T20:14:36.264Z · LW · GW

Because there's no consensus, your version of CEV would not interfere, and the 80% would be free to kill the 20%.

There may be a distinction between "the AI will not prevent the 80% from killing the 20%" and "nothing will prevent the 80% from killing the 20%" that is getting lost in your phrasing. I am not convinced that the math doesn't make them equivalent, in the long run - but I'm definitely not convinced otherwise.

Comment by dlthomas on Problematic Problems for TDT · 2012-05-23T23:28:34.012Z · LW · GW

If I'm right, we may have shown the impossibility of a "best' decision theory, no matter how meta you get (in a close analogy to Godelian incompleteness). If I'm wrong, what have I missed?

I would say that any such problem doesn't show that there is no best decision theory, it shows that that class of problem cannot be used in the ranking.

Edited to add: Unless, perhaps, one can show that an instantiation of the problem with a particular choice of the varied element (in this case, the decision theory) is particularly likely to be encountered.

Comment by dlthomas on Problematic Problems for TDT · 2012-05-23T20:25:17.296Z · LW · GW

But doesn't that make cliquebots, in general?

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-23T01:01:34.958Z · LW · GW

On the one hand a real distinction which makes a huge difference in feasibility. On the other hand, either way we're boned, so it makes not a lot of difference in the context of the original question (as I understand it). On balance, it's a cute digression but still a digression, and so I'm torn.

Comment by dlthomas on When is Winning not Winning? · 2012-05-23T00:00:56.998Z · LW · GW

Azathoth should probably link here. I think using our jargon is fine, but links to the source help keep it discoverable for newcomers.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-22T23:54:58.841Z · LW · GW

I wish I could vote you up and down at the same time.

Comment by dlthomas on Holden's Objection 1: Friendliness is dangerous · 2012-05-18T21:57:59.084Z · LW · GW

As long as we're using sci-fi to inform our thinking on criminality and corrections, The Demolished Man is an interesting read.

Comment by dlthomas on Holden's Objection 1: Friendliness is dangerous · 2012-05-18T20:56:29.674Z · LW · GW

Thank you. So, not quite consensus, but similarly biased in favor of inaction.

Comment by dlthomas on Holden's Objection 1: Friendliness is dangerous · 2012-05-18T18:42:45.302Z · LW · GW

Assuming we have no other checks on behavior, yes. I'm not sure, pending more reflection, whether that's a fair assumption or not...

Comment by dlthomas on Holden's Objection 1: Friendliness is dangerous · 2012-05-18T18:26:43.556Z · LW · GW

It's a potential outcome, I suppose, in that

[T]here's nothing I prefer/antiprefer exist, but merely things that I prefer/antiprefer to be aware of.

is a conceivable extrapolation from a starting point where you antiprefer something's existence (in the extreme, with MWI you may not have much say what does/doesn't exist, just how much of it in which branches).

It's also possible that you hold both preferences (prefer X not exist, prefer not to be aware of X) and the existence preference gets dropped for being incompatible with other values held by other people while the awareness preference does not.

Comment by dlthomas on Holden's Objection 1: Friendliness is dangerous · 2012-05-18T17:28:58.492Z · LW · GW

My understanding is that CEV is based on consensus, in which case the majority is meaningless.

Comment by dlthomas on Holden's Objection 1: Friendliness is dangerous · 2012-05-18T17:19:07.609Z · LW · GW

Um, if you would object to your friends being killed (even if you knew more, thought faster, and grew up further with others), then it wouldn't be coherent to value killing them.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-17T21:28:53.455Z · LW · GW

I am not the one who is making positive claims here.

You did in the original post I responded to.

All I'm saying is that what has happened before is likely to happen again.

Strictly speaking, that is a positive claim. It is not one I disagree with, for a proper translation of "likely" into probability, but it is also not what you said.

"It can't deduce how to create nanorobots" is a concrete, specific, positive claim about the (in)abilities of an AI. Don't misinterpret this as me expecting certainty - of course certainty doesn't exist, and doubly so for this kind of thing. What I am saying, though, is that a qualified sentence such as "X will likely happen" asserts a much weaker belief than an unqualified sentence like "X will happen." "It likely can't deduce how to create nanorobots" is a statement I think I agree with, although one must be careful not use it as if it were stronger than it is.

A positive claim is that an AI will have a magical-like power to somehow avoid this.

That is not a claim I made. "X will happen" implies a high confidence - saying this when you expect it is, say, 55% likely seems strange. Saying this when you expect it to be something less than 10% likely (as I do in this case) seems outright wrong. I still buckle my seatbelt, though, even though I get in a wreck well less than 10% of the time.

This is not to say I made no claims. The claim I made, implicitly, was that you made a statement about the (in)capabilities of an AI that seemed overconfident and which lacked justification. You have given some justification since (and I've adjusted my estimate down, although I still don't discount it entirely), in amongst your argument with straw-dlthomas.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-17T04:28:21.526Z · LW · GW

No, my criticism is "you haven't argued that it's sufficiently unlikely, you've simply stated that it is." You made a positive claim; I asked that you back it up.

With regard to the claim itself, it may very well be that AI-making-nanostuff isn't a big worry. For any inference, the stacking of error in integration that you refer to is certainly a limiting factor - I don't know how limiting. I also don't know how incomplete our data is, with regard to producing nanomagic stuff. We've already built some nanoscale machines, albeit very simple ones. To what degree is scaling it up reliant on experimentation that couldn't be done in simulation? I just don't know. I am not comfortable assigning it vanishingly small probability without explicit reasoning.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-17T02:27:18.311Z · LW · GW

It can't deduce how to create nanorobots[.]

How do you know that?

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-17T02:25:43.081Z · LW · GW

But in the end, it simply would not have enough information to design a system that would allow it to reach its objective.

I don't think you know that.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-17T01:29:32.544Z · LW · GW

You'll have to forgive Eliezer for not responding; he's busy dispatching death squads.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-17T01:26:18.438Z · LW · GW

The answer from the sequences is that yes, there is a limit to how much an AI can infer based on limited sensory data, but you should be careful not to assume that just because it is limited, it's limited to something near our expectations. Until you've demonstrated that FOOM cannot lie below that limit, you have to assume that it might (if you're trying to carefully avoid FOOMing).

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-16T23:11:54.357Z · LW · GW

Of those who attempted, fewer thought they were close, but fifty still seems very generous.

Comment by dlthomas on But Somebody Would Have Noticed · 2012-05-16T22:32:45.524Z · LW · GW

Why isn't it a minor nitpick? I mean, we use dimensioned constants in other areas; why, in principle, couldn't the equation be E = mc·(1 m/s)? If that was the only objection, and the theory made better predictions (which, obviously, it didn't, but bear with me), then I don't see any reason not to adopt it. Given that, I'm not sure why it should be a significant objection.
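Spelling out the unit bookkeeping on that dimensioned constant (SI units):

$$[mc \cdot (1\,\mathrm{m/s})] = \mathrm{kg} \cdot \frac{\mathrm{m}}{\mathrm{s}} \cdot \frac{\mathrm{m}}{\mathrm{s}} = \mathrm{kg\,m^2/s^2} = \mathrm{J},$$

so the units come out to energy, just as they do for $mc^2$ - the constant absorbs the dimensional mismatch.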

Edited to add: Although I suppose that would privilege the meter and second (actually, the ratio between them) in a universal law, which would be very surprising. Just saying that there are trivial ways you can make the units check out, without tossing out the theory. Likewise, of course, the fact that the units do check out shouldn't be taken too strongly in a theory's favor. Not that anyone here hadn't seen the XKCD, but I still need to link it, lest I lose my nerd license.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-16T21:34:18.450Z · LW · GW

I don't think "certainty minus epsilon" improves much. It moves it from theoretical impossibility to practical impossibility - but looking that far out, I expect "likelihood" might be best.

Comment by dlthomas on GAZP vs. GLUT · 2012-05-16T20:58:39.005Z · LW · GW

Things that are true "by definition" are generally not very interesting.

If consciousness is defined by referring solely to behavior (which may well be reasonable, but is itself an assumption) then yes, it is true that something that behaves exactly like a human will be conscious IFF humans are conscious.

But what we are trying to ask, at the high level, is whether there is something coherent in conceptspace that partitions objects into "conscious" and "unconscious" in something that resembles what we understand when we talk about "consciousness," and then whether it applies to the GLUT. Demonstrating that it holds for a particular set of definitions only matters if we are convinced that one of the definitions in that set accurately captures what we are actually discussing.

Comment by dlthomas on I Stand by the Sequences · 2012-05-16T00:43:52.773Z · LW · GW

Ah, fair. So in this case, we are imagining a sequence of additional observations (from a privileged position we cannot occupy) to explain.

Comment by dlthomas on Can You Prove Two Particles Are Identical? · 2012-05-16T00:01:05.537Z · LW · GW

At the macro scale, spin (i.e. rotation) is definitely quantitative - any object is rotating at a particular rate about a particular axis. This can be measured, integrated to yield (change in) orientation, etc.
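Concretely, for rotation about a fixed axis with measured angular velocity $\omega(t)$, the orientation is just

$$\theta(t) = \theta(t_0) + \int_{t_0}^{t} \omega(\tau)\,\mathrm{d}\tau,$$

which is what I mean by "integrated to yield (change in) orientation".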

In QM, my understanding is that (much like "flavor" and "color") the term is just re-purposed for something else.

Comment by dlthomas on Can You Prove Two Particles Are Identical? · 2012-05-15T23:12:46.911Z · LW · GW

Spin is qualitative. QM is dealing with a degree of spin (spin up or spin down) which is quantitative.

I believe the distinction you want is "continuous" vs. "discrete", rather than "quantitative" vs. "qualitative".

Comment by dlthomas on I Stand by the Sequences · 2012-05-15T21:44:52.805Z · LW · GW

I think this might be the most strongly contrarian post here in a while...

Comment by dlthomas on I Stand by the Sequences · 2012-05-15T21:43:56.238Z · LW · GW

Not all formalizations that give the same observed predictions have the same Kolmogorov complexity[.]

Is that true? I thought Kolmogorov complexity was "the length of the shortest program that produces the observations" - how can that not be a one-place function of the observations?
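For reference, the usual definition (with U a fixed universal machine and x the observation string):

$$K(x) = \min\{\, |p| : U(p) = x \,\},$$

which is a function of x alone once U is fixed; changing U shifts it only by an additive constant (the invariance theorem).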

Comment by dlthomas on I Stand by the Sequences · 2012-05-15T21:37:09.238Z · LW · GW

(and there's the whole big-endian/little-endian question).

That's cleared up by:

I am number 25 school member, since I agree with the last and two more.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-15T21:25:16.524Z · LW · GW

There are only 7 billion people on the planet, even if all of them gained internet access that would still be fewer than 13 billion. In this case, instead of looking at the exponential graph, consider where it needs to level off.

People are a lot more complicated than neurons, and it's not just people that are connected to the internet - there are many devices acting autonomously with varying levels of sophistication, and both the number of people and the number of internet connected devices are increasing.

If the question is "are there points in superhuman mind-space that could be implemented on the infrastructure of the internet roughly as it exists" my guess would be, yes.

[T]here's no selection pressure or other effect to cause people on the internet to self-organize into some sort of large brain.

This, I think, is key, and devastating. The chances that we've found any such point in mind-space without any means of searching are (I would guess) infinitesimal.

Maybe if everyone played a special game where you had to pretend to be a neuron and pass signals accordingly you could maybe get something like that.

Unless the game were carefully designed to simulate an existing brain (or one designed by other means) I don't see why restricting the scope of interaction between nodes is likely to help.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-15T02:23:47.235Z · LW · GW

My point was just that there's a whole lot of little issues that pull in various directions if you're striving for ideal. What is/isn't close enough can depend very much on context. Certainly, for any particular purpose something less than that will be acceptable; how gracefully it degrades no doubt depends on context, and likely won't be uniform across various types of difference.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-15T00:59:21.233Z · LW · GW

One complication here is that you ideally want it to be vague in the same ways the original was vague; I am not convinced this is always possible while still having the results feel natural/idiomatic.

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-14T23:50:32.099Z · LW · GW

What if we added a module that sat around and was really interested in everything going on?

Comment by dlthomas on Thoughts on the Singularity Institute (SI) · 2012-05-14T22:56:09.760Z · LW · GW

They are a bit rambly in places, but they're entertaining and interesting.

I also found this to be true.

Comment by dlthomas on [deleted post] 2012-05-11T23:29:39.519Z

That's not what "realist" means in philosophy.