Posts

The Ultraviolet 2011-05-22T23:59:00.212Z
Autism and Lesswrong 2011-04-07T15:34:38.846Z

Comments

Comment by CuSithBell on 2012 Less Wrong Census/Survey · 2012-11-07T03:06:55.813Z · LW · GW

I agree with your analysis, and further:

Gurer ner sbhe yvarf: gur gjb "ebgngvat" yvarf pbaarpgrq gb gur pragre qbg, naq gur gjb yvarf pbaarpgrq gb gur yrsg naq evtug qbgf. Gur pragre yvarf fgneg bhg pbaarpgrq gb gur fvqr qbgf, gura ebgngr pybpxjvfr nebhaq gur fdhner. Gur bgure yvarf NYFB ebgngr pybpxjvfr: gur yrsg bar vf pragrerq ba gur yrsg qbg, naq ebgngrf sebz gur pragre qbg qbja, yrsg, gura gb gur gbc, gura evtug, gura onpx gb gur pragre. Gur evtug yvar npgf fvzvyneyl.

Comment by CuSithBell on Existential Risk and Public Relations · 2012-06-12T03:35:00.031Z · LW · GW

On the other hand... people say they hate politicians and then vote for them anyway.

Who are they going to vote for instead?

Comment by CuSithBell on Welcome to Less Wrong! (2012) · 2012-06-12T03:33:31.983Z · LW · GW

Ah, I see what you mean. I don't think one has to believe in objective morality as such to agree that "morality is the godshatter of evolution". Moreover, I think it's pretty key to the "godshatter" notion that our values have diverged from evolution's "value", and we now value things "for their own sake" rather than for their benefit to fitness. As such, I would say that the "godshatter" notion opposes the idea that "maladaptive is practically the definition of immoral", even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.

Comment by CuSithBell on Welcome to Less Wrong! (2012) · 2012-06-12T01:01:32.461Z · LW · GW

For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.

Disagree? What do you mean by this?

Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, I have no need to adhere to the process roughly used to select these values rather than the values themselves when they are maladaptive.

Comment by CuSithBell on General purpose intelligence: arguing the Orthogonality thesis · 2012-06-11T22:37:55.934Z · LW · GW

But the theory fails because this fits it but isn't wireheading, right? It wouldn't actually be pleasing to play that game.

Comment by CuSithBell on Rationality Quotes June 2012 · 2012-06-09T03:55:38.893Z · LW · GW

Fair question! I phrased it a little flippantly, but it was a sincere sentiment - I've heard somewhere or other that receiving a prosthetic limb results in a decrease in empathy, something to do with becoming detached from the physical world, and this ties in intriguingly with the scifi trope about cyborging being dehumanizing.

Comment by CuSithBell on [Link] Nerds are nuts · 2012-06-08T18:49:20.303Z · LW · GW

neurotypical

Are you using this to mean "non-autistic person", or something else?

Comment by CuSithBell on Fake Causality · 2012-06-08T14:29:32.891Z · LW · GW

a GAI with [overwriting its own code with an arbitrary value] as its only goal, for example, why would that be impossible? An AI doesn't need to value survival.

A GAI with the utility of burning itself? I don't think that's viable, no.

What do you mean by "viable"? You think it is impossible due to Godelian concerns for there to be an intelligence that wishes to die?

As a curiosity, this sort of intelligence came up in a discussion I was having on LW recently. Someone said "why would an AI try to maximize its original utility function, instead of switching to a different / easier function?", to which I responded "why is that the precise level at which the AI would operate, rather than either actually maximizing its utility function or deciding to hell with the whole utility thing and valuing suicide rather than maximizing functions (because it's easy)".

But anyway it can't be that Gödelian reasons prevent intelligences from wanting to burn themselves, because people have burned themselves.

I'd be interested in the conclusions derived about "typical" intelligences and the "forbidden actions", but I don't see how you have derived them.

At the moment it's little more than professional intuition. We also lack some necessary shared terminology. Let's leave it at that until and unless someone formalizes and proves it, and then hopefully blogs about it.

Fair enough, though for what it's worth I have a fair background in mathematics, theoretical CS, and the like.

could you clarify your position, please?

I think I'm starting to see the disconnect, and we probably don't really disagree.

You said:

This sounds unjustifiably broad

My thinking is very broad but, from my perspective, not unjustifiably so. In my research I'm looking for mathematical formulations of intelligence in any form - biological or mechanical.

I meant that this was a broad definition of the qualitative restrictions to human self-modification, to the extent that it would be basically impossible for something to have qualitatively different restrictions.

Taking a narrower viewpoint, humans "in their current form" are subject to different laws of nature than those we expect machines to be subject to. The former use organic chemistry, the latter probably electronics. The former multiply by synthesizing enormous quantities of DNA molecules, the latter could multiply by configuring solid state devices.

Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?

Why not? Though of course it may turn out that AI is best programmed on something unlike our current computer technology.

Comment by CuSithBell on Rationality Quotes June 2012 · 2012-06-08T13:05:57.662Z · LW · GW

I think it could make a pretty interesting Discussion post, and would pair well with some discussion of how becoming a cyborg supposedly makes you less empathic.

Comment by CuSithBell on Poly marriage? · 2012-06-08T03:33:44.370Z · LW · GW

I find this quite aesthetically pleasing :D

Comment by CuSithBell on Poly marriage? · 2012-06-07T18:19:19.144Z · LW · GW

I tend to agree. Customizable contracts would be the best solution.

For some reason I'm picturing the Creative Commons licenses.

Comment by CuSithBell on Poly marriage? · 2012-06-07T18:17:11.041Z · LW · GW

If polygamous people were high status they wouldn't voice nor perhaps even think of these objections.

Why isn't it the other way around?

Comment by CuSithBell on Poly marriage? · 2012-06-07T18:15:45.565Z · LW · GW

Hm. Some sort of standardized institution in place to take care of the pet in case the human dies, perhaps? Tax breaks?

Comment by CuSithBell on Poly marriage? · 2012-06-07T18:11:28.995Z · LW · GW

I don't care what other people are convinced.

When you said above that status was the real reason LW-associates oppose legal polygamy, you were implying that these people are not actually convinced of these issues, or only pretend to care about them for status reasons.

I'm in a happy polygamous relationship and I know I'm not the only one.

Certainly! I'd like to clarify that I don't think polyamory is intrinsically oppressive, and that I am on the whole pretty darn progressive (philosophically) regarding sexual / relationship rights etc. (That is, I think it probably ideally should be legal. There are probably additional political concerns but politics makes me ill.) I think it's kinda weird that government is in the marriage business to begin with, but probably it is useful to have some sort of structure for dealing with the related tax / property / etc. concerns. I think that polygamy does occur in some cultures that are oppressive towards women, but I don't really have a notion of how much a part of that oppression it facilitates, and I don't necessarily think that's a legitimate factor in whether to legalize the institution. I'm on your side philosophically / politically.

Comment by CuSithBell on Rationality Quotes June 2012 · 2012-06-07T17:54:24.187Z · LW · GW

Looks like there are a few PC input devices on the market that read brain activity in some way. The example game above sounds like this Star Wars toy.

Comment by CuSithBell on Debate between 80,000 hours and a socialist · 2012-06-07T17:50:05.316Z · LW · GW

Regarding your example, I think what Mills is saying is probably a fair point - or rather, it's probably a gesture towards a fair point, muddied by rhetorical constraints and perhaps misunderstanding of probability. It is very difficult to actually get good numbers to predict things outside of our past experience, and so probability as used by humans to decide policy is likely to have significant biases.

Comment by CuSithBell on Poly marriage? · 2012-06-07T17:45:41.258Z · LW · GW

I've certainly heard the argument that polygamy is tied into oppressive social structures, and therefore legitimizing it would be bad.

Same argument can and has been applied to other kinds of marriage.

On the one hand, the argument doesn't need to be correct to be the (or a) real reason. On the other, I'd expect more people to be more convinced that polygamy is more oppressive (as currently instantiated) than vanilla marriage (and other forms, such as arranged marriages or marriage of children to adults, are probably more strongly opposed).

Comment by CuSithBell on Poly marriage? · 2012-06-07T17:33:31.428Z · LW · GW

thus we tend to see forbidding that as a bad idea.

ITYM 'good'?

I've certainly heard the argument that polygamy is tied into oppressive social structures, and therefore legitimizing it would be bad. Would you say this is rationalization?

FWIW I'm very skeptical of the whole "status explains everything" notion in general.

Comment by CuSithBell on Poly marriage? · 2012-06-07T16:15:03.171Z · LW · GW

Ah! Well, good to know. Generally I expect "Utahans" and "weird brown foreigners" are to be inflected similarly in both of these versions, anyway.

Comment by CuSithBell on Poly marriage? · 2012-06-07T14:02:24.843Z · LW · GW

or that polyamory is when it's done by fashionable white people, and polygamy is when it's done by weird brown foreigners

I thought it was "polyamory is when it's done by New Yorkers (Californians?), polygamy is when it's done by Utahans," and weird brown people have harems and concubines instead.

(Though of course I also don't think this is a fair characterization)

Comment by CuSithBell on Serious Stories · 2012-06-07T03:17:28.559Z · LW · GW

Yeah, that's certainly a fair clarification. It'd probably take a lot more space to give a really robust definition of "suffering", but that's close enough for gummint work.

Comment by CuSithBell on Serious Stories · 2012-06-06T23:42:09.889Z · LW · GW

Roughly, pain is a sensation typically associated with damage to the body, suffering is an experience of stimuli as intrinsically unpleasant.

I do not suffer if my room is painted a color I do not like, but I still may care about the color my room is painted.

Comment by CuSithBell on Serious Stories · 2012-06-06T20:41:18.113Z · LW · GW

It means "being able to feel pain but not suffering from it."

Comment by CuSithBell on Fake Causality · 2012-06-06T17:45:39.500Z · LW · GW

Suppose an AI were to design and implement more efficient algorithms for processing sensory stimuli? Or add a "face recognition" module when it determines that this would be useful for interacting with humans?

The ancient Greeks developed methods for improved memorization. It has been shown that human-trained dogs and chimps are more capable of human-face recognition than others of their kind. None of them were artificial (discounting selective breeding in dogs and Greeks).

It seems that you should be able to write a simple program that overwrites its own code with an arbitrary value. Wouldn't that be a counterexample?

Would you consider such a machine an artificial intelligent agent? Isn't it just a glorified printing press?

I'm not saying that some configurations of memory are physically impossible. I'm saying that intelligent agency entails typicality, and therefore, for any intelligent agent, there are some things it is extremely unlikely to do, to the point of practical impossibility.

Certainly that doesn't count as an intelligent agent - but a GAI with that as its only goal, for example, why would that be impossible? An AI doesn't need to value survival.

I'd be interested in the conclusions derived about "typical" intelligences and the "forbidden actions", but I don't see how you have derived them.

Do we agree, then, that humans and artificial agents are both subject to laws forbidding logical contradictions and the like, but that artificial agents are not in principle necessarily bound by the same additional restrictions as humans?

I would actually argue the opposite.

Are you familiar with the claim that people are getting less intelligent since modern technology allows less intelligent people and their children to survive? (I never saw this claim discussed seriously, so I don't know how factual it is; but the logic of it is what I'm getting at.) The idea is that people today are less constrained in their required intelligence, and therefore the typical human is becoming less intelligent.

Other claims are that activities such as browsing the internet and video gaming are changing the set of mental skills which humans are good at. We improve in tasks which we need to be good at, and give up skills which are less useful. You gave yet another example in your comment regarding face recognition.

The elasticity of biological agents is (quantitatively) limited, and improvement by evolution takes time. This is where artificial agents step in. They can be better than humans, but the typical agent will only actually be better if it has to. Generally, more intelligent agents are those which are forced to comply to tighter constraints, not looser ones.

I think we have our quantifiers mixed up? I'm saying an AI is not in principle bound by these restrictions - that is, it's not true that all AIs must necessarily have the same restrictions on their behavior as a human. This seems fairly uncontroversial to me. I suppose the disconnect, then, is that you expect a GAI will be of a type bound by these same restrictions. But then I thought the restrictions you were talking about were "laws forbidding logical contradictions and the like"? I'm a little confused - could you clarify your position, please?

Comment by CuSithBell on A plan for Pascal's mugging? · 2012-06-05T18:29:09.402Z · LW · GW

Could you rephrase this somehow? I'm not understanding it. If you actually won the bet and got the extra utility, your median expected utility would be higher, but you wouldn't take the bet, because your median expected utility is lower if you do.

Comment by CuSithBell on A plan for Pascal's mugging? · 2012-06-05T18:26:32.557Z · LW · GW

"Enough times" to make it >50% likely that you will win, yes? Why is this the correct cutoff point?

Comment by CuSithBell on A plan for Pascal's mugging? · 2012-06-05T18:25:39.084Z · LW · GW

This all seems very sensible and plausible!

Comment by CuSithBell on Fake Causality · 2012-06-05T18:20:43.348Z · LW · GW

Thanks for challenging my position. This discussion is very stimulating for me!

It's a pleasure!

Sure, but we could imagine an AI deciding something like "I do not want to enjoy frozen yogurt", and then altering its code in such a way that it is no longer appropriate to describe it as enjoying frozen yogurt, yeah?

I'm actually having trouble imagining this without anthropomorphizing (or at least zoomorphizing) the agent. When is it appropriate to describe an artificial agent as enjoying something? Surely not when it secretes serotonin into its bloodstream and synapses?

Yeah, that was sloppy of me. Leaving aside the question of when something is enjoying something, let's take a more straightforward example: Suppose an AI were to design and implement more efficient algorithms for processing sensory stimuli? Or add a "face recognition" module when it determines that this would be useful for interacting with humans?

This seems trivially false - if an AI is instantiated as a bunch of zeros and ones in some substrate, how could Gödel or similar concerns stop it from altering any subset of those bits?

It's not a question of stopping it. Gödel is not giving it a stern look, saying: "you can't alter your own code until you've done your homework". It's more that these considerations prevent the agent from being in a state where it will, in fact, alter its own code in certain ways. This claim can and should be proved mathematically, but I don't have the resources to do that at the moment. In the meanwhile, I'd agree if you wanted to disagree.

Hm. It seems that you should be able to write a simple program that overwrites its own code with an arbitrary value. Wouldn't that be a counterexample?
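
Something like this is what I have in mind - just an illustrative sketch, with a made-up placeholder standing in for the "arbitrary value":

```python
# Minimal sketch (hypothetical): a program that overwrites its own source
# file with an arbitrary value. The copy already loaded in memory keeps
# running, but on the next run the original code is gone.

ARBITRARY_VALUE = "# this file has been overwritten\n"

def overwrite_self():
    # __file__ is the path to this script's own source code.
    with open(__file__, "w") as source:
        source.write(ARBITRARY_VALUE)

if __name__ == "__main__":
    overwrite_self()
    print("source replaced; nothing of the original program remains on disk")
```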

You see reasons to believe that any artificial intelligence is limited to altering its motivations and desires in a way that is qualitatively similar to humans? This seems like a pretty extreme claim - what are the salient features of human self-rewriting that you think must be preserved?

I believe that this is likely, yes. The "salient feature" is being subject to the laws of nature, which in turn seem to be consistent with particular theories of logic and probability. The problem with such a claim is that these theories are still not fully understood.

This sounds unjustifiably broad. Certainly, human behavior is subject to these restrictions, but it is also subject to much more stringent ones - we are not able to do everything that is logically possible. Do we agree, then, that humans and artificial agents are both subject to laws forbidding logical contradictions and the like, but that artificial agents are not in principle necessarily bound by the same additional restrictions as humans?

Comment by CuSithBell on Newcomb's Problem and Regret of Rationality · 2012-06-05T18:12:28.557Z · LW · GW

knowing the value of Current Observation gives you information about Future Decision.

Here I'd just like to note that one must not assume all subsystems of Current Brain remain constant over time. And what if the brain is partly a chaotic system? (AND new information flows in all the time... Sorry, I cannot condone this model as presented.)

Well... okay, but the point I was making was milder and pretty uncontroversial. Are you familiar with Bayesian networks?

Perhaps it can observe your neurochemistry in detail and in real time.

I already mentioned this possibility. Fallible models make the situation gameable. I'd get together with my friends, try to figure out when the model predicts correctly, calculate its accuracy, work out a plan for who picks what, and split the profits between ourselves. How's that for rationality? To get around this, the alien needs to predict our plan and - do what? Our plan treats his mission like total garbage. Should he try to make us collectively lose out? But that would hamper his initial design.

(Whether it cares about such games or not, what input the alien takes, when, how, and what exactly it does with said input - everything counts in charting an optimal solution. You can't just say it uses Method A and then replace it with Method B when convenient. THAT is the point: Predictive methods are NOT interchangeable in this context. (Reminder: Reading my brain AS I make the decision violates the original conditions.))

I never said it used method A? And what is all this about games? It predicts your choice.

You're not engaging with the thought experiment. How about this - how would you change the thought experiment to make it work properly, in your estimation?

Perhaps land-ape psychology turns out to be really simple if you're an omnipotent thought-experiment enthusiast.

We're veering into uncertain territory again... (Which would be fine if it weren't for the vagueness of mechanism inherent in magical algorithms.)

Well, yeah. We're in uncertain territory as a premise.

The reasoning wouldn't be "this person is a one-boxer" but rather "this person will pick one box in this particular situation".

Second note: An entity, alien or not, offering me a million dollars, or anything remotely analogous to this, would be a unique event in my life with no precedent whatever. My last post was written entirely under the assumption that the alien would be using simple heuristics based on similar decisions in the past. So yeah, if you're tweaking the alien's method, then disregard all that.

I'm not tweaking the method. There is no given method. The closest to a canonical method that I'm aware of is simulation, which you elided in your reply.

It's very difficult to be the sort of person who would pick one box in the situation you are in without actually picking one box in the situation you are in.

From the alien's point of view, this is epistemologically non-trivial if my box-picking nature is more complicated than a yes-no switch. Even if the final output must take the form of a yes or a no, the decision tree that generated that result can be as endlessly complex as I want, every step of which the alien must predict correctly (or be a Luck Elemental) to maintain its reputation of infallibility.

What makes you think you're so special - compared to the people who've been predicted ahead of you?

If it's worse, just do the other thing - isn't that more "rational"?

As long as I know nothing about the alien's method, the choice is arbitrary. See my second note. This is why the alien's ultimate goals, algorithms, etc, MATTER.

If you know nothing about the alien's methods, there still is a better choice. You do not have the same expected value for each choice.

(If the alien reads my brain chemistry five minutes before The Task, his past history is one of infallibility, and no especially cunning plan comes to mind, then my bet regarding the nature of brain chemistry would be that not going with one box is silly if I want the million dollars. I mean, he'll read my intentions and place the money (or not) like five minutes before... (At least that's what I'll determine to do before the event. Who knows what I'll end up doing once I actually get there. (Since even I am unsure as to the strength of my determination to keep to this course of action once I've been scanned, the conscious minds of me and the alien are freed from culpability. Whatever happens next, only the physical stance is appropriate for the emergent scenario. (("At what point then, does decision theory apply here?" is what I was getting at.) Anyway, enough navel-gazing and back to Timeless Decision Theory.))))

Comment by CuSithBell on A plan for Pascal's mugging? · 2012-06-05T14:54:02.339Z · LW · GW

Ah! Sorry for the mixed-up identities. Likewise, I didn't come up with that "51% chance to lose $5, 49% chance to win $10000" example.

But, ah, are you retracting your prior claim about a variance of greater than 5? Clearly this system doesn't work on its own, though it still looks like we don't know A) how decisions are made using it or B) under what conditions it works. Or in fact C) why this is a good idea.

Certainly for some distributions of utility, if the agent knows the distribution of utility across many agents, it won't make the wrong decision on that particular example by following this algorithm. I need more than that to be convinced!

For instance, it looks like it'll make the wrong decision on questions like "I can choose to 1) die here quietly, or 2) go get help, which has a 1/3 chance of saving my life but will be a little uncomfortable." The utility of surviving presumably swamps the rest of the utility function, right?
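
With some made-up numbers (mine, purely for illustration) the failure looks like this:

```python
# Illustrative utilities only - invented to make the point concrete.
import statistics

U_DEATH = -1000      # utility of dying
U_DISCOMFORT = -1    # utility cost of the uncomfortable trip

# Option 1: die here quietly -- one certain outcome.
stay = [U_DEATH]

# Option 2: go get help -- 1/3 chance of surviving (just the discomfort),
# 2/3 chance of dying anyway (discomfort plus death). Represent the
# lottery as three equally likely outcomes.
go = [U_DISCOMFORT, U_DISCOMFORT + U_DEATH, U_DISCOMFORT + U_DEATH]

print("medians:  stay =", statistics.median(stay), " go =", statistics.median(go))
print("expected: stay =", statistics.mean(stay), " go =", round(statistics.mean(go), 2))
# The median rule prefers staying (-1000 beats -1001), even though going
# for help is far better in expectation (about -667.67 vs -1000).
```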

Comment by CuSithBell on A plan for Pascal's mugging? · 2012-06-04T19:43:50.254Z · LW · GW

google

Googol. Likewise, googolplex.

Comment by CuSithBell on A plan for Pascal's mugging? · 2012-06-04T15:56:29.770Z · LW · GW

But the median outcome is losing 5 utils?

Edit: Oh, wait! You mean the median total utility after some other stuff happens (with a variance of more than 5 utils)?

Suppose we have 200 agents, 100 of which start with 10 utils, the rest with 0. After taking this offer, we have 51 with -5, 51 with 5, 49 with 10000, and 49 with 10010. The median outcome would be a final utility of -5 for half the agents and 5 for the other half, but only the half that would lose the bet could actually end up at those values...
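
Spelling out the arithmetic of that example (a quick sketch, using the same counts as above):

```python
# The 200-agent example above: each agent takes the "51% lose 5,
# 49% win 10000" offer exactly once.
import statistics

start_0  = [0] * 100    # 100 agents start with 0 utils
start_10 = [10] * 100   # 100 agents start with 10 utils

def outcomes(group):
    # 51 of every 100 lose 5, 49 win 10000 (deterministic proportions
    # for illustration, rather than random draws).
    losers  = [u - 5 for u in group[:51]]
    winners = [u + 10000 for u in group[51:]]
    return losers + winners

final = outcomes(start_0) + outcomes(start_10)
print(sorted(set(final)))                              # [-5, 5, 10000, 10010]
print("population median:", statistics.median(final))  # 5.0
# The population median lands on 5 -- a value only reachable by agents
# who started with 10 and then lost the bet.
```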

And what do you mean by "the possibility of getting tortured will manifest itself only very slightly at the 50th percentile"? I thought you were restricting yourself to median outcomes, not distributions? How do you determine the median distribution?

Comment by CuSithBell on Fake Causality · 2012-06-04T14:49:18.268Z · LW · GW

You are saying that a GAI being able to alter its own "code" on the actual code-level does not imply that it is able to alter in a deliberate and conscious fashion its "code" in the human sense you describe above?

I am saying pretty much exactly that. To clarify further, the words "deliberate", "conscious" and "wants" again belong to the level of emergent behavior: they can be used to describe the agent, not to explain it (what could not be explained by "the agent did X because it wanted to"?).

Sure, but we could imagine an AI deciding something like "I do not want to enjoy frozen yogurt", and then altering its code in such a way that it is no longer appropriate to describe it as enjoying frozen yogurt, yeah?

Let's instead make an attempt to explain. A complete control of an agent's own code, in the strict sense, is in contradiction of Gödel's incompleteness theorem. Furthermore, information-theoretic considerations significantly limit the degree to which an agent can control its own code (I'm wondering if anyone has ever done the math. I expect not. I intend to look further into this). In information-theoretic terminology, the agent will be limited to typical manipulations of its own code, which will be a strict (and presumably very small) subset of all possible manipulations.

This seems trivially false - if an AI is instantiated as a bunch of zeros and ones in some substrate, how could Gödel or similar concerns stop it from altering any subset of those bits?

Can an agent be made more effective than humans in manipulating its own code? I have very little doubt that it can. Can it lead to agents qualitatively more intelligent than humans? Again, I believe so. But I don't see a reason to believe that the code-rewriting ability itself can be qualitatively different than a human's, only quantitatively so (although of course the engineering details can be much different; I'm referring to the algorithmic level here).

You see reasons to believe that any artificial intelligence is limited to altering its motivations and desires in a way that is qualitatively similar to humans? This seems like a pretty extreme claim - what are the salient features of human self-rewriting that you think must be preserved?

Generally GAIs are ascribed extreme powers around here

As you've probably figured out, I'm new here. I encountered this post while reading the sequences. Although I'm somewhat learned on the subject, I haven't yet reached the part (which I trust exists) where GAI is discussed here.

On my path there, I'm actively trying to avoid a certain degree of group thinking which I detect in some of the comments here. Please take no offense, but it's phrases like the above quote which worry me: is there really a consensus around here about such profound questions? Hopefully it's only the terminology which is agreed upon, in which case I will learn it in time. But please, let's make our terminology "pay rent".

I don't think it's a "consensus" so much as an assumed consensus for the sake of argument. Some do believe that any hypothetical AI's influence is practically unlimited, some agree to assume that because it's not ruled out and is a worst-case scenario or an interesting case (see wedrifid's comment on the grandparent (aside: not sure how unusual or nonobvious this is, but we often use familial relationships to describe the relative positions of comments, e.g. the comment I am responding to is the "parent" of this comment, the one you were responding to when you wrote it is the "grandparent". I think that's about as far as most users take the metaphor, though.)).

Comment by CuSithBell on Newcomb's Problem and Regret of Rationality · 2012-06-04T14:35:13.483Z · LW · GW

Unlike my (present) traits, my future decisions don't yet exist, and hence cannot leak anything or become entangled with anyone.

Your future decisions are entangled with your present traits, and thus can leak. If you picture a Bayesian network with the nodes "Current Brain", "Future Decision", and "Current Observation", with arrows from Current Brain to the two other nodes, then knowing the value of Current Observation gives you information about Future Decision.
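
A toy version of that network, with invented numbers just to show the direction of the inference:

```python
# Toy Bayesian network with made-up numbers:
#   CurrentBrain -> FutureDecision,  CurrentBrain -> CurrentObservation.
# Observing CurrentObservation shifts the probability of FutureDecision,
# even though the observation doesn't cause the decision.

# Prior over brain types (hypothetical labels).
p_brain = {"reflective": 0.5, "impulsive": 0.5}

# P(FutureDecision = one-box | brain type) -- invented for illustration.
p_onebox_given_brain = {"reflective": 0.9, "impulsive": 0.2}

# P(CurrentObservation = "deliberates visibly" | brain type) -- also invented.
p_obs_given_brain = {"reflective": 0.8, "impulsive": 0.3}

# Prior probability of one-boxing, marginalizing over brain type.
prior_onebox = sum(p_brain[b] * p_onebox_given_brain[b] for b in p_brain)

# Posterior over brain type after seeing the observation (Bayes' rule),
# then the updated probability of one-boxing.
evidence = sum(p_brain[b] * p_obs_given_brain[b] for b in p_brain)
posterior_brain = {b: p_brain[b] * p_obs_given_brain[b] / evidence for b in p_brain}
posterior_onebox = sum(posterior_brain[b] * p_onebox_given_brain[b] for b in posterior_brain)

print("P(one-box) before observing:", prior_onebox)                 # 0.55
print("P(one-box) after observing :", round(posterior_onebox, 3))   # ~0.709
```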

Obviously the alien is better than a human at running this game (though, note that a human would only have to be right a little more than 50% of the time to make one-boxing have the higher expected value - in fact, that could be an interesting test to run!). Perhaps it can observe your neurochemistry in detail and in real time. Perhaps it simulates you in this precise situation, and just sees whether you pick one or both boxes. Perhaps land-ape psychology turns out to be really simple if you're an omnipotent thought-experiment enthusiast.

The reasoning wouldn't be "this person is a one-boxer" but rather "this person will pick one box in this particular situation". It's very difficult to be the sort of person who would pick one box in the situation you are in without actually picking one box in the situation you are in.

One use of the thought experiment, other than the "non-causal effects" thing, is getting at this notion that the "rational" thing to do (as you suggest two-boxing is) might not be the best thing. If it's worse, just do the other thing - isn't that more "rational"?

Comment by CuSithBell on Fake Causality · 2012-06-04T05:20:49.777Z · LW · GW

Having asserted that your claim is, in fact, new information

I wouldn't assert that. I thought I was stating the obvious.

Yes, I think I misspoke earlier, sorry. It was only "new information" in the sense that it wasn't in that particular sentence of Eliezer's - to anyone familiar with discussions of GAI, your assertion certainly should be obvious.

Comment by CuSithBell on Fake Causality · 2012-06-04T05:18:20.128Z · LW · GW

You are saying that a GAI being able to alter its own "code" on the actual code-level does not imply that it is able to alter in a deliberate and conscious fashion its "code" in the human sense you describe above?

Generally GAIs are ascribed extreme powers around here - if it has low-level access to its code, then it will be able to determine how its "desires" derive from this code, and will be able to produce whatever changes it wants. Similarly, it will be able to hack human brains with equal finesse.

Comment by CuSithBell on Fake Causality · 2012-06-04T04:56:01.263Z · LW · GW

An advanced AI could reasonably be expected to be able to explicitly edit any part of its code however it desires. Humans are unable to do this.

Comment by CuSithBell on Fake Causality · 2012-06-04T04:26:22.953Z · LW · GW

Not meant as an attack. I'm saying, "to be fair it didn't actually say that in the original text, so this is new information, and the response is thus a reasonable one". Your comment could easily be read as implying that this is not new information (and that the response is therefore mistaken), so I wanted to add a clarification.

Comment by CuSithBell on Complexity of value has implications for Torture vs Specks · 2012-06-04T04:20:13.820Z · LW · GW

But 'value is fragile' teaches us that it can't be a 1-dimensional number like the reals.

This is not in fact what "value is fragile" teaches us, and it is false. Without intending offense, I recommend you read about utility a bit more before presenting any arguments about it here, as it is in fact a 1-dimensional value.

What you might reasonably conclude, though, is that utility is a poor way to model human values, which, most of the time, it is. Still, that does not invalidate the results of properly-formed thought experiments.

Comment by CuSithBell on Fake Causality · 2012-06-04T04:04:02.183Z · LW · GW

To be fair, when structured as

Sadly, we humans can't rewrite our own code, the way a properly designed AI could.

then the claim is in fact "we humans can't rewrite our own code (but a properly designed AI could)". If you remove a comma:

Sadly, we humans can't rewrite our own code the way a properly designed AI could.

only then is the sentence interpreted as you describe.

Comment by CuSithBell on Open Thread, June 1-15, 2012 · 2012-06-04T00:22:57.433Z · LW · GW

Many find that sort of discounting to be contrary to intuition and desired results, e.g. the suffering of some particular person is more or less significant depending on how many other people are suffering in a similar enough way.

Comment by CuSithBell on [deleted post] 2012-06-04T00:19:49.968Z

It would be grating if a dozen companies made posts like this every month, but that isn't the case.

I'm a little wary of this. You think it would be bad if other people acted in a way similar to you in sufficient number? What determines who "gets" to reap the benefits of being the exception?

Comment by CuSithBell on Newcomb's Problem and Regret of Rationality · 2012-06-04T00:10:11.108Z · LW · GW

The power, without further clarification, is not incoherent. People predict the behavior of other people all the time.

Ultimately, in practical terms the point is that the best thing to do is "be the sort of person who picks one box, then pick both boxes," but that the way to be the sort of person that picks one box is to pick one box, because your future decisions are entangled with your traits, which can leak information and thus become entangled with other people's decisions.

Comment by CuSithBell on Review: Selfish Reasons to Have More Kids · 2012-06-03T16:17:41.974Z · LW · GW

Well! I may have to take a more in-depth look at it sometime this summer.

Comment by CuSithBell on Newcomb's Problem and Regret of Rationality · 2012-06-03T16:16:49.337Z · LW · GW

Well, it's a thought experiment, involving the assumption of some unlikely conditions. I think the main point of the experiment is the ability to reason about what decisions to make when your decisions have "non-causal effects" - there are conditions that will arise depending on your decisions, but that are not caused in any way by the decisions themselves. It's related to Kavka's toxin and Parfit's hitchhiker.

Comment by CuSithBell on Newcomb's Problem and Regret of Rationality · 2012-06-02T22:39:30.640Z · LW · GW

Well, try using numbers instead of saying something like "provided luck prevails".

If p is the chance that Omega predicts you correctly, then the expected value of selecting one box is:

1,000,000(p) + 0(1-p)

and the expected value of selecting both is:

1,000(p) + 1,001,000(1-p)

So selecting both is only higher expected value if Omega guesses wrong about half the time or more.
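
Or, plugging in a few values of p to check the arithmetic:

```python
# Expected values from the formulas above, for a range of accuracies p.
def ev_one_box(p):
    return 1_000_000 * p + 0 * (1 - p)

def ev_two_box(p):
    return 1_000 * p + 1_001_000 * (1 - p)

for p in (0.5, 0.5005, 0.6, 0.9, 0.99):
    print(f"p = {p}: one box = {ev_one_box(p):,.0f}, both = {ev_two_box(p):,.0f}")
# The two lines cross at p = 1,001,000 / 2,000,000 = 0.5005, so one-boxing
# wins in expectation as soon as Omega is right just over half the time.
```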

Comment by CuSithBell on Rationality Quotes May 2012 · 2012-06-01T00:11:25.704Z · LW · GW

I read this as "people who aren't ( (clownsuit enjoyers) and (autistic) ) ...", but it looks like others have read it as "people who aren't (clownsuit enjoyers) and aren't (autistic)" = "people who aren't ( (clownsuit enjoyers) or (autistic) )", which might be the stricter literal reading. Would you care to clarify which you meant?

Comment by CuSithBell on Review: Selfish Reasons to Have More Kids · 2012-05-31T17:41:02.054Z · LW · GW

It certainly could be - I read the anecdote from a book I picked idly off a shelf in a bookstore, and I retained the vague impression that it was from a book about the importance of social factors and the effects of technology on our social/psychological development, but I could have been conflating it with another such book. After reading an excerpt from "The Boy Who Was Raised as a Dog", I think the style matches, so that was probably the one I read. Would you recommend it?

Comment by CuSithBell on Review: Selfish Reasons to Have More Kids · 2012-05-31T16:56:56.712Z · LW · GW

I heard a horror story (anecdote from a book, for what it's worth) of a child basically raised in front of a TV, who learned from it both language and a general rule that the world (and social interaction) is non-interactive. If you could get his attention, he'd cheerfully recite some memorized lines then zone out.

Comment by CuSithBell on Only say 'rational' when you can't eliminate the word · 2012-05-31T16:41:25.971Z · LW · GW

My take on it is - "rationality" isn't the point. Don't try to do things "rationally" (as though it's a separate thing), try to do them right.

It's actually something we see with the nuts that occasionally show up here - they're obsessed with the notion of rationality as a concrete process or something, insisting (e.g.) that we don't need to look at the experimental evidence for a theory if it is "obviously false when subjected to rational thought", or that it's bad to be "too rational".