Complexity-based moral values.

post by Dmytry · 2012-04-06T17:09:29.708Z · LW · GW · Legacy · 102 comments

Preface: I am just noting that we humans seem to base our morality on some rather ill-defined intuitive notion of complexity. If you think this is not workable for AI, or something like that, that thought does not by itself constitute a disagreement with what I am writing here.

More preface: Utilitarian calculus is the idea that what people value can be described simply in terms of summation. Complexity is another kind of f(a,b,c,d) that behaves vaguely like a 'sum', but is not as simple as summation. If a, b, c, d are strings and f is written in a programming language, the above expression would often be written as f(a+b+c+d), using + to mean concatenation, which is something fundamentally different from summation of real-valued numbers. But it can appear confusingly close: for a, b, c, d that don't share a lot of information among themselves, the result behaves much like a function of a sum of real numbers. It will, however, diverge from sum-like behaviour as a, b, c, d share more information among themselves, much as our intuitions about what is right diverge from sum-like behaviour when you start considering exact duplicates of people which only diverged for a few minutes.
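
As a rough illustration of that divergence (using zlib-compressed length as a crude stand-in for a complexity metric; this is only an approximation, and notably it disagrees with the intuition, discussed below, that random noise isn't 'really' complex):

```python
import os
import zlib

def c(x: bytes) -> int:
    """Crude complexity proxy: length of the zlib-compressed data."""
    return len(zlib.compress(x, 9))

a, b = os.urandom(20_000), os.urandom(20_000)  # two blobs sharing no information
dup = os.urandom(20_000)                       # one blob, used twice below

print(c(a) + c(b), c(a + b))          # roughly equal: concatenation behaves like a sum
print(c(dup) + c(dup), c(dup + dup))  # concatenation far below the 'sum': shared information
```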

It's a very rough idea, but it seems to me that a lot of common-sense moral values are based on some sort of intuitive notion of complexity. Happiness via highly complex stimuli that pass through highly complex neural circuitry inside your head seems like a good thing to pursue; happiness via wire, resistor, and battery seems like a bad thing. What makes the idea of literal wireheading and hard pleasure-inducing drugs so revolting for me is the simplicity, the banality of it. I have far fewer objections to e.g. hallucinogens (I never took any myself, but I am also an artist, and I can guess that other people may have lower levels of certain neurotransmitters, making them unable to imagine what I can imagine).

Complexity-based metrics have the property that they easily eat for breakfast huge numbers like "a dust speck in 3^^^3 eyes", and even infinity. The torture of a conscious being for a long period of time can easily be a more complex issue than even an infinite number of dust specks.

Unfortunately, complexity metrics like Kolmogorov complexity are uncomputable on arbitrary input, and are large for truly random values. But insofar as a scenario is specific and has been arrived at by computation, the complexity of that computation sets an upper bound on the complexity of the scenario. The mathematics may also not be here yet: we have an intuitive notion of complexity under which totally random noise is not very complex, and a very regular signal is not either, but some forms of patterns in between are highly complex.

This may be difficult to formalize. We could, of course, only assign complexities when we are informed of the properties of something, rather than computing them for arbitrary input from scratch: if we label something 'random numbers', its complexity is low; if it is an encrypted volume of the works of Shakespeare, then even though we would not be able to distinguish it from random in practice (assuming good encryption), because we are told what it is we can assign it higher complexity.

This also aligns with whatever it is that evolution has been maximizing on the path leading up to H. sapiens (note that for the most part, evolution's power has gone into improving bacteria; the path leading up to H. sapiens is a very special case). Maybe we for some reason try to extrapolate this [note: for example, a lot of people rank their preference of animals as food by the animal's complexity of behaviour, which makes humans the least desirable food; we have anti-whaling treaties], maybe it is a form of goal convergence between the brain as an intelligent system and evolution (both employ hill climbing to arrive at solutions), or maybe we evolved a system that aligns with where evolution was heading because that increased fitness [edit: to address a possible comment, we have another system based on evolution, the immune system, which works by evolving antibodies via somatic hypermutation; it's not inconceivable that we use some evolution-like mechanism to tweak our own neural circuitry, given that our circuitry does undergo massive pruning in the early stages of life].

102 comments

Comments sorted by top scores.

comment by DanielLC · 2012-04-06T18:36:13.718Z · LW(p) · GW(p)

People don't want you to be happy for complex reasons. They want you to be happy for specific reasons that just happen not to be simple.

People want you to be happy because you enjoy some piece of fine art, not because the greatest common divisor of the number of red and black cars you saw on the way to work is prime.

Replies from: Dmytry
comment by Dmytry · 2012-04-06T18:39:34.420Z · LW(p) · GW(p)

What is complex about the greatest common divisor being prime? It's a laughably simple thing compared to the image recognition of any kind that is involved in appreciating a piece of fine art. I can easily write the thing that checks the GCD of the counts of coloured cars for primality. I can't write human-level image recognition software. It's so bloody complex in comparison, it's not even funny.
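
(For concreteness, here is a minimal sketch of that check - the whole 'criterion', assuming the two counts are simply handed to us; counting the cars is where all the actual difficulty hides:)

```python
from math import gcd

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def gcd_of_car_counts_is_prime(red_cars: int, black_cars: int) -> bool:
    # the entire criterion from the example, given the two counts
    return is_prime(gcd(red_cars, black_cars))
```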

Replies from: orthonormal, gjm
comment by orthonormal · 2012-04-07T00:10:36.432Z · LW(p) · GW(p)

You're not engaging with his point. By writing what he did, he was inviting you to consider the idea of a genuinely arbitrary complex concept, without the bother of writing one out explicitly (because there's no compact way to do so - the compact ways of representing complex concepts in English are reserved for the complex concepts that we do care about).

Replies from: Dmytry
comment by Dmytry · 2012-04-07T04:53:37.347Z · LW(p) · GW(p)

By writing what he did, he was inviting you to consider the idea of a genuinely arbitrary complex concept

If he had picked an actually complex concept, there would be some autistic person happily doing just this, and we would be happy for him that he found something he likes to do, that also engages his brain in precisely the way in which the GCD does not. Or at least, we would find it far less objectionable than the GCD example.

Replies from: DanielLC, cocopium
comment by DanielLC · 2012-04-07T17:32:18.058Z · LW(p) · GW(p)

Would you be happier for him than for someone with Down syndrome who was just always happy?

If I offered you a pill that would modify you so that you use some hash function to figure out happiness, one that's far more complex than what you have now as well as completely unrelated to it, would you take it?

that also engages his brain in precisely the way in which the GCD does not.

I don't see why it would engage it differently. More perhaps, but not differently.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T17:39:06.443Z · LW(p) · GW(p)

You are sort of still agreeing with me here, even though your explicit notion of complexity is very different from your implicit notion.

Hash functions are simple. Visual recognition is hard. If I spent my life trying to break encryption schemes and hash functions - which is very hard, and which is what is complex about hash functions: they are easy to compute, not easy to reverse - and were intellectually happy when I broke some encryption, that would generally be a quite respectable thing to do. Even though that is a case of happiness from some really difficult nonsense.

edit: and I do agree that something entirely random, we don't consider complex.

Replies from: DanielLC
comment by DanielLC · 2012-04-07T17:52:00.547Z · LW(p) · GW(p)

Hash functions tend to be simple, but they don't have to be. If you came up with something of a certain complexity at random, it would look like an extremely complex hash function, not image recognition.

edit: and I do agree that something entirely random, we don't consider complex.

In that case, you are using an entirely different definition of complexity than I am. Define complexity.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T18:13:25.086Z · LW(p) · GW(p)

I'll link another post, which I didn't know of, that explains it considerably better:

http://lesswrong.com/lw/196/boredom_vs_scope_insensitivity/

Ultimately, complexity in the sense of Kolmogorov complexity is not computable (nor always useful). There are various complexity metrics that are more practical. The interesting thing about complexity metrics is that under certain conditions the complexity of the concatenation of A and B is close to the sum of their complexities, and under other conditions it is far from it; that generally has to do with how much A and B have in common. The problem, of course, is that we can't quite pin down which exact metric is used here.

One sort of complexity is the size of the internal representation inside the head. We don't know how we represent things internally; that is a very complicated problem. It does seem that we use compression, implying that the 'size' of a thing inside the head depends on its complexity in terms of its repetitiveness, while ignoring randomness. It may be that - our hardware being faulty and all - the size of the internal representation plays a role. It is clearly the case that the more abstractly we represent strangers, the less we care about them.

Replies from: DanielLC
comment by DanielLC · 2012-04-07T19:38:49.216Z · LW(p) · GW(p)

http://lesswrong.com/lw/196/boredom_vs_scope_insensitivity/

I don't see how that's relevant.

There are various complexity metrics that are more practical.

What complexity metric are you using? I suspect it involves only counting information that you find interesting, or something to that extent. Otherwise, I don't see how random data could possibly have low complexity.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T19:55:47.937Z · LW(p) · GW(p)

We compress random data into 'random data' (along with the standard deviation, etc.) because we don't care about the exact content or find it irrelevant. Maybe a bit like a random-noise image after it has been blurred.

Replies from: DanielLC
comment by DanielLC · 2012-04-07T19:59:49.634Z · LW(p) · GW(p)

That changes a lot.

Before, I thought you were saying that people favor moral values that have high K-complexity. This essentially means that people favor moral values that don't seem arbitrary. I think I agree with that.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T20:18:25.679Z · LW(p) · GW(p)

Not the moral values, actually... the idea is that when making moral comparisons, the perceived complexity (the length of the internal representation, perhaps) may be playing a big role. Evolution also tends to pick the easiest routes; if the size of the internal representation correlated with tribal importance or genetic proximity, then caring more for those represented most complexly would be a readily available solution for discriminating between in-tribe and out-tribe.

comment by cocopium · 2012-04-07T22:31:35.049Z · LW(p) · GW(p)

I think you could have found a nicer way to make your point... a better example.

In California autism rates have reached 1 in 88 (propaganda... or the real rate? Hard to tell. Nonetheless, it is high), and they are steadily increasing all over the world.

This disorder is so prevalent now that when you speak on any issue at all, someone in your audience has probably been affected by autism.

Using traits of the disabled as some type of caricature example to espouse your unrelated opinions is not only unproductive... it also makes you look like a jerk.

I am absolutely NOT in support of a 'politically correct' society, but your example was in poor taste.

comment by gjm · 2012-04-07T20:25:28.468Z · LW(p) · GW(p)

How are you going to count the red and black cars without image recognition?

Replies from: Dmytry
comment by Dmytry · 2012-04-07T20:57:00.362Z · LW(p) · GW(p)

With a very simple kind that I can write easily, not with the human-like kind that takes immense effort. Detecting a car in an image isn't as hard as it sounds. Being shown pictures of cats and dogs, or other arbitrary objects, and then telling cats and dogs apart - that's hard. Bonus points for not knowing which is which and discovering that there are two different types of item.

Replies from: gjm
comment by gjm · 2012-04-07T21:25:41.733Z · LW(p) · GW(p)

Surely identifying cars isn't that much easier than identifying cats. I dare say it's somewhat easier; cars commonly have uniform colours, geometrical shapes, and nice hard edges. But are you really sure you could easily write a piece of software that, given (say) a movie of what you saw on your way to work, would count the number of red and black cars you saw? (Note that it needs to determine when two things in different frames are the same car, and when they aren't.)

Replies from: Dmytry
comment by Dmytry · 2012-04-07T22:20:21.650Z · LW(p) · GW(p)

Well, processing a movie taken through the eyes is somewhat difficult indeed. Still, the difficulty of free-form image recognition at a human level is so staggering that this doesn't come close. Cars are recognizable with various hacks.

comment by Grognor · 2012-04-07T03:10:31.132Z · LW(p) · GW(p)

Complexity in morality is like consistency in truth. Let me explain.

If all of your beliefs are exactly correct, they will be consistent. But if you force all your beliefs to be consistent, there's no guarantee they'll be correct. You can end up fixing a right thing to make it consistent with a wrong thing.

Just so with morality; humans value complex things, but this complexity is a result, not a cause, and not something to strive for in and of itself.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T18:10:20.669Z · LW(p) · GW(p)

Good point; however, note that we call systems of consistent beliefs 'mathematics'. It is unlinked from reality, but it is extremely useful as long as one understands the sort of truth there is to the consistency. Consistency produces conditional truth: the truth of statements of the form "if A is true, then B is true". Without mathematics, there is no improving of beliefs.

comment by cousin_it · 2012-04-06T17:24:25.461Z · LW(p) · GW(p)

The sequences contain a preemptive counterargument to your post; could you address the issues raised there?

Replies from: lukstafi, Dmytry
comment by lukstafi · 2012-04-06T17:56:39.900Z · LW(p) · GW(p)

I read Dmytry's post as a hint, not a solution. Since obviously pursuing complexity at "face value" would be pursuing entropy.

Replies from: Dmytry
comment by Dmytry · 2012-04-06T18:20:31.298Z · LW(p) · GW(p)

Yep. It'd be maximized by heating you up to the maximum attainable temperature, or by throwing you into a black hole, depending on how you look at it.

Replies from: lukstafi
comment by lukstafi · 2012-04-06T19:50:32.774Z · LW(p) · GW(p)

We can have a low-information reference class with instances of high entropy, the "heat soup". But then, picking a reference class is arbitrary (we can contrive a complex class of heat soup flavors).

comment by Dmytry · 2012-04-06T17:30:36.913Z · LW(p) · GW(p)

I don't like EY's posts about AI. He's not immune to the sunk-cost fallacy, and this is the worst form of sunk-cost fallacy: denying outright (with a long handwave) any possibility of a better solution, having sunk the cost into the worse one.

Ultimately, if the laws of physics are simple, he's just flat-out factually wrong that morality doesn't arise from simple rules. His morality arose from those laws of physics, and insofar as he's not a Boltzmann brain, his values aren't incredibly atypical.

edit: To address it further: he does raise a valid point that there is no simple rule. Complexity metrics, though, are by no means a simple 'rule'; they are uncomputable and thus aren't even a rule.

Replies from: cousin_it, ArisKatsaris, Vaniver
comment by cousin_it · 2012-04-06T17:55:55.177Z · LW(p) · GW(p)

Physics can contain objects whose complexity is much higher than that of physics. Do you have a strong argument why randomness didn't make a big contribution to human morality?

Replies from: Dmytry, Will_Newsome
comment by Dmytry · 2012-04-06T18:05:18.002Z · LW(p) · GW(p)

Well, suppose I were to make just a rough evolution sim, given a really powerful computer. Even if it evolves a society with principles we can deem moral only once in a trillion societies - which is probably way too low an estimate, given that much of our principles are game-theoretic - that just adds 40 bits to the description, for indexing those sims. edit: and the idea of an evolution sim doesn't really have such huge complexity; any particular evolution sim does, but we don't care which evolution simulator we are working with; we don't need the bits for picking one specific simulator, just the bits for picking a working one.
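
As a quick sanity check of the 40-bit figure, picking out one society in a trillion costs about

$$\log_2 10^{12} = 12 \log_2 10 \approx 12 \times 3.32 \approx 39.9 \ \text{bits}.$$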

Replies from: cousin_it
comment by cousin_it · 2012-04-06T18:24:07.142Z · LW(p) · GW(p)

Game-theoretic principles might be simple enough, but the utility function of a FAI building a good future for humanity probably needs to encode other information too, like cues for tasty food or sexual attractiveness. I don't know any good argument why this sort of information should have low complexity.

Replies from: Dmytry
comment by Dmytry · 2012-04-06T18:29:41.849Z · LW(p) · GW(p)

You may be overfitting there. The FAI could let people decide what they want when it comes to food and attractiveness. Actually, it had better, or I'd be having some serious regrets about this FAI.

Replies from: cousin_it, cousin_it
comment by cousin_it · 2012-04-06T18:39:27.603Z · LW(p) · GW(p)

That's reasonable, but to let people decide, the FAI needs to recognize people, which also seems to require complexity...

Replies from: faul_sname, Dmytry
comment by faul_sname · 2012-04-06T20:31:50.183Z · LW(p) · GW(p)

If your biggest problem is on the order of recognizing people, the problem of FAI becomes much, much easier.

comment by Dmytry · 2012-04-06T18:47:21.936Z · LW(p) · GW(p)

Well, the uFAI likewise needs to know what "paperclips or something" means (or any real-world goal at all). It's an obstacle faced by all contestants in the race. We humans learn what is another person and what isn't. (Or have evolved it; it doesn't matter.)

Replies from: endoself, cousin_it, othercriteria
comment by endoself · 2012-04-06T19:15:00.954Z · LW(p) · GW(p)

If you get paperclips slightly wrong, you get something equally bad (staples is the usual example, but the point is that any slight difference is about equally bad), but if you get FAI slightly wrong, you don't get something equally good. This breaks the symmetry.

Replies from: Dmytry
comment by Dmytry · 2012-04-06T19:17:17.961Z · LW(p) · GW(p)

I think if you get paperclips slightly wrong, you get a crash of some kind. If I get a ray-tracer slightly wrong, it doesn't trace electrons instead of photons.

edit: To clarify: it's about the definition of a person vs. the definition of a paperclip. You need a very broad definition of a person for FAI, so that it won't misidentify a person as a non-person (misidentifying dolphins as persons won't be a big problem), and you need a very narrow definition of a paperclip for uFAI, so that a person holding two papers together is not a paperclip. It's not always intuitive how broad definitions compare to narrow ones in difficulty, but it is worth noting that it is ridiculously hard to define paperclip-making so that a Soviet factory anxious to maximize the paperclips would make anything at all, while it wasn't particularly difficult to define what a person is (or to define what 'money' is so that a capitalist paperclip factory would make paperclips to maximize profit).

comment by cousin_it · 2012-04-06T18:54:03.685Z · LW(p) · GW(p)

I agree that paperclips could also turn out to be pretty complex.

comment by othercriteria · 2012-04-06T19:51:03.312Z · LW(p) · GW(p)

I don't think "paperclip maximizer" is taken as a complete declarative specification of what a paperclip maximizer is, let alone what it understands itself to be.

I imagine the setup is something like this. An AI has been created by some unspecified (and irrelevant) process and is now doing things to its (and our) immediate environment. We look at the things it has done and anthropomorphize it, saying "it's trying to maximize the quantity of paperclips in the universe". Obviously, almost every word in that description is problematic.

But the point is that the AI doesn't need to know what "paperclips or something" means. We're the ones who notice that the world is much more filled with paperclips after the AI got switched on.

This scenario is invariant under replacing "paperclips" with some arbitrary "X", I guess under the restriction that X is roughly at the scale (temporal, spatial, conceptual) of human experience. Picking paperclips, I assume, is just a rhetorical choice.

Replies from: Dmytry
comment by Dmytry · 2012-04-06T20:06:07.330Z · LW(p) · GW(p)

Well, I agree. That also goes for whatever process determines something to be a person. The difference is that the FAI doesn't have to create persons; its definition doesn't need to correctly process things from the enormous space of possible things that can be or not be persons. It can have a very broad definition that will include dolphins, and it will still be OK.

Intelligence is, to some extent, self-defeating when finding a way to make something real; the easiest Y that is inside the set X will be picked, by design, as instrumental to making more of some kind of X.

I.e. you define X to be something that holds papers together; the AI thinks and thinks and sees that a single atom, under some circumstances common in the universe (very far away in space), can hold the papers together; it finds the Casimir effect, which makes a vacuum able to hold two conductive papers together; and so on. The definition of X has to be resistant against such brute-forcing for the optimal solution.

Whether the AI can come up with some real-world manufacturing goal that it can't defeat in such a fashion - well, that's open to debate. Uncomputable things seem hard to defeat.

edit: Actually, would you consider the case of a fairly stupid nano-manufacturing AI destroying us, and itself, with grey goo to be an unfriendly AI? That seems to be a particularly simple failure mode for a self-improving system, FAI or uFAI, under bounded computational power, and a failure mode for likely non-general AIs as well, since we are likely to employ such AIs to work on biotechnology and nanotechnology.

Replies from: othercriteria
comment by othercriteria · 2012-04-06T20:33:28.678Z · LW(p) · GW(p)

It doesn't sound like you are agreeing with me. I didn't make any assumptions about what the AI wants or whether its instrumental goals can be isolated. All I supposed was that the AI was doing something. I particularly didn't assume that the AI is at all concerned with what we think it is maximizing, namely, X.

As for the grey goo scenario, I think that an AI that caused the destruction of humanity not being called unfriendly would indicate an incorrect definition of at least one of "AI", "humanity", or "unfriendly" ("caused" too, I guess).

Replies from: Dmytry, TheOtherDave
comment by Dmytry · 2012-04-06T20:42:55.617Z · LW(p) · GW(p)

All I supposed was that the AI was doing something.

Can you be more specific? I have an AI that's iterating the parameters of some strange attractor - defined within it - until it finds unusual behaviour. I can make an AI that would hill-climb and search for improvements to the former AI. edit: Now, the worst thing that can happen is that it makes a mind-hack image that kills everyone who looks at it. That wasn't the intent, but the 'unusual behaviour' might get too unusual for the human brain to handle. Is that a serious risk? No, it's a laughable one.

Replies from: othercriteria
comment by othercriteria · 2012-04-06T21:41:25.049Z · LW(p) · GW(p)

Implicit in my setup was that the AI reached the point where it was having noticeable macroscopic effects on our world. This is obviously easiest when the AI's substrate has some built-in capacity for input/output. If we're being really generous, it might have an autonomous body, cameras, an internet connection, etc. If we're being stingy, it might just be an isolated process running on a computer with its inputs limited to checking the wall-clock time and outputs limited to whatever physical effects it has on the CPU running it. In the latter case, doing something to the external world may be very difficult but not impossible.

The program you have doing local search in your example doesn't sound like an AI; even if you stuck it in the autonomous body, it wouldn't do anything to the world that's not a generic side-effect of its running. No one would describe it as maximizing anything.

Replies from: Dmytry
comment by Dmytry · 2012-04-06T22:13:04.329Z · LW(p) · GW(p)

No one would describe it as maximizing anything.

Well, it is maximizing whatever I defined it to maximize, usefully for me, and in a way that is practical. In any case, you said, "All I supposed was that the AI was doing something." My AI is doing something.

This is obviously easiest when the AI's substrate has some built-in capacity for input/output. If we're being really generous, it might have an autonomous body, cameras, an internet connection, etc.

Yeah, and it's rolling forward and clamping its manipulators until they wear out. Clearly you want it to maximize something in the real world, not just do something. The issue is that the only things it can do in approximately this way are things like shooting at the colour blue.

Everything else requires a very detailed model, and maximization of something in the model, followed by carrying out the actions in the real world, which, interestingly, is entirely optional, and which even humans have trouble getting themselves to do (when I invent something and am satisfied that it will work, it is boring to implement; that's a common problem). Edit: and one other point: without a model, all you can do is try random stuff on the world itself, which is not at all intelligent (and resembles Wheatley in Portal 2 trying to crack the code).

comment by TheOtherDave · 2012-04-06T21:00:40.249Z · LW(p) · GW(p)

...or perhaps "destruction".

comment by cousin_it · 2012-04-06T18:34:48.913Z · LW(p) · GW(p)

Sorry, I don't understand what exactly you are proposing. A utility function is a function from states of the universe to real numbers. If the function contains a term like "let people decide", it should also define "people", which seems to require a lot of complexity.

Or are you coming at this from some other perspective, like assigning utilities to possible actions rather than world states? That's a type error and also very likely to be Bayesian-irrational.

comment by Will_Newsome · 2012-04-17T14:45:38.139Z · LW(p) · GW(p)

Randomness is Chaitin's omega is God implies stochasticity (mixed Strategies) implies winning in the limit due to hypercomputational advantages universally if not necessarily contingently. Hence randomness isn't at odds as such with morality. Maybe Schmidhuber's ideas about super-omegas are relevant. Doubt it.

comment by ArisKatsaris · 2012-04-06T17:48:16.218Z · LW(p) · GW(p)

His morality arose from those laws of physics

Plus the process of a few hundred million years of evolutionary pressures.

Do you think simulating those years and extrapolating the derived values from that simulation is clearly easier and simpler than extrapolating the values from e.g. a study of human neural scans/human biochemistry/human psychology?

Replies from: David_Gerard, Dmytry, faul_sname
comment by David_Gerard · 2012-04-06T22:21:34.543Z · LW(p) · GW(p)

Do you think simulating those years and extrapolating the derived values from that simulation is clearly easier and simpler than extrapolating the values from e.g. a study of human neural scans/human biochemistry/human psychology?

It's not clear to me how the second is obviously easier. How would you even do that? Are there simple examples of doing this that would help me understand what "extrapolating human values from a study of human neural scans" would entail?

comment by Dmytry · 2012-04-06T18:25:34.568Z · LW(p) · GW(p)

One could, e.g., run a sim of bounded-intelligence agents competing with each other for resources, then pick the best one, which will implement tit-for-tat and the more complex solutions that work. It was already the case for the iterated prisoner's dilemma that there wasn't some enormous number of amoral solutions, much to the surprise of the AI researchers of the time, who wasted their efforts trying to make some sort of nasty, sneaky Machiavellian AI.
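
(A toy sketch of the kind of tournament referred to - not the historical setup, just an illustration with a made-up strategy pool and the standard payoffs - in which the 'nice but retaliating' strategies outscore unconditional defection:)

```python
from itertools import combinations_with_replacement

# Standard payoffs for (my_move, their_move); C = cooperate, D = defect
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(my_hist, their_hist):
    return 'C' if not their_hist else their_hist[-1]

def grudger(my_hist, their_hist):
    return 'D' if 'D' in their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def play(p1, p2, rounds=200):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h1, h2), p2(h2, h1)
        s1, s2 = s1 + PAYOFF[(m1, m2)], s2 + PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return s1, s2

players = [tit_for_tat, grudger, always_defect]
totals = {p.__name__: 0 for p in players}
for p1, p2 in combinations_with_replacement(players, 2):  # round robin incl. self-play
    s1, s2 = play(p1, p2)
    totals[p1.__name__] += s1
    totals[p2.__name__] += s2
print(totals)  # the retaliating cooperators outscore always_defect in this pool
```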

edit: anyhow, I digress. The point is that when something is derivable via simple rules (even if impractically), like the laws of physics, that should enormously boost the likelihood that it is derivable in some more practical way.

comment by faul_sname · 2012-04-06T20:34:03.591Z · LW(p) · GW(p)

Would "yes" be an acceptable answer? It probably is harder to run the simulations, but it's worth a shot at uncovering some simple cases where different starting conditions converge on the same moral/decision making system.

comment by Vaniver · 2012-04-06T17:45:53.534Z · LW(p) · GW(p)

You may want to check out this post instead; it seems like a much closer response to the ideas in your post.

Replies from: Dmytry
comment by Dmytry · 2012-04-06T18:19:54.611Z · LW(p) · GW(p)

I'm not proposing an AI; I'm noting that humans seem to use some intuitive notion of complexity to decide what they like. edit: also, has Eliezer ever written a Rubik's-cube-solving AI? Or anything even remotely equivalent? It's easy to pontificate about how other people think wrongly when you aren't having to solve anything. The way engineers think works for making me a car. The way Eliezer thinks works for making him an atheist. Big difference. (I am an atheist too, so this is not a religious stab, and I like Eliezer's sequences; it's just that problem solving is something we are barely capable of at all, and adding extra crap to shoot down lines of thought which may in fact work does not help you any.)

edit: also, the solution: you just do hill climbing with n-move look-ahead. As a pre-processing step you may search for sequences that climb the hill out of any position. It's a very general problem-solving method, hill climbing with n-move look-ahead. If you want the AI to invent hill climbing, well, I know of one example, evolution, and that one does increase some kind of complexity along the line leading up to mankind, who invents better hill climbing, even though complexity is not the best solution to 'reproducing the most'. If the point is making an AI that comes up with the very goal of solving a Rubik's cube, that gets into AGI land, but using the cube to improve one's own problem-solving skill is the way it is for us. I like to solve the cube into some pattern. An alien may not care what pattern to solve the cube into, as long as he pre-commits to something random that is reachable.
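
(For what it's worth, a minimal sketch of the general method - greedy hill climbing where each committed move is chosen by searching up to n moves ahead. The score, moves, and apply_move arguments are placeholders; a real cube solver would plug in a cube state, the face turns, and a heuristic such as the number of correctly placed pieces.)

```python
from itertools import product

def lookahead_hill_climb(state, score, moves, apply_move, n=3, max_steps=1000):
    """Greedy hill climbing with n-move look-ahead: at each step, try every
    sequence of up to n moves and commit to the first move of the best one."""
    for _ in range(max_steps):
        best_gain, best_first_move = 0, None
        for length in range(1, n + 1):
            for seq in product(moves, repeat=length):
                s = state
                for m in seq:
                    s = apply_move(s, m)
                gain = score(s) - score(state)
                if gain > best_gain:
                    best_gain, best_first_move = gain, seq[0]
        if best_first_move is None:  # stuck at a local maximum within the horizon
            return state
        state = apply_move(state, best_first_move)
    return state
```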

comment by Dmytry · 2012-04-06T18:32:18.930Z · LW(p) · GW(p)

To address some topic digression: my point is not the theoretical question of whether you can or can't derive the FAI rules this way. The point here is that we humans seem to use some intuitive notion of complexity - for lack of a better word - to rank moral options. The wireheading objection is a particularly striking example of this.

comment by [deleted] · 2012-04-06T17:35:23.286Z · LW(p) · GW(p)

Just a note - I'd change your last sentence, as it seems to imply some form of Lamarckism and will probably get your post downvoted for that, when I'm sure that wasn't your intent...

comment by shminux · 2012-04-07T04:55:35.267Z · LW(p) · GW(p)

I don't understand why this post and some of Dmytry's comments are downvoted so hard. The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.

My personal impression has been that emotions are a result of a hidden subconscious logical chain, and can be affected by consciously following this chain, thus reducing this apparent complexity to something simple. The experiences of others here seem to agree, from Eliezer's admission that he has developed a knack for "switching off arbitrary minor emotions" to Alicorn's "polyhacking".

It is not such a big leap to suggest that our snap moral judgments likewise result from a complex, or at least hidden, subconscious reasoning.

Replies from: TheOtherDave, wedrifid, David_Gerard, Dmytry
comment by TheOtherDave · 2012-04-07T14:00:05.989Z · LW(p) · GW(p)

I can't speak to the downvoting, but for my part I stopped engaging with Dmytry altogether a while back because I find their habit of framing interactions as adversarial both unproductive and unpleasant. That said, I certainly agree that our emotions and moral judgments are the result of reasoning (for a properly broad understanding of "reasoning", though I'd be more inclined to say "algorithms" to avoid misleading connotations) of which we're unaware. And, yes, recapitulating that covert reasoning overtly frequently gives us influence over those judgments. Similar things are true of social behavior when someone articulates the underlying social algorithms that are ordinarily left covert.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T14:40:22.529Z · LW(p) · GW(p)

I can't speak to the downvoting, but for my part I stopped engaging with Dmytry altogether a while back because I find their habit of framing interactions as adversarial both unproductive and unpleasant.

Sorry for that; it was a bit of leakage from how the interactions here about AI issues are rather adversarial in nature, in the sense that the ambiguity - unavoidable in human language - of anything that disagrees with the prevailing opinion here is resolved in favour of the interpretation that makes the least sense. AI is, definitely, a very scary risk. Scariness doesn't lead to the most reasonable processing. I do not claim to be immune to this.

Replies from: TheOtherDave, Vladimir_Nesov
comment by TheOtherDave · 2012-04-07T16:37:03.252Z · LW(p) · GW(p)

I agree that some level of ambiguity is unavoidable, especially on initial exchange.
Given iterated exchange, I usually find that ambiguity can be reduced to negligible levels, but sometimes that fails.
I agree that some folks here have the habit you describe, of interpreting other people's comments uncharitably. This is not unique to AI issues; the same occurs from time to time with respect to decision theory, moral philosophy, theology, various other things.
I don't find it as common here as you describe it as being, either with respect to AI risks or anything else.
Perhaps it's more common here than I think but I attend to the exceptions disproportionally; perhaps it's less common here than you think but you attend to it disproportionally; perhaps we actually perceive it as equally common but you choose to describe it as the general case for rhetorical reasons; perhaps your notion of "the interpretation that makes the least amount of sense" is not what I would consider an uncharitable interpretation; perhaps something else is going on.
I agree that fear tends to inhibit reasonable processing.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T16:40:33.886Z · LW(p) · GW(p)

Well, I think it is the case that fear is a mind-killer to some extent. Fear rapidly assigns a truth value to a proposition, using a heuristic. That is necessary for survival. Unfortunately, this value makes a very bad prior.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-07T16:53:20.148Z · LW(p) · GW(p)

Yup, that's one mechanism whereby fear tends to inhibit reasonable processing.

Replies from: wedrifid
comment by wedrifid · 2012-04-07T17:40:51.356Z · LW(p) · GW(p)

Excellent use of fogging in this conversation Dave.

Replies from: cousin_it, TheOtherDave
comment by cousin_it · 2012-04-08T12:36:51.657Z · LW(p) · GW(p)

Seconding TheOtherDave's thanks. I stumbled on this technique a couple of days ago; it's nice to know that it has a name.

comment by TheOtherDave · 2012-04-07T20:55:47.774Z · LW(p) · GW(p)

Upvoted back to zero for teaching me a new word.

comment by Vladimir_Nesov · 2012-04-07T15:52:05.189Z · LW(p) · GW(p)

the ambiguity - unavoidable in human language - of anything that disagrees with the prevailing opinion here is resolved in favour of the interpretation that makes the least sense

Ambiguity should be resolved by figuring out the intended meaning, irrespective of the intended meaning's merits, which should be discussed separately from the procedure of ambiguity resolution.

comment by wedrifid · 2012-04-07T08:31:30.121Z · LW(p) · GW(p)

I don't understand why this post and some of Dmytry's comments are downvoted so hard.

I'm going with the position that the post got the votes it deserved. It's not very good thinking, and Dmytry goes out of his way to convey arrogance and condescension when he posts. It doesn't help that, rather than simply being uninformed of prior work, he explicitly and belligerently defies it - that changes a response of sympathy with his efforts and 'points for trying' into an expectation that he say things that make sense. Of course that is going to get downvoted.

The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.

It isn't self-contradictory, just the other two.

Seriously, complexity maximisation and "This also aligns with whatever it is that evolution has been maximizing on the path leading up to H. sapiens." That is crazy and obviously false.

It is not such a big leap to suggest that our snap moral judgments likewise result from a complex, or at least hidden, subconscious reasoning.

Of course that is true! But that isn't what the post says. There is a world of difference between "our values are complex" and "we value complexity".

Replies from: Dmytry
comment by Dmytry · 2012-04-07T12:28:30.120Z · LW(p) · GW(p)

Netting a zero average, though I guess pointing that out is not a very good thing for votes.

Replies from: wedrifid
comment by wedrifid · 2012-04-07T12:30:18.596Z · LW(p) · GW(p)

Netting a zero average, though I guess pointing that out is not a very good thing for votes.

I don't understand what you are trying to convey.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-07T13:40:31.255Z · LW(p) · GW(p)

I understood it to mean that comments about karma tend to get downvoted.

comment by David_Gerard · 2012-04-07T07:59:15.650Z · LW(p) · GW(p)

Because someone's going through mass-downvoting. Note that your defence got downvoted by them too. When someone gets a downvote for posting the actual answer to the question, there's little going on but blue-green politics with respect to local tropes.

Replies from: Manfred, wedrifid, Dmytry
comment by Manfred · 2012-04-07T08:18:07.324Z · LW(p) · GW(p)

This is often the first explanation proposed, but is wrong most of the time. Charity, context, etc. etc.

comment by wedrifid · 2012-04-07T08:35:59.153Z · LW(p) · GW(p)

Because someone's going through mass-downvoting. Note that your defence got downvoted by them too.

Not only that, his defense got downvoted by me before the post itself did and with greater intent to influence.

there's little going on but blue-green politics with respect to local tropes.

It doesn't take local tropes to prompt disagreement here. Not thinking that human values can be attributed to valuing complexity is hardly a weird and unique-to-lesswrong position. In fact Eliezer-values (in Fun-Theory) are if anything closer to what this post advocates than what can be expected in the mainstream.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T12:12:33.434Z · LW(p) · GW(p)

Not only that, his defense got downvoted by me before the post itself did and with greater influence.

edit: oh wait, you are speaking of shminux. I was thinking of the answer to a question.

comment by Dmytry · 2012-04-07T12:30:41.248Z · LW(p) · GW(p)

Actually, I had 2 upvotes on that answer, then it got to -1. I think I'm just going to bail out, because for that same post about the Rubik's cube I could have gotten a lot of 'thanks man' replies on, e.g., a programming contest forum or the like, if there had been a Rubik's cube discussion like this. edit: or wait, it was at -1, then at +2, then at -1

Also, on the evolution part of it: it is the case that evolution is a crappy hill climber (and mostly makes better bacteria), but you can look at the human lineage and reward something that is increasing along that line, to avoid wasting too much time on bacteria - e.g. by making agents play some sort of games of wit against each other, where bacteria won't get a free pass.

Replies from: wedrifid
comment by wedrifid · 2012-04-07T14:21:02.198Z · LW(p) · GW(p)

I think I'm just going to bail out

Consistent downvotes can be considered a signal sent by the voter consensus that they would prefer that you either bail or change your behavior. Unfortunately, the behavior change in question here amounts to adopting a lower-status role (i.e. more willing to read and understand the words of others, less inclined to insult and dismiss others out of hand, more likely to change your mind about things when they are explained to you). I don't expect or presume that others will willingly adopt a lower-status role - even when doing so would increase their status in the medium term. I must accept that they will do what they wish to do, and continue to downvote and oppose the behaviors that I would see discouraged.

because for that same post about the Rubik's cube I could have gotten a lot of 'thanks man' replies on, e.g., a programming contest forum or the like, if there had been a Rubik's cube discussion like this. edit: or wait, it was at -1, then at +2, then at -1

It is quite possible - in fact my model puts it as highly likely - that your current style of social interaction would result in far greater social success at other locations. Lesswrong communication norms are rather unique in certain regards.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T14:44:54.340Z · LW(p) · GW(p)

You guys are very willing to insult me personally, but I am trying not to get personal (albeit it is rather difficult at times). That doesn't mean I don't say things that members of the community might take personally; still, in the last couple of days I've noticed that borderline personal insults here are tolerated far more than I'd consider normal, while any stabs at the community (or its shared values) are not, and disagreements tend to be taken more personally than is normal in technical discourse.

comment by Dmytry · 2012-04-07T05:12:35.995Z · LW(p) · GW(p)

I don't understand why this post and some of Dmytry's comments are downvoted so hard. The idea might be far-fetched, but certainly not crazy, self-contradictory or obviously false.

Because of the meme complex generated by a bright philosopher guy (Eliezer) doing a depth-first search for solutions to an engineering problem (FAI) and blogging whatever rationalizations he had for discarding anything relevant to alternative approaches.

comment by Dmytry · 2012-04-06T19:44:11.903Z · LW(p) · GW(p)

So many posts, so many disagreements in the comments about what I supposedly imply about AI, only one disagreement about what people value (which is the point), and that disagreement basically relies on the notion that counting cars and calculating a GCD is more complex than (or even comparably complex to) image recognition. I am disappointed. You people really are very mind-killed by even vague relevance to AI.

Replies from: drethelin, wedrifid, Manfred, ciphergoth
comment by drethelin · 2012-04-06T22:51:55.355Z · LW(p) · GW(p)

Your point is vague and hard to understand, and now you're insulting the people who were discussing it. Not a great tactic either for making people agree with you or for learning things yourself.

Replies from: Dmytry
comment by Dmytry · 2012-04-06T22:54:04.360Z · LW(p) · GW(p)

If you hill-climb on the self-esteem value, you get stuck in local maxima.

comment by wedrifid · 2012-04-07T07:14:32.624Z · LW(p) · GW(p)

You people really are very mind-killed by even vague relevance to AI.

So many posts where you come up with conspiracy theories as to why you were downvoted, so little personal responsibility.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T07:19:47.725Z · LW(p) · GW(p)

Where's the notion of anyone conspiring with anyone? Clearly there isn't a conspiracy if what I post averages net more likes than dislikes. The best predictor I can make so far for whether my post will be voted +10 or -10 is how well it aligns with the views held here (and some first-vote noise, of course), not how good its quality is. Some of the vaguest and lowest-quality crap I post is the most highly voted, with double-digit positive scores. It's not as if I were netting negative.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-07T08:30:11.811Z · LW(p) · GW(p)

I hope you don't end up concluding that it's impossible for contrary ideas to be taken seriously around here. Just in case, I've collected some of my highly upvoted posts arguing against or questioning Eliezer's ideas:

Replies from: Dmytry
comment by Dmytry · 2012-04-07T08:44:56.844Z · LW(p) · GW(p)

Well, one has to be ultra-careful to keep the number of contrary ideas very low within a post, and one has to already have a giant body of posts aligning with the prevailing opinions (and it is boring to just generate texts that are in agreement). I may post on this exact topic with more refined wording. edit: Also, you may have far more skill at converting people to contrary ideas than I do. I lose patience.

Anyway, an idea for you: there is a huge range of behaviours that we humans may deem moral enough. Within this huge range, there could well be something that is conceptually simple. It is necessary that the moral values be easily calculated by humans, as humans do not like to live in constant anticipation of unpredictable intervention - especially when the intervention may be based on other people's volitions. There could well be a simple (but not too simple) agreeable morality system.

edit: also, look at lawmaking. All successful legal systems are based on a few principles, on which the constitution is based, on which the law is based. The law needs to be predictable by the citizens - easily and quickly, at a knee-jerk-reflex level.

Replies from: Wei_Dai, Emile
comment by Wei Dai (Wei_Dai) · 2012-04-07T23:36:02.511Z · LW(p) · GW(p)

Well, one has to be ultra-careful to keep the number of contrary ideas very low within a post

Yes, this seems likely.

one has to already have a giant body of posts aligning with the prevailing opinions (and it is boring to just generate texts that are in agreement).

I also find it boring to generate texts that are in agreement, and hence rarely do so. I don't think that's the main issue.

edit: Also, you may have far more skill at converting people to contrary ideas than I do. I lose patience.

I don't think "skill at converting people" and "patience" are the right way to think about it either. I think what helps are:

  • Establish a track record of being a careful thinker, who usually spends a lot of time looking for holes in their own ideas and arguments before posting them. And not in a cursory way or out of a sense of obligation, but because you know deep down that most new ideas, including your own, and even new arguments pointing out that other ideas are wrong, are wrong. Look for steps in your argument that are weak. Intuitions that other people may not share. Equally plausible arguments with contradictory conclusions. Analogous arguments that lead to obviously wrong conclusions. Alternative hypotheses that can explain your observations.
  • Write clearly. Isolate one particular idea or line of argument at a time and try to explain it as clearly as possible before introducing another one.
  • Know the existing work on your subject and explain how it relates (or why it isn't relevant) to your ideas, or why it is wrong or incomplete. Most people, when they're handed a problem that has stumped others for years, or is the subject of some long-running debate, seem to still assume that they can solve it with a few days of thought, without researching the existing ideas and arguments, and quickly convince everyone else of their correctness. If you're not such a person, then signal it credibly!
  • Forget about fairness (in case you're wondering why Eliezer and his supporters get held to a different standard). Without Eliezer there would be no LessWrong, and the next-best discussion forum for these topics would probably be significantly worse. So be happy with what we've got and maybe work to improve it on the margins. There's no point in thinking "my posts ought to receive the same treatment as those of FAI boosters, therefore I refuse to do more".
Replies from: Dmytry
comment by Dmytry · 2012-04-08T06:02:16.369Z · LW(p) · GW(p)

Establish a track record of being a careful thinker, who usually spends a lot of time looking for holes in their own ideas and arguments before posting them. And not in a cursory way or out of a sense of obligation, but because you know deep down that most new ideas, including your own, and even new arguments pointing out that other ideas are wrong, are wrong. Look for steps in your argument that are weak. Intuitions that other people may not share. Equally plausible arguments with contradictory conclusions. Analogous arguments that lead to obviously wrong conclusions. Alternative hypotheses that can explain your observations.

TBH, with this community I feel I'm dealing with people who have, in general, a very deeply flawed approach to thought, one which in a subtle way breaks problem solving, and especially cooperative problem solving.

The topic here is fuzzy, and I do say that it is rather unfinished; that implies that I think it may not be true, doesn't it? It is also a discussion post. At the same time, what I do not say is 'let's go ahead and implement an AI based on this', or anything similar. It is immediately presumed that I posted this with utter and complete certainty, even though this cannot be inferred from anything. The disagreement I get is also one of utter, crackpot-grade certainty that there's no way it is in any way related to human moral decision-making. Yes, I do not have a proof, or a particularly convincing argument, that it is related; that is absolutely true. At the same time, the point is to look and see how it may enhance understanding.

For example, it is plausible that we humans do use the size of our internal representation of a concept as a proxy for something, because it generally correlates with, e.g., closer people. Assuming any kind of compression, the size of an internal representation is a form of complexity measure.

Forget about fairness

I'll just go to a less pathological place. The issue is not fairness; here it is not enough (nor necessary) to have domain-specific knowledge (such as, e.g., knowing that the size of a compressed representation is a form of complexity). What is necessary is very extensive knowledge of a large body of half-baked (or entirely unbaked yet verbose), vague stuff, lest you contradict any of it while trying to do any form of search for any kind of solution. What you're doing here is pathologically counterproductive to any form of problem solving that involves several individuals (and likely counterproductive to problem solving by individuals as well). You (lesswrong) are still apes with pretensions, and your 'you have not proved it' still leaks into 'your belief is wrong' just as much as for anyone else, because that's how brains work: nearby concepts collapse, and just because you know they do doesn't magically make it not so; the purpose of knowing the fallibility of the human brain is not the (frankly, very naive) assumption that now that you know, you are magically not fallible. This is like those toy decision agents that second-guess themselves into a faulty answer.

Replies from: Wei_Dai, wedrifid, David_Gerard
comment by Wei Dai (Wei_Dai) · 2012-04-08T07:30:46.148Z · LW(p) · GW(p)

Yes, I do not have a proof, or a particularly convincing argument, that it is related; that is absolutely true. At the same time, the point is to look and see how it may enhance understanding.

The thing is, the idea that our values may have something to do with complexity isn't a new one. See this thread for example. It's the kind of idea that occurs to a lot of smart people, but doesn't seem to lead anywhere interesting (e.g., some formal definition of complexity that actually explains our apparent values, or good arguments for why such a definition must exist). What you see as unreasonable certainty may just reflect the fact that you're not offering anything new (or if you are, it's not clearly expressed) and others have already thought it over and decided that "complexity based moral values" is a dead end. If you don't want to take their word for it and find their explanations unsatisfactory, you'll just have to push ahead yourself and come back when you have stronger and/or clearer arguments (or decide that they're right after all).

I'll just go to a less pathological place.

Where?

comment by wedrifid · 2012-04-08T11:41:06.242Z · LW(p) · GW(p)

TBH, with this community I feel I'm dealing with people who have, in general, a very deeply flawed approach to thought, one which in a subtle way breaks problem solving, and especially cooperative problem solving.

And this community gets the impression that they are dealing with what amounts to a straw-man generator. Let's agree to disagree.

I'll just go to a less pathological place.

Please do. As you have said, you can expect to achieve more social success with your preferred behaviors if you execute them in different social hierarchies. And success here would require drastically changing how you behave in response to social incentives and local standards - something that you are not willing to do. So if you go elsewhere, everybody wins. You can continue to believe you are superior to us and that all disagreement with you is the result of us being brainwashed or inferior or whatever, and we can go about having more enjoyable conversations.

Really, you don't need to write a whole series of comments to 'break up with us'. You can just click the logout button and type a new address into the address bar. Parting declarations of superiority don't really achieve much.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-08T14:05:02.279Z · LW(p) · GW(p)

I thought Dmytry sometimes has interesting ideas, and it'd be worth trying to convince him to stick around but be more careful and less adversarial. As orthonormal said, LW needs better contrarians, and Dmytry seems like one of the more promising candidates. Why tell him to go away? Do you think my effort was doomed or counterproductive?

Replies from: wedrifid, gwern
comment by wedrifid · 2012-04-08T16:05:11.890Z · LW(p) · GW(p)

I thought Dmytry sometimes has interesting ideas, and it'd be worth trying to convince him to stick around but be more careful and less adversarial. As orthonormal said, LW needs better contrarians, and Dmytry seems like one of the more promising candidates.

There is some potential there - Dmytry has what seems to be a decent IQ and some technical knowledge in there somewhere. But the indications suggest that he has more potential to be destructive than useful. I would expect him to end up as a XiXiDu, only far more powerful (more intelligent and rhetorically proficient) and far more hostile (XiXiDu's attitude hovers just on the border; Dmytry, given time, would be more consistently hostile).

Why tell him to go away?

His idea; I merely agree that it would benefit both him and us. For what it is worth, I don't think my agreement is likely to encourage him to leave. If anything, he would be inclined to do the opposite of whatever my preference is.

In terms of my own personal interests - I incur a cost when there are people like Dmytry around. My nature (and considered, self-endorsed nature at that) is such that when I see people try to intellectually bully others with disingenuous non-sequiturs and straw men I am naturally inclined to interfere. Dmytry is far from the worst I've seen in this regard but he's not too far down the list.

If the guy wants to leave and has concluded we are too toxic for him, then I'm not going to argue with that. It seems better for everyone. Arrogant nerds are a dime a dozen - we have plenty around here, so we don't need another. And communities where one can show off technical competence and rhetorical flair are a dime a dozen too, so Dmytry doesn't need us. I'd recommend he try MENSA. He would fit in well (based on what I recall of my time there and what I have seen of Dmytry).

Do you think my effort was doomed or counterproductive?

Doomed.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-08T21:47:03.162Z · LW(p) · GW(p)

If anything he would be inclined to do the opposite of what my preference is.

Why did you do it then?

Doomed.

Sigh... I should probably just let it go, given that it was a long shot anyway, but it's kind of frustrating to have put in the effort, and not even get a clean negative result back as evidence.

Replies from: wedrifid
comment by wedrifid · 2012-04-09T07:36:10.847Z · LW(p) · GW(p)

Sigh... I should probably just let it go, given that it was a long shot anyway

Perhaps you could let this one go but tell us how to catch the next one?

comment by gwern · 2012-04-08T14:16:23.049Z · LW(p) · GW(p)

I can't say I noticed anything worthwhile. What has Dmytry said that you regard as promising?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-08T14:48:18.957Z · LW(p) · GW(p)

Well, he has written 9 discussion posts with >10 karma in the last 4 months or so. Do you not like any of them? Or think of it this way: if he is the kind of person we want to drive away instead of helping to fit better into our community, then where are we going to find those "better contrarians"?

Replies from: gwern, wedrifid
comment by gwern · 2012-04-08T15:00:58.359Z · LW(p) · GW(p)

Looking through his posts, most are downvoted, and the bulk of his karma seems to be coming from a conjunction fallacy post which says nothing new that wasn't covered in previous posts by, say, Eliezer (or myself, in prediction-related posts), and another content-less post composed pretty much just of discussion (of a very low level). Brain shrinkage was a good topic, but unlike my essay on similar topics (covering brain shrinkage as a special case), Dmytry completely fails to bring the references. And so on.

So again, what do you regard as promising?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-08T15:45:17.446Z · LW(p) · GW(p)

I don't want to mention specific posts, since that would probably get me involved in a debate over the exact merits of those posts, but it seems like you missed the two posts with the highest upvotes. And yes, most of his posts are downvoted, but my guess is that it's easier to teach someone to avoid posting bad ideas than to teach them to come up with even semi-good ones.

Anyway, I don't want to argue too much over this. If, all things considered, you (or wedrifid) don't think there's much chance that Dmytry could become someone that would make LW better instead of worse, that's fine with me. I just wanted to make sure it was a considered decision on wedrifid's part to push Dmytry to leave, and not just an emotional reaction.

Replies from: wedrifid
comment by wedrifid · 2012-04-09T07:34:58.762Z · LW(p) · GW(p)

I just wanted to make sure it was a considered decision on wedrifid's part to push Dmytry to leave, and not just an emotional reaction.

Considered and strategic, but not committed to, and considered without awareness of your degree of personal interest. In such a circumstance, if I knew there was someone with a particular interest in working with a (shall we call them a 'candidate'?), I would stand back and refrain from replying to or interacting with the candidate, except in those circumstances where they are directly hampering the contributions of others.

When it comes to handling such situations better in the future, it occurs to me that the material you have already written in your various comments here would make a decent post ("How to be a productive contrarian?"). If that were available as a post, then when the next guy came along and started saying "You guys disagree with me, therefore you are all a bunch of brainwashed, group-thinking fools", we could fog and say "It's true, there is plenty of groupthink on LessWrong. Wei_Dai wrote this post on how he manages it." That would be just as true as the response "You're actually getting downvoted because you're wrong and acting like a dick. STFU." but far more useful!

In fact, your advice (including what to do instead of worrying about 'fairness') generalizes well to dealing with new challenging social situations of all kinds.

comment by wedrifid · 2012-04-08T15:50:34.583Z · LW(p) · GW(p)

Or think of it this way: if he is the kind of person we want to drive away instead of help better fit into our community, then where are we going to find those "better contrarians"?

In my experience you don't find 'better contrarians' among people who are naturally contrary and have a chip on their shoulder. A good contrarian mostly agrees with stuff (unless the community they are in really is defective) - but thinks things through and then carefully presents their contrary positions as though they are making a natural contribution.

Don't seek the contrariness. Seek good thinking and willingness to contribute. You get the contrarian positions for free when the generally good thinking gets results. For example you get lukeprog.

comment by Emile · 2012-04-07T11:07:44.981Z · LW(p) · GW(p)

Also you may have way more skill at converting people to contrary ideas than I do. I lose patience.

You may also be lacking in the skill of telling when your contrary ideas are actually wrong. I don't doubt there are correct ideas that go against what many LessWrongers think, but there are many more wrong ideas that do. It may be that Wei Dai brings the first kind, and you bring the second kind. Or it may be that Wei Dai is just a better writer than you. I'd say it's a mix of both.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T11:10:52.588Z · LW(p) · GW(p)

The disagreement is mostly in the areas where LW does speculate massively.

comment by Manfred · 2012-04-06T22:56:38.792Z · LW(p) · GW(p)

So, I downvoted because of the idea about what people value, but did not leave a comment. It's not uncommon for people to think that we should be able to describe human values simply in terms of complexity. So you're wrong, but at least you're in fairly good company. A series of relevant LW posts (only some of which actually apply, but you might as well read them all :P) starts here, though there are plenty of others.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T05:08:49.931Z · LW(p) · GW(p)

Utilitarian calculus is the idea that what people value is described simply in terms of summation, ha. The complexity measure is another kind of f(a,b,c,d) that behaves vaguely like a 'sum', but is not as hopelessly simple (and stupid) as summation. If a, b, c, d are strings, and this were a programming language, the above expression would often be written as f(a+b+c+d), while it is something fundamentally different from summation of real-valued numbers.

Go downvote everything on utility summation, please, because it is much simpler than what I propose. It seems to me that we also vaguely describe our complexity-like metric of A and B as the 'sum' of A and B.
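
(A minimal illustrative sketch, not part of the original comment, using zlib-compressed length as a crude, computable stand-in for string complexity. It only shows the behaviour described above: the metric acts roughly like a sum when the parts share no information, and diverges sharply when they are near-duplicates.)

```python
# Sketch only: zlib-compressed length as a rough proxy for the complexity
# of a string (not Kolmogorov complexity, which is uncomputable).
import os
import zlib

def complexity(s: bytes) -> int:
    # Compressed size in bytes, used as an upper-bound-style proxy.
    return len(zlib.compress(s, 9))

a = os.urandom(10_000)   # two parts sharing no information
b = os.urandom(10_000)

print(complexity(a) + complexity(b))  # summation of the parts
print(complexity(a + b))              # concatenation: roughly the same, sum-like

dup = a + a                           # a near-exact "duplicate" case
print(2 * complexity(a))              # what naive summation would assign
print(complexity(dup))                # far less than twice - the copy adds little
```

With a and b independent, the first two numbers roughly coincide; with the duplicate, the concatenation-based metric stops behaving like a sum, which is exactly the divergence being argued for.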

Replies from: Manfred
comment by Manfred · 2012-04-07T05:21:17.095Z · LW(p) · GW(p)

The trouble is not the simplicity (appropriately enough). The trouble is that complexity is not, not even a little bit, a general basis for what humans value.

Replies from: Dmytry
comment by Dmytry · 2012-04-07T05:31:15.010Z · LW(p) · GW(p)

The trouble is that complexity is not, not even a little bit, a general basis for what humans value.

Let's just go around making assertions and plus-ing the assertions that agree with EY's assertions.

How is the complexity of concatenated strings an inherently worse model than a sum of real numbers? Where does it fail? It seems to me it describes what I think is right [better than summation does] - e.g. I don't think that if we make a duplicate of you and synchronize the duplicated brains every 20 seconds, we should give you both twice the candy, or, if there are 10 such duplicates, cut someone up for transplants into them.

The EY post that you linked should be applied to every other notion of morality, including utilitarian summing, which is strictly a dumber approach than concatenation followed by a complexity metric. edit: and that's because there is a complexity metric that just looks at the length of the string - the dumbest one - in which case it is identical to summation.
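
(A worked restatement of that edit, in my notation rather than the comment's: write $\|$ for concatenation, $|s|$ for string length, and $K$ for Kolmogorov complexity. The length-only metric reproduces plain summation exactly, while an information-sensitive metric is strongly sub-additive on duplicates - assuming $a$ is a typical string of non-trivial complexity.)

$$C_{\mathrm{len}}(a \,\|\, b) = |a| + |b| = C_{\mathrm{len}}(a) + C_{\mathrm{len}}(b), \qquad K(a \,\|\, a) \le K(a) + O(1) \ll 2\,K(a).$$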

edit: also, I'm becoming convinced that EY would make an utterly terrible engineer or scientist. Engineering and science work by - wishful thinking, of course - assuming that there is a simple process X that coincides with the complex process Y well enough, because all you can infer from the available data - given the noise and the limited number of observations - is a simple process X, and the alternative is to just sit doing nothing, twiddling thumbs, because if there is no simple approximation you won't be able to figure out the complex one, or build anything. Engineering requires ordering the search for solutions by probability of success times inverse difficulty, and this favours simple hypotheses. It may seem like the best guess that there is no unifying principle when you are just guessing; when you are trying to build something, that is the very worst guess.

comment by Paul Crowley (ciphergoth) · 2012-04-07T07:57:08.945Z · LW(p) · GW(p)

Downvoted for "you people" - this is a mindkilling way to think.