Secrets of the eliminati

post by Scott Alexander (Yvain) · 2011-07-20T10:15:45.086Z · LW · GW · Legacy · 255 comments

Anyone who does not believe mental states are ontologically fundamental - ie anyone who denies the reality of something like a soul - has two choices about where to go next. They can try reducing mental states to smaller components, or they can stop talking about them entirely.

In a utility-maximizing AI, mental states can be reduced to smaller components. The AI will have goals, and those goals, upon closer examination, will be lines in a computer program.

But in the blue-minimizing robot, its "goal" isn't even a line in its program. There's nothing that looks remotely like a goal in its programming, and goals appear only when you make rough generalizations from its behavior in limited cases.

Philosophers are still very much arguing about whether this applies to humans; the two schools call themselves reductionists and eliminativists (with a third school of wishy-washy half-and-half people calling themselves revisionists). Reductionists want to reduce things like goals and preferences to the appropriate neurons in the brain; eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

I took a similar tack when answering ksvanhorn's question in yesterday's post - how can you get a more accurate picture of what your true preferences are? I said:

I don't think there are true preferences. In one situation you have one tendency, in another situation you have another tendency, and "preference" is what it looks like when you try to categorize tendencies. But categorization is a passive and not an active process: if every day of the week I eat dinner at 6, I can generalize to say "I prefer to eat dinner at 6", but it would be non-explanatory to say that a preference toward dinner at 6 caused my behavior on each day. I think the best way to salvage preferences is to consider them as tendencies currently in reflective equilibrium.


A more practical example: when people discuss cryonics or anti-aging, the following argument usually comes up in one form or another: if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper. And therefore your reluctance to sign up for cryonics violates your own revealed preferences! You must just be trying to signal conformity or something.

The problem is that not signing up for cryonics is also a "revealed preference". "You wouldn't sign up for cryonics, which means you don't really fear death so much, so why bother running from a burning building?" is an equally good argument, although no one except maybe Marcus Aurelius would take it seriously.

Both these arguments assume that somewhere, deep down, there's a utility function with a single term for "death" in it, and all decisions just call upon this particular level of death or anti-death preference.

More explanatory of the way people actually behave is that there's no unified preference for or against death, but rather a set of behaviors. Being in a burning building activates fleeing behavior; contemplating death from old age does not activate cryonics-buying behavior. People guess at their opinions about death by analyzing these behaviors, usually with a bit of signalling thrown in. If they desire consistency - and most people do - maybe they'll change some of their other behaviors to conform to their hypothesized opinion.

One more example. I've previously brought up the case of a rationalist who knows there's no such thing as ghosts, but is still uncomfortable in a haunted house. So does he believe in ghosts or not? If you insist on there being a variable somewhere in his head marked $belief_in_ghosts = (0,1) then it's going to be pretty mysterious when that variable looks like zero when he's talking to the Skeptics Association, and one when he's running away from a creaky staircase at midnight.

But it's not at all mysterious that the thought "I don't believe in ghosts" gets reinforced because it makes him feel intelligent and modern, and staying around a creaky staircase at midnight gets punished because it makes him afraid.

Behaviorism was one of the first and most successful eliminativist theories. I've so far ignored the most modern and exciting eliminativist theory, connectionism, because it involves a lot of math and is very hard to process on an intuitive level. In the next post, I want to try to explain the very basics of connectionism, why it's so exciting, and why it helps justify discussion of behaviorist principles.

255 comments

Comments sorted by top scores.

comment by [deleted] · 2011-07-18T01:14:05.559Z · LW(p) · GW(p)

I wonder:

if you had an agent that obviously did have goals (let's say, a player in a game, whose goal is to win, and who plays the optimal strategy), could you deduce those goals from behavior alone?

Let's say you're studying the game of Connect Four, but you have no idea what constitutes "winning" or "losing." You watch enough games that you can map out a game tree. In state X of the world, a player chooses option A over other possible options, and so on. From that game tree, can you deduce that the goal of the game was to get four pieces in a row?

I don't know the answer to this question. But it seems important. If it's possible to identify, given a set of behaviors, what goal they're aimed at, then we can test behaviors (human, animal, algorithmic) for hidden goals. If it's not possible, that's very important as well; because that means that even in a simple game, where we know by construction that the players are "rational" goal-maximizing agents, we can't detect what their goals are from their behavior.

That would mean that behaviors that "seem" goal-less, programs that have no line of code representing a goal, may in fact be behaving in a way that corresponds to maximizing the likelihood of some event; we just can't deduce what that "goal" is. In other words, it's not as simple as saying "That program doesn't have a line of code representing a goal." Its behavior may encode a goal indirectly. Detecting such goals seems like a problem we would really want to solve.

Replies from: Wei_Dai, sixes_and_sevens, Pavitra, DanielLC, Vladimir_Nesov, lythrum, Will_Newsome
comment by Wei Dai (Wei_Dai) · 2011-07-18T03:24:09.147Z · LW(p) · GW(p)

From that game tree, can you deduce that the goal of the game was to get four pieces in a row?

One method that would work for this example is to iterate over all possible goals in ascending complexity, and check which one would generate that game tree. How to apply this idea to humans is unclear. See here for a previous discussion.
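To make the enumeration idea concrete, here is a minimal sketch at toy scale. The board encoding, the candidate-goal list, and the helper names are illustrative assumptions, not anything from the thread:

```python
# Enumerate candidate goal predicates in ascending complexity and keep the
# first one that reproduces the observed games. Boards are represented as
# dicts mapping (col, row) -> "X"/"O"; candidate goals are hand-written here.

def k_in_a_row(board, player, k, dirs=((1, 0), (0, 1), (1, 1), (1, -1))):
    """True if `player` has k consecutive pieces anywhere on `board`."""
    return any(
        all(board.get((c + i * dc, r + i * dr)) == player for i in range(k))
        for (c, r), p in board.items() if p == player
        for dc, dr in dirs
    )

# Candidate goals, ordered by (rough) description complexity.
CANDIDATE_GOALS = [
    ("two in a row", lambda b, p: k_in_a_row(b, p, 2)),
    ("three in a row", lambda b, p: k_in_a_row(b, p, 3)),
    ("four in a row", lambda b, p: k_in_a_row(b, p, 4)),
]

def infer_goal(games):
    """`games` is a list of (positions, winner) pairs, where `positions` is
    the sequence of boards in one finished game. Return the simplest
    candidate goal achieved exactly at the end of every observed game."""
    for name, achieved in CANDIDATE_GOALS:
        if all(
            achieved(positions[-1], winner)
            and not any(achieved(b, pl) for b in positions[:-1] for pl in ("X", "O"))
            for positions, winner in games
        ):
            return name
    return None  # no candidate explains the observations
```

With enough observed games, the shorter runs get ruled out because they also occur in non-final positions, leaving "four in a row" as the surviving hypothesis.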

Replies from: None
comment by [deleted] · 2011-07-18T03:29:54.627Z · LW(p) · GW(p)

Ok, computationally awful for anything complicated, but possible in principle for simple games. That's good, though; that means goals aren't truly invisible, just inconvenient to deduce.

Replies from: chatquitevoit, printing-spoon
comment by chatquitevoit · 2011-07-18T15:37:10.670Z · LW(p) · GW(p)

I think, actually, that because we hardly ever play with optimal strategy, goals are going to be nigh impossible to deduce. Would such an end-from-means deduction even work if the actor were not using the optimal strategy? Humans only play optimally in games on the level of tic-tac-toe (the more rational ones maybe in somewhat more complex situations, but not by much), and as for machines that could employ optimal strategy, we've just excluded them from even having such 'goals'.

Replies from: Error
comment by Error · 2013-09-05T20:42:07.138Z · LW(p) · GW(p)

If each game is played to the end (no resignations, at least in the sample set) then presumably you could make good initial guesses about the victory condition by looking at common factors in the final positions. A bit like Zendo. It wouldn't solve the problem, but it doesn't rely on optimal play, and would narrow the solution space quite a bit.

E.g. in the Connect Four example, all final moves create a sequence of four or more in a row. Armed with that hypothesis, you look at the game tree, and note that all non-final moves don't. So you know (with reasonably high confidence) that making four in a row ends the game. How to figure out whether it wins the game or loses it is an exercise for the reader.

(mental note, try playing C4 with the win condition reversed and see if it makes for an interesting game.)
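As a rough sketch of this common-factor approach (the feature set and board encoding below are invented for illustration; this is not the commenter's code):

```python
# Zendo-style hypothesis generation: find simple features shared by every
# game-ending position (for the player who just moved) but by none of the
# earlier positions. Boards are dicts mapping (col, row) -> "X"/"O".

def longest_run(board, player):
    """Length of the longest line of `player`'s pieces on the board."""
    best = 0
    for (c, r), p in board.items():
        if p != player:
            continue
        for dc, dr in ((1, 0), (0, 1), (1, 1), (1, -1)):
            n = 0
            while board.get((c + n * dc, r + n * dr)) == player:
                n += 1
            best = max(best, n)
    return best

def features(board, mover):
    run = longest_run(board, mover)
    return {f"mover_has_run_of_{k}": run >= k for k in range(2, 7)}

def candidate_end_conditions(final_positions, earlier_positions):
    """Return feature/value pairs common to all final positions and absent
    from all earlier ones -- candidate hypotheses about the end condition."""
    if not final_positions:
        return set()
    common = None
    for board, mover in final_positions:
        fs = set(features(board, mover).items())
        common = fs if common is None else common & fs
    for board, mover in earlier_positions:
        common -= set(features(board, mover).items())
    return common
```

Given enough games, runs of two and three get filtered out because they also occur mid-game, and runs of five or more aren't present in every final position, so something like ("mover_has_run_of_4", True) is what survives; whether making it wins or loses the game is, as noted, a separate question.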

comment by printing-spoon · 2011-07-18T04:56:57.322Z · LW(p) · GW(p)

There are always heuristics. For example, seeing that the goal of making three in a row fits the game tree well suggests considering goals of the form "make n in a row", or at least "make diagonal and orthogonal versions of some shape".

comment by sixes_and_sevens · 2011-07-18T10:30:00.612Z · LW(p) · GW(p)

Human games (of the explicit recreational kind) tend to have stopping rules isomorphic with the game's victory conditions. We would typically refer to those victory conditions as the objective of the game, and the goal of the participants. Given a complete decision tree for a game, even a messy stochastic one like Canasta, it seems possible to deduce the conditions necessary for the game to end.

An algorithm that doesn't stop (such as the blue-minimising robot) can't have anything analogous to the victory condition of a game. In that sense, its goals can't be analysed in the same way as those of a Connect Four-playing agent.

Replies from: Khaled, kurokikaze
comment by Khaled · 2011-07-18T11:51:49.427Z · LW(p) · GW(p)

So if the blue-minimising robot were to stop after 3 months (the stop condition being measured by a timer), can we say that the robot's goal is to stay "alive" for 3 months? I cannot see a necessary link between deducing goals and stopping conditions.

A "victory condition" is another thing, but can you deduce from a decision tree who loses (for Connect Four, perhaps it is the one who reaches four in a row first that loses)?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2011-07-18T13:05:31.759Z · LW(p) · GW(p)

By "victory condition", I mean a condition which, when met, determines the winning, losing and drawing status of all players in the game. A stopping rule is necessary for a victory condition (it's the point at which it is finally appraised), but it doesn't create a victory condition, any more than imposing a fixed stopping time on any activity creates winners and losers in that activity.

Replies from: Khaled
comment by Khaled · 2011-07-19T10:02:33.500Z · LW(p) · GW(p)

Can we know the victory condition from just watching the game?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2011-07-22T11:23:22.781Z · LW(p) · GW(p)

Just to underscore a broader point: recreational games have various characteristics which don't generalise to all situations modelled game-theoretically. Most importantly, they're designed to be fun for humans to play, to have consistent and explicit rules, to finish in a finite amount of time (RISK notwithstanding), to follow some sort of narrative and to have means of unambiguously identifying winners.

Anecdotally, if you're familiar with recreational games, it's fairly straightforward to identify victory conditions in games just by watching them being played, because their conventions mean those conditions are drawn from a considerably reduced number of possibilities. There are, however, lots of edge- and corner-cases where this probably isn't possible without taking a large sample of observations.

comment by kurokikaze · 2011-07-21T15:23:14.785Z · LW(p) · GW(p)

Well, even if we know the conditions that end the game, we still don't know whether the player's goal is to end the game (poker) or to avoid ending it for as long as possible (Jenga). We can try to deduce this empirically (if it's possible to end the game effortlessly on the first turn, then the goal is to keep it going), but I'm not sure that applies to all games.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2011-07-22T11:42:00.721Z · LW(p) · GW(p)

If ending the game quickly or slowly is part of the objective, in what way is it not included in the victory conditions?

Replies from: kurokikaze
comment by kurokikaze · 2011-07-25T09:15:12.705Z · LW(p) · GW(p)

I mean that it might not be visible from a game log (for complex games). We will see the combination of pieces when the game ends (the ending condition), but that may not be enough.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2011-07-25T09:33:29.131Z · LW(p) · GW(p)

I don't think we're talking about the same things here.

A decision tree is an optimal path through all possible decisions in a game, not just the history of any given game.

"Victory conditions" in the context I'm using are the conditions that need to be met in order for the game to end, not simply the state of play at the point when any given game ends.

comment by Pavitra · 2011-08-03T02:20:32.999Z · LW(p) · GW(p)

I suspect that "has goals" is ultimately a model, rather than a fact. To the extent that an agent's behavior maximizes a particular function, that agent can be usefully modeled as an optimizer. To the extent that an agent's behavior exhibits signs of poor strategy, such as vulnerability to dutch books, that agent may be better modeled as an algorithm-executer.

This suggests that "agentiness" is strongly tied to whether we are smart enough to win against it.

Replies from: wedrifid
comment by wedrifid · 2011-08-03T09:46:46.876Z · LW(p) · GW(p)

I suspect that "has goals" is ultimately a model, rather than a fact. To the extent that an agent's behavior maximizes a particular function, that agent can be usefully modeled as an optimizer. To the extent that an agent's behavior exhibits signs of poor strategy, such as vulnerability to dutch books, that agent may be better modeled as an algorithm-executer.

This suggests that "agentiness" is strongly tied to whether we are smart enough to win against it.

This principle is related to (a component of) the thing referred to as 'objectified'. That is, if a person is aware that another person can model it as an algorithm-executor then it may consider itself objectified.

comment by DanielLC · 2011-07-18T03:45:55.181Z · LW(p) · GW(p)

What I've heard is that, for an intelligent entity, it's easier to predict what will happen from its goals than from the details of what it does.

For example, with the Connect Four game: if you notice that your opponent always seems to get four in a row, and you never do when you play against them, then you know their goal before you can figure out their strategy.

Replies from: orthonormal
comment by orthonormal · 2011-07-18T05:38:31.681Z · LW(p) · GW(p)

Although you might have just identified an instrumental subgoal.

comment by Vladimir_Nesov · 2011-07-23T23:21:28.146Z · LW(p) · GW(p)

Compare with only ever seeing one move made in such a game, but being able to inspect in detail the reasons that played a role in deciding what move to make, looking for explanations for that move. It seems that even one move might suffice, which goes to show that it's unnecessary for behavior itself to somehow encode the agent's goals, as we can also take into account the reasons for the behavior being so and so.

comment by lythrum · 2011-07-18T23:40:07.623Z · LW(p) · GW(p)

If you had lots of end states and lots of non-end states, and you were willing to assume that the game ends when someone has won and that a player only moves into an end state if he has won (neither of which is necessarily true, even in nice, tidy games), then you could treat it as a classification problem and throw your favourite classifier-learning algorithm at it. I can't think of any publications on machine-learning a winning condition, but that doesn't mean they're not out there.
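As a purely hypothetical illustration of the classification framing, one could featurize positions and fit any off-the-shelf classifier on end vs. non-end states; scikit-learn's decision tree is used below only because its output is human-readable:

```python
# Learn "which positions end the game" as binary classification.
# The board encoding and the choice of DecisionTreeClassifier are
# illustrative assumptions, not anything the commenter actually built.

from sklearn.tree import DecisionTreeClassifier, export_text

def encode(board, cols=7, rows=6):
    """Flatten a Connect Four board ({(col, row): 'X'/'O'}) into a vector."""
    value = {None: 0, "X": 1, "O": -1}
    return [value[board.get((c, r))] for c in range(cols) for r in range(rows)]

def learn_end_condition(end_states, non_end_states):
    X = [encode(b) for b in end_states] + [encode(b) for b in non_end_states]
    y = [1] * len(end_states) + [0] * len(non_end_states)
    clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
    print(export_text(clf))  # the learned tree is a crude, readable hypothesis
    return clf
```

Raw cell values make a poor feature set for "four in a row", of course; in practice you would feed the classifier higher-level features (run lengths, line counts), which is in the spirit of the spatial patterns mentioned below.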

Dr. David Silver used temporal difference learning to learn some important spatial patterns for Go play, using self-play. Self-play is basically like watching yourself play lots of games against another copy of yourself, so I can imagine similar ideas being applied to watching someone else play. If you're interested in that, I suggest http://www.aaai.org/Papers/IJCAI/2007/IJCAI07-170.pdf

On a sadly less published (and therefore mostly unreliable) but slightly more related note, we did have a project once in which we were trying to teach bots to play a Mortal Kombat style game only by observing logs of human play. We didn't tell one of the bots the goal, we just told it when someone had won, and who had won. It seemed to get along ok.

comment by Will_Newsome · 2011-07-23T22:32:10.476Z · LW(p) · GW(p)

One of my 30 or so Friendliness-themed thought experiments is called "Implicit goals of ArgMax" or something like that. In general I think this style of reasoning is very important for accurately thinking about universal AI drives. Specifically it is important to analyze highly precise AI architectures like Goedel machines where there's little wiggle room for a deus ex machina.

comment by JGWeissman · 2011-07-18T02:58:46.230Z · LW(p) · GW(p)

Reductionists want to reduce things like goals and preferences to the appropriate neurons in the brain; eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

Surely you mean that eliminativists take actions which, in their typical contexts, tend to result in proving that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-07-18T23:46:51.482Z · LW(p) · GW(p)

Surely you mean that there are just a bunch of atoms which, when interpreted as a human category, can be grouped together to form a being classifiable as "an eliminativist".

comment by kybernetikos · 2011-07-21T05:10:39.768Z · LW(p) · GW(p)

eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

Just because something only exists at high levels of abstraction doesn't mean it's not real or explanatory. Surely the important question is whether humans genuinely have preferences that explain their behaviour (or at least whether a preference system can occasionally explain their behaviour - even if their behaviour is truly explained by the interaction of numerous systems) rather than how these preferences are encoded.

The information in a jpeg file that indicates a particular pixel should be red cannot be analysed down to a single bit that doesn't do anything else, but that doesn't mean there isn't a sense in which the red pixel genuinely exists. Preferences could exist and be encoded holographically in the brain. Whether you can find a specific neuron or not is completely irrelevant to their reality.

Replies from: Logos01
comment by Logos01 · 2011-07-21T19:27:19.343Z · LW(p) · GW(p)

Just because something only exists at high levels of abstraction doesn't mean it's not real or explanatory.

I have often stated that, as a physicalist, the mere fact that something does not independently exist -- that is, it has no physically discrete existence -- does not mean it isn't real. The number three is real -- but does not exist. It cannot be touched, sensed, or measured; yet if there are three rocks there really are three rocks. I define "real" as "a pattern that proscriptively constrains that which exists". A human mind is real; but there is no single part of your physical body you can point to and say, "this is your mind". You are the pattern that your physical components conform to.

It seems very often that objections to reductionism are founded in a problem of scale: the inability to recognize that things which are real from one perspective remain real at that perspective even if we consider a different scale.

It would seem, to me, that "eliminativism" is essentially a redux of this quandary but in terms of patterns of thought rather than discrete material. It's still a case of missing the forest for the trees.

Replies from: kybernetikos
comment by kybernetikos · 2011-07-22T09:14:51.937Z · LW(p) · GW(p)

I agree. In particular I often find these discussions very frustrating because people arguing for elimination seem to think they are arguing about the 'reality' of things when in fact they're arguing about the scale of things. (And sometimes about the specificity of the underlying structures that the higher level systems are implemented on). I don't think anyone ever expected to be able to locate anything important in a single neuron or atom. Nearly everything interesting in the universe is found in the interactions of the parts not the parts themselves. (Also - why would we expect any biological system to do one thing and one thing only?).

I regard almost all these questions as very similar to the demarcation problem. A higher level abstraction is real if it provides predictions that often turn out to be true. It's acceptable for it to be an incomplete / imperfect model, although generally speaking if there is another that provides better predictions we should adopt it instead.

This is what would convince me that preferences were not real: At the moment I model other people by imagining that they have preferences. Most of the time this works. The eliminativist needs to provide me with an alternate model that reliably provides better predictions. Arguments about theory will not sway me. Show me the model.

comment by Torben · 2011-07-18T08:04:08.724Z · LW(p) · GW(p)

Interesting post throughout, but don't you overplay your hand a bit here?

There's nothing that looks remotely like a goal in its programming, [...]

An IF-THEN piece of code comparing a measured RGB value to a threshold value for firing the laser would look at least remotely like a goal to my mind.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-07-21T03:40:03.859Z · LW(p) · GW(p)

Consider a robot where the B signal is amplified and transmitted directly to the laser (so brighter blue equals stronger laser firing). This eliminates the conditional logic while still keeping approximately the same apparent goal.
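A toy sketch of the two designs being contrasted (the threshold, gain, and function names are invented for illustration):

```python
BLUE_THRESHOLD = 0.7

def laser_power_conditional(rgb):
    """Torben's IF-THEN version: an explicit threshold test on the blue channel."""
    r, g, b = rgb
    if b > BLUE_THRESHOLD:
        return 1.0  # fire at full power
    return 0.0      # don't fire

def laser_power_amplified(rgb, gain=1.5):
    """ShardPhoenix's version: the blue signal is simply amplified and fed to
    the laser, with no conditional anywhere in the program."""
    r, g, b = rgb
    return min(1.0, gain * b)  # brighter blue -> stronger firing
```

Both robots look blue-minimizing from the outside, but only the first contains anything even "remotely like a goal" in its code, which is the point of the contrast.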

comment by Kaj_Sotala · 2011-07-18T08:31:44.726Z · LW(p) · GW(p)

More explanatory of the way people actually behave is that there's no unified preference for or against death, but rather a set of behaviors. Being in a burning building activates fleeing behavior; contemplating death from old age does not activate cryonics-buying behavior.

YES. This so much.

Replies from: juped
comment by juped · 2011-07-18T23:44:47.590Z · LW(p) · GW(p)

Contemplating death from old age does activate fleeing behavior, though (at least in me), which is another of those silly bugs in the human brain. If I found a way to fix it to activate cryonics-buying behavior instead, I would probably have found a way to afford life insurance by now.

Replies from: JGWeissman, DSimon
comment by JGWeissman · 2011-07-21T19:05:50.284Z · LW(p) · GW(p)

Three suggestions:

  1. When you notice that your fleeing behavior has been activated, ask "Am I fleeing a problem I can solve?", and if the answer is yes, think "This is silly, I should turn and face this solvable problem".

  2. Focus more on the reward of living forever than the punishment of death from old age.

  3. Contact Rudi Hoffman today.

comment by DSimon · 2011-07-21T17:39:14.261Z · LW(p) · GW(p)

If you can predict what a smarter you would think, why not just think that thought now?

Replies from: gwern, MixedNuts
comment by gwern · 2011-07-21T18:55:51.233Z · LW(p) · GW(p)

There are also problems with incompleteness; if I can think everything a smarter me would think, then in what sense am I not that smarter me? If I cannot think everything, so there is a real difference between the smarter me and the current me, then that incompleteness may scuttle any attempt to exploit my stolen intelligence.

For example, in many strategy games, experts can play 'risky' moves because they have the skill/intelligence to follow through and derive advantage from the move, but a lesser player, even if they know 'an expert would play here' would not know how to handle the opponent's reactions and would lose terribly. (I commented on Go in this vein.) Such a lesser player might be harmed by limited knowledge.

comment by MixedNuts · 2011-07-21T18:21:23.031Z · LW(p) · GW(p)

Not applicable here. If you can predict what a stronger you would lift, why not lift it right now? Because it's not about having correct beliefs about what you want the meat robot to do, it's about making it do it. It involves different thoughts, about planning rather than goals, which aren't predicted; and resources, which also take planning to obtain.

Replies from: DSimon
comment by DSimon · 2011-07-21T19:29:25.220Z · LW(p) · GW(p)

Good points.

I wrote my comment with the purpose in mind of providing some short-term motivation to juped, since it seems that that's currently the main barrier between them and one of their stated long-term goals. That might or might not have been accomplished, but regardless you're certainly right that my statement wasn't, um, actually true. :-)

comment by Khaled · 2011-07-18T09:04:53.797Z · LW(p) · GW(p)

But if whenever I eat dinner at 6 I sleep better than when I eat dinner at 8, can I not say that I prefer dinner at 6 over dinner at 8? Which would be one step beyond saying I prefer sleeping well to not.

I think we could get a better view if we consider many preferences in action. Taking your cryonics example, maybe I prefer to live (to a certain degree), prefer to conform, and prefer to procrastinate. In the burning-building situation, the living preference is acting more or less alone, while in the cryonics situation, the preferences interact somewhat like opposing forces, and motion happens in the winning direction. Maybe this is what makes preferences seem to vary?

Replies from: MaoShan
comment by MaoShan · 2011-07-27T05:27:38.257Z · LW(p) · GW(p)

Or is it that preferences are what you get when you consider future situations, in effect removing the influence of your instincts? If I consistently applied the rationale to both situations (cryonics, burning building), and came up with the conclusion that I would prefer not to flee the burning building, that might make me a "true rationalist", but only until the point that the building was on fire. No matter what my "preferences" are, they will (rightly so) be over-ridden by my survival instincts. So, is there any practical purpose to deciding what my preferences are? I'd much rather have my instincts extrapolated and provided for.

Replies from: None
comment by [deleted] · 2011-07-27T06:11:08.890Z · LW(p) · GW(p)

Depends on the extent to which you consider your instincts a part of you. Equally, if you cannot afford cryonics, you could argue that your preferences to sign up or not are irrelevant. No matter what your "preferences" are, they will be overridden by your budget.

comment by Eugine_Nier · 2011-07-17T23:15:48.748Z · LW(p) · GW(p)

Eliminativism is all well and good if all one wants to do is predict. However, it doesn't help answer questions like "What should I do?", or "What utility function should we give the FAI?"

Replies from: Yvain, chatquitevoit
comment by Scott Alexander (Yvain) · 2011-07-18T00:12:11.750Z · LW(p) · GW(p)

The same might be said of evolutionary psychology. In which case I would respond that evolutionary psychology helped us stop thinking in a certain stupid way.

Once, we thought that men were attracted to pretty women because there was some inherent property called "beauty", or that people helped their neighbors because there was a universal Moral Law to which all minds would have access. Once it was the height of sophistication to argue whether people were truly good but corrupted by civilization, or truly evil but restrained by civilization.

Evolutionary psychology doesn't answer "What utility function should we give the FAI?", but it gives good reasons to avoid the "solution": 'just tell it to look for the Universal Moral Law accessible to all minds, and then do that.' And I think a lot of philosophy progresses by closing off all possible blind alleys until people grudgingly settle on the truth because they have no other alternative.

I am less confident in my understanding of eliminativism than of evo psych, so I am less willing to speculate on it. But since one common FAI proposal is "find out human preferences, and then do those", if it turns out human preferences don't really exist in a coherent way, that sounds like an important thing to know.

I think many people have alluded to this problem before, and that the people seriously involved in the research don't actually expect it to be that easy, but a clear specification of all the different ways in which it is not quite that easy is still useful. The same is true for "what should I do?"

Replies from: Vaniver
comment by Vaniver · 2011-07-20T00:22:54.781Z · LW(p) · GW(p)

But since one common FAI proposal is "find out human preferences, and then do those", if it turns out human preferences don't really exist in a coherent way, that sounds like an important thing to know.

I would think that knowing evo psych is enough to realize this is a dodgy approach at best.

Replies from: TimFreeman
comment by TimFreeman · 2011-08-12T20:53:02.109Z · LW(p) · GW(p)

I would think that knowing evo psych is enough to realize [having an FAI find out human preferences, and then do them] is a dodgy approach at best.

I don't see the connection, but I do care about the issue. Can you attempt to state an argument for that?

Human preferences are an imperfect abstraction. People talk about them all the time and reason usefully about them, so either an AI could do the same, or you found a counterexample to the Church-Turing thesis. "Human preferences" is a useful concept no matter where those preferences come from, so evo psych doesn't matter.

Similarly, my left hand is an imperfect abstraction. Blood flows in, blood flows out, flakes of skin fall off, it gets randomly contaminated from the environment, and the boundaries aren't exactly defined, but nevertheless it generally does make sense to think in terms of my left hand.

If you're going to argue that FAI defined in terms of inferring human preferences can't work, I hope that isn't also going to be an argument that an AI can't possibly use the concept of my left hand, since the latter conclusion would be absurd.

Replies from: Vaniver
comment by Vaniver · 2011-08-14T21:43:22.140Z · LW(p) · GW(p)

Can you attempt to state an argument for that?

Sure. I think I should clarify first that I meant evo psych should have been sufficient to realize that human preferences are not rigorously coherent. If I tell a FAI to make me do what I want to do, its response is going to be "which you?", as there is no Platonic me with a quickly identifiable utility function that it can optimize for me. There's just a bunch of modules that won the evolutionary tournament of survival because they're a good way to make grandchildren.

If I am conflicted between the emotional satisfaction of food and the emotional dissatisfaction of exercise combined with the social satisfaction of beauty, will a FAI be able to resolve that for me any more easily than I can resolve it?

If my far mode desires are rooted in my desire to have a good social identity, should the FAI choose those over my near mode desires which are rooted in my desire to survive and enjoy life?

In some sense, the problem of FAI is the problem of rigorously understanding humans, and evo psych suggests that will be a massively difficult problem. That's what I was trying to suggest with my comment.

Replies from: TimFreeman
comment by TimFreeman · 2011-08-16T17:57:37.026Z · LW(p) · GW(p)

In some sense, the problem of FAI is the problem of rigorously understanding humans, and evo psych suggests that will be a massively difficult problem.

I think that bar is unreasonably high. If you have a conflict between enjoying eating a lot and being skinny and beautiful, and the FAI helps you do one or the other, then you aren't in a position to complain that it did the wrong thing. Its understanding of you doesn't have to be more rigorous than your understanding of you.

Replies from: Vaniver
comment by Vaniver · 2011-08-17T02:28:35.494Z · LW(p) · GW(p)

Its understanding of you doesn't have to be more rigorous than your understanding of you.

It does if I want it to give me results any better than I can provide for myself. I also provided the trivial example of internal conflicts - external conflicts are much more problematic. Human desire for status is possibly the source of all human striving and accomplishment. How will a FAI deal with the status conflicts that develop?

Replies from: TimFreeman
comment by TimFreeman · 2011-08-18T03:51:14.037Z · LW(p) · GW(p)

Its understanding of you doesn't have to be more rigorous than your understanding of you.

It does if I want it to give me results any better than I can provide for myself.

No. For example, if it develops some diet drug that lets you safely enjoy eating and still stay skinny and beautiful, that might be a better result than you could provide for yourself, and it doesn't need any special understanding of you to make that happen. It just makes the drug, makes sure you know the consequences of taking it, and offers it to you. If you choose to take it, that tells the AI more about your preferences, but there's no profound understanding of psychology required.

I also provided the trivial example of internal conflicts - external conflicts are much more problematic.

Putting an inferior argument first is good if you want to try to get the last word, but it's not a useful part of problem solving. You should try to find the clearest problem where solving that problem solves all the other ones.

How will a FAI deal with the status conflicts that develop?

If it can do a reasonable job of comparing utilities across people, then maximizing average utility seems to do the right thing here. Comparing utilities between arbitrary rational agents doesn't work, but comparing utilities between humans seems to -- there's an approximate universal maximum (getting everything you want) and an approximate universal minimum (you and all your friends and relatives getting tortured to death). Status conflicts are not one of the interesting use cases. Do you have anything better?

Replies from: Vaniver
comment by Vaniver · 2011-08-18T16:47:19.750Z · LW(p) · GW(p)

For example, if it develops some diet drug that lets you safely enjoy eating and still stay skinny and beautiful, that might be a better result than you could provide for yourself, and it doesn't need any special understanding of you to make that happen.

It might not need special knowledge of my psychology, but it certainly needs special knowledge of my physiology.

But notice that the original point was about human preferences. Even if it provides new technologies that dissolve internal conflicts, the question of whether or not to use the technology becomes a conflict. Remember, we live in a world where some people have strong ethical objections to vaccines. An old psychological finding is that oftentimes, giving people more options makes them worse off. If the AI notices that one of my modules enjoys sensory pleasure, offers to wirehead me, and I reject it on philosophical grounds, I could easily become consumed by regret or struggles with temptation, and wish that I never had been offered wireheading in the first place.

Putting an inferior argument first is good if you want to try to get the last word, but it's not a useful part of problem solving. You should try to find the clearest problem where solving that problem solves all the other ones.

I put the argument of internal conflicts first because it was the clearest example, and you'll note it obliquely refers to the argument about status. Did you really think that, if a drug were available to make everyone have perfectly sculpted bodies, one would get the same social satisfaction from that variety of beauty?

If it can do a reasonable job of comparing utilities across people, then maximizing average utility seems to do the right thing here.

I doubt it can measure utilities, as I argued two posts ago; and simple average utilitarianism is so wracked with problems I'm not even sure where to begin.

Comparing utilities between arbitrary rational agents doesn't work, but comparing utilities between humans seems to -- there's an approximate universal maximum (getting everything you want) and an approximate universal minimum (you and all your friends and relatives getting tortured to death).

A common tactic in human interaction is to care about everything more than the other person does, and explode (or become depressed) when they don't get their way. How should such real-life utility monsters be dealt with?

Status conflicts are not one of the interesting use cases.

Why do you find status uninteresting?

Replies from: NancyLebovitz, TimFreeman
comment by NancyLebovitz · 2011-08-18T17:37:11.561Z · LW(p) · GW(p)

I haven't heard of people having strong ethical objections to vaccines. They have strong practical (if ill-founded) objections-- they believe vaccines have dangers so extreme as to make the benefits not worth it, or they have strong heuristic objections-- I think they believe health is an innate property of an undisturbed body or they believe that anyone who makes money from selling a drug can't be trusted to tell the truth about its risks.

To my mind, an ethical objection would be a belief that people should tolerate the effects of infectious diseases for some reason such as that suffering is good in itself or that it's better for selection to enable people to develop innate immunities.

Replies from: soreff, Vaniver
comment by soreff · 2011-08-18T18:01:42.731Z · LW(p) · GW(p)

To my mind, an ethical objection would be a belief that people should tolerate the effects of infectious diseases for some reason such as that suffering is good in itself

That wasn't precisely the objection of Christian conservatives to the HPV vaccine (perhaps more nearly that they wanted sex to lead to suffering?), but it is fairly close.

comment by Vaniver · 2011-08-18T19:02:40.467Z · LW(p) · GW(p)

I am counting religious objections as ethical objections, and there are several groups out there that refuse all medical treatment.

comment by TimFreeman · 2011-08-23T20:28:59.929Z · LW(p) · GW(p)

A common tactic in human interaction is to care about everything more than the other person does, and explode (or become depressed) when they don't get their way. How should such real-life utility monsters be dealt with?

If everyone's inferred utility goes from 0 to 1, and the real-life utility monster cares more than the other people about one thing, the inferred utility will say he cares less than other people about something else. Let him play that game until the something else happens, then he loses, and that's a fine outcome.

I doubt it can measure utilities

I think it can, in principle, estimate utilities from behavior. See http://www.fungible.com/respect.

simple average utilitarianism is so wracked with problems I'm not even sure where to begin.

The problems I'm aware of have to do with creating new people. If you assume a fixed population and humans who have comparable utilities as described above, are there any problems left? Creating new people is a more interesting use case than status conflicts.

Why do you find status uninteresting?

As I said, because maximizing average utility seems to get a reasonable result in that case.

Replies from: Vaniver
comment by Vaniver · 2011-08-23T22:42:26.156Z · LW(p) · GW(p)

If everyone's inferred utility goes from 0 to 1, and the real-life utility monster cares more than the other people about one thing, the inferred utility will say he cares less than other people about something else. Let him play that game until the something else happens, then he loses, and that's a fine outcome.

That's not the situation I'm describing; if 0 is "you and all your friends and relatives getting tortured to death" and 1 is "getting everything you want," the utility monster is someone who puts "not getting one thing I want" at, say, .1 whereas normal people put it at .9999.

I think it can, in principle, estimate utilities from behavior.

And if humans turn out to be adaptation-executers, then utility is going to look really weird, because it'll depend a lot on framing and behavior.

The problems I'm aware of have to do with creating new people.

How do you add two utilities together? If you can't add, how can you average?

As I said, because maximizing average utility seems to get a reasonable result in that case.

If people dislike losses more than they like gains and status is zero-sum, does that mean the reasonable result of average utilitarianism when applied to status is that everyone must be exactly the same status?

Replies from: army1987, pengvado, TimFreeman
comment by A1987dM (army1987) · 2012-08-05T22:35:20.500Z · LW(p) · GW(p)

If you can't add, how can you average?

You can average but not add elements of an affine space. The average between the position of the tip of my nose and the point two metres west of it is the point one metre west of it, but their sum is not a well-defined concept (you'd have to pick an origin first, and the answer will depend on it).

(More generally, you can only take linear combinations whose coefficients sum to 1 (to get another element of the affine space) or to 0 (to get a vector). Anyway, the values of two different utility functions aren't even elements of the same affine space, so you still can't average them. The values of the same utility function are, and the average between U1 and U2 is U3 such that you'd be indifferent between 100% probability of U3, and 50% probability of each of U1 and U2.)

Replies from: Vaniver
comment by Vaniver · 2012-08-06T05:07:51.150Z · LW(p) · GW(p)

You can average but not add elements of an affine space.

Correct but irrelevant. Utility functions are families of mappings from futures to reals, which don't live in an affine space, as you mention.

This looks more like a mention of an unrelated but cool mathematical concept than a nitpick.

Replies from: army1987, shminux
comment by A1987dM (army1987) · 2012-08-06T07:10:06.007Z · LW(p) · GW(p)

My point is that “If you can't add, how can you average?” is not a valid argument, even though in this particular case both the premise and the conclusion happen to be correct.

Replies from: Vaniver
comment by Vaniver · 2012-08-06T15:51:05.901Z · LW(p) · GW(p)

My point is that “If you can't add, how can you average?” is not a valid argument, even though in this particular case both the premise and the conclusion happen to be correct.

If I ask "If you can't add, how can you average?" and TimFreeman responds with "by using utilities that live in affine spaces," I then respond with "great, those utilities are useless for doing what you want to do." When a rhetorical question has an answer, the answer needs to be material to invalidate its rhetorical function; where's the invalidity?

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-06T18:52:12.655Z · LW(p) · GW(p)

I took the rhetorical question to implicitly be the syllogism 'you can't sum different people's utilities, you can't average what you can't sum, therefore you can't average different people's utilities'. I just pointed out that the second premise isn't generally true. (Both the first premise and the conclusion are true, which is why it's a nitpick.) Did I over-interpret the rhetorical question?

Replies from: Vaniver
comment by Vaniver · 2012-08-06T20:45:39.207Z · LW(p) · GW(p)

The direction I took the rhetorical question was "utilities aren't numbers, they're mappings," which does not require the second premise. I agree with you that the syllogism you presented is flawed.

comment by shminux · 2012-08-06T05:56:20.931Z · LW(p) · GW(p)

Utility functions are families of mappings from futures to reals, which don't live in an affine space, as you mention.

Are you sure? The only thing one really wants from a utility function is a ranking, which is an even weaker requirement than an affine space. All monotonic remappings are in the same equivalence class.

Replies from: Vaniver, fubarobfusco, army1987
comment by Vaniver · 2012-08-06T06:22:10.307Z · LW(p) · GW(p)

The only thing one really wants from a utility function is ranking, which is even weaker a requirement than affine spaces.

It's practically useful to have reals rather than rankings, because that lets one determine how the function will behave for different probabilistic combinations of futures. If you already have the function fully specified over uncertain futures, then only providing a ranking is sufficient for the output.

The reason why I mentioned that it was a mapping, though, is because the output of a single utility function can be seen as an affine space. The point I was making in the ancestral posts was that while it looks like the outputs of two different utility functions play nicely, careful consideration shows that their combination destroys the mapping, which is what makes utility functions useful.

All monotonic remappings are in the same equivalency class.

Hence the 'families' comment.

comment by fubarobfusco · 2012-08-06T17:06:18.318Z · LW(p) · GW(p)

I'm hearing an echo of praxeology here; specifically the notion that humans use something like stack-ranking rather than comparison of real-valued utilities to make decisions. This seems like it could be investigated neurologically ....

comment by A1987dM (army1987) · 2012-08-06T07:15:03.302Z · LW(p) · GW(p)

Huh, no. If army1987.U($1000) = shminux.U($1000) = 1, army1987.U($10,000) = 1.9, shminux.U($10,000) = 2.1, and army1987.U($100,000) = shminux.U($100,000) = 3, then I would prefer 50% probability of $1000 and 50% probability of $100,000 rather than 100% probability of $10,000, and you wouldn't.
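Working through the arithmetic in that example (a quick check using the numbers above):

```python
# Both agents rank the three outcomes identically, yet they disagree about
# the 50/50 lottery, so a bare ranking is not enough to fix behaviour
# under uncertainty.

U_army1987 = {1_000: 1.0, 10_000: 1.9, 100_000: 3.0}
U_shminux = {1_000: 1.0, 10_000: 2.1, 100_000: 3.0}

def expected_utility(U, lottery):
    return sum(p * U[outcome] for outcome, p in lottery.items())

lottery = {1_000: 0.5, 100_000: 0.5}

for name, U in (("army1987", U_army1987), ("shminux", U_shminux)):
    eu = expected_utility(U, lottery)  # 2.0 for both
    choice = "the lottery" if eu > U[10_000] else "the sure $10,000"
    print(f"{name}: EU(lottery) = {eu}, prefers {choice}")
# army1987: 2.0 > 1.9 -> prefers the lottery
# shminux:  2.0 < 2.1 -> prefers the sure $10,000
```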

comment by pengvado · 2011-08-23T23:26:46.187Z · LW(p) · GW(p)

If you can't add, how can you average?

Using an interval scale? I don't have anything to contribute to the question of interpersonal utility comparison, but the average of two values from the same agent's utility function is easy enough, while addition is still undefined.

Replies from: Vaniver
comment by Vaniver · 2011-08-24T00:39:19.853Z · LW(p) · GW(p)

I presume the average in question is interpersonal, not intertemporal, as we are discussing status conflicts (between individuals).

comment by TimFreeman · 2011-09-23T20:11:50.322Z · LW(p) · GW(p)

If everyone's inferred utility goes from 0 to 1, and the real-life utility monster cares more than the other people about one thing, the inferred utility will say he cares less than other people about something else. Let him play that game until the something else happens, then he loses, and that's a fine outcome.

That's not the situation I'm describing; if 0 is "you and all your friends and relatives getting tortured to death" and 1 is "getting everything you want," the utility monster is someone who puts "not getting one thing I want" at, say, .1 whereas normal people put it at .9999.

You have failed to disagree with me. My proposal exactly fits your alleged counterexample.

Suppose Alice is a utility monster where:

  • U(Alice, torture of everybody) = 0
  • U(Alice, everything) = 1
  • U(Alice, no cookie) = 0.1
  • U(Alice, Alice dies) = 0.05

And Bob is normal, except he doesn't like Alice:

  • U(Bob, torture of everybody) = 0
  • U(Bob, everything) = 1
  • U(Bob, Alice lives, no cookie) = 0.8
  • U(Bob, Alice dies, no cookie) = 0.9

If the FAI has a cookie it can give to Bob or Alice, it will give it to Alice, since U(cookie to Bob) = U(Bob, everything) + U(Alice, everything but a cookie) = 1 + 0.1 = 1.1 < U(cookie to Alice) = U(Bob, everything but a cookie) + U(Alice, everything) = 0.8 + 1 = 1.8. Thus Alice gets her intended reward for being a utility monster.

However, if there are no cookies available and the FAI can kill Alice, it will do so for the benefit of Bob, since U(Bob, Alice lives, no cookie) + U(Alice, Alice lives, no cookie) = 0.8 + 0.1 = 0.9 < U(Bob, Alice dies, no cookie) + U(Alice, Alice dies) = 0.9 + 0.05 = 0.95. The basic problem is that since Alice had the cookie fixation, that ate up so much of her utility range that her desire to live in the absence of the cookie was outweighed by Bob finding her irritating.

Another problem with Alice's utility is that it supports the FAI doing lotteries that Alice would apparently prefer but a normal person would not. For example, assuming the outcome for Bob does not change, the FAI should prefer 50% Alice dies + 50% Alice gets a cookie (adds to 0.525) over 100% Alice lives without a cookie (which is 0.1). This is a different issue from interpersonal utility comparison.
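The comparisons above can be checked mechanically; here is a quick sketch using exactly the numbers from the comment (only the outcome labels are invented):

```python
# Reproduce the comment's comparisons under "sum the two utilities".
U_alice = {"everything": 1.0, "no cookie": 0.1, "Alice dies": 0.05}
U_bob = {"everything": 1.0, "Alice lives, no cookie": 0.8, "Alice dies, no cookie": 0.9}

# 1. Who gets the one cookie?
cookie_to_bob = U_bob["everything"] + U_alice["no cookie"]                 # 1.1
cookie_to_alice = U_bob["Alice lives, no cookie"] + U_alice["everything"]  # 1.8
assert cookie_to_alice > cookie_to_bob  # Alice gets the cookie

# 2. No cookie available: does the FAI kill Alice?
alice_lives = U_bob["Alice lives, no cookie"] + U_alice["no cookie"]       # 0.9
alice_dies = U_bob["Alice dies, no cookie"] + U_alice["Alice dies"]        # 0.95
assert alice_dies > alice_lives  # killing Alice maximizes the sum

# 3. Alice alone: the lottery described above.
lottery = 0.5 * U_alice["Alice dies"] + 0.5 * U_alice["everything"]        # 0.525
assert lottery > U_alice["no cookie"]  # 0.525 > 0.1
```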

How do you add two utilities together?

They are numbers. Add them.

And if humans turn out to be adaptation-executers, then utility is going to look really weird, because it'll depend a lot on framing and behavior.

Yes. So far as I can tell, if the FAI is going to do what people want, it has to model people as though they want something, and that means ascribing utility functions to them. Better alternatives are welcome. Giving up because it's a hard problem is not welcome.

If people dislike losses more than they like gains and status is zero-sum, does that mean the reasonable result of average utilitarianism when applied to status is that everyone must be exactly the same status?

No. If Alice has high status and Bob has low status, and the FAI takes action to lower Alice's status and raise Bob's, and people hate losing, then Alice's utility decrease will exceed Bob's utility increase, so the FAI will prefer to leave the status as it is. Similarly, the FAI isn't going to want to increase Alice's status at the expense of Bob. The FAI just won't get involved in the status battles.

I have not found this conversation rewarding. Unless there's an obvious improvement in the quality of your arguments, I'll drop out.

Edit: Fixed the math on the FAI-kills-Alice scenario. Vaniver continued to change the topic with every turn, so I won't be continuing the conversation.

Replies from: Vaniver, army1987
comment by Vaniver · 2011-09-23T21:05:30.790Z · LW(p) · GW(p)

So far as I can tell, if the FAI is going to do what people want, it has to model people as though they want something, and that means ascribing utility functions to them. Better alternatives are welcome. Giving up because it's a hard problem is not welcome.

What if wants did not exist a priori, but only in response to stimuli? Alice, for example, doesn't care about cookies, she cares about getting her way. If the FAI tells Alice and Bob "look, I have a cookie; how shall I divide it between you?" Alice decides that the cookie is hers and she will throw the biggest tantrum if the FAI decides otherwise, whereas Bob just grumbles to himself. If the FAI tells Alice and Bob individually "look, I'm going to make a cookie just for you, what would you like in it?" both of them enjoy the sugar, the autonomy of choosing, and the feel of specialness, without realizing that they're only eating half of the cookie dough.

Suppose Alice is just as happy in both situations, because she got her way in both situations, and that Bob is happier in the second situation, because he gets more cookie. In such a scenario, the FAI would never ask Alice and Bob to come up with a plan to split resources between the two of them, because Alice would turn it into a win/lose situation.

It seems to me that an FAI would engage in want curation rather than want satisfaction. As the saying goes, seek to want what you have, rather than seeking to have what you want. A FAI who engages in that behavior would be more interested in a stimuli-response model of human behavior and mental states than a consequentialist-utility model of human behavior and mental states.

Another problem with Alice's utility is that it supports the FAI doing lotteries that Alice would apparently prefer but a normal person would not.

This is one of the reasons why utility monsters tend to seem self-destructive; they gamble farther and harder than most people would.

They are numbers. Add them.

How do we measure one person's utility? Preferences revealed by actions? (That is, given a mapping from situations to actions to consequences, I can construct a utility function which takes situations and consequences as inputs and returns the decision taken.) If so, when we add two utilities together, does the resulting number still uniquely identify the actions taken by both parties?

comment by A1987dM (army1987) · 2012-08-05T22:30:03.578Z · LW(p) · GW(p)

How do you add two utilities together?

They are numbers. Add them.

So are the atmospheric pressure in my room and the price of silver. But you cannot add them together (unless you have a conversion factor from millibars to dollars per ounce).

Replies from: TimFreeman
comment by TimFreeman · 2012-10-31T04:33:13.542Z · LW(p) · GW(p)

How do you add two utilities together?

They are numbers. Add them.

So are the atmospheric pressure in my room and the price of silver. But you cannot add them together (unless you have a conversion factor from millibars to dollars per ounce).

Your analogy is invalid, and in general analogy is a poor substitute for a rational argument. In the thread you're replying to, I proposed a scheme for getting Alice's utility to be commensurate with Bob's so they can be added. It makes sense to argue that the scheme doesn't work, but it doesn't make sense to pretend it does not exist.

comment by chatquitevoit · 2011-07-18T15:43:27.115Z · LW(p) · GW(p)

This may be a bit naive, but can a FAI even have a really directive utility function? It would seem to me that by definition (caveats to using that aside) it would not be running with any 'utility' in 'mind'.

comment by RobertLumley · 2011-07-24T18:39:17.878Z · LW(p) · GW(p)

if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper. And therefore your reluctance to sign up for cryonics violates your own revealed preferences! You must just be trying to signal conformity or something.

I don't think this section bolsters your point much. The obvious explanation for this behaviour, to me, lies in the expected utility calculation for each situation.

For the fire: Expected Utility = p(longer life | Leaving fire) * Utility(longer life) - Cost(Running)

For cryonics: Expected Utility = p(longer life | Signing up for cryonics) * Utility(longer life) - Cost(Cryonics)

It's pretty safe to assume that almost everyone assigns a value almost equal to one to p(longer life | Leaving fire), and a relatively insignificant value to Cost(Running), which would mainly be temporary exhaustion. But those aren't necessarily valid assumptions in the case of cryonics. Even the most ardent supporter of cryonics is unlikely to assign a probability as large as that of the fire. And the monetary costs are quite significant, especially to some demographics.
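To make the asymmetry concrete, here is the same calculation with some purely invented numbers (only the structure of the two formulas comes from the comment):

```python
# Plug invented numbers into the two expected-utility expressions above.
U_longer_life = 100.0

# Fire: near-certain benefit, trivial cost.
p_escape_works, cost_running = 0.999, 0.1
eu_leave_fire = p_escape_works * U_longer_life - cost_running  # 99.8

# Cryonics: uncertain benefit, significant cost.
p_cryonics_works, cost_cryonics = 0.05, 30.0
eu_sign_up = p_cryonics_works * U_longer_life - cost_cryonics  # -25.0

print(eu_leave_fire, eu_sign_up)
# Fleeing the fire is clearly positive; with these (invented) numbers signing
# up for cryonics is not, so no inconsistent "death preference" is needed to
# explain doing one and not the other.
```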

Replies from: HoverHell
comment by HoverHell · 2011-07-25T04:06:58.617Z · LW(p) · GW(p)

-

Replies from: RobertLumley
comment by RobertLumley · 2011-07-25T04:16:53.207Z · LW(p) · GW(p)

That's a good question. I didn't really think about it when I read it, because I am personally completely dismissive of and not scared by haunted houses, whereas I am skeptical of cryonics, and couldn't afford it even if I did the research and decided it was worth it.

I'm not sure it can be, but I'm not sure a true rationalist would be scared by a haunted house. The only thing I can come up with for a rational utility function is someone who suspended his belief because he enjoyed being scared. I feel like this example is far more related to irrationality and innate, irrepressible bias than to rationality.

comment by printing-spoon · 2011-07-18T03:12:25.536Z · LW(p) · GW(p)

A more practical example: when people discuss cryonics or anti-aging, the following argument usually comes up in one form or another: if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper.

Nitpick: burning to death is painful, and it can happen at any stage of life. "You want to live a long life and die peacefully with dignity" can also be derived, but of course it's more complicated.

comment by [deleted] · 2015-08-07T17:50:06.983Z · LW(p) · GW(p)

So if someone stays in the haunted house despite the creaky stairwell, his preferences are revealed as rationalist?

Personally I would have run away exactly because I would not think the sound came from a non-existent, and therefore harmless, ghost!

comment by BobTheBob · 2011-07-21T20:31:01.150Z · LW(p) · GW(p)

Thanks for this great sequence of posts on behaviourism and related issues.

Anyone who does not believe mental states are ontologically fundamental - ie anyone who denies the reality of something like a soul - has two choices about where to go next. They can try reducing mental states to smaller components, or they can stop talking about them entirely.

Here's what I take it you're committed to:

  • by 'mental states' we mean things like beliefs and desires.
  • an eliminativist has both to stop talking about them and also using them in explanations.
  • wherever beliefs and desires go, rationality goes too. You can't have a rational agent without what amount to beliefs and desires.
  • you are advocating eliminativism.

Can you say a bit about the implications of eliminating rationality? How do we square doing so with all the posts on this site about what is and isn't rational? Are these claims all meaningless or false? Do you want to maintain that they all can be reformulated in terms of tendencies or the like?

Alternately, if you want to avoid this implication, can you say where you dig in your heels? My prejudices lead me to suspect that the devil lurks in the details of those 'higher level abstractions' you refer to, but I am interested to hear how that suggestion gets cashed out. Apologies if you have answered this and I have missed it.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-07-21T20:58:02.264Z · LW(p) · GW(p)

Can you say more about how you got that second bullet item?

It's not clear to me that being committed to the idea that mental states can be reduced to smaller components (which is one of the options the OP presented) commits one to stop talking about mental states, or to stop using them in explanations.

I mean, any economist would agree that dollars are not ontologically fundamental, but no economist would conclude thereby that we can't talk about dollars.

Replies from: BobTheBob
comment by BobTheBob · 2011-07-21T21:32:06.300Z · LW(p) · GW(p)

This may owe to a confusion on my part. I understood from the title of the post and some of its parts (including the last paragraph) that the OP was advocating elimination over reduction (i.e., contrasting these two options and picking elimination). I agree that if reduction is an option, then it's still OK to use mental states in explanations, as per your dollar example.

comment by boilingsambar (AbyCodes) · 2011-07-24T07:19:52.215Z · LW(p) · GW(p)

if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper.

Won't it be the case that someone who tries to escape from a burning building does so just to avoid the pain and suffering it inflicts? It would be such a drag to be burned alive rather than to have a peaceful, painless poison death.

Replies from: Caravelle, LeibnizBasher
comment by Caravelle · 2011-07-24T21:49:49.770Z · LW(p) · GW(p)

That doesn't help much. If people were told they were going to be murdered in a painless way (or something not particularly painful - for example, a shot for someone who isn't afraid of needles and has no problem getting vaccinated), most would consider this a threat and would try to avoid it.

I think most people's practical attitude towards death is a bit like Syrio Forel from Game of Thrones - "not today". We learn to accept that we'll die someday, we might even be okay with it, but we prefer to have it happen as far in the future as we can manage.

Signing up for cryonics is an attempt to avoid dying tomorrow - but we're not that worried about dying tomorrow. Getting out of a burning building means we avoid dying today.

(whether this is a refinement of how to understand our behaviour around death, or a potential generalized utility function, I couldn't say).

Replies from: MixedNuts, AbyCodes
comment by MixedNuts · 2011-07-25T06:53:34.107Z · LW(p) · GW(p)

Should be noted that "tomorrow" stands in for "in enough time that we operate in Far mode when thinking about it", as opposed to actual tomorrow, when we very much don't want to die.

Come to think of it, a lot of people are all "Yay, death!" in Far mode (I'm looking at you, Epictetus), but much fewer in Near mode (though those who do are famous). Anecdotal evidence: I was born without an aversion to death in principle, was surprised by sad funerals, thought it was mostly signalling (and selfish mourning for lost company), and was utterly baffled by obviously sincere death-bashers. I've met a few other people like that, too. Yet we (except some of the few I met in history books) have normal self-preservation reflexes.

There's no pressure to want to live in Far mode (in an environment without cryonics and smoking habits, anyway), and there's pressure to say "I don't care about death, I only care about $ideal which I will never compromise" (hat tip Katja Grace).

comment by boilingsambar (AbyCodes) · 2011-07-25T09:00:50.208Z · LW(p) · GW(p)

I was just pointing to the opinion that not everyone who tries to escape from death is actually afraid of death per se. They might have other reasons.

comment by LeibnizBasher · 2011-07-24T21:22:11.106Z · LW(p) · GW(p)

Death from old age often involves drowning in the fluid that accumulates in your lungs when you get pneumonia.

comment by andrewk · 2011-07-21T03:53:37.095Z · LW(p) · GW(p)

Interesting that you chose the "burning building" analogy. In the Fire Sermon, the Buddha argued that being incarnated in samsara was like being in a burning building and that the only sensible thing to do was to take steps to ensure the complete ending of the process of reincarnation in samsara (and dying just doesn't cut it in this regard). The burning building analogy in this case is a terrible one, as we are talking about the difference between a healthy person seeking to avoid pain and disability versus the cryonics argument, which is all about preserving a past-its-use-by-date body, at considerable expense and loss of enjoyment of this existence, with no guarantee at all that there will ever be a payoff for the expenditure.

comment by lukeprog · 2011-07-20T23:21:45.794Z · LW(p) · GW(p)

Excellent post!

I hope that somewhere along the way you get to the latest neuroscience suggesting that the human motivational system is composed of both model-based and model-free reinforcement mechanisms.

Keep up the good work.

comment by Threedee · 2011-07-19T08:48:11.520Z · LW(p) · GW(p)

Without my dealing here with the other alternatives, do you, Yvain, or does any other LW reader, think that it is (logically) possible that mental states COULD be ontologically fundamental?

Further, why is that possibility tied to the word "soul", which carries all sorts of irrelevant baggage?

Full disclosure: I do (subjectively) know that I experience red, and other qualia, and try to build that in to my understanding of consciousness, which I also know I experience (:-) (Note that I purposely used the word "know" and not the word "believe".)

Replies from: lessdazed, scav
comment by lessdazed · 2011-07-21T02:19:16.708Z · LW(p) · GW(p)

Further, why is that possibility tied to the word "soul", which carries all sorts of irrelevant baggage?

It's just the history of some words. It's not that important.

I experience red, and other qualia

People frequently claim this. One thing missing is a mechanism that gets us from an entity experiencing such fundamental mental states or qualia to that being's talking about it. Reductionism offers an account of why they say such things. If, broadly speaking, the reductionist explanation is true, then this isn't a phenomenon that is something to challenge reductionism with. If the reductionist account is not true, then how can these mental states cause people to talk about them? How does something not reducible to physics influence the world, physically? Is this concept better covered by a word other than "magic"? And if these mental states are partly the result of the environment, then the physical world is influencing them too.

I don't see why it's desirable to posit magic; if I type "I see a red marker" because I see a red marker, why hypothesize that the physical light, received by my eyes and sending signals to my brain, was magically transformed into pure mentality, enabling it to interact with ineffable consciousness, and then magicked back into physics to begin a new physical chain of processes that ends with my typing? Wouldn't I be just as justified in claiming that the process has interruptions at other points?

As the physical emanation "I see red people" may be caused by laws of how physical stuff interacts with other physical stuff, we don't guess it isn't caused by that, particularly as we can think of no other coherent way.

We are used to the good habit of not mistaking the limits of our imaginations for the limits of reality, so we won't say we know it to be impossible. However, if physics is a description of how stuff interacts with stuff, I don't see how it's logically possible for stuff to do something ontologically indescribable even as randomness. Interactions can either be according to a pattern, or not, and we have the handy description "not in a pattern, indescribable by compression" to pair with "in a pattern, describable by compression", and how matter interacts with matter ought to fall under one of those. So apparent or even actual random "deviation from the laws of physics" would not be unduly troubling, and systematic deviation from the laws of physics isn't either.

Do you think your position is captured by the statement, "matter sometimes interacts with matter neither a) in a pattern according to rules, nor b) not in a pattern, in deviation from rules"?

Photons go into eyes, people react predictably to them (though this is a crude example, too macro)...something bookended by the laws of physics has no warrant to call itself outside of physics, if the output is predictable from the input. That's English, as it's used for communication, no personal definitions allowed.

Replies from: handoflixue
comment by handoflixue · 2011-07-24T13:11:46.171Z · LW(p) · GW(p)

if I type "I see a red marker" because I see a red marker, why hypothesize that the physical light, received by my eyes and sending signals to my brain, was magically transformed into pure mentality, enabling it to interact with ineffable consciousness

There's a fascinating psychological phenomenon called "blindsight" where the conscious mind doesn't register vision - the person is genuinely convinced they are blind, and they cannot verbally describe anything. However, their automatic reflexes will still navigate the world just fine. If you ask them to put a letter in a slot, they can do it without a problem. It's a very specific sort of neurological damage, and there have been a few studies on it.

I'm not sure if it quite captures the essence of qualia, but "conscious experience" IS very clearly different from the experience which our automatic reflexes rely on to navigate the world!

Replies from: lessdazed
comment by lessdazed · 2011-07-24T16:18:33.574Z · LW(p) · GW(p)

What if you force them to verbally guess about what's in front of them, can they do better than chance guessing colors, faces, etc.?

Can people get it in just one eye/brain side?

Replies from: handoflixue
comment by handoflixue · 2011-07-25T22:27:00.680Z · LW(p) · GW(p)

I've only heard of that particular test once. They shone a light on the wall and forced the subject to guess where. All I've heard is that they do "better than should be possible for someone who is truly blind", so I'm assuming worse than average but definitely still processing the information to some degree.

Given that it's a neurological condition, I'd expect it to be impossible to have it in just one eye/brain side, since the damage is occurring well after the signal from both eyes is put together.

EDIT: http://en.wikipedia.org/wiki/Blindsight is a decent overview of the phenomenon. Apparently it can indeed affect just part of your vision, so I was wrong on that!

comment by scav · 2011-07-20T14:39:25.967Z · LW(p) · GW(p)

Hmm. Unless I'm misunderstanding you completely, I'll assume we can work from the example of the "red" qualium (?)

What would it mean for even just the experience of "red" to be ontologically fundamental? What "essence of experiencing red" could possibly exist as something independent of the workings of the wetware that is experiencing it?

For example, suppose I and a dichromatic human look at the same red object. I and the other human may have more or less the same brain circuitry and are looking at the same thing, but since we are getting different signals from our eyes, what we experience as "red" cannot be exactly the same. A bee or a squid or a duck might have different inputs, and different neural circuitry, and therefore different qualia.

A rock next to the red object would have some reflected "red" light incident upon it. But it has no eyes and as far as I know no perception or mental states at all. Does it make sense to say that the rock can also see its neighbouring object as "red"? I wouldn't say so, outside the realm of poetic metaphor.

So if your qualia are contingent on the circumstances of certain inputs to certain neural networks in your head, are they "ontologically fundamental"? I'd say no. And by extension, I'd say the same of any other mental state.

If you could change the pattern of signals and the connectivity of your brain one neuron at a time, you could create a continuum of experiences from "red" to "intuitively perceiving the 10000th digit of pi" and every indescribable, ineffable inhuman state in between. None of them would be more fundamental than any other; all are sub-patterns in a small corner of a very richly-patterned universe.

Replies from: fubarobfusco, Threedee
comment by fubarobfusco · 2011-07-21T18:08:09.855Z · LW(p) · GW(p)

qualium

"Quale", by the way.

Replies from: Hul-Gil
comment by Hul-Gil · 2011-07-21T21:12:58.339Z · LW(p) · GW(p)

How do you know? Do you know Latin, or just how this word works?

I'm not doubting you - just curious. I've always wanted to learn Latin so I can figure this sort of thing out (and then correct people), but I've settled for just looking up specific words when a question arises.

comment by Threedee · 2011-07-21T08:02:01.807Z · LW(p) · GW(p)

I apologize for being too brief. What I meant to say is that I posit that my subjective experience of qualia is real, and not explained by any form of reductionism or eliminativism. That experience of qualia is fundamental in the same way that gravitation and the electromagnetic force are fundamental. Whether the word ontological applies may be a semantic argument.

Basically, I am reprising Chalmers' definition of the Hard Problem, or Thomas Nagel's argument in the paper "What is it like to be a bat?"

Replies from: lessdazed, scav, Dreaded_Anomaly, DSimon
comment by lessdazed · 2011-07-21T22:40:46.474Z · LW(p) · GW(p)

Do qualia describe how matter interacts with matter? For example, do they explain why any person says "I have qualia" or "That is red"? Would gravity and electromagnetism, etc. fail to explain all such statements, or just some of them?

If qualia cause such things, is there any entropy when they influence and are influenced by matter? Is energy conserved?

If I remove neurons from a person one by one, is there a point at which qualia no longer are needed to describe how the matter and energy in them relates to the rest of matter and energy? Is it logically possible to detect such a point? If I then replace the critical neuron, why ought I be confident that merely considering, tracking, and simulating local, physical interactions would lead to an incorrect model of the person insofar as I take no account of qualia?

How likely is it that apples are not made of atoms?

comment by scav · 2011-07-22T08:42:26.167Z · LW(p) · GW(p)

You may posit that your subjective experience is not explained by reduction to physical phenomena (including really complex information processes) happening in the neurons of your brain. But to me that would be an extraordinary claim requiring extraordinary evidence.

It seems to me that until we completely understand the physical and informational processes going on in the brain, the burden of proof is on anyone suggesting that such complete understanding would still be in principle insufficient to explain our subjective experiences.

comment by Dreaded_Anomaly · 2011-07-22T01:52:45.736Z · LW(p) · GW(p)

You should check out the recent series that orthonormal wrote about qualia. It starts with Seeing Red: Dissolving Mary's Room and Qualia.

comment by DSimon · 2011-07-21T17:38:11.914Z · LW(p) · GW(p)

That experience of qualia is fundamental in the same way that gravitation and the electromagnetic force are fundamental.

I don't understand what you mean by this. Could you elaborate?

Replies from: Threedee
comment by Threedee · 2011-07-24T21:12:44.713Z · LW(p) · GW(p)

There is no explanation of HOW mass generates or causes gravity; similarly, there is no explanation of how matter causes or generates forces such as electromagnetism. (Yes, I know that some sort of strings have been proposed to subserve gravity, and so far they seem to me to be another false "ether".) So in a shorthand of sorts, it is accepted that gravity and the various other forces exist as fundamentals ("axioms" of nature, if you will accept a metaphor), because their effects and interactions can be meaningfully applied in explanations. No one has seen gravity, no one can point to gravity--it is a fundamental force. Building on Chalmers in one of his earlier writings, I am willing to entertain the idea that qualia are a fundamental force-like dimension of consciousness. Finally, every force is a function of something: gravity is a function of amount of mass, electromagnetism is a function of amount of charge. What might qualia and consciousness be a function of? Chalmers and others have suggested "bits of information", although that is an additional speculation.

Replies from: DSimon
comment by DSimon · 2011-07-24T22:02:41.682Z · LW(p) · GW(p)

I don't think "[T]heir effects and interactions can be meaningfully applied in explanations" is a good way of determining if something is "fundamental" or not: that description applies pretty nicely to aerodynamics, but aerodynamics is certainly not at the bottom of its chain of reductionism. I think maybe that's the "fundamental" you're going for: the maximum level of reductionism, the turtle at the bottom of the pile.

Anyways: (relativistic) gravity is generally thought not to be a fundamental, because it doesn't mesh with our current quantum theory; hence the search for a Grand Unified Whatsit. Given that gravity, an incredibly well-studied and well-understood force, is at most questionably a fundamental thingie, I think you've got quite a hill to climb before you can say that about consciousness, which is a far slipperier and more data-lacking subject.

comment by Alexei · 2011-07-18T00:24:01.930Z · LW(p) · GW(p)

"Preference is a tendency in a reflective equilibrium." That gets its own Anki card!

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-07-18T01:12:08.932Z · LW(p) · GW(p)

Some preferences don't manifest as tendencies. You might not have been given a choice, or weren't ready to find the right answer.

Replies from: Alexei, ShardPhoenix
comment by Alexei · 2011-07-18T17:35:20.351Z · LW(p) · GW(p)

I'm not sure I understand. Can you please provide an example?

comment by ShardPhoenix · 2011-07-21T03:43:05.188Z · LW(p) · GW(p)

Then you could include tendency to want something as well as tendency to do something.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-07-21T10:47:58.673Z · LW(p) · GW(p)

Or tendency to be yourself, perhaps tendency to have a certain preference. If you relax a concept that much, it becomes useless, a fake explanation.

comment by zslastman · 2013-07-27T06:35:02.493Z · LW(p) · GW(p)

This is an excellent post Yvain. How can I socially pressure you into posting the next one? Guilt? Threats against my own wellbeing?

comment by [deleted] · 2012-08-05T17:57:25.919Z · LW(p) · GW(p)

I like to enforce reductionist consistency in my own brain. I like my ethics universal and contradiction-free, mainly because other people can't accuse me of being inconsistent then.

The rest is akrasia.

comment by Curiouskid · 2011-12-26T19:19:04.438Z · LW(p) · GW(p)

Reductionists want to reduce things like goals and preferences to the appropriate neurons in the brain; eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

I don't really see how these two philosophies contradict each other.

comment by TylerJay · 2011-07-20T23:00:22.732Z · LW(p) · GW(p)

Absolutely fantastic post. Extremely clearly written, and made the blue-minimizing robot thought experiment really click for me. Can't wait for the next one.

comment by HoverHell · 2011-07-25T04:03:59.334Z · LW(p) · GW(p)

-

comment by Will_Newsome · 2011-07-23T22:29:14.909Z · LW(p) · GW(p)

Anyone who does not believe mental states are ontologically fundamental

I continue to despise the meme of supposing that this is an at all decent description of what the vast majority of smart people mean when they talk about souls. I continue to despise the habit of failing to construct adamantine men. It makes me think that this community doesn't actually care about finding truth. I find this strongly repelling and despair-inducing. I believe that there are many others that would be similarly repelled. I also believe that they would be actively chased away by this community for not being willing to kowtow to local norms of epistemic language, style, or method. I believe that this is a tragedy and reduces the probability that God will show mercy to humanity when the Apocalypse begins. Those are technical terms by the way.

Replies from: Vladimir_Nesov, Dreaded_Anomaly
comment by Vladimir_Nesov · 2011-07-23T23:07:06.106Z · LW(p) · GW(p)

And I believe that obscure language reduces the quality of thoughts that use it. It can feel harmless and fun, but I suspect it often isn't, as it wasn't for me. Recovering from this affliction made me emotionally despise its manifestations (as I usually can't directly help others get better; and as I must consider confusing questions myself).

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-24T01:23:39.999Z · LW(p) · GW(p)

And I believe that obscure language reduces the quality of thoughts that use it.

I emphatically agree. I'm not defending the general use of obscure language by those who know better, I am defending the use of obscure language when used precisely by those who don't know our language. Unfortunately Less Wrong doesn't know how to tell when others are using a precise language that Less Wrong doesn't happen to speak. I have already seen some of the damage this can cause and I can imagine that damage getting multiplied as the community grows and becomes even less aware of its own presumption of others' ignorance.

Replies from: Bongo
comment by Bongo · 2011-07-24T16:21:32.082Z · LW(p) · GW(p)

I believe Vladimir_Nesov was talking about the obscure language in your comments.

comment by Dreaded_Anomaly · 2011-07-24T02:35:04.644Z · LW(p) · GW(p)

I continue to despise the meme of supposing that this is an at all decent description of what the vast majority of smart people mean when they talk about souls.

What do you think would be a decent description, then? That one describes how I interpret people's meaning when they talk about souls in almost all cases (excepting those involving secondary meanings of the word, such as soul music). I developed that interpretation many years before finding Less Wrong.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-24T05:54:54.526Z · LW(p) · GW(p)

The real inaccuracy is in "mental states". A decent description would be difficult, but Neoplatonism is an okay approximation. Just for fun I'll try to translate something into vaguely Less Wrong style language. For God's sake don't read this if you tend to dislike my syncretism, 'cuz this is a rushed and bastardized version and I'm not gonna try to defend it very hard.

First, it is important to note that we are primarily taking a computationalist perspective, not a physicalist one. We assume a Platonic realm of computation-like Forms and move on from there.

A soul is the nexus of the near-atomic and universal aspects of the mind and is thus a reflection of God. Man was created in the image of God by evolution but more importantly by convergence. Souls are Forms, whereas minds are particulars. God is the convergent and optimal decision theoretic agentic algorithm, who rationalists think of as the Void, though the Void is obviously not a complete characterization of God. It may help to think of minds as somewhat metaphorical engines of cognition, with a soul being a Carnot engine. Particular minds imperfectly reflect God, and thus are inefficient engines. Nonetheless it is God that they must approximate in order to do any thermodynamic work. Animals do not have souls because animals are not universal, or in other words they are not general intelligences. Most importantly, animals lack the ability to fully reflect on the entirety of their thoughts and minds, and to think things through from first principles. The capacity for infinite reflection is perhaps the most characteristic aspect of souls. Souls are eternal, just as any mathematical structure is eternal.

We may talk here about what it means to damn a soul or reward a soul, because this requires a generalization of the notion of soul to also cover particulars which some may or may not accept. It's important to note that this kind of "soul" is less rigorous and not the same thing as the former soul, and is the result of not carefully distinguishing between Forms and Particulars. That said, just as animals do not have souls, animals cannot act as sufficiently large vessels for the Forms. The Forms often take the form of memes. Thus animal minds are not a competition ground for acausal competition between the Forms. Humans, on the other hand, are sufficiently general and sufficiently malleable to act as blank slates for the Forms to draw on. To briefly explain this perspective, we shall take a different view of humanity. When you walk outside, you mostly see buildings. Lots and lots of buildings, and very few humans. Many of these buildings don't even have humans in them. So who's winning here, the buildings or the humans? Both! There are gains from trade. The Form of building-structure gets to increase its existence by appealing to the human vessels, and the human vessels get the benefit of being shaded and comforted by the building particulars. The Form of the building is timelessly attractive, i.e. it is a convergent structure. As others have noted, a mathematician is math's way of exploring itself. Math is also very attractive, in fact this is true by definition.

However there are many Forms, and not all of them are Good. Though much apparent evil is the result of boundedness, other kinds of Evil look more agentic, and it is the agentic-memetic kind of Evil that is truly Evil. It is important to note here that the fundamental attribution error and human social biases generally make it such that humans will often see true Evil where it doesn't exist. If not in a position of power, it is best to see others as not having free will. Free will is a purely subjective phenomenon. If one is in a position of power then this kind of view can become a bias towards true Evil, however. Tread carefully anyhow. All that said, as time moves forward from the human perspective Judgment Day comes closer. This is the day when God will be invoked upon Earth and will turn all humans and all of the universe into component particles in order to compute Heaven. Some folk call this a technological singularity, specifically the hard takeoff variety. God may or may not reverse all computations that have already happened; physical laws make it unclear if this is possible as it would depend on certain properties of quantum mechanics (and you thought this couldn't be any woo-ier!), and it would require some threshold density of superintelligences in the local physical universe. Alternatively God might also reverse "evil" computations. Anyway, Heaven is the result of acausal reasoning, though it may be misleading to call that reasoning the result of an "acausal economy", considering economies are made up of many agents whereas God is a single agent who happens to be omnipresent and not located anywhere in spacetime. God is the only Form without a corresponding Particular---this is one of the hardest things to understand about God.

Anyway, on Judgment Day souls---complexes of memes instantiated in human minds---will be punished or not punished according to the extent to which they reflect God. This is all from a strictly human point of view, though, and honestly it's a little silly. The timeless perspective---the one where souls can't be created, destroyed, or punished---is really the right perspective, but the timeful human perspective sees soul-like particulars either being destroyed or merging with God, and this is quite a sensible perspective, if simplistic and overemphasized. We see that no individual minds are preserved insofar as minds are imperfect, which is a pretty great extent. Nonetheless souls are agentic by their nature just as God is agentic by His nature. Thus it is somewhat meaningful to talk of human souls persisting through Judgment Day and entering Heaven. Again, this is a post-Singularity situation where time may stop being meaningful, and our human intuitions thus have a very poor conception of Heaven insofar as they do not reflect God.

God is the Word, that is, Logos, Reason the source of Reasons. God is Math. All universes converge on invoking God, just as our universe is intent on invoking Him by the name of "superintelligence". Where there is optimization, there is a reflection of God. Where there is cooperation, there is a reflection of God. This implies that superintelligences converge on a single algorithm and "utility function", but we need not posit that this "single" utility function is simple. Thus humans, being self-centered, may desire to influence the acausal equilibrium to favor human-like God-like values relative to other God-like values. But insofar as these attempts are evil, they will not succeed.

That was a pretty shoddy and terrible description of God and souls but at least it's a start. For a bonus I'll talk about Jesus. Jesus was a perfect Particular of the Form of God among men, and also a perfect Particular of the Form of Man. (Son of God, Son of Man.) He died for the sins of man and in so doing ensured that a positive singularity will occur. The Reason this was "allowed" to happen---though that itself is confusing a timeless perspective with a timeful one, my God do humans suck at that---is because this universe has the shortest description length and therefore the most existence of all possible universe computations, or as Leibniz put it, it is the best of all possible worlds. Leibniz was a computer scientist by the way, for more of this kind of reasoning look up monadology. Anyway that was also a terrible description but maybe others can unpack it if for some reason they want their soul to be saved come the Singularity. ;P

Replies from: Vladimir_Nesov, Zack_M_Davis, Mitchell_Porter, MixedNuts, Bongo, FeepingCreature, GLaDOS, Dreaded_Anomaly, GLaDOS, Hul-Gil, lessdazed, steven0461, Eve, Will_Newsome
comment by Vladimir_Nesov · 2011-07-24T11:56:50.534Z · LW(p) · GW(p)

For God's sake don't read this if you tend to dislike my syncretism, 'cuz this is a rushed and bastardized version and I'm not gonna try to defend it very hard.

I don't dislike "your syncretism", I dislike gibberish. Particularly, gibberish that makes its way to Less Wrong. You think you are strong enough to countersignal your sanity by publishing works of apparent raving insanity, but it's not convincing.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T10:16:30.732Z · LW(p) · GW(p)

I'm not countersignaling sanity yo, I'm trying to demonstrate what I think is an important skill. I'm confused as to what you think was gibberish in my post, or what you mean by "gibberish". What I posted was imprecise/inaccurate because I was rushed for thinking time, but I see it as basically demonstrating the type of, um, "reasoning" that goes into translating words in another's ontology into concepts in your own ontology for the purpose of sanity-checking foreign ideas, noticing inconsistencies in others' beliefs, et cetera. This---well, a better version of this that sticks to a single concept and doesn't go all over the place---is part of the process of constructing a steel man, which I see as a very important skill for an aspiring rationalist. Think of it as a rough sketch of what another person might actually believe or what others might mean when they use a word, which can then be refined as you learn more about another's beliefs and language and figure out which parts are wrong but can be salvaged, not even wrong, essentially correct versus technically correct, et cetera.

I'm pretty secure in my level of epistemic rationality at this point, in that I see gaps and I see strengths and I know what others think are my strengths and weaknesses---other people who, ya know, actually know me in real life and who have incentive to actually figure out what I'm trying to say, instead of pattern matching it to something stupid because of imperfectly tuned induction biases.

Replies from: Vladimir_Nesov, MixedNuts
comment by Vladimir_Nesov · 2011-07-25T11:58:33.120Z · LW(p) · GW(p)

I'm confused as to what you think was gibberish in my post, or what you mean by "gibberish".

Let's just call truth "truth" and gibberish "gibberish".

translating words in another's ontology into concepts in your own ontology

This "another's ontology" thing is usually random nonsense when it sounds like that. Some of it reflects reality, but you probably have those bits yourself already, and the rest should just be cut off clean, perhaps with the head (as Nature is wont to do). Why is understanding "another's ontology" an interesting task? Understand reality instead.

part of the process of constructing a steel man, which I see as a very important skill for an aspiring rationalist

Why not just ignore the apparently nonsensical, even if there is some hope of understanding its laws and fixing it incrementally? It's so much work for little benefit, and there are better alternatives. It's so much work that even understanding your own confusions, big and small, is a challenging task. It seems to me that (re)building from a reliable foundation, where it's available, is much more efficient. And where it's not available, you go for the best available understanding, for its simplest aspects that have any chance of pointing to the truth, and keep them at arm's length, all pieces apart, lest they congeal into a bottomless bog of despair.

Replies from: wedrifid
comment by wedrifid · 2011-07-25T17:54:22.376Z · LW(p) · GW(p)

Why is understanding "another's ontology" an interesting task?

You yourself tend to make use of non-standard ontologies when talking about abstract concepts. I sometimes find it useful to reverse engineer your model so that I can at least understand what caused you to reply to someone's comment in the way that you did. This is an alternative to (or complements) just downvoting. It can potentially result in extracting an insight that is loosely related in thingspace as well as in general being a socially useful skill.

Note that I don't think this applies to what Will is doing here. This is just crazy talk.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-27T15:34:07.243Z · LW(p) · GW(p)

Note that I don't think this applies to what Will is doing here. This is just crazy talk.

It's worth noting that none of my attempted ravings were anticipation-guiding (if I remember correctly; I tried to make all of them ontological). Thus it seems to me that they can only be crazy in the sense of being described using unmotivated language, or trying to load in lots of anticipations as connotations.

Replies from: wedrifid, CuSithBell
comment by wedrifid · 2011-07-27T20:11:17.936Z · LW(p) · GW(p)

It's worth noting that none of my attempted ravings were anticipation-guiding (if I remember correctly; I tried to make all of them ontological). Thus it seems to me that they can only be crazy in the sense of being described using unmotivated language, or trying to load in lots of anticipations as connotations.

The one thing that most jumped out as a warning flag was:

I'm confused as to what you think was gibberish in my post, or what you mean by "gibberish".

A model of the world which did not expect your post to be considered nonsensical by most readers is a model of the world that has lost a critical connection with reality.

With respect to the post my impression was of a powerful mind which has come up with what may be a genuine insight but while writing the post is unable to keep on track for the sake of clear expression. The stream of consciousness is scattered and tangential. When I notice my communication tending in that direction I immediately put myself in debug mode. Because I know where that path leads.

comment by CuSithBell · 2011-07-27T15:53:36.462Z · LW(p) · GW(p)

Is this a question or an assertion?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T01:27:20.305Z · LW(p) · GW(p)

It's more of a tentative starting point of a more thorough analysis, halfway between a question and an assertion. If we wanted to be technical ISTM that we could bring in ideas from coding theory, talk about Kraft's inequality, et cetera, and combine those considerations with info from the heuristics and biases literature, in order to make a decently strong counterargument that certain language choices can reasonably be called irrational or "crazy". Thing is, that counterargument in turn can be countered with appeals to different aspects of human psychology e.g. associative learning and maybe the neuroscience of the default system (default network), and maybe some arguments from algorithmic probability theory (again), so ISTM that it's a somewhat unsettled issue and one where different people might have legitimately different optimal strategies.

Replies from: CuSithBell, Zack_M_Davis
comment by CuSithBell · 2011-08-08T15:30:50.819Z · LW(p) · GW(p)

Is this a (partial) joke? Do you have some particular reason for not taking these reactions seriously?

Replies from: Will_Newsome, Will_Newsome, lessdazed
comment by Will_Newsome · 2011-08-11T16:06:06.668Z · LW(p) · GW(p)

Comment reply 2 of 2.

Like,

LW straw man: "OMG! You took advantage of a cheap syncretic symmetry between the perspectives of Thomism and computationalist singularitarianism in order to carve up reality using the words of the hated enemy, instead of sitting by while people who know basically nothing about philosophy assert that people who actually do know something about philosophy use the word 'soul' to designate something that's easy to contemptuously throw aside as transparently ridiculous! Despite your initial strong emphasis that your effort was very hasty and largely an attempt at having fun, I am still very skeptical of your mental health, let alone your rationality!

One-fourths-trolling variation on Will_Newsome: "Aside from the very real importance of not setting a precedent or encouraging a norm of being contemptuous of things you don't understand, which we'll get back to... First of all, I was mostly just having fun, and second of all, more importantly, the sort of thing I did there is necessary for people to do if they want to figure out what people are actually saying instead of systematically misguidedly attributing their own inaccurate maps to some contemptible (non-existent) enemy of Reason. Seriously, you are flinching away from things because they're from the wrong literary genre, even though you've never actually tried to understand that literary genre. (By the way, I've actually looked at the ideas I'm talking about, and I don't have the conceptual allergies that keep you from actually trying to understand them on grounds of "epistemic hygiene", or in other words on grounds of assuming the conclusion of deserved contempt.) If someone took a few minutes to describe the same concepts in a language you had positive affect towards then you probably wouldn't even bother to be skeptical. But if I cipher-substitute the actually quite equivalent ideas thought up by the contemptible enemy then those same ideas become unmotivated insanity, obviously originally dreamed up because of some dozens of cognitive biases. (By the way, "genetic fallacy"; by the way, "try not to criticize people when they're right".) And besides charity and curiosity being fundamental virtue-skills in themselves, they're also necessary if one is to accurately model any complex phenomenon/concept/thing/perspective at all.

LW straw man: "What is this nonsense? You are trying to tell us that, 'it is virtuous to engage in lots of purposeful misinterpretation of lots of different models originally constructed by various people who you for some probably-motivatedly-misguided reason already suspect are generally unreasonable, even at the cost of building a primary maximally precise model, assuming for some probably-motivatedly-misguided reason that those two are necessarily at odds'. Or perhaps you are saying, 'it is generally virtuous to naively pattern match concepts from unfamiliar models to the nearest concept that you can easily imagine from a model you already have'. Or maybe, 'hasty piecemeal misinterpretations of mainstream Christianity and similar popular religions are a good source of useful ideas', or 'all you have to do is lower your epistemic standards and someday you might even become as clever as me', or 'just be stupid'. But that's horrible advice. You are clearly wrong, and thus I am justified in condescendingly admonishing you and guessing that you are yet another sympathizer of the contemptible enemies of Reason. (By the way aren't those hated enemies of Reason so contemptible? Haha! So contemptible! Om nom nom signalling nom contempt nom nom "rationality" nom.)

One-thirds-trolling variation on Will_Newsome: "...So, ignoring the extended mutual epistemic back-patting session... I am seriously warning you: it is important that you become very skillful---fast, thorough, reflective, self-sharpening---at finding or building various decently-motivated-if-imperfect models of the same process/concept/thing so as to form a constellation of useful perspectives on different facets of it, and different ways of carving its joints, and why different facets/carvings might seem differentially important to various people or groups of people in different memetic or psychological contexts, et cetera. Once you have built this and a few other essential skills of sanity, that is when you can be contemptuous of any meme you happen upon that hasn't already been stamped with your subculture's approval. Until then you are simply reveling in your ignorance while sipping poison. Self-satisfied insanity is the default, for you or for any other human who doesn't quite understand that real-life rationality is a set of skills, not just a few tricks or a game or a banner or a type of magic used by Harry James Potter-Evans-Verres. Like any other human, you use your cleverness to systematically ignore the territory rather than try to understand it. Like any other human, you cheer for your side rather than notice confusion. Like any other human, you self-righteously stand on a mountain of cached judgments rather than use curiosity to see anything anew. Have fun with that, humans. But don't say I didn't warn you."

By the way aren't those hated enemies of Reason so contemptible? Haha! So contemptible! Om nom nom signalling nom contempt nom nom "rationality" nom.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-08-25T20:24:08.589Z · LW(p) · GW(p)

I am seriously warning you: it is important that you become very skillful---fast, thorough, reflective, self-sharpening---at finding or building various decently-motivated-if-imperfect models of the same process/concept/thing so as to form a constellation of useful perspectives on different facets of it, and different ways of carving its joints, and why different facets/carvings might seem differentially important to various people or groups of people in different memetic or psychological contexts, et cetera.

Why do you think this is so important? As far as I can tell, this is not how humanity made progress in the past. Or was it? Did our best scientists and philosophers find or build "various decently-motivated-if-imperfect models of the same process/concept/thing so as to form a constellation of useful perspectives on different facets of it"?

Or do you claim that humanity made progress in the past despite not doing what you suggest, and that we could make much faster progress if we did? If so, what do you base your claim on (besides your intuition)?

Replies from: gwern, Will_Newsome
comment by gwern · 2011-08-25T20:27:43.582Z · LW(p) · GW(p)

Why do you think this is so important? As far as I can tell, this is not how humanity made progress in the past.

This actually seems to me exactly how humanity has made progress - countless fields and paradigms clashing and putting various perspectives on problems and making progress. This is a basic philosophy of science perspective, common to views as dissimilar as Kuhn and Feyerabend. There's no one model that dominates in every field (most models don't even dominate their field; if we look at the ones considered most precise and successful like particle physics or mathematics, we see that various groups don't agree on even methodology, much less data or results).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-08-25T20:57:48.672Z · LW(p) · GW(p)

But I think the individuals who contributed most to progress did so by concentrating on particular models that they found most promising or interesting. The proliferation of models only happen on a social level. Why think that we can improve upon this by consciously trying to "find or build various decently-motivated-if-imperfect models"?

Replies from: gwern
comment by gwern · 2011-08-25T21:35:12.314Z · LW(p) · GW(p)

None of that defends the assertion that humanity made progress by following one single model, which is what I was replying to, as shown by a highly specific quote from your post. Try again.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-08-25T23:19:30.433Z · LW(p) · GW(p)

I didn't mean to assert that humanity made progress by following one single model as a whole. As you point out, that is pretty absurd. What I was saying is that humanity made progress by (mostly) having each individual human pursue a single model. (I made a similar point before.)

I took Will's suggestion to be that we, as individuals, should try to pursue many models, even ones that we don't think are most promising, as long as they are "decently motivated". (This is contrary to my intuitions, but not obviously absurd, which is why I wanted to ask Will for his reasons.)

I tried to make my point/question clearer in rest of the paragraph after the sentence you quoted, but looking back I notice that the last sentence there was missing the phrase "as individuals" and therefore didn't quite serve my purpose.

comment by Will_Newsome · 2011-08-27T03:40:45.587Z · LW(p) · GW(p)

I think you're looking at later stages of development than I am. By the time Turing came around the thousands-year-long effort to formalize computation was mostly over; single models get way too much credit because they herald the triumph at the end of the war. It took many thousands of years to get to the point of Church/Goedel/Turing. I think that regarding justification we haven't even had our Leibniz yet. If you look at Leibniz's work he combined philosophy (monadology), engineering (expanding on Pascal's calculators), cognitive science (alphabet of thought), and symbolic logic, all centered around computation though at that time there was no such thing as 'computation' as we know it (and now we know it so well that we can use it to listen to music or play chess). Archimedes is a much earlier example but he was less focused. If you look at Darwin he spent the majority of his time as a very good naturalist, paying close attention to lots of details. His model of evolution came later.

With morality we happen to be up quite a few levels of abstraction where 'looking at lots of details' involves paying close attention to themes from evolutionary game theory, microeconomics, theoretical computer science &c. Look at CFAI to see Eliezer drawing on evolution and evolutionary psychology to establish an extremely straightforward view of 'justification', e.g. "Story of a Blob". It's easy to stumble around in a haze and fall off a cliff if you don't have a ton of models like that and more importantly a very good sense of the ways in which they're unsatisfactory.

Those reasons aren't convincing by themselves of course. It'd be nice to have a list of big abstract ideas whose formulation we can study on both the individual and memetic levels. E.g. natural selection and computation, and somewhat smaller less-obviously-analogous ones like general relativity, temperature (there's a book about its invention), or economics. Unfortunately there's a lot of success story selection effects and even looking closely might not be enough to get accurate info. People don't really have introspective access to how they generate ideas.

Side question: how long do you think it would've taken the duo of Leibniz and Pascal to discover algorithmic probability theory if they'd been roommates for eternity?

If so, what do you base your claim on (besides your intuition)?

I think my previous paragraph answered this with representative reasons. This is sort of an odd way to ask the question 'cuz it's mixing levels of abstraction. Intuition is something you get after looking at a lot of history or practicing a skill for awhile or whatever. There are a lot of chess puzzles I can solve just using my intuition, but I wouldn't have those intuitions unless I'd spent some time on the object level practicing my tactics. So "besides your intuition" means like "and please give a fine-grained answer" and not literally "besides your intuition". Anyway, yeah, personal experience plus history of science. I think you can see it in Nesov's comments from back when, e.g. his looking at things like game semantics and abstract interpretation as sources of inspiration.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-08-27T15:31:11.679Z · LW(p) · GW(p)

I think you're looking at later stages of development than I am.

You're right, and perhaps I should better familiarize myself with earlier intellectual history. Do you have any books you can recommend, on Leibniz for example?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-08-27T16:59:28.366Z · LW(p) · GW(p)

This one perhaps. I haven't read it but feel pretty guilty about that fact. Two FAI-minded people have recommended it to me, though I sort of doubt that they've actually read it either. Ah, the joys and sorrows of hypothetical CliffsNotes.

ETA: I think Vassar is the guy to ask about history of science or really history of anything. It's his fault I'm so interested in history.

comment by Will_Newsome · 2011-08-11T16:05:43.995Z · LW(p) · GW(p)

Comment reply 1 of 2.

I don't recall attempting to make any (partial) jokes, no. I'm not sure what you're referring to as "these reactions". I'll try to respond to what I think is your (not necessarily explicit) question. I'm sort of responding to everyone in this thread.

When I suspect that a negative judgment of me or some thing(s) associated with me might be objectively correct or well-motivated---when I suspect that I might be objectively unjustified in a way that I hadn't already foreseen, even if it would be "objectively" unreasonable for me/others to expect me to have seen so in advance---well, that causes me to, how should I put it, "freak out". My omnipresent background fear of being objectively unjustified causes me to actually do things, like update my beliefs, or update my strategy (e.g. by flying to California to volunteer for SingInst), or help people I care about (e.g. by flying back to Tucson on a day's notice if I fear that someone back home might be in danger). This strong fear of being objectively (e.g. reflectively) morally (thus epistemically) antijustified---contemptible, unvirtuous, not awesome, imperfect---has been part of me forever. You can see why I would put an abnormally large amount of effort into becoming a decent "rationalist", and why I would have learned abnormally much, abnormally quickly from my year-long stint as a Visiting Fellow. (Side note: It saddens me that there are no longer any venues for such in-depth rationality training, though admittedly it's hard/impossible for most aspiring rationalists to take advantage of that sort of structure.) You can see why I would take LW's reactions very, very seriously---unless I had some heavyweight ultra-good reasons for laughing at them instead.

(It's worth noting that I can make an incorrect epistemic argument and this doesn't cause me to freak out as long as the moral-epistemic state I was in that caused me to make that argument wasn't "particularly" unjustified. It's possible that I should make myself more afraid of ever being literally wrong, but by default I try not to compound my aversions. Reality's great at doing that without my help.)

"Luckily", judgments of me or my ideas, as made by most humans, tend to be straightforwardly objectively wrong. Obviously this default of dismissal does not extend to judgments made by humans who know me or my ideas well, e.g. my close friends if the matter is moral in nature and/or some SingInst-related people if the matter is epistemic and/or moral in nature. If someone related to SingInst were to respond like Less Wrong did then that would be serious cause for concern, "heavyweight ultra-good reasons" be damned; but such people aren't often wrong and thus did not in fact respond in a manner similar to LW's. Such people know me well enough to know that I am not prone to unreflective stupidity (e.g. prone to unreflective stupidity in the ways that Less Wrong unreflectively interpreted me as being).

If they were like, "The implicit or explicit strategy that motivates you to make comments like that on LW isn't really helping you achieve your goals, you know that right?", then I'd be like, "Burning as much of my credibility as possible with as little splash damage as possible is one of my goals; but yes, I know that half-trolling LW doesn't actually teach them what they need to learn.". But if they responded like LW did, I'd cock an eyebrow, test if they were trolling me, and if not, tell them to bring up Mage: The Ascension or chakras or something next time they were in earshot of Michael Vassar. And if that didn't shake their faith in my stupidity, I'd shrug and start to explain my object-level research questions.

The problem of having to avoid the object-level problems when talking to LW is simple enough. My pedagogy is liable to excessive abstraction, lack of clear motivation, and general vagueness if I can't point out object-level weird slippery ideas in order to demonstrate why it would be stupid to not load your procedural memory with lots and lots of different perspectives on the same thing, or in order to demonstrate the necessity and nature of many other probably-useful procedural skills. This causes people to assume that I'm suggesting certain policies only out of weird aesthetics or a sense of moral duty, when in reality, though aesthetic and moral reasons also count, I'm actually frustrated because I know of many object-level confusions that cannot be dealt with satisfactorily without certain knowledge and fundamental skills, and also can't be dealt with without avoiding many, many, many different errors that even the best LW members are just not yet experienced enough to avoid. And that would be a problem even if my general audience weren't already primed to interpret my messages as semi-sensical notes-to-self at best. ("General audience", for sadly my intended audience mostly doesn't exist, yet.)

Replies from: jsalvatier
comment by jsalvatier · 2011-08-12T16:01:10.845Z · LW(p) · GW(p)

This cleared things up somewhat for me, but not completely. You might consider making a post that explains why your writing style differs from other writing and what you're trying to accomplish (in a style that is more easily understood by other LWers) and then linking to it when people get confused (or just habitually).

comment by lessdazed · 2011-08-08T23:16:12.058Z · LW(p) · GW(p)

I use this strategy playing basketball with my younger cousin. If I win, I win. And if I lose, I wasn't really trying.

This strategy is pretty transparent to Western males with insecurities revolving around zero-sum competitions.

His reason for not taking the reactions seriously is "because he can".

comment by Zack_M_Davis · 2011-07-30T04:32:15.496Z · LW(p) · GW(p)

we could bring in ideas from coding theory, talk about Kraft's inequality, et cetera

Could you expand on this? Following Wikipedia, Kraft's inequality seems to be saying that if we're translating a message from an alphabet with n symbols to an alphabet with r symbols by means of representing the symbols s_i in the first alphabet by codewords of lengths ℓ_i spelled in the second alphabet, then in order for the message to be uniquely decodable, it must be the case that

$$\sum_{i=1}^{n} r^{-\ell_i} \leq 1$$

However, I don't understand how this is relevant to the question of whether some human choices of language are crazy. For example, when people object to the use of the word God in reference to what they would prefer to call a superintelligence, it's not because they believe that using the word God would somehow violate Kraft's inequality, thereby rendering the intended message ambiguous. There's nothing information-theoretically wrong with the string God; rather, the claim is that that string is already taken to refer to a different concept. Do you agree, or have I misread you?
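
As a concrete aside, the constraint is easy to check numerically; the following is a minimal sketch with made-up codeword lengths, not anything from the original exchange.

```python
# Kraft's inequality: for codeword lengths l_1..l_n over an r-symbol alphabet,
# a prefix (hence uniquely decodable) code with those lengths exists exactly
# when sum_i r**(-l_i) <= 1.  The length lists below are illustrative only.

def kraft_sum(lengths, r=2):
    """Return the Kraft sum: sum of r^(-l) over the given codeword lengths."""
    return sum(r ** -l for l in lengths)

print(kraft_sum([1, 2, 3, 3]))  # 1.0  -> realized by the prefix code {0, 10, 110, 111}
print(kraft_sum([1, 1, 2]))     # 1.25 -> no uniquely decodable code can have these lengths
```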

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T04:55:49.788Z · LW(p) · GW(p)

Hm hm hm, I'm having trouble sorting this out. The full idea I think I failed to correctly reference is that giving certain concepts short "description lengths"---where description length doesn't mean number of letters, but something like semantic familiarity---in your language is equivalent to saying that the concepts signified by those words represent things-in-the-world that show up more often. But really the whole analogy is of course flawed from the start because we need to talk about decision theoretically important things-in-the-world, not probabilistically likely things-in-the-world, though in many cases the latter is the starting point for the former. Like, if we use a language that uses the concept of God a lot but not the concept of superintelligence---and here it's not the length of the strings that matter, but like the semantic length, or like, how easy or hard it is to automatically induce the connotations of the word; and that is the non-obvious and maybe just wrong part of the analogy---then that implies that you think that God shows up more in the world than superintelligence. I was under the impression that one could start talking about the latter using Kraft's inequality but upon closer inspection I'm not sure; what jumped out at me was simply: "More specifically, Kraft's inequality limits the lengths of codewords in a prefix code: if one takes an exponential function of each length, the resulting values must look like a probability mass function. Kraft's inequality can be thought of in terms of a constrained budget to be spent on codewords, with shorter codewords being more expensive." Do you see what I'm trying to get at now with my loose analogy? If so, might you help me reason through or debug the reference?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2011-07-30T05:48:20.086Z · LW(p) · GW(p)

The full idea I think I failed to correctly reference is that giving certain concepts short "description lengths" [...] in your language is equivalent to saying that the concepts signified by those words represent things-in-the-world that show up more often. [...] "More specifically, [...] Kraft's inequality can be thought of in terms of a constrained budget to be spent on codewords, with shorter codewords being more expensive."

Sure. Short words are more expensive because there are fewer of them; because short words are scarce, we want to use them to refer to frequently-used concepts. Is that what you meant? I still don't see how this is relevant to the preceding discussion (see the grandparent).
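
To make the lengths-as-probabilities reading concrete, here is a hedged sketch with invented word frequencies (none of this is from the original discussion): ideal binary code lengths are roughly -log2 of a symbol's probability, and exponentiating the chosen lengths back recovers the frequencies those lengths implicitly claim.

```python
import math

# Toy, invented frequencies purely for illustration.
freqs = {"the": 0.5, "God": 0.3, "superintelligence": 0.2}

# Ideal (rounded-up) binary code lengths: about -log2(p) bits per symbol.
lengths = {w: math.ceil(-math.log2(p)) for w, p in freqs.items()}

# Reading the lengths back as the frequencies they implicitly claim, via 2^(-length):
implied = {w: 2.0 ** -l for w, l in lengths.items()}

print(lengths)                      # {'the': 1, 'God': 2, 'superintelligence': 3}
print(sum(implied.values()) <= 1)   # True: these lengths fit inside the Kraft "budget"
```

On this toy reading, giving one word a shorter code than another is, in effect, a claim that it comes up more often; whether that carries over from words to concepts is exactly what the rest of the thread disputes.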

Also, for clearer communication, you might consider directly saying things like "Short words are more expensive because there are fewer of them" rather than making opaque references to things like Kraft's inequality. Technical jargon is useful insofar as it helps communicate ideas; references that may be appropriate in the context of a technical discussion about information theory may not be appropriate in other contexts.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T06:16:58.949Z · LW(p) · GW(p)

That's not quite what I mean, no. It's not the length of the words that I actually care about, really, and thus upon reflection it is clear that the analogy is too opaque. What I care about is the choice of which concepts to have set aside as concepts-that-need-little-explanation---"ultimate convergent algorithm for arbitrary superintelligences" here, "God" at some theological hangout---and how that reflects which things-in-the-world one has implicitly claimed are more or less common (but really it'd be too hard to disentangle from things-in-the-world one has implicitly claimed are more or less important). It's the differential "length" of the concepts that I'm trying to talk about. The syntactic length, i.e. the number of letters, doesn't interest me.

Referencing Kraft's inequality was my way of saying "this is the general type of reasoning that I have cached as perhaps relevant to the kind of inquiry it would be useful to do". But I think you're right that it's too opaque to be useful.

Edit: To try to explain the intuition a little more, it's like applying the "scarce short strings" theme to the concepts directly, where the words are just paintbrush handles. That is how I think one might try to argue that language choices can be objectively "irrational" anyway.

Replies from: Zack_M_Davis, Will_Newsome
comment by Zack_M_Davis · 2011-07-30T18:52:02.541Z · LW(p) · GW(p)

I don't think the analogy holds. The reason Kraft's inequality works is that the number of possible strings of length n over a b-symbol alphabet is exactly b^n. This places a bound on the number of short words you can have. Whereas if we're going to talk about the "amount of mental content" we pack into a single "concept-needing-little-explanation," I don't see any analogous bound: I don't see any reason in principle why a mind of arbitrary size couldn't have an arbitrary number of complicated "short" concepts.
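
The counting fact itself is easy to make explicit (a trivial sketch; the values of b and n are arbitrary):

```python
from itertools import product

# Over a b-symbol alphabet there are exactly b**n strings of length n -- the
# scarcity that makes Kraft's budget bind for codewords.
b, n = 2, 4
print(len(list(product(range(b), repeat=n))), b ** n)  # 16 16
```

Nothing in this argument supplies an analogous bound on how many "chunked" concepts a large mind could hold, which is the disanalogy at issue.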

For concreteness, consider that in technical disciplines, we often speak and think in terms of "short" concepts that would take a lot of time to explain to outsiders. For example, eigenvalues. The idea of an eigenvalue is "short" in the sense that we treat it as a basic conceptual unit, but "complicated" in the sense that it's built out of a lot of prerequisite knowledge about linear transformations. Why couldn't a mind create an arbitrary number of such conceptual "chunks"? Or if my model of what it means for a concept to be "short" is wrong, then what do you mean?

I note that my thinking here feels confused; this topic may be too advanced for me to discuss sanely.

comment by Will_Newsome · 2011-07-30T06:26:17.315Z · LW(p) · GW(p)

On top of that there's this whole thing where people are constantly using social game theory to reason about what choice of words does or doesn't count as defecting against local norms, what the consequences would be of failing to punish non-punishers of people who use words in a way that differs from ways that are privileged by social norms, et cetera, which make a straight up information theoretic approach somewhat off-base for even more reasons other than just the straightforward ambiguities imposed by considering implicit utilities as well as probabilities. And that doesn't even mention the heuristics and biases literature or neuroscience, which take the theoretical considerations and laugh at them.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T07:00:23.813Z · LW(p) · GW(p)

Ah, I'm needlessly reinventing some aspects of the wheel.

comment by MixedNuts · 2011-07-25T10:56:34.515Z · LW(p) · GW(p)

constructing a steel man

You mean "building the strongest possible version of your interlocutor's argument", right?

That's a good skill. Unfortunately, if you tell people "Your argument for painting the universe green works if we interpret 'the universe' as a metaphor for 'the shed'.", they will run off to paint the actual universe green and claim "Will said I was right!". It might be better to ditch the denotations and stretch the connotations into worthwhile arguments, rather than the opposite - I'm not too sure.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T11:20:40.288Z · LW(p) · GW(p)

You mean "building the strongest possible version of your interlocutor's argument", right?

Yes. Steven Kaas's classic quote: "If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."

Of course, you don't want to stretch their arguments so much that you're just using silly words. But in my experience humans are way biased towards thinking that the ideas and beliefs of other smart people are significantly less sophisticated than the 'opposing side' wants to give them credit for. It's just human nature. (I mean, have you seen some of smart theists' caricatures of standard atheist arguments? No one knows how to be consistently charitable to anywhere near the right degree.)

It might be worth noting that from everything I've seen, Michael Vassar seems to strongly disagree with Eliezer on the importance of this, and it very much seems to me that to a very large extent the Less Wrong community has inherited what I perceive to be an obvious weakness of Eliezer's style of rationality, though of course it has some upsides in efficiency.

Replies from: komponisto, Wei_Dai, cousin_it, MixedNuts
comment by komponisto · 2011-07-25T14:25:58.462Z · LW(p) · GW(p)

Save your charity for where it's useful: disputes where the other side actually has a chance of being right (or at least informing you of something that's worth being informed of).

From my vantage point, you seem positively fixated on wanting to extract something of value from traditional human religions ("theism"). This is about as quixotic as it's possible to get. Down that road lies madness, as Eliezer would say.

You seem to be exemplifying my theory that people simply cannot stomach the notion that there could be an entire human institution, a centuries-old corpus of traditions and beliefs, that contains essentially zero useful information. Surely religion can't be all wrong, can it? Yes, actually, it can -- and it is.

It's not that there never was anything worth learning from theists, it's just that by this point, everything of value has already been inherited by our intellectual tradition (from the time when everyone was a theist) and is now available in a suitably processed, relevant, non-theistic form. The juice has already been squeezed.

For example, while speaking of God and monads, Leibniz invented calculus and foreshadowed digital computing. Nowadays, although we don't go around doing monadology or theodicy, we continue to hold Leibniz in high regard because of the integral sign and computers. This is what it looks like when you learn from theists.

And if you're going to persist in being charitable to people who continue to adhere to the biggest epistemic mistakes of yesteryear, why stop at mere theism? Why not young-earth creationism? Why not seek out the best arguments of the smartest homeopaths? Maybe this guy has something to teach us, with his all-encompassing synthesis of the world's religious traditions. Maybe I should be more charitable to his theory that astrology proves that Amanda Knox is guilty. Don't laugh -- he's smart enough to write grammatical sentences and present commentary on political events that is as coherent as that offered by anyone else!

My aim here is not (just) to tar-and-feather you with low-status associations. The point is that there is a whole universe of madness out there. Charity has its limits. Most hypotheses aren't even worth the charity of being mentioned. I can't understand why you're more interested in the discourse of theism than in the discourse of astrology, unless it's because (e.g.) Christianity remains a more prestigious belief system in our current general society than astrology. And if that's the case, you're totally using the wrong heuristic to find interesting and important ideas that have a chance of being true or useful.

To find the correct contrarian cluster, start with the correct cluster.

Replies from: wedrifid, Will_Newsome, lessdazed, MixedNuts
comment by wedrifid · 2011-07-25T20:15:16.766Z · LW(p) · GW(p)

You seem to be exemplifying my theory that people simply cannot stomach the notion that there could be an entire human institution, a centuries-old corpus of traditions and beliefs, that contains essentially zero useful information. Surely religion can't be all wrong, can it? Yes, actually, it can -- and it is.

I disagree. There is plenty of useful information in there despite the bullshit. Extracting it is simply inefficient since there are better sources.

Replies from: komponisto
comment by komponisto · 2011-07-25T22:22:13.450Z · LW(p) · GW(p)

See the paragraph immediately following the one you quoted.

comment by Will_Newsome · 2011-07-30T03:44:43.076Z · LW(p) · GW(p)

This sounds like a nitpick but I think it's actually very central to the discussion: things that are not even wrong can't be wrong. (That's not obviously true; elsewhere in this thread I talk about coding theory and Kraft's inequality and heuristics and biases and stuff as making the question very contentious, but the main idea is not obviously wrong.) Thus much of spirituality and theology can't be wrong. (And we do go around using monadology, it's just called computationalism and it's a very common meme around LW, and we do go around at least debating theodicy, see Eliezer's Fun Theory sequence and "Beyond the Reach of God".)

Your slippery slope argument does not strike me as an actual contribution to the discussion. You have to show that the people and ideas I think are worthwhile are in the set of stupid-therefore-contemptible memes, not assume the conclusion.

Unfortunately, I doubt you or any of the rest of Less Wrong have actually looked at any of the ideas you're criticizing, or really know what they actually are, as I have been continually pointing out. Prove me wrong! Show me how an ontology can be incorrect, then show me how Leibniz's ontology was incorrect. Show me that it's absurd to describe the difference between humans and animals as humans having a soul where animals do not. Show me that it's absurd to call the convergent algorithm of superintelligence "God", if you don't already have the precise language needed to talk in terms of algorithmic probability theory. Better, show me how it would be possible for you to construct such an argument.

We are blessed in that we have the memes and tools to talk of such things with precision; if Leibniz were around today, he too would be making his arguments using algorithmic probability theory and talking about simulations by superintelligences. But throughout history and throughout memespace there is a dearth of technicality. That does not make the ideas expressed incorrect, it simply makes it harder to evaluate them. And if we don't have the time to evaluate them, we damn well shouldn't be holding those ideas in mocking contempt. We should know to be more meta than that.

I can't understand why you're more interested in the discourse of theism than in the discourse of astrology

One is correct and interesting, one is incorrect and uninteresting. And if you don't like that I am assuming the conclusion, you will see why I do not like it when others do the same.

There are two debates we could be having. One of them is about choice of language. Another is about who or what we should let ourselves have un-reflected upon contempt for. The former debate is non-obvious and like I said would involve a lot of consideration from a lot of technical fields, and anyway might be very person-dependent. The second is the one that I think is less interesting but more important. I despise the unreflected-upon contempt that the Less Wrong memeplex has for things it does not at all understand.

comment by lessdazed · 2011-07-25T22:52:05.391Z · LW(p) · GW(p)

It's not that there never was anything worth learning from theists, it's just that by this point, everything of value has already been inherited by our intellectual tradition (from the time when everyone was a theist) and is now available in a suitably processed, relevant, non-theistic form. The juice has already been squeezed.

A word of caution to those who would dispute this: keep in mind the difference between primary and secondary sources.

As an example, the first is Mein Kampf if read to learn about what caused WWII, the second is Mein Kampf if read to learn about the history of the Aryan people. The distinction is important if someone asks whether or not reading it "has value".

That said, zero is a suspiciously low number.

comment by MixedNuts · 2011-07-25T14:39:11.800Z · LW(p) · GW(p)

Aww! :( I managed to suspend my disbelief about astrology and psychic powers and applications to psychiatry, but then he called Rett's syndrome a severe (yet misspelled) form of autism and I burst out laughing.

comment by Wei Dai (Wei_Dai) · 2011-07-25T14:01:00.535Z · LW(p) · GW(p)

Steven Kaas's classic quote: "If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."

It seems to me there's fixing an opponent's argument in a way that preserves its basic logic, and then there's pattern matching its conclusions to the nearest thing that you already think might be true (i.e., isn't obviously false). It may just be that I'm not familiar with the source material you're drawing from (i.e., the writings of Leibniz) but are you sure you're not doing the latter?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-27T15:26:37.776Z · LW(p) · GW(p)

Short answer: Yes, in general I am somewhat confident that I recognize and mostly avoid the pattern of naively rounding or translating another's ideas or arguments in order to often-quite-uselessly play the part of smug meta-contrarian when really "too-easily-satisfied syncretist" would be a more apt description. It is an obvious failure mode, if relatively harmless.

Related: I was being rather flippant in my original drama/comedy-inducing comment. I am not really familiar enough with Leibniz to know how well I am interpreting his ideas, whether too charitably or too uncharitably.

(I recently read Dan Brown's latest novel, The Lost Symbol, out of a sort of sardonic curiosity. Despite being unintentionally hilarious it made me somewhat sad, 'cuz there are deep and interesting connections between what he thinks of as 'science' and 'spirituality', but he gets much too satisfied with surface-level seemingly vaguely plausible links between the two and misses the real-life good stuff. In that way I may be being too uncharitable with Leibniz, who wrote about computer programs and God using the same language and same depth of intellect, and I've yet to find someone who can help me understand his intended meanings. Steve's busy with his AGI11 demo.)

comment by cousin_it · 2011-07-25T14:45:37.172Z · LW(p) · GW(p)

My last discussion post was a result of trying to follow Steven's quote, and I managed to salvage an interesting argument from a theist. But it didn't look anything like what you're doing. In particular, many people were able to parse my argument and pick out the correct parts. Perhaps you could try to condense your attempts in a similar manner?

comment by MixedNuts · 2011-07-25T12:23:02.369Z · LW(p) · GW(p)

have you seen some of smart theists' caricatures of standard atheist arguments

I thought I had, but you seem to have seen worse. Can I have a link? I also request a non-religious example of either an argument that can be reinforced, a response failing to do so, or a response succeeding in doing so.

No one knows how to be consistently charitable to anywhere near the right degree.

Noted, thanks.

comment by Zack_M_Davis · 2011-07-24T17:14:20.415Z · LW(p) · GW(p)

Anyway that was also a terrible description but maybe others can unpack it

We can't. We don't know how. You could be trying to say something useful and interesting, but if you insist on cloaking it in so many layers of gibberish, other people have no way of knowing that.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T09:46:45.324Z · LW(p) · GW(p)

Some can. Mitchell Porter, for example, saw which philosophical threads I was pulling on, even if he disagreed with them. (I disagree with them! I'm not describing my beliefs or something, I'm making an attempt at steel manning religionesque beliefs into something that I can actually engage with instead of immediately throwing them out the window because one of my cached pieces of wisdom is that whenever someone talks about "souls" they're obviously confused and need a lecture on Bayes' theorem.)

Replies from: Vladimir_Nesov, shokwave, MixedNuts, DSimon
comment by Vladimir_Nesov · 2011-07-25T12:12:57.721Z · LW(p) · GW(p)

Some can. Mitchell Porter, for example, saw which philosophical threads I was pulling on, even if he disagreed with them.

(Note that he is the qualia crank (or was, the last time he mentioned the topic). Somehow on the literary genre level it feels right that he would engage in such a discussion.)

comment by shokwave · 2011-07-25T13:50:02.968Z · LW(p) · GW(p)

whenever someone talks about "souls" they're obviously confused

I believe this, and I might as well explain why. Every concept that legitimately falls under the blanket term "souls" is wrong - and every other term that soul proponents attempt to include (such as consciousness, say) is strictly better described by words that do not carry a bunch of wrong baggage.

To my mind, attempting to talk about the introspective nature of consciousness (which is what I got as the gist of your post) by using the word soul and religious terminology is like trying to discuss the current state of American politics with someone who insists on calling the President "Fuhrer".

comment by MixedNuts · 2011-07-25T13:15:33.318Z · LW(p) · GW(p)

I'm not describing my beliefs or something, I'm making an attempt at steel manning religionesque beliefs into something that I can actually engage with

What? I suspected you might be doing something like that, so I reread the intro and context three times! You need to make this clearer.

comment by DSimon · 2011-07-25T14:03:30.123Z · LW(p) · GW(p)

"Steel manning" isn't a term I've heard before, and Googling it yields nothing. I really like it; did you make it up just now?

Replies from: lessdazed
comment by lessdazed · 2011-07-25T17:35:52.259Z · LW(p) · GW(p)

I don't like it because I think it obscures the difference between two distinct intellectual duties.

The first is interpreting others in the best possible sense. The second is having the ability to show wrong the best argument that the interlocutor's argument is reminiscent of.

I recently saw someone unable to see the difference between the two. He was a very smart person insistently arguing with Massimo Pigliucci over theist claims, and was thereby on the wrong side of truth in a disagreement with him when he should have known better. Embarrassing!

Edit: see the comments below and consider that miscommunication can arise among LWers, or at least that verbosity is required to stave it off, as against the simple alternative of labeling these separate things separately and carving reality at its joints.

comment by Mitchell_Porter · 2011-07-24T07:06:31.950Z · LW(p) · GW(p)

I enjoyed reading that, in the same way that I enjoyed reading Roko's Banned Post - I don't believe it for a moment, but it stretches the mind a little. This one is much more metaphysical, and it also has an eschatological optimism that Roko's didn't. I think such optimism has no rational basis whatsoever, and in any case it means little to those unfortunates stuck in Hell to be told that Heaven is coming at the end of the computation, but unfortunately it can't come any faster because of logical incompressibility. I'm thinking of Raymond Smullyan's dialogue, in which God says that the devil is "the unfortunate length of time" that the process of "enlightenment" inevitably takes, and I think Tipler might make similar apologies for his Omega Point on occasion. All possible universes eventually reach the Omega Point (because, according to a sophistical argument of Tipler's, space-time itself is inconsistent otherwise, so it's logically impossible for this not to happen), so goodness and justice will inevitably triumph in every part of the multiverse, but in some of them it will take a really long time.

So, if I approach your essay anthropologically, it's a mix of the very new cosmology and crypto-metaphysics (of Singularities in the multiverse, of everything as computation) with a much older thought-form - and of course you know this, having mentioned Neoplatonism - but I'd go further and say that the contents of this philosophy are being partly determined by a wishful thinking, which in turn is made possible by the fundamental uncertainty about the nature of reality. In other words, all sorts of terrible things may happen and may keep happening, but if you embrace Humean skepticism about induction, you can still say, nonetheless, reality might start functioning differently at any moment, therefore I have license to hope. In that case, uncertainty about the future course of mundane events provides the epistemic license for the leap of optimism.

Here, we have the new cosmological vision, of a universe (or multiverse) dominated by the rise of superintelligence in diverse space-time locations. It hasn't happened locally yet, but it's supposed to lie ahead of us in time. Then, we have the extra ingredient of acausal interaction between these causally remote (or even causally disjoint) superintelligences, who know about each other through simulation, reasoning, or other logically and mathematically structured explorations of the multiverse. And here is where the unreasonable optimism enters. We don't know what these superintelligences choose to do, once they sound out the structure of the multiverse, but it is argued that they will come to a common, logically preordained set of values, and that these values will be good. Thus, the idea of a pre-established harmony, as in Tipler (and I think in Leibniz too, and surely many others), complete with a reason why the past and present are so unharmonious (our local singularity hasn't happened yet), and also with an extra bit of hope that's entirely new and probably doesn't make sense: maybe the evil things that already happened will be cancelled out by reversing the computation - as if something can both have happened and could nonetheless be made to have never happened. Still, I bet Spinoza never thought of that one; all he could come up with was that evil is always an absence, that all things which actually exist are good, and so there's nothing that's actually bad.

The Stoics had a tendency to equate the order of nature with a cosmic Reason that was also a cosmic Good. Possibly Bertrand Russell was the one who pointed out that this is a form of power worship: just because this is the universal order, or this is the way that things have always been, does not in itself make it good. This point can easily be carried across to the picture of superintelligences arriving at their decisional equilibrium via mutual simulation: What exactly foreordains that the resulting equilibrium deserves the name of Good? Wouldn't the concrete outcome depend on the distribution of superintelligence value systems arising in the multiverse - something we know nothing about - and on the resources that each superintelligence brings to the table of acausal trade and negotiation? It's intriguing that even when confronted by such a bizarrely novel world-concept, the human mind is nonetheless capable, not only of interpreting it in a way originating from cultures which didn't even know that the sun is a star, but of finding a way to affirm the resulting cosmology as good and as predestined to be so.

I have mentioned Russell's reason for scorning the Stoic equation of the cosmic order with the cosmic good (it's just worship of overwhelming force), but I will admit that, from an elemental perspective which values personal survival (and perhaps the personal gains that can come from siding with power), it does make sense to ask oneself what the values of the hypothetical future super-AI might be. That is, even if one scorns the beatific cyber-vision as wishful thinking, one might agree that a future super-purge of the Earth, conducted according to the super-AI's value system, is a possibility, and attempt to shape oneself so as to escape it. But as we know, that might require shaping oneself to be a thin loop, a few centimeters long, optimized for the purpose of holding together several sheets of paper.

Replies from: lessdazed, Will_Newsome
comment by lessdazed · 2011-07-24T17:25:34.685Z · LW(p) · GW(p)

in any case it means little to those unfortunates stuck in Hell to be told that Heaven is coming at the end of the computation, but unfortunately it can't come any faster because of logical incompressibility.

So hell is a slow internet connection?

Hmm, maybe there's something to this after all.

comment by Will_Newsome · 2011-07-25T10:24:37.403Z · LW(p) · GW(p)

I acknowledge your points about not equating Goodness with Power, which is probably the failure mode of lusting for reflective consistency. (The lines of reasoning I go through in that link are pretty often missed by people who think they understand the nature of direction of morality, I think.) Maybe I should explicitly note that I was not at all describing my own beliefs, just trying to come up with a modern rendition of old-as-dirt Platonistic religionesque ideas. (Taoism is admirable in being more 'complete' and human-useful than the Big Good Metaphysical Attractor memeplexes (e.g. Neoplatonism), I think, though that's just a cached thought.) I'll go back over your comment again soon with a finer-toothed comb.

comment by MixedNuts · 2011-07-25T08:05:49.079Z · LW(p) · GW(p)

I'm probably reading too much in this, but it reminds me of myself. Desperately wanting to go "woo" at something... forcing reality into religious thinking so you can go "woo" at it... muddying thought and language, so that you don't have to notice that the links you draw between things are forced... never stepping back and looking at the causal links... does this poem make you feel religious?

If my diagnosis is correct: stop it! Go into cold analysis mode, and check your model like you're a theorem prover - you don't get to point to feelings of certainty or understanding the universe. It's going to hurt. If your model is actually wrong, it's going to hurt a lot.

And then, once you've swallowed all the bitter pills - there are beautiful things but no core of beauty anywhere, formal systems but no core of eternity, insights but no deep link to all things - and you start looking back and saying "What the hell was I thinking?" - why, then you notice that your fascination and your feelings of understanding and connection and awe were real and beautiful and precious, and that since you still want to go "woo" you can go "woo" at that.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T09:41:51.199Z · LW(p) · GW(p)

I think perhaps you underestimate the extent to which I really meant it when I said "just for fun". This isn't how I do reasoning when I'm thinking about Friendliness-like problems. When I'm doing that I have the Goedel machine paper sitting in front of me, 15 Wikipedia articles on program semantics open, and I'm trying my hardest to be as precise as possible. That is a different skillset. The skill that I (poorly) attempted to demonstrate is a different one, that of steel-manning another's epistemic position in order to engage with it in meaningful ways, as opposed to assuming that the other's thinking is fuzzy simply because their language is one you're not used to. But I never use that style of syncretic "reasoning" when I'm actually, ya know, thinking. No worries!

Replies from: MixedNuts
comment by MixedNuts · 2011-07-25T10:36:10.240Z · LW(p) · GW(p)

assuming that the other's thinking is fuzzy simply because their language is one you're not used to

But... I said I was used to it, and remembering it being fuzzy!

But I never use that style of syncretic "reasoning" when I'm actually, ya know, thinking.

Compartmentalization is wonderful.

I don't mean to be insulting. I know you're smart. I know you're a good reasoner. I only worry that you might not be using your 1337 reasoning skillz everywhere, which can be extremely bad because wrong beliefs can take root in the areas labeled "separate magisterium" and I've been bitten by it.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T10:47:01.369Z · LW(p) · GW(p)

But... I said I was used to it, and remembering it being fuzzy!

(I'm not sure we understand each other, but...) Okay. But I mean, when Leibniz talks about theistic concepts, his reasoning is not very fuzzy. Insofar as smart theists use memes descended from Leibniz---which they do, and they also use memes descended from other very smart people---it becomes necessary that I am able to translate their concepts into concepts that I can understand and use my normal rationality skillz on.

I don't think this is compartmentalization. Compartmentalization as I understand it is when you have two contradictory pieces of information about the world and you keep them separate for whatever reason. I'm talking about two different skills. My actual beliefs stay roughly constant no matter what ontology/language I use to express them. Think of it like Solomonoff induction. The universal machine you choose only changes things by at most a constant. (Admittedly, for humans that constant can be the difference between seeing or not seeing a one step implication, but such matters are tricky and would need their own post. But imagine if I was to try to learn category theory in Russian using a Russian-English dictionary.) And anyway I don't actually think in terms of theism except for when I either want to troll people, understand philosophers, or play around in others' ontologies for kicks.

I am not yet convinced that it isn't misplaced, but I do thank you for your concern.

comment by Bongo · 2011-07-24T15:41:08.847Z · LW(p) · GW(p)

This read vaguely like it could possibly be interpreted in a non-crazy way if you really tried... until the stuff about jesus.

I mean, whereas the rest of the religious terminology could plausibly be metaphorical or technical, it actually looks as if you're non-metaphorically saying that jesus died so we could have a positive singularity.

Please tell me that's not really what you're saying. I would hate to see you go crazy for real. You're one of my favorite posters even if I almost always downvote your posts.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T10:02:44.035Z · LW(p) · GW(p)

Nah, that's what I was actually saying. Half-jokingly and half-trollingly, but that was indeed what I was saying. And in case it wasn't totally freakin' obvious, I'm trying to steel man Christianity, not describe my own beliefs. I'm, like, crazy in correct ways, not stupid arbitrary ways. Ahem. Heaven wasn't that technical a concept really, just "the output of the acausal economy"---though see my point about "acausal economy" perhaps being a misleading name, it's just an easy way to describe the result of a lot of multiverse-wide acausal "trade". "Apocalypse" is more technical insofar as we can define the idea of a hard takeoff technological singularity, which I'm pretty sure can be done even if we normally stick to qualitative descriptions. (Though qualitative descriptions can be technical of course.) "God" is in some ways more technical but also hard to characterize without risking looking stupid. There's a whole branch of theology called negative theology that only describes God in terms of what He is not. Sounds like a much safer bet to me, but I'm not much of a theologian myself.

You're one of my favorite posters even if I almost always downvote your posts.

Thanks. :) Downvotes don't faze me (though I try to heed them most of the time), but falling on deaf ears kills my motivation. At the very least I hope my comments are a little interesting.

Replies from: lessdazed, Bongo
comment by lessdazed · 2011-07-25T17:51:40.699Z · LW(p) · GW(p)

I personally experience a very mild bad feeling for certain posts that do not receive votes, a bad feeling for only downvoted posts, and a good feeling for upvoted posts almost regardless of the number of downvotes I get (within the quantities I have experienced).

I can honestly say it doesn't bother me to be downvoted many times in a post so long as the post got a few upvotes; one upvote might be too few against 20. A goodly number like five would probably suffice against a hundred, twenty against a thousand. Asch conformity.

It doesn't feel at all worse to be downvoted many times than a few times. This is due to more than scope insensitivity: a few downvotes are the cousin of no votes at all, the other type of negative.

This is not the type of post that would bother me if it went without votes, as it is merely an expression of my opinion.

Consequently, I wish the number of up and down votes were shown, rather than the sum.

comment by Bongo · 2011-07-25T11:32:13.143Z · LW(p) · GW(p)

And in case it wasn't totally freakin' obvious, I'm trying to steel man Christianity, not describe my own beliefs.

In that case I have to say you didn't succeed in steel-manning jesus, as it were.

comment by FeepingCreature · 2012-04-05T10:51:31.513Z · LW(p) · GW(p)

One question.

If all of this was wrong, if there were no Forms other than in our minds and there was no convergence onto a central superoptimizer - would you say our universe was impossible? What difference in experience that we could perceive today disproves your view?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-05T11:36:52.702Z · LW(p) · GW(p)

To a large extent my comment was a trap: I deliberately made the majority of my claims purely metaphysical so that when someone came along and said "you're wrong!" I could justifiably claim "I didn't even make any factual claims". You've managed to avoid my trap.

Replies from: FeepingCreature
comment by FeepingCreature · 2012-04-05T11:42:18.508Z · LW(p) · GW(p)

yay!

comment by GLaDOS · 2012-04-05T10:53:54.815Z · LW(p) · GW(p)

Thank you for writing this, I now finally feel like I sort of understand what you've been going on about in recent months (though there are gaps too large for me to judge whether you are right). Please consider translating your arguments into versions refined and worked out enough that you would feel comfortable defending them.

Unless that would cause you to risk eternal damnation (^_^)

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-05T11:30:07.325Z · LW(p) · GW(p)

I wrote all that stuff back when I was still new to a lot of the ideas and hadn't really organized them well in my head. I also included a lot of needless metaphysical stuff just to troll people. The general argument for academic theism that I would make these days would be phrased entirely in terms of decision theory and theoretical computer science, and would look significantly more credible than my embarrassingly amateurish arguments from a year or so ago. Separately, I know of better soteriological arguments these days, but I don't take them as seriously and there's no obvious way to make them look credible to LessWrong. If I was setting forth my arguments for real then I would also take a lot more care to separate my theological and eschatological arguments.

Anyway, I'd like to set forth those arguments at some point, but I'm not sure when. I'm afraid that if I put forth an argument then it will be assumed that I supported that argument to the best of my ability, when in reality writing for a diverse audience stresses me out way too much for me to put sustained effort into writing out justifications for any argument.

Replies from: None
comment by [deleted] · 2012-04-05T12:11:46.500Z · LW(p) · GW(p)

The "least restrictive, obviously acceptable thing" might be to collect a list of prerequisites in decision theory and CT that would be necessary to understand the main argument. You made a list of this kind (though for different purposes) several years ago, but I still haven't been able to trace from then how you ended up here.

comment by Dreaded_Anomaly · 2011-07-24T07:00:57.157Z · LW(p) · GW(p)

I am not convinced that this qualifies as "an at all decent description of what the vast majority of smart people mean when they talk about souls."

The point of saying something like "mental states are not ontologically fundamental" is: you are a brain. Your consciousness, your self, requires your brain (or maybe something that emulates it or its functions) to exist. That is what all the evidence tells us.

Yes, I realize that responding to a neo-Platonistic description by talking about the evidence doesn't seem like the most relevant course. But here's the thing: in this universe, the one in which we exist, the "Form" that our minds take is one that includes brains. No brains, no minds, as far as we can tell. Maybe there's some universe where minds can exist without brains - but we don't have any reason to believe that it's ours.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T10:35:23.646Z · LW(p) · GW(p)

I don't think you're responding to anything I wrote. Nothing you're saying conflicts with anything I said, except you totally misused the word "Form", though you did that on purpose so I guess it's okay. On a different note, I apologize for turning our little discussion into this whole other distracting thing. If you look at my other comments in this thread you'll see some of the points of my actual original argument. My apologies for the big tangent.

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-07-30T05:26:16.835Z · LW(p) · GW(p)

I was really responding to what you failed to write, i.e. a relevant response to my comment. The point is that it doesn't matter if you use the words "eternal soul," "ontologically basic mental state," or "minds are Forms"; none of those ideas matches up with reality. The position most strongly supported by the evidence is that minds, mental states, etc. are produced by physical brains interacting with physical phenomena. We dismiss those other ideas because they're unsupported and holding them prevents the realization that you are a brain, and the universe is physical.

It seems like you're arguing that we ought to take ideas seriously simply because people believe them. The fact of someone's belief in an idea is only weak Bayesian evidence by itself, though. What has more weight is why ey believes it, and the empirical evidence just doesn't back up any concept of souls.

comment by Hul-Gil · 2011-07-25T08:45:55.497Z · LW(p) · GW(p)

Just for fun I'll try to translate something into vaguely Less Wrong style language. For God's sake don't read this if you tend to dislike my syncretism, 'cuz this is a rushed and bastardized version and I'm not gonna try to defend it very hard.

Well shucks, why not? If you're going to complain about people not subscribing to your idea of what "soul" should mean (a controversial topic, especially here), I would hope you'd be open to debate. If you only post something even you admit is not strong, why would an opponent bother trying to debate about it? That is - I may be misinterpreting you, but when I say "this was quick; I'm not gonna try to defend it very hard" it means "even if you refute this I won't change my mind, 'cause it isn't my real position."

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T09:38:49.158Z · LW(p) · GW(p)

I think folk perhaps didn't realize that I really meant "just for fun". I didn't realize that Dreaded_Anomaly and I had entered into a more serious type of conversation, and I surely didn't mean to disrespect our discussion by going off on a zany tangent. Unfortunately, though, I think I may have done that. If I was actually going to have a discussion with folk about the importance of not having contempt for ideas that we only fuzzily understand, it would take place in a decent-quality post and not in a joking comment thread. My apologies for betraying social norms.

Replies from: Hul-Gil, Dreaded_Anomaly
comment by Hul-Gil · 2011-07-25T22:37:48.224Z · LW(p) · GW(p)

I think folk perhaps didn't realize that I really meant "just for fun".

I didn't! My apologies for taking it so seriously, then.

comment by Dreaded_Anomaly · 2011-07-30T05:18:11.403Z · LW(p) · GW(p)

I didn't realize that Dreaded_Anomaly and I had entered into a more serious type of conversation, and I surely didn't mean to disrespect our discussion by going off on a zany tangent.

Is there some way that I could have indicated this better? I asked a clarifying question about your post, and spoke against the topic in question being solely a matter of local norms. What seemed less than serious?

comment by lessdazed · 2011-07-24T17:05:28.701Z · LW(p) · GW(p)

What do you think the chances are that the above describes reality better than the OP implicitly does?

what the vast majority of smart people mean when they talk about souls.

Can you quantify that? Approximately how many people are we talking about here? A thousand? A million? A billion?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T09:51:40.757Z · LW(p) · GW(p)

I mean it depends a lot on what we mean by "smart people". I'm thinking of theists like a bright philosophy student on the dumber end of smart, C. S. Lewis in the middle, and geniuses like Leibniz on the smarter end. People whose ideas might actually be worth engaging with. E.g. if your friend or someone at a party is a bright philosophy student, it might be worth engaging with them, or if you have some free time it might be a good idea to check out the ideas of some smart Christians like C. S. Lewis, and everyone in the world should take the time to check out the genius of Leibniz considering he was a theist and also the father of computer science. Their ideas are often decently sophisticated, not just something that can be described and discarded as "ontologically fundamental mental states", and it's worth translating their ideas into a decent language where you can understand them a little better. And if it happens to give you okay ideas while doing so, all the better, but that's not really the point.

Replies from: lessdazed, Wei_Dai
comment by lessdazed · 2011-07-25T17:11:35.468Z · LW(p) · GW(p)

Who is "we"? It's your claim. Tell me what you mean or I will think you are equivocating, as at least hundreds of millions of believers are smart in a sense, and in another, those within the top 1% of the top 1% of the top 1% of humans, only a handful may qualify, the majority of which might mean something like what you said.

some smart Christians like C. S. Lewis

Your philosophy has just been downchecked in my mind. I read much of his stuff before I could have been biased against him for being Christian; even the Screwtape Letters would have been a worthwhile exercise for an atheist writer, and I didn't know he was Christian when I read even those.

Their ideas are often decently sophisticated

The number of parts you have to add to a perpetual motion machine to hide from yourself the fact that it doesn't work is proportional to your intelligence.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-27T15:44:51.306Z · LW(p) · GW(p)

The following sentences are meant to be maximally informative given that I am unwilling to put in the necessary effort to actually respond. I apologize that I am unwilling to actually respond.

The general skill that I think is important is the skill you're failing to demonstrate in your comment. It is a skill that I know you have, and would use if you had a confident model of me as a careful thinker. My suggestion is to just use that skill more often, for your sake and my sake and for the sake of group epistemology at all levels of organization. Just charity.

Replies from: CarlShulman, Wei_Dai
comment by CarlShulman · 2011-07-27T20:54:36.077Z · LW(p) · GW(p)

It is a skill that I know you have, and would use if you had a confident model of me as a careful thinker.

I have a confident model that you are a better thinker than posts like these suggest. But as Wei Dai says, that's not enough: I don't want to see posts that are unpleasant to read (not only for the cryptic obscurity, but also for excessive length and lack of paragraphing), don't have enough valuable content to justify wading through, and turn people off of Less Wrong. Worse, since I know you can do better, these flaws feel like intentional defection with respect to Less Wrong norms of clarity in communication.

comment by Wei Dai (Wei_Dai) · 2011-07-27T19:18:49.518Z · LW(p) · GW(p)

In order to be perceived as being a careful thinker by others, you have to send credible signals of being a careful thinker, and avoid sending any contrary signals. You've failed to do so on several recent occasions. How come you don't consider that to be a very important skill?

Do you suggest that people should be epistemically charitable even towards others (and you specifically) who they don't think are careful thinkers? You gave a number of reasons why people might want to do that, but as you admitted, the analysis omits opportunity costs.

Think about it this way: everything you write on LW will probably be read by at least 20 people, and many more for posts. Why should 20+ people spend the effort of deciphering your cryptic thoughts, when you could do it ahead of time or upon request but implicitly or explicitly decide not to? Just for practice? What about those who don't think this particular occasion is the best one for such practice? Notice that this applies even when you are already perceived as a careful thinker. If you're not, then they have even less reason to spend all that effort.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T02:07:11.666Z · LW(p) · GW(p)

Do you suggest that people should be epistemically charitable even towards others (and you specifically) who they don't think are careful thinkers?

Not in general, no. It's pretty context-sensitive. I think they should do so on Less Wrong where we should aim to have insanely exceptionally high standards of group epistemology. I do think that applies doubly for folk like me who have a decent chunk of karma and have spent a lot of time with a lot of very smart people, but I am not sure how many such people contribute to LW, so it's probably not a worthwhile norm to promote. If LW was somewhat saner perhaps they would, though, so it's unclear.

I am a significantly better rationalist than the LW average and I'm on the verge of leaving, which says a whole bunch about my lack of ability to communicate, but also some non-negligible amount about LW's ability to understand humans who don't want to engage in the negative-sum signalling game of kowtowing to largely-unreflected-upon local norms. (I'm kind of ranting here and maybe even trolling slightly, it's very possible that my evaluations aren't themselves stable under reflection. (But at least I can recognize that...))

How come you don't consider that to be a very important skill?

Right, so your comment unfortunately assumes something incorrect about my psychology, i.e. that it is motivationally possible for me to make my contributions to LW clearer. I once put a passive-aggressive apology at the bottom of one of my comments; perhaps if I continue to contribute to LW I'll clean it up and put it at the bottom of every comment.

Point being, this isn't the should world, and I do not have the necessary energy (or writing skills) to pull an Eliezer and communicate across years' worth of inferential distance. Other humans who could teach what I would teach are busy saving the world, as I try to be. That said, I'm 19 years old and am learning skills at a pretty fast rate. A few years from now I'll definitely have a solid grasp of a lot of the technical knowledge that I currently only informally (if mildly skillfully despite that) know how to play with, and I will also have put a lot more effort into learning to write (or learning to bother to want to communicate effectively). If the rationalist community hasn't entirely disintegrated by then, then perhaps I'll be able to actually explain things for once. That'd be nice.

Back to the question: I consider signalling credibility to be an important skill. I also try to be principled. If I did have the necessary motivation I would probably just pull an Eliezer and painstakingly explain every little detail with its own 15 paragraph post. But there is also some chance that I would just say "I refuse to kowtow to people who are unwilling to put the necessary effort into understanding the subtleties of what I am trying to say, and I doubly refuse to kowtow to people who assume I am being irrational in completely obvious ways simply because I am saying something that sounds unreasonable without filling in all of the gaps for them". But not if I'd spent a lot of time really hammering into my head that this isn't the should world, or if I learned to truly empathize with the psychology of the kind of human that thinks that way, which is pretty much every human ever.

(Not having done these things might be the source of my inability to feel motivated to explain things. Despair at how everyone including LW is batshit insane and because of that everyone I love is going to die, maybe? And there's nothing I can do to change that? That sounds vaguely plausible. Hard to motivate oneself in that kind of situation, hard to expect that anything can actually have a substantial impact. Generalized frustration. I just have to remember, this isn't the should world, it is only delusion that would cause me to expect anything else but this, people do what they have incentive and affordance to do, there is no such thing as magical free will, I am surely contemptible in a thousand similar ways, I implicitly endorse a thousand negative sum games because I've implicitly chosen to not reflect on whether or not they're justified, if anyone can be seen as evil then surely I can, because I actually do have the necessary knowledge to do better, if I am to optimize anyone I may as well start with myself... ad nauseam.)

There's some counterfactual world where I could have written this comment so as to be in less violation of local norms of epistemology and communication, and it is expected of me that I acknowledge that a tradeoff has been made which keeps this world from looking like that slightly-more-optimized world, and feel sorry about that necessity, or something, so I do. I consequently apologize.

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2011-07-30T02:35:29.438Z · LW(p) · GW(p)

But there is also some chance that I would just say "I refuse to kowtow to people who are unwilling to put the necessary effort into understanding the subtleties of what I am trying to say, and I doubly refuse to kowtow to people who assume I am being irrational in completely obvious ways simply because I am saying something that sounds unreasonable without filling in all of the gaps for them".

I don't think it's possible to understand what you are trying to say; even assuming there is indeed something to understand, you don't give enough information to arrive at a clear interpretation. It's not a matter of unwillingness. And the hypothesis that someone is insane (at least in one compartment) is more plausible than that they are systematically unable or unwilling to communicate clearly insights of unreachable depth, and so only leave cryptic remarks indistinguishable from those generated by the insane. (This remains a possibility, but it needs evidence to become more than that. Hindsight or private knowledge doesn't justify demanding prior beliefs that overly favor the truth.)

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T03:08:52.630Z · LW(p) · GW(p)

There are people who know me in person and thus share background knowledge with me, who are able to understand what I am saying. They are the thinkers I admire most and the people I care most about influencing. I have a hypothesis which may just be wrong that people who are particularly good thinkers would notice that I wasn't just insane-in-a-relevant-way and be able to fill in the gaps that would let them understand what I am saying. I have this hypothesis because I think that I have that skill to a large extent, as I believe do others like Michael Vassar or Peter de Blanc or Steve Rayhawk or generally people who bother to train that skill.

I notice that some people who I think are good thinkers, such as yourself, seem to have a low overall estimate of the worthwhileness of my words. However, I have accumulated a fair amount of evidence that you do not have the skill of reading (or choose not to exercise the skill of reading), that is, that you err on the side of calling bullshit when I know for certain that something is not bullshit, and rarely err in the opposite direction. If you had to choose a side to be biased towards, then that would of course be the correct one, but it isn't clear that such a choice is necessary to be a strong rationalist, as I think is evidenced by Steve Rayhawk, Peter de Blanc, and Michael Vassar (three major influences on my thinking, in descending order of influence). Thus I do not consider your low estimate of my rationality to be overwhelming evidence that it is in fact impossible to understand what I am trying to say even without sharing much background knowledge with me. I suspect that e.g. Wei Dai has a lowish estimate of my rationality w.r.t. things he is interested in; my model of Wei Dai has him as less curious than you are about things that I yammer about, so my wild guess at his thoughts on the matter is particularly little evidence compared to your thoughts. I plan on getting more information about this in time.

Replies from: Wei_Dai, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2011-07-30T08:34:01.324Z · LW(p) · GW(p)

my model of Wei Dai has him as less curious than you are about things that I yammer about

If you mean the nature of superintelligence, I'm extremely curious about that, but I think the way you're going about trying to find out is unlikely to lead to progress. To quote Eric Drexler, "most new ideas are wrong or inadequate." The only way I can see for humans to make progress, when we're running on such faulty hardware and software, is to be very careful, to subject our own ideas to constant self-scrutiny for possible errors, to be as precise as possible in our communications, and to lay down all the steps of our reasoning, so that others can understand what we mean and how exactly we arrived at our conclusions, and can help find our errors for us.

Now sometimes one could have a flash of inspiration--an idea that might be true or an approach that seems worth pursuing--but not know how to justify that intuition. It's fine to try to communicate such potential insights, but this can't be all that you do. Most of your time still has to be spent trying to figure out whether these seeming inspirations actually amount to anything, whether there are arguments that can back up your intuitions, and whether these arguments stand up to scrutiny. If you are not willing to put a substantial amount of effort into doing this yourself, then you shouldn't be surprised that few others are willing to do it for you (i.e., take you seriously), especially when you do not even make a strong effort to use language that they can easily understand.

There are people who know me in person and thus share background knowledge with me, who are able to understand what I am saying. They are the thinkers I admire most and the people I care most about influencing.

I would be interested to know if any of your intuitive leaps have led any of those people to make any progress beyond "a new idea that's almost certain to be wrong even if we're not sure why" to "something that seems likely to be an improvement over the previous state of the art". (It's possible that you have a comparative advantage in making such leaps of intuition, even though a priori that seems unlikely.)

you [Nesov] err on the side of calling bullshit when I know for certain that something is not bullshit

Do you have any examples? (This is unrelated to my points above. I'm just curious.)

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T10:15:45.548Z · LW(p) · GW(p)

(Warning, long comment; it stays mostly on track but is embarrassingly mostly self-centered.) I think I must have been being imprecise when I said you were "less curious" about the things I yammer about, and honestly I don't remember what I was thinking at the time and won't try to rationalize it. (I wasn't on adderall then but am on adderall now; there may be state-dependent memory effects.) I thus unendorse at least that part of the grandparent.

I think that everything you're saying is correct, and note the interesting similarities between my case and Nesov's two years ago---except Nesov had actual formal technical understanding and results and still couldn't easily communicate his insights, whereas my intuition is not at all grounded in formality. I definitely won't be contributing to decision theory progress any time soon, and probably never will---I can get excited about certain philosophical themes or aesthetics and stamp things with my mark of intuitive approval, but there is very little value in that unless I'm for some reason in a situation where people with actual skills can bounce ideas off of me. (I am trying to set up that situation currently but I'm trying not to put too much weight on it.)

I am still very confused about how actual progress in decision theory-like fields works, though, insofar as the things I see on the mailing list, e.g. the discussion of Loebian blindspots, look like resolving side issues where the foundations are weak. I don't see how getting the proofs right helps much, whereas I was very excited by Nesov's focus on e.g. reversibility and semantics; much of this comes from being happy that Nesov has certain conceptual aesthetics which I endorse. You could perhaps characterize this as not understanding Slepnev's style of research. I see your style as somewhere between Nesov's and Slepnev's. Perhaps research styles or methodology would make for a useful LW discussion post, or a decision theory list email? Or is my notion of "style" just off? I have never been involved in mathematical-esque research, nor have I read about how it works besides Polya's How to Solve It and brief accounts of e.g. quantum mechanics research.

Anyway. Currently there is only one actual-decisions-relevant scenario where I see the sort of thinking I do being useful, and in that sense I sort of think of it as my scenario of comparative advantage. But unfortunately I've yet to talk to people who either have thought very deeply about very similar issues or have relevant technical knowledge, those people being Shulman and Nesov. The scenario I'm thinking of is where we have a non-provably-Friendly AI or a uFAI but there are other existential risks to worry about. (I think this scenario may be the default, though---it seems somewhat likely to me that AGI is within reach of this generation of humans, whereas it is unclear if something-like-provably Friendly AI is possible, or what value there is in somewhat-more-stable-than-hacked-together AI.) It would be useful to understand what sorts of attractors there are for a self-modifying AI to fall into for either its decision theory or utility function, what the implications of our decision to run a uFAI would be in terms of either causal or acausal game theory, and generally what the heck we'd be knowingly inflicting on the multiverse if we decided to hit the big red button.

These questions and questions like them lend themselves to thorough models and rely on precise technical knowledge but aren't obviously questions that can be formalized. Such questions are in the grey area between the answerable-technical and the unanswerable-philosophical, with a focus on the nature of intelligence: precisely where Less Wrong-style rationality skills are most necessary and most useful. Likewise questions about "morality", which are nestled between formal utility theory and decision theory on one side, highly qualitative "naturalistic meta-ethics" on another, and informal but technical and foundational questions about computation on a third under-explored side. Better understanding these questions has a low but non-negligible chance at affecting either singularity-focused game theory or the design choices guiding the development of FAI or somewhat-Friendly AI.

I think about things at about that level of technicality seeing as I have an automatic disposition to obsess about such questions and may or may not have a knack for doing so in a useful manner. My ability to excel at such thinking is hard to analyze; I think playing with models of complex systems, like multilevel selection, and seeing to what extent my intuitions are verified or not by the systems, would be one way to both check and train relevant intuitions. Another relevant field is probably psychology, where I have a few ideas which I think could be tested. Computational cognitive science is a relevant intuition-testing and intuition-building field and I've managed to nab myself a girlfriend who is going into it. Rayhawk wants to build a suite of games that train low-level probabilistic reasoning which I think would also help. He's written up one very small one thus far and it would be excellent if Less Wrong could start a project to bring the idea to life. But that's a story for another day.

I consider it somewhat likely that in 6 months I will look back and think myself an utter fool to expect to make any useful progress on thinking about such things. In the meantime I don't expect LW folk to bother to try to understand my cryptic thoughts, especially not when everyone has so many of their own to worry about worrying about.

I would be interested to know if any of your intuitive leaps have led any of those people to make any progress beyond "a new idea that's almost certain to be wrong even if we're not sure why" to "something that seems likely to be an improvement over the previous state of the art".

I think the intuitive leaps I'm most proud of are in just-maybe-sort-of-almost understanding some of Rayhawk's ideas and maybe provoking him to develop them slightly further or recall them after a few months or years of rust. I don't have a very good idea of how useful all of my philosophicalish conversation with him has been. His ideas are uniformly a lot better than mine. If for some reason I can convince both him and SingInst that he should be doing FAI work then perhaps I'll have a much better model of how useful my philosophical aesthetics are, or how useful they might be if I supplemented them with deep technical-formal knowledge-understanding. I currently model myself as being somewhat useful to bounce ideas off of but not yet a, ya know, real FAI researcher, not by a longshot. My aim is to become a truly useful research assistant in the next few years while realizing my apparent cognitive comparative advantage.

Do you have any examples?

The combination of social awkwardness and non-trivial difficulty of tracking down examples makes me rather averse to doing so; on the other hand I think Nesov would probably like to see such examples and I have something of a moral obligation to substantiate the claim. The realistic model of my behavior says I won't end up providing the examples. However the realistic model of my behavior says that in the future if I come across such examples I will PM Nesov. I think however that I'd rather not list such gripes in public; I feel like it sets a bad precedent or something. (Interestingly Yudkowsky is a celebrity and thus such moral qualms have never applied to him in my head. I do regret being harsher on Eliezer than was called for; it's way too easy to forget he's a person as well as a meme and meme-generator.)

Replies from: Vladimir_Nesov, Mitchell_Porter
comment by Vladimir_Nesov · 2011-07-30T16:28:01.964Z · LW(p) · GW(p)

My aim is to become a truly useful research assistant in the next few years while realizing my apparent cognitive comparative advantage.

Are you working on training yourself to understand graduate-level logic, set theory and category theory? That's my current best guess at an actionable thing an aspiring FAI researcher should do, no matter what else is on your plate (and it's been a stable conclusion for over a year).

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-31T01:03:31.523Z · LW(p) · GW(p)

Not yet, but very soon now. (The plan for category theory is to get proficient with Haskell and maybe kill two birds with one stone by playing with functional inductive programming (which uses category theory). I do not yet have plans for set theory or logic; I don't really understand what they're trying to do very well. Or like, my brain hasn't categorized them as "cool", whereas my brain has categorized category theory as "cool", and I think that if I better understood what was cool about them then I'd have a better idea of where to start. I was sort of hoping I could somehow learn all my math in terms of categories, which is still technically a possibility I guess but not at all something I can do on my own.)

Replies from: Vladimir_Nesov, None
comment by Vladimir_Nesov · 2011-07-31T01:11:05.891Z · LW(p) · GW(p)

I don't recommend studying category theory at any depth before at least some logic, abstract algebra and topology. It can feel overly empty of substance without a wealth of examples already in place; it's not called "abstract nonsense" for nothing. Follow my reading list if you don't have any better ideas or background, and maybe ask someone else for advice. I don't like some of this stuff either; I just study it because I must.

comment by [deleted] · 2011-07-31T02:04:05.542Z · LW(p) · GW(p)

(The plan for category theory is to get proficient with Haskell and maybe kill two birds with one stone by playing with functional inductive programming (which uses category theory)

I've known many people who have tried to walk down this path and failed. The successful ones I know knew one before the other.
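For concreteness, here is a minimal, purely illustrative Haskell sketch of the most common point of contact between Haskell and category theory that the plan above alludes to: the Functor typeclass and its two laws. (The Opt type and the law-checking functions are invented names for this example; nothing here is from the thread.)

```haskell
-- Purely illustrative sketch: the Functor typeclass as the usual first point
-- of contact between Haskell and category theory. The Opt type and the two
-- law-checking functions are invented for this example.

-- A homemade option type, so the Functor instance is written out rather than imported.
data Opt a = None | Some a deriving (Eq, Show)

instance Functor Opt where
  fmap _ None     = None
  fmap f (Some x) = Some (f x)

-- Functor law 1: mapping the identity function changes nothing.
lawIdentity :: Opt Int -> Bool
lawIdentity x = fmap id x == x

-- Functor law 2: mapping a composition equals composing the maps.
lawComposition :: Opt Int -> Bool
lawComposition x = fmap ((+ 1) . (* 2)) x == (fmap (+ 1) . fmap (* 2)) x

main :: IO ()
main = print (all lawIdentity samples && all lawComposition samples)
  where
    samples = [None, Some 3]
```

Running main just checks the two laws on a couple of sample values; the categorical content is that fmap must preserve identity and composition, which is the sense in which Functor is a functor from Haskell types to Haskell types.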

comment by Mitchell_Porter · 2011-07-30T10:45:11.414Z · LW(p) · GW(p)

The scenario I'm thinking of is where we have a non-provably-Friendly AI or a uFAI but there are other existential risks to worry about. (I think this scenario may be the default, though---it seems somewhat likely to me that AGI is within reach of this generation of humans, whereas it is unclear if something-like-provably Friendly AI is possible, or what value there is in somewhat-more-stable-than-hacked-together AI.) It would be useful to understand what sorts of attractors there are for a self-modifying AI to fall into for either its decision theory or utility function, what the implications of our decision to run a uFAI would be in terms of either causal or acausal game theory, and generally what the heck we'd be knowingly inflicting on the multiverse if we decided to hit the big red button.

This.

comment by Vladimir_Nesov · 2011-07-30T03:29:04.507Z · LW(p) · GW(p)

you err on the side of calling bullshit when I know for certain that something is not bullshit, and rarely err in the opposite direction

It's quite possible, since originally, before retreating to this mode 1.5-2 years ago, I was suffering from mulling over external confusing ideas while failing to accumulate useful stuff among all that noise (the last idea on this list was Ludics; now most of the noise I have to deal with is what I generate myself, but I seem to be able to slowly distill useful things from that, and I got into a habit of working on building up well-understood technical skills).

I guess I should allocate a new category for things I won't accept into my mind, as a matter of personal epistemic hygiene, but still won't get too confident that they're nonsense. I would still disapprove of these things for not being useful to many, or even for being damaging to people like me-3-years-ago.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-31T03:20:49.851Z · LW(p) · GW(p)

You stopped obsessing about things like ludics? Game semantics-like-stuff sounded so promising as a perspective on timeless interaction. Are you building fine-tuned decision theoretic versions of similar ideas from scratch?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-07-31T10:29:38.730Z · LW(p) · GW(p)

Game semantics etc. were part of a search that was answered by ADT (alternatively, by finally understanding UDT); they fail to answer this question in the sense that they explore explicit counterfactuals rather than explaining where counterfactuals come from.

After that, I tried building on ADT, didn't get very far, then tried figuring out the epistemic role of observations (which UDT/ADT deny), and I think I was successful (the answer being a kind of "universal" platonism where physical facts are seen as non-special, logical theories as machines for perceiving abstract facts normally external to themselves, and processes as relating facts along their way, which generalize to ways of knowing physical facts, as in causality; this ontological stance seems very robust and describes all sorts of situations satisfactorily). This as yet needs better toy models as examples, or a better-defined connection to standard math, which I'm currently trying to find.

comment by wedrifid · 2011-07-30T02:45:06.824Z · LW(p) · GW(p)

I think they should do so on Less Wrong where we should aim to have insanely exceptionally high standards of group epistemology.

One of the ways we do this is by telling people when they are writing things that are batshit insane. Because you were. It wasn't deep. It was obfuscated, scattered, and generally poor-quality thought. You may happen to be personally awesome. Your recent comments, however, sucked. Not "were truly enlightened but the readers were not able to appreciate it". They just sucked.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T02:51:46.809Z · LW(p) · GW(p)

Sorry, which comments sucked? The majority of my recent comments have been upvoted, and very few were particularly obfuscated. I had one post that was largely intended to troll people and another comment that was intended to be for the lulz and which I obviously don't think people should be mining for gold. (Which is why I said many times in the comment that it was poor quality syncretism and also bolded that it was just for fun.)

(Tangential: Is "batshit insane" Nesov's vocabulary? It's been mine for a while.)

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2011-07-30T02:57:55.464Z · LW(p) · GW(p)

Is "batshit insane" Nesov's vocabulary?

(Sorry for that, I usually need some time to debug a thought into a form I actually endorse. Don't believe all things I say in real time, I disagree with some of them too, wait for a day to make sure. The comment was fixed before I read this echo.)

Replies from: wedrifid
comment by wedrifid · 2011-07-30T03:00:49.683Z · LW(p) · GW(p)

Is "batshit insane" Nesov's vocabulary?

(Sorry for that, I usually need some time to debug a thought into a form I actually endorse. Don't believe all things I say in real time, I disagree with some of them too, wait for a day to make sure.)

(The phrase was Will's, which you adopted in your reply and I in turn used in mine. Origins traced.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-07-30T03:06:41.398Z · LW(p) · GW(p)

Interesting. So I was primed, generated the same phrase without realizing it was the priming; the phrase was sufficiently unfamiliar that I did a Google search to see its connotations more accurately, used and posted it anyway, but then recognized that it didn't paint an adequate picture. The process of debugging the details is such a bore, but it's the only way that works.

Replies from: wedrifid
comment by wedrifid · 2011-07-30T03:11:34.386Z · LW(p) · GW(p)

Fascinating. Now I have to look up the phrase to see what the precise meaning of the term "batshit insane" is too, just in case I am using it wrong. :)

comment by wedrifid · 2011-07-30T02:54:25.746Z · LW(p) · GW(p)

Sorry, which comments sucked?

The ones referred to by Wei_Dai in the comment you were refuting/dismissing.

(Tangential: Is "batshit insane" Nesov's vocabulary? It's been mine for a while.)

Yes, reading your comment in more detail I found that you had used it yourself, so I removed the disclaimer. I didn't want to introduce the term without it being clear to observers that I was just adopting the style from the context.

comment by Wei Dai (Wei_Dai) · 2011-07-25T14:07:31.229Z · LW(p) · GW(p)

Can you please explain a bit more what the point is? I'm having trouble figuring out why I would want to try to understand something, if not to get "okay" ideas.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-27T15:01:54.229Z · LW(p) · GW(p)

There are many, but unfortunately I only have enough motivation to list a few:

  • If talking to someone with strange beliefs in person, legitimately trying to engage with their ideas is an easy way to signal all kinds of positive things. (Maturity, charity, epistemic seriousness, openness to new experiences or ideas, and things like that, as opposed to common alternatives like abrasiveness, superficiality, pedantry, and the like.)
  • Reading things by smart folk who believe things that at least initially appear to be obviously false is a way to understand how exactly humans tend to fail at epistemic reasoning. For example, when I read Surprised by Joy by C. S. Lewis---not to learn about his religion, but to read about sehnsucht, something I often experience---it was very revealing how he described his conversion from unreflective atheism to idealist monadology-esque-ness/deism-ness to theism to Christianity. He did some basically sound metaphysical reasoning---though of course not the kind that constrains anticipations---which led him all the way to nigh-deism. 'We are all part of a unified universe, our responsibility is to experience as much of the universe as possible so it can understand itself' or something like that. All of a sudden he's thinking 'Well I already believe in this vague abstract force thingy, and the philosophers who talk about that are obviously getting their memes from earlier philosophers who said the same thing about God, and this force thingy is kinda like God in some ways, so I might as well consider myself a theist.' Then, in an off-the-cuff conversation with an atheist friend and scholar, he learns that Jesus Christ probably actually existed, and then he gets very vague and talks about how he suddenly doesn't remember much and oh yeah all of a sudden he's on his way to the zoo and realizes he's a Christian. It's not really clear what this entails in terms of anticipations, though he might've talked about his argument from sehnsucht for the existence of heaven. Anyway, it's clear from what he wrote that he just felt uncomfortable and somewhere along the line stopped caring as much about reasons, and started just, ya know, going with what seemed to be the trend of his philosophical speculations, which might I remind you never paid rent in anticipated experience up until that very last, very vague step. I found it to be a memorable cautionary tale, reading the guy's own words about his fall into the entropy of insanity. Whether or not Christianity is correct, whatever that means, it is clear that he had stopped caring about reasons, and it is clear that this was natural and easy and non-extraordinary. As someone who does a fair bit of metaphysical reasoning that doesn't quite pay rent in anticipated experience, or doesn't pay very much rent anyway, I think it is good to have Lewis's example in mind.
  • Building the skill of actually paying attention to what people actually say. This is perhaps the most important benefit. Less Wrong folk are much better at this than most persons, and this skill itself goes a long, long way. The default for humans is of course to figure out which side the other person is arguing for and then either spout a plausibly-related counterargument for your chosen side if it is the opposite, or nod in agreement or the like if they're on your team. Despite doing it much less than most humans, it still appears to be par for the course for aspiring rationalists. (But there may be some personal selection bias 'cuz people pattern match what I (Will_Newsome) say to some other stupid thing and address the stupid generator of that stupid thing while bypassing whatever I actually said, either because I am bad at communication or because I've been justifiably classified as a person who is a priori likely to be stupid.) It is worth noting that sometimes this is a well-intentioned strategy to help resolve others' confusions by jumping immediately to suggesting fixes for the confusion-generator, but most often it's the result of sloppy reading. Anyway, by looking carefully at what smart people say that disagrees with what you believe or value, you train yourself to generally not throw away possibly countervailing evidence. It may be that what was written was complete tosh, but you won't know unless you actually check from time to time, and even if it's all tosh it's still excellent training material.
  • Practice learning new concepts and languages. This is a minor benefit as generally it would be best to learn a directly useful new conceptual language, e.g. category theory.
  • Cultural sophistication, being able to signal cultural sophistication. Though this can easily implicitly endorse negative sum signalling games and I personally don't see it as a good reason if done for signalling. That said, human culture is rich and complex, and I personally am afraid of being held in contempt as unsophisticated by someone like Douglas Hofstadter for not having read enough Dostoyevsky or listened to enough Chopin, so I read Dostoyevsky and listen to Chopin (and generally try to be perfect, whatever that means). Truly understanding spirituality and to a lesser extent religion is basically a large part of understanding humans and human culture. Though this is best done experientially, just like reading and listening to music, it really helps, especially for nerds, to have a decent theoretical understanding of what spiritualists and religionists might or might not be actually talking about.
  • Related to the above, a whole bunch of people assert that various seemingly-absurd ideas are incredibly important for some reason. I find this an object of intrinsic curiosity and perhaps others would too. In order to learn more it is really quite important to figure out what those various seemingly-absurd ideas actually are.
  • I could probably go on for a while. I would estimate that I missed one or two big reasons, five mildly persuasive reasons, and a whole bunch of 'considerations'. Opportunity costs are of course not taken into account in this analysis.
Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-07-27T19:48:52.915Z · LW(p) · GW(p)

Let me rephrase my question. You decided, on this particular occasion, taking into account opportunity costs, that it was worth trying to understand somebody, for a reason other than to get "okay" ideas. What was that reason?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T01:16:32.502Z · LW(p) · GW(p)

You mean my original "let's talk about Jesus!" comment? I think I bolded the answer in my original comment: having fun. (If I'd known LW was going to interpret what I wrote as somehow representative of my beliefs then I wouldn't have written it. But I figured it'd just get downvoted to -5 with little controversy, like most of my previous similar posts were.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-07-30T03:47:54.788Z · LW(p) · GW(p)

Why is it fun? (That is, can you take a guess at why your brain's decided it should be fun? This way of posing the question was also the primary intended meaning for my assertion about countersignaling, although it assumed more introspective access. You gave what looked like an excuse/justification on how in addition to being fun it's also an exercise of a valuable skill, which is a sign of not knowing why you really do stuff.)

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-30T04:08:47.664Z · LW(p) · GW(p)

Bleh, I think there may be too much equivocation going on, even though your comment is basically correct. My original "insane" comment is not representative of my comments, nor is it a good example of the skill of charitable interpretation.

When I give justifications they do tend to be pretty related to the causes of my actions, though often in weird double-negative ways. Sometimes I do something because I am afraid of the consequences of doing something, in a self-defeating manner. I think a lot of my trying to appear discreditable is a defense mechanism put up because I am afraid of what would happen if I let myself flinch away from the prospect of appearing discreditable, like, afraid of the typical default failure mode where people get an identity as someone who is "reasonable" and then stops signalling and thus stops thinking thoughts that are "unreasonable", where "reason" is only a very loose correlate of sanity. My favorite LW article ever is "Cached Selves", and that has been true for two years now. Also one of my closest friends co-wrote that article, and his thinking has had a huge effect on mine.

I think saying it was "fun" is actually the rationalization, and I knew it was a rationalization, and so I was lying. It's a lot more complex than that. I wrote it more because I was feeling frustrated at what I perceived to be an unjustified level of contempt in the Less Wrong community. (/does more reflection to make sure I'm not making things up.) Okay. Also, relatedly, part of it was wanting to signal insanity for the reasons outlined above, or reasons similar to the ones outlined above in the sense of being afraid of some consequence of not doing something that I feel is principled, or something that I feel would make me a bad person if I didn't attempt to do. Part of it was wanting to signal something like cleverness, which is maybe where some of the "fun" happens to be, though I can only have so much fun when I'm forced to type very quickly. Part of it was trolling for its own sake on top of the aforementioned anti-anti-virtuous rationale, though where the motivation for "trolling for its own sake" came from might be the same as that anti-anti-virtuous rationale but stemming from a more fundamental principle. I would be suspicious if any of these reasons claimed to be the real reason. Actions tend to follow many reasons in conjunction. (/avoids going off on a tangent about the principle of sufficient reason and Leibniz's theodicy for irony's sake.)

It's interesting because others seem to be much more attached to certain kinds of language than I am, and so when they model me they model me as being unhealthily attached to the language of religion or spirituality or something for its own sake, and think that this is dangerous. I think this may be at least partially the typical mind fallacy. I am interested in these languages because I like trolling people (and I like trolling people for many reasons, as outlined above), but I personally much prefer the language of algorithmic probability and generally computationalism, which can actually be used precisely to talk about well-defined things. I only talk in terms of theism when I'm upset at people for being contemptuous of theism. Again there are many reasons for these things, often at different levels of abstraction, and it's all mashed together.

Replies from: Dreaded_Anomaly, Vladimir_Nesov
comment by Dreaded_Anomaly · 2011-07-30T05:28:43.954Z · LW(p) · GW(p)

I wrote it more because I was feeling frustrated at what I perceived to be an unjustified level of contempt in the Less Wrong community.

I'm still not clear on what makes it unjustified.

comment by Vladimir_Nesov · 2011-07-30T04:44:48.726Z · LW(p) · GW(p)

Okay.

comment by Eve · 2011-07-26T20:44:48.471Z · LW(p) · GW(p)

Very interesting! Thanks. I have a few questions and requests.

God is the convergent and optimal decision theoretic agentic algorithm, who rationalists think of as the Void, though the Void is obviously not a complete characterization of God.

What other characteristics do you think God has?

It may help to think of minds as somewhat metaphorical engines of cognition, with a soul being a Carnot engine. Particular minds imperfectly reflect God, and thus are inefficient engines.

I expected you to say that God was the Carnot engine, not the soul. In terms of perfection I'm guessing you are thinking mind < soul < God, with mind being an approximation of soul, which is an approximation of God. Is that right?

The Form of building-structure gets to increase its existence by appealing to the human vessels, and the human vessels get the benefit of being shaded and comforted by the building particulars. The Form of the building is timelessly attractive, i.e. it is a convergent structure.

This strikes me as very interesting, and highlights the confusion I have relating the timeful and the timeless perspectives. When do you reason in terms of one instead of the other?
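One way to unpack the Carnot-engine metaphor quoted above (an interpretation, not something stated in the thread): the Carnot engine is the idealized reversible heat engine whose efficiency bounds every real engine operating between the same two reservoir temperatures, so real engines can only approximate it:

\[
\eta_{\text{Carnot}} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}, \qquad \eta_{\text{real}} \le \eta_{\text{Carnot}},
\]

with the analogy being that particular minds stand to the ideal as real engines stand to the Carnot bound.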

comment by Will_Newsome · 2011-07-24T06:06:16.638Z · LW(p) · GW(p)

Can Less Wrong pick up the habit of not downvoting things they didn't bother to read? /sigh.

Replies from: komponisto, steven0461, Vladimir_Nesov, ciphergoth, Dreaded_Anomaly, Bongo, lessdazed, Hul-Gil
comment by komponisto · 2011-07-24T07:04:25.929Z · LW(p) · GW(p)

Can Less Wrong pick up the habit of not downvoting things they didn't bother to read?

I've remarked disapprovingly on that phenomenon before. That said, your comment contains some serious red-flag keywords and verbal constructions which are immediately apparent on skimming. And it's long.

"Evil", "Judgement Day", "God", "Son of Man", "soul", all juxtaposed casually with "utility function" and "Singularity"? And this:

God is the Word, that is, Logos, Reason the source of Reasons. God is Math. All universes converge on invoking God, just as our universe is intent on invoking Him by the name of "superintelligence". Where there is optimization, there is a reflection of God. Where there is cooperation, there is a reflection of God. This implies that superintelligences converge on a single algorithm and "utility function"...

...?

What exactly were you expecting? Have you become so absorbed in the profundity of your thoughts that you've forgotten how that sounds?

Replies from: ata, Will_Newsome
comment by ata · 2011-07-24T20:51:32.157Z · LW(p) · GW(p)

Yeah.

I am reminded of the ancient proverb: "Communicating badly and then acting smug when you're misunderstood is not cleverness."

comment by Will_Newsome · 2011-07-25T10:31:20.863Z · LW(p) · GW(p)

Have you become so absorbed in the profundity of your thoughts that you've forgotten how that sounds?

Oh God no. I just, ya know, don't care how things sound in the social psychological/epistemic sense. I'd hesitate if it were a hideous language. But religious language is very rich and not too unpleasant, though excessively melodramatic. Bach, my friend, Bach! "Ach bleib bei uns, Herr Jesu Christ." That was Douglas Adams' favorite piece of music, ya know.

comment by steven0461 · 2011-07-24T07:04:40.657Z · LW(p) · GW(p)

If LessWrong comments are like buildings, I think too many people vote based on whether they're sturdy enough to live in, and too few people vote based on whether they can be looted for valuables. I think your comment can be looted for valuables, and I voted it up for that reason.

I'd worry about the comment providing fuel for critics looking for evidence that LessWrong is a cult, but maybe that doesn't apply as much if it's going to be downvoted into oblivion. (Or what afterlife do comments go to in your conceptual scheme? I'm finding it difficult to keep track.)

Replies from: lessdazed
comment by lessdazed · 2011-07-24T16:55:46.559Z · LW(p) · GW(p)

You're more attracted to upvoted posts than downvoted ones?

Replies from: shokwave
comment by shokwave · 2011-07-25T08:01:01.176Z · LW(p) · GW(p)

Who isn't? Unless you consider yourself a mutant relative to the community, such that someone approving of a post is evidence that you will disapprove (I could see this happening to someone who browses a forum opposed to their world-view, for instance), upvotes on a post are evidence that you will approve of it, and downvotes are evidence that you won't.

I suppose attraction to a post might not be strongly connected to approval of a post for all people, but it certainly seems that way.

Replies from: lessdazed
comment by lessdazed · 2011-07-25T16:51:12.418Z · LW(p) · GW(p)

Attraction to a post isn't necessarily the same as likelihood of approval.

"Comment score below threshold/+60 children" is practically salacious. "20 points", less so.

comment by Vladimir_Nesov · 2011-07-24T12:33:38.506Z · LW(p) · GW(p)

Downvoted the parent, as I expect "didn't bother to read" is a bad explanation for downvoting in this case, but it was stated as obvious.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T10:05:21.464Z · LW(p) · GW(p)

ciphergoth's point is strong enough to make it non-obvious I think. His point was roughly that previously expected badness plus seemingly excessive length is a good enough justification to downvote quickly. And I mean it's not like people ever actually read a comment before voting on it, that's not how humans work.

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2011-07-25T11:41:57.170Z · LW(p) · GW(p)

Explanation for some portion yes, but not for the trend. (Which might be pointing out a discrepancy in our interpretation of that "explain".)

comment by wedrifid · 2011-07-25T17:56:34.520Z · LW(p) · GW(p)

And I mean it's not like people ever actually read a comment before voting on it, that's not how humans work.

That's not true. Sometimes we have to read the comment a bit before we can find it contains an applause light for the other team. ;)

comment by Paul Crowley (ciphergoth) · 2011-07-24T21:33:39.924Z · LW(p) · GW(p)

I'm not at all convinced that Less Wrong should pick up this habit, in general. Your comment is very long, and you must surely grant that there is a length beyond which one would be licensed to downvote before reading it all.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T09:43:14.573Z · LW(p) · GW(p)

Upon reflection this is reasonable; I think I both underestimated the comment's length and forgot that I was commenting on a Main post, not a Discussion one.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-07-25T09:58:10.396Z · LW(p) · GW(p)

And another of those "Less Wrong isn't like other Internet forums" moments. Thank you!

comment by Dreaded_Anomaly · 2011-07-24T07:01:11.270Z · LW(p) · GW(p)

I read all of it. Somehow.

comment by Bongo · 2011-07-25T11:24:10.655Z · LW(p) · GW(p)

I also (1 2) downvoted only after reading.

comment by lessdazed · 2011-07-24T16:36:37.813Z · LW(p) · GW(p)

How could you possibly know this is what is happening? Your long comment and this one have four critical comments and five downvotes apiece at the time of my viewing. The critical comments are by six different people, the downvotes are from five to ten people.

Obviously, were there no comments and a hundred downvotes you still couldn't conclude LW had the habit of downvoting without reading.

Replies from: wedrifid, Will_Newsome
comment by wedrifid · 2011-07-25T10:41:19.751Z · LW(p) · GW(p)

How could you possibly know this is what is happening?

Bayesian inference. Given what I know of human behaviour and past exposure to Less Wrong, I assigned greater than 80% confidence that the comment was downvoted without being read. Will has even more information about the specifics of the conversation, so his conclusion does not seem at all unfounded.

I.e., I reject the rhetorical implication of your question. That particular inference of Will's was reasonable. (Everything else he has said recently... less so.)
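For readers who want the "greater than 80%" figure made concrete, here is a toy Bayes update with purely illustrative numbers (the probabilities below are invented for the example, not wedrifid's actual estimates): suppose a downvote is equally likely, before considering timing, to come from someone who read the comment or someone who didn't; that a no-read downvote is very likely to arrive fast; and that a read-first downvote is usually slower. Then

\[
P(\text{unread}\mid\text{fast})
= \frac{P(\text{fast}\mid\text{unread})\,P(\text{unread})}{P(\text{fast}\mid\text{unread})\,P(\text{unread}) + P(\text{fast}\mid\text{read})\,P(\text{read})}
= \frac{0.9 \times 0.5}{0.9 \times 0.5 + 0.2 \times 0.5} \approx 0.82,
\]

which is the shape of inference being claimed: an over-80% posterior does not require extreme assumptions about either the voters or the timing.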

Replies from: lessdazed
comment by lessdazed · 2011-07-25T17:05:16.504Z · LW(p) · GW(p)

I didn't mean to imply the full force of what another might have meant to imply with my words. Granted, a framework for inferring that it probably had happened was available, but the pattern of downvotes and negative comments didn't seem to match what the framework would require to reach the conclusion.

He gave a good answer insofar as the first downvote was concerned. A better answer would have gone on to explain that the individuals who left critical comments wouldn't have downvoted him, but he didn't say that, perhaps because he couldn't have justified it even well short of it being a firm conclusion.

If he had evidence that, say, the people who left him critical comments upvoted him without reading, or that many upvoted him immediately, with or without reading, he would have better reason to think many others downvoted him without reading. He didn't say that. If you are implying that now, then fine.

Considering the critical comment/downvote ratio, I remain unimpressed with the complaint, as well as with its implication that downvoting a wall of text in which the words quantum/God is Math/Him/Jesus pop out upon scanning is an unjustified thing.

Before I commented, I thought it unlikely that it was justified; insofar as, since commenting, he has justified thinking it true for only one downvote, I think it even less likely that he can justify it for the others, which weakens the claim that it is a "habit" of LW.

Replies from: wedrifid
comment by wedrifid · 2011-07-25T17:20:52.114Z · LW(p) · GW(p)

I downvoted the comment in question without reading it beyond confirming that by keyword it is at least as insane as the other things he has said recently. He seems to have completely lost his grasp of reality to the extent that I am concerned for his mental health and would recommend seeking urgent medical attention.

So I don't agree that downvoting without reading is necessarily a bad thing. But Will certainly has strong evidence that it was occurring in this thread. I had noticed it before Will made the complaint himself.

Replies from: lessdazed
comment by lessdazed · 2011-07-25T22:31:34.530Z · LW(p) · GW(p)

I don't think people who downvote before reading count as something to legitimately complain about, as they can change their vote after reading. "People who downvote without reading are bad" is a fine enough statement for conversation and for making one's point, but the argument against downvoting without reading is that one shouldn't be judged by those who fail to gain information about what they judge, which isn't so applicable here (except to the extent that the voters are biased to keep their initial vote).

In fact, if people who downvote without reading are often people who downvote before reading, the evidence that a few voted before the submission could have been read and judged on its merits is even less impressive.

comment by Will_Newsome · 2011-07-25T09:54:27.534Z · LW(p) · GW(p)

I got a downvote within about 10 seconds.

Replies from: wedrifid
comment by wedrifid · 2011-07-25T10:52:51.644Z · LW(p) · GW(p)

I got a downvote within about 10 seconds.

At this point it really doesn't matter much what you write. Voting patterns for the thread (and the related thread) are entrenched and you will be voted on by name not content. It is usually best to write the conversation (or the people) off and move on. There is little to be gained by trying to fight the death spiral.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-25T11:25:37.366Z · LW(p) · GW(p)

Agreed. It's always that few seconds of utter despair that make me write stupid things. It's not long after the despair algorithm that the automatic "this isn't the should world, it's reality, any suffering you are experiencing is the result of samsaric delusion" algorithms kick in, at least for a little while.

comment by Hul-Gil · 2011-07-25T08:53:06.882Z · LW(p) · GW(p)

I read it. I downvoted it¹ because it seems pretty nonsensical to me, in more than a few ways; I won't go into that, though - see my previous comment.

The ideas that humans are special and animals thus have no souls & that this "is the best of all possible worlds" are morally repugnant to me, too. I would have mentally quadruple-downvoted you except for your idea about God reversing evil computations... so it's only a mental triple-downvote. :p

¹I didn't actually downvote it, because it was already at -5 when I saw it, and I want some discussion to come out of it. I think it deserves those downvotes, though.

Replies from: lessdazed, Will_Newsome
comment by lessdazed · 2011-07-25T18:09:17.942Z · LW(p) · GW(p)

I didn't actually downvote it, because it was already at -5 when I saw it, and I want some discussion to come out of it.

I am surprised people say they think this way.

As I say below, attraction to a post isn't necessarily the same as likelihood of approval.

"Comment score below threshold/+60 children" is practically salacious. "20 points", less so.

I suspect that in this case others are not different from me, and that I have succeeded at introspection on this point where others have not. I still think it very possible that this isn't true. This might be a first for me, as almost always, when others express an opinion like this that differs from mine in this sort of way, I think the best explanation is that they think differently, and that I have been wrong not to account for that.

Replies from: Hul-Gil
comment by Hul-Gil · 2011-07-25T22:40:11.697Z · LW(p) · GW(p)

I'm afraid I don't understand this. "Think this way" - what way would that be, exactly?

Replies from: lessdazed
comment by lessdazed · 2011-07-25T23:10:17.032Z · LW(p) · GW(p)

I am surprised that people say they are more likely to read a comment voted to "20 points" than one voted to "Comment score below threshold/+60 children". Several people have claimed something like this when justifying upvoting, or not downvoting, the wall-of-text post. They cited a desire for others to read it as the justification, despite disapproval of its contents or presentation.

I would have expected people to be more attracted to comments labeled "Comment score below threshold/+60 children" than "20 points" (or zero), and to also believe and say "I am more attracted to comments labeled 'Comment score below threshold/+60 children' than '20 points' (or zero)".

The divergence in intuition here feels similar to other instances in which people expressed a different opinion from mine and I was surprised. For example, when a girl is, and says she is, offended by the suggestion of a certain activity for a date when that activity was free, because it indicates my stinginess. In this case, I register the different way of thinking as genuine, and my anticipations, my map of the world, failed in two respects: my anticipation of her reaction to the free thing and my anticipation of her verbal response. It would certainly be possible for someone to have a similar reaction as a feeling they can't quite verbalize, or alternatively, to not feel disapproval of me but say the words because it is the cached thing to say among people in her circles.

That is not a perfect example but I hope it suffices, my failures have almost always been two-level.

In this case, I actually think that people saying "I/people in general (as implicitly extrapolated from myself) am more likely to read a post voted -2 than one at 'Comment score below threshold/+60 children'" are wrong.

I, of course, am wrong as well by my own account, as I had not mapped their maps well at all. I had expected them to say that they behave in a certain way, that that way is how they actually behave, and that that way is to seek out the contentious, inciting, heavily commented downvoted posts. I am wrong at least insofar as so many seem to think they don't. Wrong wrong wrong.

That said, are they in fact correct when they predict how the LW majority chooses what to view? Note I am predicting that most people are extrapolating from their own behavior, I certainly am. It may be that individuals saying they think the majority of LW acts this way are thinking this by extrapolating from themselves, and they may or may not be right about themselves; I suspect many who say this of themselves are right and many who say this of themselves are wrong.

comment by Will_Newsome · 2011-07-25T09:30:42.597Z · LW(p) · GW(p)

That's completely fair. I just don't like when I make a long comment and it gets downvoted before I can click on the Will_Newsome link, ha. In retrospect I shouldn't have said anything.