[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality

post by shminux · 2014-06-19T20:17:14.063Z · LW · GW · Legacy · 46 comments


  A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.

Scott suggests that ranking morality is similar to ranking web pages. A quote:

Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer.  Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:

A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.

Proposed solution:

Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.”  Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already.  We apply the rule over and over, until the number of morality credits per person converges to an equilibrium.  (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.)  We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
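The update rule Scott describes is just power iteration on the cooperation matrix. A minimal sketch of the idea (the toy matrix, the function name, and the iteration count are my own, not from the post):

```python
import numpy as np

# Toy cooperation matrix (my own example, not from the post):
# C[i, j] = how much person i has cooperated with person j.
# Persons 0-2 cooperate with each other; person 3 cooperates with no one.
C = np.array([
    [0.0, 1.0, 1.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

def eigenmorality(C, iters=1000):
    """Power iteration: equal starting credits, then repeatedly award
    credit for cooperating with people who already have credit."""
    credits = np.ones(C.shape[0]) / C.shape[0]
    for _ in range(iters):
        credits = C @ credits       # credit earned scales with partners' credit
        credits /= credits.sum()    # renormalize so credits sum to 1
    return credits

scores = eigenmorality(C)
# The three mutual cooperators converge to equal scores of 1/3;
# the universal defector converges to 0.
```

Equivalently, `scores` is proportional to the principal eigenvector of `C`, which `numpy.linalg.eig` would give you directly, as the quote notes.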

He then talks about "eigenmoses and eigenjesus" and other fun ideas, like Plato at the Googleplex.

One final quote:

All that's needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.

EDIT: I am guessing that after judicious application of this algorithm one would end up with the other Scott A's Archipelago: loosely connected components, each with its own definition of morality. UPDATE: He chimes in.

EDIT2: The obvious issue of equating prevailing mores with morality is discussed to death in the comments. Please read them first before raising it yet again here.



Comments sorted by top scores.

comment by Qiaochu_Yuan · 2014-06-20T03:30:25.182Z · LW(p) · GW(p)

I'm annoyed at how negative the comments on this post are. I think this is a great example of making progress on an apparently philosophical problem by bringing in some nontrivial mathematics (in this case, the idea of using eigenvector decompositions to make sense of circular definitions), and it seems extremely uncharitable to me to judge it for failing to be a fully general and correct solution to the problem when it's obviously not intended to be.

comment by [deleted] · 2014-06-20T10:40:21.624Z · LW(p) · GW(p)

it seems extremely uncharitable to me to judge it for failing to be a fully general and correct solution to the problem when it's obviously not intended to be.

This is really the crux of the problem. It feels to me like an extension of the "posts to Main have to be perfect" problem.

It's easy to criticize someone; it's hard to have an original thought.

comment by Nornagest · 2014-06-20T04:50:54.360Z · LW(p) · GW(p)

I give it eight out of ten for cleverness, but minus four for stating an obvious trap in the introduction and then proceeding to walk blithely into it.

Pretty obviously not meant to be totally serious, though, so I can't condemn it too much.

comment by MrMind · 2014-06-20T08:32:30.006Z · LW(p) · GW(p)

I think that the article is important because it fails instructively: it identifies the fact that morality matters precisely when it's not the result of aggregated preferences.

And we should all know by now how dangerous a sub-optimal morality can be.

comment by Qiaochu_Yuan · 2014-06-20T17:22:32.981Z · LW(p) · GW(p)

And we should all know by now how dangerous a sub-optimal morality can be.

Agh, but if you want to solve that problem, the solution is not to criticize everyone who offers a proposal. That is not how you incentivize people to solve a problem.

comment by MrMind · 2014-06-23T07:38:48.121Z · LW(p) · GW(p)

the solution is not to criticize everyone who offers a proposal

I think the 'solution' is exactly to criticize everyone who offers a proposal, but do so in a respectful, clear and constructive manner, highlighting the good and the bad.

Indeed, I think that Aaronson's proposal was interesting, new, and well worth reflecting on and expanding further. Yet I still think it fails, and badly.

comment by IlyaShpitser · 2014-06-20T05:05:06.154Z · LW(p) · GW(p)

I don't think he's talking about morality at all.

comment by Lumifer · 2014-06-20T14:37:09.872Z · LW(p) · GW(p)

I don't think he's talking about morality at all.

Well, no, he doesn't -- he's talking, basically, about popularity and about clustering of people on the basis of some cooperation metric. But then, for some reason, he calls that whole thing "morality".

comment by Lumifer · 2014-06-20T03:58:01.819Z · LW(p) · GW(p)

this is a great example of making progress on an apparently philosophical problem by bringing in some nontrivial mathematics

In what way does slapping emotion-laden labels on some well-known statistical techniques constitute "making progress"?

comment by Username · 2014-06-23T13:46:28.608Z · LW(p) · GW(p)

Indeed. The post, although thought-provoking, doesn't appear to have anything to do with a "philosophical problem", although that doesn't stop the author from pompously speaking as if it does.

ETA: I'm assuming the downvoter thought I was being ironic.

comment by V_V · 2014-06-20T09:50:23.901Z · LW(p) · GW(p)

I think that many negative comments here are missing the point of what Scott Aaronson is doing:
If I understand correctly, he is not attempting to formulate a normative theory of morality, but rather a descriptive theory of morality: an attempt to scientifically explain our moral intuitions.

I think that his attempt is probably incomplete, since it fails to explain our intuitions in various important scenarios, a fact that he recognizes. But it seems to me that the framework of explaining moral intuitions from mathematical properties of the social network holds merit.

comment by Lumifer · 2014-06-20T14:50:49.173Z · LW(p) · GW(p)

a descriptive theory of morality

So, do you see there anything more than morality == popularity? Moral is whatever the majority does?

comment by shminux · 2014-06-20T18:44:26.218Z · LW(p) · GW(p)

No, you got it backwards. "Descriptive" means that whatever the majority thinks is described as prevailing mor[e|al]s.

comment by Lumifer · 2014-06-23T16:31:12.333Z · LW(p) · GW(p)

No, I don't think I got it backwards. "Descriptive" means "what is" (as opposed to "what should be"). Assigning labels, choosing interpretations, deriving meaning -- these are all parts of "descriptive" theories. And here the issue is precisely with that.

comment by Nisan · 2014-06-20T17:22:26.422Z · LW(p) · GW(p)

There's an interesting parallel with Modal Combat. Both approaches want to express the idea that "moral agents are those that cooperate with moral agents". Modal Combat resolves the circularity with diagonalization, and Eigenmorality resolves it by finding a stable distribution.

comment by shminux · 2014-06-20T18:43:00.836Z · LW(p) · GW(p)

Maybe one of you guys and Scott should get together and see if exploring some combination of the two is worthwhile.

comment by shminux · 2014-06-19T21:22:45.369Z · LW(p) · GW(p)

Can't resist more quotes. Calculating the morality (rather than the game score) of IPD bots:

Tyler sets up and runs a fairly standard IPD tournament, with a mix of strategies that includes TIT_FOR_TAT, TIT_FOR_TWO_TATS, other TIT_FOR_TAT variations, PAVLOV, FRIEDMAN, EATHERLY, CHAMPION (see the paper for details), and degenerate strategies like always defecting, always cooperating, and playing randomly. However, Tyler then asks an unusual question about the IPD tournament: namely, purely on the basis of the cooperate/defect sequences, which players should we judge to have acted morally toward their partners?

(In this particular case, using the "eigenmoses" niceness scoring, TIT_FOR_TWO_TATS ended up the "most moral".)
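A toy version of that calculation can be sketched as follows; the strategy implementations, the cooperation-matrix construction, and the scoring loop here are illustrative guesses of mine, not Tyler's actual code:

```python
import numpy as np
from itertools import combinations

# Toy IPD strategies (illustrative, not Tyler's implementations).
def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else 'C'

def tit_for_two_tats(my_hist, their_hist):
    return 'D' if their_hist[-2:] == ['D', 'D'] else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def always_cooperate(my_hist, their_hist):
    return 'C'

STRATS = [tit_for_tat, tit_for_two_tats, always_defect, always_cooperate]

def play(a, b, rounds=100):
    """Run an iterated game; moves are chosen simultaneously each round."""
    ha, hb = [], []
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        ha.append(ma)
        hb.append(mb)
    return ha, hb

# Cooperation matrix: C[i, j] = fraction of rounds i cooperated with j.
n = len(STRATS)
C = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    hi, hj = play(STRATS[i], STRATS[j])
    C[i, j] = hi.count('C') / len(hi)
    C[j, i] = hj.count('C') / len(hj)

# "Morality" scores via the power iteration from the post.
scores = np.ones(n) / n
for _ in range(500):
    scores = C @ scores
    scores /= scores.sum()
# ALWAYS_DEFECT's score collapses to zero.
```

In this tiny four-bot field the three nice strategies end up tied at 1/3 each while ALWAYS_DEFECT goes to zero; it takes Tyler's richer mix of strategies to separate TIT_FOR_TAT from TIT_FOR_TWO_TATS.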

comment by IlyaShpitser · 2014-06-23T16:50:09.665Z · LW(p) · GW(p)

The obvious issue of equating prevailing mores with morality is discussed to death in the comments.

Except, as far as I can tell, he is proposing using PageRank (or some modification -- an update proposes using temporal info per Luca's suggestion) as a way to classify people into good and bad guys that accords with our intuitions. This does not work (for fairly obvious reasons, which I am sure Scott is aware of, so I am not sure why he decided to talk about morality at all).

re: "why are people so negative:" this is how analytic philosophy works! Someone proposes a defensible point, and then everyone else shoots it full of holes.

comment by Lumifer · 2014-06-19T20:55:54.177Z · LW(p) · GW(p)

What this measures is holding socially acceptable beliefs and behaving in a socially acceptable way. I don't know why he thinks this is what morality is. After all, the entire village collectively and joyfully cooperates in burning the witch.

comment by Alejandro1 · 2014-06-19T21:09:52.828Z · LW(p) · GW(p)

Scott remarks on this himself:

Another type of scenario involves minorities. Imagine, for instance, that 98% of the players are unfailingly nice to each other, but unfailingly cruel to the remaining 2% (who they can recognize, let’s say, by their long noses or darker skin—some trivial feature like that). Meanwhile, the put-upon 2% return the favor by being nice to each other and mean to the 98%. Who, in this scenario, is moral, and who’s immoral? The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil. After all, the 98% are nice to almost everyone, while the 2% are mean to those who are nice to almost everyone, and nice only to a tiny minority who are mean to almost everyone. Of course, for much of human history, this is precisely how morality worked, in many people’s minds. But I dare say it’s a result that would make moderns uncomfortable.
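Scott's "mathematical verdict" for this scenario is easy to check numerically. A toy sketch, with the group sizes scaled down and the iteration count chosen by me:

```python
import numpy as np

# Toy version of the quoted scenario: a 98-player majority that is nice
# internally but cruel to a 2-player minority, and vice versa.
n_maj, n_min = 98, 2
n = n_maj + n_min
C = np.zeros((n, n))
C[:n_maj, :n_maj] = 1.0   # majority cooperates internally
C[n_maj:, n_maj:] = 1.0   # minority cooperates internally
np.fill_diagonal(C, 0.0)  # no self-cooperation

scores = np.ones(n) / n
for _ in range(200):
    scores = C @ scores
    scores /= scores.sum()
# The majority ends up with essentially all the morality credit;
# the minority's scores decay toward zero, as the quote predicts.
```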

comment by XiXiDu · 2014-06-20T09:33:25.557Z · LW(p) · GW(p)

Scott Says:

There’s a crucial observation that I took for granted in the post but shouldn’t have, so let me now make it explicit. The observation is this:

No system for aggregating preferences whatsoever—neither direct democracy, nor representative democracy, nor eigendemocracy, nor anything else—can possibly deal with the “Nazi Germany problem,” wherein basically an entire society’s value system becomes inverted to the point where evil is good and good evil.

comment by Viliam_Bur · 2014-06-20T11:43:08.030Z · LW(p) · GW(p)

By the way, this is also related to the argument in "Well-Kept Gardens Die By Pacifism". When we design a system for moderating a web community, we are choosing between "order" and "chaos", not between "good" and "evil".

We can move the power to a moderator, to some inner circle of users, to the most active users, even to the users with the most sockpuppets, but we can't just move it to "good". We can choose which kind of people or which kind of behavior gets the most power, but we can't choose that the power will magically disappear if they try to abuse it; because any rule designed to prevent abuse can also be abused. The values have to come from outside of the voting system; from the humans who use it. So in the end, the only reasonable choice is to design the system to preserve the existing power, whatever it is -- allowing change only when it is initiated by the currently existing power -- because the only alternative is to let forces from outside of the garden optimize for their values, again, whatever they are, not only the "good" ones. And yes, if the web community had horrible values at the beginning, the proper moderating system will preserve them. That's not a bug; that's a side-effect of a feature. (Luckily, on the web, you have the easy option of leaving the community.)

In this sense, we have to realize that the eigen-whatever system proposed in the article, if designed correctly (how to do this specifically is still open to discussion), would capture something like "the applause lights of the majority of the influential people", or something similar. If the "majority of the influential people" are evil, or just plain stupid, the eigen-result can easily contain evil or stupidity. It almost certainly contains religion and other irrationality. At best, this system is a useful tool to see what the "majority of influential people" think is morality (as V_V said), which itself is a very nice result for a mathematical equation, but I wouldn't feel immoral for disagreeing with it on some specific points. Also, it misses the "extrapolated" part of the CEV; for example, if people's moral opinions are based on incorrect or confused beliefs, the result will contain morality based on incorrect beliefs, so it could give you a recommendation to do both X and Y, where X and Y are contradictory.

comment by [deleted] · 2014-06-20T16:26:07.994Z · LW(p) · GW(p)

Well yes, and attempting to group all actual or possible individuals into one tribe is a major mistake, one that I think should be given a name. Well, as it turns out, the name I was already going to give it is at least partially in use: False Universalism.

Ethics ought to include some kind of reasoning for determining when some bit of universalism (some universalization of a maxim, in the Kantian or Timeless sense, or some value cohering, in the CEV sense) has become False Universalism, so that the groups or individuals who diverge from each other to the point of incompatibility can be handled as conflicting, rather than simply having the ethical algorithm return the answer that one or the other is Right and the other is Wrong and the Wrong shall be corrected until they follow the values of the Right.

comment by Leonhart · 2014-06-20T22:40:42.005Z · LW(p) · GW(p)

handled as conflicting

"Handled as conflicting" seems to either mean "all-out war" or at best "temporary putting off of all-out war until we've used all the atoms on our side of the universe".

If the two sides shared your desire to be symmetrically peaceful with other sides whose only point of similarity with them was the desire to be symmetrically peaceful with other sides whose... then Universalism isn't false. That's its minimal case.

And if it does fail, it seems counterproductive for you to point that out to us, because while we're happily and deludedly trying to apply it, we're not genociding each other all over your lawn.

comment by [deleted] · 2014-06-21T19:39:28.965Z · LW(p) · GW(p)

Sorry, when I said "False Universalism", I meant things like, "one group wants to have kings, and another wants parliamentary democracy". Or "one group wants chocolate, and the other wants vanilla". Common moral algorithms seem to simply assume that the majority wins, so if the majority wants chocolate, everyone gets chocolate. Moral constructionism gets around this by saying: values may not be universal, but we can come to game-theoretically sound agreements (even if they're only Timelessly sound, like Rawls' Theory of Justice) on how to handle the disagreements productively, thus wasting fewer resources on fighting each other when we could be spending them on Fun.

Basically, I think the correct moral algorithm is: use a constructionist algorithm to cluster people into groups who can then use realist universalisms internally.

comment by Caspian · 2014-06-28T02:09:00.473Z · LW(p) · GW(p)

If we're aggregating cooperation rather than aggregating values, we certainly can create a system that distinguishes between societies that apply an extreme level of noncooperation (i.e. killing) to larger groups of people than other societies, and that uses our own definition of noncooperation rather than what the Nazi values judge as noncooperation.

That's not to say you couldn't still find tricky example societies where the system evaluation isn't doing what we want, I just mean to encourage further improvement to cover moral behaviour towards and from hated minorities, and in actual Nazi Germany.

comment by TheAncientGeek · 2014-06-20T18:50:15.333Z · LW(p) · GW(p)

But his own scheme isn't the aggregation of arbitrary values; it's based on rewarding cooperation.

Perhaps in-group problems could be fixed with an eigenSinger algorithm that gives extra points to those who cooperate with people they have not cooperated with before, i.e. widening the circle.

comment by fubarobfusco · 2014-06-20T01:00:19.952Z · LW(p) · GW(p)

After all, the entire village collectively and joyfully cooperates in burning the witch.

Collectively, perhaps; but joyfully? Speaking out in defense of the witch risks being labeled a witch yourself.

comment by Lumifer · 2014-06-20T02:01:59.919Z · LW(p) · GW(p)

The burning of a witch is a rightful celebration because the community is rid of a serious danger and the spectacle is a welcome diversion from everyday drudgery :-/

comment by fubarobfusco · 2014-06-20T02:29:26.166Z · LW(p) · GW(p)

In retrospect, so much of my awareness of actual historical European and American witch-scares is so heavily colored by modern politics and fiction — as, I suspect, most folks' is — that I think we would probably be ill-advised to draw many conclusions from it.

comment by Lumifer · 2014-06-20T02:58:32.863Z · LW(p) · GW(p)

Well, we can make it easier. The age of lynchings and the age of photography intersected. Here, take a look at these two photographs and tell me what the mood of the crowd is.

Warning: persons of sensitive nervous disposition should not click on the links as there will be gruesomeness involved.

Photo 1 Photo 2

comment by satt · 2014-06-23T03:06:04.424Z · LW(p) · GW(p)

I'm broadly sympathetic to your point that Aaronson's algorithm is measuring something like conformity or group consensuses rather than morality, but the specific example you picked to garnish that point is dodgy. From Randall Collins's Violence: A Micro-sociological Theory, pages 425-426:

But in fact, as I have tried to show throughout, violent confrontations generate their own situational emotions, above all tension and fear; and we see this on the faces and in the body language of most of the crowd during these events. The demonstrative extremists are not expressing an emotion that we can safely infer, on the basis of the evidence, as generally existing in the crowd; they are another specialized minority (different as well from the minority of violent activists) who have found their own emotional niche in the context of the crowd. And in fact the bulk of the crowd does not usually follow them, even when it is safe to do so because there is no opponent present but only a dead body or a captured building; the demonstrative extremists are usually separate from the rest of the crowd.

Some evidence bearing on this point comes from photos of lynchings in the American South and West in the period 1870-1935 (Allen 2000). Most of these photos—and the ones that are most revealing on this point—were taken several hours in the aftermath of the actual violence, or the following day. Individuals offering acts of gratuitous insult to dead bodies are safely disconnected from the actual commission of the violence. In a photo (Allen 2000: plate 93, not shown here) we see two white men standing beside the body of a black man hung from a tree, one poking the body with a stick, the other punching it. In the background are four other white men looking on. In another photo (Allen 2000: plate 25, not shown) a young man leans nonchalantly with his eyes closed against the post from which the charred body of a black man, lynched the previous night, is hanging; the nineteen other faces visible in the crowd are somber. This is the usual pattern in all the photos: a few are the demonstrative extremists; most of the others are somber, serious, awed, or uncomfortable in the presence of death. [I skip a paragraph here.]

But the mood of demonstrative extremists in the aftermath contrasts with the emotions that are displayed during the actual lynching itself. In a rare set of photos, we see a lynching while it is going on: a black man showing the welts of whipping on his back stands in a wagon shortly before he is hung; his executioners stare at his face with hard, hostile stares (Allen 2000: plates 42 and 43; not shown here). There is no clowning, no expression of joy; these are the frontline activist few, engaged in an angry, domineering stare-down. A violent confrontation itself is tense; even when one side has the upper hand (the usual formula for successful violence) it is compelling, focused into the business at hand, unable to ironicize it.

Thus the demonstrative extremists, operating safely in the aftermath, show something else: an attempt to raise themselves from the bulk of the crowd, the back line of merely nominal supporters of the violence, and to raise their status by closer connection with the violent action that had galvanized the attention of the group. By putting themselves close to the dead body, engaging in gratuitous acts of insult upon it, they put themselves closer to the center of the attention space.⁹ The most blatant expression of joy among Allen's (2000) lynching photos is in plate 97 (not shown), which shows two well-dressed young men grinning (rather mirthlessly) at the camera while viewing a black body being burned. This was not in the heat of the action, however, but in the aftermath; the victim, accused of molesting a white girl, had already been hung from a lamppost and riddled with bullets. These cheery demonstrative extremists are probably proud of themselves for standing so close to the workman in grimy overalls who is tending the fire; everyone else in the photo is farther away, and the twenty-nine other visible faces range from somber to apprehensive.

The last photo Collins refers to ("plate 97") sounds like it's your photo 2, and Collins's interpretation of it seems to me to fit better than the one you imply. I wouldn't go as far as to say that all of the "twenty-nine other visible faces range from somber to apprehensive", as a few of the people in the photo are evidently just trying to get a closer look, and a boy near the front seems more intrigued than anything else. Nonetheless, the crowd as a whole doesn't appear joyful or celebratory to me.

Turning to photo 1, I see perhaps ten spectators' faces clearly enough to make a guess at what they're expressing. Going from left to right, I see (1) thick-eyebrowed man in cap in background who looks as if he's whistling while thinking hard about something; (2) foreground man in white shirt with no tie with neutral-ish expression, but maybe happy; (3) man in middle distance with shorter man in front and a boater-wearer behind, the former gazing off to the photographer's right with a worried/pensive look; (4) smirking man in tie; (5) foreground woman in dress with irregular spots who looks surprised/wary; (6) another woman behind her with open mouth, who might be amused or surprised or scandalized; (7) cluster of three foreground women, where the nearest one's face is too blurry for me to interpret, but the two further back both look apprehensive; (8) foreground man with moustache, pointing as he stares intently; and (9) man at right edge with left eye not visible in photo, looking not especially happily to the photographer's right. As in photo 2, although there are happy-looking people present, they are a minority.

I don't believe it's accurate to use these scenes as examples of rightful celebration or joy. fubarobfusco's warning to exercise caution in how we read these photos may be well-advised.

comment by Lumifer · 2014-06-23T17:02:45.377Z · LW(p) · GW(p)

Yes, I know these photos have been analyzed quite substantially, but my point is really simple -- it's that the lynchings (and the witch burnings before them) were culturally normal. They were intense events and, of course, brought out a range of emotions, not just joy, but all I'm trying to say is that the majority of people did not see them as something to be ashamed of. It was OK, it was fine, it was moral.

comment by fubarobfusco · 2014-06-20T03:34:22.959Z · LW(p) · GW(p)

I'm well aware of such things. My question was aimed at the phrase "the entire village" — which is a claim of unanimity. My point was that members of the village who disapproved would have a strong incentive not to express their disapproval.

Also, American lynchings (of the period you refer to) and witch-hunts are not really the same thing, socially speaking — in part because of the supernatural corruption implied by a claim of witchcraft.

Witch-hunts are an ongoing thing, by the way, in a number of parts of the world. In some cases, economic motives are pretty clear; in others, it seems pretty clearly a matter of superstitious fear that isn't really comparable to the self-evident political or sexual-political motives behind American racial lynchings.

comment by Lumifer · 2014-06-20T03:54:59.187Z · LW(p) · GW(p)

Recall the top-level post.

Sure there will be deviants who disapprove. They are, clearly, very bad people.

American racist lynchings and witch-scares are not really the same thing, socially speaking.

They are expressions of the morality of the majority which is good by definition, isn't it?

comment by satt · 2014-06-23T03:11:06.460Z · LW(p) · GW(p)

Cosma Shalizi, in bookmarking Scott's post, offers some specific, relevant references to Plato (and some amusing tags).

comment by DanielVarga · 2014-06-23T20:43:00.492Z · LW(p) · GW(p)

I was unsurprised but very disappointed when it turned out there are no other posts tagged one_mans_vicious_circle_is_another_mans_successive_approximation. But Shalizi has already used the joke once in his lecture notes on Expectation Maximization.

comment by Emile · 2014-06-20T04:52:09.984Z · LW(p) · GW(p)

Interesting post, thanks, I recommend reading the whole thing!

comment by Gunnar_Zarncke · 2014-06-20T14:52:14.335Z · LW(p) · GW(p)

Yes. An interesting post. But much too long. He should have split it.

comment by Punoxysm · 2014-06-19T21:02:28.331Z · LW(p) · GW(p)

Yep; this is fun. Also a good way of evaluating truth of statements, in the absence of all other evidence.

PageRank variants ARE very good at finding clusters within a graph, so different clusters could be found standing out from the main group (for instance, you could find the N people who most accord with your own views/each other's views rather trivially).

comment by Yosarian2 · 2014-06-20T18:57:19.884Z · LW(p) · GW(p)

So, in a community where a majority of the people believe in a Christian idea of morality and only co-operate with other people with that Christian ideal of morality (strict sexual rules, go to church every Sunday, etc.), wouldn't your system say that the majority is behaving morally?

And, conversely, that the minority of people in that society with a more humanist value system, who the majority will not co-operate with because they disagree with their value system, will therefore be defined as less moral under that system?

For that matter, if nobody is willing to co-operate with someone because their face looks ugly by the standards of most of the community, then your system would declare the ugly person to also be immoral.

comment by shminux · 2014-06-20T18:59:45.292Z · LW(p) · GW(p)

This obvious objection is extensively discussed in the comments to the linked blog post. Please read them.

comment by ShardPhoenix · 2014-06-20T04:07:57.832Z · LW(p) · GW(p)

I like his idea for using this for governance, but aside from any actual issues with it, it, like most proposals for fancier government, suffers from the problem of being difficult for the average person to understand. This reinforces further in my mind that one of the highest priorities for civilization ought to be researching ways to use mass genetic engineering to raise the average IQ level, since (all other benefits aside) it's much easier to solve complex societal problems if people can actually understand the solutions enough to trust them.

comment by mwengler · 2014-06-29T14:21:55.952Z · LW(p) · GW(p)

If you run this analysis over groups of people that include competing religions or just plain competing tribes or nations, I think you will get eigenmodes which sort those people by their affinity groups, and eigenvalues which essentially just count up how many people are in each affinity group. So we find supporting Team6 is more "moral" because there are more people in Team6 than any other team, and we conclude essentially that might makes right.

I think evolutionarily speaking, our propensity for morality is designed to make us team players, or at least to make enough of us enough of a team player to reap the benefits to the group of cooperation. So if this proposal just identifies teams and counts their members, this doesn't make it wrong, but it would be important to point out that it is just finding the affinity groups and not answering deep questions about whether incest is wrong or whether we should push fat people in front of trolley cars.
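The "eigenmodes sort people into affinity groups, and the biggest team wins" claim above is easy to check on a toy block-diagonal cooperation matrix; the tribe sizes and the `tribe_matrix` helper below are hypothetical illustrations, not anything from the thread:

```python
import numpy as np

# Two tribes that cooperate only internally: a 6-person "Team6"
# and a 3-person "Team3".
def tribe_matrix(sizes):
    """Block-diagonal cooperation matrix: everyone cooperates with
    everyone inside their own tribe and with no one outside it."""
    n = sum(sizes)
    C = np.zeros((n, n))
    start = 0
    for s in sizes:
        C[start:start+s, start:start+s] = 1.0 - np.eye(s)
        start += s
    return C

C = tribe_matrix([6, 3])
eigvals, eigvecs = np.linalg.eig(C)
top = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
top /= top.sum()
# The principal eigenvector ("eigenmode") puts all its weight on the
# larger tribe: Team6's members each score about 1/6, Team3's about 0.
```

So with fully disconnected groups, the morality ranking really does reduce to a head count of the largest affinity group.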

comment by Salemicus · 2014-06-19T21:52:53.376Z · LW(p) · GW(p)

Someone who thinks that morality (or even an important part of morality) is "co-operate with the good people, punish the bad" is immediately revealing the shallowness of their thinking (and incidentally, giving me a 90%+ chance of guessing their politics).

It should be "co-operate with good actions, punish bad actions." And you can get that unambiguously with a graph, providing you also (separately) have a ranker on action value. But you can't get anywhere without a ranker on action value, as the post adequately demonstrates.

comment by mwengler · 2014-06-29T14:16:48.332Z · LW(p) · GW(p)

This comment seems to have missed the point that, by looking at who cooperates with whom, you are declaring the "ranker on action value" to be whatever the people who cooperate with each other do. Which is a clever way of getting around the need for an independent machine that ranks actions, one that people would somehow have to agree isn't just assuming what is moral in its premises rather than discovering it as a conclusion.

The way I wrote this, I ranked your action. How different is it if I say "you are wrong" and downvote you, and people look at graphs of who downvoted whom?