Setting Up Metaethics

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-28T02:25:20.000Z · LW · GW · Legacy · 34 comments

Followup to: Is Morality Given?, Is Morality Preference?, Moral Complexities, Could Anything Be Right?, The Bedrock of Fairness, ...

Intuitions about morality seem to split up into two broad camps: morality-as-given and morality-as-preference.

Some perceive morality as a fixed given, independent of our whims, about which we form changeable beliefs.  This view's great advantage is that it fits everyday moral conversation: it is the intuition underlying our ordinary notions of "moral error", "moral progress", "moral argument", or "just because you want to murder someone doesn't make it right".

Others choose to describe morality as a preference—as a desire in some particular person; nowhere else is it written.  This view's great advantage is that it has an easier time living with reductionism—fitting the notion of "morality" into a universe of mere physics.  It has an easier time at the meta level, answering questions like "What is morality?" and "Where does morality come from?"

Both intuitions must contend with seemingly impossible questions.  For example, Moore's Open Question:  Even if you come up with some simple answer that fits on a T-shirt, like "Happiness is the sum total of goodness!", you would still need to argue the identity.  It isn't instantly obvious to everyone that goodness is happiness, which seems to indicate that happiness and rightness were different concepts to start with.  What was that second concept, then, originally?

Or if "Morality is mere preference!" then why care about human preferences?  How is it possible to establish any "ought" at all, in a universe seemingly of mere "is"?

So what we should want, ideally, is a metaethic that:

  1. Adds up to moral normality, including moral errors, moral progress, and things you should do whether you want to or not;
  2. Fits naturally into a non-mysterious universe, postulating no exception to reductionism;
  3. Does not oversimplify humanity's complicated moral arguments and many terminal values;
  4. Answers all the impossible questions.

I'll present that view tomorrow.

Today's post is devoted to setting up the question.

Consider "free will", already dealt with in these posts.  On one level of organization, we have mere physics, particles that make no choices.  On another level of organization, we have human minds that extrapolate possible futures and choose between them. How can we control anything, even our own choices, when the universe is deterministic?

To dissolve the puzzle of free will, you have to simultaneously imagine two levels of organization while keeping them conceptually distinct.  To get it on a gut level, you have to see the level transition—the way in which free will is how the human decision algorithm feels from inside.  (Being told flatly "one level emerges from the other" just relates them by a magical transition rule, "emergence".)

For free will, the key is to understand how your brain computes whether you "could" do something—the algorithm that labels reachable states.  Once you understand this label, it does not appear particularly meaningless—"could" makes sense—and the label does not conflict with physics following a deterministic course.  If you can see that, you can see that there is no conflict between your feeling of freedom and deterministic physics.  Indeed, I am perfectly willing to say that the feeling of freedom is correct, when the feeling is interpreted correctly.
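
To make that level transition concrete, here is a minimal sketch of such a labeling algorithm. This is my own illustration, not anything from the post; the toy world, the actions, and every function name are invented for the example. A fully deterministic chooser still computes, and uses, a "could" label:

```python
# A toy sketch of how a deterministic system can still compute "could":
# the planner labels states as reachable-by-some-action, even though
# only one action will in fact be taken.  Illustrative only.

def transition(state: int, action: str) -> int:
    """Deterministic world dynamics: each (state, action) has exactly one outcome."""
    return state + {"wait": 0, "step": 1, "jump": 2}[action]

def could_reach(state: int, actions: list) -> dict:
    """Label every state the agent 'could' reach from here.

    The label lives inside the planning computation; physics itself never
    branches.  That is the sense in which 'could' is meaningful without
    contradicting determinism."""
    return {action: transition(state, action) for action in actions}

def choose(state: int, actions: list, goal: int) -> str:
    """Pick the action whose simulated outcome lands closest to the goal.

    The choice is fully determined by this computation, yet the computation
    had to represent the unchosen alternatives in order to arrive at it."""
    options = could_reach(state, actions)
    return min(options, key=lambda action: abs(options[action] - goal))

print(could_reach(0, ["wait", "step", "jump"]))     # {'wait': 0, 'step': 1, 'jump': 2}
print(choose(0, ["wait", "step", "jump"], goal=2))  # jump
```

The whole point of the sketch is the level transition: "could" is a label computed inside the decision algorithm, not an extra ingredient in the physics.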

In the case of morality, once again there are two levels of organization, seemingly quite difficult to fit together:

On one level, there are just particles without a shred of should-ness built into them—just like an electron has no notion of what it "could" do—or just like a flipping coin is not uncertain of its own result.

On another level is the ordinary morality of everyday life: moral errors, moral progress, and things you ought to do whether you want to do them or not.

And in between, the level transition question:  What is this should-ness stuff?

Award yourself a point if you thought, "But wait, that problem isn't quite analogous to the one of free will.  With free will it was just a question of factual investigation—look at human psychology, figure out how it does in fact generate the feeling of freedom.  But here, it won't be enough to figure out how the mind generates its feelings of should-ness.  Even after we know, we'll be left with a remaining question—is that how we should calculate should-ness?  So it's not just a matter of sheer factual reductionism, it's a moral question."

Award yourself two points if you thought, "...oh, wait, I recognize that pattern:  It's one of those strange loops through the meta-level we were talking about earlier."

And if you've been reading along this whole time, you know the answer isn't going to be, "Look at this fundamentally moral stuff!"

Nor even, "Sorry, morality is mere preference, and right-ness is just what serves you or your genes; all your moral intuitions otherwise are wrong, but I won't explain where they come from."

Of the art of answering impossible questions, I have already said much:  Indeed, vast segments of my Overcoming Bias posts were created with that specific hidden agenda.

The sequence on anticipation fed into Mysterious Answers to Mysterious Questions, to prevent the Primary Catastrophic Failure of stopping on a poor answer.

The Fake Utility Functions sequence was directed particularly at the problem of oversimplified moral answers.

The sequence on words provided the first and basic illustration of the Mind Projection Fallacy, the understanding of which is one of the Great Keys.

The sequence on words also showed us how to play Rationalist's Taboo, and Replace the Symbol with the Substance.  What is "right", if you can't say "good" or "desirable" or "better" or "preferable" or "moral" or "should"?  What happens if you try to carry out the operation of replacing the symbol with what it stands for?

And the sequence on quantum physics, among other purposes, was there to teach the fine art of not running away from Scary and Confusing Problems, even if others have failed to solve them, even if great minds failed to solve them for generations.  Heroes screw up, time moves on, and each succeeding era gets an entirely new chance.

If you're just joining us here (Belldandy help you) then you might want to think about reading all those posts before, oh, say, tomorrow.

If you've been reading this whole time, then you should think about trying to dissolve the question on your own, before tomorrow.  It doesn't require more than 96 insights beyond those already provided.

Next:  The Meaning of Right.

 

Part of The Metaethics Sequence

Next post: "The Meaning of Right"

Previous post: "Changing Your Metaethics"

34 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Psy-Kosh · 2008-07-28T04:00:57.000Z · LW(p) · GW(p)

"Even after we know, we'll be left with a remaining question - is that how we should calculate should-ness? So it's not just a matter of sheer factual reductionism, it's a moral question."

I think that fits a different pattern. Specifically, the whole epiphenomenalism/p-zombie thing.

If I actually fully understood everything about how the brain generates that sense of shouldness, not just some qualitative evolutionary history of why it might be there... i.e., if I knew how to build that feeling out of toothpicks and rubber bands and fully understood why what I did did what it did, then I'd actually genuinely understand something I really don't understand now, and that understanding may, itself, tell me something about why I, ahem, should or shouldn't accept that particular computation of shouldness.

comment by TGGP4 · 2008-07-28T04:04:17.000Z · LW(p) · GW(p)

"then why care about human preferences?"  Preference is caring.

"How is it possible to establish any 'ought' at all, in a universe seemingly of mere 'is'?"  You seem to start with the premise that it is possible. I would deny it.

"Adds up to moral normality, including moral errors, moral progress, and things you should do whether you want to or not;"  Why should we demand it add up to "normative normality" any more than "theological normality" or a variety of things that strike us as intuitive?

comment by Caledonian2 · 2008-07-28T04:11:56.000Z · LW(p) · GW(p)
"Intuitions about morality seem to split up into two broad camps"

Why are you concerning yourself with intuitions? Imagine what physics would be like if we had not abandoned intuitive concepts and turned to rational analysis. Just abolish everything done by and after Galileo Galilei, and that'd be it.

"So what we should want, ideally, is a view that:" (various desiderata)

There is nothing so fatal to intellectual inquiry as deciding what the answers are before the questions are asked.

comment by Z._M._Davis · 2008-07-28T04:46:23.000Z · LW(p) · GW(p)

"Nor even, 'Sorry, morality is mere preference, [...]'"

"Nothing is 'mere.'" Clearly morality is not just like any other preference, like one's taste in music or ice cream. Indeed, morality is different enough that we really shouldn't use the word preference. We want to actually understand the mechanisms underlying our notions of moral argument, progress, error, &c. No doubt our discussions of moral issues would be much improved should we be armed with such an understanding.

Still, it seems to me that once you admit materialism, that "goals [...] need minds to be goals in," then that answers the fundamental, ontological, philosophical question. "Is anything really truly universally right, no matter what anyone thinks?" No.

The rest is "mere" cognitive science. I'm looking forward to tomorrow--the details of the proposed algorithm--but I'm not expecting any major surprise. Subhan has it essentially right.

comment by Richard4 · 2008-07-28T05:46:00.000Z · LW(p) · GW(p)

Doug S. - see here for one objection to Fyfe's view.

comment by JulianMorrison · 2008-07-28T06:14:09.000Z · LW(p) · GW(p)

There are at least two other ultimate metamoral trends I know of:

  1. Selfish morality, human-centric. Ayn Rand. A purely selfish agent can expect to prosper best by following and demanding a moral code.

  2. Selfish morality, evolutionary. Per game theory and natural selection, moral apes have more descendants.

comment by Tiiba2 · 2008-07-28T07:38:13.000Z · LW(p) · GW(p)

While spacing out in a networking class a few years ago, it occurred to me that morality is a lot like network protocols, or, in general, computer protocols for multiple agents that compete for resources or cooperate on a task. A compiler assumes that a program will be written in a certain language. A programmer assumes that the compiler will implicitly coerce ints to doubles. If the two cooperate, the result is a compiled executable. Likewise, when I go to a store, I don't expect to meet a pickaxe murderer at the door, and the manager expects me to pay for the groceries. Those who do not obey these rules get the "25 to life" error.

Morality is a protocol for social networks. Some traditions of morality are arbitrary; it really doesn't matter whether people drive on the right or on the left. However, some moralities are so bogus that societies using them wouldn't last a week. If anyone drives on the left, EVERYONE had better drive on the left. It's possible to create a workaround for any one action (there used to be societies of cannibals!), but some complete moralities are sufficiently broken that you won't find any affluent civilizations that use them.
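
The driving example is a pure coordination game, and the point survives a toy calculation. Here is a minimal sketch (my own illustration, assuming random pairwise encounters; nothing here is from the comment) in which either convention scores perfectly and only a mixed population fails:

```python
# A toy coordination game: "left" and "right" are equally good
# conventions, but only if shared by everyone.

def crash_rate(population: list) -> float:
    """Probability that two randomly drawn drivers are on opposite sides."""
    left = population.count("left") / len(population)
    return 2 * left * (1 - left)

print(crash_rate(["left"] * 100))                  # 0.0 - pure convention works
print(crash_rate(["right"] * 100))                 # 0.0 - the other one works too
print(crash_rate(["left"] * 50 + ["right"] * 50))  # 0.5 - mixed convention fails
```

That is the sense in which the particular protocol choice is arbitrary while having a shared protocol is not.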

Moral progress/error cannot be judged in absolute terms, such as relative to the Bible. It must be judged based on the desires of the participants of the social network. However, this might be a two-parameter function, the other parameter being the definition of "participant".

How's this?

And screw Belldandy. The Lord of Nightmares can kick her ass.

(My god can beat up your god?)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-28T10:49:58.000Z · LW(p) · GW(p)

Alonzo Fyfe looks like he might be a fellow traveler, though not as much of one as Gary Drescher. Still, I'm not sure his conclusions add up to moral normality.

"There is nothing so fatal to intellectual inquiry as deciding what the answers are before the questions are asked."

I didn't say that my four desiderata were what I wanted. I said they were what we should want. Back when I was first investigating, I honestly didn't know it would come out that way - and in fact thought quite differently - but now I know it's how it should come out.

Also, Caledonian, as others have been complaining about your constant trolling and deliberate misinterpretations, I am once again considering banning you. Actually, I've pretty much decided to do that, once I'm done with today's post - you should be given a chance to see that and say anything relevant, on the faint chance you have anything relevant to say once you see the actual theory. After that, bye.

comment by Caledonian2 · 2008-07-28T12:16:34.000Z · LW(p) · GW(p)

"I didn't say that my four desiderata were what I wanted. I said they were what we should want. Back when I was first investigating, I honestly didn't know it would come out that way - and in fact thought quite differently - but now I know it's how it should come out."

Claiming to want things requires a lot less justification than claiming we should want things. You're substituting assertions that are very hard to support for ones that are fairly easy - and you're not providing enough support to justify the easy ones, much less the hard ones. And in the process, you're increasing the relevance of my criticism. Instead of my having a moderately good point, you've made me have an incredibly good point.

"as others have been complaining about your constant trolling"

I don't expect you to care, given your general contempt for defining the words and concepts you sling around, but that word does not mean what you think it does.

It's a lovely new wardrobe you have there, Emperor Eliezer, but it doesn't leave much to the imagination.

comment by Roko · 2008-07-28T12:18:03.000Z · LW(p) · GW(p)

I've been thinking about this for a while, and I have some ideas that people here may find useful. You may want to look at these posts I have written:

Transhumanism and the need for realist ethics

The road to Universal Ethics: Universal Instrumental Values

The error of contemporary ethics: values from nowhere?

A summary of what I have written so far: Any agent which interacts with the world to achieve certain goals (<==> follows some set of "terminal values") will pursue a certain set of instrumental values, or subgoals. It is a non-trivial fact about the universe we live in that these subgoals show a fairly weak dependence on the supergoals that motivated them. Steve Omohundro realized this in his paper on "The nature of self-improving artificial intelligence".

I realized, independently, that this line of argument may well apply to a civilization pursuing a certain notion of "the good life": the instrumental values that they pursue may turn out to be fairly independent of their terminal values. I quote:

Let U denote a utility function which represents some idea of what is intrinsically valuable, and write I(U) for the notion of instrumental value that U gives rise to. For any notion of value which grows with the number of people alive, “Progress” (progress in physics, engineering, economics, communication, etc.) always becomes an instrumental value. For example, if R = the number of people alive who have red hair, then I(R) includes “Progress” as defined above. If Z = number of prayers which are said to the god Zeus, then I(Z) also includes “Progress”.

Any notion of instrumental value that includes “Progress” should obviously also include “Knowledge”, and it should include “Creativity”, because one moment of creative genius can equate to a huge amount of progress, and it is an inherent property of creativity that you cannot predict in advance where it will come from. Certain personal (and even political) types of “Freedom” and “Diversity” are therefore also included – because a group of people who all think in the same ways is less creative than a diverse group.

Even some kinds of intrinsic value which make no reference to people will include instrumental values which require people. For example, if P = “the number of paperclips in the universe”, then I(P) includes “Knowledge” and “Progress”. But then it also includes “Creativity”, “Freedom”, and “Diversity”.
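
A compact restatement of the claim above; the formula is my extrapolation from Roko's prose definitions (he defines U and I(U) but never writes it out), with n standing for the number of people alive:

```latex
% n = number of people alive; U = utility function over world-states;
% I(U) = the instrumental values (subgoals) that maximizing U gives rise to.
\[
I(U) = \{\, g \mid g \text{ is a subgoal useful for maximizing } U \,\},
\qquad
\frac{\partial U}{\partial n} > 0 \;\Longrightarrow\; \text{Progress} \in I(U)
\]
```

On this reading, R (counting redheads) and Z (counting prayers to Zeus) both satisfy the condition on the left of the implication, which is why both yield “Progress”, and with it “Knowledge”, “Creativity”, “Freedom”, and “Diversity”, as instrumental values.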

comment by PK2 · 2008-07-28T15:30:19.000Z · LW(p) · GW(p)

@Tiiba: I think you nailed it on the head. That is pretty much my view, but you worded it better than I ever could. There is no The Meta-Morality. There are multiple possible memes (moralities and meta-moralities), and some work better than others at producing civilizations and keeping them from falling apart.

@Eliezer: I am very interested in reading your meta-morality theory. Do you think it will be universally compelling to humans, or at least to non-brain-damaged humans? Assuming there are humans out there who would not accept the theory, I am curious how those who do accept it 'should' react to them.

As for myself, I have my own idea of a meta-morality, but it's kind of rough at the moment. The gist of it involves bubbles. The basic bubble is the individual; then individual bubbles come together to form a new bubble containing the previous bubbles: families, etc., on up to the country bubbles and the world bubble. Any bubble can run under its own rules as long as it doesn't interfere with other bubbles. If there is interference, the smaller bubbles usually have priority over their own content. So, for example, no unconsented violence, because individual bubbles have priority when it comes to their own bodies (the content of individual bubbles), unless it's the only way to prevent them from harming other individuals. Private gay stuff between 2 consenting adults is OK because it's 2 individual bubbles coming together to make a third bubble, and they have more say about their rules than anyone on the outside. Countries can have their own laws and rules, but they may not hold or harm any smaller bubbles within them. At most they could expel them. Yeah, it's still kind of rough. I've dreamed up this system with the idea that a centralized superintelligence would be enforcing the rules. It's probably not feasible without one. If this seems incomprehensible, just ignore this paragraph.
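
For what it's worth, the priority rule in the paragraph above can be read as a simple nested data structure. A rough sketch (my own formalization of PK2's prose; the class, rule, and example names are all invented):

```python
# "Bubbles" as nested groups, where the smallest bubble containing the
# disputed content has priority over the bubbles enclosing it.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Bubble:
    name: str
    rules: dict = field(default_factory=dict)     # action -> allowed (bool)
    children: list = field(default_factory=list)  # smaller bubbles inside

def contains(bubble: Bubble, member: str) -> bool:
    """True if `member` is this bubble or sits anywhere inside it."""
    return bubble.name == member or any(contains(c, member) for c in bubble.children)

def verdict(bubble: Bubble, member: str, action: str) -> Optional[bool]:
    """Verdict of the innermost bubble containing `member` with a rule on `action`."""
    for child in bubble.children:
        if contains(child, member):
            inner = verdict(child, member, action)
            if inner is not None:
                return inner             # smaller bubble overrides this one
    return bubble.rules.get(action)      # fall back to this bubble's own rule

alice = Bubble("alice", rules={"tattoo": True})
country = Bubble("country", rules={"tattoo": False}, children=[alice])
print(verdict(country, "alice", "tattoo"))  # True: the individual bubble wins
```

The design choice doing the work is that conflict resolution walks inward: the smallest bubble containing the disputed content gets the final say.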

comment by Pete_Carlton · 2008-07-28T16:29:51.000Z · LW(p) · GW(p)
>> "Or if "Morality is mere preference!" then why care about human preferences? How is it possible to establish any "ought" at all, in a universe seemingly of mere "is"?

I don't think it's possible, but why is that a problem? Can't all moral statements be rewritten as conditionals? E.g., "You ought not to murder" -> "If you murder someone, we will punish you".

You might say these conditionals aren't justified, but what on earth could it mean to say they are or are not justified, other than whether they do or do not eventually fit into a "fixed given" moral scheme? Maybe we do not need to justify our moral preferences in this sense.

Replies from: DanielLC
comment by DanielLC · 2012-07-15T06:26:09.701Z · LW(p) · GW(p)

"Can't all moral statements be rewritten as conditionals? E.g., 'You ought not to murder' -> 'If you murder someone, we will punish you'."

Not really. Moral statements need to tell you what to do. The example you gave only helps make predictions. I know murdering will result in my punishment, but unless I know whether being punished is good or bad, this doesn't tell me whether committing murder is good or bad.

comment by Doug_S. · 2008-07-28T17:02:27.000Z · LW(p) · GW(p)

Off topic:

If it matters, I vote for not banning Caledonian. The last thing we need is for this blog to turn into an echo chamber. Wasn't there some post earlier about the value of keeping a dissenter around in a conversation?

comment by Patrick_(orthonormal) · 2008-07-28T18:10:37.000Z · LW(p) · GW(p)

Well, I find that my metamorality meets those criteria, with one exception.

To reiterate once more: I think that the foundations of morality as we understand it are certain evolved impulses like the ones we can find in other primates (maternal love, the desire to punish a cheater, etc.). These are like other emotions, with one key difference, a social component: we expect and rely on others having the same reaction, and accordingly we experience other emotions as more subjective and our moral impulses as more objective.

Note that when I'm afraid of something, and you're not, this may surprise me but doesn't anger me; but if I feel moral outrage at something, and you don't, then I'm liable to get angry with you.

But of course our moralities aren't just these few basic impulses. Given our capacity for complex thought and for passing down complex cultures, we've built up many systems of morality that try to integrate all these impulses. It's a testament to the power of conscious thought to reshape our very perceptions of the world that we can get away with this—we foment one moral impulse to restrain another when our system tells us so, and we can work up a moral sentiment in extended contexts when our system tells us to do so. (When we fail to correctly extrapolate and apply our moral system, we later think of this as a moral error.)

Of course, some moral systems cohere logically better than others (which is good if we want to think of them as objective), some have better observable consequences, and some require less strenuous effort at reinterpreting experience. Moving from one moral system to another which improves in some of these areas is generally what we call "moral progress".

This account has no problems with #2 and #3; I don't see an "impossible question" suggesting itself (though I'm open to suggestions); the only divergence from your desired properties is that it only claims that we can hardly help but believe that some things are right objectively, whether we want them or not. It's not impossible for an alien species to evolve to conscious thought without any such concept of objective morality, or with one that differs from ours on the most crucial of points (say, our immediate moral pain at seeing something like us suffer); and there'd be nothing in the universe to say which one of us is "right".

In essence, I think that Subhan is weakly on the right track, but he doesn't realize that there are some human impulses stronger than anything we'd call "preference", or that what is at stake is a mix of moral impulse, reasoning, and reclassifying of experience, much more complex than the interactions he supposes. Since we as humans have in common both the first-order moral impulses and the perception that these are objective and thus ought to be logically coherent, we aren't in fact free to construct our moral systems with too many degrees of freedom.

Sorry for the overlong comment. I'm eager to see what tomorrow's post will bring...

comment by IL · 2008-07-28T18:15:18.000Z · LW(p) · GW(p)

I second Doug.

comment by Unknown · 2008-07-28T18:27:27.000Z · LW(p) · GW(p)

I vote in favor of banning Caledonian. He isn't just dissenting, which many commenters do often enough. He isn't even trying to be right, he's just trying to say Eliezer is wrong.

comment by Adam2 · 2008-07-28T19:16:39.000Z · LW(p) · GW(p)

I'm really wondering how Kantian this is going to be. One of my dream projects has always been naturalizing a proper understanding of Kantian ethics, and the background conditions you're starting from are pretty close to the ones he starts from. And he also has a very big place in his theory for free will.

comment by Tiiba2 · 2008-07-28T20:29:36.000Z · LW(p) · GW(p)

Well, belligerent dissent can actually be polarizing.

But although Caledonian makes accusations that I find more than unfounded, I've seen him make sense, too. Overall, I don't feel that his presence is so deleterious as to require banishment.

comment by Richard4 · 2008-07-28T20:29:36.000Z · LW(p) · GW(p)

I second Unknown. It's worth noting that trolls like Caledonian also deter other (more reasonable) voices from joining the conversation, so it's not at all clear that his contributions promote dissent on net. (And I think it is clear that they don't promote reasonable dissent.)

comment by Z._M._Davis · 2008-07-28T20:46:29.000Z · LW(p) · GW(p)

Seconding Doug, IL, and Tiiba re Caledonian.

comment by AnnaSalamon · 2008-07-28T20:54:28.000Z · LW(p) · GW(p)

Seconding Doug, IL, Tiiba, and Z. M. Davis re: Caledonian.

comment by TGGP4 · 2008-07-28T21:23:13.000Z · LW(p) · GW(p)

I am against banning Caledonian. He's impolite, but not a troll. He seems to have genuine disagreements (though it sometimes takes prodding for him to elaborate on what those are) and doesn't spam with lots of posts or even really long ones. It's actually the opposite: his comments are usually too brief, as if he thinks any moron should get the obvious point he's making. He doesn't disrupt the conversation, and in numerous threads existing dialogues have proceeded as if he wasn't even there. There are plenty of comments that just consist of "Eliezer, you're awesome", as should be expected given Eliezer's awesomeness. I think it's good to have people around who don't hesitate to point out even minor errors.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-28T21:33:38.000Z · LW(p) · GW(p)

My primary problem with Caledonian is his systematic misrepresentation of everything I say, which is either deliberate dishonesty or so self-deceiving as to amount to dishonesty. A new reader coming across a post may be misled by this, not knowing Caledonian, and thinking he is summarizing things I've said elsewhere.

If Caledonian were an obvious troll who couldn't spell, I would worry less about his misleading new readers. As it is... well, he managed to get himself banned from Pharyngula, and I'm thinking maybe PZ Myers had it right.

I'll take into account these disagreements, but I don't think there's any shortage of disagreement here, and Caledonian is looking like more trouble than he's worth. Maybe I'll institute a sudden-death rule: the next misrepresentation gets him gone.

comment by Laura B (Lara_Foster) · 2008-07-28T22:18:16.000Z · LW(p) · GW(p)

I think Caledonian should stay. Even if he does misrepresent Eliezer, he offers an opportunity to correct misconceptions that others might have regarding what Eliezer was trying to say... And on some rare occasions, he is right...

comment by Doug_S. · 2008-07-28T22:24:20.000Z · LW(p) · GW(p)

On the other hand, I don't know how much effort goes into moderating Caledonian's comments, as I don't see the posts that get deleted. This could be giving me a more positive view of his comments than I would otherwise have.

comment by Laura B (Lara_Foster) · 2008-07-28T22:36:44.000Z · LW(p) · GW(p)

Oh, back on topic: I think the exploration of metamorality will need to include people who are only softly sociopathic but not 'brain damaged'. Here is an example: An ex-boyfriend of mine claimed to have an 'empathy switch', by which he had complete and total empathy with the few chosen people he cared about, and complete zero empathy with everyone else. To him, killing millions of people half-way around the world in order to get a super-tasty toasted pastrami and cheese sandwich would be a no-brainer. Kill the mother fuckers! He didn't know them before, he won't know them afterwards, what difference does it make? The sandwich, on the other hand... well, that will be a fond memory indeed! I think many people actually live by this moral code, but are simply too dishonest with themselves to admit it. What says metamorality to that???

comment by Brian_Jaress2 · 2008-07-29T00:10:25.000Z · LW(p) · GW(p)

Eliezer, please don't ban Caledonian.

He's not disrupting anything, and doesn't seem to be trying to.

He may describe your ideas in ways that you think are incorrect, but so what? You spend a lot of time describing ideas that you disagree with, and I'll bet the people who actually hold them often disagree with your description.

Caledonian almost always disagrees with you, but treats you no differently than other commenters treat each other. He certainly treats you better than you treat some of your targets. For example, I've never seen him write a little dialogue in which a character named "Goofus" espouses exaggerated versions of your ideas.

In this case, Caledonian seems to think that your four criteria are aimed at reconciling the two clashing intuitions and that it's a mistake to set such a goal. Well, so what? If that's not what you're trying to do, you fooled me as well.

To me, Caledonian just seems to have a very different take on the world than you, and he expresses it bluntly.

comment by Nick_Tarleton · 2008-07-29T00:39:32.000Z · LW(p) · GW(p)

I'm with Brian, Lara, and TGGP. Delete the content-free negations*; point out the misrepresentations; but he still makes reasonable points just often enough that I'd rather have him around.

  * I don't see the need to remove flat negations from posts that do contain content, though. They're unpleasant, but we can take it.

comment by steven · 2008-07-29T02:14:33.000Z · LW(p) · GW(p)

I'm not sure I want to see Caledonian banned, but I would love to see explicitly very elitist fora created, perhaps like the erstwhile Polymath mailing list.

comment by TGGP4 · 2008-07-29T05:34:41.000Z · LW(p) · GW(p)

steven, you should read Hopefully Anonymous' blog. He is also very interested in that.

comment by Hopefully_Anonymous · 2008-07-29T08:11:15.000Z · LW(p) · GW(p)

Who cares if Caledonian is banned from here? Hopefully he'll post more on my blog as a result. I've never edited or deleted a post from Caledonian or anyone else (except to protect my anonymity). Neither has TGGP, to my knowledge. As I've posted before on TGGP's blog, I think there's a hierarchy of blogs, and blogs that ban and delete for anything other than content that's illegal, can create liability, or is botspam aren't at the top of that hierarchy.

If no post of Caledonian's was ever edited or deleted from here (except perhaps for excessive length), this blog would be just as good. Maybe even better.

comment by MichaelAnissimov · 2008-07-31T07:18:34.000Z · LW(p) · GW(p)

You can't ban anyone from commenting on a blog. -.- They can just change their name and/or come back with a different IP address. Shame on the commenters here for not realizing this incredibly obvious fact.

Tiiba: 2nd re: LoN.