Attention Less Wrong: We need an FAQ

post by Kevin · 2010-04-27T10:06:48.818Z · LW · GW · Legacy · 111 comments


Less Wrong is extremely intimidating to newcomers, and as Academian pointed out, something that would help is a document in FAQ form intended for newcomers. Later we can decide how best to deliver that document to new Less Wrongers, but for now we can edit the existing (narrow) FAQ to make the site less scary and the standards more evident.

Go ahead and make bold edits to the FAQ wiki page or use this post to discuss possible FAQs and answers in agonizing detail.

111 comments

Comments sorted by top scores.

comment by Jack · 2010-04-27T15:33:48.078Z · LW(p) · GW(p)

So, am I the only one who thinks new users shouldn't be expected to read the sequences before participating? There are works of brilliance there but there are also posts that are far from required reading.

I mean, if a cognitive psychologist shows up and wants to teach us about some cool bias why the hell would she need to read about many worlds or Eliezer's coming of age as a rationalist?

What the FAQ should do is say what topics we've covered, what we think about them and from there link to posts in the sequences where our positions on those topics are covered in more depth. So if someone shows up they can look over the material, decide they want to talk to us about physics and read the posts on physics, and then say what they want to say.

Besides, if someone is just reading the new posts as they come they'll eventually pick up most of what is in the sequences just from links and repetition.

Replies from: byrnema, thomblake, RobinZ, Airedale
comment by byrnema · 2010-04-28T13:21:35.234Z · LW(p) · GW(p)

If I comment on Less Wrong, it's because factors conspire to make it worthwhile for me. That is, I participate because I find it fun or helpful. Often, I also find reading background material fun or helpful. But my response when I'm caught not having read something -- this is thought but not spoken -- is that I will be dutiful about reading all the background material when I am being 'paid by the hour'. I am willing to suffer the downvotes; the links that come with them are most efficient for me in the long run and help others who also don't have a mental map of all that has been discussed here.

I have a "matching effort" policy, here and in life in general, where I exert more and more effort on a task as I find that the effort is rewarded. Expecting people to do a lot of work upfront, even before they have formed a positive opinion about Less Wrong, is unrealistic. Some people like to lurk for a while, but I presume there are others like me that want to immediately engage in the active experience of Less Wrong, or not bother. This is probably just a personality difference, whether people prefer to prepare first or just dive in.

comment by thomblake · 2010-04-27T15:39:37.086Z · LW(p) · GW(p)

In the current state of the FAQ as I read it, reading the sequences is a strong suggestion, and new users are warned that posting without reading them first may result in downvotes and links to the relevant posts if they miss something we consider obvious.

That said, I think we should have a simple, short-inferential-distance version of the main points of the relevant sequences (ideally without distracting crosslinking) that someone could skim over to make sure there aren't any major gaps in knowledge to worry about.

comment by RobinZ · 2010-04-27T16:59:03.858Z · LW(p) · GW(p)

So, am I the only one who thinks new users shouldn't be expected to read the sequences before participating? There are works of brilliance there but there are also posts that are far from required reading.

You are far from the only such user - I agree with the edits you made to remove this propositional content from the FAQ.

comment by Airedale · 2010-04-27T15:58:00.713Z · LW(p) · GW(p)

I had just posted this on the same topic in the simultaneous and somewhat overlapping discussion on the Proposed New Features thread.

I agree that new readers will come in with different interests and areas of expertise, and strongly suggesting that all of them read all of the sequences before posting (or even reading!) Less Wrong doesn't seem to make a lot of sense, if we're really trying to grow the community. It seems like a good idea to edit the FAQ in the way you suggested. I also suggested thread discussions for answering questions and directing new readers to reading that would be particularly useful to them; at least, I would suggest that if it turns out people here are generally willing to contribute to that sort of thread.

comment by Jack · 2010-04-27T15:38:23.686Z · LW(p) · GW(p)

This is Eliezer's baby... but making the second question about him kind of screams "cult!" Objections to changing it?

Replies from: thomblake
comment by thomblake · 2010-04-27T15:41:54.290Z · LW(p) · GW(p)

Wholeheartedly agree. I doubt most people would care who he is upon encountering the site, though it should be somewhere on the FAQ, if only because his karma is so high my first hypothesis would be that he's a bot that's learned to game the system.

comment by Alicorn · 2010-04-27T17:01:13.573Z · LW(p) · GW(p)

Why is "claim an objective morality" on the list of things you shouldn't post against consensus about? I'm a moral realist; historically this has gotten me only slightly heckled, not decried as an obvious amateur.

Replies from: Yvain, thomblake, Will_Newsome
comment by Scott Alexander (Yvain) · 2010-04-28T12:17:45.376Z · LW(p) · GW(p)

How about "claim a universally compelling morality"?

Replies from: thomblake, Jack
comment by thomblake · 2010-04-28T12:54:51.003Z · LW(p) · GW(p)

How about "claim a universally compelling morality"?

Sure. Related: there are no universally compelling arguments

comment by Jack · 2010-04-28T15:09:40.575Z · LW(p) · GW(p)

What exactly would the domain for a universally compelling morality be? Too large a domain and it is trivially false, too small a domain and it might even be true.

comment by thomblake · 2010-04-27T17:22:45.239Z · LW(p) · GW(p)

Wait, doesn't Eliezer claim there's an objective morality?

Replies from: Tyrrell_McAllister, Matt_Simpson, ata, gimpf
comment by Tyrrell_McAllister · 2010-04-28T12:36:12.379Z · LW(p) · GW(p)

I would describe Eliezer's position as

  • standard relativism,

  • minus the popular confusion that relativism means that you would or could choose to find no moral arguments compelling,

  • plus the belief that nearly all humans would, with sufficient reflection, find nearly the same moral arguments compelling because of our shared genetic heritage.

Eliezer objects to being called a relativist, but I think that this is just semantics.

Replies from: Richard_Kennaway, Jack, thomblake
comment by Richard_Kennaway · 2010-04-29T13:20:25.163Z · LW(p) · GW(p)

Eliezer objects to being called a relativist, but I think that this is just semantics.

The third bullet goes so far beyond relativism that it seems quite justified to deny the word. If just about everyone everywhere is observed to have a substantial commonality in what they think right or wrong (whether or not genetic heritage has anything to do with it), then that's enough to call it objective, even if we do not know why it is so, how it came to be, or how it works. Knowledge may be imperfect, and people may disagree about it, but that does not mean that there is nothing that it is knowledge about.

We can imagine Paperclippers, Pebblesorters, Baby Eaters, and Superhappies, but I don't take these imagined beings seriously except as interesting thought experiments, to be trumped if and when we actually encounter intelligent aliens.

(BTW, regarding accessibility to newcomers: I just made four references that will be immediately obvious to any long-time reader, but completely opaque to any newcomer. A glossary page would be a good idea.)


comment by Jack · 2010-04-28T14:56:06.969Z · LW(p) · GW(p)

He's a subjectivist as well.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-28T15:04:17.332Z · LW(p) · GW(p)

I think that's what Tyrrell means by standard relativism.

Replies from: Jack
comment by Jack · 2010-04-28T15:05:10.059Z · LW(p) · GW(p)

Well they're different.

And Eliezer is both.

Replies from: thomblake, Tyrrell_McAllister, Matt_Simpson
comment by thomblake · 2010-04-28T19:14:56.506Z · LW(p) · GW(p)

This partially depends on where you place 'ethics'. If ethics is worried about "what's right" in Eliezer's terms, then it's not relativist at all - the pebble-sorters are doing something entirely different from ethics when they argue.

However, if you think the pebble-sorters are trying to answer the question "what should I do" and properly come up with answers that are prime, and you think that answering that question is what ethics is about, then Eliezer is some sort of relativist.

And the answers to these questions will inform the question about subjectivism. In the first case, clearly what's right doesn't depend upon what anybody thinks about what's right - it's a non-relativist objectivism.

In the second case, there is still room to ask whether the correct answer to the pebblesorters asking "what should I do" depends upon their thoughts on the matter, or if it's something non-mental that determines they should do what's prime; thus, it could be an objective or subjective relativism.

comment by Tyrrell_McAllister · 2010-04-28T19:01:19.731Z · LW(p) · GW(p)

I don't know of any relativists who aren't subjectivists. That article points out that non-subjectivist relativism is a logical possibility, but it doesn't give any actual examples of someone defending such a position. I wonder if any exist.

Replies from: Jack, Matt_Simpson, byrnema, thomblake
comment by Jack · 2010-04-28T19:31:05.609Z · LW(p) · GW(p)

Hobbes might be a candidate if you're okay with distinguishing laws and dictates from the mental states of rulers.

comment by Matt_Simpson · 2010-04-28T22:02:40.924Z · LW(p) · GW(p)

The article does give an example: cultural relativism. It's objective in that it doesn't depend on the mind of the individual, but it's still relative to something: the culture you are in.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-04-28T23:57:39.857Z · LW(p) · GW(p)

The article does give an example: cultural relativism.

That is not how I read it. There's a big parenthetical aside breaking up the flow, but excising that leaves

An individualistic relativism sees the vital difference as lying in the persons making the utterance; a cultural relativism sees the difference as stemming from the respective cultures that the speakers inhabit. . . . In either case, it may be that what determines the difference in the two contexts is something “mind-dependent”—in which case it would be subjectivist relativism—but it need not be.

(Bolding added.) So, either individualistic or cultural relativisms can be subjectivist. That leaves the possibility, in principle, that either could be non-subjectivist, but the article gives no example of someone actually staking out such a position.

You continue:

It's objective in that it doesn't depend on the mind of the individual, but it's still relative to something: the culture you are in.

I think that cultural relativism is mind-dependent in the sense that the article uses the term.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-29T07:12:30.431Z · LW(p) · GW(p)

ok, location relativism then. It doesn't depend on what's going on inside your head, but it's still relative.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-04-29T18:24:48.405Z · LW(p) · GW(p)

ok, location relativism then. It doesn't depend on what's going on inside your head, but it's still relative.

But is anyone a location-relativist for reasons that don't derive from being a cultural-relativist or a "sovereign-command" relativist (according to which the moral is whatever someone with lawful authority over you says it is)?

Now that I think of it, though, certain kinds of non-subjectivist relativism are probably very common, if rarely defended by philosophers. I'm thinking of the claim that morality is whatever maximizes your genetic fitness, or morality is whatever maximizes your financial earnings (even if you have no desire for genetic fitness or financial earnings).

These are relativisms because something might increase your genetic fitness (say) while it decreases mine. But they are not subjectivist because they measure morality according to something independent of anyone's state of mind.

comment by byrnema · 2010-04-28T21:38:28.557Z · LW(p) · GW(p)

I'm confused by the terminology, but I think I would be a relativist objectivist.

I certainly think that morality is relative -- what is moral is agent-dependent -- but whether or not the agent is behaving morally is an objective fact about that agent's behavior, because the behavior either does or doesn't conform with that agent's morality.

But I don't think the distinction between a relativist objectivist and a relativist subjectivist is terribly exciting: it just depends on whether you consider an agent 'moral' if it conforms to its morality (relativist objectivist) or yours (relativist subjectivist).

But maybe I've got it wrong, because this view seems so reasonable, whereas you've indicated that it's rare.

Replies from: Jack, LucasSloan, Matt_Simpson
comment by Jack · 2010-04-28T21:48:52.796Z · LW(p) · GW(p)

The key phrase for subjectivism is "mind dependent" so if you think other people's morality comes from their minds then you are a relativist subjectivist.

I just realized I don't think people should conform to their own morality; I think people should conform to my morality, which I guess would make me a subjective non-relativist.

comment by LucasSloan · 2010-04-29T00:21:19.986Z · LW(p) · GW(p)

So you believe that the word morality is a two-place word and means what an agent would want to do under certain circumstances? What word do you use to mean what actually ought to be done? The particular thing that you, and to a large degree all humans, would want to do under specified circumstances? Or do you believe there isn't anything that should be done other than what whatever agents exist want? Please note that that position is also a statement about what the universe ought to look like.

Replies from: byrnema
comment by byrnema · 2010-04-29T04:13:29.125Z · LW(p) · GW(p)

Yes, morality is a two-place word -- the evaluation function of whether an action is moral has two inputs: agent, action. "Agent" can be replaced by anything that conceivably has agency, so morality can be considered system-dependent, where systems include social groups and all humanity, etc.

I wouldn't say morality is what the agent wants to do, but is what the agent ought to do, given its preferences. So I think I am still using it in the usual sense.

What word do you use to mean what actually ought to be done? The particular thing that you, and to a large degree all humans, would want to do under specified circumstances?

I can talk about what I ought to do, but it seems to me I can't talk about what another agent ought to do outside their system of preferences. If I had their preferences, I ought to do what they ought to do. If they had my preferences, they ought to do what I ought to do. But to consider what they ought to do, with some mixture of preferences, is incoherent.

I can have a preference for what another agent does, of course, but this is different than asserting a morality. For example, if they don't do what I think is moral, I'm not morally culpable. I don't have their agency.

Replies from: LucasSloan
comment by LucasSloan · 2010-04-29T04:56:18.060Z · LW(p) · GW(p)

As far as I can tell, we don't disagree on any matter of fact. I agree that we can only optimize our own actions. I agree that other agents won't necessarily find our moral arguments persuasive. I just don't agree that the words moral and ought should be used the way you do.

To the greater LW community: Is there some way we can come up with standard terminology for this sort of thing? I myself have moved toward using the terminology used by Eliezer, but not everyone has. Are there severe objections to his terminology and if so, are there any other terminologies you think we should adopt as standard?

comment by Matt_Simpson · 2010-04-28T21:58:59.795Z · LW(p) · GW(p)

You're thinking of the wrong sense of objective. An objective morality, according to this article, is a morality that doesn't depend on the subject's mind. It depends on something else. I.e., if we were trying to determine what should_byrnema is, we wouldn't look at your preferences; instead we would look somewhere else. So for example:

  • A nonrelativist objectivist would say that we would look at the one true universally compelling morality that's written into the fabric of reality (or something like that). So should_byrnema is just should, period.

  • A relativist objectivist might say (this is just one example - cultural relativism) that we would look for should_byrnema in the culture that you are currently embedded in. So should_byrnema is should_culture.

I'm not sure that subjective nonrelativism is a possibility though.
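
In table form (my rough sketch of how the article's distinctions line up, using examples from this thread; not the article's own table):

                   nonrelative                       relative
    objective      one true morality written         cultural or location
                   into the fabric of reality        relativism (mind-independent)
    subjective     unclear whether possible          morality as an individual's
                   (see below)                       preferences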

Replies from: byrnema, thomblake
comment by byrnema · 2010-04-28T23:52:32.391Z · LW(p) · GW(p)

I think "subjective" means based on opinion (a mind's assessment).

If Megan is moral if she thinks she's moral, then the morality of Megan is subjective and depends on her mind. If Megan is moral if I think she's moral, then it's subjective and depends on my mind.

I think that whether an agent is moral or not is a fact, and doesn't depend upon the opinion/assessment of any mind. But we would still look at the agent's preferences to determine the fact. I thought this was already described by the word 'relative'.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-29T07:16:39.326Z · LW(p) · GW(p)

I think "subjective" means based on opinion (a mind's assessment).

"Subjective" has many meanings. The article uses "subjective" to mean dependent on the mind in any way. Not just a mind's assessment.

Given this definition of subjective, the article would classify your last paragraph as an example of subjective relativism.

Replies from: byrnema
comment by byrnema · 2010-04-29T12:21:39.362Z · LW(p) · GW(p)

I see. Just to clarify fully: in my last paragraph, morality depends on the mind because a mind is required for preferences and agency? Are there any exceptions to this?

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-29T13:21:39.200Z · LW(p) · GW(p)

morality depends on the mind because a mind is required for preferences and agency?

yep

Are there any exceptions to this?

I dunno, my concept of mind is too fuzzy to have an answer for that.

Replies from: byrnema
comment by byrnema · 2010-04-29T15:25:04.840Z · LW(p) · GW(p)

Thanks, I do understand the framework you're using, and can now say I don't agree with it.

First, one wouldn't say that morality is subjective just because the morality of an entity depends upon its preferences and agency. Even an objective morality would usually apply moral judgments only to entities with preferences and agency.

Second, subjective should mean that Megan's action could be considered moral by Fred but not moral by Tom. In other words, the morality is determined by and depends upon someone's mind. In the relative objective morality I've been speaking of, neither Megan, Fred nor Tom gets to decide if Megan's action is moral. The morality of the action is a fact of and determined by the system of Megan, her action, and the context of that action. The morality of her action is something that could be computed by something without a mind, and the morality of her action doesn't depend on the computation actually being done.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-29T18:02:32.765Z · LW(p) · GW(p)

I'm not using any framework here, just definitions. The article defined relative and subjective in certain ways in order to classify moral systems, and I've just been relating how the article defines these terms. There's only semantics here, no actual inference.

Replies from: byrnema
comment by byrnema · 2010-04-29T18:48:40.710Z · LW(p) · GW(p)

Using your framing regarding what it is that we are discussing (framings cannot be avoided), perhaps I disagree with your interpretation of the phrase 'mind dependent'.

The article writes:

In either case, it may be that what determines the difference in the two contexts is something “mind-dependent”—in which case it would be subjectivist relativism—but it need not be. Perhaps what determines the relevant difference is an entirely mind-independent affair, making for an objectivist relativism.

The article does not actually define mind-dependent. I think that by "mind-dependent", the article means that it is a mind that is doing the calculation and assigning the morality, whereas if I am understanding your position (for example), you seem to think that "mind-dependent" means that an entity being labeled moral must have a mind. In the first paragraph of my last comment, I argued that this sense of mind-dependent would make "objective morality" more or less moot, because we hardly ever talk about the morality of mindless entities.

Tyrrell McAllister writes:

But they are not subjectivist because they measure morality according to something independent of anyone's state of mind.

His understanding of subjectivist also seems to interpret 'mind-dependent' as requiring a mind to do the measuring.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-29T19:41:17.952Z · LW(p) · GW(p)

We seem to be talking past each other, but I'm not entirely sure where the misunderstanding is, so I'll just lay out my view of what the article says again in different terms.

A morality is subjective iff you have to look at the mind of an agent in order to determine whether they are moral. e.g., morality as preferences. A morality is objective iff you don't look at the mind of an agent in order to determine whether they are moral. For example, a single morality "written into the fabric of the universe," or a morality that says what is moral for an agent depends on where in the universe the agent happens to be (note that the former is not relative and the latter is, but I don't think we're disagreeing on what that means).

In both cases, the only type of thing being called moral is something with a mind (whatever "mind" means here). The difference is whether or not you have to look inside the mind to determine the morality of the agent.

So I'm not saying that mind dependent vs. independent is the difference between having a mind and not having a mind; it's the difference between looking at the mind that the agent is assumed to have and not looking at it.

Replies from: byrnema, Jack
comment by byrnema · 2010-04-29T20:09:49.839Z · LW(p) · GW(p)

That is more clear, but still describes what I thought I understood of your position. It's rather unconventional, so it took me a while to be certain what you meant.

I think that 'subjective' means that a mind is assessing the morality. The key idea is that different minds could assign different moral judgements, so the judgement is mind-dependent.

In contrast, any morality that considers the state of an agent's mind in the computation of that agent's morality can be either objective or subjective.

For example, suppose it was written on a tablet, "the action of every agent is moral unless it is done with the purpose of harming another agent". The tablet-law is still objective, but the computation of the morality of an action depends on the agent's intention (and mind).

I just experienced a flicker of a different understanding that helps me to relate to your concept of subjective. Suppose there were two tablets:

Tablet A: The action of every agent is moral unless it harms another agent.

Tablet B: The action of every agent is moral unless it is done with the purpose of harming another agent.

Tablet A measures morality based on the absolute, objective result of an action, whereas Tablet B considers the intention of an action.

While this is an important distinction between the tablets, we don't say that Tablet A is an objective morality and Tablet B is a subjective morality. There must be other terms for this distinction. I know that Tablet A is like consequentialism, and Tablet B includes, for example, virtue ethics.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-30T02:04:35.397Z · LW(p) · GW(p)

It's rather unconventional, so it took me a while to be certain what you meant.

I was just giving my interpretation of the article's definitions. Do you think my interpretation is unconventional?

I don't think I disagree with you about how to parse mind-dependent, I've just been sloppy in putting it into a definition. I would call both Tablet A and Tablet B objective/mind-independent.

So how about this for a definition of mind-dependent:

The "source" of what is moral for an agent depends on the mind of the agent.

comment by Jack · 2010-04-29T20:21:09.606Z · LW(p) · GW(p)

If I understand you correctly this is my interpretation as well. But to clarify: there doesn't even have to be an agent in the judgment itself. Take the proposed judgment: "Black holes are immoral". This can either be subjective or objective. You are an objectivist if you look to something other than a mind to determine its truth value. If you think the fact about whether or not black holes are immoral can be found by looking at the universe or examining black holes, you're an objectivist. If you ask "How do I feel about black holes", "How does my society feel about black holes" or "How does God feel about black holes" you are a subjectivist, because to determine whether or not to accede to a judgment you examine a mind or minds.

Edit: I just read byrnema's comment and now I think I probably don't agree with you. You could also be an objectivist or subjectivist about a judgement of a purely mental fact.

Objectivist: Jealousy is immoral because it was written onto the side of all quarks.

Subjectivist: Jealousy is immoral because I don't like jealousy.

Replies from: byrnema
comment by byrnema · 2010-04-29T20:39:24.300Z · LW(p) · GW(p)

I agree with everything in your first paragraph, and was amazed it wasn't addressed to me. I can't believe how complicated this turns out to be due to semantics. We could really use a good systemizer in the whole morality field, to clear the confusion of these tortuously ambiguous terms. (I should add that I don't know for certain that there isn't one already, but just skimming through this thread and its sisters seems to indicate one is needed.)

Replies from: Jack, Jack, thomblake
comment by Jack · 2010-04-29T20:57:59.127Z · LW(p) · GW(p)

The wikipedia entry turns out to be a really, really excellent starting point.

Replies from: thomblake
comment by thomblake · 2010-04-29T21:00:17.615Z · LW(p) · GW(p)

As usual, SEP is more thorough but worse at giving you the at-a-glance summary.

comment by Jack · 2010-04-29T20:48:42.528Z · LW(p) · GW(p)

Lol, it might as well have been. I couldn't figure out which one of you had it wrong so I just replied to the most recent comment.

I'll try to put together a map or diagram for positions in metaethics.

comment by thomblake · 2010-04-29T20:53:06.007Z · LW(p) · GW(p)

I'm not sure if we have a bona fide expert on metaethics hereabouts. Meta-anything gets squirrely if you're not being really careful.

comment by thomblake · 2010-04-28T22:09:00.745Z · LW(p) · GW(p)

I'm not sure that subjective nonrelativism is a possibility though.

Surely it's a logical possibility. Stipulate: "What's right is either X or Y, where we ask each person in the universe to think of a random integer, sum them, and pull off the last bit, 0 meaning X is right and 1 meaning Y is right."
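
A minimal sketch of that stipulation in code (purely illustrative; the function name is my invention):

    def stipulated_right(integers):
        # "integers" holds one number thought of by each person in the universe.
        # Sum them and pull off the last bit: 0 means X is right, 1 means Y is right.
        return "X" if sum(integers) % 2 == 0 else "Y"

    # E.g. three people think of 7, 12 and 4: sum 23, last bit 1, so "Y" is right.
    print(stipulated_right([7, 12, 4]))

It's subjective because the verdict depends entirely on mental states, yet nonrelative because everyone gets the same answer.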

ETA: CEV, perhaps?

Replies from: Jack, Matt_Simpson, Matt_Simpson
comment by Jack · 2010-04-29T00:11:23.648Z · LW(p) · GW(p)

Wouldn't "Everyone should do what my moral code says they should" be subjective nonrelativism? Surely there are lots of people who believe that.

Replies from: thomblake
comment by thomblake · 2010-04-29T12:36:01.983Z · LW(p) · GW(p)

I don't think the people who believe that, think that their own mental states are what determine the truth of their moral code.

comment by Matt_Simpson · 2010-04-28T22:22:04.418Z · LW(p) · GW(p)

ETA: CEV, perhaps?

Is CEV even an ethical theory? I thought it was more of an algorithm for extracting human preferences to put them in an AI.

Replies from: thomblake
comment by thomblake · 2010-04-29T12:33:03.961Z · LW(p) · GW(p)

Surely it's a de facto ethical theory, since it determines entirely what the FAI should do. But then, the FAI is not supposed to be a person, so that might make a difference for our use of 'ethical'.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-29T13:23:43.279Z · LW(p) · GW(p)

hmm. Then wouldn't it be premised on subjective relativism? (relative to humans)

Replies from: thomblake
comment by thomblake · 2010-04-29T13:45:57.587Z · LW(p) · GW(p)

Yes, I'd considered that when I wrote it, but it's an odd use of 'relative' when it might be equivalent to 'the same for everyone'.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-29T13:57:55.860Z · LW(p) · GW(p)

not all possible minds, just human minds

EDIT: but if you thought all possible minds had the same preferences, then it would be subjective nonrelative, wouldn't it?

Replies from: thomblake, thomblake
comment by thomblake · 2010-04-29T14:08:37.876Z · LW(p) · GW(p)

EDIT: but if you thought all possible minds had the same preferences, then it would be subjective nonrelative, wouldn't it?

Maybe, though in that unlikely event I would suspect that there's some universal law behind that odd fact about preferences, in which case I'd think it would be objective.

comment by thomblake · 2010-04-29T14:00:01.587Z · LW(p) · GW(p)

Well I'm not sure we need to consider merely logically possible minds, and it's logically possible that non-human minds are physically impossible.

Replies from: RobinZ
comment by RobinZ · 2010-04-29T14:19:04.464Z · LW(p) · GW(p)

Only in the sense that it is logically possible that travel to Mars is physically impossible. The wording is deceptive.

Replies from: thomblake
comment by thomblake · 2010-04-29T14:41:11.748Z · LW(p) · GW(p)

I'm not sure what sense you're referring to, or what you're comparing it to, or how it's deceptive.

Replies from: RobinZ
comment by RobinZ · 2010-04-29T14:46:39.138Z · LW(p) · GW(p)

Privileging the hypothesis, really.

Replies from: thomblake
comment by thomblake · 2010-04-29T15:10:47.208Z · LW(p) · GW(p)

I'm afraid that wasn't enough to clear it up for me. Nor is it clear how privileging the hypothesis is relevant to a discussion of logical possibility. Or are you claiming that was the wrong domain of inquiry?

Replies from: RobinZ
comment by RobinZ · 2010-04-29T15:17:40.675Z · LW(p) · GW(p)

Saying "X is logically possible" bears the conversational implication that X is worth considering - it raises X to conscious attention. But when we're talking about physical possibility, "logically possible" is the wrong criterion for raising hypotheses to conscious attention, because epistemological limitations imply that every hypothesis is logically possible. Given that we have good physical reasons to draw the opposite conclusion in this case, it is generally a mistake to emphasize the possibility.

Replies from: thomblake
comment by thomblake · 2010-04-29T15:24:55.596Z · LW(p) · GW(p)

Ah, I see what you're getting at. But it is not that I was trying to emphasize the possibility that there cannot be non-human minds in order to argue in favor of that hypothesis. Rather, I was pointing out that whether CEV is 'relative' or not (for purposes of this discussion) is an empirical question. For reference, I would not guess that non-human minds are physically impossible (I'd assign significantly less than 10% probability to that hypothesis).

comment by Matt_Simpson · 2010-04-28T22:11:08.913Z · LW(p) · GW(p)

well then, I'm just not imaginative enough!

Replies from: thomblake
comment by thomblake · 2010-04-28T22:13:56.942Z · LW(p) · GW(p)

Once you've had to argue about ethics with logicians, it becomes natural. "But what if... (completely implausible hypothesis that no one believes)" comes up a lot.

comment by thomblake · 2010-04-28T19:18:22.514Z · LW(p) · GW(p)

I'm fairly certain you could find people implicitly arguing for some varieties of non-subjective relativism. For example, cultural relativism advances the view that one's culture determines the facts about ethics for oneself, but it's not necessarily mental acts on the part of persons in the culture that determine the facts about ethics. Similarly, Divine Command Theory will give you different answers for different gods, but it's not the mental acts of the persons involved that determine the facts about ethics.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-04-28T19:47:39.842Z · LW(p) · GW(p)

It's an interesting question. The SEP link in Jack's comment actually gives Divine Command Theory as an example of non-relativistic subjectivism. It's subjectivist because what is moral depends on a mental fact about that god — namely, whether that god approves.

It's less clear whether cultural relativism is subjectivist. I'm inclined to think of culture as depending to a large extent on the minds of the people in that culture. (Different peoples whose mental content differed in the right way would have different cultures, even if their material conditions were otherwise identical.) This would make cultural relativism subjectivist as well.

Replies from: thomblake
comment by thomblake · 2010-04-28T19:53:00.752Z · LW(p) · GW(p)

Indeed, I was glossing over that distinction; if you think cultures or God have mental states, then that's a different story. There's also a question of how much "subjectivism" really depends on the relevant minds, and in what way.

I could construct further examples, but we already understand it's logically possible, so that would not be of any help if nobody is advocating them. I think the well has run dry on my end w.r.t. examples of relativism in the wild.

comment by Matt_Simpson · 2010-04-28T15:12:35.937Z · LW(p) · GW(p)

Ah, I see. I had always understood relativism to mean what the article calls subjective relativism.

Replies from: Jack
comment by Jack · 2010-04-28T15:23:18.721Z · LW(p) · GW(p)

Fair enough, there aren't a lot of subjective non-relativists left, lol.

comment by thomblake · 2010-04-28T12:43:30.616Z · LW(p) · GW(p)

If we're talking about the meanings of terms, how is semantics not a relevant question?

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-04-28T12:48:21.065Z · LW(p) · GW(p)

You asked what Eliezer claims, not for the words that he uses to claim it.

comment by Matt_Simpson · 2010-04-28T15:06:55.853Z · LW(p) · GW(p)

Objective in the sense that you can point to it, but can't make it up according to your whims. But not objective in the sense of "being written into the fabric of the universe" or that every single agent, with enough reflection, would realize that it's the "correct" morality.

comment by ata · 2010-04-28T11:31:27.322Z · LW(p) · GW(p)

I still haven't gotten through the metaethics sequence yet, so I can't answer that exactly, but if he believed in an "objective" morality (i.e. some definition of "should" that is meaningful from the perspective of fundamental reality, not based on any facts about minds, or an internally-consistent set of universally compelling moral arguments), then he would probably expect a superintelligence to be smart enough (many times over) to discover it and follow it, and that is quite the opposite of his current position. If I recall correctly, that was his pre-2002 position, and he now considers it a huge mistake.

Replies from: thomblake
comment by thomblake · 2010-04-28T12:50:25.160Z · LW(p) · GW(p)

"Fundamental reality" doesn't have a perspective, so it seems weird to draw the lines there. Rather, there's a fact about what's prime, and the pebblesorters care about that, and there's a fact about what's right, and humans care about that. We can be mistaken about what's right, and we can have disagreements about what's right, and we can change our minds. And given time and progress, we will hopefully get closer to understanding what's right. And if the pebblesorters claim that they care about what's right rather than what's prime, they're factually incorrect.

Replies from: ata
comment by ata · 2010-04-28T23:12:48.868Z · LW(p) · GW(p)

"Fundamental reality" doesn't have a perspective, so it seems weird to draw the lines there.

Of course — I was just doing my best to imagine the mindset of a non-religious person who believes in an objectively objective morality (i.e. that even in the absence of a deity, the universe still somehow imposes moral laws). Admittedly, I don't encounter too many of those (people who think they've devised universally compelling moral arguments are more common; even big-O Objectivists seem to just be an overconfident version of that), but I do still meet them from time to time, e.g. people who manage to believe in things like "natural law" or "natural rights" (as facts about the universe rather than facts about human minds) without theistic belief.

All I was saying was that things like that are what the phrase "objective morality" make me think of, and that Eliezer's conclusions are different enough that I'm not sure they quite fit in the same category. His may be an "objective morality" by our best definitions of "objective" and "morality", but it could make people (especially new people) imagine all the wrong things.

comment by gimpf · 2010-04-28T11:21:47.869Z · LW(p) · GW(p)

Where did he?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-04-28T11:44:27.954Z · LW(p) · GW(p)

For example, here. Read the whole thing, not just this illustrative quote:

I don't think that human morality is arbitrary at all, and I would expect any logically omniscient reasoner to agree with me on that.

That's a part of the metaethics sequence, to which this posting might be a suitable entry point, which says where he's going, and tells you what to read before going there.

Replies from: ata
comment by ata · 2010-04-28T12:02:42.837Z · LW(p) · GW(p)

"Objective morality" usually implies some outside force imposing morality, and the debate over metaethics (at least in the wider world of philosophy, if not on LW) is usually presented as a choice between that and relativism. If I'm understanding Eliezer's current position correctly, it's that morality is an objective fact about subjective minds. This quote sums it up quite well:

I am horrified by the thought of humanity evolving into beings who have no art, have no fun, and don't love one another. There is nothing in the universe that would likewise be horrified, but I am. Morality is subjectively objective: It feels like an unalterable objective fact that love is more important than maximizing inclusive fitness, and the one who feels this way is me. And since I know that goals, no matter how important, need minds to be goals in, I know that morality will never be anything other than subjectively objective.

Unfortunately, when people talk about "objective morality", they're usually talking about the commandments the Lord hath given unto us, or they're talking about coming up with a magical definition of "should" that automatically is the correct one for every being in the universe and doesn't depend on any facts about human minds, or they're talking about their great new fake utility function that correctly compresses all human values (at least all good human values, recursion be damned). I don't know how Eliezer feels about the terminology, but if it were up to me, I'd agree with advising against "claim[ing] an objective morality", if only so that people have to think about what parts of their arguments are more about words than reality.

comment by Will_Newsome · 2010-04-27T17:12:50.249Z · LW(p) · GW(p)

If I recall correctly it seemed that you mostly argued for an objective morality instead of using it as the (explicit or implicit) linchpin of a larger argument. The former is well and good but the latter is irritating (e.g. "Deity X must exist because there is an objective morality").

Replies from: Alicorn
comment by Alicorn · 2010-04-27T17:24:13.960Z · LW(p) · GW(p)

If that's what was meant, then it shouldn't appear as a separate item in a list that also contains the unrelated injunction against easy solutions to FAI.

comment by Morendil · 2010-04-27T11:42:39.026Z · LW(p) · GW(p)

"We need a FAQ" is solution language.

Why do we think we need one? What appears to be the problem?

What is the desired outcome?

Replies from: Jack, thomblake, Kevin, RobinZ
comment by Jack · 2010-04-27T13:20:16.870Z · LW(p) · GW(p)

So the hold off on solutions thing isn't wrong, but in this case we've talked about how LW is difficult for newcomers many, many times before. An FAQ has long been something we agreed on; it hasn't gotten done for reasons of akrasia. In this case, postponing work on the FAQ because we need to keep talking about the problem is just going to make it less likely that the work gets done.

Replies from: Morendil
comment by Morendil · 2010-04-27T13:37:25.393Z · LW(p) · GW(p)

I remember the last time we had this discussion, my conclusion was that we needed "better newcomer orientation".

An FAQ is a (possibly) necessary, and (probably) not sufficient, component of a set of solutions leading to the outcome "better newcomer orientation".

What we are planning is, by the way, not literally an FAQ. The Lurkers thread didn't reveal frequent questions people had that we're not answering, or that they have trouble finding the answers to.

It did reveal a frequent observation, namely that people find the site intimidating.

I suspect that no amount of answering frequently-not-asked questions (on an out-of-the-way page) is going to fix that.

I do believe that more discussion of which kinds of top-level posts and which attitudes in the comment stream encourage or discourage participation could fix that.

Replies from: Jack
comment by Jack · 2010-04-27T13:47:30.283Z · LW(p) · GW(p)

I do believe that more discussion of which kinds of top-level posts and which attitudes in the comment stream encourage or discourage participation could fix that.

We should talk about this. We should also just write an FAQ. We don't need to postpone the latter for the former.

What we are planning is, by the way, not literally an FAQ. The Lurkers thread didn't reveal frequent questions people had that we're not answering, or that they have trouble finding the answers to.

The Lurkers aren't who the FAQ is for - if they've been lurking a while, they've probably figured a lot out. But when new users show up who haven't been lurking, the same topics have come up repeatedly.

comment by thomblake · 2010-04-27T13:02:06.144Z · LW(p) · GW(p)

What appears to be the problem?

Less Wrong is extremely intimidating to newcomers

What is the desired outcome?

to make the site less scary and the standards more evident.

Why do we think we need one?

Lots of lurkers who claim to be intimidated by our site, and lots of non-lurkers seemingly unfamiliar with our standards.

Replies from: Morendil
comment by Morendil · 2010-04-27T13:25:30.772Z · LW(p) · GW(p)

We can now do better than "lots", thanks to Kevin.

Someone with some time on their hands could, for instance, tabulate the top-level comments among the 422 posted to "Attention Lurkers".

I've trialed that on a small sample. Out of the first 22 comments, 11 say something that I interpret as "intimidated", 3 say something to the effect that they're no longer interested in the topics on LW, and 8 say that they're lurkers but OK with it (or say nothing beyond "hi"). So that's roughly half of them explicitly saying they're intimidated.

The more salient fact to me is that all 22 did write a comment when encouraged to do so and the barrier to participation was suitably lowered.

Another salient comment: "Anytime anyone wants to discuss prenatal diagnosis and the ethical implications, let me know", that being the commenter's area of expertise. We may be missing out on many opportunities to engage, by failing to deliberately open up discussions on topics where the community has hidden expertise.

I'm thinking I will write up a poll-type post asking people what their area of professional expertise is, and which issue in their domain they think would most benefit from application of the techniques discussed on LW.

Replies from: RobinZ, Kevin
comment by RobinZ · 2010-04-27T16:19:56.432Z · LW(p) · GW(p)

I'm thinking I will write up a poll-type post asking people what their area of professional expertise is, and which issue in their domain they think would most benefit from application of the techniques discussed on LW.

...and which techniques of their domain could benefit LW by being discussed, I would add.

comment by Kevin · 2010-04-27T22:43:47.225Z · LW(p) · GW(p)

Are you definitely going to do that "Ask Less Wrong"? I want to post it now but don't want to take your karma/status for having that idea... so if you don't plan on making it in the next 24 hours, can I make it? It can really just be a question, the post itself should be very short.

Replies from: Morendil
comment by Morendil · 2010-04-27T22:49:46.586Z · LW(p) · GW(p)

I'd prefer to sleep on it. This isn't quite a spur-of-the-moment idea, I've had this idea for a post setting forth a "marketplace" metaphor for such discussions for a while.

But possibly the post asking for expertise info should be separate from that anyway, for housekeeping reasons.

Should probably happen within 24h anyway, but we've had a fair number of posts just today, so it's best to let things calm down a bit.

ETA: it's not quite "Ask LW", more like "Tell LW". ;)

comment by Kevin · 2010-04-27T22:41:28.233Z · LW(p) · GW(p)

The desired outcome was that I made this top-level post right before going to sleep, then other people expanded and improved the FAQ as a result of me calling attention to it, which seems to have happened.

comment by RobinZ · 2010-04-27T13:14:48.467Z · LW(p) · GW(p)

Normally a good question, but it's been answered already: the community is intimidating to new contributors. There are lots of frequently asked questions, and they deserve answering.

comment by alyssavance · 2010-04-27T15:38:34.278Z · LW(p) · GW(p)

Great idea, Kevin. I would also suggest adding the FAQ to the About page here: http://lesswrong.com/lw/1/about_less_wrong/, to allow new users to find it more easily.

comment by simplicio · 2010-04-28T00:48:54.869Z · LW(p) · GW(p)

Just thought I'd jump in to say that, when I was a newcomer, the most confusing thing for me was the constant references to AI and FAI. To be honest, I am still left puzzled by such discussions. I would suggest the FAQ contain a brief outline of what FAI is, and if anybody knows a basic-level post about it, I'd be personally obliged.

comment by Jack · 2010-04-27T16:30:53.974Z · LW(p) · GW(p)

What tone do people think the FAQ should take? Right now it is pretty serious and straightforward; jokes would make us less intimidating. But maybe that is a bad idea.

Replies from: RobinZ
comment by RobinZ · 2010-04-27T17:08:03.073Z · LW(p) · GW(p)

It's a reference - a serious tone is appropriate for people jumping in to quickly find small amounts of data.

In a "Quick-Start Guide" or the like*, a bit of levity would be appropriate.

* I have a file on my hard disk which was supposed to become this, but hasn't been touched since March.

comment by gimpf · 2010-04-27T11:23:48.165Z · LW(p) · GW(p)

Well, as an idea: should "What is karma?" really be the first entry?

Replies from: Jack
comment by Jack · 2010-04-27T13:50:06.680Z · LW(p) · GW(p)

Maybe reverse the subheadings, so that the questions about what we think come before the details of how to use the site?

comment by Zachary_Kurtz · 2010-04-27T17:13:32.472Z · LW(p) · GW(p)

Less Wrong needs a general forum, not just an FAQ.

Replies from: RobinZ
comment by RobinZ · 2010-04-27T17:16:49.963Z · LW(p) · GW(p)

I think tommccabe was discussing this in Proposed New Features for Less Wrong - it would be better to keep the threads separate.

comment by RobinZ · 2010-04-27T13:18:01.198Z · LW(p) · GW(p)

How do I format my comments?

Instructions are provided from the "Help" link under each comment box. The usual things are as follows:

  • Italics: *text in italics*
  • Bold: **text in bold**
  • Links: [link text](link URL)

Quick aside: if your link URL has parentheses in it, you will need to "escape" the close-paren. Insert a backslash character ("\") into the URL in front of the close-paren.

  • Blockquotes: at the beginning of a line: > quoted text

More information about the Markdown syntax can be found at daringfireball.net.
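
For example, a comment typed as follows (a made-up sample, just to illustrate the syntax above, including the escaped close-paren in the link URL):

    Here is *italic text*, **bold text**, and a
    [link](http://en.wikipedia.org/wiki/Bias_(statistics\)).

    > This line renders as a blockquote.

This renders with italics, bold, a working link, and a quoted line.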

Replies from: NancyLebovitz, Jack
comment by NancyLebovitz · 2010-04-27T13:54:22.915Z · LW(p) · GW(p)

I suggest adding that link to the help sheet or the about page -- I had no idea there were formatting options beyond what was on the help sheet.

comment by Jack · 2010-04-27T13:49:21.893Z · LW(p) · GW(p)

Just add it to the wiki.

Replies from: RobinZ
comment by RobinZ · 2010-04-27T14:52:07.055Z · LW(p) · GW(p)

Where? Under "Feedback", before "What's all this about upvotes and downvotes"?

Replies from: Jack
comment by Jack · 2010-04-27T14:56:54.976Z · LW(p) · GW(p)

Put it after 5.1

Replies from: RobinZ, RobinZ
comment by RobinZ · 2010-04-27T15:14:59.082Z · LW(p) · GW(p)

After "How do I submit an article"? When people will be mathematically certain to submit comments before they ever submit an article?

I have to say that I don't like the FAQ as it stands - the entire thing strikes me as patronizing and hostile. I'll contribute, but I'm not going to be happy about it.

Replies from: Jack
comment by Jack · 2010-04-27T15:20:16.620Z · LW(p) · GW(p)

I mean, do whatever makes sense. It's our FAQ; we can do what we want to. If something doesn't work we can change it.

I don't like the recent edit either. If you can make it less patronizing and hostile, do!

Replies from: RobinZ
comment by RobinZ · 2010-04-27T15:31:42.343Z · LW(p) · GW(p)

I'll see when I can scrape together enough motivation to tackle it - looking at it is leaving me rather frustrated, as I implied.

Replies from: Jack
comment by Jack · 2010-04-27T16:28:54.639Z · LW(p) · GW(p)

I've edited one of the subsections to make it less patronizing.

Replies from: RobinZ
comment by RobinZ · 2010-04-27T16:56:24.781Z · LW(p) · GW(p)

Very nice!

Replies from: thomblake
comment by thomblake · 2010-04-27T17:27:55.081Z · LW(p) · GW(p)

I agree - a definite improvement.

comment by RobinZ · 2010-04-27T19:19:54.292Z · LW(p) · GW(p)

In retrospect: yeah, that's the right place. Added.