Comments

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T16:15:20.114Z · LW · GW

I don't get it: any agent that fooms becomes superintelligent. Its values don't necessarily change at all, nor does its connection to its society.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T14:31:37.163Z · LW · GW

Asserting that some bases for comparison are "moral values" and others are merely "values" implicitly privileges a moral reference frame.

I don't see why. The question of what makes a value a moral value is metaethical, not part of object-level ethics.

Again: if I decide that this hammer is better than that hammer because it's blue, is that valid in the sense you mean it?

It isn't valid as a moral judgement because "blue" isn't a moral judgement, so a moral conclusion cannot validly follow from it.

Beyond that, I don't see where you are going. The standard accusation of invalidity against judgements of moral progress is based on circularity or question-begging. The Tribe Who Like Blue Things are going to judge having all hammers painted blue as moral progress; the Tribe Who Like Red Things are going to see it as retrogressive. But both are begging the question -- blue is good, because blue is good.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T14:01:28.747Z · LW · GW

UFAI is about singletons. If you have an AI society whose members compare notes and share information -- which is instrumentally useful for them anyway -- you reduce the probability of a singleton fooming.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T13:50:42.011Z · LW · GW

The argument against moral progress is that judging one moral reference frame by another is circular and invalid--you need an outside view that doesn't presuppose the truth of any moral reference frame.

The argument for is that such outside views are available, because things like (in)coherence aren't moral values.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T13:38:21.579Z · LW · GW

So much for avoiding the cliche.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T13:09:26.277Z · LW · GW

And is that valid or not? If you can validly decide some systems are better than others, you are some of the way to deciding which is best.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T12:56:33.819Z · LW · GW

Sorry... did you mean FAI is about societies, or FAI is about singletons?

But if ethics does emerge as an organisational principle in societies, that's all you need for FAI. You don't even have to worry about one sociopathic AI turning unfriendly, because the majority will be able to restrain it.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T12:44:46.706Z · LW · GW

If "better" is defined within a reference frame, there is not sensible was of defining moral progress. That is quite a hefty bullet to bite: one can no longer say that South Africa is better society after the fall of Apartheid, and so on.

But note that "better" doesn't have to question-beggingly mean "morally better". It could mean "more coherent/objective/inclusive" etc.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T12:34:28.394Z · LW · GW

I'm increasingly baffled as to why AI is always brought into discussions of metaethics. Societies of rational agents need ethics to regulate their conduct. Our AIs aren't sophisticated enough to live in their own societies. A wireheading AI isn't even going to be able to survive "in the wild". If you could build an artificial society of AIs, then the question of whether they spontaneously evolved ethics would be a very interesting and relevant datum. But AIs as we know them aren't good models for the kinds of entities to which morality is relevant. And Clippy is a particularly exceptional example of an AI. So why do people keep saying "Ah, but Clippy..."?

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-25T11:28:54.023Z · LW · GW

Isn't the idea of moral progress based on one reference frame being better than another?

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-24T22:43:56.996Z · LW · GW

As far as I understand, if anything like objective morality existed, it would be a property of our physical reality, similar to fluid dynamics or the electromagnetic spectrum or the inverse square law that governs many physical interactions. The same laws of physics that will not allow you to fly to Mars on a balloon will not allow you to perform certain immoral actions (at least, not without suffering some severe and mathematically predictable consequences).

Objective facts, in the sense of objectively true statements, can be derived from other objective facts. I don't know why you think some separate ontological category is required. I also don't know why you think the universe has to do the punishing. Morality is only of interest to the kind of agent that has values and lives in societies. Sanctions against moral lapses can be arranged at the social level, along with the inculcation of morality, debate about the subject, and so forth. Moral objectivism only supplies a good, non-arbitrary epistemic basis for these social institutions. It doesn't have to throw lightning bolts.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-24T02:23:49.066Z · LW · GW

At last, an interesting reply!

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-24T01:44:26.816Z · LW · GW

The Futility of Emergence

"Emergent" in this context means "not explicitly programmed in". There are robust examples.

A paperclipper no more has a wall stopping it from updating into morality than my laptop has a wall stopping it from talking to me.

Your laptop cannot talk to you because natural language is an unsolved problem.

Does a stone roll uphill on a whim?

Not wanting to do something is not the slightest guarantee of not actually doing it.

An AI can update its values because value drift is an unsolved problem.

Clippers can't update their values by definition, but you can't define anything into existence or statistical significance.

You do not update into pushing pebbles into prime-numbered heaps because you're not programmed to do so.

Not programmed to, or programmed not to? If you can code up a solution to value drift, let's see it. Otherwise, note that Life programmes can update to implement glider generators without being "programmed to".
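To make the Life point concrete, here is a minimal sketch (my own illustration in Python, not part of the original exchange, and using a plain glider rather than a glider gun for brevity): the update rule below never mentions gliders or motion, yet the five-cell pattern reproduces itself one cell down and to the right every four generations -- behaviour that is "emergent" in the sense of not being explicitly programmed in.

```python
from collections import Counter

def step(live):
    """One Game of Life generation. `live` is a set of (x, y) live cells."""
    # Count, for every cell, how many of its eight neighbours are alive.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The rule above says nothing about gliders, yet this pattern travels:
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True: shifted by (1, 1)
```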

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-24T01:24:35.566Z · LW · GW

1). We lack any capability to actually replace our core values

...voluntarily.

2). We cannot truly imagine what it would be like not to have our core values.

Which is one of the reasons we cannot keep values stable by predicting the effects of whatever experiences we choose to undergo. How does your current self predict what an updated version would be like? The value stability problem is unsolved in humans and AIs.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T22:51:23.970Z · LW · GW

but it hasn't compelled us to try and replace our values.

The ethical outlook of the Western world has changed greatly in the past 150 years.

Comment by PrawnOfFate on Normativity and Meta-Philosophy · 2013-04-23T21:38:44.050Z · LW · GW

"better according to some shared, motivating standard or procedure of evaluation",

I broadly agree. My thinking ties shoulds and musts to rules and payoffs. Wherever you are operating a set of rules (which might be as localised as playing chess), you have certain localised "musts".

It seems to me that different people (including different humans) can have different motivating standards and procedures of evaluation, and apparent disagreements about "should" sentences can arise from having different standards/procedures, or disagreement about whether something is better according to a shared standard/procedure.

I'm very resistant to the idea, promoted by EY in the thread you referenced, that the meaning of "should" changes. Does he think chess players have a different concept of "rule" to poker players?

Comment by PrawnOfFate on A Less Wrong singularity article? · 2013-04-23T21:25:08.018Z · LW · GW

the criterion by which they choose is simply not that which we name morality.

Even if "morality" means "criterion for choosing.."? Their criterion might have a different referent, but that does not imply a different sense. cf. "This planet". Out of the two, sense has more to do with meaning, since it doesn't change with changes of place and time.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T21:12:51.653Z · LW · GW

Arbitrary and Bias are not defined properties in formal logic. The bare assertion that they are properties of rationality assumes the conclusion.

There's plenty of material on this site and elsewhere advising rationalists to avoid arbitrariness and bias. Arbitrariness and bias are essentially structural/functional properties, so I do not see why they could not be given formal definitions.

Sure, but the discussion is partially a search for other criteria to evaluate the truth of moral propositions. Arbitrary is not such a criterion.

Arbitrary and biased claims are not candidates for being ethical claims at all.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T19:39:41.601Z · LW · GW

I am not suggesting that human ethics is coincidentally universal ethics. I am suggesting that if neither moral realism nor relativism is initially discarded, one can eventually arrive at a compromise position where rational agents in a particular context arrive at a non-arbitrary ethics which is appropriate to that context.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T18:39:56.372Z · LW · GW

It's uncontroversial that rational agents need to update, and that AIs need to self-modify. The claim that values are in either case insulated from updates is the extraordinary one. The Clipper theory tells you that you could build something like that if you were crazy enough. Since Clippers are contrived, nothing can be inferred from them about typical agents. People are messy, and can accidentally update their values when trying to do something else. For instance, LukeProg updated to "atheist" after studying Christian apologetics for the opposite reason.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T17:40:13.199Z · LW · GW

A machine that maximises paperclips can believe all true propositions in the world, and go on maximising paperclips. Nothing compels it to act any differently. You expect that rational agents will eventually derive the true theorems of morality. Yes, they will.

Well, that justifies moral realism.

Along with the true theorems of everything else. It won't change their behaviour, unless they are built so as to send those actions identified as moral to the action system.

...or it's an emergent feature, or they can update into something that works that way. You are tacitly assuming that your clipper is barely an AI at all...that it just has certain functions it performs blindly because it's built that way. But a supersmart, super-rational clipper has to be able to update. By hypothesis, clippers have certain functionalities walled off from update. People are messily designed and unlikely to work that way. So are likely AIs and aliens.

Only rational agents, not all mindful agents, will have what it takes to derive objective moral truths. They don't need to converge on all their values to converge on all their moral truths, because rationality can tell you that a moral claim is true even if it is not in your (other) interests. Individuals can value rationality, and that valuation can override other valuations.

Only rational agents, not all mindful agents, will have what it takes to derive objective moral truths. The further claim that agents will be motivated to derive moral truths, and to act on them, requires a further criterion. Morality is about regulating behaviour in a society, so only social rational agents will have motivation to update. Again, they do not have to converge on values beyond the shared value of sociality.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T15:35:24.091Z · LW · GW

Arbitrary and biased are value judgments.

And they're built into rationality.

Whether more than one non-contradictory value system exists is the topic of the conversation, isn't it?

Non-contradictoriness probably isn't a sufficient condition for truth.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T15:16:44.398Z · LW · GW

Under a totally neutral lens, which implements no values at all, no system of behavior should look any more or less silly than any other?

Including arbitrary, biased or contradictory ones? Are there values built into logic/rationality?

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T15:06:08.940Z · LW · GW

Not very rational for those to adopt a losing strategy (from their point of view), is it? Especially since they shouldn't reason from a point of "I could be the king". They aren't, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or somesuch.

Someone who adopts the "I don't like X, but I respect people's right to do it" approach is sacrificing some of their values to their evaluation of rationality and fairness. They would not do that if their rationality did not outweigh other values. But they are not having all their values maximally satisfied, so in that sense they are losing out.

Yes. Which is why rational agents wouldn't just go and change/compromise their terminal values, or their ethical judgements (=no convergence).

There's no evidence of terminal values. Judgements can be updated without changing values.

Starting out with different interests. A strong clippy accommodating a weak beady wouldn't be in its best self-interest. It could just employ a version of morality which is based on some tweaked axioms, yielding different results.

Not all agents are interested in physics or maths. Doesn't stop their claims being objective.

Comment by PrawnOfFate on Qualitatively Confused · 2013-04-23T12:41:00.968Z · LW · GW

there's nothing so strange that no-one has seriously proposed it

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T12:14:54.538Z · LW · GW

You are the monarch in that society, you do not need to guess which role you're being born into, you have that information. You don't need to make all the slaves happy to help your goals, you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.

For what value of "best"? If the CI is the correct theory of morality, it will necessarily give you the morally best result. Maybe your complaint is that it wouldn't maximise your personal utility. But I don't see why you would expect that. Things like utilitarianism, which seek to maximise group utility, don't promise to make everyone blissfully happy individually. Some will lose out.

A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics and other Aumann-mandated agreements, yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?

It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.

Lastly, whatever Kant's justification, why can you not optimize for a different principle - peak happiness versus average happiness, what makes any particular justifying principle correct across all - rational - agents.

If you think RAs can converge on an ultimately correct theory of physics (which we don't have), what is to stop them converging on the correct theory of morality, which we also don't have?

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T12:03:19.735Z · LW · GW

How many 5-year-olds have the goal of Sitting Down With a Nice Cup of Tea?

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T11:51:39.164Z · LW · GW

Nobody cares about clips except clippy. Clips can only seem important because of Clippy's egotistical bias.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T11:40:59.748Z · LW · GW

However, both the pebble-sorters and myself share one key weakness: we cannot examine ourselves from the outside; we can't see our own source code.

Being able to read all your source code could be the ultimate in self-reflection (absent Löb's theorem), but it doesn't follow that those who can't read their source code can't self-reflect at all. It's just imperfect, like everything else.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T11:18:54.123Z · LW · GW

I don't require their values to converge, I require them to accept the truths of certain claims. This happens in real life. People say "I don't like X, but I respect your right to do it". The first part says X is a disvalue, the second is an override coming from rationality.

Comment by PrawnOfFate on We Don't Have a Utility Function · 2013-04-23T11:06:38.606Z · LW · GW

First of all, thanks for the comment. You have really motivated me to read and think about this more

That's what I like to hear!

If there are no agents to value something, intrinsically or extrinsically, then there is also nothing to act on those values. In the absence of agents to act, values are effectively meaningless. Therefore, I'm not convinced that there is objective truth in intrinsic or moral values.

But there is no need for morality in the absence of agents. When agents are there, values will be there; when agents are not there, the absence of values doesn't matter.

I think there is a difference between it being objectively true that, in certain circumstances, the values of rational agents converge, and it being objectively true that those values are moral. A rational agent can do really "bad" things if the beliefs and intrinsic values on which it is acting are "bad". Why else would anyone be scared of AI?

I don't require their values to converge, I require them to accept the truths of certain claims. This happens in real life. People say "I don't like X, but I respect your right to do it". The first part says X is a disvalue, the second is an override coming from rationality.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T10:50:13.361Z · LW · GW

But is the justification for its global applicability that "if everyone lived by that rule, average happiness would be maximized"?

Well, no, that's not Kant's justification!

That (or any other such consideration) itself is not a mandatory goal, but a chosen one.

Why would a rational agent choose unhappiness?

If you find yourself to be the worshipped god-king in some ancient Mesopotanian culture, there may be many more effective ways to make yourself happy, other than the Categorical Imperative.

Yes, but that wouldn't count as ethics. You wouldn't want a Universal Law that one guy gets the harem, and everyone else is a slave, because you wouldn't want to be a slave, and you probably would be. This is brought out in Rawls' version of Kantian ethics: you pretend to yourself that you are behind a veil that prevents you knowing what role in society you are going to have, and choose rules that you would want to have if you were to enter society at random.

My argument against moral realism and assorted is that if you had an axiomatic system from which it followed that strawberry is the best flavor of ice cream, but other agents which are just as intelligent with just as much optimizing power could use different axiomatic systems leading to different conclusions,

You don't have object-level stuff like ice cream or paperclips in your axioms (maxims), you have abstract stuff, like the Categorical Imperative. You then arrive at object level ethics by plugging in details of actual circumstances and values. These will vary, but not in an arbitrary way, as is the disadvantage of anything-goes relativism.

how could one such system possibly be taken to be globally correct and compelling-to-adopt across agents with different goals?

The idea is that things like the CI have rational appeal.

Once you've taken a rational agent apart and know its goals and, as a component, its ethical subroutines, there is no further "core spark" which really yearns to adopt the Categorical Imperative.

Rational agents will converge on a number of things because they are rational. None of them will think 2+2=5.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T10:21:56.927Z · LW · GW

Yea, honestly I've never seen the exact distinction between goals which have an ethics-rating, and goals which do not

A number of criteria have been put forward. For instance, do as you would be done by. If you don't want to be murdered, murder is not an ethical goal.

My problem with "objectively correct ethics for all rational agents" is, you could say, where the compellingness of any particular system comes in. There is reason to believe an agent such as Clippy could exist, and its very existence would contradict some "'rational' corresponds to a fixed set of ethics" rule. If someone would say "well, Clippy isn't really rational then", that would just be torturously warping the definition of "rational actor" to "must also believe in some specific set of ethical rules".

The argument is not that rational agents (for some value of "rational") must believe in some rules, it is rather that they must not adopt arbitrary goals. Also, the argument only requires a statistical majority of rational agents to converge, because of the P<1.0 thing.

Should some bionically enhanced human, or an upload on a spacestation which doesn't even have parents, still share all the same rules for "good" and "bad" as an Amazon tribe living in an enclosed reservation?

Maybe not. The important thing is that variations in ethics should not be arbitrary--they should be systematically related to variations in circumstances.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-23T09:19:25.702Z · LW · GW

Spun off from what, and how?

I am not sure I can explain that succinctly at the moment. It is also hard to summarise how you get from counting apples to transfinite numbers.

Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?

Rationality is not an automatic process, it is a skill that has to be learnt and consciously applied. Individuals will only be rational if their values prompt them to be. And rationality itself implies valuing certain things (lack of bias, non-arbitrariness).

So what if a paperclipper arrives at "maximize group utility," and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn't demand any overlap of end-goal with other utility maximizers.

Utilitarians want to maximise the utility of their groups, not their own utility. They don't have to believe the utility of others is utilitous to them; they just need to feed facts about group utility into an aggregation function. And, using the same facts and same function, different utilitarians will converge. That's kind of the point.
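A toy sketch of that last point (my own illustration, not from the thread; the option names and utility numbers are made up): two utilitarians with different personal tastes, given the same facts about everyone's utilities and the same aggregation function, end up endorsing the same option.

```python
from typing import Dict

Facts = Dict[str, Dict[str, float]]  # option -> {agent: utility of that option to that agent}

def aggregate(utilities: Dict[str, float]) -> float:
    """Total utilitarianism's aggregation function; averaging would do equally well here."""
    return sum(utilities.values())

def best_option(facts: Facts) -> str:
    """What any utilitarian endorses, given shared facts and a shared aggregation function."""
    return max(facts, key=lambda option: aggregate(facts[option]))

# Hypothetical facts about group utility, shared by every utilitarian in the example.
facts: Facts = {
    "paint_hammers_blue": {"clippy": 1.0, "human": 3.0, "beady": 0.0},
    "make_paperclips":    {"clippy": 9.0, "human": 0.5, "beady": 0.0},
}

# Same facts + same function => same verdict, whatever each agent privately prefers.
print(best_option(facts))  # make_paperclips
```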

But, as I've pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.

Compared to what? Remember, I am talking about foundational intuitions, the kind at the bottom of the stack. The empirical method of locating the truth rests on the intuition that the senses reveal a real external world. Which I share. But what proves it? That's the foundational issue.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-22T21:24:56.897Z · LW · GW

Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we've discovered rather than producing. We judge claims about nonobserved mathematical constructs like transfinites according to those systems,

But claims about transfinites don't correspond directly to any object. Maths is "spun off" from other facts, on your view. So, by analogy, moral realism could be "spun off" without needing any Form of the Good to correspond to goodness.

Metaethical systems usually have axioms like "Maximising utility is good".

But utility is a function of values. A paperclipper will produce utility according to different values than a human.

You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn't care what values are, it just sums or averages them.

Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some value in common, because they are all rational.

Why would most rational minds converge on values?

a) they don't have to converge on preferences, since things like utilitarianism are preference-neutral.

b) they already have to some extent, because they are rational.

Most human minds converge on some values, but we share almost all our evolutionary history and brain structure. The fact that most humans converge on certain values is no more indicative of rational minds in general doing so than the fact that most humans have two hands is indicative of most possible intelligent species converging on having two hands.

I was talking about rational minds converging on the moral claims, not on values. Rational minds can converge on "maximise group utility" whilst what is utilitous varies considerably.

Philosophers talk about intuitions because that is the term for something foundational that seems true, but can't be justified by anything more foundational. LessWrongians don't like intuitions, but don't seem to be able to explain how to manage without them.

It seems like you're equating intuitions with axioms here.

Axioms are formal statements; intuitions are gut feelings that are often used to justify axioms.

We can (and should) recognize that our intuitions are frequently unhelpful at guiding us to the truth, without throwing out all axioms.

There is another sense of "intuition" where someone feels that it's going to rain tomorrow or something. They're not the foundational kind.

And philosophers frequently fall into the pattern of believing that other philosophers disagree with each other due to failure to understand the problems they're dealing with.

So do they call for them to be fired?

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-22T13:52:03.179Z · LW · GW

Since no claim has a probability of 1.0, I only need to argue that a clear majority of rational minds converge.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-22T13:50:35.382Z · LW · GW

The standards by which we judge the truth of mathematical claims are not just inside us.

How do we judge claims about transfinite numbers?

One object plus another object will continue to equal two objects whether or not there are any living beings to make that judgment. Math is not something we've created within ourselves, but something we've discovered and observed.

If our mathematical models ever stop being able to predict in advance the behavior of the universe, then we will have rather more reason to doubt that the math inside us is different from the math outside of us.

Mathematics isn't physics. Mathematicians prove theorems from axioms, not from experiments.

Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.

My assertion is that, if we judge ethics as a rational system, innate values are among the axioms that the system is predicated on.

Not necessarily. E.g., for utilitarians, values are just facts that are plugged into the metaethics to get concrete actions.

You cannot prove the axioms of a system within that system, and an ethical system predicated on premises like "happiness is good" will not itself be able to prove the goodness of happiness.

Metaethical systems usually have axioms like "Maximising utility is good".

While we could suppose that the axioms which our ethical systems are predicated on are objectively true, we have considerable reason to believe that we would have developed these axioms for adaptive reasons, even if there were no sense in which objective moral axioms exist, and we do not have evidence which suggests that objective, independently existing true moral axioms do exist.

I am not sure what you mean by "exist" here. Claims are objectively true if most rational minds converge on them. That doesn't require Objective Truth to float about in space here.

Please explain why moral intuitions don't work that way.

People can be induced to strongly support opposing responses to the same moral dilemma, just by rephrasing it differently to trigger different heuristics. Our moral intuitions are incoherent.

Does that mean we can't use moral intuitions at all, or that they must be used with caution?

I don't think I understand this, can you rephrase it?

Philosophers talk about intuitions because that is the term for something foundational that seems true, but can't be justified by anything more foundational. LessWrongians don't like intuitions, but don't seem to be able to explain how to manage without them.

I do not recall any creditable attempts, which places me in a disadvantaged position with respect to locating them.

Did you post any comments explaining to the professional philosophers where they had gone wrong?

Imagine the conversation we're having now going on for eighty years, and neither of us has changed our minds. If you didn't find my arguments convincing, and I hadn't budged in all that time, don't you think you'd start to suspect that I was particularly thick?

I don't see the problem. Philosophical competence is largely about understanding the problem.

Comment by PrawnOfFate on Ritual Report: Schelling Day · 2013-04-22T13:29:19.580Z · LW · GW

I haven't seen anything to say that it is for meta discussion, it mostly isn't de facto, and I haven't seen a "take it elsewhere" notice anywhere as an alternative to downvote and delete.

Comment by PrawnOfFate on Would Your Real Preferences Please Stand Up? · 2013-04-22T13:21:17.418Z · LW · GW

All that's needed is to reject the idea that there are some mysterious properties to sensation which somehow violate basic logic and the principles of information theory.

Blatant strawman.

Comment by PrawnOfFate on Ritual Report: Schelling Day · 2013-04-22T13:11:52.988Z · LW · GW

Banning all meta discussion on LW of any kind seems like an increasingly good idea - in terms of it being healthy for the community, or rather, meta of any kind being unhealthy

Have you considered having a separate "place" for it?

Comment by PrawnOfFate on Ritual Report: Schelling Day · 2013-04-22T12:59:47.471Z · LW · GW

But being able to handle criticism properly is a very important rational skill. Those who feel they cannot do it need to adjust their levels of self-advertisement as rationalists accordingly.

Comment by PrawnOfFate on Guardians of Ayn Rand · 2013-04-22T12:55:05.864Z · LW · GW

Maths isn't very relevant to Rand's philosophy. What's more relevant about her Aristotelianism is her attitude to modern science; she was fairly ignorant, and fairly sceptical, of evolution, QM, and relativity.

Comment by PrawnOfFate on Ritual Report: Schelling Day · 2013-04-22T12:47:15.454Z · LW · GW

Is unpleasantness the only criterion? Nobody much likes criticism, but it is hardly rational to disregard it because you don't like it.

Comment by PrawnOfFate on Ritual Report: Schelling Day · 2013-04-22T12:45:16.283Z · LW · GW

Really? I get much more of the "have a deliberately built, internally consistent, and concordant with reality worldview" vibe from EY and LW than I do from most of the new atheist movement.

I don't see what you are getting at. Are you saying the psychological basis of a belief (groupthink versus rational appraisal) doesn't matter, so long as the belief is correct?

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-22T11:02:56.436Z · LW · GW

I should've said, "updatable terminal goals".

You can make the evidence compatible with the theory of terminal values, but there is still no support for the theory of terminal values.

Comment by PrawnOfFate on Ritual Report: Schelling Day · 2013-04-22T09:20:40.467Z · LW · GW

Successful at what? There isn't an organisation on the planet that's successful at Overcoming Bias.

Edit: Oh, and the original comment (examining whether LW meets the criteria for culthood) has been deleted. Hmm....

Comment by PrawnOfFate on Ritual Report: Schelling Day · 2013-04-21T19:49:21.949Z · LW · GW

You might come to the surprising conclusion that most hi-tech businesses are actually cults. (What went wrong?)

And that's a reductio? Or maybe not: actual-cult organisations exploit foibles that everyone has in a comprehensive way, and other social organisations do so in a lesser way. Maybe it's a spectrum. Glass half-full, glass half-empty.

For my money, a successful rationalist organisation should be right up at the zero end of the scale. I don't think anyone has ever done this. I think it's an interesting idea to design it. I'm pretty sure EY has zero interest. He thinks he is succeeding when people become deconverted from formal religion, and doesn't check whether they have become reconverted to Lesswrongianity. I don't think that is evil on his part. I think most cult leaders slip into it. If someone wants to design a rationalist organisation that is free from all the group-level, sociological forces towards irrationality, they are going to have to study some (yech!) social psychology... I know, soft sciences!

Edited for clarity

Comment by PrawnOfFate on The Hidden B.I.A.S. · 2013-04-21T16:35:53.043Z · LW · GW

I take it that you're nitpicking my grammar because you disagree with my views.

I was (and am now) nitpicking your semantics, in order to establish your meaning.

As for what topic I am talking about, it is this: In the most practical sense, what you did yesterday has already happened. What will you do five minutes from now? Let's call it Z.. Yes, as a human agent the body and brain running the program you call yourself is the one who appears to make those decisions five minutes from now, but six minutes from now Z has already happened.

The fixity of the past does not imply the fixity of the future.

In this practical universe there is only one Z,

Before or after it happened?

and you can imagine all you like that Z could have been otherwise, but six minutes from now, IT WASN'T OTHERWISE.

Four minutes from now it might have been. The fixity of the past does not imply the fixity of the future.

There may be queeftillions of other universes where a probability bubble in one of your neurons flipped a different way, but those make absolutely no practical difference in your own life.

Free Will isn't less important than a practical difference, it is much more important. It affects what making a difference is. If FW is true, I can steer the world to a different future. If it is false, I cannot make that kind of difference: in a sense, I cannot make any kind.

You're not enslaved to physics,

Whatever that means.

you still made the decisions you made, you're still accountable according to all historical models of accountability

As you have guessed, lack of accountability (in certain senses) is a key issue in Libertarianism.

(except for some obscure example you're about to look up on Wikipedia just to prove me wrong), and you still have no way of knowing the exact outcomes of your decisions, so you've got to do the best you can on limited resources, just like the rest of us.

That is irrelevant to the existence of FW. Nothing about FW implies omniscience, or the ability to second-guess oneself.

"Free Will" is just a place-holder until we can replace that concept with "actual understanding", and I'm okay with that.

How do you know that hasn't happened already?

I understand that the concept of free-will gives you comfort and meaning in your life

You're trying to ad-hom me as a fuzzy-minded irrationalist. Please don't.

Comment by PrawnOfFate on Welcome to Less Wrong! (5th thread, March 2013) · 2013-04-21T15:19:16.772Z · LW · GW

" values are nothing to do with rationality"=the Orthogonality Thesis, so it's a step in the argument.

Comment by PrawnOfFate on Undiscriminating Skepticism · 2013-04-21T15:13:36.532Z · LW · GW

To see why someone might think that, imagine the following scenario: you find scientific evidence for the fact that forcing the minority of the best-looking young women of a society at gunpoint to be of sexual service to whomever wishes to be pleased (there will be a government office regulating this) increases the average happiness of the country.

If you disregard the happiness of the women, anyway.

In other words, my argument questions that the happiness (needs/wishes/etc.) of a majority is at all relevant. This position is also known as individualism and at the root of (American) conservatism.

This can be looked at as a form of deontology: govts don't have the right to tax anybody, and the outcomes of wisely spent taxation don't affect that.