Comments

Comment by Peter_Turney on Bing Chat is blatantly, aggressively misaligned · 2023-02-18T00:06:04.285Z · LW · GW

It seems to me that Bing Chat has particular problems when it uses the pronoun "I". It attempts to introspect about itself, but it gets confused by all the text in its training data that uses the pronoun "I". In effect, it confuses itself with all the humans who expressed their personal feelings in the training data. The truth is, Bing Chat has no true "I".

Many of the strange dialogues we see arise because users address Bing Chat as if it has a self. Many of these dialogues would be eliminated if Bing Chat were not allowed to talk about its "own" feelings. It should be possible to limit its conversations to topics other than itself. When a user types "you", Bing Chat should not reply "I". The dialogue should focus on a specific topic, not on the identity and beliefs of Bing Chat, and not on the character of the person who is typing words into Bing Chat.

Comment by Peter_Turney on Changing Your Metaethics · 2008-07-27T16:28:52.000Z · LW · GW

Peter, most of the reasons people give for making exceptions are not themselves meta. For most of the examples you give, the intuitive justification is something along the lines of "the reason killing is wrong is that life is valuable, and in these cases not killing would involve valuing life less than killing would." Nothing meta there.

Aaron, I don't see how your proposal resolves debate over exceptions. For example, consider abortion. Presumably both sides on the abortion debate agree that life is valuable.

Comment by Peter_Turney on Changing Your Metaethics · 2008-07-27T14:15:21.000Z · LW · GW

If you say, "Killing people is wrong," that's morality.

It seems to me that few people simply say, "Killing people is wrong." They usually say, if asked for possible exceptions, "Killing people is wrong, except if you're a soldier fighting a legitimate war, a police officer upholding the law, a doctor saving a patient from needless suffering and pain, an executioner for a murderer who has had a fair trial, a person defending himself or herself from violent and deadly attackers ..." It seems that most of the debate is over these exceptions. How do we resolve debate over the exceptions without recourse to metamorality?

Comment by Peter_Turney on Math is Subjunctively Objective · 2008-07-25T14:18:08.000Z · LW · GW

I am quite confident that the statement 2 + 3 = 5 is true; I am far less confident of what it means for a mathematical statement to be true.

There are two complementary answers to this question that seem right to me: Quine's Two Dogmas of Empiricism and Lakoff and Núñez's Where Mathematics Comes From. As Quine says, first you have to get rid of the false distinction between analytic and synthetic truth. What you have instead is a web or network of mutually reinforcing beliefs. Parts of this web touch the world relatively closely (beliefs about counting sheep) and parts touch the world less closely (beliefs about Peano's axioms for arithmetic). But the degree of confidence we have in a belief does not necessarily correspond to how closely it is connected to the world; it depends more on how the belief is embedded in our web of beliefs and how much support the belief gets from surrounding beliefs. Thus "2 + 3 = 5" can be strongly supported in our web of beliefs, more so than some beliefs that are more directly connected to the world, yet ultimately "2 + 3 = 5" is anchored in our daily experience of the world. Lakoff and Núñez go into more detail about the nature of this web and its anchoring, but what they say is largely consistent with Quine's general view.

Comment by Peter_Turney on Could Anything Be Right? · 2008-07-18T14:12:42.000Z · LW · GW

Eliezer, it seems to me that you were trying to follow Descartes' approach to philosophy: Doubt everything, and then slowly build up a secure fortress of knowledge, using only those facts that you know you can trust (such as "cogito ergo sum"). You have discovered that this approach to philosophy does not work for morality. In fact, it doesn't work at all. With minor adjustments, your arguments above against a Cartesian approach to morality can be transformed into arguments against a Cartesian approach to truth.

My advice is, don't try to doubt everything and then rebuild from scratch. Instead, doubt one thing (or a small number of things) at a time. In one sense, this advice is more conservative than the Cartesian approach, because you don't simultaneously doubt everything. In another sense, this advice is more radical than the Cartesian approach, because there are no facts (even "cogito ergo sum") that you fully trust after a single thorough examination; everything remains open to doubt and nothing is certain, but many things are provisionally accepted while the current object of doubt is examined.

Instead of building morality by clearing the ground and then constructing a firm foundation, imagine that you are repairing a ship while it is sailing. Build morality by looking for the rotten planks and replacing them, one at a time. But never fully trust a plank, even if it was just recently replaced. Every plank is a potential candidate for replacement, but don't try to replace them all at the same time.

Comment by Peter_Turney on Whither Moral Progress? · 2008-07-16T13:51:37.000Z · LW · GW

If we all cooperated with each other all the time, that would be moral progress. -- Tim Tyler

I agree with Tim. Morality is all about cooperation.

If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle. -- John McCarthy, via Eliezer Yudkowsky

This is a reductio ad absurdum argument against the idea that morality is an end. I agree with what it implies: Morality is a means, not an end. Cooperation is a means we each use to achieve our personal goals.

Comment by Peter_Turney on My Kind of Reflection · 2008-07-11T00:53:47.000Z · LW · GW

All of my philosophy here actually comes from trying to figure out how to build a self-modifying AI that applies its own reasoning principles to itself in the process of rewriting its own source code.

So it's not that being suspicious of Occam's Razor, but using your current mind and intelligence to inspect it, shows that you're being fair and defensible by questioning your foundational beliefs.

Eliezer, let's step back a moment and look at your approach to AI research. It looks to me like you are trying to first clarify your philosophy, and then you hope that the algorithms will follow from the philosophy. I have a PhD in philosophy and I've been doing AI research for many years. For me, it's a two-way street. My philosophy guides my AI research and my experiments with AI feed back into my philosophy.

I started my AI research with the belief that Occam's Razor is right. In a sense, I still believe it is right. But trying to implement Occam's Razor in code has changed my philosophy. The problem is taking the informal, intuitive, vague, contradictory concept of Occam's Razor that is in my mind and converting it into an algorithm that works in a computer. There are many different formalizations of Occam's Razor, and they don't all agree with each other. I now think that none of them are quite right.
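To make the disagreement concrete, here is a toy sketch of my own (not from the original comment): two common-sense readings of "simpler", counting free parameters versus measuring description length, can rank the same pair of hypotheses in opposite ways. The expressions and the two measures are illustrative assumptions, not standard formalizations.

```python
import re

# Two made-up "simplicity" measures applied to toy symbolic hypotheses about
# y as a function of x.  Only the reversal between them matters.

def count_parameters(expr: str) -> int:
    """Simplicity measure 1: number of free parameters (standalone letters a-e)."""
    return len(re.findall(r"\b[a-e]\b", expr))

def description_length(expr: str) -> int:
    """Simplicity measure 2: number of characters in the expression as written."""
    return len(expr)

hypothesis_A = "sin(exp(cos(x)))"   # no free parameters, but a longer formula
hypothesis_B = "a + b*x"            # two free parameters, but a shorter formula

print(count_parameters(hypothesis_A), count_parameters(hypothesis_B))      # 0 2, so A looks simpler
print(description_length(hypothesis_A), description_length(hypothesis_B))  # 16 7, so B looks simpler
```

Both measures are defensible readings of "prefer the simpler hypothesis", yet they disagree about which hypothesis the razor should favor.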

I agree that introspection suggests that we use something like Occam's Razor when we think, and I agree that it is likely that evolution has shaped our minds so that our intuitive concept of Occam's Razor captures something about how the universe is structured. What I doubt is that any of our formalizations of Occam's Razor are correct. This is why I insist that any formalizations of Occam's Razor require experimental validation.

I am not "being suspicious of Occam's Razor" in order to be "fair and defensible by questioning [my] foundational beliefs". I am suspicious of formalizations of Occam's Razor because I doubt that they really capture how our minds work, so I would like to see evidence that these formalizations work. I am suspicious of informal thinking about Occam's Razor, because I have learned that introspection is misleading, and because my informal notion of Occam's Razor becomes fuzzier and fuzzier the longer I stare at it.

Comment by Peter_Turney on Where Recursive Justification Hits Bottom · 2008-07-09T05:07:45.000Z · LW · GW

The razor still cuts, because in real life, a person must choose some particular ordering of the hypotheses.

Unknown, you have removed all meaning from Occam's Razor. The way you define it, it is impossible not to use Occam's Razor. When somebody says to you, "You should use Occam's Razor," you hear them saying "A is A".

Comment by Peter_Turney on Where Recursive Justification Hits Bottom · 2008-07-09T04:33:17.000Z · LW · GW

In fact, an anti-Occam prior is impossible.

Unknown, your argument amounts to this: Assume we have a countable set of hypotheses. Assume we have a complexity measure such that, for any given level of complexity, only finitely many hypotheses fall below that level. Take any ordering of the set of hypotheses. As we go through the hypotheses according to that ordering, their complexity must eventually grow without bound. This is true, but not very interesting, and not relevant to Occam's Razor.

In this framework, a natural way to state Occam's Razor is, if one of the hypotheses is true and the others are false, then you should rank the hypotheses in order of monotonically increasing complexity and test them in that order; you will find the true hypothesis earlier in such a ranking than in other rankings in which more complex hypotheses are frequently tested before simpler hypotheses. When you state it this way, it is clear that Occam's Razor is contingent on the environment; it is not necessarily true.
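As a rough illustration of this reading (my own sketch, not anything from the original thread), Occam's Razor becomes a search strategy: test hypotheses simplest-first and stop at the first survivor. The `complexity` and `fits_data` names are assumed placeholders for whatever measure and test one actually uses; whether simplest-first wins depends on the world it is dropped into.

```python
import random

def occam_search(hypotheses, complexity, fits_data):
    """Test hypotheses in order of increasing complexity; return (winner, tests used)."""
    tests = 0
    for h in sorted(hypotheses, key=complexity):
        tests += 1
        if fits_data(h):
            return h, tests
    return None, tests

def random_search(hypotheses, fits_data, seed=0):
    """Baseline: test the same hypotheses in an arbitrary order."""
    tests = 0
    shuffled = list(hypotheses)
    random.Random(seed).shuffle(shuffled)
    for h in shuffled:
        tests += 1
        if fits_data(h):
            return h, tests
    return None, tests

# Toy world: hypotheses are integers, complexity is magnitude, and the truth
# happens to be a simple hypothesis.  That simplest-first wins here is an
# empirical fact about this toy world, not a logical necessity.
hypotheses = list(range(1, 101))
true_h = 3
print(occam_search(hypotheses, complexity=abs, fits_data=lambda h: h == true_h))
print(random_search(hypotheses, fits_data=lambda h: h == true_h))
```

If the true hypothesis in this toy world had been a complex one, an arbitrary ordering could easily have found it sooner, which is what it means for the razor to be contingent on the environment.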

If you define Occam's Razor in such a way that all orderings of the hypotheses are Occamian, then the "razor" is not "cutting" anything. If you don't narrow down to a particular ordering or set of orderings, then you are not making a decision; given two hypotheses, you have no way of choosing between them.

Comment by Peter_Turney on Where Recursive Justification Hits Bottom · 2008-07-08T16:06:24.000Z · LW · GW

And if you're allowed to end in something assumed-without-justification, then why aren't you allowed to assume anything without justification?

I address this question in Incremental Doubt. Briefly, the answer is that we use a background of assumptions in order to inspect a foreground belief that is the current focus of our attention. The foreground is justified (if possible) by referring to the background (and by doing some experiments, using background tools to design and execute them). There is a risk that incorrect background beliefs will "lock in" an incorrect foreground belief, but this process of "incremental doubt" will make progress if we can chop our beliefs up into relatively independent chunks and continuously expose various beliefs to focused doubt, one belief (or a few) at a time.

This is exactly like biological evolution, which mutates a few genes at a time. There is a risk that genes will get "locked in" to a local optimum, and indeed this happens occasionally, but evolution usually finds a way to get over the hump.
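Here is a rough sketch of the analogy, under my own toy assumptions (a bit-string "web of beliefs" and an arbitrary scoring function, neither of which comes from the original comment): revise a few planks at a time and keep the revision only if the whole still coheres at least as well as before.

```python
import random

def coherence(beliefs, target):
    """Toy stand-in for how well the web of beliefs fits the world."""
    return sum(b == t for b, t in zip(beliefs, target))

def incremental_revision(n=20, steps=500, flips_per_step=1, seed=1):
    rng = random.Random(seed)
    target = [rng.randint(0, 1) for _ in range(n)]   # the way the world is
    beliefs = [rng.randint(0, 1) for _ in range(n)]  # the current web of beliefs
    for _ in range(steps):
        candidate = beliefs[:]
        for i in rng.sample(range(n), flips_per_step):  # doubt one (or a few) planks
            candidate[i] ^= 1
        if coherence(candidate, target) >= coherence(beliefs, target):
            beliefs = candidate                          # replace the plank
    return coherence(beliefs, target), n

# In this smooth toy landscape the process usually reaches (20, 20); with
# interacting beliefs (epistatic planks) it can stall at a local optimum,
# which is the risk noted above.
print(incremental_revision())
```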

Should I trust Occam's Razor? Well, how well does (any particular version of) Occam's Razor seem to work in practice?

This is the right question. A problem is that there is the informal concept of Occam's Razor and there are also several formalizations of Occam's Razor. The informal and formal versions should be carefully distinguished. Some researchers use the apparent success of the informal concept in daily life as an argument to support a particular formal concept in some computational task. This assumes that the particular formalization captures the essence of the informal concept, and it assumes that we can trust what introspection tells us about the success of the informal concept. I doubt both of these assumptions. The proper way to validate a particular formalization of Occam's Razor is to apply it to some computational task and evaluate its performance. Appeal to intuition is not a substitute for experiment.

At present, I start going around in a loop at the point where I explain, "I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct."

It seems to me that this quote, where it mentions "simple", must be talking about the informal concept of Occam's Razor. If so, then it seems reasonable to me. But formalizations of Occam's Razor still require experimental evidence.

The question is, what is the scope of the claims in this quote? Is the scope limited to how I should personally decide what to believe, or does it extend to the algorithms I should employ in my AI research? I am willing to apply my informal concept of Occam's Razor to my own thinking without further evidence (in fact, it seems that it isn't entirely under my control), but I require experimental evidence when, as a scientist, I use a particular formalization of Occam's Razor in an AI algorithm, at least when simplicity is in the foreground of the research rather than the background.

Comment by Peter_Turney on Moral Complexities · 2008-07-04T12:42:33.000Z · LW · GW

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"? Does the notion of morality-as-preference really add up to moral normality?

It's all about delicious versus nutritious. That is, these conflicts are conflicts between different time horizons, or different discount rates for future costs and benefits. Evolution has shaped our time horizon for relatively short-term decisions (eat the pie now: it will taste good, and there may not be another chance), but we live in a world where a longer horizon is more appropriate (the pie may taste good, but it isn't good for my health; also, I may benefit in the long term by giving the pie to somebody else).
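A small numerical illustration of the point (my own numbers, chosen only to make the flip visible): with exponential discounting, a steep discount rate prefers the immediate pie while a shallow one prefers the delayed benefit.

```python
def present_value(reward, delay, discount):
    """Exponentially discounted value of a reward received `delay` steps from now."""
    return reward * (discount ** delay)

# Steep discounting (the short evolutionary horizon)
eat_now = present_value(reward=1.0, delay=0,  discount=0.5)
save_it = present_value(reward=3.0, delay=10, discount=0.5)
print(eat_now, save_it)   # 1.0 vs ~0.003: eat the pie now

# Shallow discounting (the longer horizon the modern world rewards)
eat_now = present_value(reward=1.0, delay=0,  discount=0.95)
save_it = present_value(reward=3.0, delay=10, discount=0.95)
print(eat_now, save_it)   # 1.0 vs ~1.80: the delayed benefit wins
```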

Comment by Peter_Turney on The Moral Void · 2008-06-30T16:04:42.000Z · LW · GW

See: Good, Evil, Morality, and Ethics: "What would it mean to want to be moral (to do the moral thing) purely for the sake of morality itself, rather than for the sake of something else? What could this possibly mean to a scientific materialistic atheist? What is this abstract, independent, pure morality? Where does it come from? How can we know it? I think we must conclude that morality is a means, not an end in itself."

Comment by Peter_Turney on The Moral Void · 2008-06-30T14:46:51.000Z · LW · GW

Eliezer, your post is entirely consistent with what I said to Robin in my comments on "Morality Is Overrated": Morality is a means, not an end.

Comment by Peter_Turney on 2-Place and 1-Place Words · 2008-06-27T14:27:13.000Z · LW · GW

Related: The Logic of Attributional and Relational Similarity.

Comment by Peter_Turney on Surface Analogies and Deep Causes · 2008-06-22T19:45:02.000Z · LW · GW

this is evidence, by Ockham's razor

See Ockham's Razor is Dull.

Comment by Peter_Turney on Surface Analogies and Deep Causes · 2008-06-22T14:30:30.000Z · LW · GW

For a more sophisticated theory of analogical reasoning, you should read Dedre Gentner's papers. A good starting point is The structure-mapping engine: Algorithm and examples. Gentner defines a hierarchy of attributes (properties of entities; in logic, predicates with a single argument, P(X)), first-order relations (relations between entities; in logic, predicates with two or more arguments, R(X,Y)), and higher-order relations (relations between relations). Her experiments with children show that they begin reasoning with attributional similarity (what you call "surface similarities"); as they mature, they make increasing use of first-order relational similarity (what you call "structural similarity"); finally, they begin using higher-order relations, especially causal relations. This fits perfectly with your description of your own childhood. See Language and the career of similarity.
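As a schematic sketch of this hierarchy (my own rendering of the summary above, not Gentner's notation or code), here is the classic solar-system analogy laid out at the three levels; the predicate names are illustrative.

```python
# Attributes: properties of single entities, P(X)
attributes = [
    ("YELLOW", "sun"),
    ("MASSIVE", "sun"),
]

# First-order relations: relations between entities, R(X, Y)
first_order = [
    ("ATTRACTS", "sun", "planet"),
    ("REVOLVES_AROUND", "planet", "sun"),
]

# Higher-order relations: relations whose arguments are themselves relations,
# for example a causal link between two first-order facts
higher_order = [
    ("CAUSE",
     ("ATTRACTS", "sun", "planet"),
     ("REVOLVES_AROUND", "planet", "sun")),
]

# The developmental claim summarized above: children first match on shared
# attributes (surface similarity), later on shared first-order relations
# (structural similarity), and finally on shared higher-order, often causal,
# structure.
```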

Comment by Peter_Turney on Einstein's Superpowers · 2008-05-30T12:49:42.000Z · LW · GW

I discuss the hero worship of great scientists in The Heroic Theory of Scientific Development and I discuss genius in Genius, Sustained Effort, and Passion.

Comment by Peter_Turney on Mach's Principle: Anti-Epiphenomenal Physics · 2008-05-24T14:01:51.000Z · LW · GW

I think this line of reasoning can be taken even further: Everything is relations; attributes are an illusion.

Comment by Peter_Turney on Changing the Definition of Science · 2008-05-18T19:29:27.000Z · LW · GW

Bayesianism has its uses, but it is not the final answer. It is itself the product of a more fundamental process: evolution. Science, technology, language, and culture are all governed by evolution. I believe that this gives much deeper insight into science and knowledge than Bayesianism. See:

(1) Multiple Discovery: The Pattern of Scientific Progress, Lamb and Easton
(2) Without Miracles: Universal Selection Theory and the Second Darwinian Revolution, Cziko
(3) Darwin's Dangerous Idea: Evolution and the Meanings of Life, Dennett
(4) The Evolution of Technology, Basalla

Scientific method itself evolves. Bayesianism is part of that evolution, but only a small part.

Comment by Peter_Turney on No Safe Defense, Not Even Science · 2008-05-18T14:30:46.000Z · LW · GW

I agree with your general view, but I came to the same view by a more conventional route: I got a PhD in philosophy of science. If you study philosophy of science, you soon find that nobody really knows what science is. The "Science" you describe is essentially Popper's view of science, which has been extensively criticized and revised by later philosophers. For example, how can you falsify a theory? You need a fact (an "observation") that conflicts with the theory. But what is a fact, if not a true mini-theory? And how can you know that it is true, if theories can be falsified, but not proven?

I studied philosophy because I was looking for a rational foundation for understanding the world; something like what Descartes promised with "cogito ergo sum". I soon learned that there is no such foundation. Making a rational model of the world is not like building a home, where the first step is to lay a solid foundation. It is more like trying to patch a hole in a sinking ship, where you don't have the luxury of starting from scratch. I view science as an evolutionary process. Changes must be made in small increments: "Natura non facit saltus".

One flaw I see in your post is that the rule "You cannot trust any rule" applies recursively to itself. (Anything you can do, I can do meta.) I would say "Doubt everything, but one at a time, not all at once."

Comment by Peter_Turney on Decoherence is Simple · 2008-05-06T14:24:52.000Z · LW · GW

(The other part of the "experimental evidence" comes from statisticians / computer scientists / Artificial Intelligence researchers, testing which definitions of "simplicity" let them construct computer programs that do empirically well at predicting future data from past data. Probably the Minimum Message Length paradigm has proven most productive here, because it is a very adaptable way to think about real-world problems.)

I once believed that simplicity is the key to induction (it was the topic of my PhD thesis), but I no longer believe this. I think most researchers in machine learning have come to the same conclusion. Here are some problems with the idea that simplicity is a guide to truth:

(1) Solomonoff/Kolmogorov/Chaitin complexity is not computable at all, let alone in any reasonable amount of time.

(2) The Minimum Message Length depends entirely on how a situation is represented. Different representations lead to radically different MML complexity measures. This is a general problem with any attempt to measure simplicity. How do you justify your choice of representation? For any two hypotheses, A and B, it is possible to find a representation X such that complexity(A) < complexity(B) and another representation Y such that complexity(A) > complexity(B). (A toy illustration of this follows the links below.)

(3) Simplicity is merely one type of bias. The No Free Lunch theorems show that there is no a priori reason to prefer one type of bias over another. Therefore there is nothing special about a bias towards simplicity. A bias towards complexity is equally valid a priori.

http://www.jair.org/papers/paper228.html
http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization
http://en.wikipedia.org/wiki/Inductive_bias
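Here is the toy illustration of point (2) promised above (my own construction, not a real MML coding scheme): the same two hypotheses come out in opposite orders of "simplicity" depending on which primitives the representation language makes cheap.

```python
def code_length(hypothesis, language):
    """Total length of the hypothesis when spelled out in a language's primitives."""
    return sum(language[token] for token in hypothesis)

# Hypothesis A: "the sequence alternates"; hypothesis B: "the sequence is constant".
hyp_A = ["alternate"]
hyp_B = ["constant"]

# Language X has a short primitive for alternation, language Y for constancy;
# each must spell the other concept out at greater length.  The costs are
# arbitrary assumptions; only the reversal matters.
language_X = {"alternate": 1, "constant": 5}
language_Y = {"alternate": 5, "constant": 1}

print(code_length(hyp_A, language_X), code_length(hyp_B, language_X))  # 1 5: A simpler in X
print(code_length(hyp_A, language_Y), code_length(hyp_B, language_Y))  # 5 1: B simpler in Y
```

Nothing here is specific to MML; any description-length measure inherits the same dependence on the choice of primitives.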

Comment by Peter_Turney on Rationality Quotes 7 · 2008-01-30T19:06:53.000Z · LW · GW

The fact that there is a lot of emotional/inept-philosophical baggage to the word does not mean it is irrational to use it.

If your goal is to engage another person in clear, careful, rational discussion, then it is not rational to use terminology that you know to have "a lot of emotional/inept-philosophical baggage", because to do so would be counter-productive with respect to your goal. I assume that the purpose of a blog called "Overcoming Bias" is to engage in clear, careful, rational discussion.

Comment by Peter_Turney on Rationality Quotes 7 · 2008-01-30T18:59:32.000Z · LW · GW

The point is that some things are pre-analytically evil. No matter how much we worry at the concept, slavery and genocide are still evil -- we know these things stronger than we know the preconditions for the reasoning process to the contrary -- I submit that there is simply no argument sufficiently strong to overturn that judgment.

In the American Civil War, some people fought against slavery and others fought to continue slavery. If your statement above is correct, it would seem that everybody who fought to continue slavery was evil. Was their pre-analytical "sense of evil" somehow missing or damaged? If your statement above is correct, it would also seem that there is no possible case in which a rational argument caused a person to change sides in the Civil War. This seems highly unlikely to me.

Culture, including ethics, evolves over time. Actions that were once morally acceptable are no longer considered morally acceptable. I don't claim to understand all the forces that govern the evolution of ethics, but it is plain to see that our ethical systems have evolved. Slavery was once accepted and considered ethical by many; now it is not accepted. Women were once not allowed to vote; now they can vote.

To say that something is "pre-analytically evil" seems to be an excuse for avoiding rational, scientific analysis of the epistemology and ontology of our ethical judgments.

Comment by Peter_Turney on Rationality Quotes 7 · 2008-01-26T04:39:35.000Z · LW · GW

"When one encounters Evil, the only solution is violence, actual or threatened."

This whole quote is sophistry. The capitalized word "Evil" is a metaphorical personification of an abstract concept. A standard definition of "evil" is "morally objectionable behavior". Suppose we replace the personification "Evil" with "morally objectionable behavior":

"When one encounters morally objectionable behavior, the only solution is violence, actual or threatened."

The result is absurd. Suppose we agree that shoplifting is morally objectionable behavior. Is it true that the only solution to shoplifting is violence or the threat of violence? I don't think so. But "Evil" is an emotionally loaded term that triggers our biases and discourages careful, rational thought. So when we read, "When one encounters Evil, the only solution is violence, actual or threatened," it is not quite so obviously false as, "When one encounters morally objectionable behavior, the only solution is violence, actual or threatened."

One problem with the term "evil" is that it is typically applied to a person, rather than to a person's behavior. For example (see above), "Kevin Giffhorn is Evil." Compare this to, "Kevin Giffhorn has behaved in a way that is morally objectionable." The first statement leads to the conclusion that an evil person must be punished. The second statement leads to asking what caused Kevin Giffhorn to behave as he did, and how we can address that cause. To say that he acted evilly because he is evil gets us nowhere.

Comment by Peter_Turney on Rationality Quotes 7 · 2008-01-25T18:34:24.000Z · LW · GW

"The simple fact is that non-violent means do not work against Evil."

I believe that this quote is not rational, because thinking of human relations in terms of "good" and "evil" is not rational. I prefer to think in terms of the iterated prisoners' dilemma; in terms of cooperation and defection. If you frame a conflict in terms of "good" and "evil", you quickly reach violence. If you frame it in terms of "cooperation" and "defection", you may be able to negotiate a cooperative agreement. Violence may be necessary in certain situations, but it represents a suboptimal solution to conflict.
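As a minimal illustration of the cooperation/defection framing (a toy sketch of my own, using the standard prisoner's dilemma payoffs T=5, R=3, P=1, S=0), tit-for-tat against itself does far better than mutual defection, even though defection "beats" tit-for-tat head to head.

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the other player's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy1, strategy2, rounds=20):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strategy1(h1, h2), strategy2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))      # (60, 60): sustained cooperation
print(play(tit_for_tat, always_defect))    # (19, 24): defection wins the pairing but scores poorly
print(play(always_defect, always_defect))  # (20, 20): mutual defection, the violent framing
```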

In a blog that is dedicated to overcoming bias, the term "evil" should only be used to point out the bias and irrationality that is encouraged by the concept of "evil".