Comment by philgoetz on How SIAI could publish in mainstream cognitive science journals · 2019-05-15T23:35:17.212Z · score: 3 (1 votes) · LW · GW

Thanks!

I have great difficulty finding any philosophy published after 1960 other than post-modern philosophy, probably because my starting point is literary theory, which is completely confined to--the words "dominated" and even "controlled" are too weak--Marxian post-modern identity politics, which views literature only as a political barometer and tool.

Comment by philgoetz on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2019-05-15T20:29:15.871Z · score: 9 (3 votes) · LW · GW

I think you're assuming that to give in to the mugging is the wrong answer in a one-shot game for a being that values all humans in existence equally, because it feels wrong to you, a being with a moral compass evolved in iterated multi-generational games.

Consider these possibilities, any one of which would create challenges for your reasoning:

1. Giving in is the right answer in a one-shot game, but the wrong answer in an iterated game. If you give in to the mugging, the outsider will keep mugging you and other rationalists until you're all broke, leaving the universe's future in the hands of "Marxians" and post-modernists.

2. Giving in is the right answer for a rational AI God, but evolved beings (under the Darwinian definition of "evolved") can't value all members of their species equally. They must value kin more than strangers. You would need a theory to explain why any being that evolved due to resource competition wouldn't consider killing a large number of very distantly-related members of its species to be a good thing.

3. You should interpret the conflict between your intuition, and your desire for a rational God, not as showing that you're reasoning badly because you're evolved, but that you're reasoning badly by desiring a rational God bound by a static utility function. This is complicated, so I'm gonna need more than one paragraph:

Intuitively, my argument boils down to applying the logic behind free markets, freedom of speech, and especially evolution, to the question of how to construct God's utility function. This will be vague, but I think you can fill in the blanks.

Free-market economic theory developed only after millennia during which everyone believed that top-down control was the best way of allocating resources. Freedom of speech developed only after millennia during which everyone believed that it was rational for everyone to try to suppress any speech they disagreed with. Political liberalism developed only after millennia during which everybody believed that the best way to reform society was to figure out what the best society would be like, then force that on everyone. Evolution was conceived of--well, originally about 2500 years ago, probably by Democritus, but it became popular only after millennia during which everyone believed that life could be created only by design.

All of these developments came from empiricists. Empiricism is one of the two opposing philosophical traditions of Western thought. It originated, as far as we know, with Democritus (about whom Plato reportedly said that he wished all his works to be burned--which they eventually were). It went through the Skeptics, the Stoics, Lucretius, nominalism, the use of numeric measurements (re-introduced to the West circa 1300), the Renaissance and Enlightenment, and eventually (with the addition of evolution, probability, statistics, and operationalized terms) created modern science.

A key principle of empiricism, on which John Stuart Mill explicitly based his defense of free speech, is that we can never be certain. If you read about the skeptics and stoics today, you'll read that they "believed nothing", but that was because, to their opponents, "believe" meant "know something with 100% certainty".

(The most-famous skeptic, Sextus Empiricus, was called "Empiricus" because he was of the empirical school of medicine, which taught learning from experience. Its opponent was the rational school of medicine, which used logic to interpret the dictums of the ancient authorities.)

The opposing philosophical tradition, founded by Plato, is rationalism. "Rational" does not mean "good thinking". It has a very specific meaning, and it is not a good way of thinking. It means reasoning about the physical world the same way Euclid constructed geometric proofs. No measurements, no irrational numbers, no observation of the world, no operationalized nominalist definitions, no calculus or differential equations, no testing of hypotheses--just armchair a priori logic about universal categories, based on a set of unquestionable axioms, done in your favorite human language. Rationalism is the opposite of science, which is empirical. The pretense that "rational" means "right reasoning" is the greatest lie foisted on humanity by philosophers.

Dualist rationalism is inherently religious, as it relies on some concept of "spirit", such as Plato's Forms, Augustine's God, Hegel's World Spirit, or an almighty programmer converting sense data into LISP symbols, to connect the inexact, ambiguous, changeable things of this world to the precise, unambiguous, unchanging, and usually unquantified terms in its logic.

(Monist rationalists, like Buddha, Parmenides, and post-modernists, believe sense data can't be divided unambiguously into categories, and thus we may not use categories. Modern empiricists categorize sense data using statistics.)

Rationalists support strict, rigid, top-down planning and control. This includes their opposition to free markets, free speech, gradual reform, and optimization and evolution in general. This is because rationalists believe they can prove things about the real world, and hence their conclusions are reliable, and they don't need to mess around with slow, gradual improvements or with testing. (Of course each rationalist believes that every other rationalist was wrong, and should probably be burned at the stake.)

They oppose all randomness and disorder, because it makes strict top-down control difficult, and threatens to introduce change, which can only be bad once you've found the truth.

They have to classify every physical thing in the world into a discrete, structureless, atomic category, for use in their logic. That has led inevitably to theories which require all humans to ultimately have, at reflective equilibrium, the same values--as Plato, Augustine, Marx, and CEV all do.

You have, I think, picked up some of these bad inclinations from rationalism. When you say you want to find the "right" set of values (via CEV) and encode them into an AI God, that's exactly like the rationalists who spent their lives trying to find the "right" way to live, and then suppress all other thoughts and enforce that "right way" on everyone, for all time. Whereas an empiricist would never claim to have found final truth, and would always leave room for new understandings and new developments.

Your objection to randomness is also typically rationalist. Randomness enables you to sample without bias. A rationalist believes he can achieve complete lack of bias; an empiricist believes that neither complete lack of bias nor complete randomness can be achieved, but that for a given amount of effort, you might achieve lower bias by working on your random number generator and using it to sample, than by hacking away at your biases.
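The sampling claim above can be made concrete with a toy simulation (my own illustration, not from the original comment; the population and sample sizes are invented): a deterministic selection rule made from a biased vantage point can miss the population mean by far more than a well-seeded random sample does.

```python
import random

population = list(range(100))          # true mean is 49.5
true_mean = sum(population) / len(population)

# "Hacking away at your biases": a deterministic rule that still
# samples from a biased vantage point (here, the first ten items).
biased_sample = population[:10]
biased_mean = sum(biased_sample) / len(biased_sample)

# "Working on your random number generator": an unbiased random sample,
# seeded only so the sketch is reproducible.
random.seed(0)
random_sample = random.sample(population, 10)
random_mean = sum(random_sample) / len(random_sample)

# The random sample's error is far smaller than the biased sample's.
assert abs(random_mean - true_mean) < abs(biased_mean - true_mean)
```

The point of the sketch is only that randomness buys freedom from systematic bias at the cost of variance, which is the trade-off the paragraph describes.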

So I don't think we should build an FAI God who has a static set of values. We should build, if anything, an AI referee, who tries only to keep conditions in the universe that will enable evolution to keep on producing behaviors, concepts, and creatures of greater and greater complexity. Randomness must not be eliminated, for without randomness we can have no true exploration, and must be ruled forever by the beliefs and biases of the past.

Comment by philgoetz on Rescuing the Extropy Magazine archives · 2019-05-15T17:32:56.179Z · score: 3 (1 votes) · LW · GW

I scanned in Extropy 1, 3, 4, 5, 7, 16, and 17, which leaves only #2 missing. How can I send these to you? Contact me at [my user name] at gmail.

Comment by philgoetz on Why Bayesians should two-box in a one-shot · 2017-12-17T18:26:40.960Z · score: 0 (0 votes) · LW · GW

I just now read that one post. It isn't clear how you think it's relevant. I'm guessing you think that it implies that positing free will is invalid.

You don't have to believe in free will to incorporate it into a model of how humans act. We're all nominalists here; we don't believe that the concepts in our theories actually exist somewhere in Form-space.

When someone asks the question, "Should you one-box?", they're using a model which uses the concept of free will. You can't object to that by saying "You don't really have free will." You can object that it is the wrong model to use for this problem, but then you have to spell out why, and what model you want to use instead, and what question you actually want to ask, since it can't be that one.

People in the LW community don't usually do that. I see sloppy statements claiming that humans "should" one-box, based on a presumption that they have no free will. That's making a claim within a paradigm while rejecting the paradigm. It makes no sense.

Consider what Eliezer says about coin flips:

We've previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin. The coin itself is either heads or tails. But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.

The mind projection fallacy is treating the word "probability" not in a nominalist way, but in a philosophically realist way, as if probabilities were things existing in the world. Probabilities are subjective. You don't project them onto the external world. That doesn't make "coin.probability == 0.5" a "false" statement. It correctly specifies the distribution of possibilities given the information available within the mind making the probability assessment. I think that is what Eliezer is trying to say there.

"Free will" is a useful theoretical construct in a similar way. It may not be a thing in the world, but it is a model for talking about how we make decisions. We can only model our own brains; you can't fully simulate your own brain within your own brain; you can't demand that we use the territory as our map.

Comment by philgoetz on 37 Ways That Words Can Be Wrong · 2017-12-17T16:28:54.260Z · score: 1 (1 votes) · LW · GW

Yep, nice list. One I didn't see: defining a word in a way that is less useful (that conveys less information) while rejecting a definition that is more useful (that conveys more information). Always choose the definition that conveys more information, and eliminate words that convey zero information. It's common for people to define words so that they convey zero information: if everything has the Buddha nature, then nothing empirical can be said about what "the Buddha nature" means, and the phrase conveys no information.

Along similar lines, always define words so that no other word conveys too much mutual information about them. For instance, many people have argued with me that I should use the word "totalitarian" to mean "the fascist nations of the 20th century". Well, we already have a word for that, which is "fascist", so to define "totalitarian" as a synonym makes it a useless word.

The word "fascist" raises the question of when to use extensional vs. intensional definitions. It's conventionally defined extensionally, to mean the Axis powers in World War 2. This is not a useful definition, as we already have a label for that. Worse, people define it extensionally but pretend they've defined it intensionally. They call people today "fascist", conveying connotations in a way that can't be easily disputed, because there is no intensional definition to evaluate the claim.

Sometimes you want to switch back and forth between extensional and intensional definitions. In art history, we have a term for each period or "movement", like "neo-classical" and "Romantic". The exemplars of the category are defined both intensionally and extensionally, as those artworks having certain properties and produced in certain geographic locations during a certain time period. It is appropriate to use the intensional definition alone if describing a contemporary work of art (you can call it "Romantic" if it looks Romantic), but inappropriate to use examples that fit the intension but not the extension as exemplars, or to deduce things about the category from them. This keeps the categories stable.

A little ways back I talked about defining the phrase "Buddha nature". Phrases also have definitions--words are not atoms of meaning. Analyzing a phrase as if our theories of grammar worked, ignoring knowledge about idioms, is an error rationalists sometimes commit.

Pretending words don't have connotations is another error rationalists commit regularly--often in sneaky ways, deliberately using the connotations, while pretending they're being objective. Marxist literary criticism, for instance, loads a lot into the word "bourgeois".

Another category missing here is gostoks and doshes. This is when a word's connotations and tribal affiliation-signalling displace its semantic content entirely, and no one notices it has no meaning. Extremely common in Marxism and in "theory"; "capitalism" and "bourgeois" being the most-common examples. "Bourgeoisie" originally meant people like Rockefeller and the Borges, but as soon as artists began using the word, they used it to mean "people who don't like my scribbles," and now it has no meaning at all, but demonic connotations. "Capitalism" has no meaning that can single out post-feudal societies in the way Marxists pretend it does; any definition of it that I've seen includes things that Marxists don't want it to, like the Soviet Union, absolute monarchies, or even hunter-gatherer tribes. It should be called simply "free markets", which is what they really object to and much more accurate at identifying the economic systems that they oppose, but they don't want to admit that the essence of their ideology is opposition to freedom.

Avoid words with connotations that you haven't justified. Don't say "cheap" if you mean "inexpensive" or "shoddy". Especially avoid words which have a synonym with the opposite connotation: "frugal" and "miserly". Be aware of your etymological payloads: "awesome" and "awful" (full of awe), "incredible" (not credible), "wonderful" (thought-provoking).

Another category is when 2 subcultures have different sets of definitions for the same words, and don't realize it. For instance, in the humanities, "rational" literally means ratio-based reasoning, which rejects the use of real numbers, continuous equations, empirical measurements, or continuous changes over time. This is the basis of the Romantic/Modernist hatred of "science" (by which they mean Aristotelian rationality), and of many post-modern arguments that rationality doesn't work. Many people in the humanities are genuinely unaware that science is different from what it was 2400 years ago, and most were 100% ignorant of science until perhaps the mid-20th century. A "classical education" excludes all empiricism.

Another problem is meaning drift. When you use writings from different centuries, you need to be aware of how the meanings of words and phrases have changed over time. For instance, the official academic line nowadays is that alchemy and astrology are legitimate sciences; this is justified in part by using the word "science" as if it meant the same as the Latin "scientia".

A problem in translation is decollapsing definitions. Medieval Latin conflated some important concepts because their neo-Platonist metaphysics said that all good things sort of went together. So for instance they had a single word, "pulchrum", which meant "beautiful", "sexy", "appropriate to its purpose", "good", and "noble". Translators will translate that into English based on the context, but that's not conveying the original mindset. This comes up most frequently when ancient writers made puns, like Plato's puns in the Crito, or "Jesus'" (Greek) puns in the opening chapters of John, which are destroyed in translation, leaving the reader with a false impression of the speaker's intent.

I disagree that saying "X is Y by definition" is usually wrong, but I should probably leave my comment on that post instead of here.

Comment by philgoetz on 37 Ways That Words Can Be Wrong · 2017-12-17T16:17:28.433Z · score: 0 (0 votes) · LW · GW

[moved to top level of replies]

Comment by philgoetz on 37 Ways That Words Can Be Wrong · 2017-12-17T16:02:04.291Z · score: 0 (0 votes) · LW · GW

But you're arguing against Eliezer, as "God" and "miracle" were (and still are) commonly-used words, and so Eliezer is saying those are good, short words for them.

Comment by philgoetz on Fallacies of Compression · 2017-12-17T02:10:13.451Z · score: 0 (0 votes) · LW · GW

Great post! There is also the non-discrete aspect of compression: information loss. English has, according to some dictionaries, over a million words. It's unlikely we store most of our information in English. Probably there is some sort of dimension reduction, like PCA. There is in any case probably lossy compression. This means people with different histories will use different frequency tables for their compression, and will throw out different information when encoding a verbal statement. I think you would almost certainly find that if you measure word use frequency for different people, then cluster the word use distributions, some clusters would correspond to ideologies. The interesting question is which comes first, the ideology, or the word usage frequency (caused by different life experiences).

Comment by philgoetz on Why Bayesians should two-box in a one-shot · 2017-12-16T14:44:20.593Z · score: 1 (1 votes) · LW · GW

I don't think that what you need has any bearing on what reality has actually given you. Nor can we talk about different decision theories here--as long as we are talking about maximizing expected utility, we have our decision theory; that is enough specification to answer the Newcomb one-shot question. We can only arrive at a different outcome by stating the problem differently, or by sneaking in different metaphysics, or by just doing bad logic (in this case, usually allowing contradictory beliefs about free will in different parts of the analysis.)

Your comment implies you're talking about policy, which must be modelled as an iterated game. I don't deny that one-boxing is good in the iterated game.

My concern in this post is that there's been a lack of distinction in the community between "one-boxing is the best policy" and "one-boxing is the best decision at one point in time in a decision-theoretic analysis, which assumes complete freedom of choice at that moment." This lack of distinction has led many people into wishful or magical rather than rational thinking.

Comment by philgoetz on Why Bayesians should two-box in a one-shot · 2017-12-16T01:03:00.693Z · score: 0 (0 votes) · LW · GW

I can believe that it would make sense to commit ahead of time to one-box at such an event. Doing so would affect your behavior in a way that the predictor might pick up on.

Hmm. Thinking about this convinces me that there's a big problem here in how we talk about the problem, because if we allow people who already knew about Newcomb's Problem to play, there are really 4 possible actions, not 2:

  • intended to one-box, one-boxed
  • intended to one-box, two-boxed
  • intended to two-box, one-boxed
  • intended to two-box, two-boxed

I don't know if the usual statement of Newcomb's problem specifies whether the subject learns the rules of the game before or after the predictor makes a prediction. It seems to me that's a critical factor. If the subject is told the rules of the game before the predictor observes the subject and makes a prediction, then we're just saying Omega is a very good lie detector, and the problem is not even about decision theory, but about psychology: Do you have a good enough poker face to lie to Omega? If not, pre-commit to one-box.

We shouldn't ask, "Should you two-box?", but, "Should you two-box now, given how you would have acted earlier?" The various probabilities in the present depend on what you thought in the past. Under the proposition that Omega is perfect at predicting, the person inclined to two-box should still two-box, 'coz that $1M probably ain't there.

So Newcomb's problem isn't a paradox. If we're talking just about the final decision, the one made by a subject after Omega's prediction, then the subject should probably two-box (as argued in the post). If we're talking about two decisions, one before and one after the box-opening, then all we're asking is whether you can convince Omega that you're going to one-box if you aren't. Then it would not be terribly hard to say that a predictor might be so good (say, an Amazing Kreskin-level cold-reader of humans, or that you are an AI) that your only hope is to precommit to one-box.
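The arithmetic behind "that $1M probably ain't there" can be sketched with the conventional payoffs ($1,000 in the transparent box, $1,000,000 in the opaque one — numbers assumed here, not fixed by the comment), treating the predictor's accuracy p as the only free parameter:

```python
def ev_one_box(p):
    # Predictor is right with probability p, so the opaque box
    # holds $1,000,000 exactly when one-boxing was predicted.
    return p * 1_000_000

def ev_two_box(p):
    # The two-boxer always takes the $1,000, and gets the
    # $1,000,000 as well only when the predictor erred.
    return 1_000 + (1 - p) * 1_000_000

# Break-even accuracy: p * 1e6 = 1000 + (1-p) * 1e6, i.e. p = 0.5005.
# A near-perfect predictor favors one-boxing in expectation;
# a coin-flip predictor favors two-boxing.
assert ev_one_box(0.99) > ev_two_box(0.99)
assert ev_one_box(0.5) < ev_two_box(0.5)
```

Note this expected-value calculation is exactly what the comment disputes for the post-prediction decision: once the boxes are fixed, p no longer describes anything the subject can influence.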

Comment by philgoetz on Why Bayesians should two-box in a one-shot · 2017-12-15T19:48:31.980Z · score: 0 (0 votes) · LW · GW

This was argued against in the Sequences and in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory - all the functional decision theory stuff is assuming a deterministic decision process, I think.

It is compatible to believe your actions follow deterministically and still talk about decision theory. It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view, as if you could by force of will violate your programming.

To ask what choice a deterministic entity should make presupposes both that it does, and does not, have choice. Presupposing a contradiction means STOP, your reasoning has crashed and you can prove any conclusion if you continue.

Comment by philgoetz on Announcing the AI Alignment Prize · 2017-12-15T19:41:12.041Z · score: 1 (3 votes) · LW · GW

I think that first you should elaborate on what you mean by "the goals of humanity". Do you mean majority opinion? In that case, one goal of humanity is to have a single world religious State, although there is disagreement on what that religion should be. Other goals of humanity include eliminating homosexuality and enforcing traditional patriarchal family structures.

Okay, I admit it--what I really think is that "goals of humanity" is a nonsensical phrase, especially when spoken by an American academic. It would be a little better to talk about values instead of goals, but not much better. The phrase still implies the unspoken belief that everyone would think like the person who speaks it, if only they were smarter.

Comment by philgoetz on Why Bayesians should two-box in a one-shot · 2017-12-15T19:12:03.303Z · score: 0 (0 votes) · LW · GW

The part of physics that implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions is quantum mechanics. But I don't think invoking it is the best response to your question. Though it does make me wonder how Eliezer reconciles his thoughts on one-boxing with his many-worlds interpretation of QM. Doesn't many-worlds imply that every game with Omega creates worlds in which Omega is wrong?

If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless. If you believe you should one-box if Omega can perfectly predict your actions, but two-box otherwise, then you are better off trying to two-box: you've already agreed that you should two-box if Omega can't perfectly predict your actions. If Omega can, you won't be able to two-box unless Omega already predicted that you would, so it won't hurt to try to two-box.

Why Bayesians should two-box in a one-shot

2017-12-15T17:39:32.491Z · score: 1 (1 votes)
Comment by philgoetz on The Ancient God Who Rules High School · 2017-04-13T01:45:15.816Z · score: 1 (1 votes) · LW · GW

Sorry. I've been reading English literary journals and lit theory books for the past year, and the default assumption is always that the reader is a Marxist.

Comment by philgoetz on Belief in Belief · 2017-04-13T01:32:49.697Z · score: 0 (0 votes) · LW · GW

The rationalist virtue of empiricism...

I'm not disagreeing with any of the content above, but a note about terminology--

LessWrong keeps using the word "rationalism" to mean something like "reason" or possibly even "scientific methodology". In philosophy, however, "rationalism" is not allied to "empiricism", but diametrically opposed to it. What we call science was a gradual development, over a few centuries, of methodologies that harnessed the powers both of rationalism and empiricism, which had previously been thought to be incompatible.

But if you talk to a modernist or post-modernist today, when they use the term "rational", they mean old-school Greek, Platonic-Aristotelian rationalism. They, like us, think so much in this old Greek way that they may use the term "reason" when they mean "Aristotelian logic". All post-modernism is based on the assumption that scientific methodology is essentially the combination of Platonic essences, Aristotelian physics, and Aristotelian logic, which is rationalism. They are completely ignorant of what science is and how it works. But this is partly our fault, because they hear us talking about science and using the term "rationality" as if science were rationalism!

(Inb4 somebody says Plato was a rationalist and Aristotle was an empiricist: Really, really not. Aristotle couldn't measure things, and very likely couldn't do arithmetic. In any case the most important Aristotelian writings to post-modernists are the Physics, which aren't empirical in the slightest. No time to go into it here, though.)

Comment by philgoetz on What conservatives and environmentalists agree on · 2017-04-09T23:55:00.785Z · score: 0 (0 votes) · LW · GW

I was unfairly inserting in the parentheses my own presumption about why Christians saw the world as having been created perfect. The passage I was talking about from Aquinas did not talk about perfection of the environment.

I'd like to see what Aquinas did say. Have you got a citation? I'm pretty sure that the notion that the world was created imperfect has never been tolerated by the Catholic Church. Asserting that creation was imperfect might even be condemned as Manicheeism. Opinions vary on what happened after the Fall, but I find it unlikely that Aquinas could have said God's original creation was imperfect. (If he did, he was probably copying Aristotle, and making some fine definitional distinction not explained here, to avoid heresy.)

Comment by philgoetz on What conservatives and environmentalists agree on · 2017-04-09T15:45:52.389Z · score: 1 (1 votes) · LW · GW

Yep, the argument to justify the imperfection of children, and thus the necessity of growth, is based on Aristotle's notion of perfect and imperfect actualities. Aquinas wrote:

Everything is perfect inasmuch as it is in actuality; imperfect, inasmuch as it is in potentiality, with privation of actuality. ... It is impossible therefore for any effect that is brought into being by action to be of a nobler actuality than is the actuality of the agent. It is possible though for the actuality of the effect to be less perfect than the actuality of the acting cause, inasmuch as action may be weakened on the part of the object to which it is terminated, or upon which it is spent.

The reason God created humans so that they have to grow from imperfect childhood (lacking the maturity of a complete human) towards a perfect adult state, rather than being adult, is thus so that they may learn virtue, which is the process of striving for perfection. (The environment does not need to learn virtue; therefore it was created perfect.)

I don't know whether humans would have born offspring that were babies if not for the Fall, nor why animals bear babies, if not for the sake of their spiritual growth.

Comment by philgoetz on What conservatives and environmentalists agree on · 2017-04-09T15:44:47.193Z · score: 0 (0 votes) · LW · GW

I didn't mean to retract this, but to delete it and move the comment down below.

Comment by philgoetz on What conservatives and environmentalists agree on · 2017-04-09T15:31:44.105Z · score: 0 (0 votes) · LW · GW

Historically, Christians objected strongly to fossil evidence that some species had gone extinct. They said God would not have created species and then let them go extinct.

Perfection is a crucial part of Christian ontology. God's creation was perfect. That means, in the Christian way of thinking, it is unchanging. Read Christian descriptions of God (who is perfect), and "unchanging" is always one of the adjectives. "Unchanging" is a necessary attribute of perfection in Christian theology, and God's creation is necessarily perfect. The environment, therefore, was designed and created not to ever change.

One could argue that individuals are thus imperfect because they are born young and then mature. I've never heard a counter-argument against this accusation, though I suspect they exist in the wreckage of medieval theology.

Comment by philgoetz on The Ancient God Who Rules High School · 2017-04-09T15:05:42.111Z · score: 0 (0 votes) · LW · GW

The kid says that school is competitive, and that's bad--why can't they all agree to work less hard (presumably so they can have more time to play video games)? "Getting students to accept the reality that they might just not go to the best schools is good, I guess. But unless it also comes with the rallying call of engaging in a full-on socialist revolution, it doesn’t really deal with the whole issue."

This kid is the straw man conservatives present of socialism--the idea that the purpose of labor unions and socialism isn't to have a decent wage, but to not have to work hard.

There is a competition crisis, though. The problem is partly the idea that getting into an elite school is a measure of your intelligence--it isn't; the schools are explicit that intelligence isn't the sole basis of admission, nor do they even have any measure of intelligence other than standardized test scores--so why not just use the standardized test scores?

But it's also the allocation of social attention. Each field of study is too large now relative to the number of practitioners. Merit doesn't work anymore. There is no such thing as reputation anymore, except within a small circle of colleagues. Nobody trusts grades or recommendations. The problem isn't competition, but that we have no functioning reputation system anymore.

What conservatives and environmentalists agree on

2017-04-08T00:57:32.012Z · score: 9 (10 votes)
Comment by philgoetz on Against responsibility · 2017-04-07T18:21:51.780Z · score: 1 (1 votes) · LW · GW

I don't see how this follows. Evolutionary psychology provides some explanations for our intuitions and instincts that the majority of humans share but that doesn't really say anything about morality as Is Cannot Imply Ought.

Start by saying "rationality" means satisficing your goals and values. The issue is what values you have. You certainly have selfish values. A human also has values that lead to optimizing group survival. Behavior oriented primarily towards those goals is called altruistic.

The model of rationality presented on LessWrong usually treats goals and values that are of negative utility to the agent as biases or errors rather than as goals evolved to benefit the group or the genes. That leads to a view of rationality as strictly optimizing selfish goals.

As to old Utilitarianism 1.0, where somebody just declares by fiat that we are all interested in the greatest good for the greatest number of people--that isn't on the table anymore. People don't do that. Anyone who brings that up is the one asserting an "ought" with no justification. There is no need to talk about "oughts" yet.

Comment by philgoetz on What's up with Arbital? · 2017-04-04T19:49:53.120Z · score: 4 (2 votes) · LW · GW

This sounds great! There is no FAQ on the linked-to website, though. Is Arbital open-source? What are the key licensing terms? How's it implemented? How does voting work?

If we're all supposed to use the same website, there are advantages to that, but I would be less excited about that.

Also, the home page links to https://arbital.com/explore/math, but that page is blank. Er... https://arbital.com/explore/ai_alignment is also blank for me. Perhaps Arbital doesn't work for Chrome on Windows 7 without flash installed.

Comment by philgoetz on Against responsibility · 2017-04-04T19:46:10.089Z · score: 1 (1 votes) · LW · GW

Rational Utilitarianism is the greatest good for the greatest number given the constraints of imperfect information and faulty brains.

No; I object to your claiming the term "rational" for that usage. That's just plain-old Utilitarianism 1.0 anyway; it doesn't take a modifier.

Rationality plus Utilitarianism plus evolutionary psychology leads to the idea that a rational person is one who satisfies their own goals. You can't call trying to achieve the greatest good for the greatest number of people "rational" for an evolved organism.

Comment by philgoetz on Against responsibility · 2017-04-04T04:42:41.077Z · score: 3 (3 votes) · LW · GW

Benquo isn't saying that these attitudes necessarily follow, but that in practice he's seen it happen. There is a lot of unspoken LessWrong / SIAI history here. Eliezer Yudkowsky and many others "at the top" of SIAI felt personally responsible for the fate of the human race. EY believed he needed to develop an AI to save humanity, but for many years he would only discuss his thoughts on AI with one other person, not trusting even the other people in SIAI, and requiring them to leave the area when the two of them talked about AI. (For all I know, he still does that.) And his plans basically involve creating an AI to become world dictator and stop anybody else from making an AI. All of that is reducing the agency of others "for their own good."

This secrecy was endemic at SIAI; when I've walked around NYC with their senior members, sometimes 2 or 3 people would gather together and whisper, and would ask anyone who got too close to please walk further away, because the ideas they were discussing were "too dangerous" to share with the rest of the group.

Comment by philgoetz on Against responsibility · 2017-04-04T04:34:03.947Z · score: 2 (2 votes) · LW · GW

Great post, and especially appropriate for LW. I add the proviso that you may in some cases be making the most-favorable interpretation rather than the correct interpretation.

I know one person on LessWrong who has talked himself into overwriting his natural morality with his interpretation of rational utilitarianism. This ended up giving him worse-than-human morality, because he assumes that humans are not actually moral--that humans don't derive utility from helping others. He ended up convincing himself to do the selfish things that he thinks are "in his own best interests" in order to be a good rationalist, even in cases where he didn't really want to be selfish--or wouldn't have, before rewriting his goals.

Comment by philgoetz on Stupidity as a mental illness · 2017-03-25T04:44:33.952Z · score: 0 (0 votes) · LW · GW

That's basically what I'm saying--well, I think it was; I can't see my original text now. But IIRC I misused the word "necessarily" because I thought doing so was closer to the truth than not using any modifier at all. I wanted to imply a causative link: even in cases where it appears there is no economic cost, the paths from a nation's values to its economic health are so long and so numerous, and the bias towards finding an economic cost on each such path so strong, that it is statistically very unlikely that the net economic impact is not negative.

Comment by philgoetz on Open thread, Mar. 20 - Mar. 26, 2017 · 2017-03-25T04:37:54.247Z · score: 0 (0 votes) · LW · GW

The main page lesswrong.com no longer has a link to the Discussion section of the forum, nor a login link. I think these changes are both mistakes.

Comment by philgoetz on Stupidity as a mental illness · 2017-03-07T21:06:08.474Z · score: 0 (0 votes) · LW · GW

I was talking about the fitness of a culture. That's why I said I was talking about the fitness of a culture. Individual happiness is not fitness, but it is of interest to us.

Comment by philgoetz on Rationality Quotes January - March 2017 · 2017-03-07T17:54:05.310Z · score: 1 (1 votes) · LW · GW

Three inventions which may perhaps be long delayed, but which possibly are near at hand, will give to this overcrowded island the prosperous condition of the United States. The first is the discovery of a motive force which will take the place of steam, with its cumbrous fuel of oil and coal; the second, the invention of aerial locomotion which will transport labour at a trifling cost of money and of time to any part of the planet, and which by annihilating distance will speedily extinguish national distinctions; the third, the manufacture of flesh and flour from the elements by a chemical process in the laboratory, similar to that which is now performed within the bodies of animals and plants. Food will then be manufactured in unlimited quantities at a trifling expense, and our enlightened prosperity will look back upon us who eat oxen and sheep just as we look back upon cannibals. Hunger and starvation will then be unknown, and the best part of human life will no longer be wasted in a tedious process of cultivating the fields. ... [claims that everyone will embrace Victorian morality omitted] ... These bodies which now we wear belong to the lower animals; our minds have already outgrown them; already we look upon them with contempt. A time will come when science will transform them by means which we cannot conjecture, and which, even if explained to us, we could not now understand, much as the savage cannot understand electricity, magnetism, or steam. Disease will be extirpated; the causes of decay will be removed; immortality will be invented. And then, the earth being small, mankind will migrate into space, and will cross the airless Saharas which separate planet from planet and sun from sun. The earth will become a Holy Land which will be visited by pilgrims from all the quarters of the universe. Finally, men will master the forces of Nature; they will become themselves architects of system, manufacturers of worlds.

-- Winwood Reade, "The Martyrdom of Man", 1872

quoted (and ridiculed) by Patrick Allitt in 2002

Comment by philgoetz on Rationality Quotes Thread February 2016 · 2017-03-07T17:19:37.898Z · score: 1 (1 votes) · LW · GW

Note that the major relevant historical disagreement is not over any of these ideas, but over what the true territory is. Most medieval maps (pre-1300) were deliberately warped not to represent their territory as it looked in the physical world, but to show "spiritual truths". Jerusalem would be at the center, each city's size would be proportional to its importance in God's plan, and distances and directions would be warped to make a particular set of points draw the figure of a cross on the map. Similarly, maps of medieval cities would not show the city to scale, but would plant the richest part of the city in the center of the map, occupying a large fraction of the map, regardless of its actual physical location or size. Judging from the theories of perception and reality then in circulation, the people making (or at least the people buying) these maps probably thought they were not distorting, but correcting the distortions of the senses and presenting a view that would actually lead to more correct beliefs.

Comment by philgoetz on Rationality Quotes Thread February 2016 · 2017-03-07T17:14:01.210Z · score: 0 (0 votes) · LW · GW

The notion of the non-elementalistic is important--that was the basis of structuralism--but it reinforces the old view that these operationalizations of our observations were unfortunate but necessary concessions to the limitations of observation, rather than that, e.g., space-time really is the lattice the Universe is laid upon. I doubt there's a real difference between these views mathematically, but I think there is conceptually.

Comment by philgoetz on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-16T18:43:03.786Z · score: 2 (2 votes) · LW · GW

I have the feeling you still don't agree with Thorin. Why not?

Increasing GDP is not growth

2017-02-16T18:04:16.959Z · score: 13 (14 votes)
Comment by philgoetz on Stupidity as a mental illness · 2017-02-10T23:08:50.975Z · score: 4 (4 votes) · LW · GW

From what I've read, most of the protest in the deaf community currently is deaf parents insisting they have the right to deny treatment and audible education to their children--which they want to do because it will be too late for the children to get the treatment themselves when they're adults. If it were possible for their children to get the treatment and learn spoken language once they grew up, and potentially leave the deaf community, parents would have less motivation to deny treatment to them as children.

Comment by philgoetz on Stupidity as a mental illness · 2017-02-10T23:04:38.182Z · score: 2 (2 votes) · LW · GW

I know a lot of people who are stupid in one way or another. I would hate to see "treatment" forced onto them because they're not as smart as we'd like.

Do we force people to be treated for diabetes, cancer, or gout? No; we at most work to make it possible for them to get treatment.

Comment by philgoetz on Stupidity as a mental illness · 2017-02-10T22:57:16.621Z · score: 4 (4 votes) · LW · GW

This is not a thing that we need to check statistics for. Americans now talk openly about seeing a psychologist or having depression. Americans two generations prior did not. Depression was not recognized as a legitimate disease; it was considered a weakness, and psychotherapy was an act of desperation.

Comment by philgoetz on Stupidity as a mental illness · 2017-02-10T22:42:02.999Z · score: 1 (1 votes) · LW · GW

This is technically correct, but misleading in context. James' point is, I think, directed towards the idea that for a culture to embrace values that decrease its fitness has a cost, and increases the odds of your culture going extinct. More relevant to us in practice is that such values have an economic cost that inevitably reduces our individual happiness. This is correct regardless of whether you are at equilibrium.

Comment by philgoetz on Stupidity as a mental illness · 2017-02-10T22:30:40.027Z · score: 2 (2 votes) · LW · GW

IQ is largely hereditary (~70%, IIRC) and polygenic. This means that attempting to "cure" it by anything short of major genetic engineering will have quite limited upside.

Depression is, according to Google and web pages I haven't studied, polygenic and 40-50% heritable, yet medicine often works for it.

It isn't especially hard to develop drugs for genetic diseases. Genetic diseases have single points of attack--receptors to block, proteins to disrupt. "Polygenic" may not matter at all; that may just mean there is one pathway with 30 genes in it, and 300 genes impinging on it, and you need to supplement the pathway's end product.

That, ahem, is exactly what's happening already :-/

I wasn't going to mention it, but I thought of that example because Harvard's current admissions website boasts that it provides no merit-based financial aid. I thought that was odd when I read it, but it fits in with the idea that a meritocracy is morally objectionable.

Comment by philgoetz on Stupidity as a mental illness · 2017-02-10T22:21:47.291Z · score: 2 (2 votes) · LW · GW

That's a good point--if a type of question on an IQ test shows variability from year to year, do psychologists say it's a bad type of question and remove it from the test?

Comment by philgoetz on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-10T05:11:53.062Z · score: 2 (2 votes) · LW · GW

I came here looking for a Rationality Quotes thread to quote that in. :)

I'm especially sensitive to it because I spent a lot of time last year reading postmodernist literary theory, which rejects logic in favor of rhetoric. They support theories that have impressive-sounding words because postmodernist theory says the point of theory is to have fun rather than to understand things.

Stupidity as a mental illness

2017-02-10T03:57:20.182Z · score: 19 (18 votes)
Comment by philgoetz on Is Global Reinforcement Learning (RL) a Fantasy? · 2017-01-24T02:25:04.930Z · score: 0 (0 votes) · LW · GW

OP's arguments against RL seem to be based on a conception of RL as mapping a set of stimuli directly to an action, which would be silly. They could also be taken as arguments that the brain cannot possibly be implemented in neurons.

Comment by philgoetz on Well-Kept Gardens Die By Pacifism · 2017-01-23T21:05:16.685Z · score: 4 (4 votes) · LW · GW

The balance for a moderator is between too much craziness and too much groupthink.

Moderation easily becomes enforcement of a dogma. In English literary theory today, you're required to be a cultural relativist. You only get to choose one of three kinds of cultural relativist to be: Marxist, feminist, or post-modernist. Anyone else is dismissed as irrelevant to the discourse. This is the result of "moderation," which I place in quotes because it is anything but moderate.

It is especially problematic when the moderator is a key contributor. A moderator should, ideally, be a neutral referee.

Revisiting this post in 2017, I'm calling it wrong in retrospect. It seems to me that LessWrong is less vibrant than it used to be, and this is not because of too little moderation, but may be partly because of too much, both from above (post promotion, comments from EY, and harassment of divergent views from moderators) and from below (karma voting). LW has become a place of groupthink on many issues. Karma did not prevent that, and may have accelerated it.

EY encouraged this. He refused to engage with criticism of his ideas other than with rudeness or silence. He chased away Richard Loosemore, one of the only people on LW who was qualified to talk about AI and willing to disagree with EY's ideas. EY's take on him was:

Warning: Richard Loosemore is a known permanent idiot,

(And, looking at that thread, how exactly did timtyler, one of the other stars of LW, get banned?)

Comment by philgoetz on Rationality Quotes January - March 2017 · 2017-01-04T20:32:08.431Z · score: 10 (10 votes) · LW · GW

One lesson I’ve learned in life is that when someone thinks that something you’re doing is crazy, having a logical, multi-point defense of said action will not make you look less crazy to them.

Comment by philgoetz on Worth remembering (when comparing ‘the US’ to ‘Europe’) · 2016-12-02T16:16:46.983Z · score: 0 (0 votes) · LW · GW

It is now.

Comment by philgoetz on MIRI's 2016 Fundraiser · 2016-10-19T02:51:06.119Z · score: 0 (0 votes) · LW · GW

Please use a page break when you post an article, so we can easily scroll past it and see the previous articles.

Comment by philgoetz on Too good to be true · 2016-09-20T15:01:10.315Z · score: 0 (0 votes) · LW · GW

I wrote a paragraph on that in the post. I predicted a publication bias in favor of positive results, assuming the community is not biased on the particular issue of vaccines & autism. This prediction is probably wrong, but that hypothesis (lack of bias) is what I was testing.

Comment by philgoetz on Market Failure: Sugar-free Tums · 2016-08-01T19:37:24.166Z · score: 0 (0 votes) · LW · GW

I already responded to both those comments, so you already know I don't think those comments are any refutation at all.

Cup-holders are not a complex engineering problem (as the first comment said), and the preferences of the European and Asian markets are irrelevant for cars that are, please recall, built specifically to be sold in the United States. Nobody sells identical cars in Europe and in the United States. They often don't even sell the same models in the US.

The response (of the second comment) boils down to "you are silly if you think you know enough to criticize the work of people who make cup-holders for a living". If you accept that as a valid response, that could be used to shut down all critical activity. It's the same as responding to #BlackLivesMatter by saying "You can't criticize the police for shooting people unless you're a policeman."

Comment by philgoetz on Market Failure: Sugar-free Tums · 2016-08-01T19:29:57.317Z · score: 0 (0 votes) · LW · GW

I didn't see that. If you check your source by reading the reviews on the page you linked to, you'll see that Medique 10133 Alcalak Sugar Free Tablets are not, in fact, sugar-free. The label on them says "May contain sucrose."

Comment by philgoetz on Irrationality Quotes August 2016 · 2016-08-01T19:15:57.904Z · score: 2 (6 votes) · LW · GW

From the International Craniofacial Institute's web page on cleft palate.

What they say:

Statistics reassure us that having a child with a cleft does not mean you’ll have other children with the same condition. In fact, your chances only increase by 2 to 5 percent compared to couples with no cleft-affected children.

What they mean:

The chances that your next child will have cleft palate increase from 0.15% to about 4%. Your odds ratio multiplier is about 25.
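A minimal sketch of the arithmetic behind that reading, assuming the ~0.15% baseline and ~4% recurrence figures above (the exact values vary by study): an absolute increase of "2 to 5 percent" over a tiny baseline is a huge relative increase.

```python
# Figures assumed from the comment above; real epidemiological values vary by study.
baseline = 0.0015   # ~0.15% general-population risk of cleft palate
recurrence = 0.04   # ~4% risk for a subsequent child of an affected couple

# The Institute's framing: the absolute increase, which sounds small.
absolute_increase = recurrence - baseline

# The relative framings: risk ratio and odds ratio, which do not sound small.
risk_ratio = recurrence / baseline

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

odds_ratio = odds(recurrence) / odds(baseline)

print(f"absolute increase: {absolute_increase:.2%}")  # the "2 to 5 percent" framing
print(f"risk ratio: {risk_ratio:.0f}x")
print(f"odds ratio: {odds_ratio:.0f}x")
```

Both relative measures come out in the neighborhood of 25-30x, which is why "your chances only increase by 2 to 5 percent" is technically true but misleading.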

Irrationality Quotes August 2016

2016-08-01T19:12:35.571Z · score: 5 (6 votes)
Comment by philgoetz on Market Failure: Sugar-free Tums · 2016-07-25T00:54:08.841Z · score: 0 (0 votes) · LW · GW

That doesn't look like a viable hypothesis because if it were true, such people would not be VCs at all.

That statement makes no sense and has no support. What, you're imagining that I said that VCs think all profitable things are already being done? That is not what I said. What I said, which is true, is that VCs don't jump into established markets that already have huge dominant players.

In real life the markets drive the price down to the cost of production only occasionally.

"Close to", not "to". The difference is enormous--it's the difference between free market theory and Marxism.

The theory of the free market is that markets do so; failure to do so is called a failure of the market. It is a theoretical term, so saying "theories are secondary" is nonsense.

Restated:

In real life the markets drive the price down close to the cost of production only occasionally.

Citation needed.

Comment by philgoetz on Market Failure: Sugar-free Tums · 2016-07-25T00:49:04.117Z · score: 0 (0 votes) · LW · GW

Where?

Market Failure: Sugar-free Tums

2016-06-30T00:12:16.143Z · score: 3 (12 votes)

"3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism"

2016-03-29T15:16:37.309Z · score: 11 (62 votes)

The increasing uselessness of Promoted

2016-03-19T18:23:03.221Z · score: 19 (22 votes)

Is altruistic deception really necessary? Social activism and the free market

2016-02-26T06:38:16.032Z · score: 5 (11 votes)

Is there a recursive self-improvement hierarchy?

2015-10-29T02:55:00.909Z · score: 7 (10 votes)

The mystery of Brahms

2015-10-21T05:12:47.749Z · score: 5 (12 votes)

Monty Hall Sleeping Beauty

2015-09-18T21:18:23.137Z · score: 1 (4 votes)

An accidental experiment in location memory

2015-08-31T16:50:19.306Z · score: 9 (10 votes)

Calling references: Rational or irrational?

2015-08-28T21:06:46.872Z · score: 7 (8 votes)

Words per person year and intellectual rigor

2015-08-27T03:31:49.373Z · score: 13 (24 votes)

Is semiotics bullshit?

2015-08-25T14:09:04.000Z · score: 15 (17 votes)

Why people want to die

2015-08-24T20:13:37.830Z · score: 50 (61 votes)

How to escape from your sandbox and from your hardware host

2015-07-31T17:26:00.083Z · score: 28 (35 votes)

"Risk" means surprise

2015-05-22T04:47:08.768Z · score: 6 (9 votes)

My mind must be too highly trained

2015-02-20T21:43:59.036Z · score: 5 (16 votes)

Easy wins aren't news

2015-02-19T19:38:38.471Z · score: 39 (40 votes)

Uncategories and empty categories

2015-02-16T01:18:28.970Z · score: 16 (27 votes)

The morality of disclosing salary requirements

2015-02-08T21:12:26.534Z · score: 6 (11 votes)

Reductionist research strategies and their biases

2015-02-06T04:11:32.650Z · score: 16 (29 votes)

Don't estimate your creative intelligence by your critical intelligence

2015-02-05T02:41:28.108Z · score: 39 (44 votes)

How Islamic terrorists reduced terrorism in the US

2015-01-11T05:19:17.376Z · score: 13 (20 votes)

Dark Arts 101: Be rigorous, on average

2014-12-31T00:37:28.765Z · score: 17 (35 votes)

Every Paul needs a Jesus

2014-08-10T19:13:04.694Z · score: 11 (21 votes)

Why humans suck: Ratings of personality conditioned on looks, profile, and reported match

2014-08-09T18:48:17.021Z · score: 10 (23 votes)

The rational way to name rivers

2014-08-06T15:41:06.598Z · score: 4 (22 votes)

The dangers of dialectic

2014-08-05T20:02:25.531Z · score: 11 (18 votes)

Fifty Shades of Self-Fulfilling Prophecy

2014-07-24T00:17:43.189Z · score: 18 (33 votes)

Too good to be true

2014-07-11T20:16:24.277Z · score: 23 (33 votes)

What should a Bayesian do given probability of proving X vs. of disproving X?

2014-06-07T18:40:38.419Z · score: 0 (13 votes)

The Universal Medical Journal Article Error

2014-04-29T17:57:09.854Z · score: 7 (77 votes)

Don't teach people how to reach the top of a hill

2014-03-04T21:38:53.926Z · score: 30 (41 votes)

Prescriptive vs. descriptive and objective vs. subjective definitions

2014-01-21T23:21:45.645Z · score: 4 (13 votes)

Using vs. evaluating (or, Why I don't come around here no more)

2014-01-20T02:36:29.575Z · score: 23 (44 votes)

The dangers of zero and one

2013-11-21T12:21:23.684Z · score: 27 (32 votes)

To like, or not to like?

2013-11-14T02:26:59.072Z · score: 4 (28 votes)

Dark Arts 101: Winning via destruction and dualism

2013-09-21T01:53:02.169Z · score: -13 (41 votes)

Thought experiment: The transhuman pedophile

2013-09-17T22:38:06.160Z · score: 6 (17 votes)

Fiction: Written on the Body as love versus reason

2013-09-08T06:13:35.794Z · score: -11 (22 votes)

I know when the Singularity will occur

2013-09-06T20:04:18.560Z · score: -7 (52 votes)

The 50 Shades of Grey Book Club

2013-08-24T20:55:47.307Z · score: 5 (38 votes)

Humans are utility monsters

2013-08-16T21:05:28.195Z · score: 70 (88 votes)

Free ebook: Extraordinary Popular Delusions and the Madness of Crowds

2013-07-05T19:20:39.493Z · score: 1 (10 votes)

Anticipating critical transitions

2013-06-09T16:28:51.006Z · score: 19 (19 votes)

Education control?

2013-05-17T16:32:55.717Z · score: 12 (17 votes)

Social intelligence, education, & the workplace

2013-05-02T20:51:30.154Z · score: 3 (26 votes)