Why Bayesians should two-box in a one-shot 2017-12-15T17:39:32.491Z
What conservatives and environmentalists agree on 2017-04-08T00:57:32.012Z
Increasing GDP is not growth 2017-02-16T18:04:16.959Z
Stupidity as a mental illness 2017-02-10T03:57:20.182Z
Irrationality Quotes August 2016 2016-08-01T19:12:35.571Z
Market Failure: Sugar-free Tums 2016-06-30T00:12:16.143Z
"3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism" 2016-03-29T15:16:37.309Z
The increasing uselessness of Promoted 2016-03-19T18:23:03.221Z
Is altruistic deception really necessary? Social activism and the free market 2016-02-26T06:38:16.032Z
Is there a recursive self-improvement hierarchy? 2015-10-29T02:55:00.909Z
The mystery of Brahms 2015-10-21T05:12:47.749Z
Monty Hall Sleeping Beauty 2015-09-18T21:18:23.137Z
An accidental experiment in location memory 2015-08-31T16:50:19.306Z
Calling references: Rational or irrational? 2015-08-28T21:06:46.872Z
Words per person year and intellectual rigor 2015-08-27T03:31:49.373Z
Is semiotics bullshit? 2015-08-25T14:09:04.000Z
Why people want to die 2015-08-24T20:13:37.830Z
How to escape from your sandbox and from your hardware host 2015-07-31T17:26:00.083Z
"Risk" means surprise 2015-05-22T04:47:08.768Z
My mind must be too highly trained 2015-02-20T21:43:59.036Z
Easy wins aren't news 2015-02-19T19:38:38.471Z
Uncategories and empty categories 2015-02-16T01:18:28.970Z
The morality of disclosing salary requirements 2015-02-08T21:12:26.534Z
Reductionist research strategies and their biases 2015-02-06T04:11:32.650Z
Don't estimate your creative intelligence by your critical intelligence 2015-02-05T02:41:28.108Z
How Islamic terrorists reduced terrorism in the US 2015-01-11T05:19:17.376Z
Dark Arts 101: Be rigorous, on average 2014-12-31T00:37:28.765Z
Every Paul needs a Jesus 2014-08-10T19:13:04.694Z
Why humans suck: Ratings of personality conditioned on looks, profile, and reported match 2014-08-09T18:48:17.021Z
The rational way to name rivers 2014-08-06T15:41:06.598Z
The dangers of dialectic 2014-08-05T20:02:25.531Z
Fifty Shades of Self-Fulfilling Prophecy 2014-07-24T00:17:43.189Z
Too good to be true 2014-07-11T20:16:24.277Z
What should a Bayesian do given probability of proving X vs. of disproving X? 2014-06-07T18:40:38.419Z
The Universal Medical Journal Article Error 2014-04-29T17:57:09.854Z
Don't teach people how to reach the top of a hill 2014-03-04T21:38:53.926Z
Prescriptive vs. descriptive and objective vs. subjective definitions 2014-01-21T23:21:45.645Z
Using vs. evaluating (or, Why I don't come around here no more) 2014-01-20T02:36:29.575Z
The dangers of zero and one 2013-11-21T12:21:23.684Z
To like, or not to like? 2013-11-14T02:26:59.072Z
Dark Arts 101: Winning via destruction and dualism 2013-09-21T01:53:02.169Z
Thought experiment: The transhuman pedophile 2013-09-17T22:38:06.160Z
Fiction: Written on the Body as love versus reason 2013-09-08T06:13:35.794Z
I know when the Singularity will occur 2013-09-06T20:04:18.560Z
The 50 Shades of Grey Book Club 2013-08-24T20:55:47.307Z
Humans are utility monsters 2013-08-16T21:05:28.195Z
Free ebook: Extraordinary Popular Delusions and the Madness of Crowds 2013-07-05T19:20:39.493Z
Anticipating critical transitions 2013-06-09T16:28:51.006Z
Education control? 2013-05-17T16:32:55.717Z
Social intelligence, education, & the workplace 2013-05-02T20:51:30.154Z


Comment by PhilGoetz on Rescuing the Extropy Magazine archives · 2021-06-03T14:34:50.389Z · LW · GW

Hi, see above for my email address. Email me a request at that address. I don't have your email. I just sent you a message.

ADDED in 2021: Some people tried to contact me thru LessWrong and Facebook. I check messages there like once a year.  Nobody sent me an email at the email address I gave above. I've edited it to make it more clear what my email address is.

Comment by PhilGoetz on Debate update: Obfuscated arguments problem · 2021-01-12T02:09:32.448Z · LW · GW

[Original first point deleted, on account of describing something that resembled Bayesian updating closely enough to make my point invalid.]

I don't think this approach applies to most actual bad arguments.

The things we argue about the most are ones over which the population is polarized, and polarization is usually caused by conflicts between different worldviews.  Worldviews are constructed to be nearly self-consistent.  So you're not going to be able to reconcile people of different worldviews by comparing proofs.  Wrong beliefs come in sets, where each contradiction caused by one wrong belief is justified by other wrong beliefs.

So for instance, a LessWrongian would tell a Christian that positing a God doesn't explain how life was made, because she's just replaced a complex first life form with an even more-complex God, and what made God?  The Christian will reliably respond that God is eternal, outside of space and time, and was never made.

This response sounds stupid to us, but it's part of a philosophical system built by Plato, which he designed to be self-consistent.  The key parts here are the inversion of "complexity" and the denial of mechanism.

The inversion of complexity is the belief that simple things are greater and more powerful than complex things. The central notion is "purity", and pure, simple things are always superior to complicated things. God is defined as ultimate purity and simplicity.  God is simple because you can fully describe Him just by saying he's perfect, and there's only one way of being perfect.  He's eternal, because if he had a starting-point or an ending-point in time, then other points in time would be equally good, and "perfection" would be ambiguous.  "God is perfectly simple" is actually part of Catholic dogma, and derived from Plato.  So a Christian doesn't think she's replaced complex life with a more-complex God; she's replaced it with a more-simple and therefore more-powerful God.

The denial of mechanism is the denial that anything gets its properties mechanistically.  An animal isn't alive because it eats food and metabolizes it and reproduces; it eats food and metabolizes it and reproduces because it's alive.  Functions are magically inherited from categories ("Forms"), rather than categories arising from a cooperative combination of functions.  (This is why spiritualists who believe in a good God dislike machinery.  It's an abomination to them, as it has new capabilities not inherited from any eternal Form, and their intuition is that it must be animated by some spirit other than God.  They think of magic as natural, and causes other than magic as unnatural; we think just the opposite.)

Because God is perfect, He is omnipotent, and hence has every possible capability, just as he is perfect in every way.  Everything less than God is less powerful, lacking some capabilities, and more-complex, because you must enumerate all those missing capabilities and perfections to describe it.  (This is the metaphysics behind Tolstoy's saying, "Every happy family is happy in the same way. Every unhappy family is unhappy in different ways.")  The Great Chain of Being is a complete linear ordering of every eternal Form, proceeding from God at the top (perfect, simple, omnipotent), down to complete lack and emptiness at the other end (which is Augustinian Evil).  Each step along that chain is a loss of some perfection.

Hence, to the Christian there's no "problem" of complexity in saying that God created life, because God is less-complex than life, and therefore also more-powerful, since complexity implies many losses of perfection and capabilities.  There is no need to posit that God is complex to explain His powers, because capabilities arise from essence, not from mechanics, and God's perfectly-simple essence is to have all capabilities.  This is because Plato designed his ontology to eliminate the problem of how complex life arose.

If you argue with Marxists, post-modernists, or the Woke, you'll similarly find that, for every solid argument you have that proves a belief of theirs is wrong, they have some assumptions which to them justify dismissing your argument.  You'll never find yourself able to compare proofs with an ideological opposite and agree on the validity of each step.

Comment by PhilGoetz on Where do (did?) stable, cooperative institutions come from? · 2020-12-01T20:34:39.734Z · LW · GW

"Cynicism is a self-fulfilling prophecy; believing that an institution is bad makes the people within it stop trying, and the good people stop going there."

I think this is a key observation. Western academia has grown continually more cynical since the advent of Marxism, which assumes an almost absolute cynicism as a point of dogma: all actions are political actions motivated by class, except those of bourgeois Marxists who for mysterious reasons advocate the interests of the proletariat.

This cynicism became even worse with Foucault, who taught people to see everything as nothing but power relations.  Western academics today are such knee-jerk cynics that they can't conceive of loyalty to any organization other than Marxism or the Social Justice movement as being anything but exploitation of the one being loyal.

Pride is the opposite of cynicism, and is one of the key feelings that makes people take brave, altruistic actions.  Yet today we've made pride a luxury of the oppressed.  Only groups perceived as oppressed are allowed to have pride in group memberships.  If you said you were proud of being American, or of being manly, you'd get deplatformed, and possibly fired.

The defamation of pride in mainstream groups is thus destroying our society's ability to create or maintain mainstream institutions.  In my own cynicism, I think someone deliberately intended this.  This defamation began with Marxism, and is now supported by the social justice movement, both of which are Hegelian revolutionary movements which believe that the first step toward making civilization better is to destroy it, or at least destabilize it enough to stage a coup or revolution.  This is the "clean sweep" spoken of so often by revolutionaries since the French Revolution.

Since their primary goal is to destroy civilization, it makes perfect sense that they begin by convincing people that taking pride in any mainstream identity or group membership is evil, as this will be sufficient to destroy all cooperative social institutions, and hence civilization.

Comment by PhilGoetz on The Solomonoff Prior is Malign · 2020-10-17T03:14:19.401Z · LW · GW

"At its core, this is the main argument why the Solomonoff prior is malign: a lot of the programs will contain agents with preferences, these agents will seek to influence the Solomonoff prior, and they will be able to do so effectively."

First, this is irrelevant to most applications of the Solomonoff prior.  If I'm using it to check the randomness of my random number generator, I'm going to be looking at 64-bit strings, and probably very few intelligent-life-producing universe-simulators output just 64 bits, and it's hard to imagine why an alien in a simulated universe would want to bias my RNG anyway.
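
The RNG-checking use case can at least be sketched. The true Solomonoff prior is uncomputable, so the sketch below substitutes a crude computable stand-in of my own choosing: compressed length as a proxy for description length. It is only an illustration of the idea, not the actual prior:

```python
import random
import zlib

def crude_complexity(bits: str) -> int:
    """Compressed length in bytes: a rough, computable stand-in for the
    description length that the Solomonoff prior weights by."""
    return len(zlib.compress(bits.encode()))

# A patterned string has a short description, so it should get a high
# prior; a typical pseudo-random string should not.
patterned = "01" * 128
rng = random.Random(1)
irregular = "".join(str(rng.getrandbits(1)) for _ in range(256))
assert crude_complexity(patterned) < crude_complexity(irregular)
```

A real RNG test along these lines would flag output whose compressed length is far below that of typical random strings of the same length.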

The S. prior is a general-purpose prior which we can apply to any problem.  The output string has no meaning except in a particular application and representation, so it seems senseless to try to influence the prior for a string when you don't know how that string will be interpreted.

Can you give an instance of an application of the S. prior in which, if everything you wrote were correct, it would matter?

Second, it isn't clear that this is a bug rather than a feature.  Say I'm developing a program to compress photos.  I'd like to be able to ask "what are the odds of seeing this image, ever, in any universe?"  That would probably compress images of plants and animals better than other priors, because in lots of universes life will arise and evolve, and features like radial symmetry, bilateral symmetry, leaves, legs, etc., will arise in many universes.  This biasing of priors by evolution doesn't seem to me different from biasing of priors by intelligent agents; evolution is smarter than any agent we know.  And I'd like to get biasing from intelligent agents, too; then my photo-compressor might compress images of wheels and rectilinear buildings better.

Also in the category of "it's a feature, not a bug" is that, if you want your values to be right, and there's a way of learning the values of agents in many possible universes, you ought to try to figure out what their values are, and update towards them.  This argument implies that you can get that for free by using Solomonoff priors.

(If you don't think your values can be "right", but instead you just believe that your values morally oblige you to want other people to have those values, you're not following your values, you're following your theory about your values, and probably read too much LessWrong for your own good.)

Third, what do you mean by "the output" of a program that simulates a universe? How are we even supposed to notice the infinitesimal fraction of that universe's output which the aliens are influencing to subvert us?  Take your example of Life--is the output a raster scan of the 2D bit array left when the universe goes static?  In that case, agents have little control over the terminal state of their universe (and also, in the case of Life, the string will be either almost entirely zeroes, or almost entirely 1s, and those both already have huge Solomonoff priors).  Or is it the concatenation of all of the states it goes through, from start to finish?  In that case, by the time intelligent agents evolve, their universe will have already produced more bits than our universe can ever read.

Are you imagining that bits are never output unless the accidentally-simulated aliens choose to output a bit?  I can't imagine any way that could happen, at least not if the universe is specified with a short instruction string.

This brings us to the 4th problem:  It makes little sense to me to worry about averaging in outputs from even mere planetary simulations if your computer is just the size of a planet, because it won't even have enough memory to read in a single output string from most such simulations.

5th, you can weight each program's output in proportion to 2^-T, where T is the number of steps the TM takes to terminate. You've got to do something like that anyway: you can't run TMs to completion one after another, so you have to do something like take a large random sample of TMs and run them iteratively, one step each. Problem solved.
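
A minimal sketch of that scheme, with "programs" modeled as Python generators (one yield per simulated step) rather than real Turing machines; the names and representation here are my own illustration:

```python
from fractions import Fraction

def dovetail(programs, max_total_steps=10_000):
    """Run many programs interleaved, one step at a time ("dovetailing"),
    and weight each halting program's output by 2**-T, where T is the
    number of steps it took to halt."""
    running = {name: [gen, 0] for name, gen in programs.items()}
    weights = {}
    spent = 0
    while running and spent < max_total_steps:
        for name in list(running):
            gen, t = running[name]
            try:
                next(gen)                      # advance one step
                running[name] = [gen, t + 1]
            except StopIteration as halt:      # halted; output is halt.value
                weights[(name, halt.value)] = Fraction(1, 2 ** (t + 1))
                del running[name]
            spent += 1
    return weights                             # non-halting programs get no weight

def emit(output, steps):
    """Toy program: runs for `steps` steps, then outputs `output`."""
    for _ in range(steps):
        yield
    return output

w = dovetail({"fast": emit("0", 3), "slow": emit("1", 10)})
```

Here the fast program's output is weighted 2^-4 and the slow one's 2^-11; programs that never halt simply never contribute, which is what makes the mixture approximable at all.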

Maybe I'm misunderstanding something basic, but I feel like we're talking about how many angels can dance on the head of a pin.

Perhaps the biggest problem is that you're talking about an entire universe of intelligent agents conspiring to change the "output string" of the TM that they're running in.  This requires them to realize that they're running in a simulation, and that the output string they're trying to influence won't even be looked at until they're all dead and gone.  That doesn't seem to give them much motivation to devote their entire civilization to twiddling bits in their universe's final output in order to shift our priors infinitesimally.  And if it did, the more likely outcome would be an intergalactic war over what string to output.

(I understand your point about them trying to "write themselves into existence, allowing them to effectively 'break into' our universe", but as you've already required their TM specification to be very simple, this means the most they can do is cause some type of life that might evolve in their universe to break into our universe.  This would be like humans on Earth devoting the next billion years to tricking God into re-creating slime molds after we're dead.  Whereas the things that intelligent life actually cares about and self-identifies with are those things that distinguish them from their neighbors.  Their values will be directed mainly towards opposing the values of other members of their species.  None of those distinguishing traits can be implicit in the TM, and even if they could, they'd cancel each other out.)

Now, if they were able to encode a message to us in their output string, that might be more satisfying to them.  Like, maybe, "FUCK YOU, GOD!"

Comment by PhilGoetz on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-27T01:05:26.337Z · LW · GW

I think we learned that trolls will destroy the world.

Comment by PhilGoetz on Stupidity as a mental illness · 2020-03-19T15:29:28.804Z · LW · GW

It's only offensive if you still think of mental illness as shameful.

Comment by PhilGoetz on Stupidity as a mental illness · 2020-03-19T15:26:13.404Z · LW · GW

Me: We could be more successful at increasing general human intelligence if we looked at low intelligence as something that people didn't have to be ashamed of, and that could be remedied, much as how we now try to look at depression and other mental illness as illness--a condition which can often be treated and which people don't need to be ashamed of.

You: YOU MONSTER! You want to call stupidity "mental illness", and mental illness is a bad and shameful thing!

Comment by PhilGoetz on Group selection update · 2019-06-14T18:39:02.166Z · LW · GW

That's technically true, but it doesn't help a lot. You're assuming one starts with fixation to non-SC in a species. But how does one get to that point of fixation, starting from fixation of SC, which is more advantageous to the individual? That's the problem.

Comment by PhilGoetz on Group selection update · 2019-06-01T05:30:53.093Z · LW · GW

It's not that I no longer endorse it; it's that I replied to a deleted comment instead of to the identical not-deleted comment.

Comment by PhilGoetz on Group selection update · 2019-06-01T05:30:13.514Z · LW · GW
"Group selection, as I've heard it explained before, is the idea that genes spread because their effects are for the good of the species. The whole point of evolution is that genes do well because of what they do for the survival of the gene. The effect isn't on the group, or on the individual, the species, or any other unit other than the unit that gets copied and inherited."

Group selection is group selection: selection of groups. That means the phenotype is group behavior, and the effect of selection is spread equally among members of the group. If the effect is death, this eliminates an entire group at once--and the nearer a selfish gene approaches fixation, the more likely it is to trigger a group extinction. Consider what would happen if you ran Axelrod's experiments with group selection implemented, so that groups went extinct if total payoff in the group fell below some threshold.

The key point is nonlinearity. If the group fitness function is a nonlinear function of the prevalence of a gene, then it dramatically changes fixation and extinction rates.
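
That thought experiment can be run as a toy model. The parameters and fitness function below are my own illustrative assumptions, not Axelrod's: a selfish gene with a within-group advantage, a group fitness that falls nonlinearly as the gene spreads, and extinction-plus-recolonization when fitness drops below a threshold:

```python
import random

def simulate(extinction_threshold, n_groups=30, generations=200, seed=0):
    """Toy group-selection model: within each group, a selfish gene has
    an individual advantage and spreads; group fitness falls nonlinearly
    as it spreads; groups whose fitness drops below the threshold go
    extinct, and their sites are recolonized by offshoots of survivors."""
    rng = random.Random(seed)
    groups = [rng.uniform(0.0, 0.2) for _ in range(n_groups)]  # gene frequency per group
    for _ in range(generations):
        survivors = []
        for p in groups:
            p = min(1.0, p * 1.05 + 0.001)              # individual-level advantage
            if (1.0 - p) ** 2 >= extinction_threshold:  # nonlinear group fitness
                survivors.append(p)
        if not survivors:
            return None                                 # total extinction
        while len(survivors) < n_groups:
            survivors.append(rng.choice(survivors) * 0.5)  # founders carry less of the gene
        groups = survivors
    return sum(groups) / len(groups)  # mean frequency of the selfish gene
```

With no group-level extinction (threshold 0) the selfish gene fixes at frequency 1.0; with a threshold of 0.5, surviving groups are by construction those the gene hasn't yet overrun, so the gene's mean frequency stays low despite its within-group advantage.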

"Well, maybe. If the plant has a typical set of recessive genes in its genome, self-fertilisation is a disaster. A few generations down the line, the self-fertilising plant will have plenty of genetic problems arising from recessive gene problems, and will probably die out. This means that self-fertilisation is bad - a gene for self-fertilisation will only prosper in those cases where it's not fertilising itself. It will do worse."

No. Self-fertilisation doesn't prevent cross-fertilisation. The self-fertilizer has just as many offspring from cross-fertilization as the self-sterile plant, but it has in addition clones of itself. Many of these clones may die, but if just one of them survives, it's still a gain.


Comment by PhilGoetz on Group selection update · 2019-06-01T05:27:16.781Z · LW · GW

You're assuming that the benefits of an adaptation can only be linear in the fraction of group members with that adaptation. If the benefits are nonlinear, then they can't be modeled by individual selection, or by kin selection, or by the Haystack model, or by the Harpending & Rogers model, in all of which the total group benefit is a linear sum of the individual benefits.

For instance, the benefits of the Greek phalanx are tremendous if 100% of Greek soldiers will hold the line, but negligible if only 99% of them do. We can guess--though I don't know if it's been verified--that slime mold aggregative reproduction can be maintained against invasion only because a slime mold aggregation in which 100% of the single-cell organisms play "fairly" in deciding which of them get to produce germ cells survives, while an aggregation in which just one cell's genome insists on becoming the germ cell dies off within two generations. I think individual selection would predict that the population would be taken over by that anti-social behavior.
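
The phalanx point can be made quantitative. If each soldier independently holds the line with probability 0.99, a 100-man line holds only about 37% of the time; the group payoff is an all-or-nothing function of individual behavior, not a sum of individual contributions:

```python
def line_holds(p_hold: float, n: int) -> float:
    """Probability that an n-soldier line holds, when each soldier
    independently holds with probability p_hold. The phalanx's payoff
    accrues only in this all-or-nothing event."""
    return p_hold ** n

assert abs(line_holds(0.99, 100) - 0.366) < 0.001  # ~37%, despite 99% individual reliability
assert line_holds(1.0, 100) == 1.0                 # full benefit only at fixation
```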

Comment by PhilGoetz on Group selection update · 2019-06-01T05:16:55.596Z · LW · GW
"Group selection, as I've heard it explained before, is the idea that genes spread because their effects are for the good of the species. The whole point of evolution is that genes do well because of what they do for the survival of the gene. The effect isn't on the group, or on the individual, the species, or any other unit other than the unit that gets copied and inherited."

Group selection is group selection: selection of groups. That means the phenotype is group behavior, and the effect of selection is spread equally among members of the group. If the effect is death, this eliminates an entire group at once--and the nearer a selfish gene approaches fixation, the more likely it is to trigger a group extinction. Consider what would happen if you ran Axelrod's experiments with group selection implemented, so that groups went extinct if total payoff in the group fell below some threshold.

The key point is nonlinearity. If the group fitness function is a nonlinear function of the prevalence of a gene, then it dramatically changes fixation and extinction rates.

"Well, maybe. If the plant has a typical set of recessive genes in its genome, self-fertilisation is a disaster. A few generations down the line, the self-fertilising plant will have plenty of genetic problems arising from recessive gene problems, and will probably die out. This means that self-fertilisation is bad - a gene for self-fertilisation will only prosper in those cases where it's not fertilising itself. It will do worse."

No. Self-fertilisation doesn't prevent cross-fertilisation. The self-fertilizer has just as many offspring from cross-fertilization as the self-sterile plant, but it has in addition clones of itself. Many of these clones may die, but if just one of them survives, it's still a gain.

Comment by PhilGoetz on How SIAI could publish in mainstream cognitive science journals · 2019-05-15T23:35:17.212Z · LW · GW


I have great difficulty finding any philosophy published after 1960 other than post-modern philosophy, probably because my starting point is literary theory, which is completely confined to--the words "dominated" and even "controlled" are too weak--Marxian post-modern identity politics, which views literature only as a political barometer and tool.

Comment by PhilGoetz on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2019-05-15T20:29:15.871Z · LW · GW

I think you're assuming that to give in to the mugging is the wrong answer in a one-shot game for a being that values all humans in existence equally, because it feels wrong to you, a being with a moral compass evolved in iterated multi-generational games.

Consider these possibilities, any one of which would create challenges for your reasoning:

1. Giving in is the right answer in a one-shot game, but the wrong answer in an iterated game. If you give in to the mugging, the outsider will keep mugging you and other rationalists until you're all broke, leaving the universe's future in the hands of "Marxians" and post-modernists.

2. Giving in is the right answer for a rational AI God, but evolved beings (under the Darwinian definition of "evolved") can't value all members of their species equally. They must value kin more than strangers. You would need a theory to explain why any being that evolved due to resource competition wouldn't consider killing a large number of very distantly-related members of its species to be a good thing.

3. You should interpret the conflict between your intuition and your desire for a rational God not as showing that you're reasoning badly because you're evolved, but as showing that you're reasoning badly by desiring a rational God bound by a static utility function. This is complicated, so I'm gonna need more than one paragraph:

Intuitively, my argument boils down to applying the logic behind free markets, freedom of speech, and especially evolution, to the question of how to construct God's utility function. This will be vague, but I think you can fill in the blanks.

Free-market economic theory developed only after millennia during which everyone believed that top-down control was the best way of allocating resources. Freedom of speech developed only after millennia during which everyone believed that it was rational for everyone to try to suppress any speech they disagreed with. Political liberalism developed only after millennia during which everybody believed that the best way to reform society was to figure out what the best society would be like, then force that on everyone. Evolution was conceived of--well, originally about 2500 years ago, probably by Democritus, but it became popular only after millennia during which everyone believed that life could be created only by design.

All of these developments came from empiricists. Empiricism is one of the two opposing philosophical traditions of Western thought. It originated, as far as we know, with Democritus (about whom Plato reportedly said that he wished all his works to be burned--which they eventually were). It went through the Skeptics, the Stoics, Lucretius, nominalism, the use of numeric measurements (re-introduced to the West circa 1300), the Renaissance and Enlightenment, and eventually (with the addition of evolution, probability, statistics, and operationalized terms) created modern science.

A key principle of empiricism, on which John Stuart Mill explicitly based his defense of free speech, is that we can never be certain. If you read about the skeptics and stoics today, you'll read that they "believed nothing", but that was because, to their opponents, "believe" meant "know something with 100% certainty".

(The most-famous skeptic, Sextus Empiricus, was called "Empiricus" because he was of the empirical school of medicine, which taught learning from experience. Its opponent was the rational school of medicine, which used logic to interpret the dictums of the ancient authorities.)

The opposing philosophical tradition, founded by Plato, is rationalism. "Rational" does not mean "good thinking". It has a very specific meaning, and it is not a good way of thinking. It means reasoning about the physical world the same way Euclid constructed geometric proofs. No measurements, no irrational numbers, no observation of the world, no operationalized nominalist definitions, no calculus or differential equations, no testing of hypotheses--just armchair a priori logic about universal categories, based on a set of unquestionable axioms, done in your favorite human language. Rationalism is the opposite of science, which is empirical. The pretense that "rational" means "right reasoning" is the greatest lie foisted on humanity by philosophers.

Dualist rationalism is inherently religious, as it relies on some concept of "spirit", such as Plato's Forms, Augustine's God, Hegel's World Spirit, or an almighty programmer converting sense data into LISP symbols, to connect the inexact, ambiguous, changeable things of this world to the precise, unambiguous, unchanging, and usually unquantified terms in its logic.

(Monist rationalists, like Buddha, Parmenides, and post-modernists, believe sense data can't be divided unambiguously into categories, and thus we may not use categories. Modern empiricists categorize sense data using statistics.)

Rationalists support strict, rigid, top-down planning and control. This includes their opposition to free markets, free speech, gradual reform, and optimization and evolution in general. This is because rationalists believe they can prove things about the real world, and hence their conclusions are reliable, and they don't need to mess around with slow, gradual improvements or with testing. (Of course each rationalist believes that every other rationalist was wrong, and should probably be burned at the stake.)

They oppose all randomness and disorder, because it makes strict top-down control difficult, and threatens to introduce change, which can only be bad once you've found the truth.

They have to classify every physical thing in the world into a discrete, structureless, atomic category, for use in their logic. That has led inevitably to theories which require all humans to ultimately have, at reflective equilibrium, the same values--as Plato, Augustine, Marx, and CEV all do.

You have, I think, picked up some of these bad inclinations from rationalism. When you say you want to find the "right" set of values (via CEV) and encode them into an AI God, that's exactly like the rationalists who spent their lives trying to find the "right" way to live, and then suppress all other thoughts and enforce that "right way" on everyone, for all time. Whereas an empiricist would never claim to have found final truth, and would always leave room for new understandings and new developments.

Your objection to randomness is also typically rationalist. Randomness enables you to sample without bias. A rationalist believes he can achieve complete lack of bias; an empiricist believes that neither complete lack of bias nor complete randomness can be achieved, but that for a given amount of effort, you might achieve lower bias by working on your random number generator and using it to sample, than by hacking away at your biases.

So I don't think we should build an FAI God who has a static set of values. We should build, if anything, an AI referee, who tries only to keep conditions in the universe that will enable evolution to keep on producing behaviors, concepts, and creatures of greater and greater complexity. Randomness must not be eliminated, for without randomness we can have no true exploration, and must be ruled forever by the beliefs and biases of the past.

Comment by PhilGoetz on Rescuing the Extropy Magazine archives · 2019-05-15T17:32:56.179Z · LW · GW

I scanned in Extropy 1, 3, 4, 5, 7, 16, and 17, which leaves only #2 missing. How can I send these to you? Contact me at [my LessWrong user name] at

Comment by PhilGoetz on Why Bayesians should two-box in a one-shot · 2017-12-17T18:26:40.960Z · LW · GW

I just now read that one post. It isn't clear how you think it's relevant. I'm guessing you think that it implies that positing free will is invalid.

You don't have to believe in free will to incorporate it into a model of how humans act. We're all nominalists here; we don't believe that the concepts in our theories actually exist somewhere in Form-space.

When someone asks the question, "Should you one-box?", they're using a model which uses the concept of free will. You can't object to that by saying "You don't really have free will." You can object that it is the wrong model to use for this problem, but then you have to spell out why, and what model you want to use instead, and what question you actually want to ask, since it can't be that one.

People in the LW community don't usually do that. I see sloppy statements claiming that humans "should" one-box, based on a presumption that they have no free will. That's making a claim within a paradigm while rejecting the paradigm. It makes no sense.

Consider what Eliezer says about coin flips:

We've previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin. The coin itself is either heads or tails. But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.

The mind projection fallacy is treating the word "probability" not in a nominalist way, but in a philosophically realist way, as if probabilities were things existing in the world. Probabilities are subjective. You don't project them onto the external world. That doesn't make "coin.probability == 0.5" a false statement. It correctly specifies the distribution of possibilities, given the information available to the mind making the assessment. I think that is what Eliezer is trying to say there.

"Free will" is a useful theoretical construct in a similar way. It may not be a thing in the world, but it is a model for talking about how we make decisions. We can only model our own brains; you can't fully simulate your own brain within your own brain; you can't demand that we use the territory as our map.

Comment by PhilGoetz on 37 Ways That Words Can Be Wrong · 2017-12-17T16:28:54.260Z · LW · GW

Yep, nice list. One I didn't see: defining a word in a way that is less useful (that conveys less information) while rejecting a definition that is more useful (that conveys more information). Always choose the definition that conveys more information, and eliminate words that convey zero information. It's common for people to define words in ways that convey zero information: if everything has the Buddha nature, then nothing empirical can be said about what having it means, and the phrase conveys no information.
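The "zero information" point can be made quantitative with a toy surprisal calculation. This is only a sketch; the function name and the prior probabilities here are my own illustration, not anything from the original list:

```python
from math import log2

def bits_conveyed(p_true):
    """Surprisal of learning that a predicate holds, given the prior
    probability p_true that it holds of an arbitrary thing."""
    return -log2(p_true)

# A predicate true of everything ("everything has the Buddha nature")
# rules nothing out, so learning it conveys zero bits.
universal = bits_conveyed(1.0)

# A predicate true of half the things under discussion conveys one bit.
discriminating = bits_conveyed(0.5)
```

On this view, "choose the definition that conveys more information" just means: prefer predicates whose prior probability of applying is well away from 1.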

Along similar lines, always define words so that no other word conveys too much mutual information about them. For instance, many people have argued with me that I should use the word "totalitarian" to mean "the fascist nations of the 20th century". Well, we already have a word for that, which is "fascist", so to define "totalitarian" as a synonym makes it a useless word.

The word "fascist" raises the question of when to use extensional vs. intensional definitions. It's conventionally defined extensionally, to mean the Axis powers in World War 2. This is not a useful definition, as we already have a label for that. Worse, people define it extensionally but pretend they've defined it intensionally. They call people today "fascist", conveying connotations in a way that can't be easily disputed, because there is no intensional definition to evaluate the claim.

Sometimes you want to switch back and forth between extensional and intensional definitions. In art history, we have a term for each period or "movement", like "neo-classical" and "Romantic". The exemplars of the category are defined both intensionally and extensionally, as those artworks having certain properties and produced in certain geographic locations during a certain time period. It is appropriate to use the intensional definition alone if describing a contemporary work of art (you can call it "Romantic" if it looks Romantic), but inappropriate to use examples that fit the intension but not the extension as exemplars, or to deduce things about the category from them. This keeps the categories stable.

A little ways back I talked about defining the phrase "Buddha nature". Phrases also have definitions--words are not atoms of meaning. Analyzing a phrase as if our theories of grammar worked, ignoring knowledge about idioms, is an error rationalists sometimes commit.

Pretending words don't have connotations is another error rationalists commit regularly--often in sneaky ways, deliberately using the connotations, while pretending they're being objective. Marxist literary criticism, for instance, loads a lot into the word "bourgeois".

Another category missing here is gostoks and doshes. This is when a word's connotations and tribal affiliation-signalling displace its semantic content entirely, and no one notices that it has no meaning. It is extremely common in Marxism and in "theory"; "capitalism" and "bourgeois" are the most common examples. "Bourgeoisie" originally meant people like Rockefeller and the Borges, but as soon as artists began using the word, they used it to mean "people who don't like my scribbles," and now it has no meaning at all, only demonic connotations. "Capitalism" has no meaning that singles out post-feudal societies in the way Marxists pretend it does; every definition of it I've seen includes things Marxists don't want it to, like the Soviet Union, absolute monarchies, or even hunter-gatherer tribes. What they really object to should be called simply "free markets"; that term identifies the economic systems they oppose much more accurately, but they don't want to admit that the essence of their ideology is opposition to freedom.

Avoid words with connotations that you haven't justified. Don't say "cheap" if you mean "inexpensive" or "shoddy". Especially avoid words which have a synonym with the opposite connotation: "frugal" and "miserly". Be aware of your etymological payloads: "awesome" and "awful" (full of awe), "incredible" (not credible), "wonderful" (full of wonder).

Another category is when two subcultures have different sets of definitions for the same words, and don't realize it. For instance, in the humanities, "rational" literally means ratio-based reasoning, which rejects the use of real numbers, continuous equations, empirical measurements, or continuous changes over time. This is the basis of the Romantic/Modernist hatred of "science" (by which they mean Aristotelian rationality), and of many post-modern arguments that rationality doesn't work. Many people in the humanities are genuinely unaware that science is different from what it was 2400 years ago, and most of them were entirely ignorant of science until perhaps the mid-20th century. A "classical education" excludes all empiricism.

Another problem is meaning drift. When you use writings from different centuries, you need to be aware of how the meanings of words and phrases have changed over time. For instance, the official academic line nowadays is that alchemy and astrology are legitimate sciences; this is justified in part by using the word "science" as if it meant the same as the Latin "scientia".

A problem in translation is decollapsing definitions. Medieval Latin conflated some important concepts because their neo-Platonist metaphysics said that all good things sort of went together. So for instance they had a single word, "pulchrum", which meant "beautiful", "sexy", "appropriate to its purpose", "good", and "noble". Translators will translate that into English based on the context, but that's not conveying the original mindset. This comes up most frequently when ancient writers made puns, like Plato's puns in the Crito, or "Jesus'" (Greek) puns in the opening chapters of John, which are destroyed in translation, leaving the reader with a false impression of the speaker's intent.

I disagree that saying "X is Y by definition" is usually wrong, but I should probably leave my comment on that post instead of here.

Comment by PhilGoetz on 37 Ways That Words Can Be Wrong · 2017-12-17T16:17:28.433Z · LW · GW

[moved to top level of replies]

Comment by PhilGoetz on 37 Ways That Words Can Be Wrong · 2017-12-17T16:02:04.291Z · LW · GW

But you're arguing against Eliezer, as "God" and "miracle" were (and still are) commonly-used words, and so Eliezer is saying those are good, short words for them.

Comment by PhilGoetz on Fallacies of Compression · 2017-12-17T02:10:13.451Z · LW · GW

Great post! There is also the non-discrete aspect of compression: information loss. English has, according to some dictionaries, over a million words. It's unlikely we store most of our information in English. Probably there is some sort of dimension reduction, like PCA. There is in any case probably lossy compression. This means people with different histories will use different frequency tables for their compression, and will throw out different information when encoding a verbal statement. I think you would almost certainly find that if you measure word use frequency for different people, then cluster the word use distributions, some clusters would correspond to ideologies. The interesting question is which comes first, the ideology, or the word usage frequency (caused by different life experiences).
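A minimal sketch of the clustering idea, using made-up one-line "speakers" and plain cosine similarity in place of a real dimension-reduction step like PCA (all names and data here are hypothetical):

```python
from collections import Counter
from math import sqrt

def freq_vector(text, vocab):
    """Normalized word-frequency vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical speakers: two share a usage pattern, one differs.
speakers = {
    "a": "market freedom market trade freedom",
    "b": "freedom market trade market freedom",
    "c": "class struggle class labor struggle",
}
vocab = sorted({w for t in speakers.values() for w in t.lower().split()})
vecs = {k: freq_vector(t, vocab) for k, t in speakers.items()}
```

Speakers "a" and "b" land in one cluster and "c" in another purely from word-use frequencies; scaled up, the question in the comment is whether such clusters would line up with ideologies.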

Comment by PhilGoetz on Why Bayesians should two-box in a one-shot · 2017-12-16T14:44:20.593Z · LW · GW

I don't think that what you need has any bearing on what reality has actually given you. Nor can we talk about different decision theories here--as long as we are talking about maximizing expected utility, we have our decision theory; that is enough specification to answer the Newcomb one-shot question. We can only arrive at a different outcome by stating the problem differently, by sneaking in different metaphysics, or by just doing bad logic (in this case, usually by allowing contradictory beliefs about free will in different parts of the analysis).

Your comment implies you're talking about policy, which must be modelled as an iterated game. I don't deny that one-boxing is good in the iterated game.

My concern in this post is that there's been a lack of distinction in the community between "one-boxing is the best policy" and "one-boxing is the best decision at one point in time in a decision-theoretic analysis, which assumes complete freedom of choice at that moment." This lack of distinction has led many people into wishful or magical rather than rational thinking.

Comment by PhilGoetz on Why Bayesians should two-box in a one-shot · 2017-12-16T01:03:00.693Z · LW · GW

I can believe that it would make sense to commit ahead of time to one-box at such an event. Doing so would affect your behavior in a way that the predictor might pick up on.

Hmm. Thinking about this convinces me that there's a big problem here in how we talk about the problem, because if we allow people who already knew about Newcomb's Problem to play, there are really 4 possible actions, not 2:

  • intended to one-box, one-boxed
  • intended to one-box, two-boxed
  • intended to two-box, one-boxed
  • intended to two-box, two-boxed

I don't know if the usual statement of Newcomb's problem specifies whether the subject learns the rules of the game before or after the predictor makes a prediction. It seems to me that's a critical factor. If the subject is told the rules of the game before the predictor observes the subject and makes a prediction, then we're just saying Omega is a very good lie detector, and the problem is not even about decision theory, but about psychology: do you have a good enough poker face to lie to Omega? If not, pre-commit to one-box.

We shouldn't ask, "Should you two-box?", but, "Should you two-box now, given how you would have acted earlier?" The various probabilities in the present depend on what you thought in the past. Under the proposition that Omega is a perfect predictor, the person inclined to two-box should still two-box, 'coz that $1M probably ain't there.

So Newcomb's problem isn't a paradox. If we're talking just about the final decision, the one made by a subject after Omega's prediction, then the subject should probably two-box (as argued in the post). If we're talking about two decisions, one before and one after Omega's prediction, then all we're asking is whether you can convince Omega that you're going to one-box if you aren't. Then it would not be terribly hard to say that the predictor might be so good (an Amazing Kreskin-level cold reader of humans, say, or a predictor that can inspect your reasoning if you are an AI) that your only hope is to precommit to one-box.
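For reference, the expected-value arithmetic behind these positions, with the usual $1,000,000 / $1,000 payoffs. Note that this calculation treats Omega's accuracy as a probability that simply conditions on your final choice, which is precisely the modelling assumption the post disputes; the function name is mine:

```python
def expected_values(p_correct, prize=1_000_000, small=1_000):
    """Expected payoffs if the predictor is right with probability p_correct.
    One-boxers find the prize iff the prediction was right; two-boxers
    get the prize only when the predictor erred, plus the small box."""
    ev_one = p_correct * prize
    ev_two = (1 - p_correct) * (prize + small) + p_correct * small
    return ev_one, ev_two

# Break-even accuracy: p * prize == (1 - p) * (prize + small) + p * small,
# i.e. p == (prize + small) / (2 * prize), about 0.5005 for these payoffs.
```

At accuracy 0.5 (a coin-flipping "predictor") two-boxing dominates; above roughly 0.5005 the one-box column wins, which is why everything turns on what the accuracy number is taken to mean.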

Comment by PhilGoetz on Why Bayesians should two-box in a one-shot · 2017-12-15T19:48:31.980Z · LW · GW

This was argued against in the Sequences and in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory - all the functional decision theory stuff is assuming a deterministic decision process, I think.

It is compatible to believe your actions follow deterministically and still talk about decision theory. It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view, as if you could by force of will violate your programming.

To ask what choice a deterministic entity should make presupposes both that it does, and does not, have choice. Presupposing a contradiction means STOP, your reasoning has crashed and you can prove any conclusion if you continue.

Comment by PhilGoetz on Announcing the AI Alignment Prize · 2017-12-15T19:41:12.041Z · LW · GW

I think that first you should elaborate on what you mean by "the goals of humanity". Do you mean majority opinion? In that case, one goal of humanity is to have a single world religious State, although there is disagreement on what that religion should be. Other goals of humanity include eliminating homosexuality and enforcing traditional patriarchal family structures.

Okay, I admit it--what I really think is that "goals of humanity" is a nonsensical phrase, especially when spoken by an American academic. It would be a little better to talk about values instead of goals, but not much better. The phrase still implies the unspoken belief that everyone would think like the person who speaks it, if only they were smarter.

Comment by PhilGoetz on Why Bayesians should two-box in a one-shot · 2017-12-15T19:12:03.303Z · LW · GW

The part of physics that implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions is quantum mechanics. But I don't think invoking it is the best response to your question. Though it does make me wonder how Eliezer reconciles his thoughts on one-boxing with his many-worlds interpretation of QM. Doesn't many-worlds imply that every game with Omega creates worlds in which Omega is wrong?

If Omega can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless. If you believe you should one-box when Omega can perfectly predict your actions, but two-box otherwise, then you are better off trying to two-box: you've already agreed that you should two-box if Omega can't perfectly predict your actions, and if Omega can, you won't be able to two-box unless Omega already predicted that you would, so it won't hurt to try.

Comment by PhilGoetz on The Ancient God Who Rules High School · 2017-04-13T01:45:15.816Z · LW · GW

Sorry. I've been reading English literary journals and lit theory books for the past year, and the default assumption is always that the reader is a Marxist.

Comment by PhilGoetz on Belief in Belief · 2017-04-13T01:32:49.697Z · LW · GW

The rationalist virtue of empiricism...

I'm not disagreeing with any of the content above, but a note about terminology--

LessWrong keeps using the word "rationalism" to mean something like "reason" or possibly even "scientific methodology". In philosophy, however, "rationalism" is not allied to "empiricism", but diametrically opposed to it. What we call science was a gradual development, over a few centuries, of methodologies that harnessed the powers both of rationalism and empiricism, which had previously been thought to be incompatible.

But if you talk to a modernist or post-modernist today, when they use the term "rational", they mean old-school Greek, Platonic-Aristotelian rationalism. They, like us, think so much in this old Greek way that they may use the term "reason" when they mean "Aristotelian logic". All post-modernism is based on the assumption that scientific methodology is essentially the combination of Platonic essences, Aristotelian physics, and Aristotelian logic, which is rationalism. They are completely ignorant of what science is and how it works. But this is partly our fault, because they hear us talking about science and using the term "rationality" as if science were rationalism!

(Inb4 somebody says Plato was a rationalist and Aristotle was an empiricist: Really, really not. Aristotle couldn't measure things, and very likely couldn't do arithmetic. In any case the most important Aristotelian writings to post-modernists are the Physics, which aren't empirical in the slightest. No time to go into it here, though.)

Comment by PhilGoetz on What conservatives and environmentalists agree on · 2017-04-09T23:55:00.785Z · LW · GW

I was unfairly inserting in the parentheses my own presumption about why Christians saw the world as having been created perfect. The passage I was talking about from Aquinas did not talk about perfection of the environment.

I'd like to see what Aquinas did say. Have you got a citation? I'm pretty sure that the notion that the world was created imperfect has never been tolerated by the Catholic Church. Asserting that creation was imperfect might even be condemned as Manichaeism. Opinions vary on what happened after the Fall, but I find it unlikely that Aquinas could have said God's original creation was imperfect. (If he did, he was probably copying Aristotle, and making some fine definitional distinction not explained here, to avoid heresy.)

Comment by PhilGoetz on What conservatives and environmentalists agree on · 2017-04-09T15:45:52.389Z · LW · GW

Yep, the argument to justify the imperfection of children, and thus the necessity of growth, is based on Aristotle's notion of perfect and imperfect actualities. Aquinas wrote:

Everything is perfect inasmuch as it is in actuality; imperfect, inasmuch as it is in potentiality, with privation of actuality. ... It is impossible therefore for any effect that is brought into being by action to be of a nobler actuality than is the actuality of the agent. It is possible though for the actuality of the effect to be less perfect than the actuality of the acting cause, inasmuch as action may be weakened on the part of the object to which it is terminated, or upon which it is spent.

The reason God created humans so that they have to grow from imperfect childhood (lacking the maturity of a complete human) towards a perfect adult state, rather than being adult, is thus so that they may learn virtue, which is the process of striving for perfection. (The environment does not need to learn virtue; therefore it was created perfect.)

I don't know whether humans would have borne offspring that were babies if not for the Fall, nor why animals bear babies, if not for the sake of their spiritual growth.

Comment by PhilGoetz on What conservatives and environmentalists agree on · 2017-04-09T15:44:47.193Z · LW · GW

I didn't mean to retract this, but to delete it and move the comment down below.

Comment by PhilGoetz on What conservatives and environmentalists agree on · 2017-04-09T15:31:44.105Z · LW · GW

Historically, Christians objected strongly to fossil evidence that some species had gone extinct. They said God would not have created species and then let them go extinct.

Perfection is a crucial part of Christian ontology. God's creation was perfect. That means, in the Christian way of thinking, it is unchanging. Read Christian descriptions of God (who is perfect), and "unchanging" is always one of the adjectives. "Unchanging" is a necessary attribute of perfection in Christian theology, and God's creation is necessarily perfect. The environment, therefore, was designed and created not to ever change.

One could argue that individuals are thus imperfect because they are born young and then mature. I've never heard a counter-argument against this accusation, though I suspect they exist in the wreckage of medieval theology.

Comment by PhilGoetz on The Ancient God Who Rules High School · 2017-04-09T15:05:42.111Z · LW · GW

The kid says that school is competitive, and that's bad--why can't they all agree to work less hard (presumably so they can have more time to play video games)? "Getting students to accept the reality that they might just not go to the best schools is good, I guess. But unless it also comes with the rallying call of engaging in a full-on socialist revolution, it doesn’t really deal with the whole issue."

This kid is the straw man conservatives present of socialism--the idea that the purpose of labor unions and socialism isn't to have a decent wage, but to not have to work hard.

There is a competition crisis, though. The problem is partly the idea that getting into an elite school is a measure of your intelligence--it isn't; the schools are explicit that intelligence isn't the sole basis of admission, nor do they even have any measure of intelligence other than standardized test scores, so why not just use the standardized test scores?

But it's also the allocation of social attention. Each field of study is too large now relative to the number of practitioners. Merit doesn't work anymore. There is no such thing as reputation anymore, except within a small circle of colleagues. Nobody trusts grades or recommendations. The problem isn't competition, but that we have no functioning reputation system anymore.

Comment by PhilGoetz on Against responsibility · 2017-04-07T18:21:51.780Z · LW · GW

I don't see how this follows. Evolutionary psychology provides some explanations for our intuitions and instincts that the majority of humans share but that doesn't really say anything about morality as Is Cannot Imply Ought.

Start by saying "rationality" means satisficing your goals and values. The issue is what values you have. You certainly have selfish values. A human also has values that lead to optimizing group survival. Behavior oriented primarily towards those goals is called altruistic.

The model of rationality presented on LessWrong usually treats goals and values that are of negative utility to the agent as biases or errors rather than as goals evolved to benefit the group or the genes. That leads to a view of rationality as strictly optimizing selfish goals.

As to old Utilitarianism 1.0, where somebody just declares by fiat that we are all interested in the greatest good for the greatest number of people--that isn't on the table anymore. People don't do that. Anyone who brings that up is the one asserting an "ought" with no justification. There is no need to talk about "oughts" yet.

Comment by PhilGoetz on What's up with Arbital? · 2017-04-04T19:49:53.120Z · LW · GW

This sounds great! There is no FAQ on the linked-to website, though. Is Arbital open-source? What are the key licensing terms? How's it implemented? How does voting work?

If we're all supposed to use the same website, there are advantages to that, but I would be less excited about that.

Also, the home page links to, but that page is blank. Er... is also blank for me. Perhaps Arbital doesn't work for Chrome on Windows 7 without flash installed.

Comment by PhilGoetz on Against responsibility · 2017-04-04T19:46:10.089Z · LW · GW

Rational Utilitarianism is the greatest good for the greatest number given the constraints of imperfect information and faulty brains.

No; I object to your claiming the term "rational" for that usage. That's just plain-old Utilitarianism 1.0 anyway; it doesn't take a modifier.

Rationality plus Utilitarianism plus evolutionary psychology leads to the idea that a rational person is one who satisfies their own goals. You can't call trying to achieve the greatest good for the greatest number of people "rational" for an evolved organism.

Comment by PhilGoetz on Against responsibility · 2017-04-04T04:42:41.077Z · LW · GW

Benquo isn't saying that these attitudes necessarily follow, but that in practice he's seen it happen. There is a lot of unspoken LessWrong / SIAI history here. Eliezer Yudkowsky and many others "at the top" of SIAI felt personally responsible for the fate of the human race. EY believed he needed to develop an AI to save humanity, but for many years he would only discuss his thoughts on AI with one other person, not trusting even the other people in SIAI, and requiring them to leave the area when the two of them talked about AI. (For all I know, he still does that.) And his plans basically involve creating an AI to become world dictator and stop anybody else from making an AI. All of that is reducing the agency of others "for their own good."

This secrecy was endemic at SIAI; when I've walked around NYC with their senior members, sometimes 2 or 3 people would gather together and whisper, and would ask anyone who got too close to please walk further away, because the ideas they were discussing were "too dangerous" to share with the rest of the group.

Comment by PhilGoetz on Against responsibility · 2017-04-04T04:34:03.947Z · LW · GW

Great post, and especially appropriate for LW. I add the proviso that you may in some cases be making the most-favorable interpretation rather than the correct interpretation.

I know one person on LessWrong who has talked himself into overwriting his natural morality with his interpretation of rational utilitarianism. This ended up giving him worse-than-human morality, because he assumes that humans are not actually moral--that humans don't derive utility from helping others. He ended up convincing himself to do the selfish things that he thinks are "in his own best interests" in order to be a good rationalist, even in cases where he didn't really want to be selfish--or wouldn't have, before rewriting his goals.

Comment by PhilGoetz on Stupidity as a mental illness · 2017-03-25T04:44:33.952Z · LW · GW

That's basically what I'm saying--well, I think it was; I can't see my original text now. But IIRC I misused the word "necessarily" because I thought doing so was closer to the truth than using no modifier at all. I wanted to imply a causative link: even in cases where it appears there is no economic cost, the paths from a nation's values to its economic health are so long and so numerous, and the bias towards finding an economic cost on each such path so strong, that it is statistically very unlikely that the net economic impact is not negative.

Comment by PhilGoetz on Open thread, Mar. 20 - Mar. 26, 2017 · 2017-03-25T04:37:54.247Z · LW · GW

The main page no longer has a link to the Discussion section of the forum, nor a login link. I think these changes are both mistakes.

Comment by PhilGoetz on Stupidity as a mental illness · 2017-03-07T21:06:08.474Z · LW · GW

I was talking about the fitness of a culture. That's why I said I was talking about the fitness of a culture. Individual happiness is not fitness, but it is of interest to us.

Comment by PhilGoetz on Rationality Quotes January - March 2017 · 2017-03-07T17:54:05.310Z · LW · GW

Three inventions which may perhaps be long delayed, but which possibly are near at hand, will give to this overcrowded island the prosperous condition of the United States. The first is the discovery of a motive force which will take the place of steam, with its cumbrous fuel of oil and coal; the second, the invention of aerial locomotion which will transport labour at a trifling cost of money and of time to any part of the planet, and which by annihilating distance will speedily extinguish national distinctions; the third, the manufacture of flesh and flour from the elements by a chemical process in the laboratory, similar to that which is now performed within the bodies of animals and plants. Food will then be manufactured in unlimited quantities at a trifling expense, and our enlightened prosperity will look back upon us who eat oxen and sheep just as we look back upon cannibals. Hunger and starvation will then be unknown, and the best part of human life will no longer be wasted in a tedious process of cultivating the fields. ... [claims that everyone will embrace Victorian morality omitted] ... These bodies which now we wear belong to the lower animals; our minds have already outgrown them; already we look upon them with contempt. A time will come when science will transform them by means which we cannot conjecture, and which, even if explained to us, we could not now understand, much as the savage cannot understand electricity, magnetism, or steam. Disease will be extirpated; the causes of decay will be removed; immortality will be invented. And then, the earth being small, mankind will migrate into space, and will cross the airless Saharas which separate planet from planet and sun from sun. The earth will become a Holy Land which will be visited by pilgrims from all the quarters of the universe. Finally, men will master the forces of Nature; they will become themselves architects of system, manufacturers of worlds.

-- Winwood Reade, "The Martyrdom of Man", 1872

quoted (and ridiculed) by Patrick Allitt in 2002

Comment by PhilGoetz on Rationality Quotes Thread February 2016 · 2017-03-07T17:19:37.898Z · LW · GW

Note that the major relevant historical disagreement is not over any of these ideas, but over what the true territory is. Most medieval maps (pre-1300) were deliberately warped not to represent their territory as it looked in the physical world, but to show "spiritual truths". Jerusalem would be at the center, each city's size would be proportional to its importance in God's plan, and distances and directions would be warped to make a particular set of points draw the figure of a cross on the map. Similarly, maps of medieval cities would not show the city to scale, but would plant the richest part of the city in the center of the map, occupying a large fraction of the map, regardless of its actual physical location or size. Judging from the theories of perception and reality then in circulation, the people making (or at least the people buying) these maps probably thought they were not distorting, but correcting the distortions of the senses and presenting a view that would actually lead to more correct beliefs.

Comment by PhilGoetz on Rationality Quotes Thread February 2016 · 2017-03-07T17:14:01.210Z · LW · GW

The notion of the non-elementalistic is important--that was the basis of structuralism--but it reinforces the old view that these operationalizations of our observations were unfortunate but necessary concessions to the limitations of observation, rather than that, e.g., space-time really is the lattice the Universe is laid upon. I doubt there's a real difference between these views mathematically, but I think there is conceptually.

Comment by PhilGoetz on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-16T18:43:03.786Z · LW · GW

I have the feeling you still don't agree with Thorin. Why not?

Comment by PhilGoetz on Stupidity as a mental illness · 2017-02-10T23:08:50.975Z · LW · GW

From what I've read, most of the protest in the deaf community currently is deaf parents insisting they have the right to deny treatment and audible education to their children--which they want to do because it will be too late for the children to get the treatment themselves when they're adults. If it were possible for their children to get the treatment and learn spoken language once they grew up, and potentially leave the deaf community, parents would have less motivation to deny treatment to them as children.

Comment by PhilGoetz on Stupidity as a mental illness · 2017-02-10T23:04:38.182Z · LW · GW

I know a lot of people who are stupid in one way or another. I would hate to see "treatment" forced onto them because they're not as smart as we'd like.

Do we force people to be treated for diabetes, cancer, or gout? No; we at most work to make it possible for them to get treatment.

Comment by PhilGoetz on Stupidity as a mental illness · 2017-02-10T22:57:16.621Z · LW · GW

This is not a thing that we need to check statistics for. Americans now talk openly about seeing a psychologist or having depression. Americans two generations prior did not. Depression was not recognized as a legitimate disease; it was considered a weakness, and psychotherapy was an act of desperation.

Comment by PhilGoetz on Stupidity as a mental illness · 2017-02-10T22:42:02.999Z · LW · GW

This is technically correct, but misleading in context. James' point is, I think, directed towards the idea that for a culture to embrace values that decrease its fitness has a cost, and increases the odds of your culture going extinct. More relevant to us in practice is that such values have an economic cost that inevitably reduces our individual happiness. This is correct regardless of whether you are at equilibrium.

Comment by PhilGoetz on Stupidity as a mental illness · 2017-02-10T22:30:40.027Z · LW · GW

IQ is largely hereditary (~70%, IIRC) and polygenic. This means that attempting to "cure" it by anything short of major genetic engineering will have quite limited upside.

Depression is, according to Google and web pages I haven't studied, polygenic and 40-50% heritable, yet medicine often works for it.

It isn't especially hard to develop drugs for genetic diseases. Genetic diseases have single points of attack--receptors to block, proteins to disrupt. "Polygenic" may not matter at all; that may just mean there is one pathway with 30 genes in it, and 300 genes impinging on it, and you need to supplement the pathway's end product.

That, ahem, is exactly what's happening already :-/

I wasn't going to mention it, but I thought of that example because Harvard's current admissions website boasts that it provides no merit-based financial aid. I thought that was odd when I read it, but it fits in with the idea that a meritocracy is morally objectionable.

Comment by PhilGoetz on Stupidity as a mental illness · 2017-02-10T22:21:47.291Z · LW · GW

That's a good point--if a type of question on an IQ test shows variability from year to year, do psychologists say it's a bad type of question and remove it from the test?