Posts

Good HPMoR scenes / passages? 2024-03-03T22:42:12.673Z
On my AI Fable, and the importance of de re, de dicto, and de se reference for AI alignment 2023-10-05T00:50:43.012Z
Why Bayesians should two-box in a one-shot 2017-12-15T17:39:32.491Z
What conservatives and environmentalists agree on 2017-04-08T00:57:32.012Z
Increasing GDP is not growth 2017-02-16T18:04:16.959Z
Stupidity as a mental illness 2017-02-10T03:57:20.182Z
Irrationality Quotes August 2016 2016-08-01T19:12:35.571Z
Market Failure: Sugar-free Tums 2016-06-30T00:12:16.143Z
"3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism" 2016-03-29T15:16:37.309Z
The increasing uselessness of Promoted 2016-03-19T18:23:03.221Z
Is altruistic deception really necessary? Social activism and the free market 2016-02-26T06:38:16.032Z
Is there a recursive self-improvement hierarchy? 2015-10-29T02:55:00.909Z
The mystery of Brahms 2015-10-21T05:12:47.749Z
Monty Hall Sleeping Beauty 2015-09-18T21:18:23.137Z
An accidental experiment in location memory 2015-08-31T16:50:19.306Z
Calling references: Rational or irrational? 2015-08-28T21:06:46.872Z
Words per person year and intellectual rigor 2015-08-27T03:31:49.373Z
Is semiotics bullshit? 2015-08-25T14:09:04.000Z
Why people want to die 2015-08-24T20:13:37.830Z
How to escape from your sandbox and from your hardware host 2015-07-31T17:26:00.083Z
"Risk" means surprise 2015-05-22T04:47:08.768Z
My mind must be too highly trained 2015-02-20T21:43:59.036Z
Easy wins aren't news 2015-02-19T19:38:38.471Z
Uncategories and empty categories 2015-02-16T01:18:28.970Z
The morality of disclosing salary requirements 2015-02-08T21:12:26.534Z
Reductionist research strategies and their biases 2015-02-06T04:11:32.650Z
Don't estimate your creative intelligence by your critical intelligence 2015-02-05T02:41:28.108Z
How Islamic terrorists reduced terrorism in the US 2015-01-11T05:19:17.376Z
Dark Arts 101: Be rigorous, on average 2014-12-31T00:37:28.765Z
Every Paul needs a Jesus 2014-08-10T19:13:04.694Z
Why humans suck: Ratings of personality conditioned on looks, profile, and reported match 2014-08-09T18:48:17.021Z
The rational way to name rivers 2014-08-06T15:41:06.598Z
The dangers of dialectic 2014-08-05T20:02:25.531Z
Fifty Shades of Self-Fulfilling Prophecy 2014-07-24T00:17:43.189Z
Too good to be true 2014-07-11T20:16:24.277Z
What should a Bayesian do given probability of proving X vs. of disproving X? 2014-06-07T18:40:38.419Z
The Universal Medical Journal Article Error 2014-04-29T17:57:09.854Z
Don't teach people how to reach the top of a hill 2014-03-04T21:38:53.926Z
Prescriptive vs. descriptive and objective vs. subjective definitions 2014-01-21T23:21:45.645Z
Using vs. evaluating (or, Why I don't come around here no more) 2014-01-20T02:36:29.575Z
The dangers of zero and one 2013-11-21T12:21:23.684Z
To like, or not to like? 2013-11-14T02:26:59.072Z
Dark Arts 101: Winning via destruction and dualism 2013-09-21T01:53:02.169Z
Thought experiment: The transhuman pedophile 2013-09-17T22:38:06.160Z
Fiction: Written on the Body as love versus reason 2013-09-08T06:13:35.794Z
I know when the Singularity will occur 2013-09-06T20:04:18.560Z
The 50 Shades of Grey Book Club 2013-08-24T20:55:47.307Z
Humans are utility monsters 2013-08-16T21:05:28.195Z
Free ebook: Extraordinary Popular Delusions and the Madness of Crowds 2013-07-05T19:20:39.493Z
Anticipating critical transitions 2013-06-09T16:28:51.006Z

Comments

Comment by PhilGoetz on Biological risk from the mirror world · 2024-12-16T07:13:25.486Z · LW · GW

If there is an equilibrium, it will probably be a world where half the bacteria are of each chirality. If there are bacteria of both chiralities that can each eat the opposite kind, then the more numerous kind will always replicate more slowly.

Eukaryotes evolve much more slowly, and would likely all be wiped out.
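
A minimal sketch of the frequency-dependence behind that first claim (my own toy replicator model with made-up parameters, not anything from the original post): each chirality's fitness rises with the other chirality's share, so the rarer type always grows faster and the mix settles at 50/50 from any starting point.

```python
# Toy replicator dynamics, made-up parameters: each chirality's fitness rises
# with the other chirality's share (more prey available), so the rarer type
# always grows faster and the mix settles at 50/50.
def step(x, base=1.0, feed=0.5, dt=0.01):
    """x = share of normal-chirality bacteria; 1 - x = share of mirror bacteria."""
    f_normal = base + feed * (1 - x)          # more mirror prey -> higher fitness
    f_mirror = base + feed * x                # more normal prey -> higher fitness
    mean_f = x * f_normal + (1 - x) * f_mirror
    return x + dt * x * (f_normal - mean_f)   # replicator equation on the share

x = 0.9                                       # start with 90% normal chirality
for _ in range(10_000):
    x = step(x)
print(round(x, 3))                            # -> 0.5
```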

Comment by PhilGoetz on Biological risk from the mirror world · 2024-12-16T07:04:54.480Z · LW · GW

Yes, creating mirror life would be a terrible existential risk. But how did this sneak up on us? People were talking about this risk in the 1990s if not earlier. Did the next generation never hear of it?

Comment by PhilGoetz on Why Bayesians should two-box in a one-shot · 2024-11-14T20:42:48.495Z · LW · GW

All right, yes.  But that isn't how anyone has ever interpreted Newcomb's Problem.  AFAIK it is literally always used to support some kind of acausal decision theory, which it does /not/ if what is in fact happening is that Omega is cheating.

Comment by PhilGoetz on Why Bayesians should two-box in a one-shot · 2024-11-14T20:40:02.532Z · LW · GW

But if the premise is impossible, then the experiment has no consequences in the real world, and we shouldn't consider its results in our decision theory, which is about consequences in the real world.

Comment by PhilGoetz on Why Bayesians should two-box in a one-shot · 2024-11-14T20:38:04.222Z · LW · GW

That equation you quoted is in branch 2: "Omega is a 'nearly perfect' predictor.  You assign P(general) a value very, very close to 1."  So it IS correct, by stipulation.
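
For reference, a sketch of the expected-value arithmetic that branch stipulates, using the standard $1,000,000 / $1,000 Newcomb payoffs (assumed here for illustration; the original comment doesn't restate them):

```python
# Expected-value arithmetic for branch 2.  Payoffs ($1,000,000 opaque box,
# $1,000 transparent box) are the standard Newcomb numbers, assumed here for
# illustration; p is the stipulated accuracy of Omega's prediction.
def ev_one_box(p):
    """Conditional expected value of one-boxing given prediction accuracy p."""
    return p * 1_000_000

def ev_two_box(p):
    """Conditional expected value of two-boxing given prediction accuracy p."""
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.999, 1.0):
    print(p, round(ev_one_box(p)), round(ev_two_box(p)))
# p=0.5:   500000 vs 501000 -- two-boxing comes out ahead
# p=0.999: 999000 vs   2000 -- the conditional EVs flip as p -> 1,
# which is what "very, very close to 1" stipulates in branch 2.
```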

Comment by PhilGoetz on If we can't lie to others, we will lie to ourselves · 2024-10-27T16:34:48.524Z · LW · GW
Comment by PhilGoetz on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2024-10-19T23:49:09.077Z · LW · GW

But there is no possible world with a perfect predictor, unless it has a perfect track record by chance.  More obviously, there is no possible world in which we can deduce, from a finite number of observations, that a predictor is perfect.  The Newcomb paradox requires the decider to know, with certainty, that Omega is a perfect predictor.  That hypothesis is impossible, and thus inadmissible; so any argument in which something is deduced from that premise is invalid.

Comment by PhilGoetz on A My Little Pony fanfic allegedly but not mainly about immortality · 2024-10-15T03:56:55.314Z · LW · GW

I appreciated this comment a lot.  I didn't reply at the time, because I thought doing so might resurrect our group-selection argument.  But thanks.

Comment by PhilGoetz on A vote against spaced repetition · 2024-09-11T00:22:25.378Z · LW · GW

What about using them to learn foreign-language vocabulary?  E.g., to learn that "dormir" in Spanish means "to sleep" in English.

Comment by PhilGoetz on You don't know how bad most things are nor precisely how they're bad. · 2024-08-21T02:10:21.441Z · LW · GW

To reach statistical significance, they must have tested each of the 8 pianists more than once.

Comment by PhilGoetz on Environmentalism in the United States Is Unusually Partisan · 2024-05-28T03:08:06.040Z · LW · GW

I think you need to get some data and factor out population density before you can causally relate environmentalism to politics.  People who live in rural environments don't see as much need to worry about the environment as people who live in cities.  It just so happens that today, rural people vote Republican and city people vote Democrat.  That didn't use to be the case.

Though, sure, if you call the Sierra Club "environmentalist", then environmentalism is politically polarized today.  I don't call them environmentalists anymore; I call them a zombie organization that has been parasitized by an entirely different political organization.  I've been a member for decades, and they completely stopped caring about the environment during the Trump presidency.  As in, I did not get one single letter from them in those years that was aimed at helping the environment.  Lots on global warming, but none of that was backed up by science.  (I'm not saying global warming isn't real; I'm saying the issues the Sierra Club was raising had no science behind them, like "global warming is killing off the redwoods".) 

Comment by PhilGoetz on You Get About Five Words · 2024-04-21T04:50:34.403Z · LW · GW

Isn't LessWrong a disproof of this?  Aren't we thousands of people?  If you picked two active LWers at random, do you think the average overlap in their reading material would be 5 words?  More like 100,000, I'd think.

Comment by PhilGoetz on Acting Wholesomely · 2024-03-21T04:03:01.868Z · LW · GW

I think it would be better not to use the word "wholesome".  Using it is cheating, by letting us pretend at the same time that (A) we're explaining a new kind of ethics, which we name "wholesome", and (B) that we already know what "wholesome" means.  This is a common and severe epistemological failure mode which traces back to the writings of Plato.

If you replace every instance of "wholesome" with the word "frobby", does the essay clearly define "frobby"?

It seems to me to be a way to try to smuggle virtue ethics into the consequentialist rationality community by disguising it with a different word.  If you replace every instance of "wholesome" with the word "virtuous", does the essay's meaning change?

Comment by PhilGoetz on Good HPMoR scenes / passages? · 2024-03-04T04:36:13.855Z · LW · GW

Thank you!  The 1000-word max has proven to be unrealistic, so it's not too long.  You and g-w1 picked exactly the same passage.

Comment by PhilGoetz on Good HPMoR scenes / passages? · 2024-03-04T02:12:32.850Z · LW · GW

Thank you!  I'm just making notes to myself here, really:

  • Harry teaches Draco about blood science and scientific hypothesis testing in Chapter 22.
  • Harry explains that muggles have been to the moon in Chapter 7.
  • Quirrell's first lecture is in chapter 16, and it is epic!  Especially the part about why Harry is the most-dangerous student.
Comment by PhilGoetz on Even if you have a nail, not all hammers are the same · 2024-01-04T19:31:10.061Z · LW · GW

I think the problem is that each study has to make many arbitrary decisions about aspects of the experimental protocol.  Each such decision will be made the same way for every subject in a single study, but will vary across studies.  There are so many such decisions that, if the meta-analysis were to include them as covariates, each study would introduce enough new variables to cancel out the statistical power gained by adding that study.

Comment by PhilGoetz on Ends Don't Justify Means (Among Humans) · 2024-01-04T18:18:23.421Z · LW · GW

You have it backwards.  The difference between a Friendly AI and an unfriendly one is entirely one of restrictions placed on the Friendly AI.  So an unfriendly AI can do anything a friendly AI could, but not vice-versa.

The friendly AI could lose out because it would be restricted from committing atrocities, or at least atrocities which were strictly bad for humans, even in the long run.

Your comment that they can commit atrocities for the good of humanity without worrying about becoming corrupt is a reason to be fearful of "friendly" AIs.

Comment by PhilGoetz on On my AI Fable, and the importance of de re, de dicto, and de se reference for AI alignment · 2023-10-07T23:35:35.329Z · LW · GW

By "just thinking about IRL", do you mean "just thinking about the robot using IRL to learn what humans want"?  'Coz that isn't alignment.

'But potentially a problem with more abstract cashings-out of the idea "learn human values and then want that"' is what I'm talking about, yes.  But it also seems to be what you're talking about in your last paragraph.

"Human wants cookie" is not a full-enough understanding of what the human really wants, and under what conditions, to take intelligent actions to help the human.  A robot learning that would act like a paper-clipper, but with cookies.  It isn't clear whether a robot which hasn't resolved the de dicto / de re / de se distinction in what the human wants will be able to do more good than harm in trying to satisfy human desires, nor what will happen if a robot learns that humans are using de se justifications.

Here's another way of looking at that "nor what will happen if" clause:  We've been casually tossing about the phrase "learn human values" for a long time, but that isn't what the people who say that want.  If an AI learned human values, it would treat humans the way humans treat cattle.  But if the AI is to learn to desire to help humans satisfy their wants, it isn't clear that the AI can (A) internalize human values enough to understand and effectively optimize for them, while at the same time (B) keep those values compartmentalized from its own values, which make it enjoy helping humans with their problems.  To do that, the AI would need to want to propagate and support human values that it disagrees with.  It isn't clear that that's something a coherent, let's say "rational", agent can do.

Comment by PhilGoetz on Applause Lights · 2023-10-06T02:43:30.973Z · LW · GW

How is that de re and de dicto?

Comment by PhilGoetz on On my AI Fable, and the importance of de re, de dicto, and de se reference for AI alignment · 2023-10-05T22:29:50.221Z · LW · GW

You're looking at the logical form and imagining that that's a sufficient understanding to start pursuing the goal. But it's only sufficient in toy worlds, where you have one goal at a time, and the mapping between the goal and the environment is so simple that the agent doesn't need to understand the value, or the target of "cookie", beyond "cookie" vs. "non-cookie". In the real world, the agent has many goals, and the goals will involve nebulous concepts, and have many considerations and conditions attached, e.g., how healthy is this cookie, how tasty is it, how hungry am I.  It will need to know /why/ it, or human24, wants a cookie in order to intelligently know when to get the cookie, and to resolve conflicts between goals, and to do probability calculations which involve the degree to which different goals are correlated in the higher goals they satisfy.

There's a confounding confusion in this particular case: you seem to be hoping the robot will infer that the agent of the desired act is the human, both in the human's representation of the goal and in the AI's.  But for values in general, we often want the AI to act in the way that the human would act, not to want the human to do something.  Your posited AI would learn the goal that it wants human24 to get a cookie.

What it all boils down to is:  You have to resolve the de re / de dicto / de se interpretation in order to understand what the agent wants.  That means an AI also has to resolve that question in order to know what a human wants. Your intuitions about toy examples like "human 24 always wants a cookie, unconditionally, forever" will mislead you, in the ways toy-world examples misled symbolic AI researchers for 60 years.

Comment by PhilGoetz on [AN #58] Mesa optimization: what it is, and why we should care · 2023-10-05T01:25:32.714Z · LW · GW

So, "mesa" here means "tabletop", and is pronounced "MAY-suh"?

Comment by PhilGoetz on The 99% principle for personal problems · 2023-10-05T00:36:34.300Z · LW · GW

I think your insight is that progress counts--that counting counts.  It's overcoming the Boolean mindset, in which anything that's true some of the time, must be true all of the time.  That you either "have" or "don't have" a problem.

I prefer to think of this as "100% and 0% are both unattainable", but stating it as the 99% rule might be more-motivating to most people.

Comment by PhilGoetz on The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables · 2023-09-02T03:14:53.937Z · LW · GW

What do you mean by a goodhearting problem, & why is it a lossy compression problem?  Are you using "goodhearting" to refer to Goodhart's Law?

Comment by PhilGoetz on The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables · 2023-09-02T03:07:35.808Z · LW · GW

I'll preface this by saying that I don't see why it's a problem, for purposes of alignment, for human values to refer to non-existent entities.  This should manifest as humans and their AIs wasting some time and energy trying to optimize for things that don't exist, but this seems irrelevant to alignment.  If the AI optimizes for the same things that don't exist as humans do, it's still aligned; it isn't going to screw things up any worse than humans do.

But I think it's more important to point out that you're joining the same metaphysical goose chase that has made Western philosophy nonsense since before Plato.

You need to distinguish between the beliefs and values a human has in its brain, and the beliefs & values it expresses to the external world in symbolic language.  I think your analysis concerns only the latter.  If that's so, you're digging up the old philosophical noumena / phenomena distinction, which itself refers to things that don't exist (noumena).

Noumena are literally ghosts; "soul", "spirit", "ghost", "nature", "essence", and "noumena" are, for practical purposes, synonyms in philosophical parlance.  The ghost of a concept is the metaphysical entity which defines what assemblages in the world are and are not instances of that concept.

But at a fine enough level of detail, not only are there no ghosts, there are no automobiles or humans.  The Buddhist and post-modernist objections to the idea that language can refer to the real world are that the referents of "automobiles" are not exactly, precisely, unambiguously,  unchangingly, completely, reliably specified, in the way Plato and Aristotle thought words should be.  I.e., the fact that your body gains and loses atoms all the time means, for these people, that you don't "exist".

Plato, Aristotle, Buddhists, and post-modernists all assumed that the only possible way to refer to the world is for noumena to exist, which they don't.  When you talk about "valuing the actual state of the world," you're indulging in the quest for complete and certain knowledge, which requires noumena to exist.  You're saying, in your own way, that knowing whether your values are satisfied or optimized requires access to what Kant called the noumenal world.  You think that you need to be absolutely, provably correct when you tell an AI that one of two worlds is better.  So those objections apply to your reasoning, which is why all of this seems to you to be a problem.

The general dissolution of this problem is to admit that language always has slack and error.  Even direct sensory perception always has slack and error.  The rationalist, symbolic approach to AI safety, in which you must specify values in a way that provably does not lead to catastrophic outcomes, is doomed to failure for these reasons, which are the same reasons that the rationalist, symbolic approach to AI was doomed to failure (as almost everyone now admits).  These reasons include the fact that claims about the real world are inherently unprovable, which has been well-accepted by philosophers since Kant's Critique of Pure Reason.

That's why continental philosophy is batshit crazy today.  They admitted that facts about the real world are unprovable, but still made the childish demand for absolute certainty about their beliefs.  So, starting with Hegel, they invented new fantasy worlds for our physical world to depend on, all pretty much of the same type as Plato's or Christianity's, except instead of "Form" or "Spirit", their fantasy worlds are founded on thought (Berkeley), sense perceptions (phenomenologists), "being" (Heidegger), music, or art.

The only possible approach to AI safety is one that depends not on proofs using symbolic representations, but on connectionist methods for linking mental concepts to the hugely-complicated structures of correlations in sense perceptions which those concepts represent, as in deep learning.  You could, perhaps, then construct statistical proofs that rely on the over-determination of mental concepts to show almost-certain convergence between the mental languages of two different intelligent agents operating in the same world.  (More likely, the meanings which two agents give to the same words don't necessarily converge, but agreement on the probability estimates given to propositions expressed using those same words will converge.)

Fortunately, all mental concepts are over-determined.  That is, we can't learn concepts unless the relevant sense data that we've sensed contains much more information than do the concepts we learned.  That comes automatically from what learning algorithms do.  Any algorithm which constructed concepts that contained more information than was in the sense data, would be a terrible, dysfunctional algorithm.

You are still not going to get a proof that two agents interpret all sentences exactly the same way.  But you might be able to get a proof which shows that catastrophic divergence is likely to happen less than once in a hundred years, which would be good enough for now.

Perhaps what I'm saying will be more understandable if I talk about your case of ghosts.  Whether or not ghosts "exist", something exists in the brain of a human who says "ghost".  That something is a mental structure, which is either ultimately grounded in correlations between various sensory perceptions, or is ungrounded.  So the real problem isn't whether ghosts "exist"; it's whether the concept "ghost" is grounded, meaning that the thinker defines ghosts in some way that relates them to correlations in sense perceptions.  A person who thinks ghosts fly, moan, and are translucent white with fuzzy borders, has a grounded concept of ghost.  A person who says "ghost" and means "soul" has an ungrounded concept of ghost.

Ungrounded concepts are a kind of noise or error in a representational system.  Ungrounded concepts give rise to other ungrounded concepts, as "soul" gave rise to things like "purity", "perfection", and "holiness".  I think it highly probable that grounded concepts suppress ungrounded concepts, because all the grounded concepts usually provide evidence for the correctness of the other grounded concepts.  So probably sane humans using statistical proofs don't have to worry much about whether every last concept of theirs is grounded, but as the number of ungrounded concepts increases, there is a tipping point beyond which the ungrounded concepts can be forged into a self-consistent but psychotic system such as Platonism, Catholicism, or post-modernism, at which point they suppress the grounded concepts.

Sorry that I'm not taking the time to express these things clearly.  I don't have the time today, but I thought it was important to point out that this post is diving back into the 19th-century continental grappling with Kant, with the same basic presupposition that led 19th-century continental philosophers to madness.  TL;DR:  AI safety can't rely on proving statements made in human or other symbolic languages to be True or False, nor on having complete knowledge about the world.

Comment by PhilGoetz on Progress, humanism, agency: An intellectual core for the progress movement · 2023-08-01T15:32:27.893Z · LW · GW

When you write of A belief in human agency, it's important to distinguish between the different conceptions of human agency on offer, corresponding to the 3 main political groups:

  • The openly religious or reactionary statists say that human agency should mean humans acting as the agents of God.  (These are a subset of your fatalists.  Other fatalists are generally apolitical.)
  • The covertly religious or progressive statists say human agency can only mean humans acting as agents of the State (which has the moral authority and magical powers of God).  This is partly because they think individual humans are powerless and/or stupid, and partly because, ontologically, they don't believe individual humans exist, where to exist is to have an eternal Platonic Form.  (Plato was notoriously vague on why each human has an individual soul, when every other category of thing in the world has only one collective category soul; and people in the Platonic line of thought have wavered back and forth over this for millennia.)  This includes Rousseau, Hegel, Marx, and the Social Justice movement.
    • The Nazis IMHO fall into both categories at the same time, showing how blurry and insignificant the lines between these 2 categories are.  Most progressives are actually reactionaries, as most believe in the concept of "perfection", and that perfection is the natural state of all things in the absence of evil actors, so that their "progress" is towards either a mythical past perfection in exactly the same way as that of the Nazis, or towards a perfection that was predestined at creation, as in left Hegelians such as Marxists and Unitarian Universalists.
  • The empiricists believe that individual humans can and should each have their own individual agency.  (The reasons why empiricist epistemology is naturally opposed to statism are too complex for me to explain right now.  It has to do with the kind of ontology that leads to statism being incompatible with empirical investigation and individual freedom, and opposition to individual freedom being incompatible with effective empirical investigation.)

Someone who wants us united under a document written by desert nomads 3000 years ago, or someone who wants the government to force their "solutions" down our throats and keep forcing them no matter how many people die, would also say they believe in human agency; but they don't want private individuals to have agency.

This is a difficult but critical point.  Big progressive projects, like flooding desert basins, must be collective.  But movements that focus on collective agency inevitably embrace, if only subconsciously, the notion of a collective soul.  This already happened to us in 2010, when a large part of the New Atheist movement split off and joined the Social Justice movement, and quickly came to hate free speech, free markets, and free thought.

I think it's obvious that the enormous improvements in material living standards over the last ~200 years that you wrote of were caused by the Enlightenment, and can be summarized as the understanding of how liberating individuals leads to economic and social progress.  Whereas modernist attempts to deliberately cause economic and social progress are usually top-down and require suppressing individuals, and so cause the reverse of what they intend.  This is the great trap that we must not fall into, and it hinges on our conception of human agency.

A great step forward, or backwards (towards Athens), was made by the founders of America when they created a nation based in part on the idea of competition and compromise as being good rather than bad, basically by applying Adam Smith's invisible hand to both economics and politics.  One way forward is to understand how to do large projects that have a noble purpose.  That is, progressive capitalism.  Another way would be to understand how governments have sometimes managed to do great things, like NASA's Apollo project, without them degenerating into economic and social disasters like Stalin's or Mao's 5-Year-Plans.  Either way, how you conceptualize human agency will be a decisive factor in whether you produce heaven or hell.

Comment by PhilGoetz on That Tiny Note of Discord · 2023-04-10T16:47:16.794Z · LW · GW

It sounds like I didn't consider the possibility that Eliezer isn't trying to be moral--that his concern about AI replacing humans is just self-interested racism, with no need for moral justification beyond the will to power.

Comment by PhilGoetz on Here's the exit. · 2023-01-15T03:29:47.619Z · LW · GW

I think it would be more-graceful of you to just admit that it is possible that there may be more than one reason for people to be in terror of the end of the world, and likewise qualify your other claims to certainty and universality.

That's the main point of what gjm wrote.  I'm sympathetic to the view you're trying to communicate, Valentine; but you used words that claim that what you say is absolute, immutable truth, and that's the worst mind-killer of all.  Everything you wrote just above seems to me to be just equivocation trying to deny that technical yet critical point.

I understand that you think that's just a quibble, but it really, really isn't.  Claiming privileged access to absolute truth on LessWrong is like using the N-word in a speech to the NAACP.  It would do no harm to what you wanted to say to use phrases like "many people" or even "most people" instead of the implicit "all people", and it would eliminate a lot of pushback.

Comment by PhilGoetz on We need a new philosophy of progress · 2022-12-11T22:52:55.597Z · LW · GW
Comment by PhilGoetz on Alexander Gietelink Oldenziel's Shortform · 2022-11-19T03:15:25.136Z · LW · GW

I say that knowing particular kinds of math, the kind that let you model the world more-precisely, and that give you a theory of error, isn't like knowing another language.  It's like knowing language at all.  Learning these types of math gives you as much of an effective intelligence boost over people who don't, as learning a spoken language gives you above people who don't know any language (e.g., many deaf-mutes in earlier times).

The kinds of math I mean include:

  • how to count things in an unbiased manner; the methodology of polls and other data-gathering
  • how to actually make a claim, as opposed to what most people do, which is to make a claim that's useless because it lacks quantification or quantifiers
    • A good example of this is the claims in the IPCC 2015 report that I wrote some comments on recently.  Most of them say things like, "Global warming will make X worse", where you already know that OF COURSE global warming will make X worse, but you only care how much worse.
    • More generally, any claim of the type "All X are Y" or "No X are Y", e.g., "Capitalists exploit the working class", shouldn't be considered claims at all, and can accomplish nothing except foment arguments.
  • the use of probabilities and error measures
  • probability distributions: flat, normal, binomial, Poisson, and power-law
  • entropy measures and other information theory
  • predictive error-minimization models like regression
  • statistical tests and how to interpret them

These things are what I call the correct Platonic forms.  The Platonic forms were meant to be perfect models for things found on earth.  These kinds of math actually are.  The concept of "perfect" actually makes sense for them, as opposed to for Earthly categories like "human", "justice", etc., for which believing that the concept of "perfect" is coherent demonstrably drives people insane and causes them to come up with things like Christianity.

They are, however, like Aristotle's Forms, in that the universals have no existence on their own, but are (like the circle, but even more like the normal distribution) perfect models which arise from the accumulation of endless imperfect instantiations of them.

There are plenty of important questions that are beyond the capability of the unaided human mind to ever answer, yet which are simple to give correct statistical answers to once you know how to gather data and do a multiple regression.  Also, the use of these mathematical techniques will force you to phrase the answer sensibly, e.g., "We cannot reject the hypothesis that the average homicide rate under strict gun control and liberal gun control are the same with more than 60% confidence" rather than "Gun control is good."
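
As a concrete illustration of that last point, here is a minimal sketch on synthetic, made-up data (numpy and scipy assumed) of a multiple regression plus a hypothesis test whose result is phrased as a confidence statement rather than a bare verdict:

```python
# Synthetic-data sketch (made-up numbers): a two-predictor regression, plus a
# group comparison whose result is stated probabilistically rather than as a
# bare "X is good" / "X is bad".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fake dataset: the outcome depends on x1 but not on x2.
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)

# Multiple regression via ordinary least squares.
X = np.column_stack([np.ones(n), x1, x2])      # intercept + two predictors
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients (intercept, x1, x2):", np.round(coef, 2))

# Group comparison, stated probabilistically.
group_a = rng.normal(loc=5.0, scale=2.0, size=50)
group_b = rng.normal(loc=5.3, scale=2.0, size=50)
t, p = stats.ttest_ind(group_a, group_b)
if p > 0.05:
    print(f"p = {p:.2f}: we cannot reject the hypothesis that the group means are equal")
else:
    print(f"p = {p:.2f}: the difference is unlikely to be due to chance alone")
```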

Comment by PhilGoetz on Daniel Kokotajlo's Shortform · 2022-11-19T02:40:46.864Z · LW · GW

Agree.  Though I don't think Turing ever intended that test to be used.  I think what he wanted to accomplish with his paper was to operationalize "intelligence".  When he published it, if you asked somebody "Could a computer be intelligent?", they'd have responded with a religious argument about it not having a soul, or free will, or consciousness.  Turing sneakily got people to  look past their metaphysics, and ask the question in terms of the computer program's behavior.  THAT was what was significant about that paper.

Comment by PhilGoetz on Gunnar_Zarncke's Shortform · 2022-11-19T02:35:24.455Z · LW · GW

It's a great question.  I'm sure I've read something about that, possibly in some pop book like Thinking, Fast & Slow.  What I read was an evaluation of the relationship of IQ to wealth, and the takeaway was that your economic success depends more on the average IQ in your country than it does on your personal IQ.  It may have been an entire book rather than an article.

Google turns up this 2010 study from Science.  The summaries you'll see there are sharply self-contradictory.

First comes an unexplained box called "The Meeting of Minds", which I'm guessing is an editorial commentary on the article, and it says, "The primary contributors to c appear to be the g factors of the group members, along with a propensity toward social sensitivity."

Next is the article's abstract, which says, "This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group."

These summaries directly contradict each other: Is g a primary contributor, or not a contributor at all?

I'm guessing the study of group IQ is strongly politically biased, with Hegelians (both "right" and "left") and other communitarians, wanting to show that individual IQs are unimportant, and individualists and free-market economists wanting to show that they're important.

Comment by PhilGoetz on Where I agree and disagree with Eliezer · 2022-11-08T00:21:58.201Z · LW · GW

But what makes you so confident that it's not possible for subject-matter experts to have correct intuitions that outpace their ability to articulate legible explanations to others?

That's irrelevant, because what Richard wrote was a truism. An Eliezer who understands his own confidence in his ideas will "always" be better at inspiring confidence in those ideas in others.  Richard's statement leads to a conclusion of import (Eliezer should develop arguments to defend his intuitions) precisely because it's correct whether Eliezer's intuitions are correct or incorrect.

Comment by PhilGoetz on The Debtor's Revolt · 2022-09-18T14:33:50.689Z · LW · GW

The way to dig the bottom deeper today is to get government bailouts, like bailing out companies or lenders, and like Biden's recent tuition debt repayment bill.  Bailouts are especially perverse because they give people who get into debt a competitive advantage over people who don't, in an unpredictable manner that encourages people to see taking out a loan as a lottery ticket.

Comment by PhilGoetz on Replacing Karma with Good Heart Tokens (Worth $1!) · 2022-04-01T17:57:35.573Z · LW · GW

Finding a way for people to make money by posting good ideas is a great idea.

Saying that it should be based on the goodness of the people and how much they care is a terrible idea.  Privileging goodness and caring over reason is the most well-trodden path to unreason.  This is LessWrong.  I go to fimfiction for rainbows and unicorns.

Comment by PhilGoetz on The Cluster Structure of Thingspace · 2021-11-03T14:53:38.699Z · LW · GW

No; most philosophers today do, I think, believe that the alleged humanity of 9-fingered instances of *Homo sapiens* is a serious philosophical problem.  It comes up in many "intro to philosophy" or "philosophy of science" texts or courses.  Post-modernist arguments rely heavily on the belief that any sort of categorization which has any exceptions is completely invalid.

Comment by PhilGoetz on The Cluster Structure of Thingspace · 2021-11-03T14:31:18.393Z · LW · GW

I'm glad to see Eliezer addressed this point.  This post doesn't get across how absolutely critical it is to understand that {categories always have exceptions, and that's okay}.  Understanding this demolishes nearly all Western philosophy since Socrates (who, along with Parmenides, Heraclitus, Pythagoras, and a few others, corrupted Greek "philosophy" from the natural science of Thales and Anaximander, who studied the world to understand it, into a kind of theology, in which one dictates to the world what it must be like).

Many philosophers have recognized that Aristotle's conception of categories fails; but most still assumed that that's how categories must work in order to be "real", and so proving that categories don't work that way proved that categorizations "aren't real".  They then became monists, like the Hindus / Buddhists / Parmenides / post-modernists.  The way to avoid this is to understand nominalism, which dissolves the philosophical understanding of that quoted word "real", and which I hope Eliezer has also explained somewhere.

Comment by PhilGoetz on Kenshō · 2021-10-29T16:29:17.296Z · LW · GW

I theorize that you're experiencing at least two different common, related, yet almost opposed mental re-organizations.

One, which I approve of, accounts for many of the effects you describe under "Bemused exasperation here...".  It sounds similar to what I've gotten from writing fiction.

Writing fiction is, mostly, thinking, with focus, persistence, and patience, about other people, often looking into yourself to try to find some point of connection that will enable you to understand them.  This isn't quantifiable, at least not to me; but I would still call it analytic.  I don't think there's anything mysterious about it, nor anything especially difficult other than (A) caring about other individuals--not other people, in the abstract, but about particular, non-abstract individuals--and (B) acquiring the motivation and energy to think long and hard about them.  Writing fiction is the hardest thing I've ever done.  I don't find it as mentally draining per minute as chess, though perhaps that's because I'm not very interested in chess.  But one does it for weeks on end, not just hours.

(What I've just described applies only to the naturalist school of fiction, which says that fiction studies particular, realistic individuals in particular situations in order to query our own worldview.  The opposed, idealistic school of fiction says that fiction presents archetypes as instructional examples in order to promulgate your own worldview.)

The other thing, your "flibble", sounds to me like the common effect, seen in nearly all religions and philosophies, of a drastic simplification of epistemology, when one blinds oneself to certain kinds of thoughts and collapses one's ontology into a simpler world model, in order to produce a closed, self-consistent, over-simplified view of the world.  Platonists, Christians, Hegelians, Marxists, Nazis, post-modernists, and SJWs each have a drastically-simplified view of what is in the world and how it operates, which always includes "facts" and techniques which discount all evidence to the contrary.

For example, the Buddhist / Hindu / Socratic / post-modernist technique of deconstruction relies on an over-simplified concept of what concepts and categories are--that they must have a clearly delineated boundary, or else must not exist at all.  This goes along with an over-simplified logocentric conception of Truth, which claims that any claim stated in human language must be either True (necessarily, provably, 100% of the time) or False (necessarily, etc.), disregarding both context and the slipperiness of words.  From there, they either choose dualism (this system really works and we must find out what is True: Plato, Christians, Hegel, Marx) or monism (our ontology is obviously broken and there is no true or false, no right or wrong, no you or me: Buddhism, Hinduism, Parmenides, Nazis, Foucault, Derrida, and other post-modernists).  Nearly all of Western and Eastern philosophy is built on this misunderstanding of reality.

For another example, phenomenologists (including Heidegger), Nazis, and SJWs use the concept of "lived experience" to deny that quantified empirical observations have any epistemological value.  This is how they undermine the authority of science, and elevate violence and censorship over reasoned debate as a way of resolving disagreements.

A third example is the claim, made by Parmenides, Plato, Buddhists, Hindus, Christians, and too many others to name, that the senses are misleading.  This argument begins with the observation that every now and then, maybe one time in a million--say, when seeing a mirage in the desert, or a stick underwater (the most-frequent examples)--the senses mislead you.  Then it concludes the senses are always wrong, and assumes that reason is always 100% reliable despite the obvious fact that no 2 philosophers have ever agreed with each other using abstract reason as a guide.  It's a monumentally stupid claim, but once one has accepted it, one can't get rid of it, because all of the evidence that one should do so is now ruled out.

Derrida's statement "there is no outside text" is another argument that observational evidence should be ignored, and that rather than objective quantified evidence, epistemology should be based on dialectic.  In practice this means that a claim is considered proven once enough people talk about it.  This is the epistemology of German idealism and post-modernism.  This is why post-modernists continually talk about claims having been "proven" when a literature search can't turn up a single argument supporting their claims; they are simply accepted as "the text" because they've been repeated enough.  (Barthes' "Death of the Author" is the clearest example: its origin is universally acknowledged to be Barthes' paper of that title; yet that paper makes no arguments in favor of its thesis, but rather asserts that everyone already knows it.)  Needless to say, once someone has accepted this belief, their belief system is invulnerable to any good argument, which would necessarily involve facts and observations.

The "looking up" is usually a looking away from the world and ignoring those complicating factors which make simple solutions unworkable.  Your "flibble" is probably not the addition of some new understanding, but the cutting away and denial of some of the complexities of life to create a self-consistent view of the world.

Genuine enlightenment, the kind provided by the Enlightenment, or by understanding calculus, or nominalism, isn't non-understandable.  It doesn't require any sudden leap, because it can be explained piece by piece.

There are some insights which must be experienced, such as that of learning to whistle, or ride a bicycle, or feeling your voice resonate in your sinuses for the first time when trying to learn to sing.  These are all slightly mysterious; even after learning, you can't communicate them verbally.  But none of them have the grand, sweeping scale of changes in epistemology, which is the sort of thing you're talking about, and which, I think, must necessarily always be explainable, on the grounds that the epistemology we've already got isn't completely useless.

Your perception of needing to make a quantum leap in epistemology sounds like Kierkegaard's "leap of faith", and is symptomatic not of a gain of knowledge, but a rejection of knowledge.  This rejection seems like foolishness beforehand (because it is), but like wisdom after making it (because now everything "makes sense").

Escaping from such a trap, after having fallen into it, is even harder than making the leap of faith that constructed the trap.  I was raised in an evangelical family, who went to an evangelical church, had evangelical friends, read evangelical books, and went on evangelical vacations.  I've known thousands of evangelicals throughout my life, and not one of them other than I rejected their faith.

Genuine enlightenment doesn't feel like suddenly understanding everything.  It feels like suddenly realizing how much you don't understand.

Comment by PhilGoetz on Kenshō · 2021-10-29T15:57:24.951Z · LW · GW

This sounds suspiciously like Plato telling people to stop looking at the shadows on the wall of the cave, turn around, and see the transcendental Forms.

Comment by PhilGoetz on Common knowledge about Leverage Research 1.0 · 2021-10-14T17:38:36.125Z · LW · GW

To me, saying that someone is a better philosopher than Kant seems less crazy than saying that saying that someone is a better philosopher than Kant seems crazy.

Comment by PhilGoetz on Quantum Russian Roulette · 2021-10-01T18:40:50.440Z · LW · GW

An easy reason not to play quantum roulette is that, if your theory justifying it is right, you don't gain any expected utility; you just redistribute it, in a manner most people consider unjust, among different future yous.  If your theory is wrong, the outcome is much worse.  So it's at the very best a break even / lose proposition.
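
A toy version of the arithmetic behind that claim, assuming (purely for illustration) utility linear in money and zero utility in branches where you die:

```python
# Toy arithmetic for the "no expected gain" point: N players each stake S, a
# quantum lottery hands the whole pot to one branch and the rest die.  Assumes,
# for illustration only, utility linear in money and zero utility in branches
# where you die.
N, S = 10, 100.0
before = S                                       # you keep S in every branch
after = (1 / N) * (N * S) + ((N - 1) / N) * 0    # one winning branch, N - 1 dead branches
print(before, after)                             # 100.0 100.0 -- same measure-weighted total
```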

Comment by PhilGoetz on We need a new philosophy of progress · 2021-10-01T18:36:01.261Z · LW · GW

The Von Neumann-Morgenstern theory is bullshit.  It assumes its conclusion.  See the comments by Wei Dai and gjm here.

Comment by PhilGoetz on We need a new philosophy of progress · 2021-10-01T02:38:28.707Z · LW · GW

See the 2nd-to-last paragraph of my revised comment above, and see if any of it jogs your memory.

Comment by PhilGoetz on We need a new philosophy of progress · 2021-10-01T00:45:01.627Z · LW · GW

Republic is the reference. I'm not going to take the hours it would take to give book-and-paragraph citations, because either you haven't read the entire Republic, or else you've read it, but you want to argue that each of the many terrible things he wrote doesn't actually represent Plato's opinion or desire.

(You know it's a big book, right? 89,000 words in the Greek.  If you read it in a collection or anthology, it wasn't the whole Republic.)

The task of arguing over what in /Republic/ Plato approves or disapproves of is arduous and, I think, unnecessary.

First, everybody agrees that the topic of Republic is "social justice", and Plato makes his position on that clear, in Republic and in his other works: Justice is when everybody accepts the job and the class they're born into, without any grumbling or backtalk, and Plato is king and tells everybody what to do.  His conclusion, that justice is when everybody minds their own business (meaning they don't get involved in politics, which should be the business of  philosophers), is clearly meant as a direct refutation of Pericles' summary of Athenian values in his famous funeral oration: "We do not say that a man who shows no interest in politics is a man who minds his own business; we say that he has no business here at all."

When the topic of the book is social justice, and you get to the end and it says "Justice is when everyone does what I say and stays in their place", you should throw that book in the trash.

(This is a bit unfair to Plato, because the Greek word he used meant something more like "righteousness".  "justice" is a lousy translation.  But this doesn't matter to me, because I don't care what Plato meant as much as I care about how people use it; and the Western tradition is to say that Plato was talking about justice.  And it's still a totalitarian conclusion, whether you call it "justice" or "righteousness".)

This view of justice (or righteousness) is consistent with his life and his writings.  He seems to support slavery as natural and proper, though he never talks about it directly; see Vlastos 1941, Slavery in Plato's Thought.  He literally /invented/ racism, in order to theorize that a stable, race-based state, in which the inferior races were completely conditioned and situated so as to be incapable of either having or acting on independent desires or thoughts, would have neither the unrest due to social mobility that democratic Athens had, nor the periodic slave revolts that Sparta had.  He and his clan preferred Sparta to Athens; his uncle, a fellow student of Socrates, was the tyrant of Athens in 404 BC, appointed by Sparta; and murdered 1500 Athenian citizens, mostly for supporting democracy.  Socrates was probably executed in 399 BC not for being a "gadfly", but because the Athenians believed that they'd lost the war with Sparta thanks to the collusion of Socrates' students with Sparta.

Plato had personal, up-close experience of the construction of a bloody totalitarian state, and far from ever expressing a word of disapproval of it, he mocked at least one of its victims in Republic, and continued to advocate totalitarian policies in his writings, such as /The Laws/.  He was a wealthy aristocrat who wanted to destroy democracy and bring back the good old days when you couldn't be taken to court just for killing a slave, as evidenced by the scorn he heaps on working people and merchants in many of his dialogues, and also his jabs at Athens and democracy; and by the Euthyphro, a dialogue with a man who's a fool for taking his father to court for killing a slave.

One common defense of Plato is that his preferred State was the first state he described, the "true state", in which everyone gets just what they need to survive; he actually detested the second, "fevered state", in which people have luxuries (which, he says, can only ever be had by theft and war--property is theft!)

I find this implausible, or at best hypocritical, for several reasons.

  • It's in line with the persona of Socrates, but not at all in line with Plato's actual life of luxury as a powerful and wealthy man.
  • Plato spends a few paragraphs describing the "true state", and the rest of Republic describing the "fevered state" or defending or elaborating on its controversial aspects.
  • He supports the totalitarian policies, such as banning all music, poetry, and art other than government propaganda, with arguments which are sometimes solid if you accept Plato's philosophy.
  • Many of the controversial aspects of the "fevered state" are copied from Sparta, which Plato admired, and which his friends and family fought for against their own city; and direct opposites of Athens, which he hated.

The simplest reading of Republic, I think, is that the second state he described is one he liked to dream about, but knew wasn't plausible.

But my second reason for thinking this debate over Plato's intent is unimportant is that people don't usually read Republic for its brief description of the "true state". Either they just read the first 2 or 3 books and a few other extracts carefully chosen by professors to avoid all the nasty stuff and give the impression that Plato was legitimately trying to figure out what justice means like he claimed; or they read it to get off on the radical policies of the fevered state (which is the political equivalent of BDSM porn).

Some of the policies of that state include: breeding citizens like cattle into races that must be kept distinct, with philosophers telling everyone whom to have sex with, sometimes requiring brothers and sisters to have sex with each other (5.461e); allowing soldiers on campaign to rape any citizen they want to (5.468c); dictating jobs by race; abolishing all art, poetry, and music except government propaganda; banning independent philosophy; the death sentence for repeatedly questioning authority; forbidding doctors from wasting their time on people who are no longer useful to the State because they're old or permanently injured; forced abortions of all children conceived without the State's permission (including for all women over age 40 and all men over age 55); forbidding romantic love, marriage, or raising your own children; outlawing private property (5.464); allowing any citizen to violently assault any other citizen, in order to encourage citizens to stay physically fit (5.464e); and founding of the city by killing everyone over the age, IIRC, of 10.  (He writes "exiling", but you would have to kill them to get them all to give up their children; see e.g. Cambodia).

The closest anybody ever came to implementing the ideas in /Republic/ (which was not a republic, and which Plato actually titled /Politeia/, "The State") was Sparta (which it was obviously based on).  The second-closest was Nazi Germany (also patterned partly on Sparta).  /Brave New World/ is also similar, though much freer.

Comment by PhilGoetz on We need a new philosophy of progress · 2021-09-30T20:13:02.910Z · LW · GW

The most-important thing is to explicitly repudiate these wrong and evil parts of the traditional meaning of "progress":

  • Plato's notion of "perfection", which included his belief that there is exactly one "perfect" society, and that our goal should be to do ABSOLUTELY ANYTHING NO MATTER HOW HORRIBLE to construct it, and then do ABSOLUTELY ANYTHING NO MATTER HOW HORRIBLE to make sure it STAYS THAT WAY FOREVER.
  • Hegel's elaboration on Plato's concept, claiming that not only is there just one perfect end-state, but that there is one and only one path of progress, and that at any one moment, there is only one possible step forward to take.
  • Hegel's corollary to the above, that taking that one next step is literally the only thing in the world that matters, and therefore individual human lives don't matter, and individual liberties such as freedom of speech are just obstructions to progress.
  • Hegel's belief that movement along this path is predestined, and nothing can stop it.
  • Hegel's belief that there is a God ("Weltgeist") watching over Progress and making sure that it happens, so the only thing progressives really need to do to take that One Next Step is to destroy whatever society they're in; and if they are indeed God's current chosen people, God will make sure that something farther along the One True Path rises from the ashes.
  • The rationalist belief, implicit in Plato and Hegel but most prominent in Marx, that through dialectic we can achieve absolute certainty in our understanding of what the perfect society is, and how to get there; and at that point debate should be stopped and all opposition should be silenced.
Comment by PhilGoetz on Group selection update · 2021-09-29T22:48:34.462Z · LW · GW

Sorry; your example is interesting and potentially useful, but I don't follow your reasoning.  This manner of fertilization would be evidence that kin selection should be strong in Chimaphila, but I don't see how this manner of fertilization is itself evidence that kin selection has taken place.  Also, I have no good intuitions about what differences kin selection predicts in the variables you mentioned, except that maybe dispersion would be greater in Chimaphila because of the greater danger of inbreeding.  Also, kin selection isn't controversial, so I don't know where you want to go with this comment.

Comment by PhilGoetz on Rescuing the Extropy Magazine archives · 2021-06-03T14:34:50.389Z · LW · GW

Hi, see above for my email address. Email me a request at that address. I don't have your email. I just sent you a message.

ADDED in 2021: Some people tried to contact me thru LessWrong and Facebook. I check messages there like once a year.  Nobody sent me an email at the email address I gave above. I've edited it to make it more clear what my email address is.

Comment by PhilGoetz on Debate update: Obfuscated arguments problem · 2021-01-12T02:09:32.448Z · LW · GW

[Original first point deleted, on account of describing something that resembled Bayesian updating closely enough to make my point invalid.]

I don't think this approach applies to most actual bad arguments.

The things we argue about the most are ones over which the population is polarized, and polarization is usually caused by conflicts between different worldviews.  Worldviews are constructed to be nearly self-consistent.  So you're not going to be able to reconcile people of different worldviews by comparing proofs.  Wrong beliefs come in sets, where each contradiction caused by one wrong belief is justified by other wrong beliefs.

So for instance, a LessWrongian would tell a Christian that positing a God doesn't explain how life was made, because she's just replaced a complex first life form with an even more-complex God, and what made God?  The Christian will reliably respond that God is eternal, outside of space and time, and was never made.

This response sounds stupid to us, but it's part of a philosophical system built by Plato, which he designed to be self-consistent.  The key parts here are the inversion of "complexity" and the denial of mechanism.

The inversion of complexity is the belief that simple things are greater and more powerful than complex things. The central notion is "purity", and pure, simple things are always superior to complicated things. God is defined as ultimate purity and simplicity.  God is simple because you can fully describe Him just by saying he's perfect, and there's only one way of being perfect.  He's eternal, because if he had a starting-point or an ending-point in time, then other points in time would be equally good, and "perfection" would be ambiguous.  "God is perfectly simple" is actually part of Catholic dogma, and derived from Plato.  So a Christian doesn't think she's replaced complex life with a more-complex God; she's replaced it with a more-simple and therefore more-powerful God.

The denial of mechanism is the denial that anything gets its properties mechanistically.  An animal isn't alive because it eats food and metabolizes it and reproduces; it eats food and metabolizes it and reproduces because it's alive.  Functions are magically inherited from categories ("Forms"), rather than categories arising from a cooperative combination of functions.  (This is why spiritualists who believe in a good God dislike machinery.  It's an abomination to them, as it has new capabilities not inherited from any eternal Form, and their intuition is that it must be animated by some spirit other than God.  They think of magic as natural, and causes other than magic as unnatural; we think just the opposite.)

Because God is perfect, He is omnipotent, and hence has every possible capability, just as He is perfect in every way.  Everything less than God is less powerful, lacking some capabilities, and more-complex, because you must enumerate all those missing capabilities and perfections to describe it.  (This is the metaphysics behind Tolstoy's saying, "Every happy family is happy in the same way. Every unhappy family is unhappy in different ways.")  The Great Chain of Being is a complete linear ordering of every eternal Form, proceeding from God at the top (perfect, simple, omnipotent), down to complete lack and emptiness at the other end (which is Augustinian Evil).  Each step along that chain is a loss of some perfection.

Hence, to the Christian there's no "problem" of complexity in saying that God created life, because God is less-complex than life, and therefore also more-powerful, since complexity implies many losses of perfection and capabilities.  There is no need to posit that God is complex to explain His powers, because capabilities arise from essence, not from mechanics, and God's perfectly-simple essence is to have all capabilities.  This is because Plato designed his ontology to eliminate the problem of how complex life arose.

If you argue with Marxists, post-modernists, or the Woke, you'll similarly find that, for every solid argument you have that proves a belief of theirs is wrong, they have some assumptions which to them justify dismissing your argument.  You'll never find yourself able to compare proofs with an ideological opposite and agree on the validity of each step.

Comment by PhilGoetz on Where do (did?) stable, cooperative institutions come from? · 2020-12-01T20:34:39.734Z · LW · GW

"Cynicism is a self-fulfilling prophecy; believing that an institution is bad makes the people within it stop trying, and the good people stop going there."

I think this is a key observation. Western academia has grown continually more cynical since the advent of Marxism, which assumes an almost absolute cynicism as a point of dogma: all actions are political actions motivated by class, except those of bourgeois Marxists who for mysterious reasons advocate the interests of the proletariat.

This cynicism became even worse with Foucault, who taught people to see everything as nothing but power relations.  Western academics today are such knee-jerk cynics that they can't conceive of loyalty to any organization other than Marxism or the Social Justice movement as being anything but exploitation of the one being loyal.

Pride is the opposite of cynicism, and is one of the key feelings that makes people take brave, altruistic actions.  Yet today we've made pride a luxury of the oppressed.  Only groups perceived as oppressed are allowed to have pride in group memberships.  If you said you were proud of being American, or of being manly, you'd get deplatformed, and possibly fired.

The defamation of pride in mainstream groups is thus destroying our society's ability to create or maintain mainstream institutions.  In my own cynicism, I think someone deliberately intended this.  This defamation began with Marxism, and is now supported by the social justice movement, both of which are Hegelian revolutionary movements which believe that the first step toward making civilization better is to destroy it, or at least destabilize it enough to stage a coup or revolution.  This is the "clean sweep" spoken of so often by revolutionaries since the French Revolution.

Since their primary goal is to destroy civilization, it makes perfect sense that they begin by convincing people that taking pride in any mainstream identity or group membership is evil, as this will be sufficient to destroy all cooperative social institutions, and hence civilization.

Comment by PhilGoetz on The Solomonoff Prior is Malign · 2020-10-17T03:14:19.401Z · LW · GW

"At its core, this is the main argument why the Solomonoff prior is malign: a lot of the programs will contain agents with preferences, these agents will seek to influence the Solomonoff prior, and they will be able to do so effectively."

First, this is irrelevant to most applications of the Solomonoff prior.  If I'm using it to check the randomness of my random number generator, I'm going to be looking at 64-bit strings, and probably very few intelligent-life-producing universe-simulators output just 64 bits; and it's hard to imagine why an alien in a simulated universe would want to bias my RNG anyway.
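
For concreteness, here's a crude sketch of what such an application looks like in practice, using an off-the-shelf compressor as a very loose stand-in for the (uncomputable) Solomonoff prior; `os.urandom` here is just a hypothetical stand-in for whatever RNG is being tested.  The point is that the strings involved are short, application-specific bit strings, not universe histories:

```python
import os
import zlib

def compressibility(data: bytes) -> float:
    """Compressed length over raw length; near (or above) 1.0 means the data
    looks incompressible, i.e. the compressor finds no short description of it."""
    return len(zlib.compress(data, 9)) / len(data)

# Concatenate many 64-bit outputs so the compressor has enough to work with.
good = b"".join(os.urandom(8) for _ in range(10_000))
bad = bytes(range(8)) * 10_000           # an obviously patterned "RNG"

print(compressibility(good))             # ~1.0: passes this crude check
print(compressibility(bad))              # far below 1.0: fails
```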

The S. prior is a general-purpose prior which we can apply to any problem.  The output string has no meaning except in a particular application and representation, so it seems senseless to try to influence the prior for a string when you don't know how that string will be interpreted.

Can you give an instance of an application of the S. prior in which, if everything you wrote were correct, it would matter?

Second, it isn't clear that this is a bug rather than a feature.  Say I'm developing a program to compress photos.  I'd like to be able to ask "what are the odds of seeing this image, ever, in any universe?"  That would probably compress images of plants and animals better than other priors, because in lots of universes life will arise and evolve, and features like radial symmetry, bilateral symmetry, leaves, legs, etc., will arise in many universes.  This biasing of priors by evolution doesn't seem to me different from biasing of priors by intelligent agents; evolution is smarter than any agent we know.  And I'd like to get biasing from intelligent agents, too; then my photo-compressor might compress images of wheels and rectilinear buildings better.

Also in the category of "it's a feature, not a bug" is that, if you want your values to be right, and there's a way of learning the values of agents in many possible universes, you ought to try to figure out what their values are, and update towards them.  This argument implies that you can get that for free by using Solomonoff priors.

(If you don't think your values can be "right", but instead you just believe that your values morally oblige you to want other people to have those values, then you're not following your values; you're following your theory about your values, and you've probably read too much LessWrong for your own good.)

Third, what do you mean by "the output" of a program that simulates a universe? How are we even supposed to notice the infinitesimal fraction of that universe's output which the aliens are influencing to subvert us?  Take your example of Life--is the output a raster scan of the 2D bit array left when the universe goes static?  In that case, agents have little control over the terminal state of their universe (and also, in the case of Life, the string will be almost entirely 0s or almost entirely 1s, and both of those already have huge Solomonoff priors).  Or is it the concatenation of all of the states it goes through, from start to finish?  In that case, by the time intelligent agents evolve, their universe will have already produced more bits than our universe can ever read.

Are you imagining that bits are never output unless the accidentally-simulated aliens choose to output a bit?  I can't imagine any way that could happen, at least not if the universe is specified with a short instruction string.

This brings us to the fourth problem:  It makes little sense to me to worry about averaging in outputs from even mere planetary simulations if your computer is just the size of a planet, because it won't even have enough memory to read in a single output string from most such simulations.

Fifth, you can weight each program's output in proportion to 2^-T, where T is the number of steps it takes the TM to terminate.  You've got to do something like that anyway, because you can't run TMs to completion one after another; you've got to do something like taking a large random sample of TMs and iteratively running each of them one step at a time.  Problem solved.
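
A minimal sketch of that kind of scheme, under the assumption that we already have some (hypothetical) collection of candidate programs represented as `(code, step_fn)` pairs, where `step_fn()` advances the simulated machine one step and returns its output string when it halts (and `None` before then).  The real Solomonoff prior is uncomputable; this only illustrates the dovetailing plus running-time penalty:

```python
def time_penalized_prior(programs, max_rounds=10_000):
    """Dovetail over candidate programs, weighting each halting program's
    output by 2^-(len(code) + T), where T is the number of steps it ran.

    `programs` is a hypothetical list of (code, step_fn) pairs; step_fn()
    returns None until the simulated machine halts, then its output string.
    """
    weights = {}                      # output string -> accumulated weight
    steps = [0] * len(programs)
    halted = [False] * len(programs)
    for _ in range(max_rounds):
        for i, (code, step_fn) in enumerate(programs):
            if halted[i]:
                continue
            steps[i] += 1
            output = step_fn()        # advance this machine by one step
            if output is not None:
                halted[i] = True
                weights[output] = weights.get(output, 0.0) + 2.0 ** -(len(code) + steps[i])
    return weights                    # unnormalized; machines that never halt contribute nothing
```

Programs that run longer than the step budget simply contribute nothing, which is exactly the discounting of long-running universe-simulators being suggested.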

Maybe I'm misunderstanding something basic, but I feel like we're debating how many angels can dance on the head of a pin.

Perhaps the biggest problem is that you're talking about an entire universe of intelligent agents conspiring to change the "output string" of the TM that they're running in.  This requires them to realize that they're running in a simulation, and that the output string they're trying to influence won't even be looked at until they're all dead and gone.  That doesn't seem to give them much motivation to devote their entire civilization to twiddling bits in their universe's final output in order to shift our priors infinitesimally.  And if it did, the more likely outcome would be an intergalactic war over what string to output.

(I understand your point about them trying to "write themselves into existence, allowing them to effectively 'break into' our universe", but as you've already required their TM specification to be very simple, this means the most they can do is cause some type of life that might evolve in their universe to break into our universe.  This would be like humans on Earth devoting the next billion years to tricking God into re-creating slime molds after we're dead.  Whereas the things about themselves that intelligent life actually cares about and self-identifies with are those things that distinguish them from their neighbors.  Their values will be directed mainly towards opposing the values of other members of their species.  None of those distinguishing traits can be implicit in the TM, and even if they could, they'd cancel each other out.)

Now, if they were able to encode a message to us in their output string, that might be more satisfying to them.  Like, maybe, "FUCK YOU, GOD!"

Comment by PhilGoetz on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-27T01:05:26.337Z · LW · GW

I think we learned that trolls will destroy the world.