Comments

Comment by mathenjoyer on Did ChatGPT just gaslight me? · 2022-12-02T16:01:49.910Z · LW · GW

This is how real-life humans talk.

Comment by mathenjoyer on The Teacup Test · 2022-10-08T21:12:23.914Z · LW · GW

Fair.

Something something blackmailer is subjunctively dependent with the teacup! (This is a joke.)

Comment by mathenjoyer on The Teacup Test · 2022-10-08T20:34:19.579Z · LW · GW

No, they can't. See: "akrasia" on the path to protecting their hypothetical predicted future selves 30 years from now.

The teacup takes the W here too. It's indifferent to blackmail! [chad picture]

Comment by mathenjoyer on The Teacup Test · 2022-10-08T20:32:54.146Z · LW · GW

I don't disagree with any of this.

And yet, some people seem to be generalizedly "better at things" than others. And I am more afraid of a broken human person (he might shoot me) than a broken teacup.

It is certainly possible that "intelligence" is a purely intrinsic property of my own mind, a way to measure "how much do I need to use the intentional stance to model another being, rather than model-based reductionism?" But this is still a fact about reality, since my mind exists in reality. And in that case "AI alignment" would still be a necessary field, because there are objects that have a larger minimal-complexity-to-express than the size of my mind, and I would want knowledge that allows me to approximate their behavior.

But I can't robustly define words like "intelligence" in a way that beats the teacup test. So overall I am unwilling to say "the entire field of AI Alignment is bunk because intelligence isn't a meaningful concept." I just feel very confused.

Comment by mathenjoyer on [deleted post] 2022-05-01T23:21:19.533Z

"A map that reflects the territory. Territory. Not people. Not geography. Not land. Territory. Land and its people that have been conquered."

The underlying epistemology and decision theory of the Sequences is AIXI. To AIXI, the entire universe is just waiting to be conquered and tiled with value, because AIXI is sufficiently far-sighted to be able to perfectly model "people, geography, and land" and thus map them nondestructively.

The fact that mapping destroys things is a fact about the scope of the mapper's mind, and the individual mapping process, not about maps and territories in general. You cannot buy onigiri in Berkeley, but you can buy rice triangles, a conquering/approximation of onigiri, which (if cooked well) is just as useful for the purposes "satisfy my hunger, satisfy my aesthetic sense of taste."

Perhaps this is all already implied by the post in a subtle way, but I get a strong "optimization is evil" vibe from it, which I do not think is true.

"They call themselves "the Territory". It is built on stolen Native American land.

Territory.

Territory."

IIUC this is saying that the minds of Rationalists are conquered territory. This is correct.

Comment by mathenjoyer on [deleted post] 2022-04-28T23:05:37.592Z

Human reasoning is not Bayesian because Bayesianism requires perfectly accurate introspective belief about one's own beliefs.

Human reasoning is not Frequentist because Frequentism requires access to the frequency of an event, which is not accessible because humans cannot remember the past with accuracy.

To be "frequentist" or "bayesian" is merely a philosophical posture about the correct way to update beliefs in response to sense-data. But this is an open problem: the current best solution AFAIK is Logical Induction. 

Comment by mathenjoyer on Ineffective Altruism · 2022-04-23T23:41:18.551Z · LW · GW

This is one of the most morally powerful things you have ever written. Thanks.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-12-18T10:39:41.306Z · LW · GW

This is actually completely fair. So is the other comment.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-24T03:30:37.481Z · LW · GW

Thank you for echoing common sense!

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-24T03:30:04.991Z · LW · GW

Specific claim: the only nontrivial obstacle in front of us is not being evil

This is false. Object-level stuff is actually very hard.

Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)

This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person's language. Ideas and institutions have the agency; they wear people like skin.

Specific claim: this is how to take over New York.

Didn't work.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-24T03:26:40.599Z · LW · GW

This is fair, actually.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T06:35:53.423Z · LW · GW

You absolutely have a reason to believe the article is worth reading.

If you live coordinated with an institution, spending 5 minutes of actually trying (every few months) to see if that institution is corrupt is a worthy use of time.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T06:00:36.850Z · LW · GW

This is actually very fair. I think he does kind of insert information into people.

I never really felt like a question-generating machine, more like a pupil at the foot of a teacher, trying to integrate the teacher's information.

I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.

Thanks!

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T03:47:32.369Z · LW · GW

I think you are entirely wrong.

However, I gave you a double-upvote because you did nothing normatively wrong. The fact that you are being mass-downvoted just because you linked to that article and because you seem to be associated with Ziz (because of the gibberish name and specific conception of decision theory) is extremely disturbing.

Can we have LessWrong not be Reddit? Let's not be Reddit. Too late, we're already Reddit. Fuck.

You are right that, unless people can honor precommitments perfectly and castration is irreversible even with transhuman technology, Omegarapist will still alter his decision theory. Despite this, there are probably better solutions than killing or disabling him. I say this not out of moral ickiness, but out of practicality.

-

Imagine both you and Omegarapist are actual superintelligences. Then you can just make a utility-function merge to avoid the inefficiency of conflict, and move on with your day.
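(A toy sketch of what I mean by a utility-function merge, with outcomes, utilities, and weights I made up for illustration: the merged agent optimizes a weighted sum of both utility functions, and a compromise outcome beats burning resources on conflict.)

```python
# Toy utility merge: two agents with different utility functions agree to
# jointly optimize a weighted sum instead of fighting over the outcome.
# Outcomes, utilities, and bargaining weights here are invented for illustration.

def merged_utility(u_a, u_b, weight_a: float):
    """Return a single utility function: weight_a * U_A + (1 - weight_a) * U_B."""
    return lambda outcome: weight_a * u_a(outcome) + (1 - weight_a) * u_b(outcome)

outcomes = ["conflict", "compromise_1", "compromise_2"]
u_a = {"conflict": 2, "compromise_1": 8, "compromise_2": 5}.get
u_b = {"conflict": 2, "compromise_1": 4, "compromise_2": 9}.get

u_merged = merged_utility(u_a, u_b, weight_a=0.5)
print(max(outcomes, key=u_merged))  # a compromise beats conflict for the merged agent
```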

Humans have a similar form of this. Humans, even when sufficiently distinct in moral or factual position as to want to kill each other, often don't. This is partly because of an implicit assumption that their side, the correct side, will win in the end, and that this is less true if they break the symmetry and use weapons. Scott uses the example of a pro-life and pro-choice person having dinner together, and calls it "divine intervention."

There is an equivalent of this with Omegarapist. Make some sort of pact and honor it: he won't rape people, but you won't report his previous rapes to the Scorched Earth Dollar Auction squad. Work together on decision theory until the project is complete. Then agree either to utility-merge with him in the consequent utility function, or just shoot him. I call this "swordfighting at the edge of a cliff while shouting about our ideologies." I would be willing to work with Moldbug on Strong AI, but if we had to input the utility function, the person who would win would be determined by a cinematic swordfight. In a similar case with my friend Sudo Nim, we could just merge utilities.

If you use the "shoot him" strategy, Omegarapist is still dead. You just got useful work out of him first. If he rapes people, just call in the Dollar Auction squad. The problem here isn't cooperating with Omegarapist, it's thinking to oneself "he's too useful to actually follow precommitments about punishing" if he defects against you. This is fucking dumb. There's a great webnovel called Reverend Insanity which depicts what organizations look like when everyone uses pure CDT like this. It isn't pretty, and it's also a very accurate depiction of the real world landscape.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T03:28:49.861Z · LW · GW

This is a very good criticism! I think you are right about people not being able to "just."

My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects or who criticize popular institutions. Perhaps it is the case that people genuinely tried to make a strategy but automatically rejected my toy strategies as false. I do not think this is the case, based on "vibe" and on the arguments that people are making, such as "argument from cult."

I think you are actually completely correct about those strategies being bad. Instead, I failed to point out that I expect a certain level of mental robustness-to-nonsanity from people literally called "rationalists." This comes off as sarcastic but I mean it completely literally.

Precommitting isn't easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as "five minutes of actually trying" and alkjash's "Hammertime." Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social learning.) Many problems are solved immediately and almost effortlessly by just giving the reins to the small part.

Relatedly, to address one of your examples, I expect at least one of the following things to be true about any given competent rationalist.

  1. They have a physiological problem.
  2. They don't believe becoming fit is worth their time, and have a good reason to go against the naive first-order model of "exercise increases energy and happiness set point."
  3. They are fit.

Hypocritically, I fail all three of these criteria. I take full blame for this failure and plan on ameliorating it. (You don't have to take Heroic Responsibility for the world, but you have to take it about yourself.)

A trope-y way of thinking about it is: "We're supposed to be the good guys!" Good guys don't have to be heroes, but they have to be at least somewhat competent, and they have to, as a strong default, treat potential enemies like their equals.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T03:13:49.372Z · LW · GW

EDIT: Ben is correct to say we should taboo "crazy."

This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren't as positive utility as they thought. (entirely wrong)

I also don't think people interpret Vassar's words as a strategy and implement incoherence. Personally, I interpreted Vassar's words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don't know how to function without one. This is what happened to me and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed)

Beyond this, I think your model is accurate.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T03:08:48.215Z · LW · GW

On the third paragraph:

I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)

Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See "Safety in numbers" by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.)

I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils.

I sometimes round things; that is not inherently bad.

Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things then dimming being bad in most contexts follows as a natural consequence.

On the second paragraph:

This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without legible elucidation, false ideas gained this way are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians.

Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is "this is true everywhere and false nowhere." See "The Proper Use of Humility," and for an example of how delineations often should be large, "Universal Fire."

On the first paragraph:

Reality is hostile through neutrality. Any optimizing agent naturally optimizes against most other optimization targets when resources are finite. Lifeforms are (badly) optimized for inclusive genetic fitness. Thermodynamics looks like the sort of Universal Law that an evil god would construct. According to a quick Google search, approximately 3,700 people die in car accidents per day, and people think this is completely normal.

Many things are actually effective. For example, most places in the United States have drinkable-ish running water. This is objectively impressive. Any model must not be made entirely out of "the world is evil"; otherwise it runs against the facts. But the natural mental motion you make, as a default, should be, "How is this system produced by an aggressively neutral, entirely mechanistic reality?"

See the entire Sequence on evolution, as well as Beyond the Reach of God.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T02:48:10.540Z · LW · GW

I am not sure how much 'not destabilize people' is an option that is available to Vassar.

My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.

Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of "you are expected to behave better for status reasons look at my smug language"-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, "Vassar, just only say things that you think will have a positive effect on the person." 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.

In the pathological case of Vassar, I think the naive strategy of "just say the thing you think is true" is still correct.

Mental training absolutely helps. I would say that, considering that the people who talk with Vassar are literally from a movement called rationality, it is a normatively reasonable move to expect them to be mentally resilient. Factually, this is not the case. The "maybe insane" part is definitely not unavoidable, but right now I think the problem is with the people talking to Vassar, and not he himself.

I'm glad you enjoyed the post.

Comment by mathenjoyer on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T02:17:08.384Z · LW · GW

Thing 0:

Scott.

Before I actually make my point I want to wax poetic about reading SlateStarCodex.

In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."

This is extremely relatable to my lived experience. I am a stereotypical "high-functioning autist." I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.

To the degree that "rationality styles" are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.

Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.

Thing 1:

Imagine two world models:

  1. Some people want to act as perfect nth-order cooperating utilitarians, but can't because of human limitations. They are extremely scrupulous, so they feel anguish and collapse emotionally. To prevent this, they rationalize and confabulate explanations for why their behavior actually is perfect. Then a moderately schizotypal man arrives and says: "Stop rationalizing." Then the humans revert to the all-consuming anguish.
  2. A collection of imperfect human moral actors who believe in utilitarianism act in an imperfect utilitarian way. An extremely charismatic man arrives who uses their scrupulosity to convince them they are not behaving morally, and then leverages their ensuing anguish to hijack their agency.

Which of these world models is correct? Both, obviously, because we're all smart people here and understand the Machiavellian Intelligence Hypothesis.

Thing 2:

Imagine a being called Omegarapist. It has important ideas about decision theory and organizations. However, it has an uncontrollable urge to rape people. It is not a superintelligence; it is merely an extremely charismatic human. (This is a refutation of the Brent Dill analogy. I do not know much about Brent Dill.)

You are a humble student of Decision Theory. What is the best way to deal with Omegarapist?

  1. Ignore him. This is good for AI-box reasons, but bad because you don't learn anything new about decision theory. Also, humans with strange mindstates are more likely to provide new insights, conditioned on them having insights to give (this condition excludes extreme psychosis).
  2. Let Omegarapist out. This is a terrible strategy. He rapes everybody, AND his desire to rape people causes him to warp his explanations of decision theory.

Therefore we should use Strategy 1, right? No. This is motivated stopping. Here are some other strategies.

1a. Precommit to only talk with him if he castrates himself first.

1b. Precommit to call in the Scorched-Earth Dollar Auction Squad (California law enforcement) if he has sex with anybody involved in this precommitment; then let him talk with anybody he wants.

I made those in 1 minute of actually trying.

Returning to the object level, let us consider Michael Vassar. 

Strategy 1 corresponds to exiling him. Strategy 2 corresponds to a complete reputational blank-slate and free participation. In three minutes of actually trying, here are some alternate strategies.

1a. Vassar can participate but will be shunned if he talks about "drama" in the rationality community or its social structure. 

1b. Vassar can participate but is not allowed to talk with one person at once, having to always be in a group of 3.

2a. Vassar can participate but has to give a detailed citation, or an extremely prominent low-level epistemic status mark, to every claim he makes about neurology or psychiatry. 

I am not suggesting any of these strategies, or even endorsing the idea that they are possible. I am asking: WHY THE FUCK IS EVERYONE MOTIVATED STOPPING ON NOT LISTENING TO WHATEVER HE SAYS!!!

I am a contractualist and a classical liberal. However, I recognize the empirical fact that there are large cohorts of people who relate to language exclusively for the purpose of predation and resource expropriation. What is a virtuous man to do?

The answer relies on the nature of language. Fundamentally, the idea of a free marketplace of ideas doesn't rely on language or its use; it relies on the asymmetry of a weapon. The asymmetry of a weapon is a mathematical fact about information processing. It exists in the territory. If you see an information source that is dangerous, build a better weapon.

You are using a powerful asymmetric weapon of Classical Liberalism called language. Vassar is the fucking Necronomicon. Instead of sealing it away, why don't we make another weapon? This idea that some threats are temporarily too dangerous for our asymmetric weapons, and have to be fought with methods other than reciprocity, is the exact same epistemology-hole found in diversity-worship.

"Diversity of thought is good."

"I have a diverse opinion on the merits of vaccination."

"Diversity of thought is good, except on matters where diversity of thought leads to coercion or violence."

"When does diversity of thought lead to coercion or violence?"

"When I, or the WHO, say so. Shut up, prole."

This is actually quite a few skulls, but everything has quite a few skulls. People die very often. 

Thing 3:

Now let me address a counterargument:

Argument 1: "Vassar's belief system posits a near-infinitely powerful omnipresent adversary that is capable of ill-defined mind control. This is extremely conflict-theoretic, and predatory."

Here's the thing: rationality in general is similar. I will use that same anti-Vassar counterargument as a steelman for sneerclub.

Argument 2: "The beliefs of the rationality community posit complete distrust in nearly every source of information and global institution, giving them an excuse to act antisocially. It describes human behavior as almost entirely Machiavellian, allowing them to be conflict-theoretic, selfish, rationalizing, and utterly incapable of coordinating. They 'logically deduce' the relevant possibility of eternal suffering or happiness for the human species (FAI and s-risk), and use that to control people's current behavior and coerce them into giving up their agency."

There is a strategy that accepts both of these arguments. It is called epistemic learned helplessness. It is actually a very good strategy if you are a normie. Metis and the reactionary concept of "traditional living/wisdom" are related principles. I have met people with 100 IQ who I would consider highly wise, due to skill at this strategy (and not accidentally being born religious, which is its main weak point.)

There is a strategy that rejects both of these arguments. It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype. Very few people follow this strategy so it is hard to give examples, but I will leave a quote from an old Scott Aaronson paper that I find very inspiring. "In pondering these riddles, I don’t have any sort of special intuition, for which the actual arguments and counterarguments that I can articulate serve as window-dressing. The arguments exhaust my intuition."

THERE IS NO EFFECTIVE LONG-TERM STRATEGY THAT REJECTS THE SECOND ARGUMENT BUT ACCEPTS THE FIRST! THIS IS WHERE ALL THE FUCKING SKULLS ARE! Why? Because it requires a complex notion of what arguments to accept, and the more complex the notion, the easier it will be to rationalize around, apply inconsistently, or Goodhart. See "A formalist manifesto" by Moldbug for another description of this. (This reminds me of how UDT/FDT/TDT agents behave better than causal agents at everything, but get counterfactually mugged, which seems absurd to us. If you try to come up with some notion of "legitimate information" or "self-locating information" to prevent an agent from getting mugged, it will similarly lose functionality in the non-anthropic cases. [See the Sleeping Beauty problem for a better explanation.])
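(For concreteness, a toy counterfactual mugging with numbers I made up: Omega flips a fair coin, asks you for $100 on tails, and pays $10,000 on heads only if it predicts you would have paid on tails. The policy of paying wins in expectation, even though paying after seeing tails looks absurd.)

```python
# Toy counterfactual mugging with made-up numbers: Omega flips a fair coin.
# Tails: it asks you for $100. Heads: it pays $10,000 only if it predicts you
# would have paid on tails. A policy chosen before the flip pays; an agent
# deciding only after seeing tails refuses, and so gets nothing on heads.

pay_cost, reward, p_heads = 100, 10_000, 0.5

ev_policy_pay = p_heads * reward + (1 - p_heads) * (-pay_cost)   # updateless-style policy
ev_refuse_after_tails = p_heads * 0 + (1 - p_heads) * 0          # CDT-style ex-post choice

print(ev_policy_pay, ev_refuse_after_tails)  # 4950.0 vs 0.0
```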

The only real social epistemologies are of the form:

"Free speech, but (explicitly defined but also common-sensibly just definition of ideas that lead to violence)."

Mine in particular is, "Free speech but no (intentionally and directly inciting panic or violence using falsehoods)."

To put it a certain way, once you get on the Taking Ideas Seriously train, you cannot get off. 

Thing 4:

Back when SSC existed, I got bored one day and looked at the blogroll. I discovered Hivewired. It was bad. Through Hivewired I discovered Ziz. I discovered the blackmail allegations while sick with a fever and withdrawing off an antidepressant. I had a mental breakdown, feeling utterly betrayed by the rationality community despite never even having talked to anyone in it. Then I rationalized it away. To be fair, this was reasonable considering the state in which I processed the information. However, the thought processes used to dismiss the worry were absolutely rationalizations. I can tell because I can smell them.

Fast forward a year. I am at a yeshiva to discover whether I want to be religious. I become an anti-theist and start reading rationality stuff again. I check out Ziz's blog out of perverse curiosity. I go over the allegations again. I find a link to a more cogent, falsifiable, and specific case. I freak the fuck out. Then I get to work figuring out which parts are actually true.

MIRI paid out to blackmail. There's an unironic Catholic working at CFAR and everyone thinks this is normal. He doesn't actually believe in god, but he believes in belief, which is maybe worse. CFAR is a collection of rationality workshops, not a coordinated attempt to raise the sanity waterline (Anna told me this in a private communication, and this is public information as far as I know), but has not changed its marketing to match. Rationalists are incapable of coordinating, which is basically their entire job. All of these problems were foreseen by the Sequences, but no one has read the Sequences because most rationalists are an army of sci-fi midwits who read HPMOR then changed the beliefs they were wearing. (Example: Andrew Rowe. I'm sorry but it's true, anyways please write Arcane Ascension book 4.)

I make contact with the actual rationality community for the first time. I trawl through blogs, screeds, and incoherent symbolist rants about morality written as a book review of The Northern Caves. Someone becomes convinced that I am an internet gangstalker who created an elaborate false identity of an 18-year-old gap year kid to make contact with them. Eventually I contact Benjamin Hoffman, who leads me to Vassar, who leads me to the Vassarites.

He points out to me a bunch of things that were very demoralizing, and absolutely true. Most people act evil out of habituation and deviancy training, including my loved ones. Global totalitarianism is a relevant s-risk as societies become more and more hysterical due to a loss of existing wisdom traditions, and too low of a sanity waterline to replace them with actual thinking. (Mass surveillance also helps.)

I work on a project with him trying to create a micro-state within the government of New York City. During and after this project I am increasingly irritable and misanthropic. The project does not work. I effortlessly update on this, distance myself from him, then process the feeling of betrayal by the rationality community and inability to achieve immortality and a utopian society for a few months. I stop being a Vassarite. I continue to read blogs to stay updated on thinking well, and eventually I unlearn the new associated pain. I talk with the Vassarites as friends and associates now, but not as a club member.

What does this story imply? Michael Vassar induced mental damage in me, partly through the way he speaks and acts. However, as a primary effect of this, he taught me true things. With basic rationality skills, I avoided contracting the Vassar, then healed the damage to my social life and behavior caused by this whole shitstorm (most of said damage was caused by non-Vassar factors).

Now I am significantly happier, more agentic, and more rational.

Thing 5:

When I said what I did in Thing 1, I meant it. Vassar gets rid of identity-related rationalizations. Vassar drives people crazy. Vassar is very good at getting other people to see the moon in finger pointing at the moon problems and moving people out of local optimums into better local optimums. This requires the work of going downwards in the fitness landscape first. Vassar's ideas are important and many are correct. It just happens to be that he might drive you insane. The same could be said of rationality. Reality is unfair; good epistemics isn't supposed to be easy. Have you seen mathematical logic? (It's my favorite field).

An example of an important idea that may come from Vassar, but is likely much older:

Control over a social hierarchy goes to a single person; this is a plurality preference-aggregation system. In those, the best strategy is to vote within one of the two blocs that "matter." Similarly, if you need to join a war and know you will be killed if your side loses, you should join the winning side. Thus humans are attracted to powerful groups of humans. This is a (grossly oversimplified) evolutionary origin of one type of conformity effect.
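A toy sketch of that voting claim (all numbers mine): under plurality rule a vote outside the two leading blocs is almost never pivotal, so the instrumentally rational move is to pick between the front-runners.

```python
# Toy plurality election (all numbers invented): a sincere vote for a minor
# candidate is wasted, while joining one of the two big blocs can be pivotal.
from collections import Counter

def winner(ballots):
    return Counter(ballots).most_common(1)[0][0]

# 100 voters: two big blocs and a small faction whose ranking is C > B > A.
ballots = ["A"] * 49 + ["B"] * 48 + ["C"] * 3

print(winner(ballots))                                    # A wins; the sincere C votes change nothing
print(winner(["B" if b == "C" else b for b in ballots]))  # if the C faction backs B, B wins
```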

Power is the ability to make other human beings do what you want. There are fundamentally two strategies to get it: help other people so that they want you to have power, or hurt other people to credibly signal that you already have power. (Note the correspondence of these two to dominance and prestige hierarchies). Credibly signaling that you have a lot of power is almost enough to get more power.

However, if you have people harming themselves to signal your power, and they admit publicly that they are harming themselves, they can coordinate with neutral parties to move the Schelling point and establish a new regime. Thus there are two obvious strategies to achieving ultimate power: help people get what they want (extremely difficult), make people publicly harm themselves while shouting how great they feel (much easier). The famous bad equilibrium of 8 hours of shocking oneself per day is an obvious example.

Benjamin Ross Hoffman's blog is very good, but awkwardly organized. He conveys explicit, literal models of these phenomena that are very useful and do not carry the risk of filling your head with whispers from the beyond. However, they have less impact because of it.

Thing 6:

I'm almost done with this mad effortpost. I want to note one more thing. Mistake theory works better than conflict theory. THIS IS NOT NORMATIVE.

Facts about the map-territory distinction and facts about the behaviors of mapmaking algorithms are facts about the territory. We can imagine a very strange world where conflict theory is a more effective way to think. One of the key assumptions of conflict theorists is that complexity or attempts to undermine moral certainty are usually mind control. Another key assumption is that entrenched power groups, or individual malign agents, will use these things to hack you.

These conditions are neither necessary nor sufficient for conflict theory to be better than mistake theory. I have an ancient and powerful technique called "actually listening to arguments." When I'm debating with someone who I know argues in bad faith, I decrypt everything they say into logical arguments. Then I use those logical arguments to modify my world model. One might say adversaries can use biased selection and rationalization to make you less logical despite this strategy. I say, on an incurable hardware and wetware level, you are already doing this. (For example, any Bayesian agent of finite storage space is subject to the Halo Effect, as you described in a post once.) Having someone do it in a different direction can helpfully knock you out of your models and back into reality, even if their models are bad. This is why it is still worth decrypting the actual information content of people you suspect to be in bad faith.

Uh, thanks for reading, I hope this was coherent, have a nice day.