Posts

Class consciousness for those against the class system 2023-12-08T01:02:49.613Z
Are humans misaligned with evolution? 2023-10-19T03:14:14.759Z
Bíos brakhús 2023-10-17T03:52:25.277Z
EconTalk podcast: "Eliezer Yudkowsky on the Dangers of AI" 2023-05-09T11:14:27.173Z
Interpersonal alignment intuitions 2023-02-23T09:37:22.603Z
The male AI alignment solution 2023-02-22T16:34:12.414Z
Gamified narrow reverse imitation learning 2023-02-21T04:26:45.792Z
What are some Works that might be useful but are difficult, so forgotten? 2022-08-09T02:22:41.891Z
Why do some people try to make AGI? 2022-06-06T09:14:44.346Z
Defensiveness as hysterical invisibility 2022-02-27T15:28:46.083Z
Does Braess's paradox show up in markets? 2021-12-29T12:09:18.761Z
Analysis of Bird Box (2018) 2021-12-13T17:30:15.045Z
Do factored sets elucidate anything about how to update everyday beliefs? 2021-11-22T06:51:15.655Z
Hope and False Hope 2021-09-04T09:46:23.513Z
Thinking about AI relationally 2021-08-16T22:03:07.780Z
Strategy vs. PR-narrative 2021-08-15T22:40:59.527Z
Evidence that adds up 2021-07-29T03:27:34.676Z
ELI12: how do libertarians want wages to work? 2021-06-24T07:00:02.206Z
Visualizing in 5 dimensions 2021-06-19T18:15:18.160Z

Comments

Comment by TekhneMakre on ejenner's Shortform · 2024-03-13T08:00:56.992Z · LW · GW

I'm not saying that it looks like you're copying your views; I'm saying that the updates look like movements towards believing in a certain sort of world: the sort of world where it's natural to be optimistically working together with other people on projects that are fulfilling because you believe they'll work. (This is a super empathizable-with movement, and a very common movement to make. Also, of course this is just one hypothesis.) For example, moving away from theory and "big ideas", as well as moving towards incremental / broadly-good-seeming progress, as well as believing more in a likely continuum of value of outcomes, all fit with trying to live in a world where it's more immediately motivating to do stuff together. Instead of withholding motivation until something that might really work is found, the view here says: no, let's work together on whatever, and maybe it'll help a little, and that's worthwhile because every little bit helps, and the withholding-motivation thing wasn't working anyway.

(There could be correct reasons to move toward believing and/or believing in such worlds; I just want to point out the pattern.)

Comment by TekhneMakre on ejenner's Shortform · 2024-03-13T02:41:20.941Z · LW · GW

I note that almost all of these updates are (weakly or strongly) predicted by thinking of you as someone who is trying to harmonize better with a nice social group built around working together to do "something related to AI risk".

Comment by TekhneMakre on Evolution did a surprising good job at aligning humans...to social status · 2024-03-10T20:29:33.136Z · LW · GW

How are you telling the difference between "evolution aligned humans to this thing that generalized really well across the distributional shift of technological civilization" vs. "evolution aligned humans to this thing, which then was distorted / replaced / cut down / added to by the distributional shift of technological civilization"?

Comment by TekhneMakre on Thomas Kwa's Shortform · 2024-03-07T02:45:54.292Z · LW · GW

Isn't a major point of purifiers to get rid of pollutants, including tiny particles, that gradually but cumulatively damage respiration over long-term exposure?

Comment by TekhneMakre on Acting Wholesomely · 2024-02-27T07:39:01.483Z · LW · GW

From Owen's post: "I’d suggested her as a candidate earlier in the application process, but was not part of their decision-making process". "Unrelated job offer" is a bad description of that. I don't see the claim about hosting in the post, but that would soften things a little if true.

Anyway, it's not a random blog post! If it was a post about how many species of flowers there are or whatever, then my comment wouldn't make sense. But it's not random! It's literally about acting wholesomely! His very unwholesome behavior is very relevant to a post he's making to the forum of record about what wholesome behavior is!

Comment by TekhneMakre on CFAR Takeaways: Andrew Critch · 2024-02-27T06:30:30.659Z · LW · GW

It makes sense, but I think it's missing that adults who try to want in the current social world get triggered and/or traumatized as fuck because everyone else is behaving the way you describe.

Comment by TekhneMakre on Acting Wholesomely · 2024-02-27T06:03:05.031Z · LW · GW

Really...? You think it's not indicative of a nontrivial amount of poison, to be in a position of brokering statusful positions to young people, fly one out, surprise her with an expectation that she'll stay in his house, and greet her with "hold on, I'm gonna masturbate"? This is... a pretty big disconnect between your mindset and mine.

Comment by TekhneMakre on Acting Wholesomely · 2024-02-27T05:45:02.996Z · LW · GW

I agree, but he should be more forthcoming!

Comment by TekhneMakre on Acting Wholesomely · 2024-02-27T03:40:08.035Z · LW · GW

@Zach Stein-Perlman @habryka Since I guess you don't understand what I'm saying: If someone's going to read an essay about a topic that's entwined with soulcrafting, and that essay is written by someone who has some amount of poison in them, then the reader should be aware of this. Care to say what you disagree with about that?

Comment by TekhneMakre on Acting Wholesomely · 2024-02-27T03:14:40.985Z · LW · GW

When it comes to soul-related stuff like this, I'd want to keep in mind who the author is... https://forum.effectivealtruism.org/posts/QMee23Evryqzthcvn/a-statement-and-an-apology

Comment by TekhneMakre on Believing In · 2024-02-08T19:18:39.682Z · LW · GW

https://www.lesswrong.com/posts/KktH8Q94eK3xZABNy/hope-and-false-hope

Comment by TekhneMakre on Ciocourut's Shortform · 2024-01-08T08:57:41.539Z · LW · GW

climate?

Comment by TekhneMakre on Ciocourut's Shortform · 2024-01-08T08:57:19.685Z · LW · GW

My guess is that a good way to start is to write a short or medium length post that talks about one thing that seems really interesting to you, that LessWrong readers probably haven't heard about / thought about.

Comment by TekhneMakre on Practically A Book Review: Appendix to "Nonlinear's Evidence: Debunking False and Misleading Claims" (ThingOfThings) · 2024-01-06T08:53:38.901Z · LW · GW

The standard that you seem to be suggesting is Kafkaesque. Someone accuses you of something, you prove them false, but that doesn't count because of strategic meanings of words. What?

But imagine this from the other side of a conflict. There's a social norm:

Don't isolate people (e.g. because it makes them vulnerable, e.g. to abuse).

Now a hypothetical (cartoonishly explicit) bad actor comes along and says "Aha, I know what to do, I will use my soft power to isolate my employee, but only from some people, and that way I'm not "isolating" them, but I can still control their social context of influences, support, and ideology". (To be extra clear: I'm not following the story in detail and I'm genuinely not claiming that Nonlinear is like this; there's some possible relevance, in that their genuinely well-intended actions might possibly have had a similarly bad effect as this hypothetical cartoonish bad actor would hypothetically have had.)

So this bad actor does this. Now, did they isolate the person? Did they violate the norm? Can you accuse them of isolating their employee? Do you have to exactly specify what shape of isolation, on pain of making an infinitely malleable accusation? If you later specify the shape / form of the isolation, are you changing the accusation?

Comment by TekhneMakre on If Clarity Seems Like Death to Them · 2024-01-03T11:47:20.748Z · LW · GW

Any of them. My point is that "climb!" is kind of like a message about the territory, in that you can infer things from someone saying it, and in that it can be intended to communicate something about the territory, and can be part of a convention where "Climb!" means "There's a bear!" or whatever; but still, "Climb!" is, besides being an imperative, a word that's being used to bundle actions together. Actions are kinda part of the territory, but as actions they're also sort of internal to the speaker (in the same way that a map is also part of the territory, but it's also internal to the speaker) and so have some special status. Part of that special status is that your actions, and how you bundle your actions, are up to your choice, in a way that it's not up to your choice whether there's a biological male/female approximate-cluster-approximate-dichotomy, or whether 2+4=6, etc.

Comment by TekhneMakre on If Clarity Seems Like Death to Them · 2024-01-03T07:12:08.014Z · LW · GW

If someone wants to be classified as "... has XY chromosomes, is taller-on-average, has a penis..." and they aren't that, then it's a pathological preference, yeah. But categories aren't just for describing territory, they're also for coding actions. If a human says "Climb!" to another human, is that a claim about the territory? You can try to infer a claim about reality, like "There's something in reality that makes it really valuable for you to climb right now, assuming you have the goals that I assume you have".

If someone says "call me 'he' ", it could be a pathological preference. Or it could be a preference to be treated by others with the male-role bundle of actions. That preference could be in conflict with others' preferences, because others might only want to treat a person with the male-role bundle if that person "... has XY chromosomes, is taller-on-average, has a penis..." . Probably it's both, and they haven't properly separated out their preferences / society hasn't made it convenient for them to separate out their preferences / there's a conflict about treatment that is preventing anyone from sorting out their preferences.

"Okay, let's redefine the word 'pretty' such that it includes you" actually makes some sense. Specifically, it's an appeal to anti-lookism. It's of course confused, because ugliness is also an objective thing. And it's a conflict, because most people want to treat ugly people differently than they treat pretty people, so the request to be treated like a pretty person is being refused.

Comment by TekhneMakre on Dating Roundup #2: If At First You Don’t Succeed · 2024-01-03T06:35:47.879Z · LW · GW

On reflection, this post seems subtly but deeply deranged, assuming this is true:

People living 50, or 100, or 200 years ago didn't have nearly this much trouble dating.

If that's true, then all this stuff is beside the point, and the question is what changed.

Comment by TekhneMakre on If Clarity Seems Like Death to Them · 2024-01-02T17:05:31.354Z · LW · GW

categories are useful insofar as they compress information by "carving reality at the joints";

I think from context you're saying "...are only useful insofar...". Is that what you're saying? If so, I disagree with the claim. Compressing information is a key way in which categories are useful. Another key way in which categories are useful is compressing actions, so that you can in a convenient way decide and communicate about e.g. "I'm gonna climb that hill now". More to the point, calling someone "he" is mixing these two things together: you're both kinda-sorta claiming the person has XY chromosomes, is taller-on-average, has a penis, etc.; and also kinda-sorta saying "Let's treat this person in ways that people tend to treat men". "He" compresses the cluster, and also is a button you can push to treat people in that way. These two things are obviously connected, but they aren't perfectly identical. Whether or not the actions you take make someone happy or sad is relevant.

Comment by TekhneMakre on If Clarity Seems Like Death to Them · 2023-12-31T11:32:18.285Z · LW · GW

You can't just use redefinitions to turn trans women similar to cis women.

What does this mean? It seems like if the original issue is something about whether to call an XY-er "she" if the XY-er asks for that, then, that's sort of like a redefinition and sort of not like a redefinition... Is the claim something like:

Eliezer wants to redefine "woman" to mean "anyone who asks to be called 'she' ". But there's an objective cluster, and just reshuffling pronouns doesn't make someone jump from being typical of one cluster to typical of the other.

Trans women start out much more similar to cis men than to cis women, and transitioning doesn't do very much.

This one is a set of empirical, objective claims.... but elsewhere you said:

Focusing on brains seems like the wrong question to me. Brains matter due to their effect on psychology, and psychology is easier to observe than neurology.

Even if psychology is similar in some ways, it may not be similar in the ways that matter though, and in fact the ways that matter need not be restricted to psychology. Even if trans women are psychologically the same as cis women, trans women in women's sports is still a contentious issue.

So I guess that was representing your viewpoint, not Zack's?

Comment by TekhneMakre on If Clarity Seems Like Death to Them · 2023-12-31T07:05:58.311Z · LW · GW

Are you claiming that Zack is claiming that there's no such thing as gender? Or that there's no objective thing? Or that there's nothing that would show up in brain scans? I continue to not know what the basic original object-level disagreement is!

Comment by TekhneMakre on If Clarity Seems Like Death to Them · 2023-12-31T07:04:19.331Z · LW · GW

Ok. (I continue to not know what the basic original object-level disagreement is!)

Comment by TekhneMakre on If Clarity Seems Like Death to Them · 2023-12-30T18:46:46.188Z · LW · GW

I certainly haven't read even a third of your writing about this. But... I continue to not really get the basic object-level thing. Isn't it simply factually unknown whether or not there's such a thing as men growing up with brains that develop like female brains? Or is that not a crux for anything?

Separately, isn't the obvious correct position simply: there's a bunch of objective stuff about the differences between men and women; there's uncertainty about exactly how these clusters overlap / are violated in real life, e.g. as described in the previous paragraph; and separately there's a bunch of conduct between people that people modulate depending on whether they are interacting with a man or a woman; and now that there are more people openly not falling neatly into the two clusters, there's some new questions about conduct; and some of the conduct questions involve factual questions, for which calling a particular XY-er a woman would be false, and some of the conduct questions involve factual questions (e.g. the brain thing) for which calling a particular XY-er a woman would be true, and some of the conduct questions are instead mainly about free choices, like whether or not to wear a dress or whatever?

I mean, if person 1 is using the word "he" to mean something like "that XY-er", then yeah, it's false for them to say "he" of an XX-er. If person 2 is using the word "he" to mean something like "that person, who wants to be treated in the way that people usually treat men", then for some XX-ers, they should call the XX-er "he". This XX-er certainly might seek to deceive person 1; e.g. if the XX-er wants to be treated by person 1 the way person 1 treats XY-ers, and person 1 does not want to treat this XX-er that way, but would treat the XX-er this way if they don't know the XX status, then the XX-er might choose to have allies say "he" in order to deceive person 1. But that's not the only reason. One can imagine simply that everyone is like person 2; then an XX-er asking to be called "he" is saying something like "I prefer to not be flirted with by heterosexual men; I'd like people to accurately expect me to be more interested in going to a hackathon rather than going to a mall; etc.", or something. I mean, I'm not at all saying there's no problem, but... It's not clear (though again, I didn't read your voluminous writing on this carefully) who is saying what that's wrong... Like, if there's a bunch of conventional conduct that's tied up with words, then it's not just about the words' meaning, and you have to actually do work to separate the conduct from the reference, if you want them to be separate.

Comment by TekhneMakre on We're all in this together · 2023-12-12T02:32:03.261Z · LW · GW

It's not just epistemic confusion that can be most easily corrected with good evidence and arguments. That's what I think we're talking about.

Comment by TekhneMakre on Class consciousness for those against the class system · 2023-12-11T19:15:13.564Z · LW · GW

But these people are in control of most institutions in our society. It's not a small problem.

Comment by TekhneMakre on Class consciousness for those against the class system · 2023-12-10T19:44:08.766Z · LW · GW

I totally agree with what you say! ... And that's why I'm on the side of those against the system of conflict between groups of people with common interests amongst themselves, against the side of those in favor of that system.

That taking sides in this way is paradoxical (cf. the paradox of tolerance) is why I asked:

How can those against the class system gain appropriate class consciousness without being thereby destroyed?

A key aspect of that is to not look away from the fact that there is a class struggle between those in favor of class struggle and those against it.

I think the key premise that you didn't say you agree with is this: that there are people who are opposed to sharing information, to pointing out norm violations, and to justice in general; to perspective-synthesizing and pulling the rope sideways. Cf. http://benjaminrosshoffman.com/notes-on-the-autobiography-of-malcolm-x-2/

Comment by TekhneMakre on Class consciousness for those against the class system · 2023-12-10T03:47:10.517Z · LW · GW

Jesus christ. Savages on lesswrong.

Comment by TekhneMakre on Class consciousness for those against the class system · 2023-12-08T22:54:42.971Z · LW · GW

Because when you have an enemy, you try to

  1. enforce your boundaries to exclude the enemy;
  2. try to generally decrease your enemy's power, including cutting off resources, which includes lying to them and otherwise harming their thinking (think propaganda, gaslighting, misinformation, FUD);
  3. view moves by the enemy as hostile--e.g. the enemy's public statements are propagandistic lies, the enemy's overtures for and moves within a negotiation are trying to dispossess you, etc.;
  4. in particular you use the misinterpretations of your enemy's actions as hostile, to further strengthen your boundaries and internal unity of will;
  5. and all of this escalates in a self-reinforcing way.

Comment by TekhneMakre on We're all in this together · 2023-12-08T01:46:59.363Z · LW · GW

Well, I wrote about this here: https://www.lesswrong.com/posts/tMtMHvcwpsWqf9dgS/class-consciousness-for-those-against-the-class-system

But the internet loves to downvote without explaining why...

Comment by TekhneMakre on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-08T01:01:27.505Z · LW · GW

Ooh. That makes a lot of sense and is even better... I simply didn't realize there were inline reacts! Kudos.

Comment by TekhneMakre on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-07T21:45:38.851Z · LW · GW

IDK the reasons.

Comment by TekhneMakre on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T23:15:50.514Z · LW · GW

I guess there's a reason for not having it on top-level posts, but I miss having it on top-level posts.

Comment by TekhneMakre on On Trust · 2023-12-06T21:19:08.134Z · LW · GW

"Trust" is like "invest". It's an action-policy; it's related to beliefs, such as "this person will interpret agreements reasonably", "this person will do mostly sane things", "this person won't breach contracts except in extreme circumstances", etc., but trust is the action-policy of investing in plans that only make sense if the person has those properties.

Comment by TekhneMakre on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T05:45:26.178Z · LW · GW

Overall feels like it's ok, but very frustrating because it feels like it could be so much better. But I don't think this is mainly about the software of LW; it's about culture more broadly being in decay (or more precisely, the methods of coordinating on visions having been corrupted, and new ones not gaining steam while defending boundaries).

A different thing: This is a problem for everyone, but: stuff gets lost. https://www.lesswrong.com/posts/DtW3sLuS6DaYqJyis/what-are-some-works-that-might-be-useful-but-are-difficult It's bad, and there's a worldwide problem of indexing the Global Archives.

Comment by TekhneMakre on We're all in this together · 2023-12-06T00:47:22.798Z · LW · GW

I appreciate these views being stated clearly, and at once feel a positive feeling toward the author, and also am shaking my head No. As others have pointed out, the mistake theory here is confused.

I think it's not exactly wrong. The way in which it's right is this:

If people doing AGI research understood what we understand about the existential risk of AGI, most of them would stop, and AGI research would go much slower.

In other words, most people are amenable to reason on this point, in the sense that they'd respond to reasons to not do something that they've been convinced of. This is not without exception; some players, e.g. Larry Page (according to Elon Musk), want AGI to take the world from humanity.

The way in which the mistake theory is wrong is this:

Many people doing AGI research are not trying, and in many cases trying not, to understand what we understand about AGI risk.

So it's not just a mistake. It's a choice, that choice has motivations, and those motivations are in conflict with our motivations, insofar as they shelter themselves from reason.

Comment by TekhneMakre on AI Safety is Dropping the Ball on Clown Attacks · 2023-10-23T02:46:04.689Z · LW · GW

What do you mean? Surely they aren't offering this for anyone who writes anything manically. It would be nice if someone volunteered to do that service more often, though.

Comment by TekhneMakre on Are humans misaligned with evolution? · 2023-10-19T04:32:29.067Z · LW · GW

I think you're right that it will take work to parse; it's definitely taking me work to parse! Possibly what you suggest would be good, but it sounds like work. I'll see what I think after the dialogue.

Comment by TekhneMakre on Rationalist horror movies · 2023-10-15T08:53:45.033Z · LW · GW

I was going to say The Thing. https://en.wikipedia.org/wiki/The_Thing_(1982_film)

Comment by TekhneMakre on Bíos brakhús · 2023-10-13T00:15:45.201Z · LW · GW

Seems like someone went through my top-level posts and strong downvoted them.

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T23:33:06.442Z · LW · GW

The analogy from historical evolution is the misalignment between human genes and human minds, where the rise of the latter did not result in extinction of the former. It plausibly could have, but that is not what we observe.

The analogy is that the human genes thing produces a thing (human minds) which wants stuff, but the stuff it wants is different from what the human genes want. From my perspective you're strawmanning and failing to track the discourse here to a sufficient degree that I'm bowing out.

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T21:21:28.065Z · LW · GW

For evolution in general, this is obviously pattern measure, and truly can not be anything else.

This sure sounds like my attempt elsewhere to describe your position:

There's no such thing as misalignment. There's one overarching process, call it evolution or whatever you like, and this process goes through stages of creating new things along new dimensions, but all the stages are part of the overall process. Anything called "misalignment" is describing the relationship of two parts or stages that are contained in the overarching process. The overarching process is at a higher level than that misalignment relationship, and the misalignment helps compute the overarching process.

Which you dismissed.

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T18:54:58.739Z · LW · GW

I'm saying that you, a bio-evolved thing, are saying that you hope something happens, and that something is not what bio-evolution wants. So you're a misaligned optimizer from bio-evolution's perspective.

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T18:51:14.201Z · LW · GW

A different way to maybe triangulate here: Is misalignment possible, on your view? Like does it ever make sense to say something like "A created B, but failed at alignment and B was misaligned with A"? I ask because I could imagine a position, that sort of sounds a little like what you're saying, which goes:

There's no such thing as misalignment. There's one overarching process, call it evolution or whatever you like, and this process goes through stages of creating new things along new dimensions, but all the stages are part of the overall process. Anything called "misalignment" is describing the relationship of two parts or stages that are contained in the overarching process. The overarching process is at a higher level than that misalignment relationship, and the misalignment helps compute the overarching process.

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T18:45:51.106Z · LW · GW

The original argument that your OP is responding to is about "bio evolution". I understand the distinction, but why is it relevant? Indeed, in the OP you say:

For the evolution of human intelligence, the optimizer is just evolution: biological natural selection. The utility function is fitness: gene replication count (of the human defining genes).

So we're talking about bio evolution, right?

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T18:27:58.328Z · LW · GW

I'm saying that the fact that you, an organism built by the evolutionary process, hope to step outside the evolutionary process and do stuff that the evolutionary process wouldn't do, is misalignment with the evolutionary process.

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T18:12:06.344Z · LW · GW

The search process is just searching for designs that replicate well in environment.

This is a retcon, as I described here:

If you run a big search process, and then pick a really extreme actual outcome X of the search process, and then go back and say "okay, the search process was all along a search for X", then yeah, there's no such thing as misalignment. But there's still such a thing as a search process visibly searching for Y and getting some extreme and non-Y-ish outcome, and {selection for genes that increase their relative frequency in the gene pool} is an example.

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T17:48:43.256Z · LW · GW

Ok so the point is that the vast vast majority of optimization power coming from {selection over variation in general} is coming more narrowly from {selection for genes that increase their relative frequency in the gene pool} and not from {selection between different species / other large groups}. In arguments about misalignment, evolution refers to {selection for genes that increase their relative frequency in the gene pool}.

If you run a big search process, and then pick a really extreme actual outcome X of the search process, and then go back and say "okay, the search process was all along a search for X", then yeah, there's no such thing as misalignment. But there's still such a thing as a search process visibly searching for Y and getting some extreme and non-Y-ish outcome, and {selection for genes that increase their relative frequency in the gene pool} is an example.

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T17:39:25.232Z · LW · GW

Of course - and we'd hope that there is some decoupling eventually! Otherwise it's just be fruitful and multiply, forever.

This "we'd hope" is misalignment with evolution, right?

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T17:11:29.388Z · LW · GW

Say you have a species. Say you have two genes, A and B.

Gene A has two effects:

A1. Organisms carrying gene A reproduce slightly MORE than organisms not carrying A.

A2. For every copy of A in the species, every organism in the species (carrier or not) reproduces slightly LESS than it would have if not for this copy of A.

Gene B has two effects, the reverse of A:

B1. Organisms carrying gene B reproduce slightly LESS than organisms not carrying B.

B2. For every copy of B in the species, every organism in the species (carrier or not) reproduces slightly MORE than it would have if not for this copy of B.

So now what happens with this species? Answer: A is promoted to fixation, whether or not this causes the species to go extinct; B is eliminated from the gene pool. Evolution doesn't search to increase total gene count; it searches to increase relative frequency. (Note that this is not resting specifically on the species being a sexually reproducing species. It does rest on the fixedness of the niche capacity. When the niche doesn't have fixed capacity, evolution is closer to selecting for increasing gene count. But this doesn't last long; the species grows to fill capacity, and then you're back to zero-sum selection.)
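The dynamics of gene A can be sketched in a few lines of code. This is a minimal toy model with made-up parameters (`carrier_edge`, `per_copy_cost`, `capacity` are all hypothetical numbers chosen for illustration, not empirical values): carriers of A get a small relative reproductive edge (effect A1), while each copy of A lowers the whole species' reproductive output (effect A2). Because the niche caps population size, only relative fitness enters the frequency update, so the species-wide cost cancels out and A sweeps to fixation even as total output collapses.

```python
# Toy model: relative-frequency selection in a fixed-capacity niche.
# All parameter values are hypothetical, chosen only to illustrate the
# direction of the dynamics described in the comment above.

def simulate(p0=0.01, carrier_edge=1.05, per_copy_cost=0.0001,
             capacity=10_000, generations=300):
    p = p0          # frequency of gene A in the gene pool
    history = []
    for _ in range(generations):
        # A1: carriers out-reproduce non-carriers in *relative* terms.
        # Haploid approximation of the standard selection update.
        mean_w = p * carrier_edge + (1 - p) * 1.0
        p = p * carrier_edge / mean_w
        # A2: every copy of A lowers everyone's reproductive output.
        # This shared cost never appears in the frequency update above,
        # which is exactly why evolution ignores it.
        copies = p * capacity
        output = capacity * max(0.0, 1 - per_copy_cost * copies)
        history.append((p, output))
    return history

hist = simulate()
print(f"final frequency of A: {hist[-1][0]:.3f}")
print(f"species reproductive output: {hist[0][1]:.0f} -> {hist[-1][1]:.0f}")
```

Running this, A's frequency climbs to (essentially) 1.0 while the species' total output crashes towards zero: fixation of a species-harming gene, driven purely by relative-frequency selection. Gene B is the mirror image: give it `carrier_edge < 1` and a species-wide benefit, and the same update eliminates it.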

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T16:47:46.709Z · LW · GW

I don't see how this detail is relevant. The fact remains that humans are, in evolutionary terms, much more successful than most other mammals.

What do you mean by "in evolutionary terms, much more successful"?

Comment by TekhneMakre on Evolution Solved Alignment (what sharp left turn?) · 2023-10-12T09:46:15.466Z · LW · GW

IIUC, a lot of DNA in a lot of species consists of gene-drive-like things.

These seem like failure modes rather than the utility function.

By what standard are you judging when something is a failure mode or a desired outcome? I'm saying that what evolution is, is a big search process for genes that increase their relative frequency given the background gene pool. When evolution built humans, it didn't build agents that try to promote the relative frequency of the genes that they are carrying. Hence, inner misalignment and sharp left turn.