Posts

Halifax – ACX Meetups Everywhere Spring 2024 2024-03-30T11:28:31.961Z
Halifax Rationality Meetup 2024-02-13T04:17:49.763Z
Open-Source AI and Bioterrorism Risk 2023-11-04T22:45:55.171Z
Halifax, Nova Scotia, Canada – ACX Meetups Everywhere Fall 2023 [UPDATE: POSTPONED BY 1 WEEK] 2023-08-25T23:33:54.802Z
Meetup - Prediction Markets 2023-07-31T03:48:49.708Z
Halifax LW Meetup - July 15 2023-07-08T03:02:03.887Z
Halifax LW Meetup - June 10th 2023-06-05T02:23:12.226Z
Halifax, Nova Scotia, Canada – ACX Meetups Everywhere Spring 2023 2023-04-11T01:19:56.513Z
Alignment Might Never Be Solved, By Humans or AI 2022-10-07T16:14:37.047Z
Will Values and Competition Decouple? 2022-09-28T16:27:23.078Z
Kolmogorov's AI Forecast 2022-06-10T02:36:00.869Z
Tao, Kontsevich & others on HLAI in Math 2022-06-10T02:25:38.341Z
Halifax Rationality / EA Coworking Day 2022-06-01T17:47:00.463Z
What's the Relationship Between "Human Values" and the Brain's Reward System? 2022-04-19T05:15:48.971Z
Halifax Spring Meetup 2022-04-18T20:12:23.769Z
Consciousness: A Compression-Based Approach 2022-04-16T16:40:11.168Z
Algorithmic Measure of Emergence v2.0 2022-03-10T20:26:26.996Z
Meetup at Propeller Brewing Company 2022-02-06T07:22:36.499Z
Advancing Mathematics By Guiding Human Intuition With AI 2021-12-04T20:00:41.408Z
NTK/GP Models of Neural Nets Can't Learn Features 2021-04-22T03:01:43.973Z
interstice's Shortform 2021-03-08T21:14:11.183Z
What Are Some Alternative Approaches to Understanding Agency/Intelligence? 2020-12-29T23:21:05.779Z
Halifax SSC Meetup -- FEB 8 2020-02-08T00:45:37.738Z
HALIFAX SSC MEETUP -- FEB. 1 2020-01-31T03:59:05.110Z
SSC Halifax Meetup -- January 25 2020-01-25T01:15:13.090Z
Clarifying The Malignity of the Universal Prior: The Lexical Update 2020-01-15T00:00:36.682Z
Halifax SSC Meetup -- Saturday 11/1/20 2020-01-10T03:35:48.772Z
Recent Progress in the Theory of Neural Networks 2019-12-04T23:11:32.178Z
Halifax Meetup -- Board Games 2019-04-15T04:00:02.799Z
Predictors as Agents 2019-01-08T20:50:49.599Z
A Candidate Complexity Measure 2017-12-31T20:15:39.629Z
Please Help: How to make a big improvement in the alignment of political parties’ incentives with the public interest? 2017-01-18T00:51:56.355Z

Comments

Comment by interstice on Is being a trans woman +20 IQ? · 2024-04-25T04:22:30.613Z · LW · GW

I don't know enough about hormonal biology to guess a specific cause (some general factor of neoteny, perhaps?). It's much easier to infer that some third factor is likely at work than to know exactly what that factor is. I actually think most of the evidence in this very post supports the third-factor position or is equivocal: testosterone acting as a nootropic is very weird if it makes you dumber; the claim that men and women have equal IQs seems not to be true; the study cited to support a U-shaped relationship seems flimsy; and the claim that most of the ostensible damage occurs before adulthood is in tension with your smarter friends transitioning after high school.

Comment by interstice on Is being a trans woman +20 IQ? · 2024-04-24T22:14:43.591Z · LW · GW

I buy that trans women are smart, but I doubt "testosterone makes you dumber" is the explanation; more likely some third factor raises IQ and lowers testosterone.

Comment by interstice on Tamsin Leake's Shortform · 2024-04-13T20:36:29.013Z · LW · GW

I think using the universal prior again is more natural. It's simpler to use the same complexity metric for everything; it's more consistent with Solomonoff induction, in that the weight assigned by Solomonoff induction to a given (world, claw) pair would be roughly exponential in the negated sum of their Kolmogorov complexities (see the sketch below); and the universal prior dominates the inverse-square measure, but the converse doesn't hold.
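
(A rough formalization of that middle claim, in my own notation rather than anything from the thread: a program can specify the world and then the claw, so prefix complexity is subadditive and the weight bound follows.)

```latex
% Sketch (my notation): a program specifying the world followed by one
% specifying the claw witnesses subadditivity of prefix complexity:
K(\mathrm{world},\mathrm{claw}) \le K(\mathrm{world}) + K(\mathrm{claw}) + O(1),
% so the universal prior weight satisfies, up to a constant factor,
m(\mathrm{world},\mathrm{claw}) \ge c \cdot 2^{-K(\mathrm{world}) - K(\mathrm{claw})}.
```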

Comment by interstice on Tamsin Leake's Shortform · 2024-04-13T18:38:19.873Z · LW · GW

If you want to pick out locations within some particular computation, you can just use the universal prior again, applied to indices to parts of the computation.

Comment by interstice on Tamsin Leake's Shortform · 2024-04-13T16:54:26.668Z · LW · GW

If you're running on the non-time-penalized solomonoff prior[...]a bunch of things break including anthropic probabilities and expected utility calculations

This isn't true; you can get perfectly fine probabilities and expected utilities from ordinary Solomonoff induction (barring computability issues, ofc). The key here is that SI is defined in terms of a universal Turing machine whose set of valid programs forms a prefix-free code, which automatically ensures that the probabilities add up to at most 1, etc. This issue is often glossed over in popular accounts.
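
(The relevant fact is Kraft's inequality; a minimal statement in my notation:)

```latex
% Kraft's inequality: if the halting programs of the UTM U form a
% prefix-free set (no valid program is a prefix of another), then
\sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|} \;\le\; 1,
% so the 2^{-|p|} weights already behave like (sub)probabilities
% with no extra normalization step needed.
```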

Comment by interstice on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-10T17:01:19.328Z · LW · GW

certain aspects of MWI theory (like how you actually get the Born probabilities) are unresolved

You can add the Born probabilities with minimal additional Kolmogorov complexity: simply stipulate that worlds with a given amplitude have probabilities given by the Born rule (this does admittedly weaken the "randomness emerges from indexical uncertainty" aspect...).
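
(A one-line statement of the stipulation, in my notation, for branches with amplitudes α_i:)

```latex
% Stipulated Born rule: the probability of finding yourself in branch i
% is the squared amplitude of that branch, normalized:
P(\text{branch } i) \;=\; \frac{|\alpha_i|^2}{\sum_j |\alpha_j|^2}.
```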

Comment by interstice on On Complexity Science · 2024-04-05T02:39:21.048Z · LW · GW

Having briefly looked into complexity science myself, I came to similar conclusions -- mostly a random hodgepodge of various fields in a sort of impressionistic tableau, plus an unsystematic attempt at studying questions of agency and self-reference.

Comment by interstice on Matthew Barnett's Shortform · 2024-03-30T16:39:20.426Z · LW · GW

That is, I think humans generally (though not always) attempt to avoid death when credibly threatened, even when they're involved in a secret conspiracy to overthrow the government.

This seems like a misleading comparison, because human conspiracies usually don't try to convince the government that they're perfectly obedient slaves even unto death, because everyone already knows that humans aren't actually like that. If we imagine a human conspiracy where there is some sort of widespread deception like this, it seems more plausible that they would try to continue to be deceptive even in the face of death (like, maybe, uh, some group of people are pretending to be fervently religious and have no fear of death, or something).

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:50:31.078Z · LW · GW

Statements can be epistemically legit or not. Statements have content, they aren't just levers for influencing the world.

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:03:12.263Z · LW · GW

I mean it's epistemically legitimate for him to bring them up. They are in fact evidence that Scott holds hereditarian views.

Now, regarding the "overall" legitimacy of calling attention to someone's controversial views: it probably does have a chilling effect, and it threatens Scott's livelihood, which I don't like. But I think that continuing to be mad at Metz for his sloppy inference doesn't really make sense here. Sure, maybe at the time it was tactically smart to feign outrage that Metz would dare to imply Scott was a hereditarian, but now that we have direct documentation of Scott admitting exactly that, it's just silly. If you're still worried about Scott getting canceled (seems unlikely at this stage, tbh), it's better to just move on and stop drawing attention to the issue by bringing it up over and over.

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T16:37:11.403Z · LW · GW

But was Metz acting as a "prosecutor" here? He didn't say "this proves Scott is a hereditarian" or anything like that; he just brought up two instances where Scott said things in a way that might lead people to make certain inferences... correct inferences, as it turns out. Like yeah, maybe it would have been more epistemically scrupulous to say "these articles represent two instances of a larger pattern which is strong Bayesian evidence even though they are not highly convincing on their own", but I hardly think this warrants remaining outraged years after the fact.

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T05:39:41.192Z · LW · GW

How is Metz's behavior here worse than Scott's own behavior defending himself? After all, Metz doesn't explicitly say that Scott believes in racial IQ differences; he just mentions Scott's endorsement of Murray in one post and his account of Murray's beliefs in another, in a way that suggests a connection. Similarly, Scott doesn't explicitly deny believing in racial IQ differences in his response post; he just lays out the context of the posts in a way that suggests that the accusation is baseless. (Perhaps you think Scott's behavior is locally better? But he's following a strategy of covertly communicating his true beliefs while making any individual instance look plausibly deniable, effectively optimizing against "locally good behavior" tracking truth, so it seems perverse to give him credit for this.)

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-26T22:28:52.760Z · LW · GW

"For my friends, charitability -- for my enemies, Bayes Rule"

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-26T22:12:16.708Z · LW · GW

ZMD: Looking at “Silicon Valley’s Safe Space”, I don’t think it was a good article. Specifically, you wrote,

In one post, [Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q. in “The Bell Curve.” In another, he pointed out that Mr. Murray believes Black people “are genetically less intelligent than white people.”

End quote. So, the problem with this is that the specific post in which Alexander aligned himself with Murray was not talking about race. It was specifically talking about whether specific programs to alleviate poverty will actually work or not.

So on the one hand, this particular paragraph does seem to misleadingly imply that Scott was endorsing views on race/IQ similar to Murray's, even though, based on the quoted passages alone, there is little reason to think that. On the other hand, it's totally true that Scott was running a strategy of bringing up or "arguing" with hereditarians with the goal of broadly promoting those views in the rationalist community, without directly being seen to endorse them. So I think it's actually pretty legitimate for Metz to bring up incidents like this, or the Xenosystems link in the blogroll. Scott was basically communicating his views in a plausibly deniable way, saying many little things which are more likely if he was a secret hereditarian, but any individual instance of which is not so damning. So I feel it's total BS to then complain about how tenuous the individual instances Metz brought up are -- he's using them as examples of a larger trend, which is inevitable given the strategy Scott was using.

(This is not to say that I think Scott should be "canceled" for these views or whatever, not at all, but at this stage the threat of cancelation seems to have passed and we can at least be honest about what actually happened)

Comment by interstice on What does "autodidact" mean? · 2024-03-24T03:07:41.677Z · LW · GW

This seems significantly overstated. Most subjects are not taught in school to most people, but they don't thereby degrade into nonsense.

Comment by interstice on Toward a Broader Conception of Adverse Selection · 2024-03-15T14:23:54.652Z · LW · GW

Why should Michael Burry have assumed that he had more insight about Avant! Corporation than the people trading with him?

Because he did a lot of research and "knew more about the Avant! Corporation than any man on earth"? If you have good reason to think that you're the one with an information advantage, trades like this can be rational. Of course it's always possible to be wrong about that, but there are enough irrational traders out there that it's not ruled out. Also note that your counterparties don't actually need to be irrational on average; it's enough that there are irrational traders somewhere in the broader ecosystem, as they can "subsidize" moderately informed trading by others (which you can take advantage of in individual cases).

Comment by interstice on Toward a Broader Conception of Adverse Selection · 2024-03-15T02:43:22.142Z · LW · GW

An amended slogan that more accurately captures the phenomenon the post is trying to point to would be "Conditional on your trade seemingly not creating value for your counterparty, your trade likely wasn't all that good".

Comment by interstice on Evolution did a surprising good job at aligning humans...to social status · 2024-03-10T20:51:58.664Z · LW · GW

Not sure how much I believe this myself, but Jacob Cannell has an interesting take that social status isn't a "base drive" either, but is basically a proxy for "empowerment": influence over future states of the world. If that's true, it's perhaps not so surprising that we're still well-aligned, since "empowerment" is in some sense always being selected for by reality.

Comment by interstice on The Parable Of The Fallen Pendulum - Part 1 · 2024-03-01T14:12:29.229Z · LW · GW

I would tell the students that any compactly specified model has to rely on a certain amount of "common-sensical" interpretation on their part, such that they need to evaluate what "counts" as a legitimate application of the theory and what does not. I'd argue this by analogy to their daily lives, where interpretation of this sort is constantly needed to make sense of basic statements. Abstractly, this arises because reality has a lot of detail, which needs to be dynamically interpreted by a large parallel model like their brain and can't be handled by a very compact equation or statement, so they need to act as a "bridge" between the compact thing and actual experiments. (Indeed, this sort of interpretation is so ubiquitous that real students would almost never make this kind of mistake, at least not so blatantly.) There's also something to be said about how most of our evidence that a given world-model is true necessarily comes from the extended social web of other scientists, but I would focus on the more basic error of interpretation first.

Comment by interstice on Theism Isn't So Crazy · 2024-02-29T00:43:01.942Z · LW · GW

It's a counterexample to a single step of reasoning (large multiverse of people --> God); it doesn't have to be globally a valid theory of reality. And clearly the existence of an imaginable multiverse satisfying a certain property makes it more plausible that our actual multiverse might satisfy the same property. (As an analogy, consider math, where you might want an object satisfying properties A and B. Constructing an object with property A makes it more plausible that you might eventually construct one with both properties.)

Comment by interstice on In set theory, everything is a set · 2024-02-24T00:11:08.489Z · LW · GW

Can you in practice use set theory to discover something new in other branches of math, or does it merely provide a different (and less convenient) way to express things that were already discovered otherwise?

The value of set theory as a foundation comes more from being a widely-agreed upon language that is also powerful enough to express pretty much everything mathematicians can think up, rather than as a tool for making new discoveries. I think it's worth learning at least at a shallow level for this reason, if you want to learn advanced math.

Comment by interstice on Theism Isn't So Crazy · 2024-02-22T01:26:49.422Z · LW · GW

Well, UDASSA is false https://joecarlsmith.com/2021/11/28/anthropics-and-the-universal-distribution

Did you notice that I linked the very same article that you replied with? :P I'm aware of the issues with UDASSA, I just think it provides a clear example of an imaginable atheistic multiverse containing a great many possible people.

Comment by interstice on Theism Isn't So Crazy · 2024-02-21T01:40:11.894Z · LW · GW

I think the cardinality should be Beth(0) or Beth(1), since finite beings should have finite descriptions. Additionally, finite beings can have at most Beth(1) distinct sequences of thoughts, actions, and observations (if we allow immortality), given that they can only think, observe, and act in a finite number of ways in finite time; so if you quotient by identical experiences and behaviors, you get Beth(0) or Beth(1). (You might think we can, e.g., observe a continuum amount of stuff in our visual field, but this is an illusion; the resolution is bounded.) The Bekenstein bound also implies that physically limited beings in our universe have a finite description length.
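
(A sketch of the counting, in my notation, where Σ is a finite alphabet of possible "moments" of thought/action/observation:)

```latex
% Finite descriptions over a finite alphabet form a countable set:
|\Sigma^{*}| \;=\; \aleph_0 \;=\; \beth_0.
% An immortal being's experience stream is an infinite sequence of
% such moments, so there are at most continuum many of them:
|\Sigma^{\mathbb{N}}| \;=\; 2^{\aleph_0} \;=\; \beth_1.
```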

There could be a God-less universe with Beth 2 people, but I don't know how that would work

I don't think it's hard to imagine such a universe, e.g. consider all possible physical theories in some formal language and all possible initial conditions of such theories. This might be less simple to state than "imagine an infinitely perfect being" but it's also much less ambiguous, so it's hard to judge which is actually less simple.

SIA gives reason to think you should assign a uniform prior across possible people

My perspective on these matters is influenced a lot by UDASSA, which recovers a lot of the nice behaviors of SIA at the cost of non-uniform priors. I don't actually think UDASSA is likely a correct description of reality, but it gives a coherent picture of what an atheistic multiverse containing a great many possible people could look like.

Comment by interstice on Theism Isn't So Crazy · 2024-02-20T18:48:50.596Z · LW · GW

I don't think the anthropic argument works. I have some technical objections to the discussion of the set of possible people (I think Beth(0) or, at most, Beth(1) are more plausible cardinalities, and I don't think we have to assume a uniform prior over possible people, which means we don't need to assign a 0% probability to any particular being's existence even if not all possible beings exist), but more basically, I just don't see why God makes much of a difference to the plausibility of any particular ontological arrangement. If you think God might create a universe with Beth(2) people, why couldn't there be a God-less universe with the same cardinality of people? If you think God might create a proper Class of people, why couldn't there be a God-less Proper Universe with the same people? Conversely, if modal realism undermines induction, doesn't a God-created set of all people undermine it in the same way? These universes might sound pretty "wild" and so appear implausible without intelligent design, but on a description-length perspective, "having Beth(2) people and no God" or whatever can be specified pretty compactly. You might appeal to the infallibility, etc. of God to explain away paradoxes, but I think this is essentially invoking a "get out of paradoxes free" card without doing any explanatory work.

Comment by interstice on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T18:06:34.255Z · LW · GW

I also don't think I would lose as the gatekeeper (against a human), and would be willing to make a similar bet if anyone's interested.

Comment by interstice on Where is the Town Square? · 2024-02-13T04:05:49.573Z · LW · GW

I think twitter is still the closest thing to a global town square. This post by Tyler Cowen is good on the topic.

Comment by interstice on What's this 3rd secret directive of evolution called? (survive & spread & ___) · 2024-02-07T17:10:18.513Z · LW · GW

Subsist? Sustain? Self-actualize? Start?

Comment by interstice on Is a random box of gas predictable after 20 seconds? · 2024-01-25T06:40:54.895Z · LW · GW

Why are you guys talking about waves necessarily dissipating? Wouldn't there be an equal probability of waves forming and dissipating, given that we are sampling a random initial configuration and are hence in equilibrium w.r.t. the formation/dissipation of waves?

Comment by interstice on Will quantum randomness affect the 2028 election? · 2024-01-25T06:22:23.421Z · LW · GW

For younger people (in presidential-candidate age ranges), the annual death rate ranges from 0.15% to 0.5%; see here. (So the 4-year death rate ranges from 0.6% to 2%.)

Comment by interstice on Will quantum randomness affect the 2028 election? · 2024-01-25T06:14:36.835Z · LW · GW

Also, how are you getting 0.8%? This website says that the mortality rate of 65-75-year-olds is 2%. So over 4 years that should be 8%, which I think makes it much more plausible that the quantum death probability is 0.1% (although clearly 8% isn't the real probability; any presidential candidate is probably way less likely to die than the background population).

Comment by interstice on Will quantum randomness affect the 2028 election? · 2024-01-25T06:04:40.321Z · LW · GW

Yyyyeah, I'm not totally sure you get 0.1% just from people dying, hence the ~. But I think it's at least within a factor of 10, which makes me think the total quantum randomness factor is at least 0.1%. And to defend the "people dying" factor: (a) many of the candidates are in fact pretty old these days; (b) presidents have a relatively high rate of being assassinated -- 4 of 45(!) -- although I assume the actual probability is lower now than the historical average; (c) randomness within the 4-year window could affect how quickly a pre-existing health problem progresses, although this might result in them dropping out early or not running rather than actually dying.

Comment by interstice on Will quantum randomness affect the 2028 election? · 2024-01-25T02:59:16.687Z · LW · GW

I don’t think most people die for quantum-randomness reasons

You don't think so? I think this is clearly the case over someone's entire life. Even conditioning on a 4-year timescale, I think accidental deaths, assassinations, and viral infections are certainly quantum-randomness-affected at greater than 0.1% probability. Maybe also things like cancer and its progression (dependent on mutations which may or may not happen on a short timescale?), but I don't really know much about it.

Comment by interstice on Will quantum randomness affect the 2028 election? · 2024-01-25T02:48:04.160Z · LW · GW

I think "one of the potential candidates might quantum-randomly die in the timeframe" is a pretty strong argument that there's at least ~0.1% quantum uncertainty.

ETA: For some stats on this, see this table from the government of Canada. The annual death rate ranges from 0.1% for 35-year-olds, up to 0.5% for 55-year-olds and 3% for 75-year-olds. Multiply those by 4 to get the approximate death rate in the relevant window (see the arithmetic sketch below). Obviously only a small fraction of those deaths will be quantum-randomness-influenced. Also note the relatively high rate of presidential assassinations -- 4 of 45(!) presidents were assassinated in office (although I assume the "true" probability is lower now).
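
(A sketch of the arithmetic, with illustrative rates from the table above hard-coded; "multiply by 4" is the small-probability approximation of the exact figure, which assumes independent, identical annual rates.)

```python
# Rough 4-year death probability from an annual rate. Rates below are
# illustrative values from the cited table; independence across years
# is an assumption.
annual_rates = {"35-year-old": 0.001, "55-year-old": 0.005, "75-year-old": 0.03}

for label, p in annual_rates.items():
    exact = 1 - (1 - p) ** 4  # exact: 1 minus 4-year survival probability
    approx = 4 * p            # linear approximation used in the comment
    print(f"{label}: exact {exact:.2%}, approx {approx:.2%}")
```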

Comment by interstice on Is a random box of gas predictable after 20 seconds? · 2024-01-24T23:56:59.667Z · LW · GW

I think you're probably right. It does seem plausible that there is some subtle structure which is preserved after 20 seconds, such that the resulting distribution over states is feasibly distinguishable from a random configuration, but I don't think we have any reason to think that this structure would be strongly correlated with which side of the box contains the majority of particles.

Comment by interstice on An even deeper atheism · 2024-01-11T21:45:47.033Z · LW · GW

but their control over their "portion of the universe" would actually increase

Yes, in the medium term. But given a very long future, it's likely that any control so gained could eventually also be gained while on a more conservative trajectory, while leaving you/your values with a bigger slice of the pie in the end. So I don't think that gaining more control in the short run is very important -- except insofar as that extra control helps you stabilize your values. On current margins, I suppose it does actually seem plausible that human population growth improves value stabilization faster than it erodes your share, although I don't think I would extend that to creating an AI population larger than the human one.

Comment by interstice on An even deeper atheism · 2024-01-11T21:12:46.463Z · LW · GW

I see, I think I would classify this under "values can be satisfied with a small portion of the universe" since it's about what makes your life as an individual better in the medium term.

Comment by interstice on An even deeper atheism · 2024-01-11T21:07:40.165Z · LW · GW

Another point: I don't think that Joe was endorsing the "yet deeper atheism", just exploring it as a possible way of orienting. So I think that he could take the same fork in the argument, denying that humans have ultimately dissimilar values in the same way that future AI systems might.

Comment by interstice on An even deeper atheism · 2024-01-11T20:31:54.808Z · LW · GW

In that case I'm actually kinda confused as to why you don't think that population growth is bad. Is it that you think that your values can be fully satisfied with a relatively small portion of the universe, and you or people sharing your values will be able to bargain for enough of a share to do this?

Comment by interstice on An even deeper atheism · 2024-01-11T20:22:14.563Z · LW · GW

I think people sharing Yudkowsky's position think that different humans ultimately (on reflection?) have very similar values, so making more people doesn't decrease the influence of your values that much.

ETA: apparently Eliezer thinks that maybe even ancient Athenians wouldn't share our values on reflection?! That does sound like he should be nervous about population growth and cultural drift, then. Well, "the vast majority of humans would have similar values on reflection" is at least a coherent position, even if EY doesn't hold it.

Comment by interstice on Lack of Spider-Man is evidence against the simulation hypothesis · 2024-01-09T06:15:55.440Z · LW · GW

A vast within-simulation conspiracy is possible, but it increases the complexity of the hypothesis.

Comment by interstice on Lack of Spider-Man is evidence against the simulation hypothesis · 2024-01-07T21:21:28.553Z · LW · GW

You're right, and all the other commenters and downvoters are wrong -- the absence of Spider-Man or other locally-physics-defying phenomena is obviously evidence against a simulation. To all of them: think about it -- if we saw people magically levitating or controlling the weather or whatever, that would clearly be strong evidence that we were simulated, so not seeing those things has to be evidence against. Sad!

Now, that being said, I don't think it's very strong evidence, since in a large future there would be many, many simulations run for a variety of purposes, so there would still likely be a vast number of realistic simulations even if they weren't the most popular. (I also don't think we have strong reason to believe they wouldn't be popular, since it's hard to predict what use-case for simulations would be the most common, what people's taste in entertainment would be like, etc.)

Comment by interstice on Lack of Spider-Man is evidence against the simulation hypothesis · 2024-01-07T05:39:27.369Z · LW · GW

You can't say something is evidence against the simulation hypothesis without saying what crazy event would need to happen to provide evidence for the simulation hypothesis.

He has already provided that -- since not seeing Spider-Man is evidence against simulation, it follows that seeing Spider-Man, or another person who could apparently violate the laws of physics, would be strong evidence for a simulation. Conservation of evidence is not being violated here.

Comment by interstice on Lack of Spider-Man is evidence against the simulation hypothesis · 2024-01-07T05:30:07.687Z · LW · GW

I disagree; I think that our world is objectively simple, in that everything is apparently consistent with a simple set of physical laws and initial conditions. Simulators wouldn't need to be constrained in this way (and could access a lot of fun possibilities by not being so constrained).

Comment by interstice on MIRI 2024 Mission and Strategy Update · 2024-01-05T04:30:28.154Z · LW · GW

Given that uploads may be able to think faster than regular humans, make copies of themselves to save on cost of learning, more easily alter their brains, etc., I think it's more likely that regular humans will be unable to effectively defend themselves if a conflict arises.

Comment by interstice on Puzzle Games · 2023-12-30T08:05:29.955Z · LW · GW

I would put a lot of Jack Lance's games at Tier 1 or 2. I'm still working through them but so far I really liked Enigmash, I'm Too Far Gone and Hilbert Highway.

Comment by interstice on What's the minimal additive constant for Kolmogorov Complexity that a programming language can achieve? · 2023-12-20T19:49:28.199Z · LW · GW

The constant is defined between pairs of languages: it tells you how many bits it takes to emulate language A in language B. So it doesn't make sense to talk about "the" constant of a language; it's relative to the other language you are comparing it to.
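
(This is the standard invariance theorem; a minimal statement in my notation, where c_{A,B} is roughly the length of an interpreter for language A written in language B:)

```latex
% Invariance theorem: for every string x,
K_B(x) \;\le\; K_A(x) + c_{A,B},
% where c_{A,B} depends only on the pair (A, B) -- roughly the length
% of an A-interpreter written in B -- and not on x.
```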

Comment by interstice on Is being sexy for your homies? · 2023-12-13T22:03:19.748Z · LW · GW

If within-sex coalitions are common and adaptive, isn't being good at the politics of those coalitions "eugenic" also?

Comment by interstice on Embedded Agents are Quines · 2023-12-12T22:05:07.682Z · LW · GW

Yes.

Comment by interstice on Embedded Agents are Quines · 2023-12-12T22:01:09.573Z · LW · GW

and the conservation of energy/the Second Law is a special case of this when the symmetry is time

Conservation of energy is not the same thing as the second law of thermodynamics.

Comment by interstice on Based Beff Jezos and the Accelerationists · 2023-12-07T00:01:51.980Z · LW · GW

You might believe that the orthogonality thesis is probabilistically false, in that it is very unlikely for intelligent beings to arise that highly value paperclips or whatever. Aliens might not create humanoid societies, but it seems plausible that they would be conscious, value positive valence, have some sort of suite of social emotions, value exploration and curiosity, etc.