Comments

Comment by antanaclasis on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T01:42:36.858Z · LW · GW

Another big thing is that you can’t get tone-of-voice information via text. The way that someone says something may convey more to you than what they said, especially for some types of journalism.

Comment by antanaclasis on Community Notes by X · 2024-03-20T16:01:25.509Z · LW · GW

I’d imagine that once we see the axis it will probably (~70%) have a reasonably clear meaning. Likely not as obvious as the left-right axis on Twitter but probably still interpretable.

Comment by antanaclasis on Community Notes by X · 2024-03-18T22:56:24.866Z · LW · GW

I think a lot of the value that I’d get out of something like that being implemented would be getting an answer to “what is the biggest axis along which LW users vary” according to the algorithm. I am highly unsure about what the axis would even end up being.

Comment by antanaclasis on Don't Endorse the Idea of Market Failure · 2024-03-03T01:54:59.893Z · LW · GW

To lay out some of the foundation of public choice theory:

We can model the members of an organization (such as the government) as being subject to the dynamics of natural selection. In particular, in a democracy elected officials are subject to selection whereby those who are better at getting votes can displace those who are worse at it, through elections.

This creates a selection dynamic where over time the elected officials will become better at vote-gathering, whether through conscious or unconscious adaptation by the officials to their circumstances, or simply through those who are naturally better at vote-gathering replacing those worse at it.

This is certainly not a bad thing per se. After all, coupling elected officials’ success to what the electorate wants is one of the major purposes of democracy, but “what gets votes” is not identical to “what’s good for the electorate”, and Goodhart’s law can bite us through that gap.

One of the classic examples of this is “doling out pork”, where concentrated benefits (such as construction contracts) can be distributed to a favored sub-group (thus ensuring their loyalty in upcoming elections) while the loss in efficiency from that favoritism is only indirectly and diffusely suffered by the rest of the electorate (making it much less likely that any of them get outraged about it enough to not vote for the pork-doler).

The application of this to market failures is that you can look at a market under government regulation as two systems (the market and the government), each with different incentives that imperfectly bind their constituent actors to the public good. The market generally encourages positive-sum trades to happen, but has various imperfections, especially regarding externalities and transaction costs, and the government generally encourages laws/regulations that benefit the public, but has its own imperfections, such as pork-doling and encouraging actions which look better to the public than their actual results would merit.

The result of this is that it is not necessarily clear whether changing how much influence market vs government dynamics have on a specific domain will improve it or not. Moving something to more government control may fix market failures, or it may just encourage good-looking-but-ineffective political posturing, and moving something to the market may cut down on corruption, or may just hit you with a bunch of not-properly-accounted-for externalities.

In the particular case of “government action to solve market failures”, the incentives may be against the government actors solving them, as in the case of the coal industry providing a loyal voting bloc, thereby encouraging coal subsidies that make the externality problem worse.

Therefore, my presentation of the market-failure-idea-skeptic’s position would be something like “we should be wary of moving the locus of control in such-and-such domains away from the market toward the government, because we expect that likely the situation will be made worse by doing so, whether due to government action exacerbating existing market failures more than it solves them, or due to other public-choice problems arising”.

Comment by antanaclasis on Don't Endorse the Idea of Market Failure · 2024-03-02T12:00:27.634Z · LW · GW

Just because the US government contains agents that care about market failures, does not mean that it can be accurately modeled as itself being agentic and caring about market failures.

The more detailed argument would be public choice theory 101, about how the incentives that people in various parts of the government are faced with may or may not encourage market-failure-correcting behavior.

Comment by antanaclasis on Balancing Games · 2024-02-29T08:39:35.033Z · LW · GW

For chess in particular the piece-trading nature of the game also makes piece handicaps pretty huge in impact. Compare to shogi: in shogi having multiple non-pawn pieces handicapped can still be a moderate handicap, whereas multiple non-pawns in chess is basically a predestined loss unless there is a truly gargantuan skill difference.

I haven’t played many handicapped chess games, but my rough feel for it is that each successive “step” of handicap in chess is something like 3 times as impactful as the comparable shogi handicap. This makes chess handicaps harder to use as there’s much more risk of over- or under-shooting the appropriate handicap level and ending up with one side being highly likely to win.

Comment by antanaclasis on shoes with springs · 2024-01-01T06:56:34.180Z · LW · GW

Also note that socks with sandals being uncool is not a universal thing. For example, in Japan it is reasonably common to wear (often split-toed) socks with sandals, though it’s more associated with traditional garb than modern fashion.

Comment by antanaclasis on Is being sexy for your homies? · 2023-12-15T06:51:39.963Z · LW · GW

A way of implementing the serving-vs-kitchen separation that avoids that problem (and actually the way of doing it I initially envisioned after reading the post) would be that within each workplace there is a separation, but different workplaces are split between the polarities of separation. That way any individual’s available options of workplace are, at worst, ~half of what they could be with mixed workplaces, regardless of their preference.

(Caveat that an individual’s options could end up being less than half the total if there is a workplace-gender correlation overall (creating an imbalance of how many workplaces of each polarity there are), and an individual has a workplace-gender matchup which is opposite to the trend, but in this case at least that individual’s lesser amount of choices is counterbalanced by the majority of people having more than 50% of the max choices of workplace fitting them.)

Comment by antanaclasis on Redirecting one’s own taxes as an effective altruism method · 2023-11-13T21:49:38.958Z · LW · GW

It kind of passed without much note in the post, but isn’t the passport non-renewal one of the biggest limiters here? $59,000 divided by 10 years is $5,900 per year, so unless you’re willing to forgo having a passport that’s the upper limit of how much you could benefit from non-payment (exclusive of the tax liability reduction strategies). That seems like a pretty low amount per year in exchange for having to research and plan this, then having your available income and saving methods limited (which could easily lower your income by more than $5,900 just by limiting the jobs available to you).

Comment by antanaclasis on What’s going on? LLMs and IS-A sentences · 2023-11-08T21:18:56.186Z · LW · GW

One other way of putting the reverse order, though it sounds a bit stilted in English: “beagles have Fido”. I don’t think it’s used commonly at all but it came to mind as a form in the reverse order without looping.

Comment by antanaclasis on Lying to chess players for alignment · 2023-10-25T18:06:04.028Z · LW · GW

I would be interested in this, probably in role A (but depending on the pool of other players possibly one of the other roles; I have no opposition to any of them). I play chess casually with friends, and am probably at somewhere around 1300 elo (based on my winrate against one friend who plays online).

Comment by antanaclasis on Should the US House of Representatives adopt rank choice voting for leadership positions? · 2023-10-25T18:00:02.979Z · LW · GW

To add to this, if the ranked choice voting is implemented with a “no confidence” option (as it should to prevent the vote-in vote-out cycle described above), then you could easily end up in the same situation as the house currently is in, where no candidate manages to beat out “no confidence”.

Comment by antanaclasis on The Gods of Straight Lines · 2023-10-15T04:30:19.382Z · LW · GW

Related: https://slatestarcodex.com/2019/03/13/does-reality-drive-straight-lines-on-graphs-or-do-straight-lines-on-graphs-drive-reality/

Comment by antanaclasis on SSA rejects anthropic shadow, too · 2023-07-29T13:49:20.258Z · LW · GW

SIA can be considered (IMO more naturally) as randomly sampling you from “observers in your epistemic situation”, so it’s not so much “increasing the prior” but rather “caring about the absolute number of observers in your epistemic situation” rather than “caring about the proportion of observers in your epistemic situation” as SSA does.

This has the same end result as “up-weighting the prior then using the proportion of observers in your epistemic situation”, but I find it to be much more intuitive than that, as the latter seems to me to be overly circuitous by multiplying by population then dividing by population (as part of taking the proportion of the reference class that you comprise), rather than just taking the number we care about (number of observers in your epistemic situation) in the first place.

Comment by antanaclasis on Attempting to Deconstruct "Real" · 2023-07-09T17:21:16.510Z · LW · GW

Related: https://everythingstudies.com/2017/09/12/the-big-list-of-existing-things/

Comment by antanaclasis on Thoughts on LessWrong norms, the Art of Discourse, and moderator mandate · 2023-05-12T05:25:39.781Z · LW · GW

I think the point being made in the post is that there’s a ground-truth-of-the-matter as to what comprises Art-Following Discourse.

To move into a different frame which I feel may capture the distinction more clearly, the True Laws of Discourse are not socially constructed, but our norms (though they attempt to approximate the True Laws) are definitely socially constructed.

Comment by antanaclasis on An Intro to Anthropic Reasoning using the 'Boy or Girl Paradox' as a toy example · 2023-04-28T21:45:30.594Z · LW · GW

From the SIA viewpoint the anthropic update process is essentially just a prior and an update. You start with a prior on each hypothesis (possible universe) and then update by weighting each by how many observers in your epistemic situation each universe has.

This perspective sees the equalization of “anthropic probability mass” between possible universes prior to apportionment as an unnecessary distortion of the process: after all, “why would you give a hypothesis an artificial boost in likelihood just because it posits fewer observers than other hypotheses?”

Of course, this is just the flip side of what SSA sees as an unnecessary distortion in the other direction. “Why would you give a hypothesis an artificial boost due to positing more observers?” it says. And here we get back to deep-seated differences in what people consider the intuitive way of doing things that underlie the whole disagreement over different anthropic methods.

Comment by antanaclasis on An Intro to Anthropic Reasoning using the 'Boy or Girl Paradox' as a toy example · 2023-04-25T06:02:23.233Z · LW · GW

On the question of how to modify your prior over possible universe+index combinations based on observer counts, the way that I like to think of the SSA vs SIA methods is that with SSA you are first apportioning probability mass to each possible universe, then dividing that up among possible observers within each universe, while with SIA you are directly apportioning among possible observers, irrespective of which possible universes they are in.

The numbers come out the same as considering it in the way you write in the post, but this way feels more intuitive to me (as a natural way of doing things, rather than “and then we add an arbitrary weighing to make the numbers come out right”) and maybe to others.
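As an illustrative sketch of the two apportionment orders described above (the universes, observer counts, and priors here are made-up toy numbers, not anything from the comment):

```python
# Two hypothetical universes with equal prior, differing in observer count.
# For simplicity, every observer is "in your epistemic situation".
priors = {"small": 0.5, "big": 0.5}   # prior over possible universes
observers = {"small": 1, "big": 100}  # observer count in each universe

# SSA: first apportion probability mass to each universe, then divide it
# among that universe's observers. Summing back over observers leaves the
# universe-level probabilities at the prior.
ssa_per_observer = {u: priors[u] / observers[u] for u in priors}
ssa_universe = {u: ssa_per_observer[u] * observers[u] for u in priors}

# SIA: apportion directly among observers, irrespective of which universe
# they are in, then normalize. Universes with more observers get more mass.
raw = {u: priors[u] * observers[u] for u in priors}
total = sum(raw.values())
sia_universe = {u: raw[u] / total for u in raw}

print(ssa_universe)  # universe probabilities unchanged from the prior
print(sia_universe)  # "big" universe up-weighted by its observer count
```

Under these toy numbers SSA leaves each universe at 0.5, while SIA gives the 100-observer universe 100/101 of the mass, which is exactly the "weighting by observer count" difference the comment describes.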

Comment by antanaclasis on The salt in pasta water fallacy · 2023-03-27T23:22:38.632Z · LW · GW

If you’re adding the salt after you turn on the burner then it doesn’t actually add to the heating+cooking time.

Comment by antanaclasis on Don't take bad options away from people · 2023-03-26T20:40:25.536Z · LW · GW

To steelman the anti-sex-for-rent case, it could be considered that after the tenant has entered into that arrangement, the tenant could feel pressure to keep having sex with the landlord (even if they would prefer not to and would not at that later point choose to enter the contract) due to the transfer cost of moving to a new home. (Though this also applies to monetary rent, the potential for threatening the boundaries of consent is generally seen as more harmful than threatening the boundaries of one’s budget.)

This could also be used as a point of leverage by the landlord to e.g. pressure the tenant to engage in sex acts they would otherwise not want to or else be evicted (unless the contract specifies from the beginning exactly what kind of sex the payment will entail). I think many people would see such actions by the landlord as more of an infringement upon the tenant than e.g. raising the amount of monetary rent (sacredness of sex/consent).

Additionally, this could be seen as a specific manifestation of the modern trend of more general opposition to sexual relationships with a power imbalance between the participants.

(Parenthetically, I also want to thank you for writing this post, as it’s a good expression of a principle I generally agree with)

Comment by antanaclasis on Rationality-related things I don't know as of 2023 · 2023-02-13T04:12:53.296Z · LW · GW

In terms of similarity between telling the truth and lying, think about how much of a change you would have to make to the mindset of a person at each level to get them to level 1 (truth):

Level 2: they’re already thinking about world models, you just need to get them to cooperate with you in seeking the truth rather than trying to manipulate you.

Level 3: you need to get them the idea of words as having some sort of correspondence with the actual world, rather than just as floating tribal signifiers. After doing that, you still have to make sure that they are focusing on the truth of those words, like the level 2 case.

Level 4: the hardest of them all; you need to get them the idea of words having any sort of meaning in the first place, rather than just being certain patterns of mouth movements that one does when it feels like the right time to do so. After doing that, you again still have the whole problem of making sure that they focus on truth instead of manipulation or tribal identity.

For a more detailed treatment of this, see Zvi’s https://thezvi.wordpress.com/2020/09/07/the-four-children-of-the-seder-as-the-simulacra-levels/

Comment by antanaclasis on Conflict Theory of Bounded Distrust · 2023-02-13T03:09:50.299Z · LW · GW

Re: “best vs better”: claiming that something is the best can be a weaker claim than claiming that it is better than something else. Specifically, if two things are of equal quality (and not surpassed) then both are the best, but neither is better than the other.

Apocryphally, I’ve heard that certain types of goods are regarded by regulatory agencies as being of uniform quality, such that there’s not considered to be an objective basis for claiming that your brand is better than another. However, you can freely claim that yours is the best, as there is similarly no objective basis on which to prove that your product is inferior to another (as would be needed to show that it is not the best).

Comment by antanaclasis on Why Are Bacteria So Simple? · 2023-02-06T23:29:31.869Z · LW · GW

One other mechanism that would lead to the persistence of e.g. antibiotic resistance would be when the mutation that confers the resistance is not costly (e.g. a mutation which changes the shape of a protein targeted by an antibiotic to a different shape that, while equally functional, is not disrupted by the antibiotic). Note that I don’t actually know whether this mechanism is common in practice.

Comment by antanaclasis on Three Fables of Magical Girls and Longtermism · 2022-12-04T18:50:15.283Z · LW · GW

Thanks for writing this nice article. Also thanks for the “Qualia the Purple” recommendation. I’ve read it now and it really is great.

In the spirit of paying it forward, I can recommend https://imagakblog.wordpress.com/2018/07/18/suspended-in-dreams-on-the-mitakihara-loopline-a-nietzschean-reading-of-madoka-magica-rebellion-story/ as a nice analysis of themes in PMMM.

Comment by antanaclasis on In Defence of Temporal Discounting in Longtermist Ethics · 2022-11-14T18:34:24.447Z · LW · GW

It seems like this might be double-counting uncertainty? Normal EV-type decision calculations already (should, at least) account for uncertainty about how our actions affect the future.

Adding explicit time-discounting seems like it would over-adjust in that regard, with the extra adjustment (time) just being an imperfect proxy for the first (uncertainty), when we only really care about the uncertainty to begin with.

Comment by antanaclasis on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-07T21:16:28.809Z · LW · GW

Indeed humans are significantly non-aligned. In order for an ASI to be non-catastrophic, it would likely have to be substantially more aligned than humans are. This is probably less-than-impossible due to the fact that the AI can be built from the get-go to be aligned, rather than being a bunch of barely-coherent odds and ends thrown together by natural selection.

Of course, reaching that level of alignedness remains a very hard task, hence the whole AI alignment problem.

Comment by antanaclasis on Covid 6/2/22: Declining to Respond · 2022-06-04T00:52:39.473Z · LW · GW

“I had another thing planned for this week, but turned out I’d already written a version of it back in 2010”

What is the post that this is referring to, and what prompted thinking of those particular ideas now?

Comment by antanaclasis on Reflections on My Own Missing Mood · 2022-04-21T21:07:54.853Z · LW · GW

I see it in a similar light to “would you rather have more or fewer cells in your body?”. If you made me choose I probably would rather have more, but only insofar as having fewer might be associated with certain bad things (e.g. losing a limb).

Correspondingly, I don’t care intrinsically about e.g. how much algae exists except insofar as that amount being too high or low might cause problems in things I actually care about (such as human lives).

Comment by antanaclasis on How dath ilan coordinates around solving alignment · 2022-04-13T07:06:36.062Z · LW · GW

Seeing the relative lack of pickup in terms of upvotes, I just want to thank you for putting this together. I’ve only read a couple of Dath Ilan posts, and this provided a nice coverage of the AI-in-Dath-Ilan concepts, many of the specifics of which I had not read previously.

Comment by antanaclasis on 20 Modern Heresies · 2022-04-04T03:33:05.841Z · LW · GW

My understanding of it is that there is conflict between different “types” of the mixed population based on e.g. skin lightness and which particular blend of ethnic groups makes up a person’s ancestry.

EDIT: my knowledge on this topic mostly concerns Mexico, but should still generally apply to Brazil.

Comment by antanaclasis on Soares, Tallinn, and Yudkowsky discuss AGI cognition · 2021-12-03T08:01:00.261Z · LW · GW

That PDF seems like it is a part of a spoken presentation (it’s rather abbreviated for a standalone thing). Does there exist such a presentation? If so, I was not successful in finding it, and would appreciate it if you could point it out.

Comment by antanaclasis on Visible Thoughts Project and Bounty Announcement · 2021-11-30T08:37:16.677Z · LW · GW

I similarly offer myself as an author, in either the dungeon master or player role. I could possibly get involved in the management or technical side of things, but would likely not be effective in heading a project (for similar reasons to Brangus), and do not have practical experience in machine learning.

I am best reached through direct message or comment reply here on LessWrong, and can provide other contact information if someone wants to work with me.

Comment by antanaclasis on How much Bayesian evidence from rapid antigen and PCR tests? · 2021-11-24T09:51:43.676Z · LW · GW

The main post of what amounts of evidence different tests give is this one: https://www.lesswrong.com/posts/cEohkb9mqbc3JwSLW/how-much-should-you-update-on-a-covid-test-result

Also related is part of this post from Zvi (specifically the section starting “Michael Mena”): https://www.lesswrong.com/posts/CoZitvxi2ru9ehypC/covid-9-9-passing-the-peak

Combining the information from the two, it seems like insofar as you care about infectivity rather than the person having dead virus RNA still in their body, the actual amount of evidence from rapid antigen tests will be higher than the amounts given in the first post. There’s a good case to be made that the sensitivity with respect to infectiousness would be 95% or more, though I am not aware of any research directly addressing this question.
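To make the “amount of evidence” framing concrete, here is a minimal sketch of how a test result translates into a Bayes factor and a posterior. The sensitivity, specificity, and prior below are hypothetical placeholders, not figures from the linked posts:

```python
# Toy Bayes-factor calculation for a positive rapid antigen test,
# treating "infectious" as the hypothesis of interest.
sensitivity = 0.95   # P(positive | infectious) -- assumed for illustration
specificity = 0.99   # P(negative | not infectious) -- assumed for illustration

# Bayes factor (likelihood ratio) carried by a positive result:
lr_positive = sensitivity / (1 - specificity)  # roughly 95:1 in favor

# Update prior odds to posterior odds, then convert back to a probability.
prior_infectious = 0.10  # assumed prior before testing
prior_odds = prior_infectious / (1 - prior_infectious)
posterior_odds = prior_odds * lr_positive
posterior_prob = posterior_odds / (1 + posterior_odds)

print(lr_positive)
print(posterior_prob)
```

The point in the comment then corresponds to raising the sensitivity input when the question is “infectious now” rather than “has any viral RNA”, which increases the evidence carried by a negative result in particular.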

Comment by antanaclasis on The Maker of MIND · 2021-11-22T18:40:54.272Z · LW · GW

This is a good piece of writing. It reminds me of another piece of fiction (somewhat happier in tone) which I cannot find again. The plot involves a woman trying to rescue her boyfriend from a nemesis in a similar AI-managed world. I think it involves her jumping out of a plane, and landing in the garden of someone who eschews AI-protection for his garden, rendering it vulnerable to destruction without his consent. Does anyone recall the name/location of this story?

Comment by antanaclasis on Bayeswatch 12: The Singularity War · 2021-10-10T05:49:46.280Z · LW · GW

Copyediting: “Miriam removed off her cornea too” should probably not have the “off”.

Comment by antanaclasis on LessWrong is providing feedback and proofreading on drafts as a service · 2021-09-08T02:13:27.212Z · LW · GW

The part about hiring proofreading brought a question to mind: where does the operating budget for the LessWrong website come from, both for stuff like that and standard server costs?

Comment by antanaclasis on Fire Law Incentives · 2021-07-23T14:12:55.672Z · LW · GW

Do you have any recommendations of such stories?

Comment by antanaclasis on Working With Monsters · 2021-07-20T22:26:04.866Z · LW · GW

If you also consider the indirect deaths due to the collapse of civilization, I would say that 95% lies within the realm of reason. You don’t need anywhere close to 95% of the population to be fully affected by the scissor to bring about 95% destruction.

Comment by antanaclasis on Re: Competent Elites · 2021-07-15T17:46:51.292Z · LW · GW

Sorry if I was ambiguous in my remark. The comparison that I’m musing about is between “fierce” vs “not fierce” nerds, with no particular consideration of those who are not nerds in the first place.

Comment by antanaclasis on Re: Competent Elites · 2021-07-15T15:30:08.032Z · LW · GW

It’s interesting to read posts like this and “Fierce Nerds” while myself being much less ambitious/fierce/driven than the objects of said essays. I wonder what other psychological traits are associated with the difference between those who are more vs less ambitious/fierce/driven, other things being equal.

Comment by antanaclasis on Musing on the Many Worlds Hypothesis · 2021-07-06T14:04:20.411Z · LW · GW

Nice poem! It’s cool to see philosophical and mathematical concepts expressed through elegant language, though it is somewhat less common, due to the divergence of interests and skills.

Comment by antanaclasis on What are examples of the opposite of perverse incentives? · 2021-06-18T18:20:02.151Z · LW · GW

I’d say a lot of domains have reasonably-aligned incentives a lot of the time, but that’s a boring non-answer. For a specific example, there’s the classic case of how whenever I go to the grocery store, I’m presented with a panoply of cheap, good quality foodstuffs available for me to purchase. The incentives along the chain from production -> store -> me are reasonably well-aligned.

Comment by antanaclasis on Which rationalists faced significant side-effects from COVID-19 vaccination? · 2021-06-15T07:17:20.585Z · LW · GW

J&J (1 shot): mild tiredness the next day, no other symptoms.

Comment by antanaclasis on How to Write Science Fiction and Fantasy - A Short Summary · 2021-05-30T15:09:43.559Z · LW · GW

Thanks for the summary. A minor copyediting note: the sentence «They begin as the caracter becomes uncontent with their situation, and» cuts off part way.

Comment by antanaclasis on Curated conversations with brilliant rationalists · 2021-05-30T14:56:15.289Z · LW · GW

Is there anywhere that there are transcripts available for these conversations?

Comment by antanaclasis on Questions are tools to help answerers optimize utility · 2021-05-25T00:07:41.291Z · LW · GW

Copyediting note: it appears that the parenthetical statement <(Note: agent here just means “being”, not> got cut off.

Comment by antanaclasis on Social host liability · 2021-05-23T00:38:25.927Z · LW · GW

I think it is? That was kind of the implication that I read into it at least.

Comment by antanaclasis on People Will Listen · 2021-04-12T02:49:04.512Z · LW · GW

You mention the EA investing group. Where is that? A cursory search didn’t seem to bring anything up. Also, more generally speaking, what would be your top few recommendations of places to keep up with the latest rationalist investment advice?

Comment by antanaclasis on Violating the EMH - Prediction Markets · 2021-04-12T02:42:03.529Z · LW · GW

On this note, I would definitely be willing to pay premium to be part of a fund run by a rationalist who’s more intimately involved with the crypto and prediction markets than I am, and would thereby be able to get significantly more edge than I currently can.

Comment by antanaclasis on Eric Raymond's Shortform · 2021-03-26T16:20:39.662Z · LW · GW

It would definitely be neat to read a history of that sort. Having myself not read many of the books that Eliezer references as forerunners, that area of history is one that I at least would like to learn more about.