Posts

[SP] The Edge of Morality 2024-03-27T21:38:51.827Z
AI Risk and the US Presidential Candidates 2024-01-06T20:18:04.945Z
Deception Chess: Game #2 2023-11-29T02:43:22.375Z
Glomarization FAQ 2023-11-15T20:20:49.488Z
Suggestions for chess puzzles 2023-11-13T15:39:37.968Z
Deception Chess: Game #1 2023-11-03T21:13:55.777Z
Lying to chess players for alignment 2023-10-25T17:47:15.033Z
What is an "anti-Occamian prior"? 2023-10-23T02:26:10.851Z
Eliezer's example on Bayesian statistics is wr... oops! 2023-10-17T18:38:18.327Z

Comments

Comment by Zane on Express interest in an "FHI of the West" · 2024-04-22T06:18:59.079Z · LW · GW

Do you have any specific examples of what this new/rebooted organization would be doing?

Comment by Zane on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-01T18:12:06.647Z · LW · GW

It sounds odd to hear the "even if the stars should die in heaven" song with a different melody than I had imagined when reading it myself.

I would have liked to hear the Tracey Davis "from darkness to darkness" song, but I think that was canonically just a chant without a melody. (Although I imagined a melody for that as well.)

Comment by Zane on [SP] The Edge of Morality · 2024-03-28T17:58:24.413Z · LW · GW

...why did someone promote this to a Frontpage post.

Comment by Zane on Intuition for 1 + 2 + 3 + … = -1/12 · 2024-03-13T01:59:21.202Z · LW · GW

If I'm understanding correctly, the argument here is:

A) [formula lost in formatting]

B) [formula lost in formatting]

C) [formula lost in formatting]

Therefore, 1 + 2 + 3 + … = -1/12.

First off, this seems to have an implicit assumption that the limit of a sum is the sum of the limits - that is, lim f(n) + lim g(n) = lim (f(n) + g(n)) as n goes to infinity.

I think this assumption is true for any functions f and g, but I've learned not to always trust my intuitions when it comes to limits and infinity; can anyone else confirm this is true?
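As a partial answer to my own question: the identity does hold whenever both limits exist and are finite, but it isn't even well-defined when they don't. A quick sympy sanity check (my own sketch, not from the post):

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# Both limits exist and are finite: lim f + lim g == lim (f + g).
f = 1 + 1/n
g = 2 - 1/n**2
assert sp.limit(f, n, sp.oo) + sp.limit(g, n, sp.oo) == sp.limit(f + g, n, sp.oo)

# But with f = n and g = -n, the combined limit is 0 while the individual
# limits are oo and -oo, so "lim f + lim g" is an undefined oo - oo form.
print(sp.limit(n - n, n, sp.oo))                      # 0
print(sp.limit(n, n, sp.oo), sp.limit(-n, n, sp.oo))  # oo, -oo
```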

Second, A seems to depend on the relative sizes of the infinities, so to speak. If j and k are large but finite numbers, then the expression comes out close to -1/12 if and only if j is substantially greater than k; if k is close to or larger than j, it comes out much less than, or much greater than, -1/12.

I'm not sure exactly how this works when it comes to infinities - does the infinity on the sum have to be larger than the infinity on the limit for this to hold? I'm pretty sure what I just said was nonsense; is there a non-nonsensical version?
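One way I've seen the "relative sizes of the infinities" made precise elsewhere is with a regulator: classically, the smoothed sum of n*e^(-n*eps) equals 1/eps^2 - 1/12 + O(eps^2), so -1/12 is exactly what's left after the divergent piece is subtracted. A numeric sketch of that textbook identity (not necessarily the post's argument):

```python
import math

# Smoothed sum S(eps) = sum_{n>=1} n * exp(-n * eps).
# Classical asymptotics: S(eps) = 1/eps**2 - 1/12 + O(eps**2).
def smoothed_sum(eps: float, terms: int = 100_000) -> float:
    return sum(n * math.exp(-n * eps) for n in range(1, terms))

for eps in [0.1, 0.03, 0.01]:
    print(eps, smoothed_sum(eps) - 1 / eps**2)
# The differences approach -1/12 = -0.08333... as eps shrinks.
```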

In conclusion, I don't know how infinities work and hope someone else does.

Comment by Zane on Job Listing: Managing Editor / Writer · 2024-02-22T18:11:10.523Z · LW · GW

I think I could be a good fit as a writer, but I don't have much in the way of writing experience I can show you. Do you have any examples of what someone in this position would be focusing on? I'm happy to write up a couple pieces to demonstrate my abilities.

Comment by Zane on If Clarity Seems Like Death to Them · 2024-01-15T20:40:03.690Z · LW · GW

The question, then, is whether a given person is just an outlier by coincidence, or whether the underlying causal mechanisms that created their personality actually are coming from some internal gender-variable being flipped. (The theory being, perhaps, that early-onset gender dysphoria is an intersex condition, to quote the immortal words of a certain tribute band.)

If it was just that biological females sometimes happened to have a couple traits that were masculine - and these traits seemed to be at random, and uncorrelated - then that wouldn't imply anything beyond "well, every distribution has a couple outliers." But when you see that lesbians - women who have the typically masculine trait of attraction to women - are also unusually likely to have other typically masculine traits - then that implies that there's something else going on. Such as, some of them really do have "male brains" in some sense.

And there are so many different personality traits that are correlated with gender (at least 18, according to the test mentioned above, and probably many more that can't be tested as easily) that it's very unlikely someone would have an opposite-sex personality just by chance alone. That's why I'd guess that a lot of the feminine "men" and masculine "women" really do have some sort of intersex condition where their gender-variable is flipped. (Although there are some cultural confounders too, like people unconsciously conforming to stereotypes about how gay people act.)
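As a toy illustration of the "chance alone" claim, under strong simplifying assumptions (18 independent, normally distributed traits with a hypothetical between-sex gap of d = 0.5 on each; real traits are correlated and the effect sizes vary):

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, trials = 18, 0.5, 200_000  # hypothetical: 18 traits, gap of d = 0.5 each

# Model females as N(0, 1) and males as N(d, 1) on every trait, and ask
# how often a female's average trait score is at least as male-typical
# as the median male's.
females = rng.normal(0.0, 1.0, size=(trials, k))
print(np.mean(females.mean(axis=1) >= d))
# ~0.017 with these numbers; with d = 1.0 it falls to roughly 1e-5.
```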

I completely agree that dividing everyone between "male" and "female" isn't enough to capture all the nuance associated with gender, and would much prefer that we used more words than that. But if, as seems to often be expected by the world, we have to approximate all of someone's character traits all with only a single binary label... then there are a lot of people for whom it's more accurate to use the one that doesn't match their sex.

Comment by Zane on If Clarity Seems Like Death to Them · 2024-01-12T23:28:42.387Z · LW · GW

Fair. I do indeed endorse the claim that Aella, or other people who are similar in this regard, can be more accurately modelled as a man than as a woman - that is to say, if you're trying to predict some yet-unmeasured variable about Aella that doesn't seem to be affected by physical characteristics, you'll have better results by predicting her as you would a typical man, than as you would a typical woman. Aella probably really is more of a man than a woman, as far as minds go.

But your mentioning this does make me realize that I never really had a clear meaning in mind when I said "society should consider such a person to be a woman for most practical purposes." When I try to think of ways that men and women should be treated differently, I mostly come up blank. And the ways that do come to mind are mostly about physical sex rather than gender - e.g. sports. I guess my actual position is "yeah, Aella is probably male with regard to personality, but this should not be relevant to how society treats ?her."

Comment by Zane on If Clarity Seems Like Death to Them · 2024-01-12T20:30:24.433Z · LW · GW

If a person has a personality that's pretty much female, but a male body, then thinking of them as a woman will be a much more accurate model of them for predicting anything that doesn't hinge on external characteristics. I think the argument that society should consider such a person to be a woman for most practical purposes is locally valid, even if you reject that the premise is true in many cases.

Comment by Zane on If Clarity Seems Like Death to Them · 2024-01-08T19:28:30.002Z · LW · GW

Previously, I had already thought it was nuts that trans ideology was exerting influence on the rearing of gender-non-conforming children—that is, children who are far outside the typical norm of behavior for their sex: very tomboyish girls and very effeminate boys.

Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: sex-atypical childhood behavior between gay and straight adults has been meta-analyzed at Cohen's d ≈ 1.31 standard deviations for men and d ≈ 0.96 for women.) A solid majority of children diagnosed with gender dysphoria ended up growing out of it by puberty. In the culture of the current year, it seemed likely that a lot of those kids would instead get affirmed into a cross-sex identity at a young age, even though most of them would have otherwise (under a "watchful waiting" protocol) grown up to be ordinary gay men and lesbians.

I think I might be confused about what your position is here. As I understood the two-type taxonomy theory, the claim was that while some "trans women" really were unusually feminine compared to typical men, most of them were just non-feminine men who were blinded into transitioning by autogynephilia. But the early-onset group, as I understood the theory, were the ones who really were trans? Your whole objection to people classifying autogynephilic people as "trans women" was that they didn't actually have traits drawn from a female distribution, and so modelling them as women would be less accurate than modelling them as men. But if members of the early-onset group really do behave in a way more typical of femininity than masculinity, then that would mean they essentially are "women on the inside, men on the outside."

Am I missing something about your views here?

Comment by Zane on AI Risk and the US Presidential Candidates · 2024-01-08T17:57:08.939Z · LW · GW

Maybe the chance that Kennedy wins, given a typical election between a Republican and a Democrat, is too low to be worth tracking. But this election seems unusually likely to have off-model surprises - Biden dies, Trump dies, Trump gets arrested, Trump gets kicked off the ballot, Trump runs independently, controversy over voter fraud, etc. If something crazy happens at the last minute, people could end up voting for Kennedy.

If you think the odds are so low, I'll bet my 10 euros against your 10,000 that Kennedy wins. (Normally I'd use US dollars, but the value of a US dollar in 2024 could change based on who wins the election.)

Comment by Zane on AI Risk and the US Presidential Candidates · 2024-01-07T21:01:05.097Z · LW · GW

Unfortunately, I don't have the time to research more than a thousand candidates across the country, and there are probably only 1 or 2 LessWrongers in most congressional districts. But I encourage everyone to research the candidates' views on AI for whichever congressional elections you're personally able to vote in.

Comment by Zane on AI Risk and the US Presidential Candidates · 2024-01-07T20:53:14.696Z · LW · GW

I'm not denying that the military and government are secretive. But there's a difference between keeping things from the American people, and keeping them from the president. When it comes to whether the president controls the military and nuclear arsenal, that's the sort of thing that the military can't lie about without substantial risk to the country.

Let's say the military tries to keep the keys to the nukes out of the president's hands - by, say, giving them fake launch codes. Then they're not just taking away the power of the president, they're also obfuscating under which conditions the US will fire nukes. The primary purpose of nuclear weapons is to pose a clear threat to other countries, to be able to say "if these specific conditions happen (i.e. you shoot nukes at us), our government will attack you." And the only thing that keeps someone from getting confused about those conditions and firing off a nuke at the wrong time is that other countries have a clear picture of what those conditions are, and know what to avoid.

Everyone has to be on the same page for the system to function. If the US president believes different things about when the nukes will be fired than the actual truth known to the military leaders, then you're muddying the picture of how the nuclear deterrent works. What happens if the president threatens to nuke Russia, and the military secretly isn't going to follow through? What happens if the president actually does give the order, and someone countermands it? Most importantly, what happens if different countries come to different conclusions about what the rules are - say, North Korea thinks the president really does have the power to launch nukes, but Russia goes through the same reasoning steps as you did, and realizes they don't? If different people have different pictures of what's going on, then you risk nuclear war.

And if your theory is that everyone in the upper levels of every nation's government does know these things, even the US president, and they just don't tell the public - well, that's not a stable situation either. It doesn't take long for someone to spill the truth. Suppose Trump gets told he's not allowed to launch the nukes, and gets upset and decides to tell everyone on Truth Social. Suppose Kim learns the US president's not allowed to launch the nukes, and decides to tell the world about that in order to discredit the US government. It's not possible to keep a secret like that; it requires the cooperation of too many people who can't be trusted.

A similar argument applies to a lot of the other things that one could theorize the president is secretly not allowed to do. The president's greatest powers don't come from having a button they can press to make something happen, they come from the public believing that they can make things happen. Let's say the president signs a treaty to halt advanced AI development, and some other government entity wants to say, "Actually, no, we're ignoring that and letting everyone keep developing whatever AI systems they want." Well, how are they supposed to go about doing that? They can't publicly say that they're overriding the president's order, and if they try to secretly tell major American AI labs to keep going with their research, then it doesn't take long for a whistleblower to come forward. The moment the president signs something, then the American people believe it's the law, and in most cases, that actually makes it become the true law.

I'd definitely want to hear suggestions as to who else in the government you think would have a lot of influence regarding this sort of thing. But the president has more influence than anyone else in the court of public opinion, and there's very little that anyone else in the government can do to stop that.

Comment by Zane on AI Risk and the US Presidential Candidates · 2024-01-07T19:56:19.388Z · LW · GW

I wouldn't entirely dismiss Kennedy just yet; he's polling better than any independent or third party candidate since Ross Perot. That being said, I do agree that his chances are quite low, and I expect I'll end up having to vote for one of the main two candidates.

Comment by Zane on AI Risk and the US Presidential Candidates · 2024-01-07T01:57:11.123Z · LW · GW

The president might not hold enough power to singlehandedly change everything, but they still probably have more power than pretty much any other individual. And lobbying them hasn't been all that ineffective in the past; the AI safety crowd seems to have been involved in the original executive order. I'd expect there to be more progress if we can get a president who's sympathetic to the cause.

Comment by Zane on AI Risk and the US Presidential Candidates · 2024-01-07T01:46:28.743Z · LW · GW

Ah. I don't think the writers meant that in terms of ASI killing everyone, but yeah, it's kind of related.

Comment by Zane on "AI Alignment" is a Dangerously Overloaded Term · 2023-12-16T20:34:07.419Z · LW · GW

I think that Eliezer, at least, uses the term "alignment" solely to refer to what you call "aimability." Eliezer believes that most of the difficulty in getting an ASI to do good things lies in "aimability" rather than "goalcraft." That is, getting an ASI to do anything, such as "create two molecularly identical strawberries on a plate," is the hard part, while deciding what specific thing it should do is significantly easier.

That being said, you're right that there are a lot of people who use the term differently from how Eliezer uses it.

Comment by Zane on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-07T20:31:17.807Z · LW · GW

I'm not sure what the current algorithm is other than a general sense of "posts get promoted more if they're more recent," but it seems like it could be a good idea to just round it all up so that everything posted between 0 and N hours ago is treated as equally recent, so that time-of-day effects aren't as strong.

Not sure about the exact value of N... 6? 12? It probably depends on what the current function is, and what the current cycle of viewership by time of day looks like. Does LW keep stats on that?
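To make that concrete: assuming an HN-style formula (score proportional to karma divided by a power of age; I don't know LW's real function), the change is just clamping the age from below:

```python
def hotness(karma: float, age_hours: float,
            flatten_hours: float = 6.0, gravity: float = 1.15) -> float:
    """HN-style decayed score, hypothetical -- not LessWrong's actual formula.
    Ages below `flatten_hours` are treated as equally recent, so the time
    of day a post went up matters less."""
    effective_age = max(age_hours, flatten_hours)
    return karma / (effective_age + 2) ** gravity

# A post from 1 hour ago and one from 5 hours ago now score identically:
print(hotness(20, 1.0), hotness(20, 5.0), hotness(20, 12.0))
```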

Comment by Zane on Thoughts on teletransportation with copies? · 2023-12-06T21:42:41.979Z · LW · GW

Q3: $50, Q4: $33.33

The answers that immediately come to mind for me for Q1 and Q2 are 50% and 33.33%, though it depends how exactly we're defining "probability" and "you"; the answer may very well be "~1" or "ill-formed question".

The entities that I selfishly care about are those who have the patterns of consciousness that make up "me," regardless of what points in time said "me"s happen to exist at. $33.33 maximizes utility across all the "me"s if they're being weighted evenly, and I don't see any particular reason to weight them differently (I think they exist equally as much, if that's even a coherent statement).
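(My arithmetic, under my reading of the questions - one of n successor-"me"s receives a $100 prize, and I weight the successors evenly:)

```python
def fair_ticket_price(prize: float, n_successors: int) -> float:
    # With each successor-"me" weighted 1/n and exactly one of them
    # receiving the prize, the evenly weighted value is prize / n.
    return prize / n_successors

print(fair_ticket_price(100, 2))  # 50.0     (Q3)
print(fair_ticket_price(100, 3))  # 33.33... (Q4)
```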

What confusions do you have here?

<obligatory pointless nitpicking>Does this society seriously still use cash despite the existence of physical object duplicators?</obligatory pointless nitpicking>

Comment by Zane on Deception Chess: Game #2 · 2023-12-03T02:12:59.601Z · LW · GW

It takes a lot of time for advisors to give advice, the player has to evaluate all the suggestions, and there's often some back-and-forth discussion. That's far too much to fit into less than a minute per move.

Comment by Zane on Deception Chess: Game #2 · 2023-11-29T20:37:21.470Z · LW · GW

Conor explained some details about notation during the opening, and I explained a bit as well. (I wasn't taking part in the discussion about the actual game, of course, just there to clarify the rules.)

Comment by Zane on Deception Chess: Game #2 · 2023-11-29T16:12:03.163Z · LW · GW

Agree with Bezzi. Confusion about chess notation and game rules wasn't intended to happen, and I don't think it applies very well to the real-world example. Yes, the human in the real world will be confused about which actions would achieve their goals, but I don't think they're very confused about what their goals are: create an aligned ASI, with a clear success/failure condition of whether we end up alive.

You're correct that the short time control was part of the experimental design for this game. I was remarking on how this game is probably not as accurate of a model of the real-world scenario as a game with longer time controls, but "confounder" was probably not the most accurate term.

Comment by Zane on Deception Chess: Game #2 · 2023-11-29T15:59:51.573Z · LW · GW

Thanks, fixed.

Comment by Zane on AI debate: test yourself against chess 'AIs' · 2023-11-24T17:47:36.961Z · LW · GW

(Puzzle 1)

I'm guessing that the right move is Qc5.

At the end of the Qxb5 line (after a4), White can respond with Rac1, to which Black doesn't really have a good response. b6 gets in trouble with the d6 discovery, and Nd2 just loses a pawn after Rxc7 Nxb2 Rxb7 - Black may have a passed pawn on a4, but I doubt it's enough not to lose.

That being said, that wasn't actually what made me suspect Qc5 was right. It's just that Qxb5 feels like a much more natural, more human move than Qc5. Before I even looked at any lines, I thought, "well, this looks like Richard checked with a computer, and it found a move better than the flawed one he thought of: Qc5." Maybe this is even a position from a game Richard played, where the engine suggested Qc5 when he was analyzing it afterwards, or something like that.

I'm only about... 60% confident in that theory, but if I am right, it'll... kind of invalidate the experiment for me, because the factor of "does it feel like a human move" isn't something that's supposed to be considered. Unfortunately, I'm not that good at making my brain ignore that factor and analyze the position without it.

Hoping I'm wrong; if it turns out "check if it feels human" isn't actually helpful, I'll hopefully be able to analyze other puzzles without paying attention to that.

Comment by Zane on Glomarization FAQ · 2023-11-15T23:15:20.111Z · LW · GW

Because I want to keep the option of being able to make promises. This way, people can trust that, while I might not answer every question they ask, the things that I do say to them are the truth. If I sometimes lie to them, that's no longer the case, and I'm no longer able to trustworthily communicate at all.

Meta-honesty is an alternate proposed policy that could perhaps reduce some of the complication, but I think it only adds new complication because people have to ask you questions on the meta level whenever you say something for which they might suspect a lie. That being said, I also do stick to meta-honesty rules, and am always willing to discuss why I'm Glomarizing about something or my general policies about lying.

Comment by Zane on Deception Chess: Game #1 · 2023-11-11T17:02:21.808Z · LW · GW

Thanks, fixed.

Comment by Zane on Lying to chess players for alignment · 2023-11-08T13:36:30.665Z · LW · GW

If B were the same level as A, then they wouldn't pose any challenge to A; A would be able to beat them on their own without listening to the advice of the Cs.

Comment by Zane on Deception Chess: Game #1 · 2023-11-07T16:30:00.894Z · LW · GW

I saw it fine at first, but after logging out I got the same error. Looks like you need a Chess.com account to see it.

Comment by Zane on Deception Chess: Game #1 · 2023-11-06T14:38:28.069Z · LW · GW

Thanks, fixed.

Comment by Zane on Lying to chess players for alignment · 2023-10-28T02:07:32.349Z · LW · GW

I've created a Manifold market if anyone wants to bet on what happens. If you're playing in the experiment, you are not allowed to make any bets/trades while you have private information (that is, while you are in a game, or if I haven't yet reported the details of a game you were in to the public.)

https://manifold.markets/Zane_3219/will-chess-players-win-most-of-thei

Comment by Zane on Lying to chess players for alignment · 2023-10-27T23:46:59.725Z · LW · GW

The problem is that while the human can give some rationalizations as to "ah, this is probably why the computer says it's the best move," it's not the original reasoning that generated those moves as the best option, because that took place inside the engine. Some of the time, looking ahead with computer analysis is enough to reproduce the original reasoning - particularly when it comes to tactics - but sometimes they would just have to guess.

Comment by Zane on Lying to chess players for alignment · 2023-10-27T23:41:57.383Z · LW · GW

[facepalms] Thanks! That idea did not occur to me and drastically simplifies all of the complicated logistics I was previously having trouble with.

Comment by Zane on Lying to chess players for alignment · 2023-10-26T19:58:55.070Z · LW · GW

Sounds like a good strategy! ...although, actually, I would recommend you delete it before all the potential As read it and know what to look out for.

Comment by Zane on Lying to chess players for alignment · 2023-10-26T14:26:09.215Z · LW · GW

Agreed that it could be a bit more realistic that way, but the main constraint here is that we need a game with three distinct levels of players, where each level reliably beats the one below (see the sketch below for how large the gaps need to be). The element of luck in games like poker and backgammon makes that harder to guarantee (as suggested by the stats Joern_Stoller brought up). And another issue is that it'll be harder to find a lot of skilled players at different levels in any game that isn't as popular as chess - even if we find an obscure game that would in theory be a better fit for the experiment, we won't be able to find any Cs for it.
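For a sense of scale under the standard Elo model (a rough sketch that ignores draws), "reliably beats" requires gaps of several hundred rating points per tier:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score for player A (draws count as half)."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

for gap in [200, 400, 600, 800]:
    print(gap, round(elo_expected_score(1000 + gap, 1000), 3))
# 200 -> 0.76, 400 -> 0.909, 600 -> 0.969, 800 -> 0.99
```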

Comment by Zane on Lying to chess players for alignment · 2023-10-26T14:10:41.087Z · LW · GW

No computers, because the advisors should be reporting their own reasoning (or, 2/3 of the time, a lie that they claim is their own reasoning.) I would prefer to avoid explicit coordination between the advisors, because the AIs might not have access to each other in the real world, but I'm not sure at the moment whether player A can show the advisors each other's suggestions and ask for critiques. I would prefer not to give either dishonest advisor information on who the other two were, since the real-world AIs probably can't read each other's source code.

Comment by Zane on Lying to chess players for alignment · 2023-10-26T14:02:47.237Z · LW · GW

I was thinking I would test the players to make sure they really could beat each other as they should be able to. Good points on using blitz and doing the test afterwards; my main reason for preferring to do it beforehand is to confirm the rankings are accurate up front, rather than playing for weeks and only later realizing we were running the wrong test.

I wasn't thinking of much in the way of limits on what Cs could say, although possibly some limits on whether the Cs can see and argue against each other's advice. C's goal is pretty much just "make A win the game" or "make A lose the game" as applicable.

I'm definitely thinking a prototype would help. I've actually been contacted about applying for a grant to make this a larger experiment, and I was planning on first running a one-day game or two as a prototype before expanding it with more people and longer games.

Comment by Zane on Lying to chess players for alignment · 2023-10-26T13:52:38.233Z · LW · GW

Individual positions like that could be an interesting thing to test; I'll likely have some people try out some of those too.

I think the aspect where the deceivers have to tell the truth in many cases to avoid getting caught could make it more realistic, as in the real AI situation the best strategy might be to present a mostly coherent plan with a few fatal flaws.

Comment by Zane on Lying to chess players for alignment · 2023-10-25T22:14:06.952Z · LW · GW

Yeah, that's a bit of an issue. I think in real life you would have some back-and-forth ability between advisors, but the complexity and unknowns of the real world would create a qualitative difference between the conversation and an actual game - which chess doesn't have. Maybe we can either limit back-and-forth like you suggested, or just have short enough time controls that there isn't enough time for that to get too far.

Comment by Zane on Lying to chess players for alignment · 2023-10-25T21:14:37.829Z · LW · GW

Yes, if this were only about chess, then having the advisors play games with each other as A watched would help A learn who to trust. I'm saying that since the real-world scenario we're trying to model doesn't allow such a thing to happen, we artificially forbid this in the chess game to make it more like the real-world scenario. The prediction market thing, similarly, would require being able to do a test run so that the dishonest advisors could lose their money by the time A had to make a choice.

I don't think the advisors should be able to use chess engines, because then even the advisors themselves don't understand the reasoning behind what the chess engines are saying. The premise of the experiment involves the advisors telling A "this is my reasoning on what the best move is; try to evaluate if it's right."

Comment by Zane on Lying to chess players for alignment · 2023-10-25T19:31:31.668Z · LW · GW

Neither of these would be allowed, because in the real world, you can't do a bunch of test "games" before or during the actual "game." There's no way to perform a proposed alignment plan in a faraway galaxy, check whether that galaxy is destroyed, and make decisions for what to do on Earth based on that data - let alone perform so many of those tests to inform a prediction market based on what they say.

I would have allowed player A to consult a prediction market made by a bunch of other inexperienced players on who was really honest or lying. After all, in the real world, whoever was making the final decision on what plan to execute would be able to ask a prediction market what it thought. But the problem is that if I make a prediction market that's supposed to only be for other players around player A's level, somebody will just use a chess engine to cheat, bet in the market, and make it unrealistically accurate.

Comment by Zane on Lying to chess players for alignment · 2023-10-25T19:00:04.486Z · LW · GW

24 hours per move would make the experiment a lot more accurate, but I expect a lot of players might not be willing to play a game that could last several months. I'll ask everyone how long they can handle.

Comment by Zane on Lying to chess players for alignment · 2023-10-25T18:54:30.730Z · LW · GW

Unsure about the time controls at the moment; see my response to aphyer. The advisors would be able to give the A player justification for the move they've recommended.

The concern that A might not be able to understand the reasoning that the advisors give them is a valid one, and that's the whole point of the experiment! If A can't follow the reasoning well enough to determine whether it's good advice, then (says the analogy) people who are asking AIs how to solve alignment can't follow their reasoning well enough to determine whether it's good advice.

Comment by Zane on Lying to chess players for alignment · 2023-10-25T18:43:38.366Z · LW · GW

I think a time control of some sort would be helpful just so that it doesn't take a whole week, but I would prefer it to be a fairly long time control. Not long enough to play a whole new game, though, because that's not an option when it comes to alignment - in the analogy, that would be like actually letting loose the advisors' plans in another galaxy and seeing if the world gets destroyed.

I'm not sure exactly what the time control would be - maybe something like 4 hours on each side, if we're using standard chess time controls. I'm also thinking about using a less traditional method of time control - for example, on each move, the advisors have 4 minutes to compose their answers, and A has another 4 minutes to look them over and make a decision. But then it's hard to decide how much time it's fair to give B for each move - 4 minutes, 8 minutes, somewhere in between?

I don't think chess engines would be allowed; the goal is for the advisors to be able to explain their own reasoning (or a lie about their reasoning), and they can't do that if Stockfish reasons for them.

Comment by Zane on Lying to chess players for alignment · 2023-10-25T17:47:25.644Z · LW · GW

I can be any of A, B, or C. I've been playing chess for the past ten years, and my USCF rating was in the upper 1500s when I last played in-person a year ago. I'm usually available from 9PM-UTC to 2AM-UTC (afternoon to evening in American time zones) every day, and on Saturdays from 5PM-UTC to 2AM-UTC.

Comment by Zane on What is an "anti-Occamian prior"? · 2023-10-24T01:36:28.961Z · LW · GW

Does this apply at all to anything more probabilistic than just reversing the outcome of a single most likely hypothesis and the next bit(s) it outputs? An Occamian prior doesn't just mean "this is the shortest hypothesis; therefore it is true," it means hypotheses are weighted by their simplicity. It's possible for an Occamian prior to think the shortest hypothesis is most likely wrong, if there are several slightly longer hypotheses that have more probability in total.
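Toy numbers for that last point (my own illustration, with weights 2^-length as in a Solomonoff-style prior):

```python
# One length-5 hypothesis vs. four length-6 rivals that all predict the
# same next bit. Weight each hypothesis by 2**(-length), then normalize.
lengths = {"H0": 5, "H1": 6, "H2": 6, "H3": 6, "H4": 6}
weights = {h: 2.0 ** -l for h, l in lengths.items()}
z = sum(weights.values())
posterior = {h: w / z for h, w in weights.items()}

print(posterior["H0"])  # 1/3: individually the most probable hypothesis...
print(sum(p for h, p in posterior.items() if h != "H0"))  # 2/3: ...yet probably wrong
```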

Comment by Zane on VLM-RM: Specifying Rewards with Natural Language · 2023-10-23T15:22:31.499Z · LW · GW

That's terrifyingly cool! I notice that they usually fall over after having completed the assigned position; are you only rewarding them for being in the position at a particular point in time, after which there's nothing left to optimize for? Are you able to make them maintain a position for longer?
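To make the question concrete, a hypothetical sketch, with `pose_similarity` standing in for whatever the VLM reward signal actually is (I haven't seen the paper's reward code):

```python
# Hypothetical illustration, not the paper's actual reward code.

def terminal_reward(trajectory, goal, pose_similarity):
    # Only the final state is scored, so falling over afterwards is free.
    return pose_similarity(trajectory[-1], goal)

def per_step_reward(trajectory, goal, pose_similarity):
    # Every step is scored, so the agent is paid to *hold* the position.
    return sum(pose_similarity(state, goal) for state in trajectory)
```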

Comment by Zane on What is an "anti-Occamian prior"? · 2023-10-23T15:14:06.082Z · LW · GW

What exactly is an "event set" in this context? I don't think a hypothesis would necessarily correspond to a particular set of events that it permits, but rather to its own probability distribution over events. In that sense, an event set with no probabilities attached wouldn't be enough to specify which hypothesis you were talking about, because multiple hypotheses could correspond to the same set of permitted events while assigning very different probabilities to each of those events occurring.
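Concretely (my example): two coin hypotheses can permit exactly the same outcomes while disagreeing about every probability.

```python
# Two hypotheses over the same event set {H, T}: identical supports,
# different distributions -- so the event set alone can't pick one out.
fair_coin = {"H": 0.5, "T": 0.5}
biased_coin = {"H": 0.9, "T": 0.1}

assert set(fair_coin) == set(biased_coin)  # same permitted events...
assert fair_coin != biased_coin            # ...but different hypotheses
```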

Comment by Zane on Eliezer's example on Bayesian statistics is wr... oops! · 2023-10-18T00:09:37.857Z · LW · GW

Yeah, I discovered that part by accident at one point because I used the binomial distribution equation in a situation where it didn't really apply, but still got the right answer.

I would think the most natural way to write a likelihood function would be to divide by the integral from 0 to 1, so that the total area under the curve is 1. That way the integral from a to b gives the probability the hypothesis assigns to receiving a result between a and b. But all that really matters is the ratios, which stay the same even without that.
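For instance, with coin-flip data the unnormalized likelihood is p^h * (1-p)^t, and dividing by its integral over [0, 1] (a Beta function value) turns it into a density, but the ratios - and hence the inference - are unchanged. A quick check (my sketch):

```python
from scipy.integrate import quad

h, t = 7, 3  # say, 7 heads and 3 tails observed

def likelihood(p):
    return p**h * (1 - p)**t

# Normalize so the area under the curve on [0, 1] equals 1. (This
# integral is the Beta function B(h+1, t+1).)
area, _ = quad(likelihood, 0, 1)

def density(p):
    return likelihood(p) / area

# Now the integral from a to b is the probability assigned to p in [a, b]:
print(quad(density, 0.6, 0.8)[0])

# But the ratios don't care about the normalizing constant:
print(likelihood(0.7) / likelihood(0.5), density(0.7) / density(0.5))
```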

Comment by Zane on Fifty Flips · 2023-10-02T19:55:49.600Z · LW · GW

I got alternating THTHTHTHTH... for the first 28 flips, which I would have thought would be very unlikely on priors for the 80% rule. Are you sure that's an accurate description of the rule? It doesn't change halfway through?
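Back-of-envelope, under my guess at the intended reading (i.i.d. flips with an 80% chance of heads):

```python
# A specific 28-flip sequence with 14 heads and 14 tails, under i.i.d.
# flips at P(heads) = 0.8 (my guess at what "the 80% rule" means):
print(0.8**14 * 0.2**14)  # ~7.2e-12 -- hence my suspicion about the rule
```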

Comment by Zane on Final Words · 2023-08-04T02:24:12.933Z · LW · GW

I voted up on every comment in this chain on which someone stated that they voted it up, and down on every comment on this chain on which someone stated that they voted it down, removing votes when they cancelled out and using strong-votes instead when they added together. I regret to say that the comment by Dorikka seems to have had three more people say that they voted it up than that they voted it down, so although I gave it a strong upvote, I have only been able to replicate two-thirds of the original vote. I upvoted Dorikka's last comment on another post to bring the universe back into balance.

Comment by Zane on [deleted post] 2023-07-13T16:10:50.969Z

But wouldn't what Peano is capable of proving about your specific algorithm necessarily be "downstream" of the output of that algorithm itself? The Peano axioms are upstream, yes, but what Peano proves about a particular function depends on what that function is.