Posts

List of Probability Calibration Exercises 2022-01-23T02:12:41.798Z
Question Gravity 2022-01-13T06:30:56.013Z
"Rational Agents Win" 2021-09-23T07:59:09.072Z

Comments

Comment by KingSupernova on Vavilov Day Discussion Post · 2022-01-27T03:27:48.054Z · LW · GW

The framing wasn't an intentional choice; I wasn't considering that aspect when I made the comment. I haven't been privy to any of the off-LW conflict about it, so it wasn't something that I was primed to look out for. I'm not suggesting that there should be a community-wide standard (or that there shouldn't be); I intended it as "here's an idea that people may find interesting."

Comment by KingSupernova on Vavilov Day Discussion Post · 2022-01-27T01:27:01.184Z · LW · GW

Thoughts on having part of the holiday be "have tasty food easily accessible (perhaps within sight range) during the fast"?

Pros:

  • It's in keeping with the original story.
  • It can help us see the dangers of having instant gratification available, and let us practice our ability to resist short-term urges for long-term benefits.
  • If the goal of rationalist holidays is to help us feel like a community of our own, then this could help people feel more "special". Many religions have holidays that call for a fast, but as far as I know none of them expect adherents to tempt themselves.

Cons:

  • It makes the fast harder. If people are used to their self-control strategy being "don't tempt myself", this will be new to them, and if they end up breaking their fast, they'll likely feel demoralized.

Comment by KingSupernova on An Observation of Vavilov Day · 2022-01-27T01:20:30.144Z · LW · GW

This was probably meant sarcastically, but I do think that having part of the tradition be "have tasty food nearby during the fast" is worth consideration.

If the goal of rationalist holidays is to help us feel like a community, then this could make us feel more "special" and perhaps help towards that goal. (Many religions have holidays that call for a fast, but as far as I know none of them expect adherents to tempt themselves.)

It's also a nice exercise in self-control and a reminder of the dangers of having instant gratification available. There's value in learning to resist those urges for one's long-term benefit.

Comment by KingSupernova on List of Probability Calibration Exercises · 2022-01-24T21:29:10.339Z · LW · GW

Well, the biggest problem is that it doesn't seem to work. I tested it in a 2-player game where we both locked in an answer, but the game didn't progress to the next round. I waited for the timer to run out, but it still didn't progress; it just stayed at 0:00. Changes in my probability are also not visible to the other players until I lock it in.

A few more minor issues:

  • After I lock in a probability, there's no indication in the UI that I've done so. I can even press the "lock" button again and get the same popup, despite the fact that it's already locked. It would be better to have the lock button disappear or grey out, and/or to have some other clear visual indicator that it's locked.
  • If two people join with the same username, the game seems to think they're the same person. Both show up as "you" on the player list, although they're given different answers.
  • "Wendy's" shows up with the apostrophe rendered as an HTML character entity (something like "Wendy&#39;s"), and the same goes for every other word containing an apostrophe. (Probably because you're setting element.innerText rather than element.innerHTML, or something like that; see the sketch after this list.)
  • If I try to join an invalid room, nothing happens, which is confusing. It would be better to have some sort of error message displayed.
  • It's possible to join a room with a blank username by accident. Fixing that is a pain because of the next issue:
  • If someone quits and rejoins while in the waiting room, they'll show up as a third player. It doesn't seem possible to remove a player from the game; you have to cancel and create a whole new game.
  • Pressing the "back" button in the browser has some very unintuitive results. I'd expect it to take me back to the homepage, but it seems to either leave me in the same game or put me back into a previous game.
  • And not a bug, but it would be nice to be able to customize the timer length.
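
To illustrate the apostrophe issue, here's a minimal sketch of the behavior I suspect is going on (this is my guess, not your actual code; the variable names and the pre-escaped string are made up, and it assumes the question text is HTML-escaped before it reaches the client):

```typescript
// Hypothetical illustration: if the question text arrives already HTML-escaped,
// assigning it via innerText displays the entity characters literally, while
// innerHTML lets the browser decode them.
const escapedName = "Wendy&#39;s"; // placeholder for a pre-escaped string
const label = document.createElement("span");

label.innerText = escapedName;  // renders the raw entity: Wendy&#39;s
console.log(label.textContent); // "Wendy&#39;s"

label.innerHTML = escapedName;  // the browser parses and decodes the entity
console.log(label.textContent); // "Wendy's"
```

Of course, the cleaner fix might be to not HTML-escape the text before it reaches the client, in which case innerText works fine.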

Comment by KingSupernova on List of Probability Calibration Exercises · 2022-01-24T21:11:14.492Z · LW · GW

Questions about a topic I don't know anything about result in me just putting the maxent distribution on them, which is fine if it's rare, but leads to unhelpful results if such questions make up a large proportion of the total. Most calibration tests I found pulled from generic trivia categories such as sports, politics, celebrities, science, and geography. I didn't find many that were domain-specific, so that might be a good area to focus on.

Some of them don't tell me what the right answers are at the end, or even which questions I got wrong, which I found unsatisfying. If there's a question that I marked as 95% and got wrong, I'd like to know what it was so that I can look into that topic further.

It's easiest to get people to answer small numbers of questions (<50), but that leads to a lot of noise in the results. A perfectly calibrated human answering 25 questions at 70% confidence could easily get 80% or 60% of them right and show up as miscalibrated. Incorporating statistical techniques to account for that would be good. (For example, calculate the standard deviation for that number of questions at that confidence level, and only tell the user that they're over- or underconfident if they fall outside it.) The fifth one in my list above does something neat where they say "Your chance of being well calibrated, relative to the null hypothesis, is X percent". I'm not sure how that's calculated, though.
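
To make the noise point concrete, here's a rough sketch of the kind of check I have in mind (my own illustration, not how any of the sites in the list do it; the two-standard-deviation cutoff is an arbitrary choice):

```typescript
// Treat the answers given at one confidence level as draws from a binomial, and
// only flag over/underconfidence if the observed fraction correct falls outside
// roughly two standard deviations of what perfect calibration would produce.
function calibrationBand(numQuestions: number, confidence: number): { low: number; high: number } {
  const sd = Math.sqrt(numQuestions * confidence * (1 - confidence)) / numQuestions;
  return { low: confidence - 2 * sd, high: confidence + 2 * sd };
}

// 25 questions answered at 70% confidence:
console.log(calibrationBand(25, 0.7));
// roughly { low: 0.52, high: 0.88 }; getting 60% or 80% of them right is well within noise
```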

Comment by KingSupernova on List of Probability Calibration Exercises · 2022-01-23T21:09:37.151Z · LW · GW

This looks super neat, thank you for sharing. I just did a quick test and can confirm that it is in fact riddled with bugs. If it would help, I can write up a list of what needs fixing.

Comment by KingSupernova on Looking for information on scoring calibration · 2022-01-15T21:50:57.921Z · LW · GW

Wouldn't an observed mismatch between assigned probabilities and observed frequencies count as Bayesian evidence of miscalibration?

Comment by KingSupernova on Strategic ignorance and plausible deniability · 2021-10-15T00:06:43.617Z · LW · GW

I think you're confusing an agent's ignorance with other people's beliefs about that agent's ignorance. In your example of the police or the STD test, there is no benefit gained by the person being ignorant of the information; there is, however, a benefit to other people thinking the person is ignorant. If someone is able to find out whether they have an STD without anyone else knowing they've taken the test, that's purely a benefit for them. (Setting aside the internal cognitive burden of having to explicitly lie.)

Comment by KingSupernova on Test Your Calibration! · 2021-10-05T05:00:53.320Z · LW · GW

An open-ended probability calibration test is something I've been planning to build. I'd be curious to hear your thoughts on how the specifics should be implemented. How should test-takers grade their own answers in a way that avoids bias and still gives useful results?

Comment by KingSupernova on "Rational Agents Win" · 2021-09-24T21:03:43.443Z · LW · GW

Whether Omega ended up being right or wrong is irrelevant to the problem, since the players only find out after all decisions have been made. It has no bearing on what decision is correct at the time; only our prior probability that Omega will be right matters.

Comment by KingSupernova on "Rational Agents Win" · 2021-09-24T21:02:16.343Z · LW · GW

I think you have to consider what winning means more carefully.

A rational agent doesn't buy a lottery ticket because it's a bad bet. If that ticket ends up winning, does that contradict the principle that "rational agents win"?

That doesn't seem at all analogous. At the time they had the opportunity to purchase the ticket, they had no way to know it was going to win.

An Irene who acts like your model of Irene will win slightly more when omega makes an incorrect prediction (she wins the lottery), but will be given the million dollars far less commonly because Omega is almost always correct. On average she loses. And rational agents win on average.

By average I don't mean average within a particular world (repeated iteration), but on average across all possible worlds.

I agree with all of this. I'm not sure why you're bringing it up?

Comment by KingSupernova on "Rational Agents Win" · 2021-09-24T20:52:56.778Z · LW · GW

I think you're missing my point. After the $1,000,000 has been taken, Irene doesn't suddenly lose her free will. She's perfectly capable of taking the $1000; she's just decided not to.

You seem to think I'm making some claim like "one-boxing is irrational" or "Newcomb's problem is impossible", which is not at all what I'm doing. I'm trying to demonstrate that the idea of "rational agents just do what maximizes their utility and don't worry about having a consistent underlying decision theory" appears to result in a contradiction as soon as Irene's decision has been made.

Comment by KingSupernova on "Rational Agents Win" · 2021-09-23T18:11:06.215Z · LW · GW

Ah, that makes sense.

Comment by KingSupernova on "Rational Agents Win" · 2021-09-23T17:24:39.065Z · LW · GW

Some clarifications on my intentions in writing this story.

Omega being dead and Irene having taken the money from one box before her conversation with Rachel are both irrelevant to the core problem. I included them as literary flourishes to push people's intuitions towards thinking that Irene should open the second box, similar to what Eliezer was doing here.

Omega was wrong in this scenario, which departs from the traditional Newcomb's problem. I could have written an ending where Rachel made the same arguments and Irene still decided against opening the box, but that seemed less fun. It's not relevant whether Omega was right or wrong, because after Irene has made her decision, she always has the "choice" to take the extra money and prove Omega wrong. My point here is that leaving the $1000 behind falls prey to the same "rational agents should win" problem that's usually used to justify one-boxing. After taking the $1,000,000, you can either construct some elaborate justification for why it would be irrational to open the second box, or you can just... do it.

Here's another version of the story that might demonstrate this more succinctly:

Irene wakes up in her apartment one morning and finds Omega standing before her with $1,000,000 on her bedside table and a box on the floor next to it. Omega says "I predicted your behavior in Newcomb's problem and guessed that you'd take only one box, so I've expedited the process and given you the money straight away, no decision needed. There's $1000 in that box on the floor, you can throw it away in the dumpster out back. I have 346 other thought experiments to get to today, so I really need to get going."

Comment by KingSupernova on "Rational Agents Win" · 2021-09-23T16:46:49.242Z · LW · GW

I just did that to be consistent with the traditional formulation of Newcomb's problem; it's not relevant to the story. I needed some labels for the boxes, and "box A" and "box B" are not very descriptive and make it easy for the reader to forget which is which.

Comment by KingSupernova on "Rational Agents Win" · 2021-09-23T16:45:16.777Z · LW · GW

I don't find the simulation argument very compelling. I can conceive of many ways for Omega to arrive at a prediction with high probability of being correct that don't involve a full, particle-by-particle simulation of the actors.

Comment by KingSupernova on "Rational Agents Win" · 2021-09-23T16:44:43.981Z · LW · GW

In the case where you find yourself holding the $1,000,000 and the $1000 are still available, sure, you can pick them up. That only happens if either Omega failed to predict what you will do, or if you somehow set things up such that you couldn't, or had to pay a big price, to break your precommitment.

I don't think that's true. The traditional Newcomb's problem could use the exact setup I used here; the only difference would be that either the opaque box is empty, or Irene never opens the transparent box. The idea that the $1000 is always "available" to the player is central to Newcomb's problem.

Comment by KingSupernova on "Rational Agents Win" · 2021-09-23T16:38:15.702Z · LW · GW

I don't find the simulation argument very compelling. I can conceive of many ways for Omega to arrive at a prediction with high probability of being correct that don't involve a full, particle-by-particle simulation of the actors.

Comment by KingSupernova on Noticing Frame Differences · 2021-09-12T16:41:21.450Z · LW · GW

making piece

should be

making peace

Comment by KingSupernova on Long Covid Is Not Necessarily Your Biggest Problem · 2021-09-06T16:11:03.859Z · LW · GW

so it includes both asymptomatic cases

I think that "includes" should be "excludes"?

Comment by KingSupernova on Agency in Conway’s Game of Life · 2021-07-10T01:53:49.943Z · LW · GW

This is an interesting question, but I think your hypothesis is wrong.

Any pattern of physics that eventually exerts control over a region much larger than its initial configuration does so by means of perception, cognition, and action that are recognizably AI-like.

To avoid counting things like an exploding supernova as "controlling a region much larger than its initial configuration", we would want to require that such patterns be capable of arranging matter and energy into an arbitrary but low-complexity shape, such as a giant smiley face in Life.

If it is indeed possible to build some sort of simple and robust "area clearer" pattern as the other comments discuss (related post here), then nothing approaching the complexity of an AI is necessary. Simply initialize the small region with the area-clearer pattern facing outwards and a smiley-face constructor behind it.

It seems to me that the only reason you'd need an AI for this is if that sort of "area clearer" pattern is not possible. In that case, the only way to clear the large square would be to individually sense and interact with each piece of ash and tailor a response to its structure, and you'd need something somewhat "intelligent" to do that. Even in this case, I'm unconvinced that the amount of "intelligence" necessary to do this would put it on par with what we think of as a general intelligence.