Comments
Impressive, I didn't think it could be automated (and even if it could, that it could get through so many digits before hitting a computational threshold for large exponentials). My only regret is that I have but one upvote to give.
In the interest of challenging my mental abilities, I used as few resources as possible (and I suck at writing code). It took fewer than 3^^^3 steps, thankfully.
Partly to prove it's a real number with real properties, but mostly because I wanted a challenge and wasn't getting one from my current math classes (I'm in college, majoring in math). As much as I'd like to say it was to outdo the AI at math (since calculators can't do anything with the number 3^^^3, not even take it mod 2), I had to use a calculator for all but the last 3 digits.
I started with some iterated powers of 3 and tried to find patterns. For instance, 3 to an odd (natural number) power is always 3 mod 4, and 3 to the power of (a natural number that's 3 mod 4) always has a 7 in the ones place.
I solved for the last 8 digits of 3^^^3 (they're ...64,195,387). Take that, ultrafinitists!
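For anyone who'd rather have a computer check this than grind through it with a calculator, here's a minimal Python sketch (not the pattern-spotting method described above; the function names are just mine for illustration). It reduces the tower mod 10^8 by applying Euler's theorem down a chain of totients, relying on the fact that the last digits of a sufficiently tall power tower of 3s stop changing:

```python
def phi(n):
    """Euler's totient of n via trial division (fast enough for n <= 10**8)."""
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result


def tower_mod(height, m):
    """Reduce a power tower of 3s of the given height mod m."""
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    # Every modulus in the chain starting from 10**8 has only 2s and 5s as
    # prime factors, so gcd(3, m) == 1 and Euler's theorem applies:
    # 3**e == 3**(e mod phi(m))  (mod m).
    return pow(3, tower_mod(height - 1, phi(m)), m)


# 3^^^3 is a tower of 3s of height 3^^3 = 7,625,597,484,987, but its last
# 8 digits stop changing once the tower is taller than the phi-chain of
# 10**8 (roughly 25 levels), so height 50 already gives the same digits.
print(tower_mod(50, 10**8))  # expected: 64195387
```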
Hmm. "Three to the 'three to the pentation of three plus two'-ation of three". Alternatively, "big" would also work.
"Three to the pentation of three".
Although making precommitments to enforce threats can be self-destructive, it seems the only reason they were for the baron is that he planned only for the basic set of outcomes {you do what I want, you do what I don't want} and didn't account for any third outcome, and third outcomes kept happening.
Newcomb's problem does happen (and has happened) in real life. Also, Omega is trying to maximize his own stake rather than minimize yours; he made a bet with Alpha at much higher stakes than the $1,000,000. Not to mention that Newcomb's problem bears a vital resemblance to the prisoner's dilemma, which occurs in real life.
Ok, so we do agree that it can be rational to one-box when predicted by a human (if they predict based on factors you control, such as your facial cues). This may have been a misunderstanding between us, then, because I thought you were defending the computationalist view that you should only one-box if you might be the alternate "you" used in the prediction.
So you would never one-box unless the predictor ran some sort of scan or simulation of your brain? But it's better to one-box and be derivable as the kind of person who will (probably) one-box than to two-box and be derivable as the kind of person who will (probably) two-box.
The only reason to one-box is when your actions (which include both the final decision and the thoughts leading up to it) affect the actual arrangement of the boxes.
Your final decision never affects the actual arrangement of the boxes, but its causes do.
True, the 75% would merely be past history (and I am in fact a poker player). Indeed, if the factors used were entirely or mostly beyond my control (and I knew this), I would two-box. However, two-boxing is not necessarily optimal just because you don't know the mechanics of the predictor's methods. In the limited predictor problem, the predictor doesn't use simulations or scans of any sort but instead uses logic, and yet one-boxers still win.
Yeah, the argument would hold just as well with an inaccurate simulation as with an accurate one. The point I was trying to make wasn't so much that the simulation won't be accurate enough, but that a simulation argument shouldn't be a prerequisite for one-boxing. If the experiment were performed with human predictors (say, a psychologist who predicts correctly 75% of the time), one-boxing would still be rational even though you'd know you're not a simulation. I think LW relies on computationalism as a substitute for actually being reflectively consistent in problems like these.
Right, any predictor with at least 50.05% accuracy is worth one-boxing on (well, perhaps a somewhat higher threshold for those with concave utility functions in money). A predictor accurate enough to make one-boxing worthwhile isn't unrealistic or counterintuitive at all in itself, but it seems (to me at least) that many people reach the right answer for the wrong reason: the "you don't know whether you're real or a simulation" argument. Realistically, backwards causality isn't feasible, but neither is precise mind duplication. The decision to one-box can be rationally reached without those arguments: you choose to be the kind of person who (predictably) one-boxes, and as a consequence of that choice, you actually do one-box.
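For reference, here's a quick sketch of where the 50.05% figure comes from, assuming the standard payoffs ($1,000 in the always-available box, $1,000,000 if you're predicted to one-box) and a risk-neutral agent; the variable names are just for illustration:

```python
# Expected value of each choice against a predictor with accuracy p, assuming
# the standard payoffs and risk neutrality (a concave utility function in
# money pushes the break-even accuracy higher, as noted above).
SMALL, BIG = 1_000, 1_000_000

def ev_one_box(p):
    # You get the $1,000,000 only when the predictor correctly pegs you as a one-boxer.
    return p * BIG

def ev_two_box(p):
    # You get the $1,000,000 only when the predictor wrongly pegs you as a one-boxer,
    # plus the guaranteed $1,000.
    return (1 - p) * BIG + SMALL

# One-boxing wins whenever p*BIG > (1 - p)*BIG + SMALL,
# i.e. p > 1/2 + SMALL/(2*BIG) = 0.5005, the 50.05% figure.
print(0.5 + SMALL / (2 * BIG))              # 0.5005
print(ev_one_box(0.75), ev_two_box(0.75))   # 750000.0 vs 251000.0 for the 75% psychologist
```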
Not that I disagree with the one-boxing conclusion, but this formulation requires physically reducible free will (which has recently been brought back into discussion). It would also require knowing the position and momentum of a lot of particles to arbitrary precision, which the uncertainty principle makes provably impossible.
Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.
Forgive me if this is a stupid question, but wouldn't UFAI and FAI have identical or near-identical computational abilities/methods/limits and differ only by goals/values?