Posts

Comments

Comment by answer on New Monthly Thread: Bragging · 2013-08-12T18:31:49.051Z · LW · GW

Impressive, I didn't think it could be automated (and even if it could, that it could reach so many digits before hitting a computational limit for large exponentials). My only regret is that I have but one upvote to give.

Comment by answer on New Monthly Thread: Bragging · 2013-08-12T05:19:41.784Z · LW · GW

In the interest of challenging my mental abilities, I used as few resources as possible (and I suck at writing code). It took fewer than 3^^^3 steps, thankfully.

Comment by answer on New Monthly Thread: Bragging · 2013-08-12T04:48:33.172Z · LW · GW

Partially just to prove it's a real number with real properties, but mostly because I wanted a challenge and wasn't getting one from my current math classes (I'm in college, majoring in math). As much as I'd like to say it was to outdo the AI at math (since calculators can't do anything with the number 3^^^3, not even compute it mod 2), I had to use a calculator for all but the last 3 digits.

Comment by answer on New Monthly Thread: Bragging · 2013-08-12T02:59:09.947Z · LW · GW

I started with some iterated powers of 3 and looked for patterns. For instance, 3 raised to an odd natural-number power is always 3 mod 4, and 3 raised to a power that's 3 mod 4 always has a 7 in the ones place.
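
For the curious, the digit computation can indeed be automated. Below is a minimal Python sketch, not the commenter's by-hand method, using the standard generalized-Euler-theorem trick for power towers; the helper names (`phi`, `tower_mod`) are illustrative.

```python
def phi(n):
    # Euler's totient via trial division; fine for moduli up to ~10**8.
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_mod(height, m):
    # Value mod m of a power tower of `height` threes: 3^3^...^3.
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    t = phi(m)
    # Every exponent inside the tower is assumed to be astronomically larger than t
    # (true for the call below), so the generalized Euler theorem
    # 3^e ≡ 3^(t + (e mod t)) (mod m) applies.
    return pow(3, t + tower_mod(height - 1, t), m)

# 3^^^3 is a power tower of threes of height 3^^3 = 7,625,597,484,987;
# its last digits stabilize long before that height, and the recursion
# bottoms out once the iterated-totient chain of 10**8 reaches 1 (a few dozen levels).
print(tower_mod(7_625_597_484_987, 10**8))  # should print the ...64195387 claimed above
```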

Comment by answer on New Monthly Thread: Bragging · 2013-08-12T02:41:38.248Z · LW · GW

I worked out the last 8 digits of 3^^^3 (they're ...64,195,387). Take that, ultrafinitists!

Comment by answer on More "Stupid" Questions · 2013-08-04T20:46:03.647Z · LW · GW

Hmm. "Three to the 'three to the pentation of three plus two'-ation of three". Alternatively, "big" would also work.

Comment by answer on More "Stupid" Questions · 2013-08-01T01:07:45.360Z · LW · GW

"Three to the pentation of three".

Comment by answer on Countess and Baron attempt to define blackmail, fail · 2013-07-15T18:09:32.868Z · LW · GW

Although making precommitments to enforce threats can be self-destructive, it seems the only reason they were for the baron is that he only accounted for the basic set of outcomes {you do what I want, you do what I don't want}, and third outcomes kept happening.

Comment by answer on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-19T23:19:34.962Z · LW · GW

Newcomb's problem does happen (and has happened) in real life. Also, Omega is trying to maximize his own stake rather than minimize yours; he made a bet with Alpha for much higher stakes than the $1,000,000. Not to mention that Newcomb's problem bears a vital resemblance to the prisoner's dilemma, which occurs in real life.

Comment by answer on Newcomb's Problem and Regret of Rationality · 2013-06-19T22:59:44.967Z · LW · GW

Ok, so we do agree that it can be rational to one-box when predicted by a human (if they predict based on factors you control, such as your facial cues). This may have been a misunderstanding between us, then, because I thought you were defending the computationalist view that you should only one-box if you might be an alternate "you" used in the prediction.

Comment by answer on Newcomb's Problem and Regret of Rationality · 2013-06-19T22:42:52.832Z · LW · GW

So you would never one-box unless the simulator did some sort of scan or simulation of your brain? But it's better to one-box and be derivable as the kind of person who (probably) one-boxes than to two-box and be derivable as the kind of person who (probably) two-boxes.

The only reason to one-box is when your actions (which include both the final decision and the thoughts leading up to it) affect the actual arrangement of the boxes.

Your final decision never affects the actual arrangement of the boxes, but its causes do.

Comment by answer on Newcomb's Problem and Regret of Rationality · 2013-06-19T22:29:17.505Z · LW · GW

True, the 75% would merely be past history (and I am in fact a poker player). Indeed, if the factors used were composed entirely or mostly of factors beyond my control (and I knew this), I would two-box. However, two-boxing is not necessarily optimal just because you don't know the mechanics of the predictor's methods. In the Limited Predictor Problem, the predictor doesn't use simulations or scanners of any sort but instead uses logic, and yet one-boxers still win.

Comment by answer on Newcomb's Problem and Regret of Rationality · 2013-06-19T21:01:43.039Z · LW · GW

Yeah, the argument would hold just as much with an inaccurate simulation as with an accurate one. The point I was trying to make wasn't so much that the simulation isn't going to be accurate enough, but that a simulation argument shouldn't be a prerequisite to one-boxing. If the experiment were performed with human predictors (let's say a psychologist who predicts correctly 75% of the time), one-boxing would still be rational despite knowing you're not a simulation. I think LW relies on computationalism as a substitute for actually being reflectively consistent in problems such as these.

Comment by answer on Newcomb's Problem and Regret of Rationality · 2013-06-19T20:32:58.770Z · LW · GW

Right, any predictor with at least 50.05% accuracy is worth one-boxing against (well, maybe a higher percentage for those with concave utility functions in money). A predictor accurate enough to be worth one-boxing against isn't unrealistic or counterintuitive at all in itself, but it seems (to me at least) that many people reach the right answer for the wrong reason: the "you don't know whether you're real or a simulation" argument. Realistically, while backwards causality isn't feasible, neither is precise mind duplication. The decision to one-box can be rationally reached without those reasons: you choose to be the kind of person who (predictably) one-boxes, and as a consequence of that, you actually do one-box.
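
Where the 50.05% figure comes from: a quick expected-value check under the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one). A minimal sketch; the function names are just for illustration.

```python
def ev_one_box(p):
    # The opaque box holds $1,000,000 only if the predictor (accuracy p) foresaw one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # A two-boxer gets the $1,000 for sure, plus $1,000,000 only when the predictor was wrong.
    return 1_000 + (1 - p) * 1_000_000

# Break-even: p * 1e6 = 1e3 + (1 - p) * 1e6  =>  p = 1,001,000 / 2,000,000 = 0.5005
print(ev_one_box(0.5005), ev_two_box(0.5005))  # both 500500.0
```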

Comment by answer on Newcomb's Problem and Regret of Rationality · 2013-06-19T18:53:19.007Z · LW · GW

Not that I disagree with the one-boxing conclusion, but this formulation requires physically reducible free will (which has recently been brought back into discussion). It would also require knowing the position and momentum of a lot of particles to arbitrary precision, which is provably impossible.

Comment by answer on Do Earths with slower economic growth have a better chance at FAI? · 2013-06-13T06:38:16.380Z · LW · GW

Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.

Forgive me if this is a stupid question, but wouldn't UFAI and FAI have identical or near-identical computational abilities/methods/limits and differ only by goals/values?