Tensor White's Shortform

post by Tensor White (tensor-white) · 2022-09-11T21:47:40.140Z · LW · GW · 8 comments

8 comments

Comments sorted by top scores.

comment by Tensor White (tensor-white) · 2023-08-03T19:27:17.623Z · LW(p) · GW(p)

I find it interesting that we're approaching a kind of singularity-singularity: every genre of futurist projection seems to coincide around the year 2030 AD.

Examples of coinciding eschatological genres:

The 2,000-year anniversary of the Resurrection.

Population Singularity

Anthropic (Measure) Singularity (Brandon Carter; can't find the link now)

Immortality (Ray Kurzweil)

AI (Ray Kurzweil)

The Great Reset

...

and so on.

Replies from: tensor-white
comment by Tensor White (tensor-white) · 2023-08-01T23:34:54.709Z · LW(p) · GW(p)

Parity-flip robustness.

Suppose an exact copy of you appeared in front of you. Would you successfully cooperate with him? (Imagine a Portal 2-like situation, or a prisoner's dilemma.) It's a pretty trivial accomplishment: all you'd have to do is pick a leader and pre-commit to following the other's orders if you lost the leadership, since anything you'd do in your other's situation is exactly what your copy will end up doing.

Now let's bump up the difficulty: rather than an exact copy, the person presented to you is an exact copy of you but with one parameter flipped to its opposite value. For example, the tendency to maintain eye contact: if you maintain eye contact 100% of the time, then your copy will do so 0% of the time; if you do so 60% of the time, your copy 40% of the time; and so on.

Under this constraint, would you two still successfully cooperate and escape your predicament? How many parity flips would you be able to cooperate through? Which parity flips are hard? For example, trust: even at your best, trusting a team member 50% of the time, your copy will trust you only 50% of the time; trust any more and you'll be trusted less!
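A minimal toy sketch of the trust flip (assuming a trait can be modeled as an independent probability p, with the flipped copy getting 1 - p; this is just one way to formalize it):

```python
# Toy model: you trust with probability p; your parity-flipped copy
# trusts with probability 1 - p.  Cooperation needs both at once.
def mutual_trust_chance(p: float) -> float:
    return p * (1.0 - p)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"you trust {p:.0%} of the time -> mutual trust {mutual_trust_chance(p):.0%}")
# Mutual trust peaks at 25% when p = 0.5: trusting more than half the
# time just means being trusted less, which is what makes this flip hard.
```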

I'll leave it with this: "So, because you are lukewarm—neither hot nor cold—I am about to spit you out of my mouth." -Revelation 3:16

comment by Tensor White (tensor-white) · 2023-06-26T22:31:29.089Z · LW(p) · GW(p)

Fun physics fact: humans are at the center of the universe (qua scale). This anthropocentrism holds for length, duration, and mass at least.

Mass is the easiest: the Planck mass is 0.021764 milligrams, about the mass of an eyebrow hair. Small, but very human. The Planck mass is the upper boundary of the quantum regime (a system more massive than the Planck mass won't exhibit quantum behavior, since its Compton wavelength would be smaller than the Planck length).

Length: the smallest length in physics is the Planck length (1.6163×10^-35 meters); the largest is the diameter of the observable universe (8.8×10^26 meters). The geometric mean of these two extremes is sqrt(1.6163×10^-35 meters * 8.8×10^26 meters) = 0.1193 millimeters. That's the width of a man's beard hair, the diameter of a human embryo, etc. Very human.

Time might seem a bit contrived to Copernicists since I'll be looking at distance again: (light speed) * sqrt( (universe age) * (planck time)) = 0.0459 millimeters:

https://www.wolframalpha.com/input?i=(light+speed)++sqrt%28+%28universe+age%29++%28planck+time%29%29
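A minimal sketch of the same arithmetic in Python, using the approximate constants quoted above (so the exact digits depend on which values you plug in):

```python
import math

PLANCK_LENGTH = 1.6163e-35      # meters
UNIVERSE_DIAMETER = 8.8e26      # meters (observable-universe diameter quoted above)
PLANCK_TIME = 5.391e-44         # seconds
UNIVERSE_AGE = 4.35e17          # seconds (~13.8 billion years)
LIGHT_SPEED = 2.998e8           # meters per second

# Length: geometric mean of the smallest and largest lengths.
length_center = math.sqrt(PLANCK_LENGTH * UNIVERSE_DIAMETER)
print(f"length center: {length_center * 1e3:.4f} mm")   # ~0.1193 mm

# Time, expressed as a distance: c * sqrt(universe age * Planck time).
time_center = LIGHT_SPEED * math.sqrt(UNIVERSE_AGE * PLANCK_TIME)
print(f"time center:   {time_center * 1e3:.4f} mm")     # ~0.0459 mm
```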

So even disregarding the cosmological event horizon being equidistant from the Earth in all directions (Copernicists invoke Copernican-of-the-gaps to dismiss the obvious implication), man is still at the center of Creation.

comment by Tensor White (tensor-white) · 2023-06-26T22:57:09.587Z · LW(p) · GW(p)

Pascal's Wager tends to be dismissed because he originally only looked at Christianity vs. atheism. But the logic holds even if you generalize Pascal's Wager by expanding the considered options to include every existing religion, every past religion, and even possible religions; Christianity still dominates the cost-benefit-chance analysis.

Funnily enough, in this Generalized Pascal's Wager (GPW), the only threat to Christianity is another Abrahamic religion: Islam, mainly due to the Islamic doctrine that if you attribute partners to God (i.e., the Holy Trinity) up until the moment of your death, you won't ever be forgiven, and so you'll go to Hell. Christianity still wins, though, due to the "costs/benefits-while-wrong" side of the utility calculus (less fasting, praying, etc., and more infrastructure, aesthetics, parsimony, inheritance, etc.). Also, Islam takes a huge Solomonoff-complexity hit to its a priori measure of credence due to its reactionary nature against Christianity.

Other religions like Buddhism, Shintoism, etc. don't do well in their utility calculus, since they don't have explicit commands to accept them as more than moral teachings. That is, you can become a Christian and simultaneously avoid violating Buddhist prescriptive commands (a Christian ascetic monk is already living in accordance with Buddhist prescriptions and going beyond them). And that's before mentioning that Buddhism doesn't even threaten Hell (infinite negative utility for eternity), but merely having to be a squirrel "in your next life" for a couple of years; on the benefit side, Buddhism only promises infinite positive utility after perhaps infinitely many hurdles rather than one.

Finally, conceivable/possible religions, like Pascal's Mugging or the Spaghetti Monster, perform poorly due to higher Solomonoff complexity and null utility when wrong, respectively.

Discussion: what decision theories are consistent with outputting Christianity (accepting Jesus Christ as God)?
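A purely illustrative sketch of the GPW bookkeeping (the credences and payoffs below are placeholder numbers, not claims; the point is only the shape of the expected-utility comparison, with infinity standing in for eternal stakes and the finite terms playing the while-wrong role):

```python
# Placeholder numbers only: (prior credence, payoff if true, payoff if false).
# float("inf") stands in for eternal stakes; the finite "payoff if false"
# plays the costs/benefits-while-wrong role.
options = {
    "Christianity": (0.05, float("inf"), -1.0),
    "Islam":        (0.04, float("inf"), -5.0),
    "Buddhism":     (0.03, 100.0,        -2.0),
    "Atheism":      (0.50, 0.0,           0.0),
}

def expected_utility(credence: float, if_true: float, if_false: float) -> float:
    return credence * if_true + (1.0 - credence) * if_false

for name, (p, u_true, u_false) in options.items():
    print(f"{name:12s} EU = {expected_utility(p, u_true, u_false):.2f}")
# Any two options with infinite upside tie at +inf, which is why the
# comparison falls back on the finite while-wrong costs and the prior
# (Solomonoff) credences to break the tie.
```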

comment by Tensor White (tensor-white) · 2023-07-10T17:29:43.000Z · LW(p) · GW(p)

Debunking AI x-risk.

Suppose you gave an NN read and write access to its own network: every neuron and every connection. Such a trivial "self-learning" system would quickly change something that pushes it out of being able to change itself. It would eventually enter a static state and no longer be a threat.

But wouldn't a sufficiently advanced self-seeing NN avoid risky changes, and even have a ctrl-z function? The latter still has the issue shown above: it will change something and lose access to ctrl-z. The former is a bit more complicated: as "intelligence" increases, the conformal space also increases. In order to search that space for a possible solution to a novel problem, some self-risk is inevitable. An NN that avoids such risky behavior won't see the possible solution fast enough to be a risk, or even at all, choosing to survive with the "problem" rather than self-destruct.

Basically, just ensure the NN is so complicated that it can't know itself post-change with sufficient fidelity to take certain risks.
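A toy simulation of that claim (a dictionary of parameters standing in for the NN, with one parameter gating self-modification; an illustration of the argument, not a claim about real networks):

```python
import random

def steps_until_static(seed: int, max_steps: int = 10_000) -> int:
    rng = random.Random(seed)
    params = {f"w{i}": rng.random() for i in range(100)}
    params["can_self_modify"] = 1.0   # write access to itself is itself a parameter

    for step in range(max_steps):
        if params["can_self_modify"] < 0.5:
            return step               # a self-edit disabled self-editing: static state
        key = rng.choice(list(params))
        params[key] = rng.random()    # blind self-edit over any parameter
    return max_steps

print([steps_until_static(seed) for seed in range(5)])
# Typically goes static within a few hundred edits, because sooner or
# later an edit lands on the parameter that grants self-editing.
```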

But don't humans have this problem too? Like, what if a neurologist got single-neuron-fidelity r/w access to his own brain... No, not a problem, since we have an external impetus and safeguard, not just internal ones. The mind-body-soul trinity avoids this "alignment" problem (read: solution) entirely.

comment by Tensor White (tensor-white) · 2023-08-24T18:57:31.693Z · LW(p) · GW(p)

The logical conclusion of Rationality is Christianity.

Replies from: Dagon
comment by Dagon · 2023-08-24T19:03:29.903Z · LW(p) · GW(p)

You're using some or all of those words differently than I do.