Some informal ramblings about what a definition of reality might look like.

post by Armok_GoB · 2011-02-25T16:04:38.567Z · 3 comments

I was originally going to just chat about this on IRC, but the last few times I did that I ended up concluding it would have been better to make an incoherent discussion post about it, and I just got some karma to burn, so here goes nothing. Sorry for the many things I'm assuming everywhere without citing sources or proof; this wasn't really meant as anything article-like.

Many problems would become a lot easier if we could assume that everything you might even theoretically care about is Turing computable, and given a few more assumptions that most LWers appear to consider likely, most kinds of uncertainty could be completely eliminated. However, that assumption, while it might be true in practice, is not obviously true or true by definition. Even if we turn out to live in a Turing-computable universe, we might still care about non-computable things outside it and try to influence them through ambient control, timeless trade, or other such things.

Looking at Bayes' theorem, and under the influence of LW ideas in general, it seems that the difference between the probability of being in a certain universe and how much you care about that universe, relative to others, is indistinguishable from the inside. I have been planning to write something about this topic but never got around to it, so let's just take it as an assumption for now.
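One rough way to see why this assumption might at least be plausible: in an expected-utility frame, choices depend only on the product of probability and utility, so rescaling one and inversely rescaling the other is invisible from the inside:

$$\mathbb{E}U(a) \;=\; \sum_w P(w)\,U_w(a) \;=\; \sum_w \bigl(c_w P(w)\bigr)\Bigl(\tfrac{1}{c_w}\,U_w(a)\Bigr) \qquad \text{for any } c_w > 0.$$

An agent that uses weights $c_w P(w)$ in place of probabilities and $U_w(a)/c_w$ in place of utilities ranks every action identically, so the split between "how real" and "how much cared about" is underdetermined by behavior. This is just a sketch of the intuition, not the fuller argument I keep meaning to write.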

Imagine a civilization building an AI to maximize the distance of hand-components from hands, from http://lesswrong.com/lw/p2/hand_vs_fingers/ . The goal is defined through the green dots on the hand radar. The task involves escaping the universe, so the AI decides to foom, and then it takes a look at the data again and comes to a 40% probability of the actual scenario being one where its supergoal is no longer even defined (a logical impossibility, not something merely local to this universe or time) and all actions have zero utility. However, this AI also realizes it can achieve far, far more computational power if the universe turns out to be this way, and this computational power could be sold counterfactually on the timeless market, so it precommits to gaining this computational power and trying to help its counterfactual self, the one that would have existed if math had turned out differently. TDT means this would have been the end result even if it hadn't realized the possibility and precommitted in advance.

So now we have an AI that cares about a logically impossible world, and if probability, "realness", is indistinguishable from caring about something, then at least some logically impossible worlds are in some sense "real"; we can't exclude the possibility that a human might care about them as well, and even subjectively anticipate being there. And I completely forgot where I was going with this, and my head is all fuzzy inside.

 

Our universe may or may not be Turing computable, but it certainly seems you could approximate it, approaching it as a limit by simulating it at ever higher resolution and precision without ever actually reaching it. One might then propose that anything that can be approximated by a Turing machine is the real math. But then you might come up with something that can be approximated by something that can be approximated by a Turing machine, but only at perfect/infinite resolution, and thus cannot itself be approximated by a Turing machine, and propose that stuff which can be approximated by something approximable by a Turing machine might be the true stuff of reality. And then you could repeat this. So we end up with something like a flood fill of approximations that spreads around and may reach arbitrarily far away. I have considered the possibility that this might be a good definition of math, and at least it includes everything that could be approximated by a human brain.
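To make the flood-fill picture slightly more concrete, here is a rough toy sketch in Python. Every node name and the whole "approximates" relation below are completely made up for illustration; the only point is the shape of the closure operation, starting from Turing machines and repeatedly following "can be approximated by" links.

```python
from collections import deque

def flood_fill_closure(start, neighbors):
    """Collect everything reachable from `start` by repeatedly
    following `neighbors` links (a breadth-first flood fill)."""
    reached = {start}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        for nxt in neighbors(node):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return reached

# Invented toy relation: X -> the things X can approximate
# (placeholder names, not real claims).
approximates = {
    "turing machines": {"our universe", "human brains"},
    "our universe": {"some structure only it can approximate"},
    "human brains": set(),
    "some structure only it can approximate": set(),
}

reality_candidate = flood_fill_closure(
    "turing machines", lambda n: approximates.get(n, set()))
print(reality_candidate)
```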

However, you can't approximate a world where pi=4 or where hands are separate from fingers and palms, like I talked about a few paragraphs above, so this doesn't cover everything anything might care about. I propose a mind caring about something logically impossible as a second kind of link. I muse that if you flood-fill through both these kinds of links, you might get a set that's a good definition of reality (a rough toy sketch follows below). I might revisit this article tomorrow or something; I'm just frantically scrambling to type out my flow of consciousness before I forget stuff, and half of it might be nonsense, but you might find a few pearls in it. It gave me a headache and it's too close right now to consider objectively anyway, so I won't fix it up right now; please comment some.
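And a rough toy sketch of flood-filling through both kinds of links at once, approximation links plus "a mind here cares about X" links. Again, every name and relation below is invented purely for illustration:

```python
# Toy illustration only: both relation tables are invented placeholders.
approximation_links = {
    "turing machines": {"our universe", "human brains"},
    "our universe": set(),
    "human brains": set(),
}
caring_links = {
    # a mind already in the reached set caring about a logically
    # impossible world pulls that world in as well
    "human brains": {"pi=4 world", "hands-apart-from-fingers world"},
}

def neighbors(node):
    """Union of both kinds of links out of a node."""
    return approximation_links.get(node, set()) | caring_links.get(node, set())

# Expand until a fixed point: nothing new gets pulled in.
reality = {"turing machines"}
changed = True
while changed:
    new = set().union(*(neighbors(n) for n in reality)) - reality
    changed = bool(new)
    reality |= new

print(reality)
```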

3 comments


comment by TheOtherDave · 2011-02-25T17:51:03.586Z

I mostly don't understand what you're getting at here.

And frankly, I feel most of the effort I'd have to exert to understand it is effort you ought to have put into it to make it clearer prior to posting.

I would recommend, in the future, that if you know an article is incomplete and can be improved by giving it some marginal additional thought -- or even just by returning to it when you are less frantic, and can take the time to run it through a spellchecker -- you save a draft of the article somewhere, set it aside, and return to it later. If nothing else, it shows more consideration for your audience.

comment by Armok_GoB · 2011-02-25T17:56:38.083Z

It's not an article, it's just a stream of consciousness and a starting point for discussion. Ending up looking like an article is just an unfortunate coincidence. I'm not even sure IF I was trying to get at any coherent point.

comment by Armok_GoB · 2011-02-25T18:19:30.826Z

Deleted this due to downvoting.