Three questions about source code uncertainty

post by cousin_it · 2014-07-24T13:18:01.363Z · LW · GW · Legacy · 28 comments

In decision theory, we often talk about programs that know their own source code. I'm very confused about how that theory applies to people, or even to computer programs that don't happen to know their own source code. I've managed to distill my confusion into three short questions:

1) Am I uncertain about my own source code?

2) If yes, what kind of uncertainty is that? Logical, indexical, or something else?

3) What is the mathematically correct way for me to handle such uncertainty?

Don't try to answer them all at once! I'll be glad to see even a 10% answer to one question.

28 comments

comment by Kawoomba · 2014-07-24T15:36:15.477Z · LW(p) · GW(p)

1) If you were certain about your source code, i.e. if you knew your source code, uploading your mind should be immediately feasible, subject to resource constraints. Since you do not know how you would go about immediately uploading your mind, you aren't certain about your source code. Because the answer is binary (tertium non datur), it follows that you're uncertain about your own source code. (No, I don't count vague constraints such as "I know it's Turing computable" as "certainty about my own source code", just as you wouldn't say you know a program's source code just because you know it's implemented on a JVM.)

2) The uncertainty falls in several categories, because there are many ways to partition "uncertainty". For example, the uncertainty is mostly epistemic (lack of knowledge of the exact parameters), rather than aleatoric. Using a different partitioning, the uncertainty is structural (we don't know how to correctly model your source code). There are many more true attributes of the relevant uncertainty.

3) I don't understand the question. Handle to what end?

comment by Adele_L · 2014-07-24T18:25:28.937Z · LW(p) · GW(p)

3) It seems unlikely that subjective Bayesian probability would work for this kind of uncertainty. In particular, I would expect the correct theory to violate Cox's assumption of consistency. To illustrate, we can normally calculate P(A,B|X) as either P(A|X)P(B|A,X) or P(B|X)P(A|B,X). But what if A is the proposition that we calculate the probability P(A,B|X) by using P(A|X)P(B|A,X)? Then we will get different answers depending on how we do the calculation.
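A minimal toy sketch of the self-reference problem described above (my own illustration, not Adele_L's; all the numeric beliefs are made up, and only serve to show that the two supposedly equivalent routes can disagree):

```python
# Toy illustration: proposition A is "the agent computes P(A,B|X) via route 1".
# The numbers are arbitrary; the point is that the two product-rule routes,
# which Cox's consistency assumption says must agree, come apart here.

def joint_via_route_1():
    # Route 1: P(A,B|X) = P(A|X) * P(B|A,X)
    # This route is the one being used, so the agent assigns A probability 1.
    p_A = 1.0
    p_B_given_A = 0.3
    return p_A * p_B_given_A

def joint_via_route_2():
    # Route 2: P(A,B|X) = P(B|X) * P(A|B,X)
    # Now route 1 is not being used, so the same proposition A gets probability 0.
    p_B = 0.3
    p_A_given_B = 0.0
    return p_B * p_A_given_B

print(joint_via_route_1())  # 0.3
print(joint_via_route_2())  # 0.0 -- route-independence (consistency) fails
```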

comment by TsviBT · 2014-07-24T17:13:08.817Z · LW(p) · GW(p)

1) Yes, presumably; your brain is a vast store of (evolved) (wetware) (non-serial) (ad-hoc) (etc.) algorithms that has so far been difficult for neuroscientists to document.

2) Just plain empirical? There's nothing stopping you from learning your own source code in principle; it's just that we don't, AFAIK, have scanners that can view "many" nearby neurons, in real time, individually (as opposed to an fMRI).

3) Well, that's much more difficult. Not sure why Mark_Friedenbach's comment was downvoted though, except maybe snarkiness; heuristics and biases is a small step towards understanding some of the algorithms you are (and correcting for their systematic errors in a principled way).

comment by Squark · 2014-07-25T14:42:28.988Z · LW(p) · GW(p)

I think this is a distinct type of uncertainty which I call "introspective". It is closely related to the expectation value over T in the definition of the updateless intelligence metric.

comment by Viliam_Bur · 2014-07-24T18:12:24.542Z · LW(p) · GW(p)

2) The most obvious obstacle for a human is that I don't have the power to precisely observe and remember everything that I do, and I absolutely don't have the ability to reason about which specific source code could cause me to do exactly those things. Even the information that is there in theory, I can't process. I guess this is logical uncertainty. It's like being unable to calculate the millionth digit of pi, especially if I couldn't even count to ten correctly.

But even if I had the super-ability to correctly determine which kinds of source code could produce my behavior and which couldn't, there would still be multiple solutions. I could limit the set of possible source codes to a subset, but I couldn't limit it to exactly one source code. Not even to a group of behaviorally identical source codes, because there are always realistic situations that I have never experienced, and some of the remaining source codes could do different things there. So within the remaining set, this seems like indexical uncertainty. I could be any of them, meaning that different copies of "me" in different possible worlds could have different algorithms within this set, while having had the same experiences so far.

There is a problem with the second part -- if I have information about the maximum possible size of my source code, it means there are only finitely many options, so I could hypothetically gradually reduce them to exactly one, which means removing the indexical uncertainty. On the other hand, this would work for "normal" scenarios, but not for the "brain in a jar" scenarios: if I am in a Matrix, my assumption that my human source code is limited by the size of my body could be wrong.

comment by evand · 2014-07-24T14:19:53.934Z · LW(p) · GW(p)

Interesting!

I would say that you (as a real human in the present time) are uncertain about your source code in the traditional sense of the word "uncertain". Once we have brain scans and ems and such, if you get scanned and have access to the scan, you're probably uncertain in something more like a logical uncertainty sense: you have access, and the ability to answer some questions, but you don't "know" everything that is implied by that knowledge.

Indexical uncertainty can apply to a perfect Bayesian reasoner. (Right? I mean, given that those can't exist in the real world,...) So it doesn't feel like it's indexical.

Does it make sense to talk about a "computationally-limited but otherwise perfect Bayesian reasoner"? Because that reasoner can exhibit logical uncertainty, but I don't think it exhibits source code uncertainty in the sense that you do, namely that you have trouble predicting your own future actions or running yourself in simulation.

comment by Lumifer · 2014-07-24T16:28:44.631Z · LW(p) · GW(p)

I'm very confused about how that theory applies to people

It does not.

The concept of "source code" is of doubtful use when applied to wetware, anyway.

Replies from: Adele_L, polymathwannabe
comment by Adele_L · 2014-07-24T18:15:45.967Z · LW(p) · GW(p)

In principle, it is possible to simulate a brain on a computer, and I think it's meaningful to say that if you could do this, you would know your "source code". In general, you can think of something's source code as a (computable) mathematical description of that thing.

Also, the point of the post is to generalize the theory to this domain. Humans don't know their source code, but they do have models of other people, and use these to make complicated decisions. What would a formalization of this kind of process look like?

Replies from: TheAncientGeek, Lumifer
comment by TheAncientGeek · 2014-07-25T10:17:56.091Z · LW(p) · GW(p)

It's not known that a software/hardware distinction is even applicable to brains.

Moreover, if you simulated a brain, you might be simulating in software what was originally done in hardware.

Replies from: Antiochus
comment by Antiochus · 2014-07-25T13:25:52.174Z · LW(p) · GW(p)

You could think of software as being any element that is programmable - i.e., even a physical plugboard can be thought of as software, even though it's not the format we typically store it in.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-25T15:43:42.485Z · LW(p) · GW(p)

You could think of a plugboard as hardware, too, hence there is no longer a clean hardware/software distinction.

Replies from: Antiochus
comment by Antiochus · 2014-07-25T18:08:34.634Z · LW(p) · GW(p)

What I'm getting at is that it doesn't matter if the software is expressed in electron arrangement or plugs or neurons, if it's computable. I don't see any trouble here distinguishing between connectome and neuron.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-07-26T10:34:13.150Z · LW(p) · GW(p)

What I am saying is that if you can't separate software from hardware, you are not dealing with software in a reifiable sense.

Hardware is never computable, in the sense that simulated planes don't fly.

comment by Lumifer · 2014-07-24T18:27:01.845Z · LW(p) · GW(p)

In principle, it is possible to simulate a brain on a computer

That's a hypothesis, unproven and untested. Especially if you claim the equivalence between the mind and the simulation -- which you have to do in order to say that the simulation delivers the "source code" of the mind.

you can think of something's source code as a (computable) mathematical description of that thing.

A mathematical description of my mind would be beyond the capabilities of my mind to understand (and so, know). Besides, my mind changes constantly both in terms of patterns of neural impulses and, more importantly, in terms of the underlying "hardware". Is neuron growth or, say, serotonin release part of my "source code"?

Replies from: Adele_L, ThisSpaceAvailable
comment by Adele_L · 2014-07-24T18:51:45.816Z · LW(p) · GW(p)

The laws of physics as we currently understand them are computable (not efficiently, but still), and there is no reason to hypothesize new physics to explain how the brain works. I'm claiming there is an isomorphism.

Dynamic systems have mathematical descriptions also...

Replies from: Lumifer
comment by Lumifer · 2014-07-24T19:02:09.822Z · LW(p) · GW(p)

The laws of physics as we currently understand them are computable

What do you mean by that? E.g. quantum mechanics, or even the many-body problem in classical mechanics...

Do note that being able to write a mathematical expression does not necessarily mean it's computable. Among other things, our universe is finite.

Replies from: evand, TheAncientGeek
comment by evand · 2014-07-24T19:43:57.854Z · LW(p) · GW(p)

I strongly suspect "computable" is being used in the mathematical sense here, not in the sense of "tractable on a reasonable computer".

comment by TheAncientGeek · 2014-07-25T10:42:00.352Z · LW(p) · GW(p)

QM is computable. Classical physics is not.

We don't know whether the universe is finite or infinite.

comment by ThisSpaceAvailable · 2014-07-26T04:09:13.414Z · LW(p) · GW(p)

That's a hypothesis, unproven and untested.

In the broadest sense, the hypothesis is somewhat trivial. For instance, if we are communicating with an agent over a channel with n bits of information capacity, then there are 2^n possible exchanges. Given any n, it is possible to create a simulation that picks the "right" exchange, such that it is indistinguishable from a human. Where the hypothesis becomes less proven is if the requirement is not for fixed n.
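A minimal sketch of the fixed-n point above (my own illustration; `build_lookup_table` and `human_reply` are hypothetical names, and the stand-in reply function papers over exactly the part the argument elides, namely where the "right" replies come from):

```python
# Giant-lookup-table sketch: for a fixed channel capacity of n bits, the set of
# possible exchanges is finite (2^n), so a "simulation" can in principle just be
# a table mapping every possible input to the desired reply. This is about
# possibility in principle, not practicality or efficiency.

from itertools import product

def build_lookup_table(n, human_reply):
    # Enumerate all 2^n possible n-bit messages and store the desired reply.
    table = {}
    for bits in product("01", repeat=n):
        message = "".join(bits)
        table[message] = human_reply(message)
    return table

# Tiny example with a placeholder reply function standing in for
# "whatever a human would say":
table = build_lookup_table(3, human_reply=lambda m: m[::-1])
print(table["110"])  # prints the pre-stored reply for input "110"
```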

Replies from: Lumifer
comment by Lumifer · 2014-07-28T15:45:44.901Z · LW(p) · GW(p)

In the broadest sense, the hypothesis is somewhat trivial.

No, I don't think so.

For instance, if we are communicating with an agent over a channel with n bits of information capacity, then there are 2^n possible exchanges. Given any n, it is possible to create a simulation that picks the "right" exchange, such that it is indistinguishable from a human.

Are you making Searle's Chinese Room argument?

In any case, even if we accept the purely functional approach, it doesn't seem obvious to me that you must be able to create a simulation which picks the "right" answer in the future. You don't get to run 2^n instances and say "Pick whichever one satisfies your criteria".

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2014-07-29T02:54:17.441Z · LW(p) · GW(p)

Well, I did say "In the broadest sense", so yes, that does imply a purely functional approach.

You don't get to run 2^n instances and say "Pick whichever one satisfies your criteria".

The claim was that it is possible in principle. And yes, it is possible, in principle, to run 2^n instances and pick the one that satisfies the criteria.

Replies from: Lumifer
comment by Lumifer · 2014-07-29T04:04:40.095Z · LW(p) · GW(p)

And yes, it is possible, in principle, to run 2^n instances and pick the one that satisfies the criteria.

That's not simulating intelligence. That's just a crude exhaustive search.

And I am not sure you have enough energy in the universe to run 2^n instances, anyway.

comment by polymathwannabe · 2014-07-24T16:31:12.692Z · LW(p) · GW(p)

However, "Am I uncertain about my own source code?" is a question I'd love to hear Descartes tackle.

Replies from: Lumifer
comment by Lumifer · 2014-07-24T17:14:59.425Z · LW(p) · GW(p)

I'd love to hear Descartes tackle

Well, there was an unfortunate accident. One evening he was sitting in a bar and the bartender asked him whether he wanted another glass of wine. "I think not," Descartes answered, and poof! he was never seen again...

Replies from: gwern
comment by gwern · 2014-07-24T18:57:37.723Z · LW(p) · GW(p)

The joke thread is ----> http://lesswrong.com/r/discussion/lw/ki0/jokes_thread/ thataway.

comment by Slider · 2014-07-24T20:16:38.299Z · LW(p) · GW(p)

1) You can know about your DNA and your upbringing. Suppose that you are in the Truman Show and a clone of you will be put through the same script. Even if we don't know the specifics of how it compiles, I think we are pretty sure the results would be similar, to the degree that we can get the DNA / upbringing to match exactly. In this sense, no, you are not unsure.

1) If you can reliably answer hypotheticals about your actions, then you do know how you function. However, unreasonable levels of honesty would be required. In this sense you are sure.

1) You probably are not a quine, in the sense of something whose verbal output contains a representation of itself (I am a little uncertain whether sexual reproduction would count as being a quine). If you are highly reflective you can be aware of a large part of your thoughts (i.e. you can meditate). However, there must be a top-level thought that is either not reflected upon or that is self-representing, for otherwise your finite head would contain an infinite amount of information; and since information requires energy to be encoded, and you know your head is only finitely massive, you don't have that.
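For reference, a quine is a program whose output is exactly its own source code; here is a classic minimal Python example (added for illustration, just to make the term concrete):

```python
# The two lines below, run on their own, print exactly themselves:
s = 's = %r\nprint(s %% s)'
print(s % s)
```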

Replies from: ChristianKl
comment by ChristianKl · 2014-07-25T15:39:10.597Z · LW(p) · GW(p)

1) You can know about your DNA and your upbringing.

I actually don't know much about my DNA, about which kinds of atypical mutations I have, or about how those affect my decision making. There are tons of experiences I had in my childhood that I don't remember and that influenced me.

Suppose that you are in the Truman Show and a clone of you will be put through the same script. Even if we don't know the specifics of how it compiles, I think we are pretty sure the results would be similar, to the degree that we can get the DNA / upbringing to match exactly.

Given that the brain is a complex system, chaos theory suggests that slight deviations are enough to change outcomes. Having the same script won't be enough.

2) If you can reliably answer hypotheticals about your actions, then you do know how you function. However, unreasonable levels of honesty would be required. In this sense you are sure.

Humans often don't act in the way they think they would act.

If you are highly reflective you can be aware of a large part of your thoughts (i.e. you can meditate).

Being aware of your thoughts doesn't mean that you are aware of emotional conditioning. If you feel averse towards a woman because you had a very unpleasant experience with another woman who wore the same perfume, that's not something you can identify on the level of thoughts.

It takes a high level of awareness to even know that there is something that's triggering you.

comment by [deleted] · 2014-07-24T15:16:41.530Z · LW(p) · GW(p)

The whole point of knowing heuristics and biases is to know your own source code...