Why isn't building a decision theory equivalent to building a whole AI from scratch?
I understand; what I wrote was wrong. What if we use n%3=0 and ~(n%3=0), though?
A natural number n is either even or odd, i.e. n%2=0 or n%2=1.
If X = {n is a natural number} then you showed that we can use P(n%2=0|X) + P(n%2=1|X) = 1 and P(n%2=0|X) = P(n%2=1|X) together to get P(n%2=0|X) = 1/2.
The same logic works for the three statements n%3=0, n%3=1, n%3=2 to give us P(n%3=0|X) = P(n%3=1|X) = P(n%3=2|X) = 1/3.
But the same logic also works for the two indistinguishable statements n%3=0 and n%3=1 \/ n%3=2 to give us P(n%3=0|X) = P(n%3=1 \/ n%3=2 | X) = 1/2.
So P(n%3=0|X) would have to equal both 1/3 and 1/2, a contradiction: axiom 3 leads to inconsistencies.
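To spell the whole thing out in one place (reading "axiom 3" as the indifference axiom, i.e. statements that the evidence X doesn't distinguish get equal probability):

```latex
% Each line applies the indifference axiom to one partition of the naturals.
\begin{align*}
&\{\, n \bmod 2 = 0,\ n \bmod 2 = 1 \,\}: && P(n \bmod 2 = 0 \mid X) = \tfrac{1}{2} \\
&\{\, n \bmod 3 = 0,\ n \bmod 3 = 1,\ n \bmod 3 = 2 \,\}: && P(n \bmod 3 = 0 \mid X) = \tfrac{1}{3} \\
&\{\, n \bmod 3 = 0,\ n \bmod 3 \neq 0 \,\}: && P(n \bmod 3 = 0 \mid X) = \tfrac{1}{2}
\end{align*}
% The last two lines assign the same event two different probabilities.
```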
Isn't it just strategy stealing? Calling it tit-for-tat perhaps draws attention away from the fundamental reason why it wins. A minimal sketch of what I mean is below.
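Here is a sketch of an iterated prisoner's dilemma (the payoff values are the standard Axelrod-tournament ones, an assumption on my part): tit-for-tat just replays its opponent's previous move, so against any strategy it ends at most one exploited round behind.

```python
# Sketch: tit-for-tat as strategy stealing in the iterated prisoner's
# dilemma. Payoffs are the standard Axelrod values (an assumption).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game; each strategy sees both move histories."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(own_history, their_history):
    # Cooperate first, then copy ("steal") the opponent's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(own_history, their_history):
    return "D"

print(play(tit_for_tat, always_defect))  # (99, 104): behind by one round only
print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
```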
I'd like to ask him to explain what the hard problem is and why it's an actual problem, in a way that I can understand (without reference to undefinable things like "qualia" or "subjective experience"). I'd probably have to discuss it with him in person, though, and even then I doubt either of us would get anywhere.