"Nothing is fundamentally a black box."
That claim is unjustified and unjustifiable. Everything is fundamentally a black box until proven otherwise, and we will never find any conclusive proof. (I want to tell you to look up Hume's problem of induction and Karl Popper's solution, although I feel that making such a remark would insult your intelligence.) Our ability to imagine systems behaving in ways that are 100% predictable, and our ability to test systems so as to ensure that they behave predictably, do not change the fact that everything is always fundamentally a black box.
Thanks for offering that solution. It seems appropriate to me. I think that the issue at stake is related to the difference between probabilistic and nondeterministic semantics in programming languages. Once you have decided on a nondeterministic semantics, you can't simply start adding in probabilities and expect the result to make sense. So, your solution suggests that we should have grounded the entire problem in a probability distribution, whereas I was saying that, because we hadn't done that, we couldn't legitimately add probabilities into the picture at a later step. I wasn't ruling out the possibility of a solution like yours, and it would indeed be interesting to know whether yours can be generalized in any way. In a prior draft of this post, I actually suggested that we could introduce a random variable before the envelope was chosen (although I hadn't even attempted to work out the details). I omitted that suggestion only for the sake of brevity.
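For what it's worth, here is a minimal sketch of that kind of grounding, assuming (purely for illustration) a uniform prior on the smaller amount; the function name and the particular prior are mine, not part of the original problem statement. Once the distribution is fixed before an envelope is chosen, keeping and switching have the same expected payoff:

```python
import random

def simulate(trials=100_000, seed=0):
    """Play the two-envelope game with an explicit prior on the smaller
    amount, so every probability is grounded in a distribution that is
    fixed before any envelope is chosen."""
    rng = random.Random(seed)
    keep_total = 0.0
    switch_total = 0.0
    for _ in range(trials):
        x = rng.uniform(1.0, 100.0)   # smaller amount, drawn from the prior
        envelopes = (x, 2 * x)
        pick = rng.randrange(2)       # select an envelope uniformly at random
        keep_total += envelopes[pick]
        switch_total += envelopes[1 - pick]
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = simulate()
print(f"average payoff if you always keep:   {keep_avg:.2f}")
print(f"average payoff if you always switch: {switch_avg:.2f}")
```

Both averages converge to 1.5 times the mean of the prior, so switching confers no advantage once the problem is set up this way.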
My interest is more in the philosophy of language and how language can be deceptive — which is clearly happening in some way in the statement of this problem — and what we can do to guard ourselves against that. What bothers me is that, even when I claimed to have spotted where and how the false step occurred, nobody wanted to believe that I spotted it, or at least they didn't believe that it mattered. That's rather disturbing to me because this problem involves a relatively simple use of language. And I think that humans are in a bit of trouble if we can't even get on the same page about something this simple... because we've got very serious problems right now in regard to A.I. that are much more complicated and tricky to deal with than this one.
But I do like your solution, and I'm glad that it's documented here if nowhere else.
And for anyone who reads this, I apologize if the tone of my post was off-putting. I deliberately chose a slightly provocative title simply to draw attention to this post. I don't mind being corrected if I'm mistaken or have misspoken.
Thank you for responding. This is indeed a very tricky issue, and I was looking for a sounding board... anyone who could challenge me and help me clarify my explanation. I didn't expect so many haters in this forum, but the show must go on with or without them.
My undergraduate degree is in math, and mathematicians sometimes use the phrase "without loss of generality" (WLOG). Every once in a while they will make a semi-apologetic remark about the phrase, because they all know that, if it were ever used inappropriately, everything could fall apart. (For example, in a proof about two real numbers x and y, one may say "WLOG assume x ≤ y" only if the claim being proved is symmetric in x and y.) Appealing to WLOG is not a cop-out but rather an attempt to tell those who are evaluating the proof, "Tell me if I'm wrong."
In your example of a coin flip, I can find no loss of generality. However, in the two envelopes problem, I can. If step (1) of the argument had said "unselected envelope" rather than "selected envelope", then the argument would have led the player to keep the selected envelope rather than switch. Why should the argument using the words "selected envelope" be more persuasive than the argument using the words "unselected envelope"? Do you see what I mean? There is an implicit "WLOG" here, but in this case with an actual loss of generality.
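To make the symmetry concrete, here is a tiny sketch (the function name is mine, purely illustrative) of the naive expected-value step applied mechanically to whichever envelope you start from. Each application recommends holding the other envelope, so the two applications give mirror-image advice and cannot both be sound:

```python
def naive_expected_value_of_other(amount):
    """The fallacious step: treat 'the other envelope holds double' and
    'the other envelope holds half' as equally likely, given only the
    amount in the envelope you are reasoning from."""
    return 0.5 * (2 * amount) + 0.5 * (amount / 2)  # = 1.25 * amount

# Reasoning from the selected envelope:
# "the unselected envelope is worth more, so switch."
print(naive_expected_value_of_other(100))  # 125.0

# Reasoning from the unselected envelope:
# "the selected envelope is worth more, so keep."
print(naive_expected_value_of_other(100))  # 125.0 again, mirror-image advice
```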
This problem still leaves me feeling very troubled because, even to the extent that I understand the fallacy, it seems very difficult to know whether I have explained it in a way that leaves absolutely no room for confusion (something I rarely manage even when I can see an actual error in somebody's reasoning). And apparently, I was not able to explain the fallacy in a way that others could understand. As far as I'm concerned, that's a sign of a very dangerous fallacy. And I've encountered some very deep and dangerous fallacies. So, this one is still quite disturbing to me.