If what you were exposing was a core uncertainty - 'I want a) people to exist after me more than I want b) a MODEL that people exist after me, but my thinking incorporates b) instead of a), and that means my priorities are wrong' - and it's still troubling you, I'd like to suggest the opposite: if you have a model that predicts the thing you want, that's perfect! Your model (I think) takes your experiences, feeds them into a Bayesian algorithm, and predicts the future - what better way is there to think? I lack such computing power and honesty myself...but if an honest computer takes my experiences and says, 'Therefore, people exist after me,' then my best possible guess is that people exist after me, and I can improve the chance of that using my model.
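To make the 'experiences in, prediction out' idea concrete, here is a minimal sketch using a simple Beta-Bernoulli update; the event being predicted, the uniform prior, and the counts are all illustrative assumptions on my part, not anything from the original exchange.

```python
# Minimal sketch (illustrative only): feed past "experiences" into a Bayesian
# update and read off a prediction.  Beta(1, 1) prior, Bernoulli observations.

def posterior_predictive(successes: int, failures: int,
                         prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Probability the next observation comes out 'well', given a Beta prior
    updated on the observed counts."""
    a = prior_a + successes
    b = prior_b + failures
    return a / (a + b)

# Suppose 9 of my 10 relevant past experiences came out the way I hoped:
print(posterior_predictive(successes=9, failures=1))  # ~0.83, my best guess going forward
```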
I savor the succulent choleric chaos of declaring that I value mere phlegm above yellow bile. That is almost a contradiction, but not quite; and the resulting blend has a choleric quality as well: a delicious humor.
I think the experiment's conclusion - that subjects sought to model the cards instead of maximising wins - is only valid if the subjects knew the probabilities (or could easily verify them) at the start, and, as many have noted, saw the deck reshuffled after each trial. Without the probabilities, their 'mistake' sounds more like failing to notice a majority color, or failing to optimise once they had. I think I read the experiment as intended, but readers might find that easier if these conditions were stated.
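For concreteness, here is a small sketch of the maximise-versus-match gap the conclusion depends on; the function names and the 70/30 split are my own illustrative assumptions, not figures given above.

```python
# Illustrative comparison: always guessing the majority color vs. guessing each
# color at the rate it appears ("probability matching").  p = 0.70 is assumed.

def expected_accuracy_maximise(p_majority: float) -> float:
    """Always guess the majority color: you win whenever that color comes up."""
    return p_majority

def expected_accuracy_match(p_majority: float) -> float:
    """Guess each color at the rate it appears in the deck."""
    p = p_majority
    return p * p + (1 - p) * (1 - p)

p = 0.70  # assumed majority frequency, for illustration only
print(expected_accuracy_maximise(p))  # 0.70
print(expected_accuracy_match(p))     # 0.58
```

The gap (0.70 vs. 0.58 here) only counts as a 'mistake' on the subjects' part if they actually had, or could cheaply learn, the underlying frequencies - which is the condition the comment above is asking about.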