Comments

Comment by heresieding on The Useful Idea of Truth · 2024-10-07T19:15:11.854Z · LW · GW

In case you were exposing a core uncertainty you had - 'I want (a) people to exist after me more than I want (b) a MODEL that people exist after me, but my thinking incorporates (b) instead of (a), and that means my priorities are wrong' - and it's still troubling you, I'd like to suggest the opposite: if you have a model that predicts the outcome you want, that's perfect! Your model (I think) takes your experiences, feeds them into a Bayesian algorithm, and predicts the future - what better way is there to think? I mean, I lack such computing power and honesty myself... but if an honest computer takes my experiences and says, 'Therefore, people exist after me,' then my best possible guess is that people will exist after me, and I can improve the chances of that by using my model.

Comment by heresieding on Something to Protect · 2022-04-13T02:30:06.796Z · LW · GW

I savor the succulent choleric chaos of declaring that I value mere phlegm above yellow bile. That is almost a contradiction, but not quite; and the resulting blend has a choleric quality as well: a delicious humor.

Comment by heresieding on Lawful Uncertainty · 2022-03-30T23:33:16.367Z · LW · GW

I think the experiment's conclusion - that subjects sought to model the cards instead of to maximise wins - is only valid if the subjects knew the probabilities (or could easily verify them) from the start and, as many have noted, saw the deck reshuffled after each trial. (Without the probabilities, it sounds like their 'mistake' would be not noticing a majority color, or not optimising once they had. I think I read the experiment as intended, but readers might find doing so easier if these conditions were stated.)
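For concreteness, here is the arithmetic behind 'maximise wins' - a hedged sketch assuming the 70/30 blue/red split usually quoted for this experiment. If a known fraction $p = 0.7$ of the cards are blue and the deck is reshuffled each trial, then

$$P(\text{win} \mid \text{always guess blue}) = p = 0.7,$$
$$P(\text{win} \mid \text{guess blue with frequency } p) = p^2 + (1-p)^2 = 0.58.$$

So once the probabilities are known, always guessing the majority color strictly beats probability matching, which is the sense in which modelling the cards fails to maximise wins.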