Comments

Comment by Joseph Greenwood (Hibron) on ACX/SSC Boulder meetup- September 23 · 2023-08-26T01:55:45.828Z · LW · GW

I will be attending this meetup. I have never been to an ACX meetup before (nor an SSC meetup, nor a DSL meetup), but I am excited to try this one out.

Comment by Joseph Greenwood (Hibron) on Value Learning is only Asymptotically Safe · 2019-04-09T14:45:31.189Z · LW · GW

Even granting that cosmic rays could flip any given bit, or any given sequence of bits, in a computer's memory, it is far from clear to me that the probability of this happening approaches 1 over the lifetime of the universe. It is not hard to construct cases where an event is entirely possible yet has probability 0: for instance, if I pick a number uniformly at random from the closed interval [0,1], the probability that I pick exactly 1 is 0, even though 1 is as likely a choice as any other point on the interval. And in the concrete case you're describing, the universe has only finite time to flip these bits before it sinks into entropy. Moreover, I wouldn't expect the sequence of datapoints needed to convince an AI that humans are hostile (or whatever) to be invariant across time: as the AI accrued more data, it would plausibly take more data to persuade it to change its mind.
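To make the two probability points concrete, here is a minimal sketch in LaTeX (my own formalization, not part of the original comment), assuming a per-timestep flip probability p and N available timesteps:

```latex
% My formalization, not the author's. Claim (1): a possible event can
% still have probability 0. For a uniform draw on [0,1], the chance of
% hitting the single point 1 is the Lebesgue measure of {1}:
\[
  X \sim \mathrm{Uniform}[0,1]
  \;\Longrightarrow\;
  \Pr[X = 1] \;=\; \lambda(\{1\}) \;=\; 0 .
\]
% Claim (2): finitely many chances need not drive the probability to 1.
% If each timestep independently flips the bit with probability p, then
\[
  \Pr[\text{at least one flip in } N \text{ steps}]
  \;=\; 1 - (1 - p)^{N} \;<\; 1
  \quad \text{for every finite } N,
\]
% and the limit 1 is attained only as N grows without bound; a universe
% with finite time before heat death supplies only a finite N.
```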

Comment by Joseph Greenwood (Hibron) on Decision Theory FAQ · 2015-10-19T14:29:59.004Z · LW · GW

In the last chapter of his book "Utility Theory for Decision Making," Peter Fishburn published a concise rendering of Leonard Savage's proof that "rational" preferences over acts imply that one behaves "as if" one were obeying Expected Utility Theory. He furthermore proved that Savage's axioms imply that the agent's utility function is bounded (he attributes the essence of this extension to Savage). So Subjective Expected Utility Theory has an answer to the St. Petersburg Paradox built into its axioms. That seems like a point well worth mentioning in this article.
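To spell out why bounded utility defuses the paradox, here is the standard calculation (my addition, not from the FAQ): the game pays 2^n with probability 2^{-n}, so the expected monetary payoff diverges, while any bounded utility function u yields a finite expected utility.

```latex
% Standard St. Petersburg calculation (my addition). In money the
% expectation diverges:
\[
  \mathbb{E}[\text{payoff}]
  \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n}
  \;=\; \sum_{n=1}^{\infty} 1
  \;=\; \infty .
\]
% But for any utility function u bounded in absolute value,
\[
  \bigl|\mathbb{E}[u(\text{payoff})]\bigr|
  \;\le\; \sum_{n=1}^{\infty} 2^{-n} \,\sup_{x}|u(x)|
  \;=\; \sup_{x}|u(x)| \;<\; \infty ,
\]
% so an agent satisfying Savage's axioms (whose utility is therefore
% bounded) assigns the gamble finite expected utility, and the paradox
% never gets off the ground.
```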