A case against this type of trade is made here: https://www.lesswrong.com/posts/yFkNYyspBBqfSeBx9/against-using-stock-prices-to-forecast-ai-timelines
Relevant quote: "Fourth, and quite importantly, it is not obvious whether expectations of transformative AI would raise or lower stock prices. This is because, as described in the previous subsection, stock prices reflect the present-discounted value of future profits; and advanced AI may raise those future profits, but – as the central thesis of this piece argues – advanced AI would also raise the interest rate used to discount those profits. The net effect on stock prices is not immediately obvious."
I assume the person on the other side of the trade thinks a reasonable probability distribution for S&P 500 tail events is roughly the distribution those tail events in fact had over the last century (with some minor-to-moderate adjustments for changes to the economy). The current market price for the 12k-2028 calls implies a 1.81% chance of profit, which is in the same ballpark as the 4.3% frequency with which, in Sapphire's analysis, the S&P 500 doubled within 4 years (the scenario in which the 12k-2028 calls would be in the money at expiration).
Less precisely, the other person basically expects that the options they're selling will most likely expire worthless, since similar options have almost always expired worthless historically. So writing more contracts earns them premium now in exchange for a slim chance of having to sell other assets to cover a loss.
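For concreteness, here's a minimal sketch of the kind of historical base-rate check I take Sapphire's analysis to involve; `monthly_closes` is a hypothetical list of month-end S&P 500 closing prices (oldest first), not real data, and the 4.3% figure above would be the output of something like this run on the actual series.

```python
# Rough sketch: how often has the index at least doubled over a 4-year
# (48-month) window? `monthly_closes` is assumed to be a list of historical
# month-end closing prices, oldest first. Windows overlap, so this is a
# base rate over all rolling 4-year periods, not independent samples.
def doubling_frequency(monthly_closes, window_months=48):
    n_windows = len(monthly_closes) - window_months
    doubled = sum(
        1
        for start in range(n_windows)
        if monthly_closes[start + window_months] >= 2 * monthly_closes[start]
    )
    return doubled / n_windows

# e.g. doubling_frequency(sp500_monthly_closes) -> ~0.043 on the figures cited above
```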
So is it impossible for me to abstractly describe a Turing Machine, and then wonder whether it would halt, with there necessarily being a fact of the matter, all without resorting to physical instantiations?
The idea I'm trying to express is that "a proof using PA axioms is a valid proof using PA axioms if and only if it is enumerated by the following TM: [standard PA TM enumerator description]".
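As a concrete (if schematic) rendering, here is a Python sketch of that enumerator; `is_valid_pa_proof` is a stub standing in for the decidable syntactic check, and the alphabet is illustrative rather than any official encoding of PA.

```python
from itertools import count, product

ALPHABET = "0S+*=()~&|>Ax,v'"  # illustrative finite alphabet for PA syntax


def is_valid_pa_proof(candidate: str) -> bool:
    # Stub for the decidable check: does `candidate` encode a finite sequence
    # of formulas, each a PA axiom or derived from earlier lines by a rule of
    # inference? Writing this out is mechanical but lengthy.
    raise NotImplementedError


def enumerate_pa_proofs():
    # Dovetail over all finite strings in order of increasing length,
    # yielding exactly those that pass the proof check. Every valid PA
    # proof is a finite string, so each one appears eventually.
    for length in count(1):
        for symbols in product(ALPHABET, repeat=length):
            candidate = "".join(symbols)
            if is_valid_pa_proof(candidate):
                yield candidate
```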
My question is: what's an example of a PA proof you think is arguably valid but wouldn't be enumerated?
Agreed that a difference in intuitions provides a perfectly consistent way to deny one and not the other; I don't think omnizoid would deny this.
On the second point - presumably there would need to be an account of why normativity should get different epistemological treatment than mathematics does? Otherwise it would seem an unmotivated distinction to simply hold that "the epistemological standards for normativity are different from the mathematical standards, just because". I don't doubt you have an account of an important distinction, but I think that account would be doing the work. The initial "tricky" claim would hold up to the extent that identifying a relevant distinction is or isn't "tricky".
Then you wouldn't be rejecting intuitions to justify the other, as in omnizoid's comment (you'd be using intuitions to reject the other). Also, the prior comment uses the phrase "permitting moral realism" - I wouldn't have taken this to imply REQUIRING moral realism, independent of intuitions.
So you don't think there's a Turing Machine which enumerates all and only valid PA proofs?
For what proof encoded only by a non-standard number would you endorse the claim "this proof doesn't objectively lack proof-hood"?
One can construct a Turing Machine which iterates over all possible PA proofs, halting if it ever finds an inconsistency. Given this, if you're going to hold that there's no objective fact of the matter about whether PA is consistent, you'll also have to hold there's no objective fact of the matter about whether this Turing Machine halts.
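A sketch of that machine, reusing the (stubbed) `enumerate_pa_proofs` from above; `conclusion_of` is another hypothetical helper that extracts a proof's final line.

```python
def pa_consistency_searcher():
    # Halts iff PA is inconsistent: iterate over all valid PA proofs and stop
    # as soon as one concludes a canonical contradiction such as 0 = S(0).
    for proof in enumerate_pa_proofs():
        if conclusion_of(proof) == "0=S(0)":  # hypothetical helper
            return proof  # halt: we found a proof of a contradiction
    # If PA is consistent, this loop never terminates.
```

So "does PA have an objective consistency fact?" and "does this machine have an objective halting fact?" stand or fall together.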