Comments

Comment by j_timeberlake on Tail SP 500 Call Options · 2025-01-24T21:07:58.097Z · LW · GW

What are you specifically planning to accomplish?

In a post-ASI world, the assumption that investment capital returns are honored by society is basically gone. Like the last round of a very long iterated prisoner's dilemma, there's no longer an incentive to cooperate. There's still time between now and then to invest, but the generic "more long-term capital = good" mindset seems insufficient without an exit strategy or final use case.

Personally, I'm trying to balance various risks around the choppy years right before ASI, and to maximize charitable outcomes while I still have some agency in this world.

Comment by j_timeberlake on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-24T17:50:10.550Z · LW · GW

It's not acceptable to him, so he's trying to manipulate people into thinking existential risk is approaching 100% when it clearly isn't. He pretends there aren't obvious reasons AI would keep us alive, pretends the Grabby Aliens Hypothesis is a fact (so people think alien intervention is basically impossible), and pretends there aren't probably sun-sized unknown-unknowns in play here.

If it weren't so transparent, I'd appreciate that it could actually trick the world into caring more about AI safety. But if it's so transparent that even I can see through it, then it's not going to trick anyone smart enough to matter.

Comment by j_timeberlake on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-23T16:36:37.081Z · LW · GW

Yudkowsky is obviously smart enough to know this. You can't wake someone who is only pretending to be asleep.

It would go against his agenda to admit AI could cheaply hedge its bets by leaving humanity alive, just in case there's a stronger power out in reality that values humanity.

Comment by j_timeberlake on Thiel on AI & Racing with China · 2024-08-21T00:30:44.496Z · LW · GW

I'm sorry, I read the tone of it as ruder than it was intended.

Comment by j_timeberlake on Thiel on AI & Racing with China · 2024-08-20T23:57:42.767Z · LW · GW

[Rogan pivots to talking about aliens for a while, which I have no interest in and do not believe the hypothesis is worth privileging. I point you to (and endorse) the bets on this that many LessWrongers have made of up to $150k against the hypothesis.]

This reeks of soldier mindset: instead of just ignoring that part of the transcript, you felt the need to seek validation for your opposing opinion by telling us what to think in an unrelated section. The readers can think for themselves and do not need your help to do so.

Comment by j_timeberlake on Fear of centralized power vs. fear of misaligned AGI: Vitalik Buterin on 80,000 Hours · 2024-08-07T20:08:19.148Z · LW · GW

This is why I'm expecting an international project for safe AI. The US government isn't going to leave powerful AI in the hands of Altman or Google, and the rest of the world isn't going to sit idly by while the USA becomes the sole AGI powerhouse.

An international project to create utopian AI is the only path I can imagine which avoids MAD.  If there's a better plan, I haven't heard it.