Comments

Comment by followthesilence on What Depression Is Like · 2024-08-29T14:40:47.423Z · LW · GW

The disagreement here seems to be around how literally one should interpret the metaphor. 

I agree depression could be more accurately described as "lack of caring" than "must do endless puzzles". However, the purpose of the post is to describe the depressive experience to people who cannot relate.

To that end, I like the sudoku metaphor. If you tell someone "depression means I just don't care and can't muster willpower to do things I should/need to do" a lot of people may -- consciously or not -- judge this as a voluntary condition where the solution approximates to "have you tried caring?" 

Sudokus help illustrate what feel like involuntary roadblocks to otherwise simple life processes, the way these roadblocks ramify insidiously into more sub-components of life over time, and the level of fatigue, suffering, and defeat they inflict.

Comment by followthesilence on Dragon Agnosticism · 2024-08-02T06:32:11.176Z · LW · GW

May be a Rorschach... For me, of the dozen or so things I thought about replacing dragons with, "race science" wasn't one of them.

Comment by followthesilence on Poker is a bad game for teaching epistemics. Figgie is a better one. · 2024-07-09T05:12:23.935Z · LW · GW

Thanks for the intro to Figgie. It makes sense that it's a better game for teaching trading concepts: it was designed specifically to train trading interns, it has its own trading platform with bid-ask pricing, and it has all the other advantages you mention above.

I would take issue with the first part ("poker is a bad game for teaching epistemics"), especially relative to the universe of well-known games out there. To address your criticisms:

In poker, most decisions don't give you feedback about whether you were right for the right reasons.

This strikes me as more feature than bug. Just as it can be "to your advantage to hide how you're playing certain combinations of cards from your tablemates", so too is it typical for firms to try to disguise their motives and trading strategies from rivals. Poker (and trading) is about making optimal decisions with incomplete information. Learning to do this without immediate feedback is itself a valuable skill. Relying on results from a single hand/trade is too noisy and often the best you can do is guess/deduce the likelihood your play was +EV -- the most valuable feedback comes from your long-term results.
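To illustrate with a toy simulation (the numbers here are hypothetical, just for illustration): a play can be solidly +EV while losing most of the time, so a single hand's result tells you almost nothing, while the long run converges on the truth.

```python
import random

random.seed(1)

# Hypothetical +EV spot: risk $100 to win $300 with a 30% win rate.
# EV per hand = 0.3 * 300 - 0.7 * 100 = +$20, yet 70% of hands lose.
def play():
    return 300 if random.random() < 0.3 else -100

print(play())                       # one hand: mostly noise
results = [play() for _ in range(100_000)]
print(sum(results) / len(results))  # long run: about +$20 per hand
```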

If your poker playing partners aren't sufficiently skilled, you'll learn bad lessons.

A big part of the game is understanding your relative skill and assessing your adversaries ("If you can't spot the sucker in your first half hour at the table, you are the sucker"). Once someone becomes proficient at poker, arguably the most lucrative skill becomes identifying unsophisticated players/markets and exploiting them. Clearly transferable to trading, though maybe not to being a decent human being.

My favorite poker concept applicable to trading and other areas of life is "What level are they on?", where the levels run sequentially: "What do I have?", "What do they have?", "What do they think I have?", "What do they think I think they have?", and so on.

I see this as applicable in speculative markets. For instance, when the last Bitcoin halving date was approaching, funny investment theses abounded: "(1) BTC halving --> less supply --> BUY", "(2) Halving already priced in --> Level 1 thinkers will dump holdings when they don't get the anticipated halving bounce --> SELL", "(3) Level 2 is right that BTC has already appreciated due to the anticipated halving, but they don't realize that demand from new BTC ETF inflows is going to vastly outstrip newly constrained supply and we'll get a squeeze --> BUY", etc. Here, some level always risks learning a bad lesson (being right for the wrong reasons). The true skill is being able to deduce whether you can, over a larger sample, correctly assess the state/thinking of the market.

It takes a long time to get reasonably good at poker.

Good is a relative term here. Basic competence in the key concepts that transfer to trading can be achieved over much shorter timelines than those poker boards suggested; their estimates refer more to holding your own against professionals (or bots, if playing online) for real money.

Poker players spend most of the time at the table not making decisions.

Probably depends on what you're trading, but in my experience traders technically spend most of their time at their desks not making trades. Whether waiting to act or waiting for the next hand, there is value in gathering information and observing how your opponents are playing.

A few poker situations turn the emotional stakes way up, past the level that's helpful.

This is another feature (not bug) to me. Even in a toy game with play money or nickel stakes, poker has an amazing ability to put people "on tilt", where emotions distract from the pursuit of optimal play or cause them to take outsized risks to chase losses. This can teach valuable lessons to junior traders learning to manage real assets. The best traders and poker professionals possess the skill, whether innate or learned, of tuning out the noise and not letting losing streaks get in their heads.

Comment by followthesilence on OMMC Announces RIP · 2024-04-02T02:22:49.346Z · LW · GW

I'm highly skeptical that it's even possible to create omnicidal machines. Can you point empirically to a single omnicidal machine that's been created? What specifically would an OAL-4 machine look like? Whatever it is, just don't do that. To the extent you do develop anything OAL-4, we should be fine so long as certain safeguards are in place and you encourage others not to develop the same machines. Godspeed.

Comment by followthesilence on A Back-Of-The-Envelope Calculation On How Unlikely The Circumstantial Evidence Around Covid-19 Is · 2024-02-08T03:48:02.192Z · LW · GW

Post hoc probability calculations like these are a Sisyphean task. There are infinite variables to consider, and most can't be properly measured or even ballparked.

On (1), pandemics are arguably more likely to originate in large cities because population density facilitates spread, large wildlife markets are more likely, and they serve as major travel hubs. I'm confused why the denominator is China's population for (1) but all the world's BSL-4 labs in (3). I don't understand the calculation for (2)... that seems the opposite of "fairly easy to get a ballpark figure for." Ditto for (4).

Comment by followthesilence on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-07T11:41:37.952Z · LW · GW

Rootclaim sold the debate as a public good that would enhance knowledge but ultimately shirked its responsibility to competently argue its side, so it was a very one-sided affair that left viewers (and judges) to conclude this was probably natural origin. Several people on LW say the debate has strongly swayed them against lab leak.

The winning argument (as I saw it) came down to Peter's presentation of case mapping data (location and chronology) suggesting an undeniable tie to the seafood market. Saar did little to undercut this, which was disappointing because the Worobey paper and WHO report have no shortage of issues. Meanwhile, Peter did his homework on basically every source Saar cited (even engaging with some authors on Twitter to better understand the source) and was quick to expose errors in the weak ones, leaving viewers with the impression that Rootclaim's case was a house of cards.

Peter was just infinitely more prepared for the debates, had counterpoints for anything Saar said, and seemingly made 10 logical arguments in the time it took Saar to form a coherent sentence. It wasn't exactly like watching Mayweather fight Michael Cera, but it wasn't not that. Didn't seem a fair fight. 

Comment by followthesilence on Most experts believe COVID-19 was probably not a lab leak · 2024-02-05T03:55:28.481Z · LW · GW

Zoonotic will win this debate because Peter outclassed Saar on all fronts, from research/preparation to intelligibly engaging with counterclaims and judges' questions.

Saar seemed too focused on talking his book and presenting slides with conditional probability calculations. He was not well-versed enough in the debate topics to defend anything when Peter undercut a slide's assumptions, nor was he able to poke sufficient holes in Peter's main arguments. Peter relied heavily on case mapping data, and Saar failed to demonstrate the ascertainment bias inherent to that data. He even admitted he did no follow-up research after the initial presentation. 

I get the sense Saar either thought lab leak was so self-evident that showing the judges his probability spreadsheet would be a wrap, or he was happy to pony up $100k just to advertise Rootclaim. Maybe both.

For those reasons the Rootclaim verdict doesn't seem like a proper referendum on the truth of the matter. But I would be more sympathetic to people updating toward zoonotic on the basis of having watched that debate, rather than on the basis of these survey results.

Comment by followthesilence on Most experts believe COVID-19 was probably not a lab leak · 2024-02-03T06:29:39.919Z · LW · GW

Yes, by virtue of the alliance with the "top virologists".

Comment by followthesilence on Most experts believe COVID-19 was probably not a lab leak · 2024-02-03T04:27:41.428Z · LW · GW

In Feb 2020, Anthony Fauci convened a bunch of virologists to assess SARS-CoV-2 origins. The initial take from the group (revealed in private Slack messages via FOIA requests in 2023) was that this was likely engineered. In the view of Kristian Andersen of Scripps Research, it was "so friggin likely because they were already doing this work."

The same month, Fauci held an off-the-record call with the group. After that, everyone's tune changed, and within weeks we got the Proximal Origins paper, with Kristian Andersen doing a 180 as the lead author. The paper posits that there is "strong evidence that SARS-CoV-2 is not the product of purposeful manipulation." I encourage you to read the paper to determine its merits. Their evidence, as I understand it, is a) the structure of the spike protein is not what a computer would have generated as optimally viral, and b) pangolins. Pangolins were ruled out as carriers shortly after the paper's release. (a) can be dismissed -- or at least mitigated -- by the fact that serial passage can naturally develop what a computer may not. Andersen's Scripps Research coincidentally got a multi-million dollar grant shortly after publishing Proximal Origins, but again, that's merely coincidence.

Some in the comments seem stuck on the fact that this virus could have been obtained in the wild, and thus is zoonotic in origin. That ignores the substantial work the Wuhan lab undertook to turn natural viruses into chimeric viruses optimized for human contagiousness.

Fauci took the Proximal Origins paper on his circuit of 60 Minutes interviews, NYTimes podcasts, and Congressional testimonies, declaring, "the leading virologists say this was most likely of natural origin". 

This rhetoric undoubtedly has a massive chilling effect on any "experts" who would otherwise posit that this could be lab origin. The High Minister of Science has declared it was zoonotic, and since definitive proof will probably never be established either way, you had better be on the side of the High Minister of Science.

If there were an omniscient arbiter of truth that could make markets on this issue, I would take lab leak at >50% in a heartbeat, and would put it closer to 85%. Alas, there never will be such an arbiter, and we'll have to rely on experts who are heavily reliant on government research grants, on gain-of-function research as the way of the future, and on generally not rocking the boat.

Comment by followthesilence on Prediction markets are consistently underconfident. Why? · 2024-01-11T17:50:02.412Z · LW · GW

The Metaculus point scoring system incentivizes* middling predictions that would earn you points no matter the outcome (or at least provide upside in one direction with no point downside if you're wrong), which encourages participants with no opinion or knowledge on the matter to blindly predict in the middle.
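A toy illustration of that incentive, assuming a simple log score measured against a 50% baseline (a simplification for illustration, not Metaculus's actual scoring formula):

```python
import math

def toy_points(p, resolved_yes):
    """Toy log score against a 50% baseline -- an illustrative
    simplification, not Metaculus's actual scoring rule."""
    q = p if resolved_yes else 1 - p
    return math.log(q / 0.5)

# A 50% prediction scores zero no matter how the question resolves...
print(toy_points(0.5, True), toy_points(0.5, False))   # 0.0 0.0

# ...while a confident prediction risks a much larger loss than its gain.
print(toy_points(0.9, True), toy_points(0.9, False))   # +0.59 -1.61
```

Under a rule shaped like this, someone with no opinion loses nothing by parking at the middle, which is consistent with the underconfidence the post observes.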

Harder to explain with real-money markets, but Peter's explanation is a good one. Also, contracts closing several months or years out where the outcome is basically known will still trade at a discount to $0.99 because of the time value of money and the opportunity cost of tying up capital in a contract with very low prospective ROI.
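A rough worked example of that opportunity cost (the prices here are hypothetical):

```python
# Hypothetical: a contract that is near-certain to pay out $1.00 in 12 months.
price = 0.97
payout = 1.00
annual_return = (payout - price) / price
print(f"{annual_return:.1%}")  # ~3.1% for a year of tied-up capital

# If risk-free alternatives yield more than that, buyers won't bid the
# contract up toward $0.99, so even near-certainties trade at a discount.
```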

*Haven't been on the site in a while but this was at least true as of a few months ago.

Comment by followthesilence on The Dark Arts · 2024-01-03T05:26:31.695Z · LW · GW

Good post, thank you. I imagine that to go undefeated, you must excel at things beyond the dark arts described (in my experience, some judges refuse to buy an argument no matter how poorly opponents respond)? How much of your success do you attribute to 1) your general rhetorical skills or eloquence, and 2) your ability to read judges to gauge which dark arts they seem most susceptible to?

Comment by followthesilence on Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense · 2023-11-28T07:37:52.071Z · LW · GW

"Want" seems ill-defined in this discussion. To the extent it is defined in the OP, it seems to be "able to pursue long-term goals", at which point tautologies are inevitable. The discussion gives me strong stochastic parrot / "it's just predicting next tokens not really thinking" vibes, where want/think are je ne sais quoi words to describe the human experience and provide comfort (or at least a shorthand explanation) for why LLMs aren't exhibiting advanced human behaviors. I have little doubt many are trying to optimize for long-term planning and that AI systems will exhibit increasingly better long-term planning capabilities over time, but have no confidence whether that will coincide with increases in "want", mainly because I don't know what that means. Just my $0.02, as someone with no technical or linguistics background.

Comment by followthesilence on why did OpenAI employees sign · 2023-11-28T06:10:06.262Z · LW · GW

Not sure if this page is broken or I'm technically inept, but I can't figure out how to reply to qualiia's comment directly:

My gut reaction was primarily #5 and #7, but qualiia's post articulates the rationale better than I could.

One useful piece of information that would influence my weights: what were OAI's general hiring criteria? If they sought solely the "best and brightest" on technical skills and enticed talent primarily with premier pay packages, I'd lean harder on #5. If they sought cultural/mission fits in some meaningful way, I might update lower on #5/#7 and higher on others. I read the external blog post about the bulk of OAI compensation being in PPUs, but that's not necessarily incompatible with mission fit.

Well done on the list overall, seems pretty complete, though aphyer provides a good unique reason (albeit adjacent to #2).

Comment by followthesilence on Apocalypse insurance, and the hardline libertarian take on AI risk · 2023-11-28T05:30:36.913Z · LW · GW

The answer is that apocalypse insurance—unlike liability insurance—must pay out in advance of the destruction of everyone. If somebody wishes to risk killing you (with some probability), there's presumably some amount of money they could pay you now, in exchange for the ability to take that risk.

Pretty sure you mean they should pay premiums rather than payouts?

I like the spirit of this idea, but think it's both theoretically and practically impossible: how do you value apocalypse? Payouts are incalculable/infinite/meaningless if no one is around. 

The underlying idea seems sound to me: there are unpredictable civilizational outcomes resulting from pursuing this technology -- some spectacular, some horrendous -- and the pursuers should not reap all the upside when they're highly unlikely to bear any meaningful downside risks. 

I suspect this line of thinking could be grating to many self-described libertarians who lean e/acc and underweight the possibility that technological progress != prosperity in all cases. 

It also seems highly impractical because there is not much precedent for insuring against novel transformative events for which there's no empirical basis*. Good luck getting OAI, FB, MSFT, etc. to consent to such premiums, much less getting politicians to coalesce around a forced insurance scheme that will inevitably be denounced as stymying progress and innovation with no tangible harms to point to (until it's too late).

Far more likely (imo) are post hoc reaction scenarios where either:

a) We get spectacular takeoff driven by one/few AI labs that eat all human jobs and accrue all profits, and society deems these payoffs unfair and arrives at a redistribution scheme that seems satisfactory (to the extent "society" or existing political structures have sufficient power to enforce such a scheme)

b) We get a horrendous outcome and everyone's SOL

* Haven't researched this and would be delighted to hear discordant examples.

Comment by followthesilence on Spaced repetition for teaching two-year olds how to read (Interview) · 2023-11-28T03:05:37.458Z · LW · GW

It sounds quite intense, though I'm hesitant to describe it as "too hard" since I don't know how children should be reared. The cringing was more at what I perceive as some cognitive dissonance, with "I didn't want to be a tiger parent" coinciding with informing them they didn't really have a choice because it was their job (I don't see the compromise there, nor do I put much stock in a 3-5 year old's ability to negotiate compromises, though these do sound like extraordinary children). But my views are strongly influenced by my own upbringing, which took a very hands-off, "do what you enjoy" approach. That could be a terrible approach. Internally I grapple with what the appropriate level of parental guidance is, to the extent that can be ascertained... [Narrator: It can't.]

Comment by followthesilence on Spaced repetition for teaching two-year olds how to read (Interview) · 2023-11-27T06:06:05.320Z · LW · GW

Credit to their dad and these kids who achieved these early results. As noted, genetics could factor into aptitude at such a young age -- I'm curious (if not skeptical) whether this system is reproducible in many children of the same age. The following excerpts in conjunction made me cringe a little bit:

I really, really thought I was pushing too hard; I had no desire to be a "tiger dad", but he took it with extreme grace. I was ready to stop at any moment, but he was fine. 

Hannah went through a phase where she didn't want to do it. We tried to compromise and work through it. Eventually, it became part of her "job" -- we told her that every human has a job, and her job was to do Anki. Other than that, we never had to coerce any of the kids.

But that's more a personal values issue, and I'm in no position to judge parenting styles. Congrats again to this family, and I hope Anki is useful for other families.

Comment by followthesilence on OpenAI: The Battle of the Board · 2023-11-23T05:17:40.170Z · LW · GW

If Sam is as politically astute as he is made out to be, loading the board with blatant MSFT proxies would be bad optics and detract from his image. He just needs to be relatively sure they won't get in his way or try to coup him again.

Comment by followthesilence on OpenAI: The Battle of the Board · 2023-11-23T05:12:51.202Z · LW · GW

This is a great post, synthesizing a lot of recent developments and (I think) correctly identifying a lot of what's going on in real time, at least with the limited information we have to go off of. Just curious what evidence supports the idea of Summers being "bullet-biting" or associated with EA?

Comment by followthesilence on OpenAI: The Battle of the Board · 2023-11-23T04:59:54.371Z · LW · GW

Like many, I have no idea what's happening behind the scenes, so this is pure conjecture, but one can imagine a world in which Toner "addressed concerns privately" but those concerns fell on deaf ears. At that point, "resigning board seat and making case publicly" doesn't seem like the appropriate course of action, whether or not that is a "nonprofit governance norm". I would think your role as a board member, particularly in the unique case of OpenAI, is to honor the nonprofit's mission. If you have a rogue CEO who seems bent on pursuing power, status, and profits for your biggest investor (again, purely hypothetical without knowing what's going on here), and those pursuits run contrary to the board's stated mission, resigning your post and expressing concerns publicly once you no longer have direct power seems suboptimal. It presumes the board should have no say in whether the CEO is doing their job correctly when, in this case, that seems to be the board's only role.

Comment by followthesilence on “Why can’t you just turn it off?” · 2023-11-20T06:52:59.997Z · LW · GW

Granted this all rests on unsubstantiated rumors and hypotheticals, but in a scenario in which the board said "shut it down this is too risky", doesn't the response suggest we're doomed either way? Either

a) Investors have more say than the board and want money, so board resigns and SA is reinstated to pursue premiere AGI status

b) Board holds firm in decision to oust SA, but all his employees follow him to a new venture and investors follow suit and they're up and running with no more meaningful checks on their pursuit of godlike AI

After some recent (surprising) updates in favor of "oh, maybe people are taking this more seriously than I expected and maybe there's hope", this ordeal leads me to update in the opposite direction: we're in a full-speed-ahead arms race to AGI, and the only thing that will stop it is strong global government interventionist policy that is extremely unlikely. Not that the latter wasn't heavily weighted already, but this feels like the nail in the coffin.

Comment by followthesilence on President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence · 2023-11-01T19:29:28.288Z · LW · GW

I agree, I was trying to highlight it as one of the most specific, useful policies from the EO. Understand the confusion given my comment was skeptical overall.

Comment by followthesilence on President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence · 2023-11-01T06:04:10.610Z · LW · GW

this is crazy, perhaps the most sweeping action taken by government on AI yet. 

Seems like too much consulting jargon and "we know it when we see it" vibes, with few concrete bright lines. Maybe a lot hinges on enforcement of the dual-use foundation model policy... any chance developers can game the system to avoid qualifying as a dual-use model? Watermarking synthetic content does appear, on its face, to be a widely applicable and helpful requirement.

Comment by followthesilence on Book Review: Going Infinite · 2023-10-30T04:58:08.380Z · LW · GW

No idea how likely it is. I'm not going to create a market but welcome someone else doing so. I agree the likelihood "evidence will come out [...] over the next year" is <10%. That is not the same as the likelihood it happened, which I'd put at >10%. More than anything, I just cannot reconcile my former conception of Michael Lewis with his current form as a SBF shill in the face of a mountain of evidence that SBF committed fraud. I asked the question because Zvi seems smarter than me, especially on this issue, and I'm seeking reasons to believe Lewis is just confused or wildly mistaken rather than succumbing to ulterior motives.

Comment by followthesilence on The Overkill Conspiracy Hypothesis · 2023-10-30T04:40:08.065Z · LW · GW

Thanks. I'm probably missing the point, but I don't see how these definitions apply to moon landing conspiracies, which much of your post seems to center on. The thrust of their argument, as I understand it, is that the US committed to landing on the moon by the end of the 60s, but that turned out to be much harder than anticipated, so the landing was fabricated to maintain some geopolitical prestige/advantage. As you pointed out, pulling this off would require countless scientists and astronauts to keep the secret to their graves, or at least compartmentalizing tasks such that countless people believe they're solving the real scientific problems of a moon landing while a smaller group conspires to fake the results. This seems improbable. Like you said, it could be "easier to just... go to the moon for real".

But moon conspiracists seem to explicitly dismiss -- rather than assume -- these circumstances. They argue that landing on the moon was physically too difficult (or impossible) for the time such that faking the landing was the easier route. Applying OCH here seems to assume the conclusion, and I don't understand how it provides a better/faster route to dismissing moon conspiracies than just applying existing evidence or Occam's razor. Perhaps, though, I'm missing the "circumstances [moon landing] conspiracy theories must assume" in this example.

Comment by followthesilence on Book Review: Going Infinite · 2023-10-30T04:15:27.079Z · LW · GW

Great review. Brilliant excerpts, excellent analysis. My only quibble would be:

What Michael Lewis is not is for sale.

What leads you to this conclusion? I don't know much about Lewis, but based on his prior books I would've said one thing he is not is stupid, or bad at understanding people. I feel you have to be inconceivably ignorant to stand by SBF and suggest he probably didn't intentionally commit fraud, particularly in light of all the stories presented in the book. 

Bizarre statements like "There’s still an SBF-shaped hole in the world that needs filling" have me speechless with no good explanation other than Lewis was on the take.

Comment by followthesilence on The Overkill Conspiracy Hypothesis · 2023-10-21T23:27:16.225Z · LW · GW

Can you succinctly explain what OCH is? Is it, roughly, applying Occam's razor to conspiracy theories?

Comment by followthesilence on Inside Views, Impostor Syndrome, and the Great LARP · 2023-09-27T23:16:00.830Z · LW · GW

IMO a lot of claims of having imposter syndrome is implicit status signaling. It's announcing that your biggest worry is the fact that you may just be a regular person.

Imposter syndrome: being a regular person is your "biggest worry".

Comment by followthesilence on Immortality or death by AGI · 2023-09-22T04:43:10.367Z · LW · GW

I'm pretty bullish on the hypothetical capabilities of AGI, but on first thought decided a 40% chance of "solving aging" and stopping the aging process completely seemed optimistic. Then I reconsidered and thought maybe it's too pessimistic, leading me to the conclusion that this likelihood is hard to approximate. I don't know what I don't know. I would be curious to see a (conditional) prediction market for this.

Comment by followthesilence on AI #20: Code Interpreter and Claude 2.0 for Everyone · 2023-07-13T23:29:57.627Z · LW · GW

Voting that you finish/publish the RFK Jr piece. Thanks for this weekly content.

Comment by followthesilence on When do "brains beat brawn" in Chess? An experiment · 2023-06-28T21:05:16.105Z · LW · GW

Enjoyed this post, thanks. Not sure how well chess handicapping translates to handicapping future AGI, but it is an interesting perspective to at least consider.

Comment by followthesilence on Matt Taibbi's COVID reporting · 2023-06-16T00:38:06.431Z · LW · GW

Spoiler: less than 1% will admit they were wrong. Straight denial, reasoning that it doesn't actually matter, or pretending they knew the whole time that lab origin was possible are all preferable alternatives. Admitting you were wrong is career suicide.

The political investments in natural origin are strong. Trump claiming a Chinese lab was responsible automatically put a large chunk of Americans in the opposite camp. My interest in the topic actually started with reading up to confirm why he was wrong, only to find the Daszak-orchestrated Lancet letter that miscited numerous articles, and the Proximal Origins paper, which might be one of the dumbest things I've ever read. The Lancet letter's declaration that "lab origin theories = racist" influenced discourse in a way that cannot be overstated. It also seems many view deadlier viruses as an adjoining component of climate change: a notion that civilizing more square footage of earth means we are inevitably bound to suffer nature's increasing wrath in the form of increasingly virulent, deadly pathogens.

The professional motivations are stark and gross. “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” Thoughts on the origin are frequently dismissed if you're not a virologist. But all the money in virology is in gain of function. Oops!

Comment by followthesilence on Transformative AGI by 2043 is <1% likely · 2023-06-07T22:32:08.865Z · LW · GW

Apologies, I'm not trying to dispute math identities. And thank you; the link provided helps put words to my gut concern: that this essay's conclusion relies heavily on a multi-stage fallacy, and that arriving at point probability estimates for each event independently is fraught/difficult.

Comment by followthesilence on Transformative AGI by 2043 is <1% likely · 2023-06-07T19:32:15.036Z · LW · GW

Thanks, I suppose I'm taking issue with sequencing five distinct conditional events that seem to be massively correlated with one another. The likelihoods of Events 1-5 seem to depend upon each other in ways such that you cannot assume point probabilities for each event and multiply them together to arrive at 1%. Event 5 certainly doesn't require Events 1-4 as a prerequisite, and arguably makes Events 1-4 much more likely if it comes to pass.

Comment by followthesilence on Transformative AGI by 2043 is <1% likely · 2023-06-07T17:11:17.112Z · LW · GW

Can you explain how Events #1-5 from your list are not correlated? 

For instance, I'd guess #2 (learns faster than humans) follows naturally -- or is much more likely -- if #1 (algos for transformative AI) comes to pass. Similarly, #3 (inference costs <$25/hr) seems to me a foregone conclusion if #5 (massive chip/power scale) and #2 happen.

Treating the first five as conditionally independent puts you at 1% before arriving at 0.4% with external derailments, so that assumption is doing most of the work to make your final probability minuscule. But I suspect they are highly correlated events and would bet a decent chunk of money (at 100:1 odds, at least) that all five come to pass.
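To make the independence point concrete, here is a minimal sketch with illustrative numbers (mine, not the essay's): five events that each have a 60% marginal probability have a ~7.8% joint probability if independent, but the joint probability rises toward 60% if a common driver links them.

```python
import random

random.seed(0)
N = 100_000
P = 0.6  # illustrative marginal probability for each of five events

# Independent events: joint probability is P**5, about 7.8%.
indep = sum(all(random.random() < P for _ in range(5)) for _ in range(N)) / N

# Perfectly correlated events (one common driver): joint probability is P.
corr = sum(random.random() < P for _ in range(N)) / N

print(f"independent: {indep:.1%}, fully correlated: {corr:.1%}")
```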