Posts

Jaan Tallinn's 2023 Philanthropy Overview 2024-05-20T12:11:39.416Z
Jaan Tallinn's 2022 Philanthropy Overview 2023-05-14T15:35:04.297Z
Jaan Tallinn's 2021 Philanthropy Overview 2022-04-28T09:55:50.789Z
Soares, Tallinn, and Yudkowsky discuss AGI cognition 2021-11-29T19:26:33.232Z
Jaan Tallinn's 2020 Philanthropy Overview 2021-04-27T16:22:08.120Z
Jaan Tallinn's Philanthropic Pledge 2020-02-22T10:03:29.946Z

Comments

Comment by jaan on Me & My Clone · 2024-07-19T06:01:56.136Z · LW · GW

correct! i’ve tried to use this symmetry argument (“how do you know you’re not the clone?”) over the years to explain the multiverse: https://youtu.be/29AgSo6KOtI?t=869

Comment by jaan on What percent of the sun would a Dyson Sphere cover? · 2024-07-05T06:28:13.089Z · LW · GW

interesting! still, aestivation seems to easily trump the black hole heat dumping, no?

Comment by jaan on What percent of the sun would a Dyson Sphere cover? · 2024-07-04T05:56:45.671Z · LW · GW

dyson spheres are for newbs; real men (and ASIs, i strongly suspect) starlift.

Comment by jaan on MIRI 2024 Communications Strategy · 2024-06-03T06:49:53.367Z · LW · GW

thank you for continuing to stretch the overton window! note that, luckily, the “off-switch” is now inside the window (though just barely so, and i hear that big tech is actively - and very myopically - lobbying against on-chip governance). i just got back from a UN AIAB meeting and our interim report does include the sentence “Develop and collectively maintain an emergency response capacity, off-switches and other stabilization measures” (while the rest of the report assumes that AI will not be a big deal any time soon).

Comment by jaan on Jaan Tallinn's 2023 Philanthropy Overview · 2024-06-03T06:19:27.240Z · LW · GW

thanks! basically, i think that the top priority should be to (quickly!) slow down the extinction race. if that’s successful, we’ll have time for more deliberate interventions — and the one you propose sounds confidently net positive to me! (with sign uncertainties being so common, confident net positive interventions are surprisingly rare).

Comment by jaan on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?" · 2024-05-20T15:47:21.053Z · LW · GW

AI takeover.

Comment by jaan on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?" · 2024-05-20T05:24:45.746Z · LW · GW

i might be confused about this but “witnessing a super-early universe” seems to support “a typical universe moment is not generating observer moments for your reference class”. but, yeah, anthropics is very confusing, so i’m not confident in this.

Comment by jaan on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?" · 2024-05-19T06:49:05.685Z · LW · GW

the three most convincing arguments i know for OP’s thesis are:

  1. atoms on earth are “close by” and thus much more valuable to a fast-running ASI than atoms elsewhere.

  2. (somewhat contrary to the previous argument), an ASI will be interested in quickly reaching the edge of the hubble volume, as that’s slipping behind the cosmic horizon — so it will starlift the sun for its initial energy budget.

  3. robin hanson’s “grabby aliens” argument: witnessing a super-young universe (as we do) is strong evidence against it remaining compatible with biological life for long.

that said, i’m also very interested in the counterarguments (so thanks for linking to paul’s comments!) — especially if they’d suggest actions we could take in preparation.

Comment by jaan on RSPs are pauses done right · 2023-10-15T10:20:28.213Z · LW · GW

i would love to see competing RSPs (or, better yet, RTDPs, as @Joe_Collman pointed out in a cousin comment).

Comment by jaan on RSPs are pauses done right · 2023-10-14T10:12:35.151Z · LW · GW

Sure, but I guess I would say that we're back to nebulous territory then—how much longer than six months? When if ever does the pause end?

i agree that, if hashed out, the end criteria may very well resemble RSPs. still, i would strongly advocate for a scaling moratorium until widely (internationally) acceptable RSPs are put in place.

I'd be very surprised if there was substantial x-risk from the next model generation.

i share the intuition that the current and next LLM generations are unlikely to be an xrisk. however, i don't trust my (or anyone else's) intuitions strongly enough to say that there's a less than 1% xrisk per 10x scaling of compute. in expectation, that's killing 80M existing people -- people who are unaware that this is happening to them right now.
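
for concreteness, a minimal sketch of the arithmetic behind the 80M figure (it assumes a world population of roughly 8 billion, which isn't stated above):

```python
# expected deaths from a 1% extinction risk, assuming ~8 billion people alive today
world_population = 8_000_000_000   # assumption, not from the comment above
xrisk_per_10x_scaling = 0.01       # the "less than 1%" bound, taken at 1%
expected_deaths = xrisk_per_10x_scaling * world_population
print(f"{expected_deaths:,.0f}")   # 80,000,000
```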

Comment by jaan on RSPs are pauses done right · 2023-10-14T06:13:42.117Z · LW · GW

the FLI letter asked for “pause for at least 6 months the training of AI systems more powerful than GPT-4” and i’m very much willing to defend that!

my own worry with RSPs is that they bake in (and legitimise) the assumptions that a) near term (eval-less) scaling poses trivial xrisk, and b) there is a substantial period during which models trigger evals but are existentially safe. you must have thought about them, so i’m curious what you think.

that said, thank you for the post, it’s a very valuable discussion to have! upvoted.

Comment by jaan on Sharing Information About Nonlinear · 2023-09-08T10:18:22.350Z · LW · GW

the werewolf vs villager strategy heuristic is brilliant. thank you!

Comment by jaan on Demystifying Born's rule · 2023-06-14T05:37:33.374Z · LW · GW

if i understand it correctly (i may not!), scott aaronson argues that hidden variable theories (such as bohmian / pilot wave) imply hypercomputation (which should count as evidence against them): https://www.scottaaronson.com/papers/npcomplete.pdf

Comment by jaan on Jaan Tallinn's 2022 Philanthropy Overview · 2023-05-15T07:17:28.522Z · LW · GW

interesting, i have had bewelltuned.com in my reading queue for a few years now -- i take your comment as an upvote!

myself, i swear by FDT (somewhat abstract, sure, but seems to work well) and freestyle dancing (the opposite of abstract, but also seems to work well). also coding (eg, just spent several days using pandas to combine and clean up my philanthropy data) -- code grounds one in reality.
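
for the curious, a minimal sketch of the kind of pandas cleanup i mean (the file and column names are made up for illustration):

```python
import pandas as pd

# combine per-year grant exports into one table (file names are illustrative)
files = ["grants_2020.csv", "grants_2021.csv", "grants_2022.csv"]
grants = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

# basic cleanup: drop exact duplicates, normalise grantee names, parse dates and amounts
grants = grants.drop_duplicates()
grants["grantee"] = grants["grantee"].str.strip().str.lower()
grants["date"] = pd.to_datetime(grants["date"], errors="coerce")
grants["amount_usd"] = pd.to_numeric(grants["amount_usd"], errors="coerce")

# summarise total giving per grantee per year
summary = grants.groupby([grants["date"].dt.year, "grantee"])["amount_usd"].sum()
print(summary)
```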

Comment by jaan on On the FLI Open Letter · 2023-03-31T19:51:26.820Z · LW · GW

having seen the “kitchen side” of the letter effort, i endorse almost all of zvi’s points here. one thing i’d add is that one of my hopes in urging the letter along was to create common knowledge that a lot of people (it looks like we’re going to get to 100k signatures) are afraid of the thing that comes after GPT-4. like i am.

thanks, everyone, who signed.

EDIT: basically this: https://twitter.com/andreas212nyc/status/1641795173972672512

Comment by jaan on We have to Upgrade · 2023-03-24T07:05:07.214Z · LW · GW

while it’s easy to agree with some abstract version of “upgrade” (as in try to channel AI capability gains into our ability to align them), the main bottleneck to physical upgrading is the speed difference between silicon and wet carbon: https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps

Comment by jaan on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2023-01-27T17:29:22.368Z · LW · GW

yup, i tried invoking church-turing once, too. worked about as well as you’d expect :)

Comment by jaan on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2023-01-27T08:13:29.373Z · LW · GW

looks great, thanks for doing this!

one question i get every once in a while and wish i had a canonical answer to (it can probably be worded more pithily) is:

"humans have always thought their minds are equivalent to whatever's their latest technological achievement -- eg, see the steam engines. computers are just the latest fad that we currently compare our minds to, so it's silly to think they somehow pose a threat. move on, nothing to see here."

note that the canonical answer has to work for people whose ontology does not include the concepts of "computation" or "simulation". they have seen increasingly universal smartphones and increasingly realistic computer games (things i've been gesturing at in my poor attempts to answer) but have no idea how they work.

Comment by jaan on We don’t trade with ants · 2023-01-12T08:22:51.770Z · LW · GW

the potentially enormous speed difference (https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps) will almost certainly be an effective communications barrier between humans and AI. there’s a wonderful scene of AI-vs-human negotiation in william hertling’s “A.I. apocalypse” that highlights this.

Comment by jaan on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-11T07:58:01.801Z · LW · GW

i agree that there's a 3rd alternative future that the post does not consider (unless i missed it!):

3. markets remain in an inadequate equilibrium until the end of times, because those participants (like myself!) who take short timelines seriously remain in too small a minority to "call the bluff".

see the big short for a dramatic depiction of such a situation.

great post otherwise. upvoted.

Comment by jaan on Decision theory does not imply that we get to have nice things · 2022-10-23T06:32:55.172Z · LW · GW

yeah, this seems to be the crux: what CEV will prescribe spending the altruistic (reciprocal cooperation) budget on. my intuition continues to insist that purchasing the original star systems from UFAIs is pretty high on the shopping list, but i can see arguments (including a few you gave above) against that.

oh, btw, one sad failure mode would be getting clipped by a proto-UFAI that’s too stupid to realise it’s in a multi-agent environment or something.

ETA: and, tbc, just like interstice points out below, my “us/me” label casts a wider net than “us in this particular everett branch where things look particularly bleak”.

Comment by jaan on Decision theory does not imply that we get to have nice things · 2022-10-22T17:25:38.655Z · LW · GW

roger. i think (and my model of you agrees) that this discussion bottoms out in speculating what CEV (or equivalent) would prescribe.

my own intuition (as somewhat supported by the moral progress/moral circle expansion in our culture) is that it will have a nonzero component of “try to help out fellow humans/biologicals/evolved minds/conscious minds/agents with diminishing utility functions if not too expensive, and especially if they would do the same in your position”.

Comment by jaan on Decision theory does not imply that we get to have nice things · 2022-10-22T12:39:49.166Z · LW · GW

yeah, as far as i can currently tell (and influence), we’re totally going to use a sizeable fraction of FAI-worlds to help out the less fortunate ones. or perhaps implement a more general strategy, like mutual insurance pact of evolved minds (MIPEM).

this, indeed, assumes that human CEV has diminishing returns to resources, but (unlike nate in the sibling comment!) i’d be shocked if that wasn’t true.

Comment by jaan on Jaan Tallinn's 2021 Philanthropy Overview · 2022-04-29T11:26:25.890Z · LW · GW

sure, this is always a consideration. i'd even claim that the "wait.. what about the negative side effects?" question is a potential expected value spoiler for pretty much all longtermist interventions (because they often aim for effects that are multiple causal steps down the road), and as such not really specific to software.

Comment by jaan on Create a prediction market in two minutes on Manifold Markets · 2022-02-10T07:13:12.382Z · LW · GW

great idea! since my metamed days i’ve been wishing there was a prediction market for personal medical outcomes — it feels like manifold’s mechanism might be a good fit for this (eg, at the extreme end, consider the “will this be my last market if i undertake surgery X at Y?” question). should you decide to develop such an aspect at some point, i’d be very interested in supporting/subsidising it.

Comment by jaan on Biology-Inspired AGI Timelines: The Trick That Never Works · 2021-12-27T08:04:28.507Z · LW · GW

actually, the premise of david brin’s “existence” is a close match to moravec’s paragraph (not a coincidence, i bet, given that david hung around similar circles).

Comment by jaan on [Linkpost] Chinese government's guidelines on AI · 2021-12-11T05:50:05.823Z · LW · GW

confirmed. as far as i can tell (i’ve talked to him for about 2h in total) yi really seems to care, and i’m really impressed by his ability to influence such official documents.

Comment by jaan on Soares, Tallinn, and Yudkowsky discuss AGI cognition · 2021-11-30T13:34:14.007Z · LW · GW

indeed, i even gave a talk almost a decade ago about the evolution:humans :: humans:AGI symmetry (see below)!

what confuses me though is that the "is a general reasoner" and "can support cultural evolution" properties seemed to emerge pretty much simultaneously in humans -- a coincidence that requires its own explanation (or dissolution). furthermore, eliezer seems to think that the former property is much more important / discontinuity-causing than the latter. and, indeed, the outsized progress made by individual human reasoners (scientists/inventors/etc.) seems to support such a view.

Comment by jaan on How To Get Into Independent Research On Alignment/Agency · 2021-11-19T15:49:44.111Z · LW · GW

amazing post! scaling up the community of independent alignment researchers sounds like one of the most robust ways to convert money into relevant insights.

Comment by jaan on Can you control the past? · 2021-08-31T09:08:09.641Z · LW · GW

indeed they are now. retrocausality in action? :)

Comment by jaan on Jaan Tallinn's 2020 Philanthropy Overview · 2021-04-29T08:46:24.269Z · LW · GW

well, i've always considered human life extension less important than "civilisation's life extension" (ie, xrisk reduction). still, they're both very important causes, and i'm happy to support both, especially given that they don't compete much for talent. as for the LRI specifically, i believe they simply haven't applied to more recent SFF grant rounds.