Posts

What does it look like for AI to significantly improve human coordination, before superintelligence? 2024-01-15T19:22:50.079Z
How do you feel about LessWrong these days? [Open feedback thread] 2023-12-05T20:54:42.317Z
Vote on worthwhile OpenAI topics to discuss 2023-11-21T00:03:03.898Z
New LessWrong feature: Dialogue Matching 2023-11-16T21:27:16.763Z
Does davidad's uploading moonshot work? 2023-11-03T02:21:51.720Z
Holly Elmore and Rob Miles dialogue on AI Safety Advocacy 2023-10-20T21:04:32.645Z
How to partition teams to move fast? Debating "low-dimensional cuts" 2023-10-13T21:43:53.067Z
Thomas Kwa's MIRI research experience 2023-10-02T16:42:37.886Z
Feedback-loops, Deliberate Practice, and Transfer Learning 2023-09-07T01:57:33.066Z
A Golden Age of Building? Excerpts and lessons from Empire State, Pentagon, Skunk Works and SpaceX 2023-09-01T04:03:41.067Z
Consider applying to a 2-week alignment project with former GitHub CEO 2023-04-04T06:20:49.532Z
How I buy things when Lightcone wants them fast 2022-09-26T05:02:09.003Z
How my team at Lightcone sometimes gets stuff done 2022-09-19T05:47:06.787Z
($1000 bounty) How effective are marginal vaccine doses against the covid delta variant? 2021-07-22T01:26:26.117Z
What other peptide vaccines might it be useful to make? 2021-03-03T06:25:40.130Z
Credence polls for 26 claims from the 2019 Review 2021-01-09T07:13:24.166Z
Weekend Review Bash: Guided review writing, Forecasting and co-working, in EU and US times 2021-01-08T21:04:12.332Z
Thread for making 2019 Review accountability commitments 2020-12-18T05:07:25.533Z
Which sources do you trust the most on nutrition advice for exercise? 2020-12-16T03:22:40.088Z
The LessWrong 2018 Book is Available for Pre-order 2020-12-01T08:00:00.000Z
Why is there a "clogged drainpipe" effect in idea generation? 2020-11-20T19:08:08.461Z
Final Babble Challenge (for now): 100 ways to light a candle 2020-11-12T23:17:07.790Z
Babble Challenge: 50 thoughts on stable, cooperative institutions 2020-11-05T06:38:38.997Z
Babble challenge: 50 consequences of intelligent ant colonies 2020-10-29T07:21:33.379Z
Babble challenge: 50 ways of solving a problem in your life 2020-10-22T04:49:42.661Z
What are some beautiful, rationalist artworks? 2020-10-17T06:32:43.142Z
Babble challenge: 50 ways of hiding Einstein's pen for fifty years 2020-10-15T07:23:48.541Z
Babble challenge: 50 ways to escape a locked room 2020-10-08T05:13:06.985Z
Babble challenge: 50 ways of sending something to the moon 2020-10-01T04:20:24.016Z
Sunday August 16, 12pm (PDT) — talks by Ozzie Gooen, habryka, Ben Pace 2020-08-14T18:32:35.378Z
Sunday August 9, 1pm (PDT) — talks by elityre, jacobjacob, Ruby 2020-08-06T22:50:21.550Z
Sunday August 2, 12pm (PDT) — talks by jimrandomh, johnswentworth, Daniel Filan, Jacobian 2020-07-30T23:55:44.712Z
$1000 bounty for OpenAI to show whether GPT3 was "deliberately" pretending to be stupider than it is 2020-07-21T18:42:44.704Z
Lessons on AI Takeover from the conquistadors 2020-07-17T22:35:32.265Z
Meta-preferences are weird 2020-07-16T23:03:40.226Z
Sunday July 19, 1pm (PDT) — talks by Raemon, ricraz, mr-hire, Jameson Quinn 2020-07-16T20:04:37.974Z
Mazes and Duality 2020-07-14T19:54:42.479Z
Sunday July 12 — talks by Scott Garrabrant, Alexflint, alexei, Stuart_Armstrong 2020-07-08T00:27:57.876Z
Public Positions and Private Guts 2020-06-26T23:00:52.838Z
Missing dog reasoning 2020-06-26T21:30:00.491Z
Sunday June 28 – talks by johnswentworth, Daniel kokotajlo, Charlie Steiner, TurnTrout 2020-06-26T19:13:23.754Z
DontDoxScottAlexander.com - A Petition 2020-06-25T05:44:50.050Z
Sunday June 21st – talks by Abram Demski, alkjash, orthonormal, eukaryote, Vaniver 2020-06-18T20:10:38.978Z
FHI paper on COVID-19 government countermeasures 2020-06-04T21:06:51.287Z
[Job ad] Lead an ambitious COVID-19 forecasting project [Deadline extended: June 10th] 2020-05-27T16:38:04.084Z
Crisis and opportunity during coronavirus 2020-03-12T20:20:55.703Z
[Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting 2020-02-02T12:39:06.563Z
[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges 2019-12-19T15:50:33.412Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T15:49:45.901Z
Running Effective Structured Forecasting Sessions 2019-09-06T21:30:25.829Z

Comments

Comment by jacobjacob on Express interest in an "FHI of the West" · 2024-04-19T08:45:13.435Z · LW · GW

Noting that a nicer name that's just waiting to be had, in this context, is "Future of the Lightcone Institute" :) 

Comment by jacobjacob on Express interest in an "FHI of the West" · 2024-04-18T09:17:00.726Z · LW · GW

Two notes: 

  1. I think the title is a somewhat obscure pun referencing the old saying that Stanford was the "Harvard of the West". If one is not familiar with that saying, I guess some of the nuance is lost in the choice of term. (I personally had never heard that saying before recently, and I'm not even quite sure I'm referencing the right "X of the West" pun)
  2. habryka did have a call with Nick Bostrom a few weeks back, to discuss his idea for an "FHI of the West", and I'm quite confident he referred to it with that phrase on the call, too. Far as I'm aware, Nick didn't particularly react to it with more than a bit of humor.

Comment by jacobjacob on Does anyone know good essays on how different AI timelines will affect asset prices? · 2024-03-06T20:17:55.814Z · LW · GW

See this: https://www.lesswrong.com/posts/CTBta9i8sav7tjC2r/how-to-hopefully-ethically-make-money-off-of-agi

Comment by jacobjacob on Increasing IQ is trivial · 2024-03-02T04:48:05.944Z · LW · GW

Can you CC me too? 

I work from the same office as John, and the location also happens to have dozens of LessWrong readers working there on a regular basis. We could probably set up an experiment here with many willing volunteers; and I'm interested in helping to make it happen (if it continues to seem promising after thinking more about it). 

Comment by jacobjacob on Something to Protect · 2024-02-05T19:24:12.279Z · LW · GW

[Mod note: I edited out your email from the comment, to save you from getting spam email and similar. If you really want it there, feel free to add it back! :) ]

Comment by jacobjacob on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-05T19:08:42.659Z · LW · GW

Mod here: most of the team were away over the weekend so we just didn't get around to processing this for personal vs frontpage yet. (All posts start as personal until approved to frontpage.) About to make a decision in this morning's moderation review session, as we do for all other new posts. 

Comment by jacobjacob on Deliberate Dysentery: Q&A about Human Challenge Trials · 2024-01-22T15:09:13.742Z · LW · GW

Jake himself has participated in both Zika and Shigella challenge trials. 

Your civilisation thanks you 🫡

Comment by jacobjacob on Announcing the Double Crux Bot · 2024-01-10T00:37:05.416Z · LW · GW

Cool idea and congrats on shipping! Installed it now and am trying it. One piece of user feedback: I found the having-to-wait for replies a bit frictiony. Maybe you could stream responses in chunks? (I did this for a GPT-to-Slack app once. You just can't do it letter-by-letter, because you'll get rate limited.)
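Roughly the shape of what I mean, as a sketch (assuming the `@slack/web-api` and `openai` Node packages; the function name and the ~20-chunk buffer are just illustrative): post one placeholder message, then buffer the streamed tokens and edit that message every so often, which stays comfortably under Slack's rate limits.

```typescript
import { WebClient } from "@slack/web-api";
import OpenAI from "openai";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);
const openai = new OpenAI();

// Stream a GPT reply into Slack by editing one message in chunks,
// rather than letter-by-letter (which would hit Slack's rate limits).
async function streamReplyToSlack(channel: string, prompt: string) {
  // Post a placeholder message we can keep editing as tokens arrive
  const posted = await slack.chat.postMessage({ channel, text: "…" });
  const ts = posted.ts as string;

  const stream = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });

  let text = "";
  let sinceLastUpdate = 0;
  for await (const part of stream) {
    text += part.choices[0]?.delta?.content ?? "";
    sinceLastUpdate += 1;
    // Flush roughly every 20 streamed chunks, not on every token
    if (sinceLastUpdate >= 20) {
      await slack.chat.update({ channel, ts, text });
      sinceLastUpdate = 0;
    }
  }
  // Final update with the complete reply
  await slack.chat.update({ channel, ts, text });
}
```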

Comment by jacobjacob on Benchmark Study #3: HellaSwag (Task, MCQ) · 2024-01-07T20:33:38.294Z · LW · GW

If that's your belief, I think you should edit in a disclaimer to your TL;DR section, like "Gemini and GPT-4 authors report results close to or matching human performance at 95%, though I don't trust their methodology". 

Also, the numbers aren't "non-provable": anyone could just replicate them with the GPT-4 API! (Modulo dataset contamination considerations.)
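To gesture at what "just replicate them" could look like (a rough sketch only, not the papers' actual few-shot setup; the prompt format and answer parsing here are illustrative):

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Ask GPT-4 to pick the most plausible continuation of a HellaSwag item.
// Returns true if it picks the labelled ending.
async function scoreHellaSwagItem(
  context: string,
  endings: string[],
  label: number // index of the correct ending
): Promise<boolean> {
  const options = endings.map((e, i) => `${i}: ${e}`).join("\n");
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    temperature: 0,
    messages: [
      {
        role: "user",
        content:
          `Which ending most plausibly continues the passage? ` +
          `Answer with the number only.\n\n${context}\n\n${options}`,
      },
    ],
  });
  const answer = completion.choices[0]?.message?.content?.trim() ?? "";
  return parseInt(answer, 10) === label;
}
```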

Comment by jacobjacob on Benchmark Study #3: HellaSwag (Task, MCQ) · 2024-01-07T17:49:58.587Z · LW · GW

Humans achieve over 95% accuracy, while no model surpasses 50% accuracy. (2019)


A series on benchmarks does seem very interesting and useful -- but you really gotta report more recent model results than from 2019!! GPT-4 reportedly scores 95.3% on HellaSwag, making that initial claim in the post very misleading. 

[Image: a Google Gemini benchmark performance chart, provided by Google.]

Comment by jacobjacob on Announcing Dialogues · 2024-01-07T17:23:04.698Z · LW · GW

Ah! I investigated and realised what the bug is. (Currently, only the dialogue's main author can archive it, not the other authors.) Will fix! 

Comment by jacobjacob on Announcing Dialogues · 2024-01-07T01:30:47.095Z · LW · GW

You can go to your profile page and press the "Archive" icon, which appears when you hover to the right of a dialogue. 

Comment by jacobjacob on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-22T05:34:33.411Z · LW · GW

Yeah, I'm interested in features in this space!

Another idea is to implement an algorithm similar to Twitter's Community Notes: identify comments that have gotten upvotes from people who usually disagree with each other, and highlight those. 
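As a sketch of the kind of thing I mean (illustrative only; the types and the 30% threshold are made up, and a real version would want something closer to Community Notes' matrix factorization): compute how often pairs of users vote the same way, then flag comments that got upvotes from pairs who usually disagree.

```typescript
type Vote = { userId: string; commentId: string; value: 1 | -1 };

// Fraction of comments two users both voted on where they voted the same way.
function agreementRate(a: string, b: string, votes: Vote[]): number {
  const byComment = new Map<string, Map<string, number>>();
  for (const v of votes) {
    if (!byComment.has(v.commentId)) byComment.set(v.commentId, new Map());
    byComment.get(v.commentId)!.set(v.userId, v.value);
  }
  let shared = 0;
  let agreed = 0;
  for (const users of byComment.values()) {
    if (users.has(a) && users.has(b)) {
      shared += 1;
      if (users.get(a) === users.get(b)) agreed += 1;
    }
  }
  return shared === 0 ? 0.5 : agreed / shared; // neutral prior if no overlap
}

// Comments upvoted by at least one pair of users who usually disagree
// (shared-vote agreement below 30%).
function bridgingComments(votes: Vote[]): string[] {
  const upvotersByComment = new Map<string, string[]>();
  for (const v of votes) {
    if (v.value !== 1) continue;
    const list = upvotersByComment.get(v.commentId) ?? [];
    list.push(v.userId);
    upvotersByComment.set(v.commentId, list);
  }
  const result: string[] = [];
  for (const [commentId, upvoters] of upvotersByComment) {
    const hasDisagreeingPair = upvoters.some((a, i) =>
      upvoters.slice(i + 1).some((b) => agreementRate(a, b, votes) < 0.3)
    );
    if (hasDisagreeingPair) result.push(commentId);
  }
  return result;
}
```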

Comment by jacobjacob on OpenAI: Preparedness framework · 2023-12-18T20:42:58.608Z · LW · GW

Oops, somehow didn't see there was actually a market baked into your question.

I'd also be interested in "Will there be a publicly revealed instance of a pause in either deployment or development, as a result of a model scoring High or Critical on a scorecard, by Date X?"

Comment by jacobjacob on OpenAI: Preparedness framework · 2023-12-18T20:30:59.243Z · LW · GW

Made a Manifold market

Might make more later, and would welcome others to do the same! (I think one could ask more interesting questions than the one I asked above.)

Comment by jacobjacob on Mapping the semantic void: Strange goings-on in GPT embedding spaces · 2023-12-15T18:57:22.120Z · LW · GW

Heads up, we support LaTeX :)

Use Ctrl-4 to open the LaTeX prompt (or Cmd-4 if you're on a Mac). Open a centred LaTeX popup using Ctrl-M (or Cmd-M). If you've written some maths in normal writing and want to turn it into LaTeX, highlight the text and hit the LaTeX editor button, and it will turn straight into LaTeX.

https://www.lesswrong.com/posts/xWrihbjp2a46KBTDe/editor-mini-guide

Comment by jacobjacob on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-13T02:59:37.016Z · LW · GW

I feel pretty frustrated at how rarely people actually bet or make quantitative predictions about existential risk from AI.

Without commenting on how often people do or don't bet, I think overall betting is great and I'd love to see more of it! 

I'm also excited by how much of it I've seen since Manifold started gaining traction. So I'd like to give a shout out to LessWrong users who are active on Manifold, in particular on AI questions. Some I've seen are:

Rob Bensinger
Jonas Vollmer
Arthur Conmy
Jaime Sevilla Molina
Isaac King
Eliezer Yudkowsky
Noa Nabeshima
Mikhail Samin
Daniel Filan
Daniel Kokotajlo
Zvi
Eli Tyre
Ben Pace
Allison Duettmann
Matthew Barnett
Peter Barnett
Joe Brenton
Austin Chen
lc

Good job everyone for betting on your beliefs :) 

There are definitely more folks than this: feel free to mention others you want to give kudos to in the comments (though please don't dox anyone whose name on either platform is pseudonymous and doesn't match the other). 

Comment by jacobjacob on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-11T19:03:49.257Z · LW · GW

LLM summaries aren't yet non-hallucinatory enough that we've felt comfortable putting them on the site, but we have run some internal experiments on this. 

Comment by jacobjacob on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-11T18:58:58.138Z · LW · GW

Yep. Will set myself a reminder for 6 months from now!

Comment by jacobjacob on Open Thread – Winter 2023/2024 · 2023-12-08T00:16:18.972Z · LW · GW

They get a list of topics I've written/commented on, but so far as I can see I don't have any way to see that list

Yeah, users can't currently see that list for themselves (unless of course you create a new account, upvote yourself, and then look at the matching page through that account!). 

However, the SQL for this is actually open source, in the function getUserTopTags: https://github.com/ForumMagnum/ForumMagnum/blob/master/packages/lesswrong/server/repos/TagsRepo.ts

What we show is "the tags a user commented on in the last 3 years, sorted by comment count, and excluding a set of tags that I deemed less interesting to show to other users, for example because they were too general (World Modeling, ...), too niche (Has Diagram, ...) or too political (Drama, LW Moderation, ...)."
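For the curious, the query is roughly of this shape (my illustrative paraphrase, not the actual ForumMagnum code; table and column names here are guesses, so check TagsRepo.ts for the real thing):

```typescript
// Rough paraphrase of the idea behind getUserTopTags: tags the user
// commented on in the last 3 years, sorted by comment count, minus an
// excluded list of too-general / too-niche / too-political tags.
const EXCLUDED_TAG_NAMES = ["World Modeling", "Has Diagram", "Drama", "LW Moderation"];

const userTopTagsSql = `
  SELECT t."name", COUNT(*) AS "commentCount"
  FROM "Comments" c
  JOIN "Posts" p ON p."_id" = c."postId"
  JOIN "TagRels" tr ON tr."postId" = p."_id"
  JOIN "Tags" t ON t."_id" = tr."tagId"
  WHERE c."userId" = $1
    AND c."postedAt" > NOW() - INTERVAL '3 years'
    AND t."name" <> ALL($2)
  GROUP BY t."name"
  ORDER BY "commentCount" DESC
  LIMIT 15
`;
```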

Comment by jacobjacob on (Report) Evaluating Taiwan's Tactics to Safeguard its Semiconductor Assets Against a Chinese Invasion · 2023-12-07T19:19:42.404Z · LW · GW

(Sidenote, but you probably want to fix it: https://bristolaisafety.org/ appears to be down, as of the posting of this message)

Comment by jacobjacob on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-07T03:23:08.075Z · LW · GW

I use Cursor, Copilot, sometimes GPT-4 in the chat, and also Hex.tech's built-in SQL shoggoth. 

I would say the combination of all those helps a huge amount, and I think it has been key in allowing me to go from pre-junior to junior dev in the last few months. (That is, from not being able to make any site changes without painstaking handholding, to leading and building a lot of the Dialogue Matching feature and associated stuff. I also had a lot of help from teammates, but less in a "they need to carry things over the finish line for me" way, and more in a "I'm able to build features of this complexity, and they help out as collaborators" way.)

PR review and advice from senior devs on the team has also been key, and is much appreciated.

Comment by jacobjacob on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-05T22:39:55.408Z · LW · GW

Yeah, that reminds me of this thread https://www.lesswrong.com/posts/P32AuYu9MqM2ejKKY/so-geez-there-s-a-lot-of-ai-content-these-days

Comment by jacobjacob on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T19:44:32.066Z · LW · GW

In the poll most people (31) disagreed with the claim John is defending here, but I'm tagging the additional few (3) who agreed with it @Charlie Steiner @Oliver Sourbut @Thane Ruthenis 

Interested to hear your guys' reasons, in addition to John's above! 

Comment by jacobjacob on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T19:03:28.406Z · LW · GW

One of my takeaways from how the negotiations went is that sama seems extremely concerned with securing access to lots of compute, and that the person who ultimately got their way was the person who sat on the compute.

The "sama running Microsoft" idea seems a bit magical to me. Surely the realpolitik update here should be: power lies in the hands of those with legal voting power, and those controlling the compute. Sama has neither of those things at Microsoft. If he can be fired by a board most people have never heard of, then for sure he can get fired by the CEO of Microsoft. 

People seem to think he is somehow a linchpin of building AGI. Remind me... how many of OpenAI's key papers did he coauthor? Paul Graham says if you dropped him onto an island of cannibals he would be king in 5 years. Seems plausible. Paul Graham did not say he would've figured out how to engineer a raft good enough to get him out of there. If there were any Manifold markets on "Sama is the linchpin to building AGI", I would short them for sure. 

We already have strong suspicion from the open letter vote counts that there's a personality cult around Sama at OpenAI (no democratic election ever ends with a vote of 97% in favor). It also makes sense that people in the LessWrong sphere would view AGI as the central thing to the future of the world and on everyone's minds, and thus fall into the trap of also viewing Sama as the most important thing at Microsoft. (Question to ask yourself about such a belief: who does it benefit? And is that beneficiary also a powerful agent deliberately attempting to shape narratives to their own benefit?) 

Satya Nadella might have a very different perspective than that, on what's important for Microsoft and who's running it.

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T00:07:19.469Z · LW · GW

It would be a promising move, to reduce existential risk, for Anthropic to take over what will remain of OpenAI and consolidate efforts into a single project. 

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T00:04:25.779Z · LW · GW

EAs need to aggressively recruit and fund additional ambitious Sams, to ensure there's one to sacrifice for Samsgiving November 2024. 

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T00:00:30.045Z · LW · GW

New leadership should shut down OpenAI. 

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-20T23:58:45.742Z · LW · GW

If there was actually a spooky capabilities advance that convinced the board that drastic action was needed, then the board's actions were on net justified, regardless of what other dynamics were at play and whether cooperative principles were followed.

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-20T23:53:54.954Z · LW · GW

Open-ended: A dialogue between an OpenAI employee who signed the open letter, and someone outside opposed to the open letter, about their reasoning and the options. 

(Up/down-vote if you're interested in reading discussion of this. React paperclip if you have an opinion and would be up for dialoguing)

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-20T23:49:54.238Z · LW · GW

If the board neither abided by cooperative principles in the firing nor acted on substantial evidence warranting the firing in line with the charter, and was nonetheless largely EA-motivated, then EA should be disavowed and dismantled. 

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-20T23:42:17.200Z · LW · GW

The events of the OpenAI board CEO-ousting on net reduced existential risk from AGI.

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-20T21:50:19.517Z · LW · GW

Open-ended: If >50% of employees end up staying at OpenAI: how, if at all, should OpenAI change its structure and direction going forwards? 

(Up/down-vote if you're interested in reading discussion of this. React paperclip if you have an opinion and would be up for discussing)

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-20T21:48:37.309Z · LW · GW

Open-ended: If >90% of employees leave OpenAI: what plan should Emmett Shear set for OpenAI going forwards? 

(Up/down-vote if you're interested in reading discussion of this. React paperclip if you have an opinion and would be up for discussing)

Comment by jacobjacob on Vote on worthwhile OpenAI topics to discuss · 2023-11-20T21:44:40.911Z · LW · GW

It is important that the board release another public statement explaining their actions, and providing any key pieces of evidence. 

Comment by jacobjacob on New LessWrong feature: Dialogue Matching · 2023-11-18T03:19:48.429Z · LW · GW

Yeah I'm gonna ship a fix to that now. No more monologues!  

Comment by jacobjacob on New LessWrong feature: Dialogue Matching · 2023-11-17T20:29:29.071Z · LW · GW

(If others want this too, upvote @faul_sname's comment as a vote! It would be easy to build; most of my uncertainty is in how it would change the experience.)

Comment by jacobjacob on New LessWrong feature: Dialogue Matching · 2023-11-17T18:48:56.231Z · LW · GW

Those are some interesting papers, thanks for linking. 

In the case at hand, I do disagree with your conclusion though. 

In this situation, the most a user could find out is who checked them in dialogues. They wouldn't be able to find any data about checks not concerning themselves. 

If they happened to be a capable enough dev and were willing to go through the schleps to obtain that information, then, well... we're a small team and the world is on fire, and I don't think we should really be prioritising making Dialogue Matching robust to this kind of adversarial cyber threat, given the limited scope and sensitivity of the information involved! Folks with those resources could probably uncover all kinds of private vote data already, if they wanted to.

Comment by jacobjacob on New LessWrong feature: Dialogue Matching · 2023-11-16T20:16:14.244Z · LW · GW

On data privacy

Here are some quick notes on how I think of LessWrong user data. 

Any data that's already public -- reacts, tags, comments, etc -- is fair game. It just seems nice to do some data science and help folks uncover interesting patterns here. 

On the other side of the spectrum, me and the team generally never look at users' up and downvotes, except in cases where there's strong enough suspicion of malicious voting behavior (like targeted mass downvoting). 

Then there's stuff in the middle. Like, what if we tell a user "you and this user frequently upvote each other"? That particular example currently feels like it reveals too much private data. As another example, the other day a teammate and I discussed whether, on the matchmaking page, we could show people recently active users who had already checked them, to make it more likely they'd find a match. We tentatively postulated it would be fine to do this as long as seeing a name on your match page gave no more than something like a 5:1 update about that person having checked you. We sketched out some algorithms to implement this that would also be stable under repeated refreshing and similar. (We haven't implemented the algorithm nor the feature yet.)
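To give a flavour of the kind of algorithm we sketched (this is my illustrative reconstruction, not anything we've built; the names and thresholds are made up): include everyone who checked you, plus enough randomly-included decoys that seeing a name is at most a 5:1 likelihood-ratio update, with the randomness seeded so the set is stable across refreshes.

```typescript
import { createHash } from "crypto";

// Deterministic "coin flip" so the same decoys appear on every refresh.
function stableIncludeProbability(viewerId: string, candidateId: string): number {
  const hash = createHash("sha256").update(`${viewerId}:${candidateId}`).digest();
  return hash.readUInt32BE(0) / 0xffffffff; // roughly uniform in [0, 1]
}

// Show everyone who checked the viewer, plus each other recently-active
// user with probability 1/maxUpdateRatio. Then seeing a name means
// P(shown | checked) / P(shown | not checked) <= maxUpdateRatio,
// i.e. at most a 5:1 update by default.
function usersToShow(
  viewerId: string,
  recentlyActive: string[],
  checkedViewer: Set<string>,
  maxUpdateRatio = 5
): string[] {
  const decoyProbability = 1 / maxUpdateRatio;
  return recentlyActive.filter(
    (id) =>
      checkedViewer.has(id) ||
      stableIncludeProbability(viewerId, id) < decoyProbability
  );
}
```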

So my general take on features "in the middle" is, for now, to treat them on a case-by-case basis, with some principles like "try hard to avoid revealing anything that's not already public, and if doing so, try to leave plausible deniability bounded by some number of leaked bits, only reveal metadata or aggregate data, reveal it only to one other user or a small set of users, think about whether this is actually a piece of info that seems high or low stakes, and see if you can get away with just using data from people who opted in to revealing it". 

Comment by jacobjacob on 'Theories of Values' and 'Theories of Agents': confusions, musings and desiderata · 2023-11-15T20:56:53.283Z · LW · GW

I can't quite tell how that's different from embeddedness. (Also if you have links to other places it's explained feel free to share them.)

Comment by jacobjacob on 'Theories of Values' and 'Theories of Agents': confusions, musings and desiderata · 2023-11-15T19:55:04.135Z · LW · GW

bounded, embedded, enactive, nested.

I know about boundedness and embeddedness, and I guess nestedness is about hierarchical agents. 

But what's enactive?

Comment by jacobjacob on A bet on critical periods in neural networks · 2023-11-06T23:39:36.776Z · LW · GW

Comment by jacobjacob on Does davidad's uploading moonshot work? · 2023-11-04T03:16:54.723Z · LW · GW

Space flight doesn't involve a 100 percent chance of physical death

I think historically folks have gone to war or on other kinds of missions that had death rates of like, at least, 50%. And folks, I dunno, climbed Mount Everest, or figured out how to fly planes before they figured out how to make them safe. 

Some of them were for sure fanatics or lunatics. But I guess I also think there are just great, sane, and in many ways whole, people, who care about things greater than their own personal life and death, and are psychologically constituted to be willing to pursue those greater things. 

Comment by jacobjacob on Does davidad's uploading moonshot work? · 2023-11-04T03:08:36.281Z · LW · GW

Hm, here's a test case: 

GPT-4 can't solve IMO problems. Now take an IMO gold medalist about to walk into their exam, and upload them in that state into an Em without synaptic plasticity. Would the resulting upload still be able to solve the exam at a similar level to the full human?

I don't have a strong belief, but my intuition is that they would. I recall once chatting to @Neel Nanda about how he solved problems (as he is in fact an IMO gold medalist), and him describing something that to me sounded like "introspecting really hard and having the answers just suddenly 'appear'"... (though hopefully he can correct that butchered impression).

Do you think such a student Em would or would not perform similarly well in the exam? 

Comment by jacobjacob on Does davidad's uploading moonshot work? · 2023-11-04T02:54:18.354Z · LW · GW

I have an important appointment this weekend that will take up most of my time, so I hope to come back to this afterwards, but wanted to quickly note: 

but definitely are not back propagation.

Why? 

Last time I looked into this, 6 years ago, it seemed like an open question, and it could plausibly be backprop or at least close enough: https://www.lesswrong.com/posts/QWyYcjrXASQuRHqC5/brains-and-backprop-a-key-timeline-crux

Three years ago, Daniel Kokotajlo shared some further updates in that direction: https://www.lesswrong.com/posts/QWyYcjrXASQuRHqC5/brains-and-backprop-a-key-timeline-crux?commentId=RvZAPmy6KStmzidPF

Comment by jacobjacob on Does davidad's uploading moonshot work? · 2023-11-03T20:56:58.667Z · LW · GW

Separately, I'm kind of awed by the idea of an "uploadonaut": the best and brightest of this young civilisation, undergoing extensive mental and research training to have their minds able to deal with what they might experience post upload, and then courageously setting out on a dangerous mission of crucial importance for humanity.

(I tried generating some DALL-E 1960s-style NASA recruitment posters for this, but they didn't come out great. Might try more later.)

Comment by jacobjacob on Does davidad's uploading moonshot work? · 2023-11-03T20:44:11.878Z · LW · GW

Noting that I gave this a weak downvote as I found this comment to be stating many strong claims without correspondingly strong (or sometimes not really any) arguments. I am still interested in the reasons you believe these things though (for example, a Fermi estimate of inference cost at runtime). 

Comment by jacobjacob on Does davidad's uploading moonshot work? · 2023-11-03T20:36:51.504Z · LW · GW

I don't think you're going to get a lot of volunteers for destructive uploading (or actually even for nondestructive uploading). Especially not if the upload is going to be run with limited fidelity. Anybody who does volunteer is probably deeply atypical and potentially a dangerous fanatic.

Seems falsified by the existence of astronauts? 

Comment by jacobjacob on [deleted post] 2023-10-25T18:33:09.675Z

https://manifold.markets/ZviMowshowitz/will-google-have-the-best-llm-by-eo?r=SmFjb2JMYWdlcnJvcw

Comment by jacobjacob on Anthropic, Google, Microsoft & OpenAI announce Executive Director of the Frontier Model Forum & over $10 million for a new AI Safety Fund · 2023-10-25T18:09:30.665Z · LW · GW

Reference class: I'm old enough to remember the founding of the Partnership on AI. My sense from back in the day was that some (innocently misguided) folks wanted in their hearts for it to be an alignment collaboration vehicle. But I think it's decayed into some kind of epiphenomenal social justice thingy. (And for some reason they have 30 staff. I wonder what they all do all day.)

I hope Frontier Model Forum can be something better, but my hopes ain't my betting odds.