Posts

Book Review (mini): Co-Intelligence by Ethan Mollick 2024-04-03T17:33:52.097Z
Announcing New Beginner-friendly Book on AI Safety and Risk 2023-11-25T15:57:08.078Z
A Bat and Ball made me Sad 2023-09-11T13:48:24.686Z
Seeking Input to AI Safety Book for non-technical audience 2023-08-10T17:58:29.618Z
Should AI systems have to identify themselves? 2022-12-31T02:57:11.429Z
Has anyone increased their AGI timelines? 2022-11-06T00:03:11.756Z
FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community 2022-06-15T18:08:42.754Z

Comments

Comment by Darren McKee on A Bat and Ball made me Sad · 2023-09-11T22:42:36.758Z · LW · GW

I guess it depends on what your priors already were, but 23% is far higher than the usual 'lizardman' constant, so one update might be to greatly expand your estimate of how much error is associated with any survey. If the numbers are that high, it gets harder to understand many things (unless more rigorous survey methods are used, etc.)

Comment by Darren McKee on A Bat and Ball made me Sad · 2023-09-11T22:40:07.402Z · LW · GW

The boldface was part of the problem presented to participants. It is a famous question because so many people get it wrong.

Comment by Darren McKee on Seeking Input to AI Safety Book for non-technical audience · 2023-08-11T13:09:54.412Z · LW · GW

I can loop you in later this month

Comment by Darren McKee on Seeking Input to AI Safety Book for non-technical audience · 2023-08-11T13:08:18.497Z · LW · GW

They sure are

Comment by Darren McKee on Why was the AI Alignment community so unprepared for this moment? · 2023-07-16T01:59:51.298Z · LW · GW

An excellent question, and one I sympathize with. While it is true that reasonable public statements have been coordinated and have gotten widespread coverage, in general the community has been scrambling to explain different aspects of the AI safety/risk case in a way that diverse groups would understand. The goal, largely, is to have it all in one place, in an accessible manner (instead of spread across fora, blog posts, and podcasts).

I think it was an underestimation of timelines and a related underfunding of efforts. I started working on a non-technical AI safety book last June, and I also think I underestimated timelines. I hope to have the book out in October because the issue is urgent, but it sure would have been better if I had started earlier.

Given the stakes of all the things we say we care about, I think a large donor should have spent up to $1M several years ago to support the writing of 5-10 accessible books on AI safety, biorisk, etc. (at least 2 per topic), so there would be something on hand that could a) be discussed, b) be distributed, and c) be drawn from for other comms materials.

This is probably still a good idea. 

Comment by Darren McKee on How not to write the Cookbook of Doom? · 2023-06-17T13:57:04.316Z · LW · GW

No answer for you yet, but I'm trying to achieve something similar in my book. I want to avoid infohazards but communicate the threat, and also provide hope. A tricky thing to navigate for sure, especially with diverse audiences.

Comment by Darren McKee on Foresight for AGI Safety Strategy: Mitigating Risks and Identifying Golden Opportunities · 2022-12-06T17:45:17.730Z · LW · GW

Great post!  I definitely think that the use of strategic foresight is one of the many tools we should be applying to the problem.

Comment by Darren McKee on Has anyone increased their AGI timelines? · 2022-11-07T22:52:51.063Z · LW · GW

Hahaha. With enough creativity, one never has to change one's mind ;)

Comment by Darren McKee on Vael Gates: Risks from Advanced AI (June 2022) · 2022-06-22T14:48:38.647Z · LW · GW

I saw your presentation and thought it was great, and I'm happy you've shared it here, as I'm working on something related:
FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community

Comment by Darren McKee on FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community · 2022-06-18T16:34:58.846Z · LW · GW

I don't have much more to share about the book at this stage, as many parts are still in flux. I don't have much on hand to point you towards (like a personal website or anything). I had a blog years ago, and I do that podcast I mentioned. Perhaps you have a specific question or two?

I have a couple of loose objectives: 1. to allow for synergies if others are doing something similar, 2. to possibly hear good arguments for why it shouldn't happen, 3. to see about getting help, and 4. other unknown possibilities (perhaps someone connects me to someone else who provides a useful insight).

Comment by Darren McKee on FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community · 2022-06-17T11:12:21.016Z · LW · GW

Thank you, much appreciated and it sounds like a great initiative. I'll follow up. 

Comment by Darren McKee on FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community · 2022-06-17T11:10:12.683Z · LW · GW

Thank you.  I'll follow up. 

Comment by Darren McKee on FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community · 2022-06-17T11:09:06.681Z · LW · GW

Yes, thank you. I shall.  I should probably also cross-post to the EA Forum.

Comment by Darren McKee on FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community · 2022-06-16T12:12:08.602Z · LW · GW

Mighty kind of you. Let me make a bit more progress and then follow up :)

Comment by Darren McKee on FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community · 2022-06-15T22:08:17.692Z · LW · GW

None taken, it's a reasonable question to ask. It's part of the broader problem of knowing if anything will be good or bad (unintended consequences and such).  To clarify a bit, by general audience, I don't mean everyone because most people don't read many books, let alone non-fiction books, let alone non-fiction books that aren't memoirs/biographies or the like. So, my loose model is that (1) there is a group of people who would care about this issue if they knew more about it and (2) their concerns will lead to interest from those with more power to (3) increase funding for AI safety and/or governance that might help. 
Expanding on (1), it could also increase the number of people who want to work on the issue, across a wide range of domains beyond technical work. 
It's also possible that the book is net-positive but still insufficient; even then, it would have been worth trying. 

Comment by Darren McKee on FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community · 2022-06-15T18:47:54.144Z · LW · GW

Thanks for the comment. I agree and was already thinking along those lines. 
It is a very tricky, delicate issue: we need to put more work into figuring out what to do while communicating that it is urgent, but not so urgent that people act imprudently and make things worse. 
Credibility is key and providing reasons for beliefs, like timelines, is an important part of the project.