Been thinking a lot about whether it's possible to stop humanity from developing AI.
I think the answer is almost definitely not.
Interesting that the very first thing he discusses is whether AI can be stopped.
This post is really really good, and will likely be the starting point for my plans henceforth
I was just starting to write up some high level thoughts to evaluate what my next steps should be. The thoughts would've been a subset of this post
I haven't yet had time to process the substance of this post; just commenting that you've done a better job of putting words to what my feelings were getting at than I expect I would have myself at this stage.
Scott Alexander wrote a solid follow up to this piece last year.
TL;DR: the brain obviously wants to avoid pain, but not at the cost of avoiding thinking about painful things altogether.
Like, you don't want to be eaten by a lion, so you avoid doing things that lead to you being eaten by lions.
But this pain avoidance shouldn't compromise your epistemics; you shouldn't go so far to avoid pain as to avoid thinking about lions at all. That doesn't work.
This is potentially also what's going on with ugh fields: avoiding painful thoughts, as an overextension of the brain's pain-avoidance mechanisms.
Thinking about ugh fields in this way helps me actually confront them far more reliably.
Wait, so is a viable strategy re: antibiotic-resistant infections to just go a while without any antibiotics, then restart? Granted, the patient needs to survive, and ideally not spread the infection to others, but still. Is this a documented treatment approach?
As is said in some of the recommended resources at the bottom of my intro to AI doom and alignment.
This link is broken.
Here's the correct link: https://www.lesswrong.com/posts/T4KZ62LJsxDkMf4nF/a-casual-intro-to-ai-doom-and-alignment-1
Feels unlikely.
This post seems better suited to Twitter. If this LW post was meant as a signal boost, I'd suggest at least posting on Twitter in parallel and linking to that from all your posts.
Also, in the correct contrarian cluster, atheism is listed twice.
Temporarily gaining status is very doable, though; there are many communities where I believe I'm high status, but I don't tend to interact with them much. Some of them would be a good fit for me if I were in OP's position.
"should"s kinda cripple me. a lot. so far, this has possibly been the most useful post in the replacing guilt sequence.
Researchers aren't exactly fungible; replacing a skilled researcher with a new hire would still slow down progress. Given how many people want to help but have no idea how to help, this is a gap in the market worth filling.
Not everyone concerned about safety is looking to leave. The concerned have three options: stay and try to steer towards safety, continue on the current trajectory, or just leave. Helping some of those who've changed their mind about capabilities gain actually get out is only a net negative if those people staying in the field would've changed the trajectory of the field.

I simply don't think that everyone should try to help by staying and trying to change things. There is absolutely room for people to help by just leaving, reducing the amount of work going into capabilities gain. Different paths will make sense for different people.

There's space to support both those researchers who're trying to steer towards safety, and those who just want out. I've seen a lot of work towards the former, but almost none towards the latter. I want to speak to concerned researchers so that I can begin to better understand which individual researchers should indeed just leave, and which should stay.
I really don’t think the problem is overdetermined.
That's an old blog; he's currently active at https://members.themindhackersguild.com/ and https://theeffortlessway.com/
For anyone interested in working on this, you should add yourself to this spreadsheet: https://docs.google.com/spreadsheets/d/1WEsiHjTub9y28DLtGVeWNUyPO6tIm_75bMF1oeqpJpA/edit?usp=sharing
It's very useful for people building such an organisation to know of interested people, and vice versa.
If you don't want to use the spreadsheet, you can also DM me and I'll keep you in the loop privately.
If you're making such an organisation, please contact me. I'd like to work with you.
Top researchers are not easy to replace. Without the top 0.1% of researchers, progress would be slowed by much more than 0.1%.