Comments
Nice touch that Barack is on your LW page ;)
Thanks a lot for this post. I especially enjoyed the football example. I'd be interested in seeing more elaboration on the last section in the future.
Typos: havs -> has, inheritage -> inheritance, Turnes -> Turns.
I get why you didn't include it in the post, but it feels important to include the rest of Feynman's quote somewhere: "But, fortunately, it's been useless for almost forty years now, hasn't it? So I've been wrong about it being useless making bridges and I'm glad those other people had the sense to go ahead."
Updated.
Thanks for your comment! I'm updating the post this week and will include you in the new version.
Any guess as to the start date of the second round (assuming the first round goes well, funding exists for round 2, etc.)?
This works (except for a few misquotations):
but this doesn't (it generated very slowly as well):
They're available on GitHub with interactive visualizations of the data here.
There's a bug in the visualization: if you have a dataset selected in one persona and then switch to a different persona, the new persona's results don't show up until you edit the label confidence or select a dataset in it. For example, selecting the dataset "desire to influence world" in the persona "Desire for Power, Influence, Optionality, and Resources" and then switching to "Politically Liberal" results in no points appearing by default.
I'm preparing for SERI MATS and I found this immensely helpful. Thanks a lot!
What kinds of people do you try to talk to? This seems overly pessimistic, though I'm not sure what your experience is. This also doesn't seem very constructive/relevant to the post, though I'd be interested to hear why you said this.
Are you saying people should be more skeptical of AGI because of the physical limits on computation, and thus more hopeful?
Any books/resources on existentialism/absurdism you'd recommend? It seemed like a lot of the alignment positions had enough of that flavor to screen off the primary sources, which I found less approachable/directly relevant. Though it does seem like a good idea to directly name that there's an entire branch of philosophy dedicated to living in an uncaring universe and making your own meaning.
Thanks for the suggestions! The navigator is already linked, but I'll add you and Upgradable. Do you know the specific people at Upgradable who are familiar (besides you and Dave)? And what is your rate? I see numbers ranging from $250-$400 on your site.
It still seems pretty likely, but I really appreciate your articulating this and trying to push back against insularity and echo chamber-ness.
Sure, I hope you find it helpful! I've updated the list to include all of the prices I could find.
Do you see acceptance as it's mentioned here as referring to a stance of "AGI is coming, we might as well feel okay about it", or something else?
I agree with this, thanks for the feedback! Edited.
Thanks, Nicholas, I'll definitely give this a shot. How did you go about tracking the effects of interventions? For example, how did you discover that gratitude was helpful or that carb-heavy lunches were impacting your energy? Did you just try them one at a time and see how that affected things, or did you somehow perform an X/non-X comparison as I described in the original post?