Comments sorted by top scores.
comment by sudo · 2022-12-08T00:50:02.874Z · LW(p) · GW(p)
Why not create non-AI startups that are way less likely to burn capabilities commons?
Replies from: Heighn, lahwran
↑ comment by Heighn · 2022-12-08T07:32:08.561Z · LW(p) · GW(p)
It seems to me joshc is arguing that it's relatively easy to make money with AI startups at the moment.
Replies from: joshua-clymer
↑ comment by joshc (joshua-clymer) · 2022-12-08T17:39:55.255Z · LW(p) · GW(p)
Also, AI startups make AI safety resources more likely to scale with AI capabilities.
↑ comment by the gears to ascension (lahwran) · 2022-12-08T01:07:47.098Z · LW(p) · GW(p)
The commons is on fire, and the fire is already self-preserving. Do you want to put the fire out? Then become the fire. Stop trying to tell the fire to slow down; it's an extremely useless thing to do unless you're ready to start pushing against capitalism as a whole.
Replies from: sudo
↑ comment by sudo · 2022-12-08T01:11:39.234Z · LW(p) · GW(p)
Your reply does not even remotely resemble good-faith engagement.
You can unilaterally slow down AI progress by not working on it. Each additional day until the singularity is one additional day to work on alignment.
"Becoming the fire" because you're doomer-pilled is maximally undignified.
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2022-12-08T01:14:23.994Z · LW(p) · GW(p)
You cannot unilaterally slow down AI progress by not working on it??? What the fuck kind of opinion is that? DeepMind is ahead of you. DeepMind will always be ahead of you. You cannot catch up to DeepMind. For fuck's sake, DeepMind has a good shot of having TAI right now, and you want me to slow the fuck down? The fuck is your problem? Have you still not updated off of deep learning?
Replies from: sudo
↑ comment by sudo · 2022-12-08T01:15:43.689Z · LW(p) · GW(p)
Default comment guidelines:
- Aim to explain, not persuade
- Try to offer concrete models and predictions
- If you disagree, try getting curious about what your partner is thinking
- Don't be afraid to say 'oops' and change your mind
↑ comment by the gears to ascension (lahwran) · 2022-12-08T01:18:08.707Z · LW(p) · GW(p)
I mean, yeah, I definitely don't belong on this website; I'm way too argumentative. Like, I'm not gonna contest that. But are you gonna actually do anything about your beliefs, or are you gonna sit around insisting we gotta slow down?
Replies from: sudo
↑ comment by sudo · 2022-12-08T01:22:44.316Z · LW(p) · GW(p)
I find the accusation that I'm not going to do anything slightly offensive.
Of course, I cannot share what I have done and plan to do without severely de-anonymizing myself.
I'm simply not going to take humanity's horrific odds of success as a license to make things worse, which is exactly what you seem to be insisting upon.
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2022-12-08T01:28:42.647Z · LW(p) · GW(p)
No, there's no way to make it better that doesn't involve going through, though. Your model that any attempt to understand or use capabilities is failure is nonsense, and I wish people on this website would look in a mirror about what they're claiming when they say that. That attitude was what resulted in mispredicting AlphaGo! Real safety research is always, always, always capabilities research! It could not be otherwise!
Replies from: sudo
↑ comment by sudo · 2022-12-08T01:58:44.898Z · LW(p) · GW(p)
You don’t have an accurate picture of my beliefs, and I’m currently pessimistic about my ability to convey them to you. I’ll step out of this thread for now.
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2022-12-08T02:04:23.749Z · LW(p) · GW(p)
That's fair. I apologize for my behavior here; I should have encoded my point better, but my frustration is clearly incoherent and overcalibrated. I'm sorry to have wasted your time and reduced the quality of this comments section.
Replies from: sudo
comment by Chris_Leong · 2022-12-08T01:33:00.753Z · LW(p) · GW(p)
Upvoted, but it's important to be very cautious about advancing capabilities.
comment by the gears to ascension (lahwran) · 2022-12-08T00:32:23.820Z · LW(p) · GW(p)
Strong upvote for promoting SafetyCapabilities. Good to see there are people who aren't wooed by the MIRI SafetyOnly or the current-industry CapabilitiesOnly approaches.
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2022-12-08T00:33:16.208Z · LW(p) · GW(p)
I'm not able to run a company, but I'd love to join a startup with this attitude.
[edit: the reason I'm not able to run a company is well displayed by my errors in this comment section.]