AprilSR's Shortform

post by AprilSR · 2022-03-23T18:36:27.608Z · LW · GW · 11 comments

comment by AprilSR · 2022-04-01T19:47:28.314Z · LW(p) · GW(p)

wait i just got the pun

comment by AprilSR · 2022-03-23T18:36:28.411Z · LW(p) · GW(p)

I think a position some AI safety people hold is: “Powerful AI is necessary to perform a pivotal act.”

I can buy that it is impossible to safely have an AI make extremely advanced progress in, e.g., nanotechnology. But it seems somewhat surprising to me if you need a general AI to stop anyone else from making a general AI.

Political solutions, for example, certainly seem very hard, but… all solutions seem very hard? The reasons AI-based solutions are hard don’t seem obviously weaker to me than the reasons political solutions are hard.

Replies from: AprilSR
comment by AprilSR · 2022-03-23T18:39:08.269Z · LW(p) · GW(p)

(The described position is probably a strawman; I’m posting this more to further my own thinking than as a criticism of anyone in particular.)

comment by AprilSR · 2023-07-27T18:09:02.387Z · LW(p) · GW(p)

Does anyone know if there is a PDF version of the Sequence Highlights anywhere? (Or any ebook format is fine probably.)

comment by AprilSR · 2022-04-09T19:57:29.156Z · LW(p) · GW(p)

Humans are proof by example that you can have a general intelligence which is not a very good utility-function maximizer; arguably, they even suggest that this is what you get by default.

I mostly buy that eventually you get a “spark of consequentialism” in some sense, but that might actually happen at a substantially superhuman level.

I’m not sure this actually extends timelines very much if it is true, but I’m updating towards expecting the foom to happen a little later into the AI improvement curve.

Replies from: TLW, MackGopherSena
comment by TLW · 2022-04-10T02:20:59.744Z · LW(p) · GW(p)

...to the extent that evolved intelligence is similar to AI, at least.

comment by MackGopherSena · 2022-04-11T14:20:07.162Z · LW(p) · GW(p)

[edited]

Replies from: AprilSR
comment by AprilSR · 2022-04-11T16:08:50.535Z · LW(p) · GW(p)

That's fair; my brain might be doing a great job of maximizing something that isn't especially correlated with what I actually want / what actually makes me happy.

comment by AprilSR · 2023-09-07T17:33:46.863Z · LW(p) · GW(p)

I think we should have a community norm that threatening libel suits (or actually suing) is incredibly unacceptable in almost all cases—I'm not sure what the exact exceptions should be, but maybe it should require "they were knowingly making false claims."

I feel unsure whether it would be good to enforce such a norm in the current Nonlinear situation, because there wasn't common knowledge of the norm beforehand and because I feel too strongly about it not to worry that I'm biased (and because hearing them out is the principled thing to do). But I think building common knowledge of such a norm would be good.

Replies from: frontier64
comment by frontier64 · 2023-09-07T18:18:10.216Z · LW(p) · GW(p)

Under this community norm, how does Alice respond when Bob lies about her in public in a way that hurts her commercial business?

Replies from: AprilSR
comment by AprilSR · 2023-09-07T20:11:38.361Z · LW(p) · GW(p)

I'm more confident that we should generally have norms against using threats of legal action to prevent the exchange of information than I am about the exact form those norms should take. But to give my immediate thoughts:

I think the best thing for Alice to do if Bob is lying about her is to just refute the lies (step 1). In an ideal world, this is sufficient. In practice, it may not be, or refuting the lies might require sharing private information, so if necessary I would next escalate to informing forum moderators, presenting evidence privately, and requesting a ban (step 2).

Only once those avenues are exhausted might I consider threatening a libel suit acceptable (step 3).

I do notice that the Nonlinear situation in particular is complicated by Ben Pace being a LessWrong admin: if step 1 doesn't work, step 2 has issues, so escalating to step 3 might be acceptable sooner than usual.

Concerns have been raised that there might be some sort of large first-mover advantage. I'm not sure I buy this; my instinct is that the Nonlinear cofounders are just bad-faith actors making whatever arguments seem advantageous to them (though on principle I'm trying to withhold final judgement). That said, I could definitely imagine deciding in the future that this concern is large enough to justify weaker norms against rapid escalation.