laserfiche's Shortform

post by laserfiche · 2023-04-19T13:50:23.232Z · LW · GW · 8 comments

8 comments

Comments sorted by top scores.

comment by laserfiche · 2023-04-19T13:50:23.533Z · LW(p) · GW(p)

Under the AI Safety Materials tag, 48 posts come up. Exactly two of them are by sprouts (new users):

An example elevator pitch for AI doom [LW · GW] Score: -8[1]

On urgency, priority and collective reaction to AI-Risks: Part I [LW · GW] Score: -12

These are also the only two posts with negative scores.  

In both cases, it was the user's first post. For Denreik in particular, you can tell that he labored over it and put many hours into it.

Is it counterproductive to discourage new arrivals attempting to assist in the AI alignment effort?

Is there a systemic bias against new posters?

  1. ^

    Full disclosure, this was posted by me.  

Replies from: niplav, Raemon
comment by niplav · 2023-04-19T14:49:24.381Z · LW(p) · GW(p)

Man, I have conflicting opinions about this. "People want to help" is a good thing. But the upvote/downvote mechanism is about the post, not the poster, and its function is to rank things that others find helpful.

And both posts you linked just…aren't that great? Yours doesn't deserve getting downvoted, but it also doesn't really deserve getting upvoted all that much imho—there's so much AI alignment intro material out there, from popular articles to YouTube videos to book-length explainers from so many people…and e.g. this one [LW · GW] fits pretty well into your desiderata?

As for Denreik's post: it doesn't smell like a thing I'd want to read (no paragraph breaks, no clear statement of the conclusion at the top, slightly confusing writing…), and while I haven't read it (and therefore didn't vote either way), such things are unfortunately a reliable indicator.

Then again: I'd love it if there were some way of showing someone "Hey, I like that you're trying to help! Maybe lurk moar (a lot moar, maybe a 100:1 or 1000:1 ratio of reading to contributing), start by commenting or shortforming". But there also needs to be some mechanism for ranking content [LW · GW].

Replies from: laserfiche
comment by laserfiche · 2023-04-19T16:29:58.978Z · LW(p) · GW(p)

Upvoted; I agree with the gist of what you're saying, with some caveats. I would have expected the two posts to end up with scores of 0 to 5, but there is a world of difference between a 5 and a -12.

It's worth noting that the example explainer you linked doesn't appeal to me at all. And that's fine. It doesn't mean that there's something wrong with the argument, or with you, or with me. But it does demonstrate a gap. I've read all the alignment material[1], and I still see huge chunks of the population that will not be compelled by the existing arguments. Also, many of the arguments are outdated and less applicable to the current state of affairs.

 

  1. ^

    https://docs.google.com/document/d/1zx_WpcwuT3Stpx8GJJHcvJLSgv6dLje0eslVKvuk1yQ/edit

Replies from: niplav
comment by niplav · 2023-04-19T17:23:10.315Z · LW(p) · GW(p)

Huh, I see. Agree about the 0-5 vs. -12 (in this case -8) difference.

I don't see myself in the business of making good explainer material for the general public, so I'll defer to you on that (since you have read more of the introductions than I have).

Also, I guess posting that Google Doc here would probably be upvoted?

comment by Raemon · 2023-04-19T18:13:10.479Z · LW(p) · GW(p)

I mostly don't want new people to contribute to public materials efforts. I want people to have thought concretely about the problem and fleshed out their understanding of it before focusing on communicating it to others.

I do want people who are entering the space to have a good experience. I'm mulling over some posts that give newcomers a clearer set of handholds on what to do to get started.

comment by laserfiche · 2023-08-25T21:58:10.082Z · LW(p) · GW(p)

Are we misreporting p(doom)s?

I usually say that my p(doom) is 50%, but that doesn't mean the same thing that it does in a weather forecast.

In weather forecasting, the percentage means that forecasters ran a series of simulations, and that fraction of the runs produced rain. A forecast of a 100% chance of rain, then, does not mean the actual chance of rain is near 100%. Forecasts still have error bars; ten days out, a forecast is wrong about 50% of the time. Therefore, a 10-day forecast of a 100% chance of rain means the actual chance is closer to 50%.

In my mental simulations, the outcome is bad 100% of the time. I can't construct a convincing scenario in my mind where things work out, at least contingent on the continued development of AI. But I know that there is much I don't know, things I haven't yet considered, and so on. Hence the 50% error margin. But as with the weather forecast, this can be misinterpreted as me thinking that things work out 50% of the time.
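To make the implicit arithmetic concrete, here is a minimal sketch (my framing, not standard forecasting terminology; the function name and the zero "model is wrong" fallback are illustrative assumptions):

```python
def effective_probability(p_model: float, reliability: float,
                          p_if_model_wrong: float = 0.0) -> float:
    """Blend a model's output with uncertainty about the model itself.

    p_model:          probability the simulations assign (1.0 = bad outcome every run)
    reliability:      probability that the model captures reality at all
    p_if_model_wrong: probability assumed for the cases the model misses
                      (0.0 here, i.e. "if my model is wrong, things work out")
    """
    return reliability * p_model + (1 - reliability) * p_if_model_wrong

# The 10-day forecast: every simulation says rain (p_model=1.0), but such
# forecasts are only right about half the time (reliability=0.5).
print(effective_probability(p_model=1.0, reliability=0.5))  # 0.5

# The same arithmetic behind "my p(doom) is 50%": every mental simulation
# ends badly, but I only trust my model of the situation about half the time.
print(effective_probability(p_model=1.0, reliability=0.5))  # 0.5
```

The point is that the single reported number collapses two distinct quantities, the within-model probability and the reliability of the model itself, into one.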

Is there terminology that accounts for this? If not, does it mean that p(doom)s are being misunderstood, or reported with different meanings?

Replies from: MakoYass
comment by mako yass (MakoYass) · 2023-08-26T06:18:21.031Z · LW(p) · GW(p)

Possibly https://en.wikipedia.org/wiki/Knightian_uncertainty ?

Replies from: laserfiche
comment by laserfiche · 2023-08-26T14:19:34.905Z · LW(p) · GW(p)

Yes, thank you, I think that's it exactly. I don't think that people are communicating this well when they are reporting predictions.