post by [deleted]

This is a link post for


comment by Nicholas / Heather Kross (NicholasKross) · 2022-05-26T22:28:14.285Z

(Semi-dumb LW category suggestion: Posts That Could Have Made You Good Money In Hindsight)

Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2022-05-27T19:31:39.661Z

This also suggests a category for posts that could have lost you good money in hindsight.

comment by Bezzi · 2022-05-28T07:57:25.972Z

The majority of the entries are crappy 6-word slogans precisely because the contest is explicitly asking for one-liners to slap the audience in the face with. If the most effective strategy for solving something really is shouting one-liners at policymakers, then I am the one who doesn't want to live on this planet anymore.

For what it's worth, I strongly upvoted the first comment by johnswentworth on that post:

I'd like to complain that this project sounds epistemically absolutely awful. It's offering money for arguments explicitly optimized to be convincing (rather than true), it offers prize money only for making one particular side of the case (i.e. no money for arguments that AI risk is no big deal), and to top it off it's explicitly asking for one-liners.

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2022-05-28T18:58:19.797Z

I think we're speaking different languages here, since I'm saying that the contest is obviously the right thing to do and you're saying that the contest is obviously the wrong thing to do. I have a significant policy background, and I can't fathom why anyone would be so hostile to the contest: these people have short attention spans and expect to be lied to, so if we're going to be honest with them, we might as well be charismatic and persuasive while doing so.

For what it's worth, this is the second half of that comment by johnswentworth:

I understand that it is plausibly worth doing regardless, but man, it feels so wrong having this on LessWrong.

comment by Nanda Ale · 2022-05-28T07:06:00.714Z

Thank you for this post. I wish I had seen it earlier, but in the time I did have, I had a lot of fun both coming up with my own material and binging a bunch of AI content, extracting the arguments I found most compelling into a format suitable for the contest.

comment by ryan_b · 2022-06-01T12:24:00.484Z

Meta: I endorse attempts to signal boost things that posters feel are neglected, especially things already on LessWrong. Upvoted.

comment by Mitchell_Porter · 2022-05-28T00:49:05.480Z

I would guess that the resistance in Washington is not so much resistance to the basic idea of risk from AI as resistance to the idea that anyone in particular has the answer, especially a group not directly affiliated with a major technology company. Does that sound right?

comment by Nicholas / Heather Kross (NicholasKross) · 2022-05-26T22:15:06.976Z

This is important! We need higher-quality entries (although, due to the Pareto principle, I've submitted a good chunk of the low-quality 6-word slogans :/ )

Point is: you can easily do better in this market.

comment by Adam Zerner (adamzerner) · 2022-05-27T01:56:23.763Z

When you tell someone that you think a supercomputer will one day spawn an unstoppable eldritch abomination, which proceeds to ruin everything for everyone forever, and that the only solution is to give some people in SF a ton of money... the person you're talking to, no matter who they are, tends to reconsider associating with you (especially compared to their many alternatives in the DC networking scene).

I suspect that the best way of solving this problem is via social proof: get reputable people to acknowledge the problem and then say to the DC people "Look, Alice, Bob and Carol are all saying it's a big deal".

My understanding is that there are people like Elon Musk and Bill Gates who have said something like that, but I think we probably need something with more substance than "we should pay more attention to it". Hopefully something like "I think there is a >20% chance that humanity will be wiped out by unfriendly AI some time in the next 50 years."

It also seems worth doing some research into what sorts of statements the DC people would find convincing. I.e., asking them "If I told you X, how would you feel? What about Y? Z?" And also what sorts of reputable people they would be influenced by. Professors? Tech CEOs? Public figures?

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2022-05-27T02:57:29.524Z

My understanding is that there are people like Elon Musk and Bill Gates who have said something like that, but I think we probably need something with more substance than "we should pay more attention to it".


Fun fact: Elon Musk and Bill Gates have actually stopped saying that. Now it's mostly crypto people like Sam Bankman-Fried and Peter Thiel, who will likely take the blame if revelations break that crypto was always just rich people minting worthless tokens and selling them to poor people. 

It's really easy to imagine an NYT article pointing fingers at the people who donate 5% of their income to a cause (AGI) that has nothing to do with inequality, or to malaria interventions in Africa that "ignore people here at home". Hence, I think there should be plenty of ways to explain AGI to people with short attention spans: anger and righteous rage might one day be the thing that keeps their attention spans short.

Replies from: lc
comment by lc · 2022-05-27T17:38:56.260Z

It's really easy to imagine an NYT article pointing fingers at the people who donate 5% of their income to a cause (AGI) that has nothing to do with inequality, or to malaria interventions in Africa that "ignore people here at home". Hence, I think there should be plenty of ways to explain AGI to people with short attention spans: anger and righteous rage might one day be the thing that keeps their attention spans short.

This is a serious problem with most proposed AI governance and outreach plans, and one that I find goes unaddressed. It's not an unsolvable problem either, which irks me.

comment by RedMan · 2022-05-30T05:41:26.446Z

I threw in a few; I wasn't expecting to win, and I expect the probability of winning to correlate with overall forum karma. In other words, it's not what's said, it's who's saying it.

Replies from: ThomasWoodside
comment by TW123 (ThomasWoodside) · 2022-07-07T16:33:53.625Z

We're still working on judging right now, but I want to assure you that we looked at neither the name of the submitter nor the number of upvotes when judging the prizes. Of course, some of the submissions are quotes from well-known people like Stuart Russell and Stephen Hawking, and we do take that into account, but we didn't include the names of individual submitters in judging any of the prizes. (Using a quote from Stephen Hawking can add some ethos for the outside world, but using a quote from "a high-karma LessWrong user" doesn't.)

Of course, that doesn't mean the results won't correlate with forum karma; maybe people with more forum karma are better at writing. But the assertion "it's not what's said, it's who's saying it" is not true when it comes to which submissions are awarded prizes.