post by [deleted]

This is a link post for

Comments sorted by top scores.

comment by Raghuvar Nadig (raghuvar-nadig) · 2024-03-04T14:15:07.416Z

I'm curious how people are parsing this rumor (part of Connor's tweets):

I recall a story of how a group of AI researchers at a leading org (consider this rumor completely fictional and illustrative, but if you wanted to find its source it's not that hard to find in Berkeley) became extremely depressed about AGI and alignment, thinking that they were doomed if their company kept building AGI like this. So what did they do? Quit? Organize a protest? Petition the government? They drove out, deep into the desert, and did a shit ton of acid...and when they were back, they all just didn't feel quite so stressed out about this whole AGI doom thing anymore, and there was no need for them to have a stressful confrontation with their big, scary CEO.

Do people who are in proximity to the relevant community consider this anecdote fictional/not-pertinent/exaggerated/but-of-course with respect to AI safety?

comment by gull · 2024-03-03T23:21:49.273Z

What if asymmetric fake trust technologies are orders of magnitude easier to build and scale sustainably than symmetric real trust technologies?

It already seems that asymmetric technologies work better than symmetric technologies, and that fake trust technologies are easier to scale than real trust technologies.

Symmetry and correct trust are both specific states, and there are tons of directions to depart from them; the only thing making them attractor states would be people who want the world to be more safe instead of less safe. That sort of thing is not well-reputed as a great investment strategy ("Socially Responsible Indexes" did not help matters).

comment by trevor (TrevorWiesinger) · 2024-03-03T23:58:57.482Z

I think that brings up a good point, but the main reason people don't work on trust tech is actually cultural (Ayn Rand type stuff), not self-interest. There's tons of social status and org reputation to be gained from building technology that fixes a lot of problems, and it makes the world safer for the self-interested people building it.

It might not code as something their society values (e.g. cash return on investment), but the net upside is way bigger than the net downside. Bryan Johnson, for example, is one of the few billionaires investing any money at all in anti-aging tech, even though so little money is going into the field that it would be in billionaires' own interest to form a coalition investing >1% of their wealth into technological advancement in that area.