On Raising Awareness
post by Tomás B. (Bjartur Tómas) · 2021-11-17T17:12:36.843Z
Those who can, do. Those who can't, teach. And those who can't teach, "raise awareness".
I have been spending some time thinking about ideas for "raising awareness" of the AI alignment problem, focusing on those that sound crazy enough that perhaps smarter people have glossed over them, or avoided thinking about what literally implementing them might look like. I have made some comments about these ideas here and on Discord, and the purpose of this post is to centralize them, and hopefully get some more crazy ideas in the comments.
Become a Human Catalyst
A well-known alignment researcher and a very famous engineer had a conversation about alignment recently. I will not get into specifics, but the obvious inference is obvious.
My impression is that the conversation did not go well (which may be putting it mildly), but what interests me more is that I am acquainted with the person responsible for this meeting occurring. He knew someone who knew them both, and suggested that it might be interesting if they had a conversation.
This struck me as an example of someone being a "human catalyst", which I think is a really high-leverage social role.
I have been trying to write a post about this role for a while now. But it is very illegible, and it is hard to find historical examples - Oppenheimer, Wheeler, Harry Nyquist and various Enlightenment salonnières might fit. Though I am not involved in the IRL rationalist community, I get the impression that Alicorn and Anna Salamon may also be human catalysts.
If you are a person with interesting contacts, it is always worth thinking hard about whom it would be productive to introduce to whom.
And if you are a human catalyst, please write a post about it! I think this is a social role that should be more legible as something one can aspire to be.
Message Targeting Using Low-Status Signals of Cognitive Ability
Psychometrics is both correct and incredibly unpopular, which means there may be an arbitrage opportunity for anyone willing to act on it.
Very high IQ people are rare, and sometimes have hobbies that are considered low-status in the general population. Searching for low-status signals that are predictive of cognitive ability looks to be an efficient means of message targeting.
It is interesting to note that Demis Hassabis’s prodigious ability was obvious to anyone paying attention to board game competitions in the late 90s. It may have been high-ROI to sponsor the Mind Sports Olympiad at that time, just for a small shot at influencing someone like Demis. There are likely other low-status signals of cognitive ability that could allow us to find diamonds in the rough.
Those who do well in strategic video games, board games, and challenging musical endeavours may be worth targeting. (Heavy metal for example - being relatively low-status and extremely technical musically - is a good candidate for being underpriced).
With this in mind, one obvious idea for messaging is to run ads. Unfortunately, potentially high-impact people likely have ad-blockers on their phones and computers.
However, directly sponsoring niche podcasts/YouTube channels with extremely high-IQ audiences may be promising - by "directly sponsoring" I mean creator-presented ads. There are likely mathematics, music theory, games and puzzle podcasts/channels that are small enough to have not attracted conventional advertisers, but are enriched enough in intelligent listeners to be a gold mine from this perspective.
Promoting Existing EA/AI Podcasts on 3rd-Party Podcast Apps
Most niche 3rd-party podcast apps allow podcasters to advertise their podcasts on their search results pages. On the iPhone, at least, these ads cannot be trivially blocked.
Rank speculation, but the average IQ of a 3rd-party podcast app user is likely slightly higher than that of first-party podcast app users (for the same reason Opera users had the highest IQ of all browser users), and the audience is possibly already slightly enriched for high-impact people. (Though plausibly, really smart people prefer text to audio.) By focusing ads on podcast categories that are both cheap and good proxies for listeners’ IQs, one may be able to do even better.
I did a trial run of this for the AXRP podcast on the Overcast podcast app, and it worked out to roughly 5 dollars per subscriber acquired. (I did this without asking the permission of the podcast's host.)
Due to the recurring nature of podcasts and the parasocial relationship listeners develop with their hosts, it is my opinion that their usefulness as a propaganda and inculcation tool is underappreciated. It is very plausible to me that 5 dollars per subscriber may be cheap for the right podcast.
Raising Morale
There are a large number of smart people who are interested in alignment already, but are not acting on the problem. Inspiring action in such people may be high value. Possibly, there is still some juice left in this orange.
I am confused about how to go about this, but Irrational Modesty [LW · GW] and Going Out With Dignity [LW · GW] were my attempts.
The 10 Million Dollar Tao
It is probably a bit of a stretch to call this raising awareness, but I made this comment recently:
I know we used to joke about this, but has anyone considered actually implementing the strategy of paying Terry Tao 10 million dollars to work on the problem for a year?
Though it sounds comical, I think it is probably worth trying, both in the literal case of offering Tao 10 million and in the generalized case of finding the highest-g people in the world and offering them salaries that seem outrageous.
Here and on the EA forum, where another person posted the same idea, many claimed genius people would not care about 10 million dollars. I think this is, to put it generously, not at all obvious. Paying people is the standard means of incentivizing them. And it is not obvious to me that Tao-level people are immune to such incentives. This may be something we should establish empirically.
To the extent we can identify the literal smartest people on the planet, we would be a really pathetic civilization if we were not willing to offer them NBA-level salaries to work on alignment. Perhaps 10 million is too high. Maybe 1 million for a month as a first try?
Comments sorted by top scores.
comment by James_Miller · 2021-11-17T17:40:50.477Z · LW(p) · GW(p)
The focus should be on getting extremely bright young computer programmers interested in the AI alignment problem, so you should target podcasts they listen to. Someone should also try to reach members of the Davidson Institute, an organization for profoundly gifted children.
↑ comment by devansh (dpandey) · 2021-11-18T00:01:09.469Z · LW(p) · GW(p)
I'm a Davidson YS and have access to the general email list. Is there a somewhat standard intro to EA that I could modify and post there without seeming like I'm proselytizing?
↑ comment by James_Miller · 2021-11-18T00:03:38.416Z · LW(p) · GW(p)
This: https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/
↑ comment by Tomás B. (Bjartur Tómas) · 2021-11-18T23:20:41.479Z · LW(p) · GW(p)
Or those who might choose to become programmers.
↑ comment by James_Miller · 2021-11-19T00:08:40.484Z · LW(p) · GW(p)
Yes, although you want to be very careful not to attract people to the field of AGI who end up not working on alignment and instead shorten the time until we get superhuman AGI.
↑ comment by Tomás B. (Bjartur Tómas) · 2021-11-19T15:45:32.443Z · LW(p) · GW(p)
Yeah, any ideas on how to filter for this? It seems difficult not to have this effect on someone. One would hope the smarter people would get orthogonality, but empirically that does not seem to be the case. The brightest people in AI have insane naïveté about the likely results of AGI.
↑ comment by James_Miller · 2021-11-19T18:17:12.930Z · LW(p) · GW(p)
I was going to suggest you try to reach EA people, but they might want to achieve AGI as quickly as possible, since a friendly AGI would likely quickly improve the world. While the pool is very small, I have noticed a strong overlap between people worried about unfriendly AGI and people who have signed up for cryonics, or who at least think cryonics is a reasonable choice. It might be worth doing a survey of computer programmers who have thought about AGI to see which traits correlate with being worried about unaligned AGI.
From a selfish viewpoint, younger people should want AGI development to go slower than older people do since, cryonics aside, the older you are the more likely you are to die before an AGI can cure aging.
↑ comment by [deleted] · 2022-03-03T12:41:12.711Z · LW(p) · GW(p)
Most EAs are much more worried about AGI being an x-risk than they are excited about AGI improving the world (if you look at the EA Forum, there is a lot of talk about the former and pretty much none about the latter). Also, no need to specifically try to reach EAs; pretty much everyone in the community is already aware.
...Unless you meant Electronic Arts!? :)
↑ comment by Harmless · 2021-11-19T17:09:06.632Z · LW(p) · GW(p)
You might want to try recruiting people from a more philosophical/mathematical background as opposed to a programming background (hopefully we might be able to crack the problem from the pure-logic perspective before we get to an application), but yeah, now that you mention it, "recruiting people to help with the AGI issue without also worsening it" looks like it might be an underappreciated problem.