To give a short, very bad, but sort-of meaningful summary of my ideas: even idiots have resources. It might help to give a concrete example of a plausible-ish archetype of something that might happen. I don't necessarily think this exact thing will happen, but it may help clarify what I'm thinking.
- Suppose 5% of Americans would be willing to vote for political candidates based purely on their privacy regulation promises, if they were properly persuaded (or donate to privacy nonprofits, or contribute in some other way).
- Privacy regulations could meaningfully restrict data access and therefore slow down the progress of deep learning capabilities.
- Suppose a significant portion of those people would never be persuaded by AI x-risk arguments and would never contribute meaningfully to alignment work otherwise.
If those three facts are true, I think it would be net positive to advocate for privacy regulation directly, rather than telling people about x-risks, since more people are receptive to privacy arguments than to x-risk arguments. Obviously this would require careful consideration of your audience: if you think you're talking to thoughtful people who could recognize the importance of alignment and contribute to it, then it is clearly better to tell them about alignment directly.
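To make the comparison concrete, here's a toy back-of-envelope sketch of the reasoning. Every number in it is a made-up placeholder (not an estimate of anything), and `expected_impact` is just a hypothetical helper: it multiplies audience size by the fraction receptive to a message and by the average impact per receptive person.

```python
# Toy expected-impact comparison for two advocacy messages.
# All numbers below are made-up placeholders, not estimates.

US_ADULTS = 258e6  # rough US adult population

def expected_impact(receptive_fraction, per_person_impact):
    """Receptive audience size times average impact per receptive person."""
    return US_ADULTS * receptive_fraction * per_person_impact

# Privacy pitch: large receptive audience, but each supporter only
# slows AI indirectly (via data-access regulation).
privacy = expected_impact(receptive_fraction=0.05, per_person_impact=0.1)

# X-risk pitch: far smaller receptive audience, but each convert may
# contribute more directly (alignment work, donations).
xrisk = expected_impact(receptive_fraction=0.002, per_person_impact=1.0)

print(f"privacy: {privacy:.3g}, x-risk: {xrisk:.3g}")
# With these placeholders the privacy pitch wins; the conclusion flips
# if per-person impact for x-risk converts is high enough.
```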
Does this chain of thought seem reasonable to you? If not, what do you think is missing or wrong?
Judging from this, might privacy regulation be one of the best ways to slow down AI development? Privacy is a widely accepted mainstream issue, so it should be a lot easier to advocate for; I think regular people would find it much easier to understand and get behind privacy regulation than DL regulation. On the other hand, privacy is not a neglected cause, and is therefore less important on the margin.