AI as a Civilizational Risk Part 6/6: What can be done
post by PashaKamyshev · 2022-11-03T19:48:52.376Z · LW · GW · 4 comments
Fix or destroy social media
One of the critical positive developments is Elon Musk's potential purchase of Twitter. (The acquisition happened as this essay was being finalized.) Possible positive effects include cracking down on bots and revamping moderation to avoid bad AI-driven mischaracterization of public opinion. However, the main benefit would be the potential implementation of a non-optimization feed-ranking algorithm in the vein of TrustRank. Proper feed ranking would promote socially cohesive ideas instead of wedge issues.
Aside from Elon's specific actions around Twitter, most social media needs to be destroyed or drastically reformed. We need to be careful around ranking algorithms: any algorithm with an "optimization nature" rather than a "contractual nature" must be viewed with suspicion. At the very least, scientists need to test the effects of prolonged use of these websites. If use causes mental health issues in individuals or small groups, that is a sign of unacceptable externalities. Setting up such tests requires a good assessment of mental health problems and how to measure them correctly. But even lacking great assessments, with the crude approximations we have today we can design social media that does not slowly drive people insane.
In addition to personal defense, there needs to be "group defense" against hostile outside optimization. This reasoning led me to research this area and develop TrustRank, in the hope that it becomes a core algorithm of future social media, much as PageRank is a core algorithm of current search engines. Even correctly measuring social cohesion can give decision-makers some idea of how to preserve it. Of course, this requires decision-makers who care about the nation's well-being, the absence of which is part of the problem. We would also need solutions to governments forcing social media companies to use AI to de-platform people with valuable insights. However, since these issues are well known, solutions are more likely to emerge by default through web3 systems.
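To make the contrast with optimization-based feeds concrete, here is a minimal sketch of TrustRank-style propagation: a PageRank variant whose teleportation mass flows only to a hand-vetted seed set of trusted accounts, so trust diffuses outward along endorsements instead of being captured by bot clusters. The graph shape, seed choice, and damping factor below are illustrative assumptions, not the actual algorithm discussed in the essay.

```python
# A minimal sketch of TrustRank-style trust propagation, assuming a simple
# follower graph where a follow counts as an endorsement. It is biased
# PageRank: teleportation mass goes only to a hand-vetted seed set, so
# isolated bot clusters accumulate almost no trust.
from collections import defaultdict

def trust_rank(edges, trusted_seeds, damping=0.85, iterations=50):
    """edges: (endorser, endorsed) pairs; trust flows along endorsements."""
    nodes = {n for edge in edges for n in edge} | set(trusted_seeds)
    out_links = defaultdict(list)
    for src, dst in edges:
        out_links[src].append(dst)

    # Teleport distribution: uniform over trusted seeds, zero elsewhere.
    seed_mass = {n: (1.0 / len(trusted_seeds) if n in trusted_seeds else 0.0)
                 for n in nodes}
    trust = dict(seed_mass)

    for _ in range(iterations):
        nxt = {n: (1.0 - damping) * seed_mass[n] for n in nodes}
        for src in nodes:
            targets = out_links[src]
            if targets:  # split this node's trust among those it endorses
                share = damping * trust[src] / len(targets)
                for dst in targets:
                    nxt[dst] += share
            else:  # dangling node: return its mass to the seed set
                for seed in trusted_seeds:
                    nxt[seed] += damping * trust[src] / len(trusted_seeds)
        trust = nxt
    return trust

# Vetted account "alice" endorses "bob"; a bot ring endorses only itself
# and therefore ends up with a score near zero.
follows = [("alice", "bob"), ("bob", "carol"),
           ("bot1", "bot2"), ("bot2", "bot1")]
print(trust_rank(follows, trusted_seeds=["alice"]))
```

The design point is contractual rather than optimizing: scores come from a fixed, auditable propagation rule seeded by human judgment, not from a model tuned to maximize engagement.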
However, completely replacing social media is not enough to stave off civilizational risk. We still have the background problems of social cohesion, which are not new but reappear every so often in civilizations. Elites need to improve social cohesion among themselves and the people they manage. Even if the Western order is doomed, there are suggestions that a core group of people with good internal social cohesion could "reboot" its good parts in a new form, such as a network state, and avoid a further Dark Age.
Meta discourse issues
What else can be done, and why is this hard? Even if we accept the optimistic hope that civilizational collapse may decrease existential risk by giving us the necessary impetus to do something about it, many problems remain on the table. The main one is that, due to declining social cohesion, there is little meta-level capacity to discuss these issues properly.
The culture war and the left-right dichotomy now map onto the meta/object dichotomy. Effective altruism, mainly on the broad left and occasionally libertarian or centrist, tends to focus on just the meta-issues. Many discussions about AI's effect on humanity are about abstract agents abstractly acting in the world against abstract human values. Occasionally this is made concrete, but the concrete examples usually involve manufacturing or power scenarios. EAs tend not to concretize the issue in political terms, partly for good reason: political terms are themselves divisive [LW · GW] and can spur an unnecessary backlash.
The right, however, tends to focus on object-level civilizational issues, which means sounding the alarm that Western civilization is collapsing and that the current direction of society is negative. The "everything is great" counter-arguments, which roughly claim that "metrics such as GDP and life expectancy have been going up, and therefore everything is broadly ok," are not cutting it, because life expectancy has gone down quite a bit recently.
The left/right and meta/object camps do not engage with each other, for fear of being perceived as one another. At some point, the government will start using narrow AIs or bots to sway the public's perception of itself, demoralize the public, or radicalize it for war. Due to left/right issues, there will be less productive dialogue than needed between the pro-regime but AI-worried left and the anti-government right. Occasionally there is some bipartisan talk about digital social media, but the diagnoses and the solutions tend to diverge. As a result, an essay like this one is hard to write, and it is likely to become more challenging as the culture war grows in magnitude, even as the issues become more apparent.
Math and Economics research
There is also mathematical research that needs to happen. There are many examples [LW · GW], but a big one is the notion of approximation of utility functions. When I talked about the narrow AIs of search engines vs. social media, I said search engines were "approximately" closer to human "utility" without being perfectly aligned. However, this mathematical notion of "more aligned" is intuitive rather than precise enough to discuss formally. The lack of formalization is surprising, because people have been talking about aligned versus non-aligned corporations or ideas for a long time. We need a theory that describes or compares two utility functions in precise terms.
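As one illustration of what such a theory might start from (a toy construction of mine, not an established formalization), one could score a proxy utility by how well it rank-orders outcomes relative to the true utility over some distribution of states. The example domain, function names, and numbers below are all invented:

```python
# A toy illustration of making "more aligned" precise: Spearman-style rank
# correlation between a proxy utility and the true utility over sampled
# states. The metric and the example are assumptions for illustration.
import random

def ranks(values):
    """Rank of each value within the list (0 = smallest)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    for r, i in enumerate(order):
        result[i] = float(r)
    return result

def alignment(true_utility, proxy_utility, states):
    """Correlation of the two utilities' rankings: +1 means the proxy
    orders outcomes exactly as the true utility does."""
    u = ranks([true_utility(s) for s in states])
    v = ranks([proxy_utility(s) for s in states])
    n = len(states)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

# Hypothetical example: engagement hours as a proxy for user well-being.
random.seed(0)
hours = [random.uniform(0, 10) for _ in range(1000)]
well_being = lambda h: -(h - 2.0) ** 2   # true utility peaks at 2 hours/day
engagement = lambda h: h                 # the feed maximizes raw hours
print(alignment(well_being, engagement, hours))  # strongly negative here
```

A real formalization would also need to weight states by how likely an optimizer is to reach them, since a proxy can agree with the true utility on typical states and diverge exactly where optimization pressure concentrates.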
We also need economic research that furthers the understanding of externalities, including political externalities. COVID showed that economists and politicians do not correctly model disease-spread externalities. Airplane travel and large indoor gatherings likely carry some disease-vector externalities, but the prices of these activities do not reflect them. Political decision-makers end up promoting or banning certain activities, yet there does not seem to be a proper utility calculation behind some of the choices: cruise ships were under-regulated, while small gatherings of people who already knew each other were over-regulated.
Biological externalities are a good metaphor for understanding "social cohesion" externalities. These are a more complicated problem, but economics research can approach them. Pollution metaphors can help us understand the notions of "signal pollution" and improper "behavioral modification." Putting a price on improper "nudges" from companies may be tricky, but it can re-use some of the existing protections against "fraud" or "misleading advertising." Of course, given that a lot of behavioral modification and signal pollution comes from the government itself, the notion of "self-regulation" is essential. If the framework of "human rights" continues to function as intended, which is a big if, then we may need to develop new rights for the digital age, such as the right of individuals to "not be nudged" too hard.
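For readers who want the textbook mechanics behind "pricing an externality," here is a minimal sketch of Pigouvian pricing under standard simplifying assumptions (linear demand, constant marginal costs). All numbers are invented for illustration; nothing here comes from the essay itself:

```python
# Illustrative Pigouvian pricing of an externality: with linear demand and a
# constant marginal external cost (e.g., expected disease spread per trip),
# the corrective tax equals the marginal external cost per unit.

def market_quantity(demand_intercept, demand_slope, price):
    """Quantity demanded at an effective price, from inverse demand
    p = a - b*q  =>  q = (a - p) / b."""
    return max(0.0, (demand_intercept - price) / demand_slope)

a, b = 100.0, 1.0    # inverse demand: willingness to pay = a - b*q
mpc = 20.0           # marginal private cost per unit of the activity
mec = 30.0           # marginal external cost per unit (the unpriced harm)

q_market = market_quantity(a, b, mpc)        # actors ignore the externality
q_social = market_quantity(a, b, mpc + mec)  # externality priced in
tax = mec  # Pigouvian tax: charge each unit its marginal external cost

print(f"unpriced q={q_market}, socially optimal q={q_social}, tax={tax}")
```

The hard part the essay points at is not this arithmetic but estimating the marginal external cost of something like a super-spreader event or a cohesion-eroding feed; that is where the research is actually needed.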
Platforms and people can use economics research to evaluate the costs of advertising-based vs. subscription-based business models. These problems are old, but we have to solve them now. We must understand the pricing of externalities, measures of "anti-economy," and behavioral modification. This understanding is essential to reducing the impact of narrow AIs and establishing the groundwork for AGI safety research.
One of the core philosophical directions for avoiding and mitigating both civilizational and existential risk is the ability to define, measure, and act on proper vs. improper behavior modification. What changes to a human being do we allow AIs to perform without our consent? Erring on the side of caution is most prudent here. Distributing and decentralizing the power of modification away from technocapital and towards kin networks is likely safer than centralization.
All parts
P1: Historical Priors [LW · GW]
P2: Behavioral Modification [LW · GW]
P3: Anti-economy and Signal Pollution [LW · GW]
P4: Bioweapons and Philosophy of Modification [LW · GW]
P5: X-risk vs. C-risk [LW · GW]
P6: What Can Be Done [LW · GW]
Thanks to Elliot Olds and Mike Anderson for previewing earlier drafts of the post.
My name is Pasha Kamyshev. Pre-pandemic, I was very active in the Seattle EA community. My previous LessWrong account is agilecaveman [LW · GW]. Follow me on my Substack or Twitter, or ask me for an invite to my work-in-progress startup: youtiki.com
4 comments
comment by scrollop · 2022-11-06T08:56:45.189Z · LW(p) · GW(p)
re Twitter:
Twitter is one of the worst sites in relation to social cohesion, and Musk's actions (and his likely future libertarian policy of allowing all speech with seemingly little moderation) will probably worsen the situation.
Most people may acknowledge that current social media trends (STEM-trained white male billionaire nerds using psychological techniques to addict humans to their platforms to get richer) are "unhealthy"; however, how do you pry the junk from the junkie?
- Government intervention? Won't sit well
- Social media companies admitting their nefarious intentions and agreeing a charter to "save humanity" and alter their algorithms? Not likely.
- The situation worsens until there is "an event", i.e. it hits rock bottom, giving the junkies a breath or two to look up, see the sunshine, and try to climb out of the hole. Not likely.
- Grass-roots movements? Some Twitter users are moving to Mastodon. Facebook is slowly (hopefully) suffocating, à la Myspace. People would likely not move until a platform has enough weight and "cool".
"the government will start using narrow AIs or bots to sway the public's perception of itself, demoralize the public, or radicalize it for war."
Not sure Western governments would dare do this, even if Trump re-entered government. Could they do this by proxy? Perhaps, though if discovered, the reaction would be intense.
re Mastodon - anyone know anyone of value to follow?
comment by PashaKamyshev · 2022-11-06T20:12:41.845Z · LW(p) · GW(p)
While I agree that Twitter is a bad site, I expect some of Musk's actions to make it better (though not fully fix it). Your attempt to tie in personality-based critiques (STEM / white / male) isn't helpful. Addiction to social platforms is a general issue and needs to be solved in a general way.
That said, the solutions you outline are in fact some of the ways the situation could proceed. I don't think 1 [government] is likely, or that it would sit well either.
However, 2 [fix] is plausible. Companies would not "admit" problems, but they could fix them without admitting them. Again, this requires thinking they are not used to, but it is plausible.
4 [new media] is plausible. The field is becoming more open, but there are barriers to entry.
The piece in general is aimed toward people who have the ability to make 2 and/or 4 happen.
In the absence of 2 and 4, the West eventually collapses and other nations learn from its mistakes.
comment by scrollop · 2022-11-09T10:37:09.790Z · LW(p) · GW(p)
What are you basing your optimism about the future of Musk's Twitter on?
(Sorry, I'm doing something wrong trying to insert links with markdown on)
[AP: Report: Tweets with racial slurs soar since Musk takeover](https://apnews.com/article/elon-musk-technology-business-government-and-politics-2907d382db132cfd7446152b9309992c?)
[BBC: Scale of abuse of politicians on Twitter revealed](https://www.bbc.co.uk/news/uk-63330885)
[Reuters: Elon Musk's Twitter slow to act on misleading U.S. election content, experts say](https://www.reuters.com/technology/elon-musks-twitter-girds-surge-us-midterm-election-misinformation-2022-11-08/)
What Musk says: ["Mr Musk insisted that the platform's commitment to moderation remained 'absolutely unchanged'."](https://news.sky.com/story/elon-musk-defends-culling-twitter-staff-but-insists-commitment-to-moderation-remains-absolutely-unchanged-12738642)
What Musk does: ["Yesterday’s reduction in force affected approximately 15% of our Trust & Safety organization (as opposed to approximately 50% cuts company-wide"](https://twitter.com/yoyoel/status/1588657227035918337)
The market will decide what to do with Twitter, it seems, though these are early days.
His antics and hypocrisy [aren't a good sign](https://www.thelondoneconomic.com/news/elon-musk-becomes-the-butt-of-the-joke-after-he-welcomes-comedy-back-on-twitter-338363/)
In terms of your riposte, "Your attempt to tie in personality-based critiques (STEM / white / male) isn't helpful":
The following quotes are from the book: [The Psychology of Silicon Valley, Cook, K. (2020)](https://link.springer.com/chapter/10.1007/978-3-030-27364-4_2)
"Simon Baron-Cohen, a psychologist and researcher at the University of Cambridge, has researched the neurological characteristics endemic in certain fields, most notably in science, technology, engineering, and mathematics (STEM ) professions. Baron-Cohen has repeatedly found that those with autism or autistic traits are over-represented in these disciplines, particularly in engineering and mathematics,Footnote 36,Footnote 37,Footnote 38 a finding that has been corroborated by different research teams.Footnote "
There is much anecdotal evidence and growing research that points to a correlation between the type of work necessitated in tech and the analytical, highly intelligent, and cognitively-focused minds of “Aspies” who may be instinctively drawn to the engineering community.
In 2012, technology journalist Ryan Tate published an article in which he argued that this obsessiveness was in fact "a major asset in the field of computer programming, which rewards long hours spent immersed in a world of variables, data structures, nested loops and compiler errors." Tate contended that the number of engineers with Asperger's was increasing in the Bay Area, given the skillset many tech positions demanded.
Entrepreneur and venture capitalist Peter Thiel similarly described the prevalence of Asperger’s in Silicon Valley as “rampant.” Autism spokesperson Temple Grandin, a professor at Colorado State University who identifies as an Aspie, also echoes Tate, Thiel, and Baron-Cohen’s conclusion:
Is there a connection between Asperger's and IT? We wouldn't even have any computers if we didn't have Asperger's…. All these labels—'geek' and 'nerd' and 'mild Asperger's'—are all getting at the same thing. …. The Asperger's brain is interested in things rather than people, and people who are interested in things have given us the computer you're working on right now.
**The most notable result is what many describe as a deficiency of emotional intelligence, particularly empathy, throughout the tech industry.**
Alex Stamos, former Chief Security Officer at Facebook:
As an industry we have a real problem with empathy. And I don’t just mean empathy towards each other… but we have a real inability to put ourselves in the shoes of the people that we’re trying to protect…. We’ve got to put ourselves in the shoes of the people who are using our products.
"...the woman described a systemic belief, particularly amongst executives, which held that those in the industry were the smartest and best suited to solve the problems they were tasked with, and therefore couldn’t “really learn anything from anyone else.”" I asked what she believed informed this attitude, the woman replied the problem stemmed, in her experience, from a lack of awareness and emotional intelligence within Silicon Valley.
Berners-Lee argues that the root cause of this returns, again and again, to “companies that have been built to maximise profit more than to maximise social good.”
I would say that the personality type of executives has an impact on the final goals and methods of tech firms, hence the comment.
EDIT: One more quote from the book:
"studies have shown how power rewires our brains in a way that Dacher Keltner, a professor of psychology at University of California, Berkeley, explains is comparable to a traumatic brain injury. Research by Keltner and others have found evidence of an inverse relationship between elevated social power and the capacity for empathy and compassion.29,30 These studies suggest that the degree of power people experience changes how their brains respond to others, most notably in the regions of the brain associated with mirror neurons, which are highly correlated with empathy and compassion.31 Keltner explains that as our sense of power increases, activity in regions of the orbito-frontal lobe decreases, leading those in positions of power to “stop attending carefully to what other people think,”32 become “more impulsive, less risk-aware, and, crucially, less adept at seeing things from other people’s point of view.”33"