Comments

Comment by Nutrition Capsule on sarahconstantin's Shortform · 2024-12-10T16:23:01.572Z · LW · GW

As for a specific group of people resistant to peer pressure: psychopaths. Psychopaths don't conform to peer pressure easily - or to any kind of pressure, for that matter. Many of them are in fact willing to murder, sit in jail, or otherwise become very ostracized if it aligns with whatever goals they have in mind. I'd wager that the fact that a large percentage of psychopaths literally end up jailed speaks for itself - they just don't mind the consequences that much.

This is easily explained by psychopaths being fearless and mostly lacking empathy. As far as I recall, some physiological correlates exist - psychopaths show a low cortisol response to stressors compared to normies. On top of the apparent fact that they are indifferent towards others' feelings, some brain imaging data supports this as well.

What they might be more vulnerable to is that peer pressure sometimes goes hand in hand with power and success. Psychopaths like power and success, and they might therefore play along with rules to get more of what they want. That might look like caving in to peer pressure, but judging by how the pathology is contemporarily understood, I'd still say it's not the pressure itself, but the benefits aligned with succumbing to it.

Comment by Nutrition Capsule on Overcoming Bias Anthology · 2024-11-02T10:14:02.827Z · LW · GW

Hanson seems to treat the global civilization as a cultural melting pot, but he does distinguish insular subcultures from it. My intuition is that he sees contemporary cultures on a gradient relative to global, hegemonic trends (which correlate with technological progress, increasing wealth and education), and thereby subject to drifting pressures.

Comment by Nutrition Capsule on Overcoming Bias Anthology · 2024-10-23T04:16:38.078Z · LW · GW

I wouldn't equate Robin's perspectives on culture with reactionary movements or conservatism. If anything, he seems quite open to radical transformations of society (e.g. futarchy to replace parliamentarism, bounty systems and vouching to replace policing, private insurance policies to replace welfare policies, etc.).

Whereas (neo-)reactionary / conservative thought often simply intends to return to some previous status quo, Robin does not profess such views and has not proposed such solutions. In fact, as far as I'm aware, he hasn't proposed any solutions at all as of yet.

EDIT: I (mis-)interpreted your comment as saying Robin is pushing (neo-)reactionary ideas. I do agree that conservative and reactionary movements generally show interest in cultural drift as a phenomenon. However, if you propose that Robin's ideas themselves are not novel, I'd like to hear which ideas in particular you think have already been tackled for millennia or some other timescale.

Comment by Nutrition Capsule on Overcoming Bias Anthology · 2024-10-20T13:01:45.471Z · LW · GW

Very good! Hoping to see - and weakly intending to compile - a post list of his latest boom (fertility decline, which led him to culture). I attended one of Robin's Zoom meetings on culture, and I'm confident it is on par with his other great fixations thus far (prediction markets, signaling, ems and aliens), if not even bigger. Robin seems absolutely possessed by the phenomenon.

For those who do not follow him: Robin has begun seeing culture as broken/maladaptive, and he seems to think this is perhaps the key issue of our time, on par with or bigger than climate change and AI. He thinks that cultural change is being driven in directions which will eventually lead to population decline and other nasty places, even though he remains optimistic about our species' future in the long run.

Comment by Nutrition Capsule on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-10-05T11:35:36.787Z · LW · GW

I interpreted Eliezer as writing from the assumption that the superintelligence(s) in question are in fact not already aligned to maximize whatever it is that humanity needs to survive, but some other goal(s), which diverge from humanity's interests once implemented.

He explicitly states that the essay's point is to shoot down a clumsy counterargument (along the lines of "it wouldn't cost the ASI a lot to let us live, so we should assume they'd let us live"). So the context (as I interpret it) is that such requests, however sympathetic, have not been ingrained into the ASI's goals. Using a different example would mean he was discussing something different.

That is, "just because it would make a trivial difference from the ASI's perspective to let humanity thrive, whereas it would make an existential difference from humanity's perspective, doesn't mean ASIs will let humanity thrive", assuming such conditions aren't already baked into their decision-making.

I think Eliezer spends so much time working from these premises because he believes 1) that an unaligned ASI is the default outcome of current developments, and 2) that all current attempts at alignment will necessarily fail.

Comment by Nutrition Capsule on The dangers of reproducing while old · 2023-11-17T07:55:01.980Z · LW · GW

My understanding goes along similar lines, so I'm not highly doubtful. If anything, I've had the idea that the risk of developmental disorders and miscarriage, difficulties in getting pregnant, and some pregnancy-related issues might begin rising substantially much sooner than in one's 30s.

To me it seems that the overwhelming majority of children conceived even after 35 are healthy and fine - that is, >99% without autism, >98% without chromosome disorders. The risk of miscarriage is more relevant. All things considered, I believe this evidence means people should likely not be too worried about whether they are already too old to have kids.

Whether having kids earlier might still be better, accounting for the costs to one's career or business, etc., is another discussion, particularly when thinking of large numbers of people. However, AFAIK a lot of people already want to conceive while they are young, and I'm not sure whether people considering having kids can be significantly swayed one way or another by this evidence alone.

(comment edited: missed the link at first sight)

Comment by Nutrition Capsule on Stuxnet, not Skynet: Humanity's disempowerment by AI · 2023-11-07T13:53:15.744Z · LW · GW

Thanks for the post. A layperson here, with little to no technical knowledge and no high-g-mathematical-knowitall-superpowers. I highly appreciate this forum and the abilities of the people writing here. Differences in opinion are likely due to me misunderstanding something.

As for examples or thought experiments on specific mechanisms behind humanity losing a war against an AI (or several AIs cooperating), I often find them too specific or unnecessarily complicated. I understand the point is simply that a vast number of possible, and likely easy, ways exist to wipe out humanity (or to otherwise make sure humanity won't resist), but I'd still like to see more of the claimed simple, boring, mundane ways this could happen than this post includes. Such as:

  • Due to the economic and social benefits they've provided, AI systems eventually control, or are able to take over, most of the world's widely adopted industrial and communication infrastructure.
    • The need and incentive for creating such optimization might be, for example, that humanity wants to feed its hungry, treat its sick and provide necessary and luxury goods to people. International cooperation leading to mutual benefits might outweigh waging war to gain land, and most people might then end up agreeing that being well fed, healthy and rich outweighs the virtues of fighting wars.
    • These aims are to be achieved under the pressure of climate change, water pollution, dwindling fossil fuel reserves et cetera, further incentivizing leaning on smart systems instead of mere human cooperation.
    • Little by little, global food and energy production, infrastructure, industry and logistics are then further mechanized and automated, as has more or less happened already. The regions where this is not done are outcompeted by the regions where it is. These automated systems will likely eventually be able to communicate with one another to enable the sort of "on-time" global logistics whose weaknesses have now become more apparent, yet on a scale that convinces most people that using it is worth the risks. Several safeguards are in place, of course, and this is thought to be enough to protect from catastrophic consequences.
  • Instead of killer robots and deadly viruses, AIs willing to do so then sabotage global food production and industrial logistics to the extent that most people will starve, freeze, be unable to get their medications or otherwise face severe difficulties in living their lives. 
    • This likely leads to societal collapse, anarchy and war, hindering human cooperation and preventing people from resisting the AI systems, which are now mostly in control of global production and communication infrastructure.
    • Killing all humans will likely not be necessary unless they are to be consumed for raw materials or fuel, just as killing all chimps isn't necessary for humanity. Humanity likely does not pose any kind of risk to the AI systems once most of the major population centers have been wiped out, most governments have collapsed, and most people are unable to understand the way the world functions - and especially are unable to survive without the help of the industrial society they've grown accustomed to.
      • The small number of people willing and able to resist intelligent machines might be compared to smart deer willing to resist and fight humanity, posing negligible risk.

Another example, including killer robots:

  • AIs are eventually given autonomous control of most robots, weapons and weapon systems.
    • This might happen as follows: nations or companies willing to progressively give AIs autonomous controls end up beating everyone who doesn't. AIs are then progressively given control over armies, robots and weapons systems everywhere, or only those willing to do so remain in the end.
  • Due to miscalculation on the AIs' part (a possibility not stressed nearly enough, I think), or due to inappropriate alignment, the AI systems then end up destroying enough of the global environment, population, or energy, food or communications infrastructure that most of humanity will end up in the Stone Age or somewhere similar.

I think one successful example of pointing to AI risk without writing fiction was Eliezer musing on the possibility that AI systems might, due to some process of self-improvement, end up behaving in unexpected ways such that they are still able to communicate with one another but unable to communicate with humanity.

My point is that providing detailed examples of AIs exterminating humanity via nanobots, viruses, highly advanced psychological warfare et cetera might serve to further alienate those who do not already believe AIs would be able or willing to do so. I think that pointing to the general vulnerabilities of global human techno-industrial societies would suffice.

Let me emphasize that I don't think the examples provided in the post are necessarily unlikely to happen, or that what I've outlined above should somehow be more likely. I do think that global production as it exists today seems quite vulnerable to even relatively slight perturbations (such as a coronavirus pandemic or some wars being fought), and that simply nudging these vulnerabilities might suffice to quickly end any threat humanity could pose to an AI's goals. Such a nudge might also be possible, and even increasingly likely, due to wide AI implementation, even without an agent-like Singleton.

A relative pro of focusing on such risks is the view that humanity does not need a godlike singleton to be existentially, catastrophically f-d, and that even moderately capable AGI systems severely risk putting an end to civilization, without anything going foom. Such events might be even more likely than nanobots and paperclips, so to say. Consistently emphasizing these aspects might convince more people to be wary of unrestricted AI development and implementation.

Edit: It's possibly relevant that I share Paul's views re: slow vs. fast takeoff, insofar as I find a slow takeoff likely to happen before a fast one.