Posts

On knowledge extrapolation in AI models 2024-05-14T04:28:19.467Z

Comments

Comment by Inosen Infinity (Inosen_Infinity) on Why the Best Writers Endure Isolation · 2024-07-17T08:53:58.208Z · LW · GW

I'm curious to know more. Could you describe your environment and your actions in more detail?

Were you in a place with absolutely nothing to do or was there at least something to turn your attention to? 

How did you spend that day -- were you, say, staring at a blank wall, lying on a sofa, walking around the room, or something else?

And what was on your mind? Even if you accomplished nothing that day, did you perhaps think of some ideas on your topics of interest?

Comment by Inosen Infinity (Inosen_Infinity) on AI takeoff and nuclear war · 2024-06-12T10:43:51.828Z · LW · GW

Thanks!

The idea sounds nice, but in practice it may turn out to be a double-edged sword. If there is an AI that could significantly help with oversight of decision-makers, then there is almost surely an AI that could help the decision-makers steer public opinion in their desired direction. And since leaders usually have more resources (networks, money) than the public, I'd assume this scenario is more likely than the successful-oversight scenario. Intuitively, far more likely.

I wonder how we could achieve oversight without getting controlled back in the process. Seems like a tough problem.

Comment by Inosen Infinity (Inosen_Infinity) on AI takeoff and nuclear war · 2024-06-12T07:51:41.103Z · LW · GW
  • Good AI tools could help people to make better sense of the world, and make more rational decisions.

I have a feeling this may go both ways. If AI development is nation-led (which may become true at some point somewhere), the nation's leaders would presumably want the AI to be aligned with their own values. There is some risk that biases would thereby be solidified rather than overcome, and the AI would recommend even more irrational (in terms of the common good) decisions -- or rather, rational decisions built on irrational premises. That could increase the risk of conflict. This may be especially true for authoritarian countries.

  • AI could potentially give new powerful tools for democratic accountability, holding individual decisions to higher standards of scrutiny (without creating undue overhead or privacy issues)

The way I understand it, this could work by giving democratic leaders with a "democracy-aligned AI" more effective influence over nondemocratic figures (via fine-tuned persuasion, some kind of AI-designed political zugzwang, etc.), thus reducing totalitarian risks. Is my understanding correct?

(I also considered that you might mean a not-yet-aligned leader would agree to cooperate with an aligned AI, but that sounds unlikely -- such a leader would probably refuse because their values would differ from the AI's.)