Constituency-sized AI congress?

post by Nathan Helm-Burger (nathan-helm-burger) · 2024-02-09T16:01:09.592Z · LW · GW · 1 comment

This is a question post.


I just had the idea for a constituency-sized AI congress. Each member of the constituency would have their personal debate agent trained on their preferences and values. The agents would debate tirelessly at superhuman speed to develop proposals which represented the best available win-win compromises given the issues on the table.

The final proposals after a set amount of debate would be presented to the constituency and voted on.
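To make the shape of this concrete, here is a toy sketch. Everything in it is a made-up stand-in: real agents would be LLMs negotiating over rich proposals, not utility functions over a single axis. The "debate" is reduced to a search for the proposal maximizing the worst-off constituent's utility (one possible formalization of "best available win-win compromise"), followed by a ratification vote by the constituents themselves.

```python
from dataclasses import dataclass
import random

@dataclass
class Agent:
    name: str
    ideal: float  # preferred position on a single 0..1 issue axis (toy stand-in)

    def utility(self, proposal: float) -> float:
        return 1.0 - abs(self.ideal - proposal)

def debate(agents: list[Agent], rounds: int = 1000) -> float:
    """Stand-in for debate: search for the proposal maximizing the
    minimum utility across constituents (a maximin compromise)."""
    rng = random.Random(0)
    best = 0.5
    best_score = min(a.utility(best) for a in agents)
    for _ in range(rounds):
        candidate = rng.random()
        score = min(a.utility(candidate) for a in agents)
        if score > best_score:
            best, best_score = candidate, score
    return best

def ratify(agents: list[Agent], proposal: float, threshold: float = 0.5) -> bool:
    """The constituency votes on the final proposal: approve if it clears
    each voter's acceptability threshold, pass on majority approval."""
    ayes = sum(a.utility(proposal) >= threshold for a in agents)
    return ayes * 2 > len(agents)

agents = [Agent("a", 0.2), Agent("b", 0.4), Agent("c", 0.9)]
proposal = debate(agents)
print(ratify(agents, proposal))  # → True
```

The maximin objective is just one choice; a real system would need to decide what "win-win" means formally, which is itself a values question.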

I haven't researched this yet or thought about it for long. I'd love for your feedback on the idea and links to related work.

Answers

answer by Zac Hatfield-Dodds · 2024-02-11T07:11:25.846Z · LW(p) · GW(p)

I think there's a lot of interesting potential in such ideas - but that this isn't ambitious enough! Democracy isn't just about compromising on the issues on the table; the best forms involve learning more and perhaps changing our minds... as well as, yes, trying to find creative win-win outcomes that everyone can at least accept.

I think that trying to improve democracy with better voting systems is fairly similar to trying to improve the economy with better price and capital-allocation systems. In both cases, there have been enormous advances since the mid-1800s; in both there's a realistic prospect of modern computers enabling wildly better-than-historical systems; and in both cases it focuses effort on a technical subproblem which is not sufficient and maybe not even necessary. (And also there's the spectre of communism in Europe haunting both.)

A few bodies of thought and work on this that I like:

But as usual, the hard and valuable part is the doing!

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-02-13T17:48:46.936Z · LW(p) · GW(p)

Yes, I agree that this is at best just one piece of the puzzle. I have a doc collecting ideas here: Governance and Epistemics resources

answer by Gerald Monroe · 2024-02-10T07:17:06.724Z · LW(p) · GW(p)

I think there are a couple of interesting elements here.

  1. Once you acknowledge that the individual AI representatives will act on their users' preferences/values, there will be many situations where the optimal move is not what the individual person believes should be done.

Take a simple example. A large part (basically all?) of the US population wants cheap housing to be available, and for elite housing to be built in a value-maximizing way (i.e., the elite want to get their money's worth). Yet common preferences are: "no new housing built near me, where the noise/traffic/sight will affect me," "building new luxury housing won't lower the market price for housing because demand is infinite," and "also, I don't like seeing homeless people."

What a person claims to want is opposed to how they want the government to act.

This will also make it difficult to audit one's AI representative. Decisions will become extremely complex negotiations.

  2. If a single person's only voice is a vote, then for most issues the preferences of most voters don't matter; they can be ignored on the margin. This is because current democracy "bundles" decisions. Perhaps you had in mind a direct democracy where a person's AI representative votes on every decision.
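The bundling point can be made concrete with a toy example (hypothetical numbers): two issues each have majority support when voted on separately, yet a bundled platform containing both fails, because no single voter agrees with the whole package.

```python
# Each voter's preferences on (issue_A, issue_B); True = in favor.
voters = [
    (True, False),
    (True, True),
    (False, True),
]

def issue_by_issue(voters):
    """Direct democracy: each issue passes on its own majority."""
    n = len(voters)
    return tuple(sum(v[i] for v in voters) * 2 > n for i in range(2))

def bundled(voters, platform=(True, True)):
    """Bundled vote: a voter approves the platform only if it matches
    them on a strict majority of issues; platform passes on majority approval."""
    approvals = sum(
        sum(p == v[i] for i, p in enumerate(platform)) * 2 > len(platform)
        for v in voters
    )
    return approvals * 2 > len(voters)

print(issue_by_issue(voters))  # → (True, True): both issues pass separately
print(bundled(voters))         # → False: the bundle fails
```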

If you can separate the how from the what, I wonder what people actually disagree on. An enormous amount of political conflicts seem to be disputes over the how, where people cannot agree on what policy has the highest probability of achieving a goal.

This is essentially just human ignorance: given a common data set about the world, rational agents cannot agree to disagree. At any instant in time there is exactly one optimal policy with the highest expected value (as measured by backtesting, etc.).

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-02-10T16:48:48.585Z · LW(p) · GW(p)

Very good points. Yes, I was imagining that this would enable a direct democracy style system, with less dependence on elected representatives and less bundling of issues. I was also imagining that it could be tested to see what the theoretical outcomes would have been. And tried out on small politically polarized groups.

The difficulty of auditing is a tricky one. But since people will have control over their own agent, they can instruct their agent to be more blunt and less strategic if they want.

I think separating the how from the what is tricky. I think futarchy is one of the few proposals I've heard to potentially help with this. I think having a congress of AI agents all based on the same LLM, differing only in the complex prompt they have been given, at least reduces the problem of intelligence differentials. I imagine the prompt is generated by an automated process of the user answering a long series of questions about their values. Then users can opt to add additional specifications such as the directive to be blunt for easier auditing. But only via a process of dialogue with the agent and automatic summarization of the dialogue, so that it would be harder to do weird prompt engineering stuff.
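As a rough sketch of the pipeline I have in mind (all names here are placeholders, and the word-truncating "summarizer" stands in for a real LLM call): fixed value questions are answered by the user, any extra directives pass through a summarization step rather than going into the prompt verbatim, and the agent prompt is assembled only from those summaries, which is what makes raw prompt-engineering harder.

```python
QUESTIONS = [
    "How should housing policy trade off affordability against local control?",
    "How blunt should your representative be in negotiations?",
]

def summarize(dialogue: str, max_words: int = 30) -> str:
    """Placeholder for an LLM summarizer; here it just truncates."""
    return " ".join(dialogue.split()[:max_words])

def build_agent_prompt(answers: list[str], extra_dialogue: str = "") -> str:
    """Assemble the agent's prompt from summarized answers, never raw text."""
    lines = ["You represent one constituent. Their stated values:"]
    for question, answer in zip(QUESTIONS, answers):
        lines.append(f"- Q: {question}\n  A: {summarize(answer)}")
    if extra_dialogue:
        lines.append("Additional directives (summarized): " + summarize(extra_dialogue))
    return "\n".join(lines)
```

A real version would need the summarizer itself to be robust, since it becomes the new injection surface.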

I do think that even once you've gotten past the problem of how to focus on the what, you will find at least some remaining disagreements. Different fundamental values between different people.

1 comment


comment by Shankar Sivarajan (shankar-sivarajan) · 2024-02-09T21:36:38.631Z · LW(p) · GW(p)

Are you trying to make plebiscites work with AI? Interesting idea.