
Comments sorted by top scores.

comment by Dagon · 2024-01-14T05:42:10.545Z · LW(p) · GW(p)

Paperclips have no problems, right?

More seriously, this is rather tenuous far-mode thinking - it's more a definition (a smart-enough AI can solve all problems that can be solved by intelligence.  Any unsolved problem indicates either not smart ENOUGH or not a "real" problem).  There's some amount of truth to it, in that MANY current problems feel like they could be solved if there were a trustworthy near-omniscient leader, who would naturally collect massive amounts of power in order to blast through coordination problems caused by people wanting "wrong" things.

Replies from: bhauth
comment by bhauth · 2024-01-14T06:30:18.868Z · LW(p) · GW(p)

Tenuous far-mode thinking? At this very moment, I see people using ChatGPT to write emails and reports that contain wrong technical statements but are written in a way that appeals to people who don't know they're wrong. Even medical students are doing this. This is even amplified intentionally: commercial LLMs go through an RLHF process that in some ways makes them more useful but in other ways makes them less accurate.

Replies from: Dagon
comment by Dagon · 2024-01-14T06:55:08.275Z · LW(p) · GW(p)

I think those examples reinforce my point.  Maybe I was unclear about the referent for "this".  People who claim that AI (which doesn't kill everyone) will solve all our problems are engaging in tenuous far-mode thinking.  Enumerating the problems would allow them (and us) to discuss the likelihood that a friendly/controlled AI will solve them.

Replies from: bhauth
comment by bhauth · 2024-01-14T07:08:39.357Z · LW(p) · GW(p)

I see, that's what you meant.

comment by quetzal_rainbow · 2024-01-13T15:09:49.548Z · LW(p) · GW(p)

"Good selection process for AI" is called "solved AI alignment". Conditional on solved alignment, we can reasonably expect AIs to be better at being rulers, because it's an equivalent to "we understand human neuropsychology and can use neurosurgery to make politicians honest, althruistic, smart and cooperative". Conditional on unsolved alignment, we are going to die and question of politics becomes here utterly uninteresting.

Replies from: bhauth
comment by bhauth · 2024-01-13T15:15:15.803Z · LW(p) · GW(p)

Solved-ness of complex problems is a continuum, not a binary. Shouldn't we consider intermediate states?

Replies from: quetzal_rainbow
comment by quetzal_rainbow · 2024-01-13T15:25:09.393Z · LW(p) · GW(p)

If we solve alignment at the level of "can safely design nanotech, can't safely rule the galaxies", we can use that level of alignment for moves like "solve mind uploading, use the speedup to solve the rest of the alignment problem" and "make sure that no stupid person puts a not-fully-aligned AI in charge". We can imagine different intermediate stages; what matters is the attractors.

Replies from: bhauth
comment by bhauth · 2024-01-13T15:31:13.514Z · LW(p) · GW(p)

It's strange to me that you think there's only a couple "attractors" in that situation.

Replies from: quetzal_rainbow
comment by quetzal_rainbow · 2024-01-13T15:48:20.571Z · LW(p) · GW(p)

Well, first of all, your question is phrased as "are AIs better at politics", and "better or not" is a binary. Second, let's suppose that we somehow solved notkilleveryoneism in the sense that we can ask an ASI for general-purpose nanotech, get the nanotech, and not die. Let's suppose that some reason X blocks AIs from becoming better politicians than humans. Why would we not solve this by asking the ASI "how do we solve/work around X?" The only reason imaginable to me is "it's physically impossible", and the claim "it's physically impossible to make better politicians from AIs" is quite an extraordinary one.

Replies from: bhauth
comment by bhauth · 2024-01-13T16:33:54.781Z · LW(p) · GW(p)

You seem to be misunderstanding the point of my post. The title isn't about AI being better at politics than politicians; it's about AI being ineffective/incompetent at accomplishing desired goals yet being chosen by selection systems, as some current politicians are. It's about whether AIs would have similar types of flaws to current politicians.

You don't ask "an AI" to "make a technology". You choose a possible AI to make that technology for you, based on some criteria. Saying "the problem is just alignment here" implies that all of those potential AI selections are capable of making the technology for you. But what's actually being selected for is getting selected. Why should we expect an AI's output to be a real answer instead of some BS that appeals to people more than a real answer and is easier to output?

Replies from: quetzal_rainbow
comment by quetzal_rainbow · 2024-01-13T17:44:24.575Z · LW(p) · GW(p)

By "AIs as better politicians" I meant exactly "better as politicians who achieve the desires and goals of the population".