Comments

Comment by agi-hater on Simulation arguments · 2022-09-28T18:44:03.441Z · LW · GW

It does not follow that computationally cheaper things are more likely to happen than computationally expensive things. Moreover, describing something as "computationally difficult" is a subjective judgment (unless you can reasonably prove otherwise), and it presupposes that all actions and events can be reduced to some form of computation.

Comment by agi-hater on Why I think strong general AI is coming soon · 2022-09-28T17:39:30.598Z · LW · GW

Furthermore, you compare humans to computers and brains to machines, and you imply that consciousness is computation. To say that "consciousness is not computation" is comparable to a "god of the gaps" argument is ironic, considering the existence of the AI effect. Your view is hardly coherent under any worldview other than hardcore materialism (which is itself incoherent). Again, we stumble into an area of philosophy, which you hardly addressed in your article. Instead, you focused on predicting how good our future computers will be at computing, while making appeals to emotion, appeals to unending progress, and appeals to the fallacy that solving the last 10% of the "problem" is as easy as the other 90%: that because we are "close" to imitating intelligence (and we are not, if you consider the full view of intelligence), we have somehow grasped its essence, and "if only we get slightly better at X or Y, we will solve it".

Scientists have been predicting the coming of AGI since the 1950s; some believed 70 years ago that it would take only another 20 years. We have clearly not changed as humans. The question of intelligence, and thus the question of AGI, is in many ways inherently linked to philosophy, and it is clear that your philosophy is materialism, which cannot provide a good understanding of "intelligence" and related ideas like mind, consciousness, and sentience. If you were to reconsider your position and ditch materialism, you might find that your idea of AGI is not compatible with the abilities of a computer, or of non-living matter in general.

Comment by agi-hater on Why I think strong general AI is coming soon · 2022-09-28T17:36:45.509Z · LW · GW

Your definition of AGI ("the kind of AI with sufficient capability to make it a genuine threat to humanity's future or survival if it is misused or misaligned") is tragically insufficient: it is vague, subjective, and arguably misaligned with the generally accepted definition of AGI.

From what you wrote elsewhere ("An AGI having its own goals and actively pursuing them as an agent"), you imply that the threat could come from the AGI's intentions; that is, you imply that an AGI would have consciousness, intentionality, etc., qualities so far exclusively ascribed to living things (you have provided no arguments to think otherwise).

However, you decided to define "intelligence" as "stuff like complex problem solving that's useful for achieving goals", which means that intentionality, consciousness, etc. are unconnected to it; realistically, any sufficiently "complex" algorithm satisfies this condition, as the sketch below illustrates. Such a simplistic, reductionist definition implies that "intelligence" alone is not enough to make a computer an AGI. So, while you may be able to prove that a computer could have "intelligence", it still does not follow that AGI is possible.
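To make the point concrete, here is a minimal sketch (my own illustration, not code from the article): a plain breadth-first search that does "complex problem solving that's useful for achieving goals", namely finding a path through a grid. Under the quoted definition, even this trivially counts as "intelligent", despite having no intentionality or consciousness whatsoever.

```python
from collections import deque

def find_path(grid, start, goal):
    """Return a list of (row, col) steps from start to goal, or None.

    grid: list of strings; '#' marks a wall, anything else is passable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # maps each visited cell to its predecessor

    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the predecessor chain backwards to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # the "goal" is unachievable from start

# A textbook problem: navigate a maze to reach a goal.
maze = ["....#",
        ".##.#",
        ".#...",
        ".#.#.",
        "...#."]
print(find_path(maze, (0, 0), (4, 4)))
```

If a textbook algorithm like this already satisfies the definition, the definition clearly cannot bear the weight that "AGI" is supposed to carry.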

Your core idea that "We've already captured way too much of intelligence with way too little effort." may be true under your definition of "intelligence", but I hope I have shown that such a definition is not enough. Researchers at Harvard suggest the existence of multiple types of intelligence, which your statement does not take into account: it lumps all types of intelligence together, even though some are impossible for a computer to have and others could be considered defining qualities of a computer.