Comments

Comment by Brian Edwards (brian-edwards) on Seeking (Paid) Case Studies on Standards · 2023-06-01T14:03:46.046Z · LW · GW

I submitted a proposal but have not received a confirmation that it was received. Perhaps I should submit again?

Comment by Brian Edwards (brian-edwards) on What if LaMDA is indeed sentient / self-aware / worth having rights? · 2022-06-16T19:00:03.429Z · LW · GW

These questions are ridiculous because they conflate "intelligence" and "sentience", the latter also known as sensory experience or "qualia". While we often have a solid epistemic foundation for the claims we make about intelligence because we can measure it, sentience is not something that can be measured on a relative spectrum. Spontaneous emotional and sensory experience is entirely independent of intelligence, and most definitely independent of an external prompt.

You are right that infants are DEFINITELY sentient, but what does that have to do with Lemoine's claims, or even with language? Humans are born sentient; they do not develop sentience or mature from a non-sentient to a sentient state during infancy. We know this because, despite having no language skills of their own, infants are born capable of distinguishing their parents' voices from others'. They can instinctively communicate their desires in the form of emotional outbursts that signal to us their potential needs or sources of irritation. Human sentience is a priori, present from our first sensory experience. Not one bit of learned intelligence or language is necessary for sentience, nor are demonstrations of intelligence and language sufficient evidence of sentience by themselves.

Also, what is the basis for thinking silicon-based systems and carbon-based systems have comparable qualia? This is a serious question.

Comment by Brian Edwards (brian-edwards) on Blake Richards on Why he is Skeptical of Existential Risk from AI · 2022-06-16T13:14:55.250Z · LW · GW

"AGI" doesn't actually make ANY claim at all. That is my primary point, it is an utterly useless term, other than that is sufficiently meaningful and meaningless at the same time that it can be the basis for conveying an intangible concept.

YOU, specifically, have not made a single claim that can be falsified. Please point me at your claim if you think I missed it.

Comment by Brian Edwards (brian-edwards) on Blake Richards on Why he is Skeptical of Existential Risk from AI · 2022-06-15T00:18:20.359Z · LW · GW

If that's what "general" means, why not just say "conscious AI"? I suspect the answer is that the field has already come to terms with the fact that conscious machines are philosophically unattainable. Another word was needed that was both sufficiently meaningful and sufficiently meaningless to refocus (or, more accurately, misdirect) attention to "The Thing Humans Do That Machines Don't That Is Very Useful".

The burden of defining concepts like "AGI" is on the true believers, not the skeptics. Labeling someone "disappointingly stupid" when they aren't the one making non-falsifiable claims about binary systems doing the "sort of stuff I can do" does not meet that burden. Simply making fun of your critics for lacking sufficient imagination to comprehend your epistemically incoherent claims is nothing more than lazy burden-shifting.

I do get a kick out of statements like "but you can't explain to me how you recognize a cat", as if the epistemically weak explanations for human general intelligence excuse or even somehow validate epistemically weak explanations for AGI.