JL, I’ve programmed in several languages, but you have me correctly pegged as someone who is more familiar with databases. And since I’ve never designed anything on the scale we’re discussing I’m happy to defer to your experience. It sounds like an enormously fun exercise though.
My original point remains unanswered, however. We’re demanding a level of intellectual rigour from our monotheistic party-goer. Fair enough. But nothing I’ve seen here leads me to believe that we’re as open-minded as we’re asking him to be. Would you put aside your convictions and adopt religion if a skilful debater put forward an argument more compelling than yours? If you were still to say “no” in the face of overwhelming logic, you couldn’t justifiably identify yourself as a critical thinker. And THAT’S what I was driving at. Perhaps I’m reading subtexts where none exist, but this whole anecdote has felt less like an exercise in deductive reasoning than like having sport at someone else’s expense (which is plainly out of order).
I don’t really have any passion for debating, so I’ll leave it there. I’m sure EY can pass along the email address I entered on this site if you’re determined to talk me out of my wayward Christianity.
Best of luck to you all.
Johnny Logic: some good questions.
“Tolerance is not about letting every person's ideas go unchallenged; it's about refraining from other measures (enforced conformity, violence) when faced with intractable personal differences.”
That’s certainly the bare minimum. His beliefs have great personal value to him, and it costs us nothing to let him keep them (as long as he doesn’t initiate theological debates). Why not respect that?
“How do you know what the putative AI "believes" about what is advantageous or logical?”
By definition, wouldn’t our AI friend have clearly defined rules that tell us what it believes? Even if we employ some sort of Bayesian learning algorithm that changes behaviour, its actions would be well scripted.
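To make that concrete, here’s a minimal sketch (my own toy construction, not anyone’s actual design): a beta-Bernoulli learner whose “beliefs” change with experience yet remain nothing more than plainly inspectable numbers.

```python
# A toy Bayesian agent: its "belief" that an action is advantageous
# is just the posterior mean of a Beta distribution, readable at any time.

class BayesianAgent:
    """Tracks belief that an action is advantageous via Beta pseudo-counts."""

    def __init__(self, prior_success=1.0, prior_failure=1.0):
        self.successes = prior_success   # pseudo-count of good outcomes
        self.failures = prior_failure    # pseudo-count of bad outcomes

    def observe(self, outcome_was_good: bool) -> None:
        """Bayesian update: increment the matching pseudo-count."""
        if outcome_was_good:
            self.successes += 1
        else:
            self.failures += 1

    def belief(self) -> float:
        """Posterior mean P(action is advantageous) -- fully inspectable."""
        return self.successes / (self.successes + self.failures)

agent = BayesianAgent()
agent.observe(True)
agent.observe(False)
agent.observe(True)
print(f"Agent 'believes' the action helps with p = {agent.belief():.2f}")
```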
“How do you know that other humans are feeling compassion?”
I’m not sure this can be answered without an emotive argument. If you’re confident that your actions are always consistent with your personal desires (if they exist), then you have me beaten. I personally didn’t want to wake up and go to work on Monday, but you wouldn’t know it by my actions since I showed up anyway. You’ll just have to take my word for it that I had other unquantifiable impulses.
“In other words, how do you feel about the Turing test, and how, other than their behavior, would you be able to know what people or AIs believe and feel?”
I think you might be misapplying the Turing test. Let’s frame this as a statistical problem. When you perform analysis, you separate factors into those that have predictive power and those that don’t. A successful Turing test would tell us that a perfect predictive formula is possible, and that we might be able to ignore some factors that don’t help us anticipate behaviour. It wouldn’t tell us that those factors don’t exist, however.
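Here’s a toy illustration of what I mean (entirely my own construction; the factor names are invented). Behaviour below is driven almost entirely by one observable factor, so a predictive model can safely drop the other, yet the dropped factor still exists in the process that generated the data.

```python
# Two candidate factors; only one helps predict behaviour.
# Dropping the other from the model says nothing about whether it exists.

import random

random.seed(0)
data = []
for _ in range(1000):
    mood = random.random()          # hidden, unquantified factor
    incentive = random.random()     # observable factor
    showed_up = incentive > 0.3     # behaviour driven by incentive alone
    data.append((mood, incentive, showed_up))

def accuracy(predict):
    """Fraction of observations the given predictor gets right."""
    return sum(predict(m, i) == s for m, i, s in data) / len(data)

print("incentive only:", accuracy(lambda m, i: i > 0.3))   # perfect predictor
print("mood only:     ", accuracy(lambda m, i: m > 0.5))   # roughly chance
```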
“It was a bludgeoning by someone with training and practice in logical reasoning on someone without.”
I’m inclined to agree. I also found it less than convincing.
Let’s put aside the question of whether intelligence indicates the presence of a soul (although I’ve known more than a few highly intelligent people who are also morally bankrupt).
If it’s true that you can disprove his religion by building an all-encompassing algorithm that passes as a pseudo-soul, then the inverse must also hold. If you can’t quantify all the constituent parts of a soul, then you would have to accept that his religion offers a better explanation of the nature of being than AI does, and you would have to believe it until a better explanation presents itself. That seems fair, no?
If you can’t make that leap, then now would be a good time to examine your motives for any satisfaction you felt at his mauling. I’d argue your enjoyment is less about debating ability and more about the satisfaction of putting the “uneducated” in their place.
So let’s consider the emotion of compassion. You can design an algorithm so that it knows what compassionate behaviour looks like. You could also design it so that it learns when this behaviour is appropriate. But at no point is your algorithm actually “feeling” compassion, even when it’s demonstrating it. It’s following a set of predefined rules (with perhaps some randomness and adaptation built in) because it believes it’s advantageous or logical to do so. If this were a human being, we’d apply the label “sociopath”. That, to me, is a critical distinction between AI and soul.
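A throwaway sketch of the kind of thing I mean (all names here are hypothetical, invented for the example): an agent that recognises distress cues and emits a scripted “compassionate” response, even adapting when to show it, with no inner feeling anywhere in its state.

```python
# A rule-based "comforter": the compassionate display is scripted,
# and the only thing that adapts is a learned weighting for when to show it.

DISTRESS_CUES = {"crying", "sighing", "silence"}

class RuleBasedComforter:
    def __init__(self):
        self.appropriateness = 0.5   # learned estimate, adapts with feedback

    def respond(self, observed_cue: str) -> str:
        """Display compassion when cues and the learned weighting say to."""
        if observed_cue in DISTRESS_CUES and self.appropriateness > 0.4:
            return "offer comfort"   # scripted display, nothing felt
        return "carry on"

    def feedback(self, display_was_welcome: bool) -> None:
        """Adapt: nudge the weighting toward outcomes judged advantageous."""
        target = 1.0 if display_was_welcome else 0.0
        self.appropriateness += 0.1 * (target - self.appropriateness)

bot = RuleBasedComforter()
print(bot.respond("crying"))        # "offer comfort" -- by rule, not feeling
bot.feedback(display_was_welcome=True)
```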
Debates like these take all the fun right out of AI. It’s disappointing that we need to debate the merits of tolerance on forums like this one.