"Artificial Intelligence" (new entry at Stanford Encyclopedia of Philosophy)
post by fortyeridania
This is a link post for https://plato.stanford.edu/entries/artificial-intelligence/
comment by fortyeridania ·
2018-07-19T09:52:34.186Z
The author is Selmer Bringsjord.
Wikipedia: https://en.wikipedia.org/wiki/Selmer_Bringsjord
↑ comment by gjm ·
2018-07-19T15:13:36.379Z
Bringsjord is the author of a "proof" that P=NP. It is ... not an impressive piece of work, or at least I don't find it so. And it fails to be impressive in a way that seems highly relevant to philosophizing about AI. Namely, Bringsjord seems to think he's entitled to leap from "such-and-such a physical system seems like it correctly finds optimal solutions to small instances of the Steiner tree problem" to "such-and-such a physical system will somehow find the optimal solution in every instance of the Steiner tree problem, at least given a large enough universe to do it in", which is complete nonsense, and to my mind making such a fundamental error of reasoning (and doubling down on it when challenged, which seems to have happened) indicates a mind unsuited for this kind of thinking. It's not just that his logic is bad; it's that he seems not to be even trying to think in terms of what the universe might actually be doing to solve small easy STP instances, and what might happen when you try to generalize that to big hard ones.
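To make the small-versus-large gap concrete, here is a minimal sketch (mine, not from the thread, and using the graph variant of the Steiner tree problem rather than the geometric one soap films address): an exact solver that tries every subset of candidate Steiner vertices. It works instantly on toy instances, but the subset loop is exponential, which is exactly why solving small instances says nothing about solving large ones.

```python
# Brute-force exact Steiner tree in a weighted graph: for every subset of
# optional "Steiner" vertices, build a minimum spanning tree over the
# terminals plus that subset and keep the cheapest connected result.
# Illustrative only: the outer loop visits 2^k subsets.
from itertools import combinations

def mst_weight(vertices, edges):
    """Kruskal's MST on the induced subgraph; returns total weight,
    or None if the induced subgraph is disconnected."""
    vs = set(vertices)
    parent = {v: v for v in vs}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    weight, used = 0, 0
    for w, u, v in sorted((w, u, v) for u, v, w in edges
                          if u in vs and v in vs):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            weight += w
            used += 1
    return weight if used == len(vs) - 1 else None

def steiner_tree_weight(terminals, optional, edges):
    """Cheapest tree connecting all terminals, optionally routing
    through any subset of the optional vertices."""
    best = None
    for r in range(len(optional) + 1):          # 2^k subsets in total
        for subset in combinations(optional, r):
            w = mst_weight(set(terminals) | set(subset), edges)
            if w is not None and (best is None or w < best):
                best = w
    return best
```

For example, with four terminals a, b, c, d linked pairwise at cost 2 and a central hub s linked to each at cost 1, routing through the Steiner vertex s cuts the cost from 6 to 4. The point of the sketch is the shape of the algorithm, not its speed: add a few dozen optional vertices and the subset enumeration becomes hopeless, which is the regime where "it worked on small instances" stops being evidence of anything.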
[EDITED to add:] His SEP article more or less endorses Searle's "Chinese Room" argument, while observing that most AI practitioners think it's rubbish (with a bit of a sneer at said practitioners). A surprising amount of the article is based specifically on, or responding specifically to, Russell & Norvig's AIMA, which seems a weird approach for a survey of a whole field to take; one uncharitable but tempting explanation is that Bringsjord doesn't really know the field so well. The final section is a predictably sneery discussion of predictions of superhuman AI. The thing as a whole isn't by any means uniformly terrible, but I can't help thinking they could have done rather better than Bringsjord.
↑ comment by paulfchristiano ·
2018-07-19T15:56:20.729Z
There are an impressive number of things impressively wrong in the P=NP paper. If it's intended seriously, I think that should probably be enough to disqualify Bringsjord from serious discussion of any technical topic.
↑ comment by CarlShulman ·
2018-07-19T19:45:47.604Z
Another Bringsjord classic:
> However, we give herein a novel, formal modal argument showing that since it's mathematically possible that human minds are hypercomputers, such minds are in fact hypercomputers.
↑ comment by CronoDAS ·
2018-07-19T15:22:31.371Z
Incidentally, the claim has also been refuted experimentally: soap bubbles fail to correctly solve larger instances of the Steiner tree problem.