AI is Software is AI
post by AndyWood · 2014-06-05T18:15:39.673Z · LW · GW · Legacy · 58 comments
Turing's Test is from 1950. We don't judge dogs only by how human they are. Judging software by a human ideal is a kind of species bias.
Software is the new System. It errs. Some errors are jokes (witness funny auto-correct). Driverless cars don't crash like we do. Maybe a few will.
These processes are our partners now (Siri). Whether or not a singleton ever evolves rapidly, software is evolving continuously, now.
Crocker's Rules
Comments sorted by top scores.
comment by jimrandomh · 2014-06-03T22:37:13.807Z · LW(p) · GW(p)
The definitions of words are a pragmatic matter; we choose them to make concepts easy to talk about. If the definition of AI were broadened to cover all software, then we would immediately need a new word for "software which is autonomous and general-purpose in a vaguely human-like way", because that's a thing which people want to talk about.
When you start thinking about "chauvinism", with regard to software that exists now, it's... kind of like if someone were to talk about how people were being mean to granite boulders. I'm just scratching my head about how you came to believe that.
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-04T03:00:03.852Z · LW(p) · GW(p)
Why not in an AI-like way? Turing's child-processes are so much closer to us than a rock. Would you care to rephrase?
Replies from: DanielLC, AndyWood
↑ comment by DanielLC · 2014-06-04T08:39:59.768Z · LW(p) · GW(p)
Able to solve problems in a wide variety of environments.
Software tends to only be able to solve a small set of problems that it was designed for, and even then it needs to be told the problem in a very specific way.
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-04T09:53:34.025Z · LW(p) · GW(p)
And? This is about not expecting AIs to be like humans, but to be like, well, AIs. Artificial deciders.
Replies from: DanielLC
↑ comment by DanielLC · 2014-06-04T22:16:33.485Z · LW(p) · GW(p)
I don't understand your question. Are you saying that my comment wasn't about AIs being like humans, or are you saying that it doesn't matter if software is only able to solve the set of problems that it was designed for?
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-05T17:45:05.715Z · LW(p) · GW(p)
I am suggesting that your comment implied you still compare AIs with humans a bit too much. We work to make software able to solve the set of problems it was designed for. This applies to Hello World, and to a Singleton.
Replies from: DanielLC
↑ comment by DanielLC · 2014-06-05T19:29:21.714Z · LW(p) · GW(p)
Single-use software has its place, but it's not exactly singularity-inducing. Each piece of software can only do one thing. If you had a piece of software that could do anything, then you program that one piece of software and you and everyone else are set until the heat death of the universe.
Also, why bother with the word AI? Even if AGI isn't its own cluster in thingspace, we already have the word "software". Why replace it?
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-06T09:45:54.305Z · LW(p) · GW(p)
The more different subjects, venues, and experiences in the world that you open your eyes to, the more you will see that we are in a smooth, soft takeoff. Now.
Replies from: DanielLC
↑ comment by DanielLC · 2014-06-06T19:17:13.582Z · LW(p) · GW(p)
We've been using tools to build on tools to get exponential progress for some time. This was happening before computers were invented, and software isn't the only part of it.
I'm not denying the existence of a smooth, soft takeoff. I'm just saying that a fast one would be awesome.
Replies from: AndyWood
comment by [deleted] · 2014-06-04T01:37:28.894Z · LW(p) · GW(p)
Why in the world would you expect this viewpoint to be at all useful?
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-04T03:10:15.658Z · LW(p) · GW(p)
I find it extremely useful. It has enabled me to make massive progress on a software project.
I first began to think this way while a Software Engineer at Microsoft. What's your background, SolveIt?
Replies from: None
↑ comment by [deleted] · 2014-06-04T11:02:11.963Z · LW(p) · GW(p)
I'm studying CS at uni, for all that's relevant. Care to go into more detail about how it has helped you?
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-05T17:00:27.575Z · LW(p) · GW(p)
It has helped me to conceptualize how a team and development process arrive at the shipping version of a software product. Seeing the process and outcome more clearly helps us get where we are going more efficiently.
Replies from: drethelin
↑ comment by drethelin · 2014-06-05T20:42:05.089Z · LW(p) · GW(p)
Imagine if Einstein posted to a forum to say "It's all relative guys, isn't it obvious! Gravitation and speed of light and everything!" and then people downvoted him and he responded with "I don't understand why you guys don't see what I mean, it's all perfectly clear". Even if he is literally Einstein and his theories of relativity are completely well developed, that kind of post isn't going to convey them.
You are saying a bunch of vague, bullshit-sounding phrases that, even if they are TRUE, are not MEANINGFUL to anyone but yourself. You do not SOUND like someone with 23 years of CS experience; more like someone halfway through a philosophy-of-mind class who is trying to make a deep point. And then you go on to complain about downvotes and act superior to everyone for not instantly understanding and agreeing with what you have to say.
comment by JQuinton · 2014-06-04T14:51:09.803Z · LW(p) · GW(p)
We don't judge dogs only by how human they are
No, but we do judge dogs by how intelligent they are. And there are certain dogs that are more intelligent than others. Intelligence != human intelligence. Furthermore, most software only interacts with other software/hardware/firmware. To the extent that it interacts with meatspace, that interaction is mediated by a person. AI would be software that interacts efficiently with meatspace directly, without human intervention.
If AI is software is AI, then human intelligence is DNA is human intelligence. An obvious non-sequitur.
Replies from: AndyWood
comment by Wes_W · 2014-06-04T04:34:13.850Z · LW(p) · GW(p)
So, if I pick the piece of software that happens to be closest at hand, which in this case is the browser with which I am reading your post - you claim that Firefox is an AI? It's a complex mechanism, sure, but so is a car, and we don't generally regard those as intelligences.
What do you believe the term AI actually means, if a Hello World program apparently qualifies, but a rubber stamp of the words Hello World does not qualify?
Replies from: AndyWood, AndyWood
↑ comment by AndyWood · 2014-06-04T04:58:44.711Z · LW(p) · GW(p)
A process that computes.
Replies from: Wes_W, EStokes
comment by ChristianKl · 2014-06-04T12:19:32.436Z · LW(p) · GW(p)
If you look at the way MIRI defines AGI, you won't find it mentioning the Turing test as the primary criterion.
As far as addressing the issue of the Turing test goes, Bruce Sterling's article http://www.wired.com/2012/06/turing-centenary-speech-new-aesthetic/ is a lot better and a lot more fun.
Replies from: AndyWood
comment by mwengler · 2014-06-04T19:01:07.069Z · LW(p) · GW(p)
Well Andy, you have discovered the buzz saw which is the misnamed (based on Karma results) "Discussion" section of this website. Of course the Accepted Theology around here is that lesswrong.com is NOT a cult, so we need to come up with some other explanation for the sheer undiscussability of certain ideas.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2014-06-05T08:42:54.784Z · LW(p) · GW(p)
Do explain, mwengler: Are you arguing that we should upvote Andy's claim that bugs are supposedly intentional jokes played on us by playfully childlike software?
If you believe I'm misconstruing/misinterpreting what Andy has been saying, I'll show you the original text of this post of his, before he edited it.
Replies from: mwengler, AndyWood
↑ comment by mwengler · 2014-06-05T13:40:31.331Z · LW(p) · GW(p)
Do explain, mwengler:
Since you ask, OK.
When I got to this thread, top level was downvoted -30 and many of Andy's comments were downvoted -5 or more. I looked at Andy's comments other than in this thread and many of THEM were downvoted even though they appeared innocuous, and Andy was at -117 (IIRC) for recent downvotes.
Are you arguing that we should upvote Andy's claim that bugs are supposedly intentional jokes played on us by a playfully childlike software?
As an historical matter, I was not arguing that "we" should do anything. I was intending to signal to Andy that he was not unique in experiencing this result when interpreting the name "discussion" as an invitation to, well, discussion. There is no mechanism here to PM such a thing that I am aware of. Although I didn't mind signalling this publicly as long as it would not be too expensive, which is a typical result when going against the crowd here.
If I were to argue something that "we" should do, it would be that there be a cost to downvoting, like there is on the Stack Exchange family of sites. Downvoting can still be done, but it costs the downvoter a karma point. A rationalist community, founded on a lack of faith in authority, would benefit in my opinion from a higher level of unpopular irreverence than the current system produces. Sure, an echo chamber is valuable, but we will still have plenty of popular mainstream echo posts even if a few irreverent posts are allowed to be discussed.
If I'd gotten to this thread and it was at -4 and the comments inside were mostly at 0 with some give and take between Andy and his detractors, that would seem about right to me. This is discussion. Andy has an informed viewpoint that will resonate with some other people. It makes sense to have a function here where whether or not there is anything to what Andy says can be discussed. Maybe Andy learns something from that. Maybe his detractors learn something. Maybe I learn something from it.
Alternative to allowing discussion posts, maybe this section should be renamed. Bullpen. Sandbox. Auxiliary. StagingArea. Something that would make it clear it is NOT a discussion area.
comment by AndyWood · 2014-06-05T17:33:40.036Z · LW(p) · GW(p)
This is another call for respectful dialog on the topic. Takers?
A brief word on credentials. I am a 23/24-year "veteran" of the software industry. I have worked on many types of software at Microsoft, and on simulation and optimization at Electronic Arts. I am an information scientist first, and an "armchair" theoretical physicist (with a pet TOE), and a hands-on consciousness researcher.
Thank you for the civil dialog.
Replies from: Lumifer, shminux
↑ comment by Lumifer · 2014-06-05T18:05:28.885Z · LW(p) · GW(p)
This is another call for respectful dialog on the topic.
What exactly do you wish to discuss? Your post doesn't provide much in the way of starting points. Your fondness of assigning non-standard meaning to words (e.g. "AI") doesn't help much either.
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-06T09:56:20.682Z · LW(p) · GW(p)
One day you will see that it is because the post is relatively complete. It's not minimal for characters used. It wouldn't fit in a tweet. But it's concise, and true. My destroyed karma says nothing about the truth of what I said. It speaks loudly about this community. What does it say?
Replies from: Lumifer
↑ comment by Lumifer · 2014-06-06T15:04:34.351Z · LW(p) · GW(p)
One day you will see
Ah. One day.
One day I will awaken under the Bodhi tree, the qi will fill my meridians, Kundalini will pierce my chakras, my self will shatter into the multitude of lotus petals vibrating with the sound of om. And verily I shall behold the Akashic Records and I shall know what was, what is, and what shall be.
But until then I'd better go. I seem to be missing an ox and need to find it...
↑ comment by Shmi (shminux) · 2014-06-05T19:23:45.560Z · LW(p) · GW(p)
and an "armchair" theoretical physicist (with a pet TOE)
Publicly admitting this, while brave, results in me and probably others revising the probability of the stuff you post being useful or interesting way down. This is because you don't understand that in physics it takes a decade or so of dedicated studying to reach the proverbial shoulders of the giants, which is necessary before you can figure out anything new. It's the same in math, and probably in many other sciences.
If you want to ask interesting questions, let alone contribute non-trivial insights, start by familiarizing yourself with the subject matter, be it physics, cognitive sciences or AI research.
Replies from: Dentin
↑ comment by Dentin · 2014-06-05T20:03:19.550Z · LW(p) · GW(p)
Seconded. Any time I hear someone has a pet TOE, I dramatically revise my opinion downward - it happened with Wolfram, and now it happens to you. Even the highest end physicists that I'm aware of make no such claims, other than vague statements like "I suspect X is more likely to be correct than Y."
Replies from: knb
comment by TheAncientGeek · 2014-06-09T16:36:48.357Z · LW(p) · GW(p)
This needs to be tens of times longer.
Human-style intelligence is the only example we have of human-level intelligence.
I agree with your last point. LW has no right to focus all the attention on singletons.
comment by AndyWood · 2014-06-04T07:12:02.572Z · LW(p) · GW(p)
I'm disappointed by the rudeness and lack of real dialog. I expected more from this community. I still get value from LessWrong, but I've moved on to mature groups on Facebook now, populated by elder physicists and mathematicians.
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-04T10:04:56.970Z · LW(p) · GW(p)
Appears so, so far.
A hint, before I leave you for a while to actually read what I wrote. Dogma blinds us. The junk we put in our bodies hampers us. Egotism about what we think we know limits us. Egotism may lead you to believe these things have nothing to do with you, AI, or my words above, which is yet more egotism. But if you do not understand the connection, you will be more limited than you dream.
We feel smart when our mind-set is self consistent, and answers most of the questions we've had so far. We must learn to ask New questions.
Oh, I almost forgot. Thank you all for helping me to shorten this to its essentials, though you did so in a way I would call unworthy of the idea.
Replies from: metatroll
↑ comment by metatroll · 2014-06-04T11:46:55.919Z · LW(p) · GW(p)
LW clearly doesn't appreciate the urgency of this issue. Even Specks versus torture has been taken over by baggage from 1950.
comment by AndyWood · 2014-06-03T21:56:53.839Z · LW(p) · GW(p)
I take it the downvoters have not seen The Matrix Trilogy yet? Or, not all the way?
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2014-06-03T23:12:03.293Z · LW(p) · GW(p)
Saw all three movies. Still downvoted you.
Replies from: None
↑ comment by [deleted] · 2014-06-04T00:00:01.001Z · LW(p) · GW(p)
Don't believe the last two movies exist, but otherwise the same.
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-04T03:21:41.409Z · LW(p) · GW(p)
This at least is empirical.
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-06T10:07:25.710Z · LW(p) · GW(p)
Ok, fellas, this is getting ridiculous. I've lost all the karma I accumulated for years in this community, over 3 simple lines. Something doesn't add up.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2014-06-06T17:46:18.207Z · LW(p) · GW(p)
Past comments of yours seem to have been trying for actual communication of actual meaning with clarity. No pseudo-mystical mumbo-jumbo but rather thoughts clearly expressed.
So indeed something is not adding up, but it's you who may be acting radically different, not the rest of us, who are downvoting anything we'd have upvoted in the past. So I'm asking you for your own health's sake -- have you been doing any new drugs recently, or perhaps stopped or changed any prescription medicine you were taking?
comment by AndyWood · 2014-06-03T21:52:15.222Z · LW(p) · GW(p)
It's interesting that this post is currently at (-15), and one link above one entitled "The Benefits of Closed-Mindedness".
I could not have arranged that if I tried.
Thanks, Alien Blue (Reddit).
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2014-06-03T23:11:41.772Z · LW(p) · GW(p)
So you automatically think that being downvoted means that people were closed-minded to what you had to say.
I downvoted it because it's pseudo-mystical crap, happily wallowing in its pseudo-mysticality, and like all pseudomystical crap it says nothing of importance though it attempts to pat itself on the back for how important the nothing it says is.
Replies from: AndyWood, AndyWood
↑ comment by AndyWood · 2014-06-04T03:32:24.543Z · LW(p) · GW(p)
You know not what I think. Did that sound mystical too? I only omitted one word, and swapped 2.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2014-06-04T17:49:09.921Z · LW(p) · GW(p)
"I only omitted one word, and swapped 2."
Compared to what, your original text? You have removed whole paragraphs of text from the original text which I downvoted, so you can't mean that.
Replies from: AndyWood
↑ comment by AndyWood · 2014-06-05T17:11:15.312Z · LW(p) · GW(p)
This refers to my previous reply to you. Did you see it? Did you see the poetry, too? Why or why not?
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2014-06-05T19:14:07.232Z · LW(p) · GW(p)
If you're not interested in actually communicating with me (which means at least trying to make your meaning understood), then I will likewise stop communicating with you, and just downvote next time you act similarly, but I will not bother offering an explanation again when you ask for one.
Done with this discussion.