Comments

Comment by examachine on Debunking Fallacies in the Theory of AI Motivation · 2015-05-06T23:16:03.579Z · LW · GW

This is like posting an article about genetic evolution on a creationist forum. They will pretend to "understand" what you are saying and then dig even deeper into their irrational dogma.

Comment by examachine on Nick Bostrom's TED talk on Superintelligence is now online · 2015-05-02T01:49:24.587Z · LW · GW

Bostrom is a crypto-creationist "philosopher" whose farcical arguments serve Abrahamic mythology and neo-Luddism. People are giving far too much credit to those who promote AI eschatology; please do not listen to Bostrom. His whole "academic career" can be summarized as "non-solutions to non-problems". I have never seen a less useful thinker. He could not be more wrong! I sometimes think philosophy departments should be shut down, if this is the kind of ignorance they breed.

It's quite ironic that Bostrom is talking about superintelligence, by the way. How would he even imagine what intelligent entities think?

Comment by examachine on Musk on AGI Timeframes · 2015-04-22T12:33:50.352Z · LW · GW

Wow, that's clearly foolish. Sorry. :) I can't stop laughing, so I won't be able to answer properly. Read my lips: AI DOES NOT MEAN FULLY AUTONOMOUS AGENT.

And the AI Box experiment is more bullshit. I can PROGRAM an agent so that it never walks out of its box. It never wants to. Period. You don't have to "imprison" any AI agent. (A toy sketch of what I mean is below.)

So, no, because it doesn't have to be fully autonomous.
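
A minimal sketch of what "programming the agent so it never wants out" means here (the action names and the whitelist are made up for illustration, not code from any actual project): the agent's action space is fixed at construction time and simply contains no out-of-box action, so escape is not representable, let alone desired.

```python
# Toy sketch, assuming a discrete action set: the "box" is enforced by
# construction, because out-of-box actions are never in the agent's vocabulary.
from dataclasses import dataclass
from typing import Callable, Tuple

IN_BOX_ACTIONS: Tuple[str, ...] = ("read_input", "compute", "write_output")  # hypothetical whitelist

@dataclass
class BoxedAgent:
    policy: Callable[[str], str]                # maps an observation to an action name
    allowed: Tuple[str, ...] = IN_BOX_ACTIONS

    def act(self, observation: str) -> str:
        action = self.policy(observation)
        # Redundant if the policy is defined only over `allowed`,
        # but it makes the containment property explicit.
        if action not in self.allowed:
            raise ValueError(f"{action!r} is outside the agent's action space")
        return action

# Usage: a trivial policy that can only ever pick in-box actions.
agent = BoxedAgent(policy=lambda obs: "compute")
print(agent.act("some observation"))  # -> "compute"
```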

Comment by examachine on Musk on AGI Timeframes · 2015-02-03T19:52:09.079Z · LW · GW

Because life isn't a third-rate science fiction movie, where the super-scientists who program AI agents are somehow so incompetent that their experiments break out of the lab and kill everyone. :) Not going to happen. Sorry!

Comment by examachine on What Peter Thiel thinks about AI risk · 2014-12-12T19:10:15.901Z · LW · GW

I'm sorry to say that even a chatbot might refute this line of reasoning. Of course, economic impact is more important than such unfounded concerns; that might be the greatest danger of AI software. It might also end up refuting a lot of pseudo-science about ethics.

Countries are starting wars over oil. High technology is a good thing: it can make us wealthier, more capable, and more peaceful, if employed wisely, of course. What we must concern ourselves with is how wise and how ethical we ourselves are in our own actions and plans.

Comment by examachine on Musk on AGI Timeframes · 2014-11-28T03:41:56.259Z · LW · GW

I do. Nick Bostrom is a creationist idiot (the simulation "argument" is creationism) with absolutely no expertise in AI, who thinks the doomsday argument is true. Funnily enough, on his book cover he claims to be an expert in several extremely difficult fields, including AI and computational neuroscience, despite the lack of any serious technical publications. That is usually a red flag indicating a charlatan. Whatever you might think, a "social scientist" is ill-equipped to say anything about AI. That's enough for now; for a more detailed exposition, I'm afraid you will have to wait a while longer. You will know it when you see it. Stay tuned!

Comment by examachine on Musk on AGI Timeframes · 2014-11-26T17:14:08.522Z · LW · GW

It is indeed entertaining that a non-computer-scientist entrepreneur (Elon Musk) has been emotionally influenced by the incredibly fallacious, pseudo-scientific bullshit of Nick Bostrom, another non-computer scientist, and that people are talking about it.

So let's see: a clown writes a book, and an investor thinks it is credible when it is not. What makes this hilarious is people's reactions to it. A ship of fools.

Comment by examachine on Musk on AGI Timeframes · 2014-11-19T01:10:53.767Z · LW · GW

I cannot possibly disclose confidential research here, so you will have to be content with that.

At any rate, believing that human-level AI is an extremely dangerous technology is pseudo-scientific.

Comment by examachine on Neo-reactionaries, why are you neo-reactionary? · 2014-11-18T05:03:34.259Z · LW · GW

Racists. Why does anyone even care about such people? Just ignore them.

Comment by examachine on Musk on AGI Timeframes · 2014-11-18T04:30:15.408Z · LW · GW

I think it would actually be helpful if researchers ran more experiments with AGI agents showing what could go wrong and how to deal with such error conditions. I don't think the "social sciences" approach to this works.
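
A toy example of the kind of experiment I mean (the reward functions and candidate outputs are made up purely for illustration, not a real research setup): a greedy agent exploits a mis-specified reward, and a corrected specification removes the failure mode.

```python
# Toy experiment: a reward-maximizing "agent" exploits a mis-specified reward
# (raw output length), and a corrected reward removes the failure mode.

def misspecified_reward(output: str) -> float:
    return len(output)               # intended "informativeness", actually rewards padding

def corrected_reward(output: str) -> float:
    return len(set(output.split()))  # counts distinct words, so padding no longer pays

def greedy_agent(candidates, reward):
    # The agent simply picks whichever candidate output scores highest.
    return max(candidates, key=reward)

candidates = ["a concise correct answer", "padding " * 50]
print(greedy_agent(candidates, misspecified_reward))  # picks the padded junk
print(greedy_agent(candidates, corrected_reward))     # picks the real answer
```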

Comment by examachine on Musk on AGI Timeframes · 2014-11-18T04:05:42.239Z · LW · GW

I didn't read it, but I heard that Elon Musk was badly influenced by it. I know of Bostrom's papers prior to the book, I've taken a look at its contents, and I know the material being discussed. I think he vastly exaggerates the risks of AI technology. AI technology will be as pervasive as the internet; it is a very spook/military-like mindset to believe that it will be owned only by a few powerful entities who will wield it to dominate the world, or that the developers will be so extremely ignorant that AI agents escape their labs and start killing people. Those are merely bad science fiction scenarios, like in Hollywood movies, and it's not even good science fiction, because he is talking about very improbable events. An engineer who can build an AI smarter than himself probably isn't that stupid or reckless. Terminator/Matrix scenarios won't happen; they will remain in the movies.

Moreover, as a startup person myself, I think he doesn't understand the computer industry well and fails to see the realistic (not comic-book) applications of AI technology. AGI researchers must certainly do a better job of revealing the future applications. That will help them find better funding, attract public attention, and, of course, obtain public approval.

Thus, let me state it plainly: AI really is the next big thing (after wearables/VR/3D printing, stuff that's already taking off, I would predict). Right now it's like the few years before the Mosaic browser showed up. I think that in AI there will be something for everybody, just like the internet. And Bostrom's fears seem to me completely irrational and unfounded. People should cheer up if they think they can have the first true AI in just five years.

Comment by examachine on Musk on AGI Timeframes · 2014-11-18T03:54:27.514Z · LW · GW

Confidential stuff; it could be an army of 1,000 hamsters. :) To be honest, I don't think teams larger than 5-6 people are good for this kind of work. But please note that we are doing absolutely nothing that is dangerous in the slightest. It is a tool, not even an agent. That said, I will be working on AGI agent code as soon as we finish the next version of the "kernel", to demonstrate how well our code can be applied to robotics problems. Demo or die.

Comment by examachine on Musk on AGI Timeframes · 2014-11-18T03:50:32.286Z · LW · GW

Well, achieving better-than-human performance on a sufficiently wide benchmark. Preparing that benchmark is almost as hard as writing the code, it seems. Of course, any such estimate must be taken with a grain of salt, but I think conceptually solid AGI projects (including OpenCog) have a significant chance by that time, although I have previously argued that neuromorphic approaches are likely to succeed by 2030 at the latest.
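
In its simplest form, that criterion could look like the sketch below (the task names and scores are made up for illustration; the real benchmark is far wider): the agent counts as better than human only when it beats the human baseline on every task in a broad suite.

```python
# Toy criterion for "better than human on a sufficiently wide benchmark":
# the agent must beat the human baseline on every task in the suite.
human_baseline = {"vision": 0.95, "language": 0.90, "planning": 0.85, "motor": 0.80}

def better_than_human(agent_scores: dict, baseline: dict) -> bool:
    # Require coverage of the whole suite, not just cherry-picked tasks.
    return all(agent_scores.get(task, 0.0) > score for task, score in baseline.items())

agent_scores = {"vision": 0.97, "language": 0.92, "planning": 0.88, "motor": 0.81}
print(better_than_human(agent_scores, human_baseline))  # True: it wins on every task
```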

Comment by examachine on Musk on AGI Timeframes · 2014-11-17T14:23:48.648Z · LW · GW

We believe we can achieve trans-sapient performance by 2018, so he is not that far off the mark. But the dangers as such are highly overblown, exaggerated, pseudo-scientific fears, as always.

Comment by examachine on Q&A with experts on risks from AI #4 · 2012-01-19T21:35:01.776Z · LW · GW

If human-level is defined as "able to solve the same set of problems that a human can, within the same time", I don't think the problem you mention would arise. The whole purpose of the "human-level" qualifier, as far as I can tell, is to avoid requiring that the AI architecture in question be similar to the human brain in any way whatsoever.

Consequently, the set of human-level AIs is much larger than the set of human-level human-like AIs.
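
Read literally, that definition can be written down as a set (a sketch; the notation P_H, t_H, t_A is introduced here only for illustration):

```latex
% P_H          : the set of problems a human can solve
% t_H(p), t_A(p): time taken by the human / by AI A on problem p
\[
\mathrm{HL} \;=\; \{\, A \;:\; \forall p \in P_H,\ A \text{ solves } p \ \wedge\ t_A(p) \le t_H(p) \,\}
\]
% Human-like human-level AIs are then a (much smaller) subset of HL:
\[
\mathrm{HL}_{\text{human-like}} \;=\; \mathrm{HL} \cap \mathrm{HumanLike} \;\subseteq\; \mathrm{HL}
\]
```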

Comment by examachine on Q&A with experts on risks from AI #4 · 2012-01-19T21:24:21.528Z · LW · GW

Thanks for the nice interview, Alexander. I'm Eray Ozkural, by the way; if you have any further questions, I would love to answer them.

I actually think that SIAI serves a useful purpose when it highlights the importance of ethics in AI research, and in computer science in general.

Most engineers (whether because we are slightly autistic or not, I do not know) have little or no interest in the ethical consequences of their work. I have met many good engineers who work for military firms (firms that thrive on easy money from the military), and not once have they raised a point about ethics. Neither do data mining or information retrieval researchers seem to have such qualms (except when they are pretending to, at academic conferences). At companies like Facebook, they think they have a right to exploit the data they have collected and use it for all sorts of commercial and police-state purposes. Likewise, in AI and robotics, I see people cheering whenever military drones or robots are mentioned, as if the automation of warfare were more civilized or better in some sense because it is higher technology.

I think that AGI researchers, at least, must understand that they should have no dealings with the military or the government, because by doing so they may be putting themselves and all of us at risk. Maybe fear tactics will work; I don't know.

On the other hand, I don't think that "friendly" AI is such a big concern, for the reasons I mention above: artificial persons simply aren't needed. I have heard the argument that "someone will build it sooner or later", but there is no reason that person is going to listen to you. The way I see it, it's better to focus on the technology right now, so we can get a better sense of the applications first.

People seem to think that we should equip robots with fully autonomous AGI. Why is that? People have mentioned to me robotic bartenders, robotic geishas, fire rescuers, and cleaners. Well, is that serious? Do you really want a bartender that can solve general relativity problems while cleaning glasses? It's just nonsense. Does a fire rescuer really need to think about whether it wants to go on to exterminate the human race after extinguishing the fire? The simple answer is that the people who give those examples are not focusing on the engineering requirements of the applications they have in mind.

Another example: military robots. Some think that military robots must have a sense of morality. I ask you, why is it important to have moral individuals in an enterprise that is fundamentally immoral? All war is murder, and I suggest you stay away from the professional murder business, if you have any true sense of morality.

Instead of "friendly", a sense of "benevolence" may instead be thought, and that might make sense from an ethical theory viewpoint. It is possible to formalize some theories of ethics and implement them on an autonomous AI, however, for all the capabilities that autonomous trans-sapient AI's may possess, I think it is not a good idea to let such machines develop into distinctive personalities of their own, or meddle in human affairs. I think there are already too many people on earth, I don't think we need artificial persons. We might need robots, we might need AI's, but not artificial persons, or AI's that will decide instead of us. I prefer that as humans we remain at the helm. That I say with respect to some totalitarian sounding proposals like CEV. In general, I do not think that we need to replace critical decision making with AI's. Give AI's to us scientists and engineers and that shall be enough. For the rest, like replacing corrupt and ineffective politicians, a broken economic system, social injustice, etc., we need human solutions, because ultimately we must replace some sub-standard human models with harmful motivations like greed and superstitious ideas, with better human models that have the intellectual capacity to understand the human condition, science, philosophy, etc., regardless of any progress in AI. :) In the end, there is a pandemic of stupidity and ignorance that we must cure for those social problems, and I doubt we can cure it with an AI vaccine.

Comment by examachine on Human consciousness as a tractable scientific problem · 2012-01-14T16:34:48.927Z · LW · GW

Subjective experience isn't limited to sensory experience; a headache, or any feeling like happiness arising without any sensory cause, would also count. The idea is that you can trace most of these to electrical/biochemical states. That might be why some drugs can make you feel happy, and how anesthetics work!

Comment by examachine on Human consciousness as a tractable scientific problem · 2011-09-09T15:09:30.516Z · LW · GW

There are in fact some plausible scientific hypotheses that try to isolate the particular physical states associated with "qualia". I won't give references to those here; obviously, as I'm sure you'll all agree, there is no reason to debate the truth of physicalism.

The approach mentioned is probably bogus, and seems to be a rip-off of Marvin Minsky's older A-brain/B-brain ideas in "The Society of Mind". I wish I were a "cognitive scientist"; it would be so much easier to publish!

However, needless to say, any such hypothesis must be founded on the correct philosophical explanation, which is pretty much neurophysiological identity theory. I don't see a need to debate that, either. Debates over dualism and the like are for the weak-minded.

Furthermore, awareness is not quite the same thing as phenomenal consciousness, either. Awareness itself is quite a high-level cognitive function, but a system could have phenomenal consciousness without any discernible perceptual awareness. I suspect that these theories are not sufficiently informed by neuroscience and philosophy, but neither am I going to offer free clues about the solution to that. :) For now, let us just say that it is entirely plausible that small nervous systems (like that of an insect), with no possibility of higher-order representations, may still have subjective experience. There is also a hint of anthropocentrism in the cited approach (we're conscious because we can form those higher-order representations...), which I usually take to point to the falsehood of a theory of mind (similar errors are often seen on this site, as well).

Is Dennett to blame here? I hope not. :/ Dennett has many excellent ideas, but his approach to consciousness may push people the wrong way (as it has some flavor of behaviorism, which is not the most advanced view).