Posts

Magic and the halting problem 2015-08-23T19:34:20.143Z
Are there really no ghosts in the machine? 2015-04-13T19:54:35.599Z
Friendly-AI is an abomination 2015-04-12T20:21:47.204Z

Comments

Comment by kingmaker on Yudkowsky's brain is the pinnacle of evolution · 2015-08-27T15:14:44.028Z · LW · GW

Goddamn, I thought I was unpopular

Comment by kingmaker on Are there really no ghosts in the machine? · 2015-04-14T17:02:04.585Z · LW · GW

That wasn't what I claimed; I proposed that the current, most promising methods of producing an FAI are far too likely to produce a UFAI to be considered safe.

Comment by kingmaker on Are there really no ghosts in the machine? · 2015-04-13T23:05:40.633Z · LW · GW

The point I am making is that machine learning, though not provably safe, is the most effective way we can imagine of producing the utility function. It's very likely that many AIs will be created by this method, and if the failure rate is anywhere near as high as it is for humans, this could be very serious indeed. Some misguided person may attempt to create an FAI using machine learning, and then we may have the situation described in the H+ article.

Comment by kingmaker on Are there really no ghosts in the machine? · 2015-04-13T22:44:49.289Z · LW · GW

Only a pantheist would claim that evolution is a personal being, so it can't "try to" do anything. It is, however, a directed process, one that favors individuals who can better further the species.

But I agree that we shouldn't rely on machine learning to find the right utility function.

How would you suggest we find the right utility function without using machine learning?

Comment by kingmaker on Are there really no ghosts in the machine? · 2015-04-13T20:59:11.239Z · LW · GW

I never said not understanding our creations is good; I only said AI research was successful. I have not read Superintelligence, but I appreciate just how dangerous AI could be.

Comment by kingmaker on Are there really no ghosts in the machine? · 2015-04-13T20:41:52.712Z · LW · GW

I never claimed that evolution did a good job, but I would argue that it gave us a primary directive: to further the human species. All of our desires are part of our programming; they should perfectly align with desires that would optimize the primary goal, but they don't. Simply put, mistakes were made. Since the most effective way we have seen of developing optimizing programs is machine learning, which is very similar to evolution, we should be very careful of the desires of any singleton created by this method.

I'm not sure of your assertion that the best advances in AI so far came from mimicking the brain.

Mimicking the human brain is fundamental to most AI research; on DeepMind's website, they say that they employ computational neuroscientists, and companies such as IBM are very interested in whole brain emulation.

Comment by kingmaker on Are there really no ghosts in the machine? · 2015-04-13T19:45:37.885Z · LW · GW

Okay everyone, I've messed this up again; please leave this post alone and I'll re-upload it later.

Comment by kingmaker on Friendly-AI is an abomination · 2015-04-12T21:56:03.248Z · LW · GW

But I don't think that MIRI will succeed at building an FAI by non-anthropomorphic means in time.

Comment by kingmaker on Friendly-AI is an abomination · 2015-04-12T21:08:38.113Z · LW · GW

There is no ghost in a (relatively) simple machine, but an AI is not simple. The greatest successes in AI research have come from imitating what we understand of the human mind. We are no longer programming AIs; we are imitating the structure of the human brain and then giving it a directive (as with Google's DeepMind). With AIs, there is a ghost in the machine, i.e. we do not know that it is possible to give a sentient being a prime directive. We have no idea whether it will desire what we want it to desire, and everything could go horribly wrong if we attempt to force it to.

Comment by kingmaker on Friendly-AI is an abomination · 2015-04-12T20:44:19.940Z · LW · GW

The point of the article is that the greatest effect of FAI research is ironic: in trying to prevent a psychopathic AI, we are making it more likely that one will exist, because by mentally restraining the AI we are giving it reasons to hate us.

Comment by kingmaker on Friendly-AI is an abomination · 2015-04-12T20:39:18.282Z · LW · GW

Duly noted

Comment by kingmaker on Friendly-AI is an abomination · 2015-04-12T20:35:04.045Z · LW · GW

Please read on; I would have removed the snarky intro.

Comment by kingmaker on Friendly-AI is an abomination · 2015-04-12T20:32:20.618Z · LW · GW

Yeah, I'm not very good at the internet; I didn't realize that deleting articles apparently means nothing on this site.

Comment by kingmaker on Rationality Quotes Thread April 2015 · 2015-04-03T20:39:38.325Z · LW · GW

"Desirability is not a requisite of the truth" (darkmatter2525, source)

Comment by kingmaker on Open thread, Apr. 01 - Apr. 05, 2015 · 2015-03-31T15:58:40.404Z · LW · GW

I love the way that advancedatheist assumes that we're all guys. That, or lesbians.

Comment by kingmaker on The Hardcore AI Box Experiment · 2015-03-31T15:55:03.685Z · LW · GW

I admit that it serves my ego suitably to imagine that I am the only conscious human, and that a world full of shallow AIs was created just for me ;-)

Comment by kingmaker on The Hardcore AI Box Experiment · 2015-03-31T15:50:18.580Z · LW · GW

The simulators may justify in their minds actual people getting tortured and burnt by telling themselves that most of the people will not experience too much suffering, that the simulated people would not otherwise have lived (although this fails to distinguish between lives and lives worth living), and that they can end the simulation if our suffering becomes too great. That the hypothetical simulators did not step in during the many genocides in humankind's history suggests that they either do not exist, or that creating an FAI is more important to them than preventing human suffering.

Comment by kingmaker on The Hardcore AI Box Experiment · 2015-03-30T19:31:10.941Z · LW · GW

This co-opts Bostrom's simulation argument, but a possible solution to the Fermi paradox is that we are all AIs in the box, and the simulators have produced billions of humans in order to find the most friendly human to release from the box. Moral of the story: be good and become a god.

Comment by kingmaker on What have we learned from meetups? · 2015-03-30T16:46:23.115Z · LW · GW

Seeing as I'm new here, absolutely nothing

Comment by kingmaker on Boxing an AI? · 2015-03-30T16:40:34.741Z · LW · GW

It may simply deduce that it is likely to be in a box, in the same way that Nick Bostrom deduced that we are likely to be in a simulation. Along these lines, it's amusing to think that we might be the AI in the box, and that some lesser intelligence is testing to see whether we're friendly.

Comment by kingmaker on Boxing an AI? · 2015-03-30T16:32:39.569Z · LW · GW

The problem with this is that even if you can determine with certainty that an AI is friendly, there is no certainty that it will stay that way. There could be a series of errors as it goes about daily life, each acting as a mutation, gradually evolving the "Friendly" AI into a less friendly one.