Comments

Comment by voi6 on Why AI may not foom · 2013-03-26T19:34:10.679Z · LW · GW

Sorry, the way I worded that makes me look silly. I just meant that even if we had the perfect software, we simply wouldn't get a big enough speedup to bridge the gap.

Comment by voi6 on Why AI may not foom · 2013-03-25T06:40:13.037Z · LW · GW
  1. You can't know the difficulty of a problem until you've solved it. Look at Hilbert's problems: some were solved immediately, while others are still open today. Proving that you can color a map with five colors is easy and takes up half a page. Proving that you can color a map with four colors is hard and takes up hundreds of pages. The same is true of science. A century ago physics was thought to soon be a dead field, until the minor glitches with blackbody radiation and Mercury's orbit turned out to be more than minor, governed by mathematically complex theories whose interaction with each other is still well beyond our best minds today. That's why trying to predict the growth of intelligence is exactly as silly as trying to predict the number of Hilbert's problems that will be solved over time. It has much less to do with how smart we are and much more to do with how hard the problems are, and we won't know that until we solve them.

  2. Contrary to everything said in (1), I think the software problem of AI is already solved. Simply note that (a) when people think programming an AI is impossible, it's because they think of hardcoding, and no one understands the mind even remotely well enough to do that. But do we hardcode neural nets? No; in fact neural nets are magical in that no one can hardcode a facial recognition program as effective as a trained neural net (a minimal sketch of this trained-not-hardcoded distinction follows this list). Suppose a sufficiently large neural net can be as smart as a human. Then what we would expect from smaller neural nets is exactly what we see now, namely non-rigid intelligence similar to our own but more limited. It would be absurd to expect more of them given our current hardware. (b) There are two forms of signalling in the body: electrical via action potentials and chemical via diffusion. Since the chemical signalling sets up the electrical, and diffusion is rather imprecise, there are fundamental limits on how refined the brain's macroscopic architecture can be. At the molecular scale biology is extremely complex, and enzymatic proteins are machines of profound sophistication. But none of that matters for understanding how the brain computes in real time, because the only fast form of signalling is between neurons through electrical signals (chemical at the synapses, but across a tiny distance). So the issue comes down to how the neurons are arranged to give rise to intelligence, and that arrangement is relatively rough in its precision, because chemical diffusion is imprecise.

  3. With (1) and (2) in mind, let's address what the AI problem is really about: hardware. Moore's law is going to hit the atomic barrier well before the point at which even Kurzweil expects computers to facilitate AI. The simple fact of the matter is that there is no clear way beyond this point. Neither parallel programming nor quantum computing is going to save the day without massive, unprecedented breakthroughs. It's a hardware problem, and we won't know how hard until we solve it.
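Point (2a) turns on the distinction between writing intelligent behavior by hand and training it. Here is a minimal sketch of that distinction, with a toy two-layer net learning XOR; everything in it (numpy only, the layer sizes, the learning rate, the task) is an illustrative assumption, not something from the comment above:

```python
# A minimal "trained, not hardcoded" demo: nobody writes these weights
# by hand; they emerge from gradient descent on examples.
import numpy as np

rng = np.random.default_rng(0)

# XOR: a function no single-layer perceptron can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, weights initialized at random.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (squared-error loss, hand-derived gradients).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]] without any hardcoded rule
```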

~ a bioinformatics student and ex-singularitarian

Comment by voi6 on Not for the Sake of Pleasure Alone · 2013-03-25T03:41:53.238Z · LW · GW

Sorry to respond to this two years late. I'm aware of the paradox and the VNM theorem. Just because humans are inconsistent/irrational doesn't mean they aren't maximizing a utility function, however.

Firstly, you can have a utility function and just be bad at maximizing it (and yes, this contradicts the rigorous mathematical definitions which we all know and love, but English doesn't always bend to their will, and we both know what I mean when I say this without my having to be pedantic, gentlemen that we are).

Secondly, if you consider each subsequent dollar you attain to be less valuable, this makes perfect sense, and it is applied in tournament poker, where taking a 50:50 chance of either going broke or doubling your stack is considered a terrible play: the former outcome guarantees you lose your entire entry fee's worth of equity, while the latter gains you less than an entry fee's worth. This can be seen with a simple calculation (sketched below), or by noting that if everyone plays this aggressively, I can do nothing and still make it into the prize pool, because the other players will simply eliminate each other faster than the blinds eat away at my own stack.
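Here is one version of that simple calculation, sketched under the Independent Chip Model; the stack sizes and the 50/30/20 payout structure are made-up illustrative numbers:

```python
# Independent Chip Model (ICM): P(you finish 1st) is proportional to your
# stack, and lower places are filled the same way among the remaining players.
from itertools import permutations

def icm_equity(stacks, payouts):
    """Expected prize for each player under ICM (brute force over finish orders)."""
    total, n = sum(stacks), len(stacks)
    equity = [0.0] * n
    for order in permutations(range(n)):      # order[0] finishes 1st, etc.
        p, remaining = 1.0, total
        for player in order:
            p *= stacks[player] / remaining   # chance this player takes the next place
            remaining -= stacks[player]
        for place, player in enumerate(order):
            if place < len(payouts):
                equity[player] += p * payouts[place]
    return equity

# Four players with equal 1000-chip stacks; 100-unit prize pool paid 50/30/20.
before = icm_equity([1000, 1000, 1000, 1000], [50, 30, 20])[0]
# You win a 50:50 flip against one opponent and double to 2000 chips.
after = icm_equity([2000, 1000, 1000], [50, 30, 20])[0]

print(before)                 # ~25.0: your equity with an average stack
print(after)                  # ~38.3: doubling your chips does NOT double your equity
print(0.5 * 0 + 0.5 * after)  # ~19.2 < 25: the flip loses prize-money equity
```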

But I digress. Let's cut to the chase here. You can do what you want, but you can't choose your wants. Along the same lines, a straight man, no matter how intelligent he becomes, will still find women arousing. An AI can be designed to have the motives of a selfless, benevolent human (the so-called Artificial Gandhi Intelligence), and this will be enough. Ultimately humans want to be satisfied, and if it's not in their nature to be permanently so, then they will concede to changing their nature with FAI-developed science.

Comment by voi6 on Genetically Engineered Intelligence · 2013-03-25T02:44:22.632Z · LW · GW

I'm by no means an expert, but I have studied a lot of the fields relevant to this topic in college. I know this thread is long dead, but since it came up on the front page of a Google search of the title I feel the need to give my input. I was really into AI and studied computer science in college until I found out that Moore's law is going to hit the atomic barrier before we have, by reasonable estimates, enough hardware to simulate a brain, and that there is no clear way to move forward (neither parallel programming nor quantum computing looks like it will save the dream without major breakthroughs that are by no means guaranteed to come within our lifetimes). The situation looks bad to anyone with a skeptical mind.
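For a sense of the gap, here is the rough arithmetic behind that skepticism; every figure below is a loose, commonly cited ballpark assumption rather than a claim from the comment:

```python
# Back-of-the-envelope compute estimate for crude real-time brain simulation.
neurons = 1e11             # ~10^11 neurons in a human brain (ballpark)
synapses_per_neuron = 1e4  # ~10^3-10^4 synapses per neuron (ballpark)
updates_per_second = 1e2   # assume ~100 Hz effective update rate
flops_needed = neurons * synapses_per_neuron * updates_per_second
print(f"{flops_needed:.0e} FLOPS")  # ~1e17 FLOPS under these assumptions

# Feature-size side: Moore's-law scaling from a ~22 nm process (circa 2013)
# down toward the ~0.2 nm diameter of a silicon atom.
feature_nm, years = 22.0, 0
while feature_nm > 0.2:
    feature_nm /= 2 ** 0.5  # halving transistor area every ~2 years
    years += 2
print(years)  # only a few decades of shrinking left before the atomic scale
```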

The same cannot be said of this genetically engineered intelligence idea. I switched to molecular biology, and after two years of study it still seems just as possible, especially given that in 2009 gene therapy was successfully used to introduce new genes into a genome to cure colorblindness in monkeys and a fatal brain disease in two human boys. Since you're basically inserting a new gene into a random place in the genome, there's always a chance you'll disrupt a tumor suppressor or activate an oncogene. That's the bad news. The good news is that it works and can be done on adults. I would certainly volunteer for it if I were terminally ill. Also, the cancer risk has a lot of conceivable solutions.

You mentioned recursive feedback. This is unlikely. While it is easy to find genes for intelligence that are already in the gene pool, inventing new ones is a whole 'nother ballgame. Not only would it require understanding the mind, it would also require us to invent new enzymatic proteins, which, last time I checked, is still a computationally intractable problem. There is definitely a ceiling there, likely unbreakable even by the first generation of superhuman geniuses. The good news is how potentially genius this first generation of engineered humans would be. It's estimated that there are at least hundreds of genes influencing intelligence, each with multiple alleles. Notice how even 2^100 dwarfs the number of humans that have ever lived? You can safely bet that no human optimal for intelligence has ever been born.
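The combinatorics in that last step check out; here is a two-line confirmation (the ~10^11 figure for humans ever born is itself a rough common estimate):

```python
# Even 100 biallelic loci give a genotype space that dwarfs every human ever born.
genotypes = 2 ** 100
humans_ever = 10 ** 11              # rough estimate of humans who have ever lived
print(f"{genotypes:.2e}")           # ~1.27e30 possible combinations
print(genotypes // humans_ever)     # ~1.27e19 genotypes per human ever born
```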

Even if you don't buy this argument, objecting that intelligence genes aren't simply additive but interact in such a complex manner that even China's statisticians armed with supercomputers couldn't find the optimal genome, you'd still have to concede that the current smartest humans are amazing beings who outperform the average, or even the highly intelligent, in remarkably profound ways.

In short, what you suggest can be done right now on adults, in a highly sloppy and risky manner, with some of the IQ genes we've already found, and its efficacy would be highly variable, depending on the extent to which the genes' influence is developmental.

Comment by voi6 on Not for the Sake of Pleasure Alone · 2011-06-15T07:55:56.960Z · LW · GW

While humans may not be maximizing pleasure, they are certainly maximizing some utility function, which can be characterized. Human concerns can then be captured by programming your FAI to optimize this function.