Steelmanning AI risk critiques

post by Stuart_Armstrong · 2015-07-23T10:01:02.117Z · score: 26 (27 votes) · LW · GW · Legacy · 99 comments

At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.

EDIT: Thanks for all the contributions! Keep them coming...


comment by jessicat · 2015-07-23T19:20:14.963Z · score: 13 (13 votes) · LW · GW

One of the most common objections I've seen is that we're too far from getting AGI to know what AGI will be like, so we can't productively work on the problem without making a lot of conjunctive assumptions -- e.g. see this post.

comment by SilentCal · 2015-07-23T16:10:56.756Z · score: 13 (13 votes) · LW · GW

I'm sure there's no need to point to Robin Hanson's anti-foom writings? The best single article is, IMO, Irreducible Detail, which essentially questions the generality of intelligence.

comment by jacob_cannell · 2015-07-24T04:35:20.800Z · score: 7 (9 votes) · LW · GW

Here is a key quote:

Human brains are smart mainly by containing many powerful not-fully-general modules, and using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. (Source: http://www.overcomingbias.com/2014/07/limits-on-generality.html)

It is true that adult human brains are built out of many domain-specific modules, but these modules develop via a very general universal learning system. The neuroscientific evidence directly contradicts the evolved modularity hypothesis, which Hanson appears to be heavily influenced by. That being said, his points about AI progress being driven by a large number of mostly independent advances still carry through.

Hanson's general analysis of the economics of AGI takeoff seems pretty sound - even if it is much more likely that neuro-AGI precedes ems.

comment by sentientplatypus · 2015-07-24T03:28:50.204Z · score: 2 (2 votes) · LW · GW

I hadn't seen this before. Hanson's conception of intelligence actually seems much simpler and more plausible than how I had previously imagined it. I think 'intelligence' can easily act as a Semantic Stopsign because it feels like a singular entity through the experience of consciousness, but actually may be quite modular as Hanson suggests.

comment by roystgnr · 2015-07-27T21:36:29.114Z · score: 1 (1 votes) · LW · GW

Intelligence must be very modular - that's what drives Moravec's paradox (problems like vision and locomotion that we have good modules for feel "easy", problems that we have to solve with "general" intelligence feel "hard"), the Wason Selection task results (people don't always have a great "general logic" module even when they could easily solve an isomorphic problem applied to a specific context), etc.

Does this greatly affect the AGI takeoff debate, though? So long as we can't create a module which is itself capable of creating modules, what we have doesn't qualify as human-equivalent AGI. But if/when we can, then it's likely that it can also create an improved version of itself, and so it's still an open question as to how fast or how far it can improve.

comment by [deleted] · 2015-07-23T21:22:15.401Z · score: 1 (1 votes) · LW · GW

Thank you for that Irreducible Detail article, I remember reading it before but couldn't find it later. Hanson's argument is very convincing and intuitive, and really sheds light on what intelligence might really be about. When I think about my own intelligence, it doesn't feel like I have some overarching general module planning, but more like I have many simple heuristics, and rules of thumb, and automatic behaviors that just happen to work. This feels more like Hanson's idea of intelligence.

I think this is the single best argument against MIRI's idea of intelligence.

Here is an interesting article in the same vein.

comment by jacob_cannell · 2015-07-24T04:42:05.021Z · score: 9 (13 votes) · LW · GW

Here is a novel argument you may or may not have heard: We live in the best of all probable worlds due to simulation anthropics. Future FAI civs spend a significant amount of their resources to resimulate and resurrect past humanity - winning the sim race by a landslide (as UFAI is not strongly motivated to sim us in large numbers). As a result of this anthropic selection force, we find ourselves in a universe that is very lucky - it is far more likely to lead to FAI than you would otherwise think.

The best standard argument is this: the brain is a universal learning machine - the same general architecture that will necessarily form the basis for any practical AGI. In addition, the brain is already near optimal in terms of what can be done for 10 watts with any irreversible learning machine (this is relatively easy to show from wiring energy analysis). Thus any practical AGI is going to be roughly brain-like, similar to baby emulations. All of the techniques used to raise humans safely can thus be used to raise AGI safely. LW/MIRI historically reject this argument based - as far as I can tell - on a handwavy notion of 'anthropomorphic bias', which has no technical foundation.

I presented the above argument about four years ago, but never bothered to spend the time backing it up in excruciating formal detail until more recently. The last 5 years of progress in AI strongly supports this anthropomorphic AGI viewpoint.

comment by Wei_Dai · 2015-07-25T04:48:20.651Z · score: 7 (7 votes) · LW · GW

the brain is already near optimal in terms of what can be done for 10 watts with any irreversible computer (this is relatively easy to show from wiring energy analysis).

Do you have a citation for this? My understanding is that biological neural networks operate far from the Landauer Limit (sorry I couldn't find a better citation but this seems to be a common understanding), whereas we already have proposals for hardware that is near that limit.

comment by jacob_cannell · 2015-07-29T15:59:00.185Z · score: 3 (3 votes) · LW · GW

I should probably rephrase the brain optimality argument, as it isn't just about energy per se. The brain is on the Pareto efficiency surface - it is optimal with respect to some complex tradeoffs between area/volume, energy, and speed/latency.

Energy is pretty dominant, so it's much closer to those limits than the rest. The typical futurist understanding of the Landauer limit is not even wrong - it's way off, as I point out in my earlier reply below and related links.

A consequence of the brain being near optimal for energy of computation for intelligence given its structure is that it is also near optimal in terms of intelligence per switching event.

The brain computes with just around 10^14 switching events per second (10^14 synapses * 1 Hz average firing rate); that 1 Hz is something of an upper bound on the average firing rate.

The typical synapse is very small, has a low SNR, and thus is equivalent to a low-bit op, and only activates maybe 25% of the time. We can roughly compare these minimal-SNR analog ops with the high-precision single-bit ops that digital transistors implement. The Landauer principle allows us to rate them as reasonably equivalent in computational power.

So the brain computes with just 10^14 switching events per second. That is essentially miraculous. A modern GPU uses perhaps 10^18 switching events per second.
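The orders of magnitude above multiply out as claimed. A quick sanity check (a sketch; the synapse count, average firing rate, and GPU figure are the rough numbers quoted in this comment, not measurements):

```python
# Rough switching-event comparison from the comment above.
# All figures are order-of-magnitude assumptions, not measurements.
SYNAPSES = 1e14        # synapses in an adult human brain (rough estimate)
AVG_RATE_HZ = 1.0      # assumed average firing rate

brain_events = SYNAPSES * AVG_RATE_HZ   # switching events per second
gpu_events = 1e18                       # rough figure for a 2015-era GPU

print(f"brain: {brain_events:.0e} events/s")
print(f"GPU:   {gpu_events:.0e} events/s")
print(f"GPU/brain ratio: {gpu_events / brain_events:.0e}")
```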

So the important thing here is not just energy - but overall circuit efficiency. The brain is crazy super efficient - and as far as we can tell near optimal - in its use of computation towards intelligence.

This explains why our best SOTA techniques in almost all AI are some version of brain-like ANNs (the key defining principle being search/optimization over circuit space). It predicts that the best we can do for AGI is to reverse engineer the brain. Yes eventually we will scale far beyond the brain, but that doesn't mean that we will use radically different algorithms.

comment by ESRogs · 2015-08-18T16:47:57.230Z · score: 0 (0 votes) · LW · GW

A consequence of the brain being near optimal for energy of computation for intelligence given its structure is that it is also near optimal in terms of intelligence per switching events.

So the brain computes with just 10^14 switching events per second.

What do you mean by, given its structure? Does this still leave open that a brain with some differences in organization could get more intelligence out of the same number of switching events per second?

Similarly, I assume the same argument applies to all animal brains. Do you happen to have stats on the number of switching events per second for e.g. the chimpanzee?

comment by jacob_cannell · 2015-07-26T23:57:01.206Z · score: 2 (4 votes) · LW · GW

EDIT: see this comment and this comment on reddit for some references on circuit efficiency.

Computers are circuits and thus networks/graphs. For primitive devices the switches (nodes) are huge, so they use up significant energy. For advanced devices the switches are not much larger than wires, and the wire energy dominates. If you look at the cross section of a modern chip, it contains a hierarchy of metal layers of decreasing wire size, with the transistors at the bottom. The side view section of the cortex looks similar, with vasculature and long-distance wiring taking the place of the upper metal layers.

The vast majority of the volume in both modern digital circuits and brain circuits consists of wiring. The transistors and the synapses are just tiny little things in comparison.

Modern computer memory systems have a wire energy efficiency of around 10^-12 to 10^-13 J/bit/mm. The limit for reliable signals is perhaps only 10x better. I think the absolute limit for unreliable bits is 10^-15 or so; I'll check the citation for that when I get home. Wire energy efficiency for bandwidth is not improving at all and hasn't since the '90s. The next big innovation is simply moving the memory closer - that's about all we can do.

The min wire energy is close to that predicted by a simple model of a molecular wire where each molecule sized 1 nm section is a switch (10^-19 to 10^-21 * 10^6 = 10^-13 to 10^-15). In reality of course it's somewhat more complex - smaller wires actually dissipate more energy, but also require less to represent a signal.
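That per-millimetre figure falls straight out of the model. A quick check (a sketch of the molecular-wire model described above; the per-switch energy range is the one assumed in the comment):

```python
# Molecular-wire model: treat a 1 mm wire as a chain of 1 nm "switches",
# each dissipating somewhere between 1e-21 and 1e-19 J per bit carried.
SEGMENTS_PER_MM = 1e6  # a 1 mm wire contains 1e6 one-nanometre segments

for e_switch in (1e-19, 1e-21):
    e_per_bit_mm = e_switch * SEGMENTS_PER_MM
    print(f"{e_switch:.0e} J/switch -> {e_per_bit_mm:.0e} J/bit/mm")
```

This reproduces the quoted 10^-13 to 10^-15 J/bit/mm range.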

Also keep in mind that synapses are analog devices which require analog impulse inputs and outputs - they do more work than a single binary switch.

So Moore's law is ending and we are already pretty close to the limits of wire efficiency. If you add up the wiring paths in the brain you get a similar estimate. Axons/dendrites appear to be at least as efficient as digital wires and are thus near optimal. None of this should be surprising - biological cells are energy-optimal true nanocomputers. Neural circuits evolved from the bottom up - there was never a time at which they were inefficient.

However, it is possible to avoid wire dissipation entirely with some reversible signal path. Optics is one route, but photons and thus photonic devices are impractically large. The other option is superconducting circuits, which work in labs but also have far too many disadvantages to be practical yet. Eventually cold superconducting reversible computers could bypass energy issues, but that tech appears to be far off.

comment by Wei_Dai · 2015-07-27T06:18:30.987Z · score: 2 (2 votes) · LW · GW

The other option is superconducting circuits, which work in labs but also have far too many disadvantages to be practical yet. Eventually cold superconducting reversible computers could bypass energy issues, but that tech appears to be far off.

What about just replacing the copper wire inside a conventional CMOS chip with a superconductor? It took some searching, but I managed to find a paper titled Cryogenically Cooled CMOS which talks about the benefits and feasibility of doing this. Quoting from the relevant section:

If lower interconnect resistance improves performance, the use of ‘zero-resistance’ superconductors should provide the ultimate in performance improvement. Unfortunately, although performance improvements would be expected, they would not be as great as the simplistic statement above suggests. Furthermore, several technical obstacles remain before high-temperature superconductors (HTS) can be effectively integrated with VLSI technology.

Actually, as we will see, the resistance of superconducting films is not truly zero, except in the limits of zero frequency or zero temperature. Nevertheless, at 77 K and 1 GHz, measurements on patterned YBa2Cu3O7-x (YBCO) films have already demonstrated surface resistances one to two orders of magnitude below those for Cu under the same conditions. Theoretical predictions for YBCO suggest four orders of magnitude would be possible. Unfortunately, good-quality (epitaxial) YBCO films grow best on perovskite substrates having high dielectric constants. Lanthanum aluminate (LaAlO3), which is a popular substrate for HTS microwave circuits, has a relative dielectric constant of 25. Assuming the same interconnect geometry, this makes all capacitances more than 6× greater than would be the case for a SiO2 dielectric. Thus, some of the low-resistance benefits of HTS films are cancelled by the high dielectric constants of their associated substrates.

So it looks like there's no fundamental reason why it couldn't be done, just a matter of finding the right substrate material and solving other engineering problems.

comment by jacob_cannell · 2015-07-27T06:37:55.595Z · score: 2 (2 votes) · LW · GW

What about just replacing the copper wire inside a conventional CMOS chip with a superconductor?

That is the type of tech I was referring to by superconducting circuits as precursor to full reversible. From what I understand, if you chill everything down then you also change resistance in the semiconductor along with all the other properties, so it probably isn't as easy as just replacing the copper wires.

A room temperature superconductor circuit breakthrough is one of the main wild cards over the next decade or so. Cryogenic cooling is pretty impractical for mainstream computing.

So it looks like there's no fundamental reason why it couldn't be done, just a matter of finding the right substrate material and solving other engineering problems.

Yeah, it's just a question of timetables. If it's decades away, we have a longer period of stalled Moore's law during which AGI will slowly surpass the brain, rather than rapidly.

comment by Wei_Dai · 2015-07-29T08:26:20.692Z · score: 0 (0 votes) · LW · GW

From what I understand, if you chill everything down then you also change resistance in the semiconductor along with all the other properties, so it probably isn't as easy as just replacing the copper wires.

From the sources I've read, there aren't any major issues running CMOS at 77 K, you only run into problems at lower temperatures, less than 40 K. I guess people aren't seriously trying this because it's probably not much harder to go directly to full superconducting computers (i.e., with logic gates made out of superconductors as well) which offers a lot more benefits. Here is an article about a major IARPA project pursuing that. It doesn't seem safe to assume that we'll get AGI before we get superconducting computers. Do you disagree, if so can you explain why?

comment by jacob_cannell · 2015-07-29T15:18:11.142Z · score: 0 (0 votes) · LW · GW

There was similar interest in superconducting chips about a decade ago which was pretty much the same story - DARPA/IARPA spearheading research, major customer would be US intelligence.

The 500 gigaflops per watt figure is about 100 times more computation/watt than on a current GPU - which is useful because it shows that about 99% of GPU energy cost is interconnect/wiring.
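The inference in that sentence can be made explicit (a sketch; the 100x figure is the one quoted above): if removing interconnect dissipation alone buys a ~100x improvement in computation per watt, interconnect must have been consuming roughly 99% of the energy budget.

```python
# If superconducting interconnect yields ~100x more flops/W, then wire
# energy was ~(1 - 1/100) = 99% of the original GPU power budget.
speedup = 100
wire_fraction = 1 - 1 / speedup
print(f"interconnect fraction of energy budget: {wire_fraction:.0%}")
```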

In terms of viability and impact, it is still uncertain how much funding superconducting circuits will require to become competitive. And even if it is competitive in some markets for say the NSA, that doesn't make it competitive for general consumer markets. Cryogenic cooling means these things will only work in very special data rooms - so the market is more niche.

The bigger issue though is total cost competitiveness. GPUs are sort of balanced in that the energy cost is about half of the TCO (total cost of ownership). It is extremely unlikely that superconducting chips will be competitive in total cost of computation in the near future. All the various tradeoffs in a superconducting design and the overall newness of the tech imply lower circuit densities. Smaller market implies less research amortization and higher costs. Even if a superconducting chip used 0 energy, it will still be much more expensive and provide less ops/$.

Once we run out of scope for further CPU/GPU improvements over the next decade, then the TCO budget will shift increasingly towards energy, and these types of chips will become increasing viable. So I'd estimate that the probability of impact in the next 5 years is small, but 10 years or more out it's harder to say. To make a more viable forecast I'd need to read more on this tech and understand more about the costs of cryogenic cooling.

But really roughly - the net effect of this could be to add another leg to Moore's-law-style growth, at least for server computation.

comment by V_V · 2015-07-29T10:06:46.632Z · score: -1 (1 votes) · LW · GW

I guess people aren't seriously trying this because it's probably not much harder to go directly to full superconducting computers (i.e., with logic gates made out of superconductors as well) which offers a lot more benefits

It takes energy to maintain cryogenic temperatures, probably much more than the energy that would be saved by eliminating wire resistance. If I understand correctly, the interest in superconducting circuits is mostly in using them to implement quantum computation. Barring room-temperature superconductors, there are probably no benefits to using superconducting circuits for classical computation.

comment by Wei_Dai · 2015-07-29T12:26:59.548Z · score: 0 (0 votes) · LW · GW

From the article I linked to:

Studies indicate the technology, which uses low temperatures in the 4-10 kelvin range to enable information to be transmitted with minimal energy loss, could yield one-petaflop systems that use just 25 kW and 100 petaflop systems that operate at 200 kW, including the cryogenic cooler. Compare this to the current greenest system, the L-CSC supercomputer from the GSI Helmholtz Center, which achieved 5.27 gigaflops-per-watt on the most-recent Green500 list. If scaled linearly to an exaflop supercomputing system, it would consume about 190 megawatts (MW), still quite a bit short of DARPA targets, which range from 20MW to 67MW.

ETA: 100 petaflops per 200 kW equals 500 gigaflops per watt, so it's estimated to be about 100 times more energy efficient.
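The arithmetic checks out (a quick check using the figures quoted from the article; the 5.27 gigaflops/W baseline is the Green500 L-CSC number mentioned in the quote):

```python
# 100 petaflops at 200 kW, vs. the Green500 leader at 5.27 gigaflops/W.
flops = 100e15          # 100 petaflops
power_w = 200e3         # 200 kW, including the cryogenic cooler
gflops_per_watt = flops / power_w / 1e9

print(f"superconducting system: {gflops_per_watt:.0f} Gflops/W")
print(f"vs. L-CSC baseline:     {gflops_per_watt / 5.27:.0f}x more efficient")
```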

comment by V_V · 2015-07-29T15:40:41.469Z · score: -1 (1 votes) · LW · GW

Ok, I guess it depends on how big your computer is, due to the square-cube law. Bigger computers would be at an advantage.

comment by V_V · 2015-07-26T09:08:12.081Z · score: 1 (3 votes) · LW · GW

As the efficiency of a logically irreversible computer approaches the Landauer limit, its speed must approach zero, for the same reason why as the efficiency of a heat engine approaches the Carnot limit its speed must approach zero.

I don't have an equation at hand, but I wouldn't be surprised if it turned out that biological neurons operate close to the physical limit for their speed.

EDIT:

I found this Physics Stack Exchange answer about the thermodynamic efficiency of human muscles.

comment by Wei_Dai · 2015-07-26T11:15:14.919Z · score: 4 (4 votes) · LW · GW

Hmm... after more searching, I found this page, which says:

The faster the processor runs, the larger the energy required to maintain the bit in the predefined 1 or 0 state. You can spend a lot of time arguing about a sensible value but something like the following is not too unreasonable: The Landauer switching limit at finite (GHz) clock speed:

Energy to switch 1 bit > 100 k_B T ln(2)

So biological neurons still don't seem to be near the physical limit, since they fire at only around 100 Hz and, according to my previous link, dissipate millions to billions of times more than k_B T ln(2).
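Plugging in numbers (a sketch; body temperature is assumed, and the ~10^-13 J per synaptic event figure comes from elsewhere in this thread):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # roughly body temperature, K

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit
clocked = 100 * landauer           # the "100 kT ln(2)" GHz-clock figure
synapse = 1e-13                    # J per synaptic event (thread estimate)

print(f"kT ln 2:             {landauer:.2e} J")
print(f"100 kT ln 2:         {clocked:.2e} J")
print(f"synapse / (kT ln 2): {synapse / landauer:.1e}")
```

The ratio comes out in the tens of millions, consistent with the "millions to billions" claim.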

comment by jacob_cannell · 2015-07-27T00:07:21.655Z · score: 2 (2 votes) · LW · GW

A 100kT signal is only reliable for a distance of a few nanometers. The energy cost is all in pushing signals through wires. So the synapse signal is a million times larger than 100kT to cross a distance of around 1 mm or so, which works out to 10^-13 J per synaptic event. Thus 10 watts for 10^14 synapses and a 1 Hz rate. For a 100 Hz rate, the average distance would need to be less.
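Those figures multiply out to the quoted power budget (a quick check of the numbers in this comment, which are order-of-magnitude estimates):

```python
# Whole-brain synaptic power budget from the comment's estimates.
E_PER_EVENT = 1e-13   # J per synaptic event (signal driven ~1 mm)
SYNAPSES = 1e14       # synapse count (order of magnitude)
RATE_HZ = 1.0         # average firing rate

power_watts = E_PER_EVENT * SYNAPSES * RATE_HZ
print(f"total synaptic power: ~{power_watts:.0f} W")
```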

comment by V_V · 2015-07-27T15:54:56.533Z · score: -1 (1 votes) · LW · GW

Energy to switch 1 bit > 100 k_B T ln(2)

Not my field of expertise, but I don't understand where this bound comes from. In this paper, for short erasure cycles they find an exponential law, although they don't give the constants (I suppose they are system-dependent).

comment by snarles · 2015-07-25T04:05:59.569Z · score: 5 (5 votes) · LW · GW

There is no way to raise a human safely if that human has the power to exponentially increase their own capabilities and survive independently of society.

comment by kokotajlod · 2015-07-26T20:35:56.555Z · score: 0 (2 votes) · LW · GW

Yep. "The Melancholy of Haruhi Suzumiya" can be thought of as an example of something in the same reference class.

comment by AndreInfante · 2015-07-24T20:14:10.832Z · score: 3 (5 votes) · LW · GW

To rebut: sociopaths exist.

comment by jacob_cannell · 2015-07-27T06:26:14.844Z · score: 2 (2 votes) · LW · GW

Super obvious re-rebut: sociopaths exist, and yet civilization endures.

Also, we can rather obviously test in safe simulation sandboxes and avoid copying sociopaths. The argument that sociopaths are a fundamental showstopper must then be based on some magical view of the brain (because obviously evolution succeeds in producing non-sociopaths, so we can copy its techniques if they are nonmagical).

Remember the argument is against existential threat level UFAI, not some fraction of evil AIs in a large population.

comment by AndreInfante · 2015-07-27T20:43:52.113Z · score: 0 (4 votes) · LW · GW

I think you misunderstand my argument. The point is that it's ridiculous to say that human beings are 'universal learning machines' and you can just raise any learning algorithm as a human child and it'll turn out fine. We can't even raise 2-5% of HUMAN CHILDREN as human children and have it reliably turn out okay.

Sociopaths are different from baseline humans by a tiny degree. It's got to be a small number of single-gene mutations. A tiny shift in information. And that's all it takes to make them consistently UnFriendly, regardless of how well they're raised. Obviously, AIs are going to be more different from us than that. And that's a pretty good reason to think that we can't just blithely assume that putting Skynet through preschool is going to keep us safe.

Human values are obviously hard coded in large part, and the hard coded portions seem to be crucial. That hard coding is not going to be present in an arbitrary AI, which means we have to go and duplicate it out of a human brain. Which is HARD. Which is why we're having this discussion in the first place.

comment by jacob_cannell · 2015-07-28T01:31:59.053Z · score: 1 (1 votes) · LW · GW

The point is that it's ridiculous to say that human beings are 'universal learning machines'

No - it is not. See the article for the in depth argument and citations backing up this statement.

you can just raise any learning algorithm as a human child and it'll turn out fine.

Well almost - A ULM also requires a utility function or reward circuitry with some initial complexity, but we can also use the same universal learning algorithms to learn that component. It is just another circuit, and we can learn any circuit that evolution learned.

And that's all it takes to make them consistently UnFriendly, regardless of how well they're raised.

Sure - which is why I discussed sim sandbox testing. Did you read about my sim sandbox idea? We test designs in a safe sandbox sim, and we don't copy sociopaths.

Obviously, AIs are going to be more different from us than that

No, this isn't obvious at all. AGI is going to be built from the same principles as the brain - because the brain is a universal learning machine. The AGI's mind structure will be learned from training and experiential data such that the AI learns how to think like humans and learns how to be human - just like humans do. Human minds are software constructs - without that software we would just be animals (feral humans). An artificial brain is just another computer that can run the human mind software.

That hard coding is not going to be present in an arbitrary AI, which means we have to go and duplicate it out of a human brain. Which is HARD.

Yes, but it's only a part of the brain and a fraction of the brain's complexity, so obviously it can't be harder than reverse engineering the whole brain.

comment by AndreInfante · 2015-07-28T03:40:37.366Z · score: 0 (2 votes) · LW · GW

A ULM also requires a utility function or reward circuitry with some initial complexity, but we can also use the same universal learning algorithms to learn that component. It is just another circuit, and we can learn any circuit that evolution learned.

Okay, so we just have to determine human terminal values in detail, and plug them into a powerful maximizer. I'm not sure I see how that's different from the standard problem statement for friendly AI. Learning values by observing people is exactly what MIRI is working on, and it's not a trivial problem.

For example: say your universal learning algorithm observes a human being fail a math test. How does it determine that the human being didn't want to fail the math test? How does it cleanly separate values from their (flawed) implementation? What does it do when peoples' values differ? These are hard questions, and precisely the ones that are being worked on by the AI risk people.

Other points of critique:

Saying the phrase "safe sandbox sim" is much easier than making a virtual machine that can withstand a superhuman intelligence trying to get out of it. Even if your software is perfect, it can still figure out that its world is artificial and figure out ways of blackmailing its captors. Probably doing what MIRI is looking into, and designing agents that won't resist attempts to modify them (corrigibility) is a more robust solution.

You want to be careful about just plugging in a learned human utility function into a powerful maximizer, and then raising it. If it's maximizing its own utility, which is necessary if you want it to behave anything like a child, what's to stop it from learning human greed and cruelty, and becoming an eternal tyrant? I don't trust a typical human to be god.

And even if you give up on that idea, and have to maximize a utility function defined in terms of humanity's values, you still have problems. For starters, you want to be able to prove formally that its goals will remain stable as it self-modifies, and it won't create powerful sub-agents who don't share those goals. Which is the other class of problems that MIRI works on.

comment by [deleted] · 2015-08-03T03:59:33.386Z · score: 1 (1 votes) · LW · GW

Okay, so we just have to determine human terminal values in detail, and plug them into a powerful maximizer.

Why do you even go around thinking that the concept of "terminal values", which is basically just a consequentialist steelmanning Aristotle, cuts reality at the joints?

For starters, you want to be able to prove formally that its goals will remain stable as it self-modifies

That part honestly isn't that hard once you read the available literature about paradox theorems.

comment by jacob_cannell · 2015-07-28T04:14:23.479Z · score: 0 (2 votes) · LW · GW

Okay, so we just have to determine human terminal values in detail, and plug them into a powerful maximizer.

No - not at all. Perhaps you have read too much MIRI material, and not enough of the neuroscience and machine learning I referenced. An infant is not born with human 'terminal values'. It is born with some minimal initial reward learning circuitry to bootstrap learning of complex values from adults.

Stop thinking of AGI as some weird mathy program. Instead think of brain emulations - and then you have obvious answers to all of these questions.

Saying the phrase "safe sandbox sim" is much easier than making a virtual machine that can withstand a superhuman intelligence trying to get out of it.

You apparently didn't read my article or links to earlier discussion? We can easily limit the capability of minds by controlling knowledge. A million smart evil humans is dangerous - but only if they have modern knowledge. If they have only say medieval knowledge, they are hardly dangerous. Also - they don't realize they are in a sim. Also - the point of the sandbox sims is to test architectures, reward learning systems, and most importantly - altruism. Designs that work well in these safe sims are then copied into less safe sims and finally the real world.

Consider the orthogonality thesis - AI of any intelligence level can be combined with any values. Thus we can test values on young/limited AI before scaling up their power.

Sandbox sims can be arbitrarily safe. It is the only truly practical workable proposal to date. It is also the closest to what is already used in industry. Thus it is the solution by default.

Even if your software is perfect, it can still figure out that its world is artificial and figure out ways of blackmailing its captors

Ridiculous nonsense. Many humans today are aware of the sim argument. The gnostics were aware in some sense 2,000 years ago. Do you think any of them broke out? Are you trying to break out? How?

If it's maximizing its own utility, which is necessary if you want it to behave anything like a child, what's to stop it from learning human greed and cruelty, and becoming an eternal tyrant?

Again, stop thinking we create a single AI program and then we are done. It will be a large-scale evolutionary process, with endless selection, testing, and refinement. We can select for super altruistic moral beings - like Buddha/Gandhi/Jesus level. We can take the human capability for altruism, refine it, and expand on it vastly.

For starters, you want to be able to prove formally that its goals will remain stable as it self-modifies,

Quixotic waste of time.

comment by AndreInfante · 2015-07-28T05:31:28.781Z · score: -1 (5 votes) · LW · GW

So, to sum up, your plan is to create an arbitrarily safe VM, and use it to run brain-emulation-style de novo AIs patterned on human babies (presumably with additional infrastructure to emulate the hard-coded changes that occur in the brain during development to adulthood: adult humans are not babies + education). You then want to raise many, many iterations of these things under different conditions to try to produce morally superior specimens, then turn those AIs loose and let them self-modify to godhood.

Is that accurate? (Seriously, let me know if I'm misrepresenting your position).


A few problems immediately come to mind. We'll set aside the moral horror of what you just described as a necessary evil to avert the apocalypse, for the time being.

More practically, I think you're being racist against weird mathy programs.

For starters, I think weird mathy programs will be a good deal easier to develop than digital people. Human beings are not just general optimizers. We have modules that function roughly like one, which we use under some limited circumstances, but anyone who's ever struggled with procrastination or put their keys in the refrigerator knows that your goal-oriented systems are entangled with a huge number of cheap heuristics at various levels, many of which are not remotely goal-oriented.

All of this stuff is deeply tangled up with what we think of as the human 'utility function,' because evolution has no incentive to design a clean separation between planning and values. Replicating all of that accurately enough to get something that thinks and behaves like a person is likely much harder than making a weird mathy program that's good at modelling the world and coming up with plans.

There's also the point that there really isn't a good way to make a brain emulation smarter. Weird, mathy programs - even ones that use neural networks as subroutines - often have obvious avenues to making them smarter, and many can scale smoothly with processing resources. Brain emulations are much harder to bootstrap, and it'd be very difficult to preserve their behavior through the transition.

My best guess is, they'd probably go nuts and end up as an eldritch horror. And if not, they're still going to get curb stomped by the first weird mathy program to come along, because they're saddled with all of our human imperfections and unnecessary complexity. The upshot of all of this is that they don't serve the purpose of protecting us from future UFAIs.

Finally, the process you described isn't really something you can start on (aside from the VM angle) until you already have human-level AGIs, and a deep and total understanding of all of the operation of the human brain. Then, while you're setting up your crazy AI concentration camp and burning tens of thousands of man-years of compute time searching for AI Buddha, some bright spark in a basement with a GPU cluster has the much easier task of just kludging together something smart enough to recursively self-improve. You're in a race with a bunch of people trying to solve a much easier problem, and (unlike MIRI) you don't have decades of lead time to get a head start on the problem. Your large-scale evolutionary process would take far too much time and money to actually save the world.

In short, I think it's a really bad idea. Although now that I understand what you're getting at, it's less obviously silly than what I originally thought you were proposing. I apologize.

comment by jacob_cannell · 2015-07-28T22:29:58.396Z · score: 4 (4 votes) · LW · GW

So, to sum up, your plan is to create an arbitrarily safe VM, and use it to run brain-emulation-style de novo AIs

No. I said:

Stop thinking of AGI as some weird mathy program. Instead think of brain emulations - and then you have obvious answers to all of these questions.

I used brain emulations as analogy to help aid your understanding. Because unless you have deep knowledge of machine learning and computational neuroscience, there are huge inferential distances to cross.

Human beings are not just general optimizers.

Yes we are. I have made a detailed, extensive, citation-full, and well reviewed case that human minds are just that.

All of our understanding about the future of AGI is based ultimately on our models of the brain and AI in general. I am claiming that the MIRI viewpoint is based on an outdated model of the brain, and a poor understanding of the limits of computation and intelligence.

I will summarize for one last time. I will then no longer repeat myself because it is not worthy of my time - any time spent arguing this is better spent preparing another detailed article, rather than a little comment.

There is extensive uncertainty concerning how the brain works and what types of future AI are possible in practice. In situations of such uncertainty, any good sane probabilistic reasoning agent should come up with a multimodal distribution that spreads belief across several major clusters. If your understanding of AI comes mainly from reading LW - you are probably biased beyond hope. I'm sorry, but this is true. You are stuck in box and don't even know it.

Here are the main key questions that lead to different belief clusters:

  • Are the brain's algorithms for intelligence complex or simple?
  • And related - are human minds mainly software or mainly hardware?
  • At the practical computational level, does the brain implement said algorithms efficiently or not?

If the human mind is built out of a complex mess of hardware-specific circuits, and the brain is far from efficient, then there is little to learn from the brain. This is Yudkowsky/MIRI's position. This viewpoint leads to a focus on pure math and avoidance of anything brain-like (such as neural nets). In this viewpoint hard takeoff is likely, AI is predicted to be nothing like human minds, etc.

If you believe that the human mind is complex and messy hardware, but the brain is efficient, then you get Hanson's viewpoint, where the future is dominated by brain emulations. The brain ems win over brain-inspired AI because scanning real brain circuitry is easier than figuring out how it works.

Now what if the brain's algorithms are not complex, and the brain is efficient? Then you get my viewpoint cluster.

These questions are empirical - and they can be answered today. In fact, I realized all this years ago and spent a huge amount of time learning more about the future of computer hardware, the limits of computation, machine learning, and computational neuroscience.

Yudkowsky, Hanson, and to some extent Bostrom were all heavily inspired by the highly influential evolved modularity hypothesis in ev psych from Tooby and Cosmides. In this viewpoint, the brain is complex, and most of our algorithmic content is hardware-based rather than software. I have argued that this viewpoint has been tested empirically and is now disproven. The brain is built out of relatively simple universal learning algorithms. It will be essentially impossible to build practical AGI that is very different from the brain (remember, AGI is defined as software which can do everything the brain does).

Bostrom/Yudkowsky have also argued that the brain is very far from efficient. For example, from true sources of disagreement:

Human neurons run at less than a millionth the speed of transistors, transmit spikes at less than a millionth the speed of light, and dissipate around a million times the heat per synaptic operation as the thermodynamic minimum for a one-bit operation at room temperature. Physically speaking, it ought to be possible to run a brain at a million times the speed without shrinking it, cooling it, or invoking reversible computing or quantum computing.

The first two statements are true, the third statement is problematic, and the thrust of the conclusion is incorrect. The minimum realistic energy for a brain-like circuit is probably close to what the brain actually uses:

  • the Landauer bound depends on speed and reliability. The 10^-21 J/bit bound only applies to a signal of infinitely low frequency. For realistic fast, reliable signals, the bound is 100 times higher: around 10^-19 J/bit.
  • the Landauer bound applies to single 1-bit ops. The fundamental bound for a 32-bit flop is around 10^5 or 10^6 times higher. Moore's Law is ending and we are actually close to these bounds already. Synapses perform analog ops which have lower cost than a 32-bit flop, but still a much higher cost than a single-bit op.
  • most of the energy consumption in any advanced computer comes from wire dissipation, not switch dissipation. Signaling in the brain uses roughly 0.5x10^-14 J/bit/mm (5 fJ/bit/mm) [2], which appears to be within an order of magnitude or two of optimal, and is perhaps one order of magnitude more efficient than current computers. Wire signal energy in computers is not improving significantly. For example, for 40nm tech in 2010, the wire energy is 240 fJ/bit/mm, and is predicted to be around 150 to 115 fJ/bit/mm by 2017 [3]. The practical limit is perhaps around 1 fJ/bit/mm, but that would probably require much lower speeds.

These errors add up to around 6 orders of magnitude or so. The brain is near the limits of energy efficiency for what it does in terms of irreversible computation. No practical machine we will ever build in the near future is going to be many orders of magnitude more efficient than the brain. Yes, eventually reversible and quantum computing could perhaps result in large improvements, but those technologies are far off and will come long after neuromorphic AGI.
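A quick back-of-the-envelope check of the figures above (only the Boltzmann constant is an external input; the 100x reliability penalty and the per-mm wire energies are the numbers quoted in this comment, not independent measurements):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Ideal Landauer limit for erasing one bit at room temperature
landauer = k_B * T * math.log(2)   # ~2.9e-21 J/bit

# ~100x penalty for fast, reliable signaling (figure quoted above)
reliable_bit = 100 * landauer      # ~2.9e-19 J/bit

# Wire dissipation per bit per mm (figures quoted above)
brain_wire = 5e-15        # J/bit/mm: ~5 fJ/bit/mm for brain signaling
cmos_40nm_wire = 240e-15  # J/bit/mm: 40nm CMOS circa 2010

print(landauer)                     # ~2.87e-21 J/bit
print(cmos_40nm_wire / brain_wire)  # 48.0: 40nm wires dissipate ~48x more per bit-mm
```

So the "one order of magnitude more efficient than current computers" claim for brain wiring is, if anything, conservative on these numbers.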

comment by [deleted] · 2015-08-03T04:01:35.280Z · score: 0 (0 votes) · LW · GW

Yes we are. I have made a detailed, extensive, citation-full, and well reviewed case that human minds are just that.

That isn't quite correct. We do have hard wiring that raises and lowers the from-the-inside importance of specific features present in our learning data. That is, we have a nontrivial inductive bias which not all possible minds will have, even when we start by assuming that all minds are semi-modular universal learners.

comment by AndreInfante · 2015-07-29T01:44:37.533Z · score: -1 (3 votes) · LW · GW

Yes, I've read your big universal learner post, and I'm not convinced. This does seem to be the crux of our disagreement, so let me take some time to rebut:

First off, you're seriously misrepresenting the success of deep learning as support for your thesis. Deep learning algorithms are extremely powerful, and probably have a role to play in building AGI, but they aren't the end-all, be-all of AI research. For starters, modern deep learning systems are absolutely fine-tuned to the task at hand. You say that they have only "a small number of hyperparameters," which is something of a misrepresentation. There are actually quite a few of these hyperparameters in state-of-the-art networks, and there are more in networks tackling more difficult tasks.

Tuning these hyperparameters is hard enough that only a small number of researchers can do it well enough to achieve state of the art results. We do not use the same network for image recognition and audio processing, because that wouldn't work very well.

We tune the architecture of deep learning systems to the task at hand. Presumably, if we can garner benefits from doing that, evolution has an incentive to do the same. There's a core, simple algorithm at work, but targeted to specific tasks. Evolution has no incentive to produce a clean design if kludgy tweaks give better results. You argue that evolution has a bias against complexity, but that certainly hasn't stopped other organs from developing complex structure to make them marginally better at the task.

There's also the point that there's plenty of tasks that deep learning methods can't solve yet (like how to store long-term memories of a complex and partially observed system in an efficient manner) - not to mention higher level cognitive skills that we have no clue how to approach.

Nobody thinks this stuff is just a question of throwing yet larger deep learning networks at the problem. They will be solved by finding different hard-wired network architectures that make the problem more manageable by knowing things about it in advance.


The ferret brain rewiring result is not a slam-dunk for the universal learning by itself. It just means that different brain modules can switch which pre-programmed neural algorithms they implement on the fly. Which makes sense, because on some level these things have to be self-organizing in the first place to be compactly genetically coded.

The real test here would be to take a brain and give it an entirely new sense - something that bears no resemblance to any sense it or any of its ancestors has ever had, and see if it can use that sense as naturally as hearing or vision. Personally, I doubt it. Humans can learn echolocation, but they can't learn echolocation the way bats and dolphins can learn echolocation - and echolocation bears a fair degree of resemblance to other tasks that humans already have specialized networks for (like pinpointing the location of a sound in space).

Notably, the general learner hypothesis does not explain why non-surgically-modified brains are so standardized in structure and functional layout. Something that you yourself bring up in your article.

It also does not explain why birds are better at language tasks than cats. Cat brains are much larger. The training rewards in the lab are the same. And, yet, cats significantly underperform parrots at every single language-related task we can come up with. Why? Because the parrots have had a greater evolutionary pressure to be good at language-style tasks - and, as a result, they have evolved task-specific neurological algorithms to make it easier.

Also, plenty of mammals, fresh out of the womb, have complex behaviors and vocalizations. Humans are something of an outlier, due to being born premature by mammal standards. If mammal brains are 99% universal learning, why can baby cows walk within minutes of birth?

Look, obviously, to some degree, both things are true. The brain is capable of general learning to some degree. Otherwise, we'd never have developed math. It also obviously has hard-coded specialized modules, to some degree, which is why (for example) all human cultures develop language and music, which isn't something you'd expect if we were all starting from zero. The question is which aspect dominates brain performance. You're proposing an extreme swing to one end of the possibility space that doesn't seem even remotely plausible - and then you're using that assumption as evidence that no non-brain-like intelligence can exist.

What about Watson? It's the best-performing NLP system ever made, and it's absolutely a "weird mathy program." It uses neural networks as subroutines, but the architecture of the whole bears no resemblance to the human brain. It's not a simple universal learning algorithm. If you gave a single deep neural network access to the same computational resources, it would underperform Watson. That seems like a pretty tough pill to swallow if 'simple universal learner' is all there is to intelligence.


Finally, I don't have the background to refute your argument on the efficiency of the brain (although I know clever people who do who disagree with you). But, taking it as a given that you're right, it sounds like you're assuming all future AIs will draw the same amount of power as a real brain and fit in the same spatial footprint. Well... what if they didn't? What if the AI brain is the size of a fridge and cooled with LN2 and consumes as much power as a city block? Surely at the physical limits of computation you believe in, that would be able to beat the pants off little old us.

To sum up: yes, I've read your thing. No, it's not as convincing as you seem to believe.

comment by jacob_cannell · 2015-07-29T20:31:04.168Z · score: 1 (1 votes) · LW · GW

It also does not explain why birds are better at language tasks than cats. Cat brains are much larger. The training rewards in the lab are the same. And, yet, cats significantly underperform parrots at every single language-related task we can come up with. Why? Because the parrots have had a greater evolutionary pressure to be good at language-style tasks - and, as a result, they have evolved task-specific neurological algorithms to make it easier.

Cat brains are much larger, but physical size is irrelevant. What matters is neuron/synapse count.

According to my ULM theory - the most likely explanation for the superior learning ability of parrots is a larger number of neurons/synapses in their general learning modules - (whatever the equivalent of the cortex is in birds) and thus more computational power available for general learning.

Stop right now, and consider this bet - I will bet that parrots have more neurons/synapses in their cortex-equivalent brain regions than cats.

Now a little google searching leads to this blog article which summarizes this recent research - Complex brains for complex cognition - neuronal scaling rules for bird brains,

From the abstract:

We show that in parrots and songbirds the total brain mass as well as telencephalic mass scales approximately linearly with the total number of neurons, i.e. neuronal density does not change significantly as brains get larger. The neuronal densities in the telencephalon exceed those observed in the cerebral cortex of primates by a factor of 2-8. As a result, the numbers of telencephalic neurons in the brains of the largest birds examined (raven, kea and macaw) equal or exceed those observed in the cerebral cortex of many species of monkeys.

Finally, our findings of comparable numbers of neurons in the cerebral cortex of medium-sized primates and in the telencephalon of large parrots and songbirds (particularly corvids) strongly suggest that large numbers of forebrain neurons, and hence a large computational capacity, underpin the behavioral and cognitive complexity reported for parrots and songbirds, despite their small brain size.

The telencephalon is believed to be the equivalent of the cortex in birds. The cortex of the smallest monkeys has about 400 million neurons, whereas the cat's cortex has about 300 million neurons. A medium-sized monkey such as a night monkey has more than 1 billion cortical neurons.

comment by AndreInfante · 2015-07-29T21:53:47.844Z · score: 1 (1 votes) · LW · GW

Interesting! I didn't know that, and that makes a lot of sense.

If I were to restate my objection more strongly, I'd say that parrots also seem to exceed chimps in language capabilities (chimps having six billion cortical neurons). The reason I didn't bring this up originally is that chimp language research is a horrible, horrible field full of a lot of bad science, so it's difficult to be too confident in that result.

Plenty of people will tell you that signing chimps are just as capable as Alex the parrot - they just need a little bit of interpretation from the handler, and get too nervous to perform well when the handler isn't working with them. Personally, I think that sounds a lot like why psychics suddenly stop working when James Randi shows up, but obviously the situation is a little more complicated.

comment by jacob_cannell · 2015-07-29T22:18:12.782Z · score: 3 (3 votes) · LW · GW

I'd strongly suggest the movie Project Nim, if you haven't seen it. In some respects chimpanzee intelligence develops faster than that of a human child, but it also plateaus much earlier. Their childhood development period is much shorter.

To a first approximation, general intelligence in animals can be predicted by the number of neurons/synapses in general learning modules, but this isn't the only factor. I don't have an exact figure, but that poster article suggests parrots have perhaps 1-3 billion-ish cortical neuron equivalents.

The next most important factor is probably degree of neoteny, or learning window. Human intelligence develops over the span of 20 years. Parrots seem exceptional in terms of lifespan and are thus perhaps more human-like - they maintain a childlike state for much longer. We know from machine learning that the 'learning rate' is a super important hyperparameter - learning faster has a huge advantage, but if you learn too fast you get inferior long-term results for your capacity. Learning slowly is obviously more costly, but it can generate more efficient circuits in the long term.
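The learning-rate tradeoff mentioned above is easy to demonstrate on a toy problem (a hypothetical sketch, not a model of biology): SGD with noisy gradients on a simple quadratic converges quickly with a large step size, but settles at a higher steady-state error than a slow learner given enough time.

```python
import random

def final_error(lr, steps=20000, noise=1.0, seed=0):
    """Run noisy SGD on f(x) = x^2 / 2 and return |x| at the end."""
    rng = random.Random(seed)
    x = 10.0
    for _ in range(steps):
        grad = x + rng.gauss(0.0, noise)  # true gradient plus noise
        x -= lr * grad
    return abs(x)

# Average the final error over several random seeds
fast = sum(final_error(0.5, seed=s) for s in range(20)) / 20
slow = sum(final_error(0.01, seed=s) for s in range(20)) / 20

# The fast learner converges in far fewer steps, but its long-run
# error is dominated by gradient noise and stays higher.
print(fast > slow)  # True
```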

I inferred/guessed that parrots have very long neotenic learning windows, and the articles on Alex seem to confirm this.

Alex reached a vocabulary of about 100 words by age 29, a few years before his untimely death. The trainer - Irene Pepperberg - claims that Alex was still learning and had not reached peak capability. She rated Alex's intelligence as roughly equivalent to that of a 5-year-old. This roughly makes sense if the parrot has about 1/6th our number of cortical neurons, but similar learning efficiency and a long learning window.

To really compare chimp vs parrot learning ability, we'd need more than a handful of samples. There is also a large selection effect here - because parrots make reasonably good pets, whereas chimps are terrible dangerous pets. So we haven't tested chimps as much. Alex is more likely to be a very bright parrot, whereas the handful of chimps we have tested are more likely to be average.

comment by AndreInfante · 2015-07-30T10:19:00.527Z · score: 0 (0 votes) · LW · GW

Not much to add here, except that it's unlikely that Alex is an exceptional example of a parrot. The researcher purchased him from a pet store at random to try to eliminate that objection.

comment by Wei_Dai · 2015-07-30T07:11:46.872Z · score: 0 (0 votes) · LW · GW

The neuronal densities in the telencephalon exceed those observed in the cerebral cortex of primates by a factor of 2-8.

This is curious. I wonder if bird brains are also more energy efficient as a result of the greater neuronal densities (since that implies shorter wires). According to Ratio of central nervous system to body metabolism in vertebrates: its constancy and functional basis, the metabolism of the brain of Corvus sp (unknown species of genus Corvus, which includes the ravens) is 0.52 cm^3 O2/min, whereas the metabolism of the brain of a macaque monkey is 3.4 cm^3 O2/min. Presumably the macaque monkey has more non-cortical neurons which account for some of the difference, but this still seems impressive if the Corvus sp and the macaque monkey have a similar number of telencephalic/cortical neurons (1.4B for the macaque according to this paper). Unfortunately I can't find the full paper of the abstract you linked to to check the details.
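For scale, those oxygen-consumption figures can be converted to watts, using the standard caloric equivalent of oxygen, roughly 20 J per mL of O2 consumed (that conversion factor is my assumption, not a number from the paper):

```python
J_PER_ML_O2 = 20.1  # approximate caloric equivalent of oxygen, J/mL (assumed)

def brain_watts(ml_o2_per_min):
    """Convert cerebral O2 consumption (mL/min) to power (W)."""
    return ml_o2_per_min * J_PER_ML_O2 / 60.0

corvus = brain_watts(0.52)   # ~0.17 W for the raven brain
macaque = brain_watts(3.4)   # ~1.14 W for the macaque brain
print(macaque / corvus)      # ~6.5x higher metabolic cost for the monkey brain
```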

comment by jacob_cannell · 2015-07-30T16:52:50.297Z · score: 0 (0 votes) · LW · GW

I wonder if bird brains are also more energy efficient as a result of the greater neuronal densities (since that implies shorter wires).

Yes - that seems to be the point of that poster I found earlier.

From an evolutionary point of view it makes sense - birds are under tremendous optimization pressure for mass efficiency. Hummingbirds are a great example of how far evolution can push flight and weight efficiency.

Primate/human brains also appear to have more density optimization than say elephants or cetaceans, but it is interesting that birds are even so much more density efficient. Presumably there are some other tradeoffs - perhaps the bird brain design is too hot to scale up to large sizes, and uses too much resources, etc.

Unfortunately I can't find the full paper of the abstract you linked to to check the details.

It was a recent poster - so perhaps it is still a paper in progress? They claim to have run the isotropic fractionator experiments on bird brains, so they should have estimates of the actual neuron counts to back up their general claims, but they didn't provide those in the abstract. Perhaps the data exists somewhere as an image from the actual presentation. Oh well.

comment by jacob_cannell · 2015-07-29T04:17:35.875Z · score: 0 (4 votes) · LW · GW

Yes, I've read your big universal learner post, and I'm not convinced.

Do you actually believe that evolved modularity is a better explanation of the brain than the ULM hypothesis? Do you have evidence for this belief, or is it simply that which you want to be true? Do you understand why the computational neuroscience and machine learning folks are moving away from the former towards the latter? If you do have evidence, please provide it in a critique in the comments for that post, where I will respond.

First off, you're seriously misrepresenting the success of deep learning as support for your thesis. Deep learning algorithms are extremely powerful, and probably have a role to play in building AGI, but they aren't the end-all, be-all of AI research.

Make some specific predictions for the next 5 years about deep learning or ANNs. Let us see if we actually have significant differences of opinion. If so I expect to dominate you in any prediction market or bets concerning the near term future of AI.

First off the bat, you absolutely can create an AGI that is a pure ANN. In fact the most successful early precursor AGI we have - the Atari DeepMind agent - is a pure ANN. Your claim that ANNs/deep learning are not the end of all AGI research is quickly becoming a minority position.

Humans can learn echolocation, but they can't learn echolocation the way bats and dolphins can learn echolocation

No true Scotsman!

The real test here would be to take a brain and give it an entirely new sense

Done and done. Next!

Notably, the general learner hypothesis does not explain why non-surgically-modified brains are so standardized in structure and functional layout. Something that you yourself bring up in your article.

I discussed this in the comments - it absolutely does explain neurotypical standardization. It's a result of topographic/geometric wiring optimization. There is an exactly optimal location for every piece of functionality, and the brain tends to find those same optimal locations in each human. But if you significantly perturb the input sense or the brain geometry, you can get radically different results.

Consider the case of extreme hydrocephaly - where fluid fills in the center of the brain and replaces most of the brain and squeezes the remainder out to a thin surface near the skull. And yet, these patients can have above average IQs. Optimal dynamic wiring can explain this - the brain is constantly doing global optimization across the wiring structure, adapting to even extreme deformations and damage. How does evolved modularity explain this?

It also obviously has hard-coded specialized modules, to some degree, which is why (for example) all human cultures develop language and music, which isn't something you'd expect if we were all starting from zero.

This is nonsense - language processing develops in general purpose cortical modules, there is no specific language circuitry.

There is a small amount of innate circuit structures - mainly in the brainstem, which can generate innate algorithms especially for walking behavior.

The question is which aspect dominates brain performance.

This is rather obvious - it depends on the ratio of pure learning structures (cortex, hippocampus, cerebellum) to innate circuit structures (brainstem, some midbrain, etc). In humans 95% or more of the circuitry is general purpose learning.

What about Watson?

Not an AGI.

Finally, I don't have the background to refute your argument on the efficiency of the brain (although I know clever people who do who disagree with you).

The correct thing to do here is update. Instead you are searching for ways in which you can ignore the evidence.

But, taking it as a given that you're right, it sounds like you're assuming all future AIs will draw the same amount of power as a real brain and fit in the same spatial footprint.

Obviously not - in theory given a power budget you can split it up into N AGIs or one big AGI. In practice due to parallel scaling limitations, there is always some optimal N. Even on a single GPU today, you need N about 100 or more to get good performance.

You can't just invest all your energy into one big AGI and expect better performance - that is a mind numbingly naive strategy.

To sum up: yes, I've read your thing. No, it's not as convincing as you seem to believe.

Update, or provide counter evidence, or stop wasting my time.

comment by V_V · 2015-07-29T23:48:11.753Z · score: -1 (1 votes) · LW · GW

In fact the most successful early precursor AGI we have - the atari deepmind agent - is a pure ANN.

People have been using ANNs for reinforcement learning tasks since at least the TD-Gammon system, with varying success. The DeepMind Atari agent is bigger and the task is sexier, but calling it an early precursor AGI seems far-fetched.

Consider the case of extreme hydrocephaly - where fluid fills in the center of the brain and replaces most of the brain and squeezes the remainder out to a thin surface near the skull. And yet, these patients can have above average IQs. Optimal dynamic wiring can explain this - the brain is constantly doing global optimization across the wiring structure, adapting to even extreme deformations and damage. How does evolved modularity explain this?

I suppose that the network topology of these brains is essentially normal, isn't it? If that's the case, then all the modules are still there, they are just squeezed against the skull wall.

This is nonsense - language processing develops in general purpose cortical modules, there is no specific language circuitry.

If I understand correctly, damage to Broca's area or Wernicke's area tends to cause speech impairment.
This may be more or less severe depending on the individual, which is consistent with the evolved modularity hypotheses: genetically different individuals may have small differences in the location and shape of the brain modules.

Under the universal learning machine hypothesis, instead, we would expect that speech impairment following localized brain damage to quickly heal in most cases as other brain areas are recruited to the task. Note that there are large rewards for regaining linguistic ability, hence the brain would sacrifice other abilities if it could. This generally does not happen.

In fact, for most people with completely healthy brains it is difficult to learn a new language as well as a native speaker after the age of 10. This suggests that our language processing machinery is hard-wired to a significant extent.

comment by jacob_cannell · 2015-07-30T03:49:10.047Z · score: 3 (3 votes) · LW · GW

The Deepmind Atari agent is bigger and the task is sexier, but calling it an early precursor AGI seems far fetched.

Hardly. It can learn a wide variety of tasks - many at above human level - in a variety of environments - all with only a few million neurons. It was on the cover of Nature for a reason.

Remember a mouse brain has the same core architecture as a human brain. The main components are all there and basically the same - just smaller - and with different size allocations across modules.

I suppose that the network topology of these brains is essentially normal, isn't it? If that's the case, then all the modules are still there, they are just squeezed against the skull wall.

From what I've read, the topology is radically deformed, modules are lost, timing between remaining modules is totally changed - it's massive brain damage. It's so weird that they can even still think that it has led some neuroscientists to seriously consider that cognition comes from something other than neurons and synapses.

Under the universal learning machine hypothesis, instead, we would expect that speech impairment following localized brain damage to quickly heal in most cases as other brain areas are recruited to the task.

Not at all - relearning language would take at least as much time and computational power as learning it in the first place. Language is perhaps the most computationally challenging thing that humans learn - it takes roughly a decade to learn up to a highly fluent adult level. Children learn faster - they have far more free cortical capacity. All of this is consistent with the ULH, and I bet it can even vaguely predict the time required for relearning language - although measuring the exact extent of damage to language centers is probably difficult.

This suggests that our language processing machinery is hard-wired to a significant extent.

Absolutely not - because you can look at the typical language modules in the microscope, and they are basically the same as the other cortical modules. Furthermore, there is no strong case for any mechanism that can encode any significant genetically predetermined task specific wiring complexity into the cortex. It is just like an ANN - the wiring is random. The modules are all basically the same.

comment by Lumifer · 2015-07-29T14:32:38.658Z · score: -1 (3 votes) · LW · GW

Let me point out the blatant hubris:

Let us see if we actually have significant differences of opinion. If so I expect to dominate you in any prediction market or bets concerning the near term future of AI.

and rudeness

or stop wasting my time

No one has any obligation to manage your time. If you want to stop wasting your time, you stop wasting your time.

comment by jacob_cannell · 2015-07-29T15:24:41.716Z · score: 2 (2 votes) · LW · GW

Hubris - perhaps, but it was a challenge. Making predictions/bets can help clarify differences in world models.

and rudeness

The full quote was this:

Update, or provide counter evidence, or stop wasting my time.

In the context that he had just claimed that he wasn't going to update.

comment by AndreInfante · 2015-07-29T05:08:14.273Z · score: -1 (3 votes) · LW · GW

First off the bat, you absolutely can create an AGI that is a pure ANN. In fact the most successful early precursor AGI we have - the atari deepmind agent - is a pure ANN. Your claim that ANNs/Deep Learning is not the end of all AGI research is quickly becoming a minority position.

The deepmind agent has no memory, one of the problems that I noted in the first place with naive ANN systems. The deepmind's team's solution to this is the neural Turing machine model, which is a hybrid system between a neural network and a database. It's not a pure ANN. It isn't even neuromorphic.

Improving its performance is going to involve giving it more structure and more specialized components, and not just throwing more neurons and training time at it.

For goodness' sake: Geoffrey Hinton, the father of deep learning, believes that the future of machine vision is explicitly integrating the idea of three-dimensional coordinates and geometry into the structure of the network itself, and moving away from more naive and general-purpose conv-nets.

Source: https://github.com/WalnutiQ/WalnutiQ/issues/157

Your position is not as mainstream as you like to present it.

The real test here would be to take a brain and give it an entirely new sense

Done and done. Next!

If you'd read the full sentence that I wrote, you'd appreciate that remapping existing senses doesn't actually address my disagreement. I want a new sense, to make absolutely sure that the subjects aren't just re-using hard coding from a different system. Snarky, but not a useful contribution to the conversation.

This is nonsense - language processing develops in general purpose cortical modules, there is no specific language circuitry.

This is far from the mainstream linguistic perspective. Go argue with Noam Chomsky; he's smarter than I am. Incidentally, you didn't answer the question about birds and cats. Why can't cats learn to do complex language tasks? Surely they also implement the universal learning algorithm just as parrots do.

What about Watson?

Not an AGI.

AGIs literally don't exist, so that's hardly a useful argument. Watson is the most powerful thing in its (fairly broad) class, and it's not a neural network.

Finally, I don't have the background to refute your argument on the efficiency of the brain (although I know clever people who do who disagree with you).

The correct thing to do here is update. Instead you are searching for ways in which you can ignore the evidence.

No, it really isn't. I don't update based on forum posts on topics I don't understand, because I have no way to distinguish experts from crackpots.

comment by jacob_cannell · 2015-07-29T05:24:12.190Z · score: 1 (1 votes) · LW · GW

The deepmind's team's solution to this is the neural Turing machine model, which is a hybrid system between a neural network and a database. It's not a pure ANN.

Yes it is a pure ANN - according to my use of the term ANN (arguing over definitions is a waste of time). ANNs are fully general circuit models, which obviously can re-implement any module from any computer - memory, database, whatever. The defining characteristics of an ANN are - simulated network circuit structure based on analog/real valued nodes, and some universal learning algorithm over the weights - such as SGD.

Your position is not as mainstream as you like to present it.

You don't understand my position. I don't believe DL as it exists today is somehow the grail of AI. And yes I'm familiar with Hinton's 'Capsule' proposals. And yes I agree there is still substantial room for improvement in ANN microarchitecture, and especially for learning invariances - and unsupervised especially.

This is far from the mainstream linguistic perspective.

For any theory of anything the brain does - if it isn't grounded in computational neuroscience data, it is probably wrong - mainstream or not.

No, it really isn't. I don't update based on forum posts on topics I don't understand, because I have no way to distinguish experts from crackpots.

You don't update on forum posts? Really? You seem pretty familiar with MIRI and LW positions. So are you saying that you arrived at those positions all on your own somehow? Then you just showed up here, thankfully finding other people who just happened to have arrived at all the same ideas?

comment by V_V · 2015-07-29T22:56:16.997Z · score: 0 (0 votes) · LW · GW

Yes it is a pure ANN - according to my use of the term ANN (arguing over definitions is a waste of time). ANNs are fully general circuit models, which obviously can re-implement any module from any computer - memory, database, whatever. The defining characteristics of an ANN are - simulated network circuit structure based on analog/real valued nodes, and some universal learning algorithm over the weights - such as SGD.

You could say that any machine learning system is an ANN, under a sufficiently vague definition. That's not particularly useful in a discussion, however.

comment by AndreInfante · 2015-07-29T09:20:59.108Z · score: -1 (1 votes) · LW · GW

Yes it is a pure ANN - according to my use of the term ANN (arguing over definitions is a waste of time). ANNs are fully general circuit models, which obviously can re-implement any module from any computer - memory, database, whatever. The defining characteristics of an ANN are - simulated network circuit structure based on analog/real valued nodes, and some universal learning algorithm over the weights - such as SGD.

I think you misunderstood me. The current DeepMind AI that they've shown the public is a pure ANN. However, it has serious limitations because it's not easy to implement long-term memory as a naive ANN. So they're working on a successor called the "neural Turing machine" which marries an ANN to a database retrieval system - a specialized module.

You don't understand my position. I don't believe DL as it exists today is somehow the grail of AI. And yes I'm familiar with Hinton's 'Capsule' proposals. And yes I agree there is still substantial room for improvement in ANN microarchitecture, and especially for learning invariances - and unsupervised especially.

The thing is, many of those improvements are dependent on the task at hand. It's really, really hard for an off-the-shelf convnet to learn the rules of three-dimensional geometry, so we have to build it into the network. Our own visual processing shows signs of having the same structure embedded in it.

The same structure would not, for example, benefit an NLP system, so we'd give it a different specialized structure, tuned to the hierarchical nature of language. The future, past a certain point, isn't making 'neural networks' better. It's making 'machine vision' networks better, or 'natural language' networks better. To make a long story short, specialized modules are an obvious place to go when you run into problems too complex to teach a naive convnet efficiently - both for human engineers over the next 5-10 years, and for evolution over the last couple of billion.

You don't update on forum posts? Really? You seem pretty familiar with MIRI and LW positions. So are you saying that you arrived at those positions all on your own somehow?

I have a CS and machine learning background, and am well-read on the subject outside LW. My math is extremely spotty, and my physics is non-existent. I update on things I read that I understand, or things from people I believe to be reputable. I don't know you well enough to judge whether you usually say things that make sense, and I don't have the physics to understand the argument you made or judge its validity. Therefore, I'm not inclined to update much on your conclusion.

EDIT: Oh, and you still haven't responded to the cat thing. Which, seriously, seems like a pretty big hole in the universal learner hypothesis.

comment by jacob_cannell · 2015-07-29T20:41:38.108Z · score: 0 (0 votes) · LW · GW

I update on things I read that I understand, or things from people I believe to be reputable.

So you are claiming that either you already understood AI/AGI completely when you arrived to LW, or you updated on LW/MIRI writings because they are 'reputable' - even though their positions are disavowed or even ridiculed by many machine learning experts.

EDIT: Oh, and you still haven't responded to the cat thing. Which, seriously, seems like a pretty big hole in the universal learner hypothesis.

I replied here, and as expected - it looks like you are factually mistaken in your assertion that disagreed with the ULH. Better yet, the outcome of your cat vs bird observation was correctly predicted by the ULH, so that's yet more evidence in its favor.

comment by David_Bolin · 2015-07-25T08:06:05.269Z · score: 2 (2 votes) · LW · GW

That is not a useful rebuttal if in fact it is impossible to guarantee that your AGI will not be a sociopath no matter how you program it.

Eliezer's position generally is that we should make sure everything is set in advance. Jacob_cannell seems to be basically saying that much of an AGI's behavior is going to be determined by its education, environment, and history, much as is the case with human beings now. If this is the case it is unlikely there is any way to guarantee a good outcome, but there are ways to make that outcome more likely.

comment by Nornagest · 2015-07-24T20:18:08.509Z · score: 2 (4 votes) · LW · GW

UFAI is not strongly motivated to sim us in large numbers

This is the weakest assumption in your chain of reasoning. Design space for UFAI is far bigger than for FAI, and we can't make strong assumptions about what it is or is not motivated to do -- there are lots of ways for Friendliness to fail that don't involve paperclips.

comment by jacob_cannell · 2015-07-27T06:25:18.217Z · score: 2 (2 votes) · LW · GW

This is the weakest assumption in your chain of reasoning. Design space for UFAI is far bigger than for FAI,

Irrelevant. The design space of all programs is infinite - do you somehow think that the set of programs that humans create is a random sample from the set of all programs? The size of the design space has absolutely nothing whatsoever to do with any realistic actual probability distribution over that space.

we can't make strong assumptions about what it is or is not motivated to do

Of course we can - because UFAI is defined as superintelligence that doesn't care about humans!

comment by Nornagest · 2015-07-27T06:43:43.476Z · score: 1 (3 votes) · LW · GW

Of course we can - because UFAI is defined as superintelligence that doesn't care about humans!

For a certain narrow sense of "care", yes -- but it's a sense narrow enough that it doesn't exclude a motivation to sim humans, or give us any good grounds for probabilistic reasoning about whether a Friendly intelligence is more likely to simulate us. So narrow, in fact, that it's not actually a very strong assumption, if by strength we mean something like bits of specification.

comment by jacob_cannell · 2015-07-27T18:21:45.125Z · score: 2 (2 votes) · LW · GW

narrow enough that it doesn't exclude a motivation to sim humans

Most UFAI will have convergent instrumental reasons to sim at least some humans, just as a component of simulating the universe in general towards better prediction/understanding.

FAI has that same small motivation plus the more direct end goal of creating enormous numbers of sims to satisfy humans' highly convergent desire for an afterlife to exist. The creation of an immortal afterlife is the single most important defining characteristic of FAI. Humans have spent a huge amount of time thinking and debating about what kinds of gods should/could exist, and afterlife/immortality is the number one concern - and transhumanists are certainly no exception.

comment by TheAncientGeek · 2015-07-23T14:55:49.920Z · score: 9 (9 votes) · LW · GW

Arguments against AI risk, .or arguments against the MIRI conception of AI risk?

I have heard a hint of a whisper of a rumour that I am considered a bit of a contrarian around here... but I am actually a little more convinced of AI threat in general than I used to be before I encountered Less Wrong. (In particular, at one time, I would have said "just pull the plug out", but there's some mileage in the unknowing arguments.)

The short version of the argument against MIRI's version of AI threat is that it is highly conjunctive. The long version is long - a consequence of having a multi-stage argument, with a fan-out of alternative possibilities at each stage.

comment by jsteinhardt · 2015-07-23T18:10:02.843Z · score: 8 (8 votes) · LW · GW

For an argument against at least some of MIRI's technical agenda, see Paul Christiano's medium post.

comment by turchin · 2015-07-23T11:32:20.361Z · score: 9 (11 votes) · LW · GW

One may try the following conjecture: synthetic biology is so simple, and AI so complex, that the risk of extinction from artificial viruses comes far earlier in time. Even if both risks have the same probability individually, the one that comes first gets the biggest part of the total probability.

For example, let Pv = 0.9 be the risk from viruses in the absence of any other risks, and Pai = 0.9 the risk from AI in the absence of any viruses. But Pv may happen in the first half of the 21st century, and Pai in the second. In this case we have a total probability of extinction of 0.99, of which 0.9 comes from viruses and 0.09 comes from AI.
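The arithmetic here can be checked with a quick sketch (illustrative numbers only, following the comment's assumptions that the virus era comes strictly first):

```python
# Sequential risks: viruses come first, so AI risk only applies to
# the worlds that survived the virus era.
p_virus = 0.9  # extinction risk from viruses, absent other risks
p_ai = 0.9     # extinction risk from AI, absent viruses

total = p_virus + (1 - p_virus) * p_ai  # ≈ 0.99
from_viruses = p_virus                  # 0.9
from_ai = (1 - p_virus) * p_ai          # ≈ 0.09

print(total, from_viruses, from_ai)
```

Despite the two risks being equal in isolation, the earlier one absorbs ten times more of the total probability mass.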

If this is true, then promoting AI as the main existential risk is a misallocation of resources.

If we look closer at Pv and Pai, we may find that Pv is exponentially increasing in time because of Moore's law in biotech, while Pai describes a one-time event and is constant: AI will be friendly or not. (It may also have a more complex time dependence, but here I just estimate the probability that FAI theory will be created and implemented.)

And assuming that AI is the only means to stop the creation of dangerous viruses (it may be untrue, but for the sake of the argument we will suppose it), then we need AI as early as possible, even if it will have smaller chances of being friendly.

So, this line of reasoning suggests that AI is not a net risk, because its benefits will outweigh its risks if we look at the larger picture.

Personally I think that we must invest in creating safe AI, and that we need to do it as soon as possible.

Update: the same logic may be applied to different efforts in the AI field. An AI does not need to be able to self-improve to cause human extinction. It may be just a 200-IQ Stuxnet that hacks critical infrastructure. Such an AI may appear before real self-improving AI, as the latter must be based on a non-self-improving but clever AI. So our efforts to prevent dangerous non-self-improving AIs may be more urgent, and all work on Gödelian agents a misallocation of resources.

comment by ChristianKl · 2015-07-23T12:30:33.212Z · score: 2 (2 votes) · LW · GW

A 200-IQ Stuxnet is a self-improving AGI. Anything that has a real IQ is an AGI, and if it's smarter than human researchers on the subject it can self-improve.

comment by turchin · 2015-07-23T12:56:31.877Z · score: 0 (2 votes) · LW · GW

It may not use its technical ability to self-improve to kill all humans. It may also limit itself to low-level self-improvement, a.k.a. learning. Self-improvement is not a necessary condition for UFAI, but it may be one of its instruments.

comment by David_Bolin · 2015-07-23T10:53:07.024Z · score: 8 (8 votes) · LW · GW

Ramez Naam discusses it here: http://rameznaam.com/2015/05/12/the-singularity-is-further-than-it-appears/

I find the discussion of corporations as superintelligences somewhat persuasive. I understand why Eliezer and others do not consider them superintelligences, but it seems to me a question of degree; they could become self-improving in more and more respects and at no point would I expect a singularity or a world-takeover.

I also think the argument from diminishing returns is pretty reasonable: http://www.sphere-engineering.com/blog/the-singularity-is-not-coming.html

comment by [deleted] · 2015-07-23T17:40:47.659Z · score: 8 (8 votes) · LW · GW

On the same note, but probably already widely known, Scott Aaronson on "The Singularity Is Far" (2008): http://www.scottaaronson.com/blog/?p=346

comment by [deleted] · 2015-07-27T18:02:24.915Z · score: 0 (0 votes) · LW · GW

Here is another article arguing why we are nowhere near the singularity:

https://timdettmers.wordpress.com/2015/07/27/brain-vs-deep-learning-singularity/

And here is the corresponding thread on /r/machinelearning:

https://www.reddit.com/r/MachineLearning/comments/3eriyg/the_brain_vs_deep_learning_part_i_computational/

comment by TheAncientGeek · 2015-07-23T15:13:07.423Z · score: 1 (1 votes) · LW · GW

Now, that's what I was looking for.

comment by AndreInfante · 2015-07-27T21:59:37.574Z · score: 7 (7 votes) · LW · GW

Here's one from a friend of mine. It's not exactly an argument against AI risk, but it is an argument that the problem may be less urgent than it's traditionally presented.

  1. There's plenty of reason to believe that Moore's Law will slow down in the near future

  2. Progress on AI algorithms has historically been rather slow.

  3. AI programming is an extremely high level cognitive task, and will likely be among the hardest things to get an AI to do.

  4. These three things together suggest that there will be a 'grace period' between the development of general agents, and the creation of a FOOM-capable AI.

  5. Our best guess for the duration of this grace period is on the order of multiple decades.

  6. During this time, general-but-dumb agents will be widely used for economic purposes.

  7. These agents will have exactly the same perverse instantiation problems as a FOOM-capable AI, but on a much smaller scale. When they start trying to turn people into paperclips, the fallout will be limited by their intelligence.

  8. This will ensure that the problem is taken seriously, and these dumb agents will make it much easier to solve FAI-related problems, by giving us an actual test bed for our ideas where they can't go too badly wrong.


This is a plausible-but-not-guaranteed scenario for the future, which feels much less grim than the standard AI-risk narrative. You might be able to extend it into something more robust.

comment by turchin · 2015-07-28T23:42:54.797Z · score: 2 (2 votes) · LW · GW

A dumb agent could also cause human extinction. "Kill all humans" is a computationally simpler task than "create a superintelligence" - and it may be simpler by many orders of magnitude.

comment by AndreInfante · 2015-07-29T00:35:23.575Z · score: 2 (2 votes) · LW · GW

I seriously doubt that. Plenty of humans want to kill everyone (or, at least, large groups of people). Very few succeed. These agents would be a good deal less capable.

comment by turchin · 2015-07-29T00:43:30.158Z · score: 1 (1 votes) · LW · GW

Just imagine a Stuxnet-style computer virus which will find DNA synthesizers and print different viruses on each of them, calculating exact DNA mutations for hundreds of different flu strains.

comment by V_V · 2015-07-29T09:57:23.291Z · score: 0 (2 votes) · LW · GW

You can't manufacture new flu strains just by hacking a DNA synthesizer. And anyway, most non-intelligently-created flu strains would be non-viable or non-lethal.

comment by turchin · 2015-07-29T13:07:03.789Z · score: 1 (1 votes) · LW · GW

I mean that the virus will be as intelligent as a human biologist, maybe an EM. That is enough for virus synthesis, but not for personal self-improvement.

comment by Gram_Stone · 2015-07-29T21:27:28.383Z · score: 1 (1 votes) · LW · GW

There are parts that are different, but it seems worth mentioning that this is quite similar to certain forms of Bostrom's second-guessing arguments, as discussed in Chapter 14 of Superintelligence and in Technological Revolutions: Ethics and Policy in the Dark:

A related type of argument is that we ought—rather callously—to welcome small and medium-scale catastrophes on grounds that they make us aware of our vulnerabilities and spur us into taking precautions that reduce the probability of an existential catastrophe. The idea is that a small or medium-scale catastrophe acts like an inoculation, challenging civilization with a relatively survivable form of a threat and stimulating an immune response that readies the world to deal with the existential variety of the threat.

I should mention that he does seem to be generally against attempting to manipulate people into doing the best thing.

comment by [deleted] · 2015-08-03T03:52:47.851Z · score: 0 (0 votes) · LW · GW

I should mention that he does seem to be generally against attempting to manipulate people into doing the best thing.

Well that's actually quite refreshing.

comment by [deleted] · 2015-07-23T21:25:41.732Z · score: 6 (8 votes) · LW · GW

http://kruel.co/2012/07/17/ai-risk-critiques-index/

Kruel's critique sounded very convincing when I first read it.

comment by AndreInfante · 2015-07-27T20:50:01.116Z · score: 1 (7 votes) · LW · GW

(1) Intelligence is an extendible method that enables software to satisfy human preferences. (2) If human preferences can be satisfied by an extendible method, humans have the capacity to extend the method. (3) Extending the method that satisfies human preferences will yield software that is better at satisfying human preferences. (4) Magic happens. (5) There will be software that can satisfy all human preferences perfectly but which will instead satisfy orthogonal preferences, causing human extinction.

This is deeply silly. The thing about arguing from definitions is that you can prove anything you want if you just pick a sufficiently bad definition. That definition of intelligence is a sufficiently bad definition.

EDIT:

To extend this rebuttal in more detail:

I'm going to accept the definition of 'intelligence' given above. Now, here's a parallel argument of my own:

  1. Entelligence is an extendible method for satisfying an arbitrary set of preferences that are not human preferences.

  2. If these preferences can be satisfied by an extendible method, then the entelligent agent has the capacity to extend the method.

  3. Extending the method that satisfies these non-human preferences will yield software that's better at satisfying non-human preferences.

  4. The inevitable happens.

  5. There will be software that will satisfy non-human preferences, causing human extinction.


Now, I pose to you: how do we make sure that we're making intelligent software, and not "entelligent" software, under the above definitions? Obviously, this puts us back to the original problem of how to make a safe AI.

The original argument is rhetorical sleight of hand. The given definition of intelligence implicitly assumes that the problem doesn't exist and that all AIs will be safe, and then goes on to prove that all AIs will be safe.

It's really, fundamentally silly.

comment by CellBioGuy · 2015-07-24T07:08:53.810Z · score: 1 (9 votes) · LW · GW

They remain extraordinarily convincing/accurate to me.

EDIT: Downvotes? Really?

comment by Gram_Stone · 2015-07-29T22:19:31.069Z · score: 3 (5 votes) · LW · GW

EDIT: Downvotes? Really?

I didn't downvote, but besides the obvious explanation of people being anti-anti-AI risk, I've seen you write these sorts of one-liner comments that express your antipathy towards AI risk and do nothing else. Some people probably feel that their time is being wasted, and some people probably find it improbable that you can simultaneously be thinking your own thoughts and agreeing with an entire index of critiques. On the other hand, I can see from your perspective that there is a selection effect favoring people who take AI risk seriously, and that you might think it prudent to represent your position whenever you can.

Let's just take one of his most recent critiques. It's an uncharitable interpretation of the standard position on why AGIs will not automatically do what you mean. The reason that there is not already UFAI is that even though AIs don't share our goals, they lack optimization power. If I can trivially discover a misinterpretation of the standard position, then that lowers my estimate that you are examining his arguments critically or engaging in this debate charitably, which is behavior that is subject to social punishment in this community.

comment by John_Maxwell (John_Maxwell_IV) · 2015-07-24T04:50:42.398Z · score: 5 (5 votes) · LW · GW

Holden Karnofsky's tool AI critique (see also Eliezer's response). Outdated wiki page.

comment by Riteofwhey · 2015-07-24T02:47:37.626Z · score: 5 (5 votes) · LW · GW

Thanks for doing this. A lack of self criticism about AI risk is one of the reasons I don't take it too seriously.

I generally agree with http://su3su2u1.tumblr.com/ , but it may not be organized enough to be helpful.

As for MIRI specifically, I think you'd be much better served by mainstream software verification and cryptography research. I've never seen anyone address why that is not the case.

I have a bunch of disorganized notes about why I'm not convinced of AI risk, if you're interested I could share more.

comment by gjm · 2015-07-24T11:18:17.277Z · score: 3 (3 votes) · LW · GW

I've never seen anyone address why that is not the case.

It's solving a different problem.

Problem One: You know exactly what you want your software to do, at a level of detail sufficient to write the software, but you are concerned that you may introduce bugs in the implementation or that it may be fed bad data by a malicious third party, and that in that case terrible consequences will ensue.

Problem Two: You know in a vague, handwavy way what you want your software to do, but you don't yet know with enough precision to write the software. You are concerned that if you get this wrong, the software will do something subtly different from what you really wanted, and terrible consequences will ensue.

Software verification and crypto address Problem One. AI safety is an instance of Problem Two, and potentially an exceptionally difficult one.

comment by Riteofwhey · 2015-07-24T15:36:06.463Z · score: 1 (1 votes) · LW · GW

Verification seems like a strictly simpler problem. If we can't prove properties for a web server, how are we going to do anything about a completely unspecified AI?

The AI takeover scenarios I've heard almost always involve some kind of hacking, because today hacking is easy. I don't see why that would necessarily be the case a decade from now. We could prove some operating-system security guarantees, for instance.

comment by gjm · 2015-07-24T17:21:54.953Z · score: 2 (2 votes) · LW · GW

Yes, verification is a strictly simpler problem, and one that's fairly thoroughly addressed by existing research -- which is why people working specifically on AI safety are paying attention to other things.

(Maybe they should actually be working on doing verification better first, but that doesn't seem obviously a superior strategy.)

Some AI takeover scenarios involve hacking (by the AI, of other systems). We might hope to make AI safer by making that harder, but that would require securing all the other important computer systems in the world. Even though making an AI safe is really hard, it may well be easier than that.

comment by jsteinhardt · 2015-07-27T02:22:34.833Z · score: 3 (3 votes) · LW · GW

Yes, verification is a strictly simpler problem, and one that's fairly thoroughly addressed by existing research -- which is why people working specifically on AI safety are paying attention to other things.

This doesn't really seem true to me. We are currently pretty bad at software verification, only able to deal with either fairly simple properties or fairly simple programs. I also think that people in verification do care about the "specification problem", which is roughly problem 2 above (although I don't think anyone really has that many ideas for how to address it).

comment by Riteofwhey · 2015-07-25T08:09:00.595Z · score: 3 (3 votes) · LW · GW

I would be somewhat more convinced that MIRI was up to its mission if they could contribute to much simpler problems in prerequisite fields.

comment by [deleted] · 2015-08-03T04:04:03.531Z · score: 0 (0 votes) · LW · GW

I mildly disagree. A certain amount of AI safety specifically involves trying to extend our available tools for dealing with Problem One to the situations that we expect to happen when we deal with powerful learning agents. Goal-system stability, for instance, is a matter of program verification -- hence why all the papers about it deal with mathematical logic.

comment by gjm · 2015-08-03T08:57:53.979Z · score: 0 (0 votes) · LW · GW

I haven't read any technical papers on goal-system stability; isn't it the case that real-world attempts at that are going to have at least as much of Problem Two as of Problem One about them? ("Internally" -- in the notion of what counts as self-improvement -- if not "externally" in whatever problem(s) the system is trying to solve.) I haven't thought (or read) enough about this for my opinion to have much weight; I could well be completely wrong about it.

Regardless, you're certainly right that Problem One is going to be important as well as Problem Two, and I should have said something like "AI safety is also an instance of Problem Two".

comment by [deleted] · 2015-08-03T12:47:04.195Z · score: 0 (0 votes) · LW · GW

isn't it the case that real-world attempts at that are going to have at least as much of Problem Two as of Problem One about them? ("Internally" -- in the notion of what counts as self-improvement -- if not "externally" in whatever problem(s) the system is trying to solve.) I haven't thought (or read) enough about this for my opinion to have much weight; I could well be completely wrong about it.

Kind of. We expect intuitively that a reasoning system can reason about its own goals and successor-agents. Problem is, that actually requires degrees of self-reference that put you into the territory of paradox theorems. So we expect that if we come up with the right way to deal with paradox theorems, the agent's ability to "stay stable" will fall out pretty naturally.

comment by gjm · 2015-08-03T14:48:58.626Z · score: 0 (0 votes) · LW · GW

that actually requires degrees of self-reference that put you into the territory of paradox theorems.

Oh, OK, the Löbstacle thing. You're right, that's a matter of program verification and as such more in the territory of Problem One than of Problem Two.

comment by Houshalter · 2015-07-24T07:39:18.097Z · score: 3 (3 votes) · LW · GW

Creating a stable AGI that operates in the real world may be unexpectedly difficult. What I mean by this is that we might solve some hard problems in AI, and the result might work in some limited domains, but not be stable in the real world.

An example would be Pascal's Mugging. An AI that maximizes expected utility with an unbounded utility function would spend all its time worrying about incredibly improbable scenarios.
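A toy sketch of this failure mode (my own illustration, not from the comment; the specific growth rates are assumptions): if the promised utility can grow faster than the agent's credence shrinks, the expected-utility calculation is dominated by the most extravagant claim, while bounding the utility function removes the effect.

```python
def expected_gain(n, bound=None):
    """Expected utility of a mugger's claim at 'extravagance level' n.

    Toy model: credence falls off as 2**-n while the promised utility
    grows as 3**n, so the product (1.5**n) grows without bound.
    """
    credence = 2.0 ** -n
    utility = 3.0 ** n
    if bound is not None:
        utility = min(utility, bound)  # a bounded utility function
    return credence * utility

print(expected_gain(10))               # grows with n: the mugger wins
print(expected_gain(100))              # ...and keeps winning harder
print(expected_gain(100, bound=1e6))   # bounded: the claim becomes negligible
```

With the bound in place, ever-more-extravagant claims stop paying off, which is why bounding the utility function is the standard patch discussed later in the comment.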

Reinforcement learning agents might simply hijack their own reinforcement channel, set it to INF, and be done.

Or the Anvil Problem, where a reinforcement-learning-type AI simply doesn't act as though its brain exists in the universe it's observing, which could result in strange behavior.

It might place a strong value on literal self-preservation, and refuse to upgrade itself, create copies, or even allow its physical computer to be rebooted. This would constrain the AI a great deal.

Further, it might not create other AIs that serve it, since the friendliness problem would be just as hard for it.

There could be technical or philosophical issues we haven't even thought of yet that a superintelligent AI would encounter and not be built to deal with. And most of these issues depend a great deal on the technical details of the AGI, which we don't even know yet. There are all sorts of hypothetical problems that are specific to neural networks, evolved AIs, OpenCog-like AIs, etc.

I'm not very confident that these will stop UFAI though. Pascal's Mugging can be avoided by simply bounding the utility function at some arbitrarily high number, which still allows a dangerous AI. Reinforcement learning agents would probably still value self-preservation after maximizing their input channel. The AI won't anvil itself, since that would prevent it from manipulating the world or decrease its reward. Self-preservation for an AI is more about preserving its reward machinery, not the actual AI program that maximizes it.

I definitely believe an unfriendly AI can be built that just maximizes some stupid goal. And if there are technical issues, it's only a matter of time before someone solves them. But I'm not 100% confident of it.

comment by Stuart_Armstrong · 2015-07-27T16:12:22.910Z · score: 2 (4 votes) · LW · GW

I've always considered the psychological critiques of AI risk (e.g. "the singularity is just rapture of the nerds") to be very weak ad hominems. However, they might be relevant for the parts of the AI risk thesis that depend on the judgements of the people presenting it. The most relevant part would be checking whether people have fully considered the arguments against their position, and gone out to find more such arguments.

comment by turchin · 2015-07-28T23:49:09.528Z · score: 1 (1 votes) · LW · GW

Autistic people like to stack one thing on top of another (see more about autism and repetitive behaviour at https://www.autismspeaks.org/science/science-news/study-suggests-repetitive-behaviors-emerge-early-autism). Recursive self-improvement is the same idea at a higher level. That is why nerds (myself included) may overestimate the probability of this type of AI. Sorry to say that.

comment by V_V · 2015-07-28T21:55:31.204Z · score: 1 (3 votes) · LW · GW

The argument is that people who talk about the singularity in general or AI risk (the hard-takeoff FOOM scenario) are privileging some low-probability hypotheses based on intuitions that come either directly from religion or from some underlying psychological mechanisms that also generate religious beliefs.

Most beliefs of this kind are wrong. They tend to be unparsimonious. Hence, when presented with a claim of this kind, before we look at the evidence or specific arguments, we should infer at first that the claim is likely wrong. Strong evidence or strong arguments would "screen off" this effect, while lack of evidence or weak arguments based on subjective estimates would not.
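The "screening off" claim is just Bayes' rule in odds form: a hypothesis saddled with a very low prior stays improbable under weak arguments, but sufficiently strong evidence overwhelms the prior. A toy update (all numbers invented) makes this concrete.

```python
# Toy Bayesian update in odds form, with invented numbers:
# posterior odds = prior odds * likelihood ratio.

def posterior(prior, likelihood_ratio):
    """Convert a prior probability and likelihood ratio into a posterior probability."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.001  # claim deemed unparsimonious a priori

weak = posterior(prior, 2)        # weak argument: barely moves the needle
strong = posterior(prior, 10000)  # strong evidence: dominates the prior

assert weak < 0.01   # still dismissible
assert strong > 0.9  # the low prior has been "screened off"
```

On this picture, the disagreement is really about whether the arguments for AI risk supply a large enough likelihood ratio, not about the prior alone.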

comment by snarles · 2015-07-25T04:58:57.524Z · score: 2 (2 votes) · LW · GW

There exists a technological plateau for general intelligence algorithms, and biological neural networks already come close to optimal. Hence, recursive self-improvement quickly hits an asymptote.

Therefore, artificial intelligence represents a potentially much cheaper way to produce and coordinate intelligence compared to raising humans. However, it will not have orders of magnitude more capability for innovation than the human race. In particular, if humans are unable to discover breakthroughs enabling vastly more efficient production of computational substrate, then artificial intelligence will likewise be unable. In that case, unfriendly AI poses an existential threat primarily through dangers that we can already imagine, rather than unanticipated technological breakthroughs.

comment by HungryHobo · 2015-07-28T12:06:10.139Z · score: 1 (1 votes) · LW · GW

It doesn't matter how safe you are about AI if there's a million other civilizations in the universe and some non-trivial portion of them aren't being as careful as they should be.

A UFAI is unlikely to stop at the home planet of the civilization that creates it. Rather, you'd expect such a thing to continue converting the remainder of the universe into computronium to store the integer for its fitness function, or some similar doomsday scenario.

AI doesn't work as a filter because it's the kind of disaster likely to keep spreading: we'd expect to see large parts of the sky going dark as the stars get turned into pictures of smiling faces, or computronium.

Which either argues for AI risk not being so risky, or for an early filter that leaves few civilisations.

comment by turchin · 2015-07-28T23:35:03.487Z · score: 1 (1 votes) · LW · GW

That is why I am against premature SETI. But also, if AI nanobots spread at near light speed, you can't see black spots in the sky.

comment by James_Miller · 2015-08-12T16:34:22.765Z · score: 0 (0 votes) · LW · GW

I did a little bit of this towards the end of my review of EY's rationality book.

comment by Pentashagon · 2015-07-25T04:16:17.554Z · score: 0 (0 votes) · LW · GW

Ray Kurzweil seems to believe that humans will keep pace with AI through implants or other augmentation, presumably up to the point that WBE becomes possible and humans get all or most of the advantages an AGI would have. Arguments from self-interest might show that humans will very strongly prefer human WBE over training an arbitrary neural network of the same size to the point that it becomes AGI, simply because each hopes to be the human who gets WBE. If humans are content with creating AGIs that are provably less intelligent than the most intelligent humans, then AGIs could still help drive the race to superintelligence without winning it (by doing the busywork that can be verified by sufficiently intelligent humans).

The steelman also seems to require an argument that no market process will lead to a singleton, thus allowing standard economic/social/political processes to guide the development of human intelligence as it advances, while preventing a single augmented dictator (or group of dictators) from overpowering the rest of humanity. Alternatively, it requires an argument that, given a cabal of sufficient size, the cabal will continue to act in humanity's best interests, because its members are each acting in their own best interest and are still nominally human. One potential argument for this is that R&D and manufacturing cycles will not become fast enough to realize substantial jumps in intelligence before a significant number of humans are able to acquire the latest generation.

The most interesting steelman argument to come out of this one might be that at some point enhanced humans become convinced of AI risk, when it is actually rational to become concerned. That would leave only steelmanning the period between the first human augmentation and reaching sufficient intelligence to be convinced of the risk.

comment by Username · 2015-07-24T05:25:21.750Z · score: 0 (0 votes) · LW · GW

I wrote some arguments that I think are novel here: http://lesswrong.com/r/discussion/lw/ly8/values_at_compile_time/cam0

comment by TheAncientGeek · 2015-07-23T16:03:35.500Z · score: 0 (0 votes) · LW · GW

""