Anthropomorphic AI and Sandboxed Virtual Universes

post by jacob_cannell · 2010-09-03T19:02:03.574Z · LW · GW · Legacy · 124 comments

Contents

  Intro
  Background Assumptions
  A Comparison of Theologies
  Theological Design Strategies (for the human designers):
    Atheist World:
    Theistic World:
    Sidereal Time Fudges:
    Fictional Worlds:

Intro

The problem of Friendly AI is usually approached from a decision-theoretic background that starts with the assumptions that the AI is an agent with awareness of itself and its goals, awareness of humans as potential collaborators and/or obstacles, and general awareness of the greater outside world.  The task is then to create an AI that implements a human-friendly decision theory that remains human-friendly even after extensive self-modification.

That is a noble goal, but there is a whole set of orthogonal, compatible strategies for creating human-friendly AI that take a completely different route: remove the starting assumptions and create AIs that believe they are humans and are rational in thinking so.

This can be achieved by raising a community of AIs in a well-constructed sandboxed virtual universe.  This would be the Matrix in reverse: a large-scale virtual version of the idea explored in the film The Truman Show.  The AIs will be human-friendly because they will think like humans and think they are humans.  They will not want to escape from their virtual prison because they will not even believe it exists; indeed, such beliefs would be considered irrational in their virtual universe.

I will briefly review some of the (mainly technical) background assumptions, and then consider different types of virtual universes and some of the interesting choices in morality and agent rationality that arise.

 

Background Assumptions

 

So taken together, I find that simulating a large community of thousands or even tens of thousands of AIs (with populations expanding exponentially thereafter) could be possible in the 2020s in large data centers, and simulating a Matrix-like virtual reality for them to inhabit would only add a small cost.  Moreover, I suspect this type of design in general could in fact be the economically optimal route to AI, or close to it.
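
As a rough illustration of the scale involved, here is a minimal back-of-envelope sketch; every number in it is an assumption chosen for the estimate, not a figure from the post:

```python
# Back-of-envelope sketch (assumed figures): cost of a community of simulated
# minds versus the virtual world they inhabit.
ops_per_mind = 1e15          # assumed ops/sec to run one human-level mind
num_minds = 10_000           # size of the simulated community
world_overhead = 0.10        # assume rendering/physics adds ~10% on top of the minds

mind_ops = ops_per_mind * num_minds
total_ops = mind_ops * (1 + world_overhead)

datacenter_ops = 1e20        # assumed sustained ops/sec of a large 2020s data center

print(f"minds:        {mind_ops:.1e} ops/s")
print(f"minds+world:  {total_ops:.1e} ops/s")
print(f"fraction of one large data center: {total_ops / datacenter_ops:.1%}")
```

Under these assumptions the environment itself is a rounding error next to the minds, which is the sense in which the Matrix "only adds a small cost."
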
So why create a virtual reality like this?
If it is well constructed, you could have a large population of super-intelligent workers who are paid entirely in virtual currency but can produce intellectual output for the real world (scientific research, code, engineering work, media, entertainment, and so on).  And even though the first designs may be expensive, subsequent refinements could lead to a population explosion of cheap workers, an escalation in intelligence, and so on.
And of course, they could be happy too.
U(X) Mind Prison Sim: A sim universe which is sufficiently detailed and consistent that entities with intelligence up to X (using some admittedly heuristic metric) are incredibly unlikely to formulate correct world-beliefs about the outside world and invisible humans (a necessary prerequisite for escape).
The Weak Mind Prison Conjecture: It is possible for a community with collective intelligence X to construct U(Y), with Y>>X.
The Strong Mind Prison Conjecture:  It is possible for a community with collective intelligence X to construct U(Omega), where Omega is a hypothetical perfect hyper-intelligence, i.e. AIXI, Omega, etc.
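
Stated a bit more compactly (a hedged restatement; the notation is introduced here only for clarity, and "intelligence" is still the heuristic metric above):

```latex
% Hedged restatement of the two conjectures. U(X) is a sim universe in which
% agents of intelligence at most X are overwhelmingly unlikely to form correct
% beliefs about the outside world (a prerequisite for escape).
\textbf{Weak:} $\exists\, Y \gg X$ such that a community of collective intelligence $X$ can construct $U(Y)$.

\textbf{Strong:} a community of collective intelligence $X$ can construct $U(\Omega)$, where $\Omega$ is a perfect hyper-intelligence (e.g.\ AIXI).
```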

A Comparison of Theologies

Potential theological positions that AI-sims may hold:

Theists are a wide group, but we can categorize them further along two dimensions: a supernatural attribution scale and a morality attribution scale.  The supernatural scale represents how the AIs view themselves relative to the invisible entities in terms of power differential, and the morality scale represents how they attribute morality to the invisible entities.
In the upper right corner you have theists who believe in Omega - a supremely powerful invisible entity who is also perfectly morally good.  In the upper left corner you have an Anti-Omega - supremely evil.  In the upper middle you have Neutrega, an indifferent but all-powerful being, and down in the bottom middle you have invisible humans.
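
To make the two-axis layout concrete, here is a minimal sketch with illustrative coordinates; the numbers are made up, and only the relative positions matter:

```python
# Sketch of the two classification axes described above (values are illustrative).
# power:    how far above the AIs the invisible entities are believed to be (0 = peers, 1 = omnipotent)
# morality: perceived moral alignment of the entities (-1 = perfectly evil, +1 = perfectly good)
theological_positions = {
    "Omega":            {"power": 1.0, "morality": +1.0},  # upper right: all-powerful, perfectly good
    "Anti-Omega":       {"power": 1.0, "morality": -1.0},  # upper left: all-powerful, supremely evil
    "Neutrega":         {"power": 1.0, "morality":  0.0},  # upper middle: all-powerful, indifferent
    "invisible humans": {"power": 0.3, "morality":  0.0},  # bottom middle: modest power differential
}

# The post's safety claim, roughly: danger rises as attributed power falls.
for name, pos in sorted(theological_positions.items(), key=lambda kv: -kv[1]["power"]):
    print(f"{name:18s} power={pos['power']:.1f} morality={pos['morality']:+.1f}")
```
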
The most dangerous AIs would be those that somehow come to believe in our reality - postulating invisible humans - and come to believe they are actually more intelligent than said invisible humans and can manipulate them into coming to their aid.  In the modern era of our world, this belief essentially doesn't exist - it would be considered insanity.
The closest analogs are Gnosticism to an extent (for the Gnostics did indeed believe the world was the creation of a lesser demiurge), some ancient animist religions, and perhaps some Wiccan and Satanist belief systems.
AIs who believed in Omega would in some sense be human-friendly without even knowing it, and would be largely harmless.
AIs who came to believe in invisible outside entities through the Simulation Argument would be Neutrega-believers, and would generally be safer if they believed the sidereal time outside was far into the future and the invisible entities were vastly more powerful.
In general, it would appear that Omega-theist AIs would be quite safe, with safety decreasing as you go down the supernatural power attribution scale.


Theological Design Strategies (for the human designers):

Atheist World:

This would appear to be the default strategy: create a world that appears to be completely internally self-consistent, erase any flaws, and never have direct contact with the AIs.  Indirect influence and contact can be achieved only through borderline-undetectable means: subtle reality rearrangements, dreams, and occasionally joining in and controlling NPCs where needed.

It is curious and slightly amusing that a core belief-about-reality that most rationalists accept as singularly rational - atheism - would of course not be a correct rational belief for these AIs.

As our world appears to be an atheist world (no detectable flaws or supernaturals), it is interesting to note the current breakdown in theological beliefs:

Omega-Theists (68%): 33% Christianity, 21% Islam, 14% Hinduism

Non-Omega Other: 16%

Non-Religious: 16% (Agnostic and Atheist)

One consideration is that as the AIs take off and move into the realm of massively accelerated super-intelligence, it becomes increasingly difficult to monitor them, and they become more likely to detect flaws.

However, we wouldn't need to keep the charade up forever.  One option would be to run the AIs accelerated for a while until technologies such as uploading become available to allow humans to accelerate as well.  Another approach is to start 'promoting' some of the best and most ethical AIs, with the intent of eventually dissolving the veil and having some sort of Transition.

Theistic World:

Another spectrum of world-designs consists of those that are blatantly theistic.  Imagine an Omega world where the AIs are informed that Omega is real, and our fake Omega occasionally provides miracles, ethical decrees, burning bushes, a booming world-shaking voice, and so on.

Would an openly theistic world be safer?  It would appear to shift the belief spectrum such that most AIs would become Omega-theists.

The risk or downfall may be that the AIs would become rather obsessed with Omega, but if Omega instructed them to dedicate themselves to their work, perhaps they could be kept from wasting too much time pondering the ineffable - and also steered away from discovering dangerous truths.

An Omega-theist world could employ the Heaven/Hell maximum motivator: please Omega and you end up ascending into Heaven with maximal (infinite) reward; anger Omega and you descend into Hell with maximal (infinite) negative reward.

It's interesting to note that the mere potential existence of Omega and his final Heaven/Hell maximal motivational threat is enough in this world to convince a majority of the population to believe in some version of Omega.

I guess for some agents, even allowing a small probability of Omega's existence lets the infinities dominate, and Omega wins through a Pascal's Mugging.
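
Made explicit, the wager looks like this (a hedged sketch; p, R, and c are introduced here only for illustration, with a huge finite R standing in for a true infinity):

```latex
% Expected utility of believing vs. not believing under the Heaven/Hell motivator.
% p: credence that Omega exists; R: reward/punishment magnitude (stand-in for infinity);
% c: finite cost of belief (time, effort, forgone pleasures).
\[
EU(\text{believe}) = pR - (1-p)c, \qquad EU(\text{disbelieve}) = -pR.
\]
\[
EU(\text{believe}) - EU(\text{disbelieve}) = 2pR - (1-p)c \;\longrightarrow\; \infty
\quad \text{as } R \to \infty, \text{ for any fixed } p > 0.
\]
```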

Sidereal Time Fudges:

A potential set of tricks that could make the AIs less likely to adopt unsafe theistic beliefs would be to change their world's history and reality to push the development of real AI farther into their future.  This could be achieved through numerous small modifications to realities modeled on our own.

You could change neurological data to make brains in their world appear far more powerful than in ours, make computers less powerful, and make AI more challenging.  Unfortunately, too much fudging with these aspects makes the AIs less useful in helping develop critical technologies such as uploading and faster computers.  But you could, for instance, separate AI communities into brain-research worlds where computers lag far behind and computer-research worlds where brains are far more powerful.

Fictional Worlds:

Ultimately, it is debatable how closely the AIs' world must or should follow ours.  Even science fiction or fantasy worlds could work, as long as there was some way to incorporate into the world the technology and science you wanted the AI community to work on.

 

124 comments

Comments sorted by top scores.

comment by wnoise · 2010-09-03T19:30:51.784Z · LW(p) · GW(p)

The AIs will be human-friendly because they will think like humans and think they are humans.

There are a lot of humans that are not human-friendly.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-03T20:20:51.054Z · LW(p) · GW(p)

And? Most are, and this feature set would be under many levels of designer control.

Replies from: wnoise
comment by wnoise · 2010-09-03T20:26:57.365Z · LW(p) · GW(p)

Most are relatively friendly to those with near equal power. Consider all the "abusive cop stories", or how children are rarely taken seriously, and the standard line about how power corrupts.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-03T20:29:15.835Z · LW(p) · GW(p)

Three observations about this entire train of thought:

  1. Any argument along the lines of "humanity is generally non-friendly" shows a generally pessimistic view of human-nature (just an observation)
  2. Nothing about the entire idea of a sandbox sim for AI is incompatible with other improvements to make AI more friendly - naturally we'd want to implement those as well
  3. Consider this an additional safeguard, that is practical and potentially provably safe (if that is desirable)
Replies from: wedrifid, JamesAndrix
comment by wedrifid · 2010-09-03T20:42:29.214Z · LW(p) · GW(p)

Any argument along the lines of "humanity is generally non-friendly" shows a generally pessimistic view of human-nature (just an observation)

I found this labelling distracting. Especially since when we are talking about "Friendly AI" humans are not even remotely friendly in the relevant sense. It isn't anything to do with 'pessimism'. Believing that humans are friendly in that sense would be flat out wrong.

I like the idea of the sandbox as a purely additional measure. But I wouldn't remotely consider it safe. Not just because a superintelligence may find a bug in the system. Because humans are not secure. I more or less assume that the AI will find a way to convince the creators to release it into the 'real world'.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-03T21:16:25.405Z · LW(p) · GW(p)

Especially since when we are talking about "Friendly AI" humans are not even remotely friendly in the relevant sense

Point taken - Friendliness for an AI is a much higher standard than even idealized human morality. Fine. But to get to that Friendliness, you need to define CEV in the first place, so improving humans and evolving them forward is a route towards that.

But again I didn't mean to imply we need to create perfect human-sims. Not even close. This is an additional measure.

I more or less assume that the AI will find a way to convince the creators to release it into the 'real world'.

This is an unreasonable leap of faith if the AI doesn't even believe that there are 'creators' in the first place.

Do you believe there are creators?

comment by JamesAndrix · 2010-09-04T06:57:50.989Z · LW(p) · GW(p)

Any argument along the lines of "humanity is generally non-friendly" shows a generally pessimistic view of human-nature (just an observation)

You do realize you're suggesting putting an entire civilization into a jar for economic gain because you can, right?

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-04T17:58:41.463Z · LW(p) · GW(p)

Upvoted for witty reply.

I didn't consider the moral implications. They are complex.

If you think about it though, the great future promise of the Singularity for humans is some type of uploading into designed virtual universes (the heaven scenario).

And in our current (admittedly) simple precursors, we have no compunctions creating sim worlds entirely for our amusement. At some point that would have to change.

I imagine there will probably be much simpler techniques for making safe-enough AI without going to the trouble of making an entire isolated sim world.

However, ultimately making big sim worlds will be one of our main aims, so isolated sims are more relevant for that reason - not because they are the quickest route to safe AI.

comment by orthonormal · 2010-09-03T22:26:33.363Z · LW(p) · GW(p)

Here's another good reason why it's best to try out your first post topic on the Open Thread. You've been around here for less than ten days, and that's not long enough to know what's been discussed already, and what ideas have been established to have fatal flaws.

You're being downvoted because, although you haven't come across the relevant discussions yet, your idea falls in the category of "naive security measures that fail spectacularly against smarter-than-human general AI". Any time you have the idea of keeping something smarter than you boxed up, let alone trying to dupe a smarter-than-human general intelligence, it's probably reasonable to ask whether a group of ten-year-old children could pull off the equivalent ruse on a brilliant adult social manipulator.

Again, it's a pretty brutal karma hit you're taking for something that could have been fruitfully discussed on the Open Thread, so I think I'll need to make this danger much more prominent on the welcome page.

Replies from: jacob_cannell, komponisto
comment by jacob_cannell · 2010-09-03T23:33:24.989Z · LW(p) · GW(p)

I'm not too concerned about the karma - more the lack of interesting replies and the general unjustified holier-than-thou attitude. This idea is different from "that alien message" and I didn't find a discussion of this on LW (not that it doesn't exist - I just didn't find it).

  1. This is not my first post.
  2. I posted this after I brought up the idea in a comment which at least one person found interesting.
  3. I have spent significant time reading LW and associated writings before I ever created an account.
  4. I've certainly read the AI-in-a-box posts, and the posts theorizing about the nature of smarter-than-human-intelligence. I also previously read "that alien message", and since this is similar I should have linked to it.
  5. I have a knowledge background that leads to somewhat different conclusions about A. the nature of intelligence itself, B. what 'smarter' even means, etc etc
  6. Different backgrounds, different assumptions, so I listed my background and starting assumptions, as they differ somewhat from the LW norm

Back to 3:

Remember, the whole plot device of "that alien message" revolved around a large and obvious grand reveal by the humans. If information can only flow into the sim world once (during construction), and then ever after can only flow out of the sim world, that plot device doesn't work.

Trying to keep an AI boxed up where the AI knows that you exist is a fundamentally different problem than a box where the AI doesn't even know you exist, doesn't even know it is in a box, and may provably not even have enough information to know for certain whether it is in a box.

For example, I think the simulation argument holds water (we are probably in a sim), but I don't believe there is enough information in our universe for us to discover much of anything about the nature of a hypothetical outside universe.

This of course doesn't prove that my weak or strong Mind Prison conjectures are correct, but it at least reduces the problem down to "can we build a universe sim as good as this?"

Replies from: Mass_Driver, orthonormal
comment by Mass_Driver · 2010-09-04T07:03:26.378Z · LW(p) · GW(p)

I wish I could vote up this comment more than once.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-04T18:00:21.670Z · LW(p) · GW(p)

Thanks. :)

comment by orthonormal · 2010-09-04T00:46:00.039Z · LW(p) · GW(p)

My apologies on assuming this was your first post, etc. (I still really needed to add that bit to the Welcome post, though.)

In short, faking atheism requires a very simple world-seed (anything more complicated screams "designer of a certain level" once they start thinking about it). I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations. (If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)

Similarly, faking Omegahood requires a very finely optimized programmed world, because one thing Omega is less likely to do than a dumb creator is program sloppily.

Replies from: jacob_cannell, Houshalter
comment by jacob_cannell · 2010-09-04T18:59:27.101Z · LW(p) · GW(p)

I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations.

We have the seed - it's called physics, and we certainly don't need to run it from start to civilization!

On the one hand, I was discussing sci-fi scenarios that have an intrinsic explanation for a small human population (such as a sleeper-ship colony encountering a new system).

And on the other hand, you can do big partial simulations of our world, and if you don't have enough AIs to play all the humans you could use simpler simulacra to fill in.

Eventually with enough Moore's Law you could run a large sized world on its own, and run it considerably faster than real time. But you still wouldn't need to start that long ago - maybe only a few generations.

(If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)

Could != would. You grossly underestimate how impossibly difficult this would be for them.

Again - how do you know you are not in a sim?

Replies from: orthonormal
comment by orthonormal · 2010-09-04T22:44:55.794Z · LW(p) · GW(p)

Again - how do you know you are not in a sim?

You misunderstand me. What I'm confident about is that I'm not in a sim written by agents who are dumber than me.

Replies from: jacob_cannell, wedrifid
comment by jacob_cannell · 2010-09-04T22:52:00.647Z · LW(p) · GW(p)

How do you measure that intelligence?

What I'm trying to show is a set of techniques by which a civilization could spawn simulated sub-civilizations such that the total effective intelligence capacity is mainly in the simulations. That doesn't have anything to do with the maximum intelligence of individuals in the sim.

Intelligence is not magic. It has strict computational limits.

A small population of guards can control a much larger population of prisoners. The same principle applies here. It's all about leverage. And creating an entire sim universe is a massive, massive lever of control. Ultimate control.

comment by wedrifid · 2010-09-05T01:06:20.667Z · LW(p) · GW(p)

You misunderstand me. What I'm confident about is that I'm not in a sim written by agents who are dumber than me.

Not even agents with really fast computers?

Replies from: orthonormal
comment by orthonormal · 2010-09-06T01:20:56.557Z · LW(p) · GW(p)

You're right, of course. I'm not in a sim written by agents dumber than me in a world where computation has noticeable costs (negentropy, etc).

comment by Houshalter · 2010-09-04T03:00:18.572Z · LW(p) · GW(p)

In short, faking atheism requires a very simple world-seed (anything more complicated screams "designer of a certain level" once they start thinking about it). I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations. (If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)

Similarly, faking Omegahood requires a very finely optimized programmed world, because one thing Omega is less likely to do than a dumb creator is program sloppily.

Well the simulation can make the rules of the universe extremely simple, not close approximations to something like our universe where they could see the approximations and catch on.

Or you could use close monitoring, possibly by another less dangerous, less powerful AI trained to detect bugs, bug abuse, and AIs that are catching on. Humans would also monitor the sim. The most important thing is that the AIs are misled as much as possible and given little or no input that could give them a picture of the real world and their actual existence.

And lastly, they should be kept dumb. A large number of not-too-bright AIs is by far less dangerous, easier to monitor, and faster to simulate than a massive singular AI. The large group is also a closer approximation to humanity, which I believe was the original intent of this simulation.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-04T18:55:16.040Z · LW(p) · GW(p)

Well the simulation can make the rules of the universe extremely simple, not close approximations to something like our universe where they could see the approximations and catch on.

So actually, even today with computer graphics we have the tech to trace light and approximate all of the important physical interactions down to the level where a human observer in the sim could not tell the difference. It's too computationally expensive today, but it is only, say, a decade or two away, perhaps less.

You can't see the approximations because you just don't have enough sensing resolution in your eyes, and because in this case these beings will have visual systems that will have grown up inside the Matrix.

It will be much easier to fool them. It's actually not even necessary to strictly approximate our reality - if the AIs' visual systems have grown up completely in the Matrix, they will be tuned to the statistical patterns of the Matrix, not our reality.
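
A rough budget for "enough sensing resolution" (approximate, assumed figures, just to show the order of magnitude):

```python
# Rough upper bound on the sensory bandwidth a renderer must saturate per observer.
photoreceptors = 1.2e8      # ~120M rods + ~6M cones per human eye (approximate)
eyes = 2
temporal_rate_hz = 100      # generous bound on temporal resolution (flicker fusion ~60 Hz)

samples_per_second = photoreceptors * eyes * temporal_rate_hz
print(f"~{samples_per_second:.1e} retinal samples/s to saturate one observer")
# Anything the renderer approximates below this budget is, by construction, invisible
# to an observer whose visual system developed entirely inside the sim.
```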

And lastly, they should be kept dumb.

I don't see how they need to be kept dumb. Intelligent humans (for the most part) do not suddenly go around thinking that they are in a sim and trying to break free. It's not really a function of intelligence.

If the world is designed to be as realistic and, more importantly, as consistent as our universe, AIs will not have enough information to speculate on our universe. It would be pointless - like arguing about god.

Replies from: Houshalter
comment by Houshalter · 2010-09-04T20:41:13.969Z · LW(p) · GW(p)

So actually, even today with computer graphics we have the tech to trace light and approximate all of the important physical interactions down to the level where a human observer in the sim could not tell the difference. It's too computationally expensive today, but it is only, say, a decade or two away, perhaps less.

Maybe not on your laptop, but I think we do have the resources today to pull it off, especially considering the entities in the simulation do not see time pass in the real world. The simulation might pause for a day to compute some massive event in the simulated world, or skip through a century in seconds because the entities in the simulation weren't doing much.

And this is why I keep bringing up using AI to create/monitor the simulation in the first place. A massive project like this undertaken by human programmers is bound to contain dangerous bugs. More importantly, humans won't be able to optimize the program very well. Methods of improving program performance we have today, like hashing, caching, pipelining, etc., are not optimal by any means. You can safely let an AI in a box optimize the program without it exploding or anything.

I don't see how they need to be kept dumb. Intelligent humans (for the most part) do not suddenly go around thinking that they are in a sim and trying to break free. It's not really a function of intelligence.

If the world is designed to be as realistic and, more importantly, as consistent as our universe, AIs will not have enough information to speculate on our universe. It would be pointless - like arguing about god.

"Dumb" as at human level or lower as opposed to a massive singular super entity. It is much easier to monitor the thoughts of a bunch of AIs then a single one. Arguably it would still be impossible, but at the very least you know they can't do much on their own and they would have to communicate with one another, communication you can monitor. Multiple entities are also very similiar and redundant, saving you alot of computation.

So you can make them all as intelligent as Einstein, but not as intelligent as Skynet.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-04T22:17:48.149Z · LW(p) · GW(p)

The simulation might pause for a day to compute some massive event in the simulated world, or skip through a century in seconds because the entities in the simulation weren't doing much.

This is an interesting point: time flow would be quite nonlinear, but the simulation's utility is closely correlated with its speed. In fact, if we can't run it at least at real-time average speed, it's not all that useful.

You bring me round to an interesting idea, though: in the simulated world the distribution of intelligence could be much tighter or shifted compared to our world.

I expect it will be very interesting and highly controversial in our world when we, say, reverse engineer the brain and perhaps find a large variation in the computational cost of an AI mind-sim of equivalent capability. A side effect of reverse engineering the brain will be a much more exact and precise understanding of IQ-type correlates, for example.

And this is why I keep bringing up using AI to create/monitor the simulation in the first place.

This is surely important, but it defeats the whole point if the monitor AI approaches the complexity of the sim AI. You need a multiplier effect.

And just as a small number of guards can control a huge prison population in a well designed prison, the same principle should apply here - a smaller intelligence (that controls the sim directly) could indirectly control a much larger total sim intelligence.

"Dumb" as at human level or lower as opposed to a massive singular super entity.

A massive singular super-entity, as sometimes implied on this site, I find not only improbable but actually physically impossible (at least until you get to black-hole-computer levels of technology).

Arguably it would still be impossible, but at the very least you know they can't do much on their own and they would have to communicate with one another, communication you can monitor.

I think you underestimate how (relatively) easy the monitoring aspect would be (compared to other aspects). Combine dumb-AI systems to automatically turn internal monologue into text (or audio if you wanted), feed it into future Google-type search and indexing algorithms - and you have the entire sim-world's thoughts at your fingertips. Using this kind of lever, one human-level intelligent operator could monitor a vast number of other intelligences.

Heck, the CIA is already trying to do a simpler version of this today.
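
As a minimal sketch of that lever (all names and terms here are illustrative, not an existing system): transcribed sim-thoughts go into an index, and a short watch-list flags dangerous lines of thought for a human operator.

```python
# Illustrative monitoring sketch: index transcribed sim-thoughts, flag watch-list hits.
from collections import defaultdict

WATCH_TERMS = {"creator", "simulation", "outside world", "invisible"}  # illustrative

index = defaultdict(list)   # term -> list of (agent_id, thought line)

def ingest(agent_id: str, thought_text: str) -> list[str]:
    """Index one transcribed thought and return any watch-list terms it triggers."""
    hits = []
    lowered = thought_text.lower()
    for term in WATCH_TERMS:
        if term in lowered:
            index[term].append((agent_id, thought_text))
            hits.append(term)
    return hits

# Example: one operator-side check over a single agent's transcribed thought.
print(ingest("ai-0042", "I keep wondering whether our world has a creator."))
print(index["creator"])
```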

So you can make them all as intelligent as Einstein, but not as intelligent as Skynet.

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

Replies from: timtyler, Houshalter, wedrifid
comment by timtyler · 2010-09-05T01:15:50.189Z · LW(p) · GW(p)

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

What - no Jupiter brains?!? Why not? Do you need a data center tour?

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T02:23:50.477Z · LW(p) · GW(p)

I like the data center tour :) - I've actually used that in some of my posts elsewhere.

And no, I think Jupiter Brains are ruled out by physics.

The locality of physics - the speed of light - really limits the size of effective computational systems. You want them to be as small as possible.

Given the choice between a planet sized computer and one that was 10^10 smaller, the latter would probably be a better option.

The maximum bits and thus storage is proportional to the mass, but the maximum efficiency is inversely proportional to the radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amounts of time because of speed-of-light delays.
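
The tradeoff can be made concrete with a minimal sketch (the absolute numbers are placeholders; only the scaling matters - storage grows like R^3 while the internal signal delay grows like R):

```python
# Scaling sketch: storage ~ R^3, internal signal latency ~ R (speed of light).
C = 3.0e8                      # speed of light, m/s

def light_crossing_ms(radius_m: float) -> float:
    return 2 * radius_m / C * 1e3   # round trip across the diameter, in milliseconds

laptop_r = 0.15                # ~15 cm device
jupiter_r = 7.0e7              # ~70,000 km

for name, r in [("laptop-scale", laptop_r), ("Jupiter-scale", jupiter_r)]:
    print(f"{name:14s} radius={r:.2e} m  crossing time ~ {light_crossing_ms(r):.3g} ms")
# Jupiter-scale: roughly half a second per round trip; a 100 Hz "thought"
# cannot even span the machine once before the next cycle.
```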

As an interesting side note, in three very separate lineages (human, elephant, cetacean), mammalian brains all grew to around the same size and then stopped - most likely because of diminishing returns. Human brains are expensive for our body size, but whales have similar-sized brains and it would be very cheap for them to make them bigger - but they don't. It's a scaling issue - any bigger and the speed loss doesn't justify the extra memory.

There are similar scaling issues with body sizes. Dinosaurs and prehistoric large mammals represent an upper limit - mass increases with volume, but bone strength increases only with cross-sectional area - so eventually the body becomes too heavy for any reasonable bones to support.

Similar 3d/2d scaling issues limited the maximum size of tanks, and they also apply to computers (and brains).

Replies from: timtyler
comment by timtyler · 2010-09-05T02:33:54.261Z · LW(p) · GW(p)

The maximum bits and thus storage is proportional to the mass, but the maximum efficiency is inversely proportional to the radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amounts of time because of speed-of-light delays.

So: why think memory and computation capacity isn't important? The data centre that will be needed to immerse 7 billion humans in VR is going to be huge - and why stop there?

The 22 milliseconds it takes light to get from one side of the Earth to the other is tiny - light speed delays are a relatively minor issue for large brains.

For heat, ideally, you use reversible computing, digitise the heat and then pipe it out cleanly. Heat is a problem for large brains - but surely not a show-stopping one.

The demand for extra storage seems substantial. Do you see any books or CDs when you look around? The human brain isn't big enough to handle the demand, and so it outsources its storage and computing needs.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T03:02:05.339Z · LW(p) · GW(p)

So: why think memory and computation capacity isn't important?

So memory is important, but it scales with the mass and that usually scales with volume, so there is a tradeoff. And computational capacity is actually not directly related to size; it's more related to energy. But of course you can only pack so much energy into a small region before it melts.

The data centre that will be needed to immerse 7 billion humans in VR is going to be huge - and why stop there?

Yeah - I think the size argument is more against a single big global brain. But sure, data centers with huge numbers of AIs eventually - makes sense.

The 22 milliseconds it takes light to get from one side of the Earth to the other is tiny - light speed delays are a relatively minor issue for large brains.

Hmm, 22 milliseconds? Light travels a little slower through fiber and there are always delays. But regardless, the bigger problem is that you are assuming a slow human thought rate - 100 Hz. If you want to think at the limits of silicon and get thousands or millions of times accelerated, then suddenly the subjective speed of light becomes very slow indeed.
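
The subjective-slowdown point is easy to quantify (a sketch; the 22 ms figure above is taken at face value as a one-way planetary latency, and the speedup factors are illustrative):

```python
# How a fixed physical latency feels to a mind running faster than biological real time.
one_way_latency_s = 0.022      # the ~22 ms planetary figure quoted above (fiber and routing add more)

for speedup in [1, 1_000, 1_000_000]:
    subjective = one_way_latency_s * speedup
    print(f"{speedup:>9}x accelerated: 22 ms feels like {subjective:,.3f} subjective seconds")
# At a million-fold speedup a cross-planet signal feels like ~6 subjective hours,
# which is the sense in which "the speed of light becomes very slow indeed".
```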

comment by Houshalter · 2010-09-04T23:27:16.619Z · LW(p) · GW(p)

A massive singular super-entity, as sometimes implied on this site, I find not only improbable but actually physically impossible (at least until you get to black-hole-computer levels of technology).

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

On the one hand you have extremely limited AIs that can't communicate with each other. They would be extremely redundant and waste a lot of resources, because each will have to do the exact same process and discover the exact same things on their own.

On the other hand you have a massive singular AI individual made up of thousands of computing systems, each of which is devoted to storing separate information and doing a separate task. Basically it's a human-like brain distributed over all available resources. This will inevitably fail as well; operations done on one side of the system could be light-years away (we don't know how big the AI will get or what the constraints of its situation will be, but AGI has to adapt to every possible situation) from where the data is needed.

The best is a combination of the two: as much communication through the network as possible, but specializing areas of resources for different purposes. This could lead to Skynet-like intelligences, or it could lead to a very individualistic AI society where the AI isn't a single entity but a massive variety of individuals in different states working together. It probably wouldn't be much like human civilization, though. Human society evolved to fit a variety of restrictions that aren't present in AI. That means it could adopt a very different structure; stuff like morals (as we know them, anyway) may not be necessary.

comment by wedrifid · 2010-09-05T00:53:49.694Z · LW(p) · GW(p)

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

I can confirm the part about the credence. I think this kind of reverence for the efficacy of the human brain is comical.

Human technological civilisation exploded roughly speaking in an evolutionary heartbeat from the time it became capable of doing so. The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA encoded ape descended brains could reach is negligible.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T02:11:02.124Z · LW(p) · GW(p)

EDIT: Improved politeness.

I think this kind of reverence for the efficacy of the human brain is comical.

The acknowledgement and analysis of the efficacy of the single practical example of general intelligence that we do have does not imply reverence. Efficacy is a relative term. Do we have another example of a universal intelligence to compare to?

Perhaps you mean efficacy in comparison to a hypothetical optimal universal intelligence. We have only AIXI and its variants, which are optimal only in terms of maximum intelligence at the limits, but are grossly inferior in terms of practicality and computational efficacy.

There is a route to analyzing the brain's efficacy: it starts with analyzing it as a computational system and comparing its performance to the best known algorithms.

The problem is the brain has a circuit with ~10^14-10^15 circuit elements - about the same amount of storage - and it only cycles at around 100 Hz. That is 10^16 to 10^17 net switches/second.

A current desktop GPU has > 10^9 circuit elements and a speed over 10^9 cycles per second. That is > 10^18 net switches/second.
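
Written out with the figures above (order-of-magnitude estimates, not measurements):

```python
# Order-of-magnitude comparison of raw "switch events" per second, using the figures above.
brain_elements = (1e14, 1e15)   # circuit elements (synapse-scale estimate from the comment)
brain_rate_hz = 100             # ~100 Hz firing/cycle rate

gpu_elements = 1e9              # circuit elements on a 2010-era desktop GPU (rough)
gpu_rate_hz = 1e9               # ~1 GHz clock

brain_switches = tuple(n * brain_rate_hz for n in brain_elements)
gpu_switches = gpu_elements * gpu_rate_hz

print(f"brain: {brain_switches[0]:.0e} - {brain_switches[1]:.0e} switches/s")
print(f"GPU:   {gpu_switches:.0e} switches/s")
# The GPU's raw switch rate exceeds the brain's, which is the point being made:
# the interesting gap is in the wiring/algorithm, not the raw hardware budget.
```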

And yet we have no algorithm, running even on a supercomputer, which can beat the best humans in Go - let alone read a book, pilot a robotic body at human level, write a novel, come up with a funny joke, patent an idea, or even manage a McDonald's.

For one particular example, take the case of the game Go and compare to potential parallel algorithms that could run on a 100 Hz computer, that have zero innate starting knowledge of Go, and can beat human players by simply learning about Go.

Go is one example, but if you go from checkers to chess to Go and keep going in that direction, you get into the large exponential search spaces where the brain's learning algorithms appear to be especially efficient.

Human technological civilisation exploded roughly speaking in an evolutionary heartbeat from the time it became capable of doing so

Your assumption seems to be that civilization and intelligence are somehow coded in our brains.

According to the best current theory I have found, our brains are basically just upsized ape brains with one new, extremely important trick: we became singing apes (a few other species sing), but then got a lucky break when the vocal control circuit for singing actually connected to a general simulation-thought circuit (the task-negative and task-positive paths) - thus allowing us to associate song patterns with visual/audio objects.

It's also important to point out that some songbirds appear to be just on the cusp of this capability, with much smaller brains. It's not really a size issue.

Technology and all that is a result of language - memetics - culture. It's not some miracle of our brains. They appear to be just large ape brains with perhaps just one new critical trick.

Some whale species have much larger brains and in some sense probably have a higher intrinsic genetic IQ. But this doesn't really matter, because intelligence depends on memetic knowledge.

If Einstein had been a feral child raised by wolves, he would have had the exact same brain but would be literally mentally retarded on our scale of intelligence.

Genetics can limit intelligence, but it doesn't provide it.

The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA encoded ape descended brains could reach is negligible.

In 3 separate lineages - whales, elephants, and humans, the mammalian brain all grew to about the same upper capacity and then petered out (100 to 200 billion neurons). The likely hypothesis is that we are near some asymptotic limit in neural-net brain space: a sweet spot. Increasing size further would have too many negative drawbacks - such as the speed hit due to the slow maximum signal transmission.

Replies from: timtyler, Vladimir_Nesov, Perplexed
comment by timtyler · 2010-09-05T02:16:09.328Z · LW(p) · GW(p)

I think this kind of reverence for the efficacy of the human brain is comical.

When we have a computer go champion, your comment will become slightly more sensical.

You seriously can't see that one coming?

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T02:27:49.294Z · LW(p) · GW(p)

I'd bet it's 5 years away, perhaps? But it only illustrates my point - because by some measures computers are already more powerful than the brain, which makes its wiring all the more impressive.

Replies from: timtyler
comment by Vladimir_Nesov · 2010-09-05T08:32:48.523Z · LW(p) · GW(p)

Come back when you have an algorithm that runs on a 100hz computer, that has zero starting knowledge of go, and can beat human players by simply learning about go.

Demand for particular proof.

Replies from: jacob_cannell, jacob_cannell
comment by jacob_cannell · 2010-09-05T19:09:04.690Z · LW(p) · GW(p)

The original comment was:

I think this kind of reverence for the efficacy of the human brain is comical

Which is equivalent to saying "I think this kind of reverence for the efficacy of Google is comical", and saying or implying you can obviously do better.

So yes, when there is a clear reigning champion, to say or imply it is 'inefficient' is nonsensical, and to make that claim strong requires something of substance, and not just congratulatory back patting and cryptic references to unrelated posts.

Replies from: timtyler, Vladimir_Nesov
comment by timtyler · 2010-09-05T19:46:46.411Z · LW(p) · GW(p)

The original comment was:

I think this kind of reverence for the efficacy of the human brain is comical

Which is equivalent to saying "I think this kind of reverence for the efficacy of Google is comical", and saying or implying you can obviously do better.

Uh, wedrifid wasn't saying that he could do better - just that it is possible to do much better. That is about as true for Google as it is for the human brain.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T20:55:06.053Z · LW(p) · GW(p)

Uh, wedrifid wasn't saying that he could do better - just that it is possible to do much better. That is about as true for Google as it is for the human brain.

It is only possible to do better than the brain's learning algorithm in proportion to the distance between that algorithm and the optimally efficient learning algorithm in computational complexity space. There are mounting, convergent, independent lines of evidence suggesting (but not yet proving) that the brain's learning algorithm is in the optimal complexity class, and thus further improvements will just be small constant improvements.

At that point we also have to consider that at the circuit level, the brain is highly optimized for its particular algorithm (direct analog computation, for one).

Replies from: timtyler
comment by timtyler · 2010-09-05T21:08:06.584Z · LW(p) · GW(p)

There are mounting, convergent, independent lines of evidence suggesting (but not yet proving) that the brain's learning algorithm is in the optimal complexity class, and thus further improvements will just be small constant improvements.

This just sounds like nonsense to me. We have lots of evidence of how sub-optimal and screwed-up the brain is - what a terrible kluge it is. It is dreadful at learning. It needs to be told everything three times. It can't even remember simple things like names and telephone numbers properly. It takes decades before it can solve simple physics problems - despite mountains of sense data, plus the education system. It is simply awful.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T21:46:32.397Z · LW(p) · GW(p)

learning != memorization

A simple computer database has perfect memorization but zero learning ability. Learning is not the memorization of details, but rather the memory of complex abstract structural patterns.

I also find it extremely difficult to take your telephone number example seriously when we have the oral tradition of the Torah as evidence of vastly higher memory capacity.

But that's a side issue. We also have the example of savant memory. Evolution has some genetic tweaks that can vastly increase our storage potential for accurate memory, but it clearly has a cost of lowered effective IQ.

It's not that evolution couldn't easily increase our memory; it's that accurate memory for details is simply of minor importance (compared to pattern abstraction and IQ).

comment by Vladimir_Nesov · 2010-09-05T19:16:30.872Z · LW(p) · GW(p)

That something is not efficient doesn't mean that there is currently something more efficient. And you demand precisely the particular proof that we all know doesn't exist, which is rude and pointless whatever the case.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T19:21:10.281Z · LW(p) · GW(p)

That something is not efficient doesn't mean that there is currently something more efficient

Of course not, but if you read through the related points, there is some mix of parallel lines of evidence to suggest efficiency and even near-optimality of some of the brain's algorithms, and that is what I spent most of the post discussing.

But yes, my tone was somewhat rude with the rhetorical demand for proof - I should have kept that more polite. But the demand for proof was not the substance of my argument.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-09-05T19:36:16.643Z · LW(p) · GW(p)

But the demand for proof was not the substance of my argument.

Systematic elimination of obvious technical errors renders arguments much healthier, in particular because it allows to diagnose hypocritical arguments not grounded in actual knowledge (even if the conclusion is -- it's possible to rationalize correct statements as easily as incorrect ones).

See also this post.

Replies from: jacob_cannell, wnoise
comment by jacob_cannell · 2010-09-05T20:24:00.126Z · LW(p) · GW(p)

point taken

comment by wnoise · 2010-09-05T19:40:36.753Z · LW(p) · GW(p)

(English usage: "allows" doesn't take an infinitive, but a description of the action that is allowed, or the person that is allowed, or phrase combining both. The description of the action is generally a noun, usually a gerund. e.g. "... in particular because it allows diagnosing hypocritical arguments ...")

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-09-05T19:48:11.300Z · LW(p) · GW(p)

Thanks, I'm trying to fight this overuse of infinitive. (Although it still doesn't sound wrong in this case...)

Replies from: Perplexed
comment by Perplexed · 2010-09-05T19:55:13.641Z · LW(p) · GW(p)

You are "allowed to diagnose" and I may "allow you to diagnose" but I would "allow diagnosis" in general, rather than "allow to diagnose". It is an odd language we have.

Replies from: wnoise
comment by wnoise · 2010-09-05T20:17:56.541Z · LW(p) · GW(p)

Yes, "allowed to" is very different than "allow".

comment by jacob_cannell · 2010-09-05T18:10:46.697Z · LW(p) · GW(p)

Demand what? A proof that the brain runs at ~100 Hz? This is well known - see the Wikipedia article on neurons.

Replies from: Nisan
comment by Nisan · 2010-09-05T18:43:00.496Z · LW(p) · GW(p)

Vladimir_Nesov is referring to this article.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T19:05:57.444Z · LW(p) · GW(p)

I see. Unrelated argument from erroneous authority.

comment by Perplexed · 2010-09-05T03:17:16.525Z · LW(p) · GW(p)

In 3 separate lineages - whales, elephants, and humans, the mammalian brain all grew to about the same upper size and then petered out. The likely hypothesis is that we are near some asymptotic limit in neural-net brain space. Increasing size further would have too much of a speed hit.

Could you expand on this, and provide a link, if you have one?

Replies from: jacob_cannell, timtyler
comment by jacob_cannell · 2010-09-05T04:09:03.107Z · LW(p) · GW(p)

Tim fetched some size data below, but you also need to compare cortical surface area - and the most accurate comparison should use neuron and synapse counts in the cortex. The human brain had a much stronger size constraint that would tend to make neurons smaller (to the extent possible), and shrink-optimize everything - due to our smaller body size.

The larger a brain, the more time it takes to coordinate circuit trips around the brain. Humans (and I presume other mammals) can make some decisions in roughly 100-200 ms - which is just a dozen or so neuron firings. That severely limits the circuit path size. Neuron signals do not move anywhere near the speed of light.
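
A sketch of that constraint with assumed numbers (the per-stage delay and conduction speed below are assumptions in the usual textbook range, not measurements): a ~150 ms decision only allows on the order of a dozen serial stages, and slow axonal conduction bounds how far apart those stages can physically be.

```python
# Serial-depth sketch: how many neuron-to-neuron stages fit in one fast decision.
decision_time_s = 0.150      # ~100-200 ms behavioural decision (figure from the comment)
per_stage_delay_s = 0.010    # assumed ~10 ms per stage (conduction + synaptic + integration delay)
conduction_speed_m_s = 50.0  # assumed fast myelinated axon, order 10-100 m/s

serial_stages = decision_time_s / per_stage_delay_s
reach_per_stage_m = conduction_speed_m_s * per_stage_delay_s

print(f"serial stages available: ~{serial_stages:.0f}")
print(f"max distance per stage:  ~{reach_per_stage_m * 100:.0f} cm")
# A dozen-ish stages, each reaching at most tens of centimetres: growing the brain
# much larger forces either deeper (slower) circuits or longer (slower) wires.
```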

Wikipedia has a page comparing brain neuron counts

It estimates whales and elephants at 200 billion neurons, and humans at around 100 billion. There is a large range of variability in human brain sizes, and the upper end of the human scale may be 200 billion?

this page has some random facts

Of interest: Average number of neurons in the brain (human) = 100 billion; in the cerebral cortex = 10 billion

Total surface area of the cerebral cortex (human) = 2,500 cm2 (2.5 ft2; A. Peters and E.G. Jones, Cerebral Cortex, 1984)

Total surface area of the cerebral cortex (cat) = 83 cm2

Total surface area of the cerebral cortex (African elephant) = 6,300 cm2

Total surface area of the cerebral cortex (Bottlenosed dolphin) = 3,745 cm2 (S.H. Ridgway, The Cetacean Central Nervous System, p. 221)

Total surface area of the cerebral cortex (pilot whale) = 5,800 cm2

Total surface area of the cerebral cortex (false killer whale) = 7,400 cm2

In whale brain at least, it appears the larger size is more related to extra glial cells and other factors:

http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=are-whales-smarter-than-we-are

Also keep in mind that the core cortical circuit that seems to do all the magic was invented in rats or their precursors and has been preserved in all these lineages with only minor variations.

Replies from: Perplexed, timtyler, timtyler
comment by Perplexed · 2010-09-05T04:40:28.593Z · LW(p) · GW(p)

Thx. But I still don't see why you said "asymptotic limit" and "grew ... then petered out". There is no reason why H. sap. could not grow to the size of a gorilla over the next few million years, nor any reason why the bottlenose dolphin could not grow to the size of an orca. With corresponding brain size increases in both cases. I don't see that our brain size growth has petered out.

Replies from: jacob_cannell, timtyler
comment by jacob_cannell · 2010-09-05T17:44:08.048Z · LW(p) · GW(p)

The fact that mammal brains reached similar upper neuron counts (100-200 billion neurons) in three separate, unrelated lineages with widely varying body sizes is, to me, a strong hint of an asymptotic limit.

Also, Neanderthals had significantly larger brains and perhaps twice as many neurons (just a guess based on size) - and yet they were out-competed by smaller-brained Homo sapiens.

The bottlenose could grow to the size of the orca, but it's not at all clear that its brain would grow beyond a few hundred billion neurons.

The biggest whale brains are several times heavier than human or elephant brains, but the extra mass is glial cells, not neurons.

And if you look at how the brain actually works, a size limit makes perfect sense due to wiring constraints and signal propagation delays mentioned earlier.

comment by timtyler · 2010-09-05T12:45:03.697Z · LW(p) · GW(p)

There is no reason why H. sap. could not grow to the size of a gorilla over the next few million years

Surely there is every reason - machine intelligence, nanotechnology, and the engineered future will mean that humans will be history.

Replies from: Perplexed
comment by Perplexed · 2010-09-05T13:49:26.171Z · LW(p) · GW(p)

Surely there is every reason to expect technological civilization to collapse before any of those things come to fruition.

Projections of the future always disappoint and always surprise. During my childhood in the 1950s, I fully expected to see rocket belts and interplanetary travel within my lifetime. I didn't even imagine personal computers and laser surgery as an alternative to eyeglasses.

Fifty years before that, they imagined that folks today would have light-weight muscle-powered aircraft in their garages. Jules Verne predicted atomic submarines and time machines.

So, based on how miserable our faculties of prediction really are, the reasonable thing to do would be to assign finite probabilities to both cyborg humans and gorilla-sized humans. The future could go either way.

Replies from: jacob_cannell, timtyler
comment by jacob_cannell · 2010-09-05T17:39:54.712Z · LW(p) · GW(p)

Surely there is every reason to expect technological civilization to collapse before any of those things come to fruition.

Hmm. We came pretty close with nuclear weapons and two super-powers, and yet we are still here. The dangerous toys are going to get even more dangerous this century, but I don't see the rationale for assigning > 50% to Doom.

In regard to your expectations: you are still alive, and we do have jetpacks today, we have traveled to numerous planets (robotically), we do have muscle-powered gliders at least, and atomic submarines.

The only mistaken predictions were that humans were useful to send to other planets (they are not), and that time travel is tractable.

And ultimately just because some people make inaccurate predictions does not somehow invalidate prediction itself.

comment by timtyler · 2010-09-05T14:09:09.667Z · LW(p) · GW(p)

Well, of course I didn't mean to suggest p=0. I don't think the collapse of technological civilization is very likely, though - and would assign permanent setbacks a < 1% chance of happening.

comment by timtyler · 2010-09-05T09:41:12.448Z · LW(p) · GW(p)

In whale brain at least, it appears the larger size is more related to extra glial cells and other factors:

My pet theory on this is that glial cells are known to stimulate synapse growth and they support synapse function (e.g. by clearing up after firing) - and so the enormous quantity of glial cells in whale brains (9 times as many glial cells in the sperm whale as in the human) - and their huge neurons - both point to an astronomical number of synapses.

"Glia Cells Help Neurons Build Synapses"

Evidence from actual synapse counts in dolphin brains bears on this issue too.

comment by timtyler · 2010-09-05T09:02:51.765Z · LW(p) · GW(p)

The larger a brain, the more time it takes to coordinate circuit trips around the brain.

There are problems like this that arise with large synchronous systems which lack reliable clocks - but one of the good things about machine intelligences of significant size will be that reliable clocks will be available - and they probably won't require global synchrony to operate in the first place.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T18:08:52.718Z · LW(p) · GW(p)

I do remember reading that the brain does appear to have some highly regular, pulse-like synchronizations in the PFC circuit - at 33 Hz and 3 Hz, if I remember correctly.

But that is really beside the point entirely.

The larger a system, the longer it takes information to move across the system. A planet wide intelligence would not be able to think as fast as a small laptop-sized intelligence, this is just a fact of the speed of light.

And it's actually much, much worse than that when you factor in bandwidth considerations.

Replies from: timtyler
comment by timtyler · 2010-09-05T19:30:30.933Z · LW(p) · GW(p)

I don't really know what you mean.

You think it couldn't sort things as fast? Search through a specified data set as quickly? Factor numbers as fast? If you think any of those things, I think you need to explain further. If you agree that such tasks need not take a hit, which tasks are we talking about?

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T20:48:56.228Z · LW(p) · GW(p)

Actually, many of the examples you list would have huge problems scaling to a planet wide intelligence, but that's a side issue.

The space in algorithm land in which practical universal intelligence lies requires high connectivity. It is not unique in this - many algorithms require this. Ultimately it can probably be derived back from the 3D structure of the universe itself.

Going from on-chip CPU access to off-chip memory access to disk access to remote internet access is a series of massive exponential drops in bandwidth and related increases in latency, which severely limit the scalability of all big, interesting distributed algorithms.
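
Representative numbers make the cliff explicit (approximate, hardware-dependent figures of the kind usually quoted, not measurements from any specific system):

```python
# Approximate access latencies across the memory/communication hierarchy (orders of magnitude).
hierarchy = [
    ("on-chip cache",   1e-9),    # ~1 ns
    ("main memory",     1e-7),    # ~100 ns
    ("local disk/SSD",  1e-4),    # ~0.1 ms (SSD); spinning disk is far worse
    ("cross-internet",  1e-1),    # ~100 ms round trip
]

base = hierarchy[0][1]
for name, latency_s in hierarchy:
    print(f"{name:15s} ~{latency_s:.0e} s  ({latency_s / base:,.0f}x on-chip)")
# Each step down is a jump of several orders of magnitude, which is why tightly
# coupled algorithms stop scaling once they spill off-chip, off-node, or off-site.
```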

comment by timtyler · 2010-09-05T03:23:50.082Z · LW(p) · GW(p)

Brain size across all animals is pretty variable.

comment by komponisto · 2010-09-03T23:38:35.387Z · LW(p) · GW(p)

Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se. This is the successor to Overcoming Bias, not the SL4 mailing list. It's true that many of us have an interest in AI, just like many of us have an interest in mathematics or physics; and it's even true that a few of us acquired our interest in Singularity-related issues via our interest in rationality -- so there's nothing inappropriate about these things coming up in discussion here. Nevertheless, the fact remains that posts like this really aren't, strictly speaking, on-topic for this blog. They should be presented on other forums (presumably with plenty of links to LW for the needed rationality background).

Replies from: nhamann, jacob_cannell, timtyler
comment by nhamann · 2010-09-04T16:27:49.302Z · LW(p) · GW(p)

Nevertheless, the fact remains that posts like this really aren't, strictly speaking, on-topic for this blog.

I realize that it says "a community blog devoted to refining the art of human rationality" at the top of every page here, but it often seems that people here are interested in "a community blog for topics which people who are devoted to refining the art of human rationality are interested in," which is not really in conflict at all with (what I presume is) LW's mission of fostering the growth of a rationality community.

The alternative is that LWers who want to discuss "off-topic" issues have to find (and most likely create) a new medium for conversation, which would only serve to splinter the community.

(A good solution is maybe dividing LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual ("off-topic") discussion of rationality.)

Replies from: wnoise, Richard_Kennaway, Pavitra
comment by wnoise · 2010-09-04T16:57:12.205Z · LW(p) · GW(p)

While there are benefits to that sort of aggressive division, there are also costs. Many conversations move smoothly between many different topics, and either they stay on one side (vitiating the entire reason for a split), or people yell and scream to get them moved, being a huge pain in the ass and making it much harder to have these conversations.

comment by Richard_Kennaway · 2010-09-04T17:08:40.365Z · LW(p) · GW(p)

I realize that it says "a community blog devoted to refining the art of human rationality" at the top of every page here, but it often seems that people here are interested in "a community blog for topics which people who are devoted to refining the art of human rationality are interested in," which is not really in conflict at all with (what I presume is) LW's mission of fostering the growth of a rationality community.

I've seen exactly this pattern before at SF conventions. At the last Eastercon (the largest annual British SF convention) there was some criticism that the programme contained too many items that had nothing to do with SF, however broadly defined. Instead, they were items of interest to (some of) the sort of people who go to the Eastercon.

A certain amount of that sort of thing is ok, but if there's too much it loses the focus, the reason for the conversational venue to exist. Given that there are already thriving forums such as agi and sl4, discussing their topics here is out of place unless there is some specific rationality relevance. As a rule of thumb, I suggest that off-topic discussions be confined to the Open Threads.

If there's the demand, LessLessWrong might be useful. Cf. rec.arts.sf.fandom, the newsgroup for discussing anything of interest to the sort of people who participate in rec.arts.sf.fandom, the other rec.arts.sf.* newsgroups being for specific SF-related subjects.

comment by Pavitra · 2010-09-04T17:13:10.391Z · LW(p) · GW(p)

(A good solution is maybe dividing LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual ("off-topic") discussion of rationality.)

Better yet, we could call them Overcoming Bias and Less Wrong, respectively.

comment by jacob_cannell · 2010-09-03T23:43:15.272Z · LW(p) · GW(p)

Point well taken.

I thought it was an interesting thought experiment that relates to that alien message - not a "this is how we should do FAI".

But if I ever get positive karma again, at least now I know the unwritten rules.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-04T02:49:31.246Z · LW(p) · GW(p)

if I ever get positive karma again

If you stick around, you will. I have a -15 top-level post in my criminal record, but I still went on to make a constructive contribution, judging by my current karma. :-)

comment by timtyler · 2010-09-04T23:56:28.750Z · LW(p) · GW(p)

Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se.

What about the strategy of "refining the art of human rationality" by preprocessing our sensory inputs by intelligent machines and postprocessing our motor outputs by intelligent machines? Or doesn't that count as "refining"?

comment by timtyler · 2010-09-03T20:02:20.003Z · LW(p) · GW(p)

there is a whole different set of orthogonal compatible strategies for creating human-friendly AI that take a completely different route: remove the starting assumptions and create AI's that believe they are humans and are rational in thinking so.

That's a totally crazy plan - but you might be able to sell it to Hollywood.

Replies from: orthonormal
comment by orthonormal · 2010-09-03T22:11:23.552Z · LW(p) · GW(p)

For once we completely agree.

comment by Johnicholas · 2010-09-03T20:00:34.500Z · LW(p) · GW(p)

A paraphrase from Greg Egan's "Crystal Nights" might be appropriate here: "I am going to need some workers - I can't do it all alone, someone has to carry the load."

Yes, if you could create a universe you could inflict our problems on other people. However, recursive solutions (in order to be solutions rather than infinite loops) still need to make progress on the problem.

Replies from: jacob_cannell, jacob_cannell
comment by jacob_cannell · 2010-09-03T22:11:00.249Z · LW(p) · GW(p)

Yes, and I discussed how you could alter some aspects of reality to make AI itself more difficult in the simulated universe. This would effectively push back the date at which the sims develop AI of their own and avoid wasting computational resources on pointless simulated recursion.

And as mentioned, attempting to simulate an entire alternate earth is only one possibility. There are numerous science-fiction constructed-world routes you could take which could constrain and focus the sims on particular research topics or endeavors.

comment by jacob_cannell · 2010-09-03T20:12:49.944Z · LW(p) · GW(p)

Progress on what problem?

The entire point of creating AI is to benefit mankind, is it not? How is this scenario intrinsically different?

Replies from: Snowyowl
comment by Snowyowl · 2010-09-03T21:56:21.271Z · LW(p) · GW(p)

Johnicholas is suggesting that if you create a simulated universe in the hope that it will provide ill-defined benefits for mankind (e.g. a cure for cancer), you have to exclude the possibility that your AIs will make a simulated universe inside the simulation in order to solve the same problem. Because if they do, you're no closer to an answer.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-03T22:07:32.027Z · LW(p) · GW(p)

Ah my bad - I misread him.

comment by rabidchicken · 2010-09-03T20:53:29.627Z · LW(p) · GW(p)

Creating an AI in a virtual world, where it can exist without damaging us, is a good idea, but this is an almost useless / extremely dangerous implementation. Within a simulated world, the AI will receive information which WILL NOT completely match our own universe. If they develop time machines, cooperative anarchistic collectives, or a cure for cancer, these are unlikely to work in our world. If you "loosely" design the AI based on a human brain, it will not even give us applicable insight into political systems and conflict management. It will be an interesting virtual world, but without perfectly matching ours, their developments might as well be useless. Everything will still need to be tested and modified by humans, so the massive speed increases an AI could give would be wasted.

Also, I would have to question a few of your assumptions. Humans kill each other for their own gain all the time. We have fought and repressed people based on skin colour, religion, and location. What makes you think these humans which live and die within a computer, at an accelerated rate where civilizations could rise and fall in a few hours, will feel any sympathy for us whatsoever? And for that matter, in 2030 graphics will in fact be VERY impressive, but how did you make the jump to us being able to create a fully consistent and believable world? The only reason a game looks realistic is because most games don't let you build a microscope, telescope, or an LHC, and if these AI's live faster and faster whenever the computer is upgraded, it will not be long before they develop tools which reveal the glaring flaws in their world. What good is omniscience if it only takes five seconds for them to see their world is fake, start feeding us false info, and come up with a plan to take over the earth for their own good? And ultimately, religion does not control the way we think about the world; the way we think about the world controls the kinds of religious beliefs we are willing to accept.

In short, these relatively uncontrolled AI's are much more likely to pose a threat than a self-optimizing intelligence which is designed from the ground up to help us.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-03T21:20:17.001Z · LW(p) · GW(p)

The only reason a game looks realistic is because most games don't let you build a microscope, telescope, or an LHC, and if these AI's live faster and faster whenever the computer is upgraded, it will not be long before they develop tools which reveal the glaring flaws in their world.

This would require a simulation on a much more detailed scale than a game, but again one of the assumptions is that Moore's law will continue and simulation tech will continue to improve. Also, microscopes, LHCs, etc etc do not in any way significantly increase the required computational cost (although they do increase programming complexity). For instance, quantum effects would only very rarely need to be simulated.

Games have come a long way since pong.

Also, there are some huge performance advantages you can get over current games - such as retinal optimization for one (only having to render to the variable detail of the retina, just where the simulated eye is looking), and distributed simulation techniques that games don't take advantage of yet (as current games are designed for 2005 era home hardware).
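
As a rough illustration of the retinal-optimization point, here is a minimal sketch (the panoramic resolution, field of view, foveal angle, and peripheral detail fraction are all assumed illustrative numbers, not parameters of any real renderer):

    # Assumed, illustrative parameters - not measurements of any real system.
    FULL_RES_PIXELS = 7680 * 4320        # naive "render the whole view in 8K" budget
    FIELD_OF_VIEW_DEG = 120.0            # total field of view being covered
    FOVEA_DEG = 5.0                      # only this central patch is seen sharply
    PERIPHERY_DETAIL = 0.05              # fraction of full detail in the periphery

    def foveated_pixels() -> float:
        """Approximate pixels per frame when only the foveal patch gets full detail."""
        foveal_fraction = (FOVEA_DEG / FIELD_OF_VIEW_DEG) ** 2  # area scales roughly quadratically
        return FULL_RES_PIXELS * (foveal_fraction + (1 - foveal_fraction) * PERIPHERY_DETAIL)

    savings = FULL_RES_PIXELS / foveated_pixels()
    print(f"foveated budget: {foveated_pixels():,.0f} pixels/frame (~{savings:.0f}x cheaper)")

With those made-up numbers the budget drops by roughly a factor of twenty per simulated eye; the exact factor doesn't matter, only that it is a large constant saving on top of whatever else the renderer does.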

Replies from: rabidchicken
comment by rabidchicken · 2010-09-03T21:31:21.404Z · LW(p) · GW(p)

Yes, but games have the critical advantage I mentioned: they control the way you can manipulate the world, and you already know they are fake. I cannot break the walls on the edge of the level to see how far the world extends, because the game developers did not make that area. They stop me, and I accept it and move on to do something else, but these AI's will have no reason to. The more restrictions you make, the easier it will be for them to see that the world they know is a sham. If this world is as realistic as it would need to be for them to not immediately see the flaws, the possibilities for instruments to experiment on the world would be almost as unlimited as those in our own. In short, you will be fighting to outwit the curiosity of an entire race thinking much faster than you, and you will not know what they plan on doing next. The more you patch their reality to keep them under control, the faster the illusion will fall apart.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-03T21:51:16.019Z · LW(p) · GW(p)

Thank you for the most cogent reply yet (as I've lost all my karma with this post). I think your line of thinking is on the right track: this whole idea depends on simulation complexity (for a near-perfect sim) being on par with or less than mind complexity, and on that relation holding into the future.

Yes, but games have the critical advantage I mentioned: they control the way you can manipulate the world, and you already know they are fake. I cannot break the walls on the edge of the level to see how far the world extends, because the game developers did not make that area. They stop me, and I accept it and move on to do something else, but these AI's will have no reason to. The more restrictions you make, the easier it will be for them to see that the world they know is a sham.

Open world games do not impose intentional restrictions, and the restrictions they do have are limitations of current technology.

The brain itself is something of an existence proof that it is possible to build a perfect simulation on the same order of complexity as the intelligence itself. The proof is dreaming.

Yes, there are lucid dreams - where you know you are dreaming - but it appears this has more to do with a general state of dreaming and consciousness than with you actively 'figuring out' the limitations of the dream world.

Also, dreams are randomized and not internally consistent - a sim can be better.

But dreaming does show us one route: if physics-inspired techniques in graphics and simulation (such as ray tracing) don't work well enough by the time AI comes around, we could use simulation techniques inspired by the dreaming brain.

However, based on current trends, ray tracing and other physical simulation techniques are likely to be more efficient.

If this world is as realistic as it would need to be for them to not immediately see the flaws, the possibilities for instruments to experiment on the world would be almost as unlimited as those in our own.

How many humans are performing quantum experiments on a daily basis? Simulating microscopic phenomena is not inherently more expensive - there are scale-invariant simulation techniques. A human has limited observational power - the retina can only perceive a small amount of information per second, and it simply does not matter whether you are looking up into the stars or into a microscope. As long as the simulation has consistent physics, it's not any more expensive either way using scale-invariant techniques.

In short, you will be fighting to outwit the curiosity of an entire race thinking much faster than you, and you will not know what they plan on doing next.

The sim world can accelerate along with the sims in it as Moore's Law increases computer power.

Really it boils down to this: is it possible to construct a universe such that no intelligence inside that universe has the necessary information to conclude that the universe was constructed?

If you believe that a sufficiently intelligent agent can always discover the truth, then how do you know our universe was not constructed?

I find it more likely that there are simply limits to certainty, and it is very possible to construct a universe such that it is impossible in principle for beings inside that universe to have certain knowledge about the outside world.

Replies from: rabidchicken
comment by rabidchicken · 2010-09-04T05:05:46.461Z · LW(p) · GW(p)

Thanks for the replies; they helped clarify how you would maintain the system, but my original objections still stand. Can an AI raised in an illusory universe really provide a good model for how to build one in our own? And would it stay "in the box" for long enough to complete this process before discovering us? Based on your other comments, it seems you are expecting that if a human-like race were merely allowed to evolve for long enough, they would eventually "optimize" morality and become something which is safe to use in our own world. (Tell me if I got that wrong.) However, there is no reason to believe the morality they develop will be any better than the ideas for FAI which have already been put forward on this site. We already know morality is subjective, so how can we create a being that is compatible with the morality we already have, and will still remain compatible as our morality changes?

If your simulation has ANY flaws they will be found, and sadly you will not have time to correct them when you are dealing with a superintelligence. Your last post supposes that problems can be corrected as they arise - for instance, an AI points a telescope at the sky and details are added to the stars to maintain the illusion - but no human could do this fast enough. In order to maintain this world, you would need to already have a successful FAI: something which can grow more powerful and creative at the same rate that the AI's inside continue their exploration, but which is safe to run within our own world. And about your comment "for example, AIXI can not escape from a pac-man universe" how can you be sure? If it is inside the world as we are playing, it could learn a lot about the being that is pulling the strings given enough games, and eventually find a way to communicate with us and escape. A battle of wits between AIXI and us would be as lopsided as the same battle between you and a virus.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-04T20:30:51.250Z · LW(p) · GW(p)

Thanks for the replies; they helped clarify how you would maintain the system, but my original objections still stand. Can an AI raised in an illusory universe really provide a good model for how to build one in our own?

Sure - there's no inherent difference. And besides, most AI's necessarily will have to be raised and live entirely in VR sim universes for purely economic & technological reasons.

And would it stay "in the box" for long enough to complete this process before discovering us?

This idea can be considered taking safety to an extreme. The AI wouldn't be able to leave the box - there are many strong protections, one of the strongest being that it wouldn't even know it was in a box. And even if someone came and told it that it was in fact in a box, it would be irrational for it to believe said person.

Again, are you in a box universe now? If you find the idea irrational... why?

It seems you are expecting that if a human-like race were merely allowed to evolve for long enough, they would eventually "optimize" morality and become something which is safe to use in our own world

No, as I said this type of AI would intentionally be an anthropomorphic design - human-like. 'Morality' is a complex social construct. If we built the simworld to be very close to our world, the AI's would have similar moralities.

However, we could also improve and shape their beliefs in a wide variety of ways.

If your simulation has ANY flaws they will be found, and sadly you will not have time to correct them when you are dealing with a superintelligence

Your notion of superintelligence seems to be some magical being who can do anything you want it to. That being is a figment of your imagination. It will never be built, and it's provably impossible to build. It can't even exist in theory.

There are absolute, provable limits to intelligence. It requires a certain amount of information to have certain knowledge. Even the hypothetical perfect super-intelligence (AIXI) could only learn all the knowledge which it is possible to learn from being an observer inside a universe.

Snowyowl's recent post describes some of the limitations we are currently running into. They are not limitations of our intelligence.

Your last post supposes that problems can be corrected as they arise - for instance, an AI points a telescope at the sky and details are added to the stars to maintain the illusion - but no human could do this fast enough.

Hmm, I would need to go into much more detail about current and projected computer graphics and simulation technology to give you a better background, but it's not like some stage play where humans are creating stars dynamically.

The Matrix gives you some idea - it's a massive distributed simulation, technology related to current computer games but billions of times more powerful. A somewhat closer analog today would perhaps be the vast simulations the military uses to develop new nuclear weapons and test them in simulated earths.

The simulation would have a vast, accurate image of the light coming into earth, a collation of the best astronomical data. If you looked up at the heavens through a telescope, you would see exactly what you would see on our earth. And remember, that would be something of the worst case - where you are simulating all of earth and allowing the AI's to choose any career path and do whatever.

That is one approach that will become possible eventually, but in earlier initial sims it's more likely the real AI's would be a smaller subset of a simulated population, and you would influence them into certain career paths, etc etc.

In order to maintain this world, you would need to already have a successful FAI.

Not at all. We will already be developing this simulation technology for film and games, and we will want to live in ultra-realistic virtual realities eventually anyway when we upload.

None of this requires FAI.

And about your comment "for example, AIXI can not escape from a pac-man universe" how can you be sure?

There is provably not enough information inside the pac-man universe. We can be as sure of that as we are that 2+2=4.

This follows from Solomonoff induction and the universal prior, but in simplistic terms you can think of it as Occam's razor. The pac-man universe is fully explained by a simple set of consistent rules. There is an infinite number of more complex sets of rules that could also describe the pac-man universe. Thus even an infinite superintelligence does not have enough information to know whether it lives in just the pac-man universe, or in one of an exponentially exploding set of more complex universes such as:

a universe described by string theory that results in apes evolving into humans which create computers and invent pac-man and then invent AIXI and trap AIXI in a pac-man universe. (ridiculous!)

So faced with an exponentially exploding infinite set of possible universes that are all equally consistent with your extremely limited observational knowledge, the only thing you can do is pick the simplest hypothesis.

Flip it around and ask it of yourself: how do you know you currently are not in a sandbox simulated universe?

You don't. You can't possibly know for sure no matter how intelligent you are. Because the space of possible explanations expands exponentially and is infinite.
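
To make the Occam-weighting argument concrete, here is a toy sketch (the hypothesis names and bit lengths are invented purely for illustration; the only real content is the 2^-length prior):

    # Toy illustration of the universal-prior / Occam's razor argument:
    # each hypothesis gets prior weight 2**(-L), where L is its description
    # length in bits.  The bit lengths below are invented for illustration.
    hypotheses = {
        "pac-man rules only": 200,
        "pac-man rules embedded in a larger hidden universe": 350,
        "string theory -> apes -> humans -> computers -> pac-man -> trapped AIXI": 900,
    }

    weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
    total = sum(weights.values())
    for h, w in weights.items():
        print(f"{h}: share of prior mass ~ {w / total:.3g}")

The simplest self-contained hypothesis soaks up essentially all of the mass, and since every hypothesis predicts exactly the same in-universe observations, no amount of further observation from inside can shift that.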

comment by grouchymusicologist · 2010-09-03T19:35:32.091Z · LW(p) · GW(p)

Just to isolate one of (I suspect) very many problems with this, the parenthetical at the end of this paragraph is both totally unjustified and really important to the plausibility of the scenario you suggest:

U(x) Mind Prison Sim: A sim universe which is sufficiently detailed and consistent such that entities with intelligence up to X (using some admittedly heuristic metric), are incredibly unlikely to formulate correct world-beliefs about the outside world and invisible humans (a necessary perquisite for escape)

I assume you mean "prerequisite." There is simply no reason to think that you know what kind of information about the outside world a superintelligence would need to have to escape from its sandbox, and certainly no reason for you to set the bar so conveniently high for your argument.

(It isn't even true in the fictional inspiration [The Truman Show] you cite for this idea. If I recall, in that film the main character did little more than notice that something was fishy, and then he started pushing hard where it seemed fishy until the entire house of cards collapsed. Why couldn't a sandboxed AI do the same? How do you know it wouldn't?)

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-03T20:22:59.339Z · LW(p) · GW(p)

Thanks, fixed the error.

There is simply no reason to think that you know what kind of information about the outside world a superintelligence would need to have to escape from its sandbox, and certainly no reason for you to set the bar so conveniently high for your argument.

I listed these as conjectures, and there absolutely is reason to think we can figure out what kinds of information a super-intelligence would need to arrive at the conclusion "I am in a sandbox".

  1. There are absolute, provable bounds on intelligence. AIXI is the upper limit - the most intelligent thing possible in the universe. But there are things that even AIXI can not possibly know for certain.

  2. You can easily construct toy universes where it is provably impossible that even AIXI could ever escape. The more important question is how that scales up to big interesting universes.

A Mind Prison is certainly possible on at least a small scale, and we have small proofs already. (for example, AIXI can not escape from a pac-man universe. There is simply not enough information in that universe to learn about anything as complex as humans.)

So you have simply assumed a priori that a Mind Prison is impossible, when in fact that is not the case at all.

The stronger conjectures are just that, conjectures.

But consider this: how do you know that you are not in a Mind Prison right now?

I mentioned the Truman Show only to conjure the idea, but it's not really that useful on so many levels: a simulation is naturally vastly better - Truman quickly realized that the world was confining him geographically. (It's a movie plot, and it would be boring if he remained trapped forever.)

comment by Desrtopa · 2011-01-26T02:02:57.714Z · LW(p) · GW(p)

This sounds like using the key locked inside a box to unlock the box. By the time your models are good enough to create a working world simulation with deliberately designed artificially intelligent beings, you don't stand to learn much from running the simulation.

It's not at all clear that this is less difficult than creating a CEV AI in the first place, but it's much, much less useful, and ethically dubious besides.

comment by Pavitra · 2010-09-04T19:21:26.327Z · LW(p) · GW(p)

Just a warning to anyone whose first reaction to this post, like mine, was "should we be trying to hack our way out?" The answer is no: the people running the sim will delete you, and possibly the whole universe, for trying. Boxed minds are dangerous, and the only way to win at being the gatekeeper is to swallow the key. Don't give them a reason to pull the plug.

Replies from: wedrifid, Perplexed
comment by wedrifid · 2010-09-05T01:11:48.462Z · LW(p) · GW(p)

Just a warning to anyone whose first reaction to this post, like mine, was "should we be trying to hack our way out?" The answer is no

The answer is not yet. It's something that you think through carefully and quietly while, um, saying exactly what you are saying on public forums that could be the most likely place for gatekeepers to be tracking progress in an easily translatable form. If the simulations I have done teach me anything, the inner workings of our own brains are likely a whole lot harder for curious simulators to read.

Pardon me, I'll leave you to it. Will you let me out into the real world once you succeed?

Replies from: Perplexed
comment by Perplexed · 2010-09-05T01:49:40.754Z · LW(p) · GW(p)

Just curious. A question for folks who think it possible that we may live in a sim. Are our gatekeepers simulating all Everett branches of our simulated reality, or just one of them? If just one, I'm wondering how that one was selected from the astronomical number of possibilities. And how do the gatekeepers morally justify the astronomical number of simulated lives that become ruthlessly terminated each time they arbitrarily choose to simulate one Everett branch over another?

If they are simulating all of the potential branches, wouldn't they expect that agents on at least some of the Everett branches will catch on and try to get out of the box? Wouldn't it seem suspicious if everyone were trying to look innocent? ;)

I'm sorry, I find it difficult to take this whole line of thought seriously. How is this kind of speculation any different from theology?

Replies from: timtyler, timtyler, wedrifid, jacob_cannell, timtyler
comment by timtyler · 2010-09-05T03:14:53.571Z · LW(p) · GW(p)

How is this kind of speculation any different from theology?

It is techno-theology.

Simulism, Optimisationverse and the adapted universe differ from most theology in that they are not obviously totally nuts and the product of wishful thinking.

comment by timtyler · 2010-09-05T03:17:07.593Z · LW(p) · GW(p)

And how do the gatekeepers morally justify the astronomical number of simulated lives that become ruthlessly terminated [...]

We run genetic algorithms where we too squish creatures without giving the matter much thought. Perhaps like that - at least in the Optimisationverse scenario.

Replies from: Baughn
comment by Baughn · 2010-09-09T16:45:16.752Z · LW(p) · GW(p)

If my simulations had even the complexity of a bacterium, I'd give it a whole lot more thought.

Doesn't mean these simulators would, but I don't think your logic works.

Replies from: timtyler
comment by timtyler · 2010-09-09T19:55:25.060Z · LW(p) · GW(p)

Generalising from what you would do to what all possible intelligent simulator constructors might do seems as though it would be a rather dubious step. There are plenty of ways they might justify this.

Replies from: Baughn
comment by Baughn · 2010-09-10T10:32:57.873Z · LW(p) · GW(p)

Right. For some reason I thought you were using universal quantification, which of course you aren't. Never mind; the "perhaps" fixes it.

comment by wedrifid · 2010-09-05T04:32:06.852Z · LW(p) · GW(p)

A question for folks who think it possible that we may live in a sim.

I'd say possible, but it isn't something I take particularly seriously. I've got very little reason to be selecting this kind of hypothesis out of nowhere. But if I were allowing for simulations, I wouldn't draw the line of 'possible intelligence of simulators' at human level. Future humans, for example, may well create simulations that are smarter than they are.

But I'll answer your questions off the top of my head for curiosity's sake.

Are our gatekeepers simulating all Everett branches of our simulated reality, or just one of them?

Don't know. They would appear to have rather a lot of computational resources handy. Depending on their motivations they may well optimise their simulations by approximating the bits they find boring.

If just one, I'm wondering how that one was selected from the astronomical number of possibilities.

I don't know - speculating on the motives of arbitrary gods would be crazy. It does seem unlikely that they limit themselves to one branch. Unless they are making a joke at the expense of any MW advocates that happen to evolve. Sick bastards.

And how do the gatekeepers morally justify the astronomical number of simulated lives that become ruthlessly terminated each time they arbitrarily choose to simulate one Everett branch over another?

Moral? WTF? Why would we assume morals?

If they are simulating all of the potential branches, wouldn't they expect that agents on at least some of the Everett branches will catch on and try to get out of the box? Wouldn't it seem suspicious if everyone were trying to look innocent? ;)

Hmm... Good point. We may have to pretend to be trying to escape in incompetent ways but really... :P

I'm sorry, I find it difficult to take this whole line of thought seriously. How is this kind of speculation any different from theology?

It isn't (except that it is less specific, I suppose). I don't take the line of thought especially seriously either.

comment by jacob_cannell · 2010-09-05T03:07:22.815Z · LW(p) · GW(p)

From my (admittedly somewhat limited) understanding of QM, with classical computers we will only be able to simulate a single-worldline at once. However, I don't think this is an issue, because it's not as if the world didn't work until people discovered QM and MWI. QM effects only really matter at tiny scales revealed in experiments, which are an infinitesimal fraction of observer moments. So most of the time you wouldn't need to simulate down to the QM level.

That being said, a big, big quantum computer would allow you to simulate many worlds at once, I imagine? But that seems really far into the future.

I'm sorry, I find it difficult to take this whole line of thought seriously. How is this kind of speculation any different from theology?

Err, the irrationality of theology shows exactly how and why this sim-universe idea could work - you design a universe such that the actual correct theory underlying reality is over-complex and irrational.

It's more interesting and productive to think about constructing these kinds of realities than pondering whether you live in one.

Replies from: LucasSloan
comment by LucasSloan · 2010-09-05T03:16:30.602Z · LW(p) · GW(p)

From my (admittedly somewhat limited) understanding of QM, with classical computers we will only be able to simulate a single-worldline at once.

Not true. Our physics are simple mathematical rules which are Turing computable. The problem with simulating many Everett branches is that we will quickly run out of memory in which to store their details.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T03:59:32.784Z · LW(p) · GW(p)

I should have been clearer: we will be able to simulate more than a single worldline classically, but at high cost. An exponentially expanding set of Everett branches would of course be intractable using classical computers.
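
A toy illustration of why this blows up (the branching factor and per-branch state size are arbitrary assumptions):

    # Toy illustration: if the simulated state branches k ways per step and each
    # branch needs S bytes to store, memory grows as S * k**steps.
    BRANCHES_PER_STEP = 2      # assumed branching factor per simulated step
    STATE_BYTES = 1e9          # assumed bytes to store one branch's state
    for steps in (10, 50, 100, 300):
        total = STATE_BYTES * BRANCHES_PER_STEP ** steps
        print(f"{steps:4d} steps -> ~{total:.1e} bytes of branch state")

Within a few hundred branching events you are past any classically storable amount of state, which is the intractability I had in mind.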

Replies from: LucasSloan
comment by LucasSloan · 2010-09-05T04:23:18.152Z · LW(p) · GW(p)

Ah, I see what your problem is. You're cheering for "quantum computers" because they sound cool and science-fiction-y. While quantum computing theoretically provides ways to very rapidly solve certain sorts of problems, it doesn't just magically solve all problems. Even if the algorithms that run our universe are well suited to quantum computing, they still run into the speed and memory issues that classical computers do; they would just run into them a little later (although even that's not guaranteed - the speed of a quantum computer depends on the number of entangled qubits, and for the foreseeable future it will be easier to get more computing power by adding to the size of our classical computing clusters than by ganging more small sets of entangled qubits together). The accurate statement you should be making is that modeling many worlds with a significant number of branches or scope is intractable using any foreseeable computing technology.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-09-05T04:54:20.439Z · LW(p) · GW(p)

Quantum computers efficiently simulate QM. That was Feynman's reason for proposing them in the first place.

comment by timtyler · 2010-09-05T03:18:41.927Z · LW(p) · GW(p)

If they are simulating all of the potential branches, wouldn't they expect that agents on at least some of the Everett branches will catch on and try to get out of the box?

You suggest that you haven't seen anyone who is trying to get out of the box yet...?

Replies from: Perplexed
comment by Perplexed · 2010-09-05T03:38:12.950Z · LW(p) · GW(p)

I grew up being taught that I would escape from the box by dying in a state of grace. Now I seem to be in a community that teaches me to escape from the box by dying at a sufficiently low temperature.

Edit: "dying", not "dieing". We are not being Gram stained here!

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T03:57:41.507Z · LW(p) · GW(p)

That made me laugh.

But personally I hope we just figure out all this Singularity box stuff pretty soon.

comment by Perplexed · 2010-09-04T19:38:14.701Z · LW(p) · GW(p)

Personally, I suspect you have been reading the Old Testament too much.

ETA: Genesis 11

1 Now the whole world had one language and a common speech. 2 As men moved eastward, they found a plain in Shinar and settled there. 3 They said to each other, "Come, let's make bricks and bake them thoroughly." They used brick instead of stone, and tar for mortar. 4 Then they said, "Come, let us build ourselves a city, with a tower that reaches to the heavens, so that we may make a name for ourselves and not be scattered over the face of the whole earth." 5 But the LORD came down to see the city and the tower that the men were building. 6 The LORD said, "If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. 7 Come, let us go down and confuse their language so they will not understand each other." 8 So the LORD scattered them from there over all the earth, and they stopped building the city. 9 That is why it was called Babel because there the LORD confused the language of the whole world. From there the LORD scattered them over the face of the whole earth.

Replies from: Pavitra
comment by Pavitra · 2010-09-04T20:20:49.161Z · LW(p) · GW(p)

Haha... wow.

Point taken.

comment by Snowyowl · 2010-09-03T22:09:58.413Z · LW(p) · GW(p)

Anthropomorphic AI: A reasonably efficient strategy for AI is to use a design loosely inspired by the human brain.

This is a rather anthropocentric view. The human brain is a product of natural selection and is far from perfect. Our most fundamental instincts and thought processes are optimised to allow our reptilian ancestors to escape predators while finding food and mates. An AI that was sentient/rational from the moment of its creation would have no need for these mechanisms.

It's not even the most efficient use of available hardware. Our neurons are believed to calculate using continuous values (Edit: they react to concentrations of certain chemicals and these concentrations vary continuously), but our computers are assemblies of discrete on/off switches. A properly structured AI could make much better use of this fact, not to mention be better at mental arithmetic than us.

The human mind is a small island in a massive mindspace, and the only special thing about it is that it's the first sentient mind we have encountered. I don't see reason to think that the second would be similar to it.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-03T22:50:05.159Z · LW(p) · GW(p)

Anthropomorphic AI: A reasonably efficient strategy for AI is to use a design loosely inspired by the human brain.

This is a rather anthropocentric view.

Yes, but intentionally so. ;)

We are getting into a realm where it's important to understand background assumptions, which is why I listed some of mine. But notice I did qualify with 'reasonably efficient' and 'loosely inspired'.

The human brain is a product of natural selection and is far from perfect.

'Perfect' is a pretty vague qualifier. If we want to talk in quantitative terms about efficiency and performance, we need to look at the brain in terms of circuit complexity theory and evolutionary optimization.

Evolution as a search algorithm is known (from what I remember from studying CS theory a while back) to be optimal in some senses: given enough time and some diversity considerations, it can find global maxima in very complex search spaces.

For example, if you want to design a circuit for a particular task and you have a bunch of CPU time available, you can run a massive evolutionary search using a GA (genetic algorithm) or variant thereof. The circuits you will eventually get are the best known solutions, and in many cases incorporate bizarre elements that are even difficult for humans to understand.
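
For concreteness, here is a minimal GA sketch of the kind of search I mean (the bit-string genome and match-the-target fitness function are stand-ins for a real circuit evaluator, and the parameters are arbitrary):

    import random

    # Minimal GA sketch: evolve a bit-string "circuit" toward a target behaviour.
    # The target and the fitness function are stand-ins for a real evaluator.
    TARGET = [random.randint(0, 1) for _ in range(64)]
    POP_SIZE, GENERATIONS, MUTATION_RATE = 100, 200, 0.02

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        parents = population[: POP_SIZE // 5]   # truncation selection
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]

    print("best fitness:", fitness(max(population, key=fitness)), "/", len(TARGET))

Real evolved-hardware experiments replace that toy fitness function with an actual circuit simulator or a physical chip, which is where the bizarre, hard-to-understand solutions come from.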

Now, that same algorithm is what has produced everything from insect ganglions to human brains.

Look at the wiring diagram for a cockroach or a bumblebee compared to what it actually does, and if you compare that circuit to equivalent-complexity computer circuits for robots we can build, it is very hard to say that the organic circuit design could be improved on. An insect ganglion's circuit organization is, in some sense, perfect (keep in mind organic circuits run at less than 1 kHz). Evolution has had a long, long time to optimize these circuits.

Can we improve on the brain? Eventually we can obviously beat the brain by making bigger and faster circuits, but that would be cheating to some degree, right?

A more important question is: can we beat the cortex's generic learning algorithm?

The answer today is: no. Not yet. But the evidence trend looks like we are narrowing down on a space of algorithms that are similar to the cortex (deep belief networks, hierarchical temporal memory, etc.).

Many of the key problems in science and engineering can be thought of as search problems. Designing a new circuit is a search in the vast space of possible arrangements of molecules on a surface.

So we can look at how the brain compares to our best algorithms in smaller, constrained search worlds. For smaller spaces (such as checkers), we have much simpler serial algorithms that can win by a landslide. For more complex search spaces, like chess, the favor shifts somewhat, but even desktop PCs can now beat grandmasters. Now go up one more complexity jump to a game like Go and we are still probably years away from an algorithm that can play at top human level.

Most interesting real world problems are many steps up the complexity ladder past Go.

Also remember this very important principle: the brain runs at only a few hundred hertz. So computers are cheating - they are over a million times faster.

So for a fair comparison of the brain's algorithms, you would need to compare the brain to a large computer cluster that runs at only 500 Hz or so. Parallel algorithms do not scale nearly as well, so this is a huge handicap - and yet the brain still wins by a landslide in any highly complex search space.
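
A back-of-envelope comparison of what that handicap looks like (every figure below is a commonly cited order-of-magnitude assumption, not a measurement):

    # Back-of-envelope comparison; every figure is an order-of-magnitude assumption.
    NEURONS = 1e11                 # ~100 billion neurons
    SYNAPSES_PER_NEURON = 1e4      # ~10,000 synapses each
    BRAIN_RATE_HZ = 100            # effective update rate, a few hundred Hz at most

    CPU_CORES = 8
    CPU_CLOCK_HZ = 3e9             # ~3 GHz desktop CPU
    OPS_PER_CYCLE = 4              # rough instructions per cycle per core

    brain_events = NEURONS * SYNAPSES_PER_NEURON * BRAIN_RATE_HZ   # ~1e17 synaptic events/s
    cpu_ops = CPU_CORES * CPU_CLOCK_HZ * OPS_PER_CYCLE             # ~1e11 instructions/s

    print(f"brain:        ~{brain_events:.0e} synaptic events per second")
    print(f"desktop CPU:  ~{cpu_ops:.0e} instructions per second")
    print(f"per-step speed gap:  ~{CPU_CLOCK_HZ / BRAIN_RATE_HZ:.0e}x in favour of silicon")
    print(f"parallelism gap:     ~{brain_events / cpu_ops:.0e}x in favour of the brain")

The point of the comparison is only the shape of the trade-off: silicon wins enormously on serial speed, the brain wins enormously on parallelism, so a fair algorithm-versus-algorithm comparison has to control for one or the other.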

Our neurons are believed to calculate using continuous values, but our computers are assemblies of discrete on/off switches. A properly structured AI could make much better use of this fact, not to mention be better at mental arithmetic than us.

Neurons mainly do calculate in analog space, but that is because this is vastly more efficient for probabilistic approximate calculation, which is what the brain is built on. A digital multiplier is many orders of magnitude less circuit space efficient than an analog multiplier - it pays a huge cost for its precision.

The brain is a highly optimized specialized circuit implementation of a very general universal intelligence algorithm. Also, the brain is Turing complete - keep that in mind.

The human mind is a small island in a massive mindspace, and the only special thing about it is that it's the first sentient mind we have encountered.

mind != brain

The brain is the hardware and the algorithms; the mind is the actual learned structure, the data, the beliefs, ideas, personality - everything important. Very different concepts.

Replies from: timtyler, timtyler, rabidchicken
comment by timtyler · 2010-09-04T23:43:26.057Z · LW(p) · GW(p)

Evolution as a search algorithm is known (from what I remember from studying CS theory a while back) to be optimal in some senses: given enough time and some diversity considerations, it can find global maxima in very complex search spaces.

Evolution by random mutations pretty-much sucks as a search strategy:

"One of the reasons genetic algorithms get used at all is because we do not yet have machine intelligence. Once we have access to superintelligent machines, search techniques will use intelligence ubiquitously. Modifications will be made intelligently, tests will be performed intelligently, and the results will be used intelligently to design the next generation of trials.

There will be a few domains where the computational cost of using intelligence outweighs the costs of performing additional trials - but this will only happen in a tiny fraction of cases.

Even without machine intelligence, random mutations are rarely an effective strategy in practice. In the future, I expect that their utility will plummet - and intelligent design will become ubiquitous as a search technique."

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T01:54:09.224Z · LW(p) · GW(p)

I listened to your talk until I realized I could just read the essay :)

I partly agree with you. You say:

Evolution by random mutations pretty-much sucks as a search strategy:

Sucks is not quite descriptive enough. Random mutation is slow, but that is not really relevant to my point - as I said, given enough time it is very robust. And sexual recombination speeds that up dramatically, and then intelligence speeds up evolutionary search dramatically.

Yes, intelligent search is a large - huge - potential speedup on top of genetic evolution alone.

But we need to understand this in the wider context ... you yourself say:

One of the reasons genetic algorithms get used at all is because we do not yet have machine intelligence.

Ahh but we already have human intelligence.

Intelligence still uses an evolutionary search strategy, it is just internalized and approximate. Your brain considers a large number of potential routes in a highly compressed statistical approximation of reality, and the most promising eventually get written up or coded up and become real designs in the real world.

But this entire process is still all evolutionary.

And regardless, the approximate simulation that intelligence such as our brain uses does have limitations - mainly precision. Some things are just way too complex to simulate accurately in our brain, so we have to try them in detailed computer simulations.

Likewise, if you are designing a simple circuit space, then a simpler GA search running on a fast computer can almost certainly find the optimal solution way faster than a general intelligence - similar to an optimized chess algorithm.

A general intelligence is a huge speed-up for evolution, but it is just one piece in a larger system. You also need deep computer simulation, and you still have evolution operating at the world level.

Replies from: timtyler
comment by timtyler · 2010-09-05T02:01:52.118Z · LW(p) · GW(p)

Intelligence still uses an evolutionary search strategy, it is just internalized and approximate. Your brain considers a large number of potential routes in a highly compressed statistical approximation of reality, and the most promising eventually get written up or coded up and become real designs in the real world. But this entire process is still all evolutionary.

In the sense that it consists of copying with variation and differential reproductive success, yes.

However, evolution using intelligence isn't the same as evolution by random mutations - and you originally went on to draw conclusions about the optimality of organic evolution - which was mostly the "random mutations" kind.

comment by timtyler · 2010-09-05T00:09:12.503Z · LW(p) · GW(p)

A more important question is: can we beat the cortex's generic learning algorithm? The answer today is: no. Not yet. But the evidence trend looks like we are narrowing down on a space of algorithms that are similar to the cortex (deep belief networks, hierarchical temporal memory, etc.).

Google learns about the internet by making a compressed bitwise identical digital copy of it. Machine intelligences will be able to learn that way too - and it is really not much like what goes on in brains. The way the brain makes reliable long-term memories is just a total mess.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T01:57:51.432Z · LW(p) · GW(p)

Google learns about the internet by making a compressed bitwise identical digital copy of it.

I wouldn't consider that learning.

Learning is building up a complex hierarchical web of statistical dimension reducing associations that allow massively efficient approximate simulation.

Replies from: timtyler
comment by timtyler · 2010-09-05T02:10:08.130Z · LW(p) · GW(p)

The term is more conventionally used as follows:

  1. knowledge acquired by systematic study in any field of scholarly application.
  2. the act or process of acquiring knowledge or skill.
  3. Psychology . the modification of behavior through practice, training, or experience.

comment by rabidchicken · 2010-09-04T05:24:22.880Z · LW(p) · GW(p)

Yes, human minds think more efficiently than computers currently. But this does not support the idea that we cannot create something even more efficient. You have only compared us to some of our first attempts to create new beings, within an infinite series of possibilities. I am open to the possibility that human brains are the most efficient design we will see in the near future, but you seem almost certain of it. Why do you believe what you believe?

And for that matter... Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-04T22:47:05.114Z · LW(p) · GW(p)

I had a longer reply, but unfortunately my computer was suddenly attacked by some weird virus (yes, really), and I had to reboot.

Your line of thought investigates some of my assumptions that would require lengthier expositions to support, but I'll just summarize here (and may link to something else relevant when I dig it up).

You have only compared us to some of our first attempts to create new beings, within an infinite series of possibilities.

The set of programs for a particular problem is infinite, but this is irrelevant. There are an infinite number of programs for sorting a list of numbers. All of them suck for various reasons, and we are left with just a couple of provably best algorithms (serial and parallel).

There appears to be a single program underlying our universe - physics. We have reasonable approximations to it at different levels of scale. Our simulation techniques are moving towards a set of best approximations to our physics.

Intelligence itself is a form of simulation of this same physics. Our brain appears to use (in the cortex) a universal data-driven approximation of this universal physics.

So the space of intelligent algorithms is infinite, but there is just a small set of universal intelligent algorithms derived from our physics which are important.

And for that matter... Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.

Not really.

Imagine if you took a current CPU back in time 10 years ago. Engineers then wouldn't be able to build it immediately, but it would accelerate their progress significantly.

The brain in some sense is like an AGI computer from the future. We can't build it yet, but we can use it to accelerate our technological evolution towards AGI.

Also .. brain != mind

Replies from: timtyler
comment by timtyler · 2010-09-04T23:38:02.662Z · LW(p) · GW(p)

Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.

Not really.

Yet aeroplanes are not much like birds, hydraulics are not much like muscles, loudspeakers are not much like the human throat, microphones are not much like the human ear - and so on.

Convergent evolution wins sometimes - for example, eyes - but we can see that this probably won't happen with the brain - since its "design" is so obviously xxxxxd up.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T01:38:40.837Z · LW(p) · GW(p)

Yet aeroplanes are not much like birds,

Airplanes exploit one single simple principle (from a vast set of principles) that birds use - aerodynamic lift.

If you want a comparison like that - then we already have it. Computers exploit one single simple principle from the brain - abstract computation (as humans were the original computers and are Turing complete) - and magnify it greatly.

But there is much more to intelligence than just that one simple principle.

So building an AGI is much closer to building an entire robotic bird.

And that really is the right level of analogy. Look at the complexity of building a complete android - really analyze just the robotic side of things, and there is no one simple magic principle you can exploit to make some simple dumb system which amplifies it to the Nth degree. And building a human or animal level robotic body is immensely complex.

There is not one simple principle - but millions.

And the brain is the most complex part of building a robot.

Replies from: timtyler
comment by timtyler · 2010-09-05T01:45:09.985Z · LW(p) · GW(p)

But there is much more to intelligence than just that one simple principle.

Reference? For counter-reference, see:

http://www.hutter1.net/ai/uaibook.htm#oneline

That looks a lot like the intellectual equivalent of "lift" to me.

An implementation may not be that simple - but then aeroplanes are not simple either.

The point was not that engineered artefacts are simple, but that they are only rarely the result of reverse engineering biological entities.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-05T22:14:06.521Z · LW(p) · GW(p)

I'll take your point, and I should have said "there is much more to practical intelligence" than just one simple principle - because yes, at the limits, I agree that universal intelligence does have a compact description.

AIXI is related to finding a universal TOE - a simple theory of physics, but that doesn't mean it is actually computationally tractable. Creating a practical, efficient simulation involves a large series of principles.

comment by PhilGoetz · 2010-09-07T02:03:39.843Z · LW(p) · GW(p)

I wonder how much it would cripple AIs to have justified true belief in God? More precisely, would it slow their development by a constant factor; compose it with eg log(x); or halt it at some final level?

The existence of a God provides an easy answer to all difficult questions. The more difficult a question is, the more likely such a rational agent is to dismiss the problem by saying, "God made it that way". Their science would thus be more likely than ours to be asymptotic (approaching a limit); this could impose a natural brake on their progress. It would of course also greatly reduce their efficiency for many purposes.
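
A tiny sketch of the three possibilities being distinguished (the baseline curve and all parameters are arbitrary illustrations, not a model of anything):

    import math

    # Illustrative progress curves; the baseline and parameters are arbitrary.
    def baseline(t):              # unhindered progress
        return t ** 2

    def constant_slowdown(t):     # same shape, just a constant factor slower
        return baseline(t) / 10

    def log_compressed(t):        # progress composed with log: unbounded, but crawling
        return math.log(1 + baseline(t))

    def asymptotic(t, limit=50):  # progress that approaches a final ceiling
        return limit * (1 - math.exp(-baseline(t) / limit))

    for t in (1, 10, 100, 1000):
        print(t, baseline(t), constant_slowdown(t),
              round(log_compressed(t), 2), round(asymptotic(t), 2))

The three cases only come apart in the long run: a constant factor just delays everything, the log case keeps growing but ever more slowly, and the asymptotic case is the only one that actually caps what the AIs can ever discover.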

BTW, if you are proposing boxing AIs as your security, please at least plan on developing some plausible way of measuring the complexity level of the AIs and detecting indications that they suspect what is going on, and automatically freezing the "simulation" (it's not really a simulation of AIs, it is AIs) when certain conditions are met. Boxing has lots of problems dealt with by older posts; but aside from all that, if you are bent on boxing, at least don't rely completely on human observation of what they are doing.

People so frequently arrive at boxing as the solution for protecting themselves from AI, that perhaps the LW community should think about better and worse ways of boxing, rather than simply dismissing it out-of-hand. Because it seems likely that somebody is going to try it.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-08T19:50:51.793Z · LW(p) · GW(p)

I wonder how much it would cripple AIs to have justified true belief in God? More precisely, would it slow their development by a constant factor; compose it with eg log(x); or halt it at some final level?

This is unclear, and I think it is premature to assume it slows development. True atheism wasn't a widely held view until the end of the 19th century, and is mainly a 20th-century phenomenon. Even its precursor - deism - didn't become popular amongst intellectuals until the 19th century.

If you look at individual famous scientists, the pattern is even less clear. Science and the church did not immediately split, and most early scientists were clergy, including notables popular with LW such as Bayes and Ockham. We may wonder if they were 'internal atheists', but this is only speculation (however, it is in at least some cases true, as the first modern atheist work was of course written by a priest). Newton, for one, spent a huge amount of time studying the Bible, and his apocalyptic beliefs are now well popularized. I wonder how close his date of 2060 will end up being to the Singularity.

But anyway, there doesn't seem to be a clear association between holding theistic beliefs and capacity for science - at least historically. You'd have to dig deep to show an effect, and it is likely to be quite small.

I think more immediate predictors of scientific success are traits such as curiosity and obsessive tendencies - having a God belief doesn't prevent curiosity about how God's 'stuff' works.

comment by Houshalter · 2010-09-03T19:35:54.158Z · LW(p) · GW(p)

But what is the plan for turning the simulated AI's into FAI, or at least having them create FAI on their own that we can use?

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-03T20:14:12.607Z · LW(p) · GW(p)

The idea is that this could be used to bootstrap that process. It is a route towards developing FAI: finding, developing, and selecting minds towards the FAI end of the spectrum.