Comments

Comment by AndreInfante on Vegetarianism Ideological Turing Test Results · 2015-10-16T07:53:43.633Z · LW · GW

According to the PM I got, I had the most credible vegetarian entry, and it was ranked as much more credible than my actual (meat-eating) beliefs. I'm not sure how I feel about that.

Comment by AndreInfante on MIT Technology Review - Michael Hendricks opinion on Cryonics · 2015-09-19T08:00:14.249Z · LW · GW

I feel like the dog brain studies are at least fairly strong evidence that quite a bit of information is preserved. The absence of independent validation is largely down to the poor mainstream perception of cryonics. It's not that Alcor is campaigning to cover up contrary studies - it's that nobody cares enough to do them. Vis-a-vis the use of dogs, there actually aren't that many animals with brain volume comparable to humans. I mean, if you want to find an IRB that'll let you decorticate a giraffe, be my guest. Dogs are a decent analog, under the circumstances. They're not so much smaller that you'd expect drastically different results.

In any case, if this guy wants to claim that cryonics doesn't preserve fine-grained brain detail, he can do the experiment and prove it. You can't refute a study's claim just by pointing at it and shouting 'the authors might be biased.' You need to point to either serious methodological flaws or an actual failure to replicate.

Comment by AndreInfante on MIT Technology Review - Michael Hendricks opinion on Cryonics · 2015-09-16T19:55:45.402Z · LW · GW

Sorry, I probably should have been more specific. What I should really have said is 'how important the unique fine-grained structure of white matter is.'

If the structure is relatively generic between brains, and doesn't encode identity-crucial info in its microstructure, we may be able to fill it in using data from other brains in the future.

Comment by AndreInfante on MIT Technology Review - Michael Hendricks opinion on Cryonics · 2015-09-16T09:37:37.520Z · LW · GW

Just an enthusiastic amateur who's done a lot of reading. If you're interested in hearing a more informed version of the pro-cryonics argument (and seeing some of the data) I recommend the following links:

On ischemic damage and the no-reflow phenomenon: http://www.benbest.com/cryonics/ischemia.html

Alcor's research on how much data is preserved by their methods:
http://www.alcor.org/Library/html/braincryopreservation1.html
http://www.alcor.org/Library/html/newtechnology.html
http://www.alcor.org/Library/html/CryopreservationAndFracturing.html

Yudkowsky's counter-argument to the philosophical issue of copies vs. "really you": http://lesswrong.com/lw/r9/quantum_mechanics_and_personal_identity/

Comment by AndreInfante on MIT Technology Review - Michael Hendricks opinion on Cryonics · 2015-09-16T08:43:17.140Z · LW · GW

If we could “upload” or roughly simulate any brain, it should be that of C. elegans. Yet even with the full connectome in hand, a static model of this network of connections lacks most of the information necessary to simulate the mind of the worm. In short, brain activity cannot be inferred from synaptic neuroanatomy.

Straw man. Connectomics is relevant when explaining the concept of uploading to the layman. Few cryonics proponents actually believe it's all you need to know to reconstruct the brain.

The features of your neurons (and other cells) and synapses that make you “you” are not generic. The vast array of subtle chemical modifications, states of gene regulation, and subcellular distributions of molecular complexes are all part of the dynamic flux of a living brain. These things are not details that average out in a large nervous system; rather, they are the very things that engrams (the physical constituents of memories) are made of.

The fact that someone can be dead for several hours and then be resuscitated, or have their brain substantially heated or cooled without dying, puts a theoretical limit on how sensitive your long-term brain state can possibly be to these sorts of transient details of brain structure. It seems very likely that long-term identity-related brain state is stored almost entirely in relatively stable neurological structures. I don't think this is particularly controversial, neurobiologically.

While it might be theoretically possible to preserve these features in dead tissue, that certainly is not happening now. The technology to do so, let alone the ability to read this information back out of such a specimen, does not yet exist even in principle. It is this purposeful conflation of what is theoretically conceivable with what is ever practically possible that exploits people’s vulnerability.

This is not, to the best of my knowledge, true, and he offers no evidence for this claim. Cryonics does a very good job of preserving a lot of features of brain tissue. There is some damage done by the cryoprotectants and thermal shearing, but it's specific and well-characterized damage, not total structural disruption. Although I will say that ice crystal formation in the deep brain caused by the no-reflow problem is a serious concern. Whether that's a showstopper depends on how important you think the fine-grained structure of white matter is.

But what is this replica? Is it subjectively “you” or is it a new, separate being? The idea that you can be conscious in two places at the same time defies our intuition. Parsimony suggests that replication will result in two different conscious entities. Simulation, if it were to occur, would result in a new person who is like you but whose conscious experience you don’t have access to.

Bad philosophy on top of bad neuroscience!

Comment by AndreInfante on The virtual AI within its virtual world · 2015-08-26T04:41:31.692Z · LW · GW

That's... an odd way of thinking about morality.

I value other human beings because I value the processes that go on inside my own head, and can recognize the same processes at work in others, thanks to my in-built empathy and theory of mind. As such, I prefer that good things happen to them rather than bad. There isn't any universal 'shouldness' to it; it's just the way that I'd rather things be. And, since most other humans have similar values, we can work together, arm in arm. Our values converge rather than diverge. That's morality.

I extend that value to those of different races and cultures, because I can see that they embody the same conscious processes that I value. I do not extend that same value to brain dead people, fetuses, or chickens, because I don't see that value present within them. The same goes for a machine that has a very alien cognitive architecture and doesn't implement the cognitive algorithms that I value.

Comment by AndreInfante on The virtual AI within its virtual world · 2015-08-25T21:11:46.190Z · LW · GW

But that might be quite a lot of detail!

In the example of curing cancer, your computational model of the universe would need to include a complete model of every molecule of every cell in the human body, and how it interacts under every possible set of conditions. The simpler you make the model, the more you risk cutting off all of the good solutions with your assumptions (or accidentally creating false solutions due to your shortcuts). And that's just for medical questions.

I don't think it's going to be possible for an unaided human to construct a model like that for a very long time, and possibly not ever.

Comment by AndreInfante on The virtual AI within its virtual world · 2015-08-25T21:06:20.959Z · LW · GW

The traditional argument is that there's a vast space of possible optimization processes, and the vast majority of them don't have humanlike consciousness or ego or emotions. Thus, we wouldn't assign them human moral standing. AIXI isn't a person and never will be.

A slightly stronger argument is that there's no way in hell we're going to build an AI that has emotions or ego or the ability to be offended by serving others wholeheartedly, because that would be super dangerous, and defeat the purpose of the whole project.

Comment by AndreInfante on The virtual AI within its virtual world · 2015-08-24T20:32:27.194Z · LW · GW

Your lawnmower isn't your slave. "Slave" prejudicially loads the concept with anthropocentric morality that does not actually exist.

Comment by AndreInfante on The virtual AI within its virtual world · 2015-08-24T19:58:36.787Z · LW · GW

I think there's a question of how we create an adequate model of the world for this idea to work. It's probably not practical to build one by hand, so we'd likely need to hand the task over to an AI.

Might it be possible to use the modelling module of an AI in the absence of the planning module (or with a weak planning module)? If so, you might be able to feed it a great deal of data about the universe, and construct a model that could then be "frozen" and used as the basis for the AI's "virtual universe."

Comment by AndreInfante on Bragging thread August 2015 · 2015-08-13T06:02:16.523Z · LW · GW

Have you considered coating your fingers with capsaicin to make scratching your mucous membranes immediately painful?

(Apologies if this advice is unwanted - I have not experienced anything similar, and am just spitballing).

Comment by AndreInfante on Bragging thread August 2015 · 2015-08-13T05:58:51.798Z · LW · GW

I made serious progress on a system for generating avatar animations based on the motion of a VR headset. It still needs refinement, but I'm extremely proud of what I've got so far.

https://www.youtube.com/watch?v=klAsxamqkkU

Comment by AndreInfante on Vegetarianism Ideological Turing Test! · 2015-08-09T20:43:18.791Z · LW · GW

For Omnivores:

  • Do you think the level of meat consumption in America is healthy for individuals? Do you think it's healthy for the planet?

Meat is obviously healthy for individuals. We evolved to eat as much of it as we could get. Many nutrients seem to be very difficult to obtain in sufficient, bioavailable form from an all-vegetable diet. I suspect most observant vegans are substantially malnourished.

On the planet side of things, meat is an environmental disaster. The methane emissions are horrifying, as is the destruction of rainforest. Hopefully, lab-grown meat allows us to switch to an eco-friendly alternative.

  • How do you feel about factory farming? Would you pay twice as much money for meat raised in a less efficient (but "more natural") way?

Factory farming is necessary to continue to feed the world. I don't care about "natural", but I'd pay extra for food from animals that had been genetically engineered to be happy and extremely stupid/near-comatose, to reduce total suffering-per-calorie. This would be more effective and less costly than switching to free-range.

  • Are there any animals you would (without significantly changing your mind) never say it was okay to hunt/farm and eat? If so, what distinguishes these animals from the animals which are currently being hunted/farmed?

Great apes, cetaceans, and a few birds. The range of animal intelligence is extremely broad. I find it extremely unlikely that chickens have anything recognizable as a human-like perception of the world. I think the odds are better than not that dolphins, chimps, and parrots do.

If you're interested, the animal I'm most on the fence about is pigs.

  • If all your friends were vegetarians, and you had to go out of your way to find meat in a similar way to how vegans must go out of their way right now, do you think you'd still be an omnivore?

Yes. I cook most of my own meals, and my meat consumption would continue even in the absence of social eating.

For Vegetarians:

  • If there was a way to grow meat in a lab that was indistinguishable from normal meat, and the lab-meat had never been connected to a brain, do you expect you would eat it? Why/why not?

I obviously have no moral problem with that. That would be fantastic. However, I probably wouldn't eat the lab meat. I find the texture / mouth-feel of most meat pretty gross, and lab-grown meat would be significantly more expensive than my current diet. Since microbiome acclimation means that resuming eating meat could make me very sick for a while, I'm not sure I see the profit in it.

I am very interested in synthetic milk, cheese, and eggs, however.

  • Indigenous hunter gatherers across the world get around 30 percent of their annual calories from meat. Chimpanzees, our closest non-human relatives, eat meat. There are arguments that humans evolved to eat meat and that it's natural to do so. Would you disagree? Elaborate.

Obviously, humans evolved to be omnivorous. However, the paleo people are lunatics if they think we ate as much meat as they do (much less the hyper-fatty livestock we've bred over the last couple of millennia). Meat was most likely a rare supplement to the largely-vegetarian diets of ancestral peoples.

Regardless, none of this is the point. Today, it's perfectly possible to eat a vegan diet and be healthy (see: Soylent). You can't avoid the obvious moral horror of eating the flesh of semi-sentient animals like pigs by shouting the word 'natural' and running away.

  • Do you think it's any of your business what other people eat? Have you ever tried (more than just suggesting it or leading by example) to get someone to become a vegetarian or vegan?

Only if they bring it up first. I do think we have a moral obligation to try to reduce animal suffering, but harassing my friends isn't actually helping the cause in any way, and might be hurting. I do try to corrupt my meat-eating friends who are having second thoughts about it, but, you know, in a friendly way.

  • What do you think is the primary health risk of eating meat (if any)?

Parasites, probably. Meat in moderation clearly isn't especially bad for you. It's just, you know, wrong.

Comment by AndreInfante on We really need a "cryonics sales pitch" article. · 2015-08-05T23:30:58.702Z · LW · GW

Technically, it's the frogs and fish that routinely freeze through the winter. Of course, they evolved to pull off that stunt, so it's less impressive.

We've cryopreserved a whole rabbit kidney before, and were able to thaw it and transplant it as a rabbit's sole kidney.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781097/

We've also shown that nematode memory can survive cryopreservation:

http://www.dailymail.co.uk/sciencetech/article-3107805/Could-brains-stay-forever-young-Memories-survive-cryogenic-preservation-study-shows.html

The trouble is that larger chunks of tissue (like, say, a whole mouse or a human brain) are more prone to thermal cracking at very low temperatures. Until we solve that problem, nobody's coming back short of brain emulation or nanotechnology.

Comment by AndreInfante on How to win the World Food Prize · 2015-07-31T22:07:44.732Z · LW · GW

The issue is that crashing the mosquito population doesn't work if even a few of them survive to repopulate - the plan needs indefinite maintenance, and the mosquitoes will eventually evolve to avoid our lab-bred dud males.

I wonder if you could breed a version of the mosquito that's healthy but has an aversion to humans, make your genetic change dominant, and then release a bunch of THOSE mosquitoes. There'd be less of a fitness gap between the modified mosquitoes and the original species, so if we just kept dumping modified males every year for a decade or two, we might be able to completely drive the original human-seeking genes out of the ecosystem.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-30T10:19:00.527Z · LW · GW

Not much to add here, except that it's unlikely that Alex is an exceptional example of a parrot. The researcher purchased him from a pet store at random to try to eliminate that objection.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-29T21:53:47.844Z · LW · GW

Interesting! I didn't know that, and that makes a lot of sense.

If I were to restate my objection more strongly, I'd say that parrots also seem to exceed chimps in language capabilities (chimps having six billion cortical neurons). The reason I didn't bring this up originally is that chimp language research is a horrible, horrible field full of a lot of bad science, so it's difficult to be too confident in that result.

Plenty of people will tell you that signing chimps are just as capable as Alex the parrot - they just need a little bit of interpretation from the handler, and get too nervous to perform well when the handler isn't working with them. Personally, I think that sounds a lot like why psychics suddenly stop working when James Randi shows up, but obviously the situation is a little more complicated.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-29T09:20:59.108Z · LW · GW

Yes it is a pure ANN - according to my use of the term ANN (arguing over definitions is a waste of time). ANNs are fully general circuit models, which obviously can re-implement any module from any computer - memory, database, whatever. The defining characteristics of an ANN are - simulated network circuit structure based on analog/real valued nodes, and some universal learning algorithm over the weights - such as SGD.

I think you misunderstood me. The current DeepMind AI that they've shown the public is a pure ANN. However, it has serious limitations because it's not easy to implement long-term memory as a naive ANN. So they're working on a successor called the "neural Turing machine" which marries an ANN to a database retrieval system - a specialized module.
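For what it's worth, the core trick in the neural Turing machine paper is differentiable, content-based addressing over an external memory matrix: the controller network emits a key, and a "read" is a soft blend over memory slots rather than a hard database lookup. Here's a rough numpy sketch of that one idea - the function name, shapes, and numbers are mine for illustration, not DeepMind's code:

```python
import numpy as np

def content_addressing(memory, key, beta=1.0):
    """Soft read over a memory matrix via cosine similarity (NTM-style sketch).

    memory: (N, M) array of N memory slots, each M wide
    key:    (M,) query vector emitted by the controller network
    beta:   sharpness of the attention distribution
    """
    # Cosine similarity between the key and every memory slot
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    # Softmax (with max subtracted for numerical stability) turns
    # similarities into differentiable read weights
    logits = beta * sims
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # The read vector is a weighted blend of memory slots
    return weights @ memory

memory = np.random.randn(128, 20)   # 128 slots, 20 features each
key = np.random.randn(20)
read_vector = content_addressing(memory, key, beta=2.0)
```

Because every step is differentiable, the whole thing can be trained end-to-end with the controller - which is what makes it more than just bolting a database onto a network.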

You don't understand my position. I don't believe DL as it exists today is somehow the grail of AI. And yes I'm familiar with Hinton's 'Capsule' proposals. And yes I agree there is still substantial room for improvement in ANN microarchitecture, and especially for learning invariances - and unsupervised especially.

The thing is, many of those improvements are dependent on the task at hand. It's really, really hard for an off-the-shelf convnet to learn the rules of three-dimensional geometry, so we have to build it into the network. Our own visual processing shows signs of having the same structure embedded in it.

The same structure would not, for example, benefit an NLP system, so we'd give it a different specialized structure, tuned to the hierarchical nature of language. The future, past a certain point, isn't making 'neural networks' better. It's making 'machine vision' networks better, or 'natural language' networks better. To make a long story short, specialized modules are an obvious place to go when you run into a problem too complex to teach a naive convnet to do efficiently - both for human engineers over the next 5-10 years, and for evolution over the last couple of billion.
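To make "different specialized structure" concrete, here's a minimal sketch (in PyTorch, with layer counts and sizes picked arbitrarily by me, not taken from any real system) of the kind of priors I mean: convolution hard-codes an assumption of spatial locality and translation invariance that suits images, while recurrence hard-codes an assumption of ordered, variable-length input that suits language.

```python
import torch
import torch.nn as nn

# Spatial prior: weight sharing across image locations bakes in
# translation invariance that a plain fully-connected net would
# have to learn from scratch.
vision_module = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),
)

# Sequential prior: recurrence bakes in an assumption of ordered,
# variable-length input, which suits language but not images.
language_module = nn.LSTM(input_size=128, hidden_size=256, batch_first=True)

image_logits = vision_module(torch.randn(8, 3, 32, 32))    # (8, 10)
text_out, _ = language_module(torch.randn(8, 20, 128))     # (8, 20, 256)
```

Neither prior helps the other domain much, which is the whole point: the "universal" learning rule is the easy part, and the task-specific structure around it is where the work goes.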

You don't update on forum posts? Really? You seem pretty familiar with MIRI and LW positions. So are you saying that you arrived at those positions all on your own somehow?

I have a CS and machine learning background, and am well-read on the subject outside LW. My math is extremely spotty, and my physics is non-existent. I update on things I read that I understand, or things from people I believe to be reputable. I don't know you well enough to judge whether you usually say things that make sense, and I don't have the physics to understand the argument you made or judge its validity. Therefore, I'm not inclined to update much on your conclusion.

EDIT: Oh, and you still haven't responded to the cat thing. Which, seriously, seems like a pretty big hole in the universal learner hypothesis.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-29T05:08:14.273Z · LW · GW

First off the bat, you absolutely can create an AGI that is a pure ANN. In fact the most successful early precursor AGI we have - the atari deepmind agent - is a pure ANN. Your claim that ANNs/Deep Learning is not the end of all AGI research is quickly becoming a minority position.

The DeepMind agent has no memory, one of the problems that I noted in the first place with naive ANN systems. The DeepMind team's solution to this is the neural Turing machine model, which is a hybrid system between a neural network and a database. It's not a pure ANN. It isn't even neuromorphic.

Improving its performance is going to involve giving it more structure and more specialized components, and not just throwing more neurons and training time at it.

For goodness' sake: Geoffrey Hinton, the father of deep learning, believes that the future of machine vision is explicitly integrating the idea of three-dimensional coordinates and geometry into the structure of the network itself, and moving away from more naive and general-purpose conv-nets.

Source: https://github.com/WalnutiQ/WalnutiQ/issues/157

Your position is not as mainstream as you like to present it.

The real test here would be to take a brain and give it an entirely new sense

Done and done. Next!

If you'd read the full sentence that I wrote, you'd appreciate that remapping existing senses doesn't actually address my disagreement. I want a new sense, to make absolutely sure that the subjects aren't just re-using hard coding from a different system. Snarky, but not a useful contribution to the conversation.

This is nonsense - language processing develops in general purpose cortical modules, there is no specific language circuitry.

This is far from the mainstream linguistic perspective. Go argue with Noam Chomsky; he's smarter than I am. Incidentally, you didn't answer the question about birds and cats. Why can't cats learn to do complex language tasks? Surely they also implement the universal learning algorithm just as parrots do.

What about Watson?

Not an AGI.

AGIs literally don't exist, so that's hardly a useful argument. Watson is the most powerful thing in its (fairly broad) class, and it's not a neural network.

Finally, I don't have the background to refute your argument on the efficiency of the brain (although I know clever people who do who disagree with you).

The correct thing to do here is update. Instead you are searching for ways in which you can ignore the evidence.

No, it really isn't. I don't update based on forum posts on topics I don't understand, because I have no way to distinguish experts from crackpots.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-29T01:44:37.533Z · LW · GW

Yes, I've read your big universal learner post, and I'm not convinced. This does seem to be the crux of our disagreement, so let me take some time to rebut:

First off, you're seriously misrepresenting the success of deep learning as support for your thesis. Deep learning algorithms are extremely powerful, and probably have a role to play in building AGI, but they aren't the end-all, be-all of AI research. For starters, modern deep learning systems are absolutely fine-tuned to the task at hand. You say that they have only "a small number of hyperparameters," which is something of a misrepresentation. There are actually quite a few of these hyperparameters in state-of-the-art networks, and there are more in networks tackling more difficult tasks.

Tuning these hyperparameters is hard enough that only a small number of researchers can do it well enough to achieve state-of-the-art results. We do not use the same network for image recognition and audio processing, because that wouldn't work very well.
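To give a sense of what "quite a few" means in practice, here's a rough, made-up inventory of the knobs you'd typically have to set for a single image-classification network - the names and values are mine for illustration, not from any particular paper:

```python
# Illustrative only - a typical set of knobs for one convnet image classifier.
hyperparameters = {
    # Architecture
    "num_conv_layers": 8,
    "channels_per_layer": [32, 32, 64, 64, 128, 128, 256, 256],
    "kernel_size": 3,
    "activation": "relu",
    "dropout_rate": 0.5,
    # Optimization
    "optimizer": "sgd_with_momentum",
    "learning_rate": 0.1,
    "lr_schedule": "decay by 10x at epochs 30, 60, 90",
    "momentum": 0.9,
    "weight_decay": 5e-4,
    "batch_size": 128,
    "num_epochs": 100,
    # Data / regularization
    "input_resolution": 224,
    "augmentation": ["random_crop", "horizontal_flip"],
}
```

Every one of these interacts with the others, and a bad setting for any of them can wreck training - which is why getting state-of-the-art results is still something of a craft.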

We tune the architecture of deep learning systems to the task at hand. Presumably, if we can garner benefits from doing that, evolution has an incentive to do the same. There's a core, simple algorithm at work, but targeted to specific tasks. Evolution has no incentive to produce a clean design if kludgy tweaks give better results. You argue that evolution has a bias against complexity, but that certainly hasn't stopped other organs from developing complex structure that makes them marginally better at their task.

There's also the point that there are plenty of tasks that deep learning methods can't solve yet (like how to store long-term memories of a complex and partially observed system in an efficient manner) - not to mention higher-level cognitive skills that we have no clue how to approach.

Nobody thinks this stuff is just a question of throwing yet larger deep learning networks at the problem. These problems will be solved by finding different hard-wired network architectures that make them more manageable by building in knowledge about them in advance.


The ferret brain rewiring result is not a slam-dunk for universal learning by itself. It just means that different brain modules can switch which pre-programmed neural algorithms they implement on the fly. Which makes sense, because on some level these things have to be self-organizing in the first place to be compactly genetically coded.

The real test here would be to take a brain and give it an entirely new sense - something that bears no resemblance to any sense it or any of its ancestors has ever had - and see if it can use that sense as naturally as hearing or vision. Personally, I doubt it. Humans can learn echolocation, but they can't learn it the way bats and dolphins can - and echolocation bears a fair degree of resemblance to other tasks that humans already have specialized networks for (like pinpointing the location of a sound in space).

Notably, the universal learner hypothesis does not explain why non-surgically-modified brains are so standardized in structure and functional layout - something that you yourself bring up in your article.

It also does not explain why birds are better at language tasks than cats. Cat brains are much larger. The training rewards in the lab are the same. And, yet, cats significantly underperform parrots at every single language-related task we can come up with. Why? Because the parrots have had a greater evolutionary pressure to be good at language-style tasks - and, as a result, they have evolved task-specific neurological algorithms to make it easier.

Also, plenty of mammals, fresh out of the womb, have complex behaviors and vocalizations. Humans are something of an outlier, due to being born premature by mammal standards. If mammal brains are 99% universal learning, why can baby cows walk within minutes of birth?

Look, obviously, to some degree, both things are true. The brain is capable of general learning to some degree. Otherwise, we'd never have developed math. It also obviously has hard-coded specialized modules, to some degree, which is why (for example) all human cultures develop language and music, which isn't something you'd expect if we were all starting from zero. The question is which aspect dominates brain performance. You're proposing an extreme swing to one end of the possibility space that doesn't seem even remotely plausible - and then you're using that assumption as evidence that no non-brain-like intelligence can exist.

What about Watson? It's the best-performing NLP system ever made, and it's absolutely a "weird mathy program." It uses neural networks as subroutines, but the architecture of the whole bears no resemblance to the human brain. It's not a simple universal learning algorithm. If you gave a single deep neural network access to the same computational resources, it would underperform Watson. That seems like a pretty tough pill to swallow if 'simple universal learner' is all there is to intelligence.


Finally, I don't have the background to refute your argument on the efficiency of the brain (although I know clever people who do, and who disagree with you). But, taking it as a given that you're right, it sounds like you're assuming all future AIs will draw the same amount of power as a real brain and fit in the same spatial footprint. Well... what if they didn't? What if the AI brain is the size of a fridge, cooled with LN2, and consumes as much power as a city block? Surely, at the physical limits of computation you believe in, that would be able to beat the pants off little old us.

To sum up: yes, I've read your thing. No, it's not as convincing as you seem to believe.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-29T00:35:23.575Z · LW · GW

I seriously doubt that. Plenty of humans want to kill everyone (or, at least, large groups of people). Very few succeed. These agents would be a good deal less capable.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-28T05:31:28.781Z · LW · GW

So, to sum up, your plan is to create an arbitrarily safe VM, and use it to run brain-emulation-style de novo AIs patterned on human babies (presumably with additional infrastructure to emulate the hard-coded changes that occur in the brain during development to adulthood: adult humans are not babies + education). You then want to raise many, many iterations of these things under different conditions to try to produce morally superior specimens, then turn those AIs loose and let them self-modify to godhood.

Is that accurate? (Seriously, let me know if I'm misrepresenting your position).


A few problems immediately come to mind. We'll set aside the moral horror of what you just described as a necessary evil to avert the apocalypse, for the time being.

More practically, I think you're being racist against weird mathy programs.

For starters, I think weird mathy programs will be a good deal easier to develop than digital people. Human beings are not just general optimizers. We have modules that function roughly like one, which we use under some limited circumstances, but anyone who's ever struggled with procrastination or put their keys in the refrigerator knows that our goal-oriented systems are entangled with a huge number of cheap heuristics at various levels, many of which are not remotely goal-oriented.

All of this stuff is deeply tangled up with what we think of as the human 'utility function,' because evolution has no incentive to design a clean separation between planning and values. Replicating all of that accurately enough to get something that thinks and behaves like a person is likely much harder than making a weird mathy program that's good at modelling the world and coming up with plans.

There's also the point that there really isn't a good way to make a brain emulation smarter. Weird, mathy programs - even ones that use neural networks as subroutines - often have obvious avenues to making them smarter, and many can scale smoothly with processing resources. Brain emulations are much harder to bootstrap, and it'd be very difficult to preserve their behavior through the transition.

My best guess is, they'd probably go nuts and end up as an eldritch horror. And if not, they're still going to get curb stomped by the first weird mathy program to come along, because they're saddled with all of our human imperfections and unnecessary complexity. The upshot of all of this is that they don't serve the purpose of protecting us from future UFAIs.

Finally, the process you described isn't really something you can start on (aside from the VM angle) until you already have human-level AGIs, and a deep and total understanding of all of the operation of the human brain. Then, while you're setting up your crazy AI concentration camp and burning tens of thousands of man-years of compute time searching for AI Buddha, some bright spark in a basement with a GPU cluster has the much easier task of just kludging together something smart enough to recursively self-improve. You're in a race with a bunch of people trying to solve a much easier problem, and (unlike MIRI) you don't have decades of lead time to get a head start on the problem. Your large-scale evolutionary process would take much, much too much time and money to actually save the world.

In short, I think it's a really bad idea. Although now that I understand what you're getting at, it's less obviously silly than what I originally thought you were proposing. I apologize.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-28T03:40:37.366Z · LW · GW

A ULM also requires a utility function or reward circuitry with some initial complexity, but we can also use the same universal learning algorithms to learn that component. It is just another circuit, and we can learn any circuit that evolution learned.

Okay, so we just have to determine human terminal values in detail, and plug them into a powerful maximizer. I'm not sure I see how that's different from the standard problem statement for friendly AI. Learning values by observing people is exactly what MIRI is working on, and it's not a trivial problem.

For example: say your universal learning algorithm observes a human being fail a math test. How does it determine that the human being didn't want to fail the math test? How does it cleanly separate values from their (flawed) implementation? What does it do when peoples' values differ? These are hard questions, and precisely the ones that are being worked on by the AI risk people.

Other points of critique:

Saying the phrase "safe sandbox sim" is much easier than building a virtual machine that can withstand a superhuman intelligence trying to get out of it. Even if your software is perfect, the AI can still figure out that its world is artificial and find ways to blackmail its captors. A more robust solution is probably what MIRI is looking into: designing agents that won't resist attempts to modify them (corrigibility).

You want to be careful about just plugging a learned human utility function into a powerful maximizer and then raising it. If it's maximizing its own utility, which is necessary if you want it to behave anything like a child, what's to stop it from learning human greed and cruelty, and becoming an eternal tyrant? I don't trust a typical human to be god.

And even if you give up on that idea, and have it maximize a utility function defined in terms of humanity's values, you still have problems. For starters, you want to be able to prove formally that its goals will remain stable as it self-modifies, and that it won't create powerful sub-agents who don't share those goals - which is the other class of problems that MIRI works on.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-27T21:59:37.574Z · LW · GW

Here's one from a friend of mine. It's not exactly an argument against AI risk, but it is an argument that the problem may be less urgent than it's traditionally presented.

  1. There's plenty of reason to believe that Moore's Law will slow down in the near future.

  2. Progress on AI algorithms has historically been rather slow.

  3. AI programming is an extremely high level cognitive task, and will likely be among the hardest things to get an AI to do.

  4. These three things together suggest that there will be a 'grace period' between the development of general agents, and the creation of a FOOM-capable AI.

  5. Our best guess for the duration of this grace period is on the order of multiple decades.

  6. During this time, general-but-dumb agents will be widely used for economic purposes.

  7. These agents will have exactly the same perverse instantiation problems as a FOOM-capable AI, but on a much smaller scale. When they start trying to turn people into paperclips, the fallout will be limited by their intelligence.

  8. This will ensure that the problem is taken seriously, and these dumb agents will make it much easier to solve FAI-related problems, by giving us an actual test bed for our ideas where they can't go too badly wrong.


This is a plausible-but-not-guaranteed scenario for the future, which feels much less grim than the standard AI-risk narrative. You might be able to extend it into something more robust.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-27T20:50:01.116Z · LW · GW

(1) Intelligence is an extendible method that enables software to satisfy human preferences. (2) If human preferences can be satisfied by an extendible method, humans have the capacity to extend the method. (3) Extending the method that satisfies human preferences will yield software that is better at satisfying human preferences. (4) Magic happens. (5) There will be software that can satisfy all human preferences perfectly but which will instead satisfy orthogonal preferences, causing human extinction.

This is deeply silly. The thing about arguing from definitions is that you can prove anything you want if you just pick a sufficiently bad definition. That definition of intelligence is a sufficiently bad definition.

EDIT:

To extend this rebuttal in more detail:

I'm going to accept the definition of 'intelligence' given above. Now, here's a parallel argument of my own:

  1. Entelligence is an extendible method for satisfying an arbitrary set of preferences that are not human preferences.

  2. If these preferences can be satisfied by an extendible method, then the entelligent agent has the capacity to extend the method.

  3. Extending the method that satisfies these non-human preferences will yield software that's better at satisfying non-human preferences.

  4. The inevitable happens.

  5. There will be software that will satisfy non-human preferences, causing human extinction.


Now, I pose to you: how do we make sure that we're making intelligent software, and not "entelligent" software, under the above definitions? Obviously, this puts us back to the original problem of how to make a safe AI.

The original argument is rhetorical sleight of hand. The given definition of intelligence implicitly assumes that the problem doesn't exist and all AIs will be safe, and then goes on to prove that all AIs will be safe.

It's really, fundamentally silly.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-27T20:43:52.113Z · LW · GW

I think you misunderstand my argument. The point is that it's ridiculous to say that human beings are 'universal learning machines' and you can just raise any learning algorithm as a human child and it'll turn out fine. We can't even raise 2-5% of HUMAN CHILDREN as human children and have it reliably turn out okay.

Sociopaths are different from baseline humans by a tiny degree. It's got to be a small number of single-gene mutations. A tiny shift in information. And that's all it takes to make them consistently UnFriendly, regardless of how well they're raised. Obviously, AIs are going to be more different from us than that. And that's a pretty good reason to think that we can't just blithely assume that putting Skynet through preschool is going to keep us safe.

Human values are obviously hard coded in large part, and the hard coded portions seem to be crucial. That hard coding is not going to be present in an arbitrary AI, which means we have to go and duplicate it out of a human brain. Which is HARD. Which is why we're having this discussion in the first place.

Comment by AndreInfante on Steelmaning AI risk critiques · 2015-07-24T20:14:10.832Z · LW · GW

To rebut: sociopaths exist.

Comment by AndreInfante on Update on the Brain Preservation Foundation Prize · 2015-06-03T06:34:32.433Z · LW · GW

What are the advantages to the hybrid approach as compared to traditional cryonics? Histological preservation? Thermal cracking? Toxicity?

Comment by AndreInfante on [Link]"Neural Turing Machines" · 2015-01-29T12:24:49.065Z · LW · GW

Thank you!

Comment by AndreInfante on [Link]"Neural Turing Machines" · 2014-11-02T20:48:47.668Z · LW · GW

That sounds fascinating. Could you link to some non-paywalled examples?

Comment by AndreInfante on Hal Finney has just died. · 2014-08-29T10:50:17.687Z · LW · GW

The odds aren't good, but here's hoping.

Comment by AndreInfante on LINK-Robot apocalypse taken (somewhat) seriously · 2014-08-04T18:26:58.557Z · LW · GW

Amusingly, I just wrote an (I think better) article about the same thing.

http://www.makeuseof.com/tag/heres-scientists-think-worried-artificial-intelligence/

Business Insider can probably muster more attention than I can though, so it's a tossup about who's actually being more productive here.