Cynical explanations of FAI critics (including myself)

post by Wei Dai (Wei_Dai) · 2012-08-13T21:19:06.671Z · LW · GW · Legacy · 49 comments

Related Posts: A cynical explanation for why rationalists worry about FAI, A belief propagation graph

Lately I've been pondering the fact that while there are many critics of SIAI and its plan to form a team to build FAI, few of us seem to agree on what SIAI or we should do instead. Here are some of the alternative suggestions offered so far:

  • work on intelligence amplification
  • work on mind uploading
  • work on computer security
  • work on improving laws and institutions
  • build "tool AI" rather than agent AI
  • stop worrying about the Singularity and work on more mundane goals

Given that ideal reasoners are not supposed to disagree, it seems likely that most if not all of these alternative suggestions can also be explained by their proponents being less than rational. Looking at myself and my suggestion to work on IA or uploading, I've noticed that I have a tendency to be initially over-optimistic about some technology and then become gradually more pessimistic as I learn more details about it, so that I end up being more optimistic about technologies that I'm less familiar with than the ones that I've studied in detail. (Another example of this is me being initially enamoured with Cypherpunk ideas and then giving up on them after inventing some key pieces of the necessary technology and seeing in more detail how it would actually have to work.)
I'll skip giving explanations for other critics to avoid offending them, but it shouldn't be too hard for the reader to come up with their own explanations. It seems that I can't trust any of the FAI critics, including myself, nor do I think Eliezer and company are much better at reasoning or intuiting their way to a correct conclusion about how we should face the apparent threat and opportunity that is the Singularity. What useful implications can I draw from this? I don't know, but it seems like it can't hurt to pose the question to LessWrong. 

 

49 comments

Comments sorted by top scores.

comment by gwern · 2012-08-13T21:45:28.494Z · LW(p) · GW(p)

If there are thousands of possible avenues of research, and critics have a noisy lock on truth in the sense of picking a few hundred avenues they like best, then we could easily wind up with all the critics agreeing that strategy X is just a bad idea, but also disagreeing on whether strategy A is better than strategy B or C. So I don't see disagreement among critics as proving much at all, other than that the critics are not perfect, which they would surely agree with; it doesn't vindicate X. There are so many more wrong research avenues than right ones.

(Imagine we have 10,000 possible research topics and 3 critics who have each identified their top 10 strategies; the critics are guaranteed to include 'the right' strategy in those 10, but beyond that they pick their #1 at random. If someone picks research topic X which is genuinely wrong, then the critics will almost certainly all agree that that topic is indeed the wrong topic: the number of strategies endorsed by any of them is at most 28 out of the 10,000, and 28/10,000 is a pretty small chance for X to get lucky and be one of them. But at the same time, will the 3 critics all rank the same strategy as the top strategy? 1/10 × 1/10 × 1/10 is not great odds either! So even though the critics have an amazing truth-finding ability in being able to shrink 10,000 all the way down to 10, they still may not agree, because of their remaining noise.)
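
A rough Monte Carlo sketch of this thought experiment (the parameters and variable names below are my own illustrative assumptions, not anything specified above):

    import random

    N_TOPICS = 10_000    # possible research topics
    N_CRITICS = 3        # critics, each endorsing a top-10 list
    LIST_SIZE = 10
    TRIALS = 100_000

    RIGHT = 0            # the genuinely right topic
    WRONG_X = 1          # a genuinely wrong topic that someone has chosen to pursue

    x_endorsed = 0       # trials where at least one critic's list contains X
    all_agree = 0        # trials where all critics name the same single top pick

    for _ in range(TRIALS):
        lists, tops = [], []
        for _ in range(N_CRITICS):
            # Each list is guaranteed to contain the right topic, plus 9 random others.
            top10 = [RIGHT] + random.sample(range(1, N_TOPICS), LIST_SIZE - 1)
            lists.append(top10)
            tops.append(random.choice(top10))  # beyond that, the #1 pick is random
        if any(WRONG_X in lst for lst in lists):
            x_endorsed += 1
        if len(set(tops)) == 1:
            all_agree += 1

    print(f"P(any critic endorses the wrong topic X)  ~ {x_endorsed / TRIALS:.4f}")  # roughly 0.003
    print(f"P(all critics agree on a single top pick) ~ {all_agree / TRIALS:.4f}")  # roughly 0.001

So the simulated critics almost never endorse the wrong topic X, yet they also almost never converge on the same top pick.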

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-08-13T21:49:54.921Z · LW(p) · GW(p)

it doesn't vindicate X

To be clear, I'm not suggesting that the fact that critics of FAI disagree vindicates FAI.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-08-14T09:11:33.216Z · LW(p) · GW(p)

What I am saying is more like this:

It looks like humans trying to answer "How should we face the Singularity?" are so noisy as to be virtually useless. Crap, now what? It's a long shot, but maybe LW has some ideas about how to extract something useful from all the noise?

(Note that we have disagreement not just about which avenue of research is best, but also about whether any given approach has positive, negative, or negligible expected utility (compared to doing nothing), so we can't even safely say "let's just collectively make enough money to fund the top N approaches" and expect to be doing some good. ETA: Nor can we take the average of people's answers and use that, since much of the noise is probably driven by systematic biases which are not likely to cancel out nicely. Nor is it clear how to subtract out the biases, since whoever tries to do that would most likely be heavily biased themselves relative to the strength of the signal they're trying to extract.)
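
A toy numerical sketch of the point about shared biases not cancelling out (all of the numbers here are arbitrary assumptions, purely for illustration):

    import random
    import statistics

    TRUTH = 0.0        # the quantity everyone is trying to estimate (hypothetical)
    SHARED_BIAS = 2.0  # a systematic bias shared by every estimator (hypothetical)
    NOISE_SD = 5.0     # each person's idiosyncratic noise
    N_PEOPLE = 1_000

    estimates = [TRUTH + SHARED_BIAS + random.gauss(0, NOISE_SD)
                 for _ in range(N_PEOPLE)]

    # Averaging shrinks the idiosyncratic noise (roughly like 1/sqrt(N)) ...
    print(f"mean of {N_PEOPLE} estimates: {statistics.mean(estimates):.2f}")
    # ... but the mean converges to TRUTH + SHARED_BIAS (about 2.0), not to TRUTH (0.0).
    # No amount of averaging removes a bias that everyone shares.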

Replies from: private_messaging
comment by private_messaging · 2012-08-14T11:43:41.677Z · LW(p) · GW(p)

This just bugs me too much.

No, I do not think there is disagreement. You tell many people that X is the largest member of a set; they come up with members that are larger than X. If they give different answers, that is not disagreement in any meaningful sense. If X is particularly ill chosen as the largest member of the set, then there can be an enormous number of members larger than X.

If you want to claim substantial disagreement, e.g. if you claim that promoters of intelligence amplification see computer security as entirely unhelpful and a net increase in risk, you have to provide examples (surely a sufficiently twisted reasoner can argue that computer security will be an annoying obstacle that will piss off the future cyborg overlord). Edit: also, I think claiming that those who disagree are selling their examples as 'the best that could be done' is a rather uncharitable interpretation (or actually, a very uncharitable one). For the most part, these are just examples of what is better to do than FAI.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-08-14T12:30:33.793Z · LW(p) · GW(p)

Holden presumably thinks that many academic AGI approaches are too risky since they are agent designs:

I believe that tools are safer than agents (even agents that make use of the best "Friendliness" theory that can reasonably be hoped for) and that SI encourages a focus on building agents, thus increasing risk.

Nick Szabo thinks working on mind uploading is a waste of time.

I personally promoted intelligence amplification and argued that working on security is of little utility.

Robin Hanson thinks the Singularity will be an important event that we can help make better by improving laws/institutions or advancing certain technologies ahead of others, and presumably would disagree that we should stop worrying about it.

Replies from: private_messaging
comment by private_messaging · 2012-08-14T12:44:03.705Z · LW(p) · GW(p)

Holden presumably thinks that many academic AGI approaches are too risky since they are agent designs:

He's an example of selection bias among critics: no detailed critique from him would have been heard if he hadn't taken it seriously enough in the first place.

Nick Szabo thinks working on mind uploading is a waste of time.

You don't work on mind uploading today; you work on neurology, which solves a lot of practical problems, including treatments for disorders, and which may or may not lead to uploading. I am rather sceptical that future mind uploading is a significant contributor to the utility of such work.

I personally promoted intelligence amplification and argued that working on security is of little utility.

I do think it is of little utility, because I do not believe in some over-the-internet foom. But if such a foom is granted, then security can stop it (or rather, work on the tools that would allow provably unhackable software can). Ultimately the topic is entirely speculative, and you can only make arguments by adopting some of the assumptions. With regard to 'provably friendly AGI', once again the important bit is 'provably': that requires techniques and tools that are useful across the board, whatever comes in the future (by improving our degree of reliable understanding of, and control over, our creations of any kind), while the 'friendly' part is something you can't even work on without knowing how the 'provably' is going to be accomplished.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-08-14T13:23:34.089Z · LW(p) · GW(p)

David Dalrymple criticized FAI and is working directly on mind uploading today, so apparently he disagrees with both you and Nick Szabo.

Nick Szabo explicitly suggested working on computer security so he seems to disagree with you about the utility. I disagree with you and him about whether provably unhackable software is feasible.

Do you think I've satisfied your request for examples of substantive disagreements? (I'd rather not go into object-level arguments since that's not what this post is about.)

Replies from: private_messaging
comment by private_messaging · 2012-08-14T14:40:52.763Z · LW(p) · GW(p)

What I mean is that I think most of the critics would agree that the approaches which they see as far-fetched (and which you say they 'disagree' about) are still much more realizable than FAI.

Furthermore, the arguments are highly conditional on specific speculations which are taken to be true for the sake of the argument. For example, if I am to assume that unfriendly AI would destroy the world but that this can be prevented with FAI, it means that an AI of the kind that is actually designed, and can be controlled, can be built in time. The algorithms relevant to making it cull the search space to a manageable size are also highly relevant to tools for solving all sorts of technological problems, including the biomedical research needed for mind uploading. This line of argument by no means implies that I believe mind uploading to be likely.

Furthermore, 'provably friendly' implies the existence of much superior techniques for designing provably-something software; proving the absence of, e.g., buffer overruns and SQL injections is a much more readily achievable task.

It would be incredibly difficult to track all the cross-dependencies and rank future technologies in order of appearance (an exercise that may well have lower utility than just picking one and working on it), but you do not need to do that to see that a particularly spectacular solution, one that in practice relies on everything, including neurology, to figure out and formally specify what constitutes a human in such a way that a superintelligence wouldn't come up with some really weird interpretation, is much further down the timeline than the other solutions.

comment by billswift · 2012-08-14T01:04:44.340Z · LW(p) · GW(p)

I criticize FAI because I don't think it will work. But I am not at all unhappy that someone is working on it, because I could be wrong or their work could contribute to something else that does work even if FAI doesn't (serendipity is the inverse of Murphy's law). Nor do I think they should spread their resources excessively by trying to work on too many different ideas. I just think LessWrong should act more as a clearinghouse for other, parallel ideas, such as intelligence amplification, that may prevent a bad Singularity in the absence of FAI.

comment by Xachariah · 2012-08-13T22:21:47.763Z · LW(p) · GW(p)

CEV and understanding recursive self-modification. Everything boils down to those two linked disciplines. The CEV is for understanding what we want, and the recursive self-modification is so that whatever is FOOMing doesn't lose sight of CEV while it changes itself. I simply do not trust that anything will FOOM first and then come up with perfect CEV afterwards. By that time it will already be too far removed from humanity. This was, I think, a topic in Eliezer's metaethics sequence. It's the still-unanswered question of what to do when you actually have unlimited power, including the power to change yourself.

Every option eventually boils down to a FOOM. This is why CEV and recursive self-modification must be finished before any scenario completes. AI is an artificial life-form FOOM, and may be friendly or unfriendly; uploading is human FOOM, and we already know that they're unfriendly with sufficient power; intelligence amplification is a slower biological FOOM; the first question asked of Oracle/Tool AI will be how to FOOM; and mainstream AGI is trying to build an AI to FOOM, except slower. The only non-FOOM related options are improving laws and institutions, which is already an ethical question, and computer security (and I'm not sure how that one relates to SIAI's mission).

The issue is that both of these are really hard. Arguably every philosopher since ever has been trying to do CEV. Recursive self-modification is hard as well, since humans can barely self-modify our ethical systems as it is. Though, as I understand, CFAR is now working on finding out what it takes to actually change people's minds/habits/ethics/actions.

Edit: But at the end of the day, it doesn't help one bit if SIAI comes up with CEV while the Pentagon or China comes up with uFAI. So starting work on AI is probably a good idea.

Replies from: torekp, Pentashagon
comment by torekp · 2012-08-14T01:43:08.216Z · LW(p) · GW(p)

uploading is human FOOM, and we already know that they're unfriendly with sufficient power

We do? Consider the Amish, a highly recognizable out-group with very backward technology. Other groups could easily wipe them out and take their stuff, if they so chose. But they seem to be in no particular danger. Now, one can easily come up with explanations that might not apply to uploads: the non-Amish are too diverse to coordinate and press their advantage; their culture overlaps too much with the Amish to make genocide palatable; yada yada. But, why wouldn't those factors also apply to uploads? Couldn't uploads be diverse? Share a lot of culture with bio humans? Etc.

Replies from: gwern
comment by gwern · 2012-08-14T01:47:32.176Z · LW(p) · GW(p)

The Amish are surprisingly wealthy, likely a profit center for neighbors & the government due to their refusal to use government services while still paying taxes, and they do not (yet) make up a disturbingly large proportion of the population.

They are also currently a case of selection bias: there are many countries where recognizable out-groups most certainly have fallen prey to unFriendly humans. (How many Jews are there now in Iran, or Syria, or Iraq? How are the Christians doing in those countries? Do the Copts in Egypt feel very optimistic about their future? Just to name some very recent examples...)

Replies from: roystgnr
comment by roystgnr · 2012-08-14T05:34:25.028Z · LW(p) · GW(p)

For that matter, when you think of the Amish and the other "Swiss Brethren" religions, why do you think "Pennsylvania" rather than "Switzerland and neighboring countries"? A sect that had to cross oceans to find a state promising religious freedom is our best example of humans' high tolerance for diversity?

Replies from: gwern
comment by gwern · 2012-08-14T16:55:49.556Z · LW(p) · GW(p)

Yes, that's a good point, although now that I think about it I don't actually know what happened to the 'original' Amish. The Wikipedia article on the Swiss Brethren mentions a lot of persecution, but it also says they sort of survive as the Swiss Mennonite Conference; regardless, they clearly don't number in the hundreds of thousands or millions like they do in America.

comment by Pentashagon · 2012-08-14T18:02:22.325Z · LW(p) · GW(p)

It's not just CEV and recursive self-modification, either. CEV only works on individuals and (many) individuals will FOOM once they acquire FAI. If individuals don't FOOM into FAI's (and I see no reason that they would choose to do so) we need a fully general moral/political theory that individuals can use to cooperate in a post-singularity world. How does ownership work when individuals suddenly have the ability to manipulate matter and energy on much greater scales than even governments can today? Can individuals clone themselves in unlimited number? Who actually owns solar and interstellar resources? I may trust a FAI to implement CEV for an individual but I don't necessarily trust it to implement a fair universal political system; that's asking me to put too much faith in a process that won't have sufficient input until it's too late. If every individual FOOMed at the same rate perhaps CEV could be used over all of humanity to derive a fair political system, but that situation seems highly unlikely. The most advanced individuals will want and need the most obscure and strange sounding things and current humans will simply be unable to fully evaluate their requests and the implications of agreeing to them.

Replies from: Xachariah
comment by Xachariah · 2012-08-14T20:15:17.493Z · LW(p) · GW(p)

I think we have the same sentiment, though we may be using different terminology. To paraphrase Eliezer on CEV, "Not to trust the self of this passing moment, but to try to extrapolate a me who knew more, thought faster, and were more the person I wished I were. Such a person might be able to avoid the fundamental errors. And still fearful that I bore the stamp of my mistakes, I should include all of the world in my extrapolation." Basically, I believe there is no CEV except the entire whole of human morality. Though I do admit that CEV has a hard problem in the case of mutually conflicting desires.

If you hold CEV to be personal rather than universal, then I agree that SIAI should work on that 'universal CEV', whatever it may be named.

Replies from: Pentashagon
comment by Pentashagon · 2012-08-15T13:00:46.256Z · LW(p) · GW(p)

I just re-read EY's CEV paper and noticed that I had forgotten quite a bit since the last time I read it. He goes over most of the things I whined about. My lingering complaint/worry is that human desires won't converge, but so long as CEV just says "fail" in that case instead of "become X maximizers" we can potentially start over with individual or smaller-group CEV. A thought experiment I have in mind is what would happen if more than one group of humans independently invented FAI at the same time. Would the FAIs merge, cooperate, or fight?

I guess I am also not quite sure how FAI will actively prevent other AI projects or whole brain simulations or other FOOMable things, or if that's even the point. I guess it may be up to humans to ask the FAI how to prevent existential risks and then implement the solutions themselves.

comment by Shmi (shminux) · 2012-08-13T22:50:10.088Z · LW(p) · GW(p)

To add to your list of various alternatives... My personal skepticism re SI is the apparent lack of any kind of "Friendly AI roadmap", or at least nothing that I could easily find on the SI site or here. (Could be my sub-par search skills, of course.)

Replies from: AlexMennen, Bruno_Coelho
comment by AlexMennen · 2012-08-14T00:28:07.220Z · LW(p) · GW(p)

I hear Eliezer is planning to start writing a sequence on open problems in Friendly AI soon.

Replies from: shminux
comment by Shmi (shminux) · 2012-08-14T02:17:03.291Z · LW(p) · GW(p)

That's a different task... I'd expect to see something like "in phase one we plan to do this during this timeframe, next, depending on the outcome of phase one, we plan to proceed along the following lines, which we expect will take from m to n years...", rather than a comprehensive list of all open problems. The latter is hard and time consuming, the former is something that should not take longer than a page written down, at least as a first draft.

Replies from: Kaj_Sotala, AlexMennen
comment by Kaj_Sotala · 2012-08-14T08:15:04.709Z · LW(p) · GW(p)

I don't think it would be reasonable to develop such a roadmap at this point, given that it would require having a relatively high certainty in a specific plan. But given that it's not yet clear whether the best idea is to proceed on being the first one to develop FAI, or to pursue one of the proposals listed in the OP, or to do something else entirely, and furthermore it's not even clear how long it will take to figure that out, such specific roadmaps seem impossible.

Replies from: shminux
comment by Shmi (shminux) · 2012-08-14T15:09:22.458Z · LW(p) · GW(p)

And this vagueness pattern-matched perfectly to various failed undertakings, hence my skepticism.

Replies from: DaFranker
comment by DaFranker · 2012-08-16T15:26:45.313Z · LW(p) · GW(p)

In my model, it also pattern-matches with "Fundamental research that eventually gave us Motion, Thermodynamics, Relativity, Transistors, etc."

comment by AlexMennen · 2012-08-14T04:21:34.892Z · LW(p) · GW(p)

This looks somewhat like what you're asking for, although it does leave a bit to be desired.

Replies from: shminux
comment by Shmi (shminux) · 2012-08-14T06:09:14.037Z · LW(p) · GW(p)

No, it does not at all look like a roadmap. This is a roadmap. Concrete measurable goals. The strategic plan has no milestones and no timelines.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-08-14T14:00:57.026Z · LW(p) · GW(p)

Timelines don't work very well on such slippery topics. You work at it until you're done. Milestones are no less necessary, for sure.

comment by Bruno_Coelho · 2012-08-14T01:33:58.001Z · LW(p) · GW(p)

Apparently they are still figuring out that roadmap, a problem with any research, even non-academic research. I assume the sub-topics discussed so far are the central points in question, but if SI keeps some insights secret, well, it's either for the sake of security or a lack of transparency. In some cases, people bother too much about the institution.

comment by Epiphany · 2012-08-30T00:58:01.598Z · LW(p) · GW(p)

Kill all those birds with one stone: Work on understanding and preventing the risks. Here's why that stone kills each bird:

Re: Work on some aspect of building an AI

Even the Sequences are against this: "Do not propose solutions until the problem has been discussed as thoroughly as possible." Working on risk should go first. Also:

  • Somebody needs to judge the safety of AIs.

    Including any that you guys would make. If not SIAI, who will do this at all?

  • SIAI can't do both safety and production; it's perverse.

    The people who MAKE the AI should NOT be the same people who JUDGE the AI for the same reason that I would not purchase medical treatments from a doctor who claimed to test them himself but did not do a scientific study. You cannot peer review, independently test or check and balance your own project. Those that want to make an AI should plan to be part of a different organization, either now or in the future.

  • You'll get speed and quality advantages this way.

    If you build the AI first, you will certainly consider safety problems that hadn't been obvious before because you'll be in the thick of all those details and they'll give you new ideas. But those can always be added to the list of safety guidelines at that time. There is no reason to do that part first. If you make safety guidelines first, you can build the AI with safety in mind from the ground up. As you know, reprogramming something that has a flaw in a critical spot can be very, very time-consuming. By focusing on safety first, you will have a speed advantage while coding as well as a quality advantage. Others will make dangerous AIs and be forced to recall them and start over. So, this is a likely advantage.

Re: "improve laws and institutions"

You need to understand the risks thoroughly before you will be able to recommend good laws, and before people will listen to you and push for any legislation. After you understand the risks, you'd need to work on improving laws so that when the interested people go to build their AI there's a legal framework there.

Re: "computer security"

This should be included under "risk research and prevention" because it's definitely a risk. There are likely to be interactions between security and other risks that you'd want to know about while working on the security; it's all connected, and you may not discover these interactions if you don't think about them at the same time.

Re: "stop worrying about the Singularity and work on more mundane goals"

Considering the personalities, capabilities and prior investments of those involved, this simply isn't likely to happen. They need to be ambitious. Ambitious people need the assistance of others who are more specialized in mundane tasks and would be happy to help them so that they can focus on ambitions - we all specialize.

Focusing on risk research and prevention is also the first step to everything else:

  • How will you get funding for something people see as risky?

  • How will you develop AI in a world destroyed by other's AI projects that SIAI didn't take the time to stop?

  • How will SIAI develop credibility and trust if it doesn't prove it's capable of intellectual rigor by doing a thorough job of risk prevention? This entire industry has no trust. Even as an AI project, you'll have no trust for that reason.

  • How will SIAI prove it is effective in the world if it doesn't do something before making an AI such as change some laws, and do risk prevention?

  • Who is going to be there to independently test your AI project if you choose to do that instead?

I don't think the solution is "Do some, not others." I think it is "Do them in the right order." And as for which type of AI project to choose, wouldn't it be safer to decide AFTER you research risks as thoroughly as possible?

Additionally, if SIAI chooses to dedicate itself to risk research and prevention and agrees that the AI building activities should be split off into a different group, I'd be interested in doing some volunteer work for the risk research and prevention group, especially regarding preventing an AGI arms race. I think the ideas I explain there or similar ones would be a really good way for you to prove that SIAI is capable of actually doing something, which addresses a common objection to funding SIAI.

See any way to break the above line of reasoning and argue for a different route? If so, I will attempt to resolve those conflicts also.

comment by nickLW · 2012-08-15T08:08:21.620Z · LW(p) · GW(p)

ideal reasoners are not supposed to disagree

My ideal thinkers do disagree, even with themselves. Especially about areas as radically uncertain as this.

comment by mwengler · 2012-08-14T21:46:50.633Z · LW(p) · GW(p)

Given that ideal reasoners are not supposed to disagree, it seems likely that most if not all of these alternative suggestions can also be explained by their proponents being less than rational.

Please support this statement with any kind of evidence. From where I sit it looks to be simply an error.

As far as I know values do not come from reason, they are a "given" from the point of view of reason. So if I value paper clips and you value thumbtacks, we can be as rational as all get out and still disagree on what we should do.

Further, I think that even on matters of "fact," it is not "ideal reasoners" who do not disagree; rather, it is reasoners who have agreed on one of a few possible methodologies of reasoning. The statement I think I have seen is something like "rational Bayesians cannot agree to disagree on probability estimates."

comment by V_V · 2012-09-05T00:50:40.311Z · LW(p) · GW(p)

Given that ideal reasoners are not supposed to disagree,

Under some non-trivial assumptions.

it seems likely that most if not all of these alternative suggestions can also be explained by their proponents being less than rational.

That sounds pretty condescending. These suggestions are not all mutually exclusive, and proponents with different values might have different preferences without being "less than rational".

comment by vi21maobk9vp · 2012-08-14T06:38:20.342Z · LW(p) · GW(p)

About agreement: for agreement we need all our evidence to be shareable, and our priors to be close enough. The actual evidence (or hard-to-notice inferences) cited in the Sequences about the possibility of significantly super-human AGI on reasonable hardware is quite limited, and not enough to overcome differences in priors.

I do think humanity will build slightly super-human AGI, but as usual with computers it will mimic our then-current idea of how the human brain actually works and then be improved as the design allows. In that direction, HTM (as done by Jeff Hawkins via his current Numenta startup) may end up polished into the next big thing in machine learning, or a near-flop with few uses.

Also, it is not clear that people will ever get around to building general function-optimizing AI. Maybe executing behaviours will end up being the way to safeguard AI from wild decisions.

comment by nykos · 2012-08-14T09:00:30.651Z · LW(p) · GW(p)

The problem with FAI is that it is nearly impossible for human minds of even high intellect to get good results solely through philosophy - without experimental feedback. Aristotle famously got it wrong when he deduced philosophically that rocks fall faster than feathers.

Also, I believe that it is a pointless endeavor for now. Here are 2 reasons why I think that's the case.

*1. We humans don't have any idea whatsoever as to what constitutes the essence of an intelligent system. Because of our limited intellects, our best bet is to simply take the one intelligent system that we know of - the human brain - and simply replicate it in an artificial manner. This is a far easier task than designing an intelligence from scratch, since in this case the part of design was already done by natural (and sexual) selection.

Our best hope and easiest path for AI is simply to replicate the human brain (preferably the brain of an intelligent and docile human being), and make a body suitable for it to inhabit. Henry Markram is working on this (hopefully he will use himself or someone like himself for the first template - instead of some stupid or deranged human), and he notably hasn't been terribly concerned with Friendly AI. Ask yourself this: what makes for FH (Friendly Humans)? And here we turn to neuroscience, evo-psych and... the thing that some people want to avoid discussing for fear of making others uncomfortable: HBD. People of higher average IQ are, on average, less predisposed to violence. Inbred populations are more predisposed to clannish behavior (we would ideally want an AI that is the opposite of that, that is most willing to be tolerant of out-groups). Some populations of human beings are more predisposed to violence, while some have a reputation for docility (you can see that in the crime rates). It's in the genes and the brain that they produce, combined with some environmental factors like random mutations, the way proteins fold and are expressed, etc.

So obviously the most promising way to create Friendly AI at this point in time is to replicate the brain of a Friendly Human.

*2. We might not be smart or creative enough on average to be able to build an FAI, or it might take too long a time to do so. This is a problem that, if it exists, will not only not go away, but will actually compound itself. As long as there are no restrictions whatsoever on reproduction and some form of welfarism and socialism exists in most nations on Earth, there will be dysgenics with regard to intelligence - since intelligent people generally have fewer children than those on the left half of the Bell curve - while the latter are basically subsidized to reproduce by means of wealth transfer from the rich (who are also more likely to have above-average IQs, else they wouldn't be rich).

Even if we do possess the knowledge to replicate the human brain, I believe it is highly unlikely that it will happen in a single generation. AI (friendly or not) is NOT just around the corner. Humanity doesn't even possess the ability to write a bugless operating system, or build a computer that obeys sane laws of personal computing. What's worse, it once possessed the ability to build something reasonably close to these ideals, but that ability is lost today. If building FAI takes more than one generation, and the survival of billions of people depends on it, then we should have it sooner rather than later.

The current bottleneck for AI, and for most science in general, is the number of human minds able and willing to do it. Without the ability to mass-produce at least human-level AI, we desperately need to maximize the proportion of intelligent and conscientious human beings by producing as many of them as possible. The sad truth is this: one Einstein or Feynman is more valuable, when it comes to the continued well-being of humanity, than 99% of the rest of human beings, who are simply incapable of producing such high-level work and thought because of either genetics or environmental factors, i.e. conditions in the uterus, enough iodine, etc. The higher the average intelligence of humanity, the more science thrives.

Eugenics for intelligence is the obvious answer. This can be achieved through various means, discussed in this very good post on West Hunter. Just one example, which is one of the slowest but one that advanced nations are 100% capable of implementing right now: advanced nations already possess the means to create embryos using the sperm and eggs of the best and brightest scientists alive today. If our leaders simply conditioned welfare, and even payments of large sums of money, for below-average-IQ women on their acting as surrogate mothers for "genius" embryos, in 20-30 years we could have dozens of Feynmans and tens of thousands of Yudkowskys working on AI. This would have the added benefit of keeping the low-IQ mothers otherwise pregnant and unavailable for spreading low-IQ genes to the next generation, which would result in fewer people who are a net drain on the future society and who would cause only time-consuming problems for the genius kids (like stealing their possessions or engaging in other criminal activities).

I do realize that increasing intelligence in this manner is bound to have an upper limit and, furthermore, will have some other drawbacks. The high incidence of Tay-Sachs disease among the 110 average IQ Ashkenazi Jews is an illustration of this. But I believe that the discoveries of the healthy high IQ people have the potential to provide more hedons than the dolors of the Tay-Sachs sufferers (or other afflictions of high-IQ people, including some less serious ones like myopia).

EDIT: Given the above, especially if *2. is indeed the case, it is not unreasonable to believe that donating to AmRen or Steve Sailer has greater utility than donating to SIAI. I believe that the brainpower at SIAI is better spent on a problem that is almost as difficult as FAI, namely making HBD acceptable discourse in the scientific and political circles (preferably without telling people who wouldn't fully grasp it and would instead use it as justification for hatred towards Blacks), and specifically peaceful, non-violent eugenics for intelligence as a policy for the improvement of human societies over time.

Replies from: Luke_A_Somers, Randaly, OrphanWilde, nykos
comment by Luke_A_Somers · 2012-08-14T14:13:50.049Z · LW(p) · GW(p)

A Feynman raised by an 80 IQ mother... wouldn't be Feynman

Replies from: Risto_Saarelma, nykos
comment by Risto_Saarelma · 2012-08-14T17:41:38.842Z · LW(p) · GW(p)

Judith Rich Harris might disagree.

comment by nykos · 2012-08-14T18:35:56.147Z · LW(p) · GW(p)

I concede that, under some really extreme environmental conditions, any genetic advantages would be canceled out. So, you might actually be right if the IQ 80 mother is really bad. Money should be provided to poor families by the state, but only as long as they raise their child well - as determined by periodic medical checks. Any person, no matter the IQ, can do one thing reasonably well, and that is to raise children to maturity.

But I believe you are taking the importance of parenthood way too far, and disregarding the hereditarian point of view too easily. The blank-slate bias is something to be avoided. I would suggest you read this article by Matt Ridley.

Excerpt:

Today, a third of a century after the study began and with other studies of reunited twins having reached the same conclusion, the numbers are striking. Monozygotic twins raised apart are more similar in IQ (74%) than dizygotic (fraternal) twins raised together (60%) and much more than parent-children pairs (42%); half-siblings (31%); adoptive siblings (29%-34%); virtual twins, or similarly aged but unrelated children raised together (28%); adoptive parent-child pairs (19%) and cousins (15%). Nothing but genes can explain this hierarchy.

Replies from: Luke_A_Somers, DaFranker
comment by Luke_A_Somers · 2012-08-15T09:46:05.292Z · LW(p) · GW(p)

IQ, sure. What he does with it? That's another story. I shudder to think what a Feynman could have done in service of some strict agenda he'd been trained into.

comment by DaFranker · 2012-08-16T15:20:27.337Z · LW(p) · GW(p)

Any person, no matter the IQ, can do one thing reasonably well, and that is to raise children to maturity.

This statement is obviously false and obviously falsifiable.

Insert example of vegetative-state life-support cripple "raising a child" (AKA not actually doing anything and having an effective/apparent IQ of ~0, perhaps even dying as soon as the child touches something they weren't supposed to).

At this point, a rock would be just as good at raising a child. At least the child can use the rock to kill a small animal and eat it.

Replies from: nykos
comment by nykos · 2012-08-22T15:55:58.743Z · LW(p) · GW(p)

Is a "vegetative-state life-support cripple" a person at all?

Replies from: DaFranker
comment by DaFranker · 2012-08-22T16:36:42.887Z · LW(p) · GW(p)

Is your question/objection rhetorical, or did you just not understand the A Human's Guide to Words sequence?

Taboo "person", and if that doesn't work, taboo "raise children", and if that still doesn't work, taboo "no matter the IQ" or "can do" or "reasonably well" or even the entire list of symbols that is generating the confusion.

I objected and gave a thought experiment to illustrate the falsifiability of one specific assertion, which can be nothing else than what I believed you meant by that list of symbols, based on my prior beliefs on what the symbols represented in empirical conceptspace.

If you question my objection on the grounds of using a symbol incorrectly, then you should question the symbol usage, not the objection as a whole through a straw-manned assertion built with your different version of the symbol.

comment by Randaly · 2012-08-14T20:34:20.042Z · LW(p) · GW(p)

The problem with FAI is that it is nearly impossible for human minds of even high intellect to get good results solely through philosophy - without experimental feedback

I do not understand how this has anything to do with FAI

Because of our limited intellects, our best bet is to simply take the one intelligent system that we know of - the human brain - and simply replicate it in an artificial manner.

This is not in fact "simple" to do. It's not even clear what level of detail will be needed - just a neural network? Hormones? Glial cells? Modelling of the actual neurons?

So obviously the most promising way to create Friendly AI at this point in time is to replicate the brain of a Friendly Human.

Are you sure you understand what FAI actually refers to? In particular, with p~~1, no living human qualifies as Friendly; even if they did, we would still need to solve several open problems also needed for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.

With regards to your claims regarding HBD, eugenics, etc: Evolution is a lot weaker than you think it is, and we know a lot less about genetic influence on intelligence than you seem to think. (See eg here or here.) Such a program would be incredibly difficult to get implemented, and so is probably not worth it.

Replies from: nykos
comment by nykos · 2012-08-22T16:08:51.991Z · LW(p) · GW(p)

I do not understand how this has anything to do with FAI

It is relevant because FAI is currently a branch of pure philosophy. Without constant experimental feedback and contact with reality, philosophy simply cannot deliver useful results the way science can.

This is not in fact "simple" to do. It's not even clear what level of details will be needed- just a neural network? Hormones? Glial cells? Modelling of the actual neurons?

Are there any other current proposals to build AGI that don't start from the brain? From what I can tell, people don't even know where to begin with those.

Are you sure you understand what FAI actually refers to? In particular, with p~~1, no living human qualifies as Friendly; even if they did, we would still need to solve several open problems also needed for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.

At some point you have to settle for "good enough" and "friendly enough". Keep in mind that simply stalling AI until you have your perfect FAI philosophy in place may have a serious cost in terms of human lives lost due to inaction.

(like ensuring that value systems remain unchanged during self-modification)

But what if the AI is programmed with a faulty value system by its human creators?

Such a program would be incredibly difficult to get implemented, and so is probably not worth it.

Fair enough, I was giving it as an example because it is possible to implement now - at least technically, though obviously not politically. Things like genome repair seem more distant in time. Cloning brilliant scientists seems like a better course of action in the long run, and without so many controversies. However, this would still leave the problem of what to do with those who are genetically more prone to violence, who are a net drag on society.

comment by OrphanWilde · 2012-08-14T12:33:17.189Z · LW(p) · GW(p)

Before you build a new crop of them, first you should probably make sure society is even listening to its Einsteins and Feynmans, or that the ones you have are even interested in solving these problems. It does no good to create a crop of supergeniuses who aren't interested in solving your problems for you and wouldn't be listened to if they did.

Replies from: nykos, nykos
comment by nykos · 2012-08-14T18:16:34.760Z · LW(p) · GW(p)

The society will be listening to its Einsteins and Feynmans once they band together and figure out how to use the dark arts to take control of the mass-media and universities away from their present owners and use them for their own, more enlightened goals. Or at least ingratiate themselves before the current rulers. They could promise to build new bombs or drones, for example. As for not being interested in solving FAI and these kinds of problems, that's really not a very convincing argument IMO. Throughout history, in societies of high average IQ and a culture tolerant of science, there was never a shortage of people curious about the world. Why wouldn't people with stratospheric IQ be curious about the world and enjoy the challenge of science, especially if they live in a brain-dead society which routinely engages in easy and boring trivialities? I mean, what would you choose between working on FAI or watching the Kardashians? I know what I would, even though my IQ is not very much above average and I'm really bad at probability problems.

There will never be a shortage of nerds and Asperger types out there, at least not for a long time, even with the current dysgenic trends.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-08-14T19:52:22.309Z · LW(p) · GW(p)

You assume they'd want to band together, and you also underestimate modern entertainment; Dwarf Fortress, for example.

You also assume they'd care to -share- the products of their curiosity with a brain-dead society.

comment by nykos · 2012-08-14T19:26:46.858Z · LW(p) · GW(p)

I upvoted you for responding with a refutation and not simply downvoting.

comment by nykos · 2012-08-14T19:16:43.779Z · LW(p) · GW(p)

OK, I got two minuses already, can't say I'm surprised because what I wrote is not politically correct, and probably some of you thought that I broke the "politics is the mind-killer" informal rule (which is not really rational if you happen to believe that the default political position - the one most likely to pass under the radar as non-mindkilling - is not static, but in fact is constantly shifting, usually in a leftwards direction).

For the sake of all rationalists, I hope I was downvoted because of the latter. Otherwise, all hope for rational argument is lost, if even people in the rationalist community adopt thought processes more similar to those of politicians (i.e., demotism) than true scientists.

The unfortunate fact is that you cannot separate the speed of scientific progress from public policy or the particular structure of the society engaged in science. Science is not some abstract ideal, it is the triumph of the human mind, of the still-rare people possessing both intelligence and rationality (the latter may even be restricted only to their area of expertise, see Abdus Salam or Georges Lemaître). Humans are inherently political animals. The quality of science depends directly, first and foremost, on the number and quality of minds performing it, and some political positions happen to be ways to increase that number more than others. Simply ignoring the connection is not an option if you really believe in the promise of science to help improve the lives of every human being no matter his IQ or mental profile (like I do).

If you downvote me, I have one request: I would at least like to read why.

Replies from: CarlShulman, fezziwig
comment by CarlShulman · 2012-08-15T23:20:48.850Z · LW(p) · GW(p)

Discussion of intelligence enhancement via reproductive biotechnology can occur smoothly here, e.g. in Wei Dai's post and associated comment thread several months ago. Looking at those past comments, I am almost certain that I could rewrite your comment to convey the same core points and yet have it be upvoted.

I think your comment was relatively ill-received because:

1) It threw in a number of other questionable claims on different topics without extensive support, rather than focusing on one at a time, and suggested very high confidence in the agglomeration while not addressing important variables (e.g. how much would a shift in the IQ distribution help vs hurt, how much does this depend on social norms rather than just the steady advance of technology, how much leverage do a few people have on these norms by participating in ideological arguments, and so forth).

2) The style was more stream-of-consciousness and in-your-face, rather than cautiously building up an argument for consideration.

3) There was a vibe of "grr, look at that oppressive taboo!" or "Hear me, O naive ideologically-blinkered folks!" That signals to some extent that one is in a "color war" mood, or attracted to the ideological high of striking for one's views against ideological enemies. That positively invites a messy political fight rather than a focused discussion of the prospects of reproductive biotechnology to improve humanity's prospects.

4) People like Nick Bostrom have written whole papers about biological enhancement, e.g. his paper on using evolutionary heuristics to look for promising enhancement possibilities. Look at its bibliography. Or consider the Less Wrong post by Wei Dai I mentioned earlier, and others like it. People focused on AI risk are not simply unaware of the behavioral genetics or psychometrics literatures, and it's a bit annoying to have them presented as some kind of secret knock-down argument.

comment by fezziwig · 2012-08-14T19:54:32.409Z · LW(p) · GW(p)

I didn't downvote you, but I can see why someone reasonably might. Off the top of my head, in no particular order:

  1. Whole brain emulation isn't the consensus best path to general AI. My intuition agrees with yours here, but you don't show any sign that you understand the subtleties involved well enough to be as certain as you are.
  2. Lots of problematic unsupported assertions, e.g. "intelligent people generally have less children than those on the left half of the Bell curve", "[rich people] are also more likely to have above-average IQs, else they wouldn't be rich", and "[violence and docility are] in the genes and the brain that they produce".
  3. Eugenics!?!
  4. Ok, fine, eugenics, let's talk about it. Your discussion is naive: you assume that IQ is the right metric to optimize for (see Raising the Sanity Waterline for another perspective), you assume that we can measure it accurately enough to produce the effect you want, you assume that it will go on being an effective metric even after we start conditioning reproductive success on it, and your policy prescriptions are socially inept even by LW standards.
  5. Also, it's really slow. That seems ok to you because you don't believe that we'll otherwise have recursive self-improvement in our lifetimes, but that's not the consensus view here either.

I'm not interested in debating any of this, I just wanted to give you an outside perspective on your own writing. I hope it helps, and I hope you decide to stick around.