Slowing Moore's Law: Why You Might Want To and How You Would Do It

post by gwern · 2012-03-10T04:22:52.720Z · LW · GW · Legacy · 89 comments

In this essay I argue the following:

Brain emulation requires enormous computing power; enormous computing power requires further progression of Moore's law; further progress along Moore's law relies on large-scale production of cheap processors in ever more-advanced chip fabs; cutting-edge chip fabs are both expensive and vulnerable to state actors (but not to non-state actors such as terrorists). Therefore: the advent of brain emulation can be delayed by global regulation of chip fabs.

Full essay: http://www.gwern.net/Slowing%20Moore%27s%20Law

89 comments

Comments sorted by top scores.

comment by see · 2012-03-10T05:57:26.178Z · LW(p) · GW(p)

Predictions that improvements in manufacturing will lead to lower prices are made ceteris paribus; rising prices caused by a temporary disruption cannot be used to conclude that manufacturing costs have gone up until the original conditions have been restored or shown to be unrestorable. Since R&D has largely gone on unmodified, there is no particular reason yet to expect that hard drive prices per unit capacity will be any higher in 2013, after most manufacturing facilities are restored and the market has had time to readjust, than an extrapolation made 1990-2010 would have predicted.

And the relevant question as to whether a facility is too expensive to rebuild is not one of the size of firms in that business currently, but of the expected rate of return on the capital. Sunk costs in the form of destroyed fabs will not prevent new capital from coming in to build new fabs (though it might bankrupt specific firms). For sabotage to actually have a long-term effect, it would have to happen regularly enough and effectively enough to significantly drive down the expected rate of return on capital invested in building fabs.

comment by Kaj_Sotala · 2012-03-10T07:54:06.068Z · LW(p) · GW(p)

Related (about memory chips, but probably still relevant): A 0.07-Second Power Problem at Toshiba Chip-Plant May Affect Digital Device Availability/Prices.

There was a Wall Street Journal news story (among others) this morning reporting that "there was a sudden drop in voltage that caused a 0.07-second power interruption at Toshiba's Yokkaichi memory-chip plant in Mie prefecture" causing a problem which Toshiba said will reduce its shipments of NAND flash memory by 20% for the next two months. This, the Journal article says, would translate into a 7.5% reduction in world-wide shipments through February.

NAND memory is used in everything from USB flash drives to MP3 players to digital cameras to smartphones and tablet PCs. [...]

The WSJ article also says that apparently the uninterruptible power supply system at the Toshiba plant failed when the region was hit by a drop in voltage, causing the chips being fabricated to be ruined.

http://www.evolvingexcellence.com/blog/2010/12/the-value-of-007-seconds.html :

Toshiba's troubles started early Wednesday when, according to power supplier Chubu Electric Power Co., there was a sudden drop in voltage that caused a 0.07-second power interruption at Toshiba's Yokkaichi memory-chip plant in Mie prefecture.

Even the briefest power interruption to the complex machines that make chips can have an effect comparable to disconnecting the power cord on a desktop computer, since the computerized controls on the systems must effectively be rebooted, said Dan Hutcheson, a chip-manufacturing analyst at VLSI Research in San Jose, Calif.

For that reason, chip companies typically take precautions that include installing what the industry calls uninterruptible power supplies. Part of Toshiba's safeguards didn't work this time because the voltage drop was more severe than what the backup system is designed to handle, a company spokesman said.

Power outages frequently cause damage to chips, which are fabricated on silicon wafers about the size of dinner plates that may take eight to 12 weeks to process, Mr. Hutcheson said. Wafers that are inside processing machines at the time of an outage are often ruined, he added, though many that are in storage or in transit among those machines are not.

Replies from: gwern
comment by gwern · 2012-03-10T18:58:53.054Z · LW(p) · GW(p)

That's a very good example. I'll add that and the Philips fire as additional examples next to the floods.

comment by RolfAndreassen · 2012-03-10T06:19:29.235Z · LW(p) · GW(p)

Norwegian construction reports (with their long experience in underground construction) for power stations and waste treatment plants indicate that underground construction premiums are more like 1.25x and may even save money in the long run. These hardenings, however, are not robust against nuclear or bunker-busting bombs, which require overburdens up to 2000 feet.

Ok, but what kind of actor are you envisioning as wanting to blow up the fabs? Nukes and bunker-busters seem to me to indicate nation-states, which - if genuinely convinced of the dangers of Moore's Law - have all kinds of other options available to them, like regulation. (Just look at what they've done to nuclear power plants...) If you decided to go terrorist and had access to a nuke, would a chip fab really be your highest-priority target?

Replies from: gwern
comment by gwern · 2012-03-10T15:26:27.431Z · LW(p) · GW(p)

The usual argument made against the strategy of regulation is that the economic pressures would drive production underground, possibly literally, frustrating any attempt at regulation or enforcement.

My point is that other economic pressures force consolidation into a few efficient and extremely expensive & vulnerable facilities, which can be affected by ordinary military mechanisms - hence, regulation + military-level enforcement may well work in contrast to the usual pessimism.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-03-10T20:37:25.962Z · LW(p) · GW(p)

I have got to say that when you're talking about capital requirements in the billions, and construction of the sort where you use the world's largest cranes, and a need for clean-room facilities, I have a really hard time seeing how production could go underground. Explosives, drugs, and weaponised bacteria, yes; these are all technologies that were known well before 1800, you'll note. Chips? I really don't see it.

Did you perhaps mean that the fabricators will go for unregulated markets, but build openly there? Possible, but they still have to sell their products. I suggest that the usual smuggling paths are not going to be very useful here, because where is the demand for forbidden hardware? Drugs go through porous borders because they'll make money on the other side. But high-performance chips? Are you suggesting a black market of AI researchers, willing to deal with criminal syndicates if only they can get their fix of ever higher performance? And even if you are, note again the capital requirements. Any dirt farmer can set up to grow opium, and many do. If drugs required as much capital, and all in one place at that, as a modern fab does, I suggest that the War On Drugs would be over pretty quickly.

Really, hardening against nukes seems like a completely wrong approach unless you're suggesting that the likes of China would be hiding this fab from the US. For private actors the problem is in finding, not blowing up. If you have military-level enforcement, nukes (!) are just total overkill; send in a couple of platoons of infantry and be done. What are the illegal fabbers going to do, hire the Taliban as plant security? (I mean, maybe they would, and we all know how that fight would go, right?)

I think you've got, not the wrong end of the stick, but the wrong stick entirely, when you start talking about nuclear hardening.

Replies from: gwern
comment by gwern · 2012-03-10T20:52:28.041Z · LW(p) · GW(p)

It does seem implausible that any non-state actor could set up an underground chip fab; but someone is going to suggest just that, despite the idiocy of the suggestion when you look at what cutting-edge chip fabs entail, so I have to argue against it. With that argument knocked out of the way, the next issue is whether a state actor could set up a cutting-edge chip fab hardened or underground somehow, which is not quite so moronic and also needs to be argued against.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-03-11T05:47:22.430Z · LW(p) · GW(p)

Ok, but then why not point out what I just did, which is that anyone but a state building such a thing is quite implausible? Pointing out that it takes a lot of hardening to protect against nukes just seems like a mis-step.

I think perhaps you need to make clearer who you envision as wanting to stop Moore's Law. Are we talking about private actors, ie basically terrorists with the resulting budget limitations; or states with actual armies?

comment by JoshuaZ · 2012-03-10T15:19:24.091Z · LW(p) · GW(p)

A variety of links are broken - these include the link about suppression of firearms, the Cheyenne Mountain link, the 2011 Thailand flood link, and the experience curve effect. It appears that something has messed up all the links that were to Wikipedia.

This piece seems to be proposing a solution to something that isn't obviously the thing to worry about. There are a variety of other threats to long-term human survival that require technological improvement to combat. Three obvious issues are asteroid impacts, large-scale disease (due to modern infrastructure allowing the fast spread of otherwise localized diseases), and resource limitations (such as running out of easily accessible fossil fuels). Some of these are not directly connected to chip improvements - the Apollo program happened with mid-1960s technology, and it is likely that the technological barriers to dealing with an annoying asteroid or comet are not strongly connected to computer tech level. However, others are not so limited - better computers mean better treatment of disease from better drug design and detection. Similarly, more efficient chips mean less use of oil (since less energy cost for the same computation) and less use of rare earth elements (which, while not actually rare, are distributed in ways that make them inefficient to obtain except in specific locations).

In general, the worry here is not just existential threats by themselves: if an event or series of events puts us back a few centuries, it isn't obvious that we will have the resources to bootstrap ourselves back up to current levels. Easily accessible oil and coal played a major part in allowing the technological and infrastructural improvements of the last two centuries. While we will likely have coal reserves for a long time, and to a lesser extent oil reserves, none of the remaining reserves are nearly as accessible as those used in the 19th century or the first half of the 20th century.

Finally, it is very hard to suppress one area of technology in a narrow fashion without suppressing others as well. Many other areas require heavy-duty computation, and computational power is a major limiting factor in many areas of biology, astronomy, math, and physics. Your essay uses the Tokugawa period as an example of a technology being given up. However, there's a fair bit of controversy over how much guns were actually suppressed, and the point has been made that the Edo/Tokugawa period was relatively peaceful. More to the point in this context, almost no scientific or technological research was occurring in Japan of any sort until the Meiji restoration.

Replies from: gwern, Thomas
comment by gwern · 2012-03-10T15:33:12.687Z · LW(p) · GW(p)

However, others are not so limited - better computers mean better treatment of disease from better drug design and detection. Similarly, more efficient chips mean less use of oil (since less energy cost for the same computation) and less use of rare earth elements (which, while not actually rare, are distributed in ways that make them inefficient to obtain except in specific locations).

This is true. I'm not claiming that ending Moore's law via regulating or attacking chip fabs would only affect brain uploads & de novo AGI, without affecting any other existential threat. The question here is whether the chip fabs are vulnerable and whether they would affect uploads, which I think I've established fairly well.

It's not clear to me how the latter would go: nanotech and bioterrorism both seem to be encouraged by widespread cheap computing power, and forcing research onto highly supervised grant-paid-for supercomputers would both slow it down and make it harder for a rogue researcher (as compared to running it on his own laptop), but the slowdown in global economic growth has horrific opportunity costs involved.

Hence, whether this is a strategy anyone would ever actually want to use depends on some pretty difficult utilitarian calculations.

However, there's a fair bit of controversy over how much guns were actually suppressed, and the point has been made that the Edo/Tokugawa period was relatively peaceful. More to the point in this context, almost no scientific or technological research was occurring in Japan of any sort until the Meiji restoration.

Yes, I've read about that. Even the contrarians admit that guns were hardly used and locally manufactured guns were far behind the local state of the art.

comment by Thomas · 2012-03-10T18:01:37.878Z · LW(p) · GW(p)

to dealing with an annoying asteroid or comet are not strongly connected to computer tech level.

I think it is. To answer the question "What is the minimal action needed to avert all the near-Earth objects for a long time?" would take a lot of computing. And the computed answer might be "Just send a rocket with mass M, at time T, from location L, in direction D, with speed S - and it will meet enough of those objects and redirect them, so that Earth is safe for at least the next 100 years."

If such a trajectory exists at all, it could be calculated with enough computing power at hand. If it doesn't exist, there is some minimal number of rockets that would suffice, and that could be calculated as well.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-03-10T18:50:32.333Z · LW(p) · GW(p)

Even if one had near-indefinite computing power, making such a calculation would be extremely difficult simply due to the lack of precision of observations. Small changes in the estimated size or trajectory of an asteroid could have drastic effects on its long-term behavior. Comets are even more ill-behaved. The limiting factor in such calculations would be at least as much observational as computational.

Moreover, since large impacts are extremely rare threats, dealing with individual impact threats as they arise is a much better strategy.

comment by Dmytry · 2012-03-10T12:43:07.004Z · LW(p) · GW(p)

TBH I'd rather share my planet with the mind-uploaded folks than with the folks who bomb factories. Both are potentially non-friendly non-me intelligences, except the latter type is for certain non-friendly, while the former type might end up friendly enough in the sense in which corporations and governments - as meta-organisms - are friendly enough.

comment by Dmytry · 2012-03-11T07:05:48.769Z · LW(p) · GW(p)

Looking at the essay more, I would say: the chip fabrication labs are amazingly cheap, and even highly coordinated efforts by all the terrorist groups could not make even a small dent in progress (excluding scenarios such as a big nuclear war).

The several billion dollars apiece for a fab is cheap. The margins may be thin when everyone else, too, is making chips. But I can pay 10x of what I am paying for my electronic equipment, if need be, and chances are you can too if you are posting here. The income distribution being what it is (power law, see the Pareto distribution), the costs can be raised massively while retaining the revenue, as long as cheap alternatives are not available. More than 50% of your customers can and would pay >2x (if the alternatives weren't available). Think about it. 20% own 80% of everything (or even more skewed), and it's like this all the way to the top. With this kind of income distribution, the luxury market alone can support the entire industry along with progress. Look at Apple, and how successful it is even though its products are not all that popular worldwide - they make a huge margin.
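
A rough numerical sketch of this argument (the Pareto shape, income scale, and buy-if-affordable rule below are all illustrative assumptions, not data):

    import numpy as np

    # Illustrative only: assume incomes follow a Pareto distribution with the
    # classic "80/20" shape (alpha ~ 1.16), and that a person buys one device
    # if and only if it costs less than a fixed fraction of their income.
    rng = np.random.default_rng(0)
    alpha, n = 1.16, 1_000_000
    incomes = (rng.pareto(alpha, n) + 1) * 20_000   # arbitrary income scale

    def revenue(price, budget_fraction=0.02):
        buyers = incomes * budget_fraction >= price
        return price * buyers.sum()

    base = revenue(500)   # hypothetical baseline device price
    for mult in (1, 2, 5, 10):
        print(f"{mult:>2}x price: revenue = {revenue(500 * mult) / base:.2f} of baseline")

With those made-up parameters, a 10x price increase cuts unit volume by roughly an order of magnitude but leaves total revenue within a few tens of percent of the baseline, which is the shape of the claim above; different assumed parameters move the exact figures.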

Anyhow, with regard to de novo AGI: really, all we have are extremely shaky guesses as to whether de novo AGI would be more or less disruptive than mind uploads, arguments which have extremely low external probabilities. Due to this uncertainty, bombing fabs would result in a negligible change in expected post-AI utility, as calculated by any rational agent that does not delude itself with regard to the strength of its evidence (and argumentation) towards de novo AGI being safer than mind uploads. (Note that a negligible difference in expected value is different from a negligible expected square of the difference.)

(Pascal's mugging, I think, is only a problem due to our self-delusion with regard to the strength of predictions about a superpowerful entity based on what the superpowerful entity says; it may be that superpowerful entities, too, would usually lie when doing a Pascal's mugging, and sufficiently often lie in an evil way involving eternal tortures when you accept what they say. This overconfidence in our own predictions is a huge problem, manifesting itself not just in theoretical examples like Pascal's mugging but in realistic and soon-to-be-practical examples like attitudes towards de novo AGI vs. mind uploads.)

Replies from: gwern
comment by gwern · 2012-03-11T17:57:14.568Z · LW(p) · GW(p)

But I can pay 10x of what I am paying for my electronic equipment, if need be, and chances are you can too if you are posting here.

I can't. And as already pointed out, costs for large tech companies or datacenters are already dominated by such apparently trivial factors as energy costs - a 10x increase in raw processor price would be catastrophic.

The income distribution being what it is (power law, see the Pareto distribution), the costs can be raised massively while retaining the revenue, as long as cheap alternatives are not available. More than 50% of your customers can and would pay >2x (if the alternatives weren't available). Think about it. 20% own 80% of everything (or even more skewed), and it's like this all the way to the top. With this kind of income distribution, the luxury market alone can support the entire industry along with progress. Look at Apple, and how successful it is even though its products are not all that popular worldwide - they make a huge margin.

/sees a lot of vague economic buzzwords, no specific arguments or data on price elasticity

You will notice that the processor companies are only able to engage in price discrimination up to around $1000 a processor or so. (The most expensive Intel processor I could find at Newegg was $1050.) This suggests you are wrong.

Replies from: Dmytry
comment by Dmytry · 2012-03-11T18:19:20.211Z · LW(p) · GW(p)

I can't. And as already pointed out, costs for large tech companies or datacenters are already dominated by such apparently trivial factors as energy costs - a 10x increase in raw processor price would be catastrophic.

I'm not sure this would apply if the 10x increase in price applied to everyone else as well. We wouldn't pay Google real money while there's Bing for free, but we would have to if there were no competition. edit: also, regarding the energy cost: (a) it's by no means trivial, and (b) it only goes to show that if CPUs got 10x more expensive it wouldn't be such a huge rise in final operational costs.

You will notice that the processor companies are only able to engage in price discrimination up to around $1000 a processor or so. (The most expensive Intel processor I could find at Newegg was $1050.) This suggests you are wrong.

Competition from the lower-range products.

edit: actually, I want to clarify somewhat what paying 10x of what I am paying would mean. I would probably upgrade home hardware less often, especially at first (but a factor of 10 is only about 5 years of Moore's law). The hardware for business is another matter entirely. It is not at all expensive to pay Google >>10x of what it makes off you via showing you ads. The ads are amazingly bad at generating income. They still provide quite enough for Google.
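
As a quick sanity check on that "5 years" figure (assuming, purely for illustration, a doubling of performance per dollar every 18 to 24 months):

    import math

    # How long does a 10x improvement take under an assumed doubling period?
    for doubling_years in (1.5, 2.0):   # assumed 18- and 24-month doubling times
        years = math.log2(10) * doubling_years
        print(f"doubling every {doubling_years} yr -> 10x in about {years:.1f} years")

That gives roughly 5 to 6.5 years, so the parenthetical above is in the right range under the 18-month assumption.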

I pay ~$20/month for 100 megabits/second up and down (Lithuania, the world's #1 internet speed). From what I know (my girlfriend is from the US), typical costs in the US are $50 for perhaps 10 megabits on a typical day; god knows how much equally reliable 100-megabit service costs there. And this isn't hardware costs, because hardware is not any cheaper here than in the US. It's probably the labour costs.

My brother lives in Russia. They make way less money there; they still buy computers; if computers were to cost 10x more, they wouldn't be able to afford them; but people in the developed countries would merely have to save up a bit.

edit: And with regard to purely luxury spending: if diamond prices became 10x higher, people would spend about the same on diamonds while buying 1/10 as many, and the diamond industry would have about the same income (actually, I could expect the diamond industry's income to rise, because more expensive diamonds would then be possible in rings without mechanical considerations). The diamonds in jewellery are inherently only worth as much as you pay for them. Now apply this to chips. Picture a semiconductor industry that, in total, makes as much as it makes today (or more, due to inelasticity of consumption), while having to produce only 1/10 of the chip surface area. That goes for the luxury market. The non-luxury market can't cut down consumption as much, and when the price rise affects everyone else just as well, it can pass the costs on to the consumer (see the paid-Google example).

comment by CasioTheSane · 2012-03-10T04:49:09.728Z · LW(p) · GW(p)

Are you actually suggesting that people attack chip fab plants in an attempt to prevent WBE from occurring before de novo AGI?

I think if you were successful, you'd be more likely to prevent either from occurring than to prevent WBE from occurring first. It takes a whole lot of unfounded technological optimism to estimate that friendly de novo AGI is simple enough that an action like this would make it occur first, when we don't even know what the obstacles really are.

Replies from: gwern
comment by gwern · 2012-03-10T04:55:20.031Z · LW(p) · GW(p)

Are you actually suggesting that people attack chip fab plants in an attempt to prevent WBE from occurring before de novo AGI?

Maybe. What do your probability estimates and expected value calculations say?

It takes a whole lot of unfounded technological optimism to estimate that friendly de novo AGI is simple enough that an action like this would make it occur first

It would take a lot of optimism. Good thing I never suggested that.

Replies from: CasioTheSane
comment by CasioTheSane · 2012-03-10T05:28:04.200Z · LW(p) · GW(p)

What do your probability estimates and expected value calculations say?

I agree with your assessment that this would effectively delay WBE, and therefore increase the chances of it occurring last, but I can't even guess at how likely that is to actually be effective without at least a rough estimate of how long it will take to develop de novo AGI.

Your idea is very interesting, but I'll admit I had a strong negative emotional reaction. It's hard for me to imagine any situation under which I would sit down and do some calculations, check them over a few times, and then go start killing people and blowing stuff up over the result. I'm not prepared to join the Rational Liberation Front just yet.

//edit: I also really want to see where brain emulation research goes, out of scientific curiosity.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-03-10T06:47:29.021Z · LW(p) · GW(p)

//edit: I also really want to see where brain emulation research goes, out of scientific curiosity.

This seems understandable, but I hope it isn't a significant factor in your decision making...

comment by Epiphany · 2012-10-09T08:06:29.521Z · LW(p) · GW(p)

This is haunting the site. I see that your perspective is: "Does this imply that regulation could be accomplished by any modestly capable group, such as a Unabomber imitator or a souped-up ITS? No (reasons)" and that your position is that terrorism is not effective. However, I have found several mentions around the site of people being creeped out by this article. Here is the last mention of someone being creeped out that I noticed. I think there is a serious presentation problem with this piece that goes like this:

  1. Person clicks article title thinking "This is going to be about ways that Gwern thinks are good ideas to stop Moore's Law". Most of them do not know that Gwern thinks terrorism is not effective.

  2. Person reads "the advent of brain emulation can be delayed by attacks on chip fabs."

  3. Person assumes attacker will be a terrorist, because that's the reflexive reaction after hearing that term so much in the media.

  4. Person thinks "Gee, a guy who thinks terrorism is a good idea. I'm outta here!"

  5. Person never reads far enough to realize that Gwern's conclusion is that only a conventional military attack would work to stop chip fabs.

Please fix this. My suggestion is to do the following three things:

  1. If you change the title, they won't interpret any of the scenarios you analyzed as "My idea of a great way to stop Moore's Law." For instance "Things that would and wouldn't work to stop Moore's Law" sounds more like an exploration of the possibilities, which is what you seem to have intended, than "How would you stop Moore's law?" which sounds like you're setting out to stop it.

  2. There's a lot of mindkill about terrorism. If you state in the very beginning, before mentioning anything about attacks, that your view is that terrorism is not effective, and link to your article on your site, I think that will inoculate against people jumping to that conclusion while they're reading it. Without a blatant statement against terrorism, this is probably going to trigger mindkill for a lot of people.

  3. The beginning of the article is a little bit confusing. I think if you introduced the possibilities you'll be going over before diving in, and stated your point in the beginning, it would be clear what your intent is. For instance: "I analyzed several different ways that people might try to stop Moore's law. Terrorism would not be effective but a conventional military assault could be."

Replies from: gwern
comment by gwern · 2012-10-14T21:48:22.563Z · LW(p) · GW(p)

I wasn't sure that this was worth acting on, but I see that another person seems to be taking it the wrong way, so I guess you are right. I've done the following:

  • Substantially edited the summary here and there to make the logic clearer and mention upfront that terrorism won't work
  • Changed the title of the essay
  • Deleted the body here so people will have to read the full up-to-date version and not whatever old version is here
  • Reworked the intro sections to give more background and a hopefully more logical flow
Replies from: Epiphany
comment by Epiphany · 2012-10-14T22:32:58.680Z · LW(p) · GW(p)

Oh thank goodness you did something about this! I guess you didn't read every comment on your thread, or you just didn't take rwallace seriously at first, but rwallace actually decided to quit LessWrong because of your essay. You can tell for sure because that's the last thing they said here and they haven't posted anything since March: http://lesswrong.com/user/rwallace/

Maybe somebody should let them know... since they don't come to the site anymore, that would be hard, but if you know who the person's friends are, you could ask if they'll pass the message on.

You know, it's really hard to tell how people will take one's writing before it is posted. If you'd like, I will read your stuff before you post it if you'll read mine - we can trade each other pages 1 for 1. That should reduce the risk of this happening to a much lower level.

Replies from: gwern
comment by gwern · 2012-10-14T22:39:22.532Z · LW(p) · GW(p)

Maybe somebody should let them know... since they don't come to the site anymore, that would be hard, but if you know who the person's friends are, you could ask if they'll pass the message on.

I think that would be pretty pointless; if he could think that after reading the original, the amendments aren't going to impress him. If he's that careless a reader, LW may be better off without him. (I did read his comment: I subscribe via RSS to the comments on every article I post.) If you were to track him down and ask him to re-read, I'd give <35% that he'd actually come back and participate (where participate is defined as eg. post >=1 comment a month for the next 6 months).

If you'd like, I will read your stuff before you post it if you'll read mine - we can trade each other pages 1 for 1. That should reduce the risk of this happening to a much lower level.

Nah, I'm fine with #lesswrong as 'beta readers', as it were.

Replies from: Epiphany
comment by Epiphany · 2012-10-18T06:38:54.388Z · LW(p) · GW(p)

If he's that careless a reader, LW may be better off without him.

I don't think the problem was careless reading. When you open with a comment about attacking chip fabs without specifying that you mean a government-level military, and your audience is mainly in a country where everyone has been bathed in the fear of terrorism for years, this is bound to trigger mindkill reactions. You could argue "Good LW'ers should stay rational while thinking about terrorism," but aside from the fact that everyone has flaws and that's a pretty common one, more importantly, you seem to overlook the fact that the entire rest of the world can see what you wrote. Humans aren't known for being rational. They're known for doing things like burning "witches" and poisoning Socrates. In this time and place, you're less likely to get us killed by an angry mob, but you could easily provoke the equivalent of that in negative attention. Reddit and The Wall Street Journal have both done articles on Less Wrong recently. Big fish are starting to pay attention to LessWrong. That means people are paying attention to YOU. Yes, you. This post has gotten 1,344 page views. One of Luke's older posts got 200,000 views. (Google analytics). For contrast, a book is considered a bestseller if it sells 100,000 copies.

YOU could end up getting that much attention on this site, Gwern, and the attention is not just from us. There are only about 13,000 registered users in the user database, and only 500-1000 of them are active in a given month. That just doesn't account for all of the traffic to the posts.

Even if it were true that all the people who misread this are schmoes, choosing to leave the site over it may be a perfectly valid application of one's intelligence. Not associating one's reputation with a group that is mistakenly thought to be in favor of terrorism is a perfectly sane survival move. I wondered if I should quit myself, before deciding to suggest that you edit the post.

Considering the amount of attention that LessWrong is getting, and the fact that you are a very prominent member here whose words will be taken (or mistaken) as an indication of the group's mentality, do you not think it's a good idea to avoid making others look bad?

Replies from: gwern, None
comment by gwern · 2012-10-18T14:59:56.308Z · LW(p) · GW(p)

Mm, maybe. It is difficult for me to see such things; as I pointed out in another comment, before I wrote this, I spent scores of hours reading up on and researching terrorism and indeed posted that to LW as well; to me, terrorism is such an obviously useless approach - for anything but false flag operations - that nothing needs to be said about it.

That means people are paying attention to YOU. Yes, you. This post has gotten 1,344 page views. One of Luke's older posts got 200,000 views. (Google analytics). For contrast, a book is considered a bestseller if it sells 100,000 copies.

Page views are worth a lot less than an entire book sale, IMO - one will spend much more time on a book than even a long essay. 1,344 page views doesn't impress me. For example, for this October, gwern.net had 51x more, or 69,311 total page views. The lifetime total for this essay on my site is already at 7,515, and most of that is from before I deleted the version here, so I expect that will boost the numbers a bit in the future.

Replies from: Epiphany
comment by Epiphany · 2012-10-19T02:54:05.573Z · LW(p) · GW(p)

Page views are worth a lot less than an entire book sale, IMO

Agreed, especially if deciding things like whether to invest in publishing a particular author's new book. However, my purpose was just to make the number seem more real. Humans have problems with that - "One death is a tragedy, a million deaths is a statistic." as they say. I think it was an okay metaphor for that purpose.

I'm not trying to say Luke's article is a "bestseller" (in fact it has a bounce rate of about 90%), just that LW posts can get a lot of exposure. So even if the standard is that LW members should be rational enough not to mindkill on posts like that one, we should probably care if non-rationalists from the world at large are mind-killing on stuff written here.

comment by [deleted] · 2012-10-18T06:59:47.857Z · LW(p) · GW(p)

Big fish are starting to pay attention to LessWrong. That means people are paying attention to YOU. Yes, you. This post has gotten 1,344 page views.

So, much less traffic than gwern.net gets in a month, on an arguably less controversial topic than the usual gwern.net fare.

If you use LessWrong for beta testing, you're not just getting a critique from a handful of friends, you're informing the entire world about who LessWrong is.

He's using the IRC channel, #lesswrong, as his beta testers. #lesswrong is a different thing from LessWrong.

Replies from: Epiphany
comment by Epiphany · 2012-10-18T07:05:53.095Z · LW(p) · GW(p)

So, much less traffic than gwern.net gets in a month, on an arguably less controversial topic than the usual gwern.net fare.

Then it's very odd that he doesn't seem to care that people are mistaking him as being in support of terrorism.

He's using the IRC channel, #lesswrong, as his beta testers. #lesswrong is a different thing from LessWrong.

Oh dear.

I assumed from the context (the fact that this thing got out onto the site without him appearing to know / care that people would think it was pro terrorism) that he was referring to the website.

Does everyone here have Asperger's or something?

Note: I removed the part in my post that referred to using LW as beta testers.

Replies from: wedrifid, None
comment by wedrifid · 2012-10-18T10:54:05.919Z · LW(p) · GW(p)

Does everyone here have Asperger's or something?

No, almost certainly less than 90% of the people here have Asperger's!

Replies from: Epiphany
comment by Epiphany · 2012-10-18T17:24:59.840Z · LW(p) · GW(p)

I'm confused about how this happened. Edit: I think I figured it out.

comment by [deleted] · 2012-10-18T07:11:30.094Z · LW(p) · GW(p)

Does everyone here have Asperger's or something?

It would be incredibly improbable. Not-so-subtly suggesting your interlocutors aren't neurotypical is such a wonderful debate tactic, though; it'd be a pity to let the base rate get in the way.

Replies from: Epiphany
comment by Epiphany · 2012-10-18T07:20:49.948Z · LW(p) · GW(p)

I'm genuinely confused at this point and just trying to figure out how this happened. From my point of view, the fact that this got posted without him realizing that it was going to be mistaken as a pro-terrorism piece is, by itself, surprising. That it was beta tested by other LW'ers first and STILL made it out like this is even more surprising.

I'm not trying to convince you of anything, paper-machine. This isn't a debate. I am just going WTF.

Replies from: Kindly, cata
comment by Kindly · 2012-10-18T13:24:30.630Z · LW(p) · GW(p)

I think you should consider the hypothesis that you are over-reacting before the hypothesis that lots of different beta readers are all under-reacting.

(Which in turn is more likely than the hypothesis that the beta readers have a neurological condition that causes them to under-react.)

Replies from: Epiphany
comment by Epiphany · 2012-10-18T19:16:04.281Z · LW(p) · GW(p)

I think you should consider the hypothesis that you are over-reacting

Except that I didn't over-react. I wasn't upset. I just went "Is this a piece endorsing terrorism?", looked into it further, realized this interpretation was false, and wandered away for a while.

Then I saw mention after mention around the site saying that people were creeped out by this piece.

I came back and saw that someone had left because of it - like, for real, as in they haven't posted since they said they were leaving due to the piece. And then I went "Wow a lot of people are creeped out by this. This is making LessWrong look bad. Even if it IS a misinterpretation, thinking that this post supports terrorism could be a serious PR problem."

My position is still that beta testers should ideally catch any potential PR disasters, and I don't think that's an over-reaction. At all.

(Which in turn is more likely than the hypothesis that the beta readers have a neurological condition that causes them to under-react.)

For the record, even though it did occur to me for a moment as a possible explanation, I didn't say that because I really believed it was likely that everyone here has Asperger's. That would be stupid. I said it as an expression of surprise. I figured it would be obvious that it was an expression of surprise and not a rational assessment.

I think my surprise was due to hindsight bias.

Replies from: Kindly
comment by Kindly · 2012-10-18T21:19:49.095Z · LW(p) · GW(p)

My position is still that beta testers should ideally catch any potential PR disasters, and I don't think that's an over-reaction. At all.

To be specific, the hypothesis I am suggesting is that you are now, currently, over-reacting by calling this a "potential PR disaster".

Replies from: Epiphany
comment by Epiphany · 2012-10-19T01:15:28.061Z · LW(p) · GW(p)

I really didn't expect that. As I see it, a post that multiple people took as being in support of terrorism and that somebody quit over is definitely sensational enough to generate a buzz. Surely you have seen reporters take things out of context. Eliezer has already been targeted for a hatchet job by one reporter.

There was once an occasion where a reporter wrote about me, and did a hatchet job. It was my first time being reported on, and I was completely blindsided by it. I'd known that reporters sometimes wrote hatchet jobs, but I'd thought that it would require malice—I hadn't begun to imagine that someone might write a hatchet job just because it was a cliche, an easy way to generate a few column inches. So I drew upon my own powers of narration, and wrote an autobiographical story on what it felt like to be reported on for the first time—that horrible feeling of violation. I've never sent that story off anywhere, though it's a fine and short piece of writing as I judge it.

For it occurred to me, while I was writing, that journalism is an example of unchecked power—the reporter gets to present only one side of the story, any way they like, and there's nothing that the reported-on can do about it. (If you've never been reported on, then take it from me, that's how it is.) And here I was writing my own story, potentially for publication as traditional journalism, not in an academic forum. I remember realizing that the standards were tremendously lower than in science. That you could get away with damn near anything, so long as it made a good story—that this was the standard in journalism. (If you, having never been reported on yourself, don't believe me that this is the case, then you're as naive as I once was.)

RationalWiki sometimes takes stuff out of context. For instance, the Eliezer facts thread has a "fact" where an LWer edited a picture of him speaking beside a diagram that shows a hierarchy of increasingly intelligent entities including animals, Einstein, and God. The LWer added Eliezer to the diagram, at a level well beyond God. You can see below this that Eliezer had to add a note for RationalWiki because they had apparently made the mistake of taking this photoshopped diagram out of context.

If some idiot who happens to have a RationalWiki account dropped by, or a reporter who was hard up for a scoop discovered this, don't you think it's likely they would take it out of context, either to make it more sensational or because of mindkill? I, for one, do not think there was anything special about the original post that would prevent it from becoming the subject of a hatchet job.

People act crazy when they're worried about being attacked. I have a friend who has dual citizenship. He came to visit (America) and was harassed at the airport simply because he lives in a different country and the security guard was paranoid about terrorism. I don't see this post getting LW shut down by the government or anything, but it could result in something really disappointing like Eliezer being harassed at airports, or something bad in between.

Considering all this, do you still think the risk of bad publicity is insignificant?

Replies from: Kindly
comment by Kindly · 2012-10-19T02:18:15.765Z · LW(p) · GW(p)

Considering all this, do you still think the risk of bad publicity is insignificant?

Pretty much, yeah. The opinion of RationalWiki is probably worth somewhere in between the opinion of 4chan and the opinion of Conservapedia. And people quit forums all the time; that's not something to worry about.

I see this as a case of "the original version of the article was unclear, and has been edited to make it clearer". Not a scandal of any kind.

Replies from: Epiphany
comment by Epiphany · 2012-10-19T02:27:57.741Z · LW(p) · GW(p)

So do I, to all of the above, so you apparently have more faith in humanity than I do when it comes to people taking things out of context and acting stupid about it.

comment by cata · 2012-10-18T08:24:36.196Z · LW(p) · GW(p)

There's a strong feeling in the culture here that it's virtuous to be able to discuss weird and scary ideas without feeling weirded out or scared. See: torture and dust specks, AI risk, uploading, and so on.

Personally, I agree with you now about this article, because I can see that you and the fellow above and probably others feel strongly about it. But when I read it originally, it never occurred to me to feel creeped out, because I've made myself just think calmly about ideas, at least until they turn into realities -- I think many other readers here are the same. Since I don't feel it automatically, quantifying "how weird" or "how scary" these things are to other people takes a real conscious effort; I forget to do it and I'm not good at it either.

So that's how it happens.

Replies from: Epiphany
comment by Epiphany · 2012-10-18T17:31:18.126Z · LW(p) · GW(p)

I like entertaining ideas that others find weird and scary, too, and I don't mind that they're "weird". I have nothing against it. Even though my initial reaction was "Does this guy support terrorism?" I was calm enough to investigate and discover that no, he does not support terrorism.

Since I don't feel it automatically, quantifying "how weird" or "how scary" these things are to other people takes a real conscious effort; I forget to do it and I'm not good at it either.

Yeah, I relate to this. Not on this particular piece though. I'm having total hindsight bias about it, too. I am like "But I see this, how the heck is it not obvious to everyone else!?"

You know what? I think it might be the amount of familiarity with Gwern. I'm new and I've read some of Gwern's stuff, but I hadn't encountered his "Terrorism isn't effective" piece, so I didn't have any reason to believe Gwern is against terrorism.

Maybe you guys automatically interpreted Gwern's writing within the context of knowing him, and I didn't...

comment by Eneasz · 2012-03-12T21:36:47.749Z · LW(p) · GW(p)

Isn't the first rule of Fight Club that you don't talk about Fight Club?

comment by Douglas_Knight · 2012-03-11T04:42:45.299Z · LW(p) · GW(p)

It seems to me that the key point is Moore's second law, leading to centralization. Centralized facilities are easy to sabotage. But if this law keeps going, it might end Moore's law all on its own.

If the capital expense of a fab keeps growing exponentially, pretty soon there will be a monopoly on state of the art silicon. What happens to the economics then? It seems to remove much of the incentive to build better fabs. Even if pricing keeps on as normal, the exponentially increasing cost of the fabs seems hard to finance.
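
A toy projection makes the financing squeeze concrete (the starting fab cost, industry revenue, and growth rates below are assumptions chosen for illustration, not industry figures):

    # Toy model of Moore's second law (Rock's law): leading-edge fab cost
    # doubling every ~4 years vs. industry revenue growing ~5% per year.
    fab_cost = 5e9      # assumed cost of one leading-edge fab today ($)
    revenue = 300e9     # assumed annual industry revenue today ($)

    years = 0
    while fab_cost < revenue:
        years += 1
        fab_cost *= 2 ** (1 / 4)   # cost doubles every 4 years
        revenue *= 1.05            # revenue grows 5% per year

    print(f"After ~{years} years, one fab costs a full year of industry revenue.")

Under those assumptions the crossover is a few decades out; faster cost growth or slower revenue growth pulls it much closer, which is exactly the monopoly-and-financing dynamic described above.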

The obvious solution is not to make them so big. This gives up some economy of scale, but if the alternative is not building a better fab, it will probably happen, at least for a couple of cycles. How far can this be pushed? It seems to me that this is exactly the same question that people will ask if a single big fab is destroyed and they want to decentralize.

comment by Daniel_Burfoot · 2012-03-11T00:04:44.968Z · LW(p) · GW(p)

Great analysis. I am skeptical, though, that a campaign of targeted disruption could permanently derail Moore's law as long as the economy as a whole remains strong. Markets are really good at responding to dislocations like the destruction of a chip fabrication plant: other facilities would increase production in response to higher prices, unnecessary consumption would be curtailed (think of all those college kids using their fabulously advanced computers to surf Facebook), and future facility development would incorporate the threat of attack into their designs. We might even see companies start to trade special forms of insurance against such attacks.

comment by Epiphany · 2012-10-14T22:46:50.713Z · LW(p) · GW(p)

I have some ideas for slowing Moore's law as well, and I'm wondering what you guys think of them (Gwern, too). I'm thinking of making these into posts of their own and am curious about whether they'd be well-received or what, if anything, should be done first.

comment by Rhwawn · 2012-03-10T04:30:03.572Z · LW(p) · GW(p)

Upvoted; very interesting.

comment by Epiphany · 2012-10-19T02:35:27.033Z · LW(p) · GW(p)

the advent of brain emulation can be delayed by global regulation of chip fabs

I think it might be a hard sell to convince governments to intentionally retard their own technological progress. Any country that willingly does this will put itself at a competitive disadvantage economically and defense-wise.

Nukes are probably an easier sell because they are specific to war - there's no other good use for them.

I think this might be more like Eliezer's "let it out of the box" experiments: The prospect of using the technology is too appealing to restrain it.

Also, another problem is that this is abstract. Nuclear weapons are a very tangible problem - they go boom, people die. Pretty much everyone can universally understand that.

With AI, the problems aren't so easy to understand. First of all, people have to believe AI is possible before they can believe it is a risk. Secondly, people regard IT people practically the way they'd regard a real-life wizard. I am called a genius at work for doing stupid tasks and thanked up and down for accomplishing small things that took five minutes. This is simply because others don't know how to do them. Simultaneously, it is assumed that no matter what type of IT problem I am given, I will be able to solve it. They assume a web developer can fix their computer, for instance. I can fix some problems, but I'm no computer tech.

I wonder if they don't understand the risks of AI well enough to realize that the IT people can't fix it.

And then there's optimism bias. I can't think of a potentially useful technology we've passed up because it was dangerous. Can you think of an example where that has actually happened? Or where a large number of people understood an abstract problem, believed in its feasibility, and took appropriate measures to counteract it?

I'll be thinking about this now...

Replies from: gwern
comment by gwern · 2012-10-19T15:11:50.279Z · LW(p) · GW(p)

Yes, I've pointed out most of those as reasons effective regulation would not be done (especially in China).

Replies from: Epiphany
comment by Epiphany · 2012-10-26T01:38:46.155Z · LW(p) · GW(p)

Oh, sorry about that! After this dawned on me, I just kind of skimmed the rest and the subtitle "The China question" did not trigger a blip on my "you must read this before posting that idea" radar.

What did you think of my ideas for slowing Moore's law?

Replies from: gwern
comment by gwern · 2012-10-26T01:55:27.393Z · LW(p) · GW(p)

  • Patents are a completely unworkable idea.
  • Convincing programmers might work, if we think very few programmers or AI researchers are the ones making actual progress. Herding programmers is like herding cats, so this works only in proportion to how many key coders there are - if you need to convince more than, say, 100,000, I don't think it would work.
  • PR nightmare seems to be the same thing.
  • Winning the race is a reasonable idea, but I'm not sure the dynamic actually works that way: someone wanting to produce and sell an AI, period, might be discouraged by an open-source AI, but a FLOSS AI would just be catnip to anyone who wants to throw it on a supercomputer and make $$$.
Replies from: Epiphany
comment by Epiphany · 2012-10-26T05:21:26.948Z · LW(p) · GW(p)

I wish this was on the idea comment rather than over here... I'm sorry but I think I will have to relocate my response to you by putting it on the other thread where my comment is. This is because discussing it here will result in a bunch of people jumping into the conversation on this thread when the comment we're talking about is on a different thread. So, for the sake of keeping it organized, my response to you regarding the feasibility of convincing programmers to refuse risky AI jobs is on the other thread.

comment by timtyler · 2012-03-10T18:31:50.094Z · LW(p) · GW(p)

If we wanted to shift probability towards de novo AGI (the desirability of uploads is contentious, with pro and con), then we might withhold support from hardware development or actively sabotage it.

Sabotage? Isn't that going to be illegal?

Replies from: gwern
comment by gwern · 2012-03-10T19:19:25.769Z · LW(p) · GW(p)

Depends on whether the ones doing it are also the ones making the laws.

Replies from: see, timtyler
comment by see · 2012-03-11T02:08:03.095Z · LW(p) · GW(p)

And there is the really big flaw in a regulatory scheme. Can you really think of a way to arrange international coordination against making better and faster chips? How well has international coordination on, say, carbon emissions worked? If some countries outlaw making better chips, the others are likely to see that as a place where they can get a competitive advantage. If some countries outlaw importing better chips, that too will be seen by others as a place to get advantage, by using the chips themselves. And smuggling in high-capability chips from places where they are legal will be tempting for anyone who needs computing power.

The presence of the facilities in US allies is not itself particularly useful to overcoming the problem of coordination. The power of the US over, say, Taiwan is limited to the point where seeking shelter under the protection of the nuclear arsenal of the People's Republic of China would seem a better option. There doesn't seem to me to be any plausible scenario where current governments would weigh existential risk from Moore's Law higher than existential risk of nuclear war from bombing a country that has a nuclear umbrella.

Replies from: gwern
comment by gwern · 2012-03-11T02:25:03.910Z · LW(p) · GW(p)

Looking at the pre-requisites and requirements for keeping a chip fab going, and then considering how much additional input is necessary to improve on the state of the art, I think I can safely say that stopping Moore's law is easier than nuclear nonproliferation.

And note that no rogue state has ever developed H-bombs as opposed to merely A-bombs; nor has any nation (rogue or otherwise) ever improved on the Russian and American state of the art.

Replies from: see
comment by see · 2012-03-11T06:29:53.405Z · LW(p) · GW(p)

I didn't suggest it would be physically harder than stopping nuclear proliferation. I suggested it would be politically harder. The success of the scheme "depends on whether the ones doing it are also the ones making the laws", and that means it depends on international politics.

Nuclear proliferation is a very, very bad analogy for your proposal because the NPT imposed no controls on any of the five countries that were already making nuclear weapons at the time the treaty was adopted, who between them were militarily dominant throughout the Earth. It was in the immediate self-interest of the five nuclear powers to organize a cartel on nuclear weapons, and they had the means to enforce one, too. Further, of the countries best situated to defect, many were already getting the benefit of having virtually their own nuclear weapons without the expense of their own R&D programs through the NATO Nuclear Sharing program.

On the other hand, organizing to end Moore's Law is to the immediate disadvantage of any country that signs on (much like agreeing to cuts in carbon emissions), and even any country that simply wants to buy better computers. The utterly predictable result is not a world consensus with a handful of rogue nations trying to build fabs in the dark lest they get bombed by the US. Rather, it's the major current manufacturers of semiconductors never agreeing to end R&D and implementation. And then even if they do, a country like the People's Republic of China blowing off the existential risk, then going ahead with its own domestic efforts.

Replies from: gwern
comment by gwern · 2012-03-11T17:49:31.803Z · LW(p) · GW(p)

Nuclear proliferation is a very, very bad analogy for your proposal

Actually, it's a fantastic analogy, because everyone predicted that nuclear nonproliferation would fail abysmally within decades of starting and that practically every nation would possess nukes, which couldn't have been more wrong. Chip fabs and more advanced chips are even harder than nukes, because continuing Moore's law, translated into atomic-bomb terms, would mean: "another country must not just create A-bombs, not just create H-bombs, but actually push the atomic bomb frontier exponentially, to bombs orders of magnitude more powerful than the state of the art, to bombs not just in the gigaton range but in the teraton range." This dramatically brings out just how difficult the task is.

It may theoretically be in a country's interest to build chip fabs, but chips are small & hugely fungible, so it will capture little of the surplus, in contrast to atomic bombs, which never leave its hands.

And then even if they do, a country like the People's Republic of China blowing off the existential risk, then going ahead with its own domestic efforts.

How many decades would it take the PRC to catch up? Their only existing designs are based on industrial espionage of Intel & AMD processors, as I understand it. How many decades with the active opposition of either the US or a world government, such that they can use no foreign suppliers for any of the hundreds and thousands of extremely specialized steps and technologies required to make merely a state-of-the-art chip, never mind advancing the state of the art as Moore's law requires?

comment by timtyler · 2012-03-10T22:09:56.801Z · LW(p) · GW(p)

How would the government stop Moore's law? More to the point: why would the government stop Moore's law? It seems to be working out well for them so far. They pump a fair bit of funding into tech industries via the military. They created the marketplace that helps to drive the law forwards. They seem pretty tech-friendly.

Replies from: gwern
comment by gwern · 2012-03-10T22:25:52.260Z · LW(p) · GW(p)

I am at a loss to understand how you could ask either question after reading my post.

Replies from: timtyler
comment by timtyler · 2012-03-10T23:05:31.851Z · LW(p) · GW(p)

Right. So the government is doing nothing to shut down Moore's law. Perhaps your case is not very convincing to them.

Replies from: gwern
comment by gwern · 2012-03-10T23:40:38.314Z · LW(p) · GW(p)

Indeed; governments pay very little attention to any existential threat other than nuclear warfare and maybe asteroid strikes.

Replies from: timtyler
comment by timtyler · 2012-03-11T00:53:20.534Z · LW(p) · GW(p)

Well, in the US, they do have the National Security Agency to deal with security issues. However, shutting down Moore's law seems unlikely to be one of their priorities. That would cede the initiative to other parties - which seems likely to lead to other, more immediate problems.

comment by NancyLebovitz · 2012-03-10T12:42:49.872Z · LW(p) · GW(p)

Maybe there's an intermediate possibility between WBE and de novo AI-- upload an animal brain (how about a crow? a mouse?) and set it to self-improvement. You'd still have to figure out Friendliness for it, but Friendliness might be a hard problem even for an uploaded human brain. How would you identify sufficient Friendliness when you're choosing humans to upload? I'm assuming that human ems would self-improve, even if it's a much harder problem than improving a de novo AI.

Moore's Second Law reminds me of a notion I've got that's good enough for SF: chip fabs keep getting more expensive until there's one per spiral arm.

More seriously, how stable is the Second Law likely to be? The First Law implies increasing competence at making things, and whether the First Law eventually dominates the Second or the other way around isn't obvious.

Replies from: gwern, billswift
comment by gwern · 2012-03-10T17:48:28.812Z · LW(p) · GW(p)

Maybe there's an intermediate possibility between WBE and de novo AI-- upload an animal brain (how about a crow? a mouse?) and set it to self-improvement.

It's possible, but I don't know of any reason to expect an animal brain could recursively improve. As matters stand, all we know is that groups of humans can self-improve - so we don't even know whether a single human is smart enough to self-improve as an upload. (Maybe human brains fall into ruts or inevitably degenerate after enough subjective time.) This doesn't bode well for crows or mice or lobsters, however excellent the stories they make for.

More seriously, how stable is the Second Law likely to be? The First Law implies increasing competence at making things, and whether the First Law eventually dominates the Second or the other way around isn't obvious.

It's been operating since at least the '80s as far as I can tell, and is stable out to 2015 or so. That's a pretty good run.
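(As a rough illustration of that interplay - the halving/doubling periods below are commonly quoted ballpark assumptions for the two laws, not figures from this thread - here is a minimal sketch of how the First and Second Laws pull against each other:)

```python
# Illustrative only: assumed rates, not measured industry data.
# First Law: cost per transistor halves roughly every 2 years (assumed).
# Second Law (Rock's law): fab cost doubles roughly every 4 years (assumed).
TRANSISTOR_COST_HALVING_YEARS = 2.0
FAB_COST_DOUBLING_YEARS = 4.0

for year in range(0, 31, 10):
    transistor_cost = 0.5 ** (year / TRANSISTOR_COST_HALVING_YEARS)  # relative to year 0
    fab_cost = 2.0 ** (year / FAB_COST_DOUBLING_YEARS)               # relative to year 0
    # A fab has to sell enough transistors to recoup its construction cost,
    # so the break-even volume grows as fab_cost / transistor_cost.
    print(f"year {year:2d}: transistor cost x{transistor_cost:.5f}, "
          f"fab cost x{fab_cost:.0f}, break-even volume x{fab_cost / transistor_cost:,.0f}")
```

On those assumptions neither law simply "dominates": whether new fabs stay worth building depends on whether demand (units sold) keeps growing fast enough to cover the exploding break-even volume.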

Replies from: Dmytry
comment by Dmytry · 2012-03-11T16:27:16.903Z · LW(p) · GW(p)

A single human, or cat, or other mammal, is smart enough to self-improve without being uploaded (through learning and training - we do start off pretty incapable). Think about it. It may well be enough to upload one and then just add more cortical columns over time, or apply some other simple and dumb process that does not require understanding, to make use of the built-in self-improvement ability.

comment by billswift · 2012-03-10T17:13:51.813Z · LW(p) · GW(p)

how about a crow? a mouse?

Or the lobsters in Accelerando.

comment by Thomas · 2012-03-10T21:18:15.229Z · LW(p) · GW(p)

The real question is how to accelerate it, not to stop it. And more likely to happen, too.

comment by Douglas_Reay · 2012-03-10T07:10:05.230Z · LW(p) · GW(p)

In some homes, the electricity used to power computers is already a significant fraction of total household power use. If a carbon tax were applied to the natural gas bonanza from fracking, electricity prices would discourage buying straight additional CPUs in favour of making better use of the CPU power we already have - it would simply be uneconomic for a company to run 100 times the CPU power it currently does.

EDITED TO ADD:

I guess I should expand on my reasoning a little.

Moore's law continues, in part, because there is demand for additional computing power (as well as the expectation that it will keep getting cheaper). Reduce the demand, and you reduce the pressure for the law to be continued.

There's near-infinite demand for increased computing power, so trying to suppress the demand side of the equation might at first seem a dead loss. But compare it to cars. People like cars that can drive fast. So why isn't the top speed of all cars 180 mph, now that technology can achieve that and more? The reason isn't just the purchase price of the car (which is what Moore's law covers - the price per unit of CPU power purchased). The reason is also the cost of running the car. Engines designed for speed use lots of fuel. Having a 4-litre engine in their car is uneconomic for many families.

Keeping a server on for a year costs about $200 in electricity (source). Unless energy efficiency per CPU dramatically improves, a server with 10 times the computing power would be more expensive to run, one with 100 times would be more expensive still, and so on. At some point, more computing power (desirable though it is) is no longer affordable in terms of what it costs to run. And the higher energy prices are, the sooner we reach that point.
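(A back-of-the-envelope version of that running-cost point - the wattage and electricity price below are assumptions picked to roughly reproduce the ~$200/year figure, not numbers from the linked source:)

```python
# Rough annual electricity cost of a server, and how it scales if compute
# (and hence, absent efficiency gains, power draw) is multiplied.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10   # assumed US$ per kWh
BASE_WATTS = 230       # assumed average draw; gives ~$200/year at that price

def annual_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

for multiple in (1, 10, 100):
    print(f"{multiple:3d}x the compute at fixed efficiency: "
          f"${annual_cost(BASE_WATTS * multiple):,.0f}/year")
```

At fixed efficiency the running cost scales linearly with the compute, which is the point: purchase price is only part of the equation.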

I mentioned fracking because that's the reason the American price of natural gas (which fuels many power stations) has more than halved in the last few years. A carbon tax might be one way of reversing that price drop.

I know, not a major effect, but it could be a contributing factor and give us a few years' grace. shrugs

Replies from: ZankerH
comment by ZankerH · 2012-03-12T11:39:47.610Z · LW(p) · GW(p)

Actually, the current trend in CPU development is minor (10%-20%) performance increases at the same or lower power usage levels. The efficiency is improving dramatically.
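(To see how that compounds, a tiny sketch - the 15% per generation and two-year cadence are assumptions within the range mentioned above, not measured figures:)

```python
# Compounding a modest per-generation performance-per-watt gain.
GAIN_PER_GENERATION = 0.15   # assumed, within the 10%-20% range above
YEARS_PER_GENERATION = 2     # assumed cadence

perf_per_watt = 1.0
for generation in range(1, 6):
    perf_per_watt *= 1 + GAIN_PER_GENERATION
    print(f"after {generation * YEARS_PER_GENERATION:2d} years: "
          f"{perf_per_watt:.2f}x performance per watt")
```

At 15% per generation, performance per watt roughly doubles in a decade - which is the sense in which efficiency gains undercut the running-cost argument above.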

comment by timtyler · 2012-03-10T22:16:24.054Z · LW(p) · GW(p)

There are 2 main routes to Singularity, brain emulation/upload and de novo AGI.

Brain emulations are a joke. Intelligence augmentation seems much more significant - though it is not really much of an alternative to machine intelligence.

Replies from: CasioTheSane
comment by CasioTheSane · 2012-03-11T03:23:45.340Z · LW(p) · GW(p)

Why would you think they're a joke? We seem to be on a clear path to achieve it in the near future.

Replies from: timtyler
comment by timtyler · 2012-03-11T12:55:34.155Z · LW(p) · GW(p)

As a route to machine intelligence they don't make sense - because they will become viable too late - they will be beaten.

Replies from: CasioTheSane
comment by CasioTheSane · 2012-03-12T03:33:02.598Z · LW(p) · GW(p)

How do you know that?

Replies from: timtyler
comment by timtyler · 2012-03-12T11:25:36.222Z · LW(p) · GW(p)

Multiple considerations are involved. One of them is to do with bioinspiration. To quote from my Against Whole Brain Emulation essay:

Engineers did not learn how to fly by scanning and copying birds. Nature may have provided a proof of the concept, and inspiration - but it didn't provide the details the engineers actually used. A bird is not much like a propeller-driven aircraft, a jet aircraft or a helicopter.

The argument applies across many domains. Water filters are not scanned kidneys. The Hoover Dam is not a scan of a beaver dam. Solar panels are not much like leaves. Humans do not tunnel the way moles do. Submarines do not closely resemble fish. From this perspective, it would be very strange if machine intelligence were much like human intelligence.

Replies from: CasioTheSane, CarlShulman
comment by CasioTheSane · 2012-03-13T04:58:57.634Z · LW(p) · GW(p)

The existence of non-biomimetic technology does not prove that biomimetics are inherently impractical.

There are plenty of recent examples of successful biomimetics:

- Biomimetic solar: http://www.youtube.com/watch?v=sBpusZSzpyI
- Anisotropic dry adhesives: http://bdml.stanford.edu/twiki/bin/view/Rise/StickyBot
- Self-cleaning paints: http://www.stocorp.com/blog/?tag=lotusan
- Genetic algorithms: http://gacs.sourceforge.net/

The reason we didn't have much historical success with biomimetics is that biological systems are far too complex to understand with a cursory look. We need modern bioinformatics, imaging, and molecular biology techniques to begin understanding how natural systems work, and to be able to manipulate things on a small enough scale to replicate them.

It's just now becoming possible. Engineers didn't look at biology before, because they didn't know anything about biology, and lacked tools to manipulate molecular systems. Bioengineering itself is a very new field, and a good portion of the academic bioengineering departments that exist now are less than 5 years old! Bioengineering now is in a similar situation as physics was in the 19th century.

I looked at your essay, and don't see that you have any evidence showing that WBE is infeasible, or will take longer to develop than de novo AI. I would argue there's no way to know how long either will take to develop, because we don't even know what the obstacles are really. WBE could be as simple as building a sufficiently large network with neuron models like the ones we have already, or we could be missing some important details that make it far more difficult than that. It's clear that you don't like WBE, and you have some interesting reasons why we might not want to use WBE.

Replies from: timtyler
comment by timtyler · 2012-03-13T14:21:53.486Z · LW(p) · GW(p)

It's just now becoming possible. Engineers didn't look at biology before, because they didn't know anything about biology, and lacked tools to manipulate molecular systems. Bioengineering itself is a very new field, and a good portion of the academic bioengineering departments that exist now are less than 5 years old! Bioengineering now is in a similar situation as physics was in the 19th century.

That seems as though it is basically my argument. Biomimetic approaches are challenging and lag behind engineering-based ones by many decades.

I looked at your essay, and don't see that you have any evidence showing that WBE is infeasible, or will take longer to develop than de novo AI.

I don't think WBE is infeasible - but I do think there's evidence that it will take longer. We already have pretty sophisticated engineered machine intelligence - while we can't yet create a WBE of a flatworm. Engineered machine intelligence is widely used in industry; WBE does nothing and doesn't work. Engineered machine intelligence is in the lead, and it is much better funded.

I would argue there's no way to know how long either will take to develop, because we don't even know what the obstacles are really.

If one is simpler than the other, absolute timescales matter little - but IMO, we do have some idea about timescales.

Replies from: CasioTheSane
comment by CasioTheSane · 2012-03-14T02:15:45.887Z · LW(p) · GW(p)

Polls of "expert" opinions on when we will develop a technology are not reliable predictors of when we will actually develop it. The experts' opinions could all be skewed in the same direction by missing the same piece of vital information.

For example, they could all be unaware of a particular hurdle that will be difficult to solve, or of an upcoming discovery that makes it possible to bypass problems they assumed to be difficult.

comment by CarlShulman · 2012-03-12T12:37:44.903Z · LW(p) · GW(p)

This is an important generalization, but there are also many counterexamples in our use of biotech in agriculture, medicine, chemical production, etc. We can't design a custom cell, but Craig Venter can create a new 'minimal' genome from raw feedstuffs by copying from nature, and then add further enhancements to it. We produce alcohol using living organisms rather than a more efficient chemical process, and so forth. It looks like humans will be able to radically enhance human intelligence genetically through statistical study of human variation rather than mechanistic understanding of different pathways.

Creating an emulation involves a lot of further work, but one might put it in a reference class with members like the extensive work needed to get DNA synthesis, sequencing, and other biotechnologies to the point of producing Craig Venter's 'minimal genome' cells.

Replies from: timtyler
comment by timtyler · 2012-03-12T14:56:58.772Z · LW(p) · GW(p)

It looks like humans will be able to radically enhance human intelligence genetically through statistical study of human variation rather than mechanistic understanding of different pathways.

Sure - but again, it looks as though that will mostly be relatively insignificant and happen too late. We should still do it. It won't prevent a transition to engineered machine intelligence, though it might smooth the transition a little.

As I argue in my Against Whole Brain Emulation essay, the idea is more wishful thinking and marketing than anything else.

Whole brain emulation as a P.R. exercise is a pretty stomach-churning idea from my perspective - but that does seem to be what is happening.

Possibly biotechnology will result in nanotechnological computing substrates. However, that seems to be a bit different from "whole brain emulation".

Replies from: CarlShulman
comment by CarlShulman · 2012-03-12T16:03:08.495Z · LW(p) · GW(p)

Whole brain emulation as a P.R. exercise is a pretty stomach-churning idea from my perspective - but that does seem to be what is happening.

People like Kurzweil (who doesn't think that WBE will come first) may talk about it in the context of "we will merge with the machines, they won't be an alien outgroup" as a P.R. exercise to make AI less scary. Some people also talk about whole brain emulation as an easy-to-explain loose upper bound on AI difficulty. But people like Robin Hanson who argue that WBE will come first do not give any indications of being engaged in PR, aside from their disagreement with you on the difficulty of theoretical advances in AI and so forth.

Replies from: timtyler
comment by timtyler · 2012-03-12T18:24:38.385Z · LW(p) · GW(p)

For W.B.E. P.R. I was mostly thinking of I.B.M. - though they say they have different motives (besides W.B.E., I mean).

Robin Hanson is an oddity - from my perspective. He wrote an early paper on the topic, and perhaps his views got anchored long ago.

The thing I notice about Hanson's involvement is that he uses uploads to argue for the continued relevance of economics and marketplaces - and other material he has invested in. In the type of not-so-competitive future envisaged by others, economics will still be relevant - but not in quite the same way.

Anyway, Robin Hanson being interested in uploads-first counts in their favour - because of who Robin Hanson is. However, it isn't so big a point in their favour that it overcomes all the uploads-first craziness and implausibility.

comment by rwallace · 2012-03-13T00:59:37.564Z · LW(p) · GW(p)

This used to be an interesting site for discussing rationality. It was bad enough when certain parties started spamming the discussion channel with woo-woo about the machine Rapture, but now we have a post openly advocating terrorism, and instead of being downvoted to oblivion, it becomes one of the most highly upvoted discussion posts, with a string of approving comments?

I think I'll stick to hanging out on sites where the standard of rationality is a little better. Ciao, folks.

Replies from: gwern, None, sp1ky
comment by gwern · 2012-03-13T03:19:59.766Z · LW(p) · GW(p)

a post openly advocating terrorism

This says more about your own beliefs and what you read into the post than anything I actually wrote.

I am reminded of a quote from Aleister Crowley, who I imagine would know:

"The conscience of the world is so guilty that it always assumes that people who investigate heresies must be heretics; just as if a doctor who studies leprosy must be a leper. Indeed, it is only recently that science has been allowed to study anything without reproach."

Replies from: XiXiDu
comment by XiXiDu · 2012-03-13T14:33:24.550Z · LW(p) · GW(p)

a post openly advocating terrorism

This says more about your own beliefs and what you read into the post than anything I actually wrote.

Terrorism was the first thing that came to my mind too.

I suggest that to counteract this you should portray the other side as terrorists. Those who want to build AI without considering its safety. You trying to stop them would just constitute counter-terrorism then ;-)

comment by [deleted] · 2012-03-13T14:52:25.209Z · LW(p) · GW(p)

Gwern wouldn't advocate terrorism as a solution; he already has argued that it's ineffective.

Replies from: sp1ky
comment by sp1ky · 2012-04-18T02:15:23.330Z · LW(p) · GW(p)

Even if it were effective, that doesn't mean the post should be removed. All the more reason for the vulnerability to be widely known.

comment by sp1ky · 2012-04-18T02:19:57.706Z · LW(p) · GW(p)

This post was ALL about rational debate. It is a highly calculated assessment of the fragility of Moore's Law - the kind of thing government advisors have probably figured out by now. If you say this helps terrorists (which is ironic, because the conclusion was that only aerial bombing can stop fabrication, and terrorists don't have access to that yet), well, it is also highly useful to anyone who wants to stop terrorists.

The conclusion is highly interesting. If a war were to break out today between developed nations, taking out the other side's chip fabs and nuclear capabilities would be among the highest priorities.

comment by DuncanS · 2012-03-12T21:49:12.633Z · LW(p) · GW(p)

Moore's law is really hard to stop. Destroying all humans seems to be the only practical method of stopping it for any length of time, and even then I don't think it would be permanent.

Replies from: gwern
comment by gwern · 2012-03-13T02:16:18.599Z · LW(p) · GW(p)

Kurzweil's law of accelerating returns is made by picking a high recent point and a close-to-zero starting point, extrapolating in between, ignoring any low points or decreasing endpoints, dishonestly claiming prediction hits (see his recent accounting), and finally, reusing the same cherry-picked datapoints.

It's not a counter-argument.
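(A toy demonstration of the cherry-picking complaint - the numbers below are made up purely for illustration, not Kurzweil's data: fit an exponential through only a near-zero starting point and a high recent point, and you get a smooth "accelerating" curve no matter what the points in between actually did.)

```python
# Fabricated series for illustration only: near-zero start, flat/noisy
# middle, big final value.
years  = [1950, 1960, 1970, 1980, 1990, 2000, 2010]
values = [0.01, 0.50, 0.30, 0.40, 0.20, 5.00, 100.0]

# Cherry-picked "trend": use only the first and last points and
# extrapolate an exponential between them, ignoring everything else.
growth = (values[-1] / values[0]) ** (1 / (years[-1] - years[0]))
for year, actual in zip(years, values):
    fitted = values[0] * growth ** (year - years[0])
    print(f"{year}: actual {actual:7.2f}   two-point exponential {fitted:7.2f}")
# The fitted curve climbs smoothly even where the actual series stalls
# or declines.
```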