Comments

Comment by ewbrownv on A brief history of ethically concerned scientists · 2013-02-12T22:59:33.177Z · LW · GW

Good insight.

No, even a brief examination of history makes it clear that the lethality of warfare is almost completely determined by the culture and ideology of the people involved. In some wars the victors try to avoid civilian casualties, while in others they kill all the adult males or even wipe out entire populations. Those fatalities dwarf anything produced in the actual fighting, and they can be and have been inflicted with Bronze Age technology. So anyone interested in making war less lethal would be well advised to focus on spreading tolerant ideologies rather than worrying about weapon technology.

As for the casualty rate of soldiers, that tends to jump up whenever a new type of weapon is introduced and then fall again as tactics change to deal with it. In the long run the dominant factor is again a matter of ideology - an army that tries to minimize casualties can generally do so, while one that sees soldiers as expendable will get them killed in huge numbers regardless of technology.

(BTW, WWI gases are nothing unusual in the crippling-injury department - cannons, guns, explosives and edged weapons all tend to litter the battlefield with crippled victims as well. What changed in the 20th century was that better medical care meant a larger fraction of crippled soldiers survived their injuries and returned to civilian life.)

Comment by ewbrownv on A brief history of ethically concerned scientists · 2013-02-12T22:49:22.236Z · LW · GW

It's a recitation of arguments and anecdotes in favor of secrecy, so of course it's an argument in that direction. If that wasn't the intention there would also have been anti-secrecy arguments and anecdotes.

Comment by ewbrownv on Politics Discussion Thread February 2013 · 2013-02-08T19:36:43.271Z · LW · GW

I don't actually agree with the assertion, but I can see at least one coherent way to argue it. The thinking would be:

The world is currently very prosperous due to advances in technology that are themselves a result of the interplay between Enlightenment ideals and the particular cultures of Western Europe and America in the 1600-1950 era. Democracy is essentially irrelevant to this process - the same thing would have happened under any moderately sane government, and indeed most of the West was neither democratic nor liberal (in the modern sense) during most of this time period.

The recent outbreak of peace, meanwhile, is due to two factors. Major powers rarely fight because they have nuclear weapons, which makes war insanely risky even for ruling elites. Meanwhile America has become a world-dominating superpower with a vested interest in maintaining the status quo, so many small regional conflicts are suppressed by the threat of American intervention.

That gets us to "democracy/liberalism doesn't get credit for making things better." To go from there to "democracy/liberalism makes things worse" you just have to believe that modern liberal societies are oppressive in ways that plausible alternatives wouldn't be, which is somewhat plausible if your personal values conflict with liberal thinking.

In reality I suspect that the alternative histories mostly involve autocratic governments banning innovation and fighting lots of pointless wars, which is why I don't buy the argument. But the evidence that liberal democracy is better than, say, a moderately conservative republic or a constitutional monarchy is actually pretty weak. The problem is that nice alternatives to democracy are rare, because a country that starts moving away from autocracy normally ends up falling all the way into the populism attractor instead of stopping somewhere along the way.

Comment by ewbrownv on Politics Discussion Thread February 2013 · 2013-02-08T18:47:05.635Z · LW · GW

Historically it has never worked out that way. When a society gets richer the people eat more and better food, buy more clothes, live in bigger houses, buy cars and appliances, travel more, and so on. Based on the behavior of rich people we can see that a 10x or even 100x increase from current wealth levels due to automation would just continue this trend, with people spending the excess on things like mansions, private jets and a legion of robot servants.

Realistically there's probably some upper limit to human consumption, but it's so far above current production levels that we don't see much hint of where it would be yet. So for most practical purposes we can assume demand is infinite until we actually see the rich start systematically running out of things to spend money on.

Comment by ewbrownv on Isolated AI with no chat whatsoever · 2013-02-04T21:31:12.397Z · LW · GW

Because you can't create real, 100% physical isolation. At a minimum you're going to have power lines that breach the walls, and either people moving in and out (while potentially carrying portable electronics) or communication lines going out to terminals that aren't isolated. Also, this kind of physical facility is very expensive to build, so the more elaborate your plan is the less likely it is to get financed.

Military organizations have been trying to solve these problems ever since the 1950s, with only a modest degree of success. Even paranoid, well-funded organizations with a willingness to shoot people have security breaches on a fairly regular basis.

Comment by ewbrownv on Isolated AI with no chat whatsoever · 2013-01-30T17:39:32.755Z · LW · GW

Indeed. What's the point of building an AI you're never going to communicate with?

Also, you can't build it that way. Programs never work the first time, so at a minimum you're going to have a long period of time where programmers are coding, testing and debugging various parts of the AI. As it nears completion that's going to involve a great deal of unsupervised interaction with a partially-functional AI, because without interaction you can't tell if it works.

So what are you going to do? Wait until the AI is feature-complete on day X, and then box it? Do you really think the AI was safe on day X-1, when it just had a couple of little bugs left? How about on day X-14, when you thought the major systems were all working but there was actually a major bug in the expected utility calculator? Or on day X-60, when a programmer got the Bayesian reasoning system working but it was connected to a stubbed-out version of the goal system instead of the real thing?

This myopic focus on boxing ideas misses most of the problems inherent in building a safe AGI.

Comment by ewbrownv on Isolated AI with no chat whatsoever · 2013-01-30T17:24:38.523Z · LW · GW

I do. It implies that it is actually feasible to construct a text-only channel, which as a programmer I can tell you is not the case.

If you build your AI on an existing OS running on commercial hardware there are going to be countless communication mechanisms and security bugs present for it to take advantage of, and the attack surface of the OS is far too large to secure against even human hackers. The fact that you'll need multiple machines to run it with current hardware amplifies this problem geometrically, and makes the idea that a real project could achieve complete isolation hopelessly naive. In reality you'll discover that there was an undocumented Bluetooth chip on one of the motherboards, or the wireless mouse adapter uses a dual-purpose chip that supports WiFi, or one of the power supplies supports HomePNA and there was another device on the grid, or something else along those lines.

The alternative is building your own (very feature-limited) hardware, to run your own (AI-support-only) OS. In theory you might be able to make such a system secure, but in reality no one is ever going to give you the hundreds of millions of dollars it would cost to build the thing. Not to mention that a project that tries this approach will have to spend years duplicating hardware and software work that has already been done a hundred times before, putting it far behind any less cautious competitors...

Comment by ewbrownv on Isolated AI with no chat whatsoever · 2013-01-30T17:02:13.883Z · LW · GW

Your second proposal, trying to restrict what the AI can do after it's made a decision, is a lost cause. Our ability to specify what is and is not allowed is simply too limited to resist any determined effort to find loopholes. This problem afflicts every field from contract law to computer security, so it seems unlikely that we're going to find a solution anytime soon.

Your first proposal, making an AI that isn't a complete AGI, is more interesting. Whether or not it's feasible depends partly on your model of how an AI will work in the first place, and partly on how extreme the AI's performance is expected to be.

For instance, I could easily envision a specialized software engineering AI that does nothing but turn English-language program descriptions into working software. Such a system could easily devote vast computing resources to heuristic searches of design space, and you could use it to design improved versions of itself as easily as anything else. It should be obvious that there's little risk of unexpected behavior with such a system, because it doesn't contain any parts that would motivate it to do anything but blindly run design searches on demand.
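For concreteness, here is a minimal sketch of the shape of system I'm describing - spec in, bounded search over candidate designs, best candidate out, nothing retained between calls. Every name and the toy scoring function here are invented for illustration; a real system would obviously be enormously more sophisticated:

```python
# Illustrative only: a stateless "design search" tool in miniature. The point
# is the shape of the system - spec in, bounded search, best candidate out -
# with no persistent goals or state between calls.
import random

def mutate(design):
    """Toy mutation: tweak one numeric parameter of a candidate design."""
    tweaked = dict(design)
    key = random.choice(list(tweaked))
    tweaked[key] += random.gauss(0, 1.0)
    return tweaked

def search_designs(spec_score, initial_design, steps=10_000):
    """Hill-climb over candidate designs; return the best one found."""
    best = initial_design
    best_score = spec_score(best)
    for _ in range(steps):
        candidate = mutate(best)
        score = spec_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best  # nothing is being optimized once this returns

if __name__ == "__main__":
    # Example "spec": prefer designs whose parameters sum to 10.
    spec = lambda d: -abs(sum(d.values()) - 10)
    print(search_designs(spec, {"a": 0.0, "b": 0.0}))
```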

However, this assumes that such an AI can actually produce useful results without knowing about human psychology and senses, the business domains its apps are supposed to address, the world they're going to interact with, etc. Many people argue that good design requires a great deal of knowledge in these seemingly unrelated fields, and some go so far as to say you need full-blown humanlike intelligence. The more of these secondary functions you add to the AI the more complex it becomes, and the greater the risk that some unexpected interaction will cause it to start doing things you didn't intend for it to do.

So ultimately the specialization angle seems worthy of investigation, but may or may not work depending on which theory of AI turns out to be correct. Also, even a working version is only a temporary stopgap. The more computing power the AI has the more damage it can do in a short time if it goes haywire, and the easier it becomes for it to inadvertently create an unFriendly AGI as a side effect of some other activity.

Comment by ewbrownv on CEV: a utilitarian critique · 2013-01-28T17:12:11.544Z · LW · GW

Actually, this would be a strong argument against CEV. If individual humans commonly have incoherent values (which they do), there is no concrete reason to expect an automated extrapolation process to magically make them coherent. I've noticed that CEV proponents have a tendency to argue that the "thought longer, understood more" part of the process will somehow fix all objections of this sort, but given the complete lack of detail about how this process is supposed to work you might as well claim that the morality fairy is going to descend from the heavens and fix everything with a wave of her magic wand.

If you honestly think you can make an AI running CEV produce a coherent result that most people will approve of, it's up to you to lay out concrete details of the algorithm that will make this happen. If you can't do that, you've just conceded that you don't actually have an answer for this problem. The burden of proof here is on the party proposing to gamble humanity's future on a single act of software engineering, and the standard of evidence must be at least as high as that of any other safety-critical engineering.

Comment by ewbrownv on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-24T20:42:09.956Z · LW · GW

<A joke so hysterically funny that you'll be too busy laughing to type for several minutes>

See, hacking human brains really is trivial. Now I can output a few hundred lines of insidiously convincing text while you're distracted.

Comment by ewbrownv on Evaluating the feasibility of SI's plan · 2013-01-15T00:00:45.644Z · LW · GW

Yes, I'm saying that to get human-like learning the AI has to have the ability to write code that it will later use to perform cognitive tasks. You can't get human-level intelligence out of a hand-coded program operating on a passive database of information using only fixed, hand-written algorithms.

So that presents you with the problem of figuring out which AI-written code fragments are safe, not just in isolation, but in all their interactions with every other code fragment the AI will ever write. This is the same kind of problem as creating a secure browser or Java sandbox, only worse. Given that no one has ever come close to solving it for the easy case of resisting human hackers without constant patches, it seems very unrealistic to think that any ad-hoc approach is going to work.
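To illustrate why I compare it to a browser or Java sandbox, here is the naive version of the idea in miniature - run each AI-written fragment with exec() and a whitelisted namespace. This is purely a sketch of the approach I'm criticizing, not anyone's actual design:

```python
# A deliberately naive sandbox for AI-written code fragments.
SAFE_BUILTINS = {"abs": abs, "min": min, "max": max, "len": len, "range": range}

def run_fragment(source: str, inputs: dict) -> dict:
    """Execute one untrusted code fragment with a stripped-down namespace."""
    namespace = {"__builtins__": SAFE_BUILTINS, **inputs}
    exec(source, namespace)  # the AI-written fragment runs here
    return {k: v for k, v in namespace.items() if not k.startswith("__")}

# Two problems this doesn't touch: (1) restricting names isn't restricting
# behavior - CPython famously lets code reach dangerous objects through
# introspection tricks like ().__class__.__base__.__subclasses__() - and
# (2) even if each fragment were individually safe, nothing here reasons
# about how thousands of fragments interact once they start calling and
# rewriting each other.
```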

Comment by ewbrownv on Evaluating the feasibility of SI's plan · 2013-01-14T23:41:22.973Z · LW · GW

What I was referring to is the difference between:

A) An AI that accepts an instruction from the user, thinks about how to carry out the instruction, comes up with a plan, checks that the user agrees that this is a good plan, carries it out, then goes back to an idle loop.

B) An AI that has a fully realized goal system that has some variant of 'do what I'm told' implemented as a top-level goal, and spends its time sitting around waiting for someone to give it a command so it can get a reward signal.

Either AI will kill you (or worse) in some unexpected way if it's a full-blown superintelligence. But option B has all sorts of failure modes that don't exist in option A, because of that extra complexity (and flexibility) in the goal system. I wouldn't trust a type B system with the IQ of a monkey, because it's too likely to find some hilariously undesirable way of getting its goal fulfilled. But a type A system could probably be a bit smarter than its user without causing any disasters, as long as it doesn't unexpectedly go FOOM.

Of course, there's a sense in which you could say that the type A system doesn't have human-level intelligence no matter how impressive its problem-solving abilities are. But if all you're looking for is an automated problem-solving tool that's not really an issue.
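To make the structural difference concrete, here is a rough sketch of the two control loops as I imagine them. All the methods on agent, user and env are abstract placeholders; the point is where the goal and reward machinery lives, not any particular implementation:

```python
def type_a_loop(agent, user):
    """Type A: plan, confirm, execute, then return to idle. No standing goal."""
    while True:
        instruction = user.next_instruction()  # blocks until the user asks for something
        plan = agent.plan(instruction)         # bounded search for a way to do it
        if user.approves(plan):                # the human vets the plan before anything happens
            agent.execute(plan)
        # nothing is being optimized between requests

def type_b_loop(agent, env):
    """Type B: a persistent 'do what I'm told' goal driven by a reward signal."""
    while True:
        action = agent.choose_action(agent.goal, env.observe())
        reward = env.step(action)              # includes just waiting around for commands
        agent.update(reward)                   # anything that raises reward is on the table,
                                               # including manipulating how commands get issued
```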

Comment by ewbrownv on Evaluating the feasibility of SI's plan · 2013-01-12T00:24:45.661Z · LW · GW

I thought that too until I spent a few hours thinking about how to actually implement CEV, after which I realized that any AI capable of using that monster of an algorithm is already a superintelligence (and probably turned the Earth into computronium while it was trying to get enough CPU power to bootstrap its goal system).

Anyone who wants to try a "build moderately smart AGI to help design the really dangerous AGI" approach is probably better off just making a genie machine (i.e. an AI that just does whatever it's told, and doesn't have explicit goals independent of that). At least that way the failure modes are somewhat predictable, and you can probably get to a decent multiple of human intelligence before accidentally killing everyone.

Comment by ewbrownv on Evaluating the feasibility of SI's plan · 2013-01-11T23:55:07.772Z · LW · GW

The last item on your list is an intractable sticking point. Any AGI smart enough to be worth worrying about is going to have to have the ability to make arbitrary changes to an internal "knowledge+skills" representation that is itself a Turing-complete programming language. As the AGI grows it will tend to create an increasingly complex ecology of AI-fragments in this way, and predicting the behavior of the whole system quickly becomes impossible.

So "don't let the AI modify its own goal system" ends up turning into just anther way of saying "put the AI in a box". Unless you have some provable method of ensuring that no meta-meta-meta-meta-program hidden deep in the AGI's evolving skill set ever starts acting like a nested mind with different goals than its host, all you've done is postpone the problem a little bit.

Comment by ewbrownv on [LINK] Why taking ideas seriously is probably a bad thing to do · 2013-01-08T23:16:23.559Z · LW · GW

Why would you expect the social dominance of a belief to correlate with truth? Except in the most trivial cases, society has no particular mechanism that selects for true beliefs in preference to false ones.

The Darwinian competition of memes selects strongly for those that provide psychological benefits, or are politically useful, or serve the self-interest of large segments of the population. But truth is only relevant if the opponents of a belief can easily and unambiguously disprove it, which is only possible in rare cases.

Comment by ewbrownv on [Link] Economists' views differ by gender · 2013-01-02T21:14:08.869Z · LW · GW

If true, this is fairly strong evidence that the effort to turn the study of economics into a science has failed. If the beliefs of professional economists about their field of study are substantially affected by their gender, they obviously aren't arriving at those beliefs by a reliable objective process.

Comment by ewbrownv on New censorship: against hypothetical violence against identifiable people · 2012-12-26T16:39:38.518Z · LW · GW

Censorship is generally not a wise response to a single instance of any problem. Every increment of censorship you impose will wipe out an unexpectedly broad swath of discussion, make it easier to add more censorship later, and make it harder to resist accusations that you implicitly support any post you don't censor.

If you feel you have to Do Something, a more narrowly-tailored rule that still gets the job done would be something like: "Posts that directly advocate violating the laws of [the relevant jurisdiction] in a manner likely to create criminal liability will be deleted."

Because, you know, it's just about impossible to talk about specific wars, terrorism, criminal law or even many forms of political activism without advocating real violence against identifiable groups of people.

Comment by ewbrownv on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2012-12-23T06:26:04.749Z · LW · GW

I was commenting specifically about the end of your previous comment, not the whole topic. Sorry if that wasn't clear. But as to this new point, why should an author feel obligated to gender-balance the complexity of the flaws they assign to minor characters?

Yes, I'm aware that there's a fairly common intellectual position claiming that authors should devote vast amounts of energy to worrying about that sort of thing. I just think that's a deeply misguided enterprise. A good author will naturally come to a pretty reasonable balance in the natural course of writing a story, and any major tweaking beyond that point is more likely to make the story worse than better.

Do you really think HP:MoR would be a better story if EY had spent a few weeks listing all the characters by gender, and trying to tweak the plot and insert details to 'balance' things? As opposed to, say, working out plot complications or dreaming up new moments of awesome?

Comment by ewbrownv on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2012-12-23T04:07:44.370Z · LW · GW

So what?

From a storytelling perspective, authors are not obligated to make their main characters (or even 50% of main characters) female. Considering the way the whole SF&F genre has been taken over by gritty female urban fantasy vampire hunters in recent years, finding a decent story with a male lead is actually a nice change.

From the perspective of realism, the fact that the most competent characters are male is to be expected. That really is the way the world works, thanks to the fact that males have a flatter bell curve with longer tails on just about every measure of ability. It isn't the result of an evil male conspiracy, and there's nothing wrong with an author depicting this elementary fact of (current) human nature accurately.

So I'm left wondering how your comments amount to anything more than "I'm unhappy because you aren't writing the story the way I would have done it."

Comment by ewbrownv on newcomb's altruism · 2012-12-21T17:37:22.739Z · LW · GW

Knowing that philosophers are the only people who two-box on Newcomb's problem, and they constitute a vanishingly small fraction of Earth's population, I confidently one-box. Then I rush out to spend my winnings as quickly as possible, before the inevitable inflation hits.

Telling me what X is will have no effect on my action, because I already have that information. Making copies of me has no effect on my strategy, for the same reason.

Comment by ewbrownv on Gun Control: How would we know? · 2012-12-21T17:04:31.807Z · LW · GW

I think you have a point here, but there's a more fundamental problem - there doesn't seem to be much evidence that gun control affects the ability of criminals to get guns.

The problem here is similar to prohibition of drugs. Guns and ammunition are widely available in many areas, are relatively easy to smuggle, and are durable goods that can be kept in operation for many decades once acquired. Also, the fact that police and other security officials need them means that they will continue to be produced and/or imported into an area with even very strict prohibition, creating many opportunities for weapons to leak out of official hands.

So gun control measures are much better at disarming law-abiding citizens than criminals. Use of guns by criminals does seem to drop a bit when a nation adopts strict gun control policies for a long period of time, but the fact that the victims have been disarmed also means criminals don't need as many guns. If your goal is disarming criminals it isn't at all clear that this is a net benefit.

Comment by ewbrownv on Gun Control: How would we know? · 2012-12-21T16:44:24.425Z · LW · GW

Agreed. Presence or absence of debate on an issue gives information about a nation's culture, but very little about how hard it is to discover the facts of the matter. This is especially true in matters of social science, where the available evidence is never going to be strong enough to convince someone who has already made up his mind.

Comment by ewbrownv on That Thing That Happened · 2012-12-20T20:55:04.999Z · LW · GW

Wow, look at all the straw men. Is there an actual reasoned position in there among the fashionable cynicism? If so, I can't find it.

One of the major purposes of Less Wrong is allegedly the promotion of more rational ways of thinking among as large a fraction of the general population as we can manage to reach. Finding better ways to think clearly about politics might be an especially difficult challenge, but popularizing the result of such an attempt isn't necessarily any harder than teaching people about the sunk costs fallacy.

But even if you think raising the level of public discourse is hopeless, being able to make accurate predictions of your own can also be quite valuable. Knowing things like "the Greens' formula for winning elections forces them to drive any country they control into debt and financial collapse", or "the Blues hate the ethnic group I belong to, and will oppress us as much as they can get away with" can be rather important when deciding where to live and how to manage one's investments, for example.

Comment by ewbrownv on That Thing That Happened · 2012-12-18T17:32:22.636Z · LW · GW

I tend to agree with your concern.

Discussing politics is hard because all political groups make extensive use of lies, propaganda and emotional appeals, which turns any debate into a quagmire of disputed facts and mind-killing argument. It can be tempting to dismiss the whole endeavor as hopeless and ignore it while cynically deriding those who stay involved.

Trouble is, political movements are not all equal. If they gain power, some groups will use it to make the country wealthy so they can pocket a cut of the money. Others will try to force everyone to join their religion, or destroy the economy in some wacky scheme that could never have worked, or establish an oppressive totalitarian regime and murder millions of people to secure their position. These results are not equal.

So while it might be premature to discuss actual political issues on Less Wrong, searching for techniques to make such discussions possible would be a very valuable endeavor. Political trends affect the well-being of hundreds of millions of people in substantial ways, so even a modest improvement in the quality of discourse could have a substantial payoff. At the very least, it would be nice if we could reliably identify the genocidal maniacs before they come into power...

Comment by ewbrownv on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-17T19:23:36.027Z · LW · GW

Actually, I see a significant (at least 10%) chance that the person currently known as Quirrell was both the 'Light Lord' and the Dark Lord of the last war. His 'Voldemort' persona wasn't actually trying to win, you see - he was just trying to create a situation where people would welcome a savior...

This would neatly explain the confusion Harry noted over how a rational, inventive wizard could have failed to take over England. It leaves open some questions about why he continued his reign of terror after that ploy failed, but there are several obvious possibilities there. The big question would be what actually happened to either A) stop him, or B) make him decide to fake his death and vanish for a decade.

Comment by ewbrownv on Mini advent calendar of Xrisks: Artificial Intelligence · 2012-12-10T23:28:21.668Z · LW · GW

If you agree that a superhuman AI is capable of being an existential risk, that makes the system that keeps it from running amok the most safety-critical piece of technology in history. There is no room for hopes or optimism or wishful thinking in a project like that. If you can't prove with a high degree of certainty that it will work perfectly, you shouldn't turn it on.

Or, to put it another way, the engineering team should act as if they were working with antimatter instead of software. The AI is actually a lot more dangerous than that, but giant explosions are a lot easier for human minds to visualize than UFAI outcomes...

Comment by ewbrownv on The challenges of bringing up AIs · 2012-12-10T21:06:16.711Z · LW · GW

Human children respond to normal child-rearing practices the way they do because of specific functional adaptations of the human mind. This general principle applies to everything from language acquisition to parent-child bonding to acculturation. Expose a monkey, dog, fish or alien to the same environment, and you'll get a different outcome.

Unfortunately, while the cog sci community has produced reams of evidence on this point they've also discovered that said adaptations are very complex, and mapping out in detail what they all are and how they work is turning out to be a long research project. Partial results exist for a lot of intriguing examples, along with data on what goes wrong when different pieces are broken, but it's going to be a while before we have a complete picture.

An AI researcher who claims his program will respond like a human child is implicitly claiming either that this whole body of research is wrong (in which case I want to see evidence), or that he's somehow implemented all the necessary adaptations in code despite the fact that no one knows how they all work (yeah, right). Either way, this isn't especially credible.

Comment by ewbrownv on [LINK] Two Modes of Discourse: Taking everything personally v. debate as sport · 2012-12-10T18:53:09.780Z · LW · GW

As an explanation for a society-wide shift in discourse that seems quite implausible. If such a change has actually happened the cause would most likely be some broad cultural or sociological change that took place within the same time frame.

Comment by ewbrownv on Mini advent calendar of Xrisks: nanotechnology · 2012-12-07T17:13:18.958Z · LW · GW

Yes, it's very similar to the problem of designing a macroscopic robot that can out-compete natural predators of the same size. Early attempts will probably fail completely, and then we'll have a few generations of devices that are only superior in some narrow specialty or in controlled environments.

But just as with robots, the design space of nanotech devices is vastly larger than that of biological life. We can easily imagine an industrial ecology of Von Neumann machines that spreads itself across a planet exterminating all large animal life, using technologies that such organisms can't begin to compete with (mass production, nuclear power, steel armor, guns). Similarly, there's a point of maturity at which nanotech systems built with technologies microorganisms can't emulate (centralized computation, digital communication, high-density macroscopic energy sources) become capable of displacing any population of natural life.

So I'd agree that it isn't going to happen by accident in the early stages of nanotech development. But at some point it becomes feasible for governments to design such a weapon, and after that the effort required goes down steadily over time.

Comment by ewbrownv on Mini advent calendar of Xrisks: nanotechnology · 2012-12-07T17:00:52.953Z · LW · GW

The theory is that Drexlerian nanotech would dramatically speed up progress in several technical fields (biotech, medicine, computers, materials, robotics) and also dramatically speed up manufacturing all at the same time. If it actually works that way the instability would arise from the sudden introduction of new capabilities combined with the ability to put them into production very quickly. Essentially, it lets innovators get inside the decision loop of society at large and introduce big changes faster than governments or the general public can adapt.

So yes, it's mostly just quantitative increases over existing trends. But it's a bunch of very large increases that would be impossible without something like nanotech, all happening at the same time.

Comment by ewbrownv on Mini advent calendar of Xrisks: nuclear war · 2012-12-06T17:39:47.199Z · LW · GW

Now you're just changing the definition to try to win an argument. An xrisk is typically defined as one that, in and of itself, would result in the complete extinction of a species. If A causes a situation that prevents us from dealing with B when it finally arrives the xrisk is B, not A. Otherwise we'd be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.

I'm not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you've bought into a line of political propaganda that has little relation to reality - there's a large body of evidence that we're nowhere near running out of fossil fuels, and the energy industry experts whose livelihoods rely on making correct predictions mostly seem to be lined up on the side of expecting abundance rather than scarcity. I don't expect you to agree, but anyone who's curious should be able to find both sides of this argument with a little googling.

Comment by ewbrownv on Mini advent calendar of Xrisks: nuclear war · 2012-12-06T17:18:59.860Z · LW · GW

Yes, and that's why you can even attempt to build a computer model. But you seem to be assuming that a climate model can actually simulate all those processes on a relatively fundamental level, and that isn't the case.

When you set out to build a model of a large, non-linear system you're confronted with a list of tens of thousands of known processes that might be important. Adding them all to your model would take millions of man-hours, and make it so big no computer could possibly run it. But you can't just take the most important-looking processes and ignore the rest, because the behavior of any non-linear system tends to be dominated by unexpected interactions between obscure parts of the system that seem unrelated at first glance.

So what actually happens is you implement rough approximations of the effects the specialists in the field think are important, and get a model that outputs crazy nonsense. If you're honest, the next step is a long process of trying to figure out what you missed, adding things to the model, comparing the output to reality, and then going back to the drawing board again. There's no hard, known-to-be-accurate physics modeling involved here, because that would take far more CPU power than any possible system could provide. Instead it's all rules of thumb and simplified approximations, stuck together with arbitrary kludges that seem to give reasonable results.

Or you can take that first, horribly broken model, slap on some arbitrary fudge factors to make it spit out results the specialists agree look reasonable, and declare your work done. Then you get paid, the scientists can proudly show off their new computer model, and the media will credulously believe whatever predictions you make because they came out of a computer. But in reality all you've done is build an echo chamber - you can easily adjust such a model to give any result you want, so it provides no additional evidence.

In the case of nuclear winter there was no preexisting body of climate science that predicted a global catastrophe. There were just a couple of scientists who thought it would happen, and they built a model to echo their prediction.

Comment by ewbrownv on Mini advent calendar of Xrisks: nuclear war · 2012-12-06T16:44:58.706Z · LW · GW

An uncalibrated sim will typically give crazy results like 'increasing atmospheric CO2 by 1% raises surface temperatures by 300 degrees' or 'one large forest fire will trigger a permanent ice age'. If you see an uncalibrated sim giving results that seem even vaguely plausible, this means the programmer has tinkered with its internal mechanisms to make it give those results. Doing that is basically equivalent to just typing up the desired output by hand - it provides evidence about the beliefs of the programmer, but nothing else.

Comment by ewbrownv on Mini advent calendar of Xrisks: synthetic biology · 2012-12-05T20:25:21.779Z · LW · GW

Exactly.

I think the attitudes of most experts are shaped by the limits of what they can actually do today, which is why they tend not to be that worried about it. The risk will rise over time as our biotech abilities improve, but realistically a biological xrisk is at least a decade or two in the future. How serious the risk becomes will depend on what happens with regulation and defensive technologies between now and then.

Comment by ewbrownv on Mini advent calendar of Xrisks: nuclear war · 2012-12-05T20:09:54.361Z · LW · GW

This is a topic I frequently see misunderstood, and as a programmer who has built simple physics simulations I have some expertise on the topic, so perhaps I should elaborate.

If you have a simple, linear system involving math that isn't too CPU-intensive you can build an accurate computer simulation of it with a relatively modest amount of testing. Your initial attempt will be wrong due to simple bugs, which you can probably detect just by comparing simulation data with a modest set of real examples.

But if you have a complex, non-linear system, or just one that's too big to simulate in complete detail, this is no longer the case. Getting a useful simulation then requires that you make a lot of educated guesses about what factors to include in your simulation, and how to approximate effects you can't calculate in any detail. The probability of getting these guesses right the first time is essentially zero - you're lucky if the behavior of your initial model has even a hazy resemblance to anything real, and it certainly isn't going to come within an order of magnitude of being correct.

The way you get to a useful model is through a repeated cycle of running the simulator, comparing the (wrong) results to reality, making an educated guess about what caused the difference, and trying again. With something relatively simple like, say, turbulent fluid dynamics, you might need a few hundred to a few thousand test runs to tweak your model enough that it generates accurate results over the domain of input parameters that you're interested in.
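Here's that cycle reduced to a toy example, with the "simulation" shrunk to a two-parameter stand-in. Everything in it is invented for illustration; the point is only that the free parameters get pinned down by comparison against real observations, and by nothing else:

```python
# Toy version of the calibrate-against-reality loop described above.
import random

def toy_model(forcing: float, params: dict) -> float:
    """Stand-in for an expensive physics/climate simulation."""
    return params["sensitivity"] * forcing + params["offset"]

def fit_error(params, observations):
    """Squared error between model output and real (forcing, observed) pairs."""
    return sum((toy_model(f, params) - observed) ** 2 for f, observed in observations)

def calibrate(observations, initial_params, iterations=5000):
    """Random-search calibration: keep whichever parameter guesses fit reality best."""
    best = dict(initial_params)
    best_err = fit_error(best, observations)
    for _ in range(iterations):
        trial = {k: v + random.gauss(0, 0.1) for k, v in best.items()}
        err = fit_error(trial, observations)
        if err < best_err:
            best, best_err = trial, err
    return best

# If `observations` is empty or unrepresentative - the nuclear winter case -
# this loop has nothing to anchor the parameters, and the "calibrated" model
# just reflects the initial guesses of whoever wrote it.
```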

If you can't run real-world experiments to generate the phenomena you're interested in, you might be able to substitute a huge data set of observations of natural events. Astronomy has had some success with this, for example. But you need a data set big enough to encompass a representative sample of all the possible behaviors of the system you're trying to simulate, or else you'll just get a 'simulator' that always predicts the few examples you fed it.

So, can you see the problem with the nuclear winter simulations now? You can't have a nuclear war to test the simulation, and our historical data set of real climate changes doesn't include anything similar (and doesn't collect anywhere near as many data points as a simulator needs, anyway). But global climate is a couple of orders of magnitude more complex than your typical physics or chemistry sims, so the need for testing would be correspondingly greater.

The point non-programmers tend to miss here is that lack of testing doesn't just mean the model is a little off. It means the model has no connection at all to reality, and either outputs garbage or echoes whatever result the programmer told it to give. Any programmer who claims such a model means something is committing fraud, plain and simple.

Comment by ewbrownv on Mini advent calendar of Xrisks: nuclear war · 2012-12-05T19:06:21.755Z · LW · GW

"We're not sure if we could get back to our current tech level afterwards" isn't an xrisk.

It's also purely speculative. The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren't cost-competitive. Plus there's wind, geothermal, hydroelectric, solar and nuclear. We're a long, long way away from the "all non-renewables are exhausted" scenario.

Comment by ewbrownv on LW Women- Minimizing the Inferential Distance · 2012-12-04T20:50:29.312Z · LW · GW

You don't think freedom of speech, religion and association are important things for a society to defend? Well, in that case we don't have much to talk about.

I will, however, suggest that you might do well to spend some time thinking about what your ideal society will be like after the principle that society (i.e. government) can dictate what people say, think and do to promote the social cause of the day becomes firmly entrenched. Do you really think your personal ideology will retain control of the government forever? What happens if a political group with views you oppose gets in power?

Comment by ewbrownv on LW Women- Minimizing the Inferential Distance · 2012-12-04T20:41:39.056Z · LW · GW

Well, 500 years ago there was plenty of brutal physical oppression going on, and I'd expect that kind of thing to have lots of other negative effects on top of the first-order emotional reactions of the victims.

But I would claim that if you did a big brain-scan survey of, say, Western women from 1970 to the present, you'd see very little correlation between their subjective feeling of oppression and their actual treatment in society.

Comment by ewbrownv on Rationality Quotes December 2012 · 2012-12-04T20:28:31.189Z · LW · GW

Such a mechanism may be desirable, but it isn't necessary for the existence of cities. There are plenty of third world countries that don't bother with licensing, and still manage to have major metropolises.

But my point was just that when people talk about 'trades and crafts on which the existence of the modern city depends' they generally mean carpenters, plumbers, electricians and other hands-on trades, not clerks and bureaucrats.

Comment by ewbrownv on Mini advent calendar of Xrisks: synthetic biology · 2012-12-04T20:14:42.509Z · LW · GW

The reason the life sciences are resistant to regulation is at least partly that researchers know killer plagues are several orders of magnitude harder to make than Hollywood would like you to think. The biosphere already contains billions of species of microorganisms evolving at a breakneck pace, and they haven't killed us all yet.

An artificial plague has no special advantages over natural ones until humans get better at biological design than evolution, which isn't likely to happen for a couple of decades. Even then, plagues with 100% mortality are just about impossible - turning biotech from a megadeath risk to an xrisk requires a level of sophistication that looks more like Drexlerian nanotech than normal biology.

Comment by ewbrownv on Mini advent calendar of Xrisks: nuclear war · 2012-12-04T20:04:29.305Z · LW · GW

Calling this an x-risk seems to be a case of either A) stretching the definition considerably, or B) being unduly credulous of the claims of political activists. A few points to consider:

1) During the height of the Cold War, when there were about an order of magnitude more nuclear weapons deployed than is currently the case, the US military (which had a vested interest in exaggerating the Soviet threat) put estimated casualties from a full-scale nuclear exchange at 30-40% of the US population. While certainly horrific, this falls far short of extinction. Granted, this was before anyone had thought of nuclear winter, but:

2) Stone age humanity survived the last ice age in Europe, where climate conditions were far worse than even the most pessimistic nuclear winter scenarios. It strains credibility to imagine that modern societies would do worse.

3) The nuclear winter concept was invented by peace activists looking for arguments to support the cause of nuclear disarmament, which a priori makes them about as credible as an anti-global-warming scientist funded by an oil company.

4) Nuclear winter doesn't look any better when you examine the data. The whole theory is based on computer models of complex atmospheric phenomena that have never been calibrated by comparing their results to real events. As anyone who's ever built a computer model of a complex system can tell you, such uncalibrated models are absolutely meaningless - they provide no information about anything but the biases of those who built them.

So really, the inclusion of nuclear war on x-risk lists is a matter of media manipulation trumping clear thought. We've all heard the 'nuclear war could wipe out humanity' meme repeated so often that we buy into it despite the fact that there has never been any good evidence for it.

Comment by ewbrownv on A solvable Newcomb-like problem - part 1 of 3 · 2012-12-03T21:33:31.329Z · LW · GW

One box, of course. Trying to outsmart an AI for a piddly little 0.1% increase in payoff is stupid.

Now if the payoff were reversed a player with high risk tolerance might reasonably go for some clever two-box solution... but the odds of success would be quite low, so one-boxing would still be the conservative strategy.

Comment by ewbrownv on Rationality Quotes December 2012 · 2012-12-03T20:58:28.258Z · LW · GW

Not quite. The plumber and electrician are necessary for the existence of the city. The DMV clerk is needed only for the enforcement of a licensing scheme - if his office shut down completely the city would go on functioning with little or no change.

Comment by ewbrownv on LW Women- Minimizing the Inferential Distance · 2012-12-03T19:30:48.189Z · LW · GW

The problem with this seemingly high-minded ideal is that every intervention has a cost, and they add up quickly. When oppression is blatant, violent and extreme it's relatively easy to identify, the benefits of mitigating it are large, and the cost to society is low. But when the 'oppression' is subtle and weak, consisting primarily of personal conversations or even private thoughts of individuals, the reverse is true. You find yourself restricting freedom of speech, religion and association, creating an expanding maze of ever-more-draconian laws governing every aspect of life, throwing out core legal principles like innocent until proven guilty and the right to confront one's accusers - and even then, success is unlikely.

Another important factor is the fact that those who consider themselves victims will never be satisfied, and indeed this whole campaign in their name quickly ceases to improve their lives to any measurable degree. As you noted yourself, individuals tend to rate the trauma of an unpleasant incident relative to their own experiences. So once you stamp out the big, easily measured objective forms of oppression, you find yourself on a treadmill where working harder and harder to suppress the little stuff doesn't do any good. Each generation feels that they're as oppressed as the one before, even if objectively things have changed dramatically in their favor. The only way off the treadmill is for the 'victim' group to stop viewing every experience through the lens of imagined oppression.

Comment by ewbrownv on LW Women- Minimizing the Inferential Distance · 2012-11-30T19:01:17.483Z · LW · GW

Oppression? No. Calling these sorts of incidents 'oppression' trivializes the suffering of the disenfranchised millions who live in daily fear of beatings, lynching or rape because of their religion or ethnicity, and must try to survive while knowing that others can rob them and destroy their possessions with impunity and they have no legal recourse. You might as well call having to shake hands with a man you don't like 'rape'.

Incidents on the level of those mentioned here are inevitable in any society that has even the slightest degree of diversity. Everyone has been treated badly by members of a different group at some point in their life, and responsible adults are expected to get over it and get on with things.

Comment by ewbrownv on LW Women- Minimizing the Inferential Distance · 2012-11-30T17:48:41.800Z · LW · GW

As a data point for the 'inferential distance' hypothesis, I'd like to note that I found nothing in the above quotes that was even slightly surprising or unfamiliar to me. This is exactly what I'd expect it to be like to grow up as a 'geeky' or 'intellectual' woman in the West, and it's also a good example of the sorts of incidents I'd expect women to come up with when asked to describe their experiences. So when I write things that the authors of these anecdotes disagree with, the difference of opinion is probably due to something else.

Comment by ewbrownv on LW Women- Minimizing the Inferential Distance · 2012-11-30T17:13:57.027Z · LW · GW

I think we mean different things by 'brainwashing' and 'social conditioning', which is causing some terminology confusion. The above is perfectly consistent with my thesis, which is simply that a major theme of 20th-century social movements was the belief that you can change individual behavior pretty much however you want by changing the society that people live in.

I call this an incorrect belief because more recent research in cognitive science reveals that there are strong constraints on what kinds of mental adaptations will actually happen in practice, and thus on what kinds of social organizations will actually be stable enough to survive for any great length of time.

For example, humans have an innate tendency to form ingroup / outgroup distinctions and to look down on members of their outgroup, which is one of the factors responsible for a lot of bigotry and racism. Society can tell people who to include in these groups with a high degree of success, and can encourage or discourage the abuse of outgroup members. But you can't eliminate the underlying desire for an outgroup, and if you try you'll get odd phenomena like people who violently hate their political opponents while honestly believing themselves to be paragons of love and tolerance.

Again, this is not to say that reforms are impossible. Rather, the point is that you can't fix everything simultaneously, because every social change has unpredictable side effects that currently no one knows how to eliminate. This is one reason why grand social engineering projects almost always fail - because they carelessly pile up lots of big changes in a short period of time, and the accumulated side effects create so much social chaos that they get deposed and replaced with someone more psychologically comfortable.

Comment by ewbrownv on LW Women- Minimizing the Inferential Distance · 2012-11-29T22:33:31.712Z · LW · GW

I don't see how "people unconsciously act as agents of large-scale social groups" contradicts "the human mind can be arbitrarily re-written by social conditioning". To me it seems that one implies the other.

Isn't the whole Marxist project based on the idea that you can bring about radical changes in human behavior by reorganizing society? "From each according to his ability, to each according to his needs" can only work if humans are so malleable that basic greed, laziness, selfishness and ambition can be eradicated through social programs.

Comment by ewbrownv on LW Women- Minimizing the Inferential Distance · 2012-11-29T17:53:13.826Z · LW · GW

That sounds eminently reasonable, and it might even have worked before the rise of victimization politics. But as anyone who has seriously tried to have this type of discussion before should know, these days it's self-defeating. Almost all of the women who find a statement like the one mentioned offensive will be equally offended no matter how gently you phrase your observations, because it isn't your tone that they object to. Rather, any instance of a male disagreeing with the prevailing world view on gender relations is automatically considered offensive. So if you seriously try to adopt a policy of causing no offense, you'll quickly discover that the only way to do so is to remain silent.

I don't, BTW, claim that this is a gender-specific issue. Anyone who is a member of an allegedly privileged group is likely to encounter the same problem discussing a politically charged issue with members of an allegedly oppressed group. The mere fact that you're accused of being an 'oppressor' is enough to render anything you say offensive to those who consider themselves victims, and the only escape is to abjectly surrender and go around castigating yourself for whatever crimes you've been accused of.

So given this catch-22, my response is to tell the perpetually offended to grow up. Other people are entitled to disagree with you, they are entitled to express their opinions, and you do not have the right to shut them up by throwing a fit about it. If you find yourself unable to cope with frank, occasionally abrasive discussion you're free to avoid it in any number of ways. But demanding that everyone else censor themselves to avoid offending your delicate sensibilities is not acceptable in a free society.

Comment by ewbrownv on LW Women- Minimizing the Inferential Distance · 2012-11-28T19:45:19.848Z · LW · GW

I don't want to death-spiral into a discussion of politics, so I'll refrain from naming specific groups. But in most Western nations there are large, well-funded political activist groups that have consciously, explicitly adopted the tactic of aggressively claiming offense in order to silence their political opponents. While the members of such groups might be honestly dedicated to advancing some social cause, the leaders who encourage this behavior are professional politicians who are more likely to be motivated by issues of personal power and prestige.

So I'll certainly concede that many individuals may feel genuinely offended in various cases, but I stand by my claim that most of the political organizations they belong to encourage constant claims of offense as a cynical power play.

If you don't believe the ratcheting effect actually happens, I invite you to compare any random selection of political tracts from the 1950s, 1970s and 1990s. You'll find that on many issues the terms of the debate have shifted to the point where opinions that were seriously discussed in the 1950s are now considered not just wrong but criminal offenses. This may seem like a good thing if you happen to agree with the opinion that's currently ascendant, but in most cases the change was not a result of one side marshaling superior evidence for their beliefs. Instead it's all emotion and political gamesmanship, supplemented by naked censorship whenever one side manages to get a large enough majority.