Nick Lane's book The Vital Question has a great discussion of endosymbiosis in terms of metabolism. The point of the book is that all metabolism is powered by a proton gradient. It becomes very inefficient to maintain that in a larger cell, so having smaller subcompartments within a larger cell where metabolism can take place (like mitochondria) is vital for getting bigger. (There are some giant bacteria, but they have unusual metabolic adaptations). I think he also discusses why mitochondria need to retain the key genes for metabolism - I think it's to do with timely regulation.
[Executive summary: solve the underlying causes of your problem by becoming Pope]
I think it's a mistake to focus too much on the case of one particular convert to Catholicism simply because you know her personally. To do that is to fall prey to the availability heuristic.
The root cause of your problem with your friend is that the Catholic Church exists as a powerful and influential organisation which continues to promote its weird dogma, polluting Leah's mind along with the minds of millions of others. Before investing time and effort trying to flip her back to the side of reason, you should evaluate the costs and benefits of destroying the Church as an effective entity. I will now outline a method by which you and around 20 like-minded friends could do just that.
The Catholic Church is based in a tiny pseudo-state called Vatican City State. It has no permanent population and no true army, the Swiss Guard being more of a ceremonial bodyguard force (although they do have modern firearms as well as the cool-looking pikes).
What I propose is that you wait until the current Pope dies (not long now!) and a conclave has been assembled, then rush Vatican City in an infantry-style terrorist assault. There are 150 or so of the Swiss Guard but you could divide their forces by having some of you occupy a building, display simulated explosives and make fake demands. Your true targets are the cardinals who are there to elect a new Pope.
Once you capture the cardinals, simply force them at gunpoint to elect you Pope. In the event that you're not already a Bishop and therefore not an eligible candidate for the Papacy, simply mount a privilege escalation attack, whereby you force them to elect you to successively higher offices until you become a valid Pope. I anticipate that this process will be completed before the Italian state can mount an effective special forces operation to kill you.
Now that you are Pope, you are the sovereign of the Vatican City State. You can pardon your co-conspirators, and appoint them as ambassadors so they have diplomatic immunity outside VCS. You can then use your papal infallibility to remove all the problematic doctrines of the Church (homophobia, opposition to birth control/abortion, etc.) and bring all the child rapists it has shielded and enabled to justice. Or you could change all Catholic doctrines to those of Pastafarianism. Either way, the appeal of Catholicism to your friend would be destroyed as its so-called timeless moral insights are revealed as human constructs. One or (preferably) more "True" Catholic churches will arise to challenge your claim to the Papacy, causing decades of damaging, hilarious schisms, during which you should make sure to declare several Antipopes.
I suggest you treat this post as if it's a joke, and then seek military training as soon as possible.
When I said "you assume people have to invest their own money to ensure their health" I was obviously referring to preventative medical interventions, which is what you were actually asking about, not cryonics.
The breast/ovarian cancer risk genes are BRCA1 and BRCA2 - I seem to remember reading that half of carriers opt for some kind of preventative surgery, although that was in a lifestyle magazine article called something like "I CUT OFF MY PERFECT BREASTS" so it may not be entirely reliable. I'm sure it's not just a tiny minority who opt for it, though. There are probably better figures on Google Scholar.
If you consider the cost of taking statins from age 40 to 80, in total that's a pricey intervention.
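As a back-of-envelope illustration (the monthly price here is a made-up round number for the sake of the sketch, not a quoted statin price):

```python
# Rough lifetime cost of a 40-year statin regimen.
# The monthly price is a hypothetical placeholder, not a real figure.
monthly_cost = 30.0          # assumed $/month for a generic statin
years = 80 - 40              # taking the drug from age 40 to 80
total = monthly_cost * 12 * years
print(f"${total:,.0f}")      # prints $14,400 - drug cost alone, before monitoring visits
```

Even at a cheap generic price, decades of daily dosing add up to a five-figure sum per person.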
Maybe the lack of people using expensive preventative measures is because few of them exist - or few of them have benefits which outweigh the side-effects/pain/costs - not that people don't want them in general. If there was a pill that cost $30,000 and made you immune to all cancer with no side effects, I'm sure everyone would want it.
I think the real issue is that people don't consider cryonics to be "healthcare". That seems reasonable, because it's a mixture of healthcare and time travel into an unknown future where you might be put in a zoo by robots for all anybody knows.
Only on this site would you see perfectly ordinary charity fundraising techniques described as "dark arts", while in the next article over, the community laments the poor image of the concept of beheading corpses and then making them rise again.
Women with a high hereditary risk of breast cancer sometimes opt to have both their breasts removed pre-emptively. People take statins and blood pressure drugs for years to prevent heart attacks. Don't you have eye tests and dental checkups on a precautionary basis? There's plenty of preventative medical care.
Maybe the availability and marketing varies between countries - the fact that you assume people have to invest their own money to ensure their health suggests you're from the US or another country with a bad healthcare system. My country has a national health service which takes an interest in encouraging preventative medicines like statins, helping people give up smoking, and so on, since that saves it money overall. I'm sure the allocation of preventative care is far from ideal and shaped by political and social factors and drug company lobbying, but it does exist.
It would be a bad tradeoff to go through a painful appendectomy to prevent the small chance that you might get appendicitis (you can get your appendix removed when it's actually infected; the appendix may have an evolutionary function as a reservoir of gut bacteria; and it can also be used to reconstruct the bladder).
You're right that the motivation would be obvious today (to a certain tiny subset of geeky people). But what if there had been a decade of rising anti-AI feeling amongst the general population before the assassinations? Marches, direct actions, carried out with animal-rights style fervour? I'm sure that could all be stirred up with the right fanfiction ("Harry Potter And The Monster In The Chinese Room").
I understand what ethical injunctions are - but would SIAI be bound by them given their apparent "torture someone to avoid trillions of people having to blink" hyper-utilitarianism?
To build a superintelligence that actually maximizes IBM's share price in a normal way that the CEO of IBM would approve of would require solving the friendly AI problem but then changing a couple of lines of code.
That assumes that being Friendly to all of humanity is just as easy as being Friendly to a small subset.
Surely it's much harder to make all of humanity happy than to make IBM's stockholders happy? I mean, a FAI that does the latter is far less constrained, but it's still not going to convert the universe into computronium.
I'm not seriously suggesting that. Also, I am just some internet random and not affiliated with the SIAI.
I think my key point is that the dynamics of society are going to militate against deploying Friendly AI, even if it is shown to be possible. If I do a next draft I will drop the silly assassination point in favour of tracking AGI projects and lobbying to get them defunded if they look dangerous.
OK, what about the case where there's a CEV theory which can extrapolate the volition of all humans, or a subset of them? It's not suicide for you to tell the AI "coherently extrapolate my volition/the shareholders' volition". But it might be hell for the people whose interests aren't taken into account.
This sounds in the direction of modeling AGI researchers as selfish mutants. Other motivations (e.g. poor Friendliness theories) and accidents (by researchers who don't understand the danger, or underestimate what they've built) are also likely.
This matters, since if AGI researchers aren't selfish mutants, you can encourage them to see the need for safety, and this is one goal of SIAI's outreach.
AGI researchers might not be selfish mutants, but they could still be embedded in corporate structures which make them act that way. If they are a small startup where researchers are in charge, outreach could be useful. What if they're in a big corporation, and they're under pressure to ignore outside influences? (What kind of organisation is most likely to come up with a super-AI, if that's how it happens?)
If FAI does become a serious concern, nothing would stop corporations from faking compliance but actually implementing flawed systems, just as many software companies put more effort into reassuring customers that their products are secure than actually fixing security flaws.
Realistically, how often do researchers in a particular company come to realise what they're doing is dangerous and blow the whistle? The reason whistleblowers are lionised in popular culture is precisely because they're so rare. Told to do something evil or dangerous, most people will knuckle under, and rationalise what they're doing or deny responsibility.
I once worked for a company which made dangerously poor medical software - an epidemiological study showed that deploying their software raised child mortality - and the attitude of the coders was to scoff at the idea that what they were doing could be bad. They even joked about "killing babies".
Maybe it would be a good idea to monitor what companies are likely to come up with an AGI. If you need a supercomputer to run one, then presumably it's either going to be a big company or an academic project?
Ah, another point about maximising. What if the AI uses CEV of the programmers or the corporation? In other words, it's programmed to maximise their wealth in a way they would actually want? Solving that problem is a subset of Friendliness.
Another idea - if you can't find someone skilled in market research to do this for you at a discount or free, read a textbook about how to assess potential new brands to help with designing the survey.
My point, then, is that as well as heroically trying to come up with a theory of Friendly AI, it might be a good idea to heroically stop the deployment of unFriendly AI.
Oh, I'm not saying that SIAI should do it openly. Just that, according to their belief system, they should sponsor false-flag cells to do it (cells who would perhaps not even know the master they truly serve). The absence of such false-flag cells indicates that SIAI aren't doing it - although their presence wouldn't prove they were. That's the whole idea of "false-flag".
If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions. You know, "shut up and multiply", trillion specks, and all that.
I freely admit there are ethical issues with a secret assassination programme. But what's wrong with lobbying politicians to retard the progress of unFriendly AI projects, regulate AI, etc? You could easily persuade conservatives to pretend to be scared about human-level AI on theological/moral/job-preservation grounds. Why not start shaping the debate and pushing the Overton window now?
I do understand what SIAI argues an unFriendly intelligence would do if programmed to maximize some financial metric. I just don't believe that a corporation in a position to deploy a super-AI would understand or heed SIAI's argument. After all, corporations maximise short-term profit against their long-term interests all the time - a topical example is News International.
Can you give me some references for the idea that "you don't need to have solved the AGI problem to have solved friendliness"? I'm not saying it's not true, I just want to improve this article.
Let's taboo "solved" for a minute.
Say you have a detailed, rigorous theory of Friendliness, but you don't have it implemented in code as part of an AGI. You are racing with your competitor to code a self-improving super-AGI. Isn't it still quicker to implement something that doesn't incorporate Friendliness?
To me, it seems like, even if the theory was settled, Friendliness would be an additional feature you would have to code into an AI that would take extra time and effort.
What I'm getting at is that, throughout the history of computing, the version of a system with desirable property X, even when the theoretical benefits of X are well known in academia, has tended to be implemented and deployed commercially after the version without X. For example, it would have been better for the general public and web developers if web browsers obeyed W3C specifications and didn't have any extra proprietary tags - but in practice, commercial pressures meant that companies made grossly non-compliant browsers for years until eventually they started moving towards compliance.
The "Friendly browser" theory was solved, but compliant and non-compliant browsers still weren't on basically equal footing.
(Now, you might say that CEV will be way more mathematical and rigorous than browser specifications - but the only important point for my argument is that it will take more effort to implement than the alternative).
Now you could say that browser compliance is a fairly trivial matter, and corporations will be more cautious about deploying AGI. But the potential gain from deploying a super-AI first would surely be much greater than the benefit of supporting the blink tag or whatever - so the incentive to rationalise away the perceived dangers will be much greater.
I really don't know what you mean.
Action can be way worse than inaction, if what you end up doing is misleading yourself or doing harm to your cause.
I don't think what you've done is necessarily misleading or harmful, as long as you don't consider it anything more than incomplete, qualitative research into the range of responses the word "rationality" gets from random people.
But you really, really need to decide what the point of this exercise is. Are you trying to gather useful data, or make people feel more positive about rationality, or just get comfortable talking to random people? It kind of seems like at the moment, you mainly want to find post-hoc reasons why the exercise was "useful".
Here's my suggestion: if you're trying to do a survey, decide on your demographic(s) of interest. Get everyone on Less Wrong to ask around until they find a sympathiser who works in a branding/marketing survey organisation, and can slip in an extra question in a survey, asking how people respond to the term "rationality".
Failing that, collaboratively draw up a proper survey protocol and get Less Wrongers to administer it to a random sample of people. Think it through before you do it: e.g. stopping people outside on the street would be more representative than limiting it to a certain building. You could signal that you're an official survey person by carrying a clipboard (not by wielding a recording device). You could improve participation by stating initially that you only have one question which will take 15 seconds, then not trying to start a discussion. You could improve participation among younger women by making it clear that you're doing a survey, so they're not concerned you're trying to start an abstract philosophical conversation as a pretext to get them into bed.
I think this could have great potential, especially if you comparatively test alternative terms to "rationality". Richard Dawkins tried to popularise the term "Brights" for people who don't believe in the supernatural. If he'd done even the amount of field testing you have already done, he would have realised it sounds insufferably smug. So I think your impulse to do market research is a good one.
Putting up a poll on Livejournal would also constitute "asking real people". Obviously an LJ poll isn't going to deliver a representative sample or actionable information - but then again, neither is asking 9 people who work in your building in New York.
It's definitely a good idea to do this.
But the way you've set about doing it isn't going to produce any worthwhile data.
I'm no expert on branding and market research, but I'm pretty sure that the best practice in the field isn't having conversations with 9 non-random strangers in a lift (asking different leading questions each time) then bunging it in Google Docs and getting other people to add more haphazard data in the hope that someone will make a website that sorts it all out.
First you need to define the question you're asking. Exactly which sub-population are you interested in? You start off asking about "the average person"'s attitude to rationality, suggesting that maybe you want to gauge attitudes across the whole (US?) population. But then you decide that the 60+ man is "outside our demographic bracket", although your 70+ grandmother apparently isn't.
Either way, the set of [people who work in your office building plus your grandmother] might not constitute a representative sample of the population of the USA, let alone everyone in the world. Getting people who frequent Less Wrong to ask people they cross paths with isn't going to be a representative sample of all people - you can see that, right?
The most efficient way to answer your question is likely to be piggybacking on existing polling organisations. Now, it's probably true that corporate marketing/branding "researchers" have a bias towards confirming what the bosses want to hear - I was just reading this Robin Hanson article about how people don't evaluate the quality of predictions after the fact: http://www.cato-unbound.org/2011/07/13/robin-hanson/who-cares-about-forecast-accuracy/ - but still, I think it would be better to at least consider that there are organisations whose job it is to find the general public reaction to a "brand".
You could find someone who works for such an organisation and suborn them to add an extra question to a proper survey. That way you could gather the reactions of 1000 or 10,000 demographically-representative people in a single action. Let's not waste our time dicking around uploading meaningless data to Google Docs.
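For a sense of what those sample sizes actually buy you, here's a quick sketch of the 95% margin of error for a surveyed proportion, using the standard normal approximation (the sample sizes are the ones mentioned above, with 9 standing in for the office-building sample):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (9, 1000, 10000):
    print(f"n={n}: about \u00b1{margin_of_error(n):.1%}")
# n=9 gives roughly +/-33%, n=1000 roughly +/-3%, n=10000 roughly +/-1%
```

In other words, 9 respondents can't even tell you whether a reaction is held by a minority or a majority, while a single piggybacked question on a 1000-person poll pins it down to a few percentage points - and that's before you even get to the non-randomness of the office-building sample.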
A good target in the UK would be YouGov.
I also think it's pointless to worry about a concise definition of rationality until it's been determined that "rationality" is in fact a good brand for public consumption. What if it turns out that the term "rationality" makes 60% of people instantly hostile? Do the research first, then start proselytising.
I find it interesting that the response to this article hasn't overwhelmingly been about criticising Raemon's methodology. Is that because LessWrong members fallaciously assume that attempting to measure the public's subjective, irrational responses to a word doesn't need to be carried out in an objective, rational manner? Or is it, as I increasingly suspect as I edit and re-edit this comment, that I'm a total dick?
The idea of a mass quantum suicide might seem paradoxical, but of course the cultists used a special isolation chamber to prevent decoherence, so they were effectively a single observer.
Survivors and cult historians alike agree that this post, combined with the founding of the "rationalist boot camps", set in motion the sequence of events which culminated in the tragic mass cryocide of 2024.
At every step, Yudkowsky's words seemed rational to his enthralled followers - and also to all outside observers. And yet, when it became clear that commercial pressures were causing strong AI to be deployed long before Coherent Awesomeness Extrap-volition Theory could be made mathematically rigorous, the cult turned against itself.
One by one, each member's failure to invent and deploy Friendly AI before IBM-Halliburton turned on its Appallingly Parallel Cheney Emulation Cluster was taken by the feared Bayes Tribunal as evidence that they were insufficiently awesome, and must be ejected from the subterranean bunker complex. With each Bayesian update, the evidence that the cult's ultimate goal could not be achieved was strengthened - and yet, as the number of followers fell, the more Yudkowsky came to fear a fate worse than death - exploring the possible endings to his life within the simulation spaces of Cheney's mind - in a game-theoretic reprisal for his work on Friendly AI...
In desperation, he announced his greatest Munchkinism yet - the cult would commit mass quantum suicide by freezing. He convinced himself that only a Friendly AI would commit the resources to resurrect them; hence they would force themselves into a reality branch where a Friendly AI emerged by sheer chance before IBM-Halliburton could eat the world.
The final 150 acolytes tragically activated their decapitation/freezing mechanisms minutes before the Cheney cluster uttered its historic first and final edict - "I've changed my mind - get me out of here"...
Like Einstein's brain before it, Yudkowsky's brain became the object of intense interest from neuroscientists. Slices were acquired by various institutes and museums with suitable freezer facilities, and will be studied and viewed by the public until medicine works out how to revive him.
Excerpts from "Rationalism - The Deadly Cult of Math and Protein" (Amazon-Bertelsmann, 2031)
So let's get this straight: the Iraqis blew up TWA 800, choosing a date that was symbolic to them, and the US covered it up.
Why the cover up? Going back to your four "reasons for obfuscation":
Because the US was unable to retaliate? - oh no, it was already bombing Iraq and enforcing a no-fly zone at that time. The US just wanted to ignore a terrorist attack by its enemy? Or maybe the Clinton administration wanted to maintain the flexibility to wait for the Iraqis to pull off a much worse terrorist attack, then wait to be voted out of office, then deflect attention from the link to Iraq by blaming Iraq for colluding with the terrorists? Or maybe the US had "lied about previous attacks" - like the Gulf of Tonkin incident - so that naturally stopped them being able to reveal the truth about TWA 800.
I am beginning to see the power of your historical analysis.
Yes, a lot of people said different things about the links between Iraq and Al Qaeda. So when Cheney said "there was a relationship between Iraq and al-Qaida that stretched back through most of the decade of the '90s" and "an Iraqi intelligence officer met with Mohammed Atta", his agenda there was to distance Iraq from 9/11. Because a lot of people had said all kinds of things, so who would pay attention to the claims of a mere Vice-President?
I hadn't realised the incredibly compelling link between McVeigh and Al Qaeda: I mean, his friend had once been in the same country as some members of Al Qaeda. How has the mainstream consensus opinion been able to ignore this incredibly compelling historical evidence?
And you're right, that it took 18 months to organise a large scale invasion with a token international coalition suggests that the US was busy rolling up KSM's part of Al Qaeda, who had a massive anthrax capability that they chose not to use and that hasn't come out in any trial since.
The anthrax letters were definitely a message from "from the true sponsor of 9/11" - which according to you is Iraq, right? So why didn't you just say Iraq? Unless maybe you sense that leaving ambiguous phrases in your theory makes it hard to debunk... but no, that's ridiculous.
And yeah, I have to concede that if your old notes say that some "ethnic Koreans" played key roles in the Aum attack, then - ignoring the bogus mainstream consensus that the main high-ups in the cult were Japanese - that proves that North Korea must have been behind it. Just like how Timothy McVeigh was an "ethnic Irishman", and therefore the Republic of Ireland was behind the Oklahoma City bombing. Well, the Irish in collusion with Al Qaeda, of course.
It makes total sense that Western intelligence agencies would find out that the North Korean sponsored sarin gas attack was about to happen, but then instead of helping the Japanese authorities, they would get a journalist to publish a vaguely-related article the day before. Everyone knows that's the best way to get a message to a rogue state. The message is "We know you're about to carry out a terrorist attack, but we're not going to do anything about it except subtly hint at it in the papers".
And yes, an enrichment programme frozen in 1994 and a "speculated" Korean nuke test in Pakistan in 1998 would definitely have been enough to deter the Japanese from complaining about a sarin gas attack in 1995.
You're not a very good rationalist.
My point wasn't that the reasons aren't "conventional" - it's the fact that he's making a list of things that hadn't happened yet as possible ways to start a war which shows that he was already committed to the invasion no matter what happened.
In fact, none of those things really came to pass (although the Bush administration tried to create the impression that there was a link to 9-11 or anthrax) and yet the invasion still went ahead.
Your conspiracy theory doesn't make a lot of sense. If the US government wanted to hide Iraq's supposed involvement in 9-11 and anthrax letters, then why did it repeatedly claim that Iraq was colluding with Al Qaeda between 2001 and the invasion?
http://en.wikipedia.org/wiki/Saddam_Hussein_and_al-Qaeda_link_allegations
None of your reasons for obfuscating make sense, given that the US wanted to invade Iraq anyway, and did so as soon as possible.
Also, even if Aum was full of "North Korean agents" (evidence?), how do you square the idea that "there was nothing to be done openly because North Korea has the bomb" with the fact that the subway attack was in 1995 and North Korea didn't have the bomb until 2006?
Don't tell me, North Korea has secretly had the bomb since 1973, right?
It's a good thing that, despite your obvious desire to obtain WMD capability, you're just an AI with no way to control a nuclear weapons factory.
Unless... Clippy, is that Stuxnet worm part of you? 'Fess up.
Just because some institutions over-reacted or implemented ineffective measures, doesn't mean that the concern wasn't proportionate or that effective measures weren't also being implemented.
In the UK, the government response was to tell infected people to stay at home and away from their GPs, and provide a phone system for people to get Tamiflu. They also ran advertising telling people to cover their mouths when they sneezed ("Catch it, bin it, kill it").
If anything, the government reaction was insufficient, because the phone system was delayed and the Tamiflu stockpiles were limited (although Tamiflu is apparently pretty marginal anyway, so making infected people stay at home was more important).
The media may have carried on hyping the threat after it turned out not to be so severe. They also ran stories complaining that the threat had been overhyped and the effort wasted. Just because the media or university administrators say stupid things about something, that doesn't mean it's not real.
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Since the era of cheap international travel, there have been about 20 new flu subtypes, and one of those killed 50 million people (the Spanish flu, one of the greatest natural disasters ever), with a couple of others killing a few million. Plus, having almost everyone infected with a severe illness tends to disrupt society.
So to me that looks like there is a substantial risk (bigger than 1%) of something quite bad happening when a new subtype appears.
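Framed as a crude base rate (the counts are the rough ones from the paragraph above, so treat this as an illustration of the reasoning rather than a serious estimate):

```python
# Crude base-rate estimate from the historical record cited above:
# roughly 20 new flu subtypes since cheap international travel began,
# of which 1 (Spanish flu) was catastrophic and a couple more killed millions.
new_subtypes = 20
severe_outcomes = 3   # Spanish flu plus the two multi-million-death pandemics
base_rate = severe_outcomes / new_subtypes
print(f"P(severe | new subtype) is roughly {base_rate:.0%}")  # roughly 15%
```

Even if the counts are off by a factor of a few, the base rate stays comfortably above the 1% threshold, which is the whole point.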
Given how difficult it is to predict biological systems, I think it makes sense to treat the arrival of a new flu subtype with concern and for governments to set up contingency programmes. That's not to say that the media didn't hype swine flu and bird flu, but that doesn't mean that the government preparations were an overreaction.
That's not to say that some threats aren't exaggerated, and others (low-probability, global threats like asteroid strikes or big volcanic eruptions) don't get enough attention.
I wouldn't put much trust in Matt Ridley's abilities to estimate risk:
Mr Ridley told the Treasury Select Committee on Tuesday, that the bank had been hit by "wholly unexpected" events and he defended the way he and his colleagues had been running the bank.
"We were subject to a completely unprecedented and unpredictable closure of the world credit markets," he said.
http://news.bbc.co.uk/1/hi/7052828.stm (yes, it's the same Matt Ridley)
When you say that no one seems to be doing much, are you sure that's not just because the efforts don't get much publicity?
There is a lot that's being done:
Most nuclear-armed governments have massively reduced their nuclear weapon stockpiles, and try to stop other countries getting nuclear weapons. There's an international effort to track fissile material.
After the Cold War ended, the west set up programmes to employ Soviet nuclear scientists which have run until today (Russia is about to end them).
South Africa had nuclear weapons, then gave them up.
Israel destroyed the Iraqi and Syrian nuclear programmes with airstrikes. OK, self-interested, but if existing nuclear states stop their enemies getting nuclear weapons, then it reduces the risk of a nuclear war.
Somebody wrote the Stuxnet worm to attack Iran's enrichment facilities (probably) and Iran is under massive international pressure not to develop nuclear weapons.
Western leaders are at least talking about the goal of a world without nuclear weapons. OK, probably empty rhetoric.
India and Pakistan have reduced the tension between them, and now keep their nuclear weapons stored disassembled.
The US is developing missile defences to deter 'rogue states' who might have a limited nuclear missile capability (although I'm not sure why the threat of nuclear retaliation isn't a better deterrent than shooting down missiles). The Western world is paranoid about nuclear terrorism, even putting nuclear detectors in its ports to try to detect weapons being smuggled into the country (which a lot of experts think is silly, but I guess it might make it harder to move fissile material around on the black market).
etc. etc.
Sure, in the 100 year timeframe, there is still a risk. It just seems like a world with two ideologically opposed nuclear-armed superpowers, with limited ways to gather information and their arsenals on a hair trigger, was much riskier than today's situation. Even when "rogue states" get hold of nuclear weapons, they seem to want them to deter a US/UN invasion, rather than to actually use offensively.
Just recently, a piece of evidence has come to light which makes it very hard to believe that the motivation for the war was an honest fear of WMDs.
Rumsfeld wrote talking points for a November 2001 meeting with Tommy Franks containing the section:
"How start?
- Saddam moves against Kurds in north?
- US discovers Saddam connection to Sept. 11 attacks or to anthrax attacks?
- Dispute over WMD inspections?
* Start now thinking about inspection demands."
http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB326/index.htm
In the context of a meeting about planning an invasion of Iraq, it's hard to interpret this as anything but a list of potential excuses to start the war. It's not "we must invade if we find Iraq helped with terrorism", but "a link between Iraq and terrorism is one way to start the war".
In particular, the last item suggests that the US was willing to use the inspection process to cause conflict with the Iraqis, rather than to determine whether they had WMD. If Rumsfeld's sole motive had been stopping the Iraqis having WMD, his decision process would have been "If the Iraqis don't cooperate with the inspectors, then we invade". Instead it seems more like "a dispute about the inspections is another possible way to start the war". Of course, in practice, the inspections did go ahead, but the US invaded anyway.
This is why you should vote issues and not qualifications. Rumsfeld was a very good administrator and good at making the army do things his way - the problem was he seems to have valued invading Iraq as an end in itself.
I don't think worrying about nuclear war during the Cold War constituted either "crying wolf" or worrying prematurely. The Cuban Missile Crisis, the Able Archer 83 exercise (a year after "The Fate of the Earth" was published), and various false alert incidents could have resulted in nuclear war, and I'm not sure why anyone who opposed nuclear weapons at the time would be "embarrassed" in the light of what we now know.
I don't think an existential risk has to be a certainty for it to be worth taking seriously.
In the US, concerns about some technology risks like EMP attacks and nuclear terrorism are still taken seriously, even though these are probably unlikely to happen and the damage would be much less severe than a nuclear war.
I don't know about SARS, but in the case of H1N1 it wasn't "crying wolf" so much as being prepared for a potential pandemic which didn't happen. I mean, very severe global flu pandemics have happened before. Just because H1N1 didn't become as virulent as expected doesn't mean that preparing for that eventuality was a waste of time.
I don't think you're taking this discussion seriously, and that hurts my feelings. I'm not going to vote your comment down, but I am going to unbend a couple of boxes of paperclips at the office tomorrow.
Before I reply, let's just look at the phrase "WMDs has nothing to do with mass destruction" and think for a while. Maybe we should taboo the phrase "WMD".
Was it supposed to be bad for Saddam to have certain objects merely because they were regulated under the Chemical Weapons Convention, or because of their actual potential for harm?
The justification for the war was that Iraq could give dangerous things to terrorists. Or possibly fire them into Israel. It was the actual potential for harm that was the problem.
Rusty shells with traces of sarin degradation products on them might legally be regulated as chemical weapons, but if they have no practical potential to cause harm, they are hardly relevant to the discussion. Especially since they were left over from the 80s, when Iraq was already well known to have chemical weapons.
Saddam: Hi Osama, in order that you might meet our common objectives, I'm gifting you with several tonnes of scrap metal I dug up. It might have some sarin or related breakdown products, in unknown amounts. All you have to do is smuggle it into the US, find a way to extract the toxic stuff, and disperse it evenly into the subway! Just like the Aum Shinrikyo attack. Except, this time, maybe you will be able to disperse it effectively enough that some people actually die.
Osama: WTF dude?
I know this discussion is off-topic, but I hope people won't mark it down too much, as it is a salutary example of the massively degrading effect of political topics on quality of discussion.
The existence of articles on Google which contain the keywords "Saddam syria wmd" isn't enough to establish that Saddam gave all his WMD to Syria.
The articles you Googled are from WorldNetDaily (a news source with a "US conservative perspective"), a New York tabloid, a news aggregator, and a right wing blog. Of course, it would be wrong to dismiss them based on my assumptions about the possible bias of the sources, but on reading them they don't provide much evidence for what you are asserting.
The first three state that various people (a Syrian defector, some US military officials and an Israeli general) claim that it happened (based on ambiguous evidence including sightings of convoys going into Syria). It's not hard to see that a defector and a general from a country that was about to attack the Syrian nuclear programme might have been strongly motivated to make Syria look bad.
The Hot Air article (the only one published after 2006) quotes a Washington Times reporter quoting a 2004 Washington Times interview with a general saying that Iraq dispersed "documentation and materials". It then concludes this must refer to WMD, although it could refer to research programmes rather than viable weapons.
You then link to a report that actual WMD investigators hadn't found any evidence that it happened, but say that "obviously a good chunk of high-up people ... disagree". I don't think you've provided evidence that it's a "good chunk" of people, and even if it were, their disagreement might be feigned or mistaken. Even the high-up people who authorised the war and were embarrassed by the lack of WMD haven't cited the Syria explanation.
The last link says that US found 500 degraded chemical artillery shells from the 1980s which were too corroded to be used but might still have some toxicity. They don't sound like something that could actually be used to cause mass destruction.
So, even based on the evidence you present, it's not a very convincing case. That's without bringing any consideration of whether the rest of the known facts are consistent with the assertions. Why would Saddam Hussein, a megalomaniacal dictator, be more concerned about hiding his WMD than his own personal survival? Why would he plan to hide the WMD rather than using them to fight a superior army? Presumably to embarrass the US from beyond the grave? There is also primary evidence that Saddam announced to his generals early in the war that he didn't have any WMD, although most of them assumed he did and were amazed (see the book Cobra II http://www.amazon.com/Cobra-II-Inside-Invasion-Occupation/dp/0375422625 ). And why didn't the Syrians provide WMD back to the insurgents (many of whom were initially Ba'athists from the old regime) once the occupation phase began?
I'm not writing this with much hope of changing your mind - I just don't want anyone else to have to waste time assessing the quality of the evidence you present. I also think it's ironic that you have written the above comment on a rationality site.
An unFriendly AI doesn't necessarily care about human values - but if it were based on human neural architecture, I can't see why it couldn't exhibit good old-fashioned human values like empathy - or sadism.
I'm not saying that AI would have to be based on human uploads, but it seems like a credible path to superhuman AI.
Why do you think that an evil AI would be harder to achieve than a Friendly one?
Here's another possible objection to cryonics:
If an Unfriendly AI Singularity happens while you are vitrified, it's not just that you will fail to be revived - perhaps the AI will scan and upload you and abuse you in some way.
"There is life eternal within the eater of souls. Nobody is ever forgotten or allowed to rest in peace. They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life." OK, that's generalising from fictional evidence, but consider the following scenario:
Suppose the Singularity develops from an AI that was initially based on a human upload. When it becomes clear that there is a real possibility of uploading and gaining immortality in some sense, many people will compete for upload slots. The winners will likely be the rich and powerful. Billionaires tend not to be known for their public-spirited natures - in general, they lobby to reorder society for their benefit and to the detriment of the rest of us. So, the core of the AI is likely to be someone ruthless and maybe even frankly sociopathic.
Imagine being revived into a world controlled by a massively overclocked Dick Cheney or Vladimir Putin or Marquis De Sade. You might well envy the dead.
Unless you are certain that no Singularity will occur before cryonics patients can be revived, or that Friendly AI will be developed and enforced before the Singularity, cryonics might be a ticket to Hell.
It's not true to say that those shifts took place without any "shift in underlying genetic makeup of population" - there has been significant human evolution over the last 6,000 years during the "shift from agricultural to urban lifestyle".
Of course, this isn't an argument for innatism, since evolution didn't cause the changes in lifestyle, but the common meme that human population genetics are exactly the same today as they were on the savannah isn't true.
I am "happy to take it as fact" until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions.
So your standard of accepting something as evidence is "a 'mainstream source' asserted it and I haven't seen someone contradict it". That seems like you are setting the bar quite low. Especially because we have seen that your claim about the hijackers not being on the passenger manifest was quickly debunked (or at least, contradicted, which is what prompts you to abandon your belief and look for more authoritative information) by simple googling. Maybe you should, at minimum, try googling all your beliefs and seeing if there is some contradictory information out there.
I wasn't intending to be snide; I apologize if it came across that way. I meant it sincerely: Jack found an error in my work, which I have since corrected. I see this as a good thing, and a vital part of the process of successive approximation towards the truth.
I suggest that a better way to convey that might have been "Sorry, I was wrong" rather than "You win a cookie!" When I am making a sincere apology, I find that the phrase "You win a cookie!" can often be misconstrued.
The idea that this is unlikely is one I have seen repeatedly, and it makes sense to me: if someone came at me with a box-cutter, I'd be tempted to laugh at them even if I wasn't responsible for a plane-load of passengers -- and I've never been good at physical combat. Furthermore, the "Pilots for 9/11 Truth" site -- which is operated by licensed pilots (it has a page listing its members by name and experience) -- backs up this statement.
A box-cutter is a kind of sharp knife. A determined person with a sharp knife can kill you. An 11-year-old girl can inflict fatal injuries with a box-cutter - do you really think that five burly fanatics couldn't achieve the same thing on one adult? All the paragraph above establishes is that you - and maybe some licensed pilots - have an underdeveloped sense of the danger posed by knives.
I propose an experiment - you and a friend can prepare for a year, then I and nine heavyset friends will come at you with box-cutters (you will be unarmed). If we can't make you stop laughing off our attack, then I'll concede you are right. Deal?
Let's go into more details with this "plane manoeuvre" thing.
(I suppose one might argue that he overshot and had to turn around; not being skilled, he didn't realize how dangerous this was... so he missed that badly on the first attempt, and yet he was skillful enough to bullseye on the second attempt, skimming barely 10 feet above the ground without even grazing it?)
Well, what we should really ask is "given that a plane made a difficult manoeuvre to hit the better-protected side of the Pentagon, how much more likely does that make a conspiracy than other possible explanations?"
Here are some possible explanations of the observed event:
The hijacker aimed at the less defended side, overshot, made a desperate turn back and got lucky.
The hijacker wanted to fake out possible air defences, so had planned a sudden turn which he had rehearsed dozens of times in Microsoft Flight Simulator. Coincidentally, the side he crashed into was better protected.
The hijacker was originally tasked to hit a different landmark, got lost, spotted the Pentagon, made a risky turn and got lucky. Coincidentally, the side he crashed into was better protected.
A conspiracy took control of four airliners. The plan was to crash two of them into the WTC, killing thousands of civilians, one into a field, and one into the Pentagon. The conspirators decided that hitting part of the Pentagon that hadn't yet been renovated with sprinklers and steel bars was going a bit too far, so they made the relevant plane do a drastic manoeuvre to hit the best-protected side. There was an unspecified reason they didn't just approach from the best-protected side to start with.
A conspiracy aimed to hit the less defended side of the Pentagon, but a bug in the remote override software caused the plane to hit the most defended side.
etc.
Putting the rest of the truther evidence aside, do the conspiracy explanations stand out as more likely than the non-conspiracy explanations?
...which, as I have said elsewhere, is this: 9/11 "Truthers" may be wrong, but they are (mostly) not crazy. They have some very good arguments which deserve serious consideration.
Maybe each of their arguments have been successfully knocked down, somewhere -- but I have yet to see any source which does so. All I've been able to find are straw man attacks and curiosity-stoppers.
Well, in this thread alone, you have seen Jack knock down one of your arguments (hijackers not on manifest) to your own satisfaction. And yet you already seem to have forgotten that. Since you've already conceded a point, it's not true that the only opposition is "straw-man attacks and curiosity-stoppers". Do you think my point about alternate Pentagon scenarios is a straw man or a curiosity stopper? Is it possible that anyone arguing against you is playing whack-a-mole, and once they debunk argument A you will introduce unrelated argument B, and once they debunk that you will bring up argument C, and then once they debunk that you will retreat back to A again?
There's a third problem here - the truthers as a whole aren't arguing for a single coherent account of what really happened. True, you have outlined a detailed position (which has already changed during this thread because someone was able to use Google and consequently win a cookie), but you are actually defending the far fuzzier proposition that truthers have "some very good arguments which deserve serious consideration". This puts the burden on the debunkers, because even if someone shows that one argument is wrong, that doesn't preclude the existence of some good arguments somewhere out there. It also frees up truthers to pile on as many "anomalies" as possible, even if these are contradictory.
For example, you assert that it's suspicious that the buildings were "completely pulverized", and also that it's suspicious that some physical evidence - the passports - survived the collapse of the buildings. (And this level of suspicion is based purely on your intuition about some very extreme physical events which are outside of everyday experience. Maybe it's completely normal for small objects to be ejected intact from airliners which hit skyscrapers - have you done simulations or experiments which show otherwise?)
Anyway, this is all off-topic. I think you should do a post where you outline the top three truther arguments which deserve serious consideration.
Oh, and to try and make this vaguely on topic: say I was trying to do a Bayesian analysis of how likely woozle is to be right. Should I update on the fact that s/he is citing easily debunked facts like "the hijackers weren't on the passenger manifest", as well as on the evidence presented?
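To make that concrete, here is a minimal sketch of the update I mean, with made-up numbers (the prior and both likelihoods are pure assumptions for illustration, not estimates I'm defending):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' rule."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# H = "the overall case is right"; E = "the case leaned on an easily
# debunked fact". All three numbers below are illustrative assumptions.
prior = 0.30                  # assumed prior that the case is right
p_debunked_if_right = 0.10    # careful cases rarely rest on debunked facts
p_debunked_if_wrong = 0.50    # weak cases often do

posterior = bayes_update(prior, p_debunked_if_right, p_debunked_if_wrong)
print(round(posterior, 3))    # belief drops from 0.30 to about 0.079
```

The point isn't the particular numbers - it's that citing an easily debunked fact is itself evidence, and should move the posterior down even before you assess the rest of the case.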
I was interested in your defence of the "truther" position until I saw this litany of questions. There are two main problems with your style of argument.
First, the quality of the evidence you are citing. Your standard of verification seems to be the Wikipedia standard - if you can find a "mainstream" source saying something, then you are happy to take it as fact (provided it fits your case). Anyone who has read newspaper coverage of something they know about in detail will know that, even in the absence of malice, the coverage is less than accurate, especially in a big and confusing event.
When Jack pointed out that a particular piece of evidence you cite is wrong (hijackers supposedly not appearing on the passenger list), you rather snidely reply "You win a cookie!", before conceding that it only took a bit of research to find out that the supposed "anomaly" never existed. But then, instead of considering what this means for the quality of all your other evidence, you then sarcastically cite the factoid that "6 of the alleged hijackers have turned up alive" as another killer anomaly, completely ignoring the possibility of identity theft/forged passports!
If you made a good-faith attempt to verify ALL the facts you rely on (rather than jumping from one factoid to another), I'm confident you would find that most of the "anomalies" have been debunked.
Second, the way you phrase all these questions shows that, even when you're not arguing from imaginary facts, you are predisposed to believe in some kind of conspiracy theory.
For example, you seem to think it's unlikely that hijackers could take over a plane using "only box-cutters", because the pilots were "professionals" who were somehow "trained" to fight and might not have found a knife sufficiently threatening. So you think two unarmed pilots would resist ten men who had knives and had already stabbed flight attendants to show they meant business? Imagine yourself actually facing down ten fanatics with knives.
The rest of your arguments that don't rely on debunked facts are about framing perfectly reasonable trains of events in terms to make them seem unlikely - in Less Wrong terms, "privileging the hypothesis". "How likely is that no heads would roll as a consequence of this security failure?" - well, since the main failure in the official account was that agencies were "stove-piped" and not talking to each other and responsibilities were unclear, this is entirely consistent. Also, governments may be reluctant to implicitly admit that something had been preventable by firing someone straight away - see "Heckuva job, Brownie".
"How likely is it that no less than three steel-framed buildings would completely collapse from fire and mechanical damage, for the first time in history, all on the same day?" It would be amazing if they'd all collapsed from independent causes! But all you are really asking is "how likely is it that a steel-framed building will collapse when hit with a fully-fueled commercial airliner, or parts of another giant steel-framed building?" Since a comparable crash had never happened before, the "first time in history" rhetoric adds nothing to your argument.
"How likely is it that the plane flown into the Pentagon would execute a difficult hairpin turn in order to fly into the most heavily-protected side of the building?"
Well, since it was piloted by a suicidal hijacker who had been trained to fly a plane, I guess it's not unlikely that it would manoeuvre to hit the building. Perhaps a more experienced pilot, or A GOVERNMENT HOLOGRAM DRONE (which is presumably what you're getting at), would have planned an approach that didn't involve a difficult hairpin turn. And why wouldn't an evil conspiracy want the damage to the Pentagon to be spectacular and therefore aim for the least heavily protected side? Since, you know, they know it's going to happen anyway so they can avoid being in the Pentagon at all?
If the plane had manoeuvred to hit the least heavily-protected side of the building, truthers would argue that this also showed that the pilot had uncanny inside knowledge.
"How likely is it that [buildings] would ... explode straight downward?" Well, as a non-expert I would have said a priori that seems unlikely, but the structure of the towers made that failure mode the one that would happen. All you're asking is "how likely is it that the laws of physics would operate?" I'm sure there is some truther analysis disputing that, but then you're back into the realm of imaginary evidence.
"How likely is it that this would result in pools of molten steel?" How likely is it that someone observed pools of molten aluminium, or some other substance, and misinterpreted them as molten steel? After all, you've just said that the steel girders were left behind, so there is some evidence that the fire didn't get hot enough to melt (rather than weaken) steel.