Is an Intelligence Explosion a Disjunctive or Conjunctive Event?

post by XiXiDu · 2011-11-14T11:35:40.518Z · LW · GW · Legacy · 18 comments

Contents

  Disjunctive arguments
  Hidden complexity
  Making numerical probability estimates
  Not enough empirical evidence
  Logical implications
  Hidden disagreement
18 comments

(The following is a summary of some of my previous submissions that I originally created for my personal blog.)

...an intelligence explosion may have fair probability, not because it occurs in one particular detailed scenario, but because, like the evolution of eyes or the emergence of markets, it can come about through many different paths and can gather momentum once it gets started. Humans tend to underestimate the likelihood of such “disjunctive” events, because they can result from many different paths (Tversky and Kahneman 1974). We suspect the considerations in this paper may convince you, as they did us, that this particular disjunctive event (intelligence explosion) is worthy of consideration.

— lukeprog, Intelligence Explosion analysis draft: introduction

It seems to me that all the ways in which we disagree have more to do with philosophy (how to quantify uncertainty; how to deal with conjunctions; how to act in consideration of low probabilities) [...] we are not dealing with well-defined or -quantified probabilities. Any prediction can be rephrased so that it sounds like the product of indefinitely many conjunctions. It seems that I see the “SIAI’s work is useful scenario” as requiring the conjunction of a large number of questionable things [...]

— Holden Karnofsky, 6/24/11 (GiveWell interview with major SIAI donor Jaan Tallinn, PDF)

Disjunctive arguments

People associated with the Singularity Institute for Artificial Intelligence (SIAI) like to claim that the case for risks from AI is supported by years' worth of disjunctive lines of reasoning. This basically means that there are many reasons to believe that humanity is likely to be wiped out as a result of artificial general intelligence. More precisely, it means that not all of the arguments supporting that possibility need to be true: even if all but one of them are false, risks from AI are still to be taken seriously.

The idea of disjunctive arguments is formalized by what is called a logical disjunction. Consider two declarative sentences, A and B. The truth of the conclusion (or output) that follows from the sentences A and B does depend on the truth of A and B. In the case of a logical disjunction the conclusion of A and B is only false if both A and B are false, otherwise it is true. Truth values are usually denoted by 0 for false and 1 for true. A disjunction of declarative sentences is denoted by OR or ∨ as an infix operator. For example, (A(0)∨B(1))(1), or in other words, if statement A is false and B is true then what follows is still true because statement B is sufficient to preserve the truth of the overall conclusion.

Generally there is no problem with disjunctive lines of reasoning as long as the conclusion itself is sound and therefore in principle possible, yet in demand of at least one of several causative factors to become actual. I don’t perceive this to be the case for risks from AI. I agree that there are many ways in which artificial general intelligence (AGI) could be dangerous, but only if I accept several presuppositions regarding AGI that I actually dispute.

By presuppositions I mean requirements that need to be true simultaneously (in conjunction). A logical conjunction is only true if all of its operands are true. In other words, a conclusion might require all of the arguments leading up to it to be true, otherwise it is false. A conjunction is denoted by AND or ∧.
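To make the two operators concrete, here is a minimal sketch in Python (the post itself contains no code; this is purely illustrative):

```python
# Illustrative truth table: a disjunction (OR, ∨) is false only when both
# operands are false; a conjunction (AND, ∧) is true only when both are true.
from itertools import product

for a, b in product([0, 1], repeat=2):
    print(f"A={a} B={b}  A∨B={a | b}  A∧B={a & b}")
```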

Now consider the following prediction: <Mary is going to buy one of thousands of products in the supermarket.>

The above prediction can be framed as a disjunction: Mary is going to buy one of thousands of products in the supermarket 1.) if she is hungry, 2.) if she is thirsty, or 3.) if she needs a new coffee machine. Only one of the three given conditions needs to be true for the overall conclusion to hold, that Mary is going shopping. Or so it seems.

The same prediction can be framed as a conjunction: Mary is going to buy one of thousands of products in the supermarket 1.) if she has money, 2.) if she has some needs, and 3.) if the supermarket is open. All three given factors need to be true in order for the overall conclusion to be true.
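A rough numeric sketch of the conjunctive framing, with made-up probabilities for the requirements (and assuming, purely for illustration, that they are independent):

```python
# Hypothetical numbers: the prediction "Mary buys something" can be no more
# probable than the conjunction of its requirements.
p_has_money = 0.9   # assumed
p_has_needs = 0.8   # assumed
p_shop_open = 0.95  # assumed

# Under the independence assumption, P(all requirements hold) is the product.
p_requirements = p_has_money * p_has_needs * p_shop_open
print(p_requirements)  # ≈ 0.68 -- already lower than any single requirement
```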

That a prediction is framed disjunctively does not, in and of itself, speak in favor of the possibility. I agree that it is likely that Mary is going to visit the supermarket if I accept the hidden presuppositions. But a prediction is at most as probable as its basic requirements. In this particular case I don’t even know whether Mary is a human or a dog, a factor that can influence the probability of the prediction dramatically.

The same is true for risks from AI. The basic argument in favor of risks from AI is that of an intelligence explosion, that intelligence can be applied to itself in an iterative process leading to ever greater levels of intelligence. In short, artificial general intelligence will undergo explosive recursive self-improvement.

Hidden complexity

Explosive recursive self-improvement is one of the presuppositions for the possibility of risks from AI. The problem is that this and other presuppositions are largely ignored and left undefined. All of the disjunctive arguments put forth by the SIAI try to show that there are many causative factors that could result in the development of unfriendly artificial general intelligence. Only one of those factors needs to be true for us to be wiped out by AGI. But the whole scenario is at most as probable as the assumptions hidden in the words <artificial general intelligence> and <explosive recursive self-improvement>.

<Artificial General Intelligence> and <Explosive Recursive Self-Improvement> might appear to be relatively simple and appealing concepts. But most of this superficial simplicity is a result of the vagueness of natural-language descriptions. Reducing the vagueness of those concepts by being more specific, or by coming up with technical definitions of each of the words they are made up of, reveals the hidden complexity that the vagueness of the terms conceals.

If we were going to define those concepts and each of their terms, we would end up with a lot of additional concepts made up of other words or terms. Most of those additional concepts would demand explanations of their own, made up of further speculations. If we are precise, then every declarative sentence (P#) used in the final description will have to be true simultaneously (P#∧P#). This reveals the true complexity of all the hidden presuppositions and thereby influences the overall probability, P(risks from AI) = P(P1∧P2∧P3∧P4∧P5∧P6∧…). A conclusion that is made up of many statements that can each be false is less likely to be true, because a complex argument can fail in many different ways: you need to support each part of the argument, and failing to support even one part undermines the overall conclusion.
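To illustrate the multiplicative effect described above (the individual numbers are invented, and independence is assumed only for simplicity):

```python
# Toy illustration: even if every hidden presupposition is individually likely,
# a conjunction of many of them erodes quickly.
p_each = 0.9  # assumed probability of each presupposition
for n in (1, 5, 10, 20):
    print(n, round(p_each ** n, 3))
# 1 0.9
# 5 0.59
# 10 0.349
# 20 0.122
```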

To summarize: If we tried to pin down a concept like <Explosive Recursive Self-Improvement> we would end up with requirements that are strongly conjunctive.

Making numerical probability estimates

But even if the SIAI were to thoroughly define those concepts, there is still more to the probability of risks from AI than the underlying presuppositions and causative factors. We also have to account for our uncertainty about the very methods we used to come up with those concepts and definitions, and about our ability to make correct predictions about the future, and integrate all of it into our overall probability estimates.

Take for example the following contrived quote:

We have to take over the universe to save it by making the seed of an artificial general intelligence, that is undergoing explosive recursive self-improvement, extrapolate the coherent volition of humanity, while acausally trading with other superhuman intelligences across the multiverse.

Although contrived, the above quote comprises only actual beliefs held by people associated with the SIAI. All of those beliefs might seem like somewhat plausible inferences and logical implications of speculations and of state-of-the-art or bleeding-edge knowledge from various fields. But should we base real-life decisions on those ideas, should we take them seriously? Should we take into account conclusions whose truth value depends on the conjunction of those ideas? And is it wise to make further inferences from those speculations?

Let’s take a closer look at the necessary top-level presuppositions to take the above quote seriously:

  1. The many-worlds interpretation
  2. Belief in the Implied Invisible
  3. Timeless Decision theory
  4. Intelligence explosion

1: Within the LessWrong/SIAI community the many-worlds interpretation of quantum mechanics is proclaimed to be the rational choice among all available interpretations. How to arrive at this conclusion is supposedly also a good exercise in refining the art of rationality.

2: If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X)

In other words, logical implications do not have to pay rent in future anticipations.

3: “Decision theory is the study of principles and algorithms for making correct decisions—that is, decisions that allow an agent to achieve better outcomes with respect to its goals.”

4: “Intelligence explosion is the idea of a positive feedback loop in which an intelligence is making itself smarter, thus getting better at making itself even smarter. A strong version of this idea suggests that once the positive feedback starts to play a role, it will lead to a dramatic leap in capability very quickly.”

To be able to take the above quote seriously you have to assign a non-negligible probability to the truth of the conjunction of #1,2,3,4, 1∧2∧3∧4. Here the question is not only whether our results are sound but whether the very methods we used to come up with those results are sufficiently trustworthy. Any extraordinary conclusion implied by the conjunction of various beliefs might outweigh the benefit of each individual belief if the overall conclusion is just slightly wrong.
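As a purely hypothetical illustration of that point (the numbers below are invented, and the four presuppositions are treated as independent, which is itself debatable):

```python
# Even generous individual probabilities give a noticeably smaller conjunction.
presuppositions = {
    "many-worlds interpretation": 0.8,       # hypothetical
    "belief in the implied invisible": 0.9,  # hypothetical
    "timeless decision theory": 0.6,         # hypothetical
    "intelligence explosion": 0.5,           # hypothetical
}

joint = 1.0
for name, prob in presuppositions.items():
    joint *= prob
print(round(joint, 3))  # 0.216
```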

Not enough empirical evidence

Don’t get me wrong, I think that there sure are convincing arguments in favor of risks from AI. But do arguments suffice? Nobody is an expert when it comes to intelligence. My problem is that I fear that some convincing blog posts written in natural language are simply not enough.

Just imagine that all there was to climate change was someone who never studied the climate but instead wrote some essays about how it might be physically possible for humans to cause global warming. If the same person then goes on to make further inferences based on the implications of those speculations, am I going to tell everyone to stop emitting CO2 because of that? Hardly!

Or imagine that all there was to the possibility of asteroid strikes was someone who argued that there might be big chunks of rock out there that might fall on our heads and kill us all, based inductively on the fact that the Earth and the Moon are also big rocks. Would I be willing to launch a billion-dollar asteroid deflection program solely on the basis of such speculations? I don’t think so.

Luckily, in both cases, we got a lot more than some convincing arguments in support of those risks.

Another example: If there were no studies about the safety of high energy physics experiments then I might assign a 20% chance of a powerful particle accelerator destroying the universe based on some convincing arguments put forth on a blog by someone who never studied high energy physics. We know that such an estimate would be wrong by many orders of magnitude. Yet the reason for being wrong would largely be a result of my inability to make correct probability estimates, the result of vagueness or a failure of the methods I employed to come up with those estimates. The reason for being wrong by many orders of magnitude would have nothing to do with the arguments in favor of the risks, as they might very well be sound given my epistemic state and the prevalent uncertainty.

I believe that mere arguments in favor of one risk do not suffice to neglect other risks that are supported by other kinds of evidence. I believe that logical implications of sound arguments should not reach out indefinitely and thereby outweigh other risks whose implications are fortified by empirical evidence. Sound arguments, predictions, speculations and their logical implications are enough to demand further attention and research, but not much more.

Logical implications

Artificial general intelligence is already an inference from what we currently believe to be true. Going a step further and drawing additional inferences from previous speculations, e.g. explosive recursive self-improvement, is in my opinion a very shaky business.

What would happen if we were going to let logical implications of vast utilities outweigh other concrete near-term problems that are based on empirical evidence? Insignificant inferences might exhibit hyperbolic growth in utility: 1.) There is no minimum amount of empirical evidence necessary to extrapolate the expected utility of an outcome. 2.) The extrapolation of counterfactual alternatives is unbounded, logical implications can reach out indefinitely without ever requiring new empirical evidence.
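A toy expected-value comparison of the dynamic described above; all numbers are invented and serve only to show how a large enough claimed utility can swamp an empirically grounded alternative:

```python
# Invented numbers: a well-evidenced near-term risk vs. a speculative outcome
# reached through a long chain of logical implications.
p_near_term, u_near_term = 0.10, 1e6       # concrete risk, modest stakes
p_speculative, u_speculative = 1e-9, 1e20  # negligible probability, astronomical stakes

print(p_near_term * u_near_term)      # 100000.0
print(p_speculative * u_speculative)  # 1e+11 -- dominates despite the tiny probability
```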

Hidden disagreement

All of the above hints at a general problem, and it is the reason why I think that discussions between people associated with the SIAI, its critics, and those who try to evaluate the SIAI won’t lead anywhere. Those discussions miss the underlying reason for most of the superficial disagreement about risks from AI, namely that there is no disagreement about risks from AI in and of itself.

There are a few people who disagree about the possibility of AGI in general, but I don’t want to touch on that subject in this post. I am trying to highlight the disagreement between the SIAI and people who accept the notion of artificial general intelligence. With regard to those who are not skeptical of AGI, the problem becomes more obvious when you turn your attention to people like John Baez and organisations like GiveWell. Most people would sooner question their grasp of “rationality” than give five dollars to a charity that tries to mitigate risks from AI because their calculations claim that doing so would be “rational” (those who have read the article by Eliezer Yudkowsky on Pascal’s Mugging will recognize that I slightly rephrased a statement from that post). The disagreement all comes down to a general averseness to options that have a low probability of being factual, even given that the stakes are high.

Nobody has so far been able to beat arguments that bear resemblance to Pascal’s Mugging, at least not by showing that it is irrational to give in from the perspective of a utility maximizer. One can only reject it based on a strong gut feeling that something is wrong. And I think that is what many people are unknowingly doing when they argue against the SIAI or risks from AI. They are signaling that they are unable to take such risks into account. When most people doubt the reputation of people who claim that risks from AI need to be taken seriously, or say that AGI might be far off, what they mean is that risks from AI are too vague to be taken into account at this point, that nobody knows enough to make predictions about the topic right now.

When GiveWell, a charity evaluation service, interviewed the SIAI (PDF), they hinted at the possibility that one could consider the SIAI to be a sort of Pascal’s Mugging:

GiveWell: OK. Well that’s where I stand – I accept a lot of the controversial premises of your mission, but I’m a pretty long way from sold that you have the right team or the right approach. Now some have argued to me that I don’t need to be sold – that even at an infinitesimal probability of success, your project is worthwhile. I see that as a Pascal’s Mugging and don’t accept it; I wouldn’t endorse your project unless it passed the basic hurdles of credibility and workable approach as well as potentially astronomically beneficial goal.

This shows that a lot of people do not doubt the possibility of risks from AI but are simply not sure if they should really concentrate their efforts on such vague possibilities.

Technically, from the standpoint of maximizing expected utility, and given the absence of other existential risks, the answer might very well be yes. But even though we believe we understand this technical viewpoint of rationality very well in principle, it also leads to problems such as Pascal’s Mugging. And it doesn’t take a true Pascal’s Mugging scenario to make people feel deeply uncomfortable with what Bayes’ Theorem, the expected utility formula, and Solomonoff induction seem to suggest one should do.

Again, we currently have no rational way to reject arguments that are framed as predictions of worst-case scenarios which need to be taken seriously, even given a low probability of their occurrence, because of the scale of the negative consequences associated with them. Many people are nonetheless reluctant to accept this line of reasoning without further evidence supporting the strong claims and requests for money made by organisations such as the SIAI.

Here is what mathematician and climate activist John Baez has to say:

Of course, anyone associated with Less Wrong would ask if I’m really maximizing expected utility. Couldn’t a contribution to some place like the Singularity Institute of Artificial Intelligence, despite a lower chance of doing good, actually have a chance to do so much more good that it’d pay to send the cash there instead?

And I’d have to say:

1) Yes, there probably are such places, but it would take me a while to find the one that I trusted, and I haven’t put in the work. When you’re risk-averse and limited in the time you have to make decisions, you tend to put off weighing options that have a very low chance of success but a very high return if they succeed. This is sensible so I don’t feel bad about it.

2) Just to amplify point 1) a bit: you shouldn’t always maximize expected utility if you only live once. Expected values — in other words, averages — are very important when you make the same small bet over and over again. When the stakes get higher and you aren’t in a position to repeat the bet over and over, it may be wise to be risk averse.

3) If you let me put the $100,000 into my retirement account instead of a charity, that’s what I’d do, and I wouldn’t even feel guilty about it. I actually think that the increased security would free me up to do more risky but potentially very good things!

All this shows that there seems to be a fundamental problem with the formalized version of rationality. The problem might be human nature itself: some people are unable to accept what they should do if they want to maximize their expected utility. Or we are missing something else and our theories are flawed. Either way, to solve this problem we need to research those issues and thereby increase our confidence in the very methods used to decide what to do about risks from AI, or increase our confidence in risks from AI directly, enough to make working on them look like a sensible option, a concrete and discernible problem that needs to be solved.

Many people perceive the whole world to be at stake, whether due to climate change, war or engineered pathogens. Telling them about something like risks from AI, when nobody seems to have any idea about the nature of intelligence, let alone general intelligence or the possibility of recursive self-improvement, seems like just another problem, one that is too vague to outweigh all the other risks. Most people feel as if a gun were already pointed at their heads; telling them about superhuman monsters that might turn them into paperclips then requires some really good arguments to outweigh the combined risk of all the other problems.

But there are many other problems with risks from AI. To give a hint at just one example: if there was a risk that might kill us with a probability of .7 and another with a probability of .1, while our chance of solving the first one was .0001 and the second one .1, which one should we focus on? In other words, our decision to mitigate a certain risk should not only be based on the probability of its occurrence but also on the probability of success in solving it. But as I have written above, I believe that the most pressing issue is to increase our confidence in making decisions under extreme uncertainty, or to reduce the uncertainty itself.
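The example above, worked out under the simple (and admittedly crude) rule that the expected value of working on a risk is roughly P(risk occurs) × P(we succeed in solving it):

```python
# Crude expected-value comparison of the two hypothetical risks above.
risk_a = 0.7 * 0.0001  # high-probability risk, almost intractable  (≈ 0.00007)
risk_b = 0.1 * 0.1     # low-probability risk, more tractable       (= 0.01)
print(risk_a, risk_b)  # the less probable but more tractable risk wins here
```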

18 comments

Comments sorted by top scores.

comment by Shmi (shminux) · 2011-11-14T17:18:47.143Z · LW(p) · GW(p)

Is an Intelligence Explosion a Disjunctive or Conjunctive Event?

I think you answered it yourself: it is a false dichotomy. Any Boolean function (such as "humanity is wiped out by intelligence explosion: true or false?") can be represented as a disjunction of conjunctive terms, or, by De Morgan, as a conjunction of disjunctive terms. Take your pick.
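A small sketch of that point, using SymPy (assuming it is available; the formula is arbitrary):

```python
# The same Boolean claim written as a disjunction of conjunctive terms (DNF)
# and as an equivalent conjunction of disjunctive terms (CNF).
from sympy import symbols, to_cnf, to_dnf

A, B, C = symbols("A B C")
expr = (A & B) | (~A & C)  # arbitrary example formula

print(to_dnf(expr))  # disjunction of conjunctions
print(to_cnf(expr))  # logically equivalent conjunction of disjunctions
```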

Oh, and if you state that the MWI is a conjunctive term, I would discount the whole thing as based on a manifestly untestable assumption.

comment by Larks · 2011-11-14T15:30:53.708Z · LW(p) · GW(p)

We have to take over the universe to save it by making the seed of an artificial general intelligence, that is undergoing explosive recursive self-improvement, extrapolate the coherent volition of humanity, while acausally trading with other superhuman intelligences across the multiverse.

...

Let’s take a closer look at the necessary top-level presuppositions to take the above quote seriously:

  1. The many-worlds interpretation
  2. Belief in the Implied Invisible
  3. Timeless Decision theory
  4. Intelligence explosion

To be able to take the above quote seriously you have to assign a non-negligible probability to the truth of the conjunction of #1,2,3,4, 1∧2∧3∧4.

I think you're being unfair here. Presumably you think we need Many-Worlds for acausal trade, but this is far from obvious. Possible Worlds would do it too, and there are various decision theoretic ideas that make sense of it in a single world. Even beyond that though, it's not obvious that acausal trade is particularly important to SIAI's main thesis. SIAI wants to (maybe) build a seed AI, not do acausal trade.

  1. seems trivially true to me. While we could wrap it up in measure theory, it seems about as obvious as any piece of mathematics.

And lots of work has gone into non-TDT theories - other UDTs, like the one Stuart's recently been discussing. Even then I don't see why ¬UDT -> ¬Intelligence Explosion.

am I going to tell everyone to stop emitting CO2 because of that?

This is a bad example; it's equally possible that we might be emitting too little CO2. There's a symmetry here that isn't obviously present in the AI case.

Nobody has so far been able to beat arguments that bear resemblance to Pascal’s Mugging... One can only reject it based on a strong gut feeling that something is wrong.

This is untrue; bounded utility functions. Maybe those aren't a good idea for other reasons, but there are systems that don't get mugged for better reasons than their gut.
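A toy sketch (all numbers invented) of how a bounded utility function blunts a mugging-style offer:

```python
# A mugger claims an astronomically large payoff with a tiny probability.
p_mugger = 1e-15
claimed_payoff = 1e30
utility_cap = 1e6  # arbitrary bound on the utility function

unbounded_eu = p_mugger * claimed_payoff                  # 1e15 -> the offer dominates
bounded_eu = p_mugger * min(claimed_payoff, utility_cap)  # 1e-09 -> safely ignored
print(unbounded_eu, bounded_eu)
```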

comment by [deleted] · 2011-11-14T21:16:32.818Z · LW(p) · GW(p)

Good post! I disagree with your conclusions in general, but I like your writing and your choice of subject.

I believe that logical implications of sound arguments should not reach out indefinitely and thereby outweigh other risks whose implications are fortified by empirical evidence.

Evidence is evidence, plausible reasoning is plausible reasoning. I don’t accept the distinction that you are trying to make.

If you think that some particular inference is doubtful, then that’s fine. Perhaps you attach a probability of only 0.05 to the idea that a smarter-than-human AI built by humans would recursively self-improve. But then you should phrase your disagreement with proponents of the intelligence explosion scenario in those terms (your probability estimate differs from theirs), not by referring to an ostensible problem in arguing under uncertainty in general.

Note that there is no rigorous distinction between implications grounded in empirical evidence, and “speculation”. There is uncertainty attached to everything, and it only varies quantitatively.

a lot of people do not doubt the possibility of risks from AI but are simply not sure if they should really concentrate their efforts on such vague possibilities.

There’s no such thing as a “vague possibility”. I hope I’m not unfairly picking on your English (which is rather good for a non-native speaker); this problem seems to tie in with your earlier statements. A probability is just a probability.

If I were to guess what you are grasping towards it is this: “For someone who is not an intellect of the highest calibre, working through the arguments in favour of relatively speculative future scenarios like the intelligence explosion could be massively time-consuming. And I might find myself simply unable to understand the concepts involved, even given time. Therefore I reserve the right to attach a low probability to these scenarios without providing specific objections to the inferences involved.”

I’m not sure what to think, if someone were to make that claim. On one hand, bounded rationality seems to apply. On the other, the idea of recursively self-improving intelligence doesn’t seem all that complex to me. It seems like it might be a fully general excuse.

I would probably criticise such a claim to the extent that the person making the claim is intelligent, that the argument in question is simple, and that I believe his prior assigns a generally high credibility to the non-mainstream beliefs of theorists like Yudkowsky.

comment by jhuffman · 2011-11-14T18:07:28.820Z · LW(p) · GW(p)

The disagreement all comes down to a general averseness to options that have a low probability of being factual, even given that the stakes are high.

I think we get a similar artifact when considering cryonics. Even after accepting materialist notions of identity, the feasibility of uploading and present technical capabilities, I'm still not on the same page as much of LW. The reason is that I consider my personal prospects for revival to be vanishingly small, due to pessimism about my future utility to anyone who might have the resources to sim me. Other LWers may put the probability in the same ballpark but then multiply by the nearly unbounded possible utility of a second life and find they should go ahead and sign up. I just don't seem to be able to take significant action based upon what I consider to be low probabilities.

I actually think AGI is both possible and probable, so in this particular example the article speaks to, I'm more in the LW camp.

comment by amcknight · 2011-11-15T02:43:12.781Z · LW(p) · GW(p)

Nice, well-written post. You definitely show the possibility that AI risk is unlikely, because recursive self-improvement could be a conjunctive scenario. But without a better sketch of what conjunctions are required for recursive self-improvement (or AGI), you've only succeeded in keeping the possibility open without actually arguing for a lack of risk. I think you've created a great starting point for a Hypothetical Apostasy for those here who believe strongly in the SIAI. Ultimately though, a healthy discussion about any actual conjunctions involved is what it now takes to decide whether there are risks from AI.

My (10 minutes attempted) challenge to whether there exists a conjunction:

  • Self-improvement is a useful instrumental goal for most imaginable systems with goals.
  • Recursive improvement is implied by the huge room for improvement of... pretty much anything, but specifically, systems with goals. (EDIT: XiXiDu's next post addresses and disagrees with this)
  • AI programmers are creating systems with goals.
  • One might some day be powerful/intelligent enough to realize many of its instrumental goals.

That seems to be all it takes. Are there other relevant factors I'm forgetting? I'd say the first 3 have a probability of .98+. The 4th is what SIAI is trying to deal with.
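A rough numeric version of the commenter's estimate (using the .98 guesses above and assuming independence, which is itself debatable): since 0.98³ ≈ 0.94, the probability of the whole conjunction is dominated by whatever probability the fourth premise gets.

```python
# The commenter's guesses: ~0.98 each for the first three premises.
p_first_three = 0.98 ** 3  # ≈ 0.941
for p_fourth in (0.1, 0.5, 0.9):  # hypothetical values for the fourth premise
    print(p_fourth, round(p_first_three * p_fourth, 3))
# 0.1 0.094
# 0.5 0.471
# 0.9 0.847
```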

comment by torekp · 2011-11-15T02:34:14.987Z · LW(p) · GW(p)

All this shows that there seems to be a fundamental problem with the formalized version of rationality.

We already have reason to think that there is.

comment by Donald Hobson (donald-hobson) · 2021-03-04T13:37:50.869Z · LW(p) · GW(p)

Now some have argued to me that I don’t need to be sold – that even at an infinitesimal probability of success, your project is worthwhile. I see that as a Pascal’s Mugging and don’t accept it;

 

If a plan looks kind of convincing, but less than airtight, there is a big range of probabilities, say around 20%, where you're not convinced and it isn't a Pascal's mugging. If you have a team of reasonably smart people trying to make a fusion reactor or something, even if you don't think they are trying quite the right approach, I wouldn't assign them an exponentially minute probability of success.

comment by Donald Hobson (donald-hobson) · 2021-03-04T13:33:24.712Z · LW(p) · GW(p)

When most people doubt the reputation of people who claim that risks from AI need to be taken seriously, or say that AGI might be far off, what they mean is that risks from AI are too vague to be taken into account at this point, that nobody knows enough to make predictions about the topic right now.

So suppose you have some weak evidence that something might be a problem. One sensible course of action is to invest resources in studying and measuring the problem. Even a vague idea that AGI might be created at some point, and that it is possible for AGI to go wrong, would suggest the specific action of setting up a research institute.

comment by Donald Hobson (donald-hobson) · 2021-03-04T13:27:49.891Z · LW(p) · GW(p)

Another example: If there were no studies about the safety of high energy physics experiments then I might assign a 20% chance of a powerful particle accelerator destroying the universe based on some convincing arguments put forth on a blog by someone who never studied high energy physics. We know that such an estimate would be wrong by many orders of magnitude.

That's the way probabilities work. You assign 50/50 to the coin flip before you see the results. If you take 5 somewhat convincing arguments that there might be a problem here, and one turns out to point at an actual problem, 20% is a good assignment.

comment by Emile · 2011-11-14T14:15:43.965Z · LW(p) · GW(p)

A note in passing while I read this: I really wish you would try and write in a plain and terse way, avoiding fancy words and constructions... sentences like this:

Those discussions miss the underlying reason for most of the superficial disagreement about risks from AI, namely that there is no disagreement about risks from AI in and of itself.

... are hard to parse (does "in and of itself" correspond to a common word in German?).

(A bit that could have been skipped is the explanation of infix operators.)

Sorry for only commenting about the form, not the content - I'm not done reading the post yet; I find it somewhat laborious to read and am taking a breathing pause.

Replies from: None, XiXiDu
comment by [deleted] · 2011-11-14T14:52:37.212Z · LW(p) · GW(p)

Huh, interesting observation.

If I encountered this straightforward German translation with the same structure in a German text:

Diese Diskussionen ignorieren den wahren Grund für viele der scheinbaren Meinungsverschiedenheiten über KI-Risiken, nämlich dass es keinen Streit über KI-Risiken an sich gibt.

I wouldn't even have blinked. Totally normal amount of complexity to me. These embedded explanations and minor remarks are very typical of German writing (and Japanese, as far as I can tell).

Maybe some advice to German writers: every time you want to add a comma, end your sentence instead. Yes, really.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-14T15:32:21.402Z · LW(p) · GW(p)

That's a perfect translation. It expresses exactly what I tried to state. I will think about how to rephrase it so as to make it more digestible for others.

This is not meant as an excuse, just an explanation. I never learnt English in school, only the very basics (Abgangszeugnis Hauptschule Klasse 9). And I am not yet at the point where I perceive a course in formal English (or German) to be a priority. I am concentrating on learning math right now (just started with Calculus a while ago).

comment by XiXiDu · 2011-11-14T15:43:02.327Z · LW(p) · GW(p)

... are hard to parse (does "in and of itself" correspond to a common word in German?).

See the translation by muflax; "an sich" is what I am looking for. The best translations I could find are the following:

in and of itself, as such, intrinsically, per se

None of the above really express the German connotation though. I'll see what I can do.

I really wish you would try and write in a plain and terse way, avoiding fancy words and constructions

I didn't do it on purpose. I will try to express myself in as simple terms as possible in the future.

Replies from: Emile, jmmcd
comment by Emile · 2011-11-14T16:17:11.587Z · LW(p) · GW(p)

For what it's worth, I didn't think you were doing it on purpose - your post doesn't have the use-fancy-words-to-get-a-good-grade vibe one sometimes encounters on the 'net from people whose writing has been warped by the education system.

French writers have a similar problem of accidentally sounding too formal: many French words have a direct English equivalent (cognate) that is more formal (e.g. "mortal" instead of "deadly", though in that case there's also a subtle change of meaning).

comment by jmmcd · 2011-11-15T18:30:13.270Z · LW(p) · GW(p)

I didn't find your style to be difficult to read. The quoted sentence is fine and in no way fancy. I review and write academic papers all the time so I know bad prose when I see it.

comment by Manfred · 2011-11-14T23:35:55.804Z · LW(p) · GW(p)

The first half just needs to cut the dichotomy to be better, but I was strongly unsatisfied with the second half.

Humans are bad at choosing what to do in situations we're not used to or trained in, because our intuitive judgments don't work so good. If you would like me to support this claim further, I know a nice blog I could point you to.

Your argument in the last bits boils down to "valuing options in direct proportion to their probability is unintuitive, therefore we shouldn't do it," and has gigantic red flags plastered to it. Gotta watch out for those red flags.

comment by Larks · 2011-11-14T15:29:40.035Z · LW(p) · GW(p)

You seem to be using logical terminology in a non-standard way. I'm not sure if this has any bearing on your conclusion (though there does seem to be a risk of confusion with 'causative'), but I thought you might like to learn the standard terminology; it's hard to pick up if you don't have a philosophy background. If you intended to make some distinction by your usage, I missed it, I'm afraid.

The idea of disjunctive arguments is formalized by what is called a logical disjunction. Consider two declarative sentences, A and B. The truth of the conclusion (or output) that follows from the sentences A and B does depend on the truth of A and B. In the case of a logical disjunction the conclusion of A and B is only false if both A and B are false, otherwise it is true... For example, (A(0)∨B(1))(1), or in other words, if statement A is false and B is true then what follows is still true because statement B is sufficient to preserve the truth of the overall conclusion.

Many conclusions follow from {A,B} whose truth value doesn't depend on the truth value of B - like A, or (Pv¬P). You probably mean that the truth values of the conjunction (AnB) and the disjunction (AvB) depend on the truth values of A and B; you're confusing conclusions, which are things arguments have, with disjunctions and conjunctions, which have truth values. A conjunctive (disjunctive) argument is simply an argument with conjunctive (disjunctive) premises.

Generally there is no problem with disjunctive lines of reasoning as long as the conclusion itself is sound and therefore in principle possible, yet in demand of at least one of several causative factors to become actual. I don’t perceive this to be the case for risks from AI. I agree that there are many ways in which artificial general intelligence (AGI) could be dangerous, but only if I accept several presuppositions regarding AGI that I actually dispute.

Soundness is a property of arguments, not conclusions, and possibility is a modal notion that you probably don't want to bring in. I think you mean:

"Disjunctive arguments are powerful because the probability of the conclusion can be higher than the probability of the disjuncts. However, if each of these disjuncts is in fact a conjunction, then the the disjuncts are a lot less probable than they might appear, which makes the conclusion a lot less probable. You might try transforming the premise into conjunctive normal form to see how conjunctive the argument really is."

By presuppositions I mean requirements that need to be true simultaneously (in conjunction). A logical conjunction is only true if all of its operands are true. In other words, a conclusion might require all of the arguments leading up to it to be true, otherwise it is false. A conjunction is denoted by AND or ∧.

Again, you're confusing arguments and formulas. A conjunction (AnB) is true iff A is true and B is true. The conclusion of a conjunctive argument, (AnB) |- C, is necessarily true if (AnB) is, but might be true even if (AnB) isn't.

Also, instead of defining 'presuppositions', which already has a different role in logic and language (e.g. my saying "The present king of France is bald" might be thought to presuppose that there is a present king of France, if we follow Strawson rather than Russell.), you could simply talk about the logical implications: if A must be true for B to hold, then (A->B) is true.

comment by Zed · 2011-11-14T14:53:47.869Z · LW(p) · GW(p)

My view about global rationality is similar to the view of John Baez about individual risk-aversion. An individual should typically be cautious because the maximum downside (destruction of your brain) is huge even for day-to-day actions like crossing the street. In the same way, we have only one habitable planet and one intelligent species. If we (accidentally) destroy either we're boned. Especially when we don't know exactly what we're doing (as is the case with AI) caution should be the default approach, even if we were completely oblivious to the concept of a singularity.

that the most pressing issue is to increase our confidence in making decisions under extreme uncertainty, or to reduce the uncertainty itself.

I disagree, it's not the most pressing issue. In a sufficiently complex system there are always going to be vectors we poorly understand. The problem here is that we have a global society where it becomes harder every year for a single part to fail independently of the rest. A disease or pathogen is sure to spread to all parts of the world, thanks to our infrastructure. Failure of the financial markets affects the entire world because the financial markets too are intertwined. Changes in the climate also affect the entire globe, not just the countries that pollute. An unfriendly AI cannot be contained either. Everywhere you look there are now single points of failure. The more connected our world becomes, the more vulnerable we become to black swan events that rock the world. Therefore, the more cautious we have to be. The strategy we used in the past 100,000 years (blindly charge forward) got us where we are today, but it isn't very good anymore. If we don't know exactly what we're doing we should make absolutely sure that all worst case scenarios affect only a small part of the world. If we can't make such guarantees then we should probably be even more reluctant to act at all. We must learn to walk before we can run.

Under extreme uncertainty we cannot err on the side of caution. We can reduce uncertainty somewhat (by improving our estimates) but there is no reason to assume we will take all significant factors into account. If you start out with a 0.001 probability of killing all of humanity there is no amount of analysis that can rationally lead to the conclusion "eh, whatever, let's just try it and see what happens", because the noise in our confidence will exceed a few parts in a million at the least, which is already an unacceptable level of risk. It took billions of years for evolution to get us to this point. We can now mess it up in the next 1000 years or so because we're in such a damn hurry. That'd be a shame.