Posts

A belief propagation graph 2012-05-10T04:23:45.925Z
Complexity based moral values. 2012-04-06T17:09:29.708Z
Pascal's mugging and Bayes 2012-03-30T19:55:44.917Z
Deadlines and AI theory 2012-03-23T10:57:39.679Z
Open question on the certain 'hot' global issue of importance to FAI 2012-03-23T08:20:46.752Z
Better to be testably wrong than to generate nontestable wrongness 2012-03-20T19:04:15.290Z
Saturating utilities as a model 2012-03-19T21:17:31.857Z
The AI design space near the FAI [draft] 2012-03-18T10:29:41.859Z
Evolutionary psychology: evolving three eyed monsters 2012-03-16T21:28:24.339Z
Conjunction fallacy and probabilistic risk assessment. 2012-03-08T15:07:13.934Z
Which drives can survive intelligence's self modification? 2012-03-06T17:33:30.103Z
[draft] Generalizing from average: a common fallacy? 2012-03-05T11:22:01.690Z
Avoid making implicit assumptions about AI - on example of our universe. [formerly "intuitions about AIs"] 2012-02-27T10:42:09.892Z
Superintelligent AGI in a box - a question. 2012-02-23T18:48:25.819Z
Self awareness - why is it discussed as so profound? 2012-02-22T13:58:28.124Z
Brain shrinkage in humans over past ~20 000 years - what did we lose? 2012-02-18T22:17:41.406Z
[LINK] Computer program that aces 'guess next' in IQ test 2012-02-16T09:01:23.172Z
3^^^3 holes and <10^(3*10^31) pigeons (or vice versa) 2012-02-10T01:25:55.463Z
Deciding what to think about; is it worthwhile to have universal utility function? 2012-02-01T09:44:20.543Z
Describe the ways you can hear/see/feel yourself think. 2012-01-27T14:32:06.340Z
Raising awareness of existential risks - perhaps explaining at "personally stocking canned food" level? 2012-01-24T16:17:41.751Z
Neurological reality of human thought and decision making; implications for rationalism. 2012-01-22T14:39:32.255Z
On accepting an argument if you have limited computational power. 2012-01-11T17:07:07.250Z
Newcomb's problem - one boxer's introspection. 2012-01-01T15:16:40.793Z
Rationality of sometimes missing the point of the stated question, and of certain type of defensive reasoning 2011-12-29T13:09:35.483Z

Comments

Comment by Dmytry on [deleted post] 2012-04-14T21:43:31.792Z

"You might wish to read someone who disagrees with you:"

Quoting from

http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

To say that a system of any design is an “artificial intelligence”, we mean that it has goals which it tries to accomplish by acting in the world.

I had been wondering: could it be that a respected computer vision expert really believes that world intentionality will just emerge in such a system? That would be pretty odd. Then I see that this is his definition of AI here; it already presumes a robust implementation of world intentionality, which is precisely what a tool like an optimizing compiler lacks.

edit: and in advance of another objection: I know evolution can produce whatever the argument demands. Evolution, however, is a very messy and inefficient process for making very messy and inefficient solutions to problems nobody has ever even defined.

Comment by Dmytry on [deleted post] 2012-04-14T21:15:13.873Z

I'm mainstream, you guys are fringe, do you understand? I am informing you that you are not only unconvincing, but look like complete clowns who don't know big O from a letter of the alphabet. I know you want to do better than this. And I know some of the people here have technical knowledge.

Comment by Dmytry on [deleted post] 2012-04-14T19:49:53.368Z

What I am certain of is that your provided argument does not support, or even strongly imply your stated thesis.

I know this. I am not making an argument here (or rather, I am trying not to). I'm stating my opinion, primarily on the presentation of the argument. If you want an argument, you can e.g. see what Hanson has to say about foom. It is deliberately this way. I am not some messiah hell-bent on rescuing you from some wrongness (that would be crazy).

Comment by Dmytry on [deleted post] 2012-04-14T19:41:03.057Z

value states of the world instead of states of their minds

Easier said than done. Valuing the state of the world is hard; you have to rely on senses.

Comment by Dmytry on [deleted post] 2012-04-14T19:39:20.990Z

Okay, then, you're right: the manner of presentation of the AI risk issue on lesswrong somehow makes a software developer respond with incredibly bad and unsubstantiated objections.

Why is it that when a bunch of people get together, they don't even try to evaluate the impression they make on one individual (except very abstractly)?

Comment by Dmytry on [deleted post] 2012-04-14T19:30:52.772Z

Precisely, thank you! I hate arguing such points. Just because you can say something in English does not make it a utility function in the mathematical sense. Furthermore, just because in English it sounds like a modification of a utility function does not mean that it is mathematically a modification of a utility function. Real-world intentionality seems to be a separate problem from making a system that would figure out how to solve problems (mathematically defined problems), and likely a very hard one (in the sense of being very difficult to define mathematically).

Comment by Dmytry on [deleted post] 2012-04-14T17:39:26.713Z

With all of them? How so?

Comment by Dmytry on [deleted post] 2012-04-14T08:48:17.358Z

If even widely read bloggers like EY don't qualify to affect your opinions, it sounds as though you're ignoring almost everyone.

I think you discarded one of the conditionals. I read Bruce Schneier's blog. Or Paul Graham's. Furthermore, it is not about disagreement with the notion of AI risk. It's about keeping the data non-cherry-picked, or at least less cherry-picked.

Comment by Dmytry on [deleted post] 2012-04-14T08:37:02.420Z

Thanks. Glad you like it. I did put some work into it. I also have a habit of maintaining epistemic hygiene by not generating a hypothesis first and then cherry-picking examples in support of it later, but that gets a lot of flak outside scientific or engineering circles.

Comment by Dmytry on [deleted post] 2012-04-14T08:32:11.743Z

To someone that wants to personally exist for a long time, it becomes very relevant what part humans have in the future.

I think this is an awesome point I overlooked. That talk of the future of mankind, that assigning of moral value to future humans but zero to the AI itself... it does actually make a lot more sense in the context of self-preservation.

Comment by Dmytry on [deleted post] 2012-04-14T08:26:45.585Z

I didn't go down any road for confirmation. I put your single testimony in a more realistic perspective. Not believing one person who seems to have a highly emotional agenda isn't 'cultish', it's just practical.

I think you grossly overestimate how much of an emotional agenda disagreement with counterfactual people can produce.

edit: botched the link.

Comment by Dmytry on [deleted post] 2012-04-14T08:13:59.250Z

Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to the AI's experiences. The future humans, who may share very few moral values with me, are given nonzero moral utility. The AIs that start from human culture and use it as a starting point to develop something awesome and beautiful are given zero weight. That is very worrying. When your morality is narrow, others can't trust you. What if you were to assume I am a philosophical zombie? What if I am not reflective enough for your taste? What if I am reflective in a very different way? (Someone has suggested this as a possibility.)

Comment by Dmytry on [deleted post] 2012-04-14T07:54:21.938Z

It's not irrational, it's just weak evidence.

Why is it necessarily weak? I found it very instrumentally useful to try to factor out the belief-propagation impact of people with nothing clearly impressive to show. There is a small risk that I miss some useful insights. In exchange, there is much less pollution from privileged hypotheses given wrong priors. I am a computationally bounded agent. I can't process everything.

Comment by Dmytry on [deleted post] 2012-04-14T07:32:25.977Z

This is another example of a method of thinking I dislike: thinking by very loaded analogies, and implicit framing in terms of a zero-sum problem. We are stuck on a mud ball with severe resource competition. We are very biased to see everything as a zero- or negative-sum game by default. One could easily imagine an example where we expand more slowly than the AI, so that our demands are always less than its charity, which is set at a constant percentage. Someone else winning doesn't imply you are losing.

Comment by Dmytry on [deleted post] 2012-04-14T07:14:14.609Z

It's just that I don't believe you folks really are this greedy for the sake of mankind, or assume such linear utility functions. If we could just provide food, shelter, and reasonable protection of human beings from other human beings, for everyone, a decade earlier, that, in my book, outweighs all the difference between immense riches and more immense riches sometime later. (edit: if the difference ever materializes; it may be that at any moment in time we are still ahead)

On top of that, if you fear WBEs self-improving - don't we lose the ability to become WBEs, and to become smarter, under the rule of a friendly AI? Now, you have some perfect oracle in your model of the AI, and it concludes that this is okay, but I do not have a model of a perfect oracle in the AI, and it is abundantly clear that an AI of any power can't predict the outcome of allowing WBE self-improvement, especially under ethical constraints that forbid boxed emulation (and even if it could, there's the immense amount of computational resources taken up by the FAI to do this). Once again, the typically selective avenue of thought: you don't apply each argument to both FAI and AI to make a valid comparison. I do know that you have already thought a lot about this issue (but I don't think you thought straight; this is not formal mathematics, where the inferences do not diverge from sense with the number of steps taken; it is fuzzy verbal reasoning, where they unavoidably do). You jump right onto the interpretation of what I think that is most favourable to you.

Comment by Dmytry on [deleted post] 2012-04-14T07:05:40.797Z

Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer's individual biases.

Not if there is self-selection for coincidence of their biases with Eliezer's. Even worse if the reasoning you outlined is employed to lower risk estimates.

Comment by Dmytry on [deleted post] 2012-04-14T06:53:43.011Z

e.g. the lone genius point basically amounts to ad hominem

But why is it irrational, exactly?

but empirically, people trying to do things seems to make it more likely that they get done.

As long as they don't lean on this heuristic too hard when choosing which path to take. Suppose it can be shown that some non-explicitly-friendly AGI design is extremely safe, while FAI is a case of higher risk but a chance at a slightly better payoff. Are you sure that the latter is what has to be chosen?

Comment by Dmytry on [deleted post] 2012-04-14T06:49:14.510Z

The hyper-foom is the worst. The cherry-picked filtering of what to advertise is also pretty bad.

Comment by Dmytry on [deleted post] 2012-04-14T06:47:23.175Z

EY founded it. Everyone else is self-selected for joining (as you yourself explained), and represents extreme outliers as far as I can tell.

Comment by Dmytry on [deleted post] 2012-04-14T06:39:54.011Z

For every one of those people you can have one, or ten, or a hundred, or a thousand, who dismissed your cause. Don't go down this road for confirmation; that's how self-reinforcing cults are made.

Comment by Dmytry on [deleted post] 2012-04-14T06:35:48.304Z

Yudkowsky has spent more time on the topic than any of the others on this list, and has specific conclusions that are more idiosyncratic (especially the combination of views on many subjects), but the basic ideas are not so rare or privileged that they do not recur independently among many folk, including subject matter experts.

The argument is about the insights coming out of EY, and the privileging EY gives to those hypotheses that originated with others, a.k.a. cherry-picking what to advertise. EY is a good writer.

edit: concrete thought example: there is a drug A that undergoes many tests, with some of them evaluating it as better than placebo, some as equal to placebo, and some as worse than placebo. Worst of all, each trial consists of one person's opinion. In comes the charismatic pharmaceutical marketer, or the charismatic anti-vaccination campaign leader, and starts bringing the positive or negative trials to attention. That is not good, even if both of those people are present.

Comment by Dmytry on Reframing the Problem of AI Progress · 2012-04-13T17:20:39.012Z · LW · GW

The issue is that it is a doomsday cult if one expects an extreme outlier (on doom belief), who has never done anything notable beyond being a popular blogger, to be the best person to listen to. That is an incredibly unlikely situation for a genuine risk. Bonus cultism points for knowing Bayesian inference but not applying it here. Regardless of how real the AI risk is, and regardless of how truly qualified that one outlier may be, it is an incredibly unlikely world-state in which the best word on AI risk comes from someone like that. No matter how fucked up the scientific review process is, it is incredibly unlikely that the world's best AI talk is someone's first notable contribution.

Comment by Dmytry on Reframing the Problem of AI Progress · 2012-04-13T17:18:29.426Z · LW · GW

How is intelligence well specified compared to space travel? We know physics well enough. We know we want to get from point A to point B. With intelligence, we don't even quite know what exactly we want from it. We know of some ridiculous towers-of-exponents-slow method, which means precisely nothing.

Comment by Dmytry on Reframing the Problem of AI Progress · 2012-04-13T17:01:36.407Z · LW · GW

but brings forward the date by which we must solve it

Does it really? I already explained that if someone makes an automated engineering tool, all users of that tool are at least as powerful as some (U)FAI based upon this engineering tool. Adding independent will to a tank doesn't suddenly make it win the war against a much larger force of tanks with no independent will.

You are rationalizing the position here. If you actually reason forwards, it is clear that the creation of such tools may instead be the life-saver when someone who thought he solved morality unleashes some horror upon the world. (Or, at some point, hardware gets so good that very simple evolution-simulator-like systems could self-improve to the point of superintelligence by evolving, albeit that is very far off in the future.)

Suppose I were to convince you of the butterfly effect, and explain that your sneezing could kill people months later. And suppose you couldn't see that not sneezing has the same probability of doing so. You'd be trying real hard not to sneeze, for nothing, avoiding sudden bright lights (if you have a sneeze reflex to bright lights), and so on.

The engineering superintelligences fail to share our values to such a profound extent as to not even share the desire to 'do something' in the real world. The same goes for the engineering intelligence inside my own skull, as far as I can tell. I build designs in real life because I have rent to pay, or because I am not sure enough they will work and don't trust the internal simulator that I use for design (i.e. imagining) [and that's because my hardware is very flawed]. This is also the case with all of my friends who are good engineers.

The issue here is that you conflate things into 'human-level AI'. There are at least three distinct aspects to AI:

1: Engineering and other problem solving. This is the creation of designs in an abstract design space.

2: The will to do something in the real world in real time.

3: Morality.

People here see the first two as inseparable, while seeing the third as unrelated.

Comment by Dmytry on against "AI risk" · 2012-04-13T05:17:47.559Z · LW · GW

Less Wrong has discussed the meme of "SIAI agrees on ideas that most people don't take seriously? They must be a cult!"

Awesome, it has discussed this particular 'meme', to whose viral transmission your words seem to imply it attributes its identification as a cult. Has it, however, discussed good Bayesian reasoning and understood the impact of the statistical fact that, even when there is a genuine risk (if there is such a risk), it is incredibly unlikely that the person most worth listening to will lack both academic credentials and any evidence of rounded knowledge, and also be an extreme outlier in degree of belief? There are also the NPD diagnostic criteria to consider. For a non-cult, the probabilities multiply here into an incredibly low probability of being extreme on so many parameters relevant to cult identification. (For cults, they don't multiply up, because there is a common cause.)

edit: to spell out the details: so you start with a prior of maybe 0.1 probability that a doomsday-salvation group is a non-cult (and that is a massive benefit of the doubt right there), then you look at the founder being such an incredibly unlikely combination of traits for a non-cult doomsday-caution advocate but such a typical founder for a cult - on a multitude of parameters - and then you fuzzily do some knee-jerk Bayesian reasoning (which, however, can be perfectly well replicated using a calculator instead of neuronal signals), and you end up virtually certain it is a cult. That's if you can do Bayes without doing it explicitly on a calculator. Now, the reason I am here is that I did not take a good look until very recently, because I did not care whether you guys are a cult or not - cults can be interesting to argue with. And EY is not a bad guy at all, don't take me wrong; he himself understands that he's risking making a cult, and is trying very hard NOT to make a cult. That's very redeeming. I do feel bad for the guy: he happened to let one odd belief through, and then voila, a cult that he didn't want. Or a semi-cult, with some people in it for cult reasons and some not so much. He happened not to have a formal education, or notable accomplishments that are easily recognized as challenging (like being the author of some computer vision library or whatever, really). He has some ideas. The cult-follower-type people are drawn towards those ideas like flies to food.
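A minimal sketch of the multiply-the-odds calculation being gestured at here, for concreteness. The 0.1 prior is the one named above; the per-trait likelihood ratios (and the helper name `posterior_noncult`) are hypothetical placeholders chosen only to show the mechanics, not figures claimed by the comment:

```python
# Naive-Bayes-style update over several cult-typical traits, treated as
# independent under the non-cult hypothesis (as the comment argues).
# Prior of 0.1 that the group is NOT a cult comes from the text above;
# the likelihood ratios below are illustrative assumptions only.

def posterior_noncult(prior_noncult, likelihood_ratios):
    """Return P(non-cult | traits), given ratios P(trait | cult) / P(trait | non-cult)."""
    odds_cult = (1 - prior_noncult) / prior_noncult
    for lr in likelihood_ratios:   # each ratio > 1 favours "cult"
        odds_cult *= lr
    return 1 / (1 + odds_cult)

# Hypothetical ratios: extreme outlier on doom belief, no credentials,
# no other notable accomplishments.
print(posterior_noncult(0.1, [5, 3, 3]))   # ~0.0025
```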

Comment by Dmytry on against "AI risk" · 2012-04-12T21:46:20.170Z · LW · GW

It is unclear to me that artificial intelligence adds any risk there, though, that isn't present from natural stupidity.

Right now, look: so many plastics around us, food additives, and other novel substances. Rising cancer rates even after controlling for age. With all the testing, when you have a hundred random things, a few bad ones will slip through. Or obesity. This (idiotic solutions) is a problem with technological progress in general.

edit: actually, our all-natural intelligence is very prone to quite odd solutions. Say, reproductive drive, secondary sex characteristics, yadda yadda; end result: cosmetic implants. Desire to sell more product; end result: overconsumption. Etc.

Comment by Dmytry on against "AI risk" · 2012-04-12T21:25:56.896Z · LW · GW

Yup, we seem safe for the moment because we simply lack the ability to create anything dangerous.

Actually, your scenario already happened... the Fukushima reactor failure: they used computer modelling to simulate the tsunami, it was the 1960s, computers were science woo, and if the computer said so, then it was true.

For more subtle cases, though - see, the problem is the substitution of an 'intellectually omnipotent, omniscient entity' for the AI. If the AI tells people to assassinate a foreign official, nobody's going to do that; it would have to start the nuclear war via the butterfly effect, and that's pretty much intractable.

Comment by Dmytry on against "AI risk" · 2012-04-12T21:16:20.705Z · LW · GW

There are machine learning techniques like genetic programming that can result in black-box models.

Which are all the more prone to outputting crap solutions, even without being superintelligent.

Comment by Dmytry on against "AI risk" · 2012-04-12T21:02:09.312Z · LW · GW

I'm assuming that the modelling portion is a black box so you can't look inside and see why that solution is expected to lead to a reduction in global temperatures.

Let's just assume that mister president sits on the nuclear launch button by accident, shall we?

It isn't an amazing novel philosophical insight that type-1 agents 'love' to solve problems in the wrong way. It is a fact of life apparent even in the simplest automated software of that kind. You, of course, also have some pretty visualization of the scenario in which the parameter was minimized or maximized.

edit: also, the answers could be really funny. How do we solve global warming? Okay, just abduct the prime minister of China! That should cool the planet off.

Comment by Dmytry on against "AI risk" · 2012-04-12T20:49:18.132Z · LW · GW

See, that's what is so incredibly irritating about dealing with people who lack any domain-specific knowledge. You can't ask it "how can we reduce global temperatures" in the real world.

You can ask it how to make a model out of data; you can ask it what to do to the model so that such-and-such function decreases; it may try nuking this model (inside the model) and generate such a solution. You have to actually put in a lot of effort, like mindlessly replicating its in-model actions in the real world, for this nuking to happen in the real world. (And you'll also have the model visualization to examine, by the way.)

Comment by Dmytry on against "AI risk" · 2012-04-12T20:28:11.845Z · LW · GW

I think the problem is conflating different aspects of intelligence into one variable. The three major groups of aspects are:

1: Thought/engineering/problem-solving/etc.; it can work entirely within a mathematical model. This we are making steady progress on.

2: Real-world volition, especially the will to form the most accurate beliefs about the world. This we don't know how to solve, and don't even need to automate. We ourselves aren't even a shining example of 2, but we generally don't care so much about that. 2 is a hard philosophical problem.

3: Morals.

Even strongly superhuman 1 by itself is entirely harmless, even if very general within the problem space of 1. 2 without 1 can't invent anything. 3 may follow from strong 1 and 2, assuming the AI assigns a nonzero chance to being under test in a simulation and strong 1 provides enormous resources.

So, what is your human-level AI?

It seems to me that people with a high capacity for 1, i.e. the engineers and scientists, are so dubious about AI risk because it is pretty clear to them, both internally and from the AI effort, that 1 doesn't imply 2, and adding 2 won't strengthen 1. There isn't some great issue with 1 that 2 would resolve; 1 works just fine. If, for example, we invent an awesome automatic software-development AI, it will be harmless even if superhuman at programming, and will self-improve as much as possible without 2. Not just harmless: there's no reason why a 1-agent plus a human are together any less powerful than a 1-agent with 2-capability.

Eliezer, it looks like, is very concerned with forming accurate beliefs, i.e. 2-type behaviour, but I don't see him inventing novel solutions as much. Maybe he's so scared of the AI because he attributes other people's problem solving to an intellect paralleling his, while it's more orthogonal. Maybe he imagines that a very strongly more-2 agent will somehow be innovative and foom, and he sees a lot of room for improving 2. Or something along those lines. He is a very unusual person; I don't know how he thinks. The way I think, it is very natural for me that problem solving does not require first wanting to actually do anything real. That also parallels the software effort, because ultimately everyone capable of working effectively as an innovative software developer is very 1-oriented and doesn't see 2 as either necessary or desirable. I don't think 2 would just suddenly appear out of nothing by some emergence or accident.

Comment by Dmytry on against "AI risk" · 2012-04-12T17:40:22.120Z · LW · GW

Pretty ordinary meaning: a bunch of people trusting extraordinary claims not backed by any evidence or expert consensus, originating from a charismatic leader who earns his living off the cultists. Subtype: doomsday. Now, I don't give any plus or minus points for the leader-living-off-cultists part, but the general lack of expert concern about the issue is a killer. Experts being people with expertise on the relevant subject (but no doomsday experts allowed; it has to be something practically useful, or at least not all about the doomsday itself, else you start counting theologians as experts). E.g. for AI risk, the relevant experts may be people with CS accomplishments: the folks who made the self-driving car, the visual object recognition experts, the speech recognition experts, people who developed actual working AI of some kind, etc.

I wonder what'd happen if we trained an SPR for cult recognition. http://lesswrong.com/lw/3gv/statistical_prediction_rules_outperform_expert/ SPRs don't care about any unusual redeeming qualities or special circumstances.
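As a rough illustration of what such an SPR could look like: a minimal unit-weighted scoring rule, sketched in Python. The feature list is lifted from the informal criteria in the paragraph above, and the weights and cutoff are hypothetical assumptions, not a validated instrument:

```python
# Unit-weighted SPR sketch: score a group on a few binary features and compare
# against a cutoff, with no special-case adjustments. Features paraphrase the
# informal criteria above; the cutoff is a hypothetical choice for illustration.

CULT_FEATURES = [
    "extraordinary claims without evidence or expert consensus",
    "charismatic leader earning a living off followers",
    "doomsday component",
    "founder is an extreme outlier on the core belief",
    "little concern from relevant domain experts",
]

def spr_score(answers):
    """answers: dict mapping feature -> bool; returns the unit-weighted count."""
    return sum(bool(answers.get(f, False)) for f in CULT_FEATURES)

example = {f: True for f in CULT_FEATURES[:4]}
print(spr_score(example), ">= 3 -> classify as cult-like (hypothetical cutoff)")
```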

Can you list some non-cult most similar to LW/SIAI ?

Comment by Dmytry on against "AI risk" · 2012-04-12T11:55:29.964Z · LW · GW

If it starts worrying more than astronomers do, sure. The 'few' is as in a percentile, at the same level of worry.

More generally, if the degree of the belief is negatively correlated with achievements in the relevant areas of expertise, then the extreme forms of the belief are very likely false. (And just in case: comparing to Galileo is cherry-picking. For each Galileo there's a ton of cranks.)

Comment by Dmytry on against "AI risk" · 2012-04-12T11:31:22.897Z · LW · GW

Yep. The majorly awesome scenario degrades into ads vs. adblock when you consider everything in the future, not just the self-willed robot. As a matter of fact, a lot of work is put into constructing convincing strings of audio and visual stimuli, and into ignoring those strings.

Comment by Dmytry on against "AI risk" · 2012-04-12T09:44:05.616Z · LW · GW

You're still falling into the same trap, thinking that your work is ok as long as it doesn't immediately destroy the Earth. What if someone takes your proof generator design, and uses the ideas to build something that does affect the real world?

Well, let's say in 2022 we have a bunch of tools along the lines of automatic problem solving, unburdened by their own will (not because they were so designed, but by simple omission of an immense counterproductive effort). Someone with a bad idea comes around, downloads some open-source software, and cobbles together some self-propelling 'thing' that is 'vastly superhuman' circa 2012. Keep in mind that we still have our tools that make us 'vastly superhuman' circa 2012, and I frankly don't see how 'automatic will', for lack of a better term, contributes anything here that would make the fully automated system competitive.

Comment by Dmytry on against "AI risk" · 2012-04-12T09:27:17.598Z · LW · GW

Well, there's this implied assumption that a superintelligence that 'does not share our values' shares our domain of definition of the values. I can make a fairly intelligent proof generator, far beyond human capability if given enough CPU time; it won't share any values with me, not even the domain of applicability; the lack of shared values with it is so profound as to make it not do anything whatsoever in the 'real world' that I am concerned with. Even if it were meta-strategic to the point of, e.g., potentially searching for ways to hack into a mainframe to gain extra resources to do the task 'sooner' by wall-clock time, it seems very dubious that by mere accident it would have proper symbol grounding, wouldn't wirehead (i.e. would privilege the solutions that don't involve just stopping said clock), etc. The same goes for other practical AIs, even the evil ones that would e.g. try to take over the internet.

Comment by Dmytry on against "AI risk" · 2012-04-12T08:42:43.449Z · LW · GW

I'm kind of dubious that you needed 'beware of destroying mankind' in a physics textbook to get Teller to check whether a nuke can cause thermonuclear ignition of the atmosphere or seawater, but if it is there, I guess it won't hurt.

Comment by Dmytry on Cult impressions of Less Wrong/Singularity Institute · 2012-04-12T05:02:09.765Z · LW · GW

Choosing between mathematically equivalent interpretations adds 1 bit of complexity that doesn't need to be added. Now, if EY had derived the Born probabilities from first principles, that'd be quite interesting.

Comment by Dmytry on against "AI risk" · 2012-04-12T04:20:06.989Z · LW · GW

Seems like a prime example of where to apply rationality: what are the consequences of trying to work on AI risk right now, versus on something else? Does AI risk work have a good payoff?

What of the historical cases? The one example I know of is this: http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf (the thermonuclear ignition of the atmosphere scenario). Can a bunch of people with little physics-related expertise do something about such risks more than 10 years in advance, beyond the usual anti-war effort? Bill Gates will work on AI risk when it becomes clear what to do about it.

Comment by Dmytry on against "AI risk" · 2012-04-12T04:18:14.011Z · LW · GW

SI/LW sometimes gives the impression of being a doomsday cult,

Because it fits the pattern exactly. If you have top astronomers worrying about a meteorite hitting Earth, that is astronomy. If you have non-astronomers (with very few astronomers) worrying about a meteorite hitting Earth, that's a doomsday cult. Or at the very best, a vague doomsday cult. edit: Just saying, that's how I classify; it works for me. If you have instances (excluding SIAI) where this method of classification fails in a damaging way, I am very interested in hearing of them, to update my classification method. I might be misclassifying something. I might just go through the list of things that I classified as cults and reclassify some items on that list as non-cults, if the classification method fails.

Comment by Dmytry on A belief propagation graph · 2012-04-11T08:59:19.785Z · LW · GW
  • Innate fears - Explained here why I'm not too afraid about AI risks.

You read fiction; some of it is made to play on fears, i.e. to create more fearsome scenarios. The ratio between fearsome and nice scenarios is set by the market.

  • Political orientation - Used to be libertarian, now not very political. Don't see how either would bias me on AI risks.

You assume zero bias? See, the issue is that I don't think you have a whole lot of signal getting through the graph of unknown blocks. Consequently, any residual biases could win the battle.

  • Religion - Never had one since my parents are atheists.

Maybe a small bias, considering that society is full of religious people.

  • Xenophobia - I don't detect much xenophobia in myself when I think about AIs. (Is there a better test for this?)

I didn't notice your 'we' including the AI in the origin of that thread, so there is at least a little of this bias.

  • Wishful thinking - This would only bias me against thinking AI risks are high, no?

Yes. I am not listing only the biases that are for the AI risk. Fiction, for instance, can bias both pro and against, depending on the choice of fiction.

  • Sunk cost fallacy - I guess I have some sunken costs here (time spent thinking about Singularity strategies) but it seems minimal and only happened after I already started worrying about UFAI.

But how small is it compared to the signal?

It is not about the absolute values of the biases; it is about the values of the biases relative to the reasonable signal you could get here.

Comment by Dmytry on A belief propagation graph · 2012-04-11T08:39:04.820Z · LW · GW

My point was that when introducing a new idea, the initial examples ought to be optimized to clearly illustrate the idea, not for "important to discuss".

Not a new idea. Basic planning of effort. Suppose I am to try and predict how much income a new software project will bring, knowing that I have bounded time for making this prediction, much shorter than the time for producing the software itself that is to make the income. Ultimately, this rules out a direct rigorous estimate, leaving you with 'look at available examples of similar projects, do a couple of programming contests to see if you're up to the job, etc.' Perhaps I should have used this as the example, but some abstract corporate project does not make people think concrete thoughts. Most awfully, even when the abstract corporate project is a company of their own (those are known as failed startup attempts).

Do you define rationality as winning? That is a maximize-wins-in-limited-computational-time task (perhaps wins per unit time, perhaps something similar). That requires effort planning that takes into account the time it takes to complete the effort. Jumping on an approximation to the most rigorous approach you can think of is cargo cult, not rationality. Bad approximations to good processes are usually entirely ineffective. Now, in the 'approximation' of the hard path, there are so many unknowns as to make those approximations entirely meaningless, regardless of whether they are 'biased' or not.

Also, having fiction as a bias brings in all other biases, because fiction is written to entertain and is biased by design. On top of that, fiction is people working hard to find a hypothesis to privilege. A hypothesis can easily be privileged at the 1-to-10^100 level or worse when you are generating something (see religion).

Comment by Dmytry on A belief propagation graph · 2012-04-11T04:24:07.033Z · LW · GW

In a very short summary, which is also sort of insulting, so I am having second thoughts about posting it:

Math homework takes time.

See, one thing I never really got about LW. So you have some blacklist of biases, which is weird, because logic is known to work via a whitelist and rigour in using only the whitelisted reasoning. So you supposedly get rid of biases (opinions on this really vary). You still haven't gained some ultra power that would instantly get you through the enormous math homework that is predicting anything to any extent whatsoever. You know, you can get half a grade if you at least got some part of the probability homework from the facts to the final estimate, even if you didn't do everything required. Even that still has a minimum of work, below which nothing has been done that would even allow a silly guess at the answer. The answer doesn't even start to gradually improve before a lot of work, even if you do the numbers by how big they feel. Now, there's this reasoning: if it is not biases, then it must be the answer. No: it could be neuronal noise, or other biases, or the residual weight of biases, or the negations of biases from overcompensation. (That happens to the brightest; the Nobel Prize Committee one time tried not to be biased against gross, unethical-looking medical procedures that seem like they can't possibly do any good, got itself biased the other way, and gave the Nobel Prize to the inventor of lobotomy, a crank pseudoscientist with no empirical support, really quickly too.)

Comment by Dmytry on A belief propagation graph · 2012-04-10T16:34:41.949Z · LW · GW

Empirical data needed (ideally the success rate on non-self-administered metrics).

Comment by Dmytry on A belief propagation graph · 2012-04-10T15:55:51.101Z · LW · GW

I've heard so too; then I followed the news on Fukushima, and the cleanup workers were treated worse than the Chernobyl cleanup workers, complete with a lack of dosimeters, food, and (guessing with a prior from the above) replacement respirators - you need to replace that stuff a lot, but unlike food, you can just reuse it and pretend all is fine. (And the tsunami is no excuse.)

Comment by Dmytry on A belief propagation graph · 2012-04-10T14:36:54.075Z · LW · GW

I think the issue is that our IQ is all too often just like the engine in a car for climbing hills: you can go wherever, including downhill.

Comment by Dmytry on A belief propagation graph · 2012-04-10T14:36:07.860Z · LW · GW

Still a ton better than most other places I've been to, though.

Comment by Dmytry on A belief propagation graph · 2012-04-10T14:14:01.219Z · LW · GW

You need to keep in mind that we are stuck on this planet, and the superintelligence is not. I'm not assuming that the superintelligence will be any more benign than us; on the contrary, the AI can go and burn resources left and right and eat Jupiter, which is pretty big and dense (dense means low lag if you somehow build computers inside it). It's just that for the AI to keep us is easier than for all of mankind to keep one bonsai tree.

Also, we mankind, as a meta-organism, are pretty damn short-sighted.

Comment by Dmytry on A belief propagation graph · 2012-04-10T04:25:15.468Z · LW · GW

Latency is the propagation delay. Until you propagate through the hard path at all, the shorter paths are the only paths you could have propagated through. There is no magical way of skipping multiple unknown nodes in a circuit and still obtaining useful values. It'd be very easy to explain in terms of electrical engineering (the calculation of the propagation of beliefs through an inference graph is homologous to the calculation of signal propagation through a network of electrical components; one can construct an equivalent circuit for a specific reasoning graph).
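A toy numerical illustration of that analogy, in Python; the two paths and their delay values are hypothetical, chosen only to make the point concrete:

```python
# Each path from evidence to conclusion has a latency equal to the sum of its
# edge delays, and contributes nothing to the conclusion before that time has
# elapsed. The path names and delays below are illustrative assumptions.

paths = {
    "short path (gut feeling, analogy)": [1.0, 1.0],
    "hard path (step-by-step derivation)": [10.0, 10.0, 10.0],
}

for name, edge_delays in paths.items():
    print(f"{name}: signal arrives at t = {sum(edge_delays)}")

# Any estimate formed before t = 30 is, by construction, driven only by the
# short path (plus residual noise and bias), never by the hard path.
```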

The problem with 'hard' is that it does not specify how hard. Usually 'hard' is taken as 'still doable right now'. It can be arbitrarily harder than that, even for the most elementary propagation through one path.

Comment by Dmytry on A belief propagation graph · 2012-04-10T04:23:57.381Z · LW · GW

Why would those correlations invalidate it, assuming we have controlled for origin and education, and are sampling a society with low disparity (e.g. western Europe)?

Don't forget we have a direct causal mechanism at work - failure to predict - and we are not concerned with the feelings so much as with the regrettable actions themselves (and thus don't need to care whether intelligent people e.g. regret for longer, or notice more often that they could have done better, which can easily result in more intelligent people experiencing the feeling of regret more often). Not just that, but the ability to predict is part of the definition of intelligence.

edit: another direct causal mechanism: more intelligent people tend to have a larger set of opportunities (even given the same start in life), allowing them to take less risky courses of action, which can be predicted better (e.g. more intelligent people tend to be able to make more money, and consequently have a lower need to commit crime; when committing crime, more intelligent people process a larger selection of paths for each goal, and can choose paths with a lower risk of getting caught, including subtle, unethical, high-payoff scenarios not classified as crime). The result is that intelligence allows one to accommodate values such as regret better. This is not something that invalidates the effect, but is rather part of the effect.