Comments

Comment by shrink on Non-orthogonality implies uncontrollable superintelligence · 2012-05-09T08:08:35.223Z · LW · GW

If you want to maximize your win, it is a relevant answer.

For the risk estimate per se, I think one needs not so much methods as a better understanding of the topic, which is attained by studying the field of artificial intelligence - in a non-cherry-picked manner - and takes a long time. If you want an easier estimate right now, you could try to estimate how privileged the hypothesis is that there is a risk. (There is no method that would let you calculate the gravitational wave from the spin-down and collision of orbiting black holes without spending a lot of time studying GR, applied mathematics, and computer science. Why do you think there's a method for you to use to tackle an even harder problem from first principles?)

Better yet, stop thinking of it as a risk (we have introduced, for instrumental reasons, a burden of proof on those who say there is no risk when it comes to new drugs etc., and we did so solely because introducing random chemicals into a well-evolved system is much more often harmful than beneficial; in general there is no reason to put the burden of proof on those who say there is no wolf, especially not when the people screaming wolf get candy for doing so), and think of it as a prediction of what happens in 100 years. Clearly, you would not listen to philosophers who use ideals to make predictions.

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-05-05T11:18:44.100Z · LW · GW

Rationality and intelligence are not precisely the same thing. You could pick, for example, anti-vaccination campaigners with measured IQs above 120, put them in a room, and call that a very intelligent community that can discuss a variety of topics besides vaccines. Then you will get some less insane people, interested in the safety of vaccines, coming in and getting terribly misinformed, which is just not a good thing. You can do that with almost any belief, especially using the internet, which lets you draw such cases from a pool of a billion or so people.

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-05-05T09:54:17.568Z · LW · GW

Implied in your so-called 'question' is the statement that any online community you know of (I shouldn't assume you know of zero other communities, right?) you deem less rational than LessWrong. I would say LessWrong is substantially less rational than average, i.e. if you pick a community at random, it is typically more rational than LessWrong. You can choose any place better than average - physicsforums, gamedev.net, stackexchange, the arstechnica observatory, and so on; those are all more rational than LW. But of course, implied in your question is that you won't accept this answer. LW is mostly interested in AI, and the talk about AI here is significantly less rational than the talk about almost any technical topic in almost any community of people with technical interests. You would have to go to some alternative-energy forum, or a UFO or conspiracy-theorist place, to find a match for the irrationality of the discussion of the topics of interest.

You would have no problem whatsoever finding and joining a more rational place if you were looking for one. That is why your 'question' is in fact almost purely rhetorical (or you are looking for a place that is more 'foo' than LessWrong, where you use the word 'rationality' in place of 'foo').

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-05-05T06:41:57.432Z · LW · GW

Instrumental rationality, i.e. "winning"? Lots...

Precisely.

Epistemic rationality? None...

I'm not sure it got that either. It's more like medieval theology / scholasticism. There are questions you think you need answered, you can't answer them now with logical thought, so you use an empty cargo-cult imitation of reasonable thought. How rational is that? Not rational at all. Wei_Dai is here because he was concerned with AI, and he calls this community rational because he sees concern with AI as rational and needs confirmation. It is a neatly circular system: if concern with AI is rational, then every community that is rational must be concerned with AI, and then the communities that are not concerned with AI are less rational.

Comment by shrink on Non-orthogonality implies uncontrollable superintelligence · 2012-05-04T15:21:43.234Z · LW · GW

It was definitely important to make the animals come, or to make it rain, tens of thousands of years ago. I'm getting the feeling that, as I tell you your rain-making method doesn't work, you aren't going to give up trying unless I provide you with an airplane, a supply of silver iodide, flight training, a runway, fuel, and so on (and even then the method will only be applicable on some days, while praying for rain is applicable any time).

As for the best guess: if you suddenly need a best guess on a topic because someone told you of something and you couldn't really see a major flaw in vague reasoning - the sort that can arrive at anything via a minor flaw at every step - that's a backdoor other agents will exploit to take your money (those agents will likely also opt to modify their own beliefs somewhat, because, hell, it feels a lot better to be saving mankind than to be scamming people). What is actually important to you is your utility, and the best reasoning here is strategic: do not leave backdoors open.

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-05-04T07:37:11.496Z · LW · GW

I think you have a somewhat simplistic idea of justice... there is 'voluntary manslaughter', there is 'gross negligence', and so on. I think SIAI falls under the latter category.

How are they worse than any scientist fighting for a grant based on shaky evidence?

Quantitatively, and by a huge amount. edit: Also, the beliefs that they claim to hold, when held honestly, result in a massive loss of resources - moving to a cheaper country to save money, and so on. I dread to imagine what would happen to me if I were honestly this mistaken about AI. Erroneous beliefs damage you.

The lying is about having two sets of incompatible beliefs that are picked between based on convenience.

edit: To clarify, justice is not about the beliefs held by the person. It is more about the process that the person uses to arrive at actions (see the whole 'reasonable person' stuff). If A wants to kill B, and A edits A's beliefs into "B is going to kill me" and then acts in self-defense and kills B, then, if the justice system had a log of A's processing, A would go down for premeditated murder - even though at the time of the killing A is honestly acting in self-defense. (Furthermore, barring some gross neurophysiological anomaly, it is a fact of reality that justice can only act on the inputs and outputs of agents.)

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-05-04T05:57:51.978Z · LW · GW

You are declaring everything gray here so that verbally everything is equal.

There are people with no knowledge of physics and no inventions to their name whose first 'invention' is a perpetual motion device. You really don't see anything dishonest about holding an unfounded belief that you're this smart? You really see nothing dishonest about accepting money under this premise without doing due diligence, such as trying yourself at something testable, even if you think you're this smart?

There are scientists who are trying very hard to follow processes that are not prone to error, people trying to come up with ways to test their beliefs; do you really see them all as equal in their level of dishonesty?

There are people who are honestly trying to make a perpetual motion device, who sink their own money into it and never produce anything they can show to investors, because they are honest and don't use hidden wires and the like. (The equivalent would be Eliezer moving to a country with very cheap living, canceling his cryonics subscription, and so on, to maximize the money available for doing the very important work in question.)

You can talk all day in qualitative terms about how it is the same, state an unimportant difference as the only one, and assert that you 'don't see the moral difference', but this 'counter-argument' you're making is entirely generic and equally applicable to any form of immoral or criminal conduct. A court wouldn't be the least bit impressed.

Also, I don't go philosophical. I don't care what's going on inside the head unless I'm interested in neurology. I know that the conduct is dishonest, and the beliefs under which an honest agent would have such conduct lack foundation; there isn't some honest error here that resulted in a belief leading an honest agent to adopt such conduct. Convincing liars don't seem to work by thinking 'how could I lie'; they just adopt the convenient falsehood as a high-priority axiom for talk and a low-priority axiom for walk, so as to resolve contradictions in the most useful way, and that makes it very, very murky what they actually believe.

You can say that it is honest to act on a belief, but that's an old idea; nowadays things are more sophisticated, and it is a get-out-of-jail-free card for almost all liars, who first make up a very convenient, self-serving false belief with not a trace of honesty in the belief-making process, and then act on it.

Comment by shrink on Non-orthogonality implies uncontrollable superintelligence · 2012-05-04T05:46:57.673Z · LW · GW

That's how religions were created, you know - they could not actually answer why lightning thunders, why the sun moves through the sky, etc. So they looked way 'beyond' non-faulty reasoning, in search of answers right now (being impatient), and got answers that were much, much worse than no answers at all. I feel LW is doing precisely the same thing with AIs. Ultimately, when you can't compute the right answer in the given time, you will either have no answer or compute a wrong one.

On the orthogonality thesis: it is the case that you can't answer this question given limited knowledge and time (you have to know the AI's architecture first), and any reasonable reasoning tells you this, while LW's pseudo-rationality keeps giving you wrong answers (answers that aren't any less wrong than anyone else's, including the Mormon church and any other weird religious group). I'm not quite sure what you guys are doing wrong; maybe the focus on biases, and the conflation of biases with stupidity, led to the fallacy that a lack of (known) biases will produce non-stupidity, i.e. smartness - that if only you aren't biased, you'll have a good answer. It doesn't work like that. It just leads to a different kind of wrongness.

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-05-02T16:11:52.996Z · LW · GW

Did they make a living out of those beliefs?

See, what we have here is a belief cluster that makes the belief-generator feel very good (saving the world, the other smart people being less smart, etc.) and pays his bills. That is awfully convenient for a reasoning error. I'm not saying it is entirely impossible to have a serendipitously useful reasoning error, but it doesn't seem likely.

edit: note, I'm not speaking about some inconsequential honesty in idle thought, or anything likewise philosophical. I'm speaking of not exploiting others for money. There's nothing circular about the notion that an honest person would not talk a friend into paying him upfront to fix the car when that honest person has no discernible objective reason whatsoever to think he could fix the car, whereas a dishonest person would talk the friend into paying. Now, if we were speaking of a very secretive person who doesn't like to talk about himself, there would have been some probability of a big list of impressive accomplishments we haven't heard of...

Comment by shrink on Non-orthogonality implies uncontrollable superintelligence · 2012-05-02T16:02:56.320Z · LW · GW

Would you take criticism if it is not 'positive' and doesn't give you an alternative method for talking about the same topic? Faulty reasoning has an unlimited domain of application - you can 'reason' about the purpose of the universe, the number of angels that fit on the tip of a pin, what superintelligences would do, and so on. In those areas, non-faulty reasoning cannot compete in terms of the pleasure the reasoning provides, or in terms of interesting-sounding 'results' that can be obtained with little effort and knowledge.

You can reason about what a particular cognitive architecture can do on a given task in N operations; you can reason about what the best computational process can do in N operations. But that will involve actually using mathematics, and the results will not be useful for unintelligent debates in the way your original statement is useful (I imagine you could use it as a soundbite to reply to someone who believes in absolute morality; I really don't see how it could have any predictive power whatsoever about superintelligence, though).

Comment by shrink on Non-orthogonality implies uncontrollable superintelligence · 2012-05-01T05:34:40.091Z · LW · GW

There's so much that can go wrong with such reasoning - given that an intelligence (even one the size of a galaxy of Dyson spheres) is not a perfect God - as to render such arguments irrelevant and entirely worthless. Furthermore, there are enough ways non-orthogonality could hold that are not covered by 'converges', such as almost all intelligences with wrong moral systems crashing or failing to improve.

meta: The tendency to talk seriously about the products of very bad reasoning really puts an upper bracket on the sanity of newcomers to LW. As does the idea that a very bad argument trumps authority (when it comes to the whole topic).

Comment by shrink on (Almost) every moral theory can be represented by a utility function · 2012-05-01T05:31:04.179Z · LW · GW

You can represent any form of agency with a utility function that is 0 for doing what the agency does not want to do, and 1 for doing what the agency wants to do. This looks like a special case of that triviality, as true as it is irrelevant. Generally, one of the problems with insufficient training in math is a lack of training in not reading extra purpose into mathematical definitions.
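A minimal sketch of that trivial construction, in my own notation (not taken from the post under discussion): let W be the set of actions the agency 'wants', and define

```latex
u(a) =
  \begin{cases}
    1 & \text{if } a \in W, \\
    0 & \text{otherwise.}
  \end{cases}
```

Every behaviour then trivially 'maximizes' u, which is exactly why such a representation carries no predictive content on its own.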

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-04-30T07:47:50.748Z · LW · GW

I think you hit the nail on the head. It seems to me that LW exhibits a bracketing by rationality - i.e. there's a lower limit below which you don't find the site interesting, there is a range in which you see it as a rationality community, and there's an upper limit above which you would see it as self-important pompous fools being very wrong on a few topics and not interesting on the others.

Dangerously wrong, even; progress in computing technology leads to new cures for diseases, and misguided advocacy of the great harm of such progress, done by people with no understanding of the limitations of computational processes in general (let alone 'intelligent' processes), is not unlike anti-vaccination campaigning by people with no solid background in biochemistry. Donating to vaccine-safety research performed by someone without a solid background in biochemistry is not only stupid, it will kill people. Computer science is no different, now that it is used for biochemical research. No honest, moral individual can go ahead and speak of the great harms of medically relevant technologies without first obtaining a very, very solid background - a solid understanding of the boring fundamentals - and independently testing oneself (to avoid self-delusion) by doing something competitive in the field. Especially so when those concerns are not shared by more educated, knowledgeable, or accomplished individuals. The only way it could be honest is if one honestly believes oneself to be a lot, lot, lot smarter than the smartest people on Earth, and one can't honestly believe such a thing without either accomplishing something impressive that a great number of the smartest people failed to accomplish, or being a fool.

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-04-30T07:44:28.243Z · LW · GW

Popularization is better without novel jargon though.

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-04-30T07:34:21.697Z · LW · GW

That's why I said 'self-deluded', rather than just 'deluded'. There is a big difference between believing something incorrect that is believed by default, and coming up yourself with a very convenient incorrect belief that makes you feel good and pays the bills, and then actively working to avoid any challenges to that belief. Honest people are those who put such beliefs under real scrutiny (not just talk about putting such beliefs under scrutiny).

Honesty is an elusive matter when the belief works like that dragon in the garage. When you are lying, you have to deceive computational processes that are roughly your equals. That excludes all straightforward approaches to lying, such as waking up in the morning and thinking 'how can I be really bad and evil today?'. Lying is a complicated process, with many shortcuts when it comes to the truth. I define lying as the successful generation of convincing untruths - a black-box definition, without getting into the details of which parts of the cortex are processing the truth and which are processing the falsehoods. (I exclude the inconsistent, accidental generation of such untruths by mistake, unless the mistakes are being chosen.)

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-04-29T19:44:30.348Z · LW · GW

Well, the issue is that LW is heavily biased towards agreement with the rationalizations of the self-important wankery in question (the whole FAI/uFAI thing)...

With the AI, basically, you can see folks who have no understanding whatsoever of how to build practical software, and whose idea of the AI is 'predict the outcomes of actions, choose the actions that give the best outcome' (an entirely impractical model, given the enormous number of possible actions when innovating; see the sketch below), accusing the folks in the industry who do understand this of anthropomorphizing the AI - and taking it as an operating assumption that they somehow know better, on the basis of thinking about some impractical abstract mathematical model. It is like futurists in 1900 accusing engineers of bird-morphizing the future modes of transportation when the engineers speak of wings. Then you see widespread agreement with various irrational nonsense, mostly cases of reverse stupidity, like 'not anthropomorphizing' the AI far past the point of actually not anthropomorphizing, into negative-anthropomorphizing land, whereby if a human does some sort of efficient but imperfect trick, the AI necessarily does the terribly inefficient perfect solution - to the point of utter ridiculousness where the inefficiency may be too big for a galaxy of Dyson spheres to handle, even given quantum computing.
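A minimal sketch of why that model is impractical (my own toy code with hypothetical names, not anyone's actual proposal): taken literally, 'predict outcomes and pick the best action' is an exhaustive search over action sequences, and its cost explodes with the planning horizon.

```python
# Sketch of the "predict outcomes of actions, choose the action sequence
# with the best outcome" idealization. The point is the cost: with
# |actions| primitive actions and a horizon of H steps, the loop below
# scores |actions|**H sequences, which is hopeless for any domain where
# innovation matters.

from itertools import product

def best_plan(initial_state, actions, simulate, evaluate, horizon):
    """Exhaustively score every action sequence of length `horizon`.

    simulate(state, action) -> next_state   (assumes a perfect world model)
    evaluate(state)         -> numeric score of the final outcome
    """
    best_score, best_seq = float("-inf"), None
    for seq in product(actions, repeat=horizon):  # |actions|**horizon iterations
        state = initial_state
        for action in seq:
            state = simulate(state, action)
        score = evaluate(state)
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq

# Toy usage: a 1-D walk. Even here, 3 actions and horizon 30 would already
# mean 3**30 (about 2 * 10**14) sequences to evaluate.
if __name__ == "__main__":
    plan = best_plan(
        initial_state=0,
        actions=[-1, 0, 1],
        simulate=lambda s, a: s + a,
        evaluate=lambda s: -abs(s - 5),  # prefer ending near position 5
        horizon=5,                       # kept tiny so the demo terminates
    )
    print(plan)
```

Real systems, of course, do nothing of the sort; the sketch only shows why the idealized model says little about what practical software can do.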

Then there's the association with a bunch of folks who basically talk other people into giving them money. That creates a very sharp divide - either you agree they are geniuses saving the world, or they are sociopaths; there's not a lot of middle road here, as honest people can't stay self-deluded for very long.

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-04-28T19:23:17.831Z · LW · GW

Is this place a Kurzweil fanclub?

TBH, I'd rather listen to Kurzweil... I mean, he did create OCR reading software, and other cool stuff. Here we have:

http://lesswrong.com/lw/6dr/discussion_yudowskys_actual_accomplishments/

http://lesswrong.com/lw/bvg/a_question_about_eliezer/

It looks like he went straight for the hardest problems in the world (I can't see any successful practice on easier problems that are not trivial).

This site has a captcha, a challenge that people easily solve but bots don't - despite the possibility that some blind guy would fail to post a world-changing insight because of it, and the FAI effort would go the wrong way, and we'd all die. That is not seen as irrational. Many smart people likewise implement an 'arrogant newbie filter'; a genius can rather easily solve things that other smart people can't...

It is kind of hypocritical (and irrational) to assume a stupid bot when the captcha is not answered, but to expect others to assume a genius when no challenges have been solved. Of course, not everyone filters, and via the internet you can reach plenty of people who won't filter for this reason or that, or people who will only look at superficial signals, but exploiting this is not good.

Comment by shrink on A Kick in the Rationals: What hurts you in your LessWrong Parts? · 2012-04-28T08:34:25.987Z · LW · GW

It's more a question of how charitably you read LW, maybe? The phenomenon I am speaking of is quite generic. About 1% of people are clinical narcissists (probably more); that's a lot of people, and narcissists dedicate more resources to self-promotion and take on projects that no well-calibrated person of the same expertise would attempt - such as making a free-energy generator without having studied physics or having first invented anything less grandiose.

Comment by shrink on Do people think Less Wrong rationality is parochial? · 2012-04-28T07:54:17.039Z · LW · GW

Some of the rationality here may, to a significant extent, be a subset of standard rationality, but it has important omissions - in the area of game theory, for instance - and, much more importantly, significant misapplications, such as taking the theoretically ideal approaches given infinite computing power as the ideal, and treating as the best attempt the approximations to them, which are grossly sub-optimal on limited hardware, where different algorithms have to be employed instead. One also has to understand that in practice computation has a cost, and that any form of fuzzy reasoning (anything other than very well verified mathematical proof) accumulates errors with each step, regardless of whether it is 'biased' or not.
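As a toy illustration of that last point (my own example, assuming each step's error is independent): if each informal inference step is right only with probability p, a chain of k such steps is right with probability roughly p to the power k, biased or not.

```python
# Toy arithmetic for error accumulation in a chain of fuzzy inference steps.
# Assumes each step is independently correct with probability p; the whole
# chain is then only as reliable as the product.

def chain_reliability(p: float, steps: int) -> float:
    return p ** steps

if __name__ == "__main__":
    for steps in (1, 5, 10, 20):
        print(steps, round(chain_reliability(0.9, steps), 3))
    # 1: 0.9, 5: 0.59, 10: 0.349, 20: 0.122 -- twenty "90% sure" steps
    # leave roughly a 12% chance that the conclusion is right.
```

Verified mathematics escapes this only because each step is actually checked, not because the reasoner is free of bias.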

Choosing such a source for self-education is definitely not common. Nor is the undue focus on what is 'wrong' with thinking (e.g. lists of biases), rather than on more effective alternatives to biased thinking; removing the biases won't in itself give you extra powers of rational thinking - your reasoning will be as sloppy as before and you'll simply be wrong in an unusual way (for instance, you'll end up believing in unfalsifiable, unjustified propositions other than God; it seems to me that this has occurred in practice).

edit: Note: he asked a question, and I'm answering why it is seen as fringe; it may sound like unfair critique, but I am just explaining what it looks like from the outside. The world is not fair; if you use dense non-standard jargon, that raises the cost and lowers the expected utility of reading what you wrote (because most people using non-standard jargon don't really have anything new to say). Processing has a non-zero cost - that must be understood. If mainstream rationalists don't instantly see you as worth reading, they won't read you, and that's only rational on their part. You must allow for other agents acting rationally. It is not always rational even to read an argument.

Actually, given that one can only read some small fraction of the rationality-related material out there, it is irrational to read anything but the material known to be the best, where you have some assurance that the authors have a good understanding of the topic - including the parts that are not exciting, seem too elementary, or run counter to optimism - the sort of assurance you get when the authors have advanced degrees.

edit: formatting, somewhat expanded.

Comment by shrink on A Kick in the Rationals: What hurts you in your LessWrong Parts? · 2012-04-27T21:30:38.416Z · LW · GW

Look up quantum gravity (or rather, the lack of a unified theory covering both QM and GR). It is a very complex issue, and many basics have to be learnt before it can be discussed at all. The way we do physics right now is by applying inconsistent rules. We can't get QM to work out to GR at large scales. It may gracefully turn 'classical', but this is precisely the problem, because the world is not classical at large scales (GR).

Comment by shrink on A Kick in the Rationals: What hurts you in your LessWrong Parts? · 2012-04-27T19:36:48.920Z · LW · GW

One basic thing about MWI is that it is a matter of physical fact that large objects tend to violate the 'laws of quantum mechanics' as we know them (the violation is known as gravity), and actual physicists do know that we simply do not know what quantum mechanics works out to at large scales. To actually have a case for MWI, one would need to develop a good theory of quantum gravity in which many worlds naturally arise, but that is very difficult (and many worlds may well not naturally arise).

Comment by shrink on A Kick in the Rationals: What hurts you in your LessWrong Parts? · 2012-04-27T07:26:27.044Z · LW · GW

Various cases of NPD online. The NPD-afflicted individuals are usually too arrogant to study or do anything difficult where they could measurably fail, and instead opt to blog on topics where they don't know the fundamentals, promoting misinformed opinions. Some even live on donations for performing work that they never tried to study for. It's unclear what attracts normal people to such individuals, but I guess that if you don't think yourself a supergenius, you can still think yourself clever for following a genius whom you can detect without relying on such ordinary things as credentials or achievements, as described here.