Comments

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-25T21:51:47.918Z · LW · GW

The signal being what exactly?

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-25T12:37:07.566Z · LW · GW

Outside of politically charged issues (e.g. global warming), most people tend not to disagree with accomplished scientists on topics within the scientist's area of expertise and accomplishment, and they treat the more accomplished person as a source of wisdom rather than as an opponent in a debate. It is furthermore my honest opinion that Wang is more intelligent than Luke; it is the opinion most reasonable people would share, and Luke must understand this.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-25T12:26:07.785Z · LW · GW

It is not an accusation or an insult. It is the case, though, that the people in question (Luke, Eliezer) need to allow for the possibility that the people they are talking to are more intelligent than they are - something that is clearly more probable than not given the available evidence - and they seem not to.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-25T05:32:56.460Z · LW · GW

It is clear that you just don't want to hear the opinion "more intelligent" without qualifiers that would allow you to disregard it immediately, and you are being obtuse.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T18:45:45.692Z · LW · GW

It also does not present a valid inference. Ideally you would be right, but in practice people do not make inferences they do not like.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T16:56:36.545Z · LW · GW

I try not to assume narcissistic personality disorder. Most people have an IQ around 100 and are perfectly comfortable with the notion that an accomplished PhD is smarter than they are. Most smart people, likewise, are perfectly comfortable with the notion that someone significantly more accomplished is probably smarter than they are. Some people have NPD and operate on the assumption "I am the smartest person in the world", but they are a minority across the entire spectrum of intelligence. There are also cultural differences.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T10:37:41.055Z · LW · GW

Are you even serious?

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T06:46:44.386Z · LW · GW

I fail to see how the suggestion that Wang is much smarter than Luke is an insult - unless Luke believes that there can't be a person much smarter than him.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T06:29:37.561Z · LW · GW

My point is that this bell curve shouldn't be a new argument; it should have been the first step in your reasoning, and if it was not, you must have been reasoning in the other direction. You now seem to be doing the same with the original point about social status.

I think I have sufficiently answered your question: I find Wang's writings and accomplishments to require significantly higher intelligence (at minimum) than Luke's, and I started with the normal distribution as the prior (as everyone should). In any game of wits with no massive disparity in training in favour of Luke, I would bet on Wang.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T05:47:04.482Z · LW · GW

> Ah, that's what you meant by the other remark. In that case, this isn't backing up claimed prior proxies and is a new argument.

New to you. Not new to me. Should not have been new to you either. Study and train to reduce communication overhead.

> Anyone who has read what Luke has to say or interacted with Luke can tell pretty strongly that Luke is on the right side of the Bell curve.

Exercise for you: find the formula for the distribution of the IQ of someone whom you know to have IQ > x (I mean, find the variance and other properties).
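
A minimal sketch of that exercise in Python, assuming the standard IQ convention of mean 100 and SD 15: the distribution of IQ conditional on IQ > x is a truncated normal, and the threshold x = 100 ("right side of the bell curve") is the illustrative case.

```python
# The distribution of IQ given only "IQ > x" is a normal truncated below at x.
from scipy.stats import truncnorm

mu, sigma = 100.0, 15.0   # population IQ ~ N(100, 15^2) by convention
x = 100.0                 # all we know: IQ > x

# scipy parameterizes the truncation bounds in standard-deviation units
a, b = (x - mu) / sigma, float("inf")
conditional = truncnorm(a, b, loc=mu, scale=sigma)

print(conditional.mean())  # ~112.0: "right side of the curve" implies little
print(conditional.std())   # ~9.0: and the spread stays wide
```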

> There are a lot of Chinese academics who come to the United States. So what do you mean by very difficult?

Those born higher up the social ladder don't understand that it is hard to climb below them, too.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T05:26:20.853Z · LW · GW

I apologise for being unaware that you call China "second world". It is still the case that it is very difficult to move from China to the US.

> Also, if we go back in time 20 years, so that Pei Wang would be about the same age Luke is now, do you think you'd have an accomplishment list for Pei Wang that was substantially longer than Luke's current one? If so, how does that impact your claim?

If we move back 20 years, it is 1992, and by then Pei Wang had already been a lecturer in China and then moved to Indiana University. The length of an accomplishment list is a poor proxy; the difficulty of the accomplishments is what matters. As I explained in the edit, you shouldn't forget about the bell curve: on the IQ > 100 side of the normal distribution, no evidence for intelligence is good evidence of its absence.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T04:59:59.598Z · LW · GW

There is very little data on Luke, and that itself is a proxy for Luke being less intelligent, dramatically so; it would be instrumental to Luke's goals to provide such data. As for second world versus third world, that is irrelevant semantics.

edit: and as rather strong evidence that Luke is approximately as intelligent as the least intelligent version of Luke that could look the same to us, it suffices to cite the normal distribution of intelligence.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T04:17:31.956Z · LW · GW

Accomplishments of all kinds, his position, the likelihood that Wang has actually managed to move from the effectively lower class (third world) to the upper class (though I haven't looked up where he's from yet), etc.

What proxies do you think would indicate Luke is more intelligent? I can't seem to think of any.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-24T04:01:11.878Z · LW · GW

> I'm unsure how you are getting more intelligent.

I'm unsure how you are not. Every single proxy for intelligence indicates a fairly dramatic gap in favour of Wang. Of course, for politeness' sake we assume that they are at least equally intelligent, and for phyg's sake that Luke is more intelligent, but that is simply very, very, very unlikely.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-23T21:36:36.057Z · LW · GW

When there is any use of domain-specific knowledge and expertise without a zillion citations for elementary facts, you see "simple errors of reasoning" whereas everyone else sees "you are a clueless dilettante". Wang is a far more intelligent person than Luke; sorry, the world is unjust, and there is nothing Luke or Eliezer can do about their relatively low intelligence compared to people in the field. Lack of education on top of the lower intelligence doesn't help at all.

edit: I stand by it. I don't find either Eliezer or Luke to be particularly smart - smarter than the average blogger, for sure, but not geniuses. I, by the way, score very high on IQ tests. I can judge not just by accomplishments but simply because I can actually evaluate the difficulty of the work, and, well, they never did anything that's too difficult for an IQ of 120, maybe 125. If there is one thing that makes LessWrong a cult, it is the high-confidence belief that the gurus are the smartest, or among the smartest, people on Earth.

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-23T19:42:38.445Z · LW · GW

> If you clear away all the noise arising from the fact that this interaction constitutes a clash of tribal factions...

> Pei seems to conflate the possibility...

> I'm finding these dialogues worthwhile for (so far) lowering my respect for "mainstream" AI researchers...

and so on.

I think it'd be great if SIAI would not latch onto the most favourable and least informative interpretation of any disagreement, in precisely the way that e.g. any community around free-energy devices does. It'd also be great if Luke allowed for the possibility that Wang (and most of the other people who are more intelligent, better educated, and more experienced than Luke) is actually correct, and Luke is completely wrong (or not even wrong).

Comment by semianonymous on Muehlhauser-Wang Dialogue · 2012-04-23T12:53:35.697Z · LW · GW

I think you must first consider the simpler possibility that SIAI actually has a very bad argument and isn't making any positive contribution to saving mankind from anything. Once you have very good reasons to think that isn't so (high IQ test scores don't suffice), reasons very well verified given all the biases, you can consider the possibility that it is a miscommunication.

Comment by semianonymous on Video Q&A with Singularity Institute Executive Director · 2012-04-23T07:28:52.091Z · LW · GW

Well, my prior for someone on the internet who's asking for money being a scam is no less than 99% (and I avoid Pascal's mugging by not taking strings from such sources as proper hypotheses), and I think that is a very common prior. So there had better be good evidence that it isn't a scam - a panel of accomplished scientists and engineers working to save the world, etc.; think something on the scale of the IPCC - rather than some weak evidence that it is a scam, and something even less convincing than e.g. Steorn's perpetual-motion device.

Scamming works best by self-deceit, though, so even though you are almost certainly just a bunch of fraudsters, you still feel genuinely wronged and insulted by the suggestion that you are, because the first people you would have defrauded would have been yourselves. You'd also feel wronged that there is nothing you could have done to look better. There isn't; if your cause were genuine, it would have been started decades ago by more qualified people.

Comment by semianonymous on A question about Eliezer · 2012-04-22T10:19:22.561Z · LW · GW

You can eliminate the evidence that you consider double-counted, for example grandiose self-worth and grandiose plans. Those do need to be both present, though, because grandiose self-worth without grandiose plans would just indicate some sort of miscommunication (and the self-worth metric is more subjective); each alone is a much poorer indicator than the two combined.

In any case, accurate estimation of anything of this kind is very difficult. In general, one just adopts a strategy such that sociopaths would not have a sufficient selfish payoff for cheating it; altruism is a far cheaper signal for non-selfish agents. In very simple terms: if you give someone $3 for donating $4 to a very well verified charity, those who value $4 in charity above $1 in pocket will accept the deal. You just ensure that there is no selfish gain in transactions, and you're fine. If you don't adopt an anti-cheating strategy, you will be found and exploited with very high confidence: unlike in the iterated prisoner's dilemma, cheaters get to choose whom they play with, and get to send signals that make easily cheatable agents play with them. A bad strategy is far more likely to be exploited than any conservative estimate would suggest.
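
A minimal worked version of that screening arithmetic, using the dollar amounts from the comment; the agent valuations are illustrative assumptions, not anything from the original.

```python
# Screening offer: "donate $4 to a verified charity and I pay you $3."
# Net cash for the accepter is -$1, so a purely selfish agent declines,
# while an agent who values the $4 donation at more than $1 accepts.
SUBSIDY, DONATION = 3.0, 4.0
net_cash = SUBSIDY - DONATION  # -$1 out of pocket for whoever accepts

def accepts(charity_value_per_dollar: float) -> bool:
    """True if the agent's valuation of the donation outweighs the cash loss."""
    return charity_value_per_dollar * DONATION + net_cash > 0

print(accepts(0.0))   # False: a selfish agent sees only the -$1
print(accepts(0.5))   # True: values $4 donated at $2, which exceeds the $1 cost
```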

Comment by semianonymous on A question about Eliezer · 2012-04-21T09:07:00.969Z · LW · GW

I thought about it some more, and the relevant questions are: how do we guess what his abilities are? What is his aptitude within those abilities? Are there statistical methods we can use (e.g. an SPR)? What would the outcome be? How can we deduce his utility function?

Normally, when one has e.g. high mathematical aptitude, or programming aptitude, or the like, as a teenager one still has to work on it and train (the brain undergoes significant synaptic pruning at about 20 years of age, limiting the opportunity to improve afterwards), and regardless of the final goal, intelligent people tend to have a lot to show from back when they were practising. I think most people see the absence of such things as a very strong indicator of lack of ability, especially since treating it that way provides an incentive to demonstrate the ability.

Comment by semianonymous on A question about Eliezer · 2012-04-21T04:57:25.861Z · LW · GW

I did understand his point. The issue is that the psychological traits are defined as whatever is behind the correlation - brain lesion A, or brain lesion B, or a weird childhood, or the like. They are very broad and are defined to include the "other features".

It is probably better to drop the word "sociopath" and just say "selfish", but then it is not immediately apparent why e.g. arrogance not backed by achievements is predictive of selfishness, even though it very much is, as it is a case of a false signal of capability.

Comment by semianonymous on A question about Eliezer · 2012-04-21T04:41:37.413Z · LW · GW

A cat is defined independently of being a combination of its owner's traits; that is the difference between the cat and IQ or any other psychological measure. If we were to say "pet", the formula would have worked - even better if we had a purely black-box classifier sorting people into those who have the bunch of traits and those who don't, regardless of the cause (a pet, a cat, a weird fetish for pet-related stuff).

It is, however, the case that narcissism does overlap with sociopathy, to the point that the difference between the two is not very well defined. Anyhow, we can restate the problem and consider it a guess at the properties of the utility function, at the cost of some extra verbiage.

The analogy to the math problems is good, but what we are compensating for is miscommunication, status gaming, and the like by normal people.

I would suggest, actually, not the Bayesian approach but a statistical prediction rule or a trained neural network.
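
For what an SPR could look like here, a minimal unit-weighted checklist sketch; the cue names and the decision threshold are illustrative assumptions, not a validated instrument.

```python
# Unit-weighted statistical prediction rule: the score is just a count of
# binary cues. SPRs of this simple form famously rival expert judgment on
# many prediction tasks.
CUES = [
    "grandiose_self_worth",
    "grandiose_plans",
    "income_from_persuasion",
    "no_costly_altruism",
]

def spr_score(observations: dict) -> int:
    """Count how many risk cues are present in the observations."""
    return sum(1 for cue in CUES if observations.get(cue, False))

obs = {"grandiose_self_worth": True, "grandiose_plans": True}
print(spr_score(obs))        # 2
print(spr_score(obs) >= 3)   # False: below an (illustrative) decision threshold
```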

Comment by semianonymous on A question about Eliezer · 2012-04-20T21:07:27.094Z · LW · GW

He is a high-IQ individual, though; that is rare on its own. But there are smart people who pretty much maximize their personal utility only.

Comment by semianonymous on A question about Eliezer · 2012-04-20T14:45:09.862Z · LW · GW

People don't gain the ability to program out of thin air... everyone able to program has a long list of working projects that they trained on. In any case, programming is real work: it is annoying, it takes training, it takes education, and it slaps your ego on the nose just about every time you hit compile after writing any interesting code. And newbies are grossly mistaken about their abilities. You can't trust anyone to measure their own skills accurately, let alone report them.

Comment by semianonymous on A question about Eliezer · 2012-04-20T13:34:53.891Z · LW · GW

She did expensive altruistic things whose cost exceeded any expected self-interested payoff, though; actions that are more expensive to fake than the win from faking them are a very strong predictor of non-psychopathy. The distinction between a psychopath who is genuinely altruistic and a non-psychopath is that of a philosophical zombie versus a human.

Comment by semianonymous on A question about Eliezer · 2012-04-20T13:17:03.010Z · LW · GW

People do it selectively, though. When someone takes an IQ test and gets a high score, you assume that person has a high IQ, for instance; you don't postulate the existence of "low-IQ people who solved the first two problems on the test", who would then be more likely to solve other, different problems, and ultimately score high while still having a "low IQ".

Comment by semianonymous on A question about Eliezer · 2012-04-20T09:27:47.728Z · LW · GW

The "sociopath" label is not a well-identified brain lesion; it is a predictor of behaviours. The label is used to decrease computational overhead by quantizing the quality (and to reduce communication overhead). One could in principle go without this label and directly predict the likelihood of an unethical self-serving act based on prior observed behaviour; that is ideally better, but it is more computationally expensive and may result in a much higher failure rate.

This exchange is, by the way, why I do not think much of "rationality" as presented here. It is incredibly important to be able to identify sociopaths; if your decision theory does not permit you to identify sociopaths because you strive for a rigour you can't reach, then you will be taken advantage of.

Comment by semianonymous on A question about Eliezer · 2012-04-20T08:28:05.309Z · LW · GW

They are not independent - sociopathy (or, to a lesser degree, narcissism) is a common cause.

Comment by semianonymous on A question about Eliezer · 2012-04-20T04:58:14.378Z · LW · GW

Threads like that make me want to apply Bayes' theorem to something.

You start with probability 0.03 that Eliezer is a sociopath - the baseline. Then you do Bayesian updates on the answers to questions like: Does he imagine grandiose importance for himself, or is he generally modest and in line with his actual accomplishments? Are his plans in line with his qualifications and prior accomplishments, or are they grandiose? Is talking people into giving him money his source of income? Is he known to do very expensive altruistic things whose cost exceeds any self-interested payoff, or not? Did he claim to be an ideally moral being? And so on. You do the updates based on the likelihood of each answer for sociopaths and for normal people. Now, I'm not saying he is anything; all I am saying is that I can't help doing such updates - first via fast pattern matching by the neural network, then, if I find the issue significant enough, explicitly with a calculator if I want to double-check.
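
A minimal sketch of that update procedure in the odds form of Bayes' theorem; the 0.03 base rate is from the comment above, while the likelihood ratios are made-up placeholders, not calibrated values.

```python
# Sequential Bayesian updating in odds form: posterior odds =
# prior odds * product of likelihood ratios, one per observed answer.
prior = 0.03                      # base rate assumed in the comment
odds = prior / (1 - prior)

# Illustrative likelihood ratios P(answer | sociopath) / P(answer | not);
# a ratio below 1 counts as evidence against.
likelihood_ratios = [3.0, 2.0, 4.0, 0.5]

for lr in likelihood_ratios:
    odds *= lr

posterior = odds / (1 + odds)
print(round(posterior, 3))  # ~0.271 with these made-up ratios
```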

edit: I think it will be better to change the wording here, as different people understand that word differently. Let's say we are evaluating whether the utility function includes other people to any significant extent, in the presence of communication noise and misunderstandings - considering that some people are prone to being Pascal-wagered, so a utility function that doesn't include other people leads to attempts to Pascal-wager others, i.e. grandiose plans. On the AI work being charitable: I don't believe it, to be honest. One has to study and get into Google (or the like) if one wants the best shot at influencing the morality of future AI. I think that's the direction in which everyone genuinely interested in saving mankind and genuinely worried about AI has gravitated. If one wants to make an impact by talking, one needs first to gain some status among the cool guys, and that means producing some really impressive working accomplishments.